Problem description
Kubernetes pods scheduled to a non-tainted node
I created a GKE Kubernetes cluster with two workloads deployed to it, each running in its own node pool. The node pool for the Celery workload is tainted with celery-node-pool=true, and the pod spec carries the following toleration:
tolerations:
- key: "celery-node-pool"
  operator: "Exists"
  effect: "NoSchedule"
Despite the node taint and the matching toleration, some pods of the Celery workload are scheduled onto the non-tainted nodes. Why is this happening, and am I doing something wrong? What other taints and tolerations should I add to keep the pods on specific nodes?
Reference solution
Method 1:
Using Taints:
Taints allow a node to repel a set of pods. You have not specified an effect in your taint; it should be celery-node-pool=true:NoSchedule. Also, your other nodes need to repel these pods, so you need to add a different taint to the other node pool and not give the Celery pods a toleration for it. A toleration only allows a pod onto a tainted node; it does not prevent the scheduler from placing it on an untainted node.
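That asymmetry is the core of the problem, and can be sketched in a few lines. The model below is illustrative only, not the real scheduler code: a toleration merely cancels a taint's repulsion, so a node with no taints admits every pod, tolerations or not.

```python
# Minimal illustrative model of Kubernetes taint/toleration matching.
# This is NOT the actual scheduler implementation, just the matching rule.

def tolerates(toleration, taint):
    """A toleration matches a taint if the key, effect, and (for the
    Equal operator) value all line up."""
    if toleration.get("effect") and toleration["effect"] != taint["effect"]:
        return False
    if toleration.get("operator") == "Exists":
        return toleration["key"] == taint["key"]
    # Default operator is Equal: key and value must both match.
    return (toleration["key"] == taint["key"]
            and toleration.get("value") == taint.get("value"))

def schedulable(node_taints, pod_tolerations):
    """A pod may land on a node iff every NoSchedule taint is tolerated.
    A node with no taints therefore admits every pod."""
    return all(
        any(tolerates(tol, taint) for tol in pod_tolerations)
        for taint in node_taints
        if taint["effect"] == "NoSchedule"
    )

celery_tolerations = [
    {"key": "celery-node-pool", "operator": "Exists", "effect": "NoSchedule"},
]
tainted = [{"key": "celery-node-pool", "value": "true",
            "effect": "NoSchedule"}]
untainted = []  # the other node pool carries no taints

print(schedulable(tainted, celery_tolerations))    # tolerated -> allowed
print(schedulable(untainted, celery_tolerations))  # no taints -> also allowed
```

Both calls return True, which is exactly the behavior observed in the question: the toleration opens the door to the tainted pool but does nothing to keep pods out of the untainted one.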
Using Node Selector:
You can constrain a Pod so that it is only able to run on particular nodes, or so that it prefers particular nodes.
Label the node:
kubectl label nodes kubernetes-foo-node-1.c.a-robinson.internal node-pool=true
Then add a node selector to the pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    node-pool: "true"
Note that the label value must be quoted: label values are strings, and an unquoted true would be parsed as a YAML boolean and rejected.
Using Node Affinity:
nodeSelector provides a very simple way to constrain pods to nodes with particular labels. The affinity/anti-affinity feature greatly expands the types of constraints you can express.
apiVersion: v1
kind: Pod
metadata:
  name: with-node-affinity
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: node-pool
            operator: In
            values:
            - "true"
  containers:
  - name: with-node-affinity
    image: k8s.gcr.io/pause:2.0
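If the hard requirement above is too strict (for example, you would rather have the pod run elsewhere than stay Pending when the pool is full), the same label match can be expressed as a soft preference instead. A sketch, reusing the same node-pool label; the weight value is illustrative:

```yaml
affinity:
  nodeAffinity:
    preferredDuringSchedulingIgnoredDuringExecution:
    - weight: 100
      preference:
        matchExpressions:
        - key: node-pool
          operator: In
          values:
          - "true"
```

With preferredDuringSchedulingIgnoredDuringExecution the scheduler favors matching nodes but may still fall back to others, so for the hard pinning the question asks about, the required variant is the right choice.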
Method 2:
What other taints and tolerations should I add to keep the pods on specific nodes?
You should also add a node selector to pin your pods to the tainted node; otherwise a pod is free to go to a non-tainted node if the scheduler so decides.
kubectl taint nodes node01 hostname=node01:NoSchedule
If I taint node01 and want my pods placed on it via a toleration, I need a node selector as well.
nodeSelector provides a very simple way to constrain (attract) pods to nodes with particular labels.
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  tolerations:
  - key: "hostname"
    operator: "Equal"
    value: "node01"
    effect: "NoSchedule"
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    kubernetes.io/hostname: node01
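Putting Method 2 together on a hypothetical node named node01: the kubelet sets the built-in kubernetes.io/hostname label automatically, so only the taint has to be added by hand before applying the manifest above.

```shell
# Repel all pods that do not tolerate hostname=node01:NoSchedule.
kubectl taint nodes node01 hostname=node01:NoSchedule

# kubernetes.io/hostname=node01 already exists on the node, so no
# extra labeling step is needed for the nodeSelector to match.
kubectl get node node01 --show-labels
```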
(by Rao Anees, Arghya Sadhu, DT.)