k8s Scheduler


Node hard affinity: the Pod must satisfy the rules at scheduling time; if no node satisfies them, the Pod stays in the Pending state.

Node soft affinity: the scheduler first tries to place the Pod according to the rules; if no node matches, the Pod is still scheduled onto a non-matching node.

Pod hard affinity and soft affinity work the same way as node hard and soft affinity, except that the rules match against the labels of Pods already running in the topology rather than against node labels.
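All four variants live under spec.affinity in the Pod (or Pod template) spec. As a quick orientation before the examples, the skeleton below is only a sketch of where each field sits, not a complete manifest:

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # node hard affinity (a hard filter)
      preferredDuringSchedulingIgnoredDuringExecution:   # node soft affinity (a scoring preference)
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:    # Pod hard affinity
      preferredDuringSchedulingIgnoredDuringExecution:   # Pod soft affinity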

1. Node hard affinity (no node satisfies the rule)

[root@k8s01 yaml]# cat pod-affinity01.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-01
spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - {key: zone,operator: In,values: ["one"]}   --the node must carry the label zone (key) with the value one
  containers:
  - name: pod-01
    image: nginx:latest
    imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity01.yaml

pod/pod-01 created

[root@k8s01 yaml]# kubectl get pods --show-labels

NAME READY STATUS RESTARTS AGE LABELS

pod-01 0/1 Pending 0 103s   --no node satisfies the Pod's node affinity rule

[root@k8s01 yaml]# kubectl describe pods pod-01

......

Events:

Type Reason Age From Message

---- ------ ---- ---- -------

Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.

Warning FailedScheduling default-scheduler 0/3 nodes are available: 3 node(s) didn't match node selector.

[root@k8s01 yaml]#
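To see why nothing matched, it helps to list the zone label as a column when querying nodes; a sketch (output omitted here, since no node is labelled yet):

[root@k8s01 yaml]# kubectl get nodes -L zone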

- matchExpressions:
  - {key: zone,operator: In,values: ["one"]}        --zone is the key, operator is the match operator, one is the value
  - {key: zone,operator: In,values: ["one","two"]}  --In: the label value must be one of the listed values
  - {key: ssd,operator: Exists,values: []}          --Exists: the label key must be present
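Besides In and Exists, node affinity match expressions also accept NotIn, DoesNotExist, Gt and Lt. A sketch of how they look; the disk and cpu-cores label keys here are made-up examples:

- matchExpressions:
  - {key: zone,operator: NotIn,values: ["three"]}     # the zone label must not be three
  - {key: disk,operator: DoesNotExist,values: []}     # the disk label must be absent
  - {key: cpu-cores,operator: Gt,values: ["8"]}       # numeric comparison: label value must be greater than 8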

2. Node hard affinity (a node satisfies the rule)

[root@k8s01 yaml]# kubectl label node k8s02 zone=one   --add the zone=one label to node k8s02

node/k8s02 labeled

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

pod-01 1/1 Running 0 6m13s 10.244.1.37 k8s02

[root@k8s01 yaml]#

3. Node soft affinity

[root@k8s01 yaml]# cat pod-affinity02.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 60    --nodes labelled zone=one get a scoring bonus of 60, so they are preferred but not required
            preference:
              matchExpressions:
              - {key: zone,operator: In,values: ["one"]}
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity02.yaml

deployment.apps/pod-02 created

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide   --Pods are scheduled whether or not a node matches the preferred label

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS

pod-02-77d87986b-9bzg5 1/1 Running 0 16s 10.244.1.39 k8s02 app=myapp,pod-template-hash=77d87986b

pod-02-77d87986b-dckjq 1/1 Running 0 16s 10.244.2.42 k8s03 app=myapp,pod-template-hash=77d87986b

pod-02-77d87986b-z7v47 1/1 Running 0 16s 10.244.1.38 k8s02 app=myapp,pod-template-hash=77d87986b

[root@k8s01 yaml]#
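The weight (1-100) is a scoring bonus rather than a percentage: for every preference term a node matches, the term's weight is added to that node's score, and the scheduler favours the node with the highest total. A fragment sketching two preferences, assuming an additional ssd node label that is not part of this walkthrough:

          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 80
            preference:
              matchExpressions:
              - {key: zone,operator: In,values: ["one"]}   # matching zone=one adds 80 to the node's score
          - weight: 20
            preference:
              matchExpressions:
              - {key: ssd,operator: Exists,values: []}     # an ssd label adds another 20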

4. Pod hard affinity (rule not satisfied)

[root@k8s01 yaml]# kubectl get pods --show-labels
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 4s app=web
[root@k8s01 yaml]# cat pod-affinity03.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-01
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app,operator: In,values: ["web1"]}   --no running Pod carries this label value
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod-01
    image: nginx:latest
    imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity03.yaml
pod/pod-01 created
[root@k8s01 yaml]# kubectl get pods --show-labels -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx 1/1 Running 0 8m9s 10.244.1.42 k8s02 app=web
pod-01 0/1 Pending 0 28s
[root@k8s01 yaml]#

5. Pod hard affinity (rule satisfied)

[root@k8s01 yaml]# kubectl get pods --show-labels   --check the labels of the existing Pod
NAME READY STATUS RESTARTS AGE LABELS
nginx 1/1 Running 0 4s app=web
[root@k8s01 yaml]# cat pod-affinity04.yaml

apiVersion: v1
kind: Pod
metadata:
  name: pod-01
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app,operator: In,values: ["web"]}
        topologyKey: kubernetes.io/hostname
  containers:
  - name: pod-01
    image: nginx:latest
    imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity04.yaml
pod/pod-01 created
[root@k8s01 yaml]# kubectl get pods --show-labels -o wide   --the new Pod is placed with the Pod labelled app=web
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx 1/1 Running 0 4m14s 10.244.1.42 k8s02 app=web
pod-01 1/1 Running 0 17s 10.244.1.43 k8s02
[root@k8s01 yaml]#
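topologyKey controls what counts as "the same place": with kubernetes.io/hostname the new Pod must land on the very node that runs a Pod labelled app=web, while a broader key such as zone only requires a node in the same zone (assuming the nodes carry a zone label, as set up in section 2). A fragment sketching the zone variant:

    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchExpressions:
          - {key: app,operator: In,values: ["web"]}
        topologyKey: zone   # same zone as a Pod labelled app=web, not necessarily the same node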

6. Pod soft affinity

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx 1/1 Running 0 14m 10.244.1.42 k8s02 app=web

[root@k8s01 yaml]# cat pod-affinity04.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      affinity:
        podAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 60
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - {key: app,operator: In,values: ["web12"]}
              topologyKey: zone
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never

[root@k8s01 yaml]# kubectl get pods --show-labels -o wide   --Pods are created even though the soft Pod affinity is not satisfied
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
nginx 1/1 Running 0 16m 10.244.1.42 k8s02 app=web
pod-02-6f96fbfdf-52gbf 1/1 Running 0 11s 10.244.2.43 k8s03 app=myapp,pod-template-hash=6f96fbfdf
pod-02-6f96fbfdf-dl5z5 1/1 Running 0 11s 10.244.1.44 k8s02 app=myapp,pod-template-hash=6f96fbfdf
pod-02-6f96fbfdf-f8bzn 1/1 Running 0 11s 10.244.0.55 k8s01 app=myapp,pod-template-hash=6f96fbfdf
[root@k8s01 yaml]#

7. Defining taints and tolerations

NoSchedule: new Pods that do not tolerate this taint cannot be scheduled onto the node; mandatory.

PreferNoSchedule: new Pods are preferably not scheduled onto this node, but if no other node is available they may still land here; soft.

NoExecute: new Pods that do not tolerate the taint are not scheduled onto the node, and Pods already running on it without a matching toleration are evicted; mandatory.
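Only NoSchedule and PreferNoSchedule are exercised below; for completeness, a NoExecute taint would be applied the same way (a sketch, not run in this walkthrough), after which any Pod on the node without a matching toleration would be evicted:

[root@k8s01 yaml]# kubectl taint node k8s02 node-type=production:NoExecute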

[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i taints   --check the node's taints
Taints:
[root@k8s01 yaml]# kubectl taint node k8s02 node-type=production:NoSchedule   --taint node k8s02

node/k8s02 tainted
[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i taints
Taints: node-type=production:NoSchedule
[root@k8s01 yaml]# cat pod-affinity05.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-02
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
    spec:
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never

[root@k8s01 yaml]# kubectl apply -f pod-affinity05.yaml
deployment.apps/pod-02 created
[root@k8s01 yaml]# kubectl get pods -o wide --show-labels   --k8s02 is tainted, so no Pods are scheduled onto it
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES LABELS
pod-02-5c54dc6489-j7tsh 1/1 Running 0 29s 10.244.0.56 k8s01 app=myapp,pod-template-hash=5c54dc6489
pod-02-5c54dc6489-sk5dm 1/1 Running 0 29s 10.244.2.44 k8s03 app=myapp,pod-template-hash=5c54dc6489
pod-02-5c54dc6489-sn5wd 1/1 Running 0 29s 10.244.2.45 k8s03 app=myapp,pod-template-hash=5c54dc6489

[root@k8s01 yaml]#
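The Deployment above declares no tolerations, which is why k8s02 is skipped. To let these Pods tolerate the node-type=production:NoSchedule taint, a tolerations block can be added to the Pod template spec; a sketch matching the taint set above:

    spec:
      tolerations:
      - key: "node-type"
        operator: "Equal"
        value: "production"
        effect: "NoSchedule"   # the Pod tolerates this exact taint and may again be scheduled onto k8s02
      containers:
      - name: pod-02
        image: nginx:latest
        imagePullPolicy: Never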

8. Removing taints

[root@k8s01 yaml]# kubectl taint node k8s02 node-type:NoSchedule-   --remove the NoSchedule taint

node/k8s02 untainted

[root@k8s01 yaml]# kubectl taint node k8s02 node-type=production:PreferNoSchedule   --add a PreferNoSchedule taint
node/k8s02 tainted
[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i node-type
Taints: node-type=production:PreferNoSchedule
[root@k8s01 yaml]# kubectl taint node k8s02 node-type:PreferNoSchedule-   --remove the PreferNoSchedule taint

node/k8s02 untainted
[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i node-type

[root@k8s01 yaml]# kubectl taint node k8s02 node-type-   --remove all taints with the key node-type
node/k8s02 untainted
[root@k8s01 yaml]# kubectl describe node k8s02 | grep -i node-type
[root@k8s01 yaml]#
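In general, a trailing minus removes a taint. A sketch of the two forms used above, with <node>, <key>, <value> and <effect> as placeholders:

kubectl taint node <node> <key>=<value>:<effect>-   # remove one specific taint
kubectl taint node <node> <key>-                    # remove every taint with that key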

9. Evicting Pods from a node

[root@k8s01 yaml]# kubectl cordon k8s03   --newly created Pods can no longer be scheduled onto k8s03; existing Pods are unaffected
node/k8s03 cordoned
[root@k8s01 yaml]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 71d v1.16.0
k8s02 Ready 70d v1.16.0
k8s03 Ready, SchedulingDisabled 30d v1.16.0

[root@k8s01 yaml]# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-02-5c54dc6489-442kx 1/1 Running 0 12m 10.244.2.48 k8s03
pod-02-5c54dc6489-92l8m 1/1 Running 0 12m 10.244.2.49 k8s03
pod-02-5c54dc6489-k4bc7 1/1 Running 0 12m 10.244.0.58 k8s01
[root@k8s01 yaml]# kubectl uncordon k8s03   --make the node schedulable again
node/k8s03 uncordoned
[root@k8s01 yaml]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 71d v1.16.0
k8s02 Ready 70d v1.16.0
k8s03 Ready 30d v1.16.0
[root@k8s01 yaml]# kubectl drain k8s03 --ignore-daemonsets   --evict the Pods running on k8s03
node/k8s03 already cordoned
WARNING: ignoring DaemonSet-managed Pods: kube-system/kube-flannel-ds-amd64-cg795, kube-system/kube-proxy-h5dkf
evicting pod "pod-02-5c54dc6489-92l8m"
evicting pod "pod-02-5c54dc6489-442kx"
pod/pod-02-5c54dc6489-92l8m evicted
pod/pod-02-5c54dc6489-442kx evicted
node/k8s03 evicted
[root@k8s01 yaml]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 71d v1.16.0
k8s02 Ready 70d v1.16.0
k8s03 Ready, SchedulingDisabled 30d v1.16.0
[root@k8s01 yaml]# kubectl get pods -o wide   --the Pods that were on k8s03 have all moved to other nodes
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-02-5c54dc6489-k4bc7 1/1 Running 0 14m 10.244.0.58 k8s01
pod-02-5c54dc6489-mxk46 1/1 Running 0 25s 10.244.1.46 k8s02
pod-02-5c54dc6489-vmb8l 1/1 Running 0 25s 10.244.1.45 k8s02
[root@k8s01 yaml]# kubectl uncordon k8s03   --restore scheduling on k8s03
node/k8s03 uncordoned
[root@k8s01 yaml]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s01 Ready master 71d v1.16.0
k8s02 Ready 70d v1.16.0
k8s03 Ready 30d v1.16.0
[root@k8s01 yaml]#
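Note that drain refuses to evict Pods that are not managed by a controller or that use local (emptyDir) storage unless extra flags are passed; with the kubectl v1.16 used here that would look roughly like the sketch below (flag names differ in newer releases):

[root@k8s01 yaml]# kubectl drain k8s03 --ignore-daemonsets --force --delete-local-data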
