How to deploy a highly available kube-apiserver cluster in Kubernetes


This article explains how to deploy a highly available kube-apiserver cluster for Kubernetes. The walkthrough below is kept simple and step by step; follow along to deploy and verify each master and worker component.

Deploying the master nodes

The master components kube-apiserver, kube-scheduler and kube-controller-manager all run as multiple instances. kube-scheduler and kube-controller-manager automatically elect one leader instance while the other instances stay in standby (blocked) mode; when the leader fails, a new leader is elected, which keeps the service available. kube-apiserver is stateless, so it is accessed through a kube-nginx layer-4 proxy to keep the service available.
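
For reference, below is a minimal sketch of the stream (layer-4) configuration such a kube-nginx proxy might use. It assumes nginx was built with --with-stream, that it listens on port 8443 behind the VIP 172.16.60.250 used later in this article, that it forwards to the three kube-apiserver instances on port 6443, and that the configuration file path matches your own kube-nginx installation (the path below is only illustrative).

cat > /opt/k8s/kube-nginx/conf/kube-nginx.conf <<"EOF"
worker_processes 1;

events {
    worker_connections 1024;
}

stream {
    upstream backend {
        # client-IP-based hashing keeps a client on the same apiserver instance
        hash $remote_addr consistent;
        server 172.16.60.241:6443 max_fails=3 fail_timeout=30s;
        server 172.16.60.242:6443 max_fails=3 fail_timeout=30s;
        server 172.16.60.243:6443 max_fails=3 fail_timeout=30s;
    }

    server {
        listen *:8443;                  # port exposed through the VIP 172.16.60.250
        proxy_connect_timeout 1s;
        proxy_pass backend;
    }
}
EOF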

All of the deployment commands below are executed on the k8s-master01 node, which then distributes files to the other nodes and runs commands on them remotely.

Download the latest binaries:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# wget https://dl.k8s.io/v1.14.2/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# tar -xzvf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 work]# cd kubernetes
[root@k8s-master01 kubernetes]# tar -xzvf kubernetes-src.tar.gz

Copy the binaries to all master nodes:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
  do
    echo ">>> ${node_master_ip}"
    scp kubernetes/server/bin/{apiextensions-apiserver,cloud-controller-manager,kube-apiserver,kube-controller-manager,kube-proxy,kube-scheduler,kubeadm,kubectl,kubelet,mounter} root@${node_master_ip}:/opt/k8s/bin/
    ssh root@${node_master_ip} "chmod +x /opt/k8s/bin/*"
  done
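
As an optional sanity check (a sketch reusing the NODE_MASTER_IPS array from environment.sh), confirm that every master received an executable binary of the expected version:

source /opt/k8s/bin/environment.sh
for node_master_ip in ${NODE_MASTER_IPS[@]}
  do
    echo ">>> ${node_master_ip}"
    ssh root@${node_master_ip} "/opt/k8s/bin/kube-apiserver --version"
  done
# every node is expected to report: Kubernetes v1.14.2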

Deploying a highly available kube-apiserver cluster

This deploys a three-instance kube-apiserver cluster. The instances are accessed through an nginx layer-4 (TCP) proxy that exposes a single VIP address, keeping the service available. All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands remotely.
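
Once the three instances below are up, the unified VIP can be probed directly. This is only a sketch; it assumes the kube-nginx VIP 172.16.60.250:8443 and the admin client certificate under /opt/k8s/work generated earlier in this series.

curl -s \
  --cacert /opt/k8s/work/ca.pem \
  --cert /opt/k8s/work/admin.pem \
  --key /opt/k8s/work/admin-key.pem \
  https://172.16.60.250:8443/healthz
# expected output: ok (the proxy forwards the request to any healthy kube-apiserver instance)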

1) 创建 kubernetes 证书和私钥创建证书签名请求:[root@k8s-master01 ~]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > kubernetes-csr.json <>> ${node_master_ip}"    ssh root@${node_master_ip} "mkdir -p /etc/kubernetes/cert"    scp kubernetes*.pem root@${node_master_ip}:/etc/kubernetes/cert/  done   2) 创建加密配置文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > encryption-config.yaml <>> ${node_master_ip}"    scp encryption-config.yaml root@${node_master_ip}:/etc/kubernetes/  done   3) 创建审计策略文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > audit-policy.yaml <>> ${node_master_ip}"    scp audit-policy.yaml root@${node_master_ip}:/etc/kubernetes/audit-policy.yaml  done   4) 创建后续访问 metrics-server 使用的证书创建证书签名请求:[root@k8s-master01 work]# cat > proxy-client-csr.json <>> ${node_master_ip}"    scp proxy-client*.pem root@${node_master_ip}:/etc/kubernetes/cert/  done   5) 创建 kube-apiserver systemd unit 模板文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > kube-apiserver.service.template < kube-apiserver-${NODE_MASTER_IPS[i]}.service  done   其中:NODE_NAMES 和 NODE_IPS 为相同长度的 bash 数组,分别为节点名称和对应的 IP;   [root@k8s-master01 work]# ll kube-apiserver*.service-rw-r--r-- 1 root root 2718 Jun 18 10:38 kube-apiserver-172.16.60.241.service-rw-r--r-- 1 root root 2718 Jun 18 10:38 kube-apiserver-172.16.60.242.service-rw-r--r-- 1 root root 2718 Jun 18 10:38 kube-apiserver-172.16.60.243.service   分发生成的 systemd unit 文件, 文件重命名为 kube-apiserver.service;[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    scp kube-apiserver-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-apiserver.service  done   6) 启动 kube-apiserver 服务[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    ssh root@${node_master_ip} "mkdir -p ${K8S_DIR}/kube-apiserver"    ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-apiserver && systemctl restart kube-apiserver"  done   注意:启动服务前必须先创建工作目录;   检查 kube-apiserver 运行状态[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    ssh root@${node_master_ip} "systemctl status kube-apiserver |grep 'Active:'"  done   预期输出:>>> 172.16.60.241   Active: active (running) since Tue 2019-06-18 10:42:42 CST; 1min 6s ago>>> 172.16.60.242   Active: active (running) since Tue 2019-06-18 10:42:47 CST; 1min 2s ago>>> 172.16.60.243   Active: active (running) since Tue 2019-06-18 10:42:51 CST; 58s ago   确保状态为 active (running),否则查看日志,确认原因(journalctl -u kube-apiserver)   7)打印 kube-apiserver 写入 etcd 的数据[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# ETCDCTL_API=3 etcdctl \    --endpoints=${ETCD_ENDPOINTS} \    --cacert=/opt/k8s/work/ca.pem \    --cert=/opt/k8s/work/etcd.pem \    --key=/opt/k8s/work/etcd-key.pem \    get /registry/ --prefix --keys-only   预期会打印出很多写入到etcd中的数据信息   8)检查集群信息[root@k8s-master01 work]# kubectl cluster-infoKubernetes master is running at 
https://172.16.60.250:8443To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.   [root@k8s-master01 work]# kubectl get all --all-namespacesNAMESPACE   NAME                 TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGEdefault     service/kubernetes   ClusterIP   10.254.0.1           443/TCP   8m25s   查看集群状态信息[root@k8s-master01 work]# kubectl get componentstatuses            #或者执行命令"kubectl get cs"NAME                 STATUS      MESSAGE                                                                                     ERRORcontroller-manager   Unhealthy   Get http://127.0.0.1:10252/healthz: dial tcp 127.0.0.1:10252: connect: connection refusedscheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refusedetcd-0               Healthy     {"health":"true"}                                                                        etcd-2               Healthy     {"health":"true"}                                                                        etcd-1               Healthy     {"health":"true"}   controller-managerhe 和 schedule状态为Unhealthy,是因为此时还没有部署这两个组件,待后续部署好之后再查看~   这里注意:-> 如果执行 kubectl 命令式时输出如下错误信息,则说明使用的 ~/.kube/config 文件不对,请切换到正确的账户后再执行该命令:   The connection to the server localhost:8080 was refused - did you specify the right host or port?-> 执行 kubectl get componentstatuses 命令时,apiserver 默认向 127.0.0.1 发送请求。当 controller-manager、scheduler 以集群模式运行时,有可能和kube-apiserver   不在一台机器上,这时 controller-manager 或 scheduler 的状态为 Unhealthy,但实际上它们工作正常。   9) 检查 kube-apiserver 监听的端口[root@k8s-master01 work]# netstat -lnpt|grep kubetcp        0      0 172.16.60.241:6443      0.0.0.0:*               LISTEN      15516/kube-apiserve   需要注意:6443: 接收 https 请求的安全端口,对所有请求做认证和授权;由于关闭了非安全端口,故没有监听 8080;   10)授予 kube-apiserver 访问 kubelet API 的权限在执行 kubectl exec、run、logs 等命令时,apiserver 会将请求转发到 kubelet 的 https 端口。这里定义 RBAC 规则,授权 apiserver 使用的证书(kubernetes.pem)用户名(CN:kuberntes)访问 kubelet API 的权限:   [root@k8s-master01 work]# kubectl create clusterrolebinding kube-apiserver:kubelet-apis --clusterrole=system:kubelet-api-admin --user kubernetes11)查看kube-apiserver输出的metrics需要用到根证书使用nginx的代理端口获取metrics[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/metrics|head# HELP APIServiceOpenAPIAggregationControllerQueue1_adds (Deprecated) Total number of adds handled by workqueue: APIServiceOpenAPIAggregationControllerQueue1# TYPE APIServiceOpenAPIAggregationControllerQueue1_adds counterAPIServiceOpenAPIAggregationControllerQueue1_adds 12194# HELP APIServiceOpenAPIAggregationControllerQueue1_depth (Deprecated) Current depth of workqueue: APIServiceOpenAPIAggregationControllerQueue1# TYPE APIServiceOpenAPIAggregationControllerQueue1_depth gaugeAPIServiceOpenAPIAggregationControllerQueue1_depth 0# HELP APIServiceOpenAPIAggregationControllerQueue1_longest_running_processor_microseconds (Deprecated) How many microseconds has the longest running processor for APIServiceOpenAPIAggregationControllerQueue1 been running.# TYPE APIServiceOpenAPIAggregationControllerQueue1_longest_running_processor_microseconds gaugeAPIServiceOpenAPIAggregationControllerQueue1_longest_running_processor_microseconds 0# HELP APIServiceOpenAPIAggregationControllerQueue1_queue_latency (Deprecated) How long an item stays in workqueueAPIServiceOpenAPIAggregationControllerQueue1 before being requested.直接使用kube-apiserver节点端口获取metrics[root@k8s-master01 work]# curl -s 
--cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:6443/metrics|head[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.242:6443/metrics|head[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.243:6443/metrics|head
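
A quick way to confirm that all three instances registered themselves is to look at the endpoints of the built-in kubernetes service; with the default lease endpoint reconciler (or --apiserver-count=3), all three apiserver addresses should appear. A sketch with illustrative output (the AGE column will differ):

[root@k8s-master01 work]# kubectl get endpoints kubernetes
NAME         ENDPOINTS                                                   AGE
kubernetes   172.16.60.241:6443,172.16.60.242:6443,172.16.60.243:6443   10m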

8.2 - Deploying a highly available kube-controller-manager cluster

The cluster consists of 3 nodes. After start-up, the instances elect a leader through competitive election; the other instances stay in standby (blocked) mode. When the leader becomes unavailable, the standby instances elect a new leader, keeping the service available. To secure communication, this document first generates an x509 certificate and private key, which kube-controller-manager uses in two situations: to communicate with kube-apiserver's secure port, and to serve prometheus-format metrics over https on its own secure port. All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands remotely.

1)创建 kube-controller-manager 证书和私钥创建证书签名请求:[root@k8s-master01 ~]# cd /opt/k8s/work[root@k8s-master01 work]# cat > kube-controller-manager-csr.json <>> ${node_master_ip}"    scp kube-controller-manager*.pem root@${node_master_ip}:/etc/kubernetes/cert/  done  2) 创建和分发 kubeconfig 文件kube-controller-manager 使用 kubeconfig 文件访问 apiserver,该文件提供了 apiserver 地址、嵌入的 CA 证书和 kube-controller-manager 证书:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# kubectl config set-cluster kubernetes \  --certificate-authority=/opt/k8s/work/ca.pem \  --embed-certs=true \  --server=${KUBE_APISERVER} \  --kubeconfig=kube-controller-manager.kubeconfig  [root@k8s-master01 work]# kubectl config set-credentials system:kube-controller-manager \  --client-certificate=kube-controller-manager.pem \  --client-key=kube-controller-manager-key.pem \  --embed-certs=true \  --kubeconfig=kube-controller-manager.kubeconfig  [root@k8s-master01 work]# kubectl config set-context system:kube-controller-manager \  --cluster=kubernetes \  --user=system:kube-controller-manager \  --kubeconfig=kube-controller-manager.kubeconfig  [root@k8s-master01 work]# kubectl config use-context system:kube-controller-manager --kubeconfig=kube-controller-manager.kubeconfig  分发 kubeconfig 到所有 master 节点:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    scp kube-controller-manager.kubeconfig root@${node_master_ip}:/etc/kubernetes/  done  3) 创建和分发kube-controller-manager system unit 文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > kube-controller-manager.service.template < kube-controller-manager-${NODE_MASTER_IPS[i]}.service  done  注意: NODE_NAMES 和 NODE_IPS 为相同长度的 bash 数组,分别为节点名称和对应的 IP;  [root@k8s-master01 work]# ll kube-controller-manager*.service-rw-r--r-- 1 root root 1878 Jun 18 12:45 kube-controller-manager-172.16.60.241.service-rw-r--r-- 1 root root 1878 Jun 18 12:45 kube-controller-manager-172.16.60.242.service-rw-r--r-- 1 root root 1878 Jun 18 12:45 kube-controller-manager-172.16.60.243.service  分发到所有 master 节点:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    scp kube-controller-manager-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-controller-manager.service  done  注意:文件重命名为 kube-controller-manager.service;  启动 kube-controller-manager 服务[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    ssh root@${node_master_ip} "mkdir -p ${K8S_DIR}/kube-controller-manager"    ssh root@${node_master_ip} "systemctl daemon-reload && systemctl enable kube-controller-manager && systemctl restart kube-controller-manager"  done  注意:启动服务前必须先创建工作目录;  检查服务运行状态[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    ssh root@${node_master_ip} "systemctl status kube-controller-manager|grep Active"  done  预期输出结果:>>> 172.16.60.241   Active: active (running) since Tue 2019-06-18 12:49:11 CST; 1min 7s ago>>> 172.16.60.242   Active: active (running) 
since Tue 2019-06-18 12:49:11 CST; 1min 7s ago>>> 172.16.60.243   Active: active (running) since Tue 2019-06-18 12:49:12 CST; 1min 7s ago  确保状态为 active (running),否则查看日志,确认原因(journalctl -u kube-controller-manager)  kube-controller-manager 监听 10252 端口,接收 https 请求:[root@k8s-master01 work]# netstat -lnpt|grep kube-controlltcp        0      0 172.16.60.241:10252     0.0.0.0:*               LISTEN      25709/kube-controll  检查集群状态,controller-manager的状态为"ok"注意:当kube-controller-manager集群中的1个或2个节点的controller-manager服务挂掉,只要有一个节点的controller-manager服务活着,则集群中controller-manager的状态仍然为"ok",仍然会继续提供服务![root@k8s-master01 work]# kubectl get csNAME                 STATUS      MESSAGE                                                                                     ERRORscheduler            Unhealthy   Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused controller-manager   Healthy     ok                                                                                        etcd-0               Healthy     {"health":"true"}                                                                         etcd-1               Healthy     {"health":"true"}                                                                         etcd-2               Healthy     {"health":"true"}  4) 查看输出的 metrics注意:以下命令在3台kube-controller-manager节点上执行。  由于在kube-controller-manager启动文件中关掉了"--port=0"和"--secure-port=10252"这两个参数,则只能通过http方式获取到kube-controller-manager输出的metrics信息。kube-controller-manager一般不会被访问,只有在监控时采集metrcis指标数据时被访问。  [root@k8s-master01 work]# curl -s http://172.16.60.241:10252/metrics|head                                    # HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0  [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://172.16.60.241:10252/metrics |head# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0  [root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://127.0.0.1:10252/metrics |head  # HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total 
Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0[root@k8s-master01 ~]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem http://172.16.60.241:10252/metrics |head# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0  5) kube-controller-manager 的权限ClusteRole system:kube-controller-manager 的权限很小,只能创建 secret、serviceaccount 等资源对象,各 controller 的权限分散到 ClusterRole system:controller:XXX 中:  [root@k8s-master01 work]# kubectl describe clusterrole system:kube-controller-managerName:         system:kube-controller-managerLabels:       kubernetes.io/bootstrapping=rbac-defaultsAnnotations:  rbac.authorization.kubernetes.io/autoupdate: truePolicyRule:  Resources                                  Non-Resource URLs  Resource Names  Verbs  ---------                                  -----------------  --------------  -----  secrets                                    []                 []              [create delete get update]  endpoints                                  []                 []              [create get update]  serviceaccounts                            []                 []              [create get update]  events                                     []                 []              [create patch update]  tokenreviews.authentication.k8s.io         []                 []              [create]  subjectaccessreviews.authorization.k8s.io  []                 []              [create]  configmaps                                 []                 []              [get]  namespaces                                 []                 []              [get]  *.*                                        []                 []              [list watch]  需要在 kube-controller-manager 的启动参数中添加 --use-service-account-credentials=true 参数,这样 main controller 会为各 controller 创建对应的 ServiceAccount XXX-controller。内置的 ClusterRoleBinding system:controller:XXX 将赋予各 XXX-controller ServiceAccount 对应的 ClusterRole system:controller:XXX 权限。  [root@k8s-master01 work]# kubectl get clusterrole|grep controllersystem:controller:attachdetach-controller                              141msystem:controller:certificate-controller                               141msystem:controller:clusterrole-aggregation-controller                   141msystem:controller:cronjob-controller                                   141msystem:controller:daemon-set-controller                        
        141msystem:controller:deployment-controller                                141msystem:controller:disruption-controller                                141msystem:controller:endpoint-controller                                  141msystem:controller:expand-controller                                    141msystem:controller:generic-garbage-collector                            141msystem:controller:horizontal-pod-autoscaler                            141msystem:controller:job-controller                                       141msystem:controller:namespace-controller                                 141msystem:controller:node-controller                                      141msystem:controller:persistent-volume-binder                             141msystem:controller:pod-garbage-collector                                141msystem:controller:pv-protection-controller                             141msystem:controller:pvc-protection-controller                            141msystem:controller:replicaset-controller                                141msystem:controller:replication-controller                               141msystem:controller:resourcequota-controller                             141msystem:controller:route-controller                                     141msystem:controller:service-account-controller                           141msystem:controller:service-controller                                   141msystem:controller:statefulset-controller                               141msystem:controller:ttl-controller                                       141msystem:kube-controller-manager                                         141m  以 deployment controller 为例:[root@k8s-master01 work]# kubectl describe clusterrole system:controller:deployment-controllerName:         system:controller:deployment-controllerLabels:       kubernetes.io/bootstrapping=rbac-defaultsAnnotations:  rbac.authorization.kubernetes.io/autoupdate: truePolicyRule:  Resources                          Non-Resource URLs  Resource Names  Verbs  ---------                          -----------------  --------------  -----  replicasets.apps                   []                 []              [create delete get list patch update watch]  replicasets.extensions             []                 []              [create delete get list patch update watch]  events                             []                 []              [create patch update]  pods                               []                 []              [get list update watch]  deployments.apps                   []                 []              [get list update watch]  deployments.extensions             []                 []              [get list update watch]  deployments.apps/finalizers        []                 []              [update]  deployments.apps/status            []                 []              [update]  deployments.extensions/finalizers  []                 []              [update]  deployments.extensions/status      []                 []              [update]  6)查看kube-controller-manager集群中当前的leader[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yamlapiVersion: v1kind: Endpointsmetadata:  annotations:    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master02_4e449819-9185-11e9-82b6-005056ac42a4","leaseDurationSeconds":15,"acquireTime":"2019-06-18T04:55:49Z","renewTime":"2019-06-18T05:04:54Z","leaderTransitions":3}'  creationTimestamp: "2019-06-18T04:03:07Z"  name: 
kube-controller-manager  namespace: kube-system  resourceVersion: "4604"  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager  uid: fa824018-917d-11e9-90d4-005056ac7c81  可见,当前的leader为k8s-master02节点。  测试 kube-controller-manager 集群的高可用停掉一个或两个节点的 kube-controller-manager 服务,观察其它节点的日志,看是否获取了 leader 权限。  比如停掉k8s-master02节点的kube-controller-manager 服务[root@k8s-master02 ~]# systemctl stop kube-controller-manager [root@k8s-master02 ~]# ps -ef|grep kube-controller-managerroot     25677 11006  0 13:06 pts/0    00:00:00 grep --color=auto kube-controller-manager  接着观察kube-controller-manager集群当前的leader情况[root@k8s-master01 work]# kubectl get endpoints kube-controller-manager --namespace=kube-system  -o yamlapiVersion: v1kind: Endpointsmetadata:  annotations:    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master03_4e4c28b5-9185-11e9-b98a-005056ac7136","leaseDurationSeconds":15,"acquireTime":"2019-06-18T05:06:32Z","renewTime":"2019-06-18T05:06:57Z","leaderTransitions":4}'  creationTimestamp: "2019-06-18T04:03:07Z"  name: kube-controller-manager  namespace: kube-system  resourceVersion: "4695"  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-controller-manager  uid: fa824018-917d-11e9-90d4-005056ac7c81  发现当前leader已经转移到k8s-master03节点上了!!
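
After the failover test, the stopped instance can be brought back; it rejoins as a standby and the leader stays on k8s-master03 until the next transition. The jsonpath expression below is a sketch (note the escaped dots in the annotation key) that prints only the leader record instead of the full -o yaml output.

[root@k8s-master02 ~]# systemctl start kube-controller-manager
[root@k8s-master02 ~]# systemctl status kube-controller-manager | grep Active
# expected to show: Active: active (running)

[root@k8s-master01 work]# kubectl -n kube-system get endpoints kube-controller-manager \
  -o jsonpath="{.metadata.annotations['control-plane\.alpha\.kubernetes\.io/leader']}"
# prints a JSON blob whose holderIdentity field names the current leader node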

8.3 - Deploying a highly available kube-scheduler cluster

The cluster consists of 3 nodes. After start-up, the instances elect a leader through competitive election; the other instances stay in standby (blocked) mode. When the leader becomes unavailable, the remaining instances elect a new leader, keeping the service available. To secure communication, this document first generates an x509 certificate and private key.

kube-scheduler uses this certificate in two situations:

to communicate with kube-apiserver's secure port; and to serve prometheus-format metrics on its secure port (https, 10259; the non-secure http port is 10251, see the port check further below).

All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands remotely.

1)创建 kube-scheduler 证书和私钥创建证书签名请求:[root@k8s-master01 ~]# cd /opt/k8s/work[root@k8s-master01 work]# cat > kube-scheduler-csr.json <>> ${node_master_ip}"    scp kube-scheduler*.pem root@${node_master_ip}:/etc/kubernetes/cert/  done2) 创建和分发 kubeconfig 文件kube-scheduler 使用 kubeconfig 文件访问 apiserver,该文件提供了 apiserver 地址、嵌入的 CA 证书和 kube-scheduler 证书:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# kubectl config set-cluster kubernetes \  --certificate-authority=/opt/k8s/work/ca.pem \  --embed-certs=true \  --server=${KUBE_APISERVER} \  --kubeconfig=kube-scheduler.kubeconfig[root@k8s-master01 work]# kubectl config set-credentials system:kube-scheduler \  --client-certificate=kube-scheduler.pem \  --client-key=kube-scheduler-key.pem \  --embed-certs=true \  --kubeconfig=kube-scheduler.kubeconfig[root@k8s-master01 work]# kubectl config set-context system:kube-scheduler \  --cluster=kubernetes \  --user=system:kube-scheduler \  --kubeconfig=kube-scheduler.kubeconfig[root@k8s-master01 work]# kubectl config use-context system:kube-scheduler --kubeconfig=kube-scheduler.kubeconfig分发 kubeconfig 到所有 master 节点:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    scp kube-scheduler.kubeconfig root@${node_master_ip}:/etc/kubernetes/  done3) 创建 kube-scheduler 配置文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# cat >kube-scheduler.yaml.template < kube-scheduler-${NODE_MASTER_IPS[i]}.yaml  done注意:NODE_NAMES 和 NODE_IPS 为相同长度的 bash 数组,分别为节点名称和对应的 IP;[root@k8s-master01 work]# ll kube-scheduler*.yaml-rw-r--r-- 1 root root 399 Jun 18 14:57 kube-scheduler-172.16.60.241.yaml-rw-r--r-- 1 root root 399 Jun 18 14:57 kube-scheduler-172.16.60.242.yaml-rw-r--r-- 1 root root 399 Jun 18 14:57 kube-scheduler-172.16.60.243.yaml分发 kube-scheduler 配置文件到所有 master 节点:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    scp kube-scheduler-${node_master_ip}.yaml root@${node_master_ip}:/etc/kubernetes/kube-scheduler.yaml  done注意:重命名为 kube-scheduler.yaml;4)创建 kube-scheduler systemd unit 模板文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# cat > kube-scheduler.service.template < kube-scheduler-${NODE_MASTER_IPS[i]}.service  done其中:NODE_NAMES 和 NODE_IPS 为相同长度的 bash 数组,分别为节点名称和对应的 IP;[root@k8s-master01 work]# ll kube-scheduler*.service-rw-r--r-- 1 root root 981 Jun 18 15:30 kube-scheduler-172.16.60.241.service-rw-r--r-- 1 root root 981 Jun 18 15:30 kube-scheduler-172.16.60.242.service-rw-r--r-- 1 root root 981 Jun 18 15:30 kube-scheduler-172.16.60.243.service分发 systemd unit 文件到所有 master 节点:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    scp kube-scheduler-${node_master_ip}.service root@${node_master_ip}:/etc/systemd/system/kube-scheduler.service  done5) 启动 kube-scheduler 服务[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    ssh root@${node_master_ip} "mkdir -p ${K8S_DIR}/kube-scheduler"    ssh root@${node_master_ip} "systemctl daemon-reload 
&& systemctl enable kube-scheduler && systemctl restart kube-scheduler"  done注意:启动服务前必须先创建工作目录;检查服务运行状态[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}  do    echo ">>> ${node_master_ip}"    ssh root@${node_master_ip} "systemctl status kube-scheduler|grep Active"  done预期输出结果:>>> 172.16.60.241   Active: active (running) since Tue 2019-06-18 15:33:29 CST; 1min 12s ago>>> 172.16.60.242   Active: active (running) since Tue 2019-06-18 15:33:30 CST; 1min 11s ago>>> 172.16.60.243   Active: active (running) since Tue 2019-06-18 15:33:30 CST; 1min 11s ago确保状态为 active (running),否则查看日志,确认原因: (journalctl -u kube-scheduler)看看集群状态,此时状态均为"ok"[root@k8s-master01 work]# kubectl get csNAME                 STATUS    MESSAGE             ERRORscheduler            Healthy   ok                 controller-manager   Healthy   ok                 etcd-2               Healthy   {"health":"true"}  etcd-0               Healthy   {"health":"true"}  etcd-1               Healthy   {"health":"true"}6) 查看输出的 metrics注意:以下命令要在kube-scheduler集群节点上执行。kube-scheduler监听10251和10259端口:10251:接收 http 请求,非安全端口,不需要认证授权;10259:接收 https 请求,安全端口,需要认证授权;两个接口都对外提供 /metrics 和 /healthz 的访问。[root@k8s-master01 work]# netstat -lnpt |grep kube-scheduletcp6       0      0 :::10251                :::*                    LISTEN      6075/kube-schedulertcp6       0      0 :::10259                :::*                    LISTEN      6075/kube-scheduler[root@k8s-master01 work]# lsof -i:10251                   COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAMEkube-sche 6075 root    3u  IPv6 628571      0t0  TCP *:10251 (LISTEN)[root@k8s-master01 work]# lsof -i:10259                   COMMAND    PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAMEkube-sche 6075 root    5u  IPv6 628574      0t0  TCP *:10259 (LISTEN)下面几种方式均能获取到kube-schedule的metrics数据信息(分别使用http的10251 和 https的10259端口)[root@k8s-master01 work]# curl -s http://172.16.60.241:10251/metrics |head       # HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0[root@k8s-master01 work]# curl -s http://127.0.0.1:10251/metrics |head   # HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 
0[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://172.16.60.241:10251/metrics |head  # HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem http://127.0.0.1:10251/metrics |head      # HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0[root@k8s-master01 work]# curl -s --cacert /opt/k8s/work/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.241:10259/metrics |head# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.# TYPE apiserver_audit_event_total counterapiserver_audit_event_total 0# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.# TYPE apiserver_audit_requests_rejected_total counterapiserver_audit_requests_rejected_total 0# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.# TYPE apiserver_client_certificate_expiration_seconds histogramapiserver_client_certificate_expiration_seconds_bucket{le="0"} 0apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 07)查看当前的 leader[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yamlapiVersion: v1kind: Endpointsmetadata:  annotations:    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master01_5eac29d7-919b-11e9-b242-005056ac7c81","leaseDurationSeconds":15,"acquireTime":"2019-06-18T07:33:31Z","renewTime":"2019-06-18T07:41:13Z","leaderTransitions":0}'  creationTimestamp: "2019-06-18T07:33:31Z"  name: kube-scheduler  namespace: kube-system  resourceVersion: "12218"  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler  uid: 5f466875-919b-11e9-90d4-005056ac7c81可见,当前的 leader 为 k8s-master01 节点。测试 kube-scheduler 集群的高可用随便找一个或两个 master 节点,停掉 kube-scheduler 服务,看其它节点是否获取了 leader 权限。比如停掉k8s-master01节点的kube-schedule服务,查看下leader的转移情况[root@k8s-master01 work]# systemctl stop kube-scheduler[root@k8s-master01 work]# ps -ef|grep kube-schedulerroot      6871  2379  0 15:42 pts/2    00:00:00 grep --color=auto 
kube-scheduler再次看看当前的leader,发现leader已经转移为k8s-master02节点了[root@k8s-master01 work]# kubectl get endpoints kube-scheduler --namespace=kube-system  -o yamlapiVersion: v1kind: Endpointsmetadata:  annotations:    control-plane.alpha.kubernetes.io/leader: '{"holderIdentity":"k8s-master02_5efade79-919b-11e9-bbe2-005056ac42a4","leaseDurationSeconds":15,"acquireTime":"2019-06-18T07:43:03Z","renewTime":"2019-06-18T07:43:12Z","leaderTransitions":1}'  creationTimestamp: "2019-06-18T07:33:31Z"  name: kube-scheduler  namespace: kube-system  resourceVersion: "12363"  selfLink: /api/v1/namespaces/kube-system/endpoints/kube-scheduler  uid: 5f466875-919b-11e9-90d4-005056ac7c81
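
As a final check (a sketch), restart the kube-scheduler that was stopped for the failover test and sweep the non-secure health endpoint on all three masters; the port 10251 and the NODE_MASTER_IPS array come from this article's configuration.

[root@k8s-master01 work]# systemctl start kube-scheduler
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_master_ip in ${NODE_MASTER_IPS[@]}
  do
    echo ">>> ${node_master_ip}"
    curl -s http://${node_master_ip}:10251/healthz
    echo
  done
# each instance is expected to answer: ok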

9 - Deploying the Kubernetes worker nodes

The components that run on Kubernetes worker nodes are docker, kubelet, kube-proxy and flanneld.
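
Once all four components are in place (docker and kubelet are deployed in this chapter; flanneld was set up earlier in the series and kube-proxy follows later), a single pass over the worker nodes can verify them. This is a sketch and assumes the systemd unit names used by this deployment (flanneld, docker, kubelet, kube-proxy).

source /opt/k8s/bin/environment.sh
for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl is-active flanneld docker kubelet kube-proxy"
  done
# every line of output should read "active"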

All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands remotely.

Install dependency packages
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "yum install -y epel-release"
    ssh root@${node_node_ip} "yum install -y conntrack ipvsadm ntp ntpdate ipset jq iptables curl sysstat libseccomp && modprobe ip_vs "
  done
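
Optionally (a sketch), confirm that the modules pulled in by modprobe ip_vs are actually loaded on every worker node; depending on the kernel, related modules such as nf_conntrack_ipv4 may also show up once iptables rules are in use.

source /opt/k8s/bin/environment.sh
for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "lsmod | egrep 'ip_vs|nf_conntrack' | awk '{print \$1}' | sort -u"
  done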

9.1 - Deploying the docker component

docker runs and manages containers; kubelet interacts with it through the Container Runtime Interface (CRI).

The operations below are all performed on k8s-master01, which then distributes files and runs commands remotely.

1) 下载和分发 docker 二进制文件[root@k8s-master01 ~]# cd /opt/k8s/work[root@k8s-master01 work]# wget https://download.docker.com/linux/static/stable/x86_64/docker-18.09.6.tgz[root@k8s-master01 work]# tar -xvf docker-18.09.6.tgz分发二进制文件到所有node节点:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    scp docker/*  root@${node_node_ip}:/opt/k8s/bin/    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"  done2) 创建和分发 systemd unit 文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# cat > docker.service <<"EOF"[Unit]Description=Docker Application Container EngineDocumentation=http://docs.docker.io[Service]WorkingDirectory=##DOCKER_DIR##Environment="PATH=/opt/k8s/bin:/bin:/sbin:/usr/bin:/usr/sbin"EnvironmentFile=-/run/flannel/dockerExecStart=/opt/k8s/bin/dockerd $DOCKER_NETWORK_OPTIONSExecReload=/bin/kill -s HUP $MAINPIDRestart=on-failureRestartSec=5LimitNOFILE=infinityLimitNPROC=infinityLimitCORE=infinityDelegate=yesKillMode=process[Install]WantedBy=multi-user.targetEOF注意事项:-> EOF 前后有双引号,这样 bash 不会替换文档中的变量,如 $DOCKER_NETWORK_OPTIONS (这些环境变量是 systemd 负责替换的。);-> dockerd 运行时会调用其它 docker 命令,如 docker-proxy,所以需要将 docker 命令所在的目录加到 PATH 环境变量中;-> flanneld 启动时将网络配置写入 /run/flannel/docker 文件中,dockerd 启动前读取该文件中的环境变量 DOCKER_NETWORK_OPTIONS ,然后设置 docker0 网桥网段;-> 如果指定了多个 EnvironmentFile 选项,则必须将 /run/flannel/docker 放在最后(确保 docker0 使用 flanneld 生成的 bip 参数);-> docker 需要以 root 用于运行;-> docker 从 1.13 版本开始,可能将 iptables FORWARD chain的默认策略设置为DROP,从而导致 ping 其它 Node 上的 Pod IP 失败,遇到这种情况时,需要手动设置策略为 ACCEPT:   # iptables -P FORWARD ACCEPT   并且把以下命令写入 /etc/rc.local 文件中,防止节点重启iptables FORWARD chain的默认策略又还原为DROP   # /sbin/iptables -P FORWARD ACCEPT分发 systemd unit 文件到所有node节点机器:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# sed -i -e "s|##DOCKER_DIR##|${DOCKER_DIR}|" docker.service[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    scp docker.service root@${node_node_ip}:/etc/systemd/system/  done3) 配置和分发 docker 配置文件使用国内的仓库镜像服务器以加快 pull image 的速度,同时增加下载的并发数 (需要重启 dockerd 生效):[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > docker-daemon.json <>> ${node_node_ip}"    ssh root@${node_node_ip} "mkdir -p  /etc/docker/ ${DOCKER_DIR}/{data,exec}"    scp docker-daemon.json root@${node_node_ip}:/etc/docker/daemon.json  done4) 启动 docker 服务[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable docker && systemctl restart docker"  done检查服务运行状态[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    ssh root@${node_node_ip} "systemctl status docker|grep Active"  done预期输出结果:>>> 172.16.60.244   Active: active (running) since Tue 2019-06-18 16:28:32 CST; 42s ago>>> 172.16.60.245   Active: active (running) since Tue 2019-06-18 16:28:31 CST; 42s ago>>> 172.16.60.246   Active: active (running) since Tue 2019-06-18 16:28:32 CST; 42s ago确保状态为 active (running),否则查看日志,确认原因 (journalctl -u docker)5) 检查 docker0 网桥[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for 
node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    ssh root@${node_node_ip} "/usr/sbin/ip addr show flannel.1 && /usr/sbin/ip addr show docker0"  done预期输出结果:>>> 172.16.60.2443: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default    link/ether c6:c2:d1:5a:9a:8a brd ff:ff:ff:ff:ff:ff    inet 172.30.88.0/32 scope global flannel.1       valid_lft forever preferred_lft forever4: docker0:  mtu 1500 qdisc noqueue state DOWN group default    link/ether 02:42:27:3c:5e:5f brd ff:ff:ff:ff:ff:ff    inet 172.30.88.1/21 brd 172.30.95.255 scope global docker0       valid_lft forever preferred_lft forever>>> 172.16.60.2453: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default    link/ether 02:36:1d:ab:c4:86 brd ff:ff:ff:ff:ff:ff    inet 172.30.56.0/32 scope global flannel.1       valid_lft forever preferred_lft forever4: docker0:  mtu 1500 qdisc noqueue state DOWN group default    link/ether 02:42:6f:36:7d:fb brd ff:ff:ff:ff:ff:ff    inet 172.30.56.1/21 brd 172.30.63.255 scope global docker0       valid_lft forever preferred_lft forever>>> 172.16.60.2463: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default    link/ether 4e:73:d1:0e:27:c0 brd ff:ff:ff:ff:ff:ff    inet 172.30.72.0/32 scope global flannel.1       valid_lft forever preferred_lft forever4: docker0:  mtu 1500 qdisc noqueue state DOWN group default    link/ether 02:42:21:39:f4:9e brd ff:ff:ff:ff:ff:ff    inet 172.30.72.1/21 brd 172.30.79.255 scope global docker0       valid_lft forever preferred_lft forever确认各node节点的docker0网桥和flannel.1接口的IP一定要处于同一个网段中(如下 172.30.88.0/32 位于 172.30.88.1/21 中)!!!到任意一个node节点上查看 docker 的状态信息[root@k8s-node01 ~]# ps -elfH|grep docker0 S root     21573 18744  0  80   0 - 28180 pipe_w 16:32 pts/2    00:00:00         grep --color=auto docker4 S root     21147     1  0  80   0 - 173769 futex_ 16:28 ?       00:00:00   /opt/k8s/bin/dockerd --bip=172.30.88.1/21 --ip-masq=false --mtu=14504 S root     21175 21147  0  80   0 - 120415 futex_ 16:28 ?       00:00:00     containerd --config /data/k8s/docker/exec/containerd/containerd.toml --log-level debug[root@k8s-node01 ~]# docker infoContainers: 0 Running: 0 Paused: 0 Stopped: 0Images: 0Server Version: 18.09.6Storage Driver: overlay2 Backing Filesystem: xfs Supports d_type: true Native Overlay Diff: trueLogging Driver: json-fileCgroup Driver: cgroupfsPlugins: Volume: local Network: bridge host macvlan null overlay Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslogSwarm: inactiveRuntimes: runcDefault Runtime: runcInit Binary: docker-initcontainerd version: bb71b10fd8f58240ca47fbb579b9d1028eea7c84runc version: 2b18fe1d885ee5083ef9f0838fee39b62d653e30init version: fec3683Security Options: seccomp  Profile: defaultKernel Version: 4.4.181-1.el7.elrepo.x86_64Operating System: CentOS Linux 7 (Core)OSType: linuxArchitecture: x86_64CPUs: 4Total Memory: 3.859GiBName: k8s-node01ID: R24D:75E5:2OWS:SNU5:NPSE:SBKH:WKLZ:2ZH7:6ITY:3BE2:YHRG:6WRUDocker Root Dir: /data/k8s/docker/dataDebug Mode (client): falseDebug Mode (server): true File Descriptors: 22 Goroutines: 43 System Time: 2019-06-18T16:32:44.260301822+08:00 EventsListeners: 0Registry: https://index.docker.io/v1/Labels:Experimental: falseInsecure Registries: docker02:35000 127.0.0.0/8Registry Mirrors: https://docker.mirrors.ustc.edu.cn/ https://hub-mirror.c.163.com/Live Restore Enabled: trueProduct License: Community Engine
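
Tying back to the FORWARD-chain note above, the sketch below confirms that the policy really is ACCEPT on every worker node after docker has started (full paths are used because non-interactive ssh sessions may have a reduced PATH):

source /opt/k8s/bin/environment.sh
for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "/usr/sbin/iptables -S FORWARD | head -1"
  done
# expected first line on each node: -P FORWARD ACCEPT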

9.2 - Deploying the kubelet component

kubelet runs on every worker node. It receives requests from kube-apiserver, manages Pod containers, and executes interactive commands such as exec, run and logs. On start-up, kubelet automatically registers the node with kube-apiserver, and its built-in cadvisor collects and reports the node's resource usage. For security, this deployment closes kubelet's insecure http port, authenticates and authorizes every request (for example those from apiserver or heapster), and rejects unauthorized access.

All of the deployment commands below are executed on the k8s-master01 node, which then distributes files and runs commands remotely.

1)下载和分发 kubelet 二进制文件[root@k8s-master01 ~]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    scp kubernetes/server/bin/kubelet root@${node_node_ip}:/opt/k8s/bin/    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"  done    2)创建 kubelet bootstrap kubeconfig 文件[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}  do    echo ">>> ${node_node_name}"        # 创建 token    export BOOTSTRAP_TOKEN=$(kubeadm token create \      --description kubelet-bootstrap-token \      --groups system:bootstrappers:${node_node_name} \      --kubeconfig ~/.kube/config)        # 设置集群参数    kubectl config set-cluster kubernetes \      --certificate-authority=/etc/kubernetes/cert/ca.pem \      --embed-certs=true \      --server=${KUBE_APISERVER} \      --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig        # 设置客户端认证参数    kubectl config set-credentials kubelet-bootstrap \      --token=${BOOTSTRAP_TOKEN} \      --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig        # 设置上下文参数    kubectl config set-context default \      --cluster=kubernetes \      --user=kubelet-bootstrap \      --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig        # 设置默认上下文    kubectl config use-context default --kubeconfig=kubelet-bootstrap-${node_node_name}.kubeconfig  done    解释说明: 向 kubeconfig 写入的是 token,bootstrap 结束后 kube-controller-manager 为 kubelet 创建 client 和 server 证书;    查看 kubeadm 为各节点创建的 token:[root@k8s-master01 work]# kubeadm token list --kubeconfig ~/.kube/configTOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION               EXTRA GROUPS0zqowl.aye8f834jtq9vm9t   23h       2019-06-19T16:50:43+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node03b46tq2.muab337gxwl0dsqn   23h       2019-06-19T16:50:43+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node02heh51x.foguhh2qa5crpzlq   23h       2019-06-19T16:50:42+08:00   authentication,signing   kubelet-bootstrap-token   system:bootstrappers:k8s-node01    解释说明:-> token 有效期为 1 天,超期后将不能再被用来 boostrap kubelet,且会被 kube-controller-manager 的 tokencleaner 清理;-> kube-apiserver 接收 kubelet 的 bootstrap token 后,将请求的 user 设置为 system:bootstrap:,group 设置为 system:bootstrappers,   后续将为这个 group 设置 ClusterRoleBinding;    查看各 token 关联的 Secret:[root@k8s-master01 work]# kubectl get secrets  -n kube-system|grep bootstrap-tokenbootstrap-token-0zqowl                           bootstrap.kubernetes.io/token         7      88sbootstrap-token-b46tq2                           bootstrap.kubernetes.io/token         7      88sbootstrap-token-heh51x                           bootstrap.kubernetes.io/token         7      89s    3) 分发 bootstrap kubeconfig 文件到所有node节点[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}  do    echo ">>> ${node_node_name}"    scp kubelet-bootstrap-${node_node_name}.kubeconfig root@${node_node_name}:/etc/kubernetes/kubelet-bootstrap.kubeconfig  done    4) 创建和分发 kubelet 参数配置文件从 v1.10 开始,部分 kubelet 参数需在配置文件中配置,kubelet --help 会提示:DEPRECATED: This parameter should be set via the config file specified by the Kubelet's --config flag    创建 kubelet 参数配置文件模板:[root@k8s-master01 work]# cd 
/opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > kubelet-config.yaml.template < address:kubelet 安全端口(https,10250)监听的地址,不能为 127.0.0.1,否则 kube-apiserver、heapster 等不能调用 kubelet 的 API;-> readOnlyPort=0:关闭只读端口(默认 10255),等效为未指定;-> authentication.anonymous.enabled:设置为 false,不允许匿名�访问 10250 端口;-> authentication.x509.clientCAFile:指定签名客户端证书的 CA 证书,开启 HTTP 证书认证;-> authentication.webhook.enabled=true:开启 HTTPs bearer token 认证;-> 对于未通过 x509 证书和 webhook 认证的请求(kube-apiserver 或其他客户端),将被拒绝,提示 Unauthorized;-> authroization.mode=Webhook:kubelet 使用 SubjectAccessReview API 查询 kube-apiserver 某 user、group 是否具有操作资源的权限(RBAC);-> featureGates.RotateKubeletClientCertificate、featureGates.RotateKubeletServerCertificate:自动 rotate 证书,证书的有效期取决于   kube-controller-manager 的 --experimental-cluster-signing-duration 参数;-> 需要 root 账户运行;    为各节点创建和分发 kubelet 配置文件:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    sed -e "s/##NODE_NODE_IP##/${node_node_ip}/" kubelet-config.yaml.template > kubelet-config-${node_node_ip}.yaml.template    scp kubelet-config-${node_node_ip}.yaml.template root@${node_node_ip}:/etc/kubernetes/kubelet-config.yaml  done    5)创建和分发 kubelet systemd unit 文件创建 kubelet systemd unit 文件模板:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# cat > kubelet.service.template < 如果设置了 --hostname-override 选项,则 kube-proxy 也需要设置该选项,否则会出现找不到 Node 的情况;-> --bootstrap-kubeconfig:指向 bootstrap kubeconfig 文件,kubelet 使用该文件中的用户名和 token 向 kube-apiserver 发送 TLS Bootstrapping 请求;-> K8S approve kubelet 的 csr 请求后,在 --cert-dir 目录创建证书和私钥文件,然后写入 --kubeconfig 文件;-> --pod-infra-container-image 不使用 redhat 的 pod-infrastructure:latest 镜像,它不能回收容器的僵尸;    为各节点创建和分发 kubelet systemd unit 文件:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}  do    echo ">>> ${node_node_name}"    sed -e "s/##NODE_NODE_NAME##/${node_node_name}/" kubelet.service.template > kubelet-${node_node_name}.service    scp kubelet-${node_node_name}.service root@${node_node_name}:/etc/systemd/system/kubelet.service  done    6)Bootstrap Token Auth 和授予权限-> kubelet启动时查找--kubeletconfig参数对应的文件是否存在,如果不存在则使用 --bootstrap-kubeconfig 指定的 kubeconfig 文件向 kube-apiserver 发送证书签名请求 (CSR)。-> kube-apiserver 收到 CSR 请求后,对其中的 Token 进行认证,认证通过后将请求的 user 设置为 system:bootstrap:,group 设置为 system:bootstrappers,   这一过程称为 Bootstrap Token Auth。-> 默认情况下,这个 user 和 group 没有创建 CSR 的权限,kubelet 启动失败,错误日志如下:   # journalctl -u kubelet -a |grep -A 2 'certificatesigningrequests'   May 9 22:48:41 k8s-master01 kubelet[128468]: I0526 22:48:41.798230  128468 certificate_manager.go:366] Rotating certificates   May 9 22:48:41 k8s-master01 kubelet[128468]: E0526 22:48:41.801997  128468 certificate_manager.go:385] Failed while requesting a signed certificate from the master: cannot cre   ate certificate signing request: certificatesigningrequests.certificates.k8s.io is forbidden: User "system:bootstrap:82jfrm" cannot create resource "certificatesigningrequests" i   n API group "certificates.k8s.io" at the cluster scope   May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.044828  128468 kubelet.go:2244] node "k8s-master01" not found   May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.078658  128468 
reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:442: Failed to list *v1.Service: Unauthor   ized   May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.079873  128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/kubelet.go:451: Failed to list *v1.Node: Unauthorize   d   May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.082683  128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.CSIDriver: Unau   thorized   May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.084473  128468 reflector.go:126] k8s.io/kubernetes/pkg/kubelet/config/apiserver.go:47: Failed to list *v1.Pod: Unau   thorized   May 9 22:48:42 k8s-master01 kubelet[128468]: E0526 22:48:42.088466  128468 reflector.go:126] k8s.io/client-go/informers/factory.go:133: Failed to list *v1beta1.RuntimeClass: U   nauthorized       解决办法是:创建一个 clusterrolebinding,将 group system:bootstrappers 和 clusterrole system:node-bootstrapper 绑定:   # kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --group=system:bootstrappers    7) 启动 kubelet 服务[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}  do    echo ">>> ${node_node_ip}"    ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kubelet/kubelet-plugins/volume/exec/"    ssh root@${node_node_ip} "/usr/sbin/swapoff -a"    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kubelet && systemctl restart kubelet"  done    解释说明:-> 启动服务前必须先创建工作目录;-> 关闭 swap 分区,否则 kubelet 会启动失败 (使用"journalctl -u kubelet |tail"命令查看错误日志)    kubelet 启动后使用 --bootstrap-kubeconfig 向 kube-apiserver 发送 CSR 请求,当这个 CSR 被 approve 后,kube-controller-manager 为 kubelet 创建 TLS 客户端证书、私钥和 --kubeletconfig 文件。    注意:kube-controller-manager 需要配置 --cluster-signing-cert-file 和 --cluster-signing-key-file 参数,才会为 TLS Bootstrap 创建证书和私钥。    [root@k8s-master01 work]# kubectl get csrNAME        AGE    REQUESTOR                 CONDITIONcsr-4wk6q   108s   system:bootstrap:0zqowl   Pendingcsr-mjtl5   110s   system:bootstrap:heh51x   Pendingcsr-rfz27   109s   system:bootstrap:b46tq2   Pending    [root@k8s-master01 work]# kubectl get nodesNo resources found.    
此时三个node节点的csr均处于 pending 状态;    8)自动 approve CSR 请求创建三个 ClusterRoleBinding,分别用于自动 approve client、renew client、renew server 证书:[root@k8s-master01 work]# cd /opt/k8s/work[root@k8s-master01 work]# cat > csr-crb.yaml < auto-approve-csrs-for-group:自动 approve node 的第一次 CSR; 注意第一次 CSR 时,请求的 Group 为 system:bootstrappers;-> node-client-cert-renewal:自动 approve node 后续过期的 client 证书,自动生成的证书 Group 为 system:nodes;-> node-server-cert-renewal:自动 approve node 后续过期的 server 证书,自动生成的证书 Group 为 system:nodes;    执行创建:[root@k8s-master01 work]# kubectl apply -f csr-crb.yaml    查看 kubelet 的情况    需要耐心等待一段时间(1-10 分钟),三个节点的 CSR 都被自动 approved(测试时等待了很长一段时间才被自动approved)[root@k8s-master01 work]# kubectl get csrNAME        AGE     REQUESTOR                 CONDITIONcsr-4m4hc   37s     system:node:k8s-node01    Pendingcsr-4wk6q   7m29s   system:bootstrap:0zqowl   Approved,Issuedcsr-h8hq6   36s     system:node:k8s-node02    Pendingcsr-mjtl5   7m31s   system:bootstrap:heh51x   Approved,Issuedcsr-rfz27   7m30s   system:bootstrap:b46tq2   Approved,Issuedcsr-t9p6n   36s     system:node:k8s-node03    Pending    注意:Pending 的 CSR 用于创建 kubelet server 证书,需要手动 approve,后续会说到这个。    此时发现所有node节点状态均为"ready":[root@k8s-master01 work]# kubectl get nodesNAME         STATUS   ROLES    AGE     VERSIONk8s-node01   Ready       3m      v1.14.2k8s-node02   Ready       3m      v1.14.2k8s-node03   Ready       2m59s   v1.14.2    kube-controller-manager 为各node节点生成了 kubeconfig 文件和公私钥(如下在node节点上执行):[root@k8s-node01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig-rw------- 1 root root 2310 Jun 18 17:09 /etc/kubernetes/kubelet.kubeconfig    [root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet-rw------- 1 root root 1273 Jun 18 17:16 kubelet-client-2019-06-18-17-16-31.pemlrwxrwxrwx 1 root root   59 Jun 18 17:16 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem    注意:此时还没有自动生成 kubelet server 证书;    9)手动 approve server cert csr基于安全性考虑,CSR approving controllers 不会自动 approve kubelet server 证书签名请求,需要手动 approve:    [root@k8s-master01 work]# kubectl get csrNAME        AGE    REQUESTOR                 CONDITIONcsr-4m4hc   6m4s   system:node:k8s-node01    Pendingcsr-4wk6q   12m    system:bootstrap:0zqowl   Approved,Issuedcsr-h8hq6   6m3s   system:node:k8s-node02    Pendingcsr-mjtl5   12m    system:bootstrap:heh51x   Approved,Issuedcsr-rfz27   12m    system:bootstrap:b46tq2   Approved,Issuedcsr-t9p6n   6m3s   system:node:k8s-node03    Pending    记住上面执行结果为"Pending"的对应的csr的NAME名称,然后对这些csr进行手动approve[root@k8s-master01 work]# kubectl certificate approve csr-4m4hccertificatesigningrequest.certificates.k8s.io/csr-4m4hc approved    [root@k8s-master01 work]# kubectl certificate approve csr-h8hq6certificatesigningrequest.certificates.k8s.io/csr-h8hq6 approved    [root@k8s-master01 work]# kubectl certificate approve csr-t9p6ncertificatesigningrequest.certificates.k8s.io/csr-t9p6n approved    再次查看csr,发现所有的CSR都为approved了[root@k8s-master01 work]# kubectl get csrNAME        AGE     REQUESTOR                 CONDITIONcsr-4m4hc   7m46s   system:node:k8s-node01    Approved,Issuedcsr-4wk6q   14m     system:bootstrap:0zqowl   Approved,Issuedcsr-h8hq6   7m45s   system:node:k8s-node02    Approved,Issuedcsr-mjtl5   14m     system:bootstrap:heh51x   Approved,Issuedcsr-rfz27   14m     system:bootstrap:b46tq2   Approved,Issuedcsr-t9p6n   7m45s   system:node:k8s-node03    Approved,Issued    再次到node节点上查看,发现已经自动生成 kubelet server 证书;[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/kubelet-*-rw------- 1 root root 1273 Jun 18 17:16 
Run the create:
[root@k8s-master01 work]# kubectl apply -f csr-crb.yaml

Check the kubelet situation.

You need to wait patiently for a while (1-10 minutes) until the CSRs of all three nodes have been auto-approved (in this test it took quite a long time before they were approved automatically):
[root@k8s-master01 work]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-4m4hc   37s     system:node:k8s-node01    Pending
csr-4wk6q   7m29s   system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   36s     system:node:k8s-node02    Pending
csr-mjtl5   7m31s   system:bootstrap:heh51x   Approved,Issued
csr-rfz27   7m30s   system:bootstrap:b46tq2   Approved,Issued
csr-t9p6n   36s     system:node:k8s-node03    Pending

Note: the Pending CSRs are used to create the kubelet server certificates and have to be approved manually; this is covered below.

At this point all node machines show the status "Ready":
[root@k8s-master01 work]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
k8s-node01   Ready    <none>   3m      v1.14.2
k8s-node02   Ready    <none>   3m      v1.14.2
k8s-node03   Ready    <none>   2m59s   v1.14.2

kube-controller-manager has generated a kubeconfig file and a key pair for each node machine (run the following on a node machine):
[root@k8s-node01 ~]# ls -l /etc/kubernetes/kubelet.kubeconfig
-rw------- 1 root root 2310 Jun 18 17:09 /etc/kubernetes/kubelet.kubeconfig

[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/|grep kubelet
-rw------- 1 root root 1273 Jun 18 17:16 kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root   59 Jun 18 17:16 kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem

Note: at this point no kubelet server certificate has been generated automatically yet.

9) Manually approve the server cert CSRs
For security reasons the CSR approving controllers do not automatically approve kubelet server certificate signing requests; they have to be approved by hand:

[root@k8s-master01 work]# kubectl get csr
NAME        AGE    REQUESTOR                 CONDITION
csr-4m4hc   6m4s   system:node:k8s-node01    Pending
csr-4wk6q   12m    system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   6m3s   system:node:k8s-node02    Pending
csr-mjtl5   12m    system:bootstrap:heh51x   Approved,Issued
csr-rfz27   12m    system:bootstrap:b46tq2   Approved,Issued
csr-t9p6n   6m3s   system:node:k8s-node03    Pending

Take note of the NAME of every CSR whose condition is "Pending" above, then approve those CSRs manually:
[root@k8s-master01 work]# kubectl certificate approve csr-4m4hc
certificatesigningrequest.certificates.k8s.io/csr-4m4hc approved

[root@k8s-master01 work]# kubectl certificate approve csr-h8hq6
certificatesigningrequest.certificates.k8s.io/csr-h8hq6 approved

[root@k8s-master01 work]# kubectl certificate approve csr-t9p6n
certificatesigningrequest.certificates.k8s.io/csr-t9p6n approved

Check the CSRs again; now all of them are approved:
[root@k8s-master01 work]# kubectl get csr
NAME        AGE     REQUESTOR                 CONDITION
csr-4m4hc   7m46s   system:node:k8s-node01    Approved,Issued
csr-4wk6q   14m     system:bootstrap:0zqowl   Approved,Issued
csr-h8hq6   7m45s   system:node:k8s-node02    Approved,Issued
csr-mjtl5   14m     system:bootstrap:heh51x   Approved,Issued
csr-rfz27   14m     system:bootstrap:b46tq2   Approved,Issued
csr-t9p6n   7m45s   system:node:k8s-node03    Approved,Issued

Look at the node machine again; the kubelet server certificate has now been generated automatically:
[root@k8s-node01 ~]# ls -l /etc/kubernetes/cert/kubelet-*
-rw------- 1 root root 1273 Jun 18 17:16 /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
lrwxrwxrwx 1 root root   59 Jun 18 17:16 /etc/kubernetes/cert/kubelet-client-current.pem -> /etc/kubernetes/cert/kubelet-client-2019-06-18-17-16-31.pem
-rw------- 1 root root 1317 Jun 18 17:23 /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem
lrwxrwxrwx 1 root root   59 Jun 18 17:23 /etc/kubernetes/cert/kubelet-server-current.pem -> /etc/kubernetes/cert/kubelet-server-2019-06-18-17-23-13.pem

10) The API endpoints exposed by kubelet
After starting, kubelet listens on several ports to receive requests sent by kube-apiserver or other clients.

Run the following on a node machine:
[root@k8s-node01 ~]# netstat -lnpt|grep kubelet
tcp        0      0 127.0.0.1:40831         0.0.0.0:*               LISTEN      24468/kubelet
tcp        0      0 172.16.60.244:10248     0.0.0.0:*               LISTEN      24468/kubelet
tcp        0      0 172.16.60.244:10250     0.0.0.0:*               LISTEN      24468/kubelet

Explanation:
-> 10248: the healthz HTTP port, i.e. the health-check service;
-> 10250: the port the kubelet service listens on; the apiserver uses it to check whether the kubelet is alive. It is an HTTPS service, and access requires authentication and authorization (even for /healthz);
-> 10255: a read-only port that can be accessed without authentication or authorization. The setting "readOnlyPort: 0" used here means port 10255 is not opened; "readOnlyPort: 10255" would open it;
-> Since K8S v1.10 the --cadvisor-port parameter (default port 4194) has been removed, so the cAdvisor UI & API are no longer exposed that way.

For example, when you run "kubectl exec -it nginx-ds-5aedg -- sh", kube-apiserver sends the following request to the kubelet:
POST /exec/default/nginx-ds-5aedg/my-nginx?command=sh&input=1&output=1&tty=1

The kubelet accepts HTTPS requests on port 10250 for the following resources:
-> /pods, /runningpods
-> /metrics, /metrics/cadvisor, /metrics/probes
-> /spec
-> /stats, /stats/container
-> /logs
-> /run/, /exec/, /attach/, /portForward/, /containerLogs/

Because anonymous authentication is disabled and webhook authorization is enabled, every request to the HTTPS API on port 10250 has to be authenticated and authorized. The predefined ClusterRole system:kubelet-api-admin grants access to all kubelet APIs (the User of the kubernetes certificate used by kube-apiserver has been granted this role):

[root@k8s-master01 work]# kubectl describe clusterrole system:kubelet-api-admin
Name:         system:kubelet-api-admin
Labels:       kubernetes.io/bootstrapping=rbac-defaults
Annotations:  rbac.authorization.kubernetes.io/autoupdate: true
PolicyRule:
  Resources      Non-Resource URLs  Resource Names  Verbs
  ---------      -----------------  --------------  -----
  nodes/log      []                 []              [*]
  nodes/metrics  []                 []              [*]
  nodes/proxy    []                 []              [*]
  nodes/spec     []                 []              [*]
  nodes/stats    []                 []              [*]
  nodes          []                 []              [get list watch proxy]
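Before digging into authentication and authorization, note that the healthz port (10248) listed above speaks plain HTTP and needs no credentials, so it is a quick way to confirm that the kubelet itself is healthy. A minimal sketch, assuming node01's address from this setup:

# Should simply return "ok" when the kubelet is healthy.
curl http://172.16.60.244:10248/healthz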
11) kubelet API authentication and authorization
The kubelet is configured with the following authentication parameters:
-> authentication.anonymous.enabled: set to false, so anonymous access to port 10250 is not allowed;
-> authentication.x509.clientCAFile: specifies the CA certificate that signs client certificates, enabling HTTPS certificate authentication;
-> authentication.webhook.enabled=true: enables HTTPS bearer-token authentication;

And with the following authorization parameter:
-> authorization.mode=Webhook: enables RBAC authorization;

When the kubelet receives a request, it authenticates the client certificate against clientCAFile, or checks whether the bearer token is valid. If neither check passes, the request is rejected with Unauthorized:
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem https://172.16.60.244:10250/metrics
Unauthorized

[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer 123456" https://172.16.60.244:10250/metrics
Unauthorized

After authentication succeeds, the kubelet sends a SubjectAccessReview request to kube-apiserver to check whether the user/group behind the certificate or token has permission (RBAC) to operate on the requested resource.

Certificate authentication and authorization:

# A certificate with insufficient permissions;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /etc/kubernetes/cert/kube-controller-manager.pem --key /etc/kubernetes/cert/kube-controller-manager-key.pem https://172.16.60.244:10250/metrics
Forbidden (user=system:kube-controller-manager, verb=get, resource=nodes, subresource=metrics)

# The admin certificate with the highest privileges, created when the kubectl command-line tool was deployed;
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0

Note: the values of --cacert, --cert and --key must be file paths, otherwise 401 Unauthorized is returned.

Bearer token authentication and authorization
Create a ServiceAccount and bind it to the ClusterRole system:kubelet-api-admin so that it is allowed to call the kubelet API:
[root@k8s-master01 work]# kubectl create sa kubelet-api-test
[root@k8s-master01 work]# kubectl create clusterrolebinding kubelet-api-test --clusterrole=system:kubelet-api-admin --serviceaccount=default:kubelet-api-test
[root@k8s-master01 work]# SECRET=$(kubectl get secrets | grep kubelet-api-test | awk '{print $1}')
[root@k8s-master01 work]# TOKEN=$(kubectl describe secret ${SECRET} | grep -E '^token' | awk '{print $2}')
[root@k8s-master01 work]# echo ${TOKEN}
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6Imt1YmVsZXQtYXBpLXRlc3QtdG9rZW4tanRyMnEiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoia3ViZWxldC1hcGktdGVzdCIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImRjYjljZTE0LTkxYWMtMTFlOS05MGQ0LTAwNTA1NmFjN2M4MSIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0Omt1YmVsZXQtYXBpLXRlc3QifQ.i_uVqjOUMLdG4lDURfhxFDOtM2addxgEquQTcpOLP_5g6UI-MjvE5jHem_Q8OtMwFs5tqlCvKJHN2IdfsRiKk_mBe_ysLQsNEoHDclZwHRVN6X84Y62q49y-ArT12YlSpfWWenw-2GawsTmORbz7AYYaU5-kgqMk95mMx57ic8uwvJYlilw4JCnkMON5ESOmgAOg30uVvsBiQVkkYTwGtAG5Tah9wADujQttBjjDOlGntpGHxj-HmZO2GivDgdrbs_UNvhzGt2maDlpP13qYv8zKiBGpSbiWOAk_olsFKQ5-dIrn04NCbh9Kkyyh9JccMSuvePaj-lgTWj5zdUfRHw

Now repeat the kubelet request, this time with the token:
[root@k8s-master01 work]# curl -s --cacert /etc/kubernetes/cert/ca.pem -H "Authorization: Bearer ${TOKEN}" https://172.16.60.244:10250/metrics|head
# HELP apiserver_audit_event_total Counter of audit events generated and sent to the audit backend.
# TYPE apiserver_audit_event_total counter
apiserver_audit_event_total 0
# HELP apiserver_audit_requests_rejected_total Counter of apiserver requests rejected due to an error in audit logging backend.
# TYPE apiserver_audit_requests_rejected_total counter
apiserver_audit_requests_rejected_total 0
# HELP apiserver_client_certificate_expiration_seconds Distribution of the remaining lifetime on the certificate used to authenticate a request.
# TYPE apiserver_client_certificate_expiration_seconds histogram
apiserver_client_certificate_expiration_seconds_bucket{le="0"} 0
apiserver_client_certificate_expiration_seconds_bucket{le="1800"} 0
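The kubelet-api-test ServiceAccount and ClusterRoleBinding exist only for this experiment; once the check has passed they can be removed so that the token does not keep kubelet-admin rights around. A small cleanup sketch using the names created above:

# Remove the test binding and the ServiceAccount used for the bearer-token check.
kubectl delete clusterrolebinding kubelet-api-test
kubectl delete sa kubelet-api-test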
12) cAdvisor and metrics
cAdvisor is embedded in the kubelet binary; it is the service that collects the resource usage (CPU, memory, disk, network) of the containers on its node. Opening https://172.16.60.244:10250/metrics and https://172.16.60.244:10250/metrics/cadvisor in a browser returns the kubelet metrics and the cAdvisor metrics respectively.

Note:
-> kubelet.config.json sets authentication.anonymous.enabled to false, so the HTTPS service on 10250 cannot be accessed anonymously;
-> refer to "Accessing the kube-apiserver secure port from a browser" below: once the relevant certificates have been created and imported, kube-apiserver and the kubelet port 10250 above can be accessed successfully from a browser.

Port 10250 of the kubelet has to be accessed with a certificate:
[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics

[root@k8s-master01 ~]# curl -s --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.244:10250/metrics/cadvisor

13) Fetch the kubelet configuration
Fetch each node's kubelet configuration from kube-apiserver. If the jq command (a JSON processing tool) is missing, install it directly with yum:
[root@k8s-master01 ~]# yum install -y jq

Use the admin certificate with the highest privileges, created when the kubectl command-line tool was deployed:
[root@k8s-master01 ~]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem ${KUBE_APISERVER}/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
{
  "syncFrequency": "1m0s",
  "fileCheckFrequency": "20s",
  "httpCheckFrequency": "20s",
  "address": "172.16.60.244",
  "port": 10250,
  "rotateCertificates": true,
  "serverTLSBootstrap": true,
  "authentication": {
    "x509": {
      "clientCAFile": "/etc/kubernetes/cert/ca.pem"
    },
    "webhook": {
      "enabled": true,
      "cacheTTL": "2m0s"
    },
    "anonymous": {
      "enabled": false
    }
  },
  "authorization": {
    "mode": "Webhook",
    "webhook": {
      "cacheAuthorizedTTL": "5m0s",
      "cacheUnauthorizedTTL": "30s"
    }
  },
  "registryPullQPS": 0,
  "registryBurst": 20,
  "eventRecordQPS": 0,
  "eventBurst": 20,
  "enableDebuggingHandlers": true,
  "enableContentionProfiling": true,
  "healthzPort": 10248,
  "healthzBindAddress": "172.16.60.244",
  "oomScoreAdj": -999,
  "clusterDomain": "cluster.local",
  "clusterDNS": [
    "10.254.0.2"
  ],
  "streamingConnectionIdleTimeout": "4h0m0s",
  "nodeStatusUpdateFrequency": "10s",
  "nodeStatusReportFrequency": "1m0s",
  "nodeLeaseDurationSeconds": 40,
  "imageMinimumGCAge": "2m0s",
  "imageGCHighThresholdPercent": 85,
  "imageGCLowThresholdPercent": 80,
  "volumeStatsAggPeriod": "1m0s",
  "cgroupsPerQOS": true,
  "cgroupDriver": "cgroupfs",
  "cpuManagerPolicy": "none",
  "cpuManagerReconcilePeriod": "10s",
  "runtimeRequestTimeout": "10m0s",
  "hairpinMode": "promiscuous-bridge",
  "maxPods": 220,
  "podCIDR": "172.30.0.0/16",
  "podPidsLimit": -1,
  "resolvConf": "/etc/resolv.conf",
  "cpuCFSQuota": true,
  "cpuCFSQuotaPeriod": "100ms",
  "maxOpenFiles": 1000000,
  "contentType": "application/vnd.kubernetes.protobuf",
  "kubeAPIQPS": 1000,
  "kubeAPIBurst": 2000,
  "serializeImagePulls": false,
  "evictionHard": {
    "memory.available": "100Mi"
  },
  "evictionPressureTransitionPeriod": "5m0s",
  "enableControllerAttachDetach": true,
  "makeIPTablesUtilChains": true,
  "iptablesMasqueradeBit": 14,
  "iptablesDropBit": 15,
  "failSwapOn": true,
  "containerLogMaxSize": "20Mi",
  "containerLogMaxFiles": 10,
  "configMapAndSecretChangeDetectionStrategy": "Watch",
  "enforceNodeAllocatable": [
    "pods"
  ],
  "kind": "KubeletConfiguration",
  "apiVersion": "kubelet.config.k8s.io/v1beta1"
}
Or run the following directly (https://172.16.60.250:8443 is the value of the ${KUBE_APISERVER} variable):
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node01/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node02/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
[root@k8s-master01 ~]# curl -sSL --cacert /etc/kubernetes/cert/ca.pem --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem https://172.16.60.250:8443/api/v1/nodes/k8s-node03/proxy/configz | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
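Rather than repeating the curl for every node, the same check can be wrapped in a loop over the NODE_NODE_NAMES array from environment.sh (a sketch, assuming the array holds k8s-node01..03 as it does elsewhere in this document):

source /opt/k8s/bin/environment.sh
for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    # Fetch the live kubelet configuration of each node through the apiserver proxy.
    curl -sSL --cacert /etc/kubernetes/cert/ca.pem \
      --cert /opt/k8s/work/admin.pem --key /opt/k8s/work/admin-key.pem \
      ${KUBE_APISERVER}/api/v1/nodes/${node_node_name}/proxy/configz \
      | jq '.kubeletconfig|.kind="KubeletConfiguration"|.apiVersion="kubelet.config.k8s.io/v1beta1"'
  done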

9.3 - Accessing the secure ports of kube-apiserver (and other components) from a browser: creating and importing certificates

When you open the kube-apiserver secure port 6443 (proxied on port 8443) in a browser, it complains that the certificate is not trusted:

This is because the kube-apiserver server certificate is signed by the root certificate ca.pem that we created ourselves; the root certificate ca.pem has to be imported into the operating system and marked as permanently trusted.
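Before touching the OS trust store you can verify from the command line that the apiserver's serving certificate really chains to ca.pem; if this check passes, importing ca.pem is all the browser needs for the server side. A sketch using the VIP and proxy port from this setup:

# Validate the certificate presented on the proxied apiserver port against our own CA;
# look for "Verify return code: 0 (ok)" in the output.
openssl s_client -connect 172.16.60.250:8443 -CAfile /opt/k8s/work/ca.pem </dev/null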

Here is how to import the certificate on a Mac OS client machine:

1) On the Mac, open "Keychain Access" -> "System" -> "Certificates" -> "kubernetes" (double-click it and change "Trust" to "Always Trust"), as shown in the screenshot:

Clear the browser cache and visit the address again: the certificate is now trusted (the red exclamation mark is gone).

2) The browser also needs a client certificate to present when accessing the apiserver HTTPS port 6443.

Here we take the admin certificate and private key created when the kubectl command-line tool was deployed, together with the ca certificate above, and build a certificate in the PKCS#12/PFX format that a browser can use:

[root@k8s-master01 ~]# cd /opt/k8s/work/
[root@k8s-master01 work]# openssl pkcs12 -export -out admin.pfx -inkey admin-key.pem -in admin.pem -certfile ca.pem
Enter Export Password:                      # enter a password of your choice, e.g. "123456"
Verifying - Enter Export Password:          # confirm the password: 123456
[root@k8s-master01 work]# ll admin.pfx
-rw-r--r-- 1 root root 3613 Jun 23 23:56 admin.pfx
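To sanity-check the bundle before copying it to the Mac, openssl can dump summary information about it; you will be prompted for the export password set above. A small sketch:

# Print information about the PKCS#12 bundle without writing any keys or certs to disk.
openssl pkcs12 -in admin.pfx -info -noout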

Copy the client certificate admin.pfx generated on the k8s-master01 server to the Mac and import it via "Keychain Access" -> "System" -> "Certificates" (during the import you will be asked for the admin.pfx password, i.e. "123456"), as shown in the screenshot:

Clear the browser history and be sure to restart the browser, then open the apiserver address again. The browser now asks you to pick a client certificate; choose the "admin.pfx" imported above and reload the apiserver URL, and the corresponding metrics data is displayed successfully. (If it fails, delete the certificate, regenerate and re-import it, walk through the steps again, clear the browser cache, restart the browser, select the imported certificate and visit once more.)

Likewise, once the client certificate used for apiserver access has been imported into the local browser, opening the kubelet metrics on port 10250 also prompts for the imported "admin.pfx" certificate, after which the corresponding metrics data is shown normally. (Accessing the HTTPS metrics of the other components of the k8s cluster works the same way.)

9.4 - Deploying the kube-proxy component

kube-proxy runs on all node machines. It watches the apiserver for changes to services and endpoints and creates routing rules that provide the service IPs and load balancing. The deployment commands below are all executed on the k8s-master01 node, which then distributes the files and runs the commands remotely.

1) Download and distribute the kube-proxy binary
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    scp kubernetes/server/bin/kube-proxy root@${node_node_ip}:/opt/k8s/bin/
    ssh root@${node_node_ip} "chmod +x /opt/k8s/bin/*"
  done

2) Create the kube-proxy certificate
Create the certificate signing request:
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-proxy-csr.json <

3) Create and distribute the kubeconfig file
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    scp kube-proxy.kubeconfig root@${node_node_name}:/etc/kubernetes/
  done
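The kube-proxy-csr.json body and the kubeconfig-generation commands referenced in steps 2 and 3 follow the same pattern as the other components in this series; a typical version looks like the sketch below (the names[] fields, the ca-config.json path and the "kubernetes" profile are assumptions carried over from the earlier certificate steps; the CN must stay system:kube-proxy so that the built-in RBAC role recognizes the identity):

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "BeiJing",
      "L": "BeiJing",
      "O": "k8s",
      "OU": "4Paradigm"
    }
  ]
}
EOF

# Sign the certificate with the cluster CA.
cfssl gencert -ca=/opt/k8s/work/ca.pem \
  -ca-key=/opt/k8s/work/ca-key.pem \
  -config=/opt/k8s/work/ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

# Build kube-proxy.kubeconfig around that certificate and the apiserver VIP.
source /opt/k8s/bin/environment.sh
kubectl config set-cluster kubernetes \
  --certificate-authority=/opt/k8s/work/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=kube-proxy.pem \
  --client-key=kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig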
4) Create the kube-proxy configuration file
Starting with v1.10, some kube-proxy parameters can be set in a configuration file, which can be generated with the --write-config-to option. Create the kube-proxy config file template:
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > kube-proxy-config.yaml.template <

Render and distribute a configuration file for each node, substituting the node name and IP:
    echo ">>> ${NODE_NODE_NAMES[i]}"
    sed -e "s/##NODE_NODE_NAME##/${NODE_NODE_NAMES[i]}/" -e "s/##NODE_NODE_IP##/${NODE_NODE_IPS[i]}/" kube-proxy-config.yaml.template > kube-proxy-config-${NODE_NODE_NAMES[i]}.yaml.template
    scp kube-proxy-config-${NODE_NODE_NAMES[i]}.yaml.template root@${NODE_NODE_NAMES[i]}:/etc/kubernetes/kube-proxy-config.yaml
  done
[root@k8s-master01 work]# ll kube-proxy-config-k8s-node0*
-rw-r--r-- 1 root root 500 Jun 24 20:27 kube-proxy-config-k8s-node01.yaml.template
-rw-r--r-- 1 root root 500 Jun 24 20:27 kube-proxy-config-k8s-node02.yaml.template
-rw-r--r-- 1 root root 500 Jun 24 20:27 kube-proxy-config-k8s-node03.yaml.template

5) Create and distribute the kube-proxy systemd unit file
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# cat > kube-proxy.service <

Distribute it to the node machines:
[root@k8s-master01 work]# for node_node_name in ${NODE_NODE_NAMES[@]}
  do
    echo ">>> ${node_node_name}"
    scp kube-proxy.service root@${node_node_name}:/etc/systemd/system/
  done

6) Start the kube-proxy service
[root@k8s-master01 work]# cd /opt/k8s/work
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "mkdir -p ${K8S_DIR}/kube-proxy"
    ssh root@${node_node_ip} "modprobe ip_vs_rr"
    ssh root@${node_node_ip} "systemctl daemon-reload && systemctl enable kube-proxy && systemctl restart kube-proxy"
  done
Note: the working directory must be created before the service is started.

Check the startup result:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "systemctl status kube-proxy|grep Active"
  done
Expected result:
>>> 172.16.60.244
   Active: active (running) since Mon 2019-06-24 20:35:31 CST; 2min 0s ago
>>> 172.16.60.245
   Active: active (running) since Mon 2019-06-24 20:35:30 CST; 2min 0s ago
>>> 172.16.60.246
   Active: active (running) since Mon 2019-06-24 20:35:32 CST; 1min 59s ago
Make sure the status is active (running); otherwise check the logs to find the cause (journalctl -u kube-proxy).

7) Check the listening ports (on any node machine)
[root@k8s-node01 ~]# netstat -lnpt|grep kube-prox
tcp        0      0 172.16.60.244:10249     0.0.0.0:*               LISTEN      3830/kube-proxy
tcp        0      0 172.16.60.244:10256     0.0.0.0:*               LISTEN      3830/kube-proxy
Note:
10249: the http prometheus metrics port;
10256: the http healthz port;

8) Check the ipvs routing rules
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh root@${node_node_ip} "/usr/sbin/ipvsadm -ln"
  done
Expected output:
>>> 172.16.60.244
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 172.16.60.241:6443           Masq    1      0          0
  -> 172.16.60.242:6443           Masq    1      0          0
  -> 172.16.60.243:6443           Masq    1      0          0
>>> 172.16.60.245
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 172.16.60.241:6443           Masq    1      0          0
  -> 172.16.60.242:6443           Masq    1      0          0
  -> 172.16.60.243:6443           Masq    1      0          0
>>> 172.16.60.246
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.254.0.1:443 rr
  -> 172.16.60.241:6443           Masq    1      0          0
  -> 172.16.60.242:6443           Masq    1      0          0
  -> 172.16.60.243:6443           Masq    1      0          0
As the output shows, all HTTPS requests to the K8S service "kubernetes" are forwarded to port 6443 of the kube-apiserver nodes.
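As a final check that kube-proxy really selected IPVS mode, its metrics port (10249, noted in step 7 above) also serves a small /proxyMode endpoint; a sketch run on any node:

# Should print "ipvs" when the IPVS proxier is active (it falls back to "iptables" otherwise).
curl -s http://172.16.60.244:10249/proxyMode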

10 - Verifying Kubernetes cluster functionality

Use a daemonset to verify that the master and worker nodes are working properly.

1) Check node status
[root@k8s-master01 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE    VERSION
k8s-node01   Ready    <none>   6d3h   v1.14.2
k8s-node02   Ready    <none>   6d3h   v1.14.2
k8s-node03   Ready    <none>   6d3h   v1.14.2
Everything is normal when all node machines report Ready.

2) Create the test file
[root@k8s-master01 ~]# cd /opt/k8s/work
[root@k8s-master01 work]# cat > nginx-ds.yml <
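A typical nginx-ds.yml for this kind of smoke test is a NodePort Service in front of an nginx DaemonSet, roughly as sketched below (the nginx:1.7.9 image tag and the labels are assumptions, not necessarily what was used here); it is then submitted with kubectl create:

cat > nginx-ds.yml <<EOF
apiVersion: v1
kind: Service
metadata:
  name: nginx-ds
  labels:
    app: nginx-ds
spec:
  type: NodePort
  selector:
    app: nginx-ds
  ports:
  - name: http
    port: 80
    targetPort: 80
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: nginx-ds
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
spec:
  selector:
    matchLabels:
      app: nginx-ds
  template:
    metadata:
      labels:
        app: nginx-ds
    spec:
      containers:
      - name: my-nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
EOF
# Create the Service and DaemonSet, then list the pods with their IPs and nodes.
kubectl create -f nginx-ds.yml
kubectl get pods -l app=nginx-ds -o wide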
Expected pod listing:
nginx-ds-6kfsw   1/1     Running   0          46s   172.30.72.2   k8s-node03
nginx-ds-xqdgw   1/1     Running   0          46s   172.30.88.2   k8s-node01

As shown, the nginx-ds Pod IPs are 172.30.56.2, 172.30.72.2 and 172.30.88.2. Ping these three IPs from every Node to check connectivity:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh ${node_node_ip} "ping -c 1 172.30.56.2"
    ssh ${node_node_ip} "ping -c 1 172.30.72.2"
    ssh ${node_node_ip} "ping -c 1 172.30.88.2"
  done
Expected output:
>>> 172.16.60.244
PING 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=63 time=0.542 ms
--- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.542/0.542/0.542/0.000 ms
PING 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=63 time=0.654 ms
--- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.654/0.654/0.654/0.000 ms
PING 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=64 time=0.103 ms
--- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
>>> 172.16.60.245
PING 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=64 time=0.106 ms
--- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms
PING 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=63 time=0.408 ms
--- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms
PING 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=63 time=0.345 ms
--- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms
>>> 172.16.60.246
PING 172.30.56.2 (172.30.56.2) 56(84) bytes of data.
64 bytes from 172.30.56.2: icmp_seq=1 ttl=63 time=0.350 ms
--- 172.30.56.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms
PING 172.30.72.2 (172.30.72.2) 56(84) bytes of data.
64 bytes from 172.30.72.2: icmp_seq=1 ttl=64 time=0.105 ms
--- 172.30.72.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms
PING 172.30.88.2 (172.30.88.2) 56(84) bytes of data.
64 bytes from 172.30.88.2: icmp_seq=1 ttl=63 time=0.584 ms
--- 172.30.88.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.584/0.584/0.584/0.000 ms

4) Check reachability of the service IP and port
[root@k8s-master01 work]# kubectl get svc |grep nginx-ds
nginx-ds     NodePort    10.254.41.83   <none>        80:30876/TCP   4m24s
That is:
Service Cluster IP: 10.254.41.83
Service port: 80
NodePort: 30876

curl the Service IP on every Node:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh ${node_node_ip} "curl -s 10.254.41.83"
  done
Expected output: the content of the nginx welcome page.

5) Check reachability of the service NodePort
Run on every Node:
[root@k8s-master01 work]# source /opt/k8s/bin/environment.sh
[root@k8s-master01 work]# for node_node_ip in ${NODE_NODE_IPS[@]}
  do
    echo ">>> ${node_node_ip}"
    ssh ${node_node_ip} "curl -s ${node_node_ip}:30876"
  done
Expected output: the content of the nginx welcome page.

Thank you for reading. That concludes "How to deploy a highly available kube-apiserver cluster in Kubernetes". After working through this article you should have a much deeper understanding of the topic, but the details still need to be verified in your own practice. We will keep publishing more articles on related topics; you are welcome to follow along!
