
Upgrading Kubernetes from 1.13.3 to 1.14.1


The Kubernetes environment used in this article: https://blog.51cto.com/billy98/2350660


I. Overview

This article describes how to upgrade a Kubernetes cluster created with kubeadm from version 1.13.x to version 1.14.x.

You can only upgrade from one MINOR version to the next MINOR version, or between PATCH versions of the same MINOR version. In other words, MINOR versions cannot be skipped: you can go from 1.y to 1.y+1, but not from 1.y to 1.y+2.

The upgrade workflow is as follows:

  • Upgrade the first master node.
  • Upgrade the other master nodes.
  • Upgrade the worker nodes.

Current Kubernetes version information:

[root@node-01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   99d   v1.13.3
node-02   Ready    master   99d   v1.13.3
node-03   Ready    master   99d   v1.13.3
node-04   Ready    <none>   99d   v1.13.3
node-05   Ready    <none>   99d   v1.13.3
node-06   Ready    <none>   99d   v1.13.3

II. Upgrade the first master node

1. On the first master node, upgrade kubeadm:

yum install kubeadm-1.14.1  -y
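If you want to double-check which versions the repository offers, a query like the following should work (this assumes the standard Kubernetes yum repo is configured; --disableexcludes is only needed if your repo config excludes kube packages):

# List every kubeadm version available in the configured repos
yum list --showduplicates kubeadm --disableexcludes=kubernetes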

2. Verify that the download works and has the expected version:

kubeadm version
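For a more compact check, kubeadm can also print just the version string (a small convenience, assuming the -o flag in your kubeadm build):

kubeadm version -o short
# expected to print: v1.14.1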

3. Run kubeadm upgrade plan.

This command checks whether your cluster can be upgraded and which versions it can be upgraded to. You should see output similar to this:

[root@node-01 ~]# kubeadm upgrade plan
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.1
I0505 13:55:58.449783   12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable.txt": Get https://dl.k8s.io/release/stable.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0505 13:55:58.449867   12871 version.go:97] falling back to the local client version: v1.14.1
[upgrade/versions] Latest stable version: v1.14.1
I0505 13:56:08.645796   12871 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.13.txt": Get https://dl.k8s.io/release/stable-1.13.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0505 13:56:08.645861   12871 version.go:97] falling back to the local client version: v1.14.1
[upgrade/versions] Latest version in the v1.13 series: v1.14.1

Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT   CURRENT       AVAILABLE
Kubelet     6 x v1.13.3   v1.14.1

Upgrade to the latest version in the v1.13 series:

COMPONENT            CURRENT   AVAILABLE
API Server           v1.13.3   v1.14.1
Controller Manager   v1.13.3   v1.14.1
Scheduler            v1.13.3   v1.14.1
Kube Proxy           v1.13.3   v1.14.1
CoreDNS              1.2.6     1.3.1
Etcd                 3.2.24    3.3.10

You can now apply the upgrade by executing the following command:

        kubeadm upgrade apply v1.14.1

_____________________________________________________________________
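Optionally, before applying for real, you can preview the upgrade without changing any cluster state; this sketch assumes your kubeadm build supports the --dry-run flag:

kubeadm upgrade apply v1.14.1 --dry-run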

4. Run the upgrade command:

kubeadm upgrade apply v1.14.1

You should see output similar to this:

[root@node-01 ~]# kubeadm upgrade apply v1.14.1
[preflight] Running pre-flight checks.
[upgrade] Making sure the cluster is healthy:
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade/version] You have chosen to change the cluster version to "v1.14.1"
[upgrade/versions] Cluster version: v1.13.3
[upgrade/versions] kubeadm version: v1.14.1
[upgrade/confirm] Are you sure you want to proceed with the upgrade? [y/N]: y
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 3 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.14.1"...
Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
Static pod: etcd-node-01 hash: 35015766b0b1714f398bed77aae5be95
Static pod: etcd-node-01 hash: 17ddbcfb2ddf1d447ceec2b52c9faa96
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests940835611"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-01 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-01 hash: ff2267bcddb83b815efb49ff766ad897
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-01 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-01 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-01 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-01 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.14.1". Enjoy!

[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.

5. Manually upgrade your CNI provider plugin.

Your Container Network Interface (CNI) provider may have its own upgrade instructions. Check the addons page to find your CNI provider and see whether additional upgrade steps are required.
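As a purely illustrative sketch: if your cluster used flannel, re-applying an updated manifest might look like the following (the URL and file here are placeholders; use whatever your CNI provider actually documents):

# Hypothetical example only; substitute your CNI provider's documented manifest
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml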

6. Upgrade kubelet and kubectl on the first master node:

yum install kubectl-1.14.1 kubelet-1.14.1 -y

Restart kubelet:

systemctl daemon-reload
systemctl restart kubelet
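At this point the first master should already report the new version; a quick check from anywhere kubectl works:

kubectl get node node-01
# STATUS should be Ready, VERSION should now read v1.14.1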

III. Upgrade the other master nodes

1. Upgrade kubeadm:

yum install kubeadm-1.14.1 -y          

2. Upgrade the static pods:

kubeadm upgrade node experimental-control-plane

You should see output similar to this:

[root@node-02 ~]# kubeadm upgrade node experimental-control-plane
[upgrade] Reading configuration from the cluster...
[upgrade] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[upgrade] Upgrading your Static Pod-hosted control plane instance to version "v1.14.1"...
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
[upgrade/etcd] Upgrading to TLS for etcd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/etcd.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/etcd.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 1aa55e50528bd0621c5734515b3265fd
Static pod: etcd-node-02 hash: 4710a34897e7838519a1bf8fe4dccf07
[apiclient] Found 3 Pods for label selector component=etcd
[upgrade/staticpods] Component "etcd" upgraded successfully!
[upgrade/etcd] Waiting for etcd to become available
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests483113569"
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-node-02 hash: 26d86add2bfd0fd6825f5507fff1fb5e
Static pod: kube-apiserver-node-02 hash: fe1005f40c3f390280358921c3073223
[apiclient] Found 3 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-node-02 hash: 21ea3d3ccb8d8dc00056209ca3da698b
Static pod: kube-controller-manager-node-02 hash: ff8be061048a4660a1fbbf72db229d0d
[apiclient] Found 3 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-10-46/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-node-02 hash: a8d928943d47ec793a700ef95c4b6b4a
Static pod: kube-scheduler-node-02 hash: 959a5cdf1468825401daa8d35329351e
[apiclient] Found 3 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upgrade] The control plane instance for this node was successfully updated!

3. Upgrade kubelet and kubectl:

yum install kubectl-1.14.1 kubelet-1.14.1 -y

4. Restart kubelet:

systemctl daemon-reload
systemctl restart kubelet

IV. Upgrade the worker nodes

1. Upgrade kubeadm on the worker node:

yum install -y kubeadm-1.14.1

2. Adjust the scheduling policy

Prepare the node for maintenance by marking it unschedulable and evicting its pods (run this step on a master):

kubectl drain $NODE --ignore-daemonsets
[root@node-01 ~]# kubectl drain node-04 --ignore-daemonsets
node/node-04 already cordoned
WARNING: ignoring DaemonSet-managed Pods: cattle-system/cattle-node-agent-h655m, default/glusterfs-vhdqv, kube-system/canal-mbwvf, kube-system/kube-flannel-ds-amd64-zdfn8, kube-system/kube-proxy-5d64l
evicting pod "coredns-55696d4b79-kfcrh"
evicting pod "cattle-cluster-agent-66bd75c65f-k7p6n"
pod/cattle-cluster-agent-66bd75c65f-k7p6n evicted
pod/coredns-55696d4b79-kfcrh evicted
node/node-04 evicted

3. Upgrade the node configuration

[root@node-04 ~]# kubeadm upgrade node config --kubelet-version v1.14.1
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.14" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[upgrade] The configuration for this node was successfully updated!
[upgrade] Now you should go ahead and upgrade the kubelet package using your package manager.

4. Upgrade kubelet and kubectl:

yum install kubectl-1.14.1 kubelet-1.14.1 -y

5. Restart kubelet:

systemctl daemon-reload
systemctl restart kubelet

6. Restore the scheduling policy (run on a master):

kubectl uncordon $NODE
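To confirm the node is schedulable again and running the new kubelet, a quick check:

kubectl get node node-04
# STATUS should read Ready (without SchedulingDisabled) and VERSION v1.14.1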

Repeat the steps above for each of the remaining worker nodes.
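To keep the per-node sequence straight, here is a rough sketch of the whole loop for the remaining workers; it assumes passwordless SSH from the master to each node, which may not match your environment:

# Sketch only: drain from the master, upgrade on the node, then uncordon.
for NODE in node-05 node-06; do
  kubectl drain "$NODE" --ignore-daemonsets
  ssh "$NODE" "yum install -y kubeadm-1.14.1 kubelet-1.14.1 kubectl-1.14.1 && \
               kubeadm upgrade node config --kubelet-version v1.14.1 && \
               systemctl daemon-reload && systemctl restart kubelet"
  kubectl uncordon "$NODE"
done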

V. Verify the cluster status

After upgrading kubelet on all nodes, verify that all nodes are available again by running the following command from anywhere kubectl can access the cluster:

[root@node-01 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   99d   v1.14.1
node-02   Ready    master   99d   v1.14.1
node-03   Ready    master   99d   v1.14.1
node-04   Ready    <none>   99d   v1.14.1
node-05   Ready    <none>   99d   v1.14.1
node-06   Ready    <none>   99d   v1.14.1

All nodes should show Ready in the STATUS column, and the VERSION column should show the updated version number.
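If you also want to confirm the control plane pods are running the new images, a jsonpath query like this can help (the component label follows what the kubeadm log output above uses):

kubectl -n kube-system get pods -l component=kube-apiserver \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[0].image}{"\n"}{end}'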

VI. How it works

kubeadm upgrade apply does the following:

  1. Checks that your cluster is in an upgradeable state:
     • the API server is reachable;
     • all nodes are in the Ready state;
     • the control plane is healthy.
  2. Enforces the version skew policies.
  3. Makes sure the master images are available or can be pulled to the machine.
  4. Upgrades the master components, or rolls back if any of them fails to come up (kubeadm backs up the old manifests first; see the note after this list).
  5. Applies the new kube-dns and kube-proxy manifests and makes sure all necessary RBAC rules are created.
  6. Creates new certificate and key files for the API server, backing up the old files if they are about to expire within 180 days.
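As the apply log above shows, kubeadm backs up the old static pod manifests before replacing them, so you can inspect them if you ever need to recover manually (the backup directory name carries a timestamp, as seen in the log):

ls /etc/kubernetes/tmp/
ls /etc/kubernetes/tmp/kubeadm-backup-manifests-2019-05-05-14-00-02/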

kubeadm upgrade node experimental-control-plane does the following on the other control plane nodes:

  • Fetches the kubeadm ClusterConfiguration from the cluster.
  • Optionally backs up the kube-apiserver certificates.
  • Upgrades the static pod manifests for the master components.

That's it for this walkthrough. If you run into problems, feel free to leave a comment below. Thanks for following and liking!
