
Installing and Deploying Kubernetes 1.15 with kubeadm, Explained - Part 2


Background

Kubernetes has released its second major version of the year, Kubernetes 1.15. This release updates or enhances 25 features: 2 have graduated to GA, 13 have moved to beta, and 10 are new alpha features.

Starting with version 1.14, kubeadm introduced a new feature for dynamically adding control-plane nodes to a cluster. There is no longer any need to copy certificates and keys between nodes, which removes extra orchestration and complexity from the bootstrap process. This article deploys a cluster using that new feature; the overall process is faster, simpler, and cheaper.

Initialize the cluster system environment (perform the following on all nodes)

1. Set the hostname; set the management node's hostname to master.
2. Edit /etc/hosts and add name-resolution entries for all nodes.
3. Disable the firewall, SELinux, and swap.
4. Configure kernel parameters so that bridged IPv4 traffic is passed to the iptables chains.
5. Configure a domestic (China mirror) yum source.

(Example commands for steps 1 through 4 are sketched at the end of this section, after step 6.)

yum install -y wget
mkdir /etc/yum.repos.d/bak && mv /etc/yum.repos.d/*.repo /etc/yum.repos.d/bak
wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.cloud.tencent.com/repo/centos7_base.repo
wget -O /etc/yum.repos.d/epel.repo http://mirrors.cloud.tencent.com/repo/epel-7.repo
yum clean all && yum makecache

6. Configure a domestic Kubernetes yum source

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
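Steps 1 through 4 above are listed without commands. A minimal sketch of what they might look like on CentOS 7 follows; the hostname, host entries, and file paths are examples to adapt to your own environment (the IPs match the node list later in this article):

# Step 1: set the hostname (use each node's own name: kubm-01 ... kubnode-03)
hostnamectl set-hostname kubm-01

# Step 2: add name resolution for all nodes (adjust to your IPs)
cat >> /etc/hosts <<EOF
172.20.101.157 kubm-01
172.20.101.164 kubm-02
172.20.101.165 kubm-03
EOF

# Step 3: disable the firewall, SELinux, and swap
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
swapoff -a
sed -i '/ swap / s/^/#/' /etc/fstab

# Step 4: pass bridged IPv4 traffic to the iptables chains
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system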

Install required software (perform the following on all nodes)

1: Install Docker

Add the docker-ce repo file

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce-18.06.1.ce-3.el7
systemctl enable docker && systemctl start docker
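The article does not configure the Docker daemon any further. Optionally, you can set a registry mirror and the systemd cgroup driver (kubeadm's preflight check typically recommends systemd over cgroupfs) in /etc/docker/daemon.json. This is only a sketch; the mirror URL is a placeholder to replace with one you actually use:

cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "registry-mirrors": ["https://<your-registry-mirror>"]
}
EOF
systemctl daemon-reload && systemctl restart docker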

2: Install kubeadm, kubelet, and kubectl
kubeadm is Kubernetes' automated deployment tool; it lowers the difficulty of deployment and improves efficiency.
kubelet communicates with the rest of the cluster and manages the lifecycle of Pods and containers on its own node.
kubectl is the command-line tool for managing the Kubernetes cluster.

yum install -y kubelet kubeadm kubectl
systemctl enable kubelet
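The command above installs whatever version is newest in the repo. Since this article targets 1.15, you may prefer to pin the packages explicitly; a sketch, assuming 1.15.0 packages are available in the configured mirror:

# pin kubelet/kubeadm/kubectl to the 1.15.0 release
yum install -y kubelet-1.15.0 kubeadm-1.15.0 kubectl-1.15.0
systemctl enable kubelet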

Cluster deployment

Node roles

[kub-master]        Node name       Deployed services
172.20.101.157 name=kubm-01    docker, keepalived, nginx, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
172.20.101.164 name=kubm-02    docker, keepalived, nginx, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
172.20.101.165 name=kubm-03    docker, keepalived, nginx, etcd, kube-apiserver, kube-controller-manager, kube-scheduler
[kub-node]
172.20.101.160 name=kubnode-01 kubelet, docker, kube_proxy
172.20.101.166 name=kubnode-02 kubelet, docker, kube_proxy
172.20.101.167 name=kubnode-03 kubelet, docker, kube_proxy

Create the installation and deployment directory

mkdir -p /etc/kuber/kubeadm

Create an initialization configuration file (run on kubm-01)

The flannel network plugin I use requires the pod network parameter --pod-network-cidr=10.244.0.0/16, set as podSubnet in the configuration file below.

vim /etc/kuber/kubeadm/kubeadm-config.yaml

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: stable
controlPlaneEndpoint: "172.20.101.252:16443"
networking:
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.245.0.0/16
Note that I use nginx as the proxy.

Every master is configured with an Nginx reverse proxy in front of the API Server:

The API Server uses port 6443;
The Nginx proxy listens on port 16443;
172.20.101.252 is the VIP of the master nodes.
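The nginx and keepalived configuration itself is not shown in this article. A minimal sketch of a stream (TCP) proxy in front of the three API servers, plus a keepalived instance floating the VIP, might look like the following; the interface name, virtual_router_id, and priorities are assumptions to adapt:

# /etc/nginx/nginx.conf (stream block): proxy 16443 -> the three API servers
stream {
    upstream kube_apiserver {
        server 172.20.101.157:6443;
        server 172.20.101.164:6443;
        server 172.20.101.165:6443;
    }
    server {
        listen 16443;
        proxy_pass kube_apiserver;
    }
}

# /etc/keepalived/keepalived.conf: hold the VIP 172.20.101.252 on one master
vrrp_instance VI_1 {
    state MASTER              # BACKUP on the other two masters
    interface eth0            # assumed interface name
    virtual_router_id 51
    priority 100              # lower priority on the other two masters
    advert_int 1
    virtual_ipaddress {
        172.20.101.252
    }
}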

Recommended: clean the environment

If Kubernetes has been configured on a machine before, or a previous attempt did not succeed, it is recommended to clean the system environment on every node.

systemctl stop kubelet
docker rm -f -v $(docker ps -a -q)
rm -rf /etc/kubernetes
rm -rf /var/lib/etcd
rm -rf /var/lib/kubelet
rm -rf $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
yum reinstall -y kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet
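Alternatively (or in addition), kubeadm itself ships a reset subcommand that undoes most of what kubeadm init/join set up; a minimal sketch:

# revert the changes made by kubeadm init/join, then flush leftover iptables rules
kubeadm reset -f
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X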

Initialize the cluster, specifying --upload-certs and the config file.

kubeadm init \
    --config=/etc/kuber/kubeadm/kubeadm-config.yaml \
    --upload-certs

Output from initializing the first master node

On the node where the init was executed, run the following to initialize the kubectl environment:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

To add a master node, run:

  kubeadm join 172.20.101.252:16443 --token slsxxo.5aiu0lqpxy61n8ah \
    --discovery-token-ca-cert-hash sha256:2c3286ca0ac761ff7e29f590545d3370f801854158e7c6adde586ba96f1a6675 \
    --control-plane --certificate-key 1a139dc53b553091c59262b2f08b948848d7cda7e9cb0169c3f2e3db480ea255

To add a worker node, run:

kubeadm join 172.20.101.252:16443 --token slsxxo.5aiu0lqpxy61n8ah \
    --discovery-token-ca-cert-hash sha256:2c3286ca0ac761ff7e29f590545d3370f801854158e7c6adde586ba96f1a6675

List the available kubeadm tokens.

[root@kubm-01 ~]# kubeadm token list
TOKEN                     TTL       EXPIRES                     USAGES                   DESCRIPTION                                           EXTRA GROUPS
ezwzjn.9uslrdvu8y3o7hxc   23h       2019-06-27T13:29:21+08:00   authentication,signing   <none>                                                system:bootstrappers:kubeadm:default-node-token
nt7p9j.dgnf30gcr4bxg1le   1h        2019-06-26T15:29:20+08:00   <none>                   Proxy for managing TTL for the kubeadm-certs secret   <none>

Check the kubeadm-certs Secret in the kube-system namespace.

kubectl get secrets -n kube-system kubeadm-certs -o yaml
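Note that the uploaded certificates (and the kubeadm-certs Secret with them) expire after two hours. If you add another master later, the certificates can be re-uploaded and a new certificate key printed, roughly as follows:

# re-upload control-plane certificates and print a fresh certificate key
# to use with --certificate-key when joining another master
kubeadm init phase upload-certs --upload-certs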

Configure the kubectl access environment.

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Check that the system component pods have started

[root@kubm-01 ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-bdjh7          0/1     Pending   0          71s
coredns-5c98db65d4-xvvpl          0/1     Pending   0          71s
etcd-kubm-01                      1/1     Running   0          20s
kube-apiserver-kubm-01            1/1     Running   0          34s
kube-controller-manager-kubm-01   1/1     Running   0          14s
kube-proxy-l29hl                  1/1     Running   0          70s
kube-scheduler-kubm-01            1/1     Running   0          32s

Only the DNS pods are Pending; they will recover automatically once the network plugin is added.

Deploy the flannel network

Install the CNI plugin with a pod CIDR that matches the podSubnet configured above; adjust it to your actual environment.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
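The manifest above hard-codes flannel's default network of 10.244.0.0/16, which matches the podSubnet used here, so it can be applied directly. If your podSubnet were different, one approach (a sketch with a placeholder CIDR) would be to download and edit the manifest before applying it:

wget https://raw.githubusercontent.com/coreos/flannel/a70459be0084506e4ec919aa1c114638878db11b/Documentation/kube-flannel.yml
# change the "Network" field in the net-conf.json ConfigMap to your podSubnet
sed -i 's#"Network": "10.244.0.0/16"#"Network": "<your-pod-subnet>"#' kube-flannel.yml
kubectl apply -f kube-flannel.yml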

Verify that the first master node is Ready.

kubectl get nodes
NAME      STATUS   ROLES    AGE    VERSION
kubm-01   Ready    master   7m2s   v1.15.0

Verify again that the kube-system pods are Running.

kubectl get pods -n kube-system

[root@kubm-01 ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
etcd-kubm-01                      1/1     Running   0          80s
kube-apiserver-kubm-01            1/1     Running   0          94s
kube-controller-manager-kubm-01   1/1     Running   0          74s
kube-proxy-l29hl                  1/1     Running   0          2m10s
kube-scheduler-kubm-01            1/1     Running   0          92s
coredns-5c98db65d4-bdjh7          0/1     Running   0          2m11s
coredns-5c98db65d4-xvvpl          0/1     Running   0          2m11s
kube-flannel-ds-amd64-chb4p       1/1     Running   0          16s

Both the DNS and network plugin containers are now working normally.

Add the second master

Simply run the join command returned when the cluster was initialized; the rough steps are as follows.
1: Clean the system environment
2: Run the join command recorded in the previous section.

  kubeadm join 172.20.101.252:16443 --token slsxxo.5aiu0lqpxy61n8ah \
    --discovery-token-ca-cert-hash sha256:2c3286ca0ac761ff7e29f590545d3370f801854158e7c6adde586ba96f1a6675 \
    --control-plane --certificate-key 1a139dc53b553091c59262b2f08b948848d7cda7e9cb0169c3f2e3db480ea255

3: Initialize the kubectl environment

[root@kubm-02 ~]# mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Get the cluster status

[root@kubm-02 ~]# kubectl get nodes
NAME      STATUS   ROLES    AGE     VERSION
kubm-01   Ready    master   7m58s   v1.15.0
kubm-02   Ready    master   103s    v1.15.0

[root@kubm-02 ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-bdjh7          1/1     Running   0          8m29s
coredns-5c98db65d4-xvvpl          1/1     Running   0          8m29s
etcd-kubm-01                      1/1     Running   0          7m38s
etcd-kubm-02                      1/1     Running   0          80s
kube-apiserver-kubm-01            1/1     Running   0          7m52s
kube-apiserver-kubm-02            1/1     Running   0          72s
kube-controller-manager-kubm-01   1/1     Running   0          7m32s
kube-controller-manager-kubm-02   1/1     Running   0          72s
kube-flannel-ds-amd64-chb4p       1/1     Running   0          6m34s
kube-flannel-ds-amd64-h74k9       1/1     Running   1          2m30s
kube-proxy-kww9g                  1/1     Running   0          2m30s
kube-proxy-l29hl                  1/1     Running   0          8m28s
kube-scheduler-kubm-01            1/1     Running   0          7m50s
kube-scheduler-kubm-02            1/1     Running   0          94s

The third master follows exactly the same procedure as adding the second master.

Adding worker nodes

Recommended: clean the environment

If Kubernetes has been configured on a machine before, or a previous attempt did not succeed, it is recommended to clean the system environment on every node.

systemctl stop kubelet
docker rm -f -v $(docker ps -a -q)
rm -rf /etc/kubernetes
rm -rf /var/lib/etcd
rm -rf /var/lib/kubelet
rm -rf $HOME/.kube/config
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
yum reinstall -y kubelet
systemctl daemon-reload
systemctl restart docker
systemctl enable kubelet

Join the worker node

Run the node join command from the output returned by the initial cluster initialization.

kubeadm join 172.20.101.252:16443 --token slsxxo.5aiu0lqpxy61n8ah \
    --discovery-token-ca-cert-hash sha256:2c3286ca0ac761ff7e29f590545d3370f801854158e7c6adde586ba96f1a6675 \
    --node-name kubnode02

Returned output

[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

Check that the node has joined the cluster (run on a master node)

kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
kubm-01      Ready    master   19m     v1.15.0
kubm-02      Ready    master   13m     v1.15.0
kubm-03      Ready    master   7m36s   v1.15.0
kubnode-01   Ready    <none>   71s     v1.15.0

Check the system services again

[root@kubm-01 ~]# kubectl get pods -n kube-system
NAME                              READY   STATUS    RESTARTS   AGE
coredns-5c98db65d4-bdjh7          1/1     Running   0          59m
coredns-5c98db65d4-xvvpl          1/1     Running   0          59m
etcd-kubm-01                      1/1     Running   0          58m
etcd-kubm-02                      1/1     Running   0          52m
etcd-kubm-03                      1/1     Running   0          45m
kube-apiserver-kubm-01            1/1     Running   0          58m
kube-apiserver-kubm-02            1/1     Running   0          52m
kube-apiserver-kubm-03            1/1     Running   0          46m
kube-controller-manager-kubm-01   1/1     Running   0          58m
kube-controller-manager-kubm-02   1/1     Running   0          52m
kube-controller-manager-kubm-03   1/1     Running   0          46m
kube-flannel-ds-amd64-6v5bw       1/1     Running   0          29m
kube-flannel-ds-amd64-chb4p       1/1     Running   0          57m
kube-flannel-ds-amd64-h74k9       1/1     Running   1          53m
kube-flannel-ds-amd64-nbx85       1/1     Running   2          47m
kube-flannel-ds-amd64-s4gv6       1/1     Running   0          40m
kube-flannel-ds-amd64-wv9gx       1/1     Running   0          30m
kube-proxy-hdkb9                  1/1     Running   0          47m
kube-proxy-kww9g                  1/1     Running   0          53m
kube-proxy-l29hl                  1/1     Running   0          59m
kube-proxy-ln5rj                  1/1     Running   0          29m
kube-proxy-rw22h                  1/1     Running   0          40m
kube-proxy-vbc6k                  1/1     Running   0          30m
kube-scheduler-kubm-01            1/1     Running   0          58m
kube-scheduler-kubm-02            1/1     Running   0          52m
kube-scheduler-kubm-03            1/1     Running   0          46m

Follow the same worker-node steps to add the remaining nodes.

Token management commands

If the token has expired, or you want to create a new one, run the following commands.

Delete a token

[root@kubm-01 ~]# kubeadm token delete yqds5b.gyiax5ntlzeiavrz
bootstrap token "yqds5b" deleted

Create a new token with a 24-hour TTL

[root@kubm-01 ~]# kubeadm token create --ttl 24h --print-join-command
kubeadm join 172.20.101.252:16443 --token rgqz2k.q4529102hz5ctej5     --discovery-token-ca-cert-hash sha256:7b2c700b984b992fe96f50bb45ee782d68168a98197b79ecd87f7e68b089819f
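If you assemble a join command by hand instead of using --print-join-command, the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA certificate (the standard approach from the kubeadm documentation):

# print the sha256 hash of the CA certificate's public key
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | \
  openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'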

References:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/
https://octetz.com/posts/ha-control-plane-k8s-kubeadm
