Installing a Kubernetes 1.13.2 multi-master high-availability cluster with kubeadm


1. Introduction

kubeadm only reached GA with the release of Kubernetes v1.13 and can now be used in production, and deploying Kubernetes clusters with kubeadm is clearly the direction things are heading. The corresponding Kubernetes image repositories now have mirror sites on Alibaba Cloud in China, which makes deploying a cluster with kubeadm much simpler and easier. This article walks through a quick deployment of Kubernetes v1.13.2 with kubeadm.

Note: do not focus only on the deployment itself. If you are new to Kubernetes, I recommend first getting familiar with deploying from binaries before learning the kubeadm approach. See the other articles on my blog for binary deployments.

2. Architecture

System version: CentOS 7.6
Kernel: 3.10.0-957.el7.x86_64
Kubernetes: v1.13.2
Docker-ce: 18.06
Recommended hardware: 2 CPU cores, 2 GB RAM
Keepalived provides a highly available IP for the apiserver
Haproxy load-balances the apiserver

To reduce the number of servers, haproxy and keepalived are configured on node-01 and node-02.

Node name   Role     IP             Installed software
LB VIP      VIP      10.31.90.200
node-01     master   10.31.90.201   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
node-02     master   10.31.90.202   kubeadm, kubelet, kubectl, docker, haproxy, keepalived
node-03     master   10.31.90.203   kubeadm, kubelet, kubectl, docker
node-04     node     10.31.90.204   kubeadm, kubelet, kubectl, docker
node-05     node     10.31.90.205   kubeadm, kubelet, kubectl, docker
node-06     node     10.31.90.206   kubeadm, kubelet, kubectl, docker

Service subnet: 10.245.0.0/16

3. Pre-deployment preparation

1) Disable SELinux and the firewall

sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
setenforce 0
systemctl disable firewalld
systemctl stop firewalld

2) Disable swap

swapoff -a
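swapoff -a only turns swap off until the next reboot; to keep it off permanently, the swap entry in /etc/fstab is usually commented out as well, for example:

# comment out any active swap entry so swap stays disabled after a reboot
sed -ri '/\sswap\s/s/^([^#])/#\1/' /etc/fstab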

3) Add host resolution entries on every server

cat >> /etc/hosts << EOF
10.31.90.201 node-01
10.31.90.202 node-02
10.31.90.203 node-03
10.31.90.204 node-04
10.31.90.205 node-05
10.31.90.206 node-06
EOF

4) Create and distribute SSH keys

Create an SSH key pair on node-01.

[root@node-01 ~]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:26z6DcUarn7wP70dqOZA28td+K/erv7NlaJPLVE1BTA root@node-01
The key's randomart image is:
+---[RSA 2048]----+
|            E..o+|
|             .  o|
|               . |
|         .    .  |
|        S o  .   |
|      .o X   oo .|
|       oB +.o+oo.|
|       .o*o+++o+o|
|     .++o+Bo+=B*B|
+----[SHA256]-----+

Distribute node-01's public key so it can log in to the other servers without a password.

for n in `seq -w 01 06`;do ssh-copy-id node-$n;done

5) Configure kernel parameters

cat << EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_nonlocal_bind = 1
net.ipv4.ip_forward = 1
vm.swappiness=0
EOF
sysctl --system
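If sysctl --system complains that the net.bridge.* keys do not exist, the br_netfilter module is usually not loaded yet; a sketch of loading it and making that persistent:

# load the bridge netfilter module so the net.bridge.* sysctls become available
modprobe br_netfilter
# load it automatically at boot (systemd-style module loading on CentOS 7)
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf
sysctl --system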

6) Load the ipvs modules

cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
# module set commonly required by kube-proxy in ipvs mode on this kernel
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules
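After running the script you can verify that the modules are actually loaded:

# ip_vs, its schedulers and the conntrack module should all show up
lsmod | grep -e ip_vs -e nf_conntrack_ipv4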

7) Add yum repositories

cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
wget http://mirrors.aliyun.com/repo/Centos-7.repo -O /etc/yum.repos.d/CentOS-Base.repo
wget http://mirrors.aliyun.com/repo/epel-7.repo -O /etc/yum.repos.d/epel.repo
wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -O /etc/yum.repos.d/docker-ce.repo

4. Deploying keepalived and haproxy

1) Install keepalived and haproxy

Install keepalived and haproxy on node-01 and node-02.

yum install -y keepalived haproxy

2) Modify the configuration

keepalived configuration

node-01 has priority 100 and node-02 has priority 90; the rest of the configuration is identical.

[root@node-01 ~]# cat /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   notification_email {
        feng110498@163.com
   }
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_1
}
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    lvs_sync_daemon_interface eth0
    virtual_router_id 88
    advert_int 1
    priority 100
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
      10.31.90.200/24
    }
}

haproxy configuration

The haproxy configuration is identical on node-01 and node-02. Here we listen on port 8443 of 10.31.90.200: since haproxy is deployed on the same servers as the kube-apiserver, both cannot use 6443 or they would conflict.

global
        chroot  /var/lib/haproxy
        daemon
        group haproxy
        user haproxy
        log 127.0.0.1:514 local0 warning
        pidfile /var/lib/haproxy.pid
        maxconn 20000
        spread-checks 3
        nbproc 8
defaults
        log     global
        mode    tcp
        retries 3
        option redispatch
listen https-apiserver
        bind 10.31.90.200:8443
        mode tcp
        balance roundrobin
        timeout server 900s
        timeout connect 15s
        server apiserver01 10.31.90.201:6443 check port 6443 inter 5000 fall 5
        server apiserver02 10.31.90.202:6443 check port 6443 inter 5000 fall 5
        server apiserver03 10.31.90.203:6443 check port 6443 inter 5000 fall 5
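Before starting haproxy it is worth validating the file; assuming the configuration above is saved at the default path /etc/haproxy/haproxy.cfg:

# check the configuration for syntax errors without starting the service
haproxy -c -f /etc/haproxy/haproxy.cfg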

3) Start the services

systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
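A quick check that the VIP and the frontend came up (the VIP should be held by whichever node keepalived elected MASTER, node-01 in this setup):

# the VIP should be bound on eth0 of the current MASTER
ip addr show eth0 | grep 10.31.90.200
# haproxy should be listening on the apiserver frontend port
ss -lnt | grep 8443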

5. Deploying Kubernetes

1) Install the software

kubeadm has requirements on the Docker version, so install a version that matches kubeadm.
Because releases change frequently, pin the version numbers; this article uses 1.13.2, and other versions have not been tested.

yum install -y kubelet-1.13.2 kubeadm-1.13.2 kubectl-1.13.2 ipvsadm ipset docker-ce-18.06.1.ce
# start docker
systemctl enable docker && systemctl start docker
# enable kubelet at boot
systemctl enable kubelet

2) Modify the initialization configuration

Use kubeadm config print init-defaults > kubeadm-init.yaml to print the default configuration, then adjust it for your own environment.
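For reference, the command mentioned above that generates the default configuration file:

kubeadm config print init-defaults > kubeadm-init.yaml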

[root@node-01 ~]# cat kubeadm-init.yaml
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 10.31.90.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: node-01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
apiServer:
  timeoutForControlPlane: 4m0s
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "10.31.90.200:8443"
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
kubernetesVersion: v1.13.2
networking:
  dnsDomain: cluster.local
  podSubnet: ""
  serviceSubnet: "10.245.0.0/16"
scheduler: {}
controllerManager: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"

3) Pre-pull the images

[root@node-01 ~]# kubeadm config images pull --config kubeadm-init.yaml
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.13.2
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24
[config/images] Pulled registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.6

4) Initialize

[root@node-01 ~]# kubeadm init --config kubeadm-init.yaml
[init] Using Kubernetes version: v1.13.2
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [node-01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.12.0.1 10.31.90.201 10.31.90.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [node-01 localhost] and IPs [10.31.90.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "admin.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 22.503955 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.13" in namespace kube-system with the configuration for the kubelets in the cluster
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node-01" as an annotation
[mark-control-plane] Marking the node node-01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node node-01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[endpoint] WARNING: port specified in controlPlaneEndpoint overrides bindPort in the controlplane address
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1

kubeadm init performs the following main steps:

  • [init]: initialize with the specified version.

  • [preflight]: run pre-flight checks and pull the required Docker images.

  • [kubelet-start]: generate the kubelet configuration file "/var/lib/kubelet/config.yaml"; without this file the kubelet cannot start, so the kubelet actually fails to start before initialization.

  • [certificates]: generate the certificates used by Kubernetes and store them in /etc/kubernetes/pki.

  • [kubeconfig]: generate the kubeconfig files in /etc/kubernetes; the components use them to communicate with each other.

  • [control-plane]: install the master components from the YAML files in /etc/kubernetes/manifests.

  • [etcd]: install the etcd service from /etc/kubernetes/manifests/etcd.yaml.

  • [wait-control-plane]: wait for the master components deployed in the control-plane step to start.

  • [apiclient]: check the health of the master components.

  • [uploadconfig]: upload the configuration.

  • [kubelet]: configure the kubelet via a ConfigMap.

  • [patchnode]: record the CRI socket information on the Node object as an annotation.

  • [mark-control-plane]: label the current node with the master role and the NoSchedule taint, so that by default Pods are not scheduled onto master nodes.

  • [bootstrap-token]: generate the token; note it down, it is used later with kubeadm join to add nodes to the cluster.

  • [addons]: install the CoreDNS and kube-proxy add-ons.

5) Prepare the kubeconfig file for kubectl

By default kubectl looks for a config file in the .kube directory under the home directory of the user running it. Here we copy admin.conf, generated during the [kubeconfig] step of initialization, to .kube/config.

[root@node-01 ~]# mkdir -p $HOME/.kube
[root@node-01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@node-01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config

This file records the API Server's address, so subsequent kubectl commands connect to the API Server without further configuration.

6) Check component status

[root@node-01 ~]# kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health": "true"}
[root@node-01 ~]# kubectl get node
NAME      STATUS     ROLES    AGE   VERSION
node-01   NotReady   master   14m   v1.13.2

At this point there is only one node; its role is master and its status is NotReady.

7) Deploy the other masters

From node-01, copy the certificate files to the other master nodes.

USER=root
CONTROL_PLANE_IPS="node-02 node-03"
for host in ${CONTROL_PLANE_IPS}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done

Run the join command on the other masters; note the --experimental-control-plane flag.

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1 --experimental-control-plane
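If you also want to run kubectl directly on node-02 and node-03 (admin.conf was already copied there by the script above), the same kubeconfig setup as on node-01 applies:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
chown $(id -u):$(id -g) $HOME/.kube/config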

Note: tokens have a limited lifetime. If the old token has expired, you can create a new one with kubeadm token create --print-join-command.
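For reference: kubeadm token create --print-join-command prints a complete join command with a fresh token, and the --discovery-token-ca-cert-hash value can also be recomputed from the cluster CA using the standard openssl pipeline from the kubeadm documentation:

# print a complete join command with a fresh token (run on any master)
kubeadm token create --print-join-command
# recompute the CA certificate hash used by --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
  | openssl rsa -pubin -outform der 2>/dev/null \
  | openssl dgst -sha256 -hex | sed 's/^.* //'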

8) Deploy the worker nodes

Run the following on node-04, node-05 and node-06; note that there is no --experimental-control-plane flag.

kubeadm join 10.31.90.200:8443 --token abcdef.0123456789abcdef --discovery-token-ca-cert-hash sha256:84201a329ec4388263e97303c6e4de50c2de2aa157a3b961cb8a6f325fadedb1

9) Deploy the flannel network plugin

The master node is NotReady because no network plugin has been deployed yet, so node-to-master networking is not fully working. The most popular Kubernetes network plugins are Flannel, Calico, Canal and Weave; here we use flannel. Note that flannel's default manifest expects each node to have a PodCIDR allocated, which normally means setting podSubnet in kubeadm-init.yaml to match flannel's Network (10.244.0.0/16 by default) or editing the Network value in kube-flannel.yml accordingly.

[root@node-01 ~]# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
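Once the manifest is applied, the flannel DaemonSet should run one pod per node; a quick way to check:

kubectl -n kube-system get pods -o wide | grep flannel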

10) Check node status

All nodes are now in the Ready state.

[root@node-01 ~]# kubectl get node
NAME      STATUS   ROLES    AGE   VERSION
node-01   Ready    master   35m   v1.13.2
node-02   Ready    master   36m   v1.13.2
node-03   Ready    master   36m   v1.13.2
node-04   Ready    <none>   40m   v1.13.2
node-05   Ready    <none>   40m   v1.13.2
node-06   Ready    <none>   40m   v1.13.2

Check the pods:

[root@node-01 ~]# kubectl get pod -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
coredns-89cc84847-j8mmg                     1/1     Running   0          1d
coredns-89cc84847-rbjxs                     1/1     Running   0          1d
etcd-node-01                                1/1     Running   1          1d
etcd-node-02                                1/1     Running   0          1d
etcd-node-03                                1/1     Running   0          1d
kube-apiserver-node-01                      1/1     Running   0          1d
kube-apiserver-node-02                      1/1     Running   0          1d
kube-apiserver-node-03                      1/1     Running   0          1d
kube-controller-manager-node-01             1/1     Running   2          1d
kube-controller-manager-node-02             1/1     Running   0          1d
kube-controller-manager-node-03             1/1     Running   0          1d
kube-proxy-jfbmv                            1/1     Running   0          1d
kube-proxy-lvkms                            1/1     Running   0          1d
kube-proxy-qx7kh                            1/1     Running   0          1d
kube-proxy-xst5v                            1/1     Running   0          1d
kube-proxy-zfwrk                            1/1     Running   0          1d
kube-proxy-ztg6j                            1/1     Running   0          1d
kube-scheduler-node-01                      1/1     Running   1          1d
kube-scheduler-node-02                      1/1     Running   1          1d
kube-scheduler-node-03                      1/1     Running   1          1d
kube-flannel-ds-amd64-87wzj                 1/1     Running   0          1d
kube-flannel-ds-amd64-lczwm                 1/1     Running   0          1d
kube-flannel-ds-amd64-lwc2j                 1/1     Running   0          1d
kube-flannel-ds-amd64-mwlfq                 1/1     Running   0          1d
kube-flannel-ds-amd64-nj2mk                 1/1     Running   0          1d
kube-flannel-ds-amd64-wx7vd                 1/1     Running   0          1d

Check the ipvs state:

[root@node-01 ~]# ipvsadm -L -n
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.245.0.1:443 rr
  -> 10.31.90.201:6443            Masq    1      2          0
  -> 10.31.90.202:6443            Masq    1      0          0
  -> 10.31.90.203:6443            Masq    1      2          0
TCP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0
TCP  10.245.90.161:80 rr
  -> 10.45.0.1:80                 Masq    1      0          0
TCP  10.245.90.161:443 rr
  -> 10.45.0.1:443                Masq    1      0          0
TCP  10.245.149.227:1 rr
  -> 10.31.90.204:1               Masq    1      0          0
  -> 10.31.90.205:1               Masq    1      0          0
  -> 10.31.90.206:1               Masq    1      0          0
TCP  10.245.181.126:80 rr
  -> 10.34.0.2:80                 Masq    1      0          0
  -> 10.45.0.0:80                 Masq    1      0          0
  -> 10.46.0.0:80                 Masq    1      0          0
UDP  10.245.0.10:53 rr
  -> 10.32.0.3:53                 Masq    1      0          0
  -> 10.32.0.4:53                 Masq    1      0          0

At this point the Kubernetes cluster deployment is complete. If you run into problems, feel free to leave a comment below. Thanks for reading!
