
K8s Practice VIII: HA Cluster Deployment


I. Environment Preparation

1. Cluster Planning
Hostname        IP             Role
VIP             20.0.20.200    master VIP
k8s-master01    20.0.20.201    master
k8s-master02    20.0.20.202    master
k8s-master03    20.0.20.203    master
k8s-node01      20.0.20.204    node
k8s-node02      20.0.20.205    node
k8s-node03      20.0.20.206    node
2. Basic Environment Configuration
  • Disable the firewall (commands for this and the next three items are sketched after this list)
  • Disable SELinux
  • Configure /etc/hosts
  • Set up passwordless SSH login between the masters
  • Disable the swap partition
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
swapoff -a    # also disable swap for the running session; kubeadm's preflight checks fail while swap is active
  • Configure time synchronization
systemctl start chronyd
systemctl enable chronyd
timedatectl set-timezone "Asia/Shanghai"
  • Configure kernel parameters
cat > /etc/sysctl.d/k8s.conf <<EOF
# standard kubeadm networking prerequisites
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
  • Enable IPVS
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
# on kernels >= 4.19, nf_conntrack replaces nf_conntrack_ipv4
ipvs_modules="ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack_ipv4"
for kernel_module in \${ipvs_modules}; do
    /sbin/modinfo -F filename \${kernel_module} > /dev/null 2>&1
    if [ \$? -eq 0 ]; then
        /sbin/modprobe \${kernel_module}
    fi
done
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep ip_vs
  • Install Docker (see the sketch below)
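The first four items above, plus the Docker install, are routine; a minimal sketch for CentOS 7 follows. The hostnames and IPs come from the planning table; the Docker repo and package choice are assumptions (any Docker version supported by Kubernetes 1.15 works):
# disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# put SELinux into permissive mode now and on future boots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
# hosts entries for every cluster member
cat >> /etc/hosts <<EOF
20.0.20.201 k8s-master01
20.0.20.202 k8s-master02
20.0.20.203 k8s-master03
20.0.20.204 k8s-node01
20.0.20.205 k8s-node02
20.0.20.206 k8s-node03
EOF
# passwordless SSH from master01 to the other masters
ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
ssh-copy-id root@k8s-master02
ssh-copy-id root@k8s-master03
# Docker CE from the Aliyun mirror
yum install -y yum-utils
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl enable docker && systemctl start docker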

II. Install HAProxy and Keepalived on the Masters

1. Install the required packages
yum install -y keepalived haproxy ipvsadm socat
2. Configure keepalived
# cat /etc/keepalived/keepalived.conf
global_defs {
   router_id master01
}
vrrp_instance VI_1 {
    state MASTER
    interface ens192
    virtual_router_id 88
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2323
    }
    virtual_ipaddress {
        20.0.20.200/24
    }
}

The three nodes' configurations differ only in router_id, state, and priority: master01 runs with state MASTER and priority 100, while master02 and master03 run as BACKUP with lower priorities.
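A sketch of master02's version (the priority value 90 is an assumption; it only needs to be lower than the MASTER's 100):
# cat /etc/keepalived/keepalived.conf   (on k8s-master02)
global_defs {
   router_id master02
}
vrrp_instance VI_1 {
    state BACKUP
    interface ens192
    virtual_router_id 88
    priority 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 2323
    }
    virtual_ipaddress {
        20.0.20.200/24
    }
}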

3. Configure HAProxy
# cat /etc/haproxy/haproxy.cfg
global
    log         127.0.0.1 local3
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     32768   # maximum concurrent connections per haproxy process
    user        haproxy
    group       haproxy
    daemon              # run in the background as a daemon
    nbproc      1       # number of haproxy processes to start
    stats socket /var/lib/haproxy/stats
defaults
    mode                    tcp
    log                     global
    option                  tcplog
    option                  dontlognull
    option                  redispatch
    retries                 3
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout check           10s
listen stats
    mode    http
    bind    :8888           # port for the stats page
    stats   enable          # enable the stats page
    stats   uri     /admin?stats    # URI of the stats page
    stats   auth    admin:admin     # stats page credentials
    stats   admin   if TRUE
frontend k8s_https
    mode    tcp
    bind    *:8443
    maxconn 2000
    default_backend     https-api
backend https-api
    balance     roundrobin
    server master01 20.0.20.201:6443 check inter 5000 fall 5 rise 3 weight 1
    server master02 20.0.20.202:6443 check inter 5000 fall 5 rise 3 weight 1
    server master03 20.0.20.203:6443 check inter 5000 fall 5 rise 3 weight 1

Note that controlPlaneEndpoint in the kubeadm configuration below is 20.0.20.200:6443, so API traffic goes straight to the apiserver on whichever master currently holds the VIP rather than through HAProxy's 8443 frontend; to actually load-balance through HAProxy, the control-plane endpoint would need to be 20.0.20.200:8443.
4. Start the services
systemctl enable keepalived && systemctl start keepalived
systemctl enable haproxy && systemctl start haproxy
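To confirm that the VIP landed on master01 and that HAProxy is listening (interface name as in the keepalived configuration; ports as in haproxy.cfg):
ip addr show ens192 | grep 20.0.20.200
ss -tlnp | grep -E '8443|8888'
curl -su admin:admin 'http://127.0.0.1:8888/admin?stats' | head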

III. Deploy the Masters

1. Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
2. Install and start the packages
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable kubelet && systemctl start kubelet
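The cluster configuration below pins kubernetesVersion: v1.15.3, so it is safer to install matching package versions rather than whatever is latest in the repo; a sketch:
yum install -y kubelet-1.15.3 kubeadm-1.15.3 kubectl-1.15.3 --disableexcludes=kubernetes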
3. Generate the init configuration
# kubeadm config print init-defaults > kubeadm.conf
# cat kubeadm.conf
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 20.0.20.201
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master01
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: gcr.azk8s.cn/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.15.3
controlPlaneEndpoint: 20.0.20.200:6443
networking:
  dnsDomain: cluster.local
  podSubnet: 192.168.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
4. Pull the required images
# kubeadm config images list --config kubeadm.conf
# kubeadm config images pull --config kubeadm.conf
[config/images] Pulled gcr.azk8s.cn/google_containers/kube-apiserver:v1.15.3
[config/images] Pulled gcr.azk8s.cn/google_containers/kube-controller-manager:v1.15.3
[config/images] Pulled gcr.azk8s.cn/google_containers/kube-scheduler:v1.15.3
[config/images] Pulled gcr.azk8s.cn/google_containers/kube-proxy:v1.15.3
[config/images] Pulled gcr.azk8s.cn/google_containers/pause:3.1
[config/images] Pulled gcr.azk8s.cn/google_containers/etcd:3.3.10
[config/images] Pulled gcr.azk8s.cn/google_containers/coredns:1.3.1
5. Initialize master01
[root@k8s-master01 ~]# kubeadm init --config kubeadm.conf
[init] Using Kubernetes version: v1.15.3
[preflight] Running pre-flight checks
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [k8s-master01 localhost] and IPs [20.0.20.201 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [k8s-master01 localhost] and IPs [20.0.20.201 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [k8s-master01 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 20.0.20.201 20.0.20.200]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 37.001511 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node k8s-master01 as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of control-plane nodes by copying certificate authorities and service account keys on each node and then running the following as root:

  kubeadm join 20.0.20.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:52dab3f1cfe0b0c202d676175f1216cf3c5919558d50805d595b74f480bcf75b \
    --control-plane

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 20.0.20.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:52dab3f1cfe0b0c202d676175f1216cf3c5919558d50805d595b74f480bcf75b
6. Configure kubeconfig
[root@k8s-master01 ~]# mkdir -p $HOME/.kube
[root@k8s-master01 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master01 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
7. Copy certificates to the other master nodes
# cat ./scp_pki.sh
USER=root
IP="20.0.20.202 20.0.20.203"
for host in ${IP}; do
    ssh "${USER}"@$host "mkdir -p /etc/kubernetes/pki/etcd"
    scp /etc/kubernetes/pki/ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/sa.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/front-proxy-ca.* "${USER}"@$host:/etc/kubernetes/pki/
    scp /etc/kubernetes/pki/etcd/ca.* "${USER}"@$host:/etc/kubernetes/pki/etcd/
    scp /etc/kubernetes/admin.conf "${USER}"@$host:/etc/kubernetes/
done
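The init output above noted "[upload-certs] Skipping phase. Please see --upload-certs"; as an alternative to this script, kubeadm 1.15 can distribute these certificates itself. A sketch (the certificate key is printed by init, shown here only as a placeholder):
kubeadm init --config kubeadm.conf --upload-certs
# then join the other control-plane nodes with the printed key:
kubeadm join 20.0.20.200:6443 --token abcdef.0123456789abcdef \
    --discovery-token-ca-cert-hash sha256:52dab3f1cfe0b0c202d676175f1216cf3c5919558d50805d595b74f480bcf75b \
    --control-plane --certificate-key <key-printed-by-init>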
8. Join master02 and master03 to the cluster
[root@k8s-master02 ~]# kubeadm join 20.0.20.200:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:52dab3f1cfe0b0c202d676175f1216cf3c5919558d50805d595b74f480bcf75b \
>     --control-plane
[root@k8s-master02 ~]# mkdir -p $HOME/.kube
[root@k8s-master02 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@k8s-master02 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
9. Check node status
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS     ROLES    AGE     VERSION
k8s-master01   NotReady   master   2m27s   v1.15.3
k8s-master02   NotReady   master   45s     v1.15.3
k8s-master03   NotReady   master   46s     v1.15.3
The nodes report NotReady because no pod network add-on has been deployed yet.
10. Configure the network
# kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
serviceaccount/weave-net created
clusterrole.rbac.authorization.k8s.io/weave-net created
clusterrolebinding.rbac.authorization.k8s.io/weave-net created
role.rbac.authorization.k8s.io/weave-net created
rolebinding.rbac.authorization.k8s.io/weave-net created
daemonset.extensions/weave-net created
# kubectl get pod -A
NAMESPACE     NAME                                   READY   STATUS    RESTARTS   AGE
kube-system   coredns-cf8fb6d7f-64hsx                1/1     Running   0          45m
kube-system   coredns-cf8fb6d7f-lqws8                1/1     Running   0          45m
kube-system   etcd-k8s-master01                      1/1     Running   0          44m
kube-system   etcd-k8s-master02                      1/1     Running   0          43m
kube-system   etcd-k8s-master03                      1/1     Running   0          44m
kube-system   kube-apiserver-k8s-master01            1/1     Running   0          44m
kube-system   kube-apiserver-k8s-master02            1/1     Running   0          42m
kube-system   kube-apiserver-k8s-master03            1/1     Running   0          44m
kube-system   kube-controller-manager-k8s-master01   1/1     Running   1          44m
kube-system   kube-controller-manager-k8s-master02   1/1     Running   0          42m
kube-system   kube-controller-manager-k8s-master03   1/1     Running   0          44m
kube-system   kube-proxy-6gwzs                       1/1     Running   0          44m
kube-system   kube-proxy-dppmv                       1/1     Running   0          37m
kube-system   kube-proxy-msz97                       1/1     Running   0          43m
kube-system   kube-proxy-tgkr9                       1/1     Running   0          37m
kube-system   kube-proxy-tw4lh                       1/1     Running   0          37m
kube-system   kube-proxy-zbf5f                       1/1     Running   0          45m
kube-system   kube-scheduler-k8s-master01            1/1     Running   1          44m
kube-system   kube-scheduler-k8s-master02            1/1     Running   0          42m
kube-system   kube-scheduler-k8s-master03            1/1     Running   0          44m
kube-system   weave-net-6b7px                        2/2     Running   0          6m12s
kube-system   weave-net-6b8wn                        2/2     Running   0          6m12s
kube-system   weave-net-dq7sz                        2/2     Running   0          6m12s
kube-system   weave-net-mfv8t                        2/2     Running   0          6m12s
kube-system   weave-net-t76p9                        2/2     Running   0          6m12s
kube-system   weave-net-wctz4                        2/2     Running   0          6m12s
11. Verify that IPVS is in use
[root@k8s-master01 ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-6gwzs                           1/1     Running   0          2m47s
kube-proxy-msz97                           1/1     Running   0          2m37s
kube-proxy-zbf5f                           1/1     Running   0          4m8s
[root@k8s-master01 ~]# kubectl logs kube-proxy-6gwzs -n kube-system
I0909 06:48:02.103768       1 server_others.go:170] Using ipvs Proxier.
W0909 06:48:02.104548       1 proxier.go:401] IPVS scheduler not specified, use rr by default
I0909 06:48:02.104949       1 server.go:534] Version: v1.15.3
I0909 06:48:02.110944       1 conntrack.go:52] Setting nf_conntrack_max to 131072
I0909 06:48:02.111114       1 config.go:187] Starting service config controller
I0909 06:48:02.111143       1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I0909 06:48:02.111174       1 config.go:96] Starting endpoints config controller
I0909 06:48:02.111184       1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I0909 06:48:02.211286       1 controller_utils.go:1036] Caches are synced for endpoints config controller
I0909 06:48:02.211338       1 controller_utils.go:1036] Caches are synced for service config controller
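Since ipvsadm was installed alongside keepalived earlier, the IPVS rule table can also be inspected directly; the kubernetes service IP 10.96.0.1 should appear with the rr scheduler noted in the log above:
ipvsadm -Ln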
12. Configure shell completion
yum install -y bash-completion
source /usr/share/bash-completion/bash_completion
source <(kubectl completion bash)
echo "source <(kubectl completion bash)" >> ~/.bashrc

IV. Join the Worker Nodes

1. Join the nodes to the cluster
[root@k8s-node01 ~]# kubeadm join 20.0.20.200:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:52dab3f1cfe0b0c202d676175f1216cf3c5919558d50805d595b74f480bcf75b
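The bootstrap token has a 24h TTL (set in kubeadm.conf), so if it has expired by the time a worker joins, generate a fresh join command on any master:
kubeadm token create --print-join-command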
2. Check the nodes
[root@k8s-master01 ~]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    master   9m30s   v1.15.3
k8s-master02   Ready    master   7m48s   v1.15.3
k8s-master03   Ready    master   7m49s   v1.15.3
k8s-node01     Ready    <none>   94s     v1.15.3
k8s-node02     Ready    <none>   91s     v1.15.3
k8s-node03     Ready    <none>   90s     v1.15.3
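The workers' ROLES column shows <none> by default; a role label can be added by hand if preferred (cosmetic only; node-role.kubernetes.io/<role> is the label key kubectl reads for that column):
kubectl label node k8s-node01 node-role.kubernetes.io/node=
kubectl label node k8s-node02 node-role.kubernetes.io/node=
kubectl label node k8s-node03 node-role.kubernetes.io/node=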

V. Test the Cluster

1. Test pods
# kubectl run nginx --image=nginx:1.14 --replicas=2
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
# kubectl get pod -o wide
NAME                     READY   STATUS    RESTARTS   AGE   IP          NODE         NOMINATED NODE   READINESS GATES
nginx-7b4d6c6559-82jd7   1/1     Running   0          81s   10.46.0.1   k8s-node03   <none>           <none>
nginx-7b4d6c6559-hzlx6   1/1     Running   0          81s   10.45.0.0   k8s-node01   <none>           <none>
# curl 10.46.0.1
<title>Welcome to nginx!</title>
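To exercise the IPVS service path as well, rather than only a pod IP, the deployment can be exposed as a ClusterIP service (a sketch; the assigned service IP will differ per cluster):
kubectl expose deployment nginx --port=80
kubectl get svc nginx
# then curl the CLUSTER-IP shown by the command above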
2. Test DNS
# kubectl run curl --image=radial/busyboxplus:curl -it
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
If you don't see a command prompt, try pressing enter.
[ root@curl-6bf6db5c4f-472s2:/ ]$ nslookup kubernetes.default
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local
Name:      kubernetes.default
Address 1: 10.96.0.1 kubernetes.default.svc.cluster.local
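Once the tests pass, the test workloads can be removed:
kubectl delete deployment nginx curl
kubectl delete svc nginx    # only if the service from the earlier sketch was created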