
How to Deploy Highly Available Kubernetes


This article explains how to deploy a highly available Kubernetes cluster, step by step. It is intended as a practical reference that you can follow along with.

1. Overview of Kubernetes High Availability

Kubernetes high availability means keeping the API Server on the Master nodes available. The API Server is the only entry point for creating, reading, updating and deleting every kind of Kubernetes resource object, and it acts as the data bus and data hub of the whole system. Putting a load balancer in front of the two Master nodes provides a stable entry point for the container cloud.

1.1 Host Allocation

Hostname       IP Address        OS           Key Software
K8s-master01   192.168.200.111   CentOS 7.x   Etcd + Kubernetes
K8s-master02   192.168.200.112   CentOS 7.x   Etcd + Kubernetes
K8s-node01     192.168.200.113   CentOS 7.x   Etcd + Kubernetes + Flannel + Docker
K8s-node02     192.168.200.114   CentOS 7.x   Etcd + Kubernetes + Flannel + Docker
K8s-lb01       192.168.200.115   CentOS 7.x   Nginx + Keepalived
K8s-lb02       192.168.200.116   CentOS 7.x   Nginx + Keepalived

The LB cluster VIP is 192.168.200.200.

1.2 High-Availability Architecture Topology

2. Deploying the High-Availability Architecture

2.1 Basic Environment Configuration

(1) Configure basic network settings

Configure the IP address, gateway and DNS (Alibaba Cloud's 223.5.5.5 is recommended) on every host. Static IP addresses are strongly recommended; if an address changes, cluster components can no longer reach the API Server and the Kubernetes cluster becomes unusable. A minimal static-address sketch follows.
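A minimal sketch of a static address on CentOS 7, assuming the interface is named ens32 (the name used on the LB nodes later in this article) and using k8s-master01's address; the gateway value is an assumption, so adjust interface name, address and gateway per host:

cat > /etc/sysconfig/network-scripts/ifcfg-ens32 <<EOF
TYPE=Ethernet
BOOTPROTO=static          # static addressing instead of DHCP
NAME=ens32
DEVICE=ens32
ONBOOT=yes
IPADDR=192.168.200.111    # this host's address
PREFIX=24
GATEWAY=192.168.200.2     # assumption: replace with your real gateway
DNS1=223.5.5.5            # Alibaba Cloud public DNS
EOF
systemctl restart network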

(2) Configure hostnames and hosts-file entries

Set the hostname on every host and add name-resolution entries to /etc/hosts; k8s-master01 is shown as the example.

[root@localhost ~]# hostnamectl set-hostname k8s-master01
[root@localhost ~]# bash
[root@k8s-master01 ~]# cat <<EOF >> /etc/hosts
192.168.200.111 k8s-master01
192.168.200.112 k8s-master02
192.168.200.113 k8s-node01
192.168.200.114 k8s-node02
192.168.200.115 k8s-lb01
192.168.200.116 k8s-lb02
EOF

(3) Disable the firewall and SELinux

[root@k8s-master01 ~]# iptables -F
[root@k8s-master01 ~]# systemctl stop firewalld && systemctl disable firewalld
[root@k8s-master01 ~]# setenforce 0
[root@k8s-master01 ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

2.2 Deploying the Cluster Certificates

Create the directory /k8s on k8s-master01 and upload the prepared etcd-cert.sh and etcd.sh scripts into it. etcd-cert.sh generates the etcd certificates; etcd.sh generates the etcd configuration file and systemd unit and starts the service.

[root@k8s-master01 ~]# mkdir /k8s
[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# ls
etcd-cert.sh  etcd.sh

Create the directory /k8s/etcd-cert and keep all of the certificates there so they are easy to manage.

[root@k8s-master01 k8s]# mkdir etcd-cert
[root@k8s-master01 k8s]# mv etcd-cert.sh etcd-cert

Upload the cfssl, cfssl-certinfo and cfssljson binaries (the certificate-generation tools), install them to /usr/local/bin, and make them executable.

[root@k8s-master01 k8s]# ls    # cfssl, cfssl-certinfo and cfssljson are the certificate-generation tools
cfssl  cfssl-certinfo  cfssljson  etcd-cert  etcd.sh
[root@k8s-master01 k8s]# mv cfssl* /usr/local/bin/
[root@k8s-master01 k8s]# chmod +x /usr/local/bin/cfssl*
[root@k8s-master01 k8s]# ls -l /usr/local/bin/cfssl*
-rwxr-xr-x 1 root root 10376657 Jul 21  2020 /usr/local/bin/cfssl
-rwxr-xr-x 1 root root  6595195 Jul 21  2020 /usr/local/bin/cfssl-certinfo
-rwxr-xr-x 1 root root  2277873 Jul 21  2020 /usr/local/bin/cfssljson

Create the CA and server certificates.

[root@k8s-master01 ~]# cd /k8s/etcd-cert/
[root@k8s-master01 etcd-cert]# cat etcd-cert.sh
cat > ca-config.json <<EOF ... EOF        # CA signing policy (JSON body lost in the original page)
cat > ca-csr.json <<EOF ... EOF           # CA certificate signing request
cat > server-csr.json <<EOF ... EOF       # etcd server certificate signing request
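The JSON bodies of the heredocs above were lost when this page was captured. For reference only, an etcd-cert.sh of this kind usually follows the standard cfssl workflow sketched below; the profile name, expiry and organization fields are assumptions and should be adapted to your environment:

# Sketch of a typical etcd-cert.sh (field values are assumptions)
cat > ca-config.json <<EOF
{"signing":{"default":{"expiry":"87600h"},
 "profiles":{"www":{"expiry":"87600h",
   "usages":["signing","key encipherment","server auth","client auth"]}}}}
EOF

cat > ca-csr.json <<EOF
{"CN":"etcd CA","key":{"algo":"rsa","size":2048},
 "names":[{"C":"CN","L":"Beijing","ST":"Beijing"}]}
EOF

cat > server-csr.json <<EOF
{"CN":"etcd",
 "hosts":["192.168.200.111","192.168.200.112","192.168.200.113","192.168.200.114"],
 "key":{"algo":"rsa","size":2048},
 "names":[{"C":"CN","L":"Beijing","ST":"Beijing"}]}
EOF

# Generate the CA, then sign the etcd server certificate with it
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
      -profile=www server-csr.json | cfssljson -bare server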

2.3 Deploying the Etcd Cluster

2.3.1 Prepare the etcd tools and the certificates needed to start it

[root@k8s-master01 ~]# cd /k8s/

Upload and extract the etcd-v3.3.18-linux-amd64.tar.gz package.

[root@k8s-master01 k8s]# ls
etcd-cert/  etcd.sh  etcd-v3.3.18-linux-amd64/  etcd-v3.3.18-linux-amd64.tar.gz
[root@k8s-master01 k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@k8s-master01 k8s]# cd etcd-v3.3.18-linux-amd64
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# mv etcd etcdctl /opt/etcd/bin/
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# cp /k8s/etcd-cert/*.pem /opt/etcd/ssl/
[root@k8s-master01 etcd-v3.3.18-linux-amd64]# ls /opt/etcd/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem

2.3.2 Deploy the etcd cluster

[root@k8s-master01 etcd-v3.3.18-linux-amd64]# cd /k8s/
[root@k8s-master01 k8s]# bash etcd.sh etcd01 192.168.200.111 etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

The script appears to hang while starting the etcd service, but the service has actually started (the process exists). The first etcd member keeps trying to connect to the other members, which are not running yet, so it is safe to press Ctrl+C and continue.

[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-master02:/opt/
[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-node01:/opt/
[root@k8s-master01 k8s]# scp -r /opt/etcd/ root@k8s-node02:/opt/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-master02:/usr/lib/systemd/system/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-node01:/usr/lib/systemd/system/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/etcd.service root@k8s-node02:/usr/lib/systemd/system/

The other nodes must edit the copied configuration before using it.

[root@k8s-master02 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"                                                # change to this member's name
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.112:2380"              # change to this host's IP address
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.112:2379"            # change to this host's IP address
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.112:2380"   # change to this host's IP address
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.112:2379"         # change to this host's IP address
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node01 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.113:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.113:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node02 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd04"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.114:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.114:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.114:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.114:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Run the following on all four hosts: k8s-master01, k8s-master02, k8s-node01 and k8s-node02.

[root@k8s-master01 k8s]# systemctl daemon-reload && systemctl restart etcd && systemctl enable etcd
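Once etcd has been restarted on all four members, it is worth verifying cluster health. A quick check, using the same v2-style etcdctl flags used elsewhere in this article, run from any member that has the certificates under /opt/etcd/ssl:

# Verify that every etcd member reports healthy
/opt/etcd/bin/etcdctl \
  --ca-file=/opt/etcd/ssl/ca.pem \
  --cert-file=/opt/etcd/ssl/server.pem \
  --key-file=/opt/etcd/ssl/server-key.pem \
  --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" \
  cluster-health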

2.4 Deploying the API Server Component

2.4.1 Create the required certificates

Upload and unzip master.zip; it contains three scripts: apiserver.sh, controller-manager.sh and scheduler.sh. Make them executable; each of these services is started through its corresponding script.

[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@k8s-master01 k8s]# chmod +x *.sh

Create /k8s/k8s-cert as the working directory for the self-signed certificates and generate all of them into it. Create the generation script k8s-cert.sh in that directory with the content shown below; running it produces the CA certificate, the server certificate and key, the admin client certificate and the kube-proxy client certificate.

[root@k8s-master01 k8s]# mkdir /k8s/k8s-cert
[root@k8s-master01 k8s]# cd /k8s/k8s-cert/
[root@k8s-master01 k8s-cert]# vim k8s-cert.sh
cat > ca-config.json <<EOF ... EOF          # CA signing policy (JSON body lost in the original page)
cat > ca-csr.json <<EOF ... EOF             # CA certificate signing request
cat > server-csr.json <<EOF ... EOF         # API Server certificate signing request
cat > admin-csr.json <<EOF ... EOF          # admin client certificate signing request
cat > kube-proxy-csr.json <<EOF ... EOF     # kube-proxy client certificate signing request

Running k8s-cert.sh produces eight .pem files (four certificate/key pairs).

[root@k8s-master01 k8s-cert]# bash k8s-cert.sh
2021/01/28 16:34:13 [INFO] generating a new CA key and certificate from CSR
2021/01/28 16:34:13 [INFO] generate received request
2021/01/28 16:34:13 [INFO] received CSR
2021/01/28 16:34:13 [INFO] generating key: rsa-2048
2021/01/28 16:34:13 [INFO] encoded CSR
2021/01/28 16:34:13 [INFO] signed certificate with serial number 308439344193766038756929834816982880388926996986
2021/01/28 16:34:13 [INFO] generate received request
2021/01/28 16:34:13 [INFO] received CSR
2021/01/28 16:34:13 [INFO] generating key: rsa-2048
2021/01/28 16:34:14 [INFO] encoded CSR
2021/01/28 16:34:14 [INFO] signed certificate with serial number 75368861589931302301330401750480744629388496397
2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2021/01/28 16:34:14 [INFO] generate received request
2021/01/28 16:34:14 [INFO] received CSR
2021/01/28 16:34:14 [INFO] generating key: rsa-2048
2021/01/28 16:34:14 [INFO] encoded CSR
2021/01/28 16:34:14 [INFO] signed certificate with serial number 108292524112693440628246698004254871159937905177
2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2021/01/28 16:34:14 [INFO] generate received request
2021/01/28 16:34:14 [INFO] received CSR
2021/01/28 16:34:14 [INFO] generating key: rsa-2048
2021/01/28 16:34:14 [INFO] encoded CSR
2021/01/28 16:34:14 [INFO] signed certificate with serial number 262399212790704249587468309931495790220005272357
2021/01/28 16:34:14 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@k8s-master01 k8s-cert]# ls *.pem
admin-key.pem  admin.pem  ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem
[root@k8s-master01 k8s-cert]# ls *.pem | wc -l
8

After the certificates have been generated, copy the CA and server certificates into the Kubernetes working directory. Create /opt/kubernetes/{cfg,bin,ssl} to hold the configuration files, binaries and certificates respectively.

[root@k8s-master01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@k8s-master01 ~]# cd /k8s/k8s-cert/
[root@k8s-master01 k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@k8s-master01 k8s-cert]# ls /opt/kubernetes/ssl/
ca-key.pem  ca.pem  server-key.pem  server.pem

2.4.2 Deploy the API Server

Upload and extract the Kubernetes server tarball, then copy the kube-apiserver, kubectl, kube-controller-manager and kube-scheduler binaries into /opt/kubernetes/bin/.

[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# tar xf kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 k8s]# cd kubernetes/server/bin/
[root@k8s-master01 bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@k8s-master01 bin]# ls /opt/kubernetes/bin/
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler

Create a token file named token.csv in /opt/kubernetes/cfg/. It essentially defines a bootstrap user role, which can be thought of as an administrative role; Node machines join the cluster through this role. First generate a random string with head to use as the token. The contents of token.csv are shown below, where:

  • 48be2e8be6cca6e349d3e932768f5d71 is the token;

  • kubelet-bootstrap is the user (role) name;

  • 10001 is the user ID;

  • "system:kubelet-bootstrap" is the group the user is bound to.

[root@k8s-master01 bin]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
48be2e8be6cca6e349d3e932768f5d71
[root@k8s-master01 ~]# vim /opt/kubernetes/cfg/token.csv
48be2e8be6cca6e349d3e932768f5d71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"

Copy everything under /opt/kubernetes/ on k8s-master01 to k8s-master02.

[root@k8s-master01 ~]# ls -R /opt/kubernetes/
/opt/kubernetes/:            # three directories
bin  cfg  ssl
/opt/kubernetes/bin:         # binaries
kube-apiserver  kube-controller-manager  kubectl  kube-scheduler
/opt/kubernetes/cfg:         # the token file
token.csv
/opt/kubernetes/ssl:         # certificates
ca-key.pem  ca.pem  server-key.pem  server.pem
[root@k8s-master01 ~]# scp -r /opt/kubernetes/ root@k8s-master02:/opt

Run the apiserver.sh script with two positional parameters: the first is the local IP address, the second is the list of etcd cluster endpoints.

[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# bash apiserver.sh 192.168.200.111 https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@k8s-master01 k8s]# ps aux | grep [k]ube

Check that the secure port 6443 and the local insecure HTTP port 8080 are listening on k8s-master01.

[root@k8s-master01 k8s]# netstat -anpt | grep -E "6443|8080"
tcp        0      0 192.168.200.111:6443    0.0.0.0:*               LISTEN      39105/kube-apiserve
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      39105/kube-apiserve
tcp        0      0 192.168.200.111:46832   192.168.200.111:6443    ESTABLISHED 39105/kube-apiserve
tcp        0      0 192.168.200.111:6443    192.168.200.111:46832   ESTABLISHED 39105/kube-apiserve

Copy the kube-apiserver configuration file and the token.csv file from the /opt/kubernetes/cfg/ working directory to k8s-master02, then edit the kube-apiserver configuration on k8s-master02 and change bind-address and advertise-address to the local address.

[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/* root@k8s-master02:/opt/kubernetes/cfg/

On k8s-master02:

KUBE_APISERVER_OPTS="--logtostderr=true \--v=4 \--etcd-servers=https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379 \--bind-address=192.168.200.112 \    # 修改为相应的IP地址--secure-port=6443 \--advertise-address=192.168.200.112 \    # 修改为相应的IP地址--allow-privileged=true \--service-cluster-ip-range=10.0.0.0/24 \--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \--authorization-mode=RBAC,Node \--kubelet-https=true \--enable-bootstrap-token-auth \--token-auth-file=/opt/kubernetes/cfg/token.csv \--service-node-port-range=30000-50000 \--tls-cert-file=/opt/kubernetes/ssl/server.pem  \--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \--client-ca-file=/opt/kubernetes/ssl/ca.pem \--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \--etcd-cafile=/opt/etcd/ssl/ca.pem \--etcd-certfile=/opt/etcd/ssl/server.pem \--etcd-keyfile=/opt/etcd/ssl/server-key.pem"[root@k8s-master02 ~]# vim /opt/kubernetes/cfg/kube-apiserver

Copy the kube-apiserver.service unit from k8s-master01 to /usr/lib/systemd/system on k8s-master02, start the API Server on k8s-master02, and check the listening ports.

[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-apiserver.service root@k8s-master02:/usr/lib/systemd/system

On k8s-master02:

[root@k8s-master02 ~]# systemctl start kube-apiserver && systemctl enable kube-apiserver
[root@k8s-master02 ~]# netstat -anptu | grep -E "6443|8080"
tcp        0      0 192.168.200.112:6443    0.0.0.0:*               LISTEN      544/kube-apiserver
tcp        0      0 127.0.0.1:8080          0.0.0.0:*               LISTEN      544/kube-apiserver

2.5 Deploying the Scheduler Component

[root@k8s-master01 ~]# cd /k8s/
[root@k8s-master01 k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@k8s-master01 k8s]# ps aux | grep [k]ube

Copy the kube-scheduler configuration file and the kube-scheduler.service unit from k8s-master01 to k8s-master02, then start the scheduler on k8s-master02.

[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/kube-scheduler root@k8s-master02:/opt/kubernetes/cfg/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-scheduler.service root@k8s-master02:/usr/lib/systemd/system

On k8s-master02:

[root@k8s-master02 ~]# systemctl start kube-scheduler
[root@k8s-master02 ~]# systemctl enable kube-scheduler

2.6 Deploying the Controller-Manager Component

Start the Controller-Manager service on k8s-master01.

[root@k8s-master01 k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.

Copy the kube-controller-manager configuration file and the kube-controller-manager.service unit from k8s-master01 to /opt/kubernetes/cfg on k8s-master02, then start Controller-Manager on k8s-master02.

[root@k8s-master01 k8s]# scp /opt/kubernetes/cfg/kube-controller-manager root@k8s-master02:/opt/kubernetes/cfg/
[root@k8s-master01 k8s]# scp /usr/lib/systemd/system/kube-controller-manager.service root@k8s-master02:/usr/lib/systemd/system

On k8s-master02:

[root@k8s-master02 ~]# systemctl start kube-controller-manager
[root@k8s-master02 ~]# systemctl enable kube-controller-manager

Check the component status on both k8s-master01 and k8s-master02.

[root@k8s-master01 k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-3               Healthy   {"health":"true"}

[root@k8s-master02 ~]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-3               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

2.7 Deploying the Docker Environment

Perform the following on both node hosts; k8s-node01 is shown as the example.

Install docker-ce

wget -O /etc/yum.repos.d/CentOS-Base.repo https://mirrors.aliyun.com/repo/Centos-7.repo
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast
yum -y install docker-ce
systemctl start docker && systemctl enable docker
docker version

Configure the Alibaba Cloud registry mirror

tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://vbgix9o1.mirror.aliyuncs.com"]
}
EOF
systemctl daemon-reload && systemctl restart docker
docker info

2.8 Deploying the Flannel Network Component

Docker is now installed on both node hosts, but the containers it runs still need the Flannel network component so that they can reach each other across nodes.

First, write the allocated subnet into Etcd so Flannel can use it; the routing and encapsulation information for the overlay network is also stored in Etcd.

The etcdctl command below lists the cluster endpoints separated by commas and uses set to write the network configuration as a key-value pair: the network is 172.17.0.0/16 and the backend type is vxlan.

After it runs, check that the docker0 address on the two node hosts (the Docker bridge gateway) falls within the 172.17.0.0/16 range.

[root@k8s-master01 ~]# cd /k8s/etcd-cert/
[root@k8s-master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}
[root@k8s-node01 ~]# ifconfig docker0
docker0: flags=4099  mtu 1500
        inet 172.17.0.1  netmask 255.255.0.0  broadcast 172.17.255.255
        ether 02:42:d6:c7:05:8b  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

Verify the network configuration that was written:

[root@k8s-master01 etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379" get /coreos.com/network/config
{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}

Upload the flannel-v0.12.0-linux-amd64.tar.gz package to both node hosts and extract it.

Perform the following on both node hosts.

[root@k8s-node01 ~]# tar xf flannel-v0.12.0-linux-amd64.tar.gz

Create the Kubernetes working directory on the node hosts and move the flanneld binary and the mk-docker-opts.sh script into its bin directory.

[root@k8s-node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@k8s-node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/

Copy the prepared flannel.sh script to both node hosts. It starts the Flannel service and creates its configuration: the configuration file /opt/kubernetes/cfg/flanneld holds the etcd endpoints and the certificate and key files needed for authentication, and the unit file /usr/lib/systemd/system/flanneld.service registers the service with systemd so it is managed centrally.

Using k8s-node01 as the example:

[root@k8s-node01 ~]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.200.113:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.200.113:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.200.113:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.200.113:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.200.111:2380,etcd02=https://192.168.200.112:2380,etcd03=https://192.168.200.113:2380,etcd04=https://192.168.200.114:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

[root@k8s-node01 ~]# cat flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

[root@k8s-node01 ~]# bash flannel.sh https://192.168.200.111:2379,https://192.168.200.112:2379,https://192.168.200.113:2379,https://192.168.200.114:2379

Configure Docker on both node hosts to use Flannel. docker.service needs the Flannel subnet to communicate, so edit docker.service: add EnvironmentFile=/run/flannel/subnet.env and pass the $DOCKER_NETWORK_OPTIONS variable to dockerd, as required upstream. k8s-node01 is shown as the example.

[root@k8s-node01 ~]# vim /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket containerd.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env    # add this line
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    # add the variable
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
# Both the old, and new location are accepted by systemd 229 and up, so using the old location
# to make them work for either version of systemd.
StartLimitBurst=3

# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
# this option work for either version of systemd.
StartLimitInterval=60s

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Comment TasksMax if your systemd version does not support it.
# Only systemd 226 and above support this option.
TasksMax=infinity

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process
OOMScoreAdjust=-500

[Install]
WantedBy=multi-user.target

The subnets assigned to the two node hosts are 172.17.11.1/24 and 172.17.100.1/24 respectively; bip specifies the bridge subnet Docker uses at startup.

[root@k8s-node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.11.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.11.1/24 --ip-masq=false --mtu=1450"

[root@k8s-node02 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.100.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.100.1/24 --ip-masq=false --mtu=1450"

After editing the unit file on both node hosts, restart Docker and check the docker0 interface on each node.

[root@k8s-node01 ~]# systemctl daemon-reload && systemctl restart docker
[root@k8s-node01 ~]# ip add s docker0
3: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:d6:c7:05:8b brd ff:ff:ff:ff:ff:ff
    inet 172.17.11.1/24 brd 172.17.11.255 scope global docker0
       valid_lft forever preferred_lft forever

[root@k8s-node02 ~]# systemctl daemon-reload && systemctl restart docker
[root@k8s-node02 ~]# ip add s docker0
3: docker0:  mtu 1500 qdisc noqueue state DOWN group default
    link/ether 02:42:b8:77:89:4a brd ff:ff:ff:ff:ff:ff
    inet 172.17.100.1/24 brd 172.17.100.255 scope global docker0
       valid_lft forever preferred_lft forever

Run a busybox container on each node host. (busybox bundles more than three hundred common Linux commands and tools in a single small image; it is used here purely for testing.)

Inside the containers, the one on k8s-node01 has the address 172.17.11.2 and the one on k8s-node02 has 172.17.100.2, matching the subnets shown in /run/flannel/subnet.env.

Then test with ping: if the container on k8s-node02 can ping the IP address of the container on k8s-node01, the two independent containers can communicate and the Flannel deployment works.

[root@k8s-node01 ~]# docker pull busybox
[root@k8s-node01 ~]# docker run -it busybox /bin/sh
/ # ip addr show eth0
9: eth0@if10:  mtu 1450 qdisc noqueue
    link/ether 02:42:ac:11:0b:03 brd ff:ff:ff:ff:ff:ff
    inet 172.17.11.2/24 brd 172.17.11.255 scope global eth0
       valid_lft forever preferred_lft forever

[root@k8s-node02 ~]# docker pull busybox
[root@k8s-node02 ~]# docker run -it busybox /bin/sh
/ # ip a s eth0
7: eth0@if8:  mtu 1450 qdisc noqueue
    link/ether 02:42:ac:11:64:02 brd ff:ff:ff:ff:ff:ff
    inet 172.17.100.2/24 brd 172.17.100.255 scope global eth0
       valid_lft forever preferred_lft forever
/ # ping -c 4 172.17.11.2
PING 172.17.11.2 (172.17.11.2): 56 data bytes
64 bytes from 172.17.11.2: seq=0 ttl=62 time=1.188 ms
64 bytes from 172.17.11.2: seq=1 ttl=62 time=0.598 ms
64 bytes from 172.17.11.2: seq=2 ttl=62 time=0.564 ms
64 bytes from 172.17.11.2: seq=3 ttl=62 time=0.372 ms

--- 172.17.11.2 ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.372/0.680/1.188 ms

2.9 Deploying the kubeconfig Configuration (analogous to kubeadm's initialization)

On k8s-master01, copy the kubelet and kube-proxy binaries to both node hosts.

[root@k8s-master01 ~]# cd /k8s/kubernetes/server/bin/
[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node01:/opt/kubernetes/bin/
[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node02:/opt/kubernetes/bin/

Upload node.zip to both node hosts and unzip it to obtain the proxy.sh and kubelet.sh scripts. k8s-node01 is shown as the example.

[root@k8s-node01 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
[root@k8s-node02 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh

Create the kubeconfig working directory on k8s-master01 and upload the kubeconfig.sh script to /k8s/kubeconfig/. The script creates the TLS bootstrapping token, generates the kubelet bootstrapping kubeconfig, sets the cluster, client-authentication and context parameters, switches to the default context, and generates the kube-proxy kubeconfig file.

Look up the token generated earlier and copy it into the client-authentication parameters by updating the token value in kubeconfig.sh.

[root@k8s-master01 ~]# mkdir /k8s/kubeconfig
[root@k8s-master01 ~]# cd /k8s/kubeconfig/
[root@k8s-master01 kubeconfig]# cat /opt/kubernetes/cfg/token.csv
48be2e8be6cca6e349d3e932768f5d71,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
[root@k8s-master01 kubeconfig]# vim kubeconfig.sh
# Create the TLS Bootstrapping Token
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
BOOTSTRAP_TOKEN=48be2e8be6cca6e349d3e932768f5d71
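The rest of kubeconfig.sh is not reproduced in the original page. For reference only, the kubelet bootstrapping portion of such a script is usually built from the standard kubectl config commands sketched below; APISERVER and SSL_DIR stand for the script's two positional parameters, and BOOTSTRAP_TOKEN is the value set above:

APISERVER=$1        # e.g. 192.168.200.111
SSL_DIR=$2          # e.g. /k8s/k8s-cert
KUBE_APISERVER="https://${APISERVER}:6443"

# bootstrap.kubeconfig used by kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=${SSL_DIR}/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-credentials kubelet-bootstrap \
  --token=${BOOTSTRAP_TOKEN} \
  --kubeconfig=bootstrap.kubeconfig

kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig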

For convenience, add /opt/kubernetes/bin/ to the PATH environment variable on both k8s-master01 and k8s-master02.

[root@k8s-master01 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile[root@k8s-master01 ~]# source /etc/profile[root@k8s-master02 ~]# echo "export PATH=$PATH:/opt/kubernetes/bin/" >> /etc/profile[root@k8s-master02 ~]# source /etc/profile

Rename kubeconfig.sh to kubeconfig and run it with bash. The first argument is the IP address of the current API Server, which is written into the generated configuration; the second argument is the directory containing the Kubernetes certificates. When it finishes it produces two files: bootstrap.kubeconfig and kube-proxy.kubeconfig.

[root@k8s-master01 ~]# cd /k8s/kubeconfig/
[root@k8s-master01 kubeconfig]# mv kubeconfig.sh kubeconfig
[root@k8s-master01 kubeconfig]# bash kubeconfig 192.168.200.111 /k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@k8s-master01 kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig  token.csv

Copy bootstrap.kubeconfig and kube-proxy.kubeconfig to both node hosts.

[root@k8s-master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@k8s-node01:/opt/kubernetes/cfg/
[root@k8s-master01 kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@k8s-node02:/opt/kubernetes/cfg/

Create the bootstrap role binding and grant it permission to request certificate signing from the API Server (this step is essential). Then check bootstrap.kubeconfig on k8s-node01: when kubelet starts and wants to join the cluster, it asks the API Server to sign a certificate, and this kubeconfig tells it which address and port to use for that request.

[root@k8s-master01 kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
[root@k8s-node01 ~]# cat /opt/kubernetes/cfg/bootstrap.kubeconfig
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUR2akNDQXFhZ0F3SUJBZ0lVTmdibUJxMkJpRkF5Z1lEVFpvb1p1a3V4QWZvd0RRWUpLb1pJaHZjTkFRRUwKQlFBd1pURUxNQWtHQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbAphV3BwYm1jeEREQUtCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByCmRXSmxjbTVsZEdWek1CNFhEVEl4TURFeU9EQTRNamt3TUZvWERUSTJNREV5TnpBNE1qa3dNRm93WlRFTE1Ba0cKQTFVRUJoTUNRMDR4RURBT0JnTlZCQWdUQjBKbGFXcHBibWN4RURBT0JnTlZCQWNUQjBKbGFXcHBibWN4RERBSwpCZ05WQkFvVEEyczRjekVQTUEwR0ExVUVDeE1HVTNsemRHVnRNUk13RVFZRFZRUURFd3ByZFdKbGNtNWxkR1Z6Ck1JSUJJakFOQmdrcWhraUc5dzBCQVFFRkFBT0NBUThBTUlJQkNnS0NBUUVBMk1GRzYyVDdMTC9jbnpvNGx4V0gKZGNOVnVkblkzRTl0S2ZvbThNaVZKcFRtVUhlYUhoczY2M1loK1VWSklnWXkwVXJzWGRyc2VPWDg2Nm9PcEN1NQpUajRKbEUxbXQ5b1NlOEhLeFVhYkRqVjlwd05WQm1WSllCOEZIMnZVaTZVZEVpOVNnVXF2OTZIbThBSUlFbTFhCmpLREc2QXRJRWFZdFpJQ1MyeVg5ZStPVXVCUUtkcDBCcGdFdUxkMko5OEpzSjkrRzV6THc5bWdab0t5RHBEeHUKVHdGRC9HK2k5Vk9mbTh7ZzYzVzRKMUJWL0RLVXpTK1Q3NEs0S3I5ZmhDbHp4ZVo3bXR1eXVxUkM2c1lrcXpBdApEbklmNzB1QWtPRzRYMU52eUhjVmQ5Rzg4ZEM3NDNSbFZGZGNvbzFOM0hoZ1FtaG12ZXdnZ0tQVjZHWGwwTkJnCkx3SURBUUFCbzJZd1pEQU9CZ05WSFE4QkFmOEVCQU1DQVFZd0VnWURWUjBUQVFIL0JBZ3dCZ0VCL3dJQkFqQWQKQmdOVkhRNEVGZ1FVRVJ0cngrWHB4andVQWlKemJnUEQ2bGJOUlFFd0h4WURWUjBqQkJnd0ZvQVVFUnRyeCtYcAp4andVQWlKemJnUEQ2bGJOUlFFd0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFJWTBtdWxJK25BdE1KcWpSZXFnCmRuWk1Ya2U3ZGxIeHJKMmkvT3NaSXRoZUhYakMwMGdNWlRZSGV6WUxSKzl0MUNKV1lmUVdOR3V3aktvYitPaDUKMlE5SURUZmpJblhXcmU5VU5SNUdGNndNUDRlRzZreUVNbE9WcUc3L2tldERpNlRzRkZyZWJVY0FraEFnV0J1eApJWXJWb1ZhMFlCK3hhZk1KdTIzMnQ5VmtZZHovdm9jWGV1MHd1L096Z1dsUEJFNFBkSUVHRWprYW5yQTk5UCtGCjhSUkJudmVJcjR4S21iMlJIcEFYWENMRmdvNTc1c1hEQWNGbWswVm1KM2kzL3pPbmlsd3cwRmpFNFU2OVRmNWMKekhncE0vdmtLbG9aTjYySW44YUNtbUZTcmphcjJRem1Ra3FwWHRsQmdoZThwUjQ3UWhiZS93OW5DWGhsYnVySgpzTzQ9Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
    server: https://192.168.200.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubelet-bootstrap
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: kubelet-bootstrap
  user:
    token: 48be2e8be6cca6e349d3e932768f5d71

2.10 Deploying the Kubelet Component

Run the kubelet.sh script on both node hosts and check the service with ps. Once kubelet starts, it automatically contacts the API Server to request a certificate. On k8s-master01, kubectl get csr shows the incoming requests; a Pending condition means the request is waiting for the cluster to issue the certificate.

[root@k8s-node01 ~]# bash kubelet.sh 192.168.200.113
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node01 ~]# ps aux | grep [k]ube

[root@k8s-node02 ~]# bash kubelet.sh 192.168.200.114
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@k8s-node02 ~]# ps aux | grep [k]ube

[root@k8s-master01 kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM   105s   kubelet-bootstrap   Pending
node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4   48s    kubelet-bootstrap   Pending

Issue the certificates to both node hosts from k8s-master01. kubectl get csr then shows the certificates as Approved,Issued, and kubectl get node shows that both nodes have joined the cluster.

[root@k8s-master01 kubeconfig]# kubectl certificate approve node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM
certificatesigningrequest.certificates.k8s.io/node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM approved
[root@k8s-master01 kubeconfig]# kubectl certificate approve node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4
certificatesigningrequest.certificates.k8s.io/node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4 approved
[root@k8s-master01 kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-4R-rZXsEI6Zf6HnklxVGZooXPwS11_8mTG5H5czDXTM   5m44s   kubelet-bootstrap   Approved,Issued
node-csr-Ha9B_rJVSOJ2OSTlOSiXwFbnwyLw_x4qUVfQfX-ks_4   4m47s   kubelet-bootstrap   Approved,Issued
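If there are many pending requests, they can also be approved in one pass instead of one by one; a small convenience sketch (not from the original article):

# Approve every CSR currently known to the cluster in one command
kubectl get csr -o name | xargs kubectl certificate approve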

2.11 Deploying the Kube-Proxy Component

Run the proxy.sh script on both node hosts.

[root@k8s-node01 ~]# bash proxy.sh 192.168.200.113
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node01 ~]# systemctl status kube-proxy.service

[root@k8s-node02 ~]# bash proxy.sh 192.168.200.114
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@k8s-node02 ~]# systemctl status kube-proxy

[root@k8s-master02 ~]# kubectl get node
NAME              STATUS   ROLES    AGE   VERSION
192.168.200.113   Ready    <none>   25h   v1.12.3
192.168.200.114   Ready    <none>   25h   v1.12.3

2.12 Deploying the Nginx Reverse Proxy

Building on NodePort, Kubernetes can ask the underlying cloud platform to create a load balancer that uses each Node as a backend and distributes traffic. That mode requires support from the underlying cloud platform (for example GCE). Since no cloud load balancer is available here, Nginx is used to proxy the two API Servers instead.

Install and configure Nginx on both lb01 and lb02; lb01 is shown as the example.

[root@k8s-lb01 ~]# rpm -ivh epel-release-latest-7.noarch.rpm
[root@k8s-lb01 ~]# yum -y install nginx
[root@k8s-lb01 ~]# vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}
stream {    # layer-4 proxy; the stream block is a sibling of http, do not nest it inside http
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {    # upstream k8s-apiserver points at port 6443 on both masters
        server 192.168.200.111:6443;
        server 192.168.200.112:6443;
    }
    server {    # listen on 6443 and proxy_pass to the upstream
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
[root@k8s-lb01 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@k8s-lb01 ~]# systemctl start nginx && systemctl enable nginx
Created symlink from /etc/systemd/system/multi-user.target.wants/nginx.service to /usr/lib/systemd/system/nginx.service.

Change the default index pages on the two Nginx hosts so they can be told apart, then open both LB nodes in a browser.

[root@k8s-lb01 ~]# echo "This is Master Server" > /usr/share/nginx/html/index.html[root@k8s-lb02 ~]# echo "This is Backup Server" > /usr/share/nginx/html/index.html

2.13 Deploying Keepalived

[root@k8s-lb01 ~]# yum -y install keepalived
[root@k8s-lb01 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state MASTER
    interface ens32
    virtual_router_id 51
    priority 100
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.200
    }
    track_script {
        check_nginx
    }
}
[root@k8s-lb01 ~]# scp /etc/keepalived/keepalived.conf 192.168.200.116:/etc/keepalived/

[root@k8s-lb02 ~]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived
global_defs {
   router_id LVS_DEVEL
}
vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}
vrrp_instance VI_1 {
    state BACKUP    # changed to BACKUP
    interface ens32
    virtual_router_id 51
    priority 90     # lower priority than the MASTER
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.200.200
    }
    track_script {
        check_nginx
    }
}

Create the health-check script on both LB nodes; it counts the nginx processes and stops the Keepalived service when the count drops to zero, so the VIP can fail over.

Run the following on both lb01 and lb02.

[root@k8s-lb01 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef|grep nginx|egrep -cv "grep|$$")
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@k8s-lb02 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@k8s-lb02 ~]# systemctl start keepalived && systemctl enable keepalived

Check the interface addresses: the floating address 192.168.200.200 is present on k8s-lb01, while k8s-lb02 does not hold it.

[root@k8s-lb01 ~]# ip a s ens32
2: ens32:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3b:05:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.115/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@k8s-lb02 ~]# ip a s ens32
2: ens32:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

Verify failover: stop the Nginx service on k8s-lb01 and check the addresses. The floating IP disappears from k8s-lb01 and the check script stops Keepalived there, while the floating IP is now bound to k8s-lb02. After starting Nginx and Keepalived on k8s-lb01 again, the floating IP moves back to k8s-lb01.

[root@k8s-lb01 ~]# systemctl stop nginx
[root@k8s-lb01 ~]# ps aux | grep [k]eepalived
[root@k8s-lb02 ~]# ip a s ens32
2: ens32:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

Recovery test:

[root@k8s-lb01 ~]# systemctl start nginx
[root@k8s-lb01 ~]# systemctl start keepalived
[root@k8s-lb01 ~]# ip a s ens32
2: ens32:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:3b:05:a3 brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.115/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet 192.168.200.200/32 scope global ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link noprefixroute
       valid_lft forever preferred_lft forever

[root@k8s-lb02 ~]# ip add s ens32
2: ens32:  mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:74:cc:8a brd ff:ff:ff:ff:ff:ff
    inet 192.168.200.116/24 brd 192.168.200.255 scope global noprefixroute ens32
       valid_lft forever preferred_lft forever
    inet6 fe80::e88c:df62:6a14:b1f3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::5d90:6146:c376:1e0f/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::ea69:6f19:80be:abc3/64 scope link tentative noprefixroute dadfailed
       valid_lft forever preferred_lft forever

On both node hosts, edit bootstrap.kubeconfig, kubelet.kubeconfig and kube-proxy.kubeconfig. All three files point at the API Server; update that address to the VIP.

Run the following on both node01 and node02.

[root@k8s-node01 ~]# cd /opt/kubernetes/cfg/
[root@k8s-node01 cfg]# vim bootstrap.kubeconfig
……    # excerpt
    server: https://192.168.200.111:6443
……    # excerpt
[root@k8s-node01 cfg]# vim kubelet.kubeconfig
……    # excerpt
    server: https://192.168.200.111:6443
……    # excerpt
[root@k8s-node01 cfg]# vim kube-proxy.kubeconfig
……    # excerpt
    server: https://192.168.200.111:6443
……    # excerpt
[root@k8s-node01 cfg]# grep 200.200 *
bootstrap.kubeconfig:    server: https://192.168.200.200:6443
kubelet.kubeconfig:    server: https://192.168.200.200:6443
kube-proxy.kubeconfig:    server: https://192.168.200.200:6443
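Instead of editing the three files by hand, the same change can be made with a one-line sed; a quick sketch (verify the result with grep afterwards, as above):

cd /opt/kubernetes/cfg/
# Replace the old API Server address with the VIP in all three kubeconfig files
sed -i 's#https://192.168.200.111:6443#https://192.168.200.200:6443#' \
    bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig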

Restart the related services on both node hosts (run on both node01 and node02).

[root@k8s-node01 cfg]# systemctl restart kubelet
[root@k8s-node01 cfg]# systemctl restart kube-proxy

Tail the Nginx access log on k8s-lb01; the entries show that requests are being balanced across both API Servers.

[root@k8s-lb01 ~]# tail -fn 200 /var/log/nginx/k8s-access.log
192.168.200.113 192.168.200.111:6443 - [29/Jan/2021:20:29:41 +0800] 200 1120
192.168.200.113 192.168.200.112:6443 - [29/Jan/2021:20:29:41 +0800] 200 1120
192.168.200.114 192.168.200.112:6443 - [29/Jan/2021:20:30:12 +0800] 200 1121
192.168.200.114 192.168.200.111:6443 - [29/Jan/2021:20:30:12 +0800] 200 1121

2.14 Deploying a Test Application

Create a Pod on k8s-master01 using the Nginx image.

[root@k8s-node01 ~]# docker pull nginx
[root@k8s-node02 ~]# docker pull nginx

[root@k8s-master01 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@k8s-master01 ~]# kubectl get pod
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-9f5m6   1/1     Running   1          21h

Grant permission to view the logs.

[root@k8s-master01 ~]# kubectl create clusterrolebinding cluseter-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluseter-system-anonymous created

With the -o wide parameter, the output shows the full network status: the container's IP is 172.17.11.2 and it is scheduled on the node 192.168.200.113.

[root@k8s-master01 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE
nginx-dbddb74b8-9f5m6   1/1     Running   0          4m27s   172.17.11.2   192.168.200.113   <none>

[root@k8s-node01 ~]# ip a s flannel.1
4: flannel.1:  mtu 1450 qdisc noqueue state UNKNOWN group default
    link/ether a6:29:7d:74:2d:1a brd ff:ff:ff:ff:ff:ff
    inet 172.17.11.0/32 scope global flannel.1
       valid_lft forever preferred_lft forever
    inet6 fe80::a429:7dff:fe74:2d1a/64 scope link
       valid_lft forever preferred_lft forever

Use curl to access the Pod at 172.17.11.2. Each request produces an access-log entry, which can then be viewed from k8s-master01; the page is reachable from the other node as well.

[root@k8s-node01 ~]# curl 172.17.11.2
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

Check the log output:

[root@k8s-master01 ~]# kubectl logs nginx-dbddb74b8-9f5m6
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.11.1 - - [29/Jan/2021:12:58:28 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
[root@k8s-master01 ~]# kubectl logs nginx-dbddb74b8-9f5m6
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: info: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: info: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
172.17.11.1 - - [29/Jan/2021:12:58:28 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
172.17.11.1 - - [29/Jan/2021:12:59:41 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"

