
Kubernetes 1.15.0 High Availability (keepalived + haproxy)


1. Master High Availability

To eliminate the single points of failure of the Master and of etcd, the Master must be made highly available, and the etcd data must stay consistent across all members.

# Set a unique hostname on each of the three machines
[root@localhost ~]# hostnamectl set-hostname master01
[root@localhost ~]# hostnamectl set-hostname master02
[root@localhost ~]# hostnamectl set-hostname master03

# Generate an SSH key on master01
[root@localhost ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:6H0xzKWAv63KofmN8wNlt93tO/Asbl6WDICBCYhvcds root@master01
The key's randomart image is:
+---[RSA 2048]----+
|  . ... o.       |
| . o . +  o      |
|  . o + .. ..    |
|   o . Eo+.o.    |
|  .   .oS.*o o . |
|     ... o.o..+ o|
|      o.o o   +* |
|     +.+.o   oo+.|
|    o.=++.  +o..o|
+----[SHA256]-----+

# Copy the key to the other two masters
[root@localhost ~]# ssh-copy-id root@172.16.216.229
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '172.16.216.229 (172.16.216.229)' can't be established.
ECDSA key fingerprint is SHA256:RSjZGjpxNF+3FfNVScnO7si+ixmb5cvjEQChMZANJl8.
ECDSA key fingerprint is MD5:91:c5:3d:0a:22:4a:51:9b:b6:57:04:c8:f4:10:df:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@172.16.216.229's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@172.16.216.229'"
and check to make sure that only the key(s) you wanted were added.

[root@localhost ~]# ssh-copy-id root@172.16.216.230
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '172.16.216.230 (172.16.216.230)' can't be established.
ECDSA key fingerprint is SHA256:RSjZGjpxNF+3FfNVScnO7si+ixmb5cvjEQChMZANJl8.
ECDSA key fingerprint is MD5:91:c5:3d:0a:22:4a:51:9b:b6:57:04:c8:f4:10:df:fd.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@172.16.216.230's password: 

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'root@172.16.216.230'"
and check to make sure that only the key(s) you wanted were added.

# Add host entries for name resolution (the same file on all three masters)
[root@localhost ~]# vim /etc/hosts
172.16.216.228 master01 master01.linuxplus.com
172.16.216.229 master02 master02.linuxplus.com
172.16.216.230 master03 master03.linuxplus.com
172.16.216.234 cluster-node1
172.16.216.235 cluster-node2

# Reboot all three machines
[root@localhost ~]# reboot
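Since the same /etc/hosts content is needed on every machine, it can also be pushed from master01 once the SSH keys are in place, instead of editing the file by hand three times. A minimal sketch (the IP list is taken from the steps above):

# Push /etc/hosts from master01 to the other two masters
for h in 172.16.216.229 172.16.216.230; do
    scp /etc/hosts root@${h}:/etc/hosts
done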

Deploy keepalived

# Enable IPv4 forwarding on all three servers
[root@master01 ~]# cat >> /etc/sysctl.conf << EOF
> net.ipv4.ip_forward = 1
> EOF
[root@master01 ~]# sysctl -p
net.ipv4.ip_forward = 1
(run the same two commands on master02 and master03)

# Install keepalived on all three servers
[root@master01 ~]# yum install -y keepalived
[root@master02 ~]# yum install -y keepalived
[root@master03 ~]# yum install -y keepalived

# Configure keepalived: master01 is the VRRP MASTER with priority 250;
# master02 and master03 are BACKUPs with priorities 249 and 248
[root@master01 ~]# cd /etc/keepalived/
[root@master01 keepalived]# vim keepalived.conf
! Configuration File for keepalived
global_defs {
    router_id LVS_DEVEL
}
vrrp_script check_haproxy {
    script "killall -0 haproxy"
    interval 3
    weight -2
    fall 10
    rise 2
}
vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51
    priority 250
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 35f18af7190d51c9f7f78f37300a0cbd
    }
    virtual_ipaddress {
        172.16.216.30/24 dev ens33
    }
    track_script {
        check_haproxy
    }
}

[root@master02 ~]# cd /etc/keepalived/
[root@master02 keepalived]# vim keepalived.conf
! Configuration File for keepalived
(identical to master01 except:)
    state BACKUP
    priority 249

[root@master03 ~]# cd /etc/keepalived/
[root@master03 keepalived]# vim keepalived.conf
! Configuration File for keepalived
(identical to master01 except:)
    state BACKUP
    priority 248

# Enable and start the service, then check its status
//Master01
[root@master01 keepalived]# systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@master01 keepalived]# systemctl start keepalived.service
[root@master01 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-06-30 21:53:50 CST; 5s ago
  Process: 45326 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 45327 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─45327 /usr/sbin/keepalived -D
           ├─45328 /usr/sbin/keepalived -D
           └─45329 /usr/sbin/keepalived -D
Jun 30 21:53:50 master01 Keepalived_vrrp[45329]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jun 30 21:53:51 master01 Keepalived_vrrp[45329]: VRRP_Instance(VI_1) Transition to MASTER STATE
Jun 30 21:53:52 master01 Keepalived_vrrp[45329]: VRRP_Instance(VI_1) Entering MASTER STATE
Jun 30 21:53:52 master01 Keepalived_vrrp[45329]: VRRP_Instance(VI_1) setting protocol VIPs.
Jun 30 21:53:52 master01 Keepalived_vrrp[45329]: Sending gratuitous ARP on ens33 for 172.16.216.30
Jun 30 21:53:52 master01 Keepalived_vrrp[45329]: VRRP_Instance(VI_1) Sending/queueing gratuitous ARPs on ens33 for 172.16.216.30
(further identical gratuitous ARP lines omitted)

# The VIP 172.16.216.30 is bound on master01
[root@master01 keepalived]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:37:f5:ef brd ff:ff:ff:ff:ff:ff
    inet 172.16.216.228/24 brd 172.16.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet 172.16.216.30/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe37:f5ef/64 scope link 
       valid_lft forever preferred_lft forever

//Master02
[root@master02 keepalived]# systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@master02 keepalived]# systemctl start keepalived.service
[root@master02 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-06-30 21:54:09 CST; 3s ago
  Process: 45054 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 45055 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─45055 /usr/sbin/keepalived -D
           ├─45056 /usr/sbin/keepalived -D
           └─45057 /usr/sbin/keepalived -D
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: Registering gratuitous ARP shared channel
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: Opening file '/etc/keepalived/keepalived.conf'.
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: Truncating auth_pass to 8 characters
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: Cannot find script killall in path
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: Disabling track script check_haproxy since not found
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: VRRP_Instance(VI_1) removing protocol VIPs.
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: Using LinkWatch kernel netlink reflector...
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jun 30 21:54:09 master02 Keepalived_vrrp[45057]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]

# No VIP on master02: it is in BACKUP state
[root@master02 keepalived]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:dd:8b:2b brd ff:ff:ff:ff:ff:ff
    inet 172.16.216.229/24 brd 172.16.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fedd:8b2b/64 scope link 
       valid_lft forever preferred_lft forever

//Master03
[root@master03 keepalived]# systemctl enable keepalived.service
Created symlink from /etc/systemd/system/multi-user.target.wants/keepalived.service to /usr/lib/systemd/system/keepalived.service.
[root@master03 keepalived]# systemctl start keepalived.service
[root@master03 keepalived]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-06-30 21:54:22 CST; 3s ago
  Process: 42102 ExecStart=/usr/sbin/keepalived $KEEPALIVED_OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 42103 (keepalived)
   CGroup: /system.slice/keepalived.service
           ├─42103 /usr/sbin/keepalived -D
           ├─42104 /usr/sbin/keepalived -D
           └─42105 /usr/sbin/keepalived -D
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: WARNING - default user 'keepalived_script' for script execution does not exist - please create.
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: Truncating auth_pass to 8 characters
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: Cannot find script killall in path
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: Disabling track script check_haproxy since not found
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: VRRP_Instance(VI_1) removing protocol VIPs.
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: Using LinkWatch kernel netlink reflector...
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: VRRP_Instance(VI_1) Entering BACKUP STATE
Jun 30 21:54:22 master03 Keepalived_vrrp[42105]: VRRP sockpool: [ifindex(2), proto(112), unicast(0), fd(10,11)]
Jun 30 21:54:22 master03 Keepalived_healthcheckers[42104]: Initializing ipvs
Jun 30 21:54:22 master03 Keepalived_healthcheckers[42104]: Opening file '/etc/keepalived/keepalived.conf'.

# No VIP on master03 either
[root@master03 keepalived]# ip address show ens33
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 00:0c:29:6d:08:5b brd ff:ff:ff:ff:ff:ff
    inet 172.16.216.230/24 brd 172.16.216.255 scope global ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::20c:29ff:fe6d:85b/64 scope link 
       valid_lft forever preferred_lft forever
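Two details in the status output above deserve attention. First, "Truncating auth_pass to 8 characters": VRRP PASS authentication only uses the first 8 characters, so only "35f18af7" of the long password is actually effective. Second, master02 and master03 log "Cannot find script killall in path" followed by "Disabling track script check_haproxy since not found", which means the haproxy health check never runs there and the VIP cannot fail away from a node whose haproxy has died. A minimal fix, assuming killall is provided by the psmisc package on CentOS 7:

# Restore the check_haproxy track script on master02 and master03
[root@master02 ~]# yum install -y psmisc
[root@master02 ~]# systemctl restart keepalived
(repeat on master03; the "Disabling track script" line should no longer appear in the status log)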

Install and configure haproxy

# Allow binding to the not-yet-local VIP on all three servers
[root@master01 keepalived]# cat >> /etc/sysctl.conf << EOF
> net.ipv4.ip_nonlocal_bind = 1
> EOF
[root@master01 keepalived]# sysctl -p
net.ipv4.ip_forward = 1
net.ipv4.ip_nonlocal_bind = 1
(run the same two commands on master02 and master03)

# Install haproxy on all three servers
[root@master01 ~]# yum install -y haproxy
[root@master02 ~]# yum install -y haproxy
[root@master03 ~]# yum install -y haproxy

# Edit the configuration file
[root@master01 ~]# cd /etc/haproxy/
[root@master01 haproxy]# vim haproxy.cfg
#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events.  This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #   file. A line like the following can be added to
    #   /etc/sysconfig/syslog
    #
    #    local2.*                       /var/log/haproxy.log
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     40000
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option                  httplog
    option                  dontlognull
    option http-server-close
    option forwardfor       except 127.0.0.0/8
    option                  redispatch
    retries                 3
    timeout http-request    10s
    timeout queue           1m
    timeout connect         10s
    timeout client          1m
    timeout server          1m
    timeout http-keep-alive 10s
    timeout check           10s
    maxconn                 3000

#---------------------------------------------------------------------
# kubernetes apiserver frontend which proxys to the backends
#---------------------------------------------------------------------
frontend kubernetes-apiserver
    mode                 tcp
    bind                 *:16443
    option               tcplog
    default_backend      kubernetes-apiserver

#---------------------------------------------------------------------
# round robin balancing between the various backends
#---------------------------------------------------------------------
backend kubernetes-apiserver
    mode        tcp
    balance     roundrobin
    server  master01  172.16.216.228:6443 check
    server  master02  172.16.216.229:6443 check
    server  master03  172.16.216.230:6443 check

#---------------------------------------------------------------------
# collection haproxy statistics message
#---------------------------------------------------------------------
listen stats
    bind               *:1080
    stats auth         admin:awesomePassword
    stats refresh      5s
    stats realm        HAProxy\ Statistics
    stats uri          /admin?stats

# Enable and start haproxy on master01, then verify the listeners
[root@master01 haproxy]# systemctl enable haproxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@master01 haproxy]# systemctl start haproxy.service
[root@master01 haproxy]# systemctl status haproxy.service
● haproxy.service - HAProxy Load Balancer
   Loaded: loaded (/usr/lib/systemd/system/haproxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Sun 2019-06-30 22:27:15 CST; 6s ago
 Main PID: 80058 (haproxy-systemd)
   CGroup: /system.slice/haproxy.service
           ├─80058 /usr/sbin/haproxy-systemd-wrapper -f /etc/haproxy/haproxy.cfg -p /run/haproxy.p...
           ├─80059 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
           └─80060 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -Ds
Jun 30 22:27:15 master01 systemd[1]: Started HAProxy Load Balancer.
Jun 30 22:27:15 master01 systemd[1]: Starting HAProxy Load Balancer...
Jun 30 22:27:15 master01 haproxy-systemd-wrapper[80058]: haproxy-systemd-wrapper: executing /usr...Ds
Jun 30 22:27:15 master01 haproxy-systemd-wrapper[80058]: [WARNING] 180/222715 (80059) : config :...e.
Jun 30 22:27:15 master01 haproxy-systemd-wrapper[80058]: [WARNING] 180/222715 (80059) : config :...e.
Hint: Some lines were ellipsized, use -l to show in full.
[root@master01 haproxy]# ss -lnt | grep -E "16443|1080"
LISTEN     0      128          *:16443                    *:*
LISTEN     0      128          *:1080                     *:*

# Copy the configuration to the other two masters
[root@master01 haproxy]# scp haproxy.cfg root@172.16.216.229:/etc/haproxy/
haproxy.cfg                                   100% 4320     2.2MB/s   00:00
[root@master01 haproxy]# scp haproxy.cfg root@172.16.216.230:/etc/haproxy/
haproxy.cfg                                   100% 4320     3.3MB/s   00:00

# Enable and start haproxy on master02 and master03, then verify
[root@master02 keepalived]# systemctl enable haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@master02 keepalived]# systemctl start haproxy
[root@master02 keepalived]# systemctl status haproxy
(active (running), same as on master01)
[root@master02 keepalived]# ss -lnt | grep -E "16443|1080"
LISTEN     0      128          *:16443                    *:*
LISTEN     0      128          *:1080                     *:*
[root@master03 keepalived]# systemctl enable haproxy
Created symlink from /etc/systemd/system/multi-user.target.wants/haproxy.service to /usr/lib/systemd/system/haproxy.service.
[root@master03 keepalived]# systemctl start haproxy
[root@master03 keepalived]# systemctl status haproxy.service
(active (running), same as on master01)
[root@master03 keepalived]# ss -lnt | grep -E "16443|1080"
LISTEN     0      128          *:16443                    *:*
LISTEN     0      128          *:1080                     *:*
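The frontend binds to 16443 rather than 6443 because each kube-apiserver will itself listen on 6443 on these same hosts. After editing or copying the file, the configuration can be syntax-checked before a restart; -c is haproxy's standard check mode:

# Validate the configuration without starting the proxy
[root@master01 haproxy]# haproxy -c -f /etc/haproxy/haproxy.cfg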

Install and configure Kubernetes

#------------------ System configuration --------------------------------
[root@master01 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@master01 ~]# sysctl --system
(run the same two commands on master02 and master03)

#------------------ Install Docker ---------------------------------------
[root@master01 ~]# yum install -y docker
[root@master01 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@master01 ~]# systemctl start docker
(run the same on master02 and master03)

#------------------ Configure the yum repository and install the tools ---
[root@master01 ~]# cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
[root@master01 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@master01 ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
(run the same on master02 and master03)

#------------------ Edit the kubeadm configuration file ------------------
[root@master01 ~]# cat > kubeadm-config.yaml <<EOF
[... the heredoc body is missing from the source, along with the kubeadm init run and the steps that joined master02 and master03 as control-plane nodes ...]

# With all three masters and both workers joined, every node reports Ready
[root@master01 ~]# kubectl get nodes
NAME            STATUS   ROLES    AGE    VERSION
cluster-node1   Ready    <none>   2d4h   v1.15.0
cluster-node2   Ready    <none>   2d4h   v1.15.0
master01        Ready    master   2d6h   v1.15.0
master02        Ready    master   2d5h   v1.15.0
master03        Ready    master   2d5h   v1.15.0
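Since the body of kubeadm-config.yaml was lost from the source, here is a minimal sketch, for reference only, of a v1beta2 ClusterConfiguration that matches this topology. The controlPlaneEndpoint is inferred from the kubeadm join command used in the next section; the pod subnet is an assumption:

apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
controlPlaneEndpoint: "172.16.216.30:6443"   # keepalived VIP, as used by kubeadm join below
networking:
  podSubnet: "10.244.0.0/16"                 # assumption: flannel's default pod CIDR

The first control plane would then typically be initialized with kubeadm init --config=kubeadm-config.yaml --upload-certs (in 1.15 the --upload-certs flag distributes the control-plane certificates), and master02/master03 joined with the kubeadm join ... --control-plane command it prints.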

2. Node Installation and Configuration

# Set hostnames on both worker nodes
[root@localhost ~]# hostnamectl set-hostname cluster-node1
[root@localhost ~]# logout
[root@localhost ~]# hostnamectl set-hostname cluster-node2
[root@localhost ~]# logout

# Distribute /etc/hosts from master01 to both workers
[root@master01 ~]# scp /etc/hosts root@cluster-node1:/etc/
The authenticity of host 'cluster-node1 (172.16.216.234)' can't be established.
ECDSA key fingerprint is SHA256:RSjZGjpxNF+3FfNVScnO7si+ixmb5cvjEQChMZANJl8.
ECDSA key fingerprint is MD5:91:c5:3d:0a:22:4a:51:9b:b6:57:04:c8:f4:10:df:fd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cluster-node1,172.16.216.234' (ECDSA) to the list of known hosts.
root@cluster-node1's password: 
hosts                                         100%  389   140.9KB/s   00:00
[root@master01 ~]# scp /etc/hosts root@cluster-node2:/etc/
The authenticity of host 'cluster-node2 (172.16.216.235)' can't be established.
ECDSA key fingerprint is SHA256:RSjZGjpxNF+3FfNVScnO7si+ixmb5cvjEQChMZANJl8.
ECDSA key fingerprint is MD5:91:c5:3d:0a:22:4a:51:9b:b6:57:04:c8:f4:10:df:fd.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'cluster-node2,172.16.216.235' (ECDSA) to the list of known hosts.
root@cluster-node2's password: 
hosts                                         100%  389    17.7KB/s   00:00

# Kernel parameters and Docker on both workers
[root@cluster-node1 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@cluster-node1 ~]# sysctl --system
[root@cluster-node1 ~]# yum install -y docker
[root@cluster-node1 ~]# systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
[root@cluster-node1 ~]# systemctl start docker
(run the same on cluster-node2)

#--------------------------------------------------------------------------
# Configure the Kubernetes yum repository on both workers
[root@cluster-node1 ~]# cat <<EOF > /etc/yum.repos.d/k8s.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
(run the same on cluster-node2)

# Disable swap and comment out the swap entry in /etc/fstab
[root@cluster-node1 ~]# swapoff -a
[root@cluster-node1 ~]# vim /etc/fstab
#
# /etc/fstab
# Created by anaconda on Sun Aug  5 13:54:05 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/cl-root     /                       xfs     defaults        0 0
UUID=5c427701-1abf-4d52-821d-b7bdc68ed358 /boot                   xfs     defaults        0 0
#/dev/mapper/cl-swap     swap                    swap    defaults        0 0

# Pull the required images from a mirror and retag them as k8s.gcr.io
[root@cluster-node1 ~]# docker pull mirrorgooglecontainers/kube-proxy:v1.15.0
Trying to pull repository docker.io/mirrorgooglecontainers/kube-proxy ... 
v1.15.0: Pulling from docker.io/mirrorgooglecontainers/kube-proxy
6cf6a0b0da0d: Already exists 
8e1ce322a1d9: Pull complete 
b593bfa65f6f: Pull complete 
Digest: sha256:63b8aaf1697550f318e9b46e5a7fc019f1d86912f1f3c9d9070bd00aaa361d0b
[root@cluster-node1 ~]# docker pull mirrorgooglecontainers/pause:3.1
Trying to pull repository docker.io/mirrorgooglecontainers/pause ... 
3.1: Pulling from docker.io/mirrorgooglecontainers/pause
67ddbfb20a22: Pull complete 
Digest: sha256:59eec8837a4d942cc19a52b8c09ea75121acc38114a2c68b98983ce9356b8610
Status: Downloaded newer image for docker.io/mirrorgooglecontainers/pause:3.1
[root@cluster-node1 ~]# docker pull coredns/coredns:1.3.1
Trying to pull repository docker.io/coredns/coredns ... 
1.3.1: Pulling from docker.io/coredns/coredns
Digest: sha256:02382353821b12c21b062c59184e227e001079bb13ebd01f9d3270ba0fcbf1e4
Status: Image is up to date for docker.io/coredns/coredns:1.3.1
[root@cluster-node1 ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
[root@cluster-node1 ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

# Install the Kubernetes tools and join the cluster
[root@cluster-node1 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@cluster-node1 ~]# systemctl enable --now kubelet
[root@cluster-node1 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@cluster-node1 ~]# kubeadm join 172.16.216.30:6443 --token z37llz.huyi3c5j1l3tt1uz --discovery-token-ca-cert-hash sha256:a61ce60107cb929f65416b31b6ae95299c90e482f95aac25cf1d42700ab36481
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.

# cluster-node2 follows the same steps
[root@cluster-node2 ~]# swapoff -a
[root@cluster-node2 ~]# vim /etc/fstab
(comment out the swap entry as on cluster-node1)
[root@cluster-node2 ~]# docker pull mirrorgooglecontainers/kube-proxy:v1.15.0
[root@cluster-node2 ~]# docker pull mirrorgooglecontainers/pause:3.1
[root@cluster-node2 ~]# docker tag docker.io/mirrorgooglecontainers/kube-proxy:v1.15.0 k8s.gcr.io/kube-proxy:v1.15.0
[root@cluster-node2 ~]# docker tag docker.io/mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1
[root@cluster-node2 ~]# yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
[root@cluster-node2 ~]# systemctl enable --now kubelet
[root@cluster-node2 ~]# echo 1 > /proc/sys/net/ipv4/ip_forward
[root@cluster-node2 ~]# kubeadm join 172.16.216.30:6443 --token z37llz.huyi3c5j1l3tt1uz --discovery-token-ca-cert-hash sha256:a61ce60107cb929f65416b31b6ae95299c90e482f95aac25cf1d42700ab36481
(preflight and kubelet-start output identical to cluster-node1)

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
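The bootstrap token used in the join commands above expires after 24 hours by default. If more workers need to be added later, a fresh token together with the full join command can be printed on any master with the standard kubeadm subcommand:

# Generate a new token and print the matching kubeadm join command
[root@master01 ~]# kubeadm token create --print-join-command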

