Complete K8s Multi-Node Deployment (Hands-On Production Walkthrough, Troubleshooting Included)


K8s multi-node deployment ----> load balancing with Nginx ----> UI dashboard


Special note: a single-master k8s cluster must already be deployed before starting this lab.
See my previous post: https://blog.csdn.net/JarryZho/article/details/104193913

Environment:
Related packages and documentation:

Link: https://pan.baidu.com/s/1l4vVCkZ03la-VpIFXSz1dA
Extraction code: rg99

Nginx load balancers:

lb1: 192.168.18.147/24 (mini-2)

lb2: 192.168.18.133/24 (mini-3)

Master nodes:

master1: 192.168.18.128/24 (CentOS 7-3)

master2: 192.168.18.132/24 (mini-1)

Node nodes:

node1: 192.168.18.148/24 (CentOS 7-4)

node2: 192.168.18.145/24 (CentOS 7-5)

VRRP floating address (VIP): 192.168.18.100


Multi-master cluster architecture diagram (image omitted here):


------ master2 Deployment ------

Step 1: first, stop the firewall service on master2
[root@master2 ~]# systemctl stop firewalld.service
[root@master2 ~]# setenforce 0
Step 2: on master1, copy the kubernetes directory to master2
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.18.132:/opt
The authenticity of host '192.168.18.132 (192.168.18.132)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.132' (ECDSA) to the list of known hosts.
root@192.168.18.132's password:
token.csv                                                 100%   84    90.2KB/s   00:00
kube-apiserver                                            100%  934   960.7KB/s   00:00
kube-scheduler                                            100%   94   109.4KB/s   00:00
kube-controller-manager                                   100%  483   648.6KB/s   00:00
kube-apiserver                                            100%  184MB  82.9MB/s   00:02
kubectl                                                   100%   55MB  81.5MB/s   00:00
kube-controller-manager                                   100%  155MB  70.6MB/s   00:02
kube-scheduler                                            100%   55MB  77.4MB/s   00:00
ca-key.pem                                                100% 1675     1.2MB/s   00:00
ca.pem                                                    100% 1359     1.5MB/s   00:00
server-key.pem                                            100% 1675     1.2MB/s   00:00
server.pem                                                100% 1643     1.7MB/s   00:00
Step 3: copy the three component unit files (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service) from master1 to master2
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.18.132:/usr/lib/systemd/system/
root@192.168.18.132's password:
kube-apiserver.service                                    100%  282   286.6KB/s   00:00
kube-controller-manager.service                           100%  317   223.9KB/s   00:00
kube-scheduler.service                                    100%  281   362.4KB/s   00:00
Step 4: on master2, change the IP addresses in the kube-apiserver configuration file
[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.18.132 \
7 --advertise-address=192.168.18.132 \
#Change the IP addresses on lines 5 and 7 to master2's address
#When done, press Esc to leave insert mode, then type :wq to save and quit
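If you prefer not to edit the file by hand, the same two changes can be made with sed. This is only a sketch and assumes those lines still hold master1's address (192.168.18.128); do not replace the IP globally, because any --etcd-servers entry must keep pointing at the existing etcd cluster:

cd /opt/kubernetes/cfg/
# Rewrite only the bind and advertise addresses to master2's IP
sed -i 's/--bind-address=192.168.18.128/--bind-address=192.168.18.132/; s/--advertise-address=192.168.18.128/--advertise-address=192.168.18.132/' kube-apiserver
grep -n 'address' kube-apiserver    # confirm both lines now show 192.168.18.132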
Step 5: copy the existing etcd certificates from master1 to master2

Special note: master2 must have the etcd certificates, otherwise the apiserver service will not start.

[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.18.132:/opt/
root@192.168.18.132's password:
etcd                                                      100%  516   535.5KB/s   00:00
etcd                                                      100%   18MB  90.6MB/s   00:00
etcdctl                                                   100%   15MB  80.5MB/s   00:00
ca-key.pem                                                100% 1675     1.4MB/s   00:00
ca.pem                                                    100% 1265   411.6KB/s   00:00
server-key.pem                                            100% 1679     2.0MB/s   00:00
server.pem                                                100% 1338   429.6KB/s   00:00
Step 6: start the three component services on master2
[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-02-07 09:16:57 CST; 56min ago
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-02-07 09:17:02 CST; 57min ago
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 五 2020-02-07 09:17:07 CST; 58min ago
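The three services can also be started, enabled, and checked with one short loop; a minimal sketch:

for svc in kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl start  ${svc}.service
    systemctl enable ${svc}.service
    systemctl is-active ${svc}.service    # should print "active" for every component
done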
Step 7: add the environment variable and make it take effect
[root@master2 cfg]# vim /etc/profile
#Append at the end of the file:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
[root@master2 cfg]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.145   Ready    <none>   21h   v1.12.3
192.168.18.148   Ready    <none>   22h   v1.12.3
#Both node1 and node2 show up as joined
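As an extra sanity check (not part of the original steps), the control-plane component health can also be queried from master2; a small sketch:

kubectl get cs    # componentstatuses: scheduler, controller-manager, and the etcd members should all report Healthy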

master2 is now fully deployed.


------ Nginx Load Balancer Deployment ------

Note: nginx is used here to implement load balancing. Since version 1.9, nginx has supported layer-4 forwarding (load balancing) through the added stream module.

How the multi-node setup works:

Unlike the single-node setup, the key point of the multi-node setup is that everything points at one common address. When building the single-node cluster we already wrote the VIP (192.168.18.100) into the k8s-cert.sh script. The VIP exposes the apiserver port, and both masters listen for apiserver requests from the node components. When a new node joins, it does not contact a master directly; it sends its apiserver request to the VIP, which forwards it to one of the masters. That master handles the request and issues a certificate to the node.
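In configuration terms the flow described above boils down to two pieces, both of which are applied in the steps below:

# On every node, the kubeconfig files point at the VIP instead of a single master:
#     server: https://192.168.18.100:6443
# On lb1/lb2, an nginx stream block listens on 6443 and forwards to both masters:
#     upstream k8s-apiserver { server 192.168.18.128:6443; server 192.168.18.132:6443; }
#     server { listen 6443; proxy_pass k8s-apiserver; }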

Step 1: upload keepalived.conf and nginx.sh to the root home directory on both lb1 and lb2
`lb1`
[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面
`lb2`
[root@lb2 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面
Step 2: operations on lb1 (192.168.18.147)
[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Reload the yum repository`
[root@lb1 ~]# yum list
`Install nginx`
[root@lb1 ~]# yum install nginx -y
[root@lb1 ~]# vim /etc/nginx/nginx.conf
#Insert the following below line 12:
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;     #master1's IP address
        server 192.168.18.132:6443;     #master2's IP address
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Check the syntax`
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 <h1>Welcome to master nginx!</h1>
#Add "master" on line 14 to tell the two load balancers apart
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Start the service`
[root@lb1 html]# systemctl start nginx
Browser check: open 192.168.18.147 and you should see the master nginx welcome page.

Deploy the keepalived service
[root@lb1 html]# yum install keepalived -y
`Adjust the configuration file`
[root@lb1 html]# cd ~
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes
#Overwrite the stock configuration file with the keepalived.conf uploaded earlier
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf
18     script "/etc/nginx/check_nginx.sh"       #Line 18: change the directory to /etc/nginx/; the script itself is written below
23     interface ens33      #Change eth0 to ens33; check the interface name with ifconfig
24     virtual_router_id 51     #VRRP router ID; must be unique per instance
25     priority 100             #Priority; the backup server is set to 90
31     virtual_ipaddress {
32         192.168.18.100/24    #Change the VIP to the address chosen earlier, 192.168.18.100
#Delete everything from line 38 on
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Write the check script`
[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #Count the nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#If the count is 0, stop the keepalived service
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh       #The script is now executable (shown in green)
[root@lb1 ~]# systemctl start keepalived
[root@lb1 ~]# ip a
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1370sec preferred_lft 1370sec
    inet 192.168.18.100/24 scope global secondary ens33       #The floating address is currently on lb1
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
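For reference, after the edits above the VRRP part of lb1's keepalived.conf should look roughly like the sketch below. It is reconstructed from the line numbers in this step; the vrrp_script/track_script wiring and the authentication values are assumptions based on the stock template, not taken from the original post:

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"   # stops keepalived when nginx is gone
}

vrrp_instance VI_1 {
    state MASTER             # lb1 is the VRRP master
    interface ens33          # interface that carries the VIP
    virtual_router_id 51     # must match on lb1 and lb2
    priority 100             # higher than lb2's 90
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111       # assumed value from the default template
    }
    virtual_ipaddress {
        192.168.18.100/24    # the floating VIP
    }
    track_script {
        check_nginx
    }
}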
Step 3: operations on lb2 (192.168.18.133)
[root@lb2 ~]# systemctl stop firewalld.service
[root@lb2 ~]# setenforce 0
[root@lb2 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Reload the yum repository`
[root@lb2 ~]# yum list
`Install nginx`
[root@lb2 ~]# yum install nginx -y
[root@lb2 ~]# vim /etc/nginx/nginx.conf
#Insert the following below line 12:
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';

    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.18.128:6443;     #master1's IP address
        server 192.168.18.132:6443;     #master2's IP address
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Check the syntax`
[root@lb2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb2 ~]# vim /usr/share/nginx/html/index.html
14 <h1>Welcome to backup nginx!</h1>
#Add "backup" on line 14 to tell the two load balancers apart
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Start the service`
[root@lb2 ~]# systemctl start nginx
Browser check: open 192.168.18.133 and you should see the backup nginx welcome page.

Deploy the keepalived service
[root@lb2 ~]# yum install keepalived -y
`Adjust the configuration file`
[root@lb2 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite '/etc/keepalived/keepalived.conf'? yes
#Overwrite the stock configuration file with the keepalived.conf uploaded earlier
[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
18     script "/etc/nginx/check_nginx.sh"       #Line 18: change the directory to /etc/nginx/; the script itself is written below
22     state BACKUP     #Line 22: change the role from MASTER to BACKUP
23     interface ens33  #Change eth0 to ens33
24     virtual_router_id 51     #VRRP router ID; must be unique per instance
25     priority 90      #Priority; 90 for the backup server
31     virtual_ipaddress {
32         192.168.18.100/24    #Change the VIP to the address chosen earlier, 192.168.18.100
#Delete everything from line 38 on
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Write the check script`
[root@lb2 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    #Count the nginx processes
if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
#If the count is 0, stop the keepalived service
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@lb2 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb2 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh       #The script is now executable (shown in green)
[root@lb2 ~]# systemctl start keepalived
[root@lb2 ~]# ip a
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 958sec preferred_lft 958sec
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#192.168.18.100 does not appear here because the address is currently held by lb1 (the master)
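In other words, lb2's keepalived.conf matches the lb1 sketch shown earlier except for two lines:

    state BACKUP     # lb2 waits as the VRRP backup
    priority 90      # lower than lb1's 100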
Step 4: verify that the floating address fails over
`Stop the nginx service on lb1`
[root@lb1 ~]# pkill nginx
[root@lb1 ~]# systemctl status nginx
● nginx.service - nginx - high performance web server
   Loaded: loaded (/usr/lib/systemd/system/nginx.service; disabled; vendor preset: disabled)
   Active: failed (Result: exit-code) since 五 2020-02-07 12:16:39 CST; 1min 40s ago
#nginx is now stopped
`Check whether keepalived was stopped along with it`
[root@lb1 ~]# systemctl status keepalived.service
● keepalived.service - LVS and VRRP High Availability Monitor
   Loaded: loaded (/usr/lib/systemd/system/keepalived.service; disabled; vendor preset: disabled)
   Active: inactive (dead)
#keepalived has been stopped, which means the check_nginx.sh script worked
[root@lb1 ~]# ps -ef |grep nginx |egrep -cv "grep|$$"
0
#The count is 0, so the script's condition for stopping keepalived was met
`Check whether the floating address is still on lb1`
[root@lb1 ~]# ip a
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1771sec preferred_lft 1771sec
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#The floating address 192.168.18.100 is gone from lb1; if the hot standby works, it should have moved to lb2
`Now check lb2 for the floating address`
[root@lb2 ~]# ip a
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.133/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1656sec preferred_lft 1656sec
    inet 192.168.18.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#The floating address 192.168.18.100 is now on lb2, so the hot standby works
Step 5: recovery
`Start nginx and keepalived on lb1 again`
[root@lb1 ~]# systemctl start nginx
[root@lb1 ~]# systemctl start keepalived
`The floating address moves back to lb1`
[root@lb1 ~]# ip a
2: ens33:  mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.18.147/24 brd 192.168.18.255 scope global dynamic ens33
       valid_lft 1051sec preferred_lft 1051sec
    inet 192.168.18.100/24 scope global secondary ens33
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
#Correspondingly, the floating address disappears from lb2
Step 6: from the host machine, use cmd to test whether the floating address is reachable
C:\Users\zhn>ping 192.168.18.100

Pinging 192.168.18.100 with 32 bytes of data:
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64
Reply from 192.168.18.100: bytes=32 time=1ms TTL=64
Reply from 192.168.18.100: bytes=32 time<1ms TTL=64

Ping statistics for 192.168.18.100:
    Packets: Sent = 4, Received = 4, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
    Minimum = 0ms, Maximum = 1ms, Average = 0ms
#The VIP answers, so the virtual IP is reachable
Step 7: browsing to 192.168.18.100 from the host should show the master nginx page we set up earlier, i.e. the page served by lb1.


Step 8: change the node configuration files (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig) so that they all point at the VIP
node1:
[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
5     server: https://192.168.18.100:6443       #Change line 5 to the VIP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
5     server: https://192.168.18.100:6443       #Change line 5 to the VIP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
5     server: https://192.168.18.100:6443       #Change line 5 to the VIP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
`After the edits, verify`
[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.18.100:6443
kubelet.kubeconfig:    server: https://192.168.18.100:6443
kube-proxy.kubeconfig:    server: https://192.168.18.100:6443
[root@node1 cfg]# systemctl restart kubelet.service
[root@node1 cfg]# systemctl restart kube-proxy.service
node2:
[root@node2 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
5     server: https://192.168.18.100:6443       #Change line 5 to the VIP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
5     server: https://192.168.18.100:6443       #Change line 5 to the VIP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
5     server: https://192.168.18.100:6443       #Change line 5 to the VIP address
#When done, press Esc to leave insert mode, then type :wq to save and quit
`After the edits, verify`
[root@node2 ~]# cd /opt/kubernetes/cfg/
[root@node2 cfg]# grep 100 *
bootstrap.kubeconfig:    server: https://192.168.18.100:6443
kubelet.kubeconfig:    server: https://192.168.18.100:6443
kube-proxy.kubeconfig:    server: https://192.168.18.100:6443
[root@node2 cfg]# systemctl restart kubelet.service
[root@node2 cfg]# systemctl restart kube-proxy.service
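The same three edits can be scripted on each node instead of using vim; a minimal sketch, assuming the files currently contain master1's address (192.168.18.128):

# Switch all three kubeconfig files from master1's address to the VIP, then restart the node components
cd /opt/kubernetes/cfg/
sed -i 's#https://192.168.18.128:6443#https://192.168.18.100:6443#' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
grep 'server:' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig   # all three should show the VIP
systemctl restart kubelet.service kube-proxy.service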
Step 9: check nginx's k8s access log on lb1
[root@lb1 ~]# tail /var/log/nginx/k8s-access.log
192.168.18.145 192.168.18.128:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.145 192.168.18.132:6443 - [07/Feb/2020:14:18:54 +0800] 200 1119
192.168.18.148 192.168.18.128:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
192.168.18.148 192.168.18.132:6443 - [07/Feb/2020:14:18:57 +0800] 200 1120
Step 10: operations on master1
`Create a test pod`
[root@master1 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
`Check its status`
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-7hdfj   0/1     ContainerCreating   0          32s
#The status is ContainerCreating, i.e. the container is still being created
[root@master1 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-7hdfj   1/1     Running   0          73s
#The status is Running, i.e. the pod has been created and is running
`Note: a log-access problem`
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-7hdfj)
#The logs cannot be read yet; permission has to be granted first
`Bind the cluster's anonymous user to the cluster-admin role`
[root@master1 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj        #No error this time
`Check the pod's network`
[root@master1 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP            NODE             NOMINATED NODE
nginx-dbddb74b8-7hdfj   1/1     Running   0          20m   172.17.32.2   192.168.18.148   <none>
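Granting cluster-admin to system:anonymous is fine for a lab, but it hands full cluster control to unauthenticated clients. A narrower, hypothetical alternative that only unblocks log access might look like this (the role and binding names are made up for illustration; the resources match the Forbidden error above):

# Hypothetical least-privilege alternative: allow anonymous clients to read logs only
kubectl create clusterrole pod-log-reader --verb=get --resource=pods/log,nodes/proxy
kubectl create clusterrolebinding anonymous-log-reader --clusterrole=pod-log-reader --user=system:anonymous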

The pod can be reached directly from node1, which sits on the same pod network segment:
[root@node1 ~]# curl 172.17.32.2
Welcome to nginx!

If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.

For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.

Thank you for using nginx.

#What you see is the welcome page served by nginx inside the container (HTML tags omitted here)
Accessing the pod generates a log entry, so back on master1 we can view it:
[root@master1 ~]# kubectl logs nginx-dbddb74b8-7hdfj
172.17.32.1 - - [07/Feb/2020:06:52:53 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.29.0" "-"
#The entry shows node1 accessing the pod through the gateway address 172.17.32.1

------ Creating the UI Dashboard ------

Create a dashboard working directory on master1
[root@master1 ~]# cd k8s/
[root@master1 k8s]# mkdir dashboard
[root@master1 k8s]# cd dashboard/
#Upload the dashboard files into this directory

`The dashboard yaml files are now in place`
[root@master1 dashboard]# ls
dashboard-configmap.yaml   dashboard-rbac.yaml    dashboard-service.yaml
dashboard-controller.yaml  dashboard-secret.yaml  k8s-admin.yaml
`Create the dashboard resources; the order matters`
[root@master1 dashboard]# kubectl create -f dashboard-rbac.yaml     #Authorize access to the API
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
[root@master1 dashboard]# kubectl create -f dashboard-secret.yaml   #Secrets for certificates and keys
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
[root@master1 dashboard]# kubectl create -f dashboard-configmap.yaml    #Application configuration
configmap/kubernetes-dashboard-settings created
[root@master1 dashboard]# kubectl create -f dashboard-controller.yaml   #The controller
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
[root@master1 dashboard]# kubectl create -f dashboard-service.yaml      #Expose the dashboard for access
service/kubernetes-dashboard created
`When finished, check the resources created in the kube-system namespace`
[root@master1 dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
kubernetes-dashboard-65f974f565-9qs8j   1/1     Running   0          3m27s
`Check how to access it`
[root@master1 dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-9qs8j   1/1     Running   0          4m21s

NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.169   <none>        443:30001/TCP   4m15s
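Since these five files have to be created in a fixed order, the same step can also be expressed as one short loop; a minimal sketch:

# Create the dashboard resources in the required order
for f in dashboard-rbac.yaml dashboard-secret.yaml dashboard-configmap.yaml \
         dashboard-controller.yaml dashboard-service.yaml; do
    kubectl create -f "$f"
done
kubectl get pods,svc -n kube-system   # wait until the dashboard pod is Running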
Verification: open https://<node IP>:30001 in a browser (30001 is the NodePort shown above) to reach the dashboard.

Workaround: if Google Chrome refuses to open the page
`On master1:`
[root@master1 dashboard]# vim dashboard-cert.sh
cat > dashboard-csr.json <

`Generate the token`
[root@master1 dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
`Note the secret name`
[root@master1 dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-l9z5f        kubernetes.io/service-account-token   3      30s
#dashboard-admin-token-l9z5f is the secret we will read the token from
default-token-8hwtl                kubernetes.io/service-account-token   3      2d3h
kubernetes-dashboard-certs         Opaque                                11     11m
kubernetes-dashboard-key-holder    Opaque                                2      26m
kubernetes-dashboard-token-crqvs   kubernetes.io/service-account-token   3      25m
`View the token`
[root@master1 dashboard]# kubectl describe secret dashboard-admin-token-l9z5f -n kube-system
Name:         dashboard-admin-token-l9z5f
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 115a70a5-4988-11ea-b617-000c2986f9b2

Type:  kubernetes.io/service-account-token

Data
====
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tbDl6NWYiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiMTE1YTcwYTUtNDk4OC0xMWVhLWI2MTctMDAwYzI5ODZmOWIyIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.DdqS8xHxQYUw68NpqR1XIqQRgOFS3nsrfhjPe1pdqbt6PepAf1pOaDYTJ2cGtbA89J4v0go-6ZWc1BiwidMcthVv_LgXD9cD_5RXN_GoYqsEFFFgkzdyG0y4_BSowMCheS9tGCzuo-O-w_U5gPz3LGTwMRPyRbfEVDaS3Dign_b8SASD_56WkHkSGecI42t1Zct5h2Mnsam_qPhpfgMCzwxQ8l8_8XK6t5NK6orSwL9ozAmX5XGR9j4EL06OKy6al5hAHoB1k0srqT_mcj8Lngt7iq6VPuLVVAF7azAuItlL471VR5EMfvSCRrUG2nPiv44vjQPghnRYXMWS71_B5w

ca.crt:     1359 bytes
namespace:  11 bytes
#The entire token value is what needs to be copied as the login token
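If you want just the token value without reading through the describe output, a one-liner along these lines should work (a sketch; the dashboard-admin-token secret name prefix is taken from the output above):

# Print only the token of the dashboard-admin service account
kubectl -n kube-system describe secret \
    $(kubectl -n kube-system get secret | awk '/^dashboard-admin-token/{print $1}') \
    | awk '/^token:/{print $2}'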
Paste the token into the login screen and you get the dashboard UI:

That is the complete process, from a full K8s multi-node deployment all the way to the web UI.
