K8S Multi-Master Deployment, Part 2: Deploying the Load Balancer


Background

All of the operations below assume that the single-master cluster has already been deployed.

On every server the firewall is kept disabled and SELinux is turned off.
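On CentOS 7 that precondition is typically met like this (a minimal sketch to run on every server, assuming firewalld and SELinux are the mechanisms in play):

# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Put SELinux into permissive mode now, and disable it permanently in the config
setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config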

Server role assignment

Role       Address            Installed components
master     192.168.142.220    kube-apiserver, kube-controller-manager, kube-scheduler, etcd
master02   192.168.142.120    kube-apiserver, kube-controller-manager, kube-scheduler
node1      192.168.142.136    kubelet, kube-proxy, docker, flannel, etcd
node2      192.168.142.132    kubelet, kube-proxy, docker, flannel, etcd
nginx1     192.168.142.130    nginx, keepalived
nginx2     192.168.142.140    nginx, keepalived
VIP        192.168.142.20     virtual IP address

1. Deployment on the nginx side

Create the nginx YUM repository

# A typical nginx.org repo definition for CentOS 7; adjust the baseurl to your distribution if needed
[root@lb-ma ~]# cat > /etc/yum.repos.d/nginx.repo << 'EOF'
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
enabled=1
EOF

Install and configure nginx

[root@lb-ma ~]# yum install nginx -y

# Add a stream block for layer-4 (TCP) forwarding
[root@lb-ma ~]# vim /etc/nginx/nginx.conf
### add the following
stream {
    log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.142.220:6443;
        server 192.168.142.120:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}

# Start the service
[root@lb-ma ~]# systemctl start nginx
[root@lb-ma ~]# systemctl enable nginx
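Before going further it is worth confirming that the stream proxy really listens on 6443; a quick check (assuming the stock nginx package paths):

[root@lb-ma ~]# nginx -t                     # validate the configuration syntax
[root@lb-ma ~]# ss -lntp | grep 6443         # nginx should now be listening on port 6443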

Install the keepalived service

[root@lb-ma ~]# yum -y install keepalived

## Edit the keepalived configuration file
[root@lb-ma ~]# vim /etc/keepalived/keepalived.conf
### delete the original contents and rebuild the file as follows
! Configuration File for keepalived

global_defs {
   # addresses that receive notification mail
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # sender address for notification mail
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/usr/local/nginx/sbin/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER              # set to BACKUP on the standby server
    interface ens33           # network interface to monitor
    virtual_router_id 51      # VRRP router ID; unique per instance
    priority 100              # priority; set to 90 on the standby server
    advert_int 1              # VRRP advertisement interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.142.20/24     # the VIP
    }
    track_script {
        check_nginx
    }
}

Create the monitoring script

Once nginx goes down, the script automatically stops keepalived so that the VIP fails over to the standby load balancer.

[root@lb-ma ~]# mkdir -p /usr/local/nginx/sbin/
[root@lb-ma ~]# vim /usr/local/nginx/sbin/check_nginx.sh
## write the script by hand
#!/bin/bash
count=$(ps -ef | grep nginx | egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi

[root@lb-ma ~]# chmod +x /usr/local/nginx/sbin/check_nginx.sh

# Start keepalived
[root@lb-ma ~]# systemctl start keepalived
[root@lb-ma ~]# systemctl enable keepalived
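With both load-balancer hosts configured, a quick failover test shows the VIP moving; the sketch below assumes nginx1 holds the MASTER role and uses lb-bu purely as a stand-in prompt for the backup host:

# On nginx1 (MASTER): the VIP should be bound to ens33
[root@lb-ma ~]# ip addr show ens33 | grep 192.168.142.20

# Simulate a failure: stopping nginx makes check_nginx.sh stop keepalived,
# and the VIP should appear on nginx2 (BACKUP) within a few seconds
[root@lb-ma ~]# systemctl stop nginx
[root@lb-bu ~]# ip addr show ens33 | grep 192.168.142.20

# Bring nginx1 back afterwards
[root@lb-ma ~]# systemctl start nginx
[root@lb-ma ~]# systemctl start keepalived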

At this point the load balancer is configured, but it is not actually doing anything yet: the kubeconfig files on the node side, which tell kubelet and kube-proxy where the apiserver is, still contain the old address instead of the VIP, so the nodes never go through it.

2. Changes on the node side

Change the server address in the kubeconfig files

[root@node1 ~]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
[root@node1 ~]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
## in all three files, change the server line to point at the VIP
    server: https://192.168.142.20:6443
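Editing the three files by hand works, but a one-liner does the same job; this sketch assumes they currently point at the original master, 192.168.142.220, and should be repeated on node2:

[root@node1 ~]# sed -i 's#https://192.168.142.220:6443#https://192.168.142.20:6443#' \
    /opt/kubernetes/cfg/bootstrap.kubeconfig \
    /opt/kubernetes/cfg/kube-proxy.kubeconfig \
    /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node1 ~]# grep "server:" /opt/kubernetes/cfg/*.kubeconfig    # all three should now show the VIP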

Restart the kubelet and kube-proxy services

[root@node1 ~]# systemctl restart kubelet
[root@node1 ~]# systemctl restart kube-proxy
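To verify that node traffic now flows through the load balancer, watch the stream access log defined earlier on the nginx host and check node status on the master:

# On the load balancer: new entries should show node IPs being proxied to both apiservers
[root@lb-ma ~]# tail /var/log/nginx/k8s-access.log

# On the master: both nodes should still report Ready
[root@master ~]# kubectl get nodes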

That completes the deployment: nginx provides the load balancing and keepalived provides the active/standby failover.


DEMO: create a pod to verify the setup

Create a test pod on the master

## Create the pod
[root@master ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created

# Check the pod that was created
[root@master ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-7tdvp   1/1     Running   0          21s
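The wide output additionally shows which node the pod landed on and its pod IP, which is handy for the log test below:

[root@master ~]# kubectl get pods -o wide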

At this point the pod can only be inspected in a basic way; as soon as you try to view its logs, the request is rejected. The problem and its fix are shown below.

# Note the log permission problem
[root@master ~]# kubectl logs nginx-dbddb74b8-7tdvp
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-7tdvp)

### Solution: bind the anonymous user to the cluster-admin role
[root@master ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
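After creating the binding, the same log command should succeed. Note that giving cluster-admin to system:anonymous is a shortcut suitable for a lab environment only; it is far too broad for production.

[root@master ~]# kubectl logs nginx-dbddb74b8-7tdvp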