Kubernetes Series, Part 2: Kubernetes Architecture Design and Deployment
1. Architecture and Environment Design
1.1. Architecture Design
- Deploy HAProxy as the Endpoint access entry point for the Kubernetes API
- Use Keepalived to expose the Endpoint address as a Virtual IP, deployed on multiple nodes for redundancy
- Use kubeadm to deploy a highly available Kubernetes cluster, with the Endpoint IP set to the Virtual IP created by Keepalived
- Use Prometheus as the cluster monitoring system, Grafana for dashboard display, and Alertmanager for alerting
- Build the CI/CD pipeline with Jenkins + GitLab + Harbor
- Use a dedicated domain for communication inside the Kubernetes cluster, with an internal DNS server to resolve it
1.2. Environment Design
Hostname | IP | Role |
---|---|---|
kube-master-01.sk8s.io | 192.168.0.201 | k8s master, haproxy + keepalived (virtual IP: 192.168.0.254) |
kube-master-02.sk8s.io | 192.168.0.202 | k8s master, haproxy + keepalived (virtual IP: 192.168.0.254) |
kube-master-03.sk8s.io | 192.168.0.203 | k8s master, DNS, Storage, GitLab, Harbor |
kube-node-01.sk8s.io | 192.168.0.204 | node |
kube-node-02.sk8s.io | 192.168.0.205 | node |
2. Operating System Initialization
2.1. Disable SELinux
[root@localhost ~]# setenforce 0
[root@localhost ~]# sed -i 's#^SELINUX=.*#SELINUX=disabled#' /etc/sysconfig/selinux
[root@localhost ~]# sed -i 's#^SELINUX=.*#SELINUX=disabled#' /etc/selinux/config
2.2. Disable Unneeded Services
[root@localhost ~]# systemctl disable firewalld postfix auditd kdump NetworkManager
2.3. Upgrade the System Kernel
[root@master ~]# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
[root@master ~]# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm
[root@master ~]# yum -y --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt.x86_64 kernel-lt-devel.x86_64 kernel-lt-headers.x86_64
[root@master ~]# yum -y remove kernel-tools-libs.x86_64 kernel-tools.x86_64
[root@master ~]# yum -y --disablerepo=\* --enablerepo=elrepo-kernel install kernel-lt-tools.x86_64
[root@master ~]# cat > /etc/default/grub << 'EOF'
GRUB_TIMEOUT=5
GRUB_DISTRIBUTOR="$(sed 's, release .*$,,g' /etc/system-release)"
GRUB_DEFAULT=0
GRUB_DISABLE_SUBMENU=true
GRUB_TERMINAL_OUTPUT="console"
GRUB_CMDLINE_LINUX="crashkernel=auto console=ttyS0 console=tty0 panic=5"
GRUB_DISABLE_RECOVERY="true"
GRUB_TERMINAL="serial console"
GRUB_TERMINAL_OUTPUT="serial console"
GRUB_SERIAL_COMMAND="serial --speed=9600 --unit=0 --word=8 --parity=no --stop=1"
EOF
[root@master ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
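Before rebooting it can help to confirm that the elrepo kernel is actually the boot entry that GRUB_DEFAULT=0 points at, and to verify the running kernel afterwards. A minimal check, using only standard grub2 tooling on CentOS 7 (not part of the original steps), might look like this:
# List the boot entries in order; entry 0 is the one booted with GRUB_DEFAULT=0
[root@master ~]# awk -F\' '/^menuentry/ {print i++ " : " $2}' /boot/grub2/grub.cfg
# Optionally pin the first (newest) entry explicitly
[root@master ~]# grub2-set-default 0
[root@master ~]# reboot
# After the reboot, confirm the kernel-lt version is running
[root@master ~]# uname -r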
2.4. Standardize NIC Names
[root@localhost ~]# grub_config='GRUB_CMDLINE_LINUX="crashkernel=auto ipv6.disable=1 net.ifnames=0 rd.lvm.lv=centos/root rd.lvm.lv=centos/swap rhgb quiet"'
[root@localhost ~]# sed -i "s#GRUB_CMDLINE_LINUX.*#${grub_config}#" /etc/default/grub
[root@localhost ~]# grub2-mkconfig -o /boot/grub2/grub.cfg
# ATTR{address} is the NIC's MAC address, NAME is the interface name to assign
[root@localhost ~]# cat /etc/udev/rules.d/70-persistent-net.rules
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ec:xx:yy:cc:b6:xx", NAME="eth0"
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="ec:xx:yy:cc:b6:xx", NAME="eth2"
# Before rebooting, update the matching NIC configuration files under /etc/sysconfig/network-scripts/
[root@localhost ~]# reboot
2.5. Other Settings
[root@localhost ~]# yum -y install vim net-tools lrzsz lbzip2 bzip2 ntpdate curl wget psmisc
[root@localhost ~]# timedatectl set-timezone Asia/Shanghai
[root@localhost ~]# echo "nameserver 223.5.5.5" > /etc/resolv.conf
[root@localhost ~]# echo "nameserver 114.114.114.114" >> /etc/resolv.conf
[root@localhost ~]# echo 'LANG="en_US.UTF-8"' > /etc/locale.conf
[root@localhost ~]# echo 'export LANG="en_US.UTF-8"' >> /etc/profile.d/custom.sh
[root@localhost ~]# cat >> /etc/security/limits.conf << EOF
...
EOF
[root@localhost ~]# cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv4.tcp_sack = 0
EOF
[root@localhost ~]# sysctl -p /etc/sysctl.d/k8s.conf
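The net.bridge.bridge-nf-call-* keys only exist once the br_netfilter module is loaded, so on a freshly rebooted machine sysctl may complain about them. A small sketch of loading the module persistently and spot-checking the values (the module name and sysctl keys are standard; the rest is just verification and not part of the original steps):
# Load br_netfilter now and have systemd-modules-load load it on every boot
[root@localhost ~]# modprobe br_netfilter
[root@localhost ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
# Re-apply the file and confirm the values took effect
[root@localhost ~]# sysctl -p /etc/sysctl.d/k8s.conf
[root@localhost ~]# sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward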
2.6. Configure hosts.allow and hosts.deny
[root@localhost ~]# echo "sshd:192.168.0." > /etc/hosts.allow[root@localhost ~]# echo "sshd:ALL" > /etc/hosts.deny
2.7. SSH Configuration
# Create an admin user and generate an SSH key (download the private key and do not leave it on the server; copy the public key into ~/.ssh/authorized_keys on the other servers)
[root@localhost ~]# useradd huyuan
[root@localhost ~]# echo "sycx123" | passwd --stdin huyuan
[root@localhost ~]# su - huyuan
[huyuan@localhost ~]$ ssh-keygen -b 4096
[huyuan@localhost ~]$ mv ~/.ssh/id_rsa.pub ~/.ssh/authorized_keys
# Switch back to the root user
[huyuan@localhost ~]$ exit
# Disable reverse DNS lookups to speed up SSH connections
[root@localhost ~]# sed -i 's/^#UseDNS.*/UseDNS no/' /etc/ssh/sshd_config
# Disable password authentication
[root@localhost ~]# sed -i 's/^PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
# Disable root login
[root@localhost ~]# sed -i 's/#PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
# Only allow huyuan to log in; separate multiple users with spaces
[root@localhost ~]# echo "AllowUsers huyuan" >> /etc/ssh/sshd_config
# Restart the service
[root@localhost ~]# systemctl restart sshd
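A typo in sshd_config combined with PasswordAuthentication no can lock you out of the machine, so it is worth keeping the current session open and validating the configuration before the restart above; a quick check might be:
# sshd -t syntax-checks /etc/ssh/sshd_config and prints nothing when it is valid
[root@localhost ~]# sshd -t
# Confirm the effective values that will apply after the restart
[root@localhost ~]# sshd -T | grep -Ei 'usedns|passwordauthentication|permitrootlogin|allowusers'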
2.8. Set a Unified root Password
[root@localhost ~]# echo "xxxxx" | passwd --stdin root
2.9. Set Hostnames
[root@localhost ~]# hostnamectl set-hostname kube-master-01.sk8s.io
[root@localhost ~]# echo "192.168.0.201 kube-master-01.sk8s.io" >> /etc/hosts
[root@localhost ~]# echo "192.168.0.202 kube-master-02.sk8s.io" >> /etc/hosts
[root@localhost ~]# echo "192.168.0.203 kube-master-03.sk8s.io" >> /etc/hosts
[root@localhost ~]# echo "192.168.0.204 kube-node-01.sk8s.io" >> /etc/hosts
[root@localhost ~]# echo "192.168.0.205 kube-node-02.sk8s.io" >> /etc/hosts
3. Initialize the Kubernetes Cluster
3.1. Install and Configure Docker (All Nodes)
[root@kube-master-01 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@kube-master-01 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@kube-master-01 ~]# yum -y install docker-ce-18.09.6 docker-ce-cli-18.09.6
[root@kube-master-01 ~]# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://c7i79lkw.mirror.aliyuncs.com"],
  "insecure-registries": ["122.228.208.72:9000"],
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "graph": "/opt/docker",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
[root@kube-master-01 ~]# systemctl enable docker
[root@kube-master-01 ~]# systemctl start docker
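kubeadm expects the kubelet and Docker to agree on the cgroup driver, so after starting Docker it is worth confirming that daemon.json above actually took effect; for instance:
# Verify the cgroup driver and storage driver picked up from daemon.json
[root@kube-master-01 ~]# docker info | grep -Ei 'cgroup driver|storage driver'
# Quick smoke test of the registry mirror
[root@kube-master-01 ~]# docker pull hello-world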
3.2. Configure haproxy as the API Server Proxy
# Install and configure on both kube-master-01 and kube-master-02
[root@kube-master-01 ~]# yum -y install haproxy
[root@kube-master-01 ~]# cat > /etc/haproxy/haproxy.cfg << EOF
global
    log 127.0.0.1 local2 notice
    chroot /var/lib/haproxy
    stats socket /var/run/haproxy.sock mode 660 level admin
    stats timeout 30s
    user haproxy
    group haproxy
    daemon
    nbproc 1

defaults
    log global
    timeout connect 5000
    timeout client 10m
    timeout server 10m

listen admin_stats
    bind 0.0.0.0:8000
    mode http
    stats refresh 30s
    stats uri /status
    stats realm welcome login\ Haproxy
    stats auth admin:tuitui99
    stats hide-version
    stats admin if TRUE

listen kube-master
    bind 0.0.0.0:8443
    mode tcp
    option tcplog
    balance source
    server 192.168.0.201 192.168.0.201:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.0.202 192.168.0.202:6443 check inter 2000 fall 2 rise 2 weight 1
    server 192.168.0.203 192.168.0.203:6443 check inter 2000 fall 2 rise 2 weight 1
EOF
[root@kube-master-01 ~]# systemctl enable haproxy
[root@kube-master-01 ~]# systemctl start haproxy
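Before relying on the proxy you can syntax-check the configuration and confirm both listeners are up; a quick check (the stats credentials come from the configuration above, the rest is standard haproxy/iproute tooling):
# haproxy prints "Configuration file is valid" when the file parses cleanly
[root@kube-master-01 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg
# Confirm the API frontend (8443) and the stats page (8000) are listening
[root@kube-master-01 ~]# ss -lntp | grep -E ':8443|:8000'
# The stats page should return the backend status table
[root@kube-master-01 ~]# curl -su admin:tuitui99 http://127.0.0.1:8000/status | grep -c 192.168.0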
3.3. Configure keepalived for haproxy Master/Backup Failover
# Install and configure on both kube-master-01 and kube-master-02
[root@kube-master-01 ~]# yum -y install keepalived
[root@kube-master-01 ~]# cp /etc/keepalived/keepalived.conf{,.bak}
[root@kube-master-01 ~]# cat > /etc/keepalived/keepalived.conf << EOF
global_defs {
    # Uniquely identifies the 192.168.0.201 node
    router_id master-192.168.0.201
    # User that runs the notify_master / notify_backup / notify_fault scripts
    script_user root
}

vrrp_script check-haproxy {
    # Check whether the haproxy process is still alive
    script "/bin/killall -0 haproxy &>/dev/null"
    interval 5
    weight -30
    user root
}

vrrp_instance k8s {
    state MASTER
    # Priority: 120 on the master, 100 on the backup
    priority 120
    dont_track_primary
    interface eth0
    virtual_router_id 80
    advert_int 3
    track_script {
        check-haproxy
    }
    authentication {
        auth_type PASS
        auth_pass tuitui99
    }
    virtual_ipaddress {
        # Virtual IP
        192.168.0.254
    }
    # Notification scripts, see https://blog.51cto.com/hongchen99/2298896
    notify_master "/bin/python /etc/keepalived/notify_keepalived.py master"
    notify_backup "/bin/python /etc/keepalived/notify_keepalived.py backup"
    notify_fault "/bin/python /etc/keepalived/notify_keepalived.py fault"
}
EOF
[root@kube-master-01 ~]# chmod +x /etc/keepalived/notify_keepalived.py
[root@kube-master-01 ~]# systemctl enable keepalived
[root@kube-master-01 ~]# systemctl start keepalived
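Once keepalived is running on both masters, the virtual IP should be bound on exactly one of them and move when haproxy dies; a simple way to verify this (VIP and interface taken from the configuration above):
# The VIP should appear on eth0 of the current MASTER only
[root@kube-master-01 ~]# ip addr show eth0 | grep 192.168.0.254
# Simulate a failure: the weight -30 from check-haproxy should let the backup take over
[root@kube-master-01 ~]# systemctl stop haproxy
[root@kube-master-02 ~]# ip addr show eth0 | grep 192.168.0.254
# Restore the service afterwards
[root@kube-master-01 ~]# systemctl start haproxy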
3.4. Configure haproxy and keepalived Logging
# Configure haproxy logging
[root@kube-master-01 ~]# echo "local2.* /var/log/haproxy.log" >> /etc/rsyslog.conf
# Configure keepalived logging
[root@kube-master-01 ~]# cp /etc/sysconfig/keepalived{,.bak}
[root@kube-master-01 ~]# echo 'KEEPALIVED_OPTIONS="-D -d -S 0"' > /etc/sysconfig/keepalived
[root@kube-master-01 ~]# echo "local0.* /var/log/keepalived.log" >> /etc/rsyslog.conf
# haproxy sends its logs over UDP, so enable rsyslog's UDP input by uncommenting the following two lines in /etc/rsyslog.conf
[root@kube-master-01 ~]# cat /etc/rsyslog.conf
$ModLoad imudp
$UDPServerRun 514
[root@kube-master-01 ~]# systemctl restart rsyslog
[root@kube-master-01 ~]# systemctl restart haproxy
[root@kube-master-01 ~]# systemctl restart keepalived
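After the restarts both daemons should start writing to their own files; if nothing shows up, the UDP listener lines above are the usual culprit. A quick check:
[root@kube-master-01 ~]# tail -n 5 /var/log/haproxy.log /var/log/keepalived.log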
3.5. Install kubelet, kubeadm, and kubectl
[root@kube-master-01 ~]# cat > /etc/yum.repos.d/kubernetes.repo << EOF
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@kube-master-01 ~]# yum install -y kubelet-1.14.0 kubeadm-1.14.0 kubectl-1.14.0
[root@kube-master-01 ~]# systemctl enable kubelet
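A quick sanity check that the pinned 1.14.0 packages were installed rather than a newer version from the repo. Note that kubeadm's preflight checks also require swap to be off, which the earlier initialization steps do not cover, so that is sketched here as well (assuming the hosts were built with a swap partition):
[root@kube-master-01 ~]# kubeadm version -o short
[root@kube-master-01 ~]# rpm -q kubelet kubeadm kubectl
# kubeadm's preflight checks fail while swap is active
[root@kube-master-01 ~]# swapoff -a
[root@kube-master-01 ~]# sed -i '/ swap / s/^/#/' /etc/fstab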
3.6. Initialize the Kubernetes Cluster
# Load the ipvs kernel modules
[root@kube-master-01 ~]# cat > /etc/sysconfig/modules/ipvs.modules << EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@kube-master-01 ~]# chmod +x /etc/sysconfig/modules/ipvs.modules
[root@kube-master-01 ~]# sh /etc/sysconfig/modules/ipvs.modules
[root@kube-master-01 ~]# lsmod | grep ip_vs
# Install ipvsadm to manage ipvs
[root@kube-master-01 ~]# yum -y install ipvsadm
# Write the kubeadm init configuration
[root@kube-master-01 ~]# cat > kubeadm-init.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.14.0
# 192.168.0.254 is the keepalived virtual IP; 8443 is the port haproxy listens on
controlPlaneEndpoint: "192.168.0.254:8443"
# Registry to pull the control-plane images from
imageRepository: registry.cn-hangzhou.aliyuncs.com/google_containers
networking:
  podSubnet: 10.244.0.0/16
apiServer:
  certSANs:
  - "kube-master-01.sk8s.io"
  - "kube-master-02.sk8s.io"
  - "kube-master-03.sk8s.io"
  - "192.168.0.201"
  - "192.168.0.202"
  - "192.168.0.203"
  - "192.168.0.254"
  - "127.0.0.1"
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"
EOF
# Initialize the Kubernetes cluster; --experimental-upload-certs uploads the control-plane certificates so the other masters can fetch them when joining
[root@kube-master-01 ~]# kubeadm init --config=kubeadm-init.yaml --experimental-upload-certs
# Create a Kubernetes cluster admin user
[root@kube-master-01 ~]# groupadd -g 5000 kubelet
[root@kube-master-01 ~]# useradd -c "kubernetes-admin-user" -G docker -u 5000 -g 5000 kubelet
[root@kube-master-01 ~]# echo "kubelet" | passwd --stdin kubelet
# Copy the cluster kubeconfig to the admin user
[root@kube-master-01 ~]# mkdir /home/kubelet/.kube
[root@kube-master-01 ~]# cp -i /etc/kubernetes/admin.conf /home/kubelet/.kube/config
[root@kube-master-01 ~]# chown -R kubelet:kubelet /home/kubelet/.kube
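If the initialization succeeds, kubeadm prints the two join commands used in section 3.8. Before moving on it is worth checking, as the kubelet user, that the control plane answers through the VIP and that kube-proxy really came up in ipvs mode; nodes will report NotReady until a pod network add-on matching the 10.244.0.0/16 podSubnet (flannel, for example) has been applied. A minimal check might be:
[kubelet@kube-master-01 ~]$ kubectl get nodes
[kubelet@kube-master-01 ~]$ kubectl get pods -n kube-system -o wide
# The cluster endpoint should be the VIP, not a single master
[kubelet@kube-master-01 ~]$ kubectl cluster-info
# The ipvs virtual servers created by kube-proxy
[root@kube-master-01 ~]# ipvsadm -Ln | head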
3.7. Update CoreDNS
[root@kube-master-01 ~]# su - kubelet
[kubelet@kube-master-01 ~]$ cat > coredns.yaml << EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: kube-dns
  name: coredns
  namespace: kube-system
spec:
  replicas: 3
  selector:
    matchLabels:
      k8s-app: kube-dns
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: k8s-app
                  operator: In
                  values:
                  - kube-dns
              topologyKey: kubernetes.io/hostname
      containers:
      - args:
        - -conf
        - /etc/coredns/Corefile
        image: coredns/coredns:1.5.0
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 5
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 5
        name: coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /etc/coredns
          name: config-volume
          readOnly: true
      dnsPolicy: Default
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      serviceAccount: coredns
      serviceAccountName: coredns
      terminationGracePeriodSeconds: 30
      tolerations:
      - key: CriticalAddonsOnly
        operator: Exists
      - effect: NoSchedule
        key: node-role.kubernetes.io/master
      - effect: NoSchedule
        key: node.kubernetes.io/not-ready
      volumes:
      - configMap:
          defaultMode: 420
          items:
          - key: Corefile
            path: Corefile
          name: coredns
        name: config-volume
EOF
[kubelet@kube-master-01 ~]$ kubectl apply -f coredns.yaml
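After applying the manifest, the three replicas should spread across the masters thanks to the pod anti-affinity above, and cluster names should resolve. A quick functional test, assuming the busybox image is reachable from the nodes:
[kubelet@kube-master-01 ~]$ kubectl -n kube-system get pods -l k8s-app=kube-dns -o wide
# Resolve the kubernetes service from a throwaway pod
[kubelet@kube-master-01 ~]$ kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup kubernetes.default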
3.8. Join the Remaining Nodes to the Cluster
# Join the remaining masters to the cluster (run as root on each new master)
[root@kube-master-02 ~]# kubeadm join 192.168.0.254:8443 --token h5n7uy.5qibssxu27vveko5 \
    --discovery-token-ca-cert-hash sha256:a27738a4457d57ee611dd1c0281aeaabd32bc834797fe307980b95755b052e41 \
    --experimental-control-plane --certificate-key eb37e5810fe300a42c5b610117ad57acf682a92da928cf94435a135aa338bc12
# Join the worker nodes to the cluster (run as root on each node)
[root@kube-node-01 ~]# kubeadm join 192.168.0.254:8443 --token h5n7uy.5qibssxu27vveko5 \
    --discovery-token-ca-cert-hash sha256:a27738a4457d58ee611dd1c0281aeaabd34bc834797fe307980b95755b052e41
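The bootstrap token and the uploaded certificates expire (24 hours and 2 hours respectively by default), so on a real cluster the join command usually has to be regenerated rather than copied from the original init output; afterwards the new members should show up from any master:
# Regenerate a join command if the original token has expired
[root@kube-master-01 ~]# kubeadm token create --print-join-command
# Confirm all five machines registered
[kubelet@kube-master-01 ~]$ kubectl get nodes -o wide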