
k3s High Availability Deployment


Notes:

# The main obstacle to k3s high availability is keeping the certificates consistent across all master nodes. The solution is to deploy one master node successfully first, then copy the certificates it generates (including the token) to the other master nodes, and to use etcd as the datastore.

Environment:

# OS: CentOS 8 1905
# k3s version: v0.9.1
# etcd version: v3.4.1
# etcd server IPs: 192.168.30.50, 192.168.30.51, 192.168.30.52
# Install directory: /apps/<service>
# Server (master) node IPs: 192.168.30.50, 192.168.30.51, 192.168.30.52; agent node: 192.168.30.53; VIP: 192.168.30.59
# k3s cluster domain: cluster.local
# k3s API domain: api.k3s.tyong.com
# k3s cluster-cidr: 10.48.0.0/12
# k3s service-cidr: 10.64.0.0/16
# k3s cluster-dns: 10.64.0.2
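The etcd member certificates later in this guide reference the hostnames k3s-001, k3s-002 and k3s-003, so every node should be able to resolve them. A minimal sketch of that name resolution, assuming hosts-file entries rather than DNS; the mapping of api.k3s.tyong.com to the VIP is also an assumption:

# run on every node (assumed mapping; adjust if you already have DNS for these names)
cat >> /etc/hosts << EOF
192.168.30.50 k3s-001
192.168.30.51 k3s-002
192.168.30.52 k3s-003
192.168.30.59 api.k3s.tyong.com
EOF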

Preparing the binaries:

# On all nodes
# Download the etcd binaries
wget https://github.com/etcd-io/etcd/releases/download/v3.4.1/etcd-v3.4.1-linux-amd64.tar.gz
# Extract the archive
tar -xvf etcd-v3.4.1-linux-amd64.tar.gz
# Create the etcd runtime directories
mkdir -p /apps/etcd/{bin,conf,ssl,data}
# Move the binaries into the runtime directory
cd etcd-v3.4.1-linux-amd64
mv etcd* /apps/etcd/bin
# Download k3s
wget https://github.com/rancher/k3s/releases/download/v0.9.1/k3s
# Make it executable
chmod +x k3s
# Move k3s into the PATH
mv k3s /usr/local/bin/
# Create symlinks for convenience
cd /usr/local/bin/
ln -sf k3s kubectl
ln -sf k3s crictl
ln -sf k3s ctr
# Optional habit: alias docker to crictl
vi ~/.bashrc
alias docker='k3s crictl'
. ~/.bashrc
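A quick sanity check that the binaries landed where the later steps expect them; the version strings in the comments are simply what these releases should report:

/apps/etcd/bin/etcd --version        # expect etcd Version: 3.4.1
/apps/etcd/bin/etcdctl version       # expect etcdctl version: 3.4.1
k3s --version                        # expect v0.9.1
kubectl --help > /dev/null && echo "kubectl symlink OK"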

Basic system tuning

# Configure systemd defaults in system.conf
cat >> /etc/systemd/system.conf << EOF
DefaultLimitMEMLOCK=infinity
DefaultLimitCORE=infinity
DefaultCPUAccounting=yes
DefaultMemoryAccounting=yes
DefaultLimitNOFILE=1024000
DefaultLimitNPROC=1024000
EOF
# Disable the firewall and SELinux
sed -i 's/SELINUX=.*/SELINUX=disabled/g' /etc/selinux/config
systemctl stop firewalld && systemctl disable firewalld
setenforce 0
# Disable swap
swapoff -a && sysctl -w vm.swappiness=0
vi /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0
# Configure kernel parameters in sysctl.conf
true > /etc/sysctl.conf
cat >> /etc/sysctl.conf << EOF
net.ipv4.ip_forward = 1
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
fs.file-max = 1024000
fs.nr_open = 1024000
vm.swappiness = 0
vm.max_map_count = 2048000
vm.overcommit_memory = 1
kernel.sem = 5010 641280 5010 128
kernel.pid_max = 4194303
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_mem = 786432 1697152 1945728
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 2048000
net.core.somaxconn = 65535
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 2048000
net.ipv4.tcp_mem = 94500000 915000000 927000000
net.ipv4.tcp_fin_timeout = 1
net.ipv4.tcp_keepalive_time = 600
net.ipv4.tcp_keepalive_probes = 10
net.ipv4.tcp_keepalive_intvl = 30
net.ipv4.ip_local_port_range = 1024 65535
net.ipv4.neigh.default.gc_stale_time = 120
net.ipv4.conf.default.rp_filter = 0
net.ipv4.conf.all.rp_filter = 0
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.lo.arp_announce = 2
sunrpc.tcp_slot_table_entries = 256
EOF
/sbin/sysctl -p
# Configure limits.conf
cat >> /etc/security/limits.conf << EOF
*           soft   nproc       1024000
*           hard   nproc       1024000
*           soft   nofile      1024000
*           hard   nofile      1024000
*           soft   core        1024000
*           hard   core        1024000
###### big mem ########
#*           hard    memlock    unlimited
#*           soft    memlock    unlimited
EOF
# CentOS 8 no longer ships a 20-nproc.conf file
# Configure a static IP via NetworkManager
vi /etc/sysconfig/network-scripts/ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=static
DEFROUTE=yes
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR="192.168.30.50"
PREFIX="24"
GATEWAY="192.168.30.1"
DNS1="192.168.30.10"
# Apply the configuration
nmcli c reload
# CentOS 8 has dropped the legacy network service, so networking is managed through NetworkManager; configure the other nodes the same way with their own addresses
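A few hedged checks that the tuning took effect; note that the systemd DefaultLimit* values only apply to services started after a `systemctl daemon-reexec` or a reboot, and the SELinux change in the config file only fully applies after a reboot:

sysctl net.ipv4.ip_forward vm.swappiness    # expect 1 and 0
getenforce                                  # Permissive now, Disabled after reboot
free -h | grep -i swap                      # the Swap line should show 0B
ulimit -n                                   # a fresh login shell should report 1024000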

Deploying etcd

# Run on node: 192.168.30.50
# Set up the Go environment (this can also be done on a separate workstation)
# Install and configure CFSSL
yum install go
vi ~/.bash_profile
GOBIN=/root/go/bin/
PATH=$PATH:$GOBIN:$HOME/bin
export PATH
go get github.com/cloudflare/cfssl/cmd/cfssl
go get github.com/cloudflare/cfssl/cmd/cfssljson
# Create the etcd CA configuration
mkdir -p /apps/work/k8s/cfssl/ && \
cat << EOF | tee /apps/work/k8s/cfssl/ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}
EOF
mkdir -p /apps/work/k8s/cfssl/etcd
cat << EOF | tee /apps/work/k8s/cfssl/etcd/etcd-ca-csr.json
{
  "CN": "etcd",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "cluster",
      "OU": "ETCD"
    }
  ]
}
EOF
# Generate the CA certificate
cfssl gencert -initca \
    /apps/work/k8s/cfssl/etcd/etcd-ca-csr.json | \
    cfssljson -bare ./etcd-ca
# Create the etcd server certificate configuration
export ETCD_SERVER_IPS=" \
    \"192.168.30.50\", \
    \"192.168.30.51\", \
    \"192.168.30.52\" \
" && \
export ETCD_SERVER_HOSTNAMES=" \
    \"k3s-001\", \
    \"k3s-002\", \
    \"k3s-003\" \
" && \
cat << EOF | tee /apps/work/k8s/cfssl/etcd/etcd_server.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    ${ETCD_SERVER_IPS},
    ${ETCD_SERVER_HOSTNAMES}
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "cluster",
      "OU": "ETCD"
    }
  ]
}
EOF
# Generate the etcd server certificate
cfssl gencert \
    -ca=./etcd-ca.pem \
    -ca-key=./etcd-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /apps/work/k8s/cfssl/etcd/etcd_server.json | \
    cfssljson -bare ./etcd_server
# Create the member certificate configuration for node k3s-001
export ETCD_MEMBER_1_IP=" \
    \"192.168.30.50\" \
" && \
export ETCD_MEMBER_1_HOSTNAMES="k3s-001" && \
cat << EOF | tee /apps/work/k8s/cfssl/etcd/${ETCD_MEMBER_1_HOSTNAMES}.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    ${ETCD_MEMBER_1_IP},
    "${ETCD_MEMBER_1_HOSTNAMES}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "cluster",
      "OU": "ETCD"
    }
  ]
}
EOF
# Generate the k3s-001 member certificate
cfssl gencert \
    -ca=./etcd-ca.pem \
    -ca-key=./etcd-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /apps/work/k8s/cfssl/etcd/${ETCD_MEMBER_1_HOSTNAMES}.json | \
    cfssljson -bare ./etcd_member_${ETCD_MEMBER_1_HOSTNAMES}
# Create the member certificate configuration for node k3s-002
export ETCD_MEMBER_2_IP=" \
    \"192.168.30.51\" \
" && \
export ETCD_MEMBER_2_HOSTNAMES="k3s-002" && \
cat << EOF | tee /apps/work/k8s/cfssl/etcd/${ETCD_MEMBER_2_HOSTNAMES}.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    ${ETCD_MEMBER_2_IP},
    "${ETCD_MEMBER_2_HOSTNAMES}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "cluster",
      "OU": "ETCD"
    }
  ]
}
EOF
# Generate the k3s-002 member certificate
cfssl gencert \
    -ca=./etcd-ca.pem \
    -ca-key=./etcd-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /apps/work/k8s/cfssl/etcd/${ETCD_MEMBER_2_HOSTNAMES}.json | \
    cfssljson -bare ./etcd_member_${ETCD_MEMBER_2_HOSTNAMES}
# Create the member certificate configuration for node k3s-003
export ETCD_MEMBER_3_IP=" \
    \"192.168.30.52\" \
" && \
export ETCD_MEMBER_3_HOSTNAMES="k3s-003" && \
cat << EOF | tee /apps/work/k8s/cfssl/etcd/${ETCD_MEMBER_3_HOSTNAMES}.json
{
  "CN": "etcd",
  "hosts": [
    "127.0.0.1",
    ${ETCD_MEMBER_3_IP},
    "${ETCD_MEMBER_3_HOSTNAMES}"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "cluster",
      "OU": "ETCD"
    }
  ]
}
EOF
# Generate the k3s-003 member certificate
cfssl gencert \
    -ca=./etcd-ca.pem \
    -ca-key=./etcd-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /apps/work/k8s/cfssl/etcd/${ETCD_MEMBER_3_HOSTNAMES}.json | \
    cfssljson -bare ./etcd_member_${ETCD_MEMBER_3_HOSTNAMES}
# Create the etcd client certificate configuration
cat << EOF | tee /apps/work/k8s/cfssl/etcd/etcd_client.json
{
  "CN": "client",
  "hosts": [""],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "GuangZhou",
      "O": "cluster",
      "OU": "ETCD"
    }
  ]
}
EOF
# Generate the etcd client certificate
cfssl gencert \
    -ca=./etcd-ca.pem \
    -ca-key=./etcd-ca-key.pem \
    -config=/apps/work/k8s/cfssl/ca-config.json \
    -profile=kubernetes \
    /apps/work/k8s/cfssl/etcd/etcd_client.json | \
    cfssljson -bare ./etcd_client
# Configure the etcd startup options on k3s-001
vi /apps/etcd/conf/etcd
ETCD_OPTS="--name=k3s-001 \
           --data-dir=/apps/etcd/data/default.etcd \
           --listen-peer-urls=https://192.168.30.50:2380 \
           --listen-client-urls=https://192.168.30.50:2379,https://127.0.0.1:2379 \
           --advertise-client-urls=https://192.168.30.50:2379 \
           --initial-advertise-peer-urls=https://192.168.30.50:2380 \
           --initial-cluster=k3s-001=https://192.168.30.50:2380,k3s-002=https://192.168.30.51:2380,k3s-003=https://192.168.30.52:2380 \
           --initial-cluster-token=k3s-001=https://192.168.30.50:2380,k3s-002=https://192.168.30.51:2380,k3s-003=https://192.168.30.52:2380 \
           --initial-cluster-state=new \
           --heartbeat-interval=6000 \
           --election-timeout=30000 \
           --snapshot-count=5000 \
           --auto-compaction-retention=1 \
           --max-request-bytes=33554432 \
           --quota-backend-bytes=17179869184 \
           --trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem \
           --cert-file=/apps/etcd/ssl/etcd_server.pem \
           --key-file=/apps/etcd/ssl/etcd_server-key.pem \
           --peer-cert-file=/apps/etcd/ssl/etcd_member_k3s-001.pem \
           --peer-key-file=/apps/etcd/ssl/etcd_member_k3s-001-key.pem \
           --peer-client-cert-auth \
           --peer-trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem"
# Create the etcd systemd unit
vi /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
LimitNOFILE=1024000
LimitNPROC=1024000
LimitCORE=infinity
LimitMEMLOCK=infinity
User=etcd
Group=etcd
EnvironmentFile=-/apps/etcd/conf/etcd
ExecStart=/apps/etcd/bin/etcd $ETCD_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
# Create the etcd user
useradd etcd -s /sbin/nologin -M
# Give the etcd user ownership of /apps/etcd
chown -R etcd:etcd /apps/etcd
# Create the etcd user on node k3s-002
useradd etcd -s /sbin/nologin -M
# Create the etcd user on node k3s-003
useradd etcd -s /sbin/nologin -M
# Distribute the files to the k3s-002 and k3s-003 nodes
scp -r /apps/etcd 192.168.30.51:/apps/
scp -r /apps/etcd 192.168.30.52:/apps/
# Distribute the systemd unit to the k3s-002 and k3s-003 nodes
scp /usr/lib/systemd/system/etcd.service 192.168.30.51:/usr/lib/systemd/system/etcd.service
scp /usr/lib/systemd/system/etcd.service 192.168.30.52:/usr/lib/systemd/system/etcd.service
# Edit /apps/etcd/conf/etcd on k3s-002
vi /apps/etcd/conf/etcd
ETCD_OPTS="--name=k3s-002 \
           --data-dir=/apps/etcd/data/default.etcd \
           --listen-peer-urls=https://192.168.30.51:2380 \
           --listen-client-urls=https://192.168.30.51:2379,https://127.0.0.1:2379 \
           --advertise-client-urls=https://192.168.30.51:2379 \
           --initial-advertise-peer-urls=https://192.168.30.51:2380 \
           --initial-cluster=k3s-001=https://192.168.30.50:2380,k3s-002=https://192.168.30.51:2380,k3s-003=https://192.168.30.52:2380 \
           --initial-cluster-token=k3s-001=https://192.168.30.50:2380,k3s-002=https://192.168.30.51:2380,k3s-003=https://192.168.30.52:2380 \
           --initial-cluster-state=new \
           --heartbeat-interval=6000 \
           --election-timeout=30000 \
           --snapshot-count=5000 \
           --auto-compaction-retention=1 \
           --max-request-bytes=33554432 \
           --quota-backend-bytes=17179869184 \
           --trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem \
           --cert-file=/apps/etcd/ssl/etcd_server.pem \
           --key-file=/apps/etcd/ssl/etcd_server-key.pem \
           --peer-cert-file=/apps/etcd/ssl/etcd_member_k3s-002.pem \
           --peer-key-file=/apps/etcd/ssl/etcd_member_k3s-002-key.pem \
           --peer-client-cert-auth \
           --peer-trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem"
# Edit /apps/etcd/conf/etcd on k3s-003
vi /apps/etcd/conf/etcd
ETCD_OPTS="--name=k3s-003 \
           --data-dir=/apps/etcd/data/default.etcd \
           --listen-peer-urls=https://192.168.30.52:2380 \
           --listen-client-urls=https://192.168.30.52:2379,https://127.0.0.1:2379 \
           --advertise-client-urls=https://192.168.30.52:2379 \
           --initial-advertise-peer-urls=https://192.168.30.52:2380 \
           --initial-cluster=k3s-001=https://192.168.30.50:2380,k3s-002=https://192.168.30.51:2380,k3s-003=https://192.168.30.52:2380 \
           --initial-cluster-token=k3s-001=https://192.168.30.50:2380,k3s-002=https://192.168.30.51:2380,k3s-003=https://192.168.30.52:2380 \
           --initial-cluster-state=new \
           --heartbeat-interval=6000 \
           --election-timeout=30000 \
           --snapshot-count=5000 \
           --auto-compaction-retention=1 \
           --max-request-bytes=33554432 \
           --quota-backend-bytes=17179869184 \
           --trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem \
           --cert-file=/apps/etcd/ssl/etcd_server.pem \
           --key-file=/apps/etcd/ssl/etcd_server-key.pem \
           --peer-cert-file=/apps/etcd/ssl/etcd_member_k3s-003.pem \
           --peer-key-file=/apps/etcd/ssl/etcd_member_k3s-003-key.pem \
           --peer-client-cert-auth \
           --peer-trusted-ca-file=/apps/etcd/ssl/etcd-ca.pem"
# Start the etcd cluster on k3s-001, k3s-002 and k3s-003
systemctl start etcd
# Enable etcd at boot on k3s-001, k3s-002 and k3s-003
systemctl enable etcd
# Verify that the etcd cluster is healthy
vi /etc/profile
export ETCDCTL_API=3
export ENDPOINTS=https://192.168.30.50:2379,https://192.168.30.51:2379,https://192.168.30.52:2379
# Load the environment variables
. /etc/profile
# Configure a command alias
vi /root/.bashrc
alias etcdctl='/apps/etcd/bin/etcdctl --endpoints=${ENDPOINTS} --cacert=/apps/etcd/ssl/etcd-ca.pem'
# Apply it
. /root/.bashrc
# Check the cluster
etcdctl member list
etcdctl endpoint status
https://192.168.30.50:2379, 7b98f2ed4d780753, 3.3.12, 290 MB, true, 37886, 82704406
https://192.168.30.51:2379, 47fa5d2eb78a7751, 3.3.12, 289 MB, false, 37886, 82704408
https://192.168.30.52:2379, 76c6cd81499cf7ba, 3.3.12, 289 MB, false, 37886, 82704433
# The etcd cluster is healthy
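One step the walkthrough leaves implicit is placing the generated certificates into /apps/etcd/ssl, which the ETCD_OPTS above reference, before /apps/etcd is copied to the other nodes. A minimal sketch, assuming the cfssl commands were all run from one working directory so every output file shares the etcd* prefix, followed by a health check once all three members are up:

# copy the CA, server, member and client certificates (and keys) into the ssl directory
cp ./etcd*.pem /apps/etcd/ssl/
chown -R etcd:etcd /apps/etcd
# after the three members are started, every endpoint should report healthy
/apps/etcd/bin/etcdctl \
  --endpoints=https://192.168.30.50:2379,https://192.168.30.51:2379,https://192.168.30.52:2379 \
  --cacert=/apps/etcd/ssl/etcd-ca.pem \
  endpoint health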

k3s master node deployment

# Add the virtual IP
ip addr add 192.168.30.59/24 dev eth0
# Install dependencies
dnf install epel-release
dnf install dnf-utils ipvsadm telnet wget net-tools conntrack ipset jq iptables curl sysstat libseccomp socat nfs-utils fuse fuse-devel
# CentOS 8 does not load the ipvs modules automatically, so create a script to load them at boot
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make /etc/sysconfig/modules/ipvs.modules executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run /etc/sysconfig/modules/ipvs.modules
/etc/sysconfig/modules/ipvs.modules
mkdir -p /apps/k3s
# Run on node k3s-001
# Create the k3s environment file
vi /etc/sysconfig/k3s.env
K3S_SERVER_OPTS='--data-dir=/apps/k3s \
         --no-deploy=traefik \
         --no-deploy=coredns \
         --no-deploy=servicelb \
         --no-deploy=helm-install-traefik \
         --kube-proxy-arg="proxy-mode=ipvs" \
         --kube-proxy-arg="masquerade-all=true" \
         --cluster-cidr="10.48.0.0/12" \
         --service-cidr="10.64.0.0/16" \
         --cluster-dns="10.64.0.2" \
         --cluster-domain="cluster.local" \
         --tls-san="192.168.30.51" \
         --tls-san="192.168.30.52" \
         --tls-san="192.168.30.59" \
         --tls-san="192.168.30.50" \
         --tls-san="api.k3s.tyong.com" \
         --tls-san="kubernetes" \
         --tls-san="kubernetes.default" \
         --tls-san="kubernetes.default.svc" \
         --tls-san="kubernetes.default.svc.cluster.local" \
         --storage-endpoint=etcd \
         --kube-apiserver-arg="etcd-cafile=/apps/etcd/ssl/etcd-ca.pem" \
         --kube-apiserver-arg="etcd-certfile=/apps/etcd/ssl/etcd_client.pem" \
         --kube-apiserver-arg="etcd-keyfile=/apps/etcd/ssl/etcd_client-key.pem" \
         --kube-apiserver-arg="etcd-prefix=/registry" \
         --kube-apiserver-arg="etcd-servers=https://192.168.30.50:2379,https://192.168.30.51:2379,https://192.168.30.52:2379" \
         --kube-apiserver-arg="runtime-config=api/all=true" \
         --kube-apiserver-arg="enable-admission-plugins=DefaultStorageClass,DefaultTolerationSeconds,LimitRanger,NamespaceExists,NamespaceLifecycle,NodeRestriction,OwnerReferencesPermissionEnforcement,PodNodeSelector,PersistentVolumeClaimResize,PodPreset,PodTolerationRestriction,ResourceQuota,ServiceAccount,StorageObjectInUseProtection,MutatingAdmissionWebhook,ValidatingAdmissionWebhook" \
         --kube-apiserver-arg="disable-admission-plugins=DenyEscalatingExec,ExtendedResourceToleration,ImagePolicyWebhook,LimitPodHardAntiAffinityTopology,NamespaceAutoProvision,Priority,EventRateLimit,PodSecurityPolicy" \
         --kube-controller-arg="horizontal-pod-autoscaler-use-rest-clients=true" \
         --pause-image=docker.io/juestnow/pause-amd64:3.1 \
         --resolv-conf="/etc/resolv.conf"'
# Create the k3s systemd unit
vi /etc/systemd/system/k3s.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/k3s.env
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    server $K3S_SERVER_OPTS
KillMode=process
Delegate=yes
LimitNOFILE=1024000
LimitNPROC=1024000
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
# Start k3s
systemctl enable k3s && systemctl start k3s
# Wait for k3s to come up, then stop it
systemctl stop k3s
# Copy the configuration generated by k3s to the other master nodes
scp -r /etc/rancher 192.168.30.51:/etc/rancher
scp -r /etc/rancher 192.168.30.52:/etc/rancher
scp -r /apps/k3s 192.168.30.51:/apps/
scp -r /apps/k3s 192.168.30.52:/apps/
# Start the k3s-001 node again
systemctl start k3s
# Start k3s on the other master nodes
ssh 192.168.30.51 "systemctl enable k3s && systemctl start k3s"
ssh 192.168.30.52 "systemctl enable k3s && systemctl start k3s"
# Verify that the k3s nodes started correctly
k3s kubectl get node
# The kubeconfig file contains the generated credentials (admin privileges) used to operate the k3s cluster
cat /etc/rancher/k3s/k3s.yaml
# For remote access
scp /etc/rancher/k3s/k3s.yaml /root/.kube/config
vim /root/.kube/config
# Change 127.0.0.1 to a remote server IP (192.168.30.50, 51 or 52) and test each one; if all of them respond normally, the cluster was deployed successfully
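For the remote access described above, a small sketch of pointing a copied kubeconfig at one of the masters instead of 127.0.0.1 (run from a workstation with kubectl installed; the master IP chosen here is just an example):

mkdir -p /root/.kube
scp 192.168.30.50:/etc/rancher/k3s/k3s.yaml /root/.kube/config
# replace the loopback address with the master's address (or the VIP / api.k3s.tyong.com)
sed -i 's/127.0.0.1/192.168.30.50/' /root/.kube/config
kubectl get node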

k3s agent deployment

# The agent depends on CoreDNS. Either deploy CoreDNS first or remove --no-deploy=coredns from the server options; a self-hosted CoreDNS is used here
vi coredns.yaml
# __MACHINE_GENERATED_WARNING__
apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
  labels:
      kubernetes.io/cluster-service: "true"
      addonmanager.kubernetes.io/mode: Reconcile
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: Reconcile
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - get
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
    addonmanager.kubernetes.io/mode: EnsureExists
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
  labels:
      addonmanager.kubernetes.io/mode: EnsureExists
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
            pods insecure
            upstream /etc/resolv.conf
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        forward . /etc/resolv.conf
        cache 30
        reload
        loadbalance
    }
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        seccomp.security.alpha.kubernetes.io/pod: 'docker/default'
    spec:
      priorityClassName: system-cluster-critical
      serviceAccountName: coredns
      tolerations:
        - key: "CriticalAddonsOnly"
          operator: "Exists"
      nodeSelector:
        beta.kubernetes.io/os: linux
      containers:
      - name: coredns
        image: coredns/coredns
        imagePullPolicy: Always
        resources:
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
          readOnly: true
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
        securityContext:
          allowPrivilegeEscalation: false
          capabilities:
            add:
            - NET_BIND_SERVICE
            drop:
            - all
          readOnlyRootFilesystem: true
      dnsPolicy: Default
      volumes:
        - name: config-volume
          configMap:
            name: coredns
            items:
            - key: Corefile
              path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/port: "9153"
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.64.0.2
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
  - name: metrics
    port: 9153
    protocol: TCP
# Deploy CoreDNS
k3s kubectl apply -f coredns.yaml
k3s kubectl get pod -A
# Wait for the deployment to finish
# Install dependencies on the agent node
dnf install epel-release
dnf install dnf-utils ipvsadm telnet wget net-tools conntrack ipset jq iptables curl sysstat libseccomp socat nfs-utils fuse fuse-devel
# CentOS 8 does not load the ipvs modules automatically, so create a script to load them at boot
cat << EOF > /etc/sysconfig/modules/ipvs.modules
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
# Make /etc/sysconfig/modules/ipvs.modules executable
chmod +x /etc/sysconfig/modules/ipvs.modules
# Run /etc/sysconfig/modules/ipvs.modules
/etc/sysconfig/modules/ipvs.modules
# Get the token from any master node; the token must be identical on all master nodes
cat /apps/k3s/server/node-token
K1000966fac151ec94a53040dadd727a4ef1ccac022aa8747f0b601ca33665417ea::node:0aa3ce3afaf275fd33ae6a2a9580d3a0
# Create the k3s agent environment file
vi /etc/sysconfig/k3a.env
K3S_AGENT_OPTS='--data-dir=/apps/k3s \
         --kube-proxy-arg="proxy-mode=ipvs" \
         --kube-proxy-arg="masquerade-all=true" \
         --pause-image=docker.io/juestnow/pause-amd64:3.1 \
         --resolv-conf="/etc/resolv.conf" \
         --server=https://192.168.30.59:6443 \
         --token=K1000966fac151ec94a53040dadd727a4ef1ccac022aa8747f0b601ca33665417ea::node:0aa3ce3afaf275fd33ae6a2a9580d3a0'
# Create the agent systemd unit
vi /etc/systemd/system/k3a.service
[Unit]
Description=Lightweight Kubernetes
Documentation=https://k3s.io
After=network-online.target

[Service]
Type=notify
EnvironmentFile=/etc/sysconfig/k3a.env
ExecStartPre=-/sbin/modprobe br_netfilter
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/k3s \
    agent $K3S_AGENT_OPTS
KillMode=process
Delegate=yes
LimitNOFILE=1024000
LimitNPROC=1024000
LimitCORE=infinity
TasksMax=infinity
TimeoutStartSec=0
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target
# Start the k3s agent
systemctl enable k3a && systemctl start k3a
# Verify that the agent has joined the cluster
k3s kubectl get node
# A node with the worker's name should appear
# This cluster behaves almost exactly like a standard Kubernetes cluster; monitoring, kubernetes-dashboard and any other applications can be deployed on it
# k3s uses containerd by default, so the kubelet still cannot report pod network metrics; switching to docker resolves this
# Single-master deployment is not covered here; there are plenty of examples of it online
# The agent keeps connections to all three master nodes, so shutting down any one of them does not affect the agent; there is no need for an haproxy load balancer
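To double-check that the agent registered and that the self-hosted CoreDNS answers cluster queries, something like the following should work (the busybox image tag is an assumption):

k3s kubectl get node -o wide
k3s kubectl -n kube-system get pods -l k8s-app=kube-dns
# resolve a cluster name through the kube-dns service IP defined above
k3s kubectl run dns-test --image=busybox:1.28 --restart=Never --rm -it -- \
  nslookup kubernetes.default.svc.cluster.local 10.64.0.2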