How to Install a Kubernetes 1.13 High-Availability Cluster with kubeadm

Published: 2025-02-02  Author: 千家信息网 editorial team

This article walks through installing a Kubernetes 1.13 high-availability cluster with kubeadm. Hopefully you will get something out of it; let's dig in.

Installing a Kubernetes 1.13 high-availability cluster with kubeadm

Initialize the cluster:

Configure the hosts file

vim /etc/hosts

192.168.3.147 test-master01
192.168.3.148 test-master02
192.168.3.149 test-master03
192.168.3.150 test-work01
Configure passwordless SSH login

ssh-keygen
ssh-copy-id test-master01
ssh-copy-id test-master02
ssh-copy-id test-master03
ssh-copy-id test-work01
Set system parameters
  • Disable the firewall

systemctl stop firewalld
systemctl disable firewalld
  • Disable swap

swapoff -a
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab

This edits /etc/fstab to comment out the swap auto-mount entry. Use free -m to confirm swap is off.

  • Disable SELinux

sed -i 's/SELINUX=permissive/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0
  • Configure forwarding-related kernel parameters, otherwise errors may occur later

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0
vm.overcommit_memory=1
vm.panic_on_oom=0
fs.inotify.max_user_watches=89100
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
sysctl --system

Run the commands above on all Kubernetes nodes to make the changes take effect.

  • Enable IPVS for kube-proxy

Run on all work nodes:

cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules

The script above creates the /etc/sysconfig/modules/ipvs.modules file so that the required modules are loaded automatically after a node reboot. Use lsmod | grep -e ip_vs -e nf_conntrack_ipv4 to verify that the kernel modules are loaded correctly.

Next, make sure the ipset package is installed on every node (yum install ipset). To inspect the ipvs proxy rules it also helps to install the management tool ipvsadm (yum install ipvsadm):

yum install ipset -y
yum install ipvsadm -y

If these prerequisites are not met, kube-proxy will fall back to iptables mode even when its configuration enables ipvs mode.

  • System tuning parameters

systemctl enable ntpdate.service
echo '*/30 * * * * /usr/sbin/ntpdate time7.aliyun.com >/dev/null 2>&1' > /tmp/crontab2.tmp
crontab /tmp/crontab2.tmp
systemctl start ntpdate.service

echo "* soft nofile 65536" >> /etc/security/limits.conf
echo "* hard nofile 65536" >> /etc/security/limits.conf
echo "* soft nproc 65536" >> /etc/security/limits.conf
echo "* hard nproc 65536" >> /etc/security/limits.conf
echo "* soft memlock unlimited" >> /etc/security/limits.conf
echo "* hard memlock unlimited" >> /etc/security/limits.conf

Install Docker

yum install -y epel-release
yum install -y yum-utils device-mapper-persistent-data lvm2 net-tools conntrack-tools wget vim ntpdate libseccomp libtool-ltdl telnet rsync bind-utils
yum install -y https://mirrors.aliyun.com/docker-ce/linux/centos/7/x86_64/stable/Packages/docker-ce-18.06.1.ce-3.el7.x86_64.rpm
Configure a domestic Docker registry mirror:

Install Docker on all nodes.

Edit /etc/docker/daemon.json and add the following line:

{"registry-mirrors":["https://registry.docker-cn.com"]}

Restart Docker

systemctl daemon-reload
systemctl enable docker
systemctl start docker

Note: if you use the overlay2 storage driver, daemon.json is written as follows:

{
    "log-driver": "json-file",
    "log-opts": {
        "max-size": "100m",
        "max-file": "10"
    },
    "registry-mirrors": ["https://pqbap4ya.mirror.aliyuncs.com"],
    "storage-driver": "overlay2",
    "storage-opts": ["overlay2.override_kernel_check=true"]
}

To use overlay2 the backing filesystem should be ext4; with xfs, the disk must be formatted with mkfs.xfs -n ftype=1 /path/to/your/device, i.e. the ftype flag must be set to 1.

Install keepalived + haproxy

On the three master nodes.

VIP : 192.168.3.80
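The original article does not include the keepalived/haproxy configuration itself. Below is a minimal sketch of what it could look like, assuming the node IPs and the 192.168.3.80 VIP above; the interface name eth0, virtual_router_id, and priorities are placeholders to adapt to your environment:

```
# /etc/haproxy/haproxy.cfg (sketch): listen on the VIP port 8443 and
# balance TCP connections across the three kube-apiservers on 6443
frontend kube-apiserver
    bind *:8443
    mode tcp
    default_backend kube-masters

backend kube-masters
    mode tcp
    balance roundrobin
    server test-master01 192.168.3.147:6443 check
    server test-master02 192.168.3.148:6443 check
    server test-master03 192.168.3.149:6443 check

# /etc/keepalived/keepalived.conf (sketch): state MASTER on one node,
# BACKUP with a lower priority on the other two
vrrp_instance VI_1 {
    state MASTER
    interface eth0          # adjust to your NIC name
    virtual_router_id 51
    priority 100            # use e.g. 90/80 on the BACKUP nodes
    virtual_ipaddress {
        192.168.3.80
    }
}
```

With this layout, kubeadm's controlPlaneEndpoint points at 192.168.3.80:8443 and haproxy forwards to whichever apiservers are healthy.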

Install kubeadm, kubelet and kubectl

Run on all nodes.

Configure the yum repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
Install the components
yum install -y kubelet-1.13.1 kubeadm-1.13.1 kubectl-1.13.1
Enable at boot
systemctl enable kubelet.service

Initialize the K8S cluster

Edit the kubeadm configuration file:

The configuration below has kubeadm install etcd itself (stacked etcd):

cat > kubeadm-config.yaml << EOF
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: v1.13.1
apiServer:
  certSANs:
    - "192.168.3.80"
controlPlaneEndpoint: "192.168.3.80:8443"
networking:
  podSubnet: "10.50.0.0/16"
imageRepository: "harbor.oneitfarm.com/k8s-cluster-images"
EOF

The CNI plugin is Calico, hence podSubnet: "10.50.0.0/16".

192.168.3.80 is the VIP of the haproxy + keepalived setup installed above.

Initialize the first master
kubeadm init --config kubeadm-config.yaml
...
[root@master01 ~]# mkdir -p $HOME/.kube
[root@master01 ~]# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master01 ~]# chown $(id -u):$(id -g) $HOME/.kube/config
Install the network plugin

Following the official docs:

Installing with the Kubernetes API datastore-50 nodes or less:

kubectl apply -f \
  https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/rbac-kdd.yaml
kubectl apply -f \
  https://docs.projectcalico.org/v3.3/getting-started/kubernetes/installation/hosted/kubernetes-datastore/calico-networking/1.7/calico.yaml

It is recommended to wget these manifests first and adjust them for your own network:

- name: CALICO_IPV4POOL_CIDR
  value: "10.50.0.0/16"
Copy the relevant files to the other master nodes
ssh root@master02 mkdir -p /etc/kubernetes/pki/etcd
scp /etc/kubernetes/admin.conf root@master02:/etc/kubernetes
scp /etc/kubernetes/pki/{ca.*,sa.*,front-proxy-ca.*} root@master02:/etc/kubernetes/pki
scp /etc/kubernetes/pki/etcd/ca.* root@master02:/etc/kubernetes/pki/etcd
Deploy the other masters

Run the following command on the other master nodes to join them to the control plane:

kubeadm join 192.168.3.80:8443 --token pv2a9n.uh3yx1082ffpdf7n --discovery-token-ca-cert-hash sha256:872cac35b0bfec28fab8f626a727afa6529e2a63e3b7b75a3397e6412c06ebc5 --experimental-control-plane
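For ordinary work nodes, the join command is the same minus the --experimental-control-plane flag (the token and CA cert hash are the ones printed by kubeadm init on the first master):

```
kubeadm join 192.168.3.80:8443 --token pv2a9n.uh3yx1082ffpdf7n --discovery-token-ca-cert-hash sha256:872cac35b0bfec28fab8f626a727afa6529e2a63e3b7b75a3397e6412c06ebc5
```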
Enable ipvs for kube-proxy

Edit config.conf in the kube-system/kube-proxy ConfigMap and set mode: "ipvs":

kubectl edit configmap kube-proxy -n kube-system
kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" --grace-period=0 --force -n kube-system")}'
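After the edit, the relevant fragment of config.conf in the ConfigMap should look roughly like this (a sketch; only the mode field changes, everything else stays as kubeadm generated it):

```
# Excerpt of config.conf inside the kube-system/kube-proxy ConfigMap
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"    # default is "" (empty), which falls back to iptables
```

Deleting the kube-proxy pods, as shown above, forces the DaemonSet to recreate them with the new mode.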

Check and test

Check the Kubernetes cluster status
kubectl get nodes -o wide
kubectl get cs
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
Check the etcd cluster status

This guide has kubeadm install etcd automatically, i.e. etcd runs in a container, so you can exec into the container to check it:

kubectl exec -ti -n kube-system etcd-an-master01 sh
/ # export ETCDCTL_API=3
/ # etcdctl --endpoints=https://[127.0.0.1]:2379 --cacert=/etc/kubernetes/pki/etcd/ca.crt --cert=/etc/kubernetes/pki/etcd/healthcheck-client.crt --key=/etc/kubernetes/pki/etcd/healthcheck-client.key member list
Cleaning up after a failed install

If cluster initialization runs into problems, you can clean up with the following commands:

kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker

Configure scheduling

In a kubeadm-initialized cluster, Pods are not scheduled onto master nodes for security reasons, i.e. the masters carry no workload. This is because the master nodes are tainted with node-role.kubernetes.io/master:NoSchedule:

kubectl describe node master01 | grep Taint
Taints:             node-role.kubernetes.io/master:NoSchedule

Check the master and work nodes that joined the cluster; if their roles or scheduling are wrong, adjust them as follows:

[root@an-master01 ~]# kubectl get nodes
NAME          STATUS   ROLES    AGE     VERSION
an-master01   Ready    master   4h49m   v1.13.1
an-master02   Ready    <none>   4h42m   v1.13.1
an-master03   Ready    master   86m     v1.13.1
an-work01     Ready    <none>   85m     v1.13.1

View the current state:

kubectl describe nodes/an-master02 | grep -E '(Roles|Taints)'
Roles:              <none>
Taints:             <none>

Mark the node as a master and make it non-schedulable:

kubectl label node an-master02 node-role.kubernetes.io/master=
kubectl taint nodes an-master02 node-role.kubernetes.io/master=:NoSchedule

To remove the restriction:

kubectl taint nodes an-master03 node-role.kubernetes.io/master-

Work node setup:

kubectl label node an-work01 node-role.kubernetes.io/work=
kubectl describe nodes/an-work01 | grep -E '(Roles|Taints)'
Roles:              work
Taints:             <none>
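Conversely, if a specific workload should be allowed onto a tainted master without removing the taint, a matching toleration can be added to its pod spec. A minimal sketch (the pod name and image are illustrative, not from the original article):

```
apiVersion: v1
kind: Pod
metadata:
  name: demo-on-master        # illustrative name
spec:
  # Tolerate the master taint so the scheduler may place this pod on a master
  tolerations:
  - key: "node-role.kubernetes.io/master"
    operator: "Exists"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx              # illustrative image
```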

Having read this article, you should now have a working understanding of "how to install a Kubernetes 1.13 high-availability cluster with kubeadm". To learn more, follow the industry news channel. Thanks for reading!
