CentOS 7.5: Installing and Configuring Kubernetes 1.12 with kubeadm (Part 4)


In the previous articles we demonstrated installation with yum and from binaries. In this article we will use kubeadm, the officially recommended tool, to install and deploy the cluster.

kubeadm is the tool the Kubernetes project provides for standing up a Kubernetes cluster quickly. It is released in lockstep with every Kubernetes version, and each release adjusts some of the cluster-configuration practices, so experimenting with kubeadm is a good way to pick up the project's latest best practices for cluster configuration.

I. Environment Preparation (All Nodes)

1. Software Versions

Software     Version
Kubernetes   v1.12.2
CentOS 7.5   CentOS Linux release 7.5.1804
Docker       v18.06
flannel      0.10.0

2. Node Planning

IP             Role         Hostname
172.18.8.200   k8s master   master.wzlinux.com
172.18.8.201   k8s node01   node01.wzlinux.com
172.18.8.202   k8s node02   node02.wzlinux.com

(Node and network topology diagram omitted.)

3. System Configuration

Disable the firewall.

systemctl stop firewalld
systemctl disable firewalld

Edit /etc/hosts and add the following entries.

172.18.8.200 master.wzlinux.com master
172.18.8.201 node01.wzlinux.com node01
172.18.8.202 node02.wzlinux.com node02

Disable SELinux.

sed -i 's#SELINUX=enforcing#SELINUX=disabled#g' /etc/selinux/config
setenforce 0

Disable swap.

swapoff -a
sed -i 's/.*swap.*/#&/' /etc/fstab

Configure the kernel forwarding parameters.

cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
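Note: the net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains about unknown keys, load the module first and make it persistent (a precautionary step; on many systems Docker will already have loaded it):

# load the bridge netfilter module now, and on every boot via systemd-modules-load
modprobe br_netfilter
echo br_netfilter > /etc/modules-load.d/br_netfilter.conf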

Configure the Aliyun mirror of the Kubernetes yum repository (for hosts inside China).

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

4. Installing Docker

Every machine, master and node alike, needs a container engine, so we install Docker everywhere up front.
Set up the Docker yum repository.

wget https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo -P /etc/yum.repos.d/

Check which Docker versions the repository currently offers.

[root@master ~]# yum list docker-ce.x86_64 --showduplicates | sort -r
Loaded plugins: fastestmirror
Available Packages
 * updates: mirrors.aliyun.com
Loading mirror speeds from cached hostfile
 * extras: mirrors.aliyun.com
docker-ce.x86_64            3:18.09.0-3.el7                     docker-ce-stable
docker-ce.x86_64            18.06.1.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.06.0.ce-3.el7                    docker-ce-stable
docker-ce.x86_64            18.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            18.03.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.12.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.09.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.06.0.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.3.ce-1.el7                    docker-ce-stable
docker-ce.x86_64            17.03.2.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.1.ce-1.el7.centos             docker-ce-stable
docker-ce.x86_64            17.03.0.ce-1.el7.centos             docker-ce-stable
 * base: mirrors.aliyun.com

Per the official recommendation, we install v18.06.

yum install docker-ce-18.06.1.ce -y

Configure a registry mirror (accelerator) to speed up image pulls from inside China.

sudo mkdir -p /etc/docker
sudo tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://hdi5v8p1.mirror.aliyuncs.com"]
}
EOF

Start Docker.

systemctl daemon-reload
systemctl enable docker
systemctl start docker

5. Installing the Kubernetes Components

yum install kubelet kubeadm kubectl -y
systemctl enable kubelet && systemctl start kubelet
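Note: this installs whatever the newest packages in the repo are, which may be newer than the v1.12.2 cluster built below (the kube-proxy v1.13.0 log later in this article hints at exactly that). If you want the tools to match the cluster version exactly, pin the versions, for example:

yum install kubelet-1.12.2 kubeadm-1.12.2 kubectl-1.12.2 -y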

6. Loading the IPVS Kernel Modules

Load the ipvs kernel modules so that kube-proxy on the nodes can use ipvs proxy rules.

modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

Also add them to /etc/rc.local so they are loaded on every boot.

cat <<EOF >> /etc/rc.local
modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh
EOF
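Note: on CentOS 7, /etc/rc.local (a symlink to /etc/rc.d/rc.local) is only executed at boot if it is marked executable; you can also verify the modules are loaded with lsmod:

# make rc.local executable so the modprobe lines actually run at boot
chmod +x /etc/rc.d/rc.local
# confirm the ipvs modules are present
lsmod | grep ip_vs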

II. Installing the Master Node

1. Initializing the Master

The Google image registry is not reachable from inside China; the workaround is to pull the images from another registry and re-tag them. Take care that the versions you pull match your kubeadm version; we use v1.12.2 here. The following shell script does the whole job.

#!/bin/bash
kube_version=:v1.12.2
kube_images=(kube-proxy kube-scheduler kube-controller-manager kube-apiserver)
addon_images=(etcd-amd64:3.2.24 coredns:1.2.2 pause-amd64:3.1)

for imageName in ${kube_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version k8s.gcr.io/$imageName$kube_version
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName-amd64$kube_version
done

for imageName in ${addon_images[@]} ; do
  docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
  docker image tag registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName k8s.gcr.io/$imageName
  docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/$imageName
done

docker tag k8s.gcr.io/etcd-amd64:3.2.24 k8s.gcr.io/etcd:3.2.24
docker image rm k8s.gcr.io/etcd-amd64:3.2.24
docker tag k8s.gcr.io/pause-amd64:3.1 k8s.gcr.io/pause:3.1
docker image rm k8s.gcr.io/pause-amd64:3.1

If you are not sure which image versions the script should use, you can run kubeadm init first, read the required versions from its error messages, and then fetch those images.
When kubeadm is upgraded, simply pick the matching new versions and download the new images.
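A more direct way to find out which images a given kubeadm needs is its built-in subcommand, which saves the trial-and-error:

kubeadm config images list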

After the script finishes, all the required images are available locally. Here we relied on a repository that someone else maintains; you could of course run your own private registry instead.

[root@master ~]# docker images
REPOSITORY                           TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy                v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
k8s.gcr.io/kube-apiserver            v1.12.2             51a9c329b7c5        4 weeks ago         194MB
k8s.gcr.io/kube-controller-manager   v1.12.2             15548c720a70        4 weeks ago         164MB
k8s.gcr.io/kube-scheduler            v1.12.2             d6d57c76136c        4 weeks ago         58.3MB
k8s.gcr.io/etcd                      3.2.24              3cab8e1b9802        2 months ago        220MB
k8s.gcr.io/coredns                   1.2.2               367cdc8433a4        3 months ago        39.2MB
k8s.gcr.io/pause                     3.1                 da86e6ba6ca1        11 months ago       742kB

Use kubeadm init to set up the master node automatically; the version must be specified.

kubeadm init --kubernetes-version=v1.12.2 --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight/images] Pulling images required for setting up a Kubernetes cluster
[preflight/images] This might take a minute or two, depending on the speed of your internet connection
[preflight/images] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[preflight] Activating the kubelet service
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [127.0.0.1 ::1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [master.wzlinux.com localhost] and IPs [172.18.8.200 127.0.0.1 ::1]
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated ca certificate and key.
[certificates] Generated apiserver certificate and key.
[certificates] apiserver serving cert is signed for DNS names [master.wzlinux.com kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.18.8.200]
[certificates] Generated apiserver-kubelet-client certificate and key.
[certificates] Generated front-proxy-ca certificate and key.
[certificates] Generated front-proxy-client certificate and key.
[certificates] valid certificates and keys now exist in "/etc/kubernetes/pki"
[certificates] Generated sa key and public key.
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"
[init] this might take a minute or longer if the control plane images have to be pulled
[apiclient] All control plane components are healthy after 20.005448 seconds
[uploadconfig] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.12" in namespace kube-system with the configuration for the kubelets in the cluster
[markmaster] Marking the node master.wzlinux.com as master by adding the label "node-role.kubernetes.io/master=''"
[markmaster] Marking the node master.wzlinux.com as master by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "master.wzlinux.com" as an annotation
[bootstraptoken] using token: 3mfpdm.atgk908eq1imgwqp
[bootstraptoken] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d

Once the service is up, follow the instructions in the output to configure kubectl:

mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
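kubectl can now reach the API server. Until the pod network is deployed in the next step, the master will typically report NotReady, so this makes a quick sanity check:

kubectl get nodes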

2. Configuring the Pod Network

A pod network add-on is mandatory so that pods can reach each other. The network must be deployed before applications are rolled out and before kube-dns starts; kubeadm supports CNI-based networks only.

Many network plugins are available, such as Calico, Canal, Flannel, Romana, and Weave Net. Since we passed --pod-network-cidr=10.244.0.0/16 at init time, we use flannel.

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml

Check that everything starts correctly; pulling the flannel image can take a little while.

[root@master ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                         READY   STATUS    RESTARTS   AGE
kube-system   coredns-576cbf47c7-ptzmh                     1/1     Running   0          22m
kube-system   coredns-576cbf47c7-q78r9                     1/1     Running   0          22m
kube-system   etcd-master.wzlinux.com                      1/1     Running   0          21m
kube-system   kube-apiserver-master.wzlinux.com            1/1     Running   0          22m
kube-system   kube-controller-manager-master.wzlinux.com   1/1     Running   0          22m
kube-system   kube-flannel-ds-amd64-vqtzq                  1/1     Running   0          5m54s
kube-system   kube-proxy-ld262                             1/1     Running   0          22m
kube-system   kube-scheduler-master.wzlinux.com            1/1     Running   0          22m

Troubleshooting tips:

  • Confirm that ports and containers start correctly, and check /var/log/messages for clues.
  • Use docker logs <container-id> to inspect container start-up logs, especially for containers that keep being recreated.
  • Use kubectl --namespace=kube-system describe pod POD-NAME to inspect pods in an error state.
  • Use kubectl -n ${NAMESPACE} logs ${POD_NAME} -c ${CONTAINER_NAME} to see the specific error.
  • Calico, Canal, and Flannel have been validated upstream; other network plugins may have pitfalls, and how well you climb out of them is up to you.
  • The most common errors are a wrong image name or version, or an image that cannot be downloaded.

III. Installing the Nodes

1. Downloading the Required Images

The nodes likewise need images, mainly kube-proxy and pause (the script below also pulls coredns); a node needs fewer images than the master.

#!/bin/bash
kube_version=:v1.12.2
coredns_version=1.2.2
pause_version=3.1

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version k8s.gcr.io/kube-proxy$kube_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy-amd64$kube_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version k8s.gcr.io/pause:$pause_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/pause-amd64:$pause_version

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version k8s.gcr.io/coredns:$coredns_version
docker image rm registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:$coredns_version

Check the downloaded images.

[root@node01 ~]# docker images
REPOSITORY              TAG                 IMAGE ID            CREATED             SIZE
k8s.gcr.io/kube-proxy   v1.12.2             15e9da1ca195        4 weeks ago         96.5MB
k8s.gcr.io/pause        3.1                 da86e6ba6ca1        11 months ago       742kB

2. Joining a Node (node01 as an Example)

When the master was initialized successfully, the very end of the output contained a kubeadm join command; that is the command used to add a node.

kubeadm join 172.18.8.200:6443 --token 3mfpdm.atgk908eq1imgwqp --discovery-token-ca-cert-hash sha256:ff67ead9f43931f08e67873ba00695cd4b997f87dace5255ff45fc386b08941d
[preflight] running pre-flight checks
[discovery] Trying to connect to API Server "172.18.8.200:6443"
[discovery] Created cluster-info discovery client, requesting info from "https://172.18.8.200:6443"
[discovery] Requesting info from "https://172.18.8.200:6443" again to validate TLS against the pinned public key
[discovery] Cluster info signature and contents are valid and TLS certificate validates against pinned roots, will use API Server "172.18.8.200:6443"
[discovery] Successfully established connection with API Server "172.18.8.200:6443"
[kubelet] Downloading configuration for the kubelet from the "kubelet-config-1.12" ConfigMap in the kube-system namespace
[kubelet] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[preflight] Activating the kubelet service
[tlsbootstrap] Waiting for the kubelet to perform the TLS Bootstrap...
[patchnode] Uploading the CRI Socket information "/var/run/dockershim.sock" to the Node API object "node01.wzlinux.com" as an annotation

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.

Tip: if the join command reports that the token has expired, run kubeadm token create on the master to generate a new one, as prompted.
If you have lost the token, kubeadm token list will show it.
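kubeadm can also print a complete, ready-to-paste join command, token and CA cert hash included, which is handy when the original init output is gone:

kubeadm token create --print-join-command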

After the join command finishes, check the node list on the master.

[root@master ~]# kubectl get nodes
NAME                 STATUS   ROLES    AGE   VERSION
master.wzlinux.com   Ready    master   64m   v1.12.2
node01.wzlinux.com   Ready    <none>   32m   v1.12.2
node02.wzlinux.com   Ready    <none>   15m   v1.12.2

You can copy the master's kubeconfig onto the nodes to make kubectl usable there as well.

scp /etc/kubernetes/admin.conf  172.18.8.201:/root/.kube/config
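Note that the target directory must already exist on the node, so create it first, for example:

ssh 172.18.8.201 "mkdir -p /root/.kube"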

Create a few pods to try it out.

[root@master ~]# kubectl run nginx --image=nginx --replicas=3
[root@master ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
nginx-dbddb74b8-7qnsl   1/1     Running   0          27s   10.244.2.2   node02.wzlinux.com   <none>
nginx-dbddb74b8-ck4l9   1/1     Running   0          27s   10.244.1.2   node01.wzlinux.com   <none>
nginx-dbddb74b8-rpc2r   1/1     Running   0          27s   10.244.1.3   node01.wzlinux.com   <none>

(Full cluster architecture diagram omitted.)

IV. A Worked Example

To get a better feel for the Kubernetes architecture, let's deploy an application and watch how the components cooperate.

kubectl run httpd-app --image=httpd --replicas=2

Check the deployed application.

[root@master ~]# kubectl get pod -o wide
NAME                         READY   STATUS    RESTARTS   AGE   IP           NODE                 NOMINATED NODE
httpd-app-66cb7d499b-gskrg   1/1     Running   0          59s   10.244.1.2   node01.wzlinux.com   <none>
httpd-app-66cb7d499b-km5t8   1/1     Running   0          59s   10.244.2.2   node02.wzlinux.com   <none>

Kubernetes has created the deployment httpd-app with two replica pods, running on node01 and node02 respectively.

The whole deployment flows as follows:

  1. kubectl sends the deployment request to the API Server.
  2. The API Server notifies the Controller Manager to create a deployment resource.
  3. The Scheduler performs scheduling and assigns the two replica pods to node01 and node02.
  4. The kubelet on node01 and on node02 creates and runs the pods on its own node.

The application's configuration and current state are stored in etcd; when you run kubectl get pod, the API Server reads that data from etcd.
flannel assigns an IP to every pod. Because we have not created any service yet, kube-proxy has not come into play at this point.
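To bring kube-proxy into play you would expose the deployment as a service; a quick sketch (the port, and using the httpd-app deployment from above, are our choices here):

# create a ClusterIP service in front of the two httpd pods
kubectl expose deployment httpd-app --port=80
# inspect the service and its cluster IP
kubectl get service httpd-app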

Everything is OK. The cluster deployment is now complete, and you can start running applications on it.

V. Enabling ipvs in kube-proxy

kube-proxy support for ipvs was added in Kubernetes 1.8 and graduated to GA in Kubernetes 1.11.

Problems in iptables mode are hard to pin down, performance drops noticeably once there are many rules, and rules have even been known to go missing; ipvs is much more stable by comparison.

The default installation uses iptables, so we need to change the configuration to enable ipvs.

1. Load the kernel modules.

modprobe ip_vs_rr
modprobe ip_vs_wrr
modprobe ip_vs_sh

2. Edit the kube-proxy configuration

kubectl edit configmap kube-proxy -n kube-system

Find the following section.

    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
    nodePortAddresses: null
    oomScoreAdj: -999

mode is empty by default, which means iptables mode; change it to ipvs. scheduler is likewise empty by default, which selects the round-robin load-balancing algorithm; the sketch below shows how to set it explicitly.
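If you want an algorithm other than round-robin, the same ConfigMap contains an ipvs block where the scheduler can be named explicitly; a sketch (rr, wrr, lc, sh, etc. are valid ipvs scheduler names, and the surrounding fields are shown as they typically appear):

    ipvs:
      minSyncPeriod: 0s
      scheduler: "rr"
      syncPeriod: 30s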

3. Delete all kube-proxy pods

kubectl delete pod kube-proxy-xxx -n kube-system
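Instead of deleting the pods one by one, you can delete them all at once through the DaemonSet label (assuming the standard k8s-app=kube-proxy label that kubeadm applies):

kubectl -n kube-system delete pod -l k8s-app=kube-proxy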

4. Check the kube-proxy pod logs

[root@master ~]# kubectl logs kube-proxy-t4t8j -n kube-system
I1211 03:43:01.297068       1 server_others.go:189] Using ipvs Proxier.
W1211 03:43:01.297549       1 proxier.go:365] IPVS scheduler not specified, use rr by default
I1211 03:43:01.297698       1 server_others.go:216] Tearing down inactive rules.
I1211 03:43:01.355516       1 server.go:464] Version: v1.13.0
I1211 03:43:01.366922       1 conntrack.go:52] Setting nf_conntrack_max to 196608
I1211 03:43:01.367294       1 config.go:102] Starting endpoints config controller
I1211 03:43:01.367304       1 config.go:202] Starting service config controller
I1211 03:43:01.367327       1 controller_utils.go:1027] Waiting for caches to sync for endpoints config controller
I1211 03:43:01.367343       1 controller_utils.go:1027] Waiting for caches to sync for service config controller
I1211 03:43:01.467475       1 controller_utils.go:1034] Caches are synced for service config controller
I1211 03:43:01.467485       1 controller_utils.go:1034] Caches are synced for endpoints config controller

5. Install ipvsadm

Use ipvsadm to inspect the ipvs rules; if the command is not present, install it with yum.

yum install -y ipvsadm
[root@master ~]# ipvsadm -ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  10.96.0.1:443 rr
  -> 172.18.8.200:6443            Masq    1      0          0
TCP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0
UDP  10.96.0.10:53 rr
  -> 10.244.0.4:53                Masq    1      0          0
  -> 10.244.0.5:53                Masq    1      0          0

Appendix: The Generated Component Configuration Files

The key material in plain text would take up too much space, so it is replaced with the placeholder <key material> below.

admin.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key material>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: <key material>
    client-key-data: <key material>

controller-manager.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key material>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-controller-manager
  name: system:kube-controller-manager@kubernetes
current-context: system:kube-controller-manager@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-controller-manager
  user:
    client-certificate-data: <key material>
    client-key-data: <key material>

kubelet.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key material>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:node:master.wzlinux.com
  name: system:node:master.wzlinux.com@kubernetes
current-context: system:node:master.wzlinux.com@kubernetes
kind: Config
preferences: {}
users:
- name: system:node:master.wzlinux.com
  user:
    client-certificate-data: <key material>
    client-key-data: <key material>

scheduler.conf

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: <key material>
    server: https://172.18.8.200:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: system:kube-scheduler
  name: system:kube-scheduler@kubernetes
current-context: system:kube-scheduler@kubernetes
kind: Config
preferences: {}
users:
- name: system:kube-scheduler
  user:
    client-certificate-data: <key material>
    client-key-data: <key material>

Reference: https://kubernetes.io/docs/setup/independent/install-kubeadm/
