
How to Install Kubernetes 1.5 with Kubeadm

Published: 2025-02-19, by the 千家信息网 editor

How do you install Kubernetes 1.5 with Kubeadm? This article walks through the analysis and solution in detail, in the hope of helping more readers who face this question find a simpler, more practical approach.
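For orientation, the whole procedure described below condenses to a handful of commands. The sketch below is a dry run that only prints the outline, since the real commands must be executed on the actual hosts; `<token>` and `<master-ip>` are placeholders for values that `kubeadm init` produces, while the CIDR and Flannel manifest URL are taken from the article itself.

```shell
#!/bin/sh
# Dry-run outline of the kubeadm-based install described below.
# POD_CIDR and the Flannel manifest URL come from the article;
# <token> and <master-ip> are placeholders produced by `kubeadm init`.
POD_CIDR="10.244.0.0/16"
FLANNEL_YML="https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml"

cat <<EOF
# on every host (master and minions):
apt-get update && apt-get install -y docker.io kubelet kubeadm kubectl kubernetes-cni
# on the master:
kubeadm init --pod-network-cidr=$POD_CIDR
kubectl apply -f $FLANNEL_YML
# on each minion, using the token printed by kubeadm init:
kubeadm join --token=<token> <master-ip>
EOF
```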

Installing Kubernetes 1.5 with Kubeadm

System version: ubuntu 16.04

```
root@master:~# docker version
Client:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   23cf638
 Built:        Tue, 27 Sep 2016 12:25:38 +1300
 OS/Arch:      linux/amd64
Server:
 Version:      1.12.1
 API version:  1.24
 Go version:   go1.6.2
 Git commit:   23cf638
 Built:        Tue, 27 Sep 2016 12:25:38 +1300
 OS/Arch:      linux/amd64
```

1. Prerequisites

- At least 1 GB of memory on each host.
- All hosts reachable from one another over the network.

2. Deployment: install kubelet and kubeadm on the hosts

The official apt key comes from:

```
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
```

I used the mirror http://119.29.98.145:8070/zhi/apt-key.gpg instead. On the master host:

```
curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```

The downloaded kube components are not started automatically. Under /lib/systemd/system we can see kubelet.service:

```
root@master:~# ls /lib/systemd/system | grep kube
kubelet.service
```

The kubelet version:

```
root@master:~# kubelet --version
Kubernetes v1.5.1
```

All of the k8s core components are now in place; next we bootstrap the kubernetes cluster.

3. Initialize the cluster

In theory, kubeadm builds a cluster with just two commands, init and join; init initializes the cluster on the master node. Unlike deployments before k8s 1.4, the k8s core components installed by kubeadm all run as containers on the master node. Before `kubeadm init`, therefore, it is best to put a registry accelerator/proxy in front of the docker engine on the master node, because kubeadm pulls the images of many core components from the gcr.io/google_containers repository.

In the kubeadm documentation, installing the Pod network is a separate step; `kubeadm init` does not choose and install a default Pod network for you. We pick Flannel as our Pod network, not only because our previous cluster ran flannel and it proved stable, but because Flannel is the overlay network add-on CoreOS built specifically for k8s; the flannel repository's readme.md even says: "flannel is a network fabric for containers, designed for Kubernetes". To use Flannel, the kubeadm documentation requires that init be given the option `--pod-network-cidr=10.244.0.0/16`.

4. Run kubeadm init

```
root@master:~# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Starting the kubelet service
[init] Using Kubernetes version: v1.5.1
[tokens] Generated token: "2909ca.c0b0772a8817f9e3"
[certificates] Generated Certificate Authority key and certificate.
[certificates] Generated API Server key and certificate
[certificates] Generated Service Account signing keys
[certificates] Created keys and certificates in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[apiclient] Created API client, waiting for the control plane to become ready
[apiclient] All control plane components are healthy after 14.761716 seconds
[apiclient] Waiting for at least one node to register and become ready
[apiclient] First node is ready after 1.003312 seconds
[apiclient] Creating a test deployment
[apiclient] Test deployment succeeded
[token-discovery] Created the kube-discovery deployment, waiting for it to become ready
[token-discovery] kube-discovery is ready after 1.002402 seconds
[addons] Created essential addon: kube-proxy
[addons] Created essential addon: kube-dns

Your Kubernetes master has initialized successfully!

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
    http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node:

kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx
```

(Write down the IP in that last line.)

What has changed on the master node after a successful init? The k8s core components are all running normally:

```
root@master:~# ps -ef | grep kube
root  23817      1  2 14:07 ?  00:00:35 /usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubelet.conf --require-kubeconfig=true --pod-manifest-path=/etc/kubernetes/manifests --allow-privileged=true --network-plugin=cni --cni-conf-dir=/etc/cni/net.d --cni-bin-dir=/opt/cni/bin --cluster-dns=10.96.0.10 --cluster-domain=cluster.local
root  23921  23900  0 14:07 ?  00:00:01 kube-scheduler --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080
root  24055  24036  0 14:07 ?  00:00:10 kube-apiserver --insecure-bind-address=127.0.0.1 --admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,PersistentVolumeLabel,DefaultStorageClass,ResourceQuota --service-cluster-ip-range=10.96.0.0/12 --service-account-key-file=/etc/kubernetes/pki/apiserver-key.pem --client-ca-file=/etc/kubernetes/pki/ca.pem --tls-cert-file=/etc/kubernetes/pki/apiserver.pem --tls-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --token-auth-file=/etc/kubernetes/pki/tokens.csv --secure-port=6443 --allow-privileged --advertise-address=<master ip> --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --anonymous-auth=false --etcd-servers=http://127.0.0.1:2379
root  24084  24070  0 14:07 ?  00:00:11 kube-controller-manager --address=127.0.0.1 --leader-elect --master=127.0.0.1:8080 --cluster-name=kubernetes --root-ca-file=/etc/kubernetes/pki/ca.pem --service-account-private-key-file=/etc/kubernetes/pki/apiserver-key.pem --cluster-signing-cert-file=/etc/kubernetes/pki/ca.pem --cluster-signing-key-file=/etc/kubernetes/pki/ca-key.pem --insecure-experimental-approve-all-kubelet-csrs-for-group=system:kubelet-bootstrap --allocate-node-cidrs=true --cluster-cidr=10.244.0.0/16
root  24242  24227  0 14:07 ?  00:00:00 /usr/local/bin/kube-discovery
root  24308  24293  1 14:07 ?  00:00:15 kube-proxy --kubeconfig=/run/kubeconfig
root  29457  29441  0 14:09 ?  00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root  29498  29481  0 14:09 ?  00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done
root  30372  30357  0 14:10 ?  00:00:01 /exechealthz --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1 >/dev/null --url=/healthz-dnsmasq --cmd=nslookup kubernetes.default.svc.cluster.local 127.0.0.1:10053 >/dev/null --url=/healthz-kubedns --port=8080 --quiet
root  30682  30667  0 14:10 ?  00:00:01 /kube-dns --domain=cluster.local --dns-port=10053 --config-map=kube-dns --v=2
root  48755   1796  0 14:31 pts/0  00:00:00 grep --color=auto kube
```

And they are started as multiple containers:

```
root@master:~# docker ps
CONTAINER ID   IMAGE                                                           COMMAND                  CREATED          STATUS          NAMES
c4209b1077d2   gcr.io/google_containers/kubedns-amd64:1.9                      "/kube-dns --domain=c"   22 minutes ago   Up 22 minutes   k8s_kube-dns.61e5a20f_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_fc02f762
0908d6398b0b   gcr.io/google_containers/exechealthz-amd64:1.2                  "/exechealthz '--cmd="   22 minutes ago   Up 22 minutes   k8s_healthz.9d343f54_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_0ee806f6
0e35e96ca4ac   gcr.io/google_containers/dnsmasq-metrics-amd64:1.0              "/dnsmasq-metrics --v"   22 minutes ago   Up 22 minutes   k8s_dnsmasq-metrics.2bb05ef7_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_436b9370
3921b4e59aca   gcr.io/google_containers/kube-dnsmasq-amd64:1.4                 "/usr/sbin/dnsmasq --"   22 minutes ago   Up 22 minutes   k8s_dnsmasq.f7e18a01_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_06c5efa7
18513413ba60   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 22 minutes ago   Up 22 minutes   k8s_POD.d8dbe16c_kube-dns-2924299975-txh2v_kube-system_f5364cd5-d631-11e6-9d86-0050569c3e9b_9de0a18d
45132c8d6d3d   quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64             "/bin/sh -c 'set -e -"   23 minutes ago   Up 23 minutes   k8s_install-cni.fc218cef_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_88dffd75
4c2a2e46c808   quay.io/coreos/flannel-git:v0.6.1-28-g5dde68d-amd64             "/opt/bin/flanneld --"   23 minutes ago   Up 23 minutes   k8s_kube-flannel.5fdd90ba_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_2706c3cb
ad08c8dd177c   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 23 minutes ago   Up 23 minutes   k8s_POD.d8dbe16c_kube-flannel-ds-0fnxc_kube-system_22034e49-d632-11e6-9d86-0050569c3e9b_279d8436
847f00759977   gcr.io/google_containers/kube-proxy-amd64:v1.5.1                "kube-proxy --kubecon"   24 minutes ago   Up 24 minutes   k8s_kube-proxy.2f62b4e5_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c1f31904
f8da0f38f3e1   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 24 minutes ago   Up 24 minutes   k8s_POD.d8dbe16c_kube-proxy-9c0bf_kube-system_f5326252-d631-11e6-9d86-0050569c3e9b_c340d947
c1efa29640d1   gcr.io/google_containers/kube-discovery-amd64:1.0               "/usr/local/bin/kube-"   24 minutes ago   Up 24 minutes   k8s_kube-discovery.6907cb07_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_c4827da2
4c6a646d0b2e   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 24 minutes ago   Up 24 minutes   k8s_POD.d8dbe16c_kube-discovery-1769846148-4rsq9_kube-system_f49933be-d631-11e6-9d86-0050569c3e9b_8823b66a
ece79181f177   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 24 minutes ago   Up 24 minutes   k8s_dummy.702d1bd5_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_ade728ba
9c3364c623df   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 24 minutes ago   Up 24 minutes   k8s_POD.d8dbe16c_dummy-2088944543-r2mw3_kube-system_f38f3ede-d631-11e6-9d86-0050569c3e9b_838c58b5
a64a3363a82b   gcr.io/google_containers/kube-controller-manager-amd64:v1.5.1   "kube-controller-mana"   25 minutes ago   Up 25 minutes   k8s_kube-controller-manager.84edb2e5_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_697ef6ee
27625502c298   gcr.io/google_containers/kube-apiserver-amd64:v1.5.1            "kube-apiserver --ins"   25 minutes ago   Up 25 minutes   k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844
5b2cc5cb9ac1   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 25 minutes ago   Up 25 minutes   k8s_POD.d8dbe16c_kube-controller-manager-master_kube-system_7b7c15f8228e3413d3b0d0bad799b1ea_2f88a796
e12ef7b3c1f0   gcr.io/google_containers/etcd-amd64:3.0.14-kubeadm              "etcd --listen-client"   25 minutes ago   Up 25 minutes   k8s_etcd.c323986f_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_ef6eb513
84a731cbce18   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 25 minutes ago   Up 25 minutes   k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e
612b021457a1   gcr.io/google_containers/kube-scheduler-amd64:v1.5.1            "kube-scheduler --add"   25 minutes ago   Up 25 minutes   k8s_kube-scheduler.bb7d750_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_a49fab86
ac0d8698f79f   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 25 minutes ago   Up 25 minutes   k8s_POD.d8dbe16c_etcd-master_kube-system_3a26566bb004c61cd05382212e3f978f_9a6b7925
2a16a2217bf3   gcr.io/google_containers/pause-amd64:3.0                        "/pause"                 25 minutes ago   Up 25 minutes   k8s_POD.d8dbe16c_kube-scheduler-master_kube-system_0545c2e223307b5ab8c74b0ffed56ac7_d2b51317
```

The kube-apiserver's IP is the host IP, from which we can infer that the container uses the host network; this is also visible from the network attribute of its corresponding pause container:

```
root@master:~# docker ps | grep apiserver
27625502c298   gcr.io/google_containers/kube-apiserver-amd64:v1.5.1   "kube-apiserver --ins"   26 minutes ago   Up 26 minutes   k8s_kube-apiserver.5942f3e3_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_38a83844
84a731cbce18   gcr.io/google_containers/pause-amd64:3.0               "/pause"                 26 minutes ago   Up 26 minutes   k8s_POD.d8dbe16c_kube-apiserver-master_kube-system_aeb59dd32f3217b366540250d2c35d8c_a3a2ea4e
```

Problem 1: if kubeadm init runs into trouble partway through (for example, it hangs because the accelerator was not configured beforehand), you will probably Ctrl+C out of it. After fixing the configuration and re-running kubeadm init, you may then see this output:

```
# kubeadm init --pod-network-cidr=10.244.0.0/16
[kubeadm] WARNING: kubeadm is in alpha, please do not use it for production clusters.
[preflight] Running pre-flight checks
[preflight] Some fatal errors occurred:
    Port 10250 is in use
    /etc/kubernetes/manifests is not empty
    /etc/kubernetes/pki is not empty
    /var/lib/kubelet is not empty
    /etc/kubernetes/admin.conf already exists
    /etc/kubernetes/kubelet.conf already exists
[preflight] If you know what you are doing, you can skip pre-flight checks with `--skip-preflight-checks`
```

kubeadm automatically checks whether the environment still holds "leftovers" from the previous run. If it does, they must be cleaned up before init can be executed again. We can clean the environment with `kubeadm reset` and start over:

```
# kubeadm reset
[preflight] Running pre-flight checks
[reset] Draining node: "iz25beglnhtz"
[reset] Removing node: "iz25beglnhtz"
[reset] Stopping the kubelet service
[reset] Unmounting mounted directories in "/var/lib/kubelet"
[reset] Removing kubernetes-managed containers
[reset] Deleting contents of stateful directories: [/var/lib/kubelet /etc/cni/net.d /var/lib/etcd]
[reset] Deleting contents of config directories: [/etc/kubernetes/manifests /etc/kubernetes/pki]
[reset] Deleting files: [/etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf]
```

5. Install the Flannel network

To use the Flannel network, we run the following install command:

```
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created
```

After waiting a few seconds, look at the cluster information on the master node again:

```
root@master:~# ps -ef | grep kube | grep flannel
root  29457  29441  0 14:09 ?  00:00:00 /opt/bin/flanneld --ip-masq --kube-subnet-mgr
root  29498  29481  0 14:09 ?  00:00:00 /bin/sh -c set -e -x; cp -f /etc/kube-flannel/cni-conf.json /etc/cni/net.d/10-flannel.conf; while true; do sleep 3600; done

root@master:~# kubectl get pods --all-namespaces
NAMESPACE     NAME                              READY     STATUS    RESTARTS   AGE
kube-system   dummy-2088944543-r2mw3            1/1       Running   0          30m
kube-system   etcd-master                       1/1       Running   0          31m
kube-system   kube-apiserver-master             1/1       Running   0          31m
kube-system   kube-controller-manager-master    1/1       Running   0          31m
kube-system   kube-discovery-1769846148-4rsq9   1/1       Running   0          30m
kube-system   kube-dns-2924299975-txh2v         4/4       Running   0          30m
kube-system   kube-flannel-ds-0fnxc             2/2       Running   0          29m
kube-system   kube-flannel-ds-lpgpv             2/2       Running   0          23m
kube-system   kube-flannel-ds-s05nr             2/2       Running   0          18m
kube-system   kube-proxy-9c0bf                  1/1       Running   0          30m
kube-system   kube-proxy-t8hxr                  1/1       Running   0          18m
kube-system   kube-proxy-zd0v2                  1/1       Running   0          23m
kube-system   kube-scheduler-master             1/1       Running   0          31m
```

At least all of the cluster's core components are running; it looks like a success. Next come the operations on the nodes.

6. Minion node: join the cluster

Here we use kubeadm's second command, kubeadm join, executed on each minion node (note: make sure the master node's port 9898 is open in the firewall). Prerequisite: the node must already have the kube components installed, as follows.

7. Install kubelet and kubeadm on the node

As on the master, the official key comes from https://packages.cloud.google.com/apt/doc/apt-key.gpg; I again used the mirror http://119.29.98.145:8070/zhi/apt-key.gpg. On the node host:

```
curl -s http://119.29.98.145:8070/zhi/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb http://apt.kubernetes.io/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y docker.io
apt-get install -y kubelet kubeadm kubectl kubernetes-cni
```

Remember the master's token, then run:

```
root@node01:~# kubeadm join --token=2909ca.c0b0772a8817f9e3 xxx.xxx.xxx.xxx
```

(the IP is the master address recorded earlier)

8. On the master node, check the current cluster state:

```
root@master:~# kubectl get node
NAME      STATUS         AGE
master    Ready,master   59m
node01    Ready          51m
node02    Ready          46m
```
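The manual check in step 8 scales poorly once many minions have joined, so it can be scripted. Below is a minimal sketch; the helper name `check_nodes_ready` is our own invention, not a kubectl feature. It reads `kubectl get node` output on stdin and exits non-zero if any node's STATUS column does not begin with Ready:

```shell
#!/bin/sh
# check_nodes_ready: read `kubectl get node` output on stdin and exit
# non-zero if any node's STATUS (second column) does not start with
# "Ready". The function name is a hypothetical helper, not part of kubectl.
check_nodes_ready() {
  awk 'NR > 1 && $2 !~ /^Ready/ { bad = 1 } END { exit bad }'
}

# On the master you would run something like:
#   kubectl get node | check_nodes_ready && echo "all nodes Ready"
```

Plain column-based awk matches the 1.5-era output shown above; in later kubectl releases, structured output such as `-o jsonpath` would make this kind of parsing more robust.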

That concludes the answer to how to install Kubernetes 1.5 with Kubeadm. I hope the above is of some help; if you still have unresolved questions, follow the industry news channel for more related knowledge.
