Deploy a Kubernetes v1.14.0 Cluster from Binary Packages in 20 Minutes
1 Environment
| OS | Docker version | Kubernetes version | Etcd version | Flannel version |
|---|---|---|---|---|
| CentOS Linux release 7.6.1810 | 18.09.3 | v1.14.0 | 3.3.12 | v0.11.0 |
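
If you want to confirm that a node matches this environment, the versions can be checked with the commands below (a minimal sketch; it assumes the binaries are already on the PATH, e.g. under /usr/local/bin as installed by the scripts in section 3):

```bash
# Check that the installed component versions match the table above.
docker version --format '{{.Server.Version}}'   # expect 18.09.x
etcd --version | head -n 1                      # expect: etcd Version: 3.3.12
kubectl version --short                         # expect: v1.14.0
flanneld --version                              # expect: v0.11.0
```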
2 Architecture
| Hostname | IP | Role | Deployed components |
|---|---|---|---|
| gysl-master | 10.1.1.60 | Master | Docker / kube-apiserver / kube-scheduler / kube-controller-manager / etcd |
| gysl-node1 | 10.1.1.61 | Node | Docker / kubelet / kube-proxy / flanneld / etcd |
| gysl-node2 | 10.1.1.62 | Node | Docker / kubelet / kube-proxy / flanneld / etcd |
| gysl-node3 | 10.1.1.63 | Node | Docker / kubelet / kube-proxy / flanneld / etcd |
3 Installation Process
After a few hours of work, the deployment scripts for this article were completed. The installation supports an arbitrary number of nodes and is carried out by three scripts.
3.1 Initialization Script
```bash
#!/bin/bash
declare -A HostIP EtcdIP
HostIP=( [gysl-master]='10.1.1.60' [gysl-node1]='10.1.1.61' [gysl-node2]='10.1.1.62' [gysl-node3]='10.1.1.63' )
EtcdIP=( [etcd-master]='10.1.1.60' [etcd-01]='10.1.1.61' [etcd-02]='10.1.1.62' [etcd-03]='10.1.1.63' )
BinaryDir='/usr/local/bin'
KubeConf='/etc/kubernetes/conf.d'
KubeCA='/etc/kubernetes/ca.d'
EtcdConf='/etc/etcd/conf.d'
EtcdCA='/etc/etcd/ca.d'
FlanneldConf='/etc/flanneld'

mkdir -p {${KubeConf},${KubeCA},${EtcdConf},${EtcdCA},${FlanneldConf}}

# Add every host to /etc/hosts. (The heredoc body was lost in the source;
# writing "IP hostname" pairs is the evident intent of the loop.)
for hostname in ${!HostIP[@]}
  do
    cat>>/etc/hosts<<EOF
${HostIP[$hostname]} ${hostname}
EOF
  done

# Install Docker CE. (The command adding the docker-ce yum repository was lost
# in the source; that a repo file is added here is implied by the
# "rm -f /etc/yum.repos.d/docker-ce.repo" below. The mirror URL is an assumption.)
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo >&/dev/null
if [ $? -eq 0 ] ; then
    yum remove docker \
        docker-client \
        docker-client-latest \
        docker-common \
        docker-latest \
        docker-latest-logrotate \
        docker-logrotate \
        docker-selinux \
        docker-engine-selinux \
        docker-engine >&/dev/null
    yum list docker-ce --showduplicates|grep "^doc"|sort -r
    yum -y install docker-ce-18.09.3-3.el7
    rm -f /etc/yum.repos.d/docker-ce.repo
    systemctl enable docker --now && systemctl status docker
else
    echo "Install failed! Please try again! "
    exit 110
fi

# Modify related kernel parameters. (Heredoc body lost in the source; the usual
# bridge/forwarding settings for Docker and kube-proxy are assumed.)
cat>/etc/sysctl.d/docker.conf<<EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
EOF
sysctl -p /etc/sysctl.d/docker.conf >&/dev/null

# Turn off and disable the firewalld.
systemctl stop firewalld
systemctl disable firewalld

# Disable the SELinux.
sed -i.bak 's/=enforcing/=disabled/' /etc/selinux/config

# Disable the swap.
sed -i.bak 's/^.*swap/#&/g' /etc/fstab

# Install EPEL/vim/git.
yum -y install epel-release vim git tree
yum repolist

# Alias vim. (Heredoc body lost in the source; an alias of vi to vim is assumed.)
cat>/etc/profile.d/vim.sh<<EOF
alias vi='vim'
EOF
echo "syntax on">>/etc/vimrc

# Reboot the machine.
reboot
```
This script must be executed on every node. Note that it reboots the machine when it finishes.
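
One possible way to distribute and run the initialization script on all four machines from the master is sketched below. This is not part of the original article: the file name init.sh and the use of password-based SSH are assumptions (the root SSH keys are only set up later, by the installation script).

```bash
# Push the init script to every host and run it there.
# You will be prompted for each root password unless SSH keys already exist.
declare -A HostIP=( [gysl-master]='10.1.1.60' [gysl-node1]='10.1.1.61' [gysl-node2]='10.1.1.62' [gysl-node3]='10.1.1.63' )
for ip in "${HostIP[@]}"; do
    scp init.sh root@"${ip}":/tmp/init.sh
    ssh root@"${ip}" 'bash /tmp/init.sh'   # the script reboots the node at the end
done
```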
3.2 Installation Script
The installation script is long, so it is omitted here; the log below is provided for reference and to illustrate the approach. The script only needs to be executed on the Master node, and no Internet connection is required during installation.
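Although the script itself is omitted, the systemd units it distributes can be read directly out of the log. The sketch below shows how such a script might write the master's etcd unit: every etcd flag is copied verbatim from the command line visible in the log, while the heredoc wrapper, Type=notify, and the Restart policy are assumptions of mine, not the author's exact file. The real script evidently templates the member name and IP per node (via the etcd.conf file it copies), which is not reproduced here.

```bash
# Hypothetical reconstruction of the master's etcd unit (flags taken from the log).
cat >/usr/lib/systemd/system/etcd.service <<'EOF'
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
ExecStart=/usr/local/bin/etcd \
  --name=etcd-master \
  --data-dir=/var/lib/etcd/default.etcd \
  --listen-peer-urls=https://10.1.1.60:2380 \
  --listen-client-urls=https://10.1.1.60:2379,http://127.0.0.1:2379 \
  --advertise-client-urls=https://10.1.1.60:2379 \
  --initial-advertise-peer-urls=https://10.1.1.60:2380 \
  --initial-cluster=etcd-master=https://10.1.1.60:2380,etcd-01=https://10.1.1.61:2380,etcd-02=https://10.1.1.62:2380,etcd-03=https://10.1.1.63:2380 \
  --initial-cluster-token=etcd-cluster \
  --initial-cluster-state=new \
  --cert-file=/etc/etcd/ca.d/server.pem \
  --key-file=/etc/etcd/ca.d/server-key.pem \
  --peer-cert-file=/etc/etcd/ca.d/server.pem \
  --peer-key-file=/etc/etcd/ca.d/server-key.pem \
  --trusted-ca-file=/etc/etcd/ca.d/ca.pem \
  --peer-trusted-ca-file=/etc/etcd/ca.d/ca.pem
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload && systemctl enable etcd --now
```

The installation log: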
```
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:rJdnEzx5GyWX9YCxq77ZMc+FCabCqA+3FwmS7LnF9qo Kubernetes
The key's randomart image is:
+---[RSA 1024]----+
|      .o.       .|
|        ..     +.|
|   . . o      + .|
|  + .. . .     = |
| . +   .S.=   *  |
|  o ++o. B   + o |
| .++.=.* +   o  .|
|  .+ oo= + =  .  |
|   Eo+o +..   o  |
+----[SHA256]-----+
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '10.1.1.62 (10.1.1.62)' can't be established.
ECDSA key fingerprint is SHA256:B4e7Gq9wcgr5N6ys8U72NEhNWxIFrvng5eI7GAXLf6s.
ECDSA key fingerprint is MD5:ea:33:04:40:f8:31:a2:d0:91:c4:b4:37:48:fa:51:d6.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@10.1.1.62's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@10.1.1.62'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '10.1.1.63 (10.1.1.63)' can't be established.
ECDSA key fingerprint is SHA256:B4e7Gq9wcgr5N6ys8U72NEhNWxIFrvng5eI7GAXLf6s.
ECDSA key fingerprint is MD5:ea:33:04:40:f8:31:a2:d0:91:c4:b4:37:48:fa:51:d6.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@10.1.1.63's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@10.1.1.63'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '10.1.1.61 (10.1.1.61)' can't be established.
ECDSA key fingerprint is SHA256:B4e7Gq9wcgr5N6ys8U72NEhNWxIFrvng5eI7GAXLf6s.
ECDSA key fingerprint is MD5:ea:33:04:40:f8:31:a2:d0:91:c4:b4:37:48:fa:51:d6.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@10.1.1.61's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@10.1.1.61'"
and check to make sure that only the key(s) you wanted were added.

etcd-v3.3.12-linux-amd64/
etcd-v3.3.12-linux-amd64/README.md
etcd-v3.3.12-linux-amd64/Documentation/
[... remaining Documentation/ entries of the etcd archive omitted ...]
etcd-v3.3.12-linux-amd64/README-etcdctl.md
etcd-v3.3.12-linux-amd64/etcdctl
etcd-v3.3.12-linux-amd64/READMEv2-etcdctl.md
etcd-v3.3.12-linux-amd64/etcd
flanneld
mk-docker-opts.sh
README.md
kubernetes/
kubernetes/server/
kubernetes/server/bin/
kubernetes/server/bin/kube-controller-manager.docker_tag
kubernetes/server/bin/kube-apiserver.tar
kubernetes/server/bin/kube-proxy
kubernetes/server/bin/cloud-controller-manager.docker_tag
kubernetes/server/bin/mounter
kubernetes/server/bin/kube-proxy.docker_tag
kubernetes/server/bin/kubelet
kubernetes/server/bin/kube-scheduler.docker_tag
kubernetes/server/bin/kube-controller-manager.tar
kubernetes/server/bin/kubectl
kubernetes/server/bin/kube-apiserver
kubernetes/server/bin/kube-apiserver.docker_tag
kubernetes/server/bin/kube-controller-manager
kubernetes/server/bin/kube-proxy.tar
kubernetes/server/bin/cloud-controller-manager
kubernetes/server/bin/kube-scheduler.tar
kubernetes/server/bin/apiextensions-apiserver
kubernetes/server/bin/kubeadm
kubernetes/server/bin/hyperkube
kubernetes/server/bin/kube-scheduler
kubernetes/server/bin/cloud-controller-manager.tar
kubernetes/addons/
kubernetes/kubernetes-src.tar.gz
kubernetes/LICENSES
2019/03/31 20:34:23 [INFO] generating a new CA key and certificate from CSR
2019/03/31 20:34:23 [INFO] generate received request
2019/03/31 20:34:23 [INFO] received CSR
2019/03/31 20:34:23 [INFO] generating key: rsa-2048
2019/03/31 20:34:23 [INFO] encoded CSR
2019/03/31 20:34:23 [INFO] signed certificate with serial number 316253512009054883826466120107550244311105093255
2019/03/31 20:34:23 [INFO] generate received request
2019/03/31 20:34:23 [INFO] received CSR
2019/03/31 20:34:23 [INFO] generating key: rsa-2048
2019/03/31 20:34:23 [INFO] encoded CSR
2019/03/31 20:34:23 [INFO] signed certificate with serial number 288189004496636237074496723049170901716100041831
2019/03/31 20:34:23 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
/etc/etcd/ca.d
├── ca-config.json
├── ca.csr
├── ca-csr.json
├── ca-key.pem
├── ca.pem
├── server.csr
├── server-csr.json
├── server-key.pem
└── server.pem

0 directories, 9 files
ca-key.pem             100% 1675    27.1KB/s   00:00
ca.pem                 100% 1265     2.2MB/s   00:00
server-key.pem         100% 1679   639.6KB/s   00:00
server.pem             100% 1346     1.9MB/s   00:00
etcd                   100%   18MB  13.1MB/s   00:01
etcdctl                100%   15MB  41.5MB/s   00:00
etcd.service           100%  994     1.0MB/s   00:00
etcd.conf              100%  520   527.8KB/s   00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
ca-key.pem             100% 1675    92.3KB/s   00:00
ca.pem                 100% 1265     1.7MB/s   00:00
server-key.pem         100% 1679   328.8KB/s   00:00
server.pem             100% 1346     1.6MB/s   00:00
etcd                   100%   18MB  40.7MB/s   00:00
etcdctl                100%   15MB  46.0MB/s   00:00
etcd.service           100%  994     1.0MB/s   00:00
etcd.conf              100%  520   838.6KB/s   00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
Job for etcd.service failed because a timeout was exceeded. See "systemctl status etcd.service" and "journalctl -xe" for details.
ca-key.pem             100% 1675   106.9KB/s   00:00
ca.pem                 100% 1265     1.0MB/s   00:00
server-key.pem         100% 1679     1.3MB/s   00:00
server.pem             100% 1346     1.5MB/s   00:00
etcd                   100%   18MB  31.4MB/s   00:00
etcdctl                100%   15MB  37.7MB/s   00:00
etcd.service           100%  994   916.5KB/s   00:00
etcd.conf              100%  520   487.5KB/s   00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:37:32 CST; 28ms ago
 Main PID: 7373 (etcd)
    Tasks: 7
   Memory: 9.1M
   CGroup: /system.slice/etcd.service
           └─7373 /usr/local/bin/etcd --name=etcd-01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://10.1.1.61:2380 --listen-client-urls=https://10.1.1.61:2379,http://127.0.0.1:2379 --advertise-client-urls=https://10.1.1.61:2379 --initial-advertise-peer-urls=https://10.1.1.61:2380 --initial-cluster=etcd-master=https://10.1.1.60:2380,etcd-01=https://10.1.1.61:2380,etcd-02=https://10.1.1.62:2380,etcd-03=https://10.1.1.63:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/etc/etcd/ca.d/server.pem --key-file=/etc/etcd/ca.d/server-key.pem --peer-cert-file=/etc/etcd/ca.d/server.pem --peer-key-file=/etc/etcd/ca.d/server-key.pem --trusted-ca-file=/etc/etcd/ca.d/ca.pem --peer-trusted-ca-file=/etc/etcd/ca.d/ca.pem

3月 31 20:37:32 gysl-node1 etcd[7373]: 1c3555118a39401e initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 2 active peer(s)
3月 31 20:37:32 gysl-node1 etcd[7373]: raft.node: 1c3555118a39401e elected leader 63ac3c747757aa28 at term 138
3月 31 20:37:32 gysl-node1 etcd[7373]: published {Name:etcd-01 ClientURLs:[https://10.1.1.61:2379]} to cluster 575c8b9e68fd927d
3月 31 20:37:32 gysl-node1 etcd[7373]: ready to serve client requests
3月 31 20:37:32 gysl-node1 etcd[7373]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
3月 31 20:37:32 gysl-node1 etcd[7373]: ready to serve client requests
3月 31 20:37:32 gysl-node1 systemd[1]: Started Etcd Server.
3月 31 20:37:32 gysl-node1 etcd[7373]: serving client requests on 10.1.1.61:2379
3月 31 20:37:32 gysl-node1 etcd[7373]: set the initial cluster version to 3.0
3月 31 20:37:32 gysl-node1 etcd[7373]: enabled capabilities for version 3.0
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:37:34 CST; 124ms ago
 Main PID: 7551 (etcd)
    Tasks: 7
   Memory: 10.3M
   CGroup: /system.slice/etcd.service
           └─7551 /usr/local/bin/etcd --name=etcd-master --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://10.1.1.60:2380 --listen-client-urls=https://10.1.1.60:2379,http://127.0.0.1:2379 --advertise-client-urls=https://10.1.1.60:2379 --initial-advertise-peer-urls=https://10.1.1.60:2380 --initial-cluster=etcd-master=https://10.1.1.60:2380,etcd-01=https://10.1.1.61:2380,etcd-02=https://10.1.1.62:2380,etcd-03=https://10.1.1.63:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/etc/etcd/ca.d/server.pem --key-file=/etc/etcd/ca.d/server-key.pem --peer-cert-file=/etc/etcd/ca.d/server.pem --peer-key-file=/etc/etcd/ca.d/server-key.pem --trusted-ca-file=/etc/etcd/ca.d/ca.pem --peer-trusted-ca-file=/etc/etcd/ca.d/ca.pem

3月 31 20:37:34 gysl-master etcd[7551]: established a TCP streaming connection with peer 63ac3c747757aa28 (stream Message reader)
3月 31 20:37:34 gysl-master etcd[7551]: established a TCP streaming connection with peer 1c3555118a39401e (stream Message reader)
3月 31 20:37:34 gysl-master etcd[7551]: published {Name:etcd-master ClientURLs:[https://10.1.1.60:2379]} to cluster 575c8b9e68fd927d
3月 31 20:37:34 gysl-master etcd[7551]: ready to serve client requests
3月 31 20:37:34 gysl-master etcd[7551]: serving client requests on 10.1.1.60:2379
3月 31 20:37:34 gysl-master etcd[7551]: ready to serve client requests
3月 31 20:37:34 gysl-master etcd[7551]: serving insecure client requests on 127.0.0.1:2379, this is strongly discouraged!
3月 31 20:37:34 gysl-master etcd[7551]: 78df1ab24a6f1302 initialzed peer connection; fast-forwarding 8 ticks (election ticks 10) with 3 active peer(s)
3月 31 20:37:34 gysl-master systemd[1]: Started Etcd Server.
3月 31 20:37:34 gysl-master etcd[7551]: established a TCP streaming connection with peer 76bcb3b85e42210d (stream Message reader)
Please wait a moment!
2019/03/31 20:38:34 [INFO] generating a new CA key and certificate from CSR
2019/03/31 20:38:34 [INFO] generate received request
2019/03/31 20:38:34 [INFO] received CSR
2019/03/31 20:38:34 [INFO] generating key: rsa-2048
2019/03/31 20:38:34 [INFO] encoded CSR
2019/03/31 20:38:34 [INFO] signed certificate with serial number 284879897535931954074635242912207100624264127544
2019/03/31 20:38:34 [INFO] generate received request
2019/03/31 20:38:34 [INFO] received CSR
2019/03/31 20:38:34 [INFO] generating key: rsa-2048
2019/03/31 20:38:34 [INFO] encoded CSR
2019/03/31 20:38:34 [INFO] signed certificate with serial number 163588537762519336822862885460408698694735647771
2019/03/31 20:38:34 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2019/03/31 20:38:34 [INFO] generate received request
2019/03/31 20:38:34 [INFO] received CSR
2019/03/31 20:38:34 [INFO] generating key: rsa-2048
2019/03/31 20:38:35 [INFO] encoded CSR
2019/03/31 20:38:35 [INFO] signed certificate with serial number 269430846139878968754015022650791204259891937310
2019/03/31 20:38:35 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
/etc/kubernetes/ca.d
├── ca-config.json
├── ca.csr
├── ca-csr.json
├── ca-key.pem
├── ca.pem
├── kube-proxy.csr
├── kube-proxy-csr.json
├── kube-proxy-key.pem
├── kube-proxy.pem
├── server.csr
├── server-csr.json
├── server-key.pem
└── server.pem

0 directories, 13 files
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:38:35 CST; 41ms ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 7628 (kube-apiserver)
    Tasks: 1
   Memory: 14.0M
   CGroup: /system.slice/kube-apiserver.service
           └─7628 /usr/local/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://10.1.1.60:2379,https://10.1.1.61:2379,https://10.1.1.62:2379,https://10.1.1.63:2379 --bind-address=10.1.1.60 --secure-port=6443 --advertise-address=10.1.1.60 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --enable-bootstrap-token-auth --token-auth-file=/etc/kubernetes/conf.d/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/etc/kubernetes/ca.d/server.pem --tls-private-key-file=/etc/kubernetes/ca.d/server-key.pem --client-ca-file=/etc/kubernetes/ca.d/ca.pem --service-account-key-file=/etc/kubernetes/ca.d/ca-key.pem --etcd-cafile=/etc/etcd/ca.d/ca.pem --etcd-certfile=/etc/etcd/ca.d/server.pem --etcd-keyfile=/etc/etcd/ca.d/server-key.pem

3月 31 20:38:35 gysl-master systemd[1]: Started Kubernetes API Server.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:38:36 CST; 20s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 7673 (kube-scheduler)
    Tasks: 7
   Memory: 47.5M
   CGroup: /system.slice/kube-scheduler.service
           └─7673 /usr/local/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect

3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.299502 7673 shared_informer.go:123] caches populated
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.399931 7673 shared_informer.go:123] caches populated
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.500642 7673 shared_informer.go:123] caches populated
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.601146 7673 shared_informer.go:123] caches populated
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.604604 7673 controller_utils.go:1027] Waiting for caches to sync for scheduler controller
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.705500 7673 shared_informer.go:123] caches populated
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.705529 7673 controller_utils.go:1034] Caches are synced for scheduler controller
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.705631 7673 leaderelection.go:217] attempting to acquire leader lease kube-system/kube-scheduler...
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.737674 7673 leaderelection.go:227] successfully acquired lease kube-system/kube-scheduler
3月 31 20:38:43 gysl-master kube-scheduler[7673]: I0331 20:38:43.838862 7673 shared_informer.go:123] caches populated
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:38:56 CST; 20s ago
     Docs: https://github.com/kubernetes/kubernetes
 Main PID: 7725 (kube-controller)
    Tasks: 6
   Memory: 132.3M
   CGroup: /system.slice/kube-controller-manager.service
           └─7725 /usr/local/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/etc/kubernetes/ca.d/ca.pem --cluster-signing-key-file=/etc/kubernetes/ca.d/ca-key.pem --root-ca-file=/etc/kubernetes/ca.d/ca.pem --service-account-private-key-file=/etc/kubernetes/ca.d/ca-key.pem

3月 31 20:38:59 gysl-master kube-controller-manager[7725]: I0331 20:38:59.915581 7725 request.go:530] Throttling request took 1.356935667s, request: GET:http://127.0.0.1:8080/apis/scheduling.k8s.io/v1?timeout=32s
3月 31 20:38:59 gysl-master kube-controller-manager[7725]: I0331 20:38:59.965276 7725 request.go:530] Throttling request took 1.406608026s, request: GET:http://127.0.0.1:8080/apis/scheduling.k8s.io/v1beta1?timeout=32s
3月 31 20:39:00 gysl-master kube-controller-manager[7725]: I0331 20:39:00.015978 7725 request.go:530] Throttling request took 1.457255375s, request: GET:http://127.0.0.1:8080/apis/coordination.k8s.io/v1?timeout=32s
3月 31 20:39:00 gysl-master kube-controller-manager[7725]: I0331 20:39:00.065993 7725 request.go:530] Throttling request took 1.507246887s, request: GET:http://127.0.0.1:8080/apis/coordination.k8s.io/v1beta1?timeout=32s
3月 31 20:39:00 gysl-master kube-controller-manager[7725]: I0331 20:39:00.067050 7725 resource_quota_controller.go:427] syncing resource quota controller with updated resources from discovery: map[/v1, Resource=configmaps:{} /v1, Resource=endpoints:{} /v1, Resource=events:{} /v1, Resource=limitranges:{} /v1, Resource=persistentvolumeclaims:{} /v1, Resource=pods:{} /v1, Resource=podtemplates:{} /v1, Resource=replicationcontrollers:{} /v1, Resource=resourcequotas:{} /v1, Resource=secrets:{} /v1, Resource=serviceaccounts:{} /v1, Resource=services:{} apps/v1, Resource=controllerrevisions:{} apps/v1, Resource=daemonsets:{} apps/v1, Resource=deployments:{} apps/v1, Resource=replicasets:{} apps/v1, Resource=statefulsets:{} autoscaling/v1, Resource=horizontalpodautoscalers:{} batch/v1, Resource=jobs:{} batch/v1beta1, Resource=cronjobs:{} coordination.k8s.io/v1, Resource=leases:{} events.k8s.io/v1beta1, Resource=events:{} extensions/v1beta1, Resource=daemonsets:{} extensions/v1beta1, Resource=deployments:{} extensions/v1beta1, Resource=ingresses:{} extensions/v1beta1, Resource=networkpolicies:{} extensions/v1beta1, Resource=replicasets:{} networking.k8s.io/v1, Resource=networkpolicies:{} networking.k8s.io/v1beta1, Resource=ingresses:{} policy/v1beta1, Resource=poddisruptionbudgets:{} rbac.authorization.k8s.io/v1, Resource=rolebindings:{} rbac.authorization.k8s.io/v1, Resource=roles:{}]
3月 31 20:39:00 gysl-master kube-controller-manager[7725]: I0331 20:39:00.067168 7725 resource_quota_monitor.go:180] QuotaMonitor unable to use a shared informer for resource "extensions/v1beta1, Resource=networkpolicies": no informer found for extensions/v1beta1, Resource=networkpolicies
3月 31 20:39:00 gysl-master kube-controller-manager[7725]: I0331 20:39:00.067189 7725 resource_quota_monitor.go:243] quota synced monitors; added 0, kept 30, removed 0
3月 31 20:39:00 gysl-master kube-controller-manager[7725]: E0331 20:39:00.067197 7725 resource_quota_controller.go:437] failed to sync resource monitors: couldn't start monitor for resource "extensions/v1beta1, Resource=networkpolicies": unable to monitor quota for resource "extensions/v1beta1, Resource=networkpolicies"
3月 31 20:39:13 gysl-master kube-controller-manager[7725]: I0331 20:39:13.677245 7725 reflector.go:235] k8s.io/client-go/informers/factory.go:133: forcing resync
3月 31 20:39:14 gysl-master kube-controller-manager[7725]: I0331 20:39:14.215322 7725 pv_controller_base.go:407] resyncing PV controller
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
member 1c3555118a39401e is healthy: got healthy result from https://10.1.1.61:2379
member 63ac3c747757aa28 is healthy: got healthy result from https://10.1.1.63:2379
member 76bcb3b85e42210d is healthy: got healthy result from https://10.1.1.62:2379
member 78df1ab24a6f1302 is healthy: got healthy result from https://10.1.1.60:2379
cluster is healthy
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
kubelet                100%  122MB  26.1MB/s   00:04
kube-proxy             100%   35MB  21.9MB/s   00:01
flanneld               100%   34MB  20.5MB/s   00:01
mk-docker-opts.sh      100% 2139   916.8KB/s   00:00
flanneld.conf          100%  247    55.8KB/s   00:00
flanneld.service       100%  389    82.7KB/s   00:00
kubelet.yaml           100%  263   319.4KB/s   00:00
kubelet.conf           100%  365   326.0KB/s   00:00
kube-proxy.conf        100%  158   184.0KB/s   00:00
kubelet.service        100%  267   234.2KB/s   00:00
kube-proxy.service     100%  234   130.5KB/s   00:00
bootstrap.kubeconfig   100% 2163     1.5MB/s   00:00
kube-proxy.kubeconfig  100% 6265     4.4MB/s   00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:27 CST; 430ms ago
  Process: 7536 ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 7508 (flanneld)
    Tasks: 7
   Memory: 6.7M
   CGroup: /system.slice/flanneld.service
           └─7508 /usr/local/bin/flanneld --ip-masq

3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.720837 7508 iptables.go:167] Deleting iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.721919 7508 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.24.0/24 -j RETURN
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.722994 7508 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.724549 7508 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.730116 7508 iptables.go:155] Adding iptables rule: -d 172.17.0.0/16 -j ACCEPT
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.737143 7508 main.go:429] Waiting for 22h69m59.914613166s to renew lease
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.737262 7508 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.744276 7508 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.24.0/24 -j RETURN
3月 31 20:39:27 gysl-node2 systemd[1]: Started Flanneld overlay address etcd agent.
3月 31 20:39:27 gysl-node2 flanneld[7508]: I0331 20:39:27.766442 7508 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:28 CST; 10ms ago
     Docs: https://docs.docker.com
 Main PID: 7579 (dockerd)
    Tasks: 8
   Memory: 32.1M
   CGroup: /system.slice/docker.service
           └─7579 /usr/bin/dockerd --bip=172.17.24.1/24 --ip-masq=false --mtu=1450 -H fd:// --containerd=/run/containerd/containerd.sock

3月 31 20:39:27 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:27.843896719+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
3月 31 20:39:27 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:27.843917442+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420154920, CONNECTING" module=grpc
3月 31 20:39:27 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:27.843973658+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420154920, READY" module=grpc
3月 31 20:39:27 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:27.844332744+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
3月 31 20:39:27 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:27.848229255+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
3月 31 20:39:27 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:27.848828116+08:00" level=info msg="Loading containers: start."
3月 31 20:39:28 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:28.081132437+08:00" level=info msg="Loading containers: done."
3月 31 20:39:28 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:28.167227705+08:00" level=info msg="Docker daemon" commit=774a1f4 graphdriver(s)=overlay2 version=18.09.3
3月 31 20:39:28 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:28.167281411+08:00" level=info msg="Daemon has completed initialization"
3月 31 20:39:28 gysl-node2 dockerd[7579]: time="2019-03-31T20:39:28.175538228+08:00" level=info msg="API listen on /var/run/docker.sock"
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:28 CST; 61ms ago
 Main PID: 7727 (kubelet)
    Tasks: 1
   Memory: 2.1M
   CGroup: /system.slice/kubelet.service
           └─7727 /usr/local/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.1.1.62 --kubeconfig=/etc/kubernetes/conf.d/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/conf.d/bootstrap.kubeconfig --config=/etc/kubernetes/conf.d/kubelet.yaml --cert-dir=/etc/kubernetes/ca.d --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

3月 31 20:39:28 gysl-node2 systemd[1]: Started Kubernetes Kubelet.
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:28 CST; 7ms ago
 Main PID: 7728 (systemd)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/kube-proxy.service
           └─7728 /usr/lib/systemd/systemd --switched-root --system --deserialize 22
kubelet                100%  122MB  27.7MB/s   00:04
kube-proxy             100%   35MB  13.8MB/s   00:02
flanneld               100%   34MB  33.6MB/s   00:01
mk-docker-opts.sh      100% 2139     1.1MB/s   00:00
flanneld.conf          100%  247   225.5KB/s   00:00
flanneld.service       100%  389   357.8KB/s   00:00
kubelet.yaml           100%  263   193.7KB/s   00:00
kubelet.conf           100%  365   331.3KB/s   00:00
kube-proxy.conf        100%  158   130.4KB/s   00:00
kubelet.service        100%  267   295.5KB/s   00:00
kube-proxy.service     100%  234   198.3KB/s   00:00
bootstrap.kubeconfig   100% 2163     2.0MB/s   00:00
kube-proxy.kubeconfig  100% 6265     3.7MB/s   00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:39 CST; 391ms ago
  Process: 7534 ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 7502 (flanneld)
    Tasks: 7
   Memory: 9.3M
   CGroup: /system.slice/flanneld.service
           └─7502 /usr/local/bin/flanneld --ip-masq

3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.088309 7502 iptables.go:145] Some iptables rules are missing; deleting and recreating rules
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.088315 7502 iptables.go:167] Deleting iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.091984 7502 iptables.go:167] Deleting iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.095011 7502 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.100.0/24 -j RETURN
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.098419 7502 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.099751 7502 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.103532 7502 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.106520 7502 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.100.0/24 -j RETURN
3月 31 20:39:39 gysl-node3 flanneld[7502]: I0331 20:39:39.113480 7502 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
3月 31 20:39:39 gysl-node3 systemd[1]: Started Flanneld overlay address etcd agent.
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:39 CST; 10ms ago
     Docs: https://docs.docker.com
 Main PID: 7573 (dockerd)
    Tasks: 8
   Memory: 31.9M
   CGroup: /system.slice/docker.service
           └─7573 /usr/bin/dockerd --bip=172.17.100.1/24 --ip-masq=false --mtu=1450 -H fd:// --containerd=/run/containerd/containerd.sock

3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.230510356+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.230556184+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420154910, CONNECTING" module=grpc
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.230711652+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420154910, READY" module=grpc
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.231101930+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.234478410+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.234950238+08:00" level=info msg="Loading containers: start."
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.406988224+08:00" level=info msg="Loading containers: done."
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.497837879+08:00" level=info msg="Docker daemon" commit=774a1f4 graphdriver(s)=overlay2 version=18.09.3
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.497901197+08:00" level=info msg="Daemon has completed initialization"
3月 31 20:39:39 gysl-node3 dockerd[7573]: time="2019-03-31T20:39:39.502801194+08:00" level=info msg="API listen on /var/run/docker.sock"
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:39 CST; 58ms ago
 Main PID: 7721 (kubelet)
    Tasks: 1
   Memory: 4.2M
   CGroup: /system.slice/kubelet.service
           └─7721 /usr/local/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.1.1.63 --kubeconfig=/etc/kubernetes/conf.d/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/conf.d/bootstrap.kubeconfig --config=/etc/kubernetes/conf.d/kubelet.yaml --cert-dir=/etc/kubernetes/ca.d --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

3月 31 20:39:39 gysl-node3 systemd[1]: Started Kubernetes Kubelet.
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:39 CST; 21ms ago
 Main PID: 7722 (systemd)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/kube-proxy.service
           └─7722 /usr/lib/systemd/systemd --switched-root --system --deserialize 22

3月 31 20:39:39 gysl-node3 systemd[1]: Started Kubernetes Proxy.
kubelet                100%  122MB  32.7MB/s   00:03
kube-proxy             100%   35MB  14.4MB/s   00:02
flanneld               100%   34MB  30.4MB/s   00:01
mk-docker-opts.sh      100% 2139     3.5MB/s   00:00
flanneld.conf          100%  247   227.9KB/s   00:00
flanneld.service       100%  389   359.0KB/s   00:00
kubelet.yaml           100%  263   197.9KB/s   00:00
kubelet.conf           100%  365   517.1KB/s   00:00
kube-proxy.conf        100%  158   244.5KB/s   00:00
kubelet.service        100%  267   379.4KB/s   00:00
kube-proxy.service     100%  234   324.5KB/s   00:00
bootstrap.kubeconfig   100% 2163   429.6KB/s   00:00
kube-proxy.kubeconfig  100% 6265     4.7MB/s   00:00
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
● flanneld.service - Flanneld overlay address etcd agent
   Loaded: loaded (/usr/lib/systemd/system/flanneld.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:49 CST; 319ms ago
  Process: 7580 ExecStartPost=/usr/local/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env (code=exited, status=0/SUCCESS)
 Main PID: 7550 (flanneld)
    Tasks: 7
   Memory: 6.7M
   CGroup: /system.slice/flanneld.service
           └─7550 /usr/local/bin/flanneld --ip-masq

3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.414921 7550 vxlan_network.go:60] watching for new subnet leases
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.415303 7550 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.96.0/24 -j RETURN
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.416682 7550 iptables.go:167] Deleting iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.418320 7550 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 -d 172.17.0.0/16 -j RETURN
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.438055 7550 iptables.go:155] Adding iptables rule: -d 172.17.0.0/16 -j ACCEPT
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.443066 7550 main.go:429] Waiting for 22h69m59.922013672s to renew lease
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.443213 7550 iptables.go:155] Adding iptables rule: -s 172.17.0.0/16 ! -d 224.0.0.0/4 -j MASQUERADE
3月 31 20:39:49 gysl-node1 systemd[1]: Started Flanneld overlay address etcd agent.
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.459736 7550 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.96.0/24 -j RETURN
3月 31 20:39:49 gysl-node1 flanneld[7550]: I0331 20:39:49.469674 7550 iptables.go:155] Adding iptables rule: ! -s 172.17.0.0/16 -d 172.17.0.0/16 -j MASQUERADE
● docker.service - Docker Application Container Engine
   Loaded: loaded (/usr/lib/systemd/system/docker.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:49 CST; 9ms ago
     Docs: https://docs.docker.com
 Main PID: 7622 (dockerd)
    Tasks: 8
   Memory: 28.6M
   CGroup: /system.slice/docker.service
           └─7622 /usr/bin/dockerd --bip=172.17.96.1/24 --ip-masq=false --mtu=1450 -H fd:// --containerd=/run/containerd/containerd.sock

3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.549105373+08:00" level=info msg="ccResolverWrapper: sending new addresses to cc: [{unix:///run/containerd/containerd.sock 0 }]" module=grpc
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.549111902+08:00" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.549148708+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420154bb0, CONNECTING" module=grpc
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.549210269+08:00" level=info msg="pickfirstBalancer: HandleSubConnStateChange: 0xc420154bb0, READY" module=grpc
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.549578647+08:00" level=info msg="[graphdriver] using prior storage driver: overlay2"
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.554893473+08:00" level=info msg="Graph migration to content-addressability took 0.00 seconds"
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.555866350+08:00" level=info msg="Loading containers: start."
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.695192119+08:00" level=info msg="Loading containers: done."
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.729225641+08:00" level=info msg="Docker daemon" commit=774a1f4 graphdriver(s)=overlay2 version=18.09.3
3月 31 20:39:49 gysl-node1 dockerd[7622]: time="2019-03-31T20:39:49.729282016+08:00" level=info msg="Daemon has completed initialization"
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:50 CST; 50ms ago
 Main PID: 7770 (kubelet)
    Tasks: 1
   Memory: 2.1M
   CGroup: /system.slice/kubelet.service
           └─7770 /usr/local/bin/kubelet --logtostderr=true --v=4 --hostname-override=10.1.1.61 --kubeconfig=/etc/kubernetes/conf.d/kubelet.kubeconfig --bootstrap-kubeconfig=/etc/kubernetes/conf.d/bootstrap.kubeconfig --config=/etc/kubernetes/conf.d/kubelet.yaml --cert-dir=/etc/kubernetes/ca.d --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0

3月 31 20:39:50 gysl-node1 systemd[1]: Started Kubernetes Kubelet.
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since 日 2019-03-31 20:39:50 CST; 20ms ago
 Main PID: 7771 (systemd)
    Tasks: 0
   Memory: 0B
   CGroup: /system.slice/kube-proxy.service
           └─7771 /usr/lib/systemd/systemd --switched-root --system --deserialize 22

3月 31 20:39:50 gysl-node1 systemd[1]: Started Kubernetes Proxy.
[root@gysl-master ~]# kubectl get cs,nodes
NAME                                 STATUS    MESSAGE             ERROR
componentstatus/scheduler            Healthy   ok
componentstatus/controller-manager   Healthy   ok
componentstatus/etcd-0               Healthy   {"health":"true"}
componentstatus/etcd-2               Healthy   {"health":"true"}
componentstatus/etcd-1               Healthy   {"health":"true"}
componentstatus/etcd-3               Healthy   {"health":"true"}

NAME             STATUS   ROLES   AGE     VERSION
node/10.1.1.61   Ready    node    4m23s   v1.14.0
node/10.1.1.62   Ready    node    4m22s   v1.14.0
node/10.1.1.63   Ready    node    4m22s   v1.14.0
```
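
Two details of the log are worth calling out. First, the first two etcd.service starts fail with a timeout: this is expected with a sequential rollout, because a member started with --initial-cluster-state=new cannot report ready until enough peers are up to form a quorum, so systemd's start timeout fires on the early members; once the remaining members come up, a leader is elected and all four members are healthy, as the statuses above show. Second, the cluster can be re-checked at any time from the master. A minimal sketch (certificate paths as used in the log; etcd v3.3's etcdctl defaults to the v2 API, which provides cluster-health):

```bash
# Verify the etcd cluster and the Kubernetes control plane after installation.
etcdctl --ca-file=/etc/etcd/ca.d/ca.pem \
        --cert-file=/etc/etcd/ca.d/server.pem \
        --key-file=/etc/etcd/ca.d/server-key.pem \
        --endpoints=https://10.1.1.60:2379 \
        cluster-health
kubectl get cs,nodes
```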
3.3 Rollback Script for Failed Installations
```bash
#!/bin/bash
declare -A HostIP EtcdIP
HostIP=( [gysl-master]='10.1.1.60' [gysl-node1]='10.1.1.61' [gysl-node2]='10.1.1.62' [gysl-node3]='10.1.1.63' )
EtcdIP=( [etcd-master]='10.1.1.60' [etcd-01]='10.1.1.61' [etcd-02]='10.1.1.62' [etcd-03]='10.1.1.63' )
BinaryDir='/usr/local/bin'
KubeConf='/etc/kubernetes/conf.d'
KubeCA='/etc/kubernetes/ca.d'
EtcdConf='/etc/etcd/conf.d'
EtcdCA='/etc/etcd/ca.d'
FlanneldConf='/etc/flanneld'

for node_ip in ${HostIP[@]}
  do
    if [ "${node_ip}" == "${HostIP[gysl-master]}" ] ; then
      # Local cleanup on the master.
      ps -ef|grep -e kube -e etcd -e flanneld|grep -v grep|awk '{print $2}'|xargs kill
      rm -rf {${KubeConf},${KubeCA},${EtcdConf},${EtcdCA},${FlanneldConf}}
      rm -rf ${BinaryDir}/*
    else
      # Remote cleanup on the nodes. Note the escaped \$2: the original used an
      # unescaped $2 inside double quotes, which the local shell would expand
      # to an empty string and break the remote awk.
      ssh root@${node_ip} "ps -ef|grep -e kube -e etcd -e flanneld|grep -v grep|awk '{print \$2}'|xargs kill"
      ssh root@${node_ip} "rm -rf {${KubeConf},${KubeCA},${EtcdConf},${EtcdCA},${FlanneldConf}}"
      ssh root@${node_ip} "rm -rf ${BinaryDir}/* && reboot"
    fi
  done
reboot
```
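
One limitation worth noting: the rollback script kills the processes and removes the binaries and configuration, but it leaves the systemd unit files and the multi-user.target symlinks that the installer created (visible as the "Created symlink ..." lines in the log). A possible supplement, run on each node, is sketched below; it is not part of the original article:

```bash
# Also disable and stop the units the installer enabled, and drop the unit files.
for svc in kubelet kube-proxy kube-apiserver kube-scheduler kube-controller-manager flanneld etcd; do
    systemctl disable --now "${svc}" 2>/dev/null
    rm -f "/usr/lib/systemd/system/${svc}.service"
done
systemctl daemon-reload
```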
4 Summary
Automating installations with scripts is a good habit that pays for itself many times over; it is worth cultivating in day-to-day work.