Complete K8s Single-Node Binary Deployment (Hands-On Essential!)
Published: 2025-02-05
Deployment steps:
1: Self-sign the etcd certificates
2: Deploy etcd
3: Install Docker on the nodes
4: Deploy Flannel (write the subnet into etcd first)
---------master----------
5: Self-sign the apiserver certificates
6: Deploy the apiserver component (token.csv)
7: Deploy the controller-manager (pointing at the apiserver certificates) and scheduler components
----------node----------
8: Generate the kubeconfig files (bootstrap.kubeconfig and kube-proxy.kubeconfig)
9: Deploy the kubelet component
10: Deploy the kube-proxy component
----------join the cluster----------
11: kubectl get csr && kubectl certificate approve — approve certificate issuance so the node joins the cluster
12: Add a node
13: Check the nodes with kubectl get node
Environment:
Master node:
CentOS 7-3: 192.168.18.128
Worker nodes:
CentOS 7-4: 192.168.18.148 (docker)
CentOS 7-5: 192.168.18.145 (docker)
Master 7-3:
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
[root@master k8s]# ls
etcd-cert  etcd.sh
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh
[root@master k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
[root@master k8s]# cd etcd-cert/
`Define the CA certificate`
cat > ca-config.json ...
cat > ca-csr.json ...
cat > server-csr.json ...
#The heredoc bodies of these three files were cut off in the original page; a typical version is sketched below.
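For reference, here is a minimal sketch of what the standard etcd-cert.sh writes at this point, followed by the two cfssl commands that produce ca.pem/ca-key.pem and server.pem/server-key.pem. The profile name, expiry, and the "names" fields are conventional placeholders from this deployment pattern, not the author's exact script; the hosts list must cover all three etcd member IPs.

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.18.128", "192.168.18.148", "192.168.18.145"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

#Generate the CA, then the server certificate signed by that CA
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server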
Upload the following three tarballs and unpack them:
[root@master etcd-cert]# ls
ca-config.json  etcd-cert.sh                          server-csr.json
ca.csr          etcd-v3.3.10-linux-amd64.tar.gz       server-key.pem
ca-csr.json     flannel-v0.10.0-linux-amd64.tar.gz    server.pem
ca-key.pem      kubernetes-server-linux-amd64.tar.gz
ca.pem          server.csr
[root@master etcd-cert]# mv *.tar.gz ../
[root@master etcd-cert]# cd ../
[root@master k8s]# ls
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
`Copy the certificates over`
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
`This command blocks while it waits for the other nodes to join`
[root@master k8s]# bash etcd.sh etcd01 192.168.18.128 etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.
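etcd.sh itself is not reproduced in this article. Judging from the config file edited on the nodes below and the flags visible in the process listing, it looks roughly like the following; treat this as a sketch of the structure, not the author's exact script:

#!/bin/bash
#Usage: etcd.sh <name> <ip> <peer list>, e.g.
#  bash etcd.sh etcd01 192.168.18.128 etcd02=https://...:2380,etcd03=https://...:2380
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

#Write the environment file that the systemd unit reads
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

#The matching unit maps each variable onto an etcd flag (--name, --data-dir,
#--listen-peer-urls, ..., plus the cert/key paths under /opt/etcd/ssl), then:
systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd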
Now open a second remote terminal to 7-3:
[root@master ~]# ps -ef | grep etcd
root   3479  1780  0 11:48 pts/0  00:00:00 bash etcd.sh etcd01 192.168.18.128 etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380
root   3530  3479  0 11:48 pts/0  00:00:00 systemctl restart etcd
root   3540     1  1 11:48 ?      00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.18.128:2380 --listen-client-urls=https://192.168.18.128:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.18.128:2379 --initial-advertise-peer-urls=https://192.168.18.128:2380 --initial-cluster=etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root   3623  3562  0 11:49 pts/1  00:00:00 grep --color=auto etcd
`Copy the certificates to the other nodes`
[root@master k8s]# scp -r /opt/etcd/ root@192.168.18.148:/opt/
The authenticity of host '192.168.18.148 (192.168.18.148)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.148' (ECDSA) to the list of known hosts.
root@192.168.18.148's password:
etcd               100%  518   426.8KB/s   00:00
etcd               100%   18MB 105.0MB/s   00:00
etcdctl            100%   15MB 108.2MB/s   00:00
ca-key.pem         100% 1679     1.4MB/s   00:00
ca.pem             100% 1265   396.1KB/s   00:00
server-key.pem     100% 1675     1.0MB/s   00:00
server.pem         100% 1338   525.6KB/s   00:00
[root@master k8s]# scp -r /opt/etcd/ root@192.168.18.145:/opt/
The authenticity of host '192.168.18.145 (192.168.18.145)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.145' (ECDSA) to the list of known hosts.
root@192.168.18.145's password:
etcd               100%  518   816.5KB/s   00:00
etcd               100%   18MB  87.4MB/s   00:00
etcdctl            100%   15MB 108.6MB/s   00:00
ca-key.pem         100% 1679     1.3MB/s   00:00
ca.pem             100% 1265   411.8KB/s   00:00
server-key.pem     100% 1675     1.4MB/s   00:00
server.pem         100% 1338   639.5KB/s   00:00
`Copy the startup unit file to the other nodes`
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.18.148:/usr/lib/systemd/system/
root@192.168.18.148's password:
etcd.service       100%  923   283.4KB/s   00:00
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.18.145:/usr/lib/systemd/system/
root@192.168.18.145's password:
etcd.service       100%  923   347.7KB/s   00:00
Node1: 7-4
`Edit the config`
[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# setenforce 0
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.148:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.148:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.148:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.148:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 ~]# systemctl start etcd
[root@node1 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:53:24 CST; 5s ago
#Status is Active
Node2: 7-5
`Edit the config`
[root@node2 ~]# systemctl stop firewalld.service
[root@node2 ~]# setenforce 0
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.18.145:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.18.145:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.18.145:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.18.145:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.18.128:2380,etcd02=https://192.168.18.148:2380,etcd03=https://192.168.18.145:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 ~]# systemctl start etcd
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:55:24 CST; 5s ago
#Status is Active
Verify the cluster health:
`Back on 7-3, run the following:`
[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379" cluster-health
member 9104d301e3b6da41 is healthy: got healthy result from https://192.168.18.148:2379
member 92947d71c72a884e is healthy: got healthy result from https://192.168.18.145:2379
member b2a6d67e1bc8054b is healthy: got healthy result from https://192.168.18.128:2379
cluster is healthy
`The status is healthy`
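If you also want to see which member is currently the leader, the same etcdctl v2 client has a member list subcommand; run it from the same directory so the relative certificate paths resolve:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379" member list
#Each output line shows the member ID, name=etcdNN, its peerURLs/clientURLs, and isLeader=true|false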
Deploy the Docker engine on both node servers
node1:
`Install the dependency packages`
[root@node1 ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
`Configure the Aliyun repo`
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
`Install Docker CE`
[root@node1 ~]# yum install -y docker-ce
`Start Docker and enable it at boot`
[root@node1 ~]# systemctl start docker.service
[root@node1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
`Check that the process is up`
[root@node1 ~]# ps aux | grep docker
root   5551  0.1  3.6 565460 68652 ?    Ssl  09:13   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root   5759  0.0  0.0 112676   984 pts/1 R+  09:16   0:00 grep --color=auto docker
`Configure a registry mirror`
[root@node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
#Network tuning: enable IP forwarding
[root@node1 ~]# echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
[root@node1 ~]# sysctl -p
[root@node1 ~]# service network restart
Restarting network (via systemctl):   [ OK ]
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
node2:
`Install the dependency packages`
[root@node2 ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
`Configure the Aliyun repo`
[root@node2 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
`Install Docker CE`
[root@node2 ~]# yum install -y docker-ce
`Start Docker and enable it at boot`
[root@node2 ~]# systemctl start docker.service
[root@node2 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
`Check that the process is up`
[root@node2 ~]# ps aux | grep docker
root   5570  0.5  3.5 565460 66740 ?    Ssl  09:18   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root   5759  0.0  0.0 112676   984 pts/1 R+  09:18   0:00 grep --color=auto docker
`Configure a registry mirror (apply the same IP-forwarding tuning as on node1)`
[root@node2 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
[root@node2 ~]# service network restart
Restarting network (via systemctl):   [ OK ]
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
Flannel network configuration
`On the master, write the allocated subnet into etcd for flannel to use`
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
`Read back what was written`
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
`Copy the flannel package to all the nodes (it only needs to be deployed on the nodes)`
[root@master etcd-cert]# cd ../
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.18.148:/root
root@192.168.18.148's password:
flannel-v0.10.0-linux-amd64.tar.gz    100% 9479KB  55.6MB/s   00:00
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.18.145:/root
root@192.168.18.145's password:
flannel-v0.10.0-linux-amd64.tar.gz    100% 9479KB  69.5MB/s   00:00
Unpack it on every node
node1:
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
`Create the k8s working directories`
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
`Bring up the flannel network`
[root@node1 ~]# bash flannel.sh https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
`Hook Docker up to flannel`
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
#Make the following changes in the [Service] section
 9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 EnvironmentFile=/run/flannel/subnet.env                 #Add this line below line 13
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    #On line 15, insert $DOCKER_NETWORK_OPTIONS before -H
16 ExecReload=/bin/kill -s HUP $MAINPID
17 TimeoutSec=0
18 RestartSec=2
19 Restart=always
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.32.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/24 --ip-masq=false --mtu=1450"
#bip sets the subnet docker0 uses at startup
`Restart the Docker service`
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
`Check the flannel network`
[root@node1 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::344b:13ff:fecb:1e2d  prefixlen 64  scopeid 0x20<link>
        ether 36:4b:13:cb:1e:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27  overruns 0  carrier 0  collisions 0
node2:
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
`Create the k8s working directories`
[root@node2 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node2 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node2 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}
cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF
cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
`Bring up the flannel network`
[root@node2 ~]# bash flannel.sh https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
`Hook Docker up to flannel`
[root@node2 ~]# vim /usr/lib/systemd/system/docker.service
#Make the following changes in the [Service] section
 9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 EnvironmentFile=/run/flannel/subnet.env                 #Add this line below line 13
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock    #On line 15, insert $DOCKER_NETWORK_OPTIONS before -H
16 ExecReload=/bin/kill -s HUP $MAINPID
17 TimeoutSec=0
18 RestartSec=2
19 Restart=always
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.40.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.40.1/24 --ip-masq=false --mtu=1450"
#bip sets the subnet docker0 uses at startup
`Restart the Docker service`
[root@node2 ~]# systemctl daemon-reload
[root@node2 ~]# systemctl restart docker
`Check the flannel network`
[root@node2 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.40.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::cc6f:baff:fe89:3b93  prefixlen 64  scopeid 0x20<link>
        ether ce:6f:ba:89:3b:93  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 240  overruns 0  carrier 0  collisions 0
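Before the container ping test, you can confirm from the master that each node has leased a subnet: flannel registers its lease under /coreos.com/network/subnets in etcd. A quick check, run from /root/k8s/etcd-cert so the relative certificate paths resolve:

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.18.128:2379" ls /coreos.com/network/subnets
#Expect one key per node, matching the subnet.env files above, e.g.:
#/coreos.com/network/subnets/172.17.32.0-24
#/coreos.com/network/subnets/172.17.40.0-24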
Test that a container on each node can ping across to the other node's docker0 subnet; this proves flannel is providing the routing.
node1:
[root@node1 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
#You are dropped straight into the container
[root@3cdebf0d2bb8 /]# yum install net-tools -y
[root@3cdebf0d2bb8 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.2  netmask 255.255.255.0  broadcast 172.17.32.255
        ether 02:42:ac:11:20:02  txqueuelen 0  (Ethernet)
        RX packets 16774  bytes 13938639 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7361  bytes 400658 (391.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
#eth0 is 172.17.32.2
`Test the ping`
[root@3cdebf0d2bb8 /]# ping 172.17.40.2
PING 172.17.40.2 (172.17.40.2) 56(84) bytes of data.
64 bytes from 172.17.40.2: icmp_seq=1 ttl=62 time=0.279 ms
64 bytes from 172.17.40.2: icmp_seq=2 ttl=62 time=1.07 ms
64 bytes from 172.17.40.2: icmp_seq=3 ttl=62 time=0.397 ms
^C
--- 172.17.40.2 ping statistics ---
4 packets transmitted, 4 received, 0% packet loss, time 3002ms
rtt min/avg/max/mdev = 0.279/0.610/1.075/0.307 ms
#The ping succeeds
node2:
[root@node2 ~]# docker run -it centos:7 /bin/bash
Unable to find image 'centos:7' locally
7: Pulling from library/centos
ab5ef0e58194: Pull complete
Digest: sha256:4a701376d03f6b39b8c2a8f4a8e499441b0d567f9ab9d58e4991de4472fb813c
Status: Downloaded newer image for centos:7
#You are dropped straight into the container
[root@036c7eb6be88 /]# yum install net-tools -y
[root@036c7eb6be88 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.40.2  netmask 255.255.255.0  broadcast 172.17.40.255
        ether 02:42:ac:11:28:02  txqueuelen 0  (Ethernet)
        RX packets 16859  bytes 13953367 (13.3 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7528  bytes 409881 (400.2 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
#eth0 is 172.17.40.2
`Test the ping`
[root@036c7eb6be88 /]# ping 172.17.32.2
PING 172.17.32.2 (172.17.32.2) 56(84) bytes of data.
64 bytes from 172.17.32.2: icmp_seq=1 ttl=62 time=0.411 ms
64 bytes from 172.17.32.2: icmp_seq=2 ttl=62 time=0.699 ms
64 bytes from 172.17.32.2: icmp_seq=3 ttl=62 time=0.684 ms
^C
--- 172.17.32.2 ping statistics ---
6 packets transmitted, 6 received, 0% packet loss, time 5004ms
rtt min/avg/max/mdev = 0.411/0.744/1.299/0.269 ms
#The ping succeeds
Deploy the master components
`On the master: generate the apiserver certificates. First upload master.zip to the master node`
[root@master k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
`Create a directory for the self-signed apiserver certificates`
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# ls    #Upload k8s-cert.sh into this directory first
k8s-cert.sh
`Create the CA certificate`
[root@master k8s-cert]# cat > ca-config.json ...
[root@master k8s-cert]# cat > ca-csr.json ...
[root@master k8s-cert]# cat > server-csr.json ...
[root@master k8s-cert]# cat > admin-csr.json ...
[root@master k8s-cert]# cat > kube-proxy-csr.json ...
#The heredoc bodies of these five files were cut off in the original page; a typical version is sketched below.
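The following is a minimal sketch of the usual k8s-cert.sh contents for this layout; a reasonable reconstruction, not the author's exact script. The key requirement is that the server-csr.json hosts list covers the apiserver's cluster IP (10.0.0.1 for the 10.0.0.0/24 service range used below), 127.0.0.1, the master IP, and the kubernetes service DNS names.

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{ "CN": "kubernetes", "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }] }
EOF
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{ "CN": "kubernetes",
  "hosts": ["10.0.0.1", "127.0.0.1", "192.168.18.128",
            "kubernetes", "kubernetes.default", "kubernetes.default.svc",
            "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }] }
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

#admin-csr.json (CN "admin", O "system:masters") and kube-proxy-csr.json
#(CN "system:kube-proxy") follow the same shape and are signed the same way:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy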
Generate the k8s certificates
[root@master k8s-cert]# bash k8s-cert.sh
2020/02/05 11:50:08 [INFO] generating a new CA key and certificate from CSR
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 473883155883308900863805079252124099771123043047
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 66483817738746309793417718868470334151539533925
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for websites. For more information see the Baseline Requirements for the Issuance and Management of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org); specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 245658866069109639278946985587603475325871008240
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. (same warning as above)
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:09 [INFO] encoded CSR
2020/02/05 11:50:09 [INFO] signed certificate with serial number 696729766024974987873474865496562197315198733463
2020/02/05 11:50:09 [WARNING] This certificate lacks a "hosts" field. (same warning as above)
[root@master k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
`Unpack the kubernetes tarball`
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
`Copy the key binaries`
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s
`Generate a random token`
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9b3186df3dc799376ad43b6fe0108571
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#token,user name,uid,group
`Binaries, token, and certificates are all in place; start the apiserver`
[root@master k8s]# bash apiserver.sh 192.168.18.128 https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
`Check that the processes started`
[root@master k8s]# ps aux | grep kube
root   7034  0.6  1.2  46672  23460 ?  Ssl 12:23  0:33 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root   7104  0.0  2.0 108508  38552 ?  Ssl 12:24  0:02 /opt/kubernetes/bin/kube-controller-manager --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect=true --address=127.0.0.1 --service-cluster-ip-range=10.0.0.0/24 --cluster-name=kubernetes --cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem --cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem --root-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem --experimental-cluster-signing-duration=87600h0m0s
root   8146 77.5 14.7 363196 275780 ?  Ssl 13:44  0:05 /opt/kubernetes/bin/kube-apiserver --logtostderr=true --v=4 --etcd-servers=https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379 --bind-address=192.168.18.128 --secure-port=6443 --advertise-address=192.168.18.128 --allow-privileged=true --service-cluster-ip-range=10.0.0.0/24 --enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction --authorization-mode=RBAC,Node --kubelet-https=true --enable-bootstrap-token-auth --token-auth-file=/opt/kubernetes/cfg/token.csv --service-node-port-range=30000-50000 --tls-cert-file=/opt/kubernetes/ssl/server.pem --tls-private-key-file=/opt/kubernetes/ssl/server-key.pem --client-ca-file=/opt/kubernetes/ssl/ca.pem --service-account-key-file=/opt/kubernetes/ssl/ca-key.pem --etcd-cafile=/opt/etcd/ssl/ca.pem --etcd-certfile=/opt/etcd/ssl/server.pem --etcd-keyfile=/opt/etcd/ssl/server-key.pem
root   8154  0.0  0.0 112676    980 pts/1 R+ 13:44  0:00 grep --color=auto kube
`Inspect the generated config file`
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379 \
--bind-address=192.168.18.128 \
--secure-port=6443 \
--advertise-address=192.168.18.128 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
`The https port it listens on`
[root@master k8s]# netstat -ntap | grep 6443
tcp  0  0 192.168.18.128:6443   0.0.0.0:*             LISTEN       8146/kube-apiserver
tcp  0  0 192.168.18.128:6443   192.168.18.128:56724  ESTABLISHED  8146/kube-apiserver
tcp  0  0 192.168.18.128:56724  192.168.18.128:6443   ESTABLISHED  8146/kube-apiserver
[root@master k8s]# netstat -ntap | grep 8080
tcp  0  0 127.0.0.1:8080        0.0.0.0:*             LISTEN       8146/kube-apiserver
......(further lines omitted)
`Start the scheduler service`
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix   6212  0.0  0.0  91732  1364 ?    S   11:29  0:00 pickup -l -t unix -u
root      7034  1.1  1.0  45360 20332 ?    Ssl 12:23  0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root      7042  0.0  0.0 112676   980 pts/1 R+ 12:23  0:00 grep --color=auto ku
[root@master k8s]# chmod +x controller-manager.sh
`Start the controller-manager`
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
`Check the master component status`
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
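scheduler.sh and controller-manager.sh came out of master.zip and are not printed in the article. Based on the flags visible in the ps output above, scheduler.sh plausibly looks like the following; controller-manager.sh follows the same pattern with the extra signing and service-account flags shown above. Treat this as a reconstruction, not the author's exact script:

#!/bin/bash
#Usage: scheduler.sh <master-address>, e.g. ./scheduler.sh 127.0.0.1
MASTER_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \\
--v=4 \\
--master=${MASTER_ADDRESS}:8080 \\
--leader-elect"
EOF
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler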
Deploy the node components
Step 1: on the master
`Copy kubelet and kube-proxy over to the nodes`
[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.18.148:/opt/kubernetes/bin/
root@192.168.18.148's password:
kubelet      100%  168MB  81.1MB/s   00:02
kube-proxy   100%   48MB  77.6MB/s   00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.18.145:/opt/kubernetes/bin/
root@192.168.18.145's password:
kubelet      100%  168MB  86.8MB/s   00:01
kube-proxy   100%   48MB  90.4MB/s   00:00
Step 2: on node1 (upload node.zip to /root, then unzip it)
[root@node1 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  视频  文档  音乐
flannel.sh       initial-setup-ks.cfg                README.md  模板  图片  下载  桌面
[root@node1 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh
Step 3: back on the master
[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
`Upload the kubeconfig.sh script into this directory and rename it`
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
#Delete the first 9 lines; the token-generation part was already run earlier
# Set the client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \    #Replace this token with the one generated earlier
  --kubeconfig=bootstrap.kubeconfig
#When done, press Esc to leave insert mode, then type :wq to save and quit
----How to get the token----
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
#The token we need is "9b3186df3dc799376ad43b6fe0108571"; yours will be different
---------------------
`Set the PATH variable (it can go into /etc/profile)`
[root@master kubeconfig]# vim /etc/profile
#Press G to jump to the last line, then o to insert on a new line below
export PATH=$PATH:/opt/kubernetes/bin/
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
[root@master kubeconfig]# kubectl get node
No resources found.
#No nodes have been added yet
Step 4: generate the kubeconfig files
[root@master kubeconfig]# bash kubeconfig 192.168.18.128 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
`Copy the config files to both nodes`
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.18.148:/opt/kubernetes/cfg/
root@192.168.18.148's password:
bootstrap.kubeconfig    100% 2168   2.2MB/s   00:00
kube-proxy.kubeconfig   100% 6270   3.5MB/s   00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.18.145:/opt/kubernetes/cfg/
root@192.168.18.145's password:
bootstrap.kubeconfig    100% 2168   3.1MB/s   00:00
kube-proxy.kubeconfig   100% 6270   7.9MB/s   00:00
`Create the bootstrap clusterrolebinding so kubelets can contact the apiserver and request signing (a key step)`
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
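For reference, the body of the kubeconfig script (after deleting its first 9 lines) plausibly consists of the standard four kubectl config calls per file, which matches the "Cluster set / User set / Context created / Switched" output above. A sketch with this deployment's token and paths substituted in:

APISERVER=$1
SSL_DIR=$2
export KUBE_APISERVER="https://$APISERVER:6443"

#bootstrap.kubeconfig — used by the kubelet for TLS bootstrapping
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

#kube-proxy.kubeconfig — authenticates with the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig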
Step 5: on node1
[root@node1 ~]# bash kubelet.sh 192.168.18.148
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
`Check that the kubelet service started`
[root@node1 ~]# ps aux | grep kube
root    8807  0.0  0.8 300512 16260 ?  Ssl 09:45  0:05 /opt/kubernetes/bin/flanneld --ip-masq --etcd-endpoints=https://192.168.18.128:2379,https://192.168.18.148:2379,https://192.168.18.145:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem
root   35040  0.4  2.1 369632 40832 ?  Ssl 14:53  0:00 /opt/kubernetes/bin/kubelet --logtostderr=true --v=4 --hostname-override=192.168.18.148 --kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig --bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig --config=/opt/kubernetes/cfg/kubelet.config --cert-dir=/opt/kubernetes/ssl --pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
root   35078  0.0  0.0 112676   984 pts/1 S+ 14:54  0:00 grep --color=auto kube
[root@node1 ~]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-02-05 14:54:45 CST; 21s ago
#Status is running
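kubelet.sh came from node.zip and is not printed either; the flags in the ps output above imply something close to this sketch. The kubelet.config fields beyond the address line (which is edited on node2 later) are assumptions:

#!/bin/bash
#Usage: kubelet.sh <node-ip> — reconstruction based on the flags shown above
NODE_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF
cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
authentication:
  anonymous:
    enabled: true
EOF
#...followed by a kubelet.service unit wrapping \$KUBELET_OPTS, then:
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet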
Step 6: on the master, check the node's request
`node1 automatically contacts the apiserver to request a certificate, so node1's request shows up on the master`
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   18s    kubelet-bootstrap   Pending
#Pending: the cluster is waiting to issue this node its certificate
`Approve it (the same pattern is shown for node2 in step 8): kubectl certificate approve node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg`
`Check the certificate status again`
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE    REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   3m59s  kubelet-bootstrap   Approved,Issued
#Approved,Issued: the node has been admitted to the cluster
`List the cluster nodes; node1 has joined`
[root@master kubeconfig]# kubectl get node
NAME             STATUS   ROLES    AGE     VERSION
192.168.18.148   Ready    <none>   6m54s   v1.12.3
`On node1, start the proxy service`
[root@node1 ~]# bash proxy.sh 192.168.18.148
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-06 11:11:56 CST; 20s ago
#Status is running
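proxy.sh is the remaining script from node.zip; its generated config is not shown anywhere in the article, so the following is a hypothetical sketch modeled on the other scripts (cluster-cidr in particular is a guess):

#!/bin/bash
#Usage: proxy.sh <node-ip> — hypothetical reconstruction, not the author's script
NODE_ADDRESS=$1
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
#...followed by a kube-proxy.service unit wrapping \$KUBE_PROXY_OPTS, then:
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy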
Step 7: deploy node2
`On node1, copy the ready-made /opt/kubernetes directory over to node2, then adjust it there`
[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.18.145:/opt/
The authenticity of host '192.168.18.145 (192.168.18.145)' can't be established.
ECDSA key fingerprint is SHA256:mTT+FEtzAu4X3D5srZlz93S3gye8MzbqVZFDzfJd4Gk.
ECDSA key fingerprint is MD5:fa:5a:88:23:49:60:9b:b8:7e:4b:14:4b:3f:cd:96:a0.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.18.145' (ECDSA) to the list of known hosts.
root@192.168.18.145's password:
flanneld                                 100%  238   572.7KB/s   00:00
bootstrap.kubeconfig                     100% 2168     4.9MB/s   00:00
kube-proxy.kubeconfig                    100% 6270    12.0MB/s   00:00
kubelet                                  100%  378   642.2KB/s   00:00
kubelet.config                           100%  268   565.0KB/s   00:00
kubelet.kubeconfig                       100% 2297     3.5MB/s   00:00
kube-proxy                               100%  191   396.6KB/s   00:00
mk-docker-opts.sh                        100% 2139     3.2MB/s   00:00
scp: /opt//kubernetes/bin/flanneld: Text file busy
kubelet                                  100%  168MB  96.9MB/s   00:01
kube-proxy                               100%   48MB 108.9MB/s   00:00
kubelet.crt                              100% 2193     2.4MB/s   00:00
kubelet.key                              100% 1675     2.5MB/s   00:00
kubelet-client-2020-02-06-11-03-32.pem   100% 1277     2.2MB/s   00:00
kubelet-client-current.pem               100% 1277   684.2KB/s   00:00
`Copy node1's kubelet and kube-proxy service unit files to node2`
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.18.145:/usr/lib/systemd/system/
root@192.168.18.145's password:
kubelet.service      100%  264   291.3KB/s   00:00
kube-proxy.service   100%  231   407.8KB/s   00:00
`On node2 now: first delete the copied certificates, since node2 will request its own shortly`
[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
`Edit the three config files: kubelet, kubelet.config, and kube-proxy`
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet
4 --hostname-override=192.168.18.145 \    #Line 4: change the hostname override to node2's IP
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 cfg]# vim kubelet.config
4 address: 192.168.18.145    #Line 4: change the address to node2's IP
#When done, press Esc to leave insert mode, then type :wq to save and quit
[root@node2 cfg]# vim kube-proxy
4 --hostname-override=192.168.18.145    #Line 4: change to node2's IP
#When done, press Esc to leave insert mode, then type :wq to save and quit
`Start the services`
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
Step 8: back on the master, check node2's request
[root@master k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh5w8WJA   99s   kubelet-bootstrap   Pending
#A new request is pending authorization to join the cluster
[root@master k8s]# kubectl certificate approve node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh5w8WJA
certificatesigningrequest.certificates.k8s.io/node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh5w8WJA approved
`List the nodes in the cluster`
[root@master k8s]# kubectl get node
NAME             STATUS   ROLES    AGE   VERSION
192.168.18.145   Ready    <none>   28s   v1.12.3
192.168.18.148   Ready    <none>   26m   v1.12.3
#Both nodes have now joined the cluster