Detailed Steps for a Multi-Node K8s Deployment, with UI Setup


Multi-node K8s deployment

Environment to prepare:

Six CentOS 7 machines:
192.168.1.11 master01
192.168.1.12 node1
192.168.1.13 node2
192.168.1.14 master02
192.168.1.15 lb1
192.168.1.16 lb2
VIP:192.168.1.100

Lab steps:

1: Self-sign the ETCD certificates
2: Deploy ETCD
3: Install Docker on the nodes
4: Deploy Flannel (write the subnet into etcd first)
---------- master ----------
5: Self-sign the APIServer certificate
6: Deploy the APIServer component (token, csv)
7: Deploy the controller-manager (pointing at the apiserver certificate) and scheduler components
---------- node ----------
8: Generate the kubeconfigs (bootstrap.kubeconfig and kube-proxy.kubeconfig)
9: Deploy the kubelet component
10: Deploy the kube-proxy component
---------- joining the cluster ----------
11: kubectl get csr && kubectl certificate approve — approve the certificate issuance so the node joins the cluster
12: Add a node
13: Check the nodes with kubectl get node

Detailed steps

I. Building the etcd cluster

1. On master01, self-sign the etcd certificates

[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
[root@master k8s]# ls
etcd-cert  etcd.sh
[root@master k8s]# vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
[root@master k8s]# bash cfssl.sh
[root@master k8s]# ls /usr/local/bin/
cfssl  cfssl-certinfo  cfssljson
[root@master k8s]# cd etcd-cert/
# Define the CA certificate, then the etcd server certificate. The heredoc bodies
# for ca-config.json, ca-csr.json and server-csr.json were lost when this page was
# archived; see the reconstructed sketch below.
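The JSON bodies above did not survive extraction. A sketch of what this style of etcd-cert script conventionally contains follows; the expiry, the "www" profile name, and the names fields are assumptions, while the hosts list must carry the three etcd member IPs used in this lab:

# Hypothetical reconstruction of the elided heredocs — treat field values as assumptions
cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# Generate the CA key pair (ca.pem / ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": ["192.168.1.11", "192.168.1.12", "192.168.1.13"],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

# Sign the etcd server certificate (server.pem / server-key.pem) with the CA
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=www server-csr.json | cfssljson -bare server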

Upload the following three packages to /root/k8s:

[root@master k8s]# ls
cfssl.sh   etcd.sh                          flannel-v0.10.0-linux-amd64.tar.gz
etcd-cert  etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# ls etcd-v3.3.10-linux-amd64
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
[root@master k8s]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master k8s]# mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
# Copy the certificates into place
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
# This hangs while it waits for the other etcd members to join
[root@master k8s]# bash etcd.sh etcd01 192.168.1.11 etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380
Created symlink from /etc/systemd/system/multi-user.target.wants/etcd.service to /usr/lib/systemd/system/etcd.service.

2. Open a second terminal on master01

[root@master ~]# ps -ef | grep etcd
root       3479   1780  0 11:48 pts/0    00:00:00 bash etcd.sh etcd01 192.168.1.11 etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380
root       3530   3479  0 11:48 pts/0    00:00:00 systemctl restart etcd
root       3540      1  1 11:48 ?        00:00:00 /opt/etcd/bin/etcd --name=etcd01 --data-dir=/var/lib/etcd/default.etcd --listen-peer-urls=https://192.168.1.11:2380 --listen-client-urls=https://192.168.1.11:2379,http://127.0.0.1:2379 --advertise-client-urls=https://192.168.1.11:2379 --initial-advertise-peer-urls=https://192.168.1.11:2380 --initial-cluster=etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380 --initial-cluster-token=etcd-cluster --initial-cluster-state=new --cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem --peer-cert-file=/opt/etcd/ssl/server.pem --peer-key-file=/opt/etcd/ssl/server-key.pem --trusted-ca-file=/opt/etcd/ssl/ca.pem --peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
root       3623   3562  0 11:49 pts/1    00:00:00 grep --color=auto etcd
# Copy the certificates to the two node machines
[root@master k8s]# scp -r /opt/etcd/ root@192.168.1.12:/opt/
root@192.168.1.12's password:
etcd                                                       100%  518   426.8KB/s   00:00
etcd                                                       100%   18MB 105.0MB/s   00:00
etcdctl                                                    100%   15MB 108.2MB/s   00:00
ca-key.pem                                                 100% 1679     1.4MB/s   00:00
ca.pem                                                     100% 1265   396.1KB/s   00:00
server-key.pem                                             100% 1675     1.0MB/s   00:00
server.pem                                                 100% 1338   525.6KB/s   00:00
[root@master k8s]# scp -r /opt/etcd/ root@192.168.1.13:/opt/
root@192.168.1.13's password:
etcd                                                       100%  518   816.5KB/s   00:00
etcd                                                       100%   18MB  87.4MB/s   00:00
etcdctl                                                    100%   15MB 108.6MB/s   00:00
ca-key.pem                                                 100% 1679     1.3MB/s   00:00
ca.pem                                                     100% 1265   411.8KB/s   00:00
server-key.pem                                             100% 1675     1.4MB/s   00:00
server.pem                                                 100% 1338   639.5KB/s   00:00
# Copy the etcd unit file to the two node machines
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.1.12:/usr/lib/systemd/system/
root@192.168.1.12's password:
etcd.service                                               100%  923   283.4KB/s   00:00
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@192.168.1.13:/usr/lib/systemd/system/
root@192.168.1.13's password:
etcd.service                                               100%  923   347.7KB/s   00:00

3. On each of the two node machines, edit the etcd config file and start the etcd service

node1:
[root@node1 ~]# systemctl stop firewalld.service
[root@node1 ~]# setenforce 0
[root@node1 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.12:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.12:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.12:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.12:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 ~]# systemctl start etcd
[root@node1 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:53:24 CST; 5s ago
# Status is active (running)
node2:
[root@node2 ~]# systemctl stop firewalld.service
[root@node2 ~]# setenforce 0
[root@node2 ~]# vim /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.11:2380,etcd02=https://192.168.1.12:2380,etcd03=https://192.168.1.13:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 ~]# systemctl start etcd
[root@node2 ~]# systemctl status etcd
● etcd.service - Etcd Server
   Loaded: loaded (/usr/lib/systemd/system/etcd.service; disabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-01-15 17:53:24 CST; 5s ago
# Status is active (running)

4. Verify cluster health from master01

[root@master k8s]# cd etcd-cert/
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" cluster-health
member 9104d301e3b6da41 is healthy: got healthy result from https://192.168.1.11:2379
member 92947d71c72a884e is healthy: got healthy result from https://192.168.1.12:2379
member b2a6d67e1bc8054b is healthy: got healthy result from https://192.168.1.13:2379
cluster is healthy

II. Deploying Docker on the two node machines

# Install dependencies
[root@node1 ~]# yum install yum-utils device-mapper-persistent-data lvm2 -y
# Add the Aliyun mirror repo
[root@node1 ~]# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
[root@node1 ~]# yum install -y docker-ce
# Start Docker and enable it at boot
[root@node1 ~]# systemctl start docker.service
[root@node1 ~]# systemctl enable docker.service
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
# Check that the daemon is running
[root@node1 ~]# ps aux | grep docker
root       5551  0.1  3.6 565460 68652 ?        Ssl  09:13   0:00 /usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
root       5759  0.0  0.0 112676   984 pts/1    R+   09:16   0:00 grep --color=auto docker
# Configure a registry mirror
[root@node1 ~]# tee /etc/docker/daemon.json <<-'EOF'
{
  "registry-mirrors": ["https://w1ogxqvl.mirror.aliyuncs.com"]
}
EOF
# Network tuning: enable IP forwarding
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf
sysctl -p
[root@node1 ~]# service network restart
Restarting network (via systemctl):                        [  OK  ]
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
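A quick sanity check that the daemon picked up daemon.json and that forwarding is on (not in the original transcript; expected output along these lines):

[root@node1 ~]# docker info | grep -A1 'Registry Mirrors'
 Registry Mirrors:
  https://w1ogxqvl.mirror.aliyuncs.com/
[root@node1 ~]# sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1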

III. Installing the flannel component

1. On the master, write the allocated subnet into etcd for flannel to use

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
# Read back what was written
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
# Copy the flannel package to every node (flannel only needs to run on the nodes)
[root@master etcd-cert]# cd ../
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.1.12:/root
root@192.168.1.12's password:
flannel-v0.10.0-linux-amd64.tar.gz             100% 9479KB  55.6MB/s   00:00
[root@master k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.1.13:/root
root@192.168.1.13's password:
flannel-v0.10.0-linux-amd64.tar.gz             100% 9479KB  69.5MB/s   00:00
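Once flanneld starts on the nodes (next step), each node's lease should appear under /coreos.com/network/subnets. A hedged check, run from the same etcd-cert directory; the subnet values shown are examples, since flannel assigns them per node:

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.11:2379" ls /coreos.com/network/subnets
/coreos.com/network/subnets/172.17.32.0-24
/coreos.com/network/subnets/172.17.40.0-24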

2. Configure flannel on each of the two nodes

[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
# Create the k8s working directory
[root@node1 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node1 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
[root@node1 ~]# vim flannel.sh
#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

# Bring up the flannel network
[root@node1 ~]# bash flannel.sh https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/flanneld.service to /usr/lib/systemd/system/flanneld.service.
# Wire Docker up to flannel
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
# In the [Service] section, make the following changes:
 9 [Service]
10 Type=notify
11 # the default is not to use systemd for cgroups because the delegate issues still
12 # exists and systemd currently does not support the cgroup feature set required
13 # for containers run by docker
14 EnvironmentFile=/run/flannel/subnet.env      # add this line after line 13
15 ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock     # on line 15, add $DOCKER_NETWORK_OPTIONS before -H
16 ExecReload=/bin/kill -s HUP $MAINPID
17 TimeoutSec=0
18 RestartSec=2
19 Restart=always
# Save and quit with :wq
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.32.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.32.1/24 --ip-masq=false --mtu=1450"
# bip is the subnet Docker will use at startup
# Restart Docker
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker
# Check the flannel interface
[root@node1 ~]# ifconfig
flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.32.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::344b:13ff:fecb:1e2d  prefixlen 64  scopeid 0x20
        ether 36:4b:13:cb:1e:2d  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 27 overruns 0  carrier 0  collisions 0
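To prove the overlay works end to end, the usual test is a container on each node pinging across. A sketch, not from the original transcript: busybox is an arbitrary image choice, and the container IPs are examples (each node hands out addresses from its own flannel subnet):

# On node1:
[root@node1 ~]# docker run -it --rm busybox sh
/ # ip addr show eth0      # suppose it reports 172.17.32.2
# On node2:
[root@node2 ~]# docker run -it --rm busybox sh
/ # ip addr show eth0      # suppose it reports 172.17.40.2
/ # ping -c 3 172.17.32.2  # a cross-node ping through the vxlan overlay should succeed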

IV. Deploying the master components

1. On master01, generate the apiserver certificates; upload master.zip to the master first

[root@master k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@master k8s]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
# Create the apiserver self-signed certificate directory
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# ls      # upload k8s-cert.sh into this directory first
k8s-cert.sh
# k8s-cert.sh builds the CA plus the server, admin and kube-proxy certificates.
# The heredoc bodies (ca-config.json, ca-csr.json, server-csr.json, admin-csr.json,
# kube-proxy-csr.json) were lost in extraction; see the reconstructed sketch below.
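A hypothetical reconstruction of the elided k8s-cert.sh heredocs. The hosts list follows this deployment's addressing plan (service cluster IP, both masters, the VIP, and both load balancers) and is an assumption rather than recovered text, as are the names fields:

cat > ca-config.json <<EOF
{
  "signing": {
    "default": { "expiry": "87600h" },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      }
    }
  }
}
EOF

cat > ca-csr.json <<EOF
{
  "CN": "kubernetes",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

cat > server-csr.json <<EOF
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1", "127.0.0.1",
    "192.168.1.11", "192.168.1.14",
    "192.168.1.100", "192.168.1.15", "192.168.1.16",
    "kubernetes", "kubernetes.default", "kubernetes.default.svc",
    "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local"
  ],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes server-csr.json | cfssljson -bare server

# admin-csr.json (CN=admin, O=system:masters) and kube-proxy-csr.json
# (CN=system:kube-proxy) follow the same pattern with empty hosts lists:
cat > admin-csr.json <<EOF
{
  "CN": "admin",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "system:masters", "OU": "System" }]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes admin-csr.json | cfssljson -bare admin

cat > kube-proxy-csr.json <<EOF
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": { "algo": "rsa", "size": 2048 },
  "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing", "O": "k8s", "OU": "System" }]
}
EOF
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json \
  -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy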

2. Generate the apiserver certificates, then start the scheduler and controller-manager components

[root@master k8s-cert]# bash k8s-cert.sh
2020/02/05 11:50:08 [INFO] generating a new CA key and certificate from CSR
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 473883155883308900863805079252124099771123043047
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 66483817738746309793417718868470334151539533925
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:08 [INFO] encoded CSR
2020/02/05 11:50:08 [INFO] signed certificate with serial number 245658866069109639278946985587603475325871008240
2020/02/05 11:50:08 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
2020/02/05 11:50:08 [INFO] generate received request
2020/02/05 11:50:08 [INFO] received CSR
2020/02/05 11:50:08 [INFO] generating key: rsa-2048
2020/02/05 11:50:09 [INFO] encoded CSR
2020/02/05 11:50:09 [INFO] signed certificate with serial number 696729766024974987873474865496562197315198733463
2020/02/05 11:50:09 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls *pem
admin-key.pem  ca-key.pem  kube-proxy-key.pem  server-key.pem
admin.pem      ca.pem      kube-proxy.pem      server.pem
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
[root@master k8s-cert]# cd ..
# Unpack the kubernetes server package
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# cd /root/k8s/kubernetes/server/bin
# Copy the key command binaries
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s
# Generate a random token
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
9b3186df3dc799376ad43b6fe0108571
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# Format: token,user name,uid,role
# Binaries, token and certificates are all in place; start the apiserver
[root@master k8s]# bash apiserver.sh 192.168.1.11 https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
# Check that the process started
[root@master k8s]# ps aux | grep kube
# Review the generated config
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.11:2379,https://192.168.1.12:2379,https://192.168.1.13:2379 \
--bind-address=192.168.1.11 \
--secure-port=6443 \
--advertise-address=192.168.1.11 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem  \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
# Confirm the https port is listening
[root@master k8s]# netstat -ntap | grep 6443
# Start the scheduler service
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master k8s]# ps aux | grep ku
postfix    6212  0.0  0.0  91732  1364 ?        S    11:29   0:00 pickup -l -t unix -u
root       7034  1.1  1.0  45360 20332 ?        Ssl  12:23   0:00 /opt/kubernetes/bin/kube-scheduler --logtostderr=true --v=4 --master=127.0.0.1:8080 --leader-elect
root       7042  0.0  0.0 112676   980 pts/1    R+   12:23   0:00 grep --color=auto ku
[root@master k8s]# chmod +x controller-manager.sh
# Start the controller-manager
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
# Check the master node status
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

V. Deploying the node components

1. From master01, copy the kubelet and kube-proxy binaries to the nodes

[root@master k8s]# cd kubernetes/server/bin/
[root@master bin]# scp kubelet kube-proxy root@192.168.1.12:/opt/kubernetes/bin/
root@192.168.1.12's password:
kubelet                                                                100%  168MB  81.1MB/s   00:02
kube-proxy                                                             100%   48MB  77.6MB/s   00:00
[root@master bin]# scp kubelet kube-proxy root@192.168.1.13:/opt/kubernetes/bin/
root@192.168.1.13's password:
kubelet                                                                100%  168MB  86.8MB/s   00:01
kube-proxy                                                             100%   48MB  90.4MB/s   00:00

2. Unpack the files on node1

[root@node1 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz  node.zip   公共  视频  文档  音乐
flannel.sh       initial-setup-ks.cfg                README.md  模板  图片  下载  桌面
[root@node1 ~]# unzip node.zip
Archive:  node.zip
  inflating: proxy.sh
  inflating: kubelet.sh

3. On master01

[root@master bin]# cd /root/k8s/
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
# Upload kubeconfig.sh into this directory and rename it
[root@master kubeconfig]# ls
kubeconfig.sh
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
# Delete the first 9 lines; the token was already generated earlier.
# Set the client credentials (a sketch of the full script follows this transcript):
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \    # replace this token with the one you generated earlier
  --kubeconfig=bootstrap.kubeconfig
# Save and quit with :wq
# ---- where the token comes from ----
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
9b3186df3dc799376ad43b6fe0108571,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# Use the token field "9b3186df3dc799376ad43b6fe0108571"; everyone's token is different
# ------------------------------------
# Set the environment variable (can go into /etc/profile)
[root@master kubeconfig]# vim /etc/profile
# Append at the end of the file:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master kubeconfig]# source /etc/profile
[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-2               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
[root@master kubeconfig]# kubectl get node
No resources found.
# No nodes have joined yet
[root@master kubeconfig]# bash kubeconfig 192.168.1.11 /root/k8s/k8s-cert/
Cluster "kubernetes" set.
User "kubelet-bootstrap" set.
Context "default" created.
Switched to context "default".
Cluster "kubernetes" set.
User "kube-proxy" set.
Context "default" created.
Switched to context "default".
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
# Copy the two kubeconfig files to both nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.1.12:/opt/kubernetes/cfg/
root@192.168.1.12's password:
bootstrap.kubeconfig                                                              100% 2168     2.2MB/s   00:00
kube-proxy.kubeconfig                                                             100% 6270     3.5MB/s   00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.1.13:/opt/kubernetes/cfg/
root@192.168.1.13's password:
bootstrap.kubeconfig                                                              100% 2168     3.1MB/s   00:00
kube-proxy.kubeconfig                                                             100% 6270     7.9MB/s   00:00
# Create the bootstrap cluster role binding so kubelets can ask the apiserver to sign their certificates (key step)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
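The rest of the kubeconfig script is never shown in the post. Based on the "Cluster/User/Context" lines it prints, it is expected to be the standard two-file generator below; treat the variable names and exact flow as assumptions:

APISERVER=$1
SSL_DIR=$2

export KUBE_APISERVER="https://$APISERVER:6443"

# bootstrap.kubeconfig, used by the kubelet for TLS bootstrap
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-credentials kubelet-bootstrap \
  --token=9b3186df3dc799376ad43b6fe0108571 \
  --kubeconfig=bootstrap.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

# kube-proxy.kubeconfig, authenticated with the kube-proxy client certificate
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-credentials kube-proxy \
  --client-certificate=$SSL_DIR/kube-proxy.pem \
  --client-key=$SSL_DIR/kube-proxy-key.pem \
  --embed-certs=true \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kube-proxy \
  --kubeconfig=kube-proxy.kubeconfig
kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig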

4. On node01

[root@node1 ~]# bash kubelet.sh 192.168.1.12
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
# Check that the kubelet service started
[root@node1 ~]# ps aux | grep kube
[root@node1 ~]# systemctl status kubelet.service
● kubelet.service - Kubernetes Kubelet
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Wed 2020-02-05 14:54:45 CST; 21s ago
# Status is active (running)
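node.zip itself is not shown in the post. From its one IP argument, kubelet.sh conventionally writes a config like the sketch below; the pause image and exact flag set are assumptions, consistent with the --hostname-override edits made for node2 later on:

# Sketch of /opt/kubernetes/cfg/kubelet as kubelet.sh is expected to generate it (assumption)
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=192.168.1.12 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"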

5. On master01, approve node1's certificate request

node1 automatically contacts the apiserver to request a certificate, so the request shows up on master01:
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   18s   kubelet-bootstrap   Pending
# Pending: waiting for the cluster to issue this node's certificate.
# Approve the request (step 11 in the overview):
[root@master kubeconfig]# kubectl certificate approve node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg
certificatesigningrequest.certificates.k8s.io/node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg approved
# Check the certificate status again
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-ZZnDyPkUICga9NeuZF-M8IHTmpekEurXtbHXOyHZbDg   3m59s   kubelet-bootstrap   Approved,Issued
# Approved,Issued: the node has been allowed into the cluster.
# List the cluster nodes; node1 has joined
[root@master kubeconfig]# kubectl get node
NAME           STATUS   ROLES    AGE     VERSION
192.168.1.12   Ready    <none>   6m54s   v1.12.3

6. On node1, start the proxy service

[root@node1 ~]# bash proxy.sh 192.168.1.12
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
[root@node1 ~]# systemctl status kube-proxy.service
● kube-proxy.service - Kubernetes Proxy
   Loaded: loaded (/usr/lib/systemd/system/kube-proxy.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2020-02-06 11:11:56 CST; 20s ago
# Status is active (running)

7. Copy node1's /opt/kubernetes directory to node2, along with the kubelet and kube-proxy service files

[root@node1 ~]# scp -r /opt/kubernetes/ root@192.168.1.13:/opt/
[root@node1 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@192.168.1.13:/usr/lib/systemd/system/
root@192.168.1.13's password:
kubelet.service                                           100%  264   291.3KB/s   00:00
kube-proxy.service                                        100%  231   407.8KB/s   00:00

8. On node2, make the changes: first delete the copied certificates, since node2 will request its own

[root@node2 ~]# cd /opt/kubernetes/ssl/
[root@node2 ssl]# rm -rf *
# Edit the three config files: kubelet, kubelet.config and kube-proxy
[root@node2 ssl]# cd ../cfg/
[root@node2 cfg]# vim kubelet
4 --hostname-override=192.168.1.13 \      # line 4: change the hostname to node2's IP address
[root@node2 cfg]# vim kubelet.config
4 address: 192.168.1.13       # line 4: change the address to node2's IP address
[root@node2 cfg]# vim kube-proxy
4 --hostname-override=192.168.1.13       # line 4: change to node2's IP address
# Start the services
[root@node2 cfg]# systemctl start kubelet.service
[root@node2 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node2 cfg]# systemctl start kube-proxy.service
[root@node2 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
# Back on master01, approve node2's request
[root@master k8s]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh5w8WJA   99s   kubelet-bootstrap   Pending
# A new request is waiting to be approved into the cluster
[root@master k8s]# kubectl certificate approve node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh5w8WJA
certificatesigningrequest.certificates.k8s.io/node-csr-QtKJLeSj130rGIccigH6-MKH7klhymwDxQ4rh5w8WJA approved

9. On master01, list the cluster nodes

[root@master k8s]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.12   Ready    <none>   28s   v1.12.3
192.168.1.13   Ready    <none>   26m   v1.12.3
# Both nodes have now joined the cluster

VI. Deploying the second master

1. On master01

On master1, copy the kubernetes directory to master2:
[root@master1 k8s]# scp -r /opt/kubernetes/ root@192.168.1.14:/opt
Copy master1's three component unit files (kube-apiserver.service, kube-controller-manager.service, kube-scheduler.service) to master2:
[root@master1 k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@192.168.1.14:/usr/lib/systemd/system/

2. On master2, change the IPs in the kube-apiserver config

[root@master2 ~]# cd /opt/kubernetes/cfg/
[root@master2 cfg]# ls
kube-apiserver  kube-controller-manager  kube-scheduler  token.csv
[root@master2 cfg]# vim kube-apiserver
5 --bind-address=192.168.1.14 \
7 --advertise-address=192.168.1.14 \
# Lines 5 and 7: change the IP addresses to master2's address, then save with :wq

3. From master01, copy the existing etcd certificates for master2 to use

[root@master1 k8s]# scp -r /opt/etcd/ root@192.168.1.14:/opt/
root@192.168.1.14's password:
etcd                                                      100%  516   535.5KB/s   00:00
etcd                                                      100%   18MB  90.6MB/s   00:00
etcdctl                                                   100%   15MB  80.5MB/s   00:00
ca-key.pem                                                100% 1675     1.4MB/s   00:00
ca.pem                                                    100% 1265   411.6KB/s   00:00
server-key.pem                                            100% 1679     2.0MB/s   00:00
server.pem                                                100% 1338   429.6KB/s   00:00

4. Start the three component services on master02

[root@master2 cfg]# systemctl start kube-apiserver.service
[root@master2 cfg]# systemctl enable kube-apiserver.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
[root@master2 cfg]# systemctl status kube-apiserver.service
● kube-apiserver.service - Kubernetes API Server
   Loaded: loaded (/usr/lib/systemd/system/kube-apiserver.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:16:57 CST; 56min ago
[root@master2 cfg]# systemctl start kube-controller-manager.service
[root@master2 cfg]# systemctl enable kube-controller-manager.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
[root@master2 cfg]# systemctl status kube-controller-manager.service
● kube-controller-manager.service - Kubernetes Controller Manager
   Loaded: loaded (/usr/lib/systemd/system/kube-controller-manager.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:17:02 CST; 57min ago
[root@master2 cfg]# systemctl start kube-scheduler.service
[root@master2 cfg]# systemctl enable kube-scheduler.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
[root@master2 cfg]# systemctl status kube-scheduler.service
● kube-scheduler.service - Kubernetes Scheduler
   Loaded: loaded (/usr/lib/systemd/system/kube-scheduler.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2020-02-07 09:17:07 CST; 58min ago

5. Update the PATH, then check node status to verify master02 is working

[root@master2 cfg]# vim /etc/profile
# Append at the end:
export PATH=$PATH:/opt/kubernetes/bin/
[root@master2 cfg]# source /etc/profile
[root@master2 cfg]# kubectl get node
NAME           STATUS   ROLES    AGE   VERSION
192.168.1.12   Ready    <none>   21h   v1.12.3
192.168.1.13   Ready    <none>   22h   v1.12.3
# master02 can see node1 and node2, so it is serving the cluster

VII. Deploying the load balancers

1. Upload keepalived.conf and nginx.sh to root's home directory on lb1 and lb2

# lb1
[root@lb1 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面
# lb2
[root@lb2 ~]# ls
anaconda-ks.cfg       keepalived.conf  公共  视频  文档  音乐
initial-setup-ks.cfg  nginx.sh         模板  图片  下载  桌面

2. On lb1 (192.168.1.15)

[root@lb1 ~]# systemctl stop firewalld.service
[root@lb1 ~]# setenforce 0
[root@lb1 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
# Reload the yum metadata
[root@lb1 ~]# yum list
# Install nginx
[root@lb1 ~]# yum install nginx -y
[root@lb1 ~]# vim /etc/nginx/nginx.conf
# Insert the following below line 12:
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.1.11:6443;     # master01's IP address
        server 192.168.1.14:6443;     # master02's IP address
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
# Check the syntax
[root@lb1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb1 ~]# cd /usr/share/nginx/html/
[root@lb1 html]# ls
50x.html  index.html
[root@lb1 html]# vim index.html
14 <h1>Welcome to master nginx!</h1>     # add "master" on line 14 to tell the two LBs apart
# Start the service
[root@lb1 html]# systemctl start nginx
# Verify in a browser: http://192.168.1.15 serves the "master" nginx page.

# Deploy keepalived
[root@lb1 html]# yum install keepalived -y
# Overwrite the stock config with the uploaded keepalived.conf
[root@lb1 html]# cd ~
[root@lb1 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
[root@lb1 ~]# vim /etc/keepalived/keepalived.conf
18 script "/etc/nginx/check_nginx.sh"   # line 18: point at /etc/nginx/; the script is written below
23 interface ens33                      # change eth0 to ens33 (check the NIC name with ifconfig)
24 virtual_router_id 51                 # VRRP router ID; each instance is unique
25 priority 100                         # priority; the backup server gets 90
31 virtual_ipaddress {
32 192.168.1.100/24                     # change the VIP to the 192.168.1.100 chosen earlier
# Delete everything from line 38 down, then save with :wq
# Write the health-check script
[root@lb1 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    # count the nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
# If the count is 0, stop keepalived so the VIP fails over
[root@lb1 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb1 ~]# ls /etc/nginx/check_nginx.sh
/etc/nginx/check_nginx.sh    # the script is now executable
[root@lb1 ~]# systemctl start keepalived
[root@lb1 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:24:63:be brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.15/24 brd 192.168.1.255 scope global dynamic ens33
       valid_lft 1370sec preferred_lft 1370sec
    inet 192.168.1.100/24 scope global secondary ens33    # the floating VIP currently sits on lb1
       valid_lft forever preferred_lft forever
    inet6 fe80::1cb1:b734:7f72:576f/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
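The uploaded keepalived.conf is never shown in full. A minimal sketch consistent with the line-number edits above; the router_id, advert_int and authentication values are assumptions:

! Configuration File for keepalived

global_defs {
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER                 # BACKUP on lb2
    interface ens33
    virtual_router_id 51
    priority 100                 # 90 on lb2
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        192.168.1.100/24
    }
    track_script {
        check_nginx
    }
}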

3. On lb2 (192.168.1.16)

[root@lb2 ~]# systemctl stop firewalld.service
[root@lb2 ~]# setenforce 0
[root@lb2 ~]# vim /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
# Reload the yum metadata and install nginx
[root@lb2 ~]# yum list
[root@lb2 ~]# yum install nginx -y
[root@lb2 ~]# vim /etc/nginx/nginx.conf
# Insert the following below line 12:
stream {

   log_format  main  '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log  /var/log/nginx/k8s-access.log  main;

    upstream k8s-apiserver {
        server 192.168.1.11:6443;     # master01's IP address
        server 192.168.1.14:6443;     # master02's IP address
    }
    server {
                listen 6443;
                proxy_pass k8s-apiserver;
    }
    }
# Check the syntax
[root@lb2 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@lb2 ~]# vim /usr/share/nginx/html/index.html
14 <h1>Welcome to backup nginx!</h1>     # add "backup" on line 14 to tell the two LBs apart
# Start the service
[root@lb2 ~]# systemctl start nginx
# Verify in a browser: http://192.168.1.16 serves the "backup" nginx page.

# Deploy keepalived
[root@lb2 ~]# yum install keepalived -y
# Overwrite the stock config with the uploaded keepalived.conf
[root@lb2 ~]# cp keepalived.conf /etc/keepalived/keepalived.conf
cp: overwrite "/etc/keepalived/keepalived.conf"? yes
[root@lb2 ~]# vim /etc/keepalived/keepalived.conf
18 script "/etc/nginx/check_nginx.sh"   # line 18: point at /etc/nginx/
22 state BACKUP                         # line 22: change MASTER to BACKUP
23 interface ens33                      # change eth0 to ens33
24 virtual_router_id 51                 # VRRP router ID; each instance is unique
25 priority 90                          # the backup server's priority is 90
31 virtual_ipaddress {
32 192.168.1.100/24                     # change the VIP to the 192.168.1.100 chosen earlier
# Delete everything from line 38 down, then save with :wq
# Write the same health-check script as on lb1
[root@lb2 ~]# vim /etc/nginx/check_nginx.sh
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")    # count the nginx processes

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@lb2 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@lb2 ~]# systemctl start keepalived
[root@lb2 ~]# ip a
2: ens33: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 00:0c:29:9d:b7:83 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.16/24 brd 192.168.1.255 scope global dynamic ens33
       valid_lft 958sec preferred_lft 958sec
    inet6 fe80::578f:4368:6a2c:80d7/64 scope link
       valid_lft forever preferred_lft forever
    inet6 fe80::6a0c:e6a0:7978:3543/64 scope link tentative dadfailed
       valid_lft forever preferred_lft forever
# 192.168.1.100 is absent here: the VIP lives on lb1 (the MASTER)
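Two follow-ups are worth doing before calling the load-balancing layer done; neither is captured in the original transcript, so treat both as hedged sketches of expected behavior.

First, a failover check: stop nginx on lb1, let check_nginx.sh stop keepalived there, and the VIP should drift to lb2 (and preempt back once lb1 recovers, since lb1 keeps the higher priority):

[root@lb1 ~]# pkill nginx
[root@lb1 ~]# ip a | grep 192.168.1.100       # gone from lb1
[root@lb2 ~]# ip a | grep 192.168.1.100       # now on lb2
    inet 192.168.1.100/24 scope global secondary ens33
[root@lb1 ~]# systemctl start nginx keepalived   # recover; the VIP returns to lb1

Second, for the balancers to actually front the apiservers, the node kubeconfigs need to point at the VIP instead of master01; a sketch assuming the file paths used in this deployment:

# On node1 and node2:
[root@node1 ~]# cd /opt/kubernetes/cfg/
[root@node1 cfg]# sed -i 's/192.168.1.11:6443/192.168.1.100:6443/' bootstrap.kubeconfig kubelet.kubeconfig kube-proxy.kubeconfig
[root@node1 cfg]# systemctl restart kubelet kube-proxy
# Apiserver traffic now goes through the VIP; /var/log/nginx/k8s-access.log on the LBs should show it.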

The multi-node K8s deployment is now complete.

Appendix: setting up the K8s UI (dashboard)

1. On master01

# Create the dashboard working directory
[root@localhost k8s]# mkdir dashboard
# Fetch the official manifests from:
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/dashboard
[root@localhost dashboard]# kubectl create -f dashboard-rbac.yaml
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
[root@localhost dashboard]# kubectl create -f dashboard-secret.yaml
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-key-holder created
[root@localhost dashboard]# kubectl create -f dashboard-configmap.yaml
configmap/kubernetes-dashboard-settings created
[root@localhost dashboard]# kubectl create -f dashboard-controller.yaml
serviceaccount/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
[root@localhost dashboard]# kubectl create -f dashboard-service.yaml
service/kubernetes-dashboard created
# Everything is created in the kube-system namespace
[root@localhost dashboard]# kubectl get pods -n kube-system
NAME                                    READY   STATUS              RESTARTS   AGE
kubernetes-dashboard-65f974f565-m9gm8   0/1     ContainerCreating   0          88s
# Find out how to reach it
[root@localhost dashboard]# kubectl get pods,svc -n kube-system
NAME                                        READY   STATUS    RESTARTS   AGE
pod/kubernetes-dashboard-65f974f565-m9gm8   1/1     Running   0          2m49s
NAME                           TYPE       CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
service/kubernetes-dashboard   NodePort   10.0.0.243   <none>        443:30001/TCP   2m24s
# Access it via any node IP on the NodePort:
https://192.168.1.12:30001/

2. Note: fixing the issue where Chrome cannot open the page

[root@localhost dashboard]# vim dashboard-cert.sh
# The script body was truncated in the original post; it begins with
# "cat > dashboard-csr.json <<EOF". A reconstructed sketch follows.
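A reconstruction in line with how this style of tutorial regenerates the dashboard's serving certificate; the CSR fields are assumptions, while the secret name matches the dashboard-secret.yaml created above:

cat > dashboard-csr.json <<EOF
{
   "CN": "Dashboard",
   "hosts": [],
   "key": { "algo": "rsa", "size": 2048 },
   "names": [{ "C": "CN", "L": "Beijing", "ST": "Beijing" }]
}
EOF

K8S_CA=$1
cfssl gencert -ca=$K8S_CA/ca.pem -ca-key=$K8S_CA/ca-key.pem \
  -config=$K8S_CA/ca-config.json -profile=kubernetes dashboard-csr.json | cfssljson -bare dashboard

# Replace the dashboard's certificate secret with the freshly signed pair
kubectl delete secret kubernetes-dashboard-certs -n kube-system
kubectl create secret generic kubernetes-dashboard-certs --from-file=./ -n kube-system

Run it against the apiserver CA directory, then point the dashboard at the new key pair and re-apply the controller manifest:

[root@localhost dashboard]# bash dashboard-cert.sh /root/k8s/k8s-cert/
[root@localhost dashboard]# vim dashboard-controller.yaml
# Under the container args, add:
#   - --tls-key-file=dashboard-key.pem
#   - --tls-cert-file=dashboard.pem
[root@localhost dashboard]# kubectl apply -f dashboard-controller.yaml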

3. Revisit the page; a login token is now required

4. Generate a token

[root@localhost dashboard]# kubectl create -f k8s-admin.yaml
serviceaccount/dashboard-admin created
clusterrolebinding.rbac.authorization.k8s.io/dashboard-admin created
# Find the generated secret
[root@localhost dashboard]# kubectl get secret -n kube-system
NAME                               TYPE                                  DATA   AGE
dashboard-admin-token-qctfr        kubernetes.io/service-account-token   3      65s
default-token-mmvcg                kubernetes.io/service-account-token   3      7d15h
kubernetes-dashboard-certs         Opaque                                11     10m
kubernetes-dashboard-key-holder    Opaque                                2      63m
kubernetes-dashboard-token-nsc84   kubernetes.io/service-account-token   3      62m
# Read the token
[root@localhost dashboard]# kubectl describe secret dashboard-admin-token-qctfr -n kube-system
Name:         dashboard-admin-token-qctfr
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: dashboard-admin
              kubernetes.io/service-account.uid: 73f19313-47ea-11ea-895a-000c297a15fb

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1359 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtYWRtaW4tdG9rZW4tcWN0ZnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkLWFkbWluIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiNzNmMTkzMTMtNDdlYS0xMWVhLTg5NWEtMDAwYzI5N2ExNWZiIiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZC1hZG1pbiJ9.v4YBoyES2etex6yeMPGfl7OT4U9Ogp-84p6cmx3HohiIS7sSTaCqjb3VIvyrVtjSdlT66ZMRzO3MUgj1HsPxgEzOo9q6xXOCBb429m9Qy-VK2JxuwGVD2dIhcMQkm6nf1Da5ZpcYFs8SNT-djAjZNB_tmMY_Pjao4DBnD2t_JXZUkCUNW_O2D0mUFQP2beE_NE2ZSEtEvmesB8vU2cayTm_94xfvtNjfmGrPwtkdH0iy8sH-T0apepJ7wnZNTGuKOsOJf76tU31qF4E5XRXIt-F2Jmv9pEOFuahSBSaEGwwzXlXOVMSaRF9cBFxn-0iXRh0Aq0K21HdPHW1b4-ZQwA

5. Copy the generated token into the browser's login page to reach the UI.
