
Kubernetes Single-Master Binary Deployment (Hands-On Example)


What this article covers

  1. The three officially supported deployment methods
  2. Kubernetes platform environment planning
  3. Self-signed SSL certificates
  4. Etcd database cluster deployment
  5. Installing Docker on the Node nodes
  6. Flannel container network deployment
  7. Deploying the Master components
  8. Deploying the Node components

The Three Officially Supported Deployment Methods

  • minikube

    Minikube is a tool that quickly runs a single-node Kubernetes cluster on a local machine. It is intended only for users who want to try out Kubernetes or use it for day-to-day development. Deployment docs: https://kubernetes.io/docs/setup/minikube/

  • kubeadm

    Kubeadm is also a tool. It provides the kubeadm init and kubeadm join commands for quickly deploying a Kubernetes cluster. Deployment docs: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

  • Binary packages

    Recommended. Download the released binary packages from the official site and deploy each component manually to build a Kubernetes cluster. Download: https://github.com/kubernetes/kubernetes/releases

Kubernetes Platform Environment Planning

  • Single-Master cluster architecture diagram

  • Multi-Master cluster architecture diagram

Self-Signed SSL Certificates

Certificates used by each component:

etcd: ca.pem, server.pem, server-key.pem
flannel: ca.pem, server.pem, server-key.pem
kube-apiserver: ca.pem, server.pem, server-key.pem
kubelet: ca.pem, ca-key.pem
kube-proxy: ca.pem, kube-proxy.pem, kube-proxy-key.pem
kubectl: ca.pem, admin.pem, admin-key.pem

Etcd Database Cluster Deployment

About etcd

etcd is an open-source project started by the CoreOS team in June 2013. Its goal is to build a highly available distributed key-value database. Internally, etcd uses the Raft protocol as its consensus algorithm, and it is implemented in Go.

  • As a service discovery system, etcd has the following characteristics:

    Simple: easy to install and configure, and it exposes an HTTP API that is easy to work with
    Secure: supports SSL certificate verification
    Fast: according to official benchmarks, a single instance handles 2k+ reads per second
    Reliable: uses the Raft algorithm to keep data in the distributed system available and consistent

The three pillars of etcd
  • A strongly consistent, highly available store for service directories.
    Built on the Raft algorithm, etcd is by nature exactly such a strongly consistent, highly available service store.

  • A mechanism for registering services and monitoring their health.
    Users can register services in etcd and configure a key TTL for each registration, keeping the key alive with periodic heartbeats so that the service's health can be monitored (see the sketch after this list).

  • A mechanism for finding and connecting to services.
    Services registered under a topic in etcd can be looked up under that same topic. To guarantee connectivity, an etcd proxy can be deployed on every service machine, so that any service able to reach the etcd cluster can connect to the others.
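As a small illustration of the register-with-TTL pattern described above, the v2-style etcdctl commands used throughout this tutorial can publish a key that expires unless it is refreshed. The key path and JSON value here are made up for the example and are not part of this deployment:

# Hypothetical example: register a service under a key with a 30-second TTL
etcdctl set --ttl 30 /services/web/node01 '{"addr": "192.168.142.130:80"}'
# The service periodically re-sets (heartbeats) the key before the TTL expires;
# if the process dies, the key disappears and consumers stop seeing it
etcdctl set --ttl 30 /services/web/node01 '{"addr": "192.168.142.130:80"}'
# Consumers discover the currently registered instances by listing the directory
etcdctl ls /services/web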
How etcd is deployed
  • Binary package download address

    https://github.com/etcd-io/etcd/releases

  • Check the cluster status
/opt/etcd/bin/etcdctl \
--ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.0.x:2379,https://192.168.0.x:2379,https://192.168.0.x:2379" \
cluster-health

Installing Docker on the Node Nodes



Hands-On Demonstration

Environment Setup

Software installed on each host:
master (192.168.142.129/24): kube-apiserver, kube-controller-manager, kube-scheduler, etcd
node01 (192.168.142.130/24): kubelet, kube-proxy, docker, flannel, etcd
node02 (192.168.142.131/24): kubelet, kube-proxy, docker, flannel, etcd

Download the Kubernetes server binaries from the official Kubernetes releases page and the etcd binary package from the etcd releases page linked above, then copy both archives into the k8s directory that will be created on the CentOS 7 master below.

Resource package link:

https://pan.baidu.com/s/1QGvhsAVmv2SmbrWMGc3Bng (extraction code: mlh5)

I. Etcd Database Cluster Deployment

1. Operations on the master

mkdir k8s
cd k8s/
mkdir etcd-cert
mv etcd-cert.sh etcd-cert
  • Write a script to download the official cfssl tools
vim cfssl.sh
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo
chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo
  • Run the script to download the cfssl tools
bash cfssl.sh
cfssl: tool for generating certificates
cfssljson: produces certificate files from JSON input
cfssl-certinfo: displays certificate information
cd etcd-cert/
  • Define the CA certificate
cat > ca-config.json <
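The heredoc body above was cut off in the source. A typical ca-config.json for this kind of deployment, assuming it only needs the "www" profile referenced by the cfssl gencert command below, looks like this:

cat > ca-config.json << EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF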
  • Create the CA signing request
cat > ca-csr.json <
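The body of this heredoc is also missing from the source. A common ca-csr.json for the etcd CA (the CN and names fields are assumptions; only the key algorithm and size matter for the rest of the walkthrough) would be:

cat > ca-csr.json << EOF
{
    "CN": "etcd CA",
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF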
  • Generate the CA certificates: ca-key.pem and ca.pem
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • Specify the hosts used for communication among the three etcd nodes
cat > server-csr.json <
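The server-csr.json body was truncated as well. Based on the environment table above, the hosts list has to contain the three etcd node IPs; the remaining fields mirror the CA request and are assumptions:

cat > server-csr.json << EOF
{
    "CN": "etcd",
    "hosts": [
        "192.168.142.129",
        "192.168.142.130",
        "192.168.142.131"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing"
        }
    ]
}
EOF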
  • Generate the etcd certificates: server-key.pem and server.pem
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
  • Extract the etcd binary package
tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
  • Create directories for the configuration file, binaries, and certificates
mkdir -p /opt/etcd/{cfg,bin,ssl}
mv etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
  • Copy the certificates
cp etcd-cert/*.pem /opt/etcd/ssl/
  • The command blocks while waiting for the other nodes to join (run from the k8s directory); a sketch of what etcd.sh typically generates follows below
bash etcd.sh etcd01 192.168.142.129 etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380
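The etcd.sh script itself is never shown in this post. A minimal sketch of what such a script usually does, assuming it writes the /opt/etcd/cfg/etcd file edited on the nodes below plus a matching systemd unit (flag names follow etcd v3.3), is:

#!/bin/bash
# Hypothetical etcd.sh: ./etcd.sh <NAME> <IP> <other peers as name=url[,name=url]>
ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3

# Configuration file consumed by the systemd unit below
cat <<EOF >/opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# Systemd unit that reads the file above and points etcd at the self-signed certs
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \\
--name=\${ETCD_NAME} \\
--data-dir=\${ETCD_DATA_DIR} \\
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \\
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \\
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \\
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \\
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \\
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \\
--initial-cluster-state=new \\
--cert-file=/opt/etcd/ssl/server.pem \\
--key-file=/opt/etcd/ssl/server-key.pem \\
--peer-cert-file=/opt/etcd/ssl/server.pem \\
--peer-key-file=/opt/etcd/ssl/server-key.pem \\
--trusted-ca-file=/opt/etcd/ssl/ca.pem \\
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd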
  • Open another session and you will see that the etcd process is already running
ps -ef | grep etcd
  • Copy the certificates to the other nodes
scp -r /opt/etcd/ root@192.168.142.130:/opt/
scp -r /opt/etcd/ root@192.168.142.131:/opt/
  • Copy the systemd unit to the other nodes
scp /usr/lib/systemd/system/etcd.service root@192.168.142.130:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.142.131:/usr/lib/systemd/system/

2. Operations on node01

  • Edit the etcd configuration file
vim /opt/etcd/cfg/etcd
  • Change the name and the addresses
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.142.130:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.142.130:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.142.130:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.142.130:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.142.129:2380,etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • Start the service
systemctl start etcd
systemctl status etcd

3. Operations on node02

  • Edit the etcd configuration file
vim /opt/etcd/cfg/etcd
  • Change the name and the addresses
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.142.131:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.142.131:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.142.131:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.142.131:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.142.129:2380,etcd02=https://192.168.142.130:2380,etcd03=https://192.168.142.131:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
  • Start the service
systemctl start etcd
systemctl status etcd

4. Check the cluster status from the master (in the k8s/etcd-cert/ directory)

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" cluster-health
member 3eae9a550e2e3ec is healthy: got healthy result from https://192.168.142.129:2379
member 26cd4dcf17bc5cbd is healthy: got healthy result from https://192.168.142.130:2379
member 2fcd2df8a9411750 is healthy: got healthy result from https://192.168.142.131:2379
cluster is healthy


II. Installing Docker on the Node Nodes

# Install dependencies
yum install yum-utils device-mapper-persistent-data lvm2 -y
# Configure the Aliyun mirror repository
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# Install Docker CE
yum install -y docker-ce
# Stop the firewall and put SELinux into permissive mode
systemctl stop firewalld.service
setenforce 0
# Start Docker and enable it at boot
systemctl start docker.service
systemctl enable docker.service
# Check the Docker processes
ps aux | grep docker
# Reload the systemd daemon
systemctl daemon-reload
# Restart the service
systemctl restart docker


III. Flannel Container Network Deployment

  • On the master, write the allocated subnet into etcd for flannel to use (in the k8s/etcd-cert/ directory)
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
  • Check the written configuration
/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379" get /coreos.com/network/config
  • Copy the flannel package to all node nodes (flannel only needs to be deployed on the nodes)
cd /root/k8s
scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.142.130:/root
scp flannel-v0.10.0-linux-amd64.tar.gz root@192.168.142.131:/root
  • On every node, extract the package
tar zxvf flannel-v0.10.0-linux-amd64.tar.gz 
  • Create the Kubernetes working directories on each node
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
vim flannel.sh

#!/bin/bash
ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
  • Enable the flannel network
bash flannel.sh https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379
  • Configure Docker to use the flannel network
vim /usr/lib/systemd/system/docker.service

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
# Insert the following entry before the ExecStart line (around line 14)
EnvironmentFile=/run/flannel/subnet.env
# Reference the $DOCKER_NETWORK_OPTIONS parameter
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

# View the generated network parameters
cat /run/flannel/subnet.env

DOCKER_OPT_BIP="--bip=172.17.15.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
# Note: bip specifies the subnet Docker uses at startup
DOCKER_NETWORK_OPTIONS=" --bip=172.17.15.1/24 --ip-masq=false --mtu=1450"
  • Restart the Docker service
systemctl daemon-reload
systemctl restart docker
  • Check the flannel network information
[root@localhost ~]# ifconfig
docker0: flags=4099  mtu 1500
        inet 172.17.56.1  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:74:32:33:e3  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

ens33: flags=4163  mtu 1500
        inet 192.168.142.130  netmask 255.255.255.0  broadcast 192.168.142.255
        inet6 fe80::8cb8:16f4:91a1:28d5  prefixlen 64  scopeid 0x20
        ether 00:0c:29:04:f1:1f  txqueuelen 1000  (Ethernet)
        RX packets 436817  bytes 153162687 (146.0 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 375079  bytes 47462997 (45.2 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163  mtu 1450
        inet 172.17.56.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::249c:c8ff:fec0:4baf  prefixlen 64  scopeid 0x20
        ether 26:9c:c8:c0:4b:af  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 26  overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10
        loop  txqueuelen 1  (Local Loopback)
        RX packets 1915  bytes 117267 (114.5 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 1915  bytes 117267 (114.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

virbr0: flags=4099  mtu 1500
        inet 192.168.122.1  netmask 255.255.255.0  broadcast 192.168.122.255
        ether 52:54:00:61:63:f2  txqueuelen 1000  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  • Ping the docker0 interface on the other node to confirm that flannel is routing traffic
docker run -it centos:7 /bin/bash
yum install net-tools -y
  • Check the flannel network information inside the container
[root@5f9a65565b53 /]# ifconfig
eth0: flags=4163  mtu 1450
        inet 172.17.56.2  netmask 255.255.255.0  broadcast 172.17.56.255
        ether 02:42:ac:11:38:02  txqueuelen 0  (Ethernet)
        RX packets 15632  bytes 13894772 (13.2 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7987  bytes 435819 (425.6 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

lo: flags=73  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        loop  txqueuelen 1  (Local Loopback)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
  • Ping between the centos:7 containers on the two nodes
[root@f1e937618b50 /]# ping 172.17.15.2
PING 172.17.15.2 (172.17.15.2) 56(84) bytes of data.
64 bytes from 172.17.15.2: icmp_seq=1 ttl=62 time=0.420 ms
64 bytes from 172.17.15.2: icmp_seq=2 ttl=62 time=0.302 ms
64 bytes from 172.17.15.2: icmp_seq=3 ttl=62 time=0.420 ms
64 bytes from 172.17.15.2: icmp_seq=4 ttl=62 time=0.364 ms
64 bytes from 172.17.15.2: icmp_seq=5 ttl=62 time=0.114 ms

IV. Deploying the Master Components

1. Self-signed apiserver certificates

  • Set up the apiserver working directory
cd k8s/
unzip master.zip
mkdir -p /opt/kubernetes/{cfg,bin,ssl}
# Directory for the self-signed apiserver certificates
mkdir apiserver
cd apiserver/
  • Create the CA certificate
# Define the CA certificate and generate the CA config file
cat > ca-config.json <
# Define the CA signing request
cat > ca-csr.json << EOF
{
    "CN": "kubernetes",
    "key": {
         "algo": "rsa",
         "size": 2048
    },
    "names": [
       {
              "C": "CN",
              "L": "Beijing",
              "ST": "Beijing",
              "O": "k8s",
              "OU": "System"
       }
    ]
}
EOF
# Sign the certificate (generates ca.pem and ca-key.pem)
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
  • Create the apiserver communication certificate
# Define the apiserver certificate and generate its signing request
cat > server-csr.json <
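The server-csr.json body is truncated in the source. A typical request for this topology would include, in its hosts list, the first service IP from the 10.0.0.0/24 service range used later by apiserver.sh, the loopback address, the master's IP from the environment table, and the in-cluster DNS names of the kubernetes service; the other fields are assumptions:

cat > server-csr.json << EOF
{
    "CN": "kubernetes",
    "hosts": [
        "10.0.0.1",
        "127.0.0.1",
        "192.168.142.129",
        "kubernetes",
        "kubernetes.default",
        "kubernetes.default.svc",
        "kubernetes.default.svc.cluster",
        "kubernetes.default.svc.cluster.local"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF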
  • Create the admin certificate
# Define the admin certificate
cat > admin-csr.json <
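A common admin-csr.json, shown here as an assumption since the body is missing from the source, puts the admin user in the system:masters group (the O field), which RBAC maps to cluster-admin privileges:

cat > admin-csr.json << EOF
{
    "CN": "admin",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "system:masters",
            "OU": "System"
        }
    ]
}
EOF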
  • Create the kube-proxy certificate
# Define the kube-proxy certificate
cat > kube-proxy-csr.json <
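Again the heredoc body is missing. A typical kube-proxy-csr.json uses the CN system:kube-proxy, so that the default system:node-proxier binding applies; the remaining fields are assumptions:

cat > kube-proxy-csr.json << EOF
{
    "CN": "system:kube-proxy",
    "hosts": [],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "CN",
            "L": "Beijing",
            "ST": "Beijing",
            "O": "k8s",
            "OU": "System"
        }
    ]
}
EOF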
  • Run the certificate script and copy the results
bash k8s-cert.sh
cp -p *.pem /opt/kubernetes/ssl/
  • Copy the server binaries
cd ..
tar zxvf kubernetes-server-linux-amd64.tar.gz
cd kubernetes/server/bin/
cp -p kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
  • Create the token file
cd /opt/kubernetes/cfg
# Generate a random bootstrap token
export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
cat > token.csv << EOF
${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:kubelet-bootstrap"
EOF
  • Create the apiserver startup script
vim apiserver.sh

#!/bin/bash
MASTER_ADDRESS=$1
ETCD_SERVERS=$2

cat <<EOF >/opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \\
--v=4 \\
--etcd-servers=${ETCD_SERVERS} \\
--bind-address=${MASTER_ADDRESS} \\
--secure-port=6443 \\
--advertise-address=${MASTER_ADDRESS} \\
--allow-privileged=true \\
--service-cluster-ip-range=10.0.0.0/24 \\
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \\
--authorization-mode=RBAC,Node \\
--kubelet-https=true \\
--enable-bootstrap-token-auth \\
--token-auth-file=/opt/kubernetes/cfg/token.csv \\
--service-node-port-range=30000-50000 \\
--tls-cert-file=/opt/kubernetes/ssl/server.pem \\
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \\
--client-ca-file=/opt/kubernetes/ssl/ca.pem \\
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \\
--etcd-cafile=/opt/etcd/ssl/ca.pem \\
--etcd-certfile=/opt/etcd/ssl/server.pem \\
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver \$KUBE_APISERVER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Run the apiserver startup script
bash apiserver.sh 192.168.142.129 https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379
  • View the apiserver configuration file
cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.142.129:2379,https://192.168.142.130:2379,https://192.168.142.131:2379 \
--bind-address=192.168.142.129 \
--secure-port=6443 \
--advertise-address=192.168.142.129 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"
  • Start the apiserver service
systemctl daemon-reload
systemctl start kube-apiserver
systemctl status kube-apiserver
systemctl enable kube-apiserver
# Check the running processes and listening ports
ps aux | grep kube

2. Deploying the Controller-Manager Service

  • Copy the control binary
cd /k8s/kubernetes/server/bin
# Copy the binary
cp -p kube-controller-manager /opt/kubernetes/bin/
  • Write the kube-controller-manager configuration file
cat <<EOF >/opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"
EOF
  • Write the kube-controller-manager systemd unit
cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager \$KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Make the unit executable and start the service
chmod +x /usr/lib/systemd/system/kube-controller-manager.service
systemctl start kube-controller-manager
systemctl status kube-controller-manager
systemctl enable kube-controller-manager
# Check the listening ports
netstat -atnp | grep kube-controll

3. Deploying the Scheduler Service

  • Copy the control binary
cd /k8s/kubernetes/server/bin
# Copy the binary
cp -p kube-scheduler /opt/kubernetes/bin/
  • Write the configuration file
cat <<EOF >/opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"
EOF
  • Write the systemd unit
cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler \$KUBE_SCHEDULER_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Start the service
chmod +x /usr/lib/systemd/system/kube-scheduler.service
systemctl daemon-reload
systemctl start kube-scheduler
systemctl status kube-scheduler
systemctl enable kube-scheduler
# Check the listening ports
netstat -atnp | grep schedule
  • Check the master component status
/opt/kubernetes/bin/kubectl get cs
# After a successful deployment every component should report Healthy
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}

4. Deploying kubelet & kube-proxy

  • Push the binaries to the nodes
cd kubernetes/server/bin
# Push the binaries to the node nodes
scp -p kubelet kube-proxy root@192.168.142.130:/opt/kubernetes/bin/
scp -p kubelet kube-proxy root@192.168.142.131:/opt/kubernetes/bin/
  • Create bootstrap.kubeconfig
cd /root/k8s/kubernetes/
# Point at the API server entry, i.e. the master itself (the apiserver must already be installed)
export KUBE_APISERVER="https://192.168.142.129:6443"

# Set the cluster
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/kubernetes/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig

# Set the client credentials
/opt/kubernetes/bin/kubectl config set-credentials kubelet-bootstrap \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig

# Set the context parameters
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kubelet-bootstrap \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig

# Switch to the default context
/opt/kubernetes/bin/kubectl config use-context default \
--kubeconfig=/k8s/kubeconfig/bootstrap.kubeconfig
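The set-credentials step above relies on the BOOTSTRAP_TOKEN variable exported when token.csv was created. If these commands are run in a new shell the variable will be empty, so a small safeguard (a sketch, assuming token.csv is where the master section wrote it) is to re-read it from the file first:

# Re-read the bootstrap token from token.csv if the variable is not already set
BOOTSTRAP_TOKEN=${BOOTSTRAP_TOKEN:-$(cut -d',' -f1 /opt/kubernetes/cfg/token.csv)}
echo "Using bootstrap token: ${BOOTSTRAP_TOKEN}"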
  • Create the kube-proxy kubeconfig file
# Set the cluster
/opt/kubernetes/bin/kubectl config set-cluster kubernetes \
--certificate-authority=/opt/etcd/ssl/ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig

# Set the client credentials
/opt/kubernetes/bin/kubectl config set-credentials kube-proxy \
--client-certificate=/opt/kubernetes/ssl/kube-proxy.pem \
--client-key=/opt/kubernetes/ssl/kube-proxy-key.pem \
--embed-certs=true \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig

# Set the context parameters
/opt/kubernetes/bin/kubectl config set-context default \
--cluster=kubernetes \
--user=kube-proxy \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig

# Switch to the default context
/opt/kubernetes/bin/kubectl config use-context default \
--kubeconfig=/k8s/kubeconfig/kube-proxy.kubeconfig
  • Push the kubeconfig files to the nodes
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.142.130:/opt/kubernetes/cfg/
scp bootstrap.kubeconfig kube-proxy.kubeconfig root@192.168.142.131:/opt/kubernetes/cfg/
  • Add kubectl to the PATH
echo "export PATH=\$PATH:/opt/kubernetes/bin/" >> /etc/profile
source /etc/profile
  • Create the bootstrap role binding
kubectl create clusterrolebinding kubelet-bootstrap \
--clusterrole=system:node-bootstrapper \
--user=kubelet-bootstrap

V. Deploying the Node Components

1. Installing kubelet

  • Create the kubelet configuration files (the heredocs below expand two shell variables; see the note after them)
cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF
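${NODE_ADDRESS} and ${DNS_SERVER_IP} are normally filled in by a kubelet.sh wrapper script. If the heredocs are pasted directly into a shell, the variables have to be set first; the values below are assumptions based on this tutorial's addressing (node01's IP, and a cluster DNS address inside the 10.0.0.0/24 service range):

# Hypothetical values; on node02 use 192.168.142.131 instead
NODE_ADDRESS=192.168.142.130
DNS_SERVER_IP=10.0.0.2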
  • Create the kubelet systemd unit
cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF
  • Start the service
chmod +x /usr/lib/systemd/system/kubelet.service
systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
  • Verify on the master
# Check the certificate signing requests
kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A   55s     kubelet-bootstrap   Approved,Issued
  • Approve the request and issue the certificate
kubectl certificate approve node-csr-NOI-9vufTLIqJgMWq4fHPNPHKbjCXlDGHptj7FqTa8A
  • Check the cluster nodes
kubectl get nodes

2. Installing kube-proxy

  • Create the kube-proxy configuration file
cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=192.168.142.130 \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF
  • Create the kube-proxy systemd unit
cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
  • Start the service
chmod +x /usr/lib/systemd/system/kube-proxy.service
systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
# Check the listening ports
netstat -atnp | grep proxy

Thanks for reading!
