
Installing and Deploying a Kubernetes (K8s) Cluster on CentOS 7


There are several ways to install a Kubernetes cluster: compile it from source, install pre-built binary packages, or use the kubeadm tool. This article installs the cluster from pre-built binaries.

System Environment

| Hostname   | IP Address    | Operating System  | Installed Components |
| ---------- | ------------- | ----------------- | -------------------- |
| k8s-master | 192.168.2.212 | CentOS 7.5 64-bit | etcd, kube-apiserver, kube-controller-manager, kube-scheduler |
| k8s-node1  | 192.168.2.213 | CentOS 7.5 64-bit | kubelet, kube-proxy |
| k8s-node2  | 192.168.2.214 | CentOS 7.5 64-bit | kubelet, kube-proxy |
| k8s-node3  | 192.168.2.215 | CentOS 7.5 64-bit | kubelet, kube-proxy |

I. Global Operations (run on all machines)

1. Install the required tools

yum -y install vim bash-completion wget

Note: with bash-completion installed, the Tab key completes long-form options, which is very convenient. All kubectl options are long-form; if you can barely remember the commands themselves, the long options are even harder.
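Once kubectl is in place (it is copied to /usr/bin later in this article), you can enable completion for kubectl itself. A minimal sketch, added here for convenience and not part of the original steps:

# load the bash-completion framework installed above
source /usr/share/bash-completion/bash_completion
# generate and load kubectl's completion script in the current shell
source <(kubectl completion bash)
# persist it for future shells
echo 'source <(kubectl completion bash)' >> ~/.bashrc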
2. Disable the firewalld firewall
There is heavy network traffic between the Kubernetes master and the worker nodes. The safe approach is to open only the ports each component needs in the firewall; firewall configuration will be covered in a later post. In a trusted internal network it is reasonable to disable the firewall service, and since this is a test environment, that is what we do here.

systemctl disable firewalld
systemctl stop firewalld

3. Disable SELinux
SELinux is disabled so that containers can read the host filesystem.

sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/configsed -i 's/SELINUXTYPE=targeted/#&/' /etc/selinux/configsetenforce 0

II. Deploying the Master Node

1. Install CFSSL

[root@k8s-master ~]# wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -O /usr/local/bin/cfssl
[root@k8s-master ~]# wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -O /usr/local/bin/cfssljson
[root@k8s-master ~]# wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -O /usr/local/bin/cfssl-certinfo
[root@k8s-master ~]# chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson /usr/local/bin/cfssl-certinfo

2. Download and unpack the pre-built binary packages

[root@k8s-master tmp]# wget https://github.com/etcd-io/etcd/releases/download/v3.3.10/etcd-v3.3.10-linux-amd64.tar.gz
[root@k8s-master tmp]# wget https://dl.k8s.io/v1.12.2/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master tmp]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@k8s-master tmp]# tar zxvf kubernetes-server-linux-amd64.tar.gz

3. Copy the executables to /usr/bin

[root@k8s-master tmp]# cd etcd-v3.3.10-linux-amd64/
[root@k8s-master etcd-v3.3.10-linux-amd64]# cp -p etcd etcdctl /usr/bin/
[root@k8s-master etcd-v3.3.10-linux-amd64]# cd /tmp/kubernetes/server/bin/
[root@k8s-master bin]# cp -p kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/bin/

4. Configure the etcd service
Note: etcd is the database of the Kubernetes cluster and stores the data of every resource object, so for safety we use certificate-based authentication. In production it is recommended to separate etcd out and deploy it as a dedicated cluster.
(1) Generate the CA certificate configuration files

[root@k8s-master bin]# mkdir -p /etc/{etcd/ssl,kubernetes/ssl}
[root@k8s-master bin]# cd /etc/etcd/ssl/
[root@k8s-master ssl]# cfssl print-defaults config > ca-config.json
[root@k8s-master ssl]# cfssl print-defaults csr > ca-csr.json

(2) Edit the configuration files
Edit ca-config.json and set the validity period to 43800h (5 years):

{    "signing": {        "default": {            "expiry": "43800h"        },        "profiles": {            "kubernetes": {                "expiry": "43800h",                "usages": [                    "signing",                    "key encipherment",                    "server auth",                    "client auth"                ]            }        }    }}

"server auth","client auth"表示服务端和客户端使用相同的证书验证。
修改ca-csr.json文件,内容如下

{    "CN": "k8s-master",    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "L": "ShangHai",            "ST": "ShangHai",            "O": "k8s",            "OU": "System"        }    ]}

(3) Generate the CA certificate and private key files

[root@k8s-master ssl]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca
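
This produces ca.pem, ca-key.pem, and ca.csr in the current directory. To inspect the issued CA certificate (an optional check using the cfssl-certinfo tool installed earlier):

[root@k8s-master ssl]# cfssl-certinfo -cert ca.pem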


(4) Issue the etcd certificate

[root@k8s-master ssl]# cfssl print-defaults csr > etcd-csr.json

Edit etcd-csr.json as follows:

{    "CN": "etcd",    "hosts": [        "127.0.0.1",        "192.168.2.212"    ],    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "L": "ShangHai",            "ST": "ShangHai",            "O": "k8s",            "OU": "System"        }    ]}

Generate the etcd certificate and private key:

[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=127.0.0.1,192.168.2.212 etcd-csr.json | cfssljson -bare etcd

注:"hosts"里填写所有etcd主机的IP,-hostname填写当前主机的IP,也可以填写所有etcd主机的IP,这样其他etcd节点就不需要再创建证书和私钥了,拷贝过去直接使用。 -profile=kubernetes这个值根据对应ca-config.json文件中的profiles字段的值。
(5) Create a script that generates the etcd configuration files

[root@k8s-master ssl]# cd /root/
[root@k8s-master ~]# vim etcd.sh

#!/bin/bash
etcd_data_dir=/data/etcd
mkdir -p ${etcd_data_dir}
ETCD_NAME=${1:-"etcd"}
ETCD_LISTEN_IP=${2:-"192.168.2.212"}
ETCD_INITIAL_CLUSTER=${3:-}

cat <<EOF >/etc/etcd/etcd.conf
# [member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="${etcd_data_dir}/default.etcd"
#ETCD_SNAPSHOT_COUNTER="10000"
#ETCD_HEARTBEAT_INTERVAL="100"
#ETCD_ELECTION_TIMEOUT="1000"
ETCD_LISTEN_PEER_URLS="https://${ETCD_LISTEN_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_LISTEN_IP}:2379,https://127.0.0.1:2379"
#ETCD_MAX_SNAPSHOTS="5"
#ETCD_MAX_WALS="5"
#ETCD_CORS=""
#
#[cluster]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_LISTEN_IP}:2380"
ETCD_INITIAL_CLUSTER="${ETCD_INITIAL_CLUSTER}"
ETCD_INITIAL_CLUSTER_STATE="new"
ETCD_INITIAL_CLUSTER_TOKEN="k8s-etcd-cluster"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_LISTEN_IP}:2379"
#
#[proxy]
#ETCD_PROXY="off"
#
#[security]
CLIENT_CERT_AUTH="true"
ETCD_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_CERT_FILE="/etc/etcd/ssl/${ETCD_NAME}.pem"
ETCD_KEY_FILE="/etc/etcd/ssl/${ETCD_NAME}-key.pem"
PEER_CLIENT_CERT_AUTH="true"
ETCD_PEER_CA_FILE="/etc/etcd/ssl/ca.pem"
ETCD_PEER_CERT_FILE="/etc/etcd/ssl/${ETCD_NAME}.pem"
ETCD_PEER_KEY_FILE="/etc/etcd/ssl/${ETCD_NAME}-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=${etcd_data_dir}
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/bin/bash -c "GOMAXPROCS=\$(nproc) /usr/bin/etcd"

[Install]
WantedBy=multi-user.target
EOF

(6) Run the script to generate the configuration file and unit file

[root@k8s-master ~]# sh etcd.sh
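
etcd.sh takes three optional positional arguments: the member name, the listen IP, and the initial-cluster string. Run bare, as above, it uses the single-node defaults. For a multi-member etcd cluster the call would look roughly like this (a sketch only; each member also needs its own certificate files under /etc/etcd/ssl named after its member name):

[root@k8s-master ~]# sh etcd.sh etcd01 192.168.2.212 "etcd01=https://192.168.2.212:2380,etcd02=https://192.168.2.213:2380"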

5. Configure the kube-apiserver service
(1) Create a script that generates the apiserver configuration files
The apiserver.sh script is as follows:

#!/usr/bin/env bash
MASTER_ADDRESS=${1:-"192.168.2.212"}
ETCD_SERVERS=${2:-"https://127.0.0.1:2379"}
SERVICE_CLUSTER_IP_RANGE=${3:-"10.10.10.0/24"}
ADMISSION_CONTROL=${4:-""}
API_LOGDIR=${5:-"/data/apiserver/log"}
mkdir -p ${API_LOGDIR}

cat <<EOF >/etc/kubernetes/kube-apiserver
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=false"
APISERVER_LOGDIR="--log-dir=${API_LOGDIR}"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=2"

# --etcd-servers=[]: List of etcd servers to watch (http://ip:port),
# comma separated. Mutually exclusive with -etcd-config
KUBE_ETCD_SERVERS="--etcd-servers=${ETCD_SERVERS}"

# --etcd-cafile="": SSL Certificate Authority file used to secure etcd communication.
KUBE_ETCD_CAFILE="--etcd-cafile=/etc/etcd/ssl/ca.pem"

# --etcd-certfile="": SSL certification file used to secure etcd communication.
KUBE_ETCD_CERTFILE="--etcd-certfile=/etc/etcd/ssl/etcd.pem"

# --etcd-keyfile="": key file used to secure etcd communication.
KUBE_ETCD_KEYFILE="--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem"

# --insecure-bind-address=127.0.0.1: The IP address on which to serve the --insecure-port.
KUBE_API_ADDRESS="--insecure-bind-address=0.0.0.0"

# --insecure-port=8080: The port on which to serve unsecured, unauthenticated access.
KUBE_API_PORT="--insecure-port=8080"

# --kubelet-port=10250: Kubelet port
NODE_PORT="--kubelet-port=10250"

# --advertise-address=: The IP address on which to advertise
# the apiserver to members of the cluster.
KUBE_ADVERTISE_ADDR="--advertise-address=${MASTER_ADDRESS}"

# --allow-privileged=false: If true, allow privileged containers.
KUBE_ALLOW_PRIV="--allow-privileged=false"

# --service-cluster-ip-range=: A CIDR notation IP range from which to assign service cluster IPs.
# This must not overlap with any IP ranges assigned to nodes for pods.
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=${SERVICE_CLUSTER_IP_RANGE}"

# --admission-control="AlwaysAdmit": Ordered list of plug-ins
# to do admission control of resources into cluster.
KUBE_ADMISSION_CONTROL="--admission-control=${ADMISSION_CONTROL}"
EOF

KUBE_APISERVER_OPTS="   \${KUBE_LOGTOSTDERR}         \\
                        \${APISERVER_LOGDIR}         \\
                        \${KUBE_LOG_LEVEL}           \\
                        \${KUBE_ETCD_SERVERS}        \\
                        \${KUBE_ETCD_CAFILE}         \\
                        \${KUBE_ETCD_CERTFILE}       \\
                        \${KUBE_ETCD_KEYFILE}        \\
                        \${KUBE_API_ADDRESS}         \\
                        \${KUBE_API_PORT}            \\
                        \${NODE_PORT}                \\
                        \${KUBE_ADVERTISE_ADDR}      \\
                        \${KUBE_ALLOW_PRIV}          \\
                        \${KUBE_SERVICE_ADDRESSES}   \\
                        \${KUBE_ADMISSION_CONTROL}   \\
                        \${KUBE_API_CLIENT_CA_FILE}  \\
                        \${KUBE_API_TLS_CERT_FILE}   \\
                        \${KUBE_API_TLS_PRIVATE_KEY_FILE}"

cat <<EOF >/usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver
ExecStart=/usr/bin/kube-apiserver ${KUBE_APISERVER_OPTS}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF
Startup flag notes:
--logtostderr: set to false so logs are written to files rather than stderr.
--log-dir: log file directory.
--v: log verbosity level.
--etcd-servers: URLs of the etcd service.
--etcd-cafile: path to the CA root certificate used when connecting to etcd.
--etcd-certfile: path to the client certificate used when connecting to etcd.
--etcd-keyfile: path to the client private key used when connecting to etcd.
--insecure-bind-address: the apiserver's insecure bind IP; this flag is deprecated and will be replaced later.
--insecure-port: the apiserver's insecure port; this flag is deprecated and will be replaced later.
--kubelet-port: the kubelet port; this flag is deprecated and will be removed later.
--advertise-address: the apiserver host IP advertised to other cluster members.
--allow-privileged: whether containers may run in privileged mode; defaults to false.
--service-cluster-ip-range: the virtual IP range for cluster Services.
--admission-control: the cluster's admission control settings; the modules take effect in order as plugins. This flag is deprecated and will be replaced later.

(2) Run the script to generate the configuration file and unit file

[root@k8s-master ~]# sh apiserver.sh

6. Configure the kube-controller-manager service
(1) Create a script that generates the controller-manager configuration files
The controller-manager.sh script is as follows:

#!/usr/bin/env bash
MASTER_ADDRESS=${1:-"192.168.2.212"}
CON_LOGDIR=${2:-"/data/controller-manager/log"}
mkdir -p ${CON_LOGDIR}

cat <<EOF >/etc/kubernetes/kube-controller-manager
KUBE_LOGTOSTDERR="--logtostderr=false"
CONTROLLER_LOGDIR="--log-dir=${CON_LOGDIR}"
KUBE_LOG_LEVEL="--v=2"
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"

# --root-ca-file="": If set, this root certificate authority will be included in
# service account's token secret. This must be a valid PEM-encoded CA bundle.
KUBE_CONTROLLER_MANAGER_ROOT_CA_FILE="--root-ca-file=/etc/kubernetes/ca.pem"

# --service-account-private-key-file="": Filename containing a PEM-encoded private
# RSA key used to sign service account tokens.
KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE="--service-account-private-key-file=/etc/kubernetes/k8s-server-key.pem"

# --leader-elect: Start a leader election client and gain leadership before
# executing the main loop. Enable this when running replicated components for high availability.
KUBE_LEADER_ELECT="--leader-elect"
EOF

KUBE_CONTROLLER_MANAGER_OPTS="  \${KUBE_LOGTOSTDERR}  \\
                                \${CONTROLLER_LOGDIR} \\
                                \${KUBE_LOG_LEVEL}    \\
                                \${KUBE_MASTER}       \\
                                \${KUBE_CONTROLLER_MANAGER_ROOT_CA_FILE} \\
                                \${KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE} \\
                                \${KUBE_LEADER_ELECT}"

cat <<EOF >/usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager
ExecStart=/usr/bin/kube-controller-manager ${KUBE_CONTROLLER_MANAGER_OPTS}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

(2) Run the script to generate the configuration file and unit file

[root@k8s-master ~]# sh controller-manager.sh

7. Configure the kube-scheduler service
(1) Create a script that generates the scheduler configuration files
The scheduler.sh script is as follows:

#!/usr/bin/env bash
MASTER_ADDRESS=${1:-"192.168.2.212"}
SCH_LOGDIR=${2:-"/data/scheduler/log"}
mkdir -p ${SCH_LOGDIR}

cat <<EOF >/etc/kubernetes/kube-scheduler
###
# kubernetes scheduler config

# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=false"
SCHEDULER_LOGDIR="--log-dir=${SCH_LOGDIR}"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=4"

# --master: The address of the Kubernetes API server (overrides any value in kubeconfig).
KUBE_MASTER="--master=${MASTER_ADDRESS}:8080"

# --leader-elect: Start a leader election client and gain leadership before
# executing the main loop. Enable this when running replicated components for high availability.
KUBE_LEADER_ELECT="--leader-elect"

# Add your own!
KUBE_SCHEDULER_ARGS=""
EOF

KUBE_SCHEDULER_OPTS="   \${KUBE_LOGTOSTDERR}     \\
                        \${SCHEDULER_LOGDIR}     \\
                        \${KUBE_LOG_LEVEL}       \\
                        \${KUBE_MASTER}          \\
                        \${KUBE_LEADER_ELECT}    \\
                        \$KUBE_SCHEDULER_ARGS"

cat <<EOF >/usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler
ExecStart=/usr/bin/kube-scheduler ${KUBE_SCHEDULER_OPTS}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

(2) Run the script to generate the configuration file and unit file

[root@k8s-master ~]# sh scheduler.sh

8. Start all services on the master

[root@k8s-master ~]# systemctl daemon-reload
[root@k8s-master ~]# systemctl enable etcd
[root@k8s-master ~]# systemctl start etcd
[root@k8s-master ~]# systemctl enable kube-apiserver
[root@k8s-master ~]# systemctl start kube-apiserver
[root@k8s-master ~]# systemctl enable kube-controller-manager
[root@k8s-master ~]# systemctl start kube-controller-manager
[root@k8s-master ~]# systemctl enable kube-scheduler
[root@k8s-master ~]# systemctl start kube-scheduler
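
With the four services running, a quick sanity check (my own addition; at this stage kubectl talks to the insecure port on localhost:8080 by default):

[root@k8s-master ~]# kubectl get componentstatuses

The scheduler, controller-manager, and etcd-0 entries should all report Healthy.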

9. Verify that etcd is running

[root@k8s-master ~]# export ETCDCTL_API=3
[root@k8s-master ~]# etcdctl --key="/etc/etcd/ssl/etcd-key.pem" --cert="/etc/etcd/ssl/etcd.pem" --cacert="/etc/etcd/ssl/ca.pem" endpoint health
[root@k8s-master ~]# etcdctl --key="/etc/etcd/ssl/etcd-key.pem" --cert="/etc/etcd/ssl/etcd.pem" --cacert="/etc/etcd/ssl/ca.pem" member list


ETCDCTL_API=3 selects the etcd 3.x command set. Because etcd was configured with certificates, the etcdctl commands must also carry the certificate files.
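
As a further smoke test (optional, not in the original), you can write a key and read it back:

[root@k8s-master ~]# etcdctl --key="/etc/etcd/ssl/etcd-key.pem" --cert="/etc/etcd/ssl/etcd.pem" --cacert="/etc/etcd/ssl/ca.pem" put /test "hello"
[root@k8s-master ~]# etcdctl --key="/etc/etcd/ssl/etcd-key.pem" --cert="/etc/etcd/ssl/etcd.pem" --cacert="/etc/etcd/ssl/ca.pem" get /test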

III. Deploying the Worker Nodes

1. Set up the Docker environment
(1) Install Docker
Note: this installs the Docker Community Edition, version 18.06.1-ce.

[root@k8s-node1 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
[root@k8s-node1 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
[root@k8s-node1 ~]# yum makecache fast
[root@k8s-node1 ~]# yum -y install docker-ce

(2) Edit the configuration file: add the private registry address and a registry mirror, and point Docker's data directory at /data/docker

[root@k8s-node1 ~]# mkdir -p /data/docker
[root@k8s-node1 ~]# mkdir -p /etc/docker
[root@k8s-node1 ~]# vim /etc/docker/daemon.json

{
  "registry-mirrors": ["https://registry.docker-cn.com"],
  "graph": "/data/docker",
  "insecure-registries": ["192.168.2.225:5000"]
}

(3) Start Docker and enable it at boot

[root@k8s-node1 ~]# systemctl start docker
[root@k8s-node1 ~]# systemctl enable docker
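
To confirm the daemon picked up daemon.json (an optional check):

[root@k8s-node1 ~]# docker info | grep -E -A 1 'Root Dir|Registry Mirrors'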

2. Download and unpack the pre-built binary package

[root@k8s-node1 tmp]# wget https://dl.k8s.io/v1.12.2/kubernetes-node-linux-amd64.tar.gz
[root@k8s-node1 tmp]# tar zxvf kubernetes-node-linux-amd64.tar.gz

3. Copy the executables to /usr/bin

[root@k8s-node1 tmp]# cd kubernetes/node/bin/
[root@k8s-node1 bin]# cp -p kubectl kubelet kube-proxy /usr/bin/

4. Configure the kubelet service
(1) Create a script that generates the kubelet configuration files
The kubelet.sh script is as follows:

#!/usr/bin/env bash
MASTER_ADDRESS=${1:-"192.168.2.212"}
NODE_ADDRESS=${2:-"192.168.2.213"}
KUBECONFIG_DIR=${KUBECONFIG_DIR:-/etc/kubernetes}
NODE_LOGDIR=${3:-"/data/kubelet/log"}
mkdir -p ${KUBECONFIG_DIR}
mkdir -p ${NODE_LOGDIR}

# Generate a kubeconfig file
cat <<EOF > "${KUBECONFIG_DIR}/kubelet.kubeconfig"
apiVersion: v1
kind: Config
clusters:
  - cluster:
      server: http://${MASTER_ADDRESS}:8080/
    name: local
contexts:
  - context:
      cluster: local
    name: local
current-context: local
EOF

cat <<EOF >/etc/kubernetes/kubelet
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=false"
KUBELET_LOGDIR="--log-dir=${NODE_LOGDIR}"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=2"

# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
NODE_HOSTNAME="--hostname-override=${NODE_ADDRESS}"

# Path to a kubeconfig file, specifying how to connect to the API server.
KUBELET_KUBECONFIG="--kubeconfig=${KUBECONFIG_DIR}/kubelet.kubeconfig"

# Add your own!
KUBELET_ARGS="--kubeconfig=${KUBECONFIG_DIR}/kubelet.kubeconfig --hostname-override=${NODE_ADDRESS} --logtostderr=false --log-dir=${NODE_LOGDIR} --v=2"
EOF

KUBELET_OPTS="   \${KUBE_LOGTOSTDERR}  \\
                 \${KUBELET_LOGDIR}    \\
                 \${KUBE_LOG_LEVEL}    \\
                 \${NODE_HOSTNAME}     \\
                 \${KUBELET_KUBECONFIG}"

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=-/etc/kubernetes/kubelet
ExecStart=/usr/bin/kubelet ${KUBELET_OPTS}
Restart=on-failure
KillMode=process
RestartSec=15s

[Install]
WantedBy=multi-user.target
EOF

(2) Run the script to generate the configuration file and unit file

[root@k8s-node1 ~]# sh kubelet.sh

5. Configure the kube-proxy service
(1) Create a script that generates the kube-proxy configuration files
The proxy.sh script is as follows:

#!/usr/bin/env bash
MASTER_ADDRESS=${1:-"192.168.2.212"}
NODE_ADDRESS=${2:-"192.168.2.213"}
mkdir -p /data/proxy/log

cat <<EOF >/etc/kubernetes/kube-proxy
# --logtostderr=true: log to standard error instead of files
KUBE_LOGTOSTDERR="--logtostderr=false"
PROXY_LOGDIR="--log-dir=/data/proxy/log"

# --v=0: log level for V logs
KUBE_LOG_LEVEL="--v=2"

# --hostname-override="": If non-empty, will use this string as identification instead of the actual hostname.
NODE_HOSTNAME="--hostname-override=${NODE_ADDRESS}"

# --master="": The address of the Kubernetes API server (overrides any value in kubeconfig)
KUBE_MASTER="--master=http://${MASTER_ADDRESS}:8080"
EOF

KUBE_PROXY_OPTS="   \${KUBE_LOGTOSTDERR} \\
                    \${PROXY_LOGDIR}     \\
                    \${KUBE_LOG_LEVEL}   \\
                    \${NODE_HOSTNAME}    \\
                    \${KUBE_MASTER}"

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy ${KUBE_PROXY_OPTS}
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

(2) Run the script to generate the configuration file and unit file

[root@k8s-node1 ~]# sh proxy.sh

6. Start the node services

[root@k8s-node1 ~]# swapoff -a
[root@k8s-node1 ~]# systemctl daemon-reload
[root@k8s-node1 ~]# systemctl enable kubelet.service
[root@k8s-node1 ~]# systemctl start kubelet.service
[root@k8s-node1 ~]# systemctl enable kube-proxy.service
[root@k8s-node1 ~]# systemctl start kube-proxy.service

The swapoff -a command disables the swap partition; kubelet will not start if swap is enabled.
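Note that swapoff -a only lasts until the next reboot; to make it permanent, you can also comment out the swap entry in /etc/fstab (my own addition, assuming a standard fstab layout):

[root@k8s-node1 ~]# sed -i '/ swap / s/^/#/' /etc/fstab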
At this point the Kubernetes (k8s) cluster is up. However, only etcd is accessed over HTTPS; all the other services still talk over plain HTTP. Next we switch the remaining services to HTTPS with CA certificate authentication as well.

IV. Securing the Kubernetes Cluster

In a trusted internal network, the Kubernetes components can reach the master through the apiserver's insecure port, http://apiserver:8080. But if the apiserver must serve external clients, or containers in the cluster need to query the apiserver for cluster information, the safer approach is to enable HTTPS. Kubernetes offers mutual certificate authentication based on CA-signed certificates, plus simpler HTTP Basic and token authentication; the CA certificate approach is the most secure. We now configure CA-signed certificate authentication.
1. Stop all services on the nodes

[root@k8s-node1 ~]# systemctl stop kube-proxy.service
[root@k8s-node1 ~]# systemctl stop kubelet.service

2. Stop all services on the master except etcd

[root@k8s-master ~]# systemctl stop kube-scheduler.service
[root@k8s-master ~]# systemctl stop kube-controller-manager.service
[root@k8s-master ~]# systemctl stop kube-apiserver.service

3. Generate certificates and keys for each component
(1) Copy the CA root certificate and key files into the directory that holds the Kubernetes certificates

[root@k8s-master ~]# cd /etc/etcd/ssl/
[root@k8s-master ssl]# cp ca.pem ca-config.json ca-key.pem etcd-csr.json /etc/kubernetes/ssl/
[root@k8s-master ssl]# cd /etc/kubernetes/ssl/

(2) Check which users the cluster roles reference

[root@k8s-master ssl]# kubectl get clusterrole

Note: the apiserver uses the admin user, controller-manager uses the system:kube-controller-manager user, scheduler uses the system:kube-scheduler user, and kubelet and kube-proxy use the system:node user. The user name corresponds to the CN (Common Name) field of the JSON file. The user names and components must match up exactly, otherwise the other components will not be able to connect to the apiserver.
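
For example, to spot the built-in cluster roles these users map to (an optional check):

[root@k8s-master ssl]# kubectl get clusterrole | grep -E 'kube-controller-manager|kube-scheduler|node'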
(3) Edit apiserver-csr.json

[root@k8s-master ssl]# mv etcd-csr.json apiserver-csr.json
[root@k8s-master ssl]# vim apiserver-csr.json

Note: the apiserver certificate and key are created from apiserver-csr.json, using the admin user.

{    "CN": "admin",    "hosts": [        "127.0.0.1",        "192.168.2.212"    ],    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "L": "ShangHai",            "ST": "ShangHai",            "O": "admin",            "OU": "system"        }    ]}

(4) Edit the k8s-csr.json file for kube-controller-manager
Note: the kube-controller-manager certificate and key are created from k8s-csr.json, using the system:kube-controller-manager user.

{    "CN": "system:kube-controller-manager",    "hosts": [        "127.0.0.1",        "192.168.2.212"    ],    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "L": "ShangHai",            "ST": "ShangHai",            "O": "controller",            "OU": "system"        }    ]}

(5) Edit scheduler-csr.json

{    "CN": "system:kube-scheduler",    "hosts": [        "127.0.0.1",        "192.168.2.212"    ],    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "L": "ShangHai",            "ST": "ShangHai",            "O": "kube-scheduler",            "OU": "system"        }    ]}

(6) Edit node-csr.json

{    "CN": "system:node",    "hosts": [        "127.0.0.1",        "192.168.2.212",        "192.168.2.213",        "192.168.2.214",        "192.168.2.215"    ],    "key": {        "algo": "rsa",        "size": 2048    },    "names": [        {            "C": "CN",            "L": "ShangHai",            "ST": "ShangHai",            "O": "node",            "OU": "system"        }    ]}

(7) Generate the certificate and key files for each component

[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=127.0.0.1,192.168.2.212 apiserver-csr.json | cfssljson -bare apiserver
[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=127.0.0.1,192.168.2.212 k8s-csr.json | cfssljson -bare k8s
[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=127.0.0.1,192.168.2.212 scheduler-csr.json | cfssljson -bare scheduler
[root@k8s-master ssl]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes -hostname=127.0.0.1,192.168.2.212,192.168.2.213,192.168.2.214,192.168.2.215 node-csr.json | cfssljson -bare node

4. Copy the certificate and key files to the three nodes
(1) On each node, create a directory for the certificate files

[root@k8s-node1 ~]# mkdir -p /etc/kubernetes/ssl

(2) Copy the files to the nodes

[root@k8s-master ssl]# scp ca.pem node.pem node-key.pem root@192.168.2.213:/etc/kubernetes/ssl/

Note: for the other two nodes, replace the IP in the command, for example with the loop sketched below.
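A minimal loop over the remaining node IPs (my own sketch):

[root@k8s-master ssl]# for ip in 192.168.2.214 192.168.2.215; do scp ca.pem node.pem node-key.pem root@${ip}:/etc/kubernetes/ssl/; done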
5. Configure the kube-apiserver service
(1) Edit the kube-apiserver configuration file, /etc/kubernetes/kube-apiserver

KUBE_LOGTOSTDERR="--logtostderr=false"APISERVER_LOGDIR="--log-dir=/data/apiserver/log"KUBE_LOG_LEVEL="--v=2"KUBE_ETCD_SERVERS="--etcd-servers=https://127.0.0.1:2379"KUBE_ETCD_CAFILE="--etcd-cafile=/etc/etcd/ssl/ca.pem"KUBE_ETCD_CERTFILE="--etcd-certfile=/etc/etcd/ssl/etcd.pem"KUBE_ETCD_KEYFILE="--etcd-keyfile=/etc/etcd/ssl/etcd-key.pem"KUBE_API_ADDRESS="--bind-address=0.0.0.0"KUBE_API_PORT="--secure-port=6443"KUBE_ADVERTISE_ADDR="--advertise-address=192.168.2.212"KUBE_ALLOW_PRIV="--allow-privileged=true"KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.10.10.0/24"KUBE_MODE_CONTROL="--authorization-mode=Node,RBAC"KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NodeRestriction"KUBE_API_CLIENT_CA_FILE="--client-ca-file=/etc/kubernetes/ssl/ca.pem"KUBE_API_TLS_CERT_FILE="--tls-cert-file=/etc/kubernetes/ssl/apiserver.pem"KUBE_API_TLS_PRIVATE_KEY_FILE="--tls-private-key-file=/etc/kubernetes/ssl/apiserver-key.pem"

(2) Edit the systemd unit file, /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-apiserver
ExecStart=/usr/bin/kube-apiserver    ${KUBE_LOGTOSTDERR}         \
                        ${APISERVER_LOGDIR}         \
                        ${KUBE_LOG_LEVEL}           \
                        ${KUBE_ETCD_SERVERS}        \
                        ${KUBE_ETCD_CAFILE}         \
                        ${KUBE_ETCD_CERTFILE}       \
                        ${KUBE_ETCD_KEYFILE}        \
                        ${KUBE_API_ADDRESS}         \
                        ${KUBE_API_PORT}            \
                        ${KUBE_ADVERTISE_ADDR}      \
                        ${KUBE_ALLOW_PRIV}          \
                        ${KUBE_SERVICE_ADDRESSES}   \
                        ${KUBE_MODE_CONTROL}        \
                        ${KUBE_ADMISSION_CONTROL}   \
                        ${KUBE_API_CLIENT_CA_FILE}  \
                        ${KUBE_API_TLS_CERT_FILE}   \
                        ${KUBE_API_TLS_PRIVATE_KEY_FILE}
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3) Start the kube-apiserver service

[root@k8s-master ssl]# systemctl daemon-reload
[root@k8s-master ssl]# systemctl start kube-apiserver.service
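
To confirm the secure port is now listening (an optional check):

[root@k8s-master ssl]# ss -tlnp | grep 6443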

6. Configure the kube-controller-manager service
(1) Create the kubeconfig file, /etc/kubernetes/kubeconfig

[root@k8s-master kubernetes]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --kubeconfig=kubeconfig
[root@k8s-master kubernetes]# kubectl config set-credentials system:kube-controller-manager --client-certificate=/etc/kubernetes/ssl/k8s.pem --client-key=/etc/kubernetes/ssl/k8s-key.pem --embed-certs=true --kubeconfig=kubeconfig
[root@k8s-master kubernetes]# kubectl config set-context system:kube-controller-manager --cluster=kubernetes --user=system:kube-controller-manager --kubeconfig=kubeconfig
[root@k8s-master kubernetes]# kubectl config use-context system:kube-controller-manager --kubeconfig=kubeconfig

(2) Edit the kube-controller-manager configuration file, /etc/kubernetes/kube-controller-manager

KUBE_LOGTOSTDERR="--logtostderr=false"CON_LOGDIR="--log-dir=/data/controller-manager/log"KUBE_LOG_LEVEL="--v=2"KUBE_MASTER="--master=https://192.168.2.212:6443"KUBE_CONTROLLER_MANAGER_ROOT_CA_FILE="--root-ca-file=/etc/kubernetes/ssl/ca.pem"KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE="--service-account-private-key-file=/etc/kubernetes/ssl/k8s-key.pem"KUBE_CONFIG_FILE="--kubeconfig=/etc/kubernetes/kubeconfig"KUBE_LEADER_ELECT="--leader-elect"

(3) Edit the systemd unit file, /usr/lib/systemd/system/kube-controller-manager.service

[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-controller-manager
ExecStart=/usr/bin/kube-controller-manager   ${KUBE_LOGTOSTDERR} \
                                ${CON_LOGDIR}       \
                                ${KUBE_LOG_LEVEL}   \
                                ${KUBE_MASTER}      \
                                ${KUBE_CONTROLLER_MANAGER_ROOT_CA_FILE} \
                                ${KUBE_CONTROLLER_MANAGER_SERVICE_ACCOUNT_PRIVATE_KEY_FILE} \
                                ${KUBE_CONFIG_FILE}  \
                                ${KUBE_LEADER_ELECT}
Restart=on-failure

[Install]
WantedBy=multi-user.target

(4) Start the kube-controller-manager service

[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-controller-manager.service

(5) Create role bindings
After kube-controller-manager starts, its log reports an RBAC authorization error: node information is maintained by users under system:controller, and the system:kube-controller-manager user does not have that permission. Bind system:kube-controller-manager to the system:controller:node-controller cluster role to fix it:

[root@k8s-master ssl]# kubectl create clusterrolebinding controller-node-clusterrolebing --clusterrole=system:controller:node-controller --user=system:kube-controller-manager

Check the binding:

[root@k8s-master ssl]# kubectl describe clusterrolebinding controller-node-clusterrolebing


The earlier error no longer appears in the log, but a cluster-related error remains. Bind system:kube-controller-manager to the cluster-admin role as well:

[root@k8s-master ssl]# kubectl create clusterrolebinding controller-cluster-clusterrolebing --clusterrole=cluster-admin --user=system:kube-controller-manager

7. Configure the kube-scheduler service
(1) Create the kubeconfig file, /etc/kubernetes/scheconfig

[root@k8s-master kubernetes]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --kubeconfig=scheconfig
[root@k8s-master kubernetes]# kubectl config set-credentials system:kube-scheduler --client-certificate=/etc/kubernetes/ssl/scheduler.pem --client-key=/etc/kubernetes/ssl/scheduler-key.pem --embed-certs=true --kubeconfig=scheconfig
[root@k8s-master kubernetes]# kubectl config set-context system:kube-scheduler --cluster=kubernetes --user=system:kube-scheduler --kubeconfig=scheconfig
[root@k8s-master kubernetes]# kubectl config use-context system:kube-scheduler --kubeconfig=scheconfig

(2) Edit the kube-scheduler configuration file, /etc/kubernetes/kube-scheduler

KUBE_LOGTOSTDERR="--logtostderr=false"KUBE_LOGDIR="--log-dir=/data/scheduler/log"KUBE_LOG_LEVEL="--v=2"KUBE_MASTER="--master=https://192.168.2.212:6443"SCHEDULER_CONFIG_FILE="--kubeconfig=/etc/kubernetes/scheconfig"KUBE_LEADER_ELECT="--leader-elect"KUBE_SCHEDULER_ARGS=""

(3) Edit the systemd unit file, /usr/lib/systemd/system/kube-scheduler.service

[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
EnvironmentFile=-/etc/kubernetes/kube-scheduler
ExecStart=/usr/bin/kube-scheduler    ${KUBE_LOGTOSTDERR}     \
                        ${KUBE_LOGDIR}          \
                        ${KUBE_LOG_LEVEL}       \
                        ${KUBE_MASTER}          \
                        ${SCHEDULER_CONFIG_FILE} \
                        ${KUBE_LEADER_ELECT}    \
                        $KUBE_SCHEDULER_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target

(4) Start the kube-scheduler service

[root@k8s-master kubernetes]# systemctl daemon-reload
[root@k8s-master kubernetes]# systemctl start kube-scheduler

8. Configure the kubelet service
(1) Create the shared kubeconfig file for kubelet and kube-proxy, /etc/kubernetes/nodeconfig

[root@k8s-node1 kubernetes]# kubectl config set-cluster kubernetes --certificate-authority=/etc/kubernetes/ssl/ca.pem --embed-certs=true --server=https://192.168.2.212:6443 --kubeconfig=nodeconfig
[root@k8s-node1 kubernetes]# kubectl config set-credentials system:node --client-certificate=/etc/kubernetes/ssl/node.pem --client-key=/etc/kubernetes/ssl/node-key.pem --embed-certs=true --kubeconfig=nodeconfig
[root@k8s-node1 kubernetes]# kubectl config set-context system:node --cluster=kubernetes --user=system:node --kubeconfig=nodeconfig
[root@k8s-node1 kubernetes]# kubectl config use-context system:node --kubeconfig=nodeconfig

(2) Edit the kubelet configuration file, /etc/kubernetes/kubelet

KUBE_LOGTOSTDERR="--logtostderr=false"NODE_LOGDIR="--log-dir=/data/kubelet/log"KUBE_LOG_LEVEL="--v=2"NODE_HOSTNAME="--hostname-override=192.168.2.213"KUBELET_KUBECONFIG="--kubeconfig=/etc/kubernetes/nodeconfig"

(3) Start the kubelet service

[root@k8s-node1 kubernetes]# systemctl start kubelet.service

(4) Create role bindings
After kubelet starts, its log reports similar RBAC authorization errors. Bind the system:node user to the same two cluster roles:

[root@k8s-master ssl]# kubectl create clusterrolebinding node-node-clusterrolebing --clusterrole=system:controller:node-controller --user=system:node
[root@k8s-master ssl]# kubectl create clusterrolebinding node-cluster-clusterrolebing --clusterrole=cluster-admin --user=system:node

9. Configure the kube-proxy service
(1) Edit the kube-proxy configuration file, /etc/kubernetes/kube-proxy

KUBE_LOGTOSTDERR="--logtostderr=false"PROXY_LOGDIR="--log-dir=/data/proxy/log"KUBE_LOG_LEVEL="--v=2"NODE_HOSTNAME="--hostname-override=192.168.2.213"KUBE_MASTER="--master=https://192.168.2.212:6443"PROXY_KUBECONFIG="--kubeconfig=/etc/kubernetes/nodeconfig"

(2) Edit the systemd unit file, /usr/lib/systemd/system/kube-proxy.service

[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/etc/kubernetes/kube-proxy
ExecStart=/usr/bin/kube-proxy    ${KUBE_LOGTOSTDERR} \
                    ${PROXY_LOGDIR}     \
                    ${KUBE_LOG_LEVEL}   \
                    ${NODE_HOSTNAME}    \
                    ${KUBE_MASTER}      \
                    ${PROXY_KUBECONFIG}
Restart=on-failure

[Install]
WantedBy=multi-user.target

(3) Start the kube-proxy service

[root@k8s-node1 kubernetes]# systemctl daemon-reload
[root@k8s-node1 kubernetes]# systemctl start kube-proxy.service

If kube-proxy reports iptables errors on startup, flush the NAT table rules first and restart the service:

[root@k8s-node1 kubernetes]# iptables -F -t nat
[root@k8s-node1 kubernetes]# iptables -X -t nat
[root@k8s-node1 kubernetes]# iptables -Z -t nat
[root@k8s-node1 kubernetes]# systemctl restart kube-proxy.service

(4) Check the node status

[root@k8s-master kubernetes]# kubectl get nodes


With that, the Kubernetes cluster deployment is complete and all components talk to each other over HTTPS. As the configurations above show, every component connects to the apiserver, and all data reaches etcd through the apiserver; the apiserver is the central component of the entire cluster.

V. Testing

We create an nginx Deployment to test the cluster.
1. Pull the pause image on the nodes
Because k8s.gcr.io/pause:3.1 is unreachable from mainland China, pull a pause image from Docker Hub and re-tag it:

[root@k8s-node1 log]# docker pull docker.io/kubernetes/pause
[root@k8s-node1 log]# docker tag kubernetes/pause:latest k8s.gcr.io/pause:3.1

2. Create the nginx Deployment definition file
The nginx.yaml file is as follows:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: myweb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myweb
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
      - name: myweb
        image: nginx
        ports:
        - containerPort: 80

3. Create the Deployment, ReplicaSet, Pod, and container

[root@k8s-master ~]# kubectl create -f nginx.yaml

4. Check the Deployment

[root@k8s-master ~]# kubectl get deployment


5. Check the ReplicaSet (RS)

[root@k8s-master ~]# kubectl get rs


6. Check the Pod

[root@k8s-master ~]# kubectl get pod


7. Check the container (on the node)

[root@k8s-node1 log]# docker ps -a


8. Create the nginx Service definition file
The myweb-svc.yaml file is as follows:

apiVersion: v1
kind: Service
metadata:
  name: myweb
spec:
  type: NodePort
  ports:
    - port: 80
      nodePort: 30001
  selector:
    app: myweb

9. Create the Service

[root@k8s-master ~]# kubectl create -f myweb-svc.yaml

10. Check the Service

[root@k8s-master ~]# kubectl get svc


11. Access from a browser
Use port 30001 on a node, e.g. http://192.168.2.213:30001
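
Or test from the command line (an equivalent check):

[root@k8s-master ~]# curl http://192.168.2.213:30001

The default nginx welcome page HTML should come back.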
