
Kubernetes (k8s) Deployment


Cluster Plan

centos-test-ip-207-master    192.168.11.207
centos-test-ip-208           192.168.11.208
centos-test-ip-209           192.168.11.209

kubernetes 1.10.7
flannel flannel-v0.10.0-linux-amd64.tar
ETCD etcd-v3.3.8-linux-amd64.tar
CNI cni-plugins-amd64-v0.7.1
docker 18.03.1-ce

Package Downloads

etcd:https://github.com/coreos/etcd/releases/
flannel:https://github.com/coreos/flannel/releases/
cni:https://github.com/containernetworking/plugins/releases
kubernetes:https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.10.md#v1107

Note: the kubernetes 1.10 packages are also available here: https://pan.baidu.com/s/1_7EfOMlRkQSybEH_p6NtTw extraction code: 345b

Set up mutual host resolution, disable the firewall, disable swap, and sync server time (all three nodes)

Host resolution

vim /etc/hosts
192.168.11.207 centos-test-ip-207-master
192.168.11.208 centos-test-ip-208
192.168.11.209 centos-test-ip-209

Firewall

systemctl stop firewalld
setenforce 0

Disable swap

swapoff -a
vim /etc/fstab    # comment out the swap entry
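To script the fstab edit instead of opening vim, a minimal sketch (it assumes the swap entry is an uncommented line containing the word swap):

sed -ri 's/^([^#].*\bswap\b.*)/#\1/' /etc/fstab    # comment out the swap line
grep swap /etc/fstab                               # verify it is now commented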

Sync the server time zone # time sync can be skipped

tzselect

Distribute the SSH public key # master to nodes

ssh-keygen
ssh-copy-id
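For example, assuming root logins on the two worker nodes:

ssh-keygen -t rsa                  # accept the defaults
ssh-copy-id root@192.168.11.208
ssh-copy-id root@192.168.11.209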

Install Docker (all three nodes)

Remove any existing version

yum remove docker docker-common docker-selinux docker-engine

Install the drivers Docker depends on

yum install -y yum-utils device-mapper-persistent-data lvm2

Add the yum repo # the official repo is slow to pull from, so use the Aliyun mirror

yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum makecache fast

List the available Docker versions

yum list docker-ce --showduplicates | sort -r

Install 18.03.1.ce

yum -y install docker-ce-18.03.1.ce

Start Docker

systemctl start docker
systemctl enable docker

Install the etcd cluster

Run on all three nodes

tar xvf etcd-v3.3.8-linux-amd64.tar.gz
cd etcd-v3.3.8-linux-amd64
cp etcd etcdctl /usr/bin
mkdir -p /var/lib/etcd /etc/etcd    # create the working directories

etcd configuration files

The files to edit are /usr/lib/systemd/system/etcd.service and /etc/etcd/etcd.conf.
Note that etcd's leader/follower roles are not the same as the Kubernetes master/node roles:
the etcd cluster elects its own leader at startup and re-elects during operation,
so the three members are simply named etcd-i, etcd-ii, and etcd-iii to reflect that peer relationship.

207-master
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
cat /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-i
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address for listening to the other etcd members
ETCD_LISTEN_PEER_URLS="http://192.168.11.207:2380"
# addresses for listening to clients
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.207:2379,http://127.0.0.1:2379"
# [cluster]
# peer address advertised to the other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.207:2380"
# initial list of cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state; "new" means a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.207:2379,http://127.0.0.1:2379"
208
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
cat /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-ii
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address for listening to the other etcd members
ETCD_LISTEN_PEER_URLS="http://192.168.11.208:2380"
# addresses for listening to clients
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.208:2379,http://127.0.0.1:2379"
# [cluster]
# peer address advertised to the other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.208:2380"
# initial list of cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state; "new" means a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.208:2379,http://127.0.0.1:2379"
209
cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=notify
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target
cat /etc/etcd/etcd.conf
# [member]
# member name
ETCD_NAME=etcd-iii
# data directory
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# address for listening to the other etcd members
ETCD_LISTEN_PEER_URLS="http://192.168.11.209:2380"
# addresses for listening to clients
ETCD_LISTEN_CLIENT_URLS="http://192.168.11.209:2379,http://127.0.0.1:2379"
# [cluster]
# peer address advertised to the other members
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.11.209:2380"
# initial list of cluster members
ETCD_INITIAL_CLUSTER="etcd-i=http://192.168.11.207:2380,etcd-ii=http://192.168.11.208:2380,etcd-iii=http://192.168.11.209:2380"
# initial cluster state; "new" means a new cluster
ETCD_INITIAL_CLUSTER_STATE="new"
# initial cluster token
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-token"
# client addresses advertised to clients
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.11.209:2379,http://127.0.0.1:2379"

Start the etcd cluster

Run on the master first, then the other nodes

systemctl daemon-reload    # reload the unit files
systemctl start etcd.service
systemctl enable etcd.service

Check the cluster

[root@centos-test-ip-207-master ~]# etcdctl member list
e8bd2d4d9a7cba8: name=etcd-ii peerURLs=http://192.168.11.208:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.208:2379 isLeader=true
50a675761b915629: name=etcd-i peerURLs=http://192.168.11.207:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.207:2379 isLeader=false
9a891df60a11686b: name=etcd-iii peerURLs=http://192.168.11.209:2380 clientURLs=http://127.0.0.1:2379,http://192.168.11.209:2379 isLeader=false
[root@centos-test-ip-207-master ~]# etcdctl cluster-health
member e8bd2d4d9a7cba8 is healthy: got healthy result from http://127.0.0.1:2379
member 50a675761b915629 is healthy: got healthy result from http://127.0.0.1:2379
member 9a891df60a11686b is healthy: got healthy result from http://127.0.0.1:2379
cluster is healthy
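As an extra sanity check, a quick write/read through the v2 API (the default for the bundled etcdctl; /test is just a scratch key):

etcdctl set /test ok    # write a scratch key
etcdctl get /test       # should print: ok
etcdctl rm /test        # clean up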

Install flannel

Run on all three nodes

mkdir -p /opt/flannel/bin/
tar xvf flannel-v0.10.0-linux-amd64.tar.gz -C /opt/flannel/bin/
cat /usr/lib/systemd/system/flannel.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network.target
After=network-online.target
Wants=network-online.target
After=etcd.service
Before=docker.service

[Service]
Type=notify
ExecStart=/opt/flannel/bin/flanneld -etcd-endpoints=http://192.168.11.207:2379,http://192.168.11.208:2379,http://192.168.11.209:2379 -etcd-prefix=coreos.com/network
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
Restart=on-failure

[Install]
WantedBy=multi-user.target
RequiredBy=docker.service

Configure the flannel network in etcd (subnet allocation; the ranges can be changed) # master only

[root@centos-test-ip-207-master ~]# etcdctl mk /coreos.com/network/config '{"Network":"172.18.0.0/16", "SubnetMin": "172.18.1.0", "SubnetMax": "172.18.254.0", "Backend": {"Type": "vxlan"}}'
To change the subnet later, delete the key with etcdctl rm /coreos.com/network/config, then run the configuration command again.
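To confirm the key was written, and later to see the per-node subnet leases once flanneld is running, a quick look:

etcdctl get /coreos.com/network/config     # should echo the JSON above
etcdctl ls /coreos.com/network/subnets     # one entry per node after flanneld starts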

Download the flannel image

Run on all three nodes.
The flannel service depends on the flannel image, so pull it first. The commands below pull from the Aliyun mirror and retag the image:

docker pull registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64
docker tag registry.cn-beijing.aliyuncs.com/k8s_images/flannel:v0.10.0-amd64 quay.io/coreos/flannel:v0.10.0

Note:
Configure Docker.
The flannel unit contains this line:
ExecStartPost=/opt/flannel/bin/mk-docker-opts.sh -d /etc/docker/flannel_net.env -c
After flanneld starts, mk-docker-opts.sh runs and generates /etc/docker/flannel_net.env.
flannel changes the Docker network: flannel_net.env holds the Docker options flannel generated, so the Docker unit must be updated to use them.
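For reference, a sketch of what the generated file typically contains; the subnet differs per node and these values are illustrative, not output captured from this cluster:

cat /etc/docker/flannel_net.env
DOCKER_OPTS=" --bip=172.18.101.1/24 --ip-masq=true --mtu=1450"    # illustrative values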

cat /usr/lib/systemd/system/docker.service
[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
After=network-online.target firewalld.service
Wants=network-online.target

[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/etc/docker/flannel_net.env    # added
ExecStart=/usr/bin/dockerd $DOCKER_OPTS        # $DOCKER_OPTS added, populated from flannel_net.env
ExecReload=/bin/kill -s HUP $MAINPID
ExecStartPost=/usr/sbin/iptables -P FORWARD ACCEPT    # added
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
#TasksMax=infinity
TimeoutStartSec=0
# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes
# kill only the docker process, not all processes in the cgroup
KillMode=process
# restart the docker process if it exits prematurely
Restart=on-failure
StartLimitBurst=3
StartLimitInterval=60s

[Install]
WantedBy=multi-user.target

Note:
After # start Docker only after flannel
EnvironmentFile # Docker startup options generated by flannel
ExecStart # pass the generated options to dockerd
ExecStartPost # runs after Docker starts and sets the host's iptables FORWARD policy to ACCEPT

Start flannel

Run on all three nodes

systemctl daemon-reload
systemctl start flannel.service
systemctl enable flannel.service
systemctl restart docker.service
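A quick way to confirm that flannel and Docker landed on the same overlay (interface names assume the vxlan backend configured above; the addresses vary per node):

ip -4 addr show flannel.1    # the node's flannel address, e.g. 172.18.x.0/32
ip -4 addr show docker0      # should fall inside the same 172.18.x.0/24 subnet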

Install CNI

Run on all three nodes

mkdir -p /opt/cni/bin /etc/cni/net.d
tar xvf cni-plugins-amd64-v0.7.1.tgz -C /opt/cni/bin
cat /etc/cni/net.d/10-flannel.conflist
{
  "name": "cni0",
  "cniVersion": "0.3.1",
  "plugins": [
    {
      "type": "flannel",
      "delegate": {
        "forceAddress": true,
        "isDefaultGateway": true
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    }
  ]
}

Install the Kubernetes cluster

CA certificates


Run on all three nodes

mkdir -p /etc/kubernetes/ca
207
cd /etc/kubernetes/ca/

Generate the CA certificate and private key

[root@centos-test-ip-207-master ca]# openssl genrsa -out ca.key 2048
[root@centos-test-ip-207-master ca]# openssl req -x509 -new -nodes -key ca.key -subj "/CN=k8s" -days 5000 -out ca.crt

Generate the kube-apiserver certificate and private key

[root@centos-test-ip-207-master ca]# cat master_ssl.conf
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name

[req_distinguished_name]

[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names

[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s
IP.1 = 172.18.0.1
IP.2 = 192.168.11.207
[root@centos-test-ip-207-master ca]# openssl genrsa -out apiserver-key.pem 2048
[root@centos-test-ip-207-master ca]# openssl req -new -key apiserver-key.pem -out apiserver.csr -subj "/CN=k8s" -config master_ssl.conf
[root@centos-test-ip-207-master ca]# openssl x509 -req -in apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out apiserver.pem -days 365 -extensions v3_req -extfile master_ssl.conf
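Optionally confirm that the signed certificate carries the SANs from master_ssl.conf:

openssl x509 -in apiserver.pem -noout -text | grep -A1 'Subject Alternative Name'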

Generate the kube-controller-manager / kube-scheduler certificate and private key

[root@centos-test-ip-207-master ca]# openssl genrsa -out cs_client.key 2048
[root@centos-test-ip-207-master ca]# openssl req -new -key cs_client.key -subj "/CN=k8s" -out cs_client.csr
[root@centos-test-ip-207-master ca]# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000

Copy the CA certificate and key to 208 and 209

[root@centos-test-ip-207-master ca]# scp ca.crt ca.key centos-test-ip-208:/etc/kubernetes/ca/
[root@centos-test-ip-207-master ca]# scp ca.crt ca.key centos-test-ip-209:/etc/kubernetes/ca/
Certificate setup on 208

/CN must be the node's own IP
cd /etc/kubernetes/ca/

[root@centos-test-ip-208 ca]# openssl genrsa -out kubelet_client.key 2048
[root@centos-test-ip-208 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.11.208" -out kubelet_client.csr
[root@centos-test-ip-208 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000
Certificate setup on 209

/CN must be the node's own IP
cd /etc/kubernetes/ca/

[root@centos-test-ip-209 ca]# openssl genrsa -out kubelet_client.key 2048
[root@centos-test-ip-209 ca]# openssl req -new -key kubelet_client.key -subj "/CN=192.168.11.209" -out kubelet_client.csr
[root@centos-test-ip-209 ca]# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000

Install k8s

207
[root@centos-test-ip-207-master ~]# tar xvf kubernetes-server-linux-amd64.tar.gz -C /opt
[root@centos-test-ip-207-master ~]# cd /opt/kubernetes/server/bin
[root@centos-test-ip-207-master bin]# cp -a `ls | egrep -v "\.tar|_tag"` /usr/bin
[root@centos-test-ip-207-master bin]# mkdir -p /var/log/kubernetes

Configure kube-apiserver

[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=etcd.service
Wants=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver.conf
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure apiserver.conf

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/apiserver.conf
KUBE_API_ARGS="\
    --storage-backend=etcd3 \
    --etcd-servers=http://192.168.11.207:2379,http://192.168.11.208:2379,http://192.168.11.209:2379 \
    --bind-address=0.0.0.0 \
    --secure-port=6443 \
    --service-cluster-ip-range=172.18.0.0/16 \
    --service-node-port-range=1-65535 \
    --kubelet-port=10250 \
    --advertise-address=192.168.11.207 \
    --allow-privileged=false \
    --anonymous-auth=false \
    --client-ca-file=/etc/kubernetes/ca/ca.crt \
    --tls-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
    --tls-cert-file=/etc/kubernetes/ca/apiserver.pem \
    --enable-admission-plugins=NamespaceLifecycle,LimitRanger,NamespaceExists,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Note:
--etcd-servers # connect to the etcd cluster
--secure-port # enable the secure port 6443
--client-ca-file, --tls-private-key-file, --tls-cert-file # configure the CA certificates
--enable-admission-plugins # enable the admission-control plugins
--anonymous-auth=false # reject anonymous requests (true would accept them); set to false here with dashboard access in mind

Configure kube-controller-manager

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/kube-controller-config.yaml
apiVersion: v1
kind: Config
users:
- name: controller
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: controller
  name: default-context
current-context: default-context

Configure kube-controller-manager.service

[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager.conf
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

Configure controller-manager.conf

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/controller-manager.conf
KUBE_CONTROLLER_MANAGER_ARGS="\
    --master=https://192.168.11.207:6443 \
    --service-account-private-key-file=/etc/kubernetes/ca/apiserver-key.pem \
    --root-ca-file=/etc/kubernetes/ca/ca.crt \
    --cluster-signing-cert-file=/etc/kubernetes/ca/ca.crt \
    --cluster-signing-key-file=/etc/kubernetes/ca/ca.key \
    --kubeconfig=/etc/kubernetes/kube-controller-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Note:
--master # connect to the master node
--service-account-private-key-file, --root-ca-file, --cluster-signing-cert-file, --cluster-signing-key-file # configure the CA certificates
--kubeconfig # the kubeconfig file

Configure kube-scheduler

[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/kube-scheduler-config.yaml
apiVersion: v1
kind: Config
users:
- name: scheduler
  user:
    client-certificate: /etc/kubernetes/ca/cs_client.crt
    client-key: /etc/kubernetes/ca/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
contexts:
- context:
    cluster: local
    user: scheduler
  name: default-context
current-context: default-context
[root@centos-test-ip-207-master bin]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
User=root
EnvironmentFile=/etc/kubernetes/scheduler.conf
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-207-master bin]# cat /etc/kubernetes/scheduler.conf
KUBE_SCHEDULER_ARGS="\
    --master=https://192.168.11.207:6443 \
    --kubeconfig=/etc/kubernetes/kube-scheduler-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Start the master

systemctl daemon-reload
systemctl start kube-apiserver.service
systemctl enable kube-apiserver.service
systemctl start kube-controller-manager.service
systemctl enable kube-controller-manager.service
systemctl start kube-scheduler.service
systemctl enable kube-scheduler.service

View the logs

journalctl -xeu kube-apiserver --no-pager
journalctl -xeu kube-controller-manager --no-pager
journalctl -xeu kube-scheduler --no-pager
# add -f to follow in real time
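Once all three units are active, a quick health check from the master; this assumes kubectl's default connection to the local insecure port, which 1.10 still enables out of the box:

[root@centos-test-ip-207-master ~]# kubectl get componentstatuses
# scheduler, controller-manager and the three etcd members should all report Healthy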

Deploy Kubernetes on the worker nodes

Run on all worker nodes

tar -zxvf kubernetes-server-linux-amd64.tar.gz -C /opt
cd /opt/kubernetes/server/bin
cp -a kubectl kubelet kube-proxy /usr/bin/
mkdir -p /var/log/kubernetes
cat /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
# kernel parameters so iptables filter rules apply to bridged traffic; skip if not needed
sysctl -p /etc/sysctl.d/k8s.conf    # apply the settings
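If sysctl complains that those keys do not exist, the bridge netfilter module is probably not loaded yet; load it first and re-apply:

modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf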

Configure kubelet on 208

[root@centos-test-ip-208 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-208 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-208 ~]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
    --kubeconfig=/etc/kubernetes/kubelet-config.yaml \
    --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
    --hostname-override=192.168.11.208 \
    --network-plugin=cni \
    --cni-conf-dir=/etc/cni/net.d \
    --cni-bin-dir=/opt/cni/bin \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Note:
--hostname-override # sets the node name; using the node's IP is recommended
#--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0 \
--pod-infra-container-image # the pod infrastructure (pause) image; the default is Google-hosted, so use a domestic mirror or a proxy,
or pull the image locally and retag it:
docker pull registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0
docker tag registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0 gcr.io/google_containers/pause-amd64:3.0
--kubeconfig # the kubeconfig file

Configure kube-proxy

[root@centos-test-ip-208 ~]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-208 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-208 ~]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
    --master=https://192.168.11.207:6443 \
    --hostname-override=192.168.11.208 \
    --kubeconfig=/etc/kubernetes/proxy-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Configure kubelet on 209

[root@centos-test-ip-209 ~]# cat /etc/kubernetes/kubelet-config.yaml
apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: kubelet
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-209 ~]# cat /usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubelet Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/kubernetes/kubelet.conf
ExecStart=/usr/bin/kubelet $KUBELET_ARGS
Restart=on-failure

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-209 ~]# cat /etc/kubernetes/kubelet.conf
KUBELET_ARGS="\
    --kubeconfig=/etc/kubernetes/kubelet-config.yaml \
    --pod-infra-container-image=registry.aliyuncs.com/archon/pause-amd64:3.0 \
    --hostname-override=192.168.11.209 \
    --network-plugin=cni \
    --cni-conf-dir=/etc/cni/net.d \
    --cni-bin-dir=/opt/cni/bin \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Note: the flag explanations given for the 208 kubelet configuration apply here unchanged.

Configure kube-proxy

[root@centos-test-ip-209 ~]# cat /etc/kubernetes/proxy-config.yaml
apiVersion: v1
kind: Config
users:
- name: proxy
  user:
    client-certificate: /etc/kubernetes/ca/kubelet_client.crt
    client-key: /etc/kubernetes/ca/kubelet_client.key
clusters:
- cluster:
    certificate-authority: /etc/kubernetes/ca/ca.crt
    server: https://192.168.11.207:6443
  name: local
contexts:
- context:
    cluster: local
    user: proxy
  name: default-context
current-context: default-context
preferences: {}
[root@centos-test-ip-209 ~]# cat /usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kube-Proxy Server
Documentation=https://github.com/GoogleCloudPlatform/kubernetes
After=network.target
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy.conf
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
[root@centos-test-ip-209 ~]# cat /etc/kubernetes/proxy.conf
KUBE_PROXY_ARGS="\
    --master=https://192.168.11.207:6443 \
    --hostname-override=192.168.11.209 \
    --kubeconfig=/etc/kubernetes/proxy-config.yaml \
    --logtostderr=true \
    --log-dir=/var/log/kubernetes \
    --v=2"

Note:
--hostname-override # the node name; it must match the kubelet setting (if kubelet sets it, kube-proxy must set it too)
--master # connect to the master
--kubeconfig # the kubeconfig file

Start the node services and check the logs # note: swap must be disabled

Run on all worker nodes

systemctl daemon-reload
systemctl start kubelet.service
systemctl enable kubelet.service
systemctl start kube-proxy.service
systemctl enable kube-proxy.service
journalctl -xeu kubelet --no-pager
journalctl -xeu kube-proxy --no-pager
# add -f to follow in real time

Check the nodes from the master

[root@centos-test-ip-207-master ~]# kubectl get nodes
NAME             STATUS    ROLES     AGE       VERSION
192.168.11.208   Ready     <none>    1d        v1.10.7
192.168.11.209   Ready     <none>    1d        v1.10.7

Cluster test

Create the nginx test manifests (master)

[root@centos-test-ip-207-master bin]# cat nginx-rc.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: nginx-rc
  labels:
    name: nginx-rc
spec:
  replicas: 2
  selector:
    name: nginx-pod
  template:
    metadata:
      labels:
        name: nginx-pod
    spec:
      containers:
      - name: nginx
        image: nginx
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
[root@centos-test-ip-207-master bin]# cat nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
  labels:
    name: nginx-service
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
    nodePort: 30081
  selector:
    name: nginx-pod

Start

master(207)

kubectl create -f nginx-rc.yaml
kubectl create -f nginx-svc.yaml

# check that the pods were created

[root@centos-test-ip-207-master bin]# kubectl get pod -o wide
NAME             READY     STATUS    RESTARTS   AGE       IP             NODE
nginx-rc-d9kkc   1/1       Running   0          1d        172.18.30.2    192.168.11.209
nginx-rc-l9ctn   1/1       Running   0          1d        172.18.101.2   192.168.11.208

Note: if http://<node-IP>:30081/ shows the nginx welcome page, the setup works.
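The same check from the command line; either node IP works because kube-proxy exposes the NodePort on every node:

curl -I http://192.168.11.208:30081/    # expect HTTP/1.1 200 OK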

Delete the nginx service and deployment # if the manifests have problems, the commands below remove everything so you can start over

kubectl delete -f nginx-svc.yaml
kubectl delete -f nginx-rc.yaml

Dashboard UI download and deployment (master)

Run on the master (207)

Download the dashboard yaml
wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
Edit kubernetes-dashboard.yaml.
The image line must be changed because the default registry is blocked:
#image: k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0
image: mirrorgooglecontainers/kubernetes-dashboard-amd64:v1.8.3
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort        # added: type NodePort
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000   # added: nodePort 30000
  selector:
    k8s-app: kubernetes-dashboard
Create the RBAC yaml, dashboard-admin.yaml:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: kubernetes-dashboard
  labels:
    k8s-app: kubernetes-dashboard
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

Create and check

kubectl create -f kubernetes-dashboard.yaml
kubectl create -f dashboard-admin.yaml
[root@centos-test-ip-207-master ~]# kubectl get pods --all-namespaces -o wide
NAMESPACE     NAME                                    READY     STATUS    RESTARTS   AGE       IP             NODE
default       nginx-rc-d9kkc                          1/1       Running   0          1d        172.18.30.2    192.168.11.209
default       nginx-rc-l9ctn                          1/1       Running   0          1d        172.18.101.2   192.168.11.208
kube-system   kubernetes-dashboard-66c9d98865-qgbgq   1/1       Running   0          20h       172.18.30.9    192.168.11.209
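The NodePort mapping can be confirmed as well (the CLUSTER-IP in the output will differ per cluster):

[root@centos-test-ip-207-master ~]# kubectl -n kube-system get svc kubernetes-dashboard
# the PORT(S) column should show 443:30000/TCP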

Access # use Firefox; in Chrome the certificate exception page does not appear

Note: access over HTTPS.
Browse directly to https://<node-IP>:<the configured nodePort>



The page prompts for a login; we use token login.
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
[root@centos-test-ip-207-master ~]# kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') | grep token
Name:       default-token-t8hbl
Type:       kubernetes.io/service-account-token
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9..(long string)
# copy this token string into the dashboard login page