Building a K8S Cluster: Kubernetes 1.11.3

1.1 Lab architecture:

Kubernetes roles:

node1: master 10.192.44.129

node2: worker 10.192.44.127

node3: worker 10.192.44.126

etcd cluster (all three nodes are members):

node1: 10.192.44.129

node2: 10.192.44.127

node3: 10.192.44.126

Harbor registry server:

redhat128.example.com

10.192.44.128

2. Installation

2.1 Configure system parameters (every node):

2.1.1 Disable SELinux (the sed makes it permanent after a reboot; setenforce 0 takes effect immediately)

sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux

setenforce 0

2.1.2 Turn off swap for this boot; to disable it permanently, comment out the swap line in /etc/fstab

swapoff -a

2.1.3 Enable forwarding

iptables -P FORWARD ACCEPT

2.1.4 Configure bridge and forwarding sysctl parameters; skipping this can cause errors later

cat << EOF > /etc/sysctl.d/k8s.conf

net.bridge.bridge-nf-call-ip6tables = 1

net.bridge.bridge-nf-call-iptables = 1

vm.swappiness=0

EOF

sysctl --system
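To confirm the parameters took effect, read them back (the values should match what was written above; on some systems you may first need to run modprobe br_netfilter for the bridge keys to exist):

sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables vm.swappiness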

2.1.5 Configure /etc/hosts


10.192.44.126 node3

10.192.44.127 node2

10.192.44.128 redhat128

10.192.44.129 node1


2.1.6 Install Docker; see my earlier blog post.

2.1.7 Time synchronization

yum install ntpdate -y && ntpdate 0.asia.pool.ntp.org

3. Create TLS certificates and keys (master node)

3.1 The generated certificate files will be:

ca-key.pem # CA private key
ca.pem # CA certificate
kubernetes-key.pem # cluster private key
kubernetes.pem # cluster certificate
kube-proxy.pem # kube-proxy certificate, used by nodes to authenticate
kube-proxy-key.pem # kube-proxy private key, used by nodes to authenticate
admin.pem # admin certificate, mainly for kubectl authentication
admin-key.pem # admin private key, mainly for kubectl authentication

Background:

TLS: TLS encrypts the traffic to prevent eavesdropping. A client whose certificate is not trusted cannot even establish a connection to the apiserver, let alone request anything from it.

RBAC: RBAC defines which APIs a user or group (subject) may call. Combined with TLS, the apiserver reads the certificate's CN field as the user name and the O field as the group.

In short: to talk to the apiserver you must present a certificate signed by the apiserver's CA; that establishes trust and the TLS connection. Second, the certificate's CN and O fields supply the user and group that RBAC needs.
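As a concrete illustration, once the admin certificate from section 3.8 exists, you can see exactly which user and group the apiserver will derive from it (the subject below follows from admin-csr.json; the exact output format varies with your openssl version):

openssl x509 -noout -subject -in admin.pem

# subject= /C=CN/ST=GuangDong/L=ShenZhen/O=system:masters/OU=System/CN=admin
# -> user "admin", group "system:masters"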

3.2 Download and install CFSSL, an HTTP API toolkit for signing, verifying, and bundling TLS certificates (master node)

wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64

wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64

mv cfssl-certinfo_linux-amd64 /usr/local/bin/cfssl-certinfo

mv cfssl_linux-amd64 /usr/local/bin/cfssl

mv cfssljson_linux-amd64 /usr/local/bin/cfssljson

chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssl-certinfo /usr/local/bin/cfssljson

3.3 Create the CA (Certificate Authority) (master node)

mkdir /root/ssl

cd /root/ssl

cfssl print-defaults config > config.json

cfssl print-defaults csr > csr.json

# Create the following ca-config.json, following the format of config.json

# The expiry is set to 87600h (10 years)

cat > ca-config.json << EOF

{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ],
        "expiry": "87600h"
      }
    }
  }
}

EOF

Notes:

ca-config.json: can define multiple profiles with different expiry times and usage scenarios; a specific profile is selected later when signing certificates;

signing: the certificate may be used to sign other certificates; the generated ca.pem has CA=TRUE;

server auth: a client may use this CA to verify certificates presented by servers;

client auth: a server may use this CA to verify certificates presented by clients;

3.4 Create the CA certificate signing request

cat > ca-csr.json << EOF

{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ],
  "ca": {
    "expiry": "87600h"
  }
}

EOF

Notes:

"CN": Common Name; kube-apiserver extracts this field from the certificate as the request's user name (User Name)

"O": Organization; kube-apiserver extracts this field as the group (Group) the requesting user belongs to

3.5 Generate the CA certificate and private key

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

3.6 Create the kubernetes certificate signing request

cat > kubernetes-csr.json << EOF

{
  "CN": "kubernetes",
  "hosts": [
    "127.0.0.1",
    "10.192.44.129",
    "10.192.44.128",
    "10.192.44.126",
    "10.192.44.127",
    "10.254.0.1",
    "*.kubernetes.master",
    "localhost",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

EOF

Notes:

This certificate is currently dedicated to the apiserver. The *.kubernetes.master wildcard was added for internal private DNS resolution (you may delete it). Many people ask whether the kubernetes* entries can be deleted: they cannot. Once the cluster is created, a Service named kubernetes appears in the default namespace, and some components connect to the API through that Service; if the certificate does not cover it, they may be unable to connect. The other kubernetes.* names exist for the same reason.

The hosts field defines the scope of authorization: a node or service not in this list will get a certificate-mismatch error when it presents this certificate.

10.254.0.1 is the first IP of the service-cluster-ip-range passed to kube-apiserver.
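To double-check that the hosts list actually made it into the generated certificate (after section 3.7), dump its Subject Alternative Names:

openssl x509 -noout -text -in kubernetes.pem | grep -A1 'Subject Alternative Name'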

3.7 Generate the kubernetes certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kubernetes-csr.json | cfssljson -bare kubernetes

3.8 Create the admin certificate request

cat > admin-csr.json << EOF

{
  "CN": "admin",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "system:masters",
      "OU": "System"
    }
  ]
}

EOF

3.9 Generate the admin certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin

Notes:

The admin certificate is used later to generate the administrator's kubeconfig. RBAC is now the recommended way to control access to kubernetes; it treats the certificate's CN field as the User and the O field as the Group.
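This is also why O=system:masters is enough for full access: kubernetes ships a default ClusterRoleBinding that grants the cluster-admin role to that group. Once the cluster is up, you can verify it:

kubectl get clusterrolebinding cluster-admin -o yaml

# roleRef: ClusterRole cluster-admin; subjects: Group system:masters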

3.10 Create the kube-proxy certificate request

cat > kube-proxy-csr.json << EOF

{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "GuangDong",
      "L": "ShenZhen",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

EOF

3.11 Generate the kube-proxy client certificate and private key

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

3.12 Verify a certificate

openssl x509 -noout -text -in kubernetes.pem

3.13 Distribute the certificates: copy the generated certificates and keys (the .pem files) to /etc/kubernetes/ssl on every machine

mkdir -p /etc/kubernetes/ssl && cp *.pem /etc/kubernetes/ssl

for n in node2 node3; do ssh $n mkdir -p /etc/kubernetes/ssl; scp *.pem $n:/etc/kubernetes/ssl; done

4. Create kubeconfig files (master node)

4.1 Generate the token file

export BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')

export KUBE_APISERVER="https://10.192.44.129:6443"

echo $BOOTSTRAP_TOKEN

cat > token.csv << EOF

${BOOTSTRAP_TOKEN},kubelet-bootstrap,10001,"system:bootstrappers"

EOF

cp token.csv /etc/kubernetes/

Note: do not second-guess the system:bootstrappers group; it is not a typo. See the official documentation: https://kubernetes.io/docs/admin/kubelet-tls-bootstrapping/

4.2 Create the kubelet bootstrap kubeconfig

A kubeconfig is essentially an access-control file, governing access at the cluster level. If you do not pass --kubeconfig=xx.kubeconfig, the default file ~/.kube/config is used instead. You can see the same convention in kubeadm, which asks you to copy the admin kubeconfig to ~/.kube/config.
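Any of the kubeconfig files generated below can be inspected the same way; for example, to confirm the embedded CA, server address, and credentials of the bootstrap file once 4.2 is done:

kubectl config view --kubeconfig=bootstrap.kubeconfig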

cd /etc/kubernetes/ssl

4.2.1 Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/ssl/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=bootstrap.kubeconfig

4.2.2 Set client authentication parameters

kubectl config set-credentials kubelet-bootstrap \

--token=${BOOTSTRAP_TOKEN} \

--kubeconfig=bootstrap.kubeconfig

4.2.3 Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=kubelet-bootstrap \

--kubeconfig=bootstrap.kubeconfig

4.2.4 Set the default context

kubectl config use-context default --kubeconfig=bootstrap.kubeconfig

4.3 Create the kube-proxy kubeconfig

4.3.1 Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/ssl/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=kube-proxy.kubeconfig

4.3.2 Set client authentication parameters

kubectl config set-credentials kube-proxy \

--client-certificate=/etc/kubernetes/ssl/kube-proxy.pem \

--client-key=/etc/kubernetes/ssl/kube-proxy-key.pem \

--embed-certs=true \

--kubeconfig=kube-proxy.kubeconfig

4.3.3 Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=kube-proxy \

--kubeconfig=kube-proxy.kubeconfig

4.3.4 Set the default context

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig

4.4 Distribute the kubeconfig files

for n in node2 node3; do scp bootstrap.kubeconfig kube-proxy.kubeconfig $n:/etc/kubernetes/; done

4.5 Create the admin kubeconfig

4.5.1 Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/ssl/ca.pem \

--embed-certs=true \

--server=${KUBE_APISERVER} \

--kubeconfig=admin.conf

4.5.2 Set client authentication parameters

kubectl config set-credentials admin \

--client-certificate=/etc/kubernetes/ssl/admin.pem \

--embed-certs=true \

--client-key=/etc/kubernetes/ssl/admin-key.pem \

--kubeconfig=admin.conf

4.5.3 Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=admin \

--kubeconfig=admin.conf

4.5.4 Set the default context

kubectl config use-context default --kubeconfig=admin.conf

4.6 Create the advanced audit policy

cat > audit-policy.yaml << EOF

# Log all requests at the Metadata level.

apiVersion: audit.k8s.io/v1beta1

kind: Policy

rules:

- level: Metadata

EOF

4.7 Copy files:

#cp ~/.kube/config /etc/kubernetes/kubelet.kubeconfig (I only needed this step when adding a node went wrong; if you hit no problems, skip it, and the same goes for kubelet.kubeconfig below)

scp /etc/kubernetes/{kubelet.kubeconfig,bootstrap.kubeconfig,kube-proxy.kubeconfig} node2:/etc/kubernetes/

scp /etc/kubernetes/{kubelet.kubeconfig,bootstrap.kubeconfig,kube-proxy.kubeconfig} node3:/etc/kubernetes/

5. Create the etcd cluster

5.1 Create the etcd systemd service (every node)

cat > /usr/lib/systemd/system/etcd.service << 'EOF'

[Unit]

Description=Etcd Server

After=network.target

After=network-online.target

Wants=network-online.target

Documentation=https://github.com/coreos


[Service]

Type=notify

WorkingDirectory=/var/lib/etcd/

EnvironmentFile=-/etc/etcd/etcd.conf

ExecStart=/usr/local/bin/etcd \

--name ${ETCD_NAME} \

--cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

--peer-cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--peer-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

--trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

--peer-trusted-ca-file=/etc/kubernetes/ssl/ca.pem \

--initial-advertise-peer-urls ${ETCD_INITIAL_ADVERTISE_PEER_URLS} \

--listen-peer-urls ${ETCD_LISTEN_PEER_URLS} \

--listen-client-urls ${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \

--advertise-client-urls ${ETCD_ADVERTISE_CLIENT_URLS} \

--initial-cluster-token ${ETCD_INITIAL_CLUSTER_TOKEN} \

--initial-cluster infra1=https://10.192.44.129:2380,infra2=https://10.192.44.127:2380,infra3=https://10.192.44.126:2380 \

--initial-cluster-state new \

--data-dir=${ETCD_DATA_DIR}

Restart=on-failure

RestartSec=5

LimitNOFILE=65536


[Install]

WantedBy=multi-user.target

EOF

Notes: systemd configures, manages, and drives services. In EnvironmentFile=-/path, the leading "-" suppresses errors: if the file is missing, the unit still starts and the other directives still run.
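A minimal illustration of the difference (just the semantics, not part of the deployment):

# EnvironmentFile=-/etc/etcd/etcd.conf  -> the unit still starts if the file is missing
# EnvironmentFile=/etc/etcd/etcd.conf   -> a missing file fails the unit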

5.2 Edit the config file (etcd1 shown; on etcd2 and etcd3, replace the name and IP addresses)

mkdir /etc/etcd

cat > /etc/etcd/etcd.conf << EOF

ETCD_NAME=infra1

ETCD_DATA_DIR="/var/lib/etcd"

ETCD_LISTEN_PEER_URLS="https://10.192.44.129:2380"

ETCD_LISTEN_CLIENT_URLS="https://10.192.44.129:2379"


#[cluster]

ETCD_INITIAL_ADVERTISE_PEER_URLS="https://10.192.44.129:2380"

ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"

ETCD_ADVERTISE_CLIENT_URLS="https://10.192.44.129:2379"
EOF

5.3 Start the etcd servers; remember to create /var/lib/etcd first.

mkdir /var/lib/etcd

systemctl enable etcd && systemctl start etcd

6. Deploy the master node (it seems you have to unpack the server tarball yourself)

6.1 Download the kubernetes binaries

Download kubernetes (v1.11.3):

wget https://github.com/kubernetes/kubernetes/releases/download/v1.11.3/kubernetes.tar.gz

tar -xzvf kubernetes.tar.gz

cd kubernetes

./cluster/get-kube-binaries.sh

# If the script fails, do it manually:

cd server/

tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kube-apiserver /usr/local/bin/kube-apiserver

cp kubernetes/server/bin/kube-controller-manager /usr/local/bin/kube-controller-manager

cp kubernetes/server/bin/kube-scheduler /usr/local/bin/kube-scheduler

chmod +x /usr/local/bin/{kube-apiserver,kube-controller-manager,kube-scheduler}

6.2 Configure systemd services for kube-apiserver, kube-controller-manager, and kube-scheduler

6.2.1 Create kube-apiserver.service

cat > /usr/lib/systemd/system/kube-apiserver.service << 'EOF'

[Unit]

Description=Kubernetes API Service

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target

After=etcd.service


[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/apiserver

ExecStart=/usr/local/bin/kube-apiserver \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_ETCD_SERVERS \

$KUBE_API_ADDRESS \

$KUBE_API_PORT \

$KUBELET_PORT \

$KUBE_ALLOW_PRIV \

$KUBE_SERVICE_ADDRESSES \

$KUBE_ADMISSION_CONTROL \

$KUBE_API_ARGS

Restart=on-failure

Type=notify

LimitNOFILE=65536


[Install]

WantedBy=multi-user.target

EOF

6.2.2 Create kube-controller-manager.service

cat > /usr/lib/systemd/system/kube-controller-manager.service << 'EOF'

[Unit]

Description=Kubernetes Controller Manager

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/controller-manager

ExecStart=/usr/local/bin/kube-controller-manager \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_CONTROLLER_MANAGER_ARGS

Restart=on-failure

LimitNOFILE=65536


[Install]

WantedBy=multi-user.target

EOF

6.2.3 Create kube-scheduler.service

cat > /usr/lib/systemd/system/kube-scheduler.service << 'EOF'

[Unit]

Description=Kubernetes Scheduler Plugin

Documentation=https://github.com/GoogleCloudPlatform/kubernetes


[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/scheduler

ExecStart=/usr/local/bin/kube-scheduler \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_SCHEDULER_ARGS

Restart=on-failure

LimitNOFILE=65536


[Install]

WantedBy=multi-user.target

EOF

6.2.4 Edit /etc/kubernetes/config

cat > /etc/kubernetes/config << EOF

###

# kubernetes system config

#

# The following values are used to configure various aspects of all

# kubernetes services, including

#

# kube-apiserver.service

# kube-controller-manager.service

# kube-scheduler.service

# kubelet.service

# kube-proxy.service

# logging to stderr means we get it in the systemd journal

KUBE_LOGTOSTDERR="--logtostderr=true"


# journal message level, 0 is debug

KUBE_LOG_LEVEL="--v=0"


# Should this cluster be allowed to run privileged docker containers

KUBE_ALLOW_PRIV="--allow-privileged=true"


# How the controller-manager, scheduler, and proxy find the apiserver

#KUBE_MASTER="--master=http://test-001.jimmysong.io:8080"

KUBE_MASTER="--master=http://10.192.44.129:8080"

EOF

6.2.5 Edit the apiserver config file

cat > /etc/kubernetes/apiserver << EOF

###

## kubernetes system config

##

## The following values are used to configure the kube-apiserver

##

#

## The address on the local server to listen to.

#KUBE_API_ADDRESS="--insecure-bind-address=test-001.jimmysong.io"

KUBE_API_ADDRESS="--advertise-address=10.192.44.129 --bind-address=10.192.44.129 --insecure-bind-address=10.192.44.129"

#

## The port on the local server to listen on.

KUBE_API_PORT="--secure-port=6443"

#

## Port minions listen on

#KUBELET_PORT="--kubelet-port=10250"

#

## Comma separated list of nodes in the etcd cluster

KUBE_ETCD_SERVERS="--etcd-servers=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379"

#

## Address range to use for services

KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"

#

## default admission control policies

KUBE_ADMISSION_CONTROL="--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota,NodeRestriction"

#

## Add your own!

KUBE_API_ARGS="--anonymous-auth=false \

--authorization-mode=Node,RBAC \

--kubelet-https=true \

--kubelet-timeout=3s \

--enable-bootstrap-token-auth \

--enable-garbage-collector \

--enable-logs-handler \

--token-auth-file=/etc/kubernetes/token.csv \

--service-node-port-range=30000-32767 \

--tls-cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--tls-private-key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

--client-ca-file=/etc/kubernetes/ssl/ca.pem \

--service-account-key-file=/etc/kubernetes/ssl/ca-key.pem \

--etcd-cafile=/etc/kubernetes/ssl/ca.pem \

--etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem \

--etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem \

--etcd-compaction-interval=5m0s \

--etcd-count-metric-poll-period=1m0s \

--enable-swagger-ui=true \

--apiserver-count=3 \

--log-flush-frequency=5s \

--audit-log-maxage=30 \

--audit-log-maxbackup=3 \

--audit-log-maxsize=100 \

--audit-log-path=/var/lib/audit.log \

--audit-policy-file=/etc/kubernetes/audit-policy.yaml \

--storage-backend=etcd3 \

--event-ttl=1h"

EOF

6.2.6 Edit the controller-manager config file

cat > /etc/kubernetes/controller-manager << EOF

###

# The following values are used to configure the kubernetes controller-manager


# defaults from config and apiserver should be adequate


# Add your own!

KUBE_CONTROLLER_MANAGER_ARGS="--address=127.0.0.1

--service-cluster-ip-range=10.254.0.0/16

--cluster-name=kubernetes

--cluster-signing-cert-file=/etc/kubernetes/ssl/ca.pem

--cluster-signing-key-file=/etc/kubernetes/ssl/ca-key.pem

--service-account-private-key-file=/etc/kubernetes/ssl/ca-key.pem

--root-ca-file=/etc/kubernetes/ssl/ca.pem

--leader-elect=true"

EOF

6.2.7 Edit the scheduler config file

cat > /etc/kubernetes/scheduler << EOF

###

# kubernetes scheduler config


# default config should be adequate


# Add your own!

KUBE_SCHEDULER_ARGS="--leader-elect=true --address=127.0.0.1 --algorithm-provider=DefaultProvider"
EOF

6.2.8 Start the services

systemctl daemon-reload

systemctl enable kube-apiserver kube-controller-manager kube-scheduler

systemctl start kube-apiserver kube-controller-manager kube-scheduler

6.2.9 Verify the master components

kubectl get componentstatuses

Expected output:

NAME STATUS MESSAGE ERROR

scheduler Healthy ok

controller-manager Healthy ok

etcd-0 Healthy {"health": "true"}

etcd-1 Healthy {"health": "true"}

etcd-2 Healthy {"health": "true"}

6.2.10 kubectl command completion

echo "source <(kubectl completion bash)" >> ~/.bashrc

source ~/.bashrc

7. Install the flannel network plugin

7.1 Install flannel via yum (every node)

yum install -y flannel

7.2 Configure the service file (every node)

cat > /usr/lib/systemd/system/flanneld.service << 'EOF'

[Unit]

Description=Flanneld overlay address etcd agent

After=network.target

After=network-online.target

Wants=network-online.target

After=etcd.service

Before=docker.service


[Service]

Type=notify

EnvironmentFile=/etc/sysconfig/flanneld

EnvironmentFile=-/etc/sysconfig/docker-network

ExecStart=/usr/bin/flanneld-start \

-etcd-endpoints=${FLANNEL_ETCD_ENDPOINTS} \

-etcd-prefix=${FLANNEL_ETCD_PREFIX} \

$FLANNEL_OPTIONS

ExecStartPost=/usr/libexec/flannel/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/docker

Restart=on-failure


[Install]

WantedBy=multi-user.target

RequiredBy=docker.service
EOF

Note: mk-docker-opts.sh turns flannel's runtime settings into environment files (/run/flannel/subnet.env and, with the options above, /run/flannel/docker); docker consumes them later.
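For reference, after flanneld starts, /run/flannel/docker should look roughly like this (the subnet values are examples; yours depend on the lease flannel acquires):

cat /run/flannel/docker

# DOCKER_OPT_BIP="--bip=172.30.34.1/24"
# DOCKER_OPT_IPMASQ="--ip-masq=true"
# DOCKER_OPT_MTU="--mtu=1500"
# DOCKER_NETWORK_OPTIONS=" --bip=172.30.34.1/24 --ip-masq=true --mtu=1500"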

7.3 Create the flanneld config file (every node)

cat > /etc/sysconfig/flanneld << EOF

# Flanneld configuration options

# etcd url location. Point this to the server where etcd runs

FLANNEL_ETCD_ENDPOINTS="https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379"


# etcd config key. This is the configuration key that flannel queries

# For address range assignment

FLANNEL_ETCD_PREFIX="/kube-centos/network"


# Any additional options that you want to pass

FLANNEL_OPTIONS="-etcd-cafile=/etc/kubernetes/ssl/ca.pem -etcd-certfile=/etc/kubernetes/ssl/kubernetes.pem -etcd-keyfile=/etc/kubernetes/ssl/kubernetes-key.pem"
EOF

7.4 Create the network config in etcd (run once, on any node; host-gw backend)

etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \

--ca-file=/etc/kubernetes/ssl/ca.pem \

--cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

mkdir /kube-centos/network

etcdctl --endpoints=https://10.192.44.129:2379,https://10.192.44.127:2379,https://10.192.44.126:2379 \

--ca-file=/etc/kubernetes/ssl/ca.pem \

--cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

mk /kube-centos/network/config '{"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}'

7.5 Start flannel (every node)

systemctl daemon-reload

systemctl restart flanneld

systemctl enable flanneld

7.6 Inspect the etcd contents (any one node is enough, since the data is replicated)

etcdctl --endpoints=https://10.192.44.129:2379 \

--ca-file=/etc/kubernetes/ssl/ca.pem \

--cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

ls /kube-centos/network/subnets


etcdctl --endpoints=https://10.192.44.129:2379 \

--ca-file=/etc/kubernetes/ssl/ca.pem \

--cert-file=/etc/kubernetes/ssl/kubernetes.pem \

--key-file=/etc/kubernetes/ssl/kubernetes-key.pem \

get /kube-centos/network/config
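If everything is healthy, the first query lists one subnet lease per node and the second returns the config written in 7.4; the actual subnets vary, for example:

# /kube-centos/network/subnets/172.30.34.0-24
# /kube-centos/network/subnets/172.30.51.0-24
# /kube-centos/network/subnets/172.30.20.0-24
# {"Network":"172.30.0.0/16","SubnetLen":24,"Backend":{"Type":"host-gw"}}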

7.7 Point docker's systemd unit at the environment file flannel generated (every node)

vim /usr/lib/systemd/system/docker.service

EnvironmentFile=-/run/flannel/docker

# (reload and restart docker after finishing 7.8 below)

7.8 Update the dockerd start options to use it (every node)

vim /usr/lib/systemd/system/docker.service

EnvironmentFile=-/run/flannel/docker

ExecStart=/usr/bin/dockerd \

$DOCKER_OPT_BIP \

$DOCKER_OPT_IPMASQ \

$DOCKER_OPT_MTU \

--log-driver=json-file

systemctl daemon-reload && systemctl restart docker

8. Deploy the node components

8.1 TLS bootstrapping configuration (master node)

cd /etc/kubernetes

kubectl create clusterrolebinding kubelet-bootstrap \

--clusterrole=system:node-bootstrapper \

--user=kubelet-bootstrap

kubectl create clusterrolebinding kubelet-nodes \

--clusterrole=system:node \

--group=system:nodes

Notes:

When kubelet starts, it sends a TLS bootstrapping request to kube-apiserver. The kubelet-bootstrap user from the bootstrap token file must first be bound to the system:node-bootstrapper cluster role, so that kubelet has permission to create certificate signing requests;

after kubelet passes authentication, it sends a register-node request to kube-apiserver, so the system:nodes group must be bound to the system:node cluster role for kubelet to have permission to register the node.
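You can confirm both bindings exist before starting the kubelets:

kubectl get clusterrolebinding kubelet-bootstrap kubelet-nodes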

8.2 Download the kubelet and kube-proxy binaries (every node)

wget https://github.com/kubernetes/kubernetes/releases/download/v1.11.3/kubernetes.tar.gz

tar -xzvf kubernetes.tar.gz

cd kubernetes

./cluster/get-kube-binaries.sh

# If the script fails, do it manually:

cd server/

tar -zxvf kubernetes-server-linux-amd64.tar.gz

cp kubernetes/server/bin/kubelet /usr/local/bin/kubelet

cp kubernetes/server/bin/kube-proxy /usr/local/bin/kube-proxy

chmod +x /usr/local/bin/{kubelet,kube-proxy}

8.3 Configure systemd services for kubelet and kube-proxy

8.3.1 Create kubelet.service

cat > /usr/lib/systemd/system/kubelet.service << 'EOF'

[Unit]

Description=Kubernetes Kubelet Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=docker.service

Requires=docker.service


[Service]

WorkingDirectory=/var/lib/kubelet

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/kubelet

ExecStart=/usr/local/bin/kubelet \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBELET_ADDRESS \

$KUBELET_PORT \

$KUBELET_HOSTNAME \

$KUBE_ALLOW_PRIV \

$KUBELET_POD_INFRA_CONTAINER \

$KUBELET_ARGS

Restart=on-failure


[Install]

WantedBy=multi-user.target

EOF

8.3.2 Create kube-proxy.service

cat > /usr/lib/systemd/system/kube-proxy.service << 'EOF'

[Unit]

Description=Kubernetes Kube-Proxy Server

Documentation=https://github.com/GoogleCloudPlatform/kubernetes

After=network.target


[Service]

EnvironmentFile=-/etc/kubernetes/config

EnvironmentFile=-/etc/kubernetes/proxy

ExecStart=/usr/local/bin/kube-proxy \

$KUBE_LOGTOSTDERR \

$KUBE_LOG_LEVEL \

$KUBE_MASTER \

$KUBE_PROXY_ARGS

Restart=on-failure

LimitNOFILE=65536


[Install]

WantedBy=multi-user.target

EOF

8.3.3 Create the shared config file (node2/node3; the master already has /etc/kubernetes/config from 6.2.4, so do not overwrite it there)

cd /etc/kubernetes

cat > /etc/kubernetes/config << EOF

KUBE_LOGTOSTDERR="--logtostderr=true"

KUBE_LOG_LEVEL="--v=2"

EOF

8.3.4 Create the kubelet config file (master)

cat > /etc/kubernetes/kubelet << EOF

###

## kubernetes kubelet (minion) config

#

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=10.192.44.129"

#

## The port for the info server to serve on

#KUBELET_PORT="--port=10250"

#

## You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=master"

#

## location of the api-server

## COMMENT THIS ON KUBERNETES 1.8+


#

## pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

#

## Add your own!

KUBELET_ARGS="--cluster-dns=10.254.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --allow-privileged=true --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"

EOF

8.3.5 Create the kubelet config file (node2)

cat > /etc/kubernetes/kubelet << EOF

###

## kubernetes kubelet (minion) config

#

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=10.192.44.127"

#

## The port for the info server to serve on

#KUBELET_PORT="--port=10250"

#

## You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=node2"

#

## location of the api-server

## COMMENT THIS ON KUBERNETES 1.8+


#

## pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

#

## Add your own!

KUBELET_ARGS="--cluster-dns=10.254.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --allow-privileged=true --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"

EOF

8.3.6 Create the kubelet config file (node3)

cat > /etc/kubernetes/kubelet << EOF

###

## kubernetes kubelet (minion) config

#

## The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)

KUBELET_ADDRESS="--address=10.192.44.126"

#

## The port for the info server to serve on

#KUBELET_PORT="--port=10250"

#

## You may leave this blank to use the actual hostname

KUBELET_HOSTNAME="--hostname-override=node3"

#

## location of the api-server

## COMMENT THIS ON KUBERNETES 1.8+


#

## pod infrastructure container

KUBELET_POD_INFRA_CONTAINER="--pod-infra-container-image=gcr.io/google_containers/pause-amd64:3.0"

#

## Add your own!

KUBELET_ARGS="--cluster-dns=10.254.0.2 --bootstrap-kubeconfig=/etc/kubernetes/bootstrap.kubeconfig --kubeconfig=/etc/kubernetes/kubelet.kubeconfig --cert-dir=/etc/kubernetes/ssl --cluster-domain=cluster.local. --allow-privileged=true --serialize-image-pulls=false --fail-swap-on=false --log-dir=/var/log/kubernetes/kubelet"

EOF

8.3.7 Create the kube-proxy config file (master)

cat > /etc/kubernetes/proxy << EOF

###

# kubernetes proxy config


# default config should be adequate


# Add your own!

KUBE_PROXY_ARGS="--bind-address=10.192.44.129 --hostname-override=master --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"

EOF

8.3.8 Create the kube-proxy config file (node2)

cat > /etc/kubernetes/proxy << EOF

###

# kubernetes proxy config


# default config should be adequate


# Add your own!

KUBE_PROXY_ARGS="--bind-address=10.192.44.127 --hostname-override=node2 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"

EOF

8.3.9 Create the kube-proxy config file (node3)

cat > /etc/kubernetes/proxy << EOF

###

# kubernetes proxy config


# default config should be adequate


# Add your own!

KUBE_PROXY_ARGS="--bind-address=10.192.44.126 --hostname-override=node3 --kubeconfig=/etc/kubernetes/kube-proxy.kubeconfig --cluster-cidr=10.254.0.0/16 --log-dir=/var/log/kubernetes/proxy"

EOF

8.3.10 Start kubelet

systemctl daemon-reload && systemctl enable kubelet && systemctl start kubelet

8.3.11 Start kube-proxy

systemctl daemon-reload && systemctl enable kube-proxy && systemctl start kube-proxy

8.3.12 View certificate signing requests (each node submits one to the apiserver automatically)

kubectl get csr

8.3.13 Approve the requests on the master, then check their status

kubectl certificate approve node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE

kubectl describe csr node-csr-Yiiv675wUCvQl3HH11jDr0cC9p3kbrXWrxvG3EjWGoE

The status looks like this:

kubectl describe csr node-csr-hsBS9OyhOa8rK_Q48ee81giH17t6Nk4FL9IRWRt4ygw

Name: node-csr-hsBS9OyhOa8rK_Q48ee81giH17t6Nk4FL9IRWRt4ygw

Labels: <none>

Annotations: <none>

CreationTimestamp: Thu, 22 Nov 2018 20:19:09 +0800

Requesting User: kubelet-bootstrap

Status: Approved,Issued

Subject:

Common Name: system:node:node3

Serial Number:

Organization: system:nodes

8.3.14 Check node status

kubectl get nodes

8.3.15 Create a test deployment

vim deploy.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80


kubectl create -f deploy.yaml

kubectl scale deployment nginx-deployment --replicas=4

9. Deploy cluster DNS (CoreDNS)

9.1 Download the coredns manifest template, shown below:

coredns.yaml.sed

apiVersion: v1
kind: ServiceAccount
metadata:
  name: coredns
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
rules:
- apiGroups:
  - ""
  resources:
  - endpoints
  - services
  - pods
  - namespaces
  verbs:
  - list
  - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:coredns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:coredns
subjects:
- kind: ServiceAccount
  name: coredns
  namespace: kube-system
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: coredns
  namespace: kube-system
data:
  Corefile: |
    .:53 {
        errors
        health
        kubernetes CLUSTER_DOMAIN SERVICE_CIDR POD_CIDR {
            pods insecure
            upstream
            fallthrough in-addr.arpa ip6.arpa
        }
        prometheus :9153
        proxy . /etc/resolv.conf
        cache 30
    }
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: coredns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/name: "CoreDNS"
spec:
  replicas: 2
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
    spec:
      serviceAccountName: coredns
      tolerations:
      - key: "CriticalAddonsOnly"
        operator: "Exists"
      containers:
      - name: coredns
        image: coredns/coredns:1.1.1
        imagePullPolicy: IfNotPresent
        args: [ "-conf", "/etc/coredns/Corefile" ]
        volumeMounts:
        - name: config-volume
          mountPath: /etc/coredns
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        - containerPort: 9153
          name: metrics
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /health
            port: 8080
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
      dnsPolicy: Default
      volumes:
      - name: config-volume
        configMap:
          name: coredns
          items:
          - key: Corefile
            path: Corefile
---
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  annotations:
    prometheus.io/scrape: "true"
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "CoreDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: CLUSTER_DNS_IP
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

9.2 Write the deploy script

cat > deploy.sh << 'EOF'

#!/bin/bash

# Deploys CoreDNS to a cluster currently running Kube-DNS.


SERVICE_CIDR=${1:-10.254.0.0/16}

POD_CIDR=${2:-172.30.0.0/16}

CLUSTER_DNS_IP=${3:-10.254.0.2}

CLUSTER_DOMAIN=${4:-cluster.local}

YAML_TEMPLATE=${5:-`pwd`/coredns.yaml.sed}


sed -e s/CLUSTER_DNS_IP/$CLUSTER_DNS_IP/g -e s/CLUSTER_DOMAIN/$CLUSTER_DOMAIN/g -e s?SERVICE_CIDR?$SERVICE_CIDR?g -e s?POD_CIDR?$POD_CIDR?g $YAML_TEMPLATE > coredns.yaml

EOF

Note: adjust the service CIDR, pod CIDR, and cluster DNS IP to match your own network.
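For example, to spell out the defaults used in this article explicitly (equivalent to running it with no arguments):

./deploy.sh 10.254.0.0/16 172.30.0.0/16 10.254.0.2 cluster.local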

9.3 Deploy coredns

chmod +x deploy.sh

./deploy.sh

kubectl create -f coredns.yaml

9.4 Verify the DNS service

9.4.1 Create a test deployment

cat > busyboxdeploy.yaml << EOF

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox-deployment
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        ports:
        - containerPort: 80
        args: ["/bin/sh", "-c", "sleep 1000"]

EOF

9.4.2 Exec into a pod and ping a Service by name

kubectl exec busybox-deployment-6679c4bb96-86kfg -it -- /bin/sh

# ping kubernetes

# ...

# The ping itself fails because of the network setup, but the name resolves, which is what matters here.
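A check that exercises only DNS, run from inside the same pod (the IPs shown are what this article's address ranges would produce):

nslookup kubernetes.default

# Server:    10.254.0.2
# Name:      kubernetes.default
# Address 1: 10.254.0.1 kubernetes.default.svc.cluster.local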

10. Deploy Heapster

10.1 Download the yaml files

mkdir heapster

cd heapster

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/grafana.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/heapster.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/influxdb/influxdb.yaml

wget https://raw.githubusercontent.com/kubernetes/heapster/master/deploy/kube-config/rbac/heapster-rbac.yaml

10.2 Replace the container images in the yaml files (they default to Google's registry, which we cannot reach, so switch to copies others have pushed to Docker Hub)

10.2.1 In grafana.yaml, replace:

k8s.gcr.io/heapster-grafana-amd64:v5.0.4 -> mirrorgooglecontainers/heapster-grafana-amd64:v5.0.4

10.2.2 In heapster.yaml, replace:

k8s.gcr.io/heapster-amd64:v1.5.4 -> cnych/heapster-amd64:v1.5.4

10.2.3 In influxdb.yaml, replace:

k8s.gcr.io/heapster-influxdb-amd64:v1.5.2 -> fishchen/heapster-influxdb-amd64:v1.5.2

10.3 Apply the manifests, then check Heapster's status

kubectl create -f .

kubectl get svc -n kube-system

10.4 Run a proxy on the master to allow external access

kubectl proxy --port=8096 --address="10.192.44.129" --accept-hosts='^*$'

11. Deploy the dashboard

11.1 Download the dashboard yaml file

wget https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml -O kubernetes-dashboard.yaml

11.2 Modify it as follows (the official manifest, with the image swapped and a nodePort added):

# ------------------- Dashboard Secret ------------------- #

apiVersion: v1
kind: Secret
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard-certs
  namespace: kube-system
type: Opaque

---
# ------------------- Dashboard Service Account ------------------- #

apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Role & Role Binding ------------------- #

kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
rules:
  # Allow Dashboard to create 'kubernetes-dashboard-key-holder' secret.
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["create"]
  # Allow Dashboard to create 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create"]
  # Allow Dashboard to get, update and delete Dashboard exclusive secrets.
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["kubernetes-dashboard-key-holder", "kubernetes-dashboard-certs"]
  verbs: ["get", "update", "delete"]
  # Allow Dashboard to get and update 'kubernetes-dashboard-settings' config map.
- apiGroups: [""]
  resources: ["configmaps"]
  resourceNames: ["kubernetes-dashboard-settings"]
  verbs: ["get", "update"]
  # Allow Dashboard to get metrics from heapster.
- apiGroups: [""]
  resources: ["services"]
  resourceNames: ["heapster"]
  verbs: ["proxy"]
- apiGroups: [""]
  resources: ["services/proxy"]
  resourceNames: ["heapster", "http:heapster:", "https:heapster:"]
  verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubernetes-dashboard-minimal
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: kubernetes-dashboard-minimal
subjects:
- kind: ServiceAccount
  name: kubernetes-dashboard
  namespace: kube-system

---
# ------------------- Dashboard Deployment ------------------- #

kind: Deployment
apiVersion: apps/v1beta2
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: kubernetes-dashboard
  template:
    metadata:
      labels:
        k8s-app: kubernetes-dashboard
    spec:
      containers:
      - name: kubernetes-dashboard
        image: siriuszg/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
        - name: kubernetes-dashboard-certs
          mountPath: /certs
          # Create on-disk volume to store exec logs
        - mountPath: /tmp
          name: tmp-volume
        livenessProbe:
          httpGet:
            scheme: HTTPS
            path: /
            port: 8443
          initialDelaySeconds: 30
          timeoutSeconds: 30
      volumes:
      - name: kubernetes-dashboard-certs
        secret:
          secretName: kubernetes-dashboard-certs
      - name: tmp-volume
        emptyDir: {}
      serviceAccountName: kubernetes-dashboard
      # Comment the following tolerations if Dashboard must not be deployed on master
      tolerations:
      - key: node-role.kubernetes.io/master
        effect: NoSchedule

---
# ------------------- Dashboard Service ------------------- #

kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
    kubernetes.io/cluster-service: 'true'
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  type: NodePort
  ports:
    - port: 443
      nodePort: 30000
      targetPort: 8443
  selector:
    k8s-app: kubernetes-dashboard

11.3 Deploy the dashboard

kubectl create -f kubernetes-dashboard.yaml

11.4 Check deployment status

kubectl get services kubernetes-dashboard -n kube-system

kubectl get pods -n kube-system

11.5 Create a token, and build a kubeconfig around it

k8s enforces strict access control here: the account must be a ServiceAccount (sa); username/password authentication cannot be used. As I understand it, the kubeconfig users above control access at the cluster level, while the dashboard needs access control at the pod level.

11.5.1 Create a token; you can log in with it directly later.

kubectl create sa dashboard-admin -n kube-system

kubectl create clusterrolebinding dashboard-admin --clusterrole=cluster-admin --serviceaccount=kube-system:dashboard-admin

ADMIN_SECRET=$(kubectl get secrets -n kube-system | grep dashboard-admin | awk '{print $1}')

DASHBOARD_LOGIN_TOKEN=$(kubectl describe secret -n kube-system ${ADMIN_SECRET} | grep -E '^token' | awk '{print $2}')

echo ${DASHBOARD_LOGIN_TOKEN}

11.5.2 Build a kubeconfig from the token for login

# Set cluster parameters

kubectl config set-cluster kubernetes \

--certificate-authority=/etc/kubernetes/ssl/ca.pem \

--embed-certs=true \

--server=https://10.192.44.129:6443 \

--kubeconfig=dashboard.kubeconfig


# Set client authentication parameters, using the token created above

kubectl config set-credentials dashboard_user \

--token=${DASHBOARD_LOGIN_TOKEN} \

--kubeconfig=dashboard.kubeconfig


# Set context parameters

kubectl config set-context default \

--cluster=kubernetes \

--user=dashboard_user \

--kubeconfig=dashboard.kubeconfig


# Set the default context

kubectl config use-context default --kubeconfig=dashboard.kubeconfig

11.6 Access the dashboard

11.6.1 The kubernetes-dashboard Service exposes a NodePort, so the dashboard can be reached at https://NodeIP:NodePort;

https://10.192.44.129:30000

Gotcha: Chrome rejects the connection with NET::ERR_CERT_INVALID, a certificate problem, but Firefox can open the page.

Alternatively, import the ca and admin certificates created above; to import them on Windows you must convert the formats.

The conversion commands:

Generate a p12-format certificate:

openssl pkcs12 -export -in admin.pem -out admin.p12 -inkey admin-key.pem

Generate a cer-format certificate:

openssl x509 -in admin.pem -outform der -out admin.cer

11.6.2 Access the dashboard through kube-apiserver:

https://10.192.44.129:6443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy

11.6.3 Access the dashboard through kubectl proxy:

kubectl proxy --address='10.192.44.129' --port=8086 --accept-hosts='^*$'

Open: http://10.192.44.129:8086/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

Gotcha: with kubectl proxy you can reach the login page but cannot log in: the dashboard only allows HTTP access from localhost and 127.0.0.1; every other address must use HTTPS. So to reach the dashboard from another machine, use one of the other access methods.

12. Deploy EFK

EFK is an acronym for three open-source projects: Elasticsearch, Fluentd, and Kibana.

12.1 Install EFK (I simply copied the files)

git clone https://github.com/kubernetes/kubernetes.git

cd /opt/k8s/kubernetes/cluster/addons/fluentd-elasticsearch

12.2 Replace the container images (they default to Google's registry, you know why)

12.2.1 Replace the image in es-statefulset.yaml

xxlaila/elasticsearch:v6.3.0

12.2.2 Replace the image in fluentd-es-ds.yaml

vavikast/fluentd-elasticsearch:v2.2.0

12.2.3 Replace the image in kibana-deployment.yaml

mintel/kibana-oss:6.3.2

12.2.4 A caveat: the fluentd-es-configmap.yaml in the latest EFK has a problem I have not fully investigated; below is my version, adapted from https://github.com/kubernetes/minikube/blob/master/deploy/addons/efk/

kind: ConfigMap
apiVersion: v1
metadata:
  name: fluentd-es-config-v0.1.6
  namespace: kube-system
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
data:
  system.conf: |-
    <system>
      root_dir /tmp/fluentd-buffers/
    </system>
  containers.input.conf: |-
    <source>
      @id fluentd-containers.log
      @type tail
      path /var/log/containers/*.log
      pos_file /var/log/es-containers.log.pos
      time_format %Y-%m-%dT%H:%M:%S.%NZ
      tag raw.kubernetes.*
      read_from_head true
      <parse>
        @type multi_format
        <pattern>
          format json
          time_key time
          time_format %Y-%m-%dT%H:%M:%S.%NZ
        </pattern>
        <pattern>
          format /^(?<time>.+) (?<stream>stdout|stderr) [^ ]* (?<log>.*)$/
          time_format %Y-%m-%dT%H:%M:%S.%N%:z
        </pattern>
      </parse>
    </source>
    # Detect exceptions in the log output and forward them as one log entry.
    <match raw.kubernetes.**>
      @id raw.kubernetes
      @type detect_exceptions
      remove_tag_prefix raw
      message log
      stream stream
      multiline_flush_interval 5
      max_bytes 500000
      max_lines 1000
    </match>
  output.conf: |-
    # Enriches records with Kubernetes metadata
    <filter kubernetes.**>
      @type kubernetes_metadata
    </filter>
    <match **>
      @id elasticsearch
      @type elasticsearch
      @log_level info
      include_tag_key true
      host elasticsearch-logging
      port 9200
      logstash_format true
      <buffer>
        @type file
        path /var/log/fluentd-buffers/kubernetes.system.buffer
        flush_mode interval
        retry_type exponential_backoff
        flush_thread_count 2
        flush_interval 5s
        retry_forever
        retry_max_interval 30
        chunk_limit_size 2M
        queue_limit_length 8
        overflow_action block
      </buffer>
    </match>

12.3 Label the nodes (fluentd-es-ds.yaml sets a nodeSelector; if you skip the labels, the DaemonSet will not schedule)

kubectl get nodes

kubectl label nodes master beta.kubernetes.io/fluentd-ds-ready=true

kubectl label nodes node2 beta.kubernetes.io/fluentd-ds-ready=true

kubectl label nodes node3 beta.kubernetes.io/fluentd-ds-ready=true

12.4 Apply the manifests

kubectl create -f .

12.5 Check the results

kubectl get pods -n kube-system -o wide|grep -E 'elasticsearch|fluentd|kibana'

Reference blogs: https://jimmysong.io/kubernetes-handbook/practice/create-tls-and-secret-key.html

https://mritd.me/2018/01/07/kubernetes-tls-bootstrapping-note/

https://juejin.im/post/5b45cea9f265da0f652370ce

http://www.ruanyifeng.com/blog/2016/03/systemd-tutorial-part-two.html

https://www.cnblogs.com/RainingNight/p/deploying-k8s-dashboard-ui.html

https://github.com/opsnull/follow-me-install-kubernetes-cluster

This article took a lot of time and effort; I hit many pitfalls, fell down and got back up again and again before finishing. You really do not know the pain until you do it yourself.

I hope what I have written serves as a record, and that it is useful to you. This article draws on several excellent blogs, linked above. There are still some problems I ran into, along with some opinions of my own, that I have not posted; once I figure out a better layout I will complete it.

