How to Set Up a Kubernetes Cluster
0. Overview
Use kubeadm to set up a single-node Kubernetes instance, for learning purposes only. The environment and software are summarized below:
| Component | Version | Notes |
|---|---|---|
| OS | Ubuntu 18.04 | 192.168.132.152 my.servermaster.local / 192.168.132.154 my.worker01.local |
| Docker | 18.06.1~ce~3-0~ubuntu | Highest version supported by the latest k8s (1.12.3); must be pinned |
| Kubernetes | 1.12.3 | Target software |
The system and software above were roughly the latest as of November 2018. Note in particular that Docker must be pinned to a version that k8s supports.
1. Installation Steps
Disable the swap partition:

```shell
swapoff -a
```
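Note that `swapoff -a` only disables swap until the next reboot; to make the change permanent, the swap entries in /etc/fstab are usually commented out as well. A minimal sketch, run here against a sample copy so it is safe to try (on a real node the target is /etc/fstab):

```shell
# Demonstrate the edit on a sample fstab; on a real node, target /etc/fstab.
printf 'UUID=abcd / ext4 defaults 0 1\n/swapfile none swap sw 0 0\n' > /tmp/fstab.sample
# Comment out every uncommented line that contains a swap entry.
sed -i '/ swap / s/^[^#]/#&/' /tmp/fstab.sample
cat /tmp/fstab.sample
```

With the real /etc/fstab edited this way, swap stays off across reboots, which the kubelet requires.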
Install the container runtime. Docker is the default, so installing Docker is enough:

```shell
apt-get install docker-ce=18.06.1~ce~3-0~ubuntu
```
Install kubeadm. The commands below match the official documentation, except that the package source is switched to the Aliyun mirror:

```shell
apt-get update && apt-get install -y apt-transport-https
curl -s https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
apt-mark hold kubelet kubeadm kubectl
```
2. Create the Cluster with kubeadm
2.1 Prepare the images
Because k8s.gcr.io is not reachable from mainland China, the required images must be downloaded in advance. Here they are pulled from the Aliyun mirror registry, and the downloaded images are then re-tagged as k8s.gcr.io:
```shell
# a. List the images that need to be downloaded
kubeadm config images list --kubernetes-version=v1.12.3
k8s.gcr.io/kube-apiserver:v1.12.3
k8s.gcr.io/kube-controller-manager:v1.12.3
k8s.gcr.io/kube-scheduler:v1.12.3
k8s.gcr.io/kube-proxy:v1.12.3
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.2.24
k8s.gcr.io/coredns:1.2.2

# b. Create a script that automates pull image -> re-tag -> remove old tag
vim ./load_images.sh
```

```shell
#!/bin/bash

### config the image map
declare -A images
images["k8s.gcr.io/kube-apiserver:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3"
images["k8s.gcr.io/kube-controller-manager:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.12.3"
images["k8s.gcr.io/kube-scheduler:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.12.3"
images["k8s.gcr.io/kube-proxy:v1.12.3"]="registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.12.3"
images["k8s.gcr.io/pause:3.1"]="registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1"
images["k8s.gcr.io/etcd:3.2.24"]="registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.2.24"
images["k8s.gcr.io/coredns:1.2.2"]="registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.2.2"

### pull, re-tag, and drop the mirror tag for each image
for key in "${!images[@]}"
do
  docker pull ${images[$key]}
  docker tag ${images[$key]} $key
  docker rmi ${images[$key]}
done

### check
docker images
```

```shell
# c. Run the script to prepare the images
sudo chmod +x load_images.sh
./load_images.sh
```
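The map in load_images.sh hardcodes every image pair, but each mirror name can also be derived from the k8s.gcr.io name, since only the registry prefix differs (assuming, as the script above does, that every image exists under the same name in the Aliyun google_containers namespace):

```shell
# Derive the mirror image name by swapping the registry prefix.
src="k8s.gcr.io/kube-apiserver:v1.12.3"
mirror="registry.cn-hangzhou.aliyuncs.com/google_containers/${src#k8s.gcr.io/}"
echo "$mirror"
# -> registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.12.3
```

Feeding the output of `kubeadm config images list` through this substitution avoids editing the script when the version list changes.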
2.2 Initialize the cluster (master)
Initialization requires at least two parameters:

- kubernetes-version: keeps kubeadm from going online to look up the version
- pod-network-cidr: required by the flannel network plugin configuration
```shell
### run the init command
sudo kubeadm init --kubernetes-version=v1.12.3 --pod-network-cidr=10.244.0.0/16

### the end of the output looks like this
... ...
Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node
as root:

  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
```
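The kubeadm join line at the end of this output is needed again when adding workers, so it is worth saving. One way is to keep the init output in a log file and grep the command back out; a sketch against a saved sample of the output above:

```shell
# Save a fragment of the init output (sample; token values are from the run above).
cat > /tmp/kubeadm-init.log <<'EOF'
Your Kubernetes master has initialized successfully!
  kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
EOF
# Extract the join command for later use on worker nodes.
grep -o 'kubeadm join .*' /tmp/kubeadm-init.log > /tmp/join-command.sh
cat /tmp/join-command.sh
```

If the token is lost entirely, `kubeadm token create --print-join-command` on the master generates a fresh one.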
2.3 Configure kubectl for a non-root account, per the success message

```shell
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```
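For root, or for a one-off shell, copying the file is not strictly necessary; the kubeadm documentation also suggests pointing kubectl at the admin kubeconfig through an environment variable:

```shell
# Use the admin kubeconfig directly instead of copying it to ~/.kube/config.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```

This only affects the current shell, which also makes it handy for switching between clusters.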
Check the nodes as the non-root account:

```shell
kubectl get nodes
NAME           STATUS     ROLES    AGE   VERSION
servermaster   NotReady   master   28m   v1.12.3
```
There is now one master node, but its status is NotReady. At this point a decision is needed:

To run a single-node cluster, allow workloads on the master:

```shell
kubectl taint nodes --all node-role.kubernetes.io/master-
```

To keep building a multi-node cluster, continue with the following steps; the master's status can be ignored for now.
2.4 Apply the network plugin
Review the contents of kube-flannel.yml and copy them into a local file, in case the terminal cannot fetch the remote file:
```shell
kubectl apply -f kube-flannel.yml
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.extensions/kube-flannel-ds-amd64 created
daemonset.extensions/kube-flannel-ds-arm64 created
daemonset.extensions/kube-flannel-ds-arm created
daemonset.extensions/kube-flannel-ds-ppc64le created
daemonset.extensions/kube-flannel-ds-s390x created
```
2.5 Add a worker node
To add a worker, repeat [1. Installation Steps] on another server. The worker does not need step 2.1 through 2.3 or anything after; only the basic installation. Once that is done, log in to the new worker node and run the join command obtained at the end of the init step:
```shell
kubeadm join 192.168.132.152:6443 --token ymny55.4jlbbkxiggmn9ezh --discovery-token-ca-cert-hash sha256:70265fafdb22d524c15616543d0b76527c686329221340b3b8da3652abed46b9
... ...
This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the master to see this node join the cluster.
```
2.6 Check the cluster (1 master, 1 worker)
```shell
kubectl get nodes
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
```
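For scripting, the same check can be automated by counting nodes whose STATUS is not Ready; a sketch run against a saved copy of the output above so that it is self-contained:

```shell
# Save the `kubectl get nodes` output from above.
cat > /tmp/nodes.txt <<'EOF'
NAME           STATUS   ROLES    AGE   VERSION
servermaster   Ready    master   94m   v1.12.3
worker01       Ready    <none>   54m   v1.12.3
EOF
# Skip the header, then count rows whose second column is not "Ready".
notready=$(tail -n +2 /tmp/nodes.txt | awk '$2 != "Ready"' | wc -l)
echo "not-ready nodes: $notready"
# -> not-ready nodes: 0
```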
2.7 Create the dashboard
Copy the contents of kubernetes-dashboard.yaml into a local file, in case the command line cannot reach the remote file. Then edit the last block, the Dashboard Service, adding type and nodePort so it reads:
```yaml
# ------------------- Dashboard Service ------------------- #
kind: Service
apiVersion: v1
metadata:
  labels:
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
    - port: 443
      targetPort: 8443
      nodePort: 30000
  type: NodePort
  selector:
    k8s-app: kubernetes-dashboard
```
On the master node, run the command to create the dashboard service:
```shell
kubectl create -f kubernetes-dashboard.yaml
secret/kubernetes-dashboard-certs created
serviceaccount/kubernetes-dashboard created
role.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard-minimal created
deployment.apps/kubernetes-dashboard created
service/kubernetes-dashboard created
```
Open the worker node's IP and the NodePort in a browser over https, i.e. https://my.worker01.local:30000/#!/login, to verify that the dashboard is installed.
2.8 Log in to the Dashboard
Fetch a secret with kubectl, then read that secret's full token and copy it into the Token option on the login page from the previous step to log in:
```shell
### list all secrets in the kube-system namespace
kubectl -n kube-system get secret
NAME                                             TYPE                                  DATA   AGE
clusterrole-aggregation-controller-token-vxzmt   kubernetes.io/service-account-token   3      10h

### show the token; here the clusterrole-aggregation-controller-token-***** secret is chosen
kubectl -n kube-system describe secret clusterrole-aggregation-controller-token-vxzmt
Name:         clusterrole-aggregation-controller-token-vxzmt
Namespace:    kube-system
Labels:       <none>
Annotations:  kubernetes.io/service-account.name: clusterrole-aggregation-controller
              kubernetes.io/service-account.uid: dfb9d9c3-f646-11e8-9861-000c29b7e604

Type:  kubernetes.io/service-account-token

Data
====
ca.crt:     1025 bytes
namespace:  11 bytes
token:      eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJjbHVzdGVycm9sZS1hZ2dyZWdhdGlvbi1jb250cm9sbGVyLXRva2VuLXZ4em10Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6ImNsdXN0ZXJyb2xlLWFnZ3JlZ2F0aW9uLWNvbnRyb2xsZXIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC51aWQiOiJkZmI5ZDljMy1mNjQ2LTExZTgtOTg2MS0wMDBjMjliN2U2MDQiLCJzdWIiOiJzeXN0ZW06c2VydmljZWFjY291bnQ6a3ViZS1zeXN0ZW06Y2x1c3RlcnJvbGUtYWdncmVnYXRpb24tY29udHJvbGxlciJ9.MfjiMrmyKl1GUci1ivD5RNrzo_s6wxXwFzgM_3NIAmTCfRQreYdhat3yyd6agaCLpUResnNC0ZGRi4CBs_Jgjqkovhb80V05_YVIvCrlf7xHxBKEtGfkJ-qLDvtAwR5zrXNNd0Ge8hTRxw67gZ3lGMkPpw5nfWmc0rzk90xTTQD1vAtrHMvxjr3dVXph5rT8GNuCSXA_J6o2AwYUbaKCc2ugdx8t8zX6oFJfVcw0ZNYYYIyxoXzzfhdppORtKR9t9v60KsI_-q0TxY-TU-JBtzUJU-hL6lB5MOgoBWpbQiV-aG8Ov74nDC54-DH7EhYEzzsLci6uUQCPlHNvLo_J2A
```
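`kubectl describe secret` already prints the token decoded, but when the same secret is read via `kubectl get secret -o yaml` (or the jsonpath form used in section 4.3) the data fields are base64-encoded and must be decoded. The round trip, shown with a placeholder value rather than a real token:

```shell
# Base64 round trip as used for secret data (placeholder value, not a real token).
encoded=$(printf 'sample-token-value' | base64)
decoded=$(printf '%s' "$encoded" | base64 --decode)
echo "$decoded"
# -> sample-token-value
```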
3. Problems Encountered
The master is up and the worker has joined, but get nodes still shows NotReady
Cause: hard to pin down precisely, and it remains an open k8s issue; reading the issue, it is basically a CNI (Container Network Interface) problem, which installing flannel works around.
Fix: install the flannel plugin (kubectl apply -f kube-flannel.yml)
A misconfiguration calls for rebuilding the cluster from scratch
Fix: kubeadm reset
The dashboard cannot be accessed
Cause: Back-off pulling image "k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0"
Fixes:
Change k8s.gcr.io/kubernetes-dashboard-amd64:v1.10.0 in kubernetes-dashboard.yaml to registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
Or download the image in advance and set up the tag; note that it must be downloaded on the worker node. More detail is available via: kubectl describe pod kubernetes-dashboard-85477d54d7-wzt7 -n kube-system
How to extend the token expiry
Cause: the default is 15 minutes
Fix:

Before the dashboard is created: in the Dashboard Deployment section of kubernetes-dashboard.yaml, add the line `- --token-ttl=86400` to the containers' args (the number is the TTL in seconds and can be customized), like this:

```yaml
... ...
      - name: kubernetes-dashboard
        image: registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetes-dashboard-amd64:v1.10.0
        ports:
        - containerPort: 8443
          protocol: TCP
        args:
          - --auto-generate-certificates
          - --token-ttl=86400
          # Uncomment the following line to manually specify Kubernetes API server Host
          # If not specified, Dashboard will attempt to auto discover the API server and connect
          # to it. Uncomment only if the default does not work.
          # - --apiserver-host=http://my-address:port
        volumeMounts:
... ...
```

After the dashboard is already created: run kubectl -n kube-system edit deployment kubernetes-dashboard and add the same `- --token-ttl=86400` line to the containers' args in the Deployment section, in exactly the same way.
4. Efficiency & Tips
4.1 kubeadm auto-completion
Run kubeadm completion --help for details; here is the bash completion setup directly:
```shell
kubeadm completion bash > ~/.kube/kubeadm_completion.bash.inc
printf "\n# Kubeadm shell completion\nsource '$HOME/.kube/kubeadm_completion.bash.inc'\n" >> $HOME/.bash_profile
source $HOME/.bash_profile
```
4.2 kubectl auto-completion
Run kubectl completion --help for details; here is the bash completion setup. Note that the second command spans multiple lines: do not paste the whole thing at once; paste the printf line first, then the remainder.
```shell
kubectl completion bash > ~/.kube/completion.bash.inc
printf "
# Kubectl shell completion
source '$HOME/.kube/completion.bash.inc'
" >> $HOME/.bash_profile
source $HOME/.bash_profile
```
4.3 Using a private docker registry
Create a secret, then add an imagePullSecrets entry to the spec that references the image. Creating and inspecting the secret:
```shell
kubectl create secret docker-registry regcred --docker-server=registry.domain.cn:5001 --docker-username=xxxxx --docker-password=xxxxx --docker-email=jimmy.w@aliyun.com
kubectl get secret regcred --output=yaml
kubectl get secret regcred --output="jsonpath={.data.\.dockerconfigjson}" | base64 --decode
```
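The decoded `.dockerconfigjson` from the last command above is plain JSON keyed by registry host. A sketch of its shape (field values are placeholders matching the create command; the real payload also contains an `auth` field):

```shell
# Write a sample of the decoded .dockerconfigjson payload (placeholder credentials).
cat > /tmp/dockerconfig.json <<'EOF'
{"auths":{"registry.domain.cn:5001":{"username":"xxxxx","password":"xxxxx","email":"jimmy.w@aliyun.com"}}}
EOF
# The registry host is the top-level key under "auths".
grep -o '"registry.domain.cn:5001"' /tmp/dockerconfig.json
```

Decoding the secret this way is a quick check that the server, username, and email were entered correctly before wiring it into a Pod.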
Configure imagePullSecrets as follows; note that imagePullSecrets is a pod-level field, a sibling of containers, not a per-container field:
```yaml
... ...
  imagePullSecrets:
  - name: regcred
  containers:
  - name: mirage
    image: registry.domain.cn:5001/mirage:latest
    ports:
    - containerPort: 3008
      protocol: TCP
    volumeMounts:
... ...
```
4.4 Use HostAliases to add entries to a Pod's /etc/hosts
Special endpoints, or entries that used to be placed in /etc/hosts, can be configured through hostAliases. They act just like local hosts entries, and the hostAliases configuration is written into the container's /etc/hosts. Usage:
```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "foo.local"
    - "bar.local"
  - ip: "10.1.2.3"
    hostnames:
    - "foo.remote"
    - "bar.remote"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"
```