
K8s Cluster Setup (kubeadm Approach)

Published 2024-12-03

1. At least three CentOS machines
A. At least 2 CPU cores, 2 GB RAM, and a 20 GB disk each
B. All on the same subnet
In this example the addresses are assigned as:

Master:  10.170.0.7
Worker1: 10.170.0.8
Worker2: 10.170.0.9

2. Run ip addr to confirm an IPv4 address has been assigned. If not, run nmtui and tick "Automatically connect".

3. Connect via SSH.

4. Disable the firewall:

systemctl stop firewalld && systemctl disable firewalld
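
If you would rather not disable the firewall outright, a hedged alternative sketch is to open only the ports kubeadm needs (the list follows the official kubeadm port requirements; trim it by node role):

firewall-cmd --permanent --add-port=6443/tcp         # API server (master)
firewall-cmd --permanent --add-port=2379-2380/tcp    # etcd (master)
firewall-cmd --permanent --add-port=10250-10252/tcp  # kubelet, scheduler, controller-manager
firewall-cmd --permanent --add-port=30000-32767/tcp  # NodePort services (workers)
firewall-cmd --reload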

5. Disable SELinux:

sed -i "s/SELINUX=enforcing/SELINUX=disabled/" /etc/selinux/config
setenforce 0

6. Disable swap:

A. swapoff -a
B. Edit /etc/fstab and comment out the swap entry (a non-interactive sketch follows)
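
For step B, a minimal sketch that comments out any uncommented swap entry in /etc/fstab:

sed -i '/^[^#].*swap/ s/^/#/' /etc/fstab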

7. Reboot the server for the changes to take effect:

reboot

8. Add a Kubernetes yum repository; the Aliyun or 163 mirrors are recommended:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

9. Install kubelet, kubeadm, and kubectl:

yum install -y kubelet kubeadm kubectl
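
A plain install tracks whatever the repo currently ships, which may be newer than the 1.15.3 this guide targets. To pin the versions, a sketch assuming the mirror still carries these packages:

yum install -y kubelet-1.15.3 kubeadm-1.15.3 kubectl-1.15.3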

10. Enable IPVS for kube-proxy:

vi /etc/sysctl.d/k8s.conf

vm.swappiness = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4

Next, make sure the ipset package is installed on every node:

yum install -y ipset

To make it easier to inspect the IPVS proxy rules, it is also worth installing the management tool:

yum install -y ipvsadm
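
Loading the modules by itself does not switch kube-proxy into IPVS mode. Once the cluster is up (after step 14), a minimal sketch of flipping the mode, assuming the standard kube-proxy ConfigMap that kubeadm creates:

kubectl -n kube-system edit configmap kube-proxy          # set mode: "ipvs" in the embedded config
kubectl -n kube-system delete pod -l k8s-app=kube-proxy   # recreate kube-proxy pods with the new config
ipvsadm -Ln                                               # IPVS virtual servers should now be listed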

11. Install Docker:

yum install -y docker
systemctl start docker && systemctl enable docker

12. Enable kubelet at boot:

systemctl enable kubelet

13. Change the hostname (very important: identical hostnames will cause problems):

hostnamectl set-hostname XXXX
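
As an illustration (the names below are this guide's choice; XXXX above is whatever you prefer), give each machine a unique name and optionally make the names resolvable on every node:

# on the master
hostnamectl set-hostname master
# on each worker, e.g.
hostnamectl set-hostname worker1
# optional: name resolution on every node
cat <<EOF >> /etc/hosts
10.170.0.7 master
10.170.0.8 worker1
10.170.0.9 worker2
EOF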

14. Initialize the master node:

kubeadm init --kubernetes-version=v1.15.3 --apiserver-advertise-address=10.170.0.7 --pod-network-cidr=10.170.0.0/16 --service-cidr=172.20.0.0/16

At the time of writing the latest version is 1.15.3; if the version you pass is wrong, kubeadm's error message will say so.

If you hit a timeout, it is because Docker cannot pull the required images automatically. There are two ways around this (see the sketch after this list):

  • A. Pull the images manually and retag them
  • B. Point kubeadm at a different image repository
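
Before retrying, it can help to see exactly which images this kubeadm release expects, and kubeadm can pull them from a mirror directly. This is a sketch: registry.aliyuncs.com/google_containers is a commonly used mirror and an assumption here, not part of the original guide:

kubeadm config images list --kubernetes-version v1.15.3
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.3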

15. Pull the images manually:

docker pull mirrorgooglecontainers/kube-apiserver:v1.15.3
docker pull mirrorgooglecontainers/kube-controller-manager:v1.15.3
docker pull mirrorgooglecontainers/kube-scheduler:v1.15.3
docker pull mirrorgooglecontainers/kube-proxy:v1.15.3
docker pull mirrorgooglecontainers/pause:3.1
docker pull mirrorgooglecontainers/etcd:3.2.24
docker pull coredns/coredns:1.2.6
docker tag mirrorgooglecontainers/kube-proxy:v1.15.3 k8s.gcr.io/kube-proxy:v1.15.3
docker tag mirrorgooglecontainers/kube-scheduler:v1.15.3 k8s.gcr.io/kube-scheduler:v1.15.3
docker tag mirrorgooglecontainers/kube-apiserver:v1.15.3 k8s.gcr.io/kube-apiserver:v1.15.3
docker tag mirrorgooglecontainers/kube-controller-manager:v1.15.3 k8s.gcr.io/kube-controller-manager:v1.15.3
docker tag mirrorgooglecontainers/etcd:3.2.24 k8s.gcr.io/etcd:3.2.24
docker tag coredns/coredns:1.2.6 k8s.gcr.io/coredns:1.2.6
docker tag mirrorgooglecontainers/pause:3.1 k8s.gcr.io/pause:3.1

Mind the version numbers: if upstream has moved on, adjust them accordingly.

16. Alternatively, use a different image repository by adding this flag to the kubeadm init command from step 14:

kubeadm init --image-repository="mirrorgooglecontainers"

17. Re-run the kubeadm init from step 14.

18. Once initialization succeeds, kubeadm prints the command a worker needs to join the cluster; run it on each worker:

kubeadm join 10.170.0.7:6443 --token xrhtyd.b61v0mzuu6cea8qg \
    --discovery-token-ca-cert-hash sha256:2e5cb96ef0be0e791acb923cf371303b13bda4613b4fd6d11a59bc17a7f8c3dd
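
The join token expires after 24 hours by default. If it has lapsed by the time a worker joins, print a fresh join command on the master:

kubeadm token create --print-join-command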

19. Run the following commands:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

20. Now kubectl get cs shows the status of the control-plane components; make sure they are all healthy.

21. Install a network add-on (this guide uses the officially recommended flannel; the Network value in kube-flannel.yml defaults to 10.244.0.0/16 and must match the --pod-network-cidr passed to kubeadm init, so edit the manifest if you used a different CIDR, and prefer a pod CIDR that does not overlap the node subnet):

yum install -y wget
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml && kubectl apply -f kube-flannel.yml

22. Join the workers to the master (step 18).
The cluster is now complete; check the status of every node with kubectl get nodes.
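
A quick sanity check, plus an optional label so the ROLES column is populated (worker1 is the hostname assumed earlier in this guide):

kubectl get nodes -o wide
kubectl label node worker1 node-role.kubernetes.io/worker=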

Installing Prometheus
1. Pull the required Docker images on each node:

docker pull prom/node-exporter
docker pull prom/prometheus:v2.0.0
docker pull grafana/grafana:4.2.0

2. Deploy the node-exporter component as a DaemonSet:

vi node-exporter.yaml

---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: prom/node-exporter
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

kubectl create -f node-exporter.yaml
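
Once the DaemonSet is running, every node exposes metrics on NodePort 31672; a quick check against one of the workers from this example:

curl -s http://10.170.0.8:31672/metrics | head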

3. Set up RBAC for the Prometheus component:

vi rbac-setup.yaml

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
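
To confirm the binding works once created, you can impersonate the service account and ask the API server (the kubeadm admin kubeconfig has the required impersonation rights):

kubectl auth can-i list pods --as=system:serviceaccount:kube-system:prometheus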

4. Manage the Prometheus configuration file as a ConfigMap:

vi configmap.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:
    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https
    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics
    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name
    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name
    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
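
Note that the kubernetes-service-endpoints job only keeps services annotated with prometheus.io/scrape=true. For example, to have Prometheus pick up the node-exporter service from step 2:

kubectl annotate service node-exporter -n kube-system prometheus.io/scrape="true"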

5. The Deployment file:

vi prometheus.deploy.yml

apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: prom/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config

6. The Prometheus Service file:

vi prometheus.svc.yml

kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus

7. Create everything:

kubectl create -f rbac-setup.yaml
kubectl create -f configmap.yaml
kubectl create -f prometheus.deploy.yml
kubectl create -f prometheus.svc.yml
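
To verify, check the pod and the service, then open the Prometheus UI on any node at port 30003:

kubectl get pods -n kube-system -l app=prometheus
kubectl get svc -n kube-system prometheus
# UI: http://<any-node-ip>:30003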