
Rancher 2.x / K8s usage notes (ongoing updates)


1. ingress-nginx: raising the default Nginx upload size limit

When uploading files larger than 1 MB through Rancher's load balancer (ingress), the request fails. Inspecting the ingress-nginx container shows the configuration:

client_max_body_size 1m;

Solution:

Add the annotation when creating the Ingress:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
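For context, a minimal complete Ingress carrying this annotation could look like the sketch below; the name, namespace, host, and backend service are placeholders, not values from the original setup:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  namespace: demo
  annotations:
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
spec:
  rules:
  - host: demo.example.com
    http:
      paths:
      - path: /
        backend:
          serviceName: demo-service
          servicePort: 80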

2. Backing up GitLab automatically with a Kubernetes CronJob

Outside Kubernetes, this is normally done with a scheduled task (cron) that runs the backup command:

gitlab-rake gitlab:backup:create
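As a sketch, for an Omnibus GitLab install the scheduled task could be a crontab entry along these lines; the time and binary path are assumptions, not taken from the original setup:

# run a GitLab backup every day at 02:00 (Omnibus install path assumed)
0 2 * * * /opt/gitlab/bin/gitlab-rake gitlab:backup:create CRON=1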

For GitLab running inside Kubernetes, a CronJob can run the same command on a schedule:

1) To use kubectl inside the job, a kubeconfig is needed. Create a ConfigMap, here simply named kubeconfig:

# the kubeconfig is at /root/.kube/config by default

kubectl create configmap kubeconfig -n gitlab --from-file=/root/.kube/config
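To confirm the ConfigMap holds the kubeconfig content, it can be inspected afterwards; this check is just a suggestion, not part of the original steps:

kubectl describe configmap kubeconfig -n gitlab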

2) Create the backup script and a ConfigMap for it.

The backup script:

#!/bin/sh
# look up the GitLab pod via the mounted kubeconfig (assumes a single pod carries the app=gitlab label)
pod_name=$(kubectl get pods -l app=gitlab -o jsonpath='{.items[*].metadata.name}' -n gitlab --kubeconfig=/etc/kubeconfig/config)
# run the backup inside that pod
kubectl --kubeconfig=/etc/kubeconfig/config exec "$pod_name" -n gitlab -- gitlab-rake gitlab:backup:create

Load the script into a ConfigMap.
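A sketch of that step, assuming the script above was saved locally as demo.sh; the file name has to match the /home/demo.sh path used in the CronJob below, and backup-config is the ConfigMap name the CronJob references:

kubectl create configmap backup-config -n gitlab --from-file=demo.sh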

3) Mount both ConfigMaps at the expected paths: the kubeconfig at /etc/kubeconfig/config, and the backup script at the path it will be run from.

4) Create a CronJob that runs the mounted script on a schedule:

apiVersion: v1
items:
- apiVersion: batch/v1beta1
  kind: CronJob
  metadata:
    name: gitlab2-backup
    namespace: gitlab
  spec:
    concurrencyPolicy: Allow
    failedJobsHistoryLimit: 10
    jobTemplate:
      metadata:
        creationTimestamp: null
      spec:
        template:
          metadata: {}
          spec:
            containers:
            - command:
              - sh
              - /home/demo.sh
              image: lachlanevenson/k8s-kubectl:v1.17.0
              imagePullPolicy: IfNotPresent
              name: gitlab2-backup
              resources: {}
              stdin: true
              terminationMessagePath: /dev/termination-log
              terminationMessagePolicy: File
              tty: true
              volumeMounts:
              - mountPath: /etc/kubeconfig
                name: vol1
              - mountPath: /home
                name: vol2
            volumes:
            - configMap:
                defaultMode: 420
                name: kubeconfig
                optional: false
              name: vol1
            - configMap:
                defaultMode: 493
                name: backup-config
                optional: false
              name: vol2
    schedule: 25 8 * * *
    successfulJobsHistoryLimit: 10
    suspend: false
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""

3. Error when adding a new node in Rancher

Runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized

The new node turned out to have no configuration files under /etc/cni/net.d/; copying them over from another node clears the error:

10-canal.conflist calico-kubeconfig
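A sketch of the copy, run from a healthy node and assuming the new node is reachable as node3 (a placeholder hostname):

scp /etc/cni/net.d/10-canal.conflist /etc/cni/net.d/calico-kubeconfig node3:/etc/cni/net.d/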

The node then shows as healthy, but pods scheduled to it fail with:

Warning  FailedCreatePodSandBox  Failed create pod sandbox: rpc error: code = Unknown desc = [failed to set up sandbox container "75b6c96ee03bcdb754b01c126afb8f77016000a27e1ad7d55bd4d1e31c7889c4" network for pod "demo1111-645996f944-lrkwq": NetworkPlugin cni failed to set up pod "demo1111-645996f944-lrkwq_yj-test" network: failed to find plugin "loopback" in path [/opt/cni/bin], failed to clean up sandbox container "75b6c96ee03bcdb754b01c126afb8f77016000a27e1ad7d55bd4d1e31c7889c4" network for pod "demo1111-645996f944-lrkwq": NetworkPlugin cni failed to teardown pod "demo1111-645996f944-lrkwq_yj-test" network: failed to find plugin "portmap" in path [/opt/cni/bin]]

Checking /opt/cni/bin shows the CNI plugin binaries are missing there as well; as above, they can be copied over from another node.
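Again as a sketch, with node3 standing in for the new node:

scp /opt/cni/bin/* node3:/opt/cni/bin/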

Still, these files should not simply go missing for no reason. Looking at the Rancher System project showed that the canal pod in the kube-system namespace had not started successfully on node3; it turned out the related images were pulling very slowly. Once the images were in place (I imported them manually), everything returned to normal.
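To spot this situation and work around a slow pull, something along these lines can help; <canal-pod-name> and <image:tag> are placeholders to fill in, and docker is assumed as the container runtime:

# check whether the canal pod is running on every node
kubectl get pods -n kube-system -o wide | grep canal
# see which images the stuck pod is trying to pull
kubectl describe pod <canal-pod-name> -n kube-system
# on a healthy node, export the image; then load it on the new node
docker save <image:tag> -o image.tar
docker load -i image.tar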
