千家信息网

What to Do When kubeadm Installation Fails

Published: 2024-11-17 · By: 千家信息网 editor

This article shares how to troubleshoot a kubeadm installation error. The editor found it quite practical, so it is shared here for reference.

kubeadm installation error

Symptom:

Version 1.15 was too old and I wanted the latest release, but an upgrade has to go one minor version at a time, so I gave up on upgrading and reinstalled from scratch: first `kubeadm reset`, then the same installation steps as before.

kubeadm init on the master node then failed as follows:

```
[root@k01 ~]# kubeadm init --apiserver-advertise-address=10.129.42.131 \
    --image-repository registry.cn-hangzhou.aliyuncs.com/lfy_k8s_images \
    --kubernetes-version v1.21.0 \
    --service-cidr=10.96.0.0/16 \
    --pod-network-cidr=192.168.0.0/16
...
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 8.8.8.8:53: no such host.
(the two kubelet-check lines above repeat several more times)

        Unfortunately, an error has occurred:
                timed out waiting for the condition

        This error is likely caused by:
                - The kubelet is not running
                - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

        If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
                - 'systemctl status kubelet'
                - 'journalctl -xeu kubelet'

        Additionally, a control plane component may have crashed or exited when started by the container runtime.
        To troubleshoot, list all containers using your preferred container runtimes CLI.
        Here is one example how you may list all Kubernetes containers running in docker:
                - 'docker ps -a | grep kube | grep -v pause'
                Once you have found the failing container, you can inspect its logs with:
                - 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
```

Troubleshooting

1. Checking based on the init output:

The kubelet was in the active state, which looked normal;

Looking at the output, though: why would it try to `lookup localhost on 8.8.8.8:53` at all? Puzzled, I pinged localhost; it answered from 127.0.0.1, which seemed fine, so at first I ignored this part of the error...

Manually running the health check from the error message also succeeded:

```
[root@k01 ~]# curl -sSL http://localhost:10248/healthz
ok
```
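One plausible explanation for the discrepancy (an assumption, not something the original post verified): ping and curl resolve names through the glibc/NSS chain, which can answer `localhost` even without an /etc/hosts entry (for example via the nss-myhostname module), while the failing lookup in the kubelet error clearly went straight to the configured DNS server. A quick sketch to compare the two views; `getent` exercises the same NSS path that ping and curl use:

```shell
#!/bin/sh
# Sketch: compare the two resolution views. getent follows the glibc/NSS
# chain (the path ping and curl take), while a lookup that misses
# /etc/hosts would instead be sent to the nameserver(s) printed below.
getent hosts localhost || echo "NSS cannot resolve localhost"
grep '^nameserver' /etc/resolv.conf || true
```

If the first command answers but the nameserver list shows only an external resolver such as 8.8.8.8, a component that skips the NSS modules has nowhere local to resolve `localhost`.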

2. Checking the messages log

The log content was similar to the init output, but it made clear this was a network problem:

```
[root@k01 ~]# tail -f /var/log/messages
May 17 13:01:07 DouyuTest01 kubelet: I0517 13:01:07.003029   10865 kubelet.go:461] "Kubelet nodes not sync"
May 17 13:01:07 DouyuTest01 kubelet: E0517 13:01:07.799345   10865 kubelet.go:2218] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
(the "Kubelet nodes not sync" info line repeats several times per second and is elided below)
May 17 13:01:10 DouyuTest01 kubelet: E0517 13:01:10.025554   10865 event.go:273] Unable to write event: '&v1.Event{... Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"k01"} ...}': 'Post "https://10.129.42.131:6443/api/v1/namespaces/default/events": dial tcp 10.129.42.131:6443: connect: connection refused'(may retry after sleeping)
May 17 13:01:10 DouyuTest01 kubelet: E0517 13:01:10.107623   10865 controller.go:144] failed to ensure lease exists, will retry in 7s, error: Get "https://10.129.42.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/k01?timeout=10s": dial tcp 10.129.42.131:6443: connect: connection refused
May 17 13:01:10 DouyuTest01 kubelet: I0517 13:01:10.715642   10865 kubelet_node_status.go:71] "Attempting to register node" node="k01"
May 17 13:01:10 DouyuTest01 kubelet: E0517 13:01:10.716053   10865 kubelet_node_status.go:93] "Unable to register node with API server" err="Post \"https://10.129.42.131:6443/api/v1/nodes\": dial tcp 10.129.42.131:6443: connect: connection refused" node="k01"
May 17 13:01:11 DouyuTest01 kubelet: I0517 13:01:11.133807   10865 cni.go:239] "Unable to update cni config" err="no networks found in /etc/cni/net.
```
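Most of the messages output is the same info-level line repeated, which buries the real failures. One way to cut the noise is to keep only klog error records, i.e. lines whose kubelet message starts with `E`. A small sketch, run here against an inline sample rather than the live /var/log/messages:

```shell
#!/bin/sh
# Sketch: keep only klog error records ("kubelet: E...") from a syslog
# extract, dropping the repeated "Kubelet nodes not sync" info noise.
# Point kubelet_errors at /var/log/messages on a real node.

kubelet_errors() {
    # klog error lines look like: kubelet: E0517 13:01:07.799345 ...
    grep -E 'kubelet: E[0-9]{4} ' "$1"
}

# Tiny sample in the shape of the log output above
cat > /tmp/messages.sample <<'EOF'
May 17 13:01:07 DouyuTest01 kubelet: I0517 13:01:07.003029 10865 kubelet.go:461] "Kubelet nodes not sync"
May 17 13:01:07 DouyuTest01 kubelet: E0517 13:01:07.799345 10865 kubelet.go:2218] "Container runtime network not ready"
EOF

kubelet_errors /tmp/messages.sample
```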

3. Searching Baidu

Baidu turned up nothing either. As a last-ditch attempt, I changed the hosts mapping for localhost:

```
[root@k01 ~]# cat /etc/hosts
10.129.42.131 k01 localhost
10.129.42.151 k02
10.129.42.152 k03
10.129.42.155 hub.atguigu.com
```
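If this fix has to be repeated on the other nodes (k02, k03), appending the record idempotently is safer than hand-editing, since re-running provisioning then never duplicates the line. A sketch below; the helper only guards on the `localhost` name (the record that was missing here), and the demo works on a scratch file so it can be tried without touching the real /etc/hosts:

```shell
#!/bin/sh
# Sketch: append a hosts record only when no localhost mapping exists.
# The IP and names come from the node in this article.

add_hosts_entry() {
    # $1: hosts file  $2: IP  remaining args: hostnames
    file=$1; ip=$2; shift 2
    if ! grep -Eq '(^|[[:space:]])localhost([[:space:]]|$)' "$file"; then
        echo "$ip $*" >> "$file"
    fi
}

: > /tmp/hosts.test                                          # scratch file, not /etc/hosts
add_hosts_entry /tmp/hosts.test 10.129.42.131 k01 localhost
add_hosts_entry /tmp/hosts.test 10.129.42.131 k01 localhost  # second call: no-op
cat /tmp/hosts.test
```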

That fixed it. Note how the failure mode changed on the next run: localhost now resolved to 10.129.42.131, so the transient errors became connection refused (the kubelet still starting) rather than a DNS lookup failure, and initialization went on to succeed:

```
...
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 10.129.42.131:10248: connect: connection refused.
(the two kubelet-check lines above repeat twice more while the kubelet starts)
[apiclient] All control plane components are healthy after 65.003162 seconds
...
```

Solution

Added the hosts record `10.129.42.131 k01 localhost`. The root cause: with no usable localhost entry in /etc/hosts, the lookup fell through to the configured DNS server 8.8.8.8, which naturally has no record for localhost, producing the error:
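To avoid rediscovering this the hard way, a pre-flight check before `kubeadm init` can fail fast when the hosts file has no localhost mapping. This is a sketch, not part of kubeadm; the path is a parameter purely so the check can be exercised on test copies:

```shell
#!/bin/sh
# Sketch of a pre-flight check before `kubeadm init`: fail fast when the
# hosts file has no localhost mapping, since that is what pushed the
# kubelet healthz lookup out to 8.8.8.8 above.

preflight_localhost() {
    hosts=$1
    # skip comment lines, then look for localhost as a whole word
    if grep -Ev '^[[:space:]]*#' "$hosts" | \
       grep -Eq '(^|[[:space:]])localhost([[:space:]]|$)'; then
        echo "ok: localhost mapped in $hosts"
    else
        echo "fail: add a localhost record to $hosts before kubeadm init"
        return 1
    fi
}

preflight_localhost /etc/hosts || true   # report-only in this sketch
```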

```
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp: lookup localhost on 8.8.8.8:53: no such host.
[kubelet-check] It seems like the kubelet isn't running or healthy.
```

Thanks for reading! That wraps up "What to Do When kubeadm Installation Fails". I hope it was helpful; if you found the article useful, feel free to share it so more people can see it.
