[K8s Troubleshooting] Cannot reach a Service or its clusterIP from inside a Pod in the cluster
Background: During a production deployment, the application's configuration pointed at a cluster Service address, but after deployment the service could not be reached. I started a busybox Pod to test. Inside busybox, CoreDNS resolved the Service name to an IP correctly, but pinging the Service name failed, and pinging the clusterIP directly failed as well.
Troubleshooting: I first checked kube-proxy; it had started normally, and restarting it made no difference, the ping still failed. I then checked the network plugin and restarted flannel, again with no effect. Then I remembered another K8s environment of mine where pinging a Service does work, so I compared the configuration of the two environments. The only difference was in the kube-proxy configuration: the working environment runs kube-proxy with --proxy-mode=ipvs, while the failing environment uses the default mode (iptables).
In iptables mode there is no actual network device behind the clusterIP to respond: the clusterIP exists only as NAT rules, so nothing answers an ICMP ping. In ipvs mode, kube-proxy binds clusterIPs to the dummy interface kube-ipvs0, so ping gets a reply.
After several more tests, I added --proxy-mode=ipvs, flushed the firewall rules on the nodes, and restarted kube-proxy; after that the Service could be pinged normally.
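The difference can be checked directly on a node. A minimal sketch, using the nginx-service clusterIP 10.1.58.65 from the test later in this post (substitute your own Service IP):

# iptables mode: the clusterIP appears only in NAT rules and is not assigned to any interface
iptables-save -t nat | grep 10.1.58.65      # only KUBE-SERVICES / DNAT rules
ip addr | grep 10.1.58.65                   # no output: nothing answers ICMP

# ipvs mode: kube-proxy binds every clusterIP to the dummy interface kube-ipvs0,
# which is why ping from a Pod now gets a reply
ip addr show kube-ipvs0 | grep 10.1.58.65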
When learning K8s I had always glossed over the underlying traffic forwarding, i.e. the details of IPVS and iptables, assuming that whichever mode was used, it only mattered that traffic reached the Pods. Lesson learned: these details deserve more attention.
Addendum: switching kube-proxy to ipvs mode on a cluster deployed with kubeadm.
By default, the logs of the kube-proxy we deployed show the following, including the line: Flag proxy-mode="" unknown, assuming iptables proxy
[root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-ppdb6
W1013 06:55:35.773739 1 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.868822 1 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.869786 1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.870800 1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.876832 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I1013 06:55:35.890892 1 server_others.go:143] Using iptables Proxier.
I1013 06:55:35.892136 1 server.go:534] Version: v1.15.0
I1013 06:55:35.909025 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1013 06:55:35.909053 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 06:55:35.919298 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1013 06:55:35.945969 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1013 06:55:35.946044 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1013 06:55:35.946623 1 config.go:96] Starting endpoints config controller
I1013 06:55:35.946660 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1013 06:55:35.946695 1 config.go:187] Starting service config controller
I1013 06:55:35.946713 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1013 06:55:36.047121 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I1013 06:55:36.047195 1 controller_utils.go:1036] Caches are synced for service config controller
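Besides reading the log, the active mode can also be queried on a node. This is a sketch that assumes the default metricsBindAddress of 127.0.0.1:10249 (as shown in the ConfigMap below); the /proxyMode endpoint should print iptables or ipvs:

# Query kube-proxy's metrics endpoint on the node for the current proxy mode
curl -s http://127.0.0.1:10249/proxyMode
# expected output: "iptables" before the change, "ipvs" after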
Here we need to edit the kube-proxy ConfigMap and set mode to "ipvs".
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
...
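If you prefer not to edit interactively, the same change can be scripted. A minimal sketch, assuming the field currently reads mode: "" in the ConfigMap:

# Rewrite mode: "" to mode: "ipvs" in the kube-proxy ConfigMap and apply it back
kubectl -n kube-system get cm kube-proxy -o yaml | \
  sed 's/mode: ""/mode: "ipvs"/' | kubectl apply -f -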
Note that ipvs mode requires the ip_vs-related kernel modules to be loaded (on every node):
[root@k8s-master ~]# cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
[root@k8s-master ~]# chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
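The /etc/sysconfig/modules/ path above is specific to RHEL/CentOS boot scripts; on distributions that rely on systemd-modules-load, an equivalent sketch would be:

# Load the IPVS modules at boot via systemd-modules-load (one module name per line)
cat > /etc/modules-load.d/ipvs.conf <<EOF
ip_vs
ip_vs_rr
ip_vs_wrr
ip_vs_sh
nf_conntrack_ipv4
EOF
systemctl restart systemd-modules-load.service

Note that on kernels 4.19 and newer the conntrack module is named nf_conntrack rather than nf_conntrack_ipv4.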
Then restart the kube-proxy Pods:
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-62gvr" deleted
pod "kube-proxy-n2rml" deleted
pod "kube-proxy-ppdb6" deleted
pod "kube-proxy-rr9cg" deleted
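Deleting the Pods one by one works because the DaemonSet recreates them; on kubectl v1.15 and later the same can be done with a rolling restart:

# Restart all kube-proxy Pods via their DaemonSet (kubectl 1.15+)
kubectl -n kube-system rollout restart daemonset kube-proxy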
After the Pods come back up, check the log again; the mode is now ipvs:
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-cbm8p   1/1   Running   0   85s
kube-proxy-d97pn   1/1   Running   0   83s
kube-proxy-gmq6s   1/1   Running   0   76s
kube-proxy-x6tcg   1/1   Running   0   81s
[root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-cbm8p
I1013 07:34:38.685794 1 server_others.go:170] Using ipvs Proxier.
W1013 07:34:38.686066 1 proxier.go:401] IPVS scheduler not specified, use rr by default
I1013 07:34:38.687224 1 server.go:534] Version: v1.15.0
I1013 07:34:38.692777 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 07:34:38.693378 1 config.go:187] Starting service config controller
I1013 07:34:38.693391 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1013 07:34:38.693406 1 config.go:96] Starting endpoints config controller
I1013 07:34:38.693411 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1013 07:34:38.793684 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I1013 07:34:38.793688 1 controller_utils.go:1036] Caches are synced for service config controller
Test pinging the Service again:
[root@k8s-master ~]# kubectl exec -it dns-test sh
/ # ping nginx-service
PING nginx-service (10.1.58.65): 56 data bytes
64 bytes from 10.1.58.65: seq=0 ttl=64 time=0.033 ms
64 bytes from 10.1.58.65: seq=1 ttl=64 time=0.069 ms
64 bytes from 10.1.58.65: seq=2 ttl=64 time=0.094 ms
64 bytes from 10.1.58.65: seq=3 ttl=64 time=0.057 ms
^C
--- nginx-service ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.033/0.063/0.094 ms
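As a final check, the IPVS virtual server table on a node should now list the clusterIP with its backend Pods. A quick sketch, assuming the ipvsadm package is installed and using the nginx-service clusterIP from the test above:

# List IPVS virtual servers; the Service clusterIP should appear with its backend Pod IPs
ipvsadm -Ln | grep -A 3 10.1.58.65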