[K8s Troubleshooting] clusterIP and Service cannot be accessed from pods inside the cluster
Background: During a production deployment, the access address in a configuration file was set to the cluster's Service. After the deployment, the service could not be reached, so I started a busybox pod to test. Inside busybox, CoreDNS resolved the Service name to an IP just fine, but pinging the Service name failed, and pinging the clusterIP directly also failed.
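For reference, the test looked roughly like this (a minimal sketch; the pod name dns-test and the Service name nginx-service are the ones that appear later in this article, and the busybox image tag is an assumption):

# Start a throwaway busybox pod for in-cluster network tests
kubectl run dns-test --image=busybox:1.28 --restart=Never -it -- sh

# Inside the pod: name resolution through CoreDNS works ...
/ # nslookup nginx-service
# ... but ICMP to the Service name or to the clusterIP gets no reply
/ # ping nginx-service
/ # ping <clusterIP>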
Troubleshooting: I first checked whether kube-proxy was healthy; it had started normally, and restarting it made no difference, the ping still failed. I then checked the network plugin and restarted flannel, again with no effect. Then I remembered another k8s environment of mine where Services could be pinged normally, so I compared the configuration of the two environments. The only difference was in the kube-proxy settings: the environment where ping worked ran kube-proxy with --proxy-mode=ipvs, while the one where it failed used the default mode (iptables).
In iptables mode there is no actual device to respond: the clusterIP exists only as NAT rules, so nothing answers ICMP.
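This can be confirmed on a node: in iptables mode the clusterIP shows up only in the nat table, whereas in ipvs mode kube-proxy binds every clusterIP to the kube-ipvs0 dummy interface, so the kernel itself replies to pings. A rough check (10.1.58.65 is the Service IP seen later in this article; substitute your own):

# iptables mode: the clusterIP exists only as DNAT rules, no interface owns it
iptables -t nat -S | grep 10.1.58.65
ip addr | grep 10.1.58.65        # no output in iptables mode

# ipvs mode: the same address is bound to the kube-ipvs0 dummy device
ip addr show kube-ipvs0
ipvsadm -Ln | grep -A 3 10.1.58.65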
After several more rounds of testing, adding --proxy-mode=ipvs, clearing the firewall rules on the nodes, and restarting kube-proxy made the Service pingable again.
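Roughly, the fix looked like the sketch below. It assumes kube-proxy runs as the kubeadm DaemonSet with the k8s-app=kube-proxy label, and that the node's iptables rules contain nothing besides what kube-proxy and the CNI plugin will regenerate; treat it as an outline, not a copy-paste recipe:

# On each node: flush the rules left behind by the iptables proxier
# (kube-proxy and flannel will rebuild what they need)
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X

# Recreate the kube-proxy pods so they come back up in ipvs mode
kubectl -n kube-system delete pod -l k8s-app=kube-proxy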
While learning K8s I had always glossed over the underlying traffic-forwarding layer, that is, IPVS and iptables, thinking that as long as traffic reached the pods it did not matter which mode was used and the details could be ignored. I will have to be more careful about them from now on.
Addendum: switching kube-proxy to ipvs mode on a kubeadm-deployed cluster.
By default, looking at the logs of the kube-proxy we deployed, you can see the message: Flag proxy-mode="" unknown, assuming iptables proxy
[root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-ppdb6
W1013 06:55:35.773739 1 proxier.go:513] Failed to load kernel module ip_vs with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.868822 1 proxier.go:513] Failed to load kernel module ip_vs_rr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.869786 1 proxier.go:513] Failed to load kernel module ip_vs_wrr with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.870800 1 proxier.go:513] Failed to load kernel module ip_vs_sh with modprobe. You can ignore this message when kube-proxy is running inside container without mounting /lib/modules
W1013 06:55:35.876832 1 server_others.go:249] Flag proxy-mode="" unknown, assuming iptables proxy
I1013 06:55:35.890892 1 server_others.go:143] Using iptables Proxier.
I1013 06:55:35.892136 1 server.go:534] Version: v1.15.0
I1013 06:55:35.909025 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_max' to 131072
I1013 06:55:35.909053 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 06:55:35.919298 1 conntrack.go:83] Setting conntrack hashsize to 32768
I1013 06:55:35.945969 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_established' to 86400
I1013 06:55:35.946044 1 conntrack.go:100] Set sysctl 'net/netfilter/nf_conntrack_tcp_timeout_close_wait' to 3600
I1013 06:55:35.946623 1 config.go:96] Starting endpoints config controller
I1013 06:55:35.946660 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1013 06:55:35.946695 1 config.go:187] Starting service config controller
I1013 06:55:35.946713 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1013 06:55:36.047121 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I1013 06:55:36.047195 1 controller_utils.go:1036] Caches are synced for service config controller
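The active mode can also be queried directly from kube-proxy's metrics endpoint on a node (this assumes metricsBindAddress is 127.0.0.1:10249, as in the ConfigMap shown below):

# Run on any node; prints "iptables" or "ipvs"
curl -s http://127.0.0.1:10249/proxyMode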
Here we need to edit the kube-proxy configuration and set mode to ipvs.
[root@k8s-master ~]# kubectl edit cm kube-proxy -n kube-system
...
    ipvs:
      excludeCIDRs: null
      minSyncPeriod: 0s
      scheduler: ""
      strictARP: false
      syncPeriod: 30s
    kind: KubeProxyConfiguration
    metricsBindAddress: 127.0.0.1:10249
    mode: "ipvs"
...
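The mode field lives inside the config.conf key of the ConfigMap; after saving, a quick way to confirm the value took (purely a convenience check) is:

kubectl -n kube-system get cm kube-proxy -o yaml | grep 'mode:'
# expected output: mode: "ipvs"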
Note that ipvs mode requires the relevant ip_vs kernel modules to be loaded:
cat > /etc/sysconfig/modules/ipvs.modules <<EOF
#!/bin/bash
modprobe -- ip_vs
modprobe -- ip_vs_rr
modprobe -- ip_vs_wrr
modprobe -- ip_vs_sh
modprobe -- nf_conntrack_ipv4
EOF
chmod 755 /etc/sysconfig/modules/ipvs.modules && bash /etc/sysconfig/modules/ipvs.modules && lsmod | grep -e ip_vs -e nf_conntrack_ipv4
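One caveat: on kernels 4.19 and newer, nf_conntrack_ipv4 has been merged into nf_conntrack, so the last modprobe above fails; load nf_conntrack instead (a sketch, adjust to your distribution):

# Kernel >= 4.19: nf_conntrack_ipv4 no longer exists as a separate module
modprobe -- nf_conntrack
lsmod | grep -e ip_vs -e nf_conntrack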
Restart the kube-proxy pods:
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy | awk '{system("kubectl delete pod "$1" -n kube-system")}'
pod "kube-proxy-62gvr" deleted
pod "kube-proxy-n2rml" deleted
pod "kube-proxy-ppdb6" deleted
pod "kube-proxy-rr9cg" deleted
After the pods come back up, check the logs again; the mode has changed to ipvs.
[root@k8s-master ~]# kubectl get pod -n kube-system | grep kube-proxy
kube-proxy-cbm8p 1/1 Running 0 85s
kube-proxy-d97pn 1/1 Running 0 83s
kube-proxy-gmq6s 1/1 Running 0 76s
kube-proxy-x6tcg 1/1 Running 0 81s
[root@k8s-master ~]# kubectl logs -n kube-system kube-proxy-cbm8p
I1013 07:34:38.685794 1 server_others.go:170] Using ipvs Proxier.
W1013 07:34:38.686066 1 proxier.go:401] IPVS scheduler not specified, use rr by default
I1013 07:34:38.687224 1 server.go:534] Version: v1.15.0
I1013 07:34:38.692777 1 conntrack.go:52] Setting nf_conntrack_max to 131072
I1013 07:34:38.693378 1 config.go:187] Starting service config controller
I1013 07:34:38.693391 1 controller_utils.go:1029] Waiting for caches to sync for service config controller
I1013 07:34:38.693406 1 config.go:96] Starting endpoints config controller
I1013 07:34:38.693411 1 controller_utils.go:1029] Waiting for caches to sync for endpoints config controller
I1013 07:34:38.793684 1 controller_utils.go:1036] Caches are synced for endpoints config controller
I1013 07:34:38.793688 1 controller_utils.go:1036] Caches are synced for service config controller
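Optionally, before retesting, the IPVS virtual servers that kube-proxy created can be inspected on any node (this requires the ipvsadm tool, which kubeadm does not install by itself):

# Each clusterIP:port should appear as a virtual server with the backing pods as real servers
ipvsadm -Ln
# The clusterIPs are now also bound to the kube-ipvs0 dummy interface
ip addr show kube-ipvs0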
Test pinging the Service again:
[root@k8s-master ~]# kubectl exec -it dns-test sh
/ # ping nginx-service
PING nginx-service (10.1.58.65): 56 data bytes
64 bytes from 10.1.58.65: seq=0 ttl=64 time=0.033 ms
64 bytes from 10.1.58.65: seq=1 ttl=64 time=0.069 ms
64 bytes from 10.1.58.65: seq=2 ttl=64 time=0.094 ms
64 bytes from 10.1.58.65: seq=3 ttl=64 time=0.057 ms
^C
--- nginx-service ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.033/0.063/0.094 ms
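As a final note, on a cluster that has not been initialized yet, the same setting can be supplied to kubeadm up front instead of editing the ConfigMap afterwards. A sketch matching the v1.15 cluster shown in the logs above (the file name is illustrative, and the API group versions differ on newer releases):

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.0
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: ipvs
EOF
kubeadm init --config kubeadm-config.yaml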