On any of the K8S nodes, you can see a routing table similar to the following.

> ip route show
default via 192.168.0.1 dev ens18 proto static
...
10.42.1.0/24 via 10.42.1.0 dev flannel.1 onlink
10.42.2.0/24 via 10.42.2.0 dev flannel.1 onlink
...

The 10.42.1.0/24 and 10.42.2.0/24 subnets in this table are the per-node subnets of the K8S Overlay network. And this routing table is your first clue that a Pod can be reached from any node via its Cluster IP.
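
You can confirm which node owns which Overlay subnet by listing the Pod CIDR assigned to each node (a quick sketch assuming kubectl access; the node names below are placeholders):

> kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR
NAME     POD_CIDR
node-1   10.42.1.0/24
node-2   10.42.2.0/24

A ping from one node to a Pod on another node confirms the route works: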

> ping 10.42.2.56
PING 10.42.2.56 (10.42.2.56) 56(84) bytes of data.
64 bytes from 10.42.2.56: icmp_seq=1 ttl=62 time=0.852 ms
...

Note that the Pod is being reached from the node itself, not from inside the K8S Cluster Overlay network; the ttl=62 already hints that the packets were routed across the Overlay rather than answered on the local link.
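
If you want to see the traffic actually take the Overlay path, you can capture on the flannel device while pinging (a sketch; flannel.1 is the VXLAN interface from the routing table above, and the 10.42.1.0 source shown here assumes the pinging node owns the 10.42.1.0/24 subnet):

> tcpdump -ni flannel.1 icmp
IP 10.42.1.0 > 10.42.2.56: ICMP echo request, id ..., seq 1, length 64
IP 10.42.2.56 > 10.42.1.0: ICMP echo reply, id ..., seq 1, length 64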

So now let’s summarize the current situation.

  1. K8S requires that net.ipv4.ip_forward = 1 be enabled during installation (see the documentation section “Forwarding IPv4 and letting iptables see bridged traffic”); a quick check is shown after this list.
  2. Pods within the cluster can be accessed from any node via Cluster IP because the routing table is set on the node.
  3. Implicit case: a Pod can access the subnet where its node is located.
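
Point 1 is easy to verify on any node (the /etc/sysctl.d/k8s.conf path below is the one suggested by the K8S installation docs; your distribution may place it elsewhere):

> sysctl net.ipv4.ip_forward
net.ipv4.ip_forward = 1
> cat /etc/sysctl.d/k8s.conf
net.ipv4.ip_forward = 1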

Combining these three facts, one conclusion can be drawn.

Pods within a K8S cluster can be accessed via Cluster IP by routing through any K8S node.

This is indeed the case. Take a server that sits on the same subnet but is not itself a K8S node, add a routing rule pointing the Pod CIDR at a node, and the ping succeeds (the replies find their way back thanks to point 3 above).

> ip route add 10.42.0.0/16 via 192.168.10.1
> ping 10.42.2.56
PING 10.42.2.56 (10.42.2.56) 56(84) bytes of data.
64 bytes from 10.42.2.56: icmp_seq=1 ttl=62 time=0.788 ms
...
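
From that server, traceroute makes the forwarding hop visible (a sketch; the intermediate addresses depend on your flannel setup, and the timings are illustrative):

> traceroute -n 10.42.2.56
 1  192.168.10.1  0.214 ms
 2  10.42.2.0     0.498 ms
 3  10.42.2.56    0.792 ms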

So the conclusion is:

  • Pods within a K8S cluster can be accessed via Cluster IP by routing through any K8S node.
  • Therefore, the firewall on each K8S node needs inbound rules that strictly define which source IPs are allowed to reach it.
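
With plain iptables, such restrictions could look roughly like the following (a minimal sketch, not a production ruleset: it assumes the trusted clients live in 192.168.10.0/24 as in the experiment above, and it exempts intra-cluster traffic; flannel and kube-proxy also manage the FORWARD chain, so real rules must coexist with theirs):

# allow the trusted subnet to reach the Pod CIDR
> iptables -I FORWARD 1 -d 10.42.0.0/16 -s 192.168.10.0/24 -j ACCEPT
# drop anything else headed for the Pod CIDR from outside the Overlay
> iptables -I FORWARD 2 -d 10.42.0.0/16 ! -s 10.42.0.0/16 -j DROP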