A pod running in a Kubernetes cluster is easy to access from within the cluster, most simply through the pod's IP or through the corresponding Service (svc). From outside the cluster, however, the pod IP of a flannel-based Kubernetes cluster is not reachable, because it is an internal address.
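
For example, from inside the cluster a throwaway busybox pod can fetch the page directly (the pod IP and service name here are illustrative):

$ kubectl run test --rm -it --image=busybox --restart=Never -- wget -qO- http://10.244.0.156
$ kubectl run test --rm -it --image=busybox --restart=Never -- wget -qO- http://nginx.default.svc.cluster.local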

To solve this problem, Kubernetes provides several approaches, described below.

hostNetwork: true

When hostNetwork is true, the pod uses the network namespace of the host node, so its services can be accessed from outside the cluster via node IP + port, as long as you know which node the pod is running on.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  containers:
    - name: nginx
      image: nginx

After the pod starts, you can see below that the pod's IP address is the same as that of node optiplex-2; requesting port 80 at that node's IP address reaches the HTTP service of the nginx pod.

$ kubectl get pods -o wide nginx
NAME    READY   STATUS    RESTARTS   AGE   IP              NODE         NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          13m   192.168.0.161   optiplex-2   <none>           <none>
$ 
$ kubectl get nodes -o wide optiplex-2
NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
optiplex-2   Ready    <none>   160d   v1.16.4   192.168.0.161   <none>        Ubuntu 18.04.3 LTS   4.15.0-63-generic   docker://18.9.7
$
$ curl http://192.168.0.161
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

The advantage of hostNetwork is that it uses the host's network directly: the pod is reachable wherever the host is reachable. The disadvantages, however, are also obvious.

  • Ease of use: when a pod drifts to another node, clients have to switch to the new node's IP. A common workaround is to pin pods to specific nodes and run keepalived on those nodes so that a VIP floats among them; clients then access the service via VIP + port (see the sketch after this list).
  • Ease of use: port conflicts may occur between pods, causing pods to fail to schedule.
  • Security: pods can directly observe the network of the host.
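
A minimal sketch of that workaround, assuming keepalived is already managing a VIP on the selected nodes (the hostname label value is an assumption for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  hostNetwork: true
  # Pin the pod to one of the nodes on which keepalived runs.
  nodeSelector:
    kubernetes.io/hostname: optiplex-2
  containers:
    - name: nginx
      image: nginx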

hostPort

The effect of hostPort is similar to that of hostNetwork: both allow the pod's services to be accessed via the IP address of the node where the pod runs + the port.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
          hostPort: 80

After the pod starts, you can see below that the pod's IP address is an internal flannel IP, different from the host node's IP; as with hostNetwork, the service can still be accessed via node IP + host port.

$ kubectl get pods -o wide nginx
NAME    READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          3m50s   10.244.0.156   ubuntu-1   <none>           <none>
$ 
$ kubectl get nodes ubuntu-1 -o wide
NAME       STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
ubuntu-1   Ready    master   435d   v1.16.4   192.168.0.154   <none>        Ubuntu 18.04.2 LTS   4.15.0-49-generic   docker://18.9.2
$ 
$ curl http://192.168.0.154
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

The mechanism behind hostPort differs from hostNetwork: hostPort is actually implemented as full NAT through a series of iptables rules, as shown below.

-A CNI-DN-1652b489c7cf2eeec2243 -s 10.244.0.156/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-1652b489c7cf2eeec2243 -s 127.0.0.1/32 -p tcp -m tcp --dport 80 -j CNI-HOSTPORT-SETMARK
-A CNI-DN-1652b489c7cf2eeec2243 -p tcp -m tcp --dport 80 -j DNAT --to-destination 10.244.0.156:80
-A CNI-HOSTPORT-DNAT -p tcp -m comment --comment "dnat name: \"cbr0\" id: \"3881dd51f0971e5ccdd03f89a48e2386c5e7d8987014a12a31ad40b1df685158\"" -m multiport --dports 80 -j CNI-DN-1652b489c7cf2eeec2243
-A CNI-HOSTPORT-MASQ -m mark --mark 0x2000/0x2000 -j MASQUERADE
-A CNI-HOSTPORT-SETMARK -m comment --comment "CNI portfwd masquerade mark" -j MARK --set-xmark 0x2000/0x2000
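
These rules can be inspected on the node where the pod runs; note that the CNI-DN-* chain name carries a per-pod hash, so the exact name will differ in your environment:

$ sudo iptables -t nat -S | grep CNI-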

The advantages and disadvantages of hostPort are similar to those of hostNetwork, since both consume the host's network resources. One advantage of hostPort over hostNetwork is that the host's network information is not exposed to the pod; its performance, however, is worse, because traffic has to be forwarded by iptables to reach the pod.

NodePort

Unlike hostPort and hostNetwork, which are pod-level configurations, NodePort is a type of Service. It opens a port on every node, so the pod's services can be accessed from outside the cluster via any node's IP + nodePort.

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  type: NodePort
  ports:
    - name: nginx
      port: 80
      nodePort: 30018
  selector:
    name: nginx

The nodePort field in the svc configuration is the port number used on the host when accessing the service. It can be specified in the configuration file (as long as it does not conflict with the nodePort of another svc), or it can be left unset and assigned by Kubernetes from the default range 30000-32767.

After creating the above pod and service, view the pod and svc information below; the pod's service can be accessed via any host's IP address + nodePort.

$ kubectl get pods -o wide nginx
NAME    READY   STATUS    RESTARTS   AGE     IP             NODE         NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          2m35s   10.244.4.133   optiplex-2   <none>           <none>
$
$ kubectl get svc -o wide nginx
NAME    TYPE       CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE    SELECTOR
nginx   NodePort   10.103.106.64   <none>        80:30018/TCP   190d   name=nginx,run=nginx
$
$ kubectl get nodes optiplex-1 -o wide
NAME         STATUS   ROLES    AGE    VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
optiplex-1   Ready    <none>   169d   v1.16.4   192.168.0.240   <none>        Ubuntu 18.04.3 LTS   4.15.0-73-generic   docker://18.9.7
$ 
$ curl http://192.168.0.240:30018
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

The NodePort type of svc is implemented by kube-proxy via iptables; the same set of iptables rules is replicated on every node, which is why any node's IP works.
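
A hedged sketch of those rules (the chain hash suffixes are illustrative, and the real chains also contain marking/masquerade steps omitted here): traffic arriving at any node on port 30018 is matched in KUBE-NODEPORTS and DNATed to the pod.

-A KUBE-NODEPORTS -p tcp -m comment --comment "default/nginx:nginx" -m tcp --dport 30018 -j KUBE-SVC-XXXXXXXXXXXXXXXX
-A KUBE-SVC-XXXXXXXXXXXXXXXX -j KUBE-SEP-YYYYYYYYYYYYYYYY
-A KUBE-SEP-YYYYYYYYYYYYYYYY -p tcp -m tcp -j DNAT --to-destination 10.244.4.133:80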

LoadBalancer

LoadBalancer is also a Service type. It usually requires support from external infrastructure, such as the load-balancing services of AWS, Azure, and other public clouds.

In my environment, LoadBalancer is implemented through MetalLB.
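
MetalLB assigns external IPs from an administrator-defined address pool. A minimal sketch of such a pool using MetalLB's ConfigMap-based configuration (the address range is an assumption; it simply has to contain the address assigned below):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
      - name: default
        protocol: layer2
        addresses:
          - 192.168.9.0-192.168.9.255

With the pool in place, the pod and LoadBalancer service are declared as follows.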

apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
    - name: nginx
      port: 80
  selector:
    name: nginx

The only difference from the NodePort example is the service type. After creation, checking the status of the pod and svc below, you can see that service nginx now carries EXTERNAL-IP information that a NodePort svc lacks; the address 192.168.9.0 is the external IP that MetalLB assigned to the LoadBalancer-type svc. Users can access the nginx service from outside the cluster through this address.

$ kubectl get pods nginx -o wide
NAME    READY   STATUS    RESTARTS   AGE     IP             NODE       NOMINATED NODE   READINESS GATES
nginx   1/1     Running   0          3m18s   10.244.0.158   ubuntu-1   <none>           <none>
$
$ kubectl get svc nginx -o wide
NAME    TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE   SELECTOR
nginx   LoadBalancer   10.102.66.83   192.168.9.0   80:31009/TCP   15s   name=nginx
$
$ curl http://192.168.9.0
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>

Ingress

The NodePort and LoadBalancer service types operate at layer 4, while Ingress operates at layer 7.

In the Kubernetes design, Ingress is only an API object; Kubernetes does not provide the implementation itself. Cluster administrators need to deploy an ingress controller, which can be implemented on top of nginx, Traefik, and others.

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
spec:
  rules:
    - host: nginx.example.com
      http:
        paths:
          - backend:
              serviceName: nginx
              servicePort: 80

As above, Ingress nginx is bound to service nginx under the domain name nginx.example.com.

Taking the nginx-based ingress controller as an example: after the above Ingress is created, the controller watches service nginx, retrieves its endpoints, and generates and reloads its nginx configuration accordingly.
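
A hedged sketch of what the generated configuration might look like; the upstream name and endpoint IP are illustrative, and a real controller emits far more elaborate output:

# Illustrative excerpt, not the literal output of any controller.
upstream default-nginx-80 {
    server 10.244.0.158:80;    # an endpoint of service nginx
}
server {
    listen 80;
    server_name nginx.example.com;
    location / {
        proxy_pass http://default-nginx-80;
    }
}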

When a request from outside the cluster arrives at the ingress controller, the controller forwards HTTP requests whose Host header is nginx.example.com to those endpoints, completing layer-7 load balancing; clients outside the cluster can thus reach the pod's services.

$ curl -v http://nginx.example.com/ping

Of course, to be able to accept traffic from outside the cluster, the ingress controller itself needs to be deployed using hostPort or hostNetwork.
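
A minimal sketch of such a deployment, assuming an nginx-based controller (the image name and namespace are placeholders, not an official manifest):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
spec:
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
    spec:
      hostNetwork: true    # expose the controller on every node's IP
      containers:
        - name: controller
          image: ingress-nginx-controller    # placeholder image
          ports:
            - containerPort: 80
            - containerPort: 443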

Pod IP global reachability

When the Kubernetes network solution is Calico or Contiv, it is also possible to make pod IPs globally reachable, so that pods can be accessed directly from outside the cluster.

The principle is that each host establishes a BGP neighborship with its uplink switch; when a pod is running, the host's BGP speaker advertises a route for the pod to the uplink switch, which in turn exchanges routes with the aggregation switch, so traffic is directed to the pod.
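
With Calico, for example, the neighborship can be declared through a BGPPeer resource; a hedged sketch, where the peer IP and AS number are assumptions about the local network:

apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: tor-switch
spec:
  peerIP: 192.168.0.1    # assumed address of the uplink switch
  asNumber: 64512        # assumed AS number of the switch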

Summary

All of the above approaches make pod services accessible from outside the cluster; choose among them according to your actual needs and environment.