To make testing easier, I am going to configure a LoadBalancer-type Service for the Ingress controller, and since this is a local, private environment, I need to deploy a load balancer that supports this Service type. The most popular option in the community is probably MetalLB. Launched at the end of 2017 and now a CNCF sandbox project, it has been widely adopted after four years of development. However, it was unstable in my testing and use, and I often had to restart the controller for changes to take effect. So I turned my attention to another load balancer, OpenELB, which was recently open sourced by China's QingCloud.

OpenELB, previously called PorterLB, is a load balancer plug-in designed for bare-metal, edge, and private environments. It can be used as an LB plug-in for Kubernetes, K3s, and KubeSphere to expose LoadBalancer-type Services outside the cluster, and is now a CNCF sandbox project. Its core features include:

  • Load balancing based on BGP and Layer 2 mode
  • Load balancing based on router ECMP
  • IP address pool management
  • BGP configuration using CRD

OpenELB

Comparison with MetalLB

OpenELB, as a latecomer, adopts a more Kubernetes-native implementation that can be configured and managed directly through CRDs. Here is a brief comparison of OpenELB and MetalLB.

Cloud Native Architecture

In OpenELB, both address management and BGP configuration are handled through CRDs, which is very friendly for users accustomed to kubectl. In MetalLB, you configure them via a ConfigMap and can only perceive their status by looking at monitoring or logs.
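For contrast, here is a minimal sketch of MetalLB's ConfigMap-based configuration (the values are illustrative; newer MetalLB releases have since moved to CRDs as well):

```yaml
# MetalLB's legacy ConfigMap-style configuration (illustrative values)
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.0.100-192.168.0.108
```

The pool definition lives inside an opaque string under the `config` key, so the API server cannot validate it and there is no status to inspect, which is exactly the gap the OpenELB CRDs close.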

Flexible Address Management

OpenELB manages addresses through a custom resource object called Eip, which defines a Status subresource to store address assignment state, so that replicas do not conflict with each other when assigning addresses.

Publishing routes using gobgp

Unlike MetalLB, which implements the BGP protocol itself, OpenELB uses the standard gobgp library to publish routes, which has the following benefits:

  • Low development cost, with support from the gobgp community
  • The ability to take advantage of gobgp’s rich features
  • Dynamic configuration of gobgp via the BgpConf/BgpPeer CRDs, allowing users to load the latest configuration without restarting OpenELB
  • The BgpConf/BgpPeer CRDs are implemented with, and remain compatible with, the API the gobgp community provides for using gobgp as a library
  • OpenELB also provides a Status field for viewing BGP neighbor configuration, with rich status information
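As a sketch of what this CRD-based BGP configuration looks like (the AS numbers, router ID, and neighbor address below are illustrative placeholders, not values from this article), a BgpConf and a BgpPeer might be declared as:

```yaml
# Illustrative sketch only - AS numbers, router ID and neighbor address are placeholders
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpConf
metadata:
  name: default            # the global BGP configuration object
spec:
  as: 50000                # local AS number (placeholder)
  listenPort: 17900        # port gobgp listens on
  routerId: 192.168.0.2    # router ID (placeholder)
---
apiVersion: network.kubesphere.io/v1alpha2
kind: BgpPeer
metadata:
  name: bgppeer-sample     # placeholder name
spec:
  conf:
    peerAs: 50001                   # peer AS number (placeholder)
    neighborAddress: 192.168.0.5    # peer router address (placeholder)
```

Because these are ordinary Kubernetes objects, editing and re-applying them reconfigures gobgp on the fly, without restarting OpenELB.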

Simple architecture, low resource consumption

OpenELB currently deploys only a Deployment, achieving high availability through multiple replicas; the crash of some replicas does not affect already-established connections.

In BGP mode, the different replicas of the Deployment each connect to the router and publish equal-cost routes, so normally we can deploy two replicas. In Layer 2 mode, the replicas elect a leader via the Leader Election mechanism provided by Kubernetes, and the leader answers ARP/NDP requests.

Installation

In a Kubernetes cluster, you only need to install OpenELB once. After the installation is complete, an openelb-manager Deployment containing an openelb-manager Pod is installed in the cluster, and this Pod implements the OpenELB functionality for the entire Kubernetes cluster. Once installed, you can scale the openelb-manager Deployment to distribute multiple OpenELB replicas (openelb-manager Pods) across multiple cluster nodes to ensure high availability. For more information, see Configuring Multiple OpenELB Replicas.

Installing OpenELB is very simple and can be done with a single command:

# Note: if you cannot pull images from k8s.gcr.io, replace them with an accessible mirror
☸ ➜ kubectl apply -f https://raw.githubusercontent.com/openelb/openelb/master/deploy/openelb.yaml

The manifest above deploys a Deployment resource object named openelb-manager. The openelb-manager Pods implement OpenELB functionality for the entire Kubernetes cluster, and the controller can be scaled to two replicas for high availability. The first installation also configures HTTPS certificates for the admission webhook. Check the status of the Pods after the installation is complete:

☸ ➜ kubectl get pods -n openelb-system              
NAME                                READY   STATUS      RESTARTS      AGE
openelb-admission-create--1-cf857   0/1     Completed   0             58m
openelb-admission-patch--1-dhgrq    0/1     Completed   2             58m
openelb-manager-848495684-nppkr     1/1     Running     1 (35m ago)   48m
openelb-manager-848495684-svn7z     1/1     Running     1 (35m ago)   48m
☸ ➜ kubectl get validatingwebhookconfiguration       
NAME                                      WEBHOOKS   AGE
openelb-admission                         1          62m
☸ ➜ kubectl get mutatingwebhookconfigurations        
NAME                                    WEBHOOKS   AGE
openelb-admission                       1          62m

In addition, several CRDs used for OpenELB configuration are installed:

☸ ➜ kubectl get crd |grep kubesphere
bgpconfs.network.kubesphere.io           2022-04-10T08:01:18Z
bgppeers.network.kubesphere.io           2022-04-10T08:01:18Z
eips.network.kubesphere.io               2022-04-10T08:01:18Z

Configuration

Next we will demonstrate how to use OpenELB in Layer 2 mode. First, make sure that all Kubernetes cluster nodes are in the same Layer 2 network (behind the same router). My test environment has a total of 3 nodes, with node information as shown below:

☸ ➜ kubectl get nodes -o wide      
NAME      STATUS   ROLES                  AGE   VERSION   INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
master1   Ready    control-plane,master   15d   v1.22.8   192.168.0.111   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   containerd://1.5.5
node1     Ready    <none>                 15d   v1.22.8   192.168.0.110   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   containerd://1.5.5
node2     Ready    <none>                 15d   v1.22.8   192.168.0.109   <none>        CentOS Linux 7 (Core)   3.10.0-1160.25.1.el7.x86_64   containerd://1.5.5

The IP addresses of the 3 nodes are 192.168.0.109, 192.168.0.110, and 192.168.0.111.

First, you need to enable strictARP for kube-proxy, so that the NICs in the Kubernetes cluster stop responding to ARP requests from other NICs and OpenELB handles these requests instead.

☸ ➜ kubectl edit configmap kube-proxy -n kube-system
......
ipvs:
  strictARP: true
......

Then restart the kube-proxy component with the following command:

☸ ➜ kubectl rollout restart daemonset kube-proxy -n kube-system

If the node where OpenELB is installed has more than one NIC, you need to specify which NIC OpenELB should use in Layer 2 mode; if the node has only one NIC, you can skip this step. Suppose the master1 node where OpenELB is installed has two NICs (eth0 192.168.0.2 and ens33 192.168.0.111) and eth0 192.168.0.2 will be used by OpenELB; then you need to add an annotation to the master1 node to specify the NIC:

☸ ➜ kubectl annotate nodes master1 layer2.openelb.kubesphere.io/v1alpha1="192.168.0.2"

Next, create an Eip object to serve as OpenELB's IP address pool, with a resource manifest as shown below:

apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool
spec:
  address: 192.168.0.100-192.168.0.108
  protocol: layer2
  disable: false
  interface: ens33

Here we specify the pool of IP addresses for OpenELB via the address attribute, which can contain one or more addresses (taking care that IP ranges in different Eip objects do not overlap). The value can be in any of the following formats:

  • IP address, e.g. 192.168.0.100
  • IP address/subnet mask, e.g. 192.168.0.0/24
  • IP address 1-IP address 2, e.g. 192.168.0.91-192.168.0.100
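For example, an additional Eip object using the CIDR format might look like this (the name and subnet are hypothetical, and its range must not overlap the eip-pool object above):

```yaml
# Hypothetical second pool using the CIDR format
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  name: eip-pool-cidr       # hypothetical name
spec:
  address: 192.168.1.0/24   # hypothetical subnet; must not overlap other Eip objects
  protocol: layer2
  interface: ens33
```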

The protocol attribute specifies which OpenELB mode the Eip object is used for and can be set to layer2 or bgp, with bgp being the default; since we want Layer 2 mode here, it must be set explicitly. The interface attribute specifies the NIC on which OpenELB listens for ARP or NDP requests and is only valid when protocol is set to layer2; in my environment this is the ens33 NIC. The disable attribute indicates whether the Eip object is disabled.

After creating the Eip object, you can check the status of the IP pool via Status.

☸ ➜ kubectl get eip          
NAME       CIDR                          USAGE   TOTAL
eip-pool   192.168.0.100-192.168.0.108   0       9
☸ ➜ kubectl get eip eip-pool -oyaml
apiVersion: network.kubesphere.io/v1alpha2
kind: Eip
metadata:
  finalizers:
  - finalizer.ipam.kubesphere.io/v1alpha1
  name: eip-pool
spec:
  address: 192.168.0.100-192.168.0.108
  interface: ens33
  protocol: layer2
status:
  firstIP: 192.168.0.100
  lastIP: 192.168.0.108
  poolSize: 9
  ready: true
  v4: true

The address pool for the LB is now ready. Next we create a simple application to expose through the LB, as shown below:

# openelb-nginx.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:  
    matchLabels:
      app: nginx
  template:  
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80

This deploys a simple nginx application:

☸ ➜ kubectl apply -f openelb-nginx.yaml 
☸ ➜ kubectl get pods                  
NAME                     READY   STATUS    RESTARTS      AGE
nginx-7848d4b86f-zmm8l   1/1     Running   0             42s

Then create a Service of type LoadBalancer to expose our nginx service, as follows.

# openelb-nginx-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
    eip.openelb.kubesphere.io/v1alpha2: eip-pool
spec:
  selector:
    app: nginx
  type: LoadBalancer
  ports:
    - name: http
      port: 80
      targetPort: 80

Note that we have added several annotations to the Service here.

  • lb.kubesphere.io/v1alpha1: openelb to specify that the Service uses OpenELB
  • protocol.openelb.kubesphere.io/v1alpha1: layer2 specifies that OpenELB is used in Layer2 mode
  • eip.openelb.kubesphere.io/v1alpha2: eip-pool specifies the Eip object used by OpenELB. If this annotation is not configured, OpenELB automatically uses the first available Eip object matching the protocol. You can also remove this annotation and add a spec.loadBalancerIP field (e.g. spec.loadBalancerIP: 192.168.0.108) to assign a specific IP address to the Service.
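For example, the same Service pinned to a specific address via spec.loadBalancerIP instead of the Eip annotation would look like this (the chosen IP must fall within an existing Eip pool's range):

```yaml
# Variant of openelb-nginx-svc.yaml using spec.loadBalancerIP
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    lb.kubesphere.io/v1alpha1: openelb
    protocol.openelb.kubesphere.io/v1alpha1: layer2
spec:
  selector:
    app: nginx
  type: LoadBalancer
  loadBalancerIP: 192.168.0.108   # must fall within an Eip pool's range
  ports:
    - name: http
      port: 80
      targetPort: 80
```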

Create the above Service directly:

☸ ➜ kubectl apply -f openelb-nginx-svc.yaml                
service/nginx created
☸ ➜ kubectl get svc nginx                   
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.100.126.91   192.168.0.101   80:31555/TCP   4s

Once created, you can see that the Service has been assigned an EXTERNAL-IP, and we can access the nginx service through that address:

☸ ➜ curl 192.168.0.101
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>

In addition, OpenELB also supports BGP mode and multi-router cluster scenarios; see the official documentation at https://openelb.github.io/docs/ for more details on how to use them.