1. The advantages of Tekton for multi-cluster builds

On Kubernetes, Tekton is already very resilient and can support large-scale builds. At the same time, development work with Tekton is mostly done in YAML and shell, which makes it easy to extend Tekton to a wide range of scenarios.

The advantages of Tekton for multi-cluster builds

The diagram above shows Tekton running across multiple clusters. Why does Tekton need to execute pipelines across multiple clusters?

  • Clusters change at any time. A single Kubernetes cluster cannot meet operations and maintenance requirements, because it cannot be taken offline freely. With multiple clusters, some clusters can be taken down for maintenance while the others keep serving builds.
  • CI consumes a lot of CPU, memory, and IO resources and can easily overwhelm nodes or even whole clusters. Multiple clusters can effectively share the load and improve availability.
  • Business isolation. Different businesses have different requirements for code security level, build speed, and build environment. Multiple clusters can provide isolated environments and customized pipeline services.

2. Kubernetes Cluster Federation

Kubernetes Cluster Federation is called KubeFed for short. KubeFed coordinates multiple clusters from a host cluster through a set of CRDs; the KubeFed controller manages these CRDs and implements features such as synchronizing resources across clusters, in a modular and customizable way. Here is a diagram of the community’s architecture.

Kubernetes Cluster Federation

KubeFed has two types of configuration:

  • Type configuration, which declares the type of APIs KubeFed handles
  • Cluster configuration, which declares which clusters KubeFed manages

A type configuration has three basic concepts:

  • Templates, which define the template of the resource to be created in the member clusters
  • Placement, which defines which clusters the resource should be distributed to
  • Overrides, which define the per-cluster fields that should override the Templates
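
As a rough sketch (the FederatedDeployment example in section 3.5 below is a complete, working version), these three parts appear in the spec of every federated resource roughly like this; the resource and cluster names here are only illustrative:

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment        # Federated<Kind> of an enabled type
metadata:
  name: example                  # illustrative name
spec:
  template:                      # Templates: the resource body (apiVersion/kind stripped)
    metadata: {}
    spec: {}
  placement:                     # Placement: which member clusters receive the resource
    clusters:
    - name: dev1-context
    - name: dev2-context
  overrides:                     # Overrides: per-cluster patches applied on top of the template
  - clusterName: dev2-context
    clusterOverrides:
    - path: /spec/replicas
      value: 3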

In addition, more advanced functionality can be achieved through Status, Policy and Scheduling:

  • Status collects the status of the distributed resources in each cluster
  • Policy controls which clusters resources may be allocated to
  • Scheduling allows replicas to be scheduled and migrated across clusters

In addition, KubeFed provides MultiClusterDNS, which can be used for service discovery across multiple clusters.

3. Federating Kubernetes Clusters

3.1 Preparing the clusters and configuring the contexts

Two clusters are deployed here: dev1 is the main cluster that acts as Tekton’s control plane and does not run pipeline tasks; dev2 is a subcluster that executes Tekton pipeline tasks.

  1. Prepare the two clusters

    Main cluster dev1

    kubectl get node
    
    NAME    STATUS   ROLES                         AGE    VERSION
    node1   Ready    control-plane,master,worker   151m   v1.20.4
    
    helm version
    
    version.BuildInfo{Version:"v3.2.1", GitCommit:"fe51cd1e31e6a202cba7dead9552a6d418ded79a", GitTreeState:"clean", GoVersion:"go1.13.10"}
    

    Subcluster dev2

    kubectl get node
    
    NAME    STATUS   ROLES                         AGE   VERSION
    node1   Ready    control-plane,master,worker   42d   v1.20.4
    
  2. Configure the contexts of all the clusters on the main cluster (this requires that each cluster’s API server endpoint is reachable directly over the network), so that subclusters can be added later

    The context name should not contain special characters such as @, otherwise joining will report an error. Because the name is used to create a Secret, it must conform to the Kubernetes naming convention.

    Put the kubeconfig of the main cluster dev1 in ~/.kube/config-1 and change the name and other information in the following format.

    apiVersion: v1
    clusters:
    - cluster:
        ...
      name: dev1.cluster.local
    contexts:
    - context:
        cluster: dev1.cluster.local
        user: dev1-kubernetes-admin
      name: dev1-context
    users:
    - name: dev1-kubernetes-admin
      user:
        ...
    

    Place the kubeconfig for subcluster dev2 in ~/.kube/config-2 and change the name and other information in the following format.

    apiVersion: v1
    clusters:
    - cluster:
        ...
      name: dev2.cluster.local
    contexts:
    - context:
        cluster: dev2.cluster.local
        user: dev2-kubernetes-admin
      name: dev2-context
    users:
    - name: dev2-kubernetes-admin
      user:
        ...
    
  3. Merge the kubeconfig files

    cd $HOME/.kube/
    KUBECONFIG=config-1:config-2 kubectl config view --flatten > $HOME/.kube/config
    
  4. View the added cluster contexts

    kubectl config get-contexts
    
    CURRENT   NAME           CLUSTER              AUTHINFO                NAMESPACE
            dev1-context   dev1.cluster.local   dev1-kubernetes-admin
            dev2-context   dev2.cluster.local   dev2-kubernetes-admin
    
  5. Switch to the main cluster dev1

    kubectl config use-context dev1-context
    
    Switched to context "dev1-context".
    

3.2 Installing KubeFed on the main cluster

  1. Install KubeFed using Helm

    git clone https://github.com/kubernetes-sigs/kubefed.git
    cd kubefed/charts/
    helm install kubefed ./kubefed/ --namespace kube-federation-system --create-namespace
    
  2. Check the workloads

    kubectl get deploy,pod -n kube-federation-system
    
    NAME                                         READY   UP-TO-DATE   AVAILABLE   AGE
    deployment.apps/kubefed-admission-webhook    1/1     1            1           95s
    deployment.apps/kubefed-controller-manager   2/2     2            2           95s
    
    NAME                                              READY   STATUS    RESTARTS   AGE
    pod/kubefed-admission-webhook-598bd776c6-gv4qh    1/1     Running   0          95s
    pod/kubefed-controller-manager-6d9bf98d74-n8kjz   1/1     Running   0          17s
    pod/kubefed-controller-manager-6d9bf98d74-nmb2j   1/1     Running   0          14s
    

3.3 Installing kubefedctl on the main cluster

Execute the following commands:

wget https://github.com/kubernetes-sigs/kubefed/releases/download/v0.8.0/kubefedctl-0.8.0-linux-amd64.tgz
tar -zxvf kubefedctl-*.tgz
mv kubefedctl /usr/local/bin/

3.4 Adding the clusters

Execute the following commands on the main cluster to add both dev1 and dev2 to the main cluster dev1.

kubefedctl join dev1-context --host-cluster-context dev1-context --kubefed-namespace=kube-federation-system --v=2

I0625 14:32:42.969373   25920 join.go:861] Using secret named: dev1-context-dev1-context-token-2w8km
I0625 14:32:42.972316   25920 join.go:934] Created secret in host cluster named: dev1-context-ln6vx
I0625 14:32:42.991399   25920 join.go:299] Created federated cluster resource

kubefedctl join dev2-context --host-cluster-context dev1-context --kubefed-namespace=kube-federation-system --v=2

I0625 14:33:11.836472   26424 join.go:861] Using secret named: dev2-context-dev1-context-token-dcl8s
I0625 14:33:11.840121   26424 join.go:934] Created secret in host cluster named: dev2-context-264dz
I0625 14:33:11.898044   26424 join.go:299] Created federated cluster resource

View the list of clusters:

kubectl -n kube-federation-system get kubefedclusters

NAME           AGE   READY
dev1-context   45s   True
dev2-context   16s   True
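
If a cluster does not show READY as True, you can inspect its status conditions, for example:

kubectl -n kube-federation-system describe kubefedcluster dev2-context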

3.5 Testing whether the clusters are federated successfully

  • View federated resources

After KubeFed is installed, many common resources are already federated; they can be seen in the CRDs.

kubectl get crd |grep federated

federatedclusterroles.types.kubefed.io                2021-06-26T06:22:50Z
federatedconfigmaps.types.kubefed.io                  2021-06-26T06:22:50Z
federateddeployments.types.kubefed.io                 2021-06-26T06:22:50Z
federatedingresses.types.kubefed.io                   2021-06-26T06:22:50Z
federatedjobs.types.kubefed.io                        2021-06-26T06:22:50Z
federatednamespaces.types.kubefed.io                  2021-06-26T06:22:50Z
federatedreplicasets.types.kubefed.io                 2021-06-26T06:22:50Z
federatedsecrets.types.kubefed.io                     2021-06-26T06:22:50Z
federatedserviceaccounts.types.kubefed.io             2021-06-26T06:22:50Z
federatedservices.types.kubefed.io                    2021-06-26T06:22:50Z
federatedservicestatuses.core.kubefed.io              2021-06-26T06:22:50Z
federatedtypeconfigs.core.kubefed.io                  2021-06-26T06:22:50Z

In federatedtypeconfigs you can also see the resources that have been federated.

kubectl get federatedtypeconfigs.core.kubefed.io -n kube-federation-system

NAME                                     AGE
clusterroles.rbac.authorization.k8s.io   29m
configmaps                               29m
deployments.apps                         29m
ingresses.extensions                     29m
jobs.batch                               29m
namespaces                               29m
replicasets.apps                         29m
secrets                                  29m
serviceaccounts                          29m
services                                 29m

  • Create a federated Namespace

Namespaced resources need to be placed in a federated namespace, otherwise the controller will report an error when distributing them.

apiVersion: v1
kind: Namespace
metadata:
  name: testing-fed
---
apiVersion: types.kubefed.io/v1beta1
kind: FederatedNamespace
metadata:
  name: testing-fed
  namespace: testing-fed
spec:
  placement:
    clusters:
    - name: dev1-context
    - name: dev2-context
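
After applying this, the testing-fed namespace should appear in both member clusters; a quick way to check, for example:

kubectl --context dev1-context get namespace testing-fed
kubectl --context dev2-context get namespace testing-fed
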
  • Create a federated Deployment in the main cluster

A common Deployment looks like this.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        name: nginx

And the federated Deployment looks like this.

apiVersion: types.kubefed.io/v1beta1
kind: FederatedDeployment
metadata:
  name: nginx-fed
  namespace: testing-fed
spec:
  overrides:
    - clusterName: dev1-context
      clusterOverrides:
        - path: /spec/replicas
          value: 2
    - clusterName: dev2-context
      clusterOverrides:
        - path: /spec/replicas
          value: 3
  placement:
    clusters:
      - name: dev1-context
      - name: dev2-context
  template:
    metadata:
      labels:
        app: nginx
      namespace: testing-fed
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
            - image: nginx
              name: nginx

A FederatedDeployment is written around three fields:

  • overrides, the fields to be overridden per cluster. Here the replica count is set to 2 on dev1 and 3 on dev2.
  • placement, the list of clusters the resource should be placed in. Here it is placed in both the dev1 and dev2 clusters.
  • template, the template of the resource. Here it is the Deployment with apiVersion and kind removed.

  • Verify that the resources are distributed successfully

On the dev1 cluster

kubectl -n testing-fed get pod

NAME                         READY   STATUS    RESTARTS   AGE
nginx-fed-6799fc88d8-7llk9   1/1     Running   0          8m2s
nginx-fed-6799fc88d8-clc5w   1/1     Running   0          8m2s

On the dev2 cluster

kubectl -n testing-fed get pod

NAME                         READY   STATUS    RESTARTS   AGE
nginx-fed-6799fc88d8-2ld4k   1/1     Running   0          7m49s
nginx-fed-6799fc88d8-6dncp   1/1     Running   0          7m49s
nginx-fed-6799fc88d8-x64fb   1/1     Running   0          7m49s

4. Federating Tekton’s CRD resources

4.1 Installing Tekton

Tekton needs to be installed on all the clusters.

kubectl apply -f https://raw.githubusercontent.com/shaowenchen/image-syncer/main/tekton/v0.24.1-release-0.24.1.yaml

Since the Tekton community hosts its images on gcr.io, some environments may not be able to pull them. I made a backup on Docker Hub, and the relevant yaml can be found at https://github.com/shaowenchen/image-syncer/tree/main/tekton.

4.2 Federating Tekton’s CRDs

When KubeFed is installed, common resources such as Deployment and Secret are federated by default, but user-defined CRDs need to be enabled manually.

Execute the command:

kubefedctl enable clustertasks.tekton.dev
kubefedctl enable conditions.tekton.dev
kubefedctl enable pipelineresources.tekton.dev
kubefedctl enable pipelineruns.tekton.dev
kubefedctl enable pipelines.tekton.dev
kubefedctl enable runs.tekton.dev
kubefedctl enable taskruns.tekton.dev
kubefedctl enable tasks.tekton.dev

In the case of taskruns, kubefedctl enable taskruns.tekton.dev will automatically create two resources:

  • customresourcedefinition.apiextensions.k8s.io/federatedtaskruns.types.kubefed.io, the federated CRD resource federatedtaskruns
  • federatedtypeconfig.core.kubefed.io/taskruns.tekton.dev, a federatedtypeconfig resource named taskruns created in the kube-federation-system namespace to enable distribution of that resource type
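
To double-check, you can list both objects, for example:

kubectl get crd federatedtaskruns.types.kubefed.io
kubectl -n kube-federation-system get federatedtypeconfigs.core.kubefed.io taskruns.tekton.dev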

4.3 Edit the newly created federated CRD resources to add fields

If this step is missed, the CR content synchronized to the subcluster will be empty. This is because the federated CRD resource generated by kubefedctl enable is missing the schema for the template field.

Execute the command:

kubectl edit crd federatedtasks.types.kubefed.io

At the same level as overrides and placement, add the template content as in the following example.

apiVersion: apiextensions.k8s.io/v1
...
spec:
  versions:
  - name: v1beta1
    schema:
      openAPIV3Schema:
        properties:
          spec:
            properties:
              overrides:
                ...
              placement:
                ...
              template:
                type: object
                x-kubernetes-preserve-unknown-fields: true
            type: object

If this is not clear enough, you can refer to https://github.com/shaowenchen/demo/tree/master/tekton-0.24.1-kubefed and modify accordingly. If you are also using version 0.24.1, you can directly kubectl apply those CRD resources.
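
A minimal sketch of doing that, assuming everything under that directory is a CRD manifest to be applied:

# clone the repo, then apply the directory referenced above
git clone https://github.com/shaowenchen/demo.git
kubectl apply -f demo/tekton-0.24.1-kubefed/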

4.4 Testing Tekton object distribution in multiple clusters

Here, to avoid pasting a lot of yaml, we create the Task resource directly on the subcluster in advance, instead of distributing it with a FederatedTask.

  • Create the Task on the subcluster

kubectl apply -f https://raw.githubusercontent.com/tektoncd/catalog/main/task/git-clone/0.4/git-clone.yaml -n testing-fed

  • Create a FederatedTaskRun on the main cluster dev1 to distribute the resource to subcluster dev2

apiVersion: types.kubefed.io/v1beta1
kind: FederatedTaskRun
metadata:
  name: git-clone-test
  namespace: testing-fed
spec:
  placement:
    clusters:
    - name: dev2-context
  template:
    metadata:
      namespace: testing-fed
    spec:
      workspaces:
        - name: output
          emptyDir: {}
      taskRef:
        name: git-clone
      params:
        - name: url
          value: https://github.com/kelseyhightower/nocode

  • View Tekton’s TaskRun tasks on subcluster dev2

kubectl get taskrun -n testing-fed

NAME             SUCCEEDED   REASON      STARTTIME   COMPLETIONTIME
git-clone-test   True        Succeeded   15s         7s
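
The TaskRun is executed as a pod on dev2, so you can also inspect it there, for example:

kubectl -n testing-fed describe taskrun git-clone-test
kubectl -n testing-fed get pod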

5. Summary

In this article, we introduced KubeFed for managing multiple clusters and put it into practice by federating Tekton’s CRD resources.

Running Tekton across multiple clusters, with the main cluster managing resources and subclusters executing pipelines, effectively balances the load, increases the number of pipelines that can run concurrently, and improves the maintainability of the CI/CD system.

Here, KubeFed is mainly used to store and distribute Tekton objects, and cross-cluster resource distribution is exactly what KubeFed is well suited for.