Jenkins integrates well with Kubernetes, whether the controller or the build nodes (agents) run as Pods on Kubernetes. Anyone familiar with Jenkins knows that it supports multiple types of build nodes, such as fixed and dynamically provisioned ones, and several ways for nodes to connect to the controller, including JNLP and SSH. Teams that have fully embraced container technology mostly provision build nodes by connecting to a Kubernetes cluster and dynamically starting and destroying Pods. As the variety and number of build nodes grow, however, maintaining these Kubernetes-based nodes efficiently becomes a real challenge. In this article, I will introduce a configuration-as-code solution for managing and maintaining build nodes.

Configuration as Code (CasC) is an excellent idea that frees Jenkins users from opening the UI again and again to modify the system configuration. Modifying the configuration via the UI has one advantage: the descriptions on the page make it relatively easy to understand what each configuration item means. The disadvantages are just as obvious: the configuration is hard to reuse (even an identical setup must be redone manually in other environments), changes cannot be tracked, and errors cannot be rolled back quickly. With the power of CasC, we can store the Jenkins system configuration in a Git repository and use a GitOps tool (e.g., Argo CD) to modify it in a controlled manner.
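As a sketch of the GitOps side, an Argo CD Application could keep such a configuration repository in sync with the cluster. The repository URL, path, and names below are hypothetical placeholders:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: jenkins-casc
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/jenkins-casc-config.git  # hypothetical repository
    targetRevision: main
    path: casc                          # directory holding the CasC YAML files
  destination:
    server: https://kubernetes.default.svc
    namespace: kubesphere-devops-system # where the CasC ConfigMap lives
  syncPolicy:
    automated:
      prune: true
      selfHeal: true                    # revert manual drift, keeping Git authoritative
```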

However, as Jenkins becomes more complex to configure, the corresponding YAML configuration files may also become larger and more difficult to maintain.

Returning to the core problem we want to solve: the expected solution is to maintain a separate PodTemplate for each Jenkins build node. To get there, two problems must be solved: the PodTemplate in the Jenkins configuration does not match the built-in PodTemplate resource in Kubernetes, and the Jenkins configuration must be reloaded dynamically.

Both of the above problems can be solved by deploying a single Deployment. This component watches the Kubernetes built-in PodTemplate resources, loads them into the Jenkins system configuration (the CasC YAML file), and calls the Jenkins API to reload the configuration. To take full advantage of Kubernetes, we store the CasC configuration in a ConfigMap and mount it into Jenkins as a volume.

The following are the experimental steps. (This article presents the core ideas and key steps; each specific file can be found in the code repository linked at the end of the article.)

Prepare a Kubernetes cluster, and make sure you have sufficient permissions and that the experiment will not affect the cluster's existing workloads. A lightweight cluster that is easy to develop and test against, such as minikube, kind, or K3s, is recommended.
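For example, kind accepts an optional cluster configuration file; the minimal sketch below is enough for this experiment (running kind create cluster without any config file works just as well):

```yaml
# kind.yaml -- a single-node cluster for local experiments
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
```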

First, store the Jenkins system configuration, in CasC YAML format, in a ConfigMap. For example:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: jenkins-casc-config
  namespace: kubesphere-devops-system
data:
  jenkins_user.yaml: |
    jenkins:
      mode: EXCLUSIVE
      numExecutors: 0
      scmCheckoutRetryCount: 2
      disableRememberMe: true
      clouds:
        - kubernetes:
            name: "kubernetes"
            serverUrl: "https://kubernetes.default"
            skipTlsVerify: true
```

Then, mount the ConfigMap above into the Jenkins workload. Note that the following plugins must be installed in Jenkins for this experiment: kubernetes, kubernetes-credentials-provider, and configuration-as-code. See below:

```yaml
spec:
  template:
    spec:
      containers:
      - image: ghcr.io/linuxsuren/jenkins:lts
        env:
        - name: CASC_JENKINS_CONFIG
          value: "/var/jenkins_home/casc_configs/"    # load config files from a directory mounted from a ConfigMap
        volumeMounts:
        - mountPath: /var/jenkins_home/casc_configs
          name: casc-config                           # mount from a volume
      volumes:
      - configMap:
          defaultMode: 420
          name: jenkins-casc-config                   # claim a ConfigMap volume; all the CasC YAML content will be here
        name: casc-config
```
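As an aside, the configuration-as-code plugin can also expose an HTTP reload hook of its own when a reload token is set; the sketch below shows the extra environment variable (the Secret name is a hypothetical placeholder, and the controller introduced in this article may drive the reload through a different API):

```yaml
# Hypothetical addition to the Jenkins container spec above. With this variable set,
# the configuration-as-code plugin accepts
#   POST $JENKINS_URL/reload-configuration-as-code/?casc-reload-token=<token>
# to reload the CasC YAML without restarting Jenkins.
env:
- name: CASC_RELOAD_TOKEN
  valueFrom:
    secretKeyRef:
      name: jenkins-casc-reload   # hypothetical Secret holding the token
      key: token
```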

Next comes the core of the solution: a Kubernetes controller. Please refer to the following configuration to create the corresponding Deployment.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: jenkins-agent
  namespace: kubesphere-devops-system
spec:
  template:
    spec:
      containers:
      - image: kubespheredev/devops-controller:dev-v3.2.1-rc.3-6726130
        name: controller
        args:
        - --enabled-controllers
        - all=false,jenkinsagent=true,jenkinsconfig=true    # enable only the necessary features of this controller
```
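The controller also needs RBAC permissions to watch PodTemplate resources and update the CasC ConfigMap. The ClusterRole below is a sketch of the minimum it plausibly needs; the manifests shipped with the controller may define broader rules:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: jenkins-agent-controller   # hypothetical name
rules:
- apiGroups: [""]
  resources: ["podtemplates"]      # the built-in PodTemplate resource the controller watches
  verbs: ["get", "list", "watch"]
- apiGroups: [""]
  resources: ["configmaps"]        # the CasC ConfigMap it rewrites
  verbs: ["get", "list", "watch", "update", "patch"]
```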

The controller watches all PodTemplate resources carrying the label jenkins.agent.pod, converts each of them into a Jenkins-style PodTemplate, and loads the result into the system configuration. Typically, this takes effect after a delay of 3 to 5 seconds.

Once you’ve completed all of these steps and made sure the relevant components have started correctly, you can try adding a Kubernetes built-in PodTemplate, as shown below. Then you can create a pipeline to test the corresponding node.
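Here is a minimal example of such a built-in PodTemplate. The label key comes from the controller's convention described above, but the label value, names, and image are illustrative assumptions:

```yaml
apiVersion: v1
kind: PodTemplate
metadata:
  name: maven                        # hypothetical agent name
  namespace: kubesphere-devops-system
  labels:
    jenkins.agent.pod: ""            # the label the controller watches; the expected value is an assumption
template:
  metadata:
    labels:
      jenkins/label: maven           # hypothetical Jenkins agent label for pipelines to request
  spec:
    containers:
    - name: maven
      image: maven:3.8-openjdk-11    # any builder image works here
      command: ["sleep"]
      args: ["infinity"]
```

Once the controller has picked it up, the corresponding agent should appear in the Jenkins cloud configuration, and a pipeline can request it via its agent label.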

References