Kaniko is a tool from Google for building Docker images on Kubernetes without requiring privileged access. Its GitHub page (https://github.com/GoogleContainerTools/kaniko) describes it as follows:

kaniko is a tool to build container images from a Dockerfile, inside a container or Kubernetes cluster.

How it works

A traditional Docker build relies on the Docker daemon, which runs on the host as a privileged user (root), executes the Dockerfile's instructions sequentially, and generates a layer of the image for each one.

(figure: Docker layer generation)

Kaniko works similarly, executing each command sequentially and taking a snapshot of the file system after each one. If the snapshot differs from the previous state, a new layer is created and the changes are written to the image's metadata.

Once every command in the Dockerfile has been executed, Kaniko pushes the finished image to the specified registry.

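The snapshot-and-diff loop described above can be sketched in a few lines of Python. This is a toy illustration only, not kaniko's actual implementation (which snapshots the whole filesystem and packs changes into tar layers):

```python
import hashlib
import os
import tempfile

def snapshot(root):
    """Map each file path under root to a hash of its contents."""
    state = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            with open(path, "rb") as f:
                state[os.path.relpath(path, root)] = hashlib.sha256(f.read()).hexdigest()
    return state

def changed_files(before, after):
    """Files added or modified between two snapshots -- the new 'layer'."""
    return [p for p, h in after.items() if before.get(p) != h]

root = tempfile.mkdtemp()
before = snapshot(root)

# Simulate a RUN step that writes a file into the image filesystem.
with open(os.path.join(root, "app.txt"), "w") as f:
    f.write("hello")

after = snapshot(root)
print(changed_files(before, after))  # -> ['app.txt']
```

If nothing changed between two snapshots, no layer is emitted, which is the "inconsistency" check the paragraph above refers to.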

Usage

Kaniko solves the problem of building inside Kubernetes, but the build context, authentication against the target registry, and distribution of the Dockerfile still have to be handled by us. For simplicity, I just placed the project code and Dockerfile under /root on one of the nodes.

# pwd
/root/flask-demo
# ls
Dockerfile  README.md  app.py  requirements.txt  test.py

Dockerfile:

FROM python:3.6
RUN mkdir /catking
WORKDIR /catking
COPY ./requirements.txt /catking

RUN pip install -r requirements.txt -i https://pypi.douban.com/simple/
COPY . /catking

CMD ["python", "app.py"]

EXPOSE 5000

The first problem to solve is authentication against the target registry. The official documentation's sample adds a kaniko-secret.json and points the GOOGLE_APPLICATION_CREDENTIALS environment variable at it; for a self-hosted registry, you can use a Docker config file directly.

# echo "{\"auths\":{\"172.16.105.1\":{\"username\":\"username\",\"password\":\"password\"}}}" > config.json
# kubectl create configmap docker-config --from-file=config.json
configmap/docker-config created
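Note that Docker's config.json more commonly stores credentials as a single base64-encoded `auth` field rather than separate username/password keys. If the inline form gives you authentication errors, here is a small Python sketch for generating the base64 variant (the registry address and credentials are placeholders):

```python
import base64
import json

# Placeholder registry and credentials -- substitute your own.
registry = "172.16.105.1"
username, password = "username", "password"

# Docker's config.json stores base64("username:password") in an "auth" field.
auth = base64.b64encode(f"{username}:{password}".encode()).decode()
config = {"auths": {registry: {"auth": auth}}}
print(json.dumps(config))  # write this output to config.json
```

The resulting file can be fed to `kubectl create configmap docker-config --from-file=config.json` exactly as above.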

Then build the image with a Pod:

apiVersion: v1
kind: Pod
metadata:
  name: kaniko
spec:
  containers:
  - name: kaniko
    image: daocloud.io/gcr-mirror/kaniko-project-executor:latest
    args: ["--dockerfile=/demo/Dockerfile",
            "--context=/demo",
            "--insecure=true",
            "--skip-tls-verify=true",
            "--destination=172.16.105.1/demo/flask-web:v1"]
    volumeMounts:
      - name: docker-config
        mountPath: /kaniko/.docker/
      - name: project-volume
        mountPath: /demo
  restartPolicy: Never
  volumes:
    - name: docker-config
      configMap:
        name: docker-config
    - name: project-volume
      hostPath:
        path: /root/flask-demo
  nodeSelector:
    k8s.ihypo.net/build: kaniko

(figure: build log)

Supplementary

If the GCR images cannot be pulled

You can replace

gcr.io/kaniko-project/executor
gcr.io/kaniko-project/warmer

with

daocloud.io/gcr-mirror/kaniko-project-executor
daocloud.io/gcr-mirror/kaniko-project-warmer

Debug

You can enter debug mode using the debug image:

daocloud.io/gcr-mirror/kaniko-project-executor:debug
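The debug image ships a busybox shell at /busybox/sh, so one way to get an interactive session is to override the container's command in the Pod spec above (a sketch; the other fields stay as before):

```yaml
    image: daocloud.io/gcr-mirror/kaniko-project-executor:debug
    command: ["/busybox/sh"]
    stdin: true
    tty: true
```

You can then attach with `kubectl attach -it kaniko` and inspect the /kaniko directory by hand.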

Build cache

You can pass --cache=true to enable build caching. For a local cache, kaniko uses the directory defined by --cache-dir; you can also use the --cache-repo parameter to specify a remote repository to hold the cache.
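Applied to the Pod above, that means adding the cache flags to the executor's args (the cache repository path here is illustrative):

```yaml
    args: ["--dockerfile=/demo/Dockerfile",
            "--context=/demo",
            "--cache=true",
            "--cache-repo=172.16.105.1/demo/cache",
            "--destination=172.16.105.1/demo/flask-web:v1"]
```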

Problems encountered

  1. The push sometimes fails after a successful build, for reasons unknown
  2. When Harbor is the target registry, the image is not visible in the web UI (https://github.com/GoogleContainerTools/kaniko/issues/539)

Building on Kubernetes

For more discussion of building images on Kubernetes, see: https://github.com/kubernetes/kubernetes/issues/1806