docker has been deprecated by the Kubernetes community as a container runtime (CRI), but it’s important to understand that the CRI is only part of docker’s functionality. Many workflows still rely heavily on docker for local development, testing, and streamlined CI/CD image builds. For example, docker’s official build-push-action is the preferred way to build container images on GitHub. Even docker’s rivals podman + skopeo + buildah use docker to build their own container images (see multi-arch-build.yaml).

jobs:
  multi:
    name: multi-arch image build
    env:
      REPONAME: buildah  # No easy way to parse this out of $GITHUB_REPOSITORY
      # Server/namespace value used to format FQIN
      REPONAME_QUAY_REGISTRY: quay.io/buildah
      CONTAINERS_QUAY_REGISTRY: quay.io/containers
      # list of architectures for build
      PLATFORMS: linux/amd64,linux/s390x,linux/ppc64le,linux/arm64
      # Command to execute in container to obtain project version number
      VERSION_CMD: "buildah --version"

    # build several images (upstream, testing, stable) in parallel
    strategy:
      # By default, failure of one matrix item cancels all others
      fail-fast: false
      matrix:
        # Builds are located under contrib/<reponame>image/<source> directory
        source:
          - upstream
          - testing
          - stable
    runs-on: ubuntu-latest
    # internal registry caches build for inspection before push
    services:
      registry:
        image: quay.io/libpod/registry:2
        ports:
          - 5000:5000
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Set up QEMU
        uses: docker/setup-qemu-action@v1

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
        with:
          driver-opts: network=host
          install: true

      - name: Build and locally push image
        uses: docker/build-push-action@v2
        with:
          context: contrib/${{ env.REPONAME }}image/${{ matrix.source }}
          file: ./contrib/${{ env.REPONAME }}image/${{ matrix.source }}/Dockerfile
          platforms: ${{ env.PLATFORMS }}
          push: true
          tags: localhost:5000/${{ env.REPONAME }}/${{ matrix.source }}

Jenkins Pipeline

Our CI/CD pipeline uses Jenkins with the Kubernetes plugin to dynamically create Pods on Kubernetes that act as Jenkins agents (slaves).

When docker is the container runtime, the Jenkins agent Pod mounts the host’s /var/run/docker.sock file into the pod container via hostPath.

The docker CLI inside the container can then communicate with the host’s docker daemon through this socket, so commands such as docker build and docker push work seamlessly inside the pod container.
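
As a quick illustration, this is roughly what it looks like from inside such a pod container once the socket is mounted as described above (image name and registry are placeholders):

# The docker CLI talks to the node's dockerd through the mounted socket,
# so no DOCKER_HOST configuration is needed inside the container.
ls -l /var/run/docker.sock
docker info                        # answered by the host's docker daemon
docker build -t hub.k8s.li/app:dev .
docker push hub.k8s.li/app:dev     # registry/tag are placeholders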

// Kubernetes pod template to run.
podTemplate(
    cloud: "kubernetes",
    namespace: "default",
    name: POD_NAME,
    label: POD_NAME,
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: debian
    image: "${JENKINS_POD_IMAGE_NAME}"
    imagePullPolicy: IfNotPresent
    tty: true
    volumeMounts:
    - name: dockersock
      mountPath: /var/run/docker.sock
  - name: jnlp
    args: ["\$(JENKINS_SECRET)", "\$(JENKINS_NAME)"]
    image: "jenkins/inbound-agent:4.3-4-alpine"
    imagePullPolicy: IfNotPresent
  volumes:
    - name: dockersock
      hostPath:
        path: /var/run/docker.sock
""",
)

When docker is no longer used as the container runtime for Kubernetes, there is no longer a docker daemon on the host, and mounting /var/run/docker.sock will not work, so we need to find an alternative.

There are two options that I can think of: option 1 is to replace docker and use another image builder like podman + skopeo + buildah.

Blogger Shaowen Chen discusses this solution in detail in “Kubernetes-based Jenkins service can also go Docker-free”. But our Makefile has some docker BuildKit feature flags sewn into it, so it cannot simply be brute-forced over to podman with an alias docker=podman 😂.

For example, podman build does not support --output type=local,dest=path (see “Support custom build outputs” #3789). It seems podman still has a long way to go before it can fully replace docker, especially since it has not solved the awkward problem of building its own images with docker.

Option 2 is to continue using docker as the image builder. Just because the docker daemon is gone from the cluster nodes does not mean docker cannot be used in the Kubernetes cluster: instead of running docker as a systemd service on the node, we can run docker as a Pod in the Kubernetes cluster and keep using it by connecting to docker’s TCP port through a Service IP or Node IP. This dinp (docker-in-pod) approach is a nesting of dind (docker-in-docker); it is essentially the same thing, only the deployment and access methods differ.

Comparing the two, option 1 is a somewhat speculative replacement of docker via alias docker=podman and should rarely be used in a production pipeline unless your Makefile or image build script does not rely on docker-specific feature flags and is fully podman compatible. Option 2 is more stable and reliable: it simply replaces the docker daemon on the host node with a Pod in the cluster, and users only need to change how they reach docker, i.e. the DOCKER_HOST environment variable. We therefore choose option 2 and introduce several ways to deploy and use dind/dinp in a Kubernetes cluster.
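
Before looking at the deployment options, here is a minimal sketch of what the switch looks like from a build script’s point of view; the service name and registry below are assumptions based on the examples later in this post:

# Point the docker CLI at the dind Pod instead of a local daemon.
export DOCKER_HOST=tcp://dinp-deployment:2375   # or tcp://<node-ip>:2375 for the daemonset variant
export DOCKER_TLS_VERIFY=""                     # we use the non-TLS port 2375
docker info                                     # verify connectivity before building
docker build -t hub.k8s.li/app:v1.0.0 .
docker push hub.k8s.li/app:v1.0.0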

docker in pod

Unlike docker in docker, docker in pod does not care what the underlying container runtime is; it can be either docker or containerd. For running and using docker inside a Pod, I have summarized the following three approaches; choose whichever suits your scenario.

sidecar

Run the dind container as a sidecar container and have the main container access docker’s 2375/2376 TCP port via localhost. The advantage of this approach is that if multiple Pods are created, each Pod is independent and its dind container is not shared with other Pods, so isolation is better. The disadvantage is just as obvious: every Pod carries its own dind container and consumes more system resources, which is a bit of overkill.

apiVersion: v1
kind: Pod
metadata:
  name: dinp-sidecar
spec:
  containers:
  - image: docker:20.10.12
    name: debug
    command: ["sleep", "3600"]
    env:
    - name: DOCKER_TLS_VERIFY
      value: ""
    - name: DOCKER_HOST
      value: tcp://localhost:2375
  - name: dind
    image: docker:20.10.12-dind-rootless
    args: ["--insecure-registry=$(REGISTRY)"]
    env:
    # If the image registry uses a self-signed certificate, configure insecure-registry here
    - name: REGISTRY
      value: hub.k8s.li
    - name: DOCKER_TLS_CERTDIR
      value: ""
    - name: DOCKER_HOST
      value: tcp://localhost:2375
    securityContext:
      privileged: true
    tty: true
    # Use a docker info readiness probe to make sure the dind container has started properly
    readinessProbe:
      exec:
        command: ["docker", "info"]
      initialDelaySeconds: 10
      failureThreshold: 6
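
A hedged usage sketch for the manifest above (the file name is an assumption): apply it, wait for the dind sidecar to pass its readiness probe, then run docker commands from the debug container:

kubectl apply -f dinp-sidecar.yaml
kubectl wait --for=condition=Ready pod/dinp-sidecar --timeout=120s
# DOCKER_HOST is already set to tcp://localhost:2375 in the debug container
kubectl exec -it dinp-sidecar -c debug -- docker info
kubectl exec -it dinp-sidecar -c debug -- docker pull alpine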

daemonset

The daemonset runs a dind Pod on each node of the cluster and exposes the 2375/2376 TCP ports via hostNetwork. Users reach docker by connecting to the host’s 2375/2376 port obtained from status.hostIP, and the dind container’s /var/lib/docker data is persisted through a hostPath mount, which caches data and improves image-build efficiency.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: dinp-daemonset
  namespace: default
spec:
  selector:
    matchLabels:
      name: dinp-daemonset
  template:
    metadata:
      labels:
        name: dinp-daemonset
    spec:
      hostNetwork: true
      containers:
      - name: dind
        image: docker:20.10.12-dind
        args: ["--insecure-registry=$(REGISTRY)"]
        env:
        - name: REGISTRY
          value: hub.k8s.li
        - name: DOCKER_TLS_CERTDIR
          value: ""
        securityContext:
          privileged: true
        tty: true
        volumeMounts:
        - name: docker-storage
          mountPath:  /var/lib/docker
        readinessProbe:
          exec:
            command: ["docker", "info"]
          initialDelaySeconds: 10
          failureThreshold: 6
        livenessProbe:
          exec:
            command: ["docker", "info"]
          initialDelaySeconds: 60
          failureThreshold: 10
      volumes:
      - name: docker-storage
        hostPath:
          path: /var/lib/docker
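
A quick way to verify the daemonset (a sketch; it assumes the non-TLS port 2375 and a machine that can reach the node IP):

kubectl rollout status daemonset/dinp-daemonset
# Grab the host IP of one dinp Pod and talk to its docker daemon directly.
NODE_IP=$(kubectl get pod -l name=dinp-daemonset -o jsonpath='{.items[0].status.hostIP}')
DOCKER_HOST=tcp://${NODE_IP}:2375 docker info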

deployment

A deployment runs one or more dind Pods in the cluster; users access docker’s 2375/2376 port through the Service IP, provided the dind containers are started in non-TLS mode. Accessing docker through a Service IP is also more secure than using the host IP as in the previous daemonset approach.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: dinp-deployment
  namespace: default
  labels:
    name: dinp-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      name: dinp-deployment
  template:
    metadata:
      labels:
        name: dinp-deployment
    spec:
      containers:
      - name: dind
        image: docker:20.10.12-dind
        args: ["--insecure-registry=$(REGISTRY)"]
        env:
        - name: REGISTRY
          value: hub.k8s.li
        - name: DOCKER_TLS_CERTDIR
          value: ""
        - name: DOCKER_HOST
          value: tcp://localhost:2375
        securityContext:
          privileged: true
        tty: true
        volumeMounts:
        - name: docker-storage
          mountPath:  /var/lib/docker
        readinessProbe:
          exec:
            command: ["docker", "info"]
          initialDelaySeconds: 10
          failureThreshold: 6
        livenessProbe:
          exec:
            command: ["docker", "info"]
          initialDelaySeconds: 60
          failureThreshold: 10
      volumes:
      - name: docker-storage
        hostPath:
          path: /var/lib/docker
---
kind: Service
apiVersion: v1
metadata:
  # Define the Service name; clients use it to access docker's port 2375
  name: dinp-deployment
spec:
  selector:
    name: dinp-deployment
  ports:
  - protocol: TCP
    port: 2375
    targetPort: 2375
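
To sanity-check the deployment and Service above, you can run a throwaway docker CLI Pod and point it at the Service DNS name (a sketch; the manifest file name and the temporary pod are assumptions):

kubectl apply -f dinp-deployment.yaml
kubectl run docker-cli --rm -it --restart=Never --image=docker:20.10.12 \
  --env DOCKER_HOST=tcp://dinp-deployment.default.svc:2375 \
  --env DOCKER_TLS_VERIFY="" -- docker info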

Jenkinsfile

In the Jenkins podTemplate you can choose among several templates, depending on how dinp is deployed.

sidecar

Containers in a Pod share the same network stack, so the main container can reach docker’s TCP port via localhost. It is also best to start the dind container in rootless mode so that multiple instances of this Pod can run on the same node.

def JOB_NAME = "${env.JOB_BASE_NAME}"
def BUILD_NUMBER = "${env.BUILD_NUMBER}"
def POD_NAME = "jenkins-${JOB_NAME}-${BUILD_NUMBER}"
def K8S_CLUSTER = params.k8s_cluster ?: "kubernetes"

// Kubernetes pod template to run.
podTemplate(
    cloud: K8S_CLUSTER,
    namespace: "default",
    name: POD_NAME,
    label: POD_NAME,
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: runner
    image: golang:1.17-buster
    imagePullPolicy: IfNotPresent
    tty: true
    env:
    - name: DOCKER_HOST
      value: tcp://localhost:2375
    - name: DOCKER_TLS_VERIFY
      value: ""
  - name: jnlp
    args: ["\$(JENKINS_SECRET)", "\$(JENKINS_NAME)"]
    image: "jenkins/inbound-agent:4.11.2-4-alpine"
    imagePullPolicy: IfNotPresent
  - name: dind
    image: docker:20.10.12-dind-rootless
    args: ["--insecure-registry=$(REGISTRY)"]
    env:
    - name: REGISTRY
      value: hub.k8s.li
    - name: DOCKER_TLS_CERTDIR
      value: ""
    securityContext:
      privileged: true
    tty: true
    readinessProbe:
      exec:
        command: ["docker", "info"]
      initialDelaySeconds: 10
      failureThreshold: 6
""",
) {
    node(POD_NAME) {
        container("runner") {
            stage("Checkout") {
                retry(10) {
                    checkout([
                        $class: 'GitSCM',
                        branches: scm.branches,
                        doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
                        extensions: [[$class: 'CloneOption', noTags: false, shallow: false, depth: 0, reference: '']],
                        userRemoteConfigs: scm.userRemoteConfigs,
                    ])
                }
            }
            stage("Build") {
                sh """
                # make docker-build
                docker build -t app:v1.0.0-alpha.1 .
                """
            }
        }
    }
}

daemonset

Since the daemonset uses hostNetwork, you can access docker’s TCP port through the host IP; you could also expose it through a Service name as with the deployment, which is not demonstrated here.

def JOB_NAME = "${env.JOB_BASE_NAME}"
def BUILD_NUMBER = "${env.BUILD_NUMBER}"
def POD_NAME = "jenkins-${JOB_NAME}-${BUILD_NUMBER}"
def K8S_CLUSTER = params.k8s_cluster ?: "kubernetes"

// Kubernetes pod template to run.
podTemplate(
    cloud: K8S_CLUSTER,
    namespace: "default",
    name: POD_NAME,
    label: POD_NAME,
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: runner
    image: golang:1.17-buster
    imagePullPolicy: IfNotPresent
    tty: true
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    # DOCKER_HOST must be a full URL, so compose it from the host IP exposed above
    - name: DOCKER_HOST
      value: tcp://\$(HOST_IP):2375
    - name: DOCKER_TLS_VERIFY
      value: ""
  - name: jnlp
    args: ["\$(JENKINS_SECRET)", "\$(JENKINS_NAME)"]
    image: "jenkins/inbound-agent:4.11.2-4-alpine"
    imagePullPolicy: IfNotPresent
""",
) {
    node(POD_NAME) {
        container("runner") {
            stage("Checkout") {
                retry(10) {
                    checkout([
                        $class: 'GitSCM',
                        branches: scm.branches,
                        doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
                        extensions: [[$class: 'CloneOption', noTags: false, shallow: false, depth: 0, reference: '']],
                        userRemoteConfigs: scm.userRemoteConfigs,
                    ])
                }
            }
            stage("Build") {
                sh """
                # make docker-build
                docker build -t app:v1.0.0-alpha.1 .
                """
            }
        }
    }
}

deployment

Access docker via the Service name; all other parameters are the same as in the daemonset example.

def JOB_NAME = "${env.JOB_BASE_NAME}"
def BUILD_NUMBER = "${env.BUILD_NUMBER}"
def POD_NAME = "jenkins-${JOB_NAME}-${BUILD_NUMBER}"
def K8S_CLUSTER = params.k8s_cluster ?: "kubernetes"

// Kubernetes pod template to run.
podTemplate(
    cloud: K8S_CLUSTER,
    namespace: "default",
    name: POD_NAME,
    label: POD_NAME,
    yaml: """
apiVersion: v1
kind: Pod
spec:
  containers:
  - name: runner
    image: golang:1.17-buster
    imagePullPolicy: IfNotPresent
    tty: true
    env:
    - name: DOCKER_HOST
      value: tcp://dinp-deployment:2375
    - name: DOCKER_TLS_VERIFY
      value: ""
  - name: jnlp
    args: ["\$(JENKINS_SECRET)", "\$(JENKINS_NAME)"]
    image: "jenkins/inbound-agent:4.11.2-4-alpine"
    imagePullPolicy: IfNotPresent
""",
) {
    node(POD_NAME) {
        container("runner") {
            stage("Checkout") {
                retry(10) {
                    checkout([
                        $class: 'GitSCM',
                        branches: scm.branches,
                        doGenerateSubmoduleConfigurations: scm.doGenerateSubmoduleConfigurations,
                        extensions: [[$class: 'CloneOption', noTags: false, shallow: false, depth: 0, reference: '']],
                        userRemoteConfigs: scm.userRemoteConfigs,
                    ])
                }
            }
            stage("Build") {
                sh """
                # make docker-build
                docker build -t app:v1.0.0-alpha.1 .
                """
            }
        }
    }
}

Other

readinessProbe

There are times when dind does not start properly, so be sure to set a readiness probe to ensure the dind container has started correctly.

readinessProbe:
  exec:
    command: ["docker", "info"]
  initialDelaySeconds: 10
  failureThreshold: 6

Port 2375/2376

If you set the environment variable DOCKER_TLS_CERTDIR to an empty string, dockerd starts in non-TLS mode and listens on port 2375 without verifying TLS certificates. If you use port 2376 instead, you need persistent storage to share the docker-generated certificates with the client, which is a bit tricky. So if you don’t want to mess around with certificates, use the non-TLS port 2375 😂.
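
From the client side, the difference between the two ports looks roughly like this (a sketch; the certificate path follows the docker dind image’s /certs convention and is an assumption):

# Non-TLS, port 2375: only the host and an empty DOCKER_TLS_VERIFY are needed.
export DOCKER_HOST=tcp://dinp-deployment:2375
export DOCKER_TLS_VERIFY=""
docker info

# TLS, port 2376: the client must also see the certificates generated by dind,
# which is why a shared volume for DOCKER_TLS_CERTDIR would be required.
# export DOCKER_HOST=tcp://dinp-deployment:2376
# export DOCKER_TLS_VERIFY=1
# export DOCKER_CERT_PATH=/certs/client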

dinp must be enabled with privileged: true

If you run docker as a Pod, whether in rootless mode or not, you must set privileged: true in the container’s securityContext, otherwise the Pod will not start properly. Rootless mode also has its own limitations: it depends on certain kernel features and is still experimental, so avoid it unless you have a specific need.

[root@localhost ~]# kubectl logs -f dinp-sidecar
error: a container name must be specified for pod dinp-sidecar, choose one of: [debug dind]
[root@localhost ~]# kubectl logs -f dinp-sidecar -c dind
Device "ip_tables" does not exist.
ip_tables              27126  4 iptable_raw,iptable_mangle,iptable_filter,iptable_nat
modprobe: can't change directory to '/lib/modules': No such file or directory
WARN[0000] failed to mount sysfs, falling back to read-only mount: operation not permitted
WARN[0000] failed to mount sysfs: operation not permitted
open: No such file or directory
[rootlesskit:child ] error: executing [[ip tuntap add name tap0 mode tap] [ip link set tap0 address 02:50:00:00:00:01]]: exit status 1

rootless user.max_user_namespaces

Rootless mode depends on certain kernel parameters (see “Run the Docker daemon as a non-root user (Rootless mode)”). On CentOS 7.9 you may hit “dind-rootless: failed to start up dind rootless in k8s” because of the max_user_namespaces problem. The solution is to set the user.max_user_namespaces=28633 kernel parameter.

Add user.max_user_namespaces=28633 to /etc/sysctl.conf (or /etc/sysctl.d) and run sudo sysctl -p
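
A sketch of applying that on a node (the file name under /etc/sysctl.d is arbitrary); without it, dind-rootless crashes as shown in the log below:

echo "user.max_user_namespaces=28633" | sudo tee /etc/sysctl.d/99-rootless.conf
sudo sysctl --system                 # reloads /etc/sysctl.conf and /etc/sysctl.d/*
sysctl user.max_user_namespaces      # should now print 28633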

[root@localhost ~]# kubectl get pod -w
NAME                              READY   STATUS   RESTARTS     AGE
dinp-deployment-cf488bfd8-g8vxx   0/1     CrashLoopBackOff   1 (2s ago)   4s
[root@localhost ~]# kubectl logs -f dinp-deployment-cf488bfd8-m5cms
Device "ip_tables" does not exist.
ip_tables              27126  5 iptable_raw,iptable_mangle,iptable_filter,iptable_nat
modprobe: can't change directory to '/lib/modules': No such file or directory
error: attempting to run rootless dockerd but need 'user.max_user_namespaces' (/proc/sys/user/max_user_namespaces) set to a sufficiently large value

Only one dinp can run on the same node in non-rootless mode

If you deploy dinp with a deployment, only one dinp Pod can run on a node; any extra Pods on that node will not start properly (they end up sharing the same hostPath /var/lib/docker). So if you want to run more than one dinp Pod, it is recommended to use the daemonset mode instead.

[root@localhost ~]# kubectl get deploy
NAME              READY   UP-TO-DATE   AVAILABLE   AGE
dinp-deployment   1/3     3            1           4m16s
[root@localhost ~]# kubectl get pod -w
NAME                               READY   STATUS    RESTARTS      AGE
dinp-deployment-547bd9bb6d-2mn6c   0/1     Running   3 (61s ago)   4m9s
dinp-deployment-547bd9bb6d-8ht8l   1/1     Running   0             4m9s
dinp-deployment-547bd9bb6d-x5vpv   0/1     Running   3 (61s ago)   4m9s
[root@localhost ~]# kubectl logs -f dinp-deployment-547bd9bb6d-2mn6c
INFO[2022-03-14T14:14:10.905652548Z] Starting up
WARN[2022-03-14T14:14:10.906986721Z] could not change group /var/run/docker.sock to docker: group docker not found
WARN[2022-03-14T14:14:10.907249071Z] Binding to IP address without --tlsverify is insecure and gives root access on this machine to everyone who has access to your network.  host="tcp://0.0.0.0:2375"
WARN[2022-03-14T14:14:10.907269951Z] Binding to an IP address, even on localhost, can also give access to scripts run in a browser. Be safe out there!  host="tcp://0.0.0.0:2375"
WARN[2022-03-14T14:14:11.908057635Z] Binding to an IP address without --tlsverify is deprecated. Startup is intentionally being slowed down to show this message  host="tcp://0.0.0.0:2375"
WARN[2022-03-14T14:14:11.908103696Z] Please consider generating tls certificates with client validation to prevent exposing unauthenticated root access to your network  host="tcp://0.0.0.0:2375"
WARN[2022-03-14T14:14:11.908114541Z] You can override this by explicitly specifying '--tls=false' or '--tlsverify=false'  host="tcp://0.0.0.0:2375"
WARN[2022-03-14T14:14:11.908125477Z] Support for listening on TCP without authentication or explicit intent to run without authentication will be removed in the next release  host="tcp://0.0.0.0:2375"
INFO[2022-03-14T14:14:26.914587276Z] libcontainerd: started new containerd process  pid=41
INFO[2022-03-14T14:14:26.914697125Z] parsed scheme: "unix"                         module=grpc
INFO[2022-03-14T14:14:26.914710376Z] scheme "unix" not registered, fallback to default scheme  module=grpc
INFO[2022-03-14T14:14:26.914785052Z] ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}] <nil> <nil>}  module=grpc
INFO[2022-03-14T14:14:26.914796039Z] ClientConn switching balancer to "pick_first"  module=grpc
INFO[2022-03-14T14:14:26.930311832Z] starting containerd                           revision=7b11cfaabd73bb80907dd23182b9347b4245eb5d version=v1.4.12
INFO[2022-03-14T14:14:26.953641900Z] loading plugin "io.containerd.content.v1.content"...  type=io.containerd.content.v1
INFO[2022-03-14T14:14:26.953721059Z] loading plugin "io.containerd.snapshotter.v1.aufs"...  type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960295816Z] skip loading plugin "io.containerd.snapshotter.v1.aufs"...  error="aufs is not supported (modprobe aufs failed: exit status 1 \"ip: can't find device 'aufs'\\nmodprobe: can't change directory to '/lib/modules': No such file or directory\\n\"): skip plugin" type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960329840Z] loading plugin "io.containerd.snapshotter.v1.btrfs"...  type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960524514Z] skip loading plugin "io.containerd.snapshotter.v1.btrfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (xfs) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960537441Z] loading plugin "io.containerd.snapshotter.v1.devmapper"...  type=io.containerd.snapshotter.v1
WARN[2022-03-14T14:14:26.960558843Z] failed to load plugin io.containerd.snapshotter.v1.devmapper  error="devmapper not configured"
INFO[2022-03-14T14:14:26.960569516Z] loading plugin "io.containerd.snapshotter.v1.native"...  type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960593224Z] loading plugin "io.containerd.snapshotter.v1.overlayfs"...  type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960678728Z] loading plugin "io.containerd.snapshotter.v1.zfs"...  type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960814844Z] skip loading plugin "io.containerd.snapshotter.v1.zfs"...  error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
INFO[2022-03-14T14:14:26.960827133Z] loading plugin "io.containerd.metadata.v1.bolt"...  type=io.containerd.metadata.v1
WARN[2022-03-14T14:14:26.960839223Z] could not use snapshotter devmapper in metadata plugin  error="devmapper not configured"
INFO[2022-03-14T14:14:26.960848698Z] metadata content store policy set             policy=shared
WARN[2022-03-14T14:14:27.915528371Z] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting...  module=grpc
WARN[2022-03-14T14:14:30.722257725Z] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting...  module=grpc
WARN[2022-03-14T14:14:35.549453706Z] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting...  module=grpc
WARN[2022-03-14T14:14:41.759010407Z] grpc: addrConn.createTransport failed to connect to {unix:///var/run/docker/containerd/containerd.sock  <nil> 0 <nil>}. Err :connection error: desc = "transport: error while dialing: dial unix:///var/run/docker/containerd/containerd.sock: timeout". Reconnecting...  module=grpc
failed to start containerd: timeout waiting for containerd to start

/var/lib/docker does not support shared storage

Blogger Shaowen Chen has mentioned in “Can /var/lib/docker mount remote storage” that docker currently does not support placing /var/lib/docker on remote storage, so it is recommended to use hostPath for docker’s persistent data.

The Docker version used for this test is 20.10.6, and it is not possible to put /var/lib/docker on remote storage. The main reason is that the container implementation relies on kernel capabilities (xattrs) that are not available on remote storage such as an NFS server. If Device Mapper is used for mapping, disk mounts are possible, but only for migration, not for sharing.

INFO[2022-03-13T13:43:08.750810130Z] ClientConn switching balancer to "pick_first"  module=grpc
ERRO[2022-03-13T13:43:08.781932359Z] failed to mount overlay: invalid argument     storage-driver=overlay2
ERRO[2022-03-13T13:43:08.782078828Z] exec: "fuse-overlayfs": executable file not found in $PATH  storage-driver=fuse-overlayfs
ERRO[2022-03-13T13:43:08.793311119Z] AUFS was not found in /proc/filesystems       storage-driver=aufs
ERRO[2022-03-13T13:43:08.813505621Z] failed to mount overlay: invalid argument     storage-driver=overlay
ERRO[2022-03-13T13:43:08.813529990Z] Failed to built-in GetDriver graph devicemapper /var/lib/docker
INFO[2022-03-13T13:43:08.897769363Z] Loading containers: start.
WARN[2022-03-13T13:43:08.919252078Z] Running modprobe bridge br_netfilter failed with message: ip: can't find device 'bridge'
[root@localhost dinp]# kubectl exec -it dinp-sidecar -c debug sh
/ # docker pull alpine
Using default tag: latest
Error response from daemon: error creating temporary lease: file resize error: truncate /var/lib/docker/containerd/daemon/io.containerd.metadata.v1.bolt/meta.db: bad file descriptor: unknown