Before learning containerd, it is worth briefly reviewing Docker's development history, because it involves quite a few components whose names we hear often without a clear idea of what they actually do: libcontainer, runc, containerd, CRI, OCI, and so on.

Docker

Since Docker 1.11, Docker containers are no longer started by the Docker Daemon alone; instead, creation is completed by integrating containerd, runc, and other components. Although the Docker Daemon module has been refactored repeatedly, its basic function and positioning have not changed much: it is still a client/server architecture in which the daemon interacts with the Docker client and manages Docker images and containers. In the current architecture, containerd is responsible for container lifecycle management on the node and exposes a gRPC interface to the Docker Daemon.

When we want to create a container, the Docker Daemon no longer creates it directly. Instead it asks containerd to create the container, and containerd in turn does not operate on the container itself but spawns a process called containerd-shim and lets that process manage the container. The container process needs a parent process to collect its status, keep stdin and other file descriptors open, and so on. If that parent were containerd, then a crash of containerd would force every container on the host to exit; the containerd-shim shim is introduced precisely to avoid this problem.

Creating a container then requires configuring namespaces and cgroups, mounting the root filesystem, and so on. These operations follow a standard specification, the OCI (Open Container Initiative) standard, and runc is its reference implementation (Docker was pushed to donate libcontainer, which was renamed runc). The standard itself is essentially a document that specifies the structure of container images and the commands a container runtime must accept, such as create, start, stop, and delete, and runc can create a specification-compliant container by following this OCI document. Since it is a standard, there are naturally other OCI implementations, such as Kata and gVisor; these container runtimes also conform to the OCI standard.
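
To get a feel for the lifecycle commands the spec defines, here is a minimal sketch of driving runc by hand; it assumes runc is installed, root privileges, and a rootfs prepared in advance (for example exported from an existing image):

➜  ~ mkdir -p mycontainer/rootfs   # the rootfs must already contain a filesystem
➜  ~ cd mycontainer
➜  ~ runc spec                     # generate a default OCI config.json for this bundle
➜  ~ runc create demo              # create the container; the user process is not started yet
➜  ~ runc start demo               # start the user process defined in config.json
➜  ~ runc list                     # list containers known to runc
➜  ~ runc delete demo              # tear the container down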

So the container is actually started by containerd-shim calling runc. runc exits right after starting the container, and containerd-shim becomes the parent of the container process. The shim collects the status of the container process and reports it to containerd, and after the process with PID 1 exits it takes over and cleans up the container's child processes, ensuring no zombie processes are left behind.
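
On a host that is already running containers you can loosely verify this relationship from the process list (a rough check, not an exact recipe; output depends on your setup):

➜  ~ ps -e -o pid,ppid,cmd | grep '[c]ontainerd-shim'
# the container's PID 1 has a containerd-shim process as its parent, while the shim itself
# is detached from containerd, which is why restarting containerd does not kill running containers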

Docker migrated container operations to containerd because at the time Docker was building Swarm and wanted to enter the PaaS market; this architectural split let the Docker Daemon focus on higher-level packaging and orchestration. As we know, Swarm turned out to be a fiasco against Kubernetes, and Docker later donated the containerd project to the CNCF Foundation. This is also Docker's current architecture.

CRI

We know that Kubernetes provides the CRI container runtime interface, so what exactly is CRI? It is actually closely tied to Docker's development history.

In the early days of Kubernetes, when Docker was all the rage, Kubernetes naturally chose to support Docker first, calling the Docker API directly through hard-coded integration. Later, to support more (and more streamlined) container runtimes, Google and Red Hat led the introduction of the CRI standard to decouple the Kubernetes platform from any specific container runtime (and, of course, mainly to sideline Docker).

The CRI (Container Runtime Interface) is essentially a set of interfaces defined by Kubernetes for interacting with container runtimes, so any container runtime that implements this set of interfaces can be plugged into the Kubernetes platform. However, when Kubernetes introduced the CRI standard it was not as dominant as it is now, so some container runtimes did not implement the CRI interface themselves. This is where the shim comes in: a shim acts as an adapter, translating a container runtime's own interface into Kubernetes' CRI interface. dockershim is the shim implementation Kubernetes used to adapt Docker to the CRI interface.

Kubelet communicates with the container runtime or shim through a gRPC framework, where kubelet acts as the client and CRI shim (and possibly the container runtime itself) acts as the server.

The CRI-defined API (https://github.com/kubernetes/kubernetes/blob/release-1.5/pkg/kubelet/api/v1alpha1/runtime/api.proto) consists mainly of two gRPC services, ImageService and RuntimeService. ImageService covers operations such as pulling, listing, and deleting images, while RuntimeService manages the lifecycle of Pods and containers and handles interactive calls (exec/attach/port-forward). You can configure the sockets of these two services with the kubelet flags --container-runtime-endpoint and --image-service-endpoint.
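
For example, to point the kubelet at containerd's CRI socket, both endpoints can target the same unix socket; the socket path below assumes a default containerd installation, and --image-service-endpoint falls back to the runtime endpoint when omitted:

kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///run/containerd/containerd.sock \
        --image-service-endpoint=unix:///run/containerd/containerd.sock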

However, there is one exception: Docker. Because Docker's status was so high at the time, Kubernetes built dockershim directly into the kubelet, so when using Docker as the container runtime there is no need to install or configure a separate adapter. Of course, this special treatment also seems to have lulled Docker into complacency.

Now, if we are using Docker, when we create a Pod in Kubernetes the kubelet first calls dockershim through the CRI interface to request the creation of a container. The kubelet acts as a simple CRI client here and dockershim as the server that receives the request, but both are built into the kubelet.

After dockershim receives the request, it converts it into something the Docker Daemon can understand and forwards it to the Docker Daemon to create the container. From there the flow is the Docker container-creation process described above: the daemon calls containerd, containerd creates a containerd-shim process, and the shim calls runc to actually create the container.

If we look closely, it is not hard to see that using Docker involves a very long call chain; for the actual container-related operations, containerd alone is completely sufficient, and Docker is too complex and bulky in comparison. Of course, a large part of Docker's popularity comes from its many user-friendly features, but Kubernetes doesn't need them, because it operates containers through the CRI interface, so it can naturally switch the container runtime to containerd.

Switching to containerd eliminates the middle layers and the experience is the same as before, but because containers are now scheduled directly through the container runtime, they are not visible to Docker. As a result, the Docker tools you previously used to inspect these containers are no longer available.

You can no longer use the docker ps or docker inspect commands to get container information. Since you can't list containers, you also can't get logs, stop containers, or exec into them with docker exec.
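
The CRI-side replacement for these commands is crictl, which talks to whatever runtime sits behind the CRI socket (it ships in the cri-containerd tarball we install below). Rough equivalents, with CONTAINER_ID as a placeholder:

➜  ~ crictl ps                        # docker ps
➜  ~ crictl inspect CONTAINER_ID      # docker inspect
➜  ~ crictl logs CONTAINER_ID         # docker logs
➜  ~ crictl exec -it CONTAINER_ID sh  # docker exec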

Of course, we can still download images or build them with the docker build command, but images built or downloaded through Docker are invisible to the container runtime and to Kubernetes. To use such an image in Kubernetes, it must first be pushed to an image registry.
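
A hypothetical round trip looks like this (registry.example.com/myapp:v1 is a made-up image reference):

➜  ~ docker build -t registry.example.com/myapp:v1 .   # lands only in Docker's local image store
➜  ~ docker push registry.example.com/myapp:v1         # publish it to a registry
➜  ~ crictl pull registry.example.com/myapp:v1         # now the CRI runtime can fetch it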

As you can see above, in containerd 1.0 the CRI adaptation was done through a separate CRI-Containerd process. At the time containerd was also meant to serve other systems (such as Swarm), so it did not implement CRI directly, and the adaptation was left to the CRI-Containerd shim.

Then after containerd 1.1, the CRI-Containerd shim was removed and the adaptation logic was integrated directly into the main containerd process as a plugin, which is now much cleaner to call.

At the same time, the Kubernetes community created CRI-O, a CRI runtime built specifically for Kubernetes that is directly compatible with both the CRI and OCI specifications.

Both this scheme and containerd's are clearly much simpler call chains than the default dockershim, but since most users were accustomed to Docker, people still preferred the dockershim scheme.

However, as the CRI scheme evolved and other container runtimes' support for CRI improved, the Kubernetes community started removing dockershim in July 2020: https://github.com/kubernetes/enhancements/tree/master/keps/sig-node/2221-remove-dockershim. The current plan is for releases 1.23/1.24 to ship without dockershim support enabled (the code will still be there, but to get out-of-the-box Docker support you will have to build the kubelet yourself), and after a grace period the built-in dockershim code will be removed from the kubelet entirely.

So does this mean Kubernetes no longer supports Docker? Of course not. This is only a deprecation of the built-in dockershim; Docker will simply be treated the same as any other container runtime, with no special built-in support. If we still want to use the Docker container runtime directly, what can we do? One option is to extract the dockershim functionality and maintain it as a standalone cri-dockerd, similar to the CRI-Containerd shipped with containerd 1.0; another is for the official Docker community to build a CRI implementation directly into Dockerd.

But we also know that Dockerd itself just calls containerd, and since version 1.1 containerd has a built-in CRI implementation, so there is little point in Docker implementing CRI separately. Once Kubernetes no longer supports Docker out of the box, the best option is simply to use containerd as the container runtime directly, which has already been proven in production environments. So let's learn how to use containerd.

Containerd

We know that containerd has long been part of Docker Engine, but it has now been split out of Docker Engine as a separate open source project whose goal is to provide a more open and stable container runtime infrastructure. Now separated, containerd has more features, covers the full range of container runtime management needs, and provides more powerful support.

containerd is an industry-standard container runtime that emphasizes simplicity, robustness, and portability. containerd is responsible for the following:

  • Manage the container lifecycle (from container creation to container destruction)
  • Pulling/pushing container images
  • Storage management (managing the storage of images and container data)
  • Calling runc to run containers (interacting with container runtimes such as runc)
  • Managing container network interfaces and networks

Architecture

containerd can be used as a daemon for Linux and Windows, managing the complete container lifecycle of its host system, from image transfer and storage to container execution and monitoring, to underlying storage to network attachments and more.

The above diagram is containerd's official architecture. You can see that containerd also uses a C/S architecture: the server exposes a low-level gRPC API over a unix domain socket, and clients manage the containers on the node through these APIs. Each containerd instance is responsible for a single machine: pulling images, operating on containers (start, stop, etc.), networking, and storage are all done by containerd. Actually running containers is runc's responsibility; in fact, any container runtime that conforms to the OCI specification can be supported.

For decoupling, containerd divides the system into different components, each completed by one or more modules working together (the Core part), and each type of module is integrated into containerd as a plugin, with plugins depending on one another. In the diagram above, each long dashed box indicates a type of plugin and each small box a subdivision of it; for example, the Metadata Plugin relies on the Containers Plugin, the Content Plugin, and so on. For example:

  • Content Plugin : provides access to the addressable content in an image; all immutable content is stored here.
  • Snapshot Plugin : manages filesystem snapshots of container images; each image layer is decompressed into a filesystem snapshot, similar to Docker's graphdriver.

Overall containerd can be divided into three big blocks: Storage, Metadata and Runtime.

Installation

Here I am using Linux Mint 20.2 and first need to install the seccomp dependency.

➜  ~ apt-get update
➜  ~ apt-get install libseccomp2 -y

Since containerd needs to call runc, we also need to install runc first. Fortunately containerd provides a tarball that bundles the relevant dependencies, cri-containerd-cni-${VERSION}-${OS}-${ARCH}.tar.gz, which can be used directly for installation. First download the latest version of the archive from the release page; the current version is 1.5.5.

➜  ~ wget https://github.com/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz
# If the direct download is restricted, you can also switch to the mirror URL below to speed it up
# wget https://download.fastgit.org/containerd/containerd/releases/download/v1.5.5/cri-containerd-cni-1.5.5-linux-amd64.tar.gz

You can see directly which files are contained in the tarball with the -t option.

➜  ~ tar -tf cri-containerd-cni-1.5.5-linux-amd64.tar.gz
etc/
etc/cni/
etc/cni/net.d/
etc/cni/net.d/10-containerd-net.conflist
etc/crictl.yaml
etc/systemd/
etc/systemd/system/
etc/systemd/system/containerd.service
usr/
usr/local/
usr/local/bin/
usr/local/bin/containerd-shim-runc-v2
usr/local/bin/ctr
usr/local/bin/containerd-shim
usr/local/bin/containerd-shim-runc-v1
usr/local/bin/crictl
usr/local/bin/critest
usr/local/bin/containerd
usr/local/sbin/
usr/local/sbin/runc
opt/
opt/cni/
opt/cni/bin/
opt/cni/bin/vlan
opt/cni/bin/host-local
opt/cni/bin/flannel
opt/cni/bin/bridge
opt/cni/bin/host-device
opt/cni/bin/tuning
opt/cni/bin/firewall
opt/cni/bin/bandwidth
opt/cni/bin/ipvlan
opt/cni/bin/sbr
opt/cni/bin/dhcp
opt/cni/bin/portmap
opt/cni/bin/ptp
opt/cni/bin/static
opt/cni/bin/macvlan
opt/cni/bin/loopback
opt/containerd/
opt/containerd/cluster/
opt/containerd/cluster/version
opt/containerd/cluster/gce/
opt/containerd/cluster/gce/cni.template
opt/containerd/cluster/gce/configure.sh
opt/containerd/cluster/gce/cloud-init/
opt/containerd/cluster/gce/cloud-init/master.yaml
opt/containerd/cluster/gce/cloud-init/node.yaml
opt/containerd/cluster/gce/env

Extract the archive directly into the root of the filesystem so that each file lands in the corresponding system directory:

➜  ~ tar -C / -xzf cri-containerd-cni-1.5.5-linux-amd64.tar.gz

Of course, remember to append /usr/local/bin and /usr/local/sbin to the PATH environment variable in the ~/.bashrc file.

export PATH=$PATH:/usr/local/bin:/usr/local/sbin

Then execute the following command to make it effective immediately.

➜  ~ source ~/.bashrc

The default configuration file for containerd is /etc/containerd/config.toml and we can generate a default configuration with the following command.

➜  ~ mkdir /etc/containerd
➜  ~ containerd config default > /etc/containerd/config.toml

Since the containerd archive we downloaded above contains an etc/systemd/system/containerd.service file, we can configure containerd to run as a daemon via systemd, as follows.

➜  ~ cat /etc/systemd/system/containerd.service
[Unit]
Description=containerd container runtime
Documentation=https://containerd.io
After=network.target local-fs.target

[Service]
ExecStartPre=-/sbin/modprobe overlay
ExecStart=/usr/local/bin/containerd

Type=notify
Delegate=yes
KillMode=process
Restart=always
RestartSec=5
# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNPROC=infinity
LimitCORE=infinity
LimitNOFILE=1048576
# Comment TasksMax if your systemd version does not supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
OOMScoreAdjust=-999

[Install]
WantedBy=multi-user.target

There are two important parameters here.

  • Delegate : this option allows containerd and the runtime to manage the cgroups of the containers they create. If it is not set, systemd moves the processes into its own cgroups, and containerd can no longer obtain the containers' resource usage correctly.
  • KillMode : this option controls how the containerd process is killed. By default, systemd looks in the process's cgroup and kills all of containerd's child processes. The values that can be set for the KillMode field are as follows:
    • control-group (default): all child processes in the current control group are killed.
    • process : only the main process is killed.
    • mixed : the main process receives a SIGTERM signal and the child processes receive a SIGKILL signal.
    • none : no process is killed; only the stop command of the service is executed.

We need to set the KillMode value to process to ensure that upgrading or restarting containerd does not kill the existing containers.
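
If you ever need to apply or adjust these options without editing the shipped unit file, one standard sketch is a systemd drop-in (this merely restates the values already present in the unit above):

➜  ~ mkdir -p /etc/systemd/system/containerd.service.d
➜  ~ cat <<EOF > /etc/systemd/system/containerd.service.d/override.conf
[Service]
Delegate=yes
KillMode=process
EOF
➜  ~ systemctl daemon-reload
➜  ~ systemctl restart containerd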

Now we can start containerd, just execute the following command.

➜  ~ systemctl enable containerd --now

Once started, you can use containerd's native CLI tool ctr, for example to view the client and server versions:
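
➜  ~ ctr version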

Configuration

Let’s first look at the configuration file /etc/containerd/config.toml generated by default above.

disabled_plugins = []
imports = []
oom_score = 0
plugin_dir = ""
required_plugins = []
root = "/var/lib/containerd"
state = "/run/containerd"
version = 2

[cgroup]
  path = ""

[debug]
  address = ""
  format = ""
  gid = 0
  level = ""
  uid = 0

[grpc]
  address = "/run/containerd/containerd.sock"
  gid = 0
  max_recv_message_size = 16777216
  max_send_message_size = 16777216
  tcp_address = ""
  tcp_tls_cert = ""
  tcp_tls_key = ""
  uid = 0

[metrics]
  address = ""
  grpc_histogram = false

[plugins]

  [plugins."io.containerd.gc.v1.scheduler"]
    deletion_threshold = 0
    mutation_threshold = 100
    pause_threshold = 0.02
    schedule_delay = "0s"
    startup_delay = "100ms"

  [plugins."io.containerd.grpc.v1.cri"]
    disable_apparmor = false
    disable_cgroup = false
    disable_hugetlb_controller = true
    disable_proc_mount = false
    disable_tcp_service = true
    enable_selinux = false
    enable_tls_streaming = false
    ignore_image_defined_volumes = false
    max_concurrent_downloads = 3
    max_container_log_line_size = 16384
    netns_mounts_under_state_dir = false
    restrict_oom_score_adj = false
    sandbox_image = "k8s.gcr.io/pause:3.5"
    selinux_category_range = 1024
    stats_collect_period = 10
    stream_idle_timeout = "4h0m0s"
    stream_server_address = "127.0.0.1"
    stream_server_port = "0"
    systemd_cgroup = false
    tolerate_missing_hugetlb_controller = true
    unset_seccomp_profile = ""

    [plugins."io.containerd.grpc.v1.cri".cni]
      bin_dir = "/opt/cni/bin"
      conf_dir = "/etc/cni/net.d"
      conf_template = ""
      max_conf_num = 1

    [plugins."io.containerd.grpc.v1.cri".containerd]
      default_runtime_name = "runc"
      disable_snapshot_annotations = true
      discard_unpacked_layers = false
      no_pivot = false
      snapshotter = "overlayfs"

      [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.default_runtime.options]

      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          base_runtime_spec = ""
          container_annotations = []
          pod_annotations = []
          privileged_without_host_devices = false
          runtime_engine = ""
          runtime_root = ""
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            BinaryName = ""
            CriuImagePath = ""
            CriuPath = ""
            CriuWorkPath = ""
            IoGid = 0
            IoUid = 0
            NoNewKeyring = false
            NoPivotRoot = false
            Root = ""
            ShimCgroup = ""
            SystemdCgroup = false

      [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime]
        base_runtime_spec = ""
        container_annotations = []
        pod_annotations = []
        privileged_without_host_devices = false
        runtime_engine = ""
        runtime_root = ""
        runtime_type = ""

        [plugins."io.containerd.grpc.v1.cri".containerd.untrusted_workload_runtime.options]

    [plugins."io.containerd.grpc.v1.cri".image_decryption]
      key_model = "node"

    [plugins."io.containerd.grpc.v1.cri".registry]
      config_path = ""

      [plugins."io.containerd.grpc.v1.cri".registry.auths]

      [plugins."io.containerd.grpc.v1.cri".registry.configs]

      [plugins."io.containerd.grpc.v1.cri".registry.headers]

      [plugins."io.containerd.grpc.v1.cri".registry.mirrors]

    [plugins."io.containerd.grpc.v1.cri".x509_key_pair_streaming]
      tls_cert_file = ""
      tls_key_file = ""

  [plugins."io.containerd.internal.v1.opt"]
    path = "/opt/containerd"

  [plugins."io.containerd.internal.v1.restart"]
    interval = "10s"

  [plugins."io.containerd.metadata.v1.bolt"]
    content_sharing_policy = "shared"

  [plugins."io.containerd.monitor.v1.cgroups"]
    no_prometheus = false

  [plugins."io.containerd.runtime.v1.linux"]
    no_shim = false
    runtime = "runc"
    runtime_root = ""
    shim = "containerd-shim"
    shim_debug = false

  [plugins."io.containerd.runtime.v2.task"]
    platforms = ["linux/amd64"]

  [plugins."io.containerd.service.v1.diff-service"]
    default = ["walking"]

  [plugins."io.containerd.snapshotter.v1.aufs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.btrfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.devmapper"]
    async_remove = false
    base_image_size = ""
    pool_name = ""
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.native"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.overlayfs"]
    root_path = ""

  [plugins."io.containerd.snapshotter.v1.zfs"]
    root_path = ""

[proxy_plugins]

[stream_processors]

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar"

  [stream_processors."io.containerd.ocicrypt.decoder.v1.tar.gzip"]
    accepts = ["application/vnd.oci.image.layer.v1.tar+gzip+encrypted"]
    args = ["--decryption-keys-path", "/etc/containerd/ocicrypt/keys"]
    env = ["OCICRYPT_KEYPROVIDER_CONFIG=/etc/containerd/ocicrypt/ocicrypt_keyprovider.conf"]
    path = "ctd-decoder"
    returns = "application/vnd.oci.image.layer.v1.tar+gzip"

[timeouts]
  "io.containerd.timeout.shim.cleanup" = "5s"
  "io.containerd.timeout.shim.load" = "5s"
  "io.containerd.timeout.shim.shutdown" = "3s"
  "io.containerd.timeout.task.state" = "2s"

[ttrpc]
  address = ""
  gid = 0
  uid = 0

This configuration file is rather complicated; we can focus on the plugins configuration for now. Looking carefully, you can see that each top-level configuration block is named in the form plugins."io.containerd.xxx.vN.id": io.containerd.xxx.vN indicates the plugin's type, and the id after it indicates the plugin's ID. We can view the list of plugins via ctr:

➜  ~ ctr plugin ls
TYPE                            ID                       PLATFORMS      STATUS
io.containerd.content.v1        content                  -              ok
io.containerd.snapshotter.v1    aufs                     linux/amd64    ok
io.containerd.snapshotter.v1    btrfs                    linux/amd64    skip
io.containerd.snapshotter.v1    devmapper                linux/amd64    error
io.containerd.snapshotter.v1    native                   linux/amd64    ok
io.containerd.snapshotter.v1    overlayfs                linux/amd64    ok
io.containerd.snapshotter.v1    zfs                      linux/amd64    skip
io.containerd.metadata.v1       bolt                     -              ok
io.containerd.differ.v1         walking                  linux/amd64    ok
io.containerd.gc.v1             scheduler                -              ok
io.containerd.service.v1        introspection-service    -              ok
io.containerd.service.v1        containers-service       -              ok
io.containerd.service.v1        content-service          -              ok
io.containerd.service.v1        diff-service             -              ok
io.containerd.service.v1        images-service           -              ok
io.containerd.service.v1        leases-service           -              ok
io.containerd.service.v1        namespaces-service       -              ok
io.containerd.service.v1        snapshots-service        -              ok
io.containerd.runtime.v1        linux                    linux/amd64    ok
io.containerd.runtime.v2        task                     linux/amd64    ok
io.containerd.monitor.v1        cgroups                  linux/amd64    ok
io.containerd.service.v1        tasks-service            -              ok
io.containerd.internal.v1       restart                  -              ok
io.containerd.grpc.v1           containers               -              ok
io.containerd.grpc.v1           content                  -              ok
io.containerd.grpc.v1           diff                     -              ok
io.containerd.grpc.v1           events                   -              ok
io.containerd.grpc.v1           healthcheck              -              ok
io.containerd.grpc.v1           images                   -              ok
io.containerd.grpc.v1           leases                   -              ok
io.containerd.grpc.v1           namespaces               -              ok
io.containerd.internal.v1       opt                      -              ok
io.containerd.grpc.v1           snapshots                -              ok
io.containerd.grpc.v1           tasks                    -              ok
io.containerd.grpc.v1           version                  -              ok
io.containerd.grpc.v1           cri                      linux/amd64    ok

The sub-blocks beneath a top-level block hold the plugin's various settings; for example, under the cri plugin there are configurations for containerd, cni, and registry, and under containerd you can configure the available runtimes as well as the default runtime. For example, if we want to configure a registry mirror (accelerator) for images, we configure registry.mirrors under the registry block of the cri plugin:

[plugins."io.containerd.grpc.v1.cri".registry]
  [plugins."io.containerd.grpc.v1.cri".registry.mirrors]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."docker.io"]
      endpoint = ["https://bqr1dr1n.mirror.aliyuncs.com"]
    [plugins."io.containerd.grpc.v1.cri".registry.mirrors."k8s.gcr.io"]
      endpoint = ["https://registry.aliyuncs.com/k8sxio"]

  • registry.mirrors."xxx" : indicates the registry that needs a mirror configured, e.g. registry.mirrors."docker.io" configures a mirror for docker.io.
  • endpoint : indicates the endpoint of the mirror service providing acceleration, e.g. here we register an Aliyun mirror service as the mirror for docker.io.
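
Note that after modifying /etc/containerd/config.toml, containerd must be restarted for the new configuration to take effect:

➜  ~ systemctl restart containerd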

There are also two other configuration paths for storage in the default configuration.

root = "/var/lib/containerd"
state = "/run/containerd"

Where root is used to store persistent data, including snapshots, content, metadata, and the data of the various plugins; each plugin has its own separate directory. containerd itself does not store any data; all of its functionality comes from the loaded plugins.

The other state is used to store temporary runtime data, including sockets, pids, mount points, runtime state, and plugin data that does not need to be persisted.
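
A quick way to peek at both locations on a default installation (the exact entries vary with the containerd version and the plugins enabled, so treat the comments as indicative):

➜  ~ ls /var/lib/containerd    # per-plugin directories, e.g. io.containerd.snapshotter.v1.overlayfs
➜  ~ ls /run/containerd        # containerd.sock plus runtime state such as io.containerd.runtime.v2.task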

Usage

We know that the docker CLI provides many features that enhance the user experience. containerd also provides a corresponding CLI tool, ctr; it is not as complete as docker, but the basic features for images and containers are all there. Let's start with a brief introduction to ctr.

Help

Enter the ctr command by itself to see usage for all of its subcommands:

➜  ~ ctr
NAME:
   ctr -
        __
  _____/ /______
 / ___/ __/ ___/
/ /__/ /_/ /
\___/\__/_/

containerd CLI


USAGE:
   ctr [global options] command [command options] [arguments...]

VERSION:
   v1.5.5

DESCRIPTION:

ctr is an unsupported debug and administrative client for interacting
with the containerd daemon. Because it is unsupported, the commands,
options, and operations are not guaranteed to be backward compatible or
stable from release to release of the containerd project.

COMMANDS:
   plugins, plugin            provides information about containerd plugins
   version                    print the client and server versions
   containers, c, container   manage containers
   content                    manage content
   events, event              display containerd events
   images, image, i           manage images
   leases                     manage leases
   namespaces, namespace, ns  manage namespaces
   pprof                      provide golang pprof outputs for containerd
   run                        run a container
   snapshots, snapshot        manage snapshots
   tasks, t, task             manage tasks
   install                    install a new package
   oci                        OCI tools
   shim                       interact with a shim directly
   help, h                    Shows a list of commands or help for one command

GLOBAL OPTIONS:
   --debug                      enable debug output in logs
   --address value, -a value    address for containerd's GRPC server (default: "/run/containerd/containerd.sock") [$CONTAINERD_ADDRESS]
   --timeout value              total timeout for ctr commands (default: 0s)
   --connect-timeout value      timeout for connecting to containerd (default: 0s)
   --namespace value, -n value  namespace to use with commands (default: "default") [$CONTAINERD_NAMESPACE]
   --help, -h                   show help
   --version, -v                print the version

Image operations

Pull an image

An image can be pulled with ctr image pull, for example the official nginx:alpine image from Docker Hub. Note that the image reference must include the registry host, docker.io in this case.

➜  ~ ctr image pull docker.io/library/nginx:alpine
docker.io/library/nginx:alpine:                                                   resolved       |++++++++++++++++++++++++++++++++++++++|
index-sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce:    exists         |++++++++++++++++++++++++++++++++++++++|
manifest-sha256:ce6ca11a3fa7e0e6b44813901e3289212fc2f327ee8b1366176666e8fb470f24: done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:9a6ac07b84eb50935293bb185d0a8696d03247f74fd7d43ea6161dc0f293f81f:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:e82f830de071ebcda58148003698f32205b7970b01c58a197ac60d6bb79241b0:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:d7c9fa7589ae28cd3306b204d5dd9a539612593e35df70f7a1d69ff7548e74cf:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:bf2b3ee132db5b4c65432e53aca69da4e609c6cb154e0d0e14b2b02259e9c1e3:    done           |++++++++++++++++++++++++++++++++++++++|
config-sha256:7ce0143dee376bfd2937b499a46fb110bda3c629c195b84b1cf6e19be1a9e23b:   done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:3c1eaf69ff492177c34bdbf1735b6f2e5400e417f8f11b98b0da878f4ecad5fb:    done           |++++++++++++++++++++++++++++++++++++++|
layer-sha256:29291e31a76a7e560b9b7ad3cada56e8c18d50a96cca8a2573e4f4689d7aca77:    done           |++++++++++++++++++++++++++++++++++++++|
elapsed: 11.9s                                                                    total:  8.7 Mi (748.1 KiB/s)
unpacking linux/amd64 sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce...
done: 410.86624ms

You can also use the --platform option to pull the image for a specific platform. There is of course also a push command, ctr image push, and if the registry is private you can use --user to supply the repository's username and password when pushing.
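
For example (harbor.k8s.local is the private registry reused later in this article, and the user and password are placeholders):

➜  ~ ctr image pull --platform linux/arm64 docker.io/library/nginx:alpine
➜  ~ ctr image push --user myuser:mypassword harbor.k8s.local/course/nginx:alpine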

List local images
➜  ~ ctr image ls
REF                            TYPE                                                      DIGEST                                                                  SIZE    PLATFORMS                                                                                LABELS
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce 9.5 MiB linux/386,linux/amd64,linux/arm/v6,linux/arm/v7,linux/arm64/v8,linux/ppc64le,linux/s390x -
➜  ~ ctr image ls -q
docker.io/library/nginx:alpine

Use the -q (--quiet) option to print only the image names.

Check local images
➜  ~ ctr image check
REF                            TYPE                                                      DIGEST                                                                  STATUS         SIZE            UNPACKED
docker.io/library/nginx:alpine application/vnd.docker.distribution.manifest.list.v2+json sha256:bead42240255ae1485653a956ef41c9e458eb077fcb6dc664cbc3aa9701a05ce complete (7/7) 9.5 MiB/9.5 MiB true

The main thing to look at is the STATUS column; complete indicates that the image is fully available locally.

Re-tagging

Similarly, we can re-tag a given image:

➜  ~ ctr image tag docker.io/library/nginx:alpine harbor.k8s.local/course/nginx:alpine
harbor.k8s.local/course/nginx:alpine
➜  ~ ctr image ls -q
docker.io/library/nginx:alpine
harbor.k8s.local/course/nginx:alpine

Delete images

Images that are no longer needed can be deleted with ctr image rm:

➜  ~ ctr image rm harbor.k8s.local/course/nginx:alpine
harbor.k8s.local/course/nginx:alpine
➜  ~ ctr image ls -q
docker.io/library/nginx:alpine

Adding the --sync option makes the deletion also remove the image's associated resources synchronously.

Mount the image to the host directory
➜  ~ ctr image mount docker.io/library/nginx:alpine /mnt
sha256:c3554b2d61e3c1cffcaba4b4fa7651c644a3354efaafa2f22cb53542f6c600dc
/mnt
➜  ~ tree -L 1 /mnt
/mnt
├── bin
├── dev
├── docker-entrypoint.d
├── docker-entrypoint.sh
├── etc
├── home
├── lib
├── media
├── mnt
├── opt
├── proc
├── root
├── run
├── sbin
├── srv
├── sys
├── tmp
├── usr
└── var

18 directories, 1 file

Unmount the image from the host directory
➜  ~ ctr image unmount /mnt
/mnt

Export the image as an archive
➜  ~ ctr image export nginx.tar.gz docker.io/library/nginx:alpine

Import an image from an archive
➜  ~ ctr image import nginx.tar.gz

Container operations

Container-related operations can be obtained with ctr container.

Create container
➜  ~ ctr container create docker.io/library/nginx:alpine nginx

List containers
➜  ~ ctr container ls
CONTAINER    IMAGE                             RUNTIME
nginx        docker.io/library/nginx:alpine    io.containerd.runc.v2

The list can also be streamlined by adding the -q option.

➜  ~ ctr container ls -q
nginx

View detailed container configuration

Similar to the docker inspect function.

➜  ~ ctr container info nginx
{
    "ID": "nginx",
    "Labels": {
        "io.containerd.image.config.stop-signal": "SIGQUIT"
    },
    "Image": "docker.io/library/nginx:alpine",
    "Runtime": {
        "Name": "io.containerd.runc.v2",
        "Options": {
            "type_url": "containerd.runc.v1.Options"
        }
    },
    "SnapshotKey": "nginx",
    "Snapshotter": "overlayfs",
    "CreatedAt": "2021-08-12T08:23:13.792871558Z",
    "UpdatedAt": "2021-08-12T08:23:13.792871558Z",
    "Extensions": null,
    "Spec": {
......

Delete container
➜  ~ ctr container rm nginx
➜  ~ ctr container ls
CONTAINER    IMAGE    RUNTIME

In addition to the rm subcommand, you can also use delete or del to delete containers.

Task

The container we created above with the container create command is not in a running state; it is just a static container. The container object only holds the resources and configuration data needed to run: the namespaces, rootfs, and container configuration have been initialized successfully, but the user process has not yet been started.

What actually runs a container is a Task: a Task can set up network interfaces for the container and can also attach tools to monitor it, among other things.

Task-related operations live under ctr task; for example, we start the container above through a Task:

➜  ~ ctr task start -d nginx
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/

After starting the container, you can view the running tasks via task ls:

➜  ~ ctr task ls
TASK     PID     STATUS
nginx    3630    RUNNING

We can likewise enter the container using the exec subcommand:

➜  ~ ctr task exec --exec-id 0 -t nginx sh
/ #

Note, however, that you must specify the --exec-id parameter; its value can be anything you like, as long as it is unique.

A container can be paused, similar to docker pause:

➜  ~ ctr task pause nginx

After the pause, the container status becomes PAUSED.

➜  ~ ctr task ls
TASK     PID     STATUS
nginx    3630    PAUSED

The container can also be restored using the resume command.

➜  ~ ctr task resume nginx
➜  ~ ctr task ls
TASK     PID     STATUS
nginx    3630    RUNNING

Note, however, that ctr has no command to gracefully stop a container; it can only pause or kill it. To kill a container, use the task kill command:

➜  ~ ctr task kill nginx
➜  ~ ctr task ls
TASK     PID     STATUS
nginx    3630    STOPPED

After killing the container, you can see that its status has changed to STOPPED. You can then delete the Task with the task rm command.

➜  ~ ctr task rm nginx
➜  ~ ctr task ls
TASK    PID    STATUS

In addition, we can obtain cgroup-related information for the container: the task metrics command reports the container's memory, CPU, and PID limits and usage.

# restart the container first
➜  ~ ctr task metrics nginx
ID       TIMESTAMP
nginx    2021-08-12 08:50:46.952769941 +0000 UTC

METRIC                   VALUE
memory.usage_in_bytes    8855552
memory.limit_in_bytes    9223372036854771712
memory.stat.cache        0
cpuacct.usage            22467106
cpuacct.usage_percpu     [2962708 860891 1163413 1915748 1058868 2888139 6159277 5458062]
pids.current             9
pids.limit               0

You can also use the task ps command to view the PIDs of all processes in the container on the host.

➜  ~ ctr task ps nginx
PID     INFO
3984    -
4029    -
4030    -
4031    -
4032    -
4033    -
4034    -
4035    -
4036    -
➜  ~ ctr task ls
TASK     PID     STATUS
nginx    3984    RUNNING

The first of these PIDs, 3984, is PID 1 inside our container.
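
You can cross-check this from the host; the parent (PPID) of the container's PID 1 should be the containerd-shim process mentioned earlier (the PID is of course specific to this machine):

➜  ~ ps -o pid,ppid,cmd -p 3984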

Namespace

containerd also supports the concept of namespaces; for example, to list them:

➜  ~ ctr ns ls
NAME    LABELS
default

If none is specified, ctr uses the default namespace. A namespace can be created with the ns create command:

➜  ~ ctr ns create test
➜  ~ ctr ns ls
NAME    LABELS
default
test

A namespace can be deleted using remove or rm:

➜  ~ ctr ns rm test
test
➜  ~ ctr ns ls
NAME    LABELS
default

With namespaces, you can specify which namespace to use when manipulating resources; for example, to list the images in the test namespace, add the -n test option to the command:

➜  ~ ctr -n test image ls
REF TYPE DIGEST SIZE PLATFORMS LABELS

We know that Docker actually calls containerd under the hood, and the containerd namespace Docker uses is moby by default, not default, so a container started with docker can be found with ctr -n moby:

➜  ~ ctr -n moby container ls

Similarly, the containerd namespace used under Kubernetes is k8s.io by default, so we can use ctr -n k8s.io to see the containers created by Kubernetes. We will cover how to switch a Kubernetes cluster's container runtime to containerd later.
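
For example, on a node whose kubelet is already using containerd:

➜  ~ ctr -n k8s.io container ls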