I’ve been maintaining a CSI Driver for about a year and a half now, and friends have been asking me about CSI-related issues and how to develop a CSI Driver of their own. This post shows how to quickly develop your own Kubernetes CSI Driver.

CSIbuilder

In essence, a CSI Driver is just a set of interfaces that implement the logic of a third-party storage system. That short sentence hides a lot of tedious work and requires a clear understanding of how CSI works, but the good news is that the work follows a well-trodden path.

A scaffolding tool called CSIbuilder was recently developed. Similar in principle to kubebuilder, it lets users generate a CSI Driver code framework with just a few commands and then fill in their own logic.

Using it is simple. First, download the binary package.

weiwei@hdls-mbp $ curl -L -o csibuilder.tar https://github.com/zwwhdls/csibuilder/releases/download/v0.1.0/csibuilder-darwin-amd64.tar
weiwei@hdls-mbp $ tar -zxvf csibuilder.tar  && chmod +x csibuilder && mv csibuilder /usr/local/bin/

Create a new working directory for the Go project.

weiwei@hdls-mbp $ export GO111MODULE=on
weiwei@hdls-mbp $ mkdir $GOPATH/src/csi-hdls
weiwei@hdls-mbp $ cd $GOPATH/src/csi-hdls

Use csibuilder for repo initialization.

weiwei@hdls-mbp $ csibuilder init --repo hdls --owner "zwwhdls"
Init CSI Driver Project for you...
Update dependencies:
$ go mod tidy
go: warning: "all" matched no packages
Next: define a csi driver with:
$ csibuilder create api

Create a CSI Driver named hdls.

weiwei@hdls-mbp $ csibuilder create --csi hdls
Writing scaffold for you to edit...
Update dependencies:
$ go mod tidy
go: finding module for package google.golang.org/grpc/status
go: finding module for package github.com/container-storage-interface/spec/lib/go/csi
go: finding module for package google.golang.org/grpc
go: finding module for package google.golang.org/grpc/codes
go: finding module for package k8s.io/klog
go: found k8s.io/klog in k8s.io/klog v1.0.0
go: found github.com/container-storage-interface/spec/lib/go/csi in github.com/container-storage-interface/spec v1.7.0
go: found google.golang.org/grpc in google.golang.org/grpc v1.50.1
go: found google.golang.org/grpc/codes in google.golang.org/grpc v1.50.1
go: found google.golang.org/grpc/status in google.golang.org/grpc v1.50.1
Scaffolding complete. Enjoy your new project!

The default is Go 1.18, but you can also specify the Go version to use in the init phase with a parameter such as -goversion=1.19.

You can then see that the project has been initialized with the CSI Driver code files and the deployment YAML.

weiwei@hdls-mbp $ tree
.
├── Dockerfile
├── Makefile
├── PROJECT
├── deploy
│   ├── clusterrole.yaml
│   ├── clusterrolebinding.yaml
│   ├── csidriver.yaml
│   ├── daemonset.yaml
│   ├── serviceaccount.yaml
│   └── statefulset.yaml
├── go.mod
├── go.sum
├── hack
│   └── boilerplate.go.txt
├── main.go
└── pkg
    └── csi
        ├── controller.go
        ├── driver.go
        ├── identity.go
        ├── node.go
        └── version.go

4 directories, 18 files
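
Before filling in any storage logic, it helps to know how these files fit together: main.go starts a gRPC server on a unix socket and registers the CSI Identity, Controller and Node services implemented under pkg/csi. Below is a minimal, hedged sketch of that wiring, not the exact code csibuilder generates; the socket path and the Unimplemented* embeds (available in recent versions of the CSI spec package) are assumptions here.

package main

import (
    "net"
    "os"

    "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc"
    "k8s.io/klog"
)

// In the generated project these services live in pkg/csi; embedding the
// spec's Unimplemented servers here just keeps this sketch self-contained.
type identityService struct{ csi.UnimplementedIdentityServer }
type controllerService struct{ csi.UnimplementedControllerServer }
type nodeService struct{ csi.UnimplementedNodeServer }

func main() {
    endpoint := "/csi/csi.sock" // unix socket shared with kubelet and the CSI sidecars
    _ = os.Remove(endpoint)     // remove a stale socket left over from a previous run

    listener, err := net.Listen("unix", endpoint)
    if err != nil {
        klog.Fatalf("failed to listen on %s: %v", endpoint, err)
    }

    server := grpc.NewServer()
    csi.RegisterIdentityServer(server, &identityService{})
    csi.RegisterControllerServer(server, &controllerService{})
    csi.RegisterNodeServer(server, &nodeService{})

    klog.Infof("CSI driver listening on %s", endpoint)
    if err := server.Serve(listener); err != nil {
        klog.Fatalf("gRPC server exited: %v", err)
    }
}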

Implementing the CSI Node Interface

As we know, the most important part of a CSI Driver's work is the CSI Node service, that is, its NodePublishVolume and NodeUnpublishVolume interfaces.

The parameters passed in when the kubelet calls the interface can be found in the request: volumeID is the ID of the PV used by the Pod (i.e. pv.spec.csi.volumeHandle); target is the path where the volume should be mounted for the Pod; options are the mount options for this mount.
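
For reference, here is a hedged sketch of how those fields might be read from the request before mounting. The parsePublishRequest helper is hypothetical (csibuilder does not generate it), and the exact validation in your driver may differ.

package csi

import (
    "github.com/container-storage-interface/spec/lib/go/csi"
    "google.golang.org/grpc/codes"
    "google.golang.org/grpc/status"
)

// parsePublishRequest is a hypothetical helper showing how the fields
// described above can be read from the NodePublishVolume request.
func parsePublishRequest(request *csi.NodePublishVolumeRequest) (volumeID, target string, mountOptions []string, err error) {
    volumeID = request.GetVolumeId() // pv.spec.csi.volumeHandle
    target = request.GetTargetPath() // where kubelet wants the volume mounted for the Pod
    if volumeID == "" || target == "" {
        return "", "", nil, status.Error(codes.InvalidArgument, "volume id and target path are required")
    }

    mountOptions = []string{"bind"} // we bind-mount an already-mounted NFS path
    if m := request.GetVolumeCapability().GetMount(); m != nil {
        mountOptions = append(mountOptions, m.GetMountFlags()...)
    }
    if request.GetReadonly() {
        mountOptions = append(mountOptions, "ro")
    }
    return volumeID, target, mountOptions, nil
}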

All we need to do is mount our storage at the target path for the Pod. Here is an example using NFS.

func (n *nodeService) NodePublishVolume(ctx context.Context, request *csi.NodePublishVolumeRequest) (*csi.NodePublishVolumeResponse, error) {
    ...
    // The volume context carries the parameters declared in the PV
    // (spec.csi.volumeAttributes) or returned by CreateVolume.
    volCtx := request.GetVolumeContext()
    klog.Infof("NodePublishVolume: volume context: %v", volCtx)

    hostPath := volCtx["hostPath"]
    subPath := volCtx["subPath"]
    sourcePath := hostPath
    if subPath != "" {
        // Create the subdirectory under the NFS mount if it does not exist yet.
        sourcePath = path.Join(hostPath, subPath)
        exists, err := mount.PathExists(sourcePath)
        if err != nil {
            return nil, status.Errorf(codes.Internal, "Could not check volume path %q exists: %v", sourcePath, err)
        }
        if !exists {
            klog.Infof("volume path %s does not exist, creating it", sourcePath)
            if err := os.MkdirAll(sourcePath, 0755); err != nil {
                return nil, status.Errorf(codes.Internal, "Could not make directory %q: %v", sourcePath, err)
            }
        }
    }

    // Bind-mount the (sub)directory of the NFS mount to the Pod's target path.
    klog.Infof("NodePublishVolume: binding %s at %s", sourcePath, target)
    if err := n.Mount(sourcePath, target, "none", mountOptions); err != nil {
        os.Remove(target)
        return nil, status.Errorf(codes.Internal, "Could not bind %q at %q: %v", sourcePath, target, err)
    }
    return &csi.NodePublishVolumeResponse{}, nil
}

We can pass parameters for the NFS volume in the PV's spec.csi.volumeAttributes, read them back from request.GetVolumeContext(), and then bind-mount the source directory to target. To keep the logic simple, we assume the hostPath parameter is the path on the host where the NFS share is already mounted.
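
The n.Mount and n.Unmount calls come from a mounter embedded in nodeService. Here is a minimal sketch of that part, assuming the k8s.io/mount-utils package; the newNodeService constructor and the field layout are assumptions, and the generated scaffold may wire its mounter differently.

// A minimal sketch assuming k8s.io/mount-utils; not the exact generated code.
package csi

import (
    mount "k8s.io/mount-utils"
)

type nodeService struct {
    mount.Interface // provides Mount and Unmount by shelling out to the host's mount/umount
    nodeID          string
}

func newNodeService(nodeID string) *nodeService {
    return &nodeService{
        Interface: mount.New(""), // default mounter
        nodeID:    nodeID,
    }
}

With this setup, n.Mount(sourcePath, target, "none", mountOptions) performs a bind mount as long as mountOptions contains "bind", and n.Unmount(target) in NodeUnpublishVolume simply umounts the target path.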

The NodeUnpublishVolume interface is the reverse of NodePublishVolume above: we just need to umount the target path. The code is as follows.

// NodeUnpublishVolume unmount the volume from the target path
func (n *nodeService) NodeUnpublishVolume(ctx context.Context, request *csi.NodeUnpublishVolumeRequest) (*csi.NodeUnpublishVolumeResponse, error) {
    target := request.GetTargetPath()
    if len(target) == 0 {
        return nil, status.Error(codes.InvalidArgument, "Target path not provided")
    }

    // TODO modify your volume umount logic here
    ...

    klog.Infof("NodeUnpublishVolume: unmounting %s", target)
    if err := n.Unmount(target); err != nil {
        return nil, status.Errorf(codes.Internal, "Could not unmount %q: %v", target, err)
    }

    return &csi.NodeUnpublishVolumeResponse{}, nil
}

Implementing the CSI Controller Interface

The CSI Controller interface is mainly used for automatic PV creation and only comes into play when a StorageClass is used. The usual logic is to create a subdirectory in the file system and record it in the spec of the PV. In the demo in this article, the subdirectory creation is handled in the CSI Node interface, so the CSI Controller only needs to pass the subdirectory name in the PV's parameters.

The code for CreateVolume is as follows.

func (d *controllerService) CreateVolume(ctx context.Context, request *csi.CreateVolumeRequest) (*csi.CreateVolumeResponse, error) {
    if len(request.Name) == 0 {
        return nil, status.Error(codes.InvalidArgument, "Volume Name cannot be empty")
    }
    if request.VolumeCapabilities == nil {
        return nil, status.Error(codes.InvalidArgument, "Volume Capabilities cannot be empty")
    }

    requiredCap := request.GetCapacityRange().GetRequiredBytes()

    // Copy the StorageClass parameters into the volume context so that the
    // node side can read them back from request.GetVolumeContext().
    volCtx := make(map[string]string)
    for k, v := range request.Parameters {
        volCtx[k] = v
    }

    // Use the generated volume name as the subdirectory; the directory itself
    // is created lazily by NodePublishVolume.
    volCtx["subPath"] = request.Name

    volume := csi.Volume{
        VolumeId:      request.Name,
        CapacityBytes: requiredCap,
        VolumeContext: volCtx,
    }

    return &csi.CreateVolumeResponse{Volume: &volume}, nil
}

The VolumeContext of the returned value is placed into spec.csi.volumeAttributes of the automatically created PV, which is exactly what NodePublishVolume later reads back via request.GetVolumeContext().

Summary

The above is a complete demonstration of the simplest usable feature set of a CSI Driver, and with csibuilder, writing a CSI Driver becomes much easier. However, CSI offers far more functionality than this, including attach, stage, expand, snapshot, and so on. csibuilder has only just been completed and will support these features in subsequent iterations.

Ref

  • https://blog.hdls.me/16672085188369.html