kubeplay is a tool for offline deployment of kubernetes clusters, based on kubespray.

Features

  • Includes all dependencies and can be installed offline with a single command
  • Supports both amd64 and arm64 CPU architectures
  • The validity of kubeadm-generated certificates is extended to 10 years
  • De-dockerized deployment, with a seamless migration to containerd as the container runtime
  • Offline installation of the rpm/deb packages the platform depends on (e.g. storage clients), for on-premises (toB) scenarios
  • Multi-cluster deployment, including deploying a kubernetes cluster as a Job Pod inside an existing kubernetes cluster
  • Offline installers are built with GitHub Actions; no registration required, 100% open source and 100% free

Component Versions

addon          version    use
kubernetes     v1.21.3    kubernetes
containerd     v1.4.6     container runtime
etcd           v3.4.13    etcd service
crictl         v1.21.0    CRI CLI tool
pause          3.3        pause container image
cni-plugins    v0.9.1     CNI plugins
calico         v3.18.4    calico CNI plugin
autoscaler     1.8.3      DNS horizontal autoscaling
coredns        v1.8.0     cluster DNS service
flannel        v0.14.0    flannel CNI plugin
nginx          1.19       per-node reverse proxy for the apiserver
canal          -          combined calico/flannel networking
helm           v3.6.3     helm CLI tool
nerdctl        0.8.0      containerd CLI tool
nerdctl-full   0.11.0     containerd tool suite
registry       v2.7.1     image registry service
skopeo         v1.4.0     image copy tool

Supported OS

distribution   version   arch
CentOS         7.9       amd64/arm64
Debian         10        amd64/arm64
Ubuntu         20.04     amd64/arm64

compose

nerdctl compose is used on the deployment node to start the nginx and registry containers, which provide offline resource downloads and image distribution, respectively.
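As an illustration only (kubeplay ships its own compose file; the file name, image tags, and port mappings below are assumptions mirroring the component versions and ports documented here), the two services could be described and started like this:

```shell
# Hypothetical compose file for the two bootstrap services; kubeplay's real
# file may differ in images, volumes, and options.
cat > docker-compose.sample.yaml <<'EOF'
services:
  nginx:                      # serves offline files and packages
    image: nginx:1.19
    ports:
      - "8080:80"
  registry:                   # serves container images
    image: registry:2.7.1
    ports:
      - "443:5000"
EOF

# Start both services if nerdctl is installed on the deploy node
if command -v nerdctl >/dev/null 2>&1; then
  nerdctl compose -f docker-compose.sample.yaml up -d
fi
```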

kubespray

The kubernetes community's kubespray performs the actual cluster deployment; the resources it relies on during deployment are fetched from the compose node.

Deployment

Download

Download the offline installer for your CPU architecture from the GitHub releases page k8sli/kubeplay/releases

sha256sum                                    # sha256 checksums of the packages below
kubeplay-v0.1.0-alpha.1-linux-amd64.tar.gz   # for the amd64 CPU architecture
kubeplay-v0.1.0-alpha.1-linux-arm64.tar.gz   # for the arm64 CPU architecture
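The sha256sum file lets you verify the download before unpacking. A demonstration with a stand-in file (for the real package, run `sha256sum -c sha256sum --ignore-missing` in the download directory; the file names below are placeholders):

```shell
# Create a stand-in package and checksum file, then verify it the same way
# the shipped "sha256sum" file would be used.
echo "demo payload" > kubeplay-demo.tar.gz
sha256sum kubeplay-demo.tar.gz > sha256sum.demo
sha256sum -c sha256sum.demo        # prints "kubeplay-demo.tar.gz: OK"
```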

Unpacking

$ tar -xpf kubeplay-v0.1.0-alpha.1-linux-amd64.tar.gz
$ cd kubeplay
$ vi config.yaml

Configuration

The config.yaml configuration file is divided into the following sections:

  • compose: deployment node information for nginx and registry
  • kubespray: kubespray deployment configuration
  • inventory: ssh login information for the kubernetes cluster nodes
  • default: default values for common parameters

compose

parameter         description                                    example
internal_ip       intranet access IP of the deployment node      192.168.10.11
nginx_http_port   port exposed by the deployed nginx service     8080
registry_domain   domain name of the deployed registry service   kube.registry.local

compose:
  # Compose bootstrap node ip, default is local internal ip
  internal_ip: 172.20.0.25
  # Nginx http server bind port for download files and packages
  nginx_http_port: 8080
  # Registry domain for CRI runtime download images
  registry_domain: kube.registry.local
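Cluster nodes must be able to resolve registry_domain to internal_ip when pulling images. Conceptually this is a hosts entry like the following sketch (values copied from the example above; written to a scratch file here rather than /etc/hosts so it is safe to run anywhere):

```shell
# Values from the compose example above; on a real node this line would
# live in /etc/hosts (or in DNS).
internal_ip=172.20.0.25
registry_domain=kube.registry.local
printf '%s %s\n' "$internal_ip" "$registry_domain" >> hosts.sample
cat hosts.sample
```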

kubespray

parameter                      description                                    example
kube_version                   kubernetes version number                      v1.21.3
external_apiserver_access_ip   external access IP for the cluster apiserver   192.168.10.100
kube_network_plugin            CNI network plugin name                        calico
container_manager              container runtime                              containerd
etcd_deployment_type           etcd deployment method                         host

kubespray:
  # Kubernetes version to deploy
  kube_version: v1.21.3
  # For an HA cluster you must configure an external apiserver access IP
  external_apiserver_access_ip: 127.0.0.1
  # Network plugin; calico with vxlan mode by default
  kube_network_plugin: calico
  # Container runtime; only containerd is supported for offline deployment
  container_manager: containerd
  # Only host is supported when containerd is the CRI runtime
  etcd_deployment_type: host
  # Settings for the etcd events cluster
  etcd_events_cluster_setup: true
  etcd_events_cluster_enabled: true

inventory

inventory is the ssh login configuration for kubernetes cluster nodes, supporting yaml, json, and ini formats.

parameter                      description                                               example
ansible_port                   node ssh port                                             22
ansible_user                   node ssh username                                         -
ansible_ssh_pass               node ssh password                                         -
ansible_ssh_private_key_file   private key for login; must be /kubespray/config/id_rsa   -
ansible_host                   node IP                                                   -
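When ansible_ssh_private_key_file is used instead of a password, a key pair has to exist first. A minimal sketch (the private key must then be placed at the /kubespray/config/id_rsa path from the table, and the public key installed on each node, e.g. with ssh-copy-id):

```shell
# Generate an unencrypted RSA key pair in the current directory; copying it
# to /kubespray/config/id_rsa and distributing id_rsa.pub is a separate step.
ssh-keygen -t rsa -b 2048 -N "" -f ./id_rsa -q
ls id_rsa id_rsa.pub
```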
  • yaml format

    # Cluster nodes inventory info
    inventory:
      all:
        vars:
          ansible_port: 22
          ansible_user: root
          ansible_ssh_pass: Password
          # ansible_ssh_private_key_file: /kubespray/config/id_rsa
        hosts:
          node1:
            ansible_host: 172.20.0.21
          node2:
            ansible_host: 172.20.0.22
          node3:
            ansible_host: 172.20.0.23
          node4:
            ansible_host: 172.20.0.24
        children:
          kube_control_plane:
            hosts:
              node1:
              node2:
              node3:
          kube_node:
            hosts:
              node1:
              node2:
              node3:
              node4:
          etcd:
            hosts:
              node1:
              node2:
              node3:
          k8s_cluster:
            children:
              kube_control_plane:
              kube_node:
          gpu:
            hosts: {}
          calico_rr:
            hosts: {}
    
  • json format

    inventory: |
      {
        "all": {
          "vars": {
            "ansible_port": 22,
            "ansible_user": "root",
            "ansible_ssh_pass": "Password"
          },
          "hosts": {
            "node1": {
              "ansible_host": "172.20.0.21"
            },
            "node2": {
              "ansible_host": "172.20.0.22"
            },
            "node3": {
              "ansible_host": "172.20.0.23"
            },
            "node4": {
              "ansible_host": "172.20.0.24"
            }
          },
          "children": {
            "kube_control_plane": {
              "hosts": {
                "node1": null,
                "node2": null,
                "node3": null
              }
            },
            "kube_node": {
              "hosts": {
                "node1": null,
                "node2": null,
                "node3": null,
                "node4": null
              }
            },
            "etcd": {
              "hosts": {
                "node1": null,
                "node2": null,
                "node3": null
              }
            },
            "k8s_cluster": {
              "children": {
                "kube_control_plane": null,
                "kube_node": null
              }
            },
            "gpu": {
              "hosts": {}
            },
            "calico_rr": {
              "hosts": {}
            }
          }
        }
      }
    
  • ini format

    inventory: |
      [all:vars]
      ansible_port=22
      ansible_user=root
      ansible_ssh_pass=Password
      # ansible_ssh_private_key_file=/kubespray/config/id_rsa

      [all]
      kube-control-01 ansible_host=172.20.0.21
      kube-control-02 ansible_host=172.20.0.23
      kube-control-03 ansible_host=172.20.0.22
      kube-node-01 ansible_host=172.20.0.24

      [bastion]
      # bastion-01 ansible_host=x.x.x.x ansible_user=some_user

      [kube_control_plane]
      kube-control-01
      kube-control-02
      kube-control-03

      [etcd]
      kube-control-01
      kube-control-02
      kube-control-03

      [kube_node]
      kube-control-01
      kube-control-02
      kube-control-03
      kube-node-01

      [calico_rr]

      [k8s_cluster:children]
      kube_control_plane
      kube_node
      calico_rr
    

default

The following defaults should be left unchanged unless you have special requirements. The ntp_server value is automatically replaced with internal_ip from the compose section; registry_ip and offline_resources_url are likewise generated from the compose parameters and need no modification.

parameter                   description                                                             example
ntp_server                  NTP server domain or IP                                                 -
registry_ip                 registry node IP                                                        -
offline_resources_url       URL for downloading offline resources                                   -
offline_resources_enabled   whether to deploy offline                                               true
generate_domain_crt         whether to generate a self-signed certificate for the registry domain   true
image_repository            repository (project) name in the registry                               library
registry_https_port         registry port with image pushes disabled                                443
registry_push_port          registry port for pushing images                                        5000
download_container          whether to pull all component images on all nodes                       false

default:
  # NTP server IP address or domain; defaults to internal_ip
  ntp_server:
    - internal_ip
  # Registry IP address; defaults to internal_ip
  registry_ip: internal_ip
  # Offline resource URL for downloading files; defaults to internal_ip:nginx_http_port
  offline_resources_url: internal_ip:nginx_http_port
  # Use nginx and registry to provide all offline resources
  offline_resources_enabled: true
  # Image repo (project) in the registry
  image_repository: library
  # Kubespray container image for deploying user clusters or scaling
  kubespray_image: "kubespray"
  # Automatically generate a self-signed certificate for the registry domain
  generate_domain_crt: true
  # Port nodes pull images from; 443 by default
  registry_https_port: 443
  # Port for pushing images to the registry; 5000 by default, bound only to 127.0.0.1
  registry_push_port: 5000
  # Set to false to skip pulling all container images on all nodes
  download_container: false
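As an illustration of what generate_domain_crt: true amounts to, a self-signed certificate for the registry domain can be produced with openssl (a sketch only; kubeplay's actual generation may use different key sizes, validity, and extensions):

```shell
# Self-signed certificate for the registry domain from the examples;
# 3650 days mirrors the project's 10-year certificate policy.
openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
  -keyout registry.key -out registry.crt -subj "/CN=kube.registry.local"
# Print the subject line, e.g. CN = kube.registry.local
openssl x509 -in registry.crt -noout -subject
```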

Deploy Cluster

$ bash install.sh

Add Node

$ bash install.sh add-node $NODE_NAMES

Delete Node

$ bash install.sh remove-node $NODE_NAME

Remove Cluster

$ bash install.sh remove-cluster

Remove all components

$ bash install.sh remove

TODO

Remaining sections will be added in follow-ups.


Reference: https://blog.k8s.li/deploy-k8s-by-kubeplay.html