We should already be familiar with the various states of PV and PVC, but questions often come up in practice: why did a PV become Failed? How can a newly created PVC bind to a previous PV? Can I restore a previous PV? Here we walk through the state changes of PV and PVC once more.

The examples below illustrate how the states of PV and PVC change in different scenarios.

Create PV

Under normal circumstances, a PV is created successfully and is Available.

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  storageClassName: manual
  capacity: 
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /data/k8s  # NFS mount point
    server: 10.151.30.1  # NFS server address

After creating the PV object above, we can see that its status is Available, meaning it is ready to be bound by a PVC.

$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Available           manual                  7s

New PVC

A newly created PVC starts in the Pending state. If a suitable PV exists, the PVC immediately changes from Pending to Bound, and the corresponding PV also changes to Bound: the PVC and PV are now bound together. To make sure we can observe the Pending state, we can create the PVC first and then create the PV.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

The new PVC resource object above will be in the Pending state when it is first created.

$ kubectl get pvc nfs-pvc
NAME      STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Pending                                      manual         7s

When a PVC finds a suitable PV to bind, it immediately becomes Bound, and the PV changes from Available to Bound.

$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            manual         2m8s
$ kubectl get pv nfs-pv  
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Bound    default/nfs-pvc   manual                  23s

Delete PV

Since the PVC and PV are now bound, what will happen if we accidentally delete the PV at this time?

$ kubectl delete pv nfs-pv
persistentvolume "nfs-pv" deleted

^C
$ kubectl get pv nfs-pv   
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS        CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Terminating   default/nfs-pvc   manual                  12m
$ kubectl get pvc nfs-pvc                          
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            manual         13m

In fact, the delete command here hangs: the PV cannot actually be deleted. Instead, the PV enters the Terminating state, while the corresponding PVC remains Bound. Because the PV and PVC are bound together, the PV cannot simply be deleted first. For now the Terminating state has no effect on the PVC, but how should we handle this situation?

We can force the PV to be deleted by editing it and removing the contents of its finalizers property.

$ kubectl edit pv nfs-pv
# remove the contents of the finalizers property
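If you prefer a non-interactive approach, the finalizers can also be cleared with kubectl patch (a sketch, assuming the same nfs-pv object as above):

```shell
# Set metadata.finalizers to null so the pending deletion can complete
kubectl patch pv nfs-pv -p '{"metadata":{"finalizers":null}}'
```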

Once the editing is complete, the PV is actually deleted and the PVC is now Lost.

$ kubectl get pv nfs-pv
Error from server (NotFound): persistentvolumes "nfs-pv" not found
$ kubectl get pvc nfs-pvc
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Lost     nfs-pv   0                         manual         23m

Re-create PV

When we see that the PVC is in the Lost state, don’t worry, this is because the previously bound PV is no longer available, but the PVC still has the binding information of the PV.

Solving this problem is also very simple: just recreate the previous PV:

# Recreate the PV
$ kubectl apply -f volume.yaml 
persistentvolume/nfs-pv created

Once the PV has been successfully created, both the PVC and PV states revert to the Bound state.

$ kubectl get pv nfs-pv   
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Bound    default/nfs-pvc   manual                  93s
# The PVC is back to the normal Bound state
$ kubectl get pvc nfs-pvc        
NAME      STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
nfs-pvc   Bound    nfs-pv   1Gi        RWO            manual         27m

Delete PVC

The above is the case of deleting the PV first, so what will be the situation if we delete the PVC first?

$ kubectl delete pvc nfs-pvc
persistentvolumeclaim "nfs-pvc" deleted
$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS     CLAIM             STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Released   default/nfs-pvc   manual                  3m36s

We can see that after the PVC is deleted, the PV becomes Released. But if we look closely at the CLAIM column, we see that it still retains the binding information of the deleted PVC.
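To confirm this, we can export the full PV object; the claimRef field under spec still records the deleted PVC:

```shell
# Dump the PV as YAML; spec.claimRef still points at the deleted nfs-pvc
kubectl get pv nfs-pv -o yaml
```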

At this point you might think that, since the PVC has been deleted and the PV has become Released, you could simply recreate the previous PVC and rebind it, but this is not the case.

This is when we need to intervene manually. In a real production environment, the administrator would first back up or migrate the data, then edit the PV and remove the claimRef reference to the old PVC. When the Kubernetes PV controller watches the PV change, it moves the PV back to the Available state, and an Available PV can of course be bound by other PVCs.

You can edit the PV directly and remove the contents of the claimRef property.

# remove the contents of the claimRef property
$ kubectl edit pv nfs-pv
persistentvolume/nfs-pv edited
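Equivalently, the claimRef can be removed without opening an editor, using a JSON patch (a sketch, assuming the same nfs-pv object):

```shell
# Remove spec.claimRef so the PV controller moves the PV back to Available
kubectl patch pv nfs-pv --type json -p '[{"op":"remove","path":"/spec/claimRef"}]'
```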

Once the deletion is complete, the PV returns to the normal Available state, and the previous PVC can be recreated and bound normally.

$ kubectl get pv nfs-pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
nfs-pv   1Gi        RWO            Retain           Available           manual                  12m

Newer versions of Kubernetes have also added various enhancements to persistent volumes, such as cloning and snapshots, which are very useful features; we'll come back to explain these later.