Kubernetes NFS Client Provisioner on ARM

In a previous post I demonstrated how to use an NFS server to persist data with Kubernetes persistent volumes.

In that example, we had to manage the NFS directories outside of Kubernetes: whenever we deployed a new application, we had to create the directories manually, and delete them again once they were no longer needed.

Dynamic Volume Provisioner

In this tutorial, we will use the NFS Client Provisioner on a Raspberry Pi k3s Kubernetes cluster to persist our volumes on NFS, with the client provisioner managing the NFS subdirectories for us.

So when we create a persistent volume claim, the provisioner will create the directory on the NFS export and mount the NFS path into the pod via its volume definition.

When the persistent volume is deleted from Kubernetes, the directory will also be deleted.
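
For each claim, the provisioner creates a subdirectory on the NFS export named after the claim's namespace, the claim name, and the generated volume name (we will see a real one later in this post). So a hypothetical claim named my-claim in the default namespace ends up in a directory of the form:

default-my-claim-pvc-<uid>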

Install an NFS Server

You can follow this post to set up an NFS server; I have used Docker to run an NFS server with the command below. Note that, per the image's documentation, the container needs to run privileged for the NFS server to work.

PS: my NFS server host is 192.168.0.240.

$ mkdir -p /data/kubernetes-volumes
$ docker run -itd --name nfs --privileged \
  -p 2049:2049 \
  -e SHARED_DIRECTORY=/data \
  -v /data/kubernetes-volumes:/data \
  itsthenetwork/nfs-server-alpine:12
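
Before wiring this into Kubernetes, it's worth checking that the export is reachable. A minimal sanity check, assuming the NFS client utilities are installed on the machine you are testing from, is to mount the export manually:

$ sudo mount -t nfs4 192.168.0.240:/ /mnt
$ sudo touch /mnt/test.txt && sudo rm /mnt/test.txt
$ sudo umount /mnt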

Deploy NFS Client Provisioner

The code can be found in the github.com/kubernetes-retired/external-storage/nfs-client repository.

The rbac.yml for the nfs client provisioner:

$ cat rbac.yml
apiVersion: v1  
kind: ServiceAccount  
metadata:  
  name: nfs-client-provisioner
  namespace: default
---
kind: ClusterRole  
apiVersion: rbac.authorization.k8s.io/v1  
metadata:  
  name: nfs-client-provisioner-runner
rules:  
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding  
apiVersion: rbac.authorization.k8s.io/v1  
metadata:  
  name: run-nfs-client-provisioner
subjects:  
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:  
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role  
apiVersion: rbac.authorization.k8s.io/v1  
metadata:  
  name: leader-locking-nfs-client-provisioner
  namespace: default
rules:  
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding  
apiVersion: rbac.authorization.k8s.io/v1  
metadata:  
  name: leader-locking-nfs-client-provisioner
  namespace: default
subjects:  
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: default
roleRef:  
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io

Go ahead and deploy the RBAC resources:

$ kubectl apply -f rbac.yml
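
You can confirm that the objects exist with:

$ kubectl get serviceaccount nfs-client-provisioner
$ kubectl get clusterrole nfs-client-provisioner-runner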

Because we are deploying this on a Raspberry Pi k3s cluster, we will be using deployment-arm.yml, which pulls the ARM build of the provisioner image:

$ cat deployment-arm.yml
apiVersion: apps/v1  
kind: Deployment  
metadata:  
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: default
spec:  
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner-arm:latest
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: nfs-storage
            - name: NFS_SERVER
              value: 192.168.0.240
            - name: NFS_PATH
              value: /
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.0.240
            path: /

Then run the deployment:

$ kubectl apply -f deployment-arm.yml

Verify that the nfs-client-provisioner pod is up and running:

$ kubectl get pods -l app=nfs-client-provisioner
NAME                                     READY   STATUS    RESTARTS   AGE  
nfs-client-provisioner-b946b6bd8-hbgvq   1/1     Running   0          16s  
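
If the pod does not reach a Running state, the provisioner's logs are the first place to look, for example to spot NFS mount errors or an image pulled for the wrong architecture:

$ kubectl logs -l app=nfs-client-provisioner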

Next up is our storage class. You will notice that the provisioner field matches the value we set for the PROVISIONER_NAME environment variable in our deployment. In class.yml:

$ cat class.yml
apiVersion: storage.k8s.io/v1  
kind: StorageClass  
metadata:  
  name: managed-nfs-storage
provisioner: nfs-storage  
parameters:  
  archiveOnDelete: "false"
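
The archiveOnDelete parameter controls what happens to the data when a claim is deleted: with "false", as above, the provisioner removes the directory along with the volume, which is the behavior this post relies on later. The provisioner also supports archiving instead, renaming the directory with an archived- prefix rather than deleting it. A sketch of such a class, using a hypothetical name managed-nfs-storage-archive:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage-archive
provisioner: nfs-storage
parameters:
  archiveOnDelete: "true"

For this post, we stick with deletion.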

Create the storage class:

$ kubectl apply -f class.yml

Then we can view our storage classes:

$ kubectl get sc
NAME                   PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE  
local-path (default)   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  4d21h  
managed-nfs-storage    nfs-storage             Delete          Immediate              false                  8s  

At this point, the NFS Client Provisioner is deployed to our Kubernetes cluster, and it will manage the NFS volumes that are provisioned against the managed-nfs-storage class.
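
Note that local-path remains the default class on k3s, so claims without an explicit storage class will not land on NFS. Optionally, if you want managed-nfs-storage to become the cluster default, you can mark it with the standard default-class annotation:

$ kubectl patch storageclass managed-nfs-storage \
  -p '{"metadata": {"annotations": {"storageclass.kubernetes.io/is-default-class": "true"}}}'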

Now let's create a persistent volume claim in pvc.yml, remembering to reference the managed-nfs-storage class:

$ cat pvc.yml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: nfs-claim-one
spec:
  storageClassName: managed-nfs-storage
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi

Create the claim:

$ kubectl apply -f pvc.yml

Verify that the claim is present:

$ kubectl get pvc
NAME               STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS          AGE  
nfs-claim-one      Bound    pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da   1Mi        RWX            managed-nfs-storage   7s  
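
Behind the scenes, the provisioner also created a PersistentVolume and bound it to the claim; you can inspect it using the volume name from the output above:

$ kubectl get pv pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da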

If we head over to our NFS server and list the directories on the export, we can see that the directory was created by the NFS client provisioner:

$ ls /data/kubernetes-volumes/ | grep pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da
default-nfs-claim-one-pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da  

From the mentioned repository, we will use the example pod to consume the persistent volume claim that we created, mounting the NFS volume at /nfsdata inside the container.

In our pod.yml:

$ cat pod.yml
kind: Pod  
apiVersion: v1  
metadata:  
  name: test-pod
spec:  
  containers:
  - name: test-pod
    image: pistacks/alpine
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "echo $HOSTNAME > /nfsdata/ok.txt && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/nfsdata"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: nfs-claim-one

Create the pod:

$ kubectl apply -f pod.yml

When we view the pods, we should see that the pod is in a Completed state due to restartPolicy: Never. That is what we want: it means the pod successfully mounted the volume and was able to write to the NFS path:

$ kubectl get pods
NAME                                       READY   STATUS      RESTARTS   AGE  
test-pod                                   0/1     Completed   0          58s  

When we head over to the NFS server, we will see the content that was written:

$ cat /data/kubernetes-volumes/default-nfs-claim-one-pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da/ok.txt
test-pod  

When we delete the pod and then the persistent volume claim:

$ kubectl delete -f pod.yml
$ kubectl delete -f pvc.yml

We will see that the NFS directory has been deleted:

$ cat /data/kubernetes-volumes/default-nfs-claim-one-pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da/ok.txt
cat: /data/kubernetes-volumes/default-nfs-claim-one-pvc-8c8c8cd7-af82-48e5-9e88-5ae5157321da/ok.txt: No such file or directory