Network File System (NFS)

NFS allows remote hosts to mount file systems over a network and interact with those file systems as though they are mounted locally. This enables system administrators to consolidate resources onto centralized servers on the network.
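To illustrate the idea, a client on a traditional (non-Kubernetes) setup mounts a remote export with a command along these lines; the server hostname and paths here are placeholders, not part of this guide's setup:

    # Mount the remote export "share1" from a hypothetical NFS server onto a local directory
    mount -t nfs nfs-server.example.com:/share1 /mnt/share1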

Prerequisites

  1. A Kubernetes cluster is necessary to run the Rook NFS operator. To make sure you have a Kubernetes cluster that is ready for Rook, you can follow these instructions.
  2. The desired volume to export needs to be attached to the NFS server pod via a PVC. Any type of PVC can be attached and exported, such as Host Path, AWS Elastic Block Store, GCP Persistent Disk, CephFS, Ceph RBD, etc. The limitations of these volumes also apply while they are shared by NFS. You can read further about the details and limitations of these volumes in the Kubernetes docs.
  3. NFS client packages must be installed on all nodes where Kubernetes might run pods with NFS mounted. Install nfs-utils on CentOS nodes or nfs-common on Ubuntu nodes, for example with the commands shown below this list.
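For the third prerequisite, the client packages would typically be installed like this; exact commands may vary by distribution and release:

    # CentOS / RHEL nodes
    sudo yum install -y nfs-utils

    # Ubuntu / Debian nodes
    sudo apt-get install -y nfs-common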

Deploy NFS Operator

First deploy the Rook NFS operator using the following commands:

    cd cluster/examples/kubernetes/nfs
    kubectl create -f operator.yaml

You can check if the operator is up and running with:

    kubectl -n rook-nfs-system get pod
    NAME                                    READY   STATUS    RESTARTS   AGE
    rook-nfs-operator-879f5bf8b-gnwht       1/1     Running   0          29m
    rook-nfs-provisioner-65f4874c8f-kkz6b   1/1     Running   0          29m

Create and Initialize NFS Server

Now that the operator is running, we can create an NFS server instance by creating an instance of the nfsservers.nfs.rook.io resource. The various fields and options of the NFS server resource can be used to configure the server and the volumes it exports. Full details of the available configuration options can be found in the NFS CRD documentation.

This guide has two main examples that demonstrate exporting volumes with an NFS server:

  1. Default StorageClass example
  2. Rook Ceph volume example

Default StorageClass example

This first example walks through creating an NFS server instance that exports storage backed by the default StorageClass for the environment you are running in. In some environments this could be a host path, in others a cloud provider virtual disk. Either way, this example requires a default StorageClass to exist.
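Before proceeding, you can confirm that your cluster has a default StorageClass; the class annotated with "(default)" in the output is the one that will back the claim below:

    kubectl get storageclass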

Start by saving the below NFS CRD instance definition to a file called nfs.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: rook-nfs
    ---
    # A default storageclass must be present
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-default-claim
      namespace: rook-nfs
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Gi
    ---
    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      serviceAccountName: rook-nfs
      replicas: 1
      exports:
      - name: share1
        server:
          accessMode: ReadWrite
          squash: "none"
        # A Persistent Volume Claim must be created before creating NFS CRD instance.
        persistentVolumeClaim:
          claimName: nfs-default-claim
      # A key/value list of annotations
      annotations:
      #  key: value

With the nfs.yaml file saved, now create the NFS server as shown:

    kubectl create -f nfs.yaml

We can verify that a Kubernetes object has been created that represents our new NFS server and its export with the command below.

    kubectl -n rook-nfs get nfsservers.nfs.rook.io
    NAME       AGE
    rook-nfs   1m

Verify that the NFS server pod is up and running:

    kubectl -n rook-nfs get pod -l app=rook-nfs
    NAME         READY   STATUS    RESTARTS   AGE
    rook-nfs-0   1/1     Running   0          2m

If the NFS server pod is in the Running state, then we have successfully created an exported NFS share that clients can start to access over the network.
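Clients inside the cluster reach the export through a Service in the rook-nfs namespace rather than the pod IP directly. To see how the server is exposed, list the Services there (the exact name may vary by version):

    kubectl -n rook-nfs get service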

Accessing the Export

With PR https://github.com/rook/rook/pull/2758, Rook supports dynamic provisioning with NFS. This example shows how the dynamic provisioning feature can be used with NFS.

Once the NFS Operator and an NFSServer instance are deployed, a StorageClass similar to the example below has to be created to dynamically provision volumes.

    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      labels:
        app: rook-nfs
      name: rook-nfs-share1
    parameters:
      exportName: share1
      nfsServerName: rook-nfs
      nfsServerNamespace: rook-nfs
    provisioner: rook.io/nfs-provisioner
    reclaimPolicy: Delete
    volumeBindingMode: Immediate

Note: The StorageClass needs to have the following three parameters passed.

  1. exportName: It tells the provisioner which export to use for provisioning the volumes.
  2. nfsServerName: It is the name of the NFSServer instance.
  3. nfsServerNamespace: It is the namespace where the NFSServer instance is running.

Once the above StorageClass has been created, create a PV claim referencing the StorageClass, as shown in the example below.

    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: rook-nfs-pv-claim
    spec:
      storageClassName: "rook-nfs-share1"
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 1Mi
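A quick way to exercise the claim is to mount it in a throwaway pod. The pod below is only a sketch (the busybox image, pod name, and mount path are illustrative); any workload in the same namespace as the claim that references rook-nfs-pv-claim as a volume will get the dynamically provisioned NFS-backed storage:

    apiVersion: v1
    kind: Pod
    metadata:
      name: nfs-demo-pod
    spec:
      containers:
      - name: demo
        image: busybox
        # Write a file onto the NFS-backed volume, then stay alive so it can be inspected
        command: ["sh", "-c", "echo hello > /mnt/nfs/hello.txt && sleep 3600"]
        volumeMounts:
        - name: nfs-vol
          mountPath: /mnt/nfs
      volumes:
      - name: nfs-vol
        persistentVolumeClaim:
          claimName: rook-nfs-pv-claim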

Rook Ceph volume example

In this alternative example, we will use a different underlying volume as an export for the NFS server. These steps will walk us through exporting a Ceph RBD block volume so that clients can access it across the network.

First, you have to follow these instructions to deploy a sample Rook Ceph cluster that can be attached to the NFS server pod for sharing. After the Rook Ceph cluster is up and running, we can proceed with creating the NFS server.

Save this PVC and NFS CRD instance as nfs-ceph.yaml:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: rook-nfs
    ---
    # A rook ceph cluster must be running
    # Create a rook ceph cluster using examples in rook/cluster/examples/kubernetes/ceph
    # Refer to https://rook.io/docs/rook/v1.0/ceph-quickstart.html for a quick rook cluster setup
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: nfs-ceph-claim
      namespace: rook-nfs
    spec:
      storageClassName: rook-ceph-block
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 2Gi
    ---
    apiVersion: nfs.rook.io/v1alpha1
    kind: NFSServer
    metadata:
      name: rook-nfs
      namespace: rook-nfs
    spec:
      replicas: 1
      exports:
      - name: nfs-share
        server:
          accessMode: ReadWrite
          squash: "none"
        # A Persistent Volume Claim must be created before creating NFS CRD instance.
        # Create a Ceph cluster for using this example
        # Create a ceph PVC after creating the rook ceph cluster using ceph-pvc.yaml
        persistentVolumeClaim:
          claimName: nfs-ceph-claim

Create the NFS server instance that you saved in nfs-ceph.yaml:

    kubectl create -f nfs-ceph.yaml

After the NFS server pod is running, follow the same instructions from the previous example to access and consume the NFS share.

Teardown

To clean up all resources associated with this walk-through, you can run the commands below.

    kubectl delete -f web-service.yaml
    kubectl delete -f web-rc.yaml
    kubectl delete -f busybox-rc.yaml
    kubectl delete -f pvc.yaml
    kubectl delete -f pv.yaml
    kubectl delete -f nfs.yaml
    kubectl delete -f nfs-ceph.yaml
    kubectl delete -f operator.yaml

Troubleshooting

If the NFS server pod does not come up, the first step would be to examine the NFS operator’s logs:

    kubectl -n rook-nfs-system logs -l app=rook-nfs-operator
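If the operator logs look clean, the NFS server pod's own events and logs are the next place to check; the pod name here is taken from the earlier get pod output and may differ in your cluster:

    kubectl -n rook-nfs describe pod rook-nfs-0
    kubectl -n rook-nfs logs rook-nfs-0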