Ceph NFS Gateway CRD

Overview

Rook allows exporting NFS shares of the filesystem or object store through the CephNFS custom resource definition. This will spin up a cluster of NFS Ganesha servers that coordinate with one another via shared RADOS objects. The servers will be configured for NFSv4.1+ access, as serving earlier protocols can inhibit responsiveness after a server restart.
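Because the gateways serve only NFSv4.1 and later, clients must request a v4.1 (or newer) mount. A minimal sketch of a client mount, with a placeholder server address and export path:

    # Mount an export from one of the Ganesha servers
    # (<ganesha-server-ip> and /<export-pseudo-path> are placeholders, not values created by Rook)
    mount -t nfs -o nfsvers=4.1 <ganesha-server-ip>:/<export-pseudo-path> /mnt/nfs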

Samples

This configuration adds a cluster of NFS Ganesha gateways that store their recovery and configuration objects in the pool myfs-data0 under the namespace nfs-ns:

    apiVersion: ceph.rook.io/v1
    kind: CephNFS
    metadata:
      name: my-nfs
      namespace: rook-ceph
    spec:
      rados:
        # RADOS pool where NFS client recovery data is stored.
        # In this example the data pool for the "myfs" filesystem is used.
        # If using the object store example, the data pool would be "my-store.rgw.buckets.data".
        pool: myfs-data0
        # RADOS namespace where NFS client recovery data is stored in the pool.
        namespace: nfs-ns
      # Settings for the NFS server
      server:
        # the number of active NFS servers
        active: 2
        # A key/value list of annotations
        annotations:
        #  key: value
        # where to run the NFS server
        placement:
        #  nodeAffinity:
        #    requiredDuringSchedulingIgnoredDuringExecution:
        #      nodeSelectorTerms:
        #      - matchExpressions:
        #        - key: role
        #          operator: In
        #          values:
        #          - mds-node
        #  tolerations:
        #  - key: mds-node
        #    operator: Exists
        #  podAffinity:
        #  podAntiAffinity:
        # The requests and limits set here allow the ganesha pod(s) to use half of one CPU core and 1 gigabyte of memory
        resources:
        #  limits:
        #    cpu: "500m"
        #    memory: "1024Mi"
        #  requests:
        #    cpu: "500m"
        #    memory: "1024Mi"
        # the priority class to set to influence the scheduler's pod preemption
        priorityClassName:
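
Once the manifest is saved to a file (the file name below is illustrative), it can be applied and the resulting Ganesha pods inspected. The label selector shown assumes the operator labels the server pods with app=rook-ceph-nfs:

    # Create the CephNFS resource (file name is illustrative)
    kubectl create -f nfs.yaml

    # Watch the NFS Ganesha server pods come up
    # (the app=rook-ceph-nfs label is an assumption about the operator's labeling)
    kubectl -n rook-ceph get pod -l app=rook-ceph-nfs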

NFS Settings

RADOS Settings

  • pool: The pool where the Ganesha recovery backend and supplemental configuration objects will be stored
  • namespace: The namespace within pool where the Ganesha recovery backend and supplemental configuration objects will be stored
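
Once the servers are running, the recovery and configuration objects can be listed with the rados CLI; a sketch, using the pool and namespace names from the sample above:

    # List the Ganesha recovery/configuration objects in the configured namespace
    rados -p myfs-data0 -N nfs-ns ls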

EXPORT Block Configuration

Each daemon will have a stock configuration with no exports defined, and that includes a RADOS object via:

    %url rados://<pool>/<namespace>/conf-<nodeid>

The pool and namespace are configured via the spec’s RADOS block. The nodeid is a value automatically assigned internally by rook. Nodeids start with “a” and go through “z”, at which point they become two letters (“aa” to “az”).

When a server is started, it will create the included object if it does not already exist. It is possible to prepopulate the included objects prior to starting the server. The format for these objects is documented in the NFS Ganesha project.
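
As an illustrative sketch of prepopulating such an object, the following EXPORT block follows the general NFS Ganesha export syntax for a CephFS-backed share; the Export_ID, paths, and FSAL settings are assumptions chosen for the example, not values prescribed by Rook:

    # export.conf -- illustrative NFS Ganesha EXPORT block
    # (Export_ID, Path, Pseudo, and FSAL settings are example values)
    EXPORT {
        Export_ID = 100;
        Path = "/";
        Pseudo = "/cephfs";
        Protocols = 4;
        Access_Type = RW;
        Squash = None;
        FSAL {
            Name = CEPH;
        }
    }

The object for the first server ("conf-a") could then be written before that server starts, using the pool and namespace from the sample spec:

    rados -p myfs-data0 -N nfs-ns put conf-a export.conf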

Scaling the active server count

It is possible to scale the size of the cluster up or down by modifying the spec.server.active field. Scaling the cluster size up can be done at will. Once the new server comes up, clients can be assigned to it immediately.

Scaling down the cluster requires that clients be migrated from the servers that will be eliminated to the remaining servers. That migration is currently a manual process and should be performed before reducing the size of the cluster. The CRD always eliminates the highest-index servers first, in reverse order from how they were started.
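
The active count can be changed by editing and re-applying the manifest, or by patching the resource in place. A sketch of the latter, assuming the sample resource name my-nfs and that the CRD is addressable as cephnfs:

    # Scale the Ganesha cluster from 2 to 3 active servers
    kubectl -n rook-ceph patch cephnfs my-nfs --type merge -p '{"spec":{"server":{"active":3}}}'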