Ceph Shared Filesystem CRD

Rook allows creation and customization of shared filesystems through the custom resource definitions (CRDs). The following settings are available for Ceph filesystems.

Samples

Replicated

NOTE: This sample requires at least 1 OSD per node, with the OSDs located on 3 different nodes.

The OSDs must be located on at least 3 different nodes, because both of the defined pools set the failureDomain to host and the replicated.size to 3.

The failureDomain can also be set to another location type (e.g. rack), if it has been added as a location in the Storage Selection Settings.

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    failureDomain: host
    replicated:
      size: 3
  dataPools:
    - failureDomain: host
      replicated:
        size: 3
  preservePoolsOnDelete: true
  metadataServer:
    activeCount: 1
    activeStandby: true
    # A key/value list of annotations
    annotations:
    #  key: value
    placement:
    #  nodeAffinity:
    #    requiredDuringSchedulingIgnoredDuringExecution:
    #      nodeSelectorTerms:
    #      - matchExpressions:
    #        - key: role
    #          operator: In
    #          values:
    #          - mds-node
    #  tolerations:
    #  - key: mds-node
    #    operator: Exists
    #  podAffinity:
    #  podAntiAffinity:
    #  topologySpreadConstraints:
    resources:
    #  limits:
    #    cpu: "500m"
    #    memory: "1024Mi"
    #  requests:
    #    cpu: "500m"
    #    memory: "1024Mi"

(These definitions can also be found in the filesystem.yaml file)
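As noted above, the failureDomain can be set to a broader location type such as rack, provided that location has been added to the cluster topology (see the Storage Selection Settings). The following is only a minimal sketch of that variation, assuming rack information is already present in the CRUSH map; it is otherwise identical to the sample above:

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs
  namespace: rook-ceph
spec:
  metadataPool:
    # assumes 'rack' locations exist in the cluster topology / CRUSH map
    failureDomain: rack
    replicated:
      size: 3
  dataPools:
    - failureDomain: rack
      replicated:
        size: 3
  metadataServer:
    activeCount: 1
    activeStandby: true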

Erasure Coded

Erasure coded pools require the OSDs to use bluestore for the configured storeType. Additionally, erasure coded pools can only be used with dataPools. The metadataPool must use a replicated pool.

NOTE: This sample requires at least 3 bluestore OSDs, with each OSD located on a different node.

The OSDs must be located on different nodes, because the failureDomain will be set to host by default, and the erasureCoded chunk settings require at least 3 different OSDs (2 dataChunks + 1 codingChunks).

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs-ec
  namespace: rook-ceph
spec:
  metadataPool:
    replicated:
      size: 3
  dataPools:
    - erasureCoded:
        dataChunks: 2
        codingChunks: 1
  metadataServer:
    activeCount: 1
    activeStandby: true

(These definitions can also be found in the filesystem-ec.yaml file)

Filesystem Settings

Metadata

  • name: The name of the filesystem to create, which will be reflected in the pool and other resource names.
  • namespace: The namespace of the Rook cluster where the filesystem is created.

Pools

The pools allow all of the settings defined in the Pool CRD spec. For more details, see the Pool CRD settings. In the replicated example above, there must be at least three hosts (size 3) in the cluster; in the erasure coded example, there must be at least three bluestore OSDs on different nodes (2 data + 1 coding chunks).

  • metadataPool: The settings used to create the filesystem metadata pool. Must use replication.
  • dataPools: The settings to create the filesystem data pools. If multiple pools are specified, Rook will add all of them to the filesystem (see the sketch after this list). Assigning users or files to a particular pool is left as an exercise for the reader; see the CephFS documentation. The data pools can use replication or erasure coding. If erasure coded pools are specified, the cluster must be running with bluestore enabled on the OSDs.
  • preservePoolsOnDelete: If set to ‘true’, the pools that back the filesystem will remain when the filesystem is deleted. This is a security measure to avoid accidental loss of data. It defaults to ‘false’; if not specified, it is also treated as ‘false’.
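As a sketch of the multiple data pool case mentioned above, the following spec (with the hypothetical name myfs-mixed) lists both a replicated and an erasure coded data pool; it assumes a cluster that meets the replication and bluestore requirements described earlier. Rook will create each pool and add it to the filesystem.

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: myfs-mixed
  namespace: rook-ceph
spec:
  metadataPool:
    # the metadata pool must always use replication
    replicated:
      size: 3
  dataPools:
    # first data pool: replicated across hosts
    - failureDomain: host
      replicated:
        size: 3
    # second data pool: erasure coded (requires bluestore OSDs)
    - erasureCoded:
        dataChunks: 2
        codingChunks: 1
  metadataServer:
    activeCount: 1
    activeStandby: true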

Metadata Server Settings

The metadata server settings correspond to the MDS daemon settings.

  • activeCount: The number of active MDS instances. As load increases, CephFS will automatically partition the filesystem across the MDS instances. Rook will create twice the number of MDS instances requested by the active count; the extra instances will be in standby mode for failover.
  • activeStandby: If true, the extra MDS instances will be in active standby mode and will keep a warm cache of the filesystem metadata for faster failover. The instances will be assigned by CephFS in failover pairs. If false, the extra MDS instances will all be in passive standby mode and will not maintain a warm cache of the metadata.
  • annotations: A key/value list of annotations to add.
  • placement: The mds pods can be given standard Kubernetes placement restrictions with nodeAffinity, tolerations, podAffinity, and podAntiAffinity similar to placement defined for daemons configured by the cluster CRD.
  • resources: Set resource requests/limits for the Filesystem MDS Pod(s), see Resource Requirements/Limits.
  • priorityClassName: Set the priority class name for the Filesystem MDS Pod(s). A sketch combining these metadata server settings follows this list.
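Putting these settings together, the metadataServer portion of the spec might look like the sketch below, with the placement and resource values from the commented sample filled in. The role=mds-node label, the mds-node taint key, the annotation, the resource figures, and the high-priority-mds class name are illustrative assumptions, not requirements; substitute values that exist in your cluster.

metadataServer:
  activeCount: 1
  activeStandby: true
  annotations:
    # illustrative annotation; any key/value pairs are allowed
    example.com/team: storage
  placement:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: role
                operator: In
                values:
                  - mds-node
    tolerations:
      - key: mds-node
        operator: Exists
  resources:
    requests:
      cpu: "500m"
      memory: "1024Mi"
    limits:
      cpu: "500m"
      memory: "1024Mi"
  # priorityClassName must name an existing PriorityClass in the cluster
  priorityClassName: high-priority-mds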