NFS

New in version Jewel.

Ceph Object Gateway namespaces can now be exported over file-based access protocols such as NFSv3 and NFSv4, alongside traditional HTTP access protocols (S3 and Swift).

In particular, the Ceph Object Gateway can now be configured to provide file-based access when embedded in the NFS-Ganesha NFS server.

librgw

The librgw.so shared library (Unix) provides a loadable interface to Ceph Object Gateway services, and instantiates a full Ceph Object Gateway instance on initialization.

In turn, librgw.so exports rgw_file, a stateful API for file-oriented access to RGW buckets and objects. The API is general, but its design is strongly influenced by the File System Abstraction Layer (FSAL) API of NFS-Ganesha, for which it has been primarily designed.

A set of Python bindings is also provided.

Namespace Conventions

The implementation conforms to Amazon Web Services (AWS) hierarchical namespace conventions, which map UNIX-style path names onto S3 buckets and objects.

The top level of the attached namespace consists of S3 buckets, represented as NFS directories. Files and directories subordinate to buckets are each represented as objects, following S3 prefix and delimiter conventions, with ‘/’ being the only supported path delimiter.

For example, if an NFS client has mounted an RGW namespace at “/nfs”, then a file “/nfs/mybucket/www/index.html” in the NFS namespace corresponds to an RGW object “www/index.html” in a bucket/container “mybucket.”
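The mapping can be sketched as a small helper. This is purely illustrative: the function name and mount point below are hypothetical, and the real translation is performed internally by librgw's rgw_file layer.

```python
def nfs_path_to_rgw(path, mount_point="/nfs"):
    """Map an NFS path under mount_point to a (bucket, object-key) pair."""
    if not path.startswith(mount_point + "/"):
        raise ValueError("path is outside the RGW mount")
    rest = path[len(mount_point) + 1:]    # e.g., "mybucket/www/index.html"
    bucket, _, key = rest.partition("/")  # the first path component is the bucket
    return bucket, key

print(nfs_path_to_rgw("/nfs/mybucket/www/index.html"))
# -> ('mybucket', 'www/index.html')
```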

Although it is generally invisible to clients, the NFS namespace is assembled through concatenation of the corresponding paths implied by the objects in the namespace. Leaf objects, whether files or directories, will always be materialized in an RGW object of the corresponding key name, “&lt;name&gt;” if a file, “&lt;name&gt;/” if a directory. Non-leaf directories (e.g., “www” above) might only be implied by their appearance in the names of one or more leaf objects. Directories created within NFS or directly operated on by an NFS client (e.g., via an attribute-setting operation such as chown or chmod) always have a leaf object representation used to store materialized attributes such as Unix ownership and permissions.
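The leaf vs. non-leaf distinction can be illustrated with a short sketch (hypothetical code, not part of RGW): given a set of object keys, keys ending in ‘/’ are materialized directory objects, while intermediate path components are implied directories only.

```python
def implied_directories(keys):
    """Collect the directory paths visible in the NFS namespace for a set
    of RGW object keys: materialized directories (keys ending in '/') plus
    directories implied by the prefixes of leaf-object keys."""
    dirs = set()
    for key in keys:
        parts = key.rstrip("/").split("/")
        # every proper prefix of the key is an implied directory
        for i in range(1, len(parts)):
            dirs.add("/".join(parts[:i]) + "/")
        if key.endswith("/"):
            dirs.add(key)  # materialized directory object
    return dirs

print(sorted(implied_directories(["www/index.html", "logs/"])))
# -> ['logs/', 'www/']
```

Here “www/” never exists as an RGW object; it is visible only because “www/index.html” implies it, whereas “logs/” is a materialized directory object.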

Supported Operations

The RGW NFS interface supports most operations on files and directories, with the following restrictions:

  • Links, including symlinks, are not supported

  • NFS ACLs are not supported

    • Unix user and group ownership and permissions are supported

  • Directories may not be moved/renamed

    • Files may be moved between directories
  • Only full, sequential write i/o is supported

    • i.e., write operations are constrained to be uploads

    • many typical i/o operations such as editing files in place will necessarily fail as they perform non-sequential stores

    • some file utilities apparently writing sequentially (e.g., some versions of GNU tar) may fail due to infrequent non-sequential stores

    • When mounting via NFS, sequential application i/o can generally be constrained to be written sequentially to the NFS server via a synchronous mount option (e.g. -osync in Linux)

    • NFS clients which cannot mount synchronously (e.g., MS Windows) will not be able to upload files
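The sequential-write constraint described above can be modeled with a toy sketch (the class and behavior below are illustrative assumptions, not part of librgw): each write must begin exactly where the previous one ended, so any out-of-order store fails.

```python
class SequentialUpload:
    """Toy model of the RGW NFS write constraint: uploads begin at offset 0
    and each subsequent write must start where the previous one ended."""

    def __init__(self):
        self.next_offset = 0
        self.data = bytearray()

    def write(self, offset, buf):
        if offset != self.next_offset:
            # Non-sequential store: a real client would see an I/O error.
            raise IOError("non-sequential write at offset %d" % offset)
        self.data += buf
        self.next_offset += len(buf)

u = SequentialUpload()
u.write(0, b"hello ")
u.write(6, b"world")       # sequential: accepted
try:
    u.write(0, b"HELLO")   # in-place rewrite: rejected
except IOError as e:
    print("rejected:", e)
```

This is why in-place editors fail against RGW NFS, and why a synchronous mount (-osync) helps: it forces the client to issue writes in order rather than reordering cached pages.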

Security

The RGW NFS interface provides a hybrid security model with the following characteristics:

  • NFS protocol security is provided by the NFS-Ganesha server, as negotiated by the NFS server and clients

    • e.g., clients can be trusted (AUTH_SYS), or required to present Kerberos user credentials (RPCSEC_GSS)

    • RPCSEC_GSS wire security can be integrity only (krb5i) or integrity and privacy (encryption, krb5p)

    • various NFS-specific security and permission rules are available

      • e.g., root-squashing

  • A set of RGW/S3 security credentials (unknown to NFS) is associated with each RGW NFS mount (i.e., NFS-Ganesha EXPORT)

    • all RGW object operations performed via the NFS server will be performed by the RGW user associated with the credentials stored in the export being accessed (currently only RGW and RGW LDAP credentials are supported)

      • additional RGW authentication types such as Keystone are not currently supported

Configuring an NFS-Ganesha Instance

Each NFS RGW instance is an NFS-Ganesha server instance embedding a full Ceph RGW instance.

Therefore, the RGW NFS configuration includes Ceph and Ceph Object Gateway-specific configuration in a local ceph.conf, as well as NFS-Ganesha-specific configuration in the NFS-Ganesha config file, ganesha.conf.

ceph.conf

Required ceph.conf configuration for RGW NFS includes:

  • valid [client.radosgw.{instance-name}] section

  • valid values for minimal instance configuration, in particular, an installed and correct keyring

Other config variables are optional. Front-end-specific and front-end selection variables (e.g., rgw data and rgw frontends) are optional and in some cases ignored.

A small number of config variables (e.g., rgw_nfs_namespace_expire_secs) are unique to RGW NFS.
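Putting these together, a minimal section might look like the following sketch (the instance name nfs-gw1 and the keyring path are hypothetical examples, not required values):

```ini
[client.radosgw.nfs-gw1]
        keyring = /etc/ceph/ceph.client.radosgw.nfs-gw1.keyring
        # unique to RGW NFS: how long before objects created outside NFS
        # appear in the NFS namespace (default 300 seconds)
        rgw_nfs_namespace_expire_secs = 120
```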

ganesha.conf

A strictly minimal ganesha.conf for use with RGW NFS includes one EXPORT block with an embedded FSAL block of type RGW:

  EXPORT
  {
       Export_ID={numeric-id};
       Path = "/";
       Pseudo = "/";
       Access_Type = RW;
       SecType = "sys";
       NFS_Protocols = 4;
       Transport_Protocols = TCP;

       # optional, permit unsquashed access by client "root" user
       #Squash = No_Root_Squash;

       FSAL {
           Name = RGW;
           User_Id = {s3-user-id};
           Access_Key_Id = "{s3-access-key}";
           Secret_Access_Key = "{s3-secret}";
       }
  }

Export_ID must have an integer value, e.g., “77”

Path (for RGW) should be “/”

Pseudo defines an NFSv4 pseudo root name (NFSv4 only)

SecType = sys; allows clients to attach without Kerberos authentication

Squash = No_Root_Squash; enables the client root user to override permissions (Unix convention). When root-squashing is enabled, operations attempted by the root user are performed as if by the local “nobody” (and “nogroup”) user on the NFS-Ganesha server

The RGW FSAL additionally supports RGW-specific configuration variables in the RGW config section:

  RGW {
      cluster = "{cluster name, default 'ceph'}";
      name = "client.rgw.{instance-name}";
      ceph_conf = "/opt/ceph-rgw/etc/ceph/ceph.conf";
      init_args = "-d --debug-rgw=16";
  }

cluster sets a Ceph cluster name (must match the cluster being exported)

name sets an RGW instance name (must match the cluster being exported)

ceph_conf gives a path to a non-default ceph.conf file to use

Other useful NFS-Ganesha configuration:

Any EXPORT block which should support NFSv3 should include version 3 in the NFS_Protocols setting. Additionally, NFSv3 is the last major version to support the UDP transport. To enable UDP, include it in the Transport_Protocols setting. For example:

  EXPORT {
      ...
      NFS_Protocols = 3,4;
      Transport_Protocols = UDP,TCP;
      ...
  }

One important family of options pertains to interaction with the Linux idmapping service, which is used to normalize user and group names across systems. Details of idmapper integration are not provided here.

With Linux NFS clients, NFS-Ganesha can be configured to accept client-supplied numeric user and group identifiers with NFSv4, which it stringifies by default. This may be useful in small setups and for experimentation:

  NFSV4 {
      Allow_Numeric_Owners = true;
      Only_Numeric_Owners = true;
  }

Troubleshooting

NFS-Ganesha configuration problems are usually debugged by running the server with debugging options, controlled by the LOG config section.

NFS-Ganesha log messages are grouped into various components, and logging can be enabled separately for each component. Valid log levels for a component include:

  FATAL       critical errors only
  WARN        unusual condition
  DEBUG       mildly verbose trace output
  FULL_DEBUG  verbose trace output

Example:

  LOG {

      Components {
          MEMLEAKS = FATAL;
          FSAL = FATAL;
          NFSPROTO = FATAL;
          NFS_V4 = FATAL;
          EXPORT = FATAL;
          FILEHANDLE = FATAL;
          DISPATCH = FATAL;
          CACHE_INODE = FATAL;
          CACHE_INODE_LRU = FATAL;
          HASHTABLE = FATAL;
          HASHTABLE_CACHE = FATAL;
          DUPREQ = FATAL;
          INIT = DEBUG;
          MAIN = DEBUG;
          IDMAPPER = FATAL;
          NFS_READDIR = FATAL;
          NFS_V4_LOCK = FATAL;
          CONFIG = FATAL;
          CLIENTID = FATAL;
          SESSIONS = FATAL;
          PNFS = FATAL;
          RW_LOCK = FATAL;
          NLM = FATAL;
          RPC = FATAL;
          NFS_CB = FATAL;
          THREAD = FATAL;
          NFS_V4_ACL = FATAL;
          STATE = FATAL;
          FSAL_UP = FATAL;
          DBUS = FATAL;
      }
      # optional: redirect log output
      # Facility {
      #     name = FILE;
      #     destination = "/tmp/ganesha-rgw.log";
      #     enable = active;
      # }
  }

Running Multiple NFS Gateways

Each NFS-Ganesha instance acts as a full gateway endpoint, with the limitation that currently an NFS-Ganesha instance cannot be configured to export HTTP services. As with ordinary gateway instances, any number of NFS-Ganesha instances can be started, exporting the same or different resources from the cluster. This enables the clustering of NFS-Ganesha instances. However, this does not imply high availability.

When regular gateway instances and NFS-Ganesha instances overlap the same data resources, they will be accessible from both the standard S3 API and through the NFS-Ganesha instance as exported. You can co-locate the NFS-Ganesha instance with a Ceph Object Gateway instance on the same host.

RGW vs RGW NFS

Exporting an NFS namespace and other RGW namespaces (e.g., S3 or Swift via the Civetweb HTTP front-end) from the same program instance is currently not supported.

When adding objects and buckets outside of NFS, those objects will appear in the NFS namespace within the time set by rgw_nfs_namespace_expire_secs, which defaults to 300 seconds (5 minutes). Override the default value for rgw_nfs_namespace_expire_secs in the Ceph configuration file to change the refresh rate.

If exporting Swift containers that do not conform to valid S3 bucket naming requirements, set rgw_relaxed_s3_bucket_names to true in the [client.radosgw] section of the Ceph configuration file. For example, if a Swift container name contains underscores, it is not a valid S3 bucket name and will be rejected unless rgw_relaxed_s3_bucket_names is set to true.
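As a sketch, the override is a one-line ceph.conf addition (exact section placement may vary by deployment):

```ini
[client.radosgw]
        rgw_relaxed_s3_bucket_names = true
```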

Configuring NFSv4 clients

To access the namespace, mount the configured NFS-Ganesha export(s) into desired locations in the local POSIX namespace. As noted, this implementation has a few unique restrictions:

  • NFS 4.1 and higher protocol flavors are preferred

    • NFSv4 OPEN and CLOSE operations are used to track upload transactions
  • To upload data successfully, clients must preserve write ordering

    • on Linux and many Unix NFS clients, use the -osync mount option

Conventions for mounting NFS resources are platform-specific. The following conventions work on Linux and some Unix platforms:

From the command line:

  mount -t nfs -o nfsvers=4.1,noauto,soft,sync,proto=tcp &lt;ganesha-host-name&gt;:/ &lt;mount-point&gt;

In /etc/fstab:

  &lt;ganesha-host-name&gt;:/ &lt;mount-point&gt; nfs noauto,soft,nfsvers=4.1,sync,proto=tcp 0 0

Specify the NFS-Ganesha host name and the path to the mount point onthe client.

Configuring NFSv3 Clients

Linux clients can be configured to mount with NFSv3 by supplying nfsvers=3 and noacl as mount options. To use UDP as the transport, add proto=udp to the mount options. However, TCP is the preferred transport:

  &lt;ganesha-host-name&gt;:/ &lt;mount-point&gt; nfs noauto,noacl,soft,nfsvers=3,sync,proto=tcp 0 0

Configure the NFS-Ganesha EXPORT block Protocols setting with version 3 and the Transports setting with UDP if the mount will use version 3 with UDP.

NFSv3 Semantics

Since NFSv3 does not communicate client OPEN and CLOSE operations to file servers, RGW NFS cannot use these operations to mark the beginning and ending of file upload transactions. Instead, RGW NFS starts a new upload when the first write is sent to a file at offset 0, and finalizes the upload when no new writes to the file have been seen for a period of time, by default, 10 seconds. To change this timeout, set an alternate value for rgw_nfs_write_completion_interval_s in the RGW section(s) of the Ceph configuration file.
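The idle-timeout behavior can be modeled with a toy state machine (the class and method names are hypothetical, chosen only to illustrate the timing rule above):

```python
class Nfs3Upload:
    """Toy model of NFSv3 upload finalization in RGW NFS: a write at
    offset 0 opens an upload; the upload is finalized once `interval`
    seconds pass with no further writes (cf. the
    rgw_nfs_write_completion_interval_s setting, default 10)."""

    def __init__(self, interval=10):
        self.interval = interval
        self.open = False
        self.last_write = None

    def on_write(self, offset, now):
        if offset == 0:
            self.open = True      # first write at offset 0 starts the upload
        self.last_write = now

    def check(self, now):
        """Finalize if the idle timeout elapsed; return True when finalized."""
        if self.open and now - self.last_write >= self.interval:
            self.open = False
            return True
        return False

u = Nfs3Upload()
u.on_write(0, now=0.0)
u.on_write(4096, now=2.0)
print(u.check(now=5.0))    # -> False (only 3s since the last write)
print(u.check(now=13.0))   # -> True  (>= 10s idle, upload finalized)
```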
