Ceph


This guide describes how to configure Alluxio with Ceph as the under storage system. Alluxio supports two different client APIs to connect to Ceph Object Storage using Rados Gateway: the S3 API and the Swift API.

Prerequisites

The Alluxio binaries must be on your machine. You can either compile Alluxio or download the binaries locally.

Basic Setup

A Ceph bucket can be mounted to Alluxio either at the root of the namespace, or at a nested directory.

Root Mount Point

Configure Alluxio to use under storage systems by modifying conf/alluxio-site.properties. If it does not exist, create the configuration file from the template.

  $ cp conf/alluxio-site.properties.template conf/alluxio-site.properties

Option 1: S3 Interface (preferred)

Modify conf/alluxio-site.properties to include:

  alluxio.master.mount.table.root.ufs=s3://<bucket>/<folder>
  alluxio.master.mount.table.root.option.aws.accessKeyId=<access-key>
  alluxio.master.mount.table.root.option.aws.secretKey=<secret-key>
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.endpoint=http://<rgw-hostname>:<rgw-port>
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.disable.dns.buckets=true
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.inherit.acl=<inherit-acl>

If you are using a Ceph release such as hammer (or older), specify alluxio.underfs.s3.signer.algorithm=S3SignerType to use v2 S3 signatures. To use GET Bucket (List Objects) Version 1, specify alluxio.underfs.s3.list.objects.v1=true.
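As an illustration, a complete root mount configuration against an older RGW could look like the following sketch in conf/alluxio-site.properties; the bucket, credentials, hostname, and port are hypothetical placeholders:

  alluxio.master.mount.table.root.ufs=s3://ceph-bucket/alluxio
  alluxio.master.mount.table.root.option.aws.accessKeyId=EXAMPLEACCESSKEY
  alluxio.master.mount.table.root.option.aws.secretKey=EXAMPLESECRETKEY
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.endpoint=http://rgw.example.com:7480
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.disable.dns.buckets=true
  # Only needed for hammer or older releases: v2 signatures and List Objects Version 1
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.signer.algorithm=S3SignerType
  alluxio.master.mount.table.root.option.alluxio.underfs.s3.list.objects.v1=true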

Option 2: Swift Interface

Modify conf/alluxio-site.properties to include:

  alluxio.master.mount.table.root.ufs=swift://<bucket>/<folder>
  alluxio.master.mount.table.root.option.fs.swift.user=<swift-user>
  alluxio.master.mount.table.root.option.fs.swift.tenant=<swift-tenant>
  alluxio.master.mount.table.root.option.fs.swift.password=<swift-user-password>
  alluxio.master.mount.table.root.option.fs.swift.auth.url=<swift-auth-url>
  alluxio.master.mount.table.root.option.fs.swift.use.public.url=<swift-use-public>
  alluxio.master.mount.table.root.option.fs.swift.auth.method=<swift-auth-method>

Replace <bucket>/<folder> with an existing Swift container location. Possible values of <swift-use-public> are true and false. If using native Ceph RGW authentication, specify <swift-auth-method> as swiftauth and <swift-auth-url> as http://<rgw-hostname>:<rgw-port>/auth/1.0.
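For example, a Swift-based root mount using native RGW authentication could look like the following sketch; the container name, credentials, and endpoint are hypothetical placeholders:

  alluxio.master.mount.table.root.ufs=swift://ceph-container/alluxio
  alluxio.master.mount.table.root.option.fs.swift.user=rgw-user
  alluxio.master.mount.table.root.option.fs.swift.tenant=rgw-tenant
  alluxio.master.mount.table.root.option.fs.swift.password=rgw-password
  alluxio.master.mount.table.root.option.fs.swift.auth.url=http://rgw.example.com:7480/auth/1.0
  alluxio.master.mount.table.root.option.fs.swift.use.public.url=false
  alluxio.master.mount.table.root.option.fs.swift.auth.method=swiftauth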

Nested Mount Point

A Ceph location can be mounted at a nested directory in the Alluxio namespace to provide unified access to multiple under storage systems. Alluxio's Command Line Interface can be used for this purpose.

Issue the following command to use the S3 interface:

  $ ./bin/alluxio fs mount \
    --option aws.accessKeyId=<CEPH_ACCESS_KEY_ID> --option aws.secretKey=<CEPH_SECRET_ACCESS_KEY> \
    --option alluxio.underfs.s3.endpoint=<HTTP_ENDPOINT> --option alluxio.underfs.s3.disable.dns.buckets=true \
    --option alluxio.underfs.s3.inherit.acl=false /mnt/ceph s3://<BUCKET>/<FOLDER>
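After mounting, you can list the mount point to confirm that the bucket is reachable; /mnt/ceph here matches the path used above:

  $ ./bin/alluxio fs ls /mnt/ceph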

Similarly, to use the Swift interface:

  $ ./bin/alluxio fs mount \
    --option fs.swift.user=<SWIFT_USER> \
    --option fs.swift.tenant=<SWIFT_TENANT> \
    --option fs.swift.password=<SWIFT_PASSWORD> --option fs.swift.auth.url=<AUTH_URL> \
    --option fs.swift.auth.method=<AUTH_METHOD> \
    /mnt/ceph swift://<BUCKET>/<FOLDER>
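A nested mount can later be removed with the unmount command; for example, to detach the mount point created above:

  $ ./bin/alluxio fs unmount /mnt/ceph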

Running Alluxio Locally with Ceph

Start up Alluxio locally to see that everything works.

  $ ./bin/alluxio format
  $ ./bin/alluxio-start.sh local

This should start an Alluxio master and an Alluxio worker. You can see the master UI at http://localhost:19999.
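As a quick sanity check, assuming curl is available on the machine, you can confirm that the web UI is responding (a 200 status code indicates the master is up):

  $ curl -s -o /dev/null -w "%{http_code}\n" http://localhost:19999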

Run a simple example program:

  $ ./bin/alluxio runTests

Visit your bucket to verify that the files and directories created by Alluxio exist.

You should see files named like:

  <bucket>/<folder>/default_tests_files/Basic_CACHE_THROUGH
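One way to check the bucket directly is with the AWS CLI, assuming it is installed, configured with the same credentials used above, and set up for path-style addressing against your RGW endpoint; the endpoint below is a placeholder:

  $ aws s3 ls s3://<bucket>/<folder>/default_tests_files/ --endpoint-url http://<rgw-hostname>:<rgw-port>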

To stop Alluxio, run:

  $ ./bin/alluxio-stop.sh local

Advanced Setup

Access Control

If Alluxio security is enabled, Alluxio enforces the access control inherited from the underlying Ceph Object Storage. Depending on the interface used, refer to S3 Access Control or Swift Access Control for more information.