Deploy Alluxio on Docker

Docker can be used to simplify the deployment and management of Alluxio servers. Using the alluxio/alluxio Docker image available on Docker Hub, you can go from zero to a running Alluxio cluster with a couple of docker run commands. This document provides a tutorial for running Dockerized Alluxio on a single node with local disk as the under storage. We’ll also discuss more advanced topics and how to troubleshoot.

Prerequisites

  • A machine with Docker installed.
  • Ports 19998, 19999, 29998, 29999, and 30000 available

If you don’t have access to a machine with Docker installed, you can provision a small AWS EC2 instance (e.g. t2.small) to follow along with the tutorial. When provisioning the instance, set the security group so that the following ports are open to your IP address and the CIDR range of the Alluxio clients (e.g. remote Spark clusters):

  • 19998 for the CIDR range of your Alluxio servers and clients: Allow the clients and workers to communicate with Alluxio Master RPC processes.
  • 19999 for the IP address of your browser: Allow you to access the Alluxio master web UI.
  • 29999 for the CIDR range of your Alluxio workers and clients: Allow the clients to communicate with Alluxio Worker RPC processes.
  • 30000 for the IP address of your browser: Allow you to access the Alluxio worker web UI.
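
If you script the instance setup, the same rules can be added with the AWS CLI. This is a minimal sketch, assuming an existing security group sg-0123456789abcdef0 and placeholder CIDR/IP values:

  # Master and worker RPC ports, open to the client/server CIDR range
  $ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 19998 --cidr 203.0.113.0/24
  $ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 29999 --cidr 203.0.113.0/24
  # Web UI ports, open only to your own IP address
  $ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 19999 --cidr 198.51.100.7/32
  $ aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 \
      --protocol tcp --port 30000 --cidr 198.51.100.7/32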

To set up Docker after provisioning the instance (referred to below as the Docker host), run:

  $ sudo yum install -y docker
  $ sudo service docker start
  # Add the current user to the docker group
  $ sudo usermod -a -G docker $(id -u -n)
  # Log out and log back in again to pick up the group changes
  $ exit
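
To confirm that your user can talk to the Docker daemon after logging back in, you can run a throwaway container (hello-world is Docker's standard smoke-test image):

  $ docker run --rm hello-world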

Prepare Docker Volume to Persist Data

By default all files created inside a container are stored on a writable container layer. The data doesn’t persist when that container no longer exists. Docker volumes are the preferred way to save data outside the containers. The following two types of Docker volumes are used the most:

  • Host Volume: You manage where in the Docker host’s file system to store and share the containers’ data. To create a host volume, run:

    $ docker run -v /path/on/host:/path/in/container ...

    The file or directory is referenced by its full path on the Docker host. It can exist on the Docker host already, or it will be created automatically if it does not yet exist.

  • Named Volume: Docker manages where the volume is located, and it is referred to by a specific name. To create a named volume, run:

    $ docker volume create volumeName
    $ docker run -v volumeName:/path/in/container ...
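
If you go with a named volume, standard Docker commands let you locate and clean it up later; for example:

  # List volumes, show where Docker stores one on the host, and remove it
  $ docker volume ls
  $ docker volume inspect volumeName
  $ docker volume rm volumeName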

Either host volumes or named volumes can be used for Alluxio containers. For testing purposes, the host volume is recommended, since it is the easiest type of volume to use and very performant. More importantly, you know where in the host file system the data is stored, and you can manipulate the files directly and easily outside the containers.

Therefore, we will use the host volume and mount the host directory /tmp/alluxio_ufs to the container location /opt/alluxio/underFSStorage, which is the default setting for the Alluxio UFS root mount point in the Alluxio docker image:

  $ mkdir -p /tmp/alluxio_ufs
  $ docker run -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage ...

Of course, you can choose to mount a different path instead of /tmp/alluxio_ufs. From version 2.1 on, the Alluxio Docker image runs as user alluxio by default, with UID 1000 and GID 1000. Make sure the mounted directory is writable by the user the Docker image runs as.
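
For example, a minimal way to grant write access, assuming the default alluxio user (UID 1000, GID 1000) inside the container:

  # Give ownership of the UFS directory to UID/GID 1000
  $ sudo chown -R 1000:1000 /tmp/alluxio_ufs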

Launch Alluxio Containers for Master and Worker

The Alluxio clients (local or remote) need to communicate with both Alluxio master and workers. Therefore it is important to make sure clients can reach both of the following services:

  • Master RPC on port 19998
  • Worker RPC on port 29999

Within the Alluxio cluster, please also make sure the master and worker containers can reach each other on the ports defined in General requirements.

We are going to launch Alluxio master and worker containers on the same Docker host machine. In order to make sure this works for either local or remote clients, we have to set up the Docker network and expose the required ports correctly.

There are two ways to launch Alluxio Docker containers on the Docker host:

  • Option A: Use the host network. The host network shares the IP address and networking namespace between the container and the Docker host.
  • Option B: Use a user-defined bridge network. A user-defined bridge network allows connected containers to communicate, while providing isolation from containers not connected to that bridge network.

It is recommended to use the host network (option A) for testing.

Launch the Alluxio Master

  $ docker run -d --rm \
      --net=host \
      --name=alluxio-master \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.hostname=localhost \
         -Dalluxio.master.mount.table.root.ufs=/opt/alluxio/underFSStorage" \
      alluxio/alluxio master

Launch the Alluxio Worker

  $ docker run -d --rm \
      --net=host \
      --name=alluxio-worker \
      --shm-size=1G \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.worker.ramdisk.size=1G \
         -Dalluxio.master.hostname=localhost" \
      alluxio/alluxio worker

Notes:

  1. The argument --net=host tells Docker to use the host network. Under this setup, the containers are directly using the host’s network adapter. All containers will have the same hostname and IP address as the Docker host, and all the host’s ports are directly mapped to containers. Therefore, all the required container ports 19999, 19998, 29999, 30000 are available for the clients via the Docker host. You can find more details about this setting here.
  2. The argument -e ALLUXIO_JAVA_OPTS="-Dalluxio.worker.ramdisk.size=1G -Dalluxio.master.hostname=localhost" allocates the worker’s memory capacity and binds the master address. When using the host network driver, the master can’t be referenced by its container name alluxio-master, or a "No Alluxio worker available" error will be thrown. Instead, it should be referenced by the host’s IP address; here localhost resolves to the Docker host.
  3. The argument --shm-size=1G will allocate a 1G tmpfs for the worker to store Alluxio data.
  4. The argument -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage tells Docker to use the host volume and persist the Alluxio UFS root data in the host directory /tmp/alluxio_ufs, as explained above in the Docker volume section.

Using the host network is simple, but it has disadvantages. For example:

  • Services running inside the container could conflict with services in other containers that run on the same ports.
  • Containers can access the host’s full network stack, which brings potential security risks.

The better way is to use a user-defined network, but we need to explicitly expose the required ports so that external clients can reach the containers’ services:

Prepare the network

  $ docker network create alluxio_network

Launch the Alluxio master

  $ docker run -d --rm \
      -p 19999:19999 \
      -p 19998:19998 \
      --net=alluxio_network \
      --name=alluxio-master \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.hostname=alluxio-master \
         -Dalluxio.master.mount.table.root.ufs=/opt/alluxio/underFSStorage" \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      alluxio/alluxio master

Launch the Alluxio worker

  $ docker run -d --rm \
      -p 29999:29999 \
      -p 30000:30000 \
      --net=alluxio_network \
      --name=alluxio-worker \
      --shm-size=1G \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.worker.ramdisk.size=1G \
         -Dalluxio.master.hostname=alluxio-master \
         -Dalluxio.worker.hostname=alluxio-worker" \
      alluxio/alluxio worker

Notes:

  1. The argument --net=alluxio_network tells Docker to use the user-defined bridge network alluxio_network. All containers will use their own container IDs as their hostname, and each of them has a different IP address within the network’s subnet. Containers connected to the same user-defined bridge network effectively expose all ports to each other, unless firewall policies are defined. You can find more details about the bridge network driver here.
  2. Only the specified ports (-p option) are exposed to the outside network, where clients may run. The option -p <host-port>:<container-port> maps a container port to a host port. Therefore, you must explicitly expose ports 19999 and 19998 for the master container and ports 29999 and 30000 for the worker container. Otherwise, clients can’t communicate with the master and worker.
  3. You can refer to the master and worker by their container names (alluxio-master for the master container and alluxio-worker for the worker container) if all communication stays within the Docker network (i.e., no external clients outside the Docker network). Otherwise, you must specify the Docker host IP that clients can reach (e.g., with -Dalluxio.worker.hostname=$(hostname -i)); this is required for communication between the master/worker and clients outside the Docker network. Without it, clients can’t connect to the worker, since they do not recognize the worker’s container ID, and throw an error like the following (a full worker launch example follows the error message):

    Target: 5a1a840d2a98:29999, Error: alluxio.exception.status.UnavailableException: Unable to resolve host 5a1a840d2a98
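
For example, here is a sketch of the worker launch adjusted for external clients, substituting the Docker host's IP for the container hostnames (this mirrors the command above; only the two hostname properties change):

  $ docker run -d --rm \
      -p 29999:29999 \
      -p 30000:30000 \
      --net=alluxio_network \
      --name=alluxio-worker \
      --shm-size=1G \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.worker.ramdisk.size=1G \
         -Dalluxio.master.hostname=$(hostname -i) \
         -Dalluxio.worker.hostname=$(hostname -i)" \
      alluxio/alluxio worker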

Verify the Cluster

To verify that the services came up, check docker ps. You should see something like

  $ docker ps
  CONTAINER ID   IMAGE             COMMAND                  CREATED          STATUS          PORTS                      NAMES
  1fef7c714d25   alluxio/alluxio   "/entrypoint.sh work…"   39 seconds ago   Up 38 seconds                              alluxio-worker
  27f92f702ac2   alluxio/alluxio   "/entrypoint.sh mast…"   44 seconds ago   Up 43 seconds   0.0.0.0:19999->19999/tcp   alluxio-master

If you don’t see the containers, run docker logs on their container ids to see what happened. The container ids were printed by the docker run command, and can also be found in docker ps -a.

Visit instance-hostname:19999 to view the Alluxio web UI. You should see one worker connected and providing 1024MB of space.
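
You can also check from the command line that the master web server is responding; a quick sketch, assuming you are on the Docker host with the host-network setup:

  $ curl -i http://localhost:19999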

To run tests, enter the worker container

  $ docker exec -it alluxio-worker /bin/bash

Run the tests

  $ cd /opt/alluxio
  $ ./bin/alluxio runTests
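
Beyond runTests, you can exercise the file system by hand with the alluxio fs shell from inside the container; a short sketch with a hypothetical test file:

  # Create a local file, copy it into Alluxio, then list and read it back
  $ echo "hello alluxio" > /tmp/hello.txt
  $ ./bin/alluxio fs copyFromLocal /tmp/hello.txt /hello.txt
  $ ./bin/alluxio fs ls /
  $ ./bin/alluxio fs cat /hello.txt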

To test remote client access, for example from a Spark cluster (Python 3):

  textFile_alluxio_path = "alluxio://{docker_host-ip}:19998/path_to_the_file"
  textFile_RDD = sc.textFile(textFile_alluxio_path)
  for line in textFile_RDD.collect():
      print(line)

Congratulations, you’ve deployed a basic Dockerized Alluxio cluster! Read on to learn more about how to manage the cluster and make it production-ready.

Advanced Setup

Set server configuration

Configuration changes require stopping the Alluxio Docker containers, then re-launching them with the new configuration.

To set an Alluxio configuration property, add it to the Alluxio java options environment variable with

  -e ALLUXIO_JAVA_OPTS="-Dalluxio.property.name=value"

Multiple properties should be space-separated.

If a property value contains spaces, you must escape it using single quotes.

  -e ALLUXIO_JAVA_OPTS="-Dalluxio.property1=value1 -Dalluxio.property2='value2 with spaces'"

Alluxio environment variables will be copied to conf/alluxio-env.sh when the image starts. If you are not seeing a property take effect, make sure the property in conf/alluxio-env.sh within the container is spelled correctly. You can check the contents with

  $ docker exec ${container_id} cat /opt/alluxio/conf/alluxio-env.sh
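
Putting this together, here is a sketch of a full configuration-change cycle for the host-network master from the tutorial (the added write-type property is only an illustration):

  # Stop the running master; --rm in the original launch removes the old container
  $ docker stop alluxio-master
  # Relaunch with an extra property appended to ALLUXIO_JAVA_OPTS
  $ docker run -d --rm \
      --net=host \
      --name=alluxio-master \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.hostname=localhost \
         -Dalluxio.master.mount.table.root.ufs=/opt/alluxio/underFSStorage \
         -Dalluxio.user.file.writetype.default=CACHE_THROUGH" \
      alluxio/alluxio master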

Run in High-Availability Mode

A lone Alluxio master is a single point of failure. To guard against this, a production cluster should run multiple Alluxio masters in High Availability mode.

There are two ways to enable HA mode in Alluxio: either with internal leader election and an embedded journal, or with external ZooKeeper and shared journal storage. Please read running Alluxio with HA for more details. The second option is recommended for production use cases.

Alluxio uses internal leader election by default.

Provide the master embedded journal addresses and set the hostname of the current master:

  $ docker run -d \
      ...
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.embedded.journal.addresses=master-hostname-1:19200,master-hostname-2:19200,master-hostname-3:19200 \
         -Dalluxio.master.hostname=master-hostname-1" \
      alluxio/alluxio master

Set the master RPC addresses for all the workers so that they can query the masters to find out the leader:

  $ docker run -d \
      ...
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.rpc.addresses=master_hostname_1:19998,master_hostname_2:19998,master_hostname_3:19998" \
      alluxio/alluxio worker

You can find more on Embedded Journal configuration here.

To run in HA mode with Zookeeper, Alluxio needs a shared journal directory that all masters have access to, usually either NFS or HDFS.

Point the masters to the shared journal and set their ZooKeeper configuration:

  $ docker run -d \
      ...
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.journal.type=UFS \
         -Dalluxio.master.journal.folder=hdfs://[namenodeserver]:[namenodeport]/alluxio_journal \
         -Dalluxio.zookeeper.enabled=true \
         -Dalluxio.zookeeper.address=zkhost1:2181,zkhost2:2181,zkhost3:2181" \
      alluxio/alluxio master

Set the same Zookeeper configuration for workers so that they can query Zookeeper to discover the current leader.

  $ docker run -d \
      ...
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.zookeeper.enabled=true \
         -Dalluxio.zookeeper.address=zkhost1:2181,zkhost2:2181,zkhost3:2181" \
      alluxio/alluxio worker

You can find more on ZooKeeper and shared journal configuration here.


Relaunch Alluxio Servers

When relaunching Alluxio masters, use the --no-format flag to avoid re-formatting the journal. The journal should only be formatted the first time the image is run. Formatting the journal deletes all Alluxio metadata, and starts the cluster in a fresh state.
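
For example, a sketch of a master relaunch, assuming the image entrypoint forwards the --no-format flag to the master process as described above:

  $ docker run -d --rm \
      --net=host \
      --name=alluxio-master \
      -v /tmp/alluxio_ufs:/opt/alluxio/underFSStorage \
      -e ALLUXIO_JAVA_OPTS=" \
         -Dalluxio.master.hostname=localhost \
         -Dalluxio.master.mount.table.root.ufs=/opt/alluxio/underFSStorage" \
      alluxio/alluxio master --no-format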

Enable POSIX API access

Using the alluxio/alluxio-fuse image, you can enable access to Alluxio on the Docker host using the POSIX API.

For example, the following command runs the alluxio-fuse container as a long-running client that presents the Alluxio file system through a POSIX interface on the Docker host:

  $ docker run --rm \
      --net=host \
      --name=alluxio-fuse \
      -v /tmp/mnt:/mnt:rshared \
      -e "ALLUXIO_JAVA_OPTS=-Dalluxio.master.hostname=localhost" \
      --cap-add SYS_ADMIN \
      --device /dev/fuse \
      alluxio/alluxio-fuse fuse

Notes

  • -v /tmp/mnt:/mnt:rshared binds /mnt/alluxio-fuse, the default directory where Alluxio is mounted through FUSE inside the container, to a mount accessible at /tmp/mnt/alluxio-fuse on the host. To change this path to /foo/bar/alluxio-fuse on the host file system, replace /tmp/mnt with /foo/bar.
  • --cap-add SYS_ADMIN launches the container with SYS_ADMIN capability.
  • --device /dev/fuse shares host device /dev/fuse with the container.
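
Once the container is running, Alluxio files can be accessed with ordinary shell tools on the host; for example (the file name is hypothetical):

  $ ls /tmp/mnt/alluxio-fuse
  $ cat /tmp/mnt/alluxio-fuse/some_file.txt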

Performance Optimization

Enable short-circuit reads and writes

If your application containers will run on the same host as your Alluxio worker containers, performance can be greatly improved by enabling short-circuit reads and writes. This allows applications to read from and write to their local Alluxio worker without going over the loopback network. In Dockerized environments, there are two ways to enable short-circuit reads and writes in Alluxio: domain sockets and shared volumes.

Using shared volumes is slightly easier and may yield higher performance, but may result in inaccurate resource accounting. Using domain sockets is recommended for production deployment.

To use domain sockets, on worker host machines, create a directory for the shared domain socket:

  $ mkdir /tmp/domain
  $ chmod a+w /tmp/domain

When starting both workers and clients, run their docker containers with -v /tmp/domain:/opt/domain to share the domain socket directory. Also set domain socket properties by passing alluxio.worker.data.server.domain.socket.address=/opt/domain and alluxio.worker.data.server.domain.socket.as.uuid=true when launching worker containers.

  $ docker run -d \
      ...
      -v /tmp/domain:/opt/domain \
      -e ALLUXIO_JAVA_OPTS=" \
         ...
         -Dalluxio.worker.data.server.domain.socket.address=/opt/domain \
         -Dalluxio.worker.data.server.domain.socket.as.uuid=true" \
      alluxio/alluxio worker
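
On the client side, the same volume mapping and socket address make the domain socket visible; a minimal sketch for a hypothetical application container (the image name is a placeholder):

  $ docker run -d \
      ...
      -v /tmp/domain:/opt/domain \
      -e ALLUXIO_JAVA_OPTS=" \
         ...
         -Dalluxio.worker.data.server.domain.socket.address=/opt/domain" \
      your-application-image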

Alternatively, to use shared volumes, run the worker and client Docker containers with the worker storage as a shared volume across the host, worker, and client containers. With the default Alluxio setting on Docker, MEM is the main storage, on host path /dev/shm. In this case, pass -v /dev/shm:/dev/shm when running both containers so that both workers and clients can access this path directly.

For example, run worker container using:

  $ docker run -d \
      ...
      --shm-size=1G \
      -v /dev/shm:/dev/shm \
      alluxio/alluxio worker

To run application containers, also pass alluxio.user.hostname=<host ip>.
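
A sketch of a matching application container, again with a placeholder image name:

  $ docker run -d \
      ...
      -v /dev/shm:/dev/shm \
      -e ALLUXIO_JAVA_OPTS=" \
         ...
         -Dalluxio.user.hostname=<host ip>" \
      your-application-image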


Troubleshooting

Alluxio server logs can be accessed by running docker logs $container_id. Usually the logs will give a good indication of what is wrong. If they are not enough to diagnose your issue, you can get help on the user mailing list.

FAQ

AvailableProcessors: returns 0 in docker container

If you execute alluxio fs ls in the Alluxio master container and get the following error:

  bash-4.4$ alluxio fs ls /
  Exception in thread "main" java.lang.ExceptionInInitializerError
  ...
  Caused by: java.lang.IllegalArgumentException: availableProcessors: 0 (expected: > 0)
  at io.netty.util.internal.ObjectUtil.checkPositive(ObjectUtil.java:44)
  at io.netty.util.NettyRuntime$AvailableProcessorsHolder.setAvailableProcessors(NettyRuntime.java:44)
  at io.netty.util.NettyRuntime$AvailableProcessorsHolder.availableProcessors(NettyRuntime.java:70)
  at io.netty.util.NettyRuntime.availableProcessors(NettyRuntime.java:98)
  at io.grpc.netty.Utils$DefaultEventLoopGroupResource.<init>(Utils.java:394)
  at io.grpc.netty.Utils.<clinit>(Utils.java:84)
  ... 20 more

This error can be fixed by adding -XX:ActiveProcessorCount=4 as a JVM parameter.
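
For example, the flag can be passed through the same ALLUXIO_JAVA_OPTS mechanism used throughout this guide:

  $ docker run -d --rm \
      ...
      -e ALLUXIO_JAVA_OPTS=" \
         -XX:ActiveProcessorCount=4 \
         ..." \
      alluxio/alluxio master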