Deploy Alluxio on Kubernetes


Alluxio can be run on Kubernetes. This guide demonstrates how to run Alluxio on Kubernetes using helm or the specifications included in the Alluxio Docker image.

Prerequisites

  • A Kubernetes cluster (version >= 1.8). With the default specifications, Alluxio workers may use emptyDir volumes with a restricted size using the sizeLimit parameter. This is an alpha feature in Kubernetes 1.8. Please ensure the feature is enabled.
  • An Alluxio Docker image alluxio/alluxio. If using a private Docker registry, refer to the Kubernetes documentation.
  • Ensure the Kubernetes Network Policy allows for connectivity between applications (Alluxio clients) and the Alluxio Pods on the defined ports.

Basic Setup

This tutorial walks through a basic Alluxio setup on Kubernetes. Alluxio supports two methods of installation on Kubernetes: helm charts or kubectl. When available, helm is the preferred way to install Alluxio. If helm is not available or if additional deployment customization is desired, kubectl can be used directly with native Kubernetes resource specifications.

Note: From Alluxio 2.3 on, Alluxio only supports helm 3. See how to migrate from helm 2 to 3 here.

(Optional) Extract Kubernetes Specifications

If hosting a private helm repository or using native Kubernetes specifications, extract the Kubernetes specifications required to deploy Alluxio from the Docker image.

  $ id=$(docker create alluxio/alluxio:2.3.0)
  $ docker cp $id:/opt/alluxio/integration/kubernetes/ - > kubernetes.tar
  $ docker rm -v $id 1>/dev/null
  $ tar -xvf kubernetes.tar
  $ cd kubernetes

(Optional) Provision a Persistent Volume

Note: The Embedded Journal requires a Persistent Volume to be provisioned for each master Pod and is the preferred HA mechanism for Alluxio on Kubernetes. The volume, once claimed, is persisted across restarts of the master process.

When using the UFS Journal, an Alluxio master can also be configured to use a persistent volume for storing the journal. If you are using the UFS journal with an external journal location such as HDFS, the rest of this section can be skipped.

There are multiple ways to create a PersistentVolume. This is an example which defines one with hostPath:

  # Name the file alluxio-master-journal-pv.yaml
  kind: PersistentVolume
  apiVersion: v1
  metadata:
    name: alluxio-journal-0
    labels:
      type: local
  spec:
    storageClassName: standard
    capacity:
      storage: 1Gi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: /tmp/alluxio-journal-0

Note: By default each journal volume should be at least 1Gi, because each Alluxio master Pod will have one PersistentVolumeClaim that requests 1Gi of storage. You will see how to configure the journal size in later sections.

Then create the persistent volume with kubectl:

  $ kubectl create -f alluxio-master-journal-pv.yaml

There are other ways to create Persistent Volumes as documented here.

Deploy

Prerequisites

A. Install Helm

You should have helm 3.X installed. You can install helm following instructions here.
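A quick way to confirm that a compatible version is installed (the exact output format may vary slightly between helm releases):

  $ helm version --short   # should report a v3.x version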

B. A helm repo with the Alluxio helm chart must be available.

  $ helm repo add alluxio-charts https://alluxio-charts.storage.googleapis.com/openSource/2.3.0

Configuration

Once the helm repository is available, prepare the Alluxio configuration. The minimal configuration must contain the under storage address:

  properties:
    alluxio.master.mount.table.root.ufs: "<under_storage_address>"

Note: The Alluxio under filesystem address MUST be modified. Any credentials MUST be modified.

To view the complete list of supported properties run the helm inspect command:

  $ helm inspect values alluxio-charts/alluxio

The remainder of this section describes various configuration options with examples.

Example: Amazon S3 as the under store

To mount S3 at the root of the Alluxio namespace, specify all required properties as key-value pairs under properties.

  properties:
    alluxio.master.mount.table.root.ufs: "s3a://<bucket>"
    alluxio.master.mount.table.root.option.aws.accessKeyId: "<accessKey>"
    alluxio.master.mount.table.root.option.aws.secretKey: "<secretKey>"

Example: Single Master and Journal in a Persistent Volume

The following configures UFS Journal with a persistent volume claim mounted locally to the master Pod at location /journal.

  master:
    count: 1 # For multiMaster mode increase this to >1
  journal:
    type: "UFS"
    ufsType: "local"
    folder: "/journal"
    size: 1Gi
    # volumeType controls the type of journal volume.
    # It can be "persistentVolumeClaim" or "emptyDir"
    volumeType: persistentVolumeClaim
    # Attributes to use when the journal is persistentVolumeClaim
    storageClass: "standard"
    accessModes:
      - ReadWriteOnce

Example: Single Master and Journal in an `emptyDir` Volume

The following configures UFS Journal with an emptyDir volume mounted locally to the master Pod at location /journal.

  master:
    count: 1 # For multiMaster mode increase this to >1
  journal:
    type: "UFS"
    ufsType: "local"
    folder: "/journal"
    size: 1Gi
    # volumeType controls the type of journal volume.
    # It can be "persistentVolumeClaim" or "emptyDir"
    volumeType: emptyDir
    # Attributes to use when the journal is emptyDir
    medium: ""

Note: An emptyDir volume has the same lifetime as the Pod. It is NOT a persistent storage. The Alluxio journal will be LOST when the Pod is restarted or rescheduled. Please only use this for experimental use cases. Check emptyDir for more details.

Example: HDFS as Journal

First create secrets for any configuration required by an HDFS client. These are mounted under /secrets.

  $ kubectl create secret generic alluxio-hdfs-config --from-file=${HADOOP_CONF_DIR}/core-site.xml --from-file=${HADOOP_CONF_DIR}/hdfs-site.xml

  journal:
    type: "UFS"
    ufsType: "HDFS"
    folder: "hdfs://{$hostname}:{$hostport}/journal"
  properties:
    alluxio.master.mount.table.root.ufs: "hdfs://<ns>"
    alluxio.master.journal.ufs.option.alluxio.underfs.hdfs.configuration: "/secrets/hdfsConfig/core-site.xml:/secrets/hdfsConfig/hdfs-site.xml"
  secrets:
    master:
      alluxio-hdfs-config: hdfsConfig
    worker:
      alluxio-hdfs-config: hdfsConfig

Example: Multi-master with Embedded Journal in Persistent Volumes

  master:
    count: 3
  journal:
    type: "EMBEDDED"
    folder: "/journal"
    # volumeType controls the type of journal volume.
    # It can be "persistentVolumeClaim" or "emptyDir"
    volumeType: persistentVolumeClaim
    size: 1Gi
    # Attributes to use when the journal is persistentVolumeClaim
    storageClass: "standard"
    accessModes:
      - ReadWriteOnce

Example: Multi-master with Embedded Journal in `emptyDir` Volumes

  master:
    count: 3
  journal:
    type: "EMBEDDED"
    folder: "/journal"
    size: 1Gi
    # volumeType controls the type of journal volume.
    # It can be "persistentVolumeClaim" or "emptyDir"
    volumeType: emptyDir
    # Attributes to use when the journal is emptyDir
    medium: ""

Note: An emptyDir volume has the same lifetime as the Pod. It is NOT a persistent storage. The Alluxio journal will be LOST when the Pod is restarted or rescheduled. Please only use this for experimental use cases. Check emptyDir for more details.

Example: HDFS as the under store

First create secrets for any configuration required by an HDFS client. These are mounted under /secrets.

  $ kubectl create secret generic alluxio-hdfs-config --from-file=${HADOOP_CONF_DIR}/core-site.xml --from-file=${HADOOP_CONF_DIR}/hdfs-site.xml

  properties:
    alluxio.master.mount.table.root.ufs: "hdfs://<ns>"
    alluxio.master.mount.table.root.option.alluxio.underfs.hdfs.configuration: "/secrets/hdfsConfig/core-site.xml:/secrets/hdfsConfig/hdfs-site.xml"
  secrets:
    master:
      alluxio-hdfs-config: hdfsConfig
    worker:
      alluxio-hdfs-config: hdfsConfig

Example: Off-heap Metastore Management in Persistent Volumes

The following configuration creates a PersistentVolumeClaim for each Alluxio master Pod with the specified configuration and configures the Pod to use the volume for an on-disk RocksDB-based metastore.

  properties:
    alluxio.master.metastore: ROCKS
    alluxio.master.metastore.dir: /metastore
  metastore:
    volumeType: persistentVolumeClaim # Options: "persistentVolumeClaim" or "emptyDir"
    size: 1Gi
    mountPath: /metastore
    # Attributes to use when the metastore is persistentVolumeClaim
    storageClass: "standard"
    accessModes:
      - ReadWriteOnce

Example: Off-heap Metastore Management in `emptyDir` Volumes

The following configuration creates an emptyDir Volume for each Alluxio master Pod with the specified configuration and configures the Pod to use the volume for an on-disk RocksDB-based metastore.

  properties:
    alluxio.master.metastore: ROCKS
    alluxio.master.metastore.dir: /metastore
  metastore:
    volumeType: emptyDir # Options: "persistentVolumeClaim" or "emptyDir"
    size: 1Gi
    mountPath: /metastore
    # Attributes to use when the metastore is emptyDir
    medium: ""

Note: An emptyDir volume has the same lifetime as the Pod. It is NOT a persistent storage. The Alluxio metadata will be LOST when the Pod is restarted or rescheduled. Please only use this for experimental use cases. Check emptyDir for more details.

Example: Multiple Secrets

Multiple secrets can be mounted to both master and worker Pods. The format for each Pod's section is <secretName>: <mountPath>.

  secrets:
    master:
      alluxio-hdfs-config: hdfsConfig
      alluxio-ceph-config: cephConfig
    worker:
      alluxio-hdfs-config: hdfsConfig
      alluxio-ceph-config: cephConfig

Examples: Alluxio Storage Management

Alluxio manages local storage, including memory, on the worker Pods. Multiple-Tier Storage can be configured using the following reference configurations.

There are three supported volume types: hostPath, emptyDir, and persistentVolumeClaim.

Memory Tier Only

  tieredstore:
    levels:
    - level: 0
      mediumtype: MEM
      path: /dev/shm
      type: emptyDir
      high: 0.95
      low: 0.7

Memory and SSD Storage in Multiple-Tiers

  tieredstore:
    levels:
    - level: 0
      mediumtype: MEM
      path: /dev/shm
      type: hostPath
      high: 0.95
      low: 0.7
    - level: 1
      mediumtype: SSD
      path: /ssd-disk
      type: hostPath
      high: 0.95
      low: 0.7

Note: If a hostPath file or directory is created at runtime, it can only be used by the root user. hostPath volumes do not have resource limits. You can either run Alluxio containers with root or make sure the local paths exist and are accessible to the user alluxio with UID and GID 1000. You can find more details here.
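For example, a minimal sketch of preparing the hostPath directories on each node, assuming the /ssd-disk path from the example above and the default alluxio user:

  # Run on every Kubernetes node that will host an Alluxio worker
  $ sudo mkdir -p /ssd-disk
  $ sudo chown -R 1000:1000 /ssd-disk   # UID/GID of the alluxio user in the container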

Memory and SSD Storage in Multiple-Tiers, using PVC

You can also use PVCs for each tier and provision PersistentVolumes. For worker tiered storage, use either hostPath or local volumes so that the worker reads and writes locally to achieve the best performance.

  tieredstore:
    levels:
    - level: 0
      mediumtype: MEM
      path: /dev/shm
      type: persistentVolumeClaim
      name: alluxio-mem
      quota: 1G
      high: 0.95
      low: 0.7
    - level: 1
      mediumtype: SSD
      path: /ssd-disk
      type: persistentVolumeClaim
      name: alluxio-ssd
      quota: 10G
      high: 0.95
      low: 0.7

Note: There is one PVC per tier. When the PVC is bound to a PV of type hostPath or local, each worker Pod will resolve to the local path on the Node. Please also note that a local volume requires nodeAffinity, and Pods using this volume can only run on the Nodes specified in its nodeAffinity rule. You can find more details here.
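For illustration, a minimal sketch of a local PersistentVolume with the required nodeAffinity; the volume name, node name, capacity, and path here are assumptions and should be adapted to your cluster:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: alluxio-ssd-0               # example name
  spec:
    capacity:
      storage: 10Gi                   # should cover the tier quota
    accessModes:
      - ReadWriteOnce
    storageClassName: standard
    local:
      path: /ssd-disk                 # path on the node
    nodeAffinity:
      required:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - <node-name>             # node that hosts this volume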

Memory and SSD Storage in a Single-Tier

You can also have multiple volumes on the same tier. This configuration will create one persistentVolumeClaim for each volume.

  tieredstore:
    levels:
    - level: 0
      mediumtype: MEM,SSD
      path: /dev/shm,/alluxio-ssd
      type: persistentVolumeClaim
      name: alluxio-mem,alluxio-ssd
      quota: 1GB,10GB
      high: 0.95
      low: 0.7

Install

Once the configuration is finalized in a file named config.yaml, install as follows:

  $ helm install alluxio -f config.yaml alluxio-charts/alluxio

Uninstall

Uninstall Alluxio as follows:

  $ helm delete alluxio

Format Journal

The master Pods in the StatefulSet use an initContainer to format the journal on startup. This initContainer is switched on by journal.format.runFormat=true. By default, the journal is not formatted when the master starts.

You can trigger the journal formatting by upgrading the existing helm deployment with journal.format.runFormat=true.

  # Use the same config.yaml and switch on journal formatting
  $ helm upgrade alluxio -f config.yaml --set journal.format.runFormat=true alluxio-charts/alluxio

Note: helm upgrade will re-create the master Pods.

Or you can trigger the journal formatting at deployment.

  $ helm install alluxio -f config.yaml --set journal.format.runFormat=true alluxio-charts/alluxio

Choose the Sample YAML Template

The specification directory contains a set of YAML templates for common deployment scenarios in the sub-directories: singleMaster-localJournal, singleMaster-hdfsJournal and multiMaster-embeddedJournal.

singleMaster means the templates generate 1 Alluxio master process, while multiMaster means 3. embedded and ufs are the 2 journal modes that Alluxio supports.

  • The singleMaster-localJournal directory gives you the necessary Kubernetes ConfigMap, 1 Alluxio master process, and a set of Alluxio workers. The Alluxio master writes its journal to the journal volume requested by volumeClaimTemplates.
  • The multiMaster-embeddedJournal directory gives you the Kubernetes ConfigMap, 3 Alluxio masters, and a set of Alluxio workers. Each Alluxio master writes its journal to its own journal volume requested by volumeClaimTemplates.
  • The singleMaster-hdfsJournal directory gives you the Kubernetes ConfigMap, 1 Alluxio master with a set of workers. The journal is in a shared UFS location. In this template we use HDFS as the UFS.

Configuration

Once the deployment option is chosen, copy the template from the desired sub-directory:

  $ cp alluxio-configmap.yaml.template alluxio-configmap.yaml

Modify or add any configuration properties as required. The Alluxio under filesystem address MUST be modified. Any credentials MUST be modified. Add to ALLUXIO_JAVA_OPTS:

  -Dalluxio.master.mount.table.root.ufs=<under_storage_address>

Note:

  • Replace <under_storage_address> with the appropriate URI, for example s3://my-bucket. If using an under storage which requires credentials be sure to specify those as well.
  • When running Alluxio with host networking, the ports assigned to Alluxio services must not be occupied beforehand.
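For instance, assuming S3 as the under store, the relevant additions to ALLUXIO_JAVA_OPTS in alluxio-configmap.yaml might look like the following; the bucket name and credentials are placeholders:

  -Dalluxio.master.mount.table.root.ufs=s3://my-bucket \
  -Dalluxio.master.mount.table.root.option.aws.accessKeyId=<accessKey> \
  -Dalluxio.master.mount.table.root.option.aws.secretKey=<secretKey>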

Create a ConfigMap.

  $ kubectl create -f alluxio-configmap.yaml

Install

Prepare the Specification. Prepare the Alluxio deployment specs from the templates. Modify any parameters required, such as location of the Docker image, and CPU and memory requirements for Pods.
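For reference, the fields you would typically adjust in the container specs look roughly like this; the image tag and resource numbers below are only placeholders:

  containers:
  - name: alluxio-master
    image: alluxio/alluxio:2.3.0      # set the desired image and tag
    resources:
      requests:
        cpu: "1"
        memory: "1Gi"
      limits:
        cpu: "4"
        memory: "4Gi"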

For the master(s), create the Service and StatefulSet:

  $ mv master/alluxio-master-service.yaml.template master/alluxio-master-service.yaml
  $ mv master/alluxio-master-statefulset.yaml.template master/alluxio-master-statefulset.yaml

Note: alluxio-master-statefulset.yaml uses volumeClaimTemplates to define the journal volume for each master if it needs one.

For the workers, create the DaemonSet:

  $ mv worker/alluxio-worker-daemonset.yaml.template worker/alluxio-worker-daemonset.yaml

Note: Please make sure that the version of the Kubernetes specification matches the version of the Alluxio Docker image being used.

(Optional) Remote Storage Access

Additional steps may be required when Alluxio is connecting to storage hosts outside the Kubernetes cluster it is deployed on. The remainder of this section explains how to configure the connection to a remote HDFS accessible but not managed by Kubernetes.

Step 1: Add hostAliases for your HDFS connection. Kubernetes Pods don't recognize network hostnames that are not managed by Kubernetes (i.e. not a Kubernetes Service) unless specified via hostAliases.

For example if your HDFS service can be reached at hdfs://<namenode>:9000 where <namenode> is a hostname, you will need to add hostAliases in the spec for all Alluxio Pods creating a map from hostnames to IP addresses.

  spec:
    hostAliases:
    - ip: "<namenode_ip>"
      hostnames:
      - "<namenode>"

For a StatefulSet or DaemonSet, as used in alluxio-master-statefulset.yaml.template and alluxio-worker-daemonset.yaml.template, the hostAliases section should be added under spec.template.spec, as shown below.

  kind: StatefulSet
  metadata:
    name: alluxio-master
  spec:
    ...
    serviceName: "alluxio-master"
    replicas: 1
    template:
      metadata:
        labels:
          app: alluxio-master
      spec:
        hostAliases:
        - ip: "ip for hdfs-host"
          hostnames:
          - "hdfs-host"

Step 2: Create Kubernetes Secret for HDFS configuration files. Run the following command to create a Kubernetes Secret for the HDFS client configuration.

  $ kubectl create secret generic alluxio-hdfs-config --from-file=${HADOOP_CONF_DIR}/core-site.xml --from-file=${HADOOP_CONF_DIR}/hdfs-site.xml

These two configuration files are referenced in alluxio-master-statefulset.yaml and alluxio-worker-daemonset.yaml. Alluxio processes need the HDFS configuration files to connect, and the location of these files in the container is controlled by the property alluxio.underfs.hdfs.configuration.

Step 3: Modify alluxio-configmap.yaml.template. Now that your Pods know how to talk to your HDFS service, update alluxio.master.journal.folder and alluxio.master.mount.table.root.ufs to point to the desired HDFS destination.
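For example, assuming the namenode address from Step 1 and the secret from Step 2, the corresponding ALLUXIO_JAVA_OPTS entries might look like this; the paths under HDFS are placeholders:

  -Dalluxio.master.journal.folder=hdfs://<namenode>:9000/journal \
  -Dalluxio.master.mount.table.root.ufs=hdfs://<namenode>:9000/alluxio \
  -Dalluxio.underfs.hdfs.configuration=/secrets/hdfsConfig/core-site.xml:/secrets/hdfsConfig/hdfs-site.xml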

Once all the prerequisites and configuration have been set up, deploy Alluxio.

  $ kubectl create -f ./master/
  $ kubectl create -f ./worker/

Uninstall

Uninstall Alluxio as follows:

  $ kubectl delete -f ./worker/
  $ kubectl delete -f ./master/
  $ kubectl delete configmap alluxio-config

Note: This will delete all resources under ./master/ and ./worker/. Be careful if you have persistent volumes or other important resources you want to keep under those directories.

Format Journal

You can manually add an initContainer to format the journal at Pod creation time. This initContainer runs alluxio formatJournal when the Pod is created.

  - name: journal-format
    image: alluxio/alluxio:2.3.0
    imagePullPolicy: IfNotPresent
    securityContext:
      runAsUser: 1000
    command: ["alluxio","formatJournal"]
    volumeMounts:
    - name: alluxio-journal
      mountPath: /journal

Note: From Alluxio v2.1 on, Alluxio Docker containers except Fuse will run as non-root user alluxio with UID 1000 and GID 1000 by default. You should make sure the journal is formatted using the same user that the Alluxio master Pod runs as.

Upgrade

This section will go over how to upgrade Alluxio in your Kubernetes cluster with kubectl.

Upgrading Alluxio

Step 1: Upgrade the docker image version tag

Each released Alluxio version will have the corresponding docker image released on dockerhub.

You should update the image field of all the Alluxio containers to use the target version tag. Tag latest will point to the latest stable version.

For example, if you want to upgrade Alluxio to the latest stable version, update the containers as below:

  containers:
  - name: alluxio-master
    image: alluxio/alluxio:latest
    imagePullPolicy: IfNotPresent
    ...
  - name: alluxio-job-master
    image: alluxio/alluxio:latest
    imagePullPolicy: IfNotPresent
    ...

Step 2: Stop running Alluxio master and worker Pods

Kill all running Alluxio worker Pods by deleting the DaemonSet.

  $ kubectl delete daemonset -l app=alluxio

Then kill all running Alluxio master Pods by killing each StatefulSet and each Service with label app=alluxio.

  $ kubectl delete service -l app=alluxio
  $ kubectl delete statefulset -l app=alluxio

Make sure all the Pods have been terminated before you move on to the next step.

Step 3: Format journal and Alluxio storage if necessary

Check the Alluxio upgrade guide page for whether the Alluxio master journal has to be formatted. If no format is needed, you are ready to skip the rest of this section and move on to restart all Alluxio master and worker Pods.

You can follow formatting journal with kubectl to format the Alluxio journals.

If you are running Alluxio workers with tiered storage, and you have Persistent Volumes configured for Alluxio, the storage should be cleaned up too. You should delete and recreate the Persistent Volumes.

Once all the journals and Alluxio storage have been formatted, you are ready to restart the Alluxio master and worker Pods.

Step 4: Restart Alluxio master and worker Pods

Now that the Alluxio master and worker containers all reference your desired version, restart the Pods for the change to take effect.

Recreate the Alluxio master and worker Pods from the YAML files.

  $ kubectl create -f ./master/
  $ kubectl create -f ./worker/

Step 5: Verify the Alluxio master and worker Pods are back up

You should verify the Alluxio Pods are back up in Running status.

  # You should see all Alluxio master and worker Pods
  $ kubectl get pods

You can do more comprehensive verification following Verify Alluxio.


Access the Web UI

The Alluxio UI can be accessed from outside the kubernetes cluster using port forwarding.

  $ kubectl port-forward alluxio-master-$i 19999:19999

Note: i=0 for the first master Pod. When running multiple masters, forward the port for each master. Only the primary master serves the Web UI.

Verify

Once ready, access the Alluxio CLI from the master Pod and run basic I/O tests.

  $ kubectl exec -ti alluxio-master-0 /bin/bash

From the master Pod, execute the following:

  $ alluxio runTests

(Optional) If using persistent volumes for the Alluxio master, the status of the volume(s) should change to Bound, and the status of the volume claims should also be Bound. You can validate the status as below:

  $ kubectl get pv
  $ kubectl get pvc

Enable remote logging

Alluxio supports a centralized log server that collects logs from all Alluxio processes. You can find the details in Remote logging. This can also be enabled on Kubernetes, so that all Alluxio Pods send logs to the log server.

Step 1: Configure the log server

By default, the Alluxio remote log server is not started. You can enable the log server by configuring the following properties:

  logserver:
    enabled: true

If you are just testing and it is okay to discard logs, you can use an emptyDir to store the logs in the log server.

  logserver:
    enabled: true
    # volumeType controls the type of log volume.
    # It can be "persistentVolumeClaim" or "hostPath" or "emptyDir"
    volumeType: emptyDir
    # Attributes to use when the log volume is emptyDir
    medium: ""
    size: 4Gi

For a production environment, you should always persist the logs with a Persistent Volume.

  logserver:
    enabled: true
    # volumeType controls the type of log volume.
    # It can be "persistentVolumeClaim" or "hostPath" or "emptyDir"
    volumeType: persistentVolumeClaim
    # Attributes to use if the log volume is PVC
    pvcName: alluxio-logserver-logs
    accessModes:
      - ReadWriteOnce
    storageClass: standard
    selector:
      matchLabels:
        role: alluxio-logserver
        # If you need, you can specify more selectors like below to provide better separation
        # app: alluxio
        # chart: alluxio-<chart version>
        # release: alluxio
        # heritage: Helm
        # dc: data-center-1
        # region: us-east
    # If you are dynamically provisioning PVs, the selector on the PVC should be empty.
    # Ref: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#class-1
    # Example:
    # selector: {}

Step 2: Helm install with the updated configuration

When you enable the remote log server, it will be managed by a Kubernetes Deployment. If you specify the volume type to be persistentVolumeClaim, a PVC will be created and mounted; you will need to provision a PV for the PVC. A Service is then created for the Deployment, to which all other Alluxio Pods send their logs.
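As an example, a PersistentVolume that could back the log server PVC might look like the following; the hostPath location is an assumption and should be replaced with storage appropriate for your cluster:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: alluxio-logserver-logs
    labels:
      role: alluxio-logserver          # matches the selector in the PVC configuration above
  spec:
    storageClassName: standard
    capacity:
      storage: 4Gi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: /tmp/alluxio-logserver-logs   # example path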

Step 1: Configure log server location with environment variables

Add ALLUXIO_LOGSERVER_HOSTNAME and ALLUXIO_LOGSERVER_PORT properties to the configmap.

  apiVersion: v1
  kind: ConfigMap
  metadata:
    ..omitted
  data:
    ..omitted
    ALLUXIO_LOGSERVER_HOSTNAME: alluxio-logserver
    ALLUXIO_LOGSERVER_PORT: "45600"

Note: The value for ALLUXIO_LOGSERVER_PORT must be a string or kubectl will fail to read it.

Step 2: Configure and start log server

In the sample YAML directory (e.g. singleMaster-localJournal), the logserver/ directory contains all resources for the log server, including a Deployment, a Service and a PVC if needed.

First you can prepare the YAML file and configure what volume to use for the Deployment.

  $ cp logserver/alluxio-logserver-deployment.yaml.template logserver/alluxio-logserver-deployment.yaml

If you are testing and it is okay to discard logs, you can use an emptyDir for the volume like below:

  volumes:
  - name: alluxio-logs
    emptyDir:
      medium: ""
      sizeLimit: "4Gi"

And the volume should be mounted to the log server container at /opt/alluxio/logs.

  volumeMounts:
  - name: alluxio-logs
    mountPath: /opt/alluxio/logs

For a production environment, you should always persist the logs with a Persistent Volume.

  volumes:
  - name: alluxio-logs
    persistentVolumeClaim:
      claimName: "alluxio-logserver-logs"

There is also a YAML template for PVC alluxio-logserver-logs.

  $ cp logserver/alluxio-logserver-pvc.yaml.template logserver/alluxio-logserver-pvc.yaml

You can further configure the resource and selector for the PVC, according to your environment.

  apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: alluxio-logserver-logs
    ..omitted
  spec:
    volumeMode: Filesystem
    resources:
      requests:
        storage: 4Gi
    storageClassName: standard
    accessModes:
      - ReadWriteOnce
    # If you are using dynamic provisioning, leave the selector empty.
    selector:
      matchLabels:
        role: alluxio-logserver

Create the PVC when you are ready.

  $ kubectl create -f alluxio-logserver-pvc.yaml

After you configure the volume in the Deployment, you can go ahead to create it.

  $ kubectl create -f alluxio-logserver-deployment.yaml

There is also a Service associated with the Deployment.

  $ cp logserver/alluxio-logserver-service.yaml.template logserver/alluxio-logserver-service.yaml
  $ kubectl create -f logserver/alluxio-logserver-service.yaml

Step 3: Restart other Alluxio pods

You need to restart your other Alluxio pods (masters, workers, FUSE etc) so they capture the updated environment variables and send logs to the remote log server.
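If you deployed with kubectl, one way to restart the Pods so they pick up the new environment variables is to delete and recreate them from the same specs (include the FUSE DaemonSet if deployed):

  $ kubectl delete -f ./worker/
  $ kubectl delete -f ./master/
  $ kubectl create -f ./master/
  $ kubectl create -f ./worker/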


Verify log server

You can go into the log server pod and verify the logs exist.

  $ kubectl exec -it <logserver-pod-name> bash
  # In the logserver pod
  bash-4.4$ pwd
  /opt/alluxio
  # You should see logs collected from other Alluxio pods
  bash-4.4$ ls -al logs
  total 16
  drwxrwsr-x 4 1001    bin     4096 Jan 12 03:14 .
  drwxr-xr-x 1 alluxio alluxio   18 Jan 12 02:38 ..
  drwxr-sr-x 2 alluxio bin     4096 Jan 12 03:14 job_master
  -rw-r--r-- 1 alluxio bin      600 Jan 12 03:14 logserver.log
  drwxr-sr-x 2 alluxio bin     4096 Jan 12 03:14 master
  drwxr-sr-x 2 alluxio bin     4096 Jan 12 03:14 worker
  drwxr-sr-x 2 alluxio bin     4096 Jan 12 03:14 job_worker

Advanced Setup

POSIX API

Once Alluxio is deployed on Kubernetes, there are multiple ways in which a client application can connect to it. For applications using the POSIX API, application containers can simply mount the Alluxio FileSystem.

In order to use the POSIX API, first deploy the Alluxio FUSE daemon.

You can deploy the FUSE daemon by configuring the following properties:

  fuse:
    enabled: true
    clientEnabled: true

By default, the mountPath is /mnt/alluxio-fuse. If you’d like to configure the mountPath of the fuse, please update the following properties:

  fuse:
    enabled: true
    clientEnabled: true
    mountPath: /mnt/alluxio-fuse

Then follow the steps to install Alluxio with helm here.

If Alluxio has already been deployed with helm and you now want to enable FUSE, use helm upgrade to add the FUSE daemons.

  $ helm upgrade alluxio -f config.yaml \
    --set fuse.enabled=true \
    --set fuse.clientEnabled=true \
    alluxio-charts/alluxio

If deploying with kubectl instead, copy the FUSE template and create the DaemonSet:

  $ cp alluxio-fuse.yaml.template alluxio-fuse.yaml
  $ kubectl create -f alluxio-fuse.yaml

Note:

  • The container running the Alluxio FUSE daemon must have the securityContext.privileged=true with SYS_ADMIN capabilities. Application containers that require Alluxio access do not need this privilege.
  • A different Docker image alluxio/alluxio-fuse based on ubuntu instead of alpine is needed to run the FUSE daemon. Application containers can run on any Docker image.

Verify that a container can simply mount the Alluxio FileSystem without any custom binaries or capabilities using a hostPath mount of location /alluxio-fuse:

  $ cp alluxio-fuse-client.yaml.template alluxio-fuse-client.yaml
  $ kubectl create -f alluxio-fuse-client.yaml

If using the template, Alluxio is mounted at /alluxio-fuse and can be accessed via the POSIX-API across multiple containers.


Enable Short-circuit Access

Short-circuit access enables clients to perform read and write operations directly against the worker, bypassing the network interface. For performance-critical applications it is recommended to enable short-circuit operations against Alluxio, because doing so can increase a client's read and write throughput when it is co-located with an Alluxio worker.

This feature is enabled by default (see the next section to disable it), but it requires extra configuration to work properly in Kubernetes environments.

There are two modes for using short-circuit.

Option 1: Use local mode

In this mode, the Alluxio client and local Alluxio worker recognize each other if the client hostname matches the worker hostname. This is called Hostname Introspection. In this mode, the Alluxio client and local Alluxio worker share the tiered storage of Alluxio worker.

You can use local policy by setting the properties as below:

  shortCircuit:
    enabled: true
    policy: local

In your alluxio-configmap.yaml you should add the following properties to ALLUXIO_WORKER_JAVA_OPTS:

  -Dalluxio.user.short.circuit.enabled=true \
  -Dalluxio.worker.data.server.domain.socket.as.uuid=false

Also you should remove the property -Dalluxio.worker.data.server.domain.socket.address.


Option 2: Use uuid (default)

This is the default policy used for short-circuit in Kubernetes.

If the client or worker container is using virtual networking, their hostnames may not match. In such a scenario, set the following property to use filesystem inspection to enable short-circuit operations and make sure the client container mounts the directory specified as the domain socket path. Short-circuit writes are then enabled if the worker UUID is located on the client filesystem.

Domain Socket Path. The domain socket is a volume which should be mounted on:

  • All Alluxio workers
  • All application containers which intend to read/write through Alluxio

This domain socket volume can be either a PersistentVolumeClaim or a hostPath Volume.

Use PersistentVolumeClaim. By default, this domain socket volume is a PersistentVolumeClaim. You need to provision a PersistentVolume for this PersistentVolumeClaim, and the PersistentVolume should be of type local or hostPath.
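For example, a minimal sketch of a hostPath PersistentVolume that could satisfy this PVC; the path and capacity are examples only:

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: alluxio-worker-domain-socket
  spec:
    storageClassName: standard
    capacity:
      storage: 1Mi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: /tmp/alluxio-domain       # example path on the node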

You can use uuid policy by setting the properties as below:

  # These are the default configurations
  shortCircuit:
    enabled: true
    policy: uuid
    size: 1Mi
    # volumeType controls the type of shortCircuit volume.
    # It can be "persistentVolumeClaim" or "hostPath"
    volumeType: persistentVolumeClaim
    # Attributes to use if the domain socket volume is PVC
    pvcName: alluxio-worker-domain-socket
    accessModes:
      - ReadWriteOnce
    storageClass: standard

The field shortCircuit.pvcName defines the name of the PersistentVolumeClaim for domain socket. This PVC will be created as part of helm install.

You should verify the following properties in ALLUXIO_WORKER_JAVA_OPTS. They are set to these values by default:

  -Dalluxio.worker.data.server.domain.socket.address=/opt/domain -Dalluxio.worker.data.server.domain.socket.as.uuid=true

Also, make sure the worker Pods have the domain socket defined in volumes, and that all relevant containers have the domain socket volume mounted. The domain socket volume is defined as below by default:

  volumes:
  - name: alluxio-domain
    persistentVolumeClaim:
      claimName: "alluxio-worker-domain-socket"

Note: Compute application containers MUST mount the domain socket volume to the same path (/opt/domain) as configured for the Alluxio workers.
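As an illustration, an application Pod would mount the same volume roughly as follows; the container name and image are placeholders:

  containers:
  - name: my-application              # placeholder application container
    image: <application-image>
    volumeMounts:
    - name: alluxio-domain
      mountPath: /opt/domain
  volumes:
  - name: alluxio-domain
    persistentVolumeClaim:
      claimName: "alluxio-worker-domain-socket"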

The PersistentVolumeClaim is defined in worker/alluxio-worker-pvc.yaml.template.


Use hostPath Volume. You can also configure the workers to use a hostPath volume for the domain socket directly. This is done by changing the shortCircuit.volumeType field to hostPath. Note that you also need to define the path to use for the hostPath volume.

  shortCircuit:
    enabled: true
    policy: uuid
    size: 1Mi
    # volumeType controls the type of shortCircuit volume.
    # It can be "persistentVolumeClaim" or "hostPath"
    volumeType: hostPath
    # Attributes to use if the domain socket volume is hostPath
    hostPath: "/tmp/alluxio-domain" # The hostPath directory to use

You should verify the properties in ALLUXIO_WORKER_JAVA_OPTS in the same way as using PersistentVolumeClaim.

Also, make sure the worker Pods have the domain socket defined in volumes, and that all relevant containers have the domain socket volume mounted. The domain socket volume is defined as below by default:

  volumes:
  - name: alluxio-domain
    hostPath:
      path: /tmp/alluxio-domain
      type: DirectoryOrCreate

Note: Compute application containers MUST mount the domain socket volume to the same path (/opt/domain) as configured for the Alluxio workers.


Verify Short-circuit Operations

To verify short-circuit reads and writes, monitor the metrics displayed under:

  1. the metrics tab of the web UI as Domain Socket Alluxio Read and Domain Socket Alluxio Write
  2. or, the metrics json as cluster.BytesReadDomain and cluster.BytesWrittenDomain
  3. or, the fsadmin metrics CLI as Short-circuit Read (Domain Socket) and Alluxio Write (Domain Socket)

Disable Short-Circuit Operations

To disable short-circuit operations, the operation depends on how you deploy Alluxio.

Note: As mentioned, disabling short-circuit access for Alluxio workers will result in worse I/O throughput

You can disable short circuit by setting the properties as below:

  shortCircuit:
    enabled: false

You should set the property alluxio.user.short.circuit.enabled to false in your ALLUXIO_WORKER_JAVA_OPTS.

  -Dalluxio.user.short.circuit.enabled=false

You should also manually remove the volume alluxio-domain from volumes of the Pod definition and volumeMounts of each container if existing.


Troubleshooting

Worker Host Unreachable

Alluxio workers use host networking with the physical host IP as the hostname. Check the cluster firewall if an error such as the following is encountered:

  Caused by: io.netty.channel.AbstractChannel$AnnotatedConnectException: finishConnect(..) failed: Host is unreachable: <host>/<IP>:29999

  • Check that <host> matches the physical host address and is not a virtual container hostname. Ping from a remote client to check the address is resolvable.

      $ ping <host>

  • Verify that a client can connect to the workers on the ports specified in the worker deployment specification. The default ports are [29998, 29999, 29996, 30001, 30002, 30003]. Check access to the given port from a remote client using a network utility such as ncat:

      $ nc -zv <IP> 29999

Permission Denied

From Alluxio v2.1 on, Alluxio Docker containers except Fuse will run as non-root user alluxio with UID 1000 and GID 1000 by default. Kubernetes hostPath volumes are only writable by root so you need to update the permission accordingly.
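For example, assuming a hostPath journal volume at /tmp/alluxio-journal-0 as in the earlier example, you could adjust the ownership on the node like this:

  # Run on the Kubernetes node where the hostPath volume lives
  $ sudo chown -R 1000:1000 /tmp/alluxio-journal-0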

Enable Debug Logging

To change the log level for Alluxio servers (master and workers), use the CLI command logLevel as follows:

Access the Alluxio CLI from the master Pod.

  $ kubectl exec -ti alluxio-master-0 /bin/bash

From the master Pod, execute the following:

  $ alluxio logLevel --level DEBUG --logName alluxio

Accessing Logs

The Alluxio master and job master run as separate containers of the master Pod. Similarly, the Alluxio worker and job worker run as separate containers of a worker Pod. Logs can be accessed for the individual containers as follows.

Master:

  $ kubectl logs -f alluxio-master-0 -c alluxio-master

Worker:

  $ kubectl logs -f alluxio-worker-<id> -c alluxio-worker

Job Master:

  $ kubectl logs -f alluxio-master-0 -c alluxio-job-master

Job Worker:

  $ kubectl logs -f alluxio-worker-<id> -c alluxio-job-worker

POSIX API

In order for an application container to mount the hostPath volume, the node running the container must have the Alluxio FUSE daemon running. The default spec alluxio-fuse.yaml runs as a DaemonSet, launching an Alluxio FUSE daemon on each node of the cluster.

If there are issues accessing Alluxio using the POSIX API:

  1. Identify the node the application container ran on using the command kubectl describe pods or the dashboard.
  2. Use the command kubectl describe nodes <node> to identify the alluxio-fuse Pod running on that node.
  3. Tail logs for the identified Pod to view any errors encountered: kubectl logs -f alluxio-fuse-<id>.

Filename too long

Alluxio workers create a domain socket used for short-circuit access by default. On Mac OS X, Alluxio workers may fail to start if the location for this domain socket is a path which is longer than what the filesystem accepts.

  2020-07-27 21:39:06,030 ERROR GrpcDataServer - Alluxio worker gRPC server failed to start on /opt/domain/1d6d7c85-dee0-4ac5-bbd1-86eb496a2a50
  java.io.IOException: Failed to bind
      at io.grpc.netty.NettyServer.start(NettyServer.java:252)
      at io.grpc.internal.ServerImpl.start(ServerImpl.java:184)
      at io.grpc.internal.ServerImpl.start(ServerImpl.java:90)
      at alluxio.grpc.GrpcServer.lambda$start$0(GrpcServer.java:77)
      at alluxio.retry.RetryUtils.retry(RetryUtils.java:39)
      at alluxio.grpc.GrpcServer.start(GrpcServer.java:77)
      at alluxio.worker.grpc.GrpcDataServer.<init>(GrpcDataServer.java:107)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
      at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
      at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
      at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
      at alluxio.util.CommonUtils.createNewClassInstance(CommonUtils.java:273)
      at alluxio.worker.DataServer$Factory.create(DataServer.java:47)
      at alluxio.worker.AlluxioWorkerProcess.<init>(AlluxioWorkerProcess.java:162)
      at alluxio.worker.WorkerProcess$Factory.create(WorkerProcess.java:46)
      at alluxio.worker.WorkerProcess$Factory.create(WorkerProcess.java:38)
      at alluxio.worker.AlluxioWorker.main(AlluxioWorker.java:72)
  Caused by: io.netty.channel.unix.Errors$NativeIoException: bind(..) failed: Filename too long

If this is the case, set the following properties to limit the path length:

  • alluxio.worker.data.server.domain.socket.as.uuid=false
  • alluxio.worker.data.server.domain.socket.address=/opt/domain/d
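In a kubectl-based deployment, these would be appended to ALLUXIO_WORKER_JAVA_OPTS in the ConfigMap, roughly as follows:

  -Dalluxio.worker.data.server.domain.socket.as.uuid=false \
  -Dalluxio.worker.data.server.domain.socket.address=/opt/domain/d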

Note: You may see performance degradation due to lack of node locality.