List of Configuration Properties


All Alluxio configuration settings fall into one of six categories: Common (shared by Master and Worker), Master specific, Worker specific, User specific, Cluster specific (used for running Alluxio with cluster managers like Mesos and YARN), and Security specific (shared by Master, Worker, and User).

Common Configuration

The common configuration contains constants shared by different components.
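
In practice, these properties are set cluster-wide in conf/alluxio-site.properties. As a minimal sketch, using only properties from the table below with illustrative values rather than recommendations:

```properties
# conf/alluxio-site.properties -- values are illustrative, not recommendations
alluxio.home=/opt/alluxio
alluxio.tmp.dirs=/tmp
alluxio.underfs.listing.length=1000
alluxio.web.ui.enabled=true
```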

| Property Name | Default | Description |
|---|---|---|
| `alluxio.conf.dir` | `${alluxio.home}/conf` | The directory containing files used to configure Alluxio. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending `-Dalluxio.conf.dir=<NEW_VALUE>` to `$ALLUXIO_JAVA_OPTS`). Setting it in alluxio-site.properties will not work. |
| `alluxio.debug` | `false` | Set to true to enable debug mode, which adds additional logging and information in the Web UI. |
| `alluxio.extensions.dir` | `${alluxio.home}/extensions` | The directory containing Alluxio extensions. |
| `alluxio.fuse.cached.paths.max` | `500` | Maximum number of Alluxio paths to cache for FUSE conversion. |
| `alluxio.fuse.debug.enabled` | `false` | Run FUSE in debug mode, and have the FUSE process log every FS request. |
| `alluxio.fuse.fs.name` | `alluxio-fuse` | The FUSE file system name. |
| `alluxio.fuse.logging.threshold` | `10s` | Log a FUSE API call when it takes more time than this threshold. |
| `alluxio.fuse.maxwrite.bytes` | `128KB` | Maximum granularity of write operations, capped by the kernel to 128KB max (as of Linux 3.16.0). |
| `alluxio.fuse.user.group.translation.enabled` | `false` | Whether to translate Alluxio users and groups into Unix users and groups when exposing Alluxio files through the FUSE API. When this property is set to false, the user and group for all FUSE files will match the user who started the alluxio-fuse process. |
| `alluxio.home` | `/opt/alluxio` | Alluxio installation directory. |
| `alluxio.job.master.bind.host` | `0.0.0.0` | The host that the Alluxio job master will bind to. |
| `alluxio.job.master.client.threads` | `1024` | The number of threads the Alluxio master uses to make requests to the job master. |
| `alluxio.job.master.embedded.journal.addresses` | | A comma-separated list of journal addresses for all job masters in the cluster. The format is `hostname1:port1,hostname2:port2,...`. Defaults to the journal addresses set for the Alluxio masters (`alluxio.master.embedded.journal.addresses`), but with the job master embedded journal port. |
| `alluxio.job.master.embedded.journal.port` | `20003` | The port to use for embedded journal communication with other job masters. |
| `alluxio.job.master.finished.job.purge.count` | `-1` | The maximum number of jobs to purge at any single time when the job master reaches its maximum capacity. It is recommended to set this value when setting the job master capacity to a large (> 10M) value. The default of -1 denotes an unlimited value. |
| `alluxio.job.master.finished.job.retention.time` | `300sec` | The length of time the Alluxio Job Master should save information about completed jobs before they are discarded. |
| `alluxio.job.master.hostname` | `${alluxio.master.hostname}` | The hostname of the Alluxio job master. |
| `alluxio.job.master.job.capacity` | `100000` | The total possible number of available job statuses in the job master. This value includes running jobs and finished jobs that have completed within `alluxio.job.master.finished.job.retention.time`. |
| `alluxio.job.master.lost.worker.interval` | `1sec` | The time interval the job master waits between checks for lost workers. |
| `alluxio.job.master.rpc.addresses` | | The list of RPC addresses to use for the job service configured in non-Zookeeper HA mode. If this property is not explicitly defined, it will first fall back to `alluxio.master.rpc.addresses`, replacing those address ports with the port defined by `alluxio.job.master.rpc.port`. Otherwise the addresses are inherited from `alluxio.job.master.embedded.journal.addresses`, using the port defined in `alluxio.job.master.rpc.port`. |
| `alluxio.job.master.rpc.port` | `20001` | The port for the Alluxio job master's RPC service. |
| `alluxio.job.master.web.bind.host` | `0.0.0.0` | The host that the job master web server binds to. |
| `alluxio.job.master.web.hostname` | `${alluxio.job.master.hostname}` | The hostname of the job master web server. |
| `alluxio.job.master.web.port` | `20002` | The port the job master web server uses. |
| `alluxio.job.master.worker.heartbeat.interval` | `1sec` | The amount of time that the Alluxio job worker should wait in between heartbeats to the Job Master. |
| `alluxio.job.master.worker.timeout` | `60sec` | The time period after which the job master will mark a worker as lost without a subsequent heartbeat. |
| `alluxio.job.worker.bind.host` | `0.0.0.0` | The host that the Alluxio job worker will bind to. |
| `alluxio.job.worker.data.port` | `30002` | The port the Alluxio job worker uses to send data. |
| `alluxio.job.worker.hostname` | `${alluxio.worker.hostname}` | The hostname of the Alluxio job worker. |
| `alluxio.job.worker.rpc.port` | `30001` | The port for the Alluxio job worker's RPC service. |
| `alluxio.job.worker.threadpool.size` | `10` | Number of threads in the job worker's thread pool. This may be adjusted to a lower value to alleviate resource saturation (CPU + IO) on job worker nodes. |
| `alluxio.job.worker.throttling` | `false` | Whether the job worker should throttle itself based on whether its resources are saturated. |
| `alluxio.job.worker.web.bind.host` | `0.0.0.0` | The host the job worker web server binds to. |
| `alluxio.job.worker.web.port` | `30003` | The port the Alluxio job worker web server uses. |
| `alluxio.jvm.monitor.info.threshold` | `1sec` | When the JVM monitor thread observes extra sleep time longer than this threshold, it logs at INFO level. |
| `alluxio.jvm.monitor.sleep.interval` | `1sec` | The time for the JVM monitor thread to sleep. |
| `alluxio.jvm.monitor.warn.threshold` | `10sec` | When the JVM monitor thread observes extra sleep time longer than this threshold, it logs at WARN level. |
| `alluxio.locality.compare.node.ip` | `false` | Whether to try to resolve the node IP address for locality checking. |
| `alluxio.locality.node` | | Value to use for determining node locality. |
| `alluxio.locality.order` | `node,rack` | Ordering of locality tiers. |
| `alluxio.locality.rack` | | Value to use for determining rack locality. |
| `alluxio.locality.script` | `alluxio-locality.sh` | A script to determine tiered identity for locality checking. |
| `alluxio.logger.type` | `Console` | The type of logger. |
| `alluxio.logs.dir` | `${alluxio.work.dir}/logs` | The path under the Alluxio home directory to store log files. It has a corresponding environment variable, `$ALLUXIO_LOGS_DIR`. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending `-Dalluxio.logs.dir=<NEW_VALUE>` to `$ALLUXIO_JAVA_OPTS`). Setting it in alluxio-site.properties will not work. |
| `alluxio.logserver.hostname` | | The hostname of the Alluxio logserver. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending `-Dalluxio.logserver.hostname=<NEW_VALUE>` to `$ALLUXIO_JAVA_OPTS`). Setting it in alluxio-site.properties will not work. |
| `alluxio.logserver.logs.dir` | `${alluxio.work.dir}/logs` | Default location for remote log files. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending `-Dalluxio.logserver.logs.dir=<NEW_VALUE>` to `$ALLUXIO_JAVA_OPTS`). Setting it in alluxio-site.properties will not work. |
| `alluxio.logserver.port` | `45600` | Default port of the logserver to receive logs from Alluxio servers. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending `-Dalluxio.logserver.port=<NEW_VALUE>` to `$ALLUXIO_JAVA_OPTS`). Setting it in alluxio-site.properties will not work. |
| `alluxio.logserver.threads.max` | `2048` | The maximum number of threads used by the logserver to service logging requests. |
| `alluxio.logserver.threads.min` | `512` | The minimum number of threads used by the logserver to service logging requests. |
| `alluxio.metrics.conf.file` | `${alluxio.conf.dir}/metrics.properties` | The file path of the metrics system configuration file. By default it is `metrics.properties` in the conf directory. |
| `alluxio.network.connection.auth.timeout` | `30sec` | Maximum time to wait for a connection (gRPC channel) to attempt to receive an authentication response. |
| `alluxio.network.connection.health.check.timeout` | `5sec` | Allowed duration for checking the health of client connections (gRPC channels) before being assigned to a client. If a connection does not become active within the configured time, it will be shut down and a new connection will be created for the client. |
| `alluxio.network.connection.server.shutdown.timeout` | `60sec` | Maximum time to wait for the gRPC server to stop on shutdown. |
| `alluxio.network.connection.shutdown.graceful.timeout` | `45sec` | Maximum time to wait for connections (gRPC channels) to stop on shutdown. |
| `alluxio.network.connection.shutdown.timeout` | `15sec` | Maximum time to wait for connections (gRPC channels) to stop after a graceful shutdown attempt. |
| `alluxio.network.host.resolution.timeout` | `5sec` | During startup of the Master and Worker processes, Alluxio needs to ensure that they are listening on externally resolvable and reachable host names. To do this, Alluxio will automatically attempt to select an appropriate host name if one was not explicitly specified. This represents the maximum amount of time spent waiting to determine if a candidate host name is resolvable over the network. |
| `alluxio.proxy.s3.deletetype` | `ALLUXIO_AND_UFS` | Delete type when deleting buckets and objects through the S3 API. Valid options are ALLUXIO_AND_UFS (delete both in Alluxio and UFS) and ALLUXIO_ONLY (delete only the buckets or objects in the Alluxio namespace). |
| `alluxio.proxy.s3.multipart.temporary.dir.suffix` | `_s3_multipart_tmp` | Suffix for the directory which holds parts during a multipart upload. |
| `alluxio.proxy.s3.writetype` | `CACHE_THROUGH` | Write type when creating buckets and objects through the S3 API. Valid options are MUST_CACHE (write will only go to Alluxio and must be stored in Alluxio), CACHE_THROUGH (try to cache, write to UnderFS synchronously), ASYNC_THROUGH (try to cache, write to UnderFS asynchronously), and THROUGH (no cache, write to UnderFS synchronously). |
| `alluxio.proxy.stream.cache.timeout` | `1hour` | The timeout for input and output stream cache eviction in the proxy. |
| `alluxio.proxy.web.bind.host` | `0.0.0.0` | The hostname that the Alluxio proxy's web server runs on. |
| `alluxio.proxy.web.hostname` | | The hostname the Alluxio proxy's web UI binds to. |
| `alluxio.proxy.web.port` | `39999` | The port the Alluxio proxy's web UI runs on. |
| `alluxio.secondary.master.metastore.dir` | `${alluxio.work.dir}/secondary-metastore` | The secondary master metastore work directory. Only some metastores need disk. |
| `alluxio.site.conf.dir` | `${alluxio.conf.dir}/,${user.home}/.alluxio/,/etc/alluxio/` | Comma-separated search path for alluxio-site.properties. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending `-Dalluxio.site.conf.dir=<NEW_VALUE>` to `$ALLUXIO_JAVA_OPTS`). Setting it in alluxio-site.properties will not work. |
| `alluxio.table.catalog.path` | `/catalog` | The Alluxio file path for the table catalog metadata. |
| `alluxio.table.catalog.udb.sync.timeout` | `1h` | The timeout period for a db sync to finish in the catalog. If a sync takes longer than this timeout, the sync will be terminated. |
| `alluxio.table.enabled` | `true` | (Experimental) Enables the table service. |
| `alluxio.table.transform.manager.job.history.retention.time` | `300sec` | The length of time the Alluxio Table Master should keep information about finished transformation jobs before they are discarded. |
| `alluxio.table.transform.manager.job.monitor.interval` | `10000` | The job monitor is a heartbeat thread in the transform manager; this is the time interval in milliseconds at which the job monitor heartbeat runs to check the status of transformation jobs and update table and partition locations after transformation. |
| `alluxio.test.deprecated.key` | | N/A |
| `alluxio.tmp.dirs` | `/tmp` | The path(s) to store Alluxio temporary files; use commas as delimiters. If multiple paths are specified, one will be selected at random per temporary file. Currently, only files to be uploaded to object stores are stored in these paths. |
| `alluxio.underfs.allow.set.owner.failure` | `false` | Whether to allow setting the owner in the UFS to fail. When set to true, it is possible for file or directory owners to diverge between Alluxio and the UFS. |
| `alluxio.underfs.cleanup.enabled` | `false` | Whether or not to clean up under file storage periodically. Some UFS operations may not be completed and cleaned up successfully in normal ways, leaving intermediate data that needs periodic cleanup. If enabled, all mount points will be cleaned up when a leader master starts or when the cleanup interval is reached. This should be used sparingly. |
| `alluxio.underfs.cleanup.interval` | `1day` | The interval for periodically cleaning all mounted under file storages. |
| `alluxio.underfs.eventual.consistency.retry.base.sleep` | `50ms` | To handle eventually consistent storage semantics for certain under storages, Alluxio will perform retries when under storage metadata doesn't match Alluxio's expectations. These retries use exponential backoff. This property determines the base time for the exponential backoff. |
| `alluxio.underfs.eventual.consistency.retry.max.num` | `20` | To handle eventually consistent storage semantics for certain under storages, Alluxio will perform retries when under storage metadata doesn't match Alluxio's expectations. These retries use exponential backoff. This property determines the maximum number of retries. |
| `alluxio.underfs.eventual.consistency.retry.max.sleep` | `30sec` | To handle eventually consistent storage semantics for certain under storages, Alluxio will perform retries when under storage metadata doesn't match Alluxio's expectations. These retries use exponential backoff. This property determines the maximum wait time in the backoff. |
| `alluxio.underfs.gcs.default.mode` | `0700` | Mode (in octal notation) for GCS objects if the mode cannot be discovered. |
| `alluxio.underfs.gcs.directory.suffix` | `/` | Directories are represented in GCS as zero-byte objects named with the specified suffix. |
| `alluxio.underfs.gcs.owner.id.to.username.mapping` | | Optionally, specify a preset GCS owner ID to Alluxio username static mapping in the format "id1=user1;id2=user2". The Google Cloud Storage IDs can be found at the console address https://console.cloud.google.com/storage/settings . Please use the "Owners" one. |
| `alluxio.underfs.hdfs.configuration` | `${alluxio.conf.dir}/core-site.xml:${alluxio.conf.dir}/hdfs-site.xml` | Location of the HDFS configuration file to overwrite the default HDFS client configuration. Note that these files must be available on every node. |
| `alluxio.underfs.hdfs.impl` | `org.apache.hadoop.hdfs.DistributedFileSystem` | The implementation class of HDFS as the under storage system. |
| `alluxio.underfs.hdfs.prefixes` | `hdfs://,glusterfs:///` | Optionally, specify which prefixes should run through the HDFS implementation of UnderFileSystem. The delimiter is any whitespace and/or ','. |
| `alluxio.underfs.hdfs.remote` | `true` | Boolean indicating whether or not the under storage worker nodes are remote with respect to Alluxio worker nodes. If set to true, Alluxio will not attempt to discover locality information from the under storage, because locality is impossible. This will improve performance. The default value is true. |
| `alluxio.underfs.kodo.connect.timeout` | `50sec` | The connect timeout of Kodo. |
| `alluxio.underfs.kodo.downloadhost` | | The download domain of the Kodo bucket. |
| `alluxio.underfs.kodo.endpoint` | | The endpoint of the Kodo bucket. |
| `alluxio.underfs.kodo.requests.max` | `64` | The maximum number of Kodo connections. |
| `alluxio.underfs.listing.length` | `1000` | The maximum number of directory entries to list in a single query to the under file system. If the total number of entries is greater than the specified length, multiple queries will be issued. |
| `alluxio.underfs.object.store.breadcrumbs.enabled` | `true` | Set this to false to prevent Alluxio from creating zero-byte objects during read or list operations on an object store UFS. Leaving this on enables more efficient listing of prefixes. |
| `alluxio.underfs.object.store.mount.shared.publicly` | `false` | Whether or not to share an object storage under storage system mount point with all Alluxio users. Note that this configuration has no effect on HDFS nor local UFS. |
| `alluxio.underfs.object.store.multi.range.chunk.size` | `${alluxio.user.block.size.bytes.default}` | Default chunk size for ranged reads from multi-range object input streams. |
| `alluxio.underfs.object.store.service.threads` | `20` | The number of threads in the executor pool for parallel object store UFS operations, such as directory renames and deletes. |
| `alluxio.underfs.oss.connection.max` | `1024` | The maximum number of OSS connections. |
| `alluxio.underfs.oss.connection.timeout` | `50sec` | The timeout when connecting to OSS. |
| `alluxio.underfs.oss.connection.ttl` | `-1` | The TTL of OSS connections in ms. |
| `alluxio.underfs.oss.socket.timeout` | `50sec` | The timeout of the OSS socket. |
| `alluxio.underfs.s3.admin.threads.max` | `20` | The maximum number of threads to use for metadata operations when communicating with S3. These operations may be fairly concurrent and frequent but should not take much time to process. |
| `alluxio.underfs.s3.default.mode` | `0700` | Mode (in octal notation) for S3 objects if the mode cannot be discovered. |
| `alluxio.underfs.s3.directory.suffix` | `/` | Directories are represented in S3 as zero-byte objects named with the specified suffix. |
| `alluxio.underfs.s3.disable.dns.buckets` | `false` | Optionally, specify this to make all S3 requests path style. |
| `alluxio.underfs.s3.endpoint` | | Optionally, to reduce data latency or to access resources in different AWS regions, specify a regional endpoint to make AWS requests. An endpoint is a URL that is the entry point for a web service. For example, s3.cn-north-1.amazonaws.com.cn is an entry point for the Amazon S3 service in the Beijing region. |
| `alluxio.underfs.s3.inherit.acl` | `true` | Set this property to false to disable inheriting bucket ACLs on objects. Note that the translation from bucket ACLs to Alluxio user permissions is best effort, as some S3-like storage services do not implement ACLs fully compatible with S3. |
| `alluxio.underfs.s3.intermediate.upload.clean.age` | `3day` | Streaming uploads may not have been completed/aborted correctly and need periodic UFS cleanup. If UFS cleanup is enabled, intermediate multipart uploads in all non-readonly S3 mount points older than this age will be cleaned. This may impact other ongoing upload operations, so a large clean age is encouraged. |
| `alluxio.underfs.s3.list.objects.v1` | `false` | Whether to use version 1 of the GET Bucket (List Objects) API. |
| `alluxio.underfs.s3.max.error.retry` | | The maximum number of retry attempts for failed retryable requests. Setting this property will override the AWS SDK default. |
| `alluxio.underfs.s3.owner.id.to.username.mapping` | | Optionally, specify a preset S3 canonical ID to Alluxio username static mapping, in the format "id1=user1;id2=user2". The AWS S3 canonical ID can be found at the console address https://console.aws.amazon.com/iam/home?#security_credential . Please expand the "Account Identifiers" tab and refer to "Canonical User ID". An unspecified owner ID will map to a default empty username. |
| `alluxio.underfs.s3.proxy.host` | | Optionally, specify a proxy host for communicating with S3. |
| `alluxio.underfs.s3.proxy.port` | | Optionally, specify a proxy port for communicating with S3. |
| `alluxio.underfs.s3.request.timeout` | `1min` | The timeout for a single request to S3; infinite if set to 0. Setting this property to a non-zero value can improve performance by avoiding the long tail of requests to S3. For very slow connections to S3, consider increasing this value or setting it to 0. |
| `alluxio.underfs.s3.secure.http.enabled` | `false` | Whether or not to use the HTTPS protocol when communicating with S3. |
| `alluxio.underfs.s3.server.side.encryption.enabled` | `false` | Whether or not to encrypt data stored in S3. |
| `alluxio.underfs.s3.signer.algorithm` | | The signature algorithm which should be used to sign requests to the S3 service. This is optional, and if not set, the client will automatically determine it. For interacting with an S3 endpoint which only supports v2 signatures, set this to "S3SignerType". |
| `alluxio.underfs.s3.socket.timeout` | `50sec` | Length of the socket timeout when communicating with S3. |
| `alluxio.underfs.s3.streaming.upload.enabled` | `false` | (Experimental) If true, use streaming upload to write to S3. |
| `alluxio.underfs.s3.streaming.upload.partition.size` | `64MB` | Maximum allowable size of a single buffer file when using S3A streaming upload. When a buffer file reaches the partition size, it will be uploaded and the upcoming data will be written to other buffer files. If the partition size is too small, S3A upload speed might be affected. |
| `alluxio.underfs.s3.threads.max` | `40` | The maximum number of threads to use for communicating with S3 and the maximum number of concurrent connections to S3. This includes both threads for data upload and metadata operations. This number should be at least as large as the max admin threads plus max upload threads. |
| `alluxio.underfs.s3.upload.threads.max` | `20` | For an Alluxio worker, this is the maximum number of threads to use for uploading data to S3 for multipart uploads. These operations can be fairly expensive, so multiple threads are encouraged. However, this also splits the bandwidth between threads, meaning the overall latency for completing an upload will be higher with more threads. For the Alluxio master, this is the maximum number of threads used for the rename (copy) operation. It is recommended that this value be greater than or equal to `alluxio.underfs.object.store.service.threads`. |
| `alluxio.underfs.web.connnection.timeout` | `60s` | Default timeout for an HTTP connection. |
| `alluxio.underfs.web.header.last.modified` | `EEE, dd MMM yyyy HH:mm:ss zzz` | Date format of the last-modified header in an HTTP response. |
| `alluxio.underfs.web.parent.names` | `Parent Directory,..,../` | The text of the HTTP link for the parent directory. |
| `alluxio.underfs.web.titles` | `Index of,Directory listing for` | The title of the content for an HTTP URL. |
| `alluxio.web.cors.enabled` | `false` | Set to true to enable Cross-Origin Resource Sharing for RESTful API endpoints. |
| `alluxio.web.file.info.enabled` | `true` | Whether detailed file information is enabled for the web UI. |
| `alluxio.web.refresh.interval` | `15s` | The amount of time to wait before refreshing the Web UI if it is set to auto refresh. |
| `alluxio.web.threads` | `1` | How many threads to use for serving the Alluxio web UI. |
| `alluxio.web.ui.enabled` | `true` | Whether the master/worker will have the Web UI enabled. If set to false, the master/worker will not have a Web UI page, but the RESTful endpoints and metrics will still be available. |
| `alluxio.work.dir` | `${alluxio.home}` | The directory to use for Alluxio's working directory. By default, the journal, logs, and under file storage data (if using the local filesystem) are written here. |
| `alluxio.zookeeper.address` | | Address of ZooKeeper. |
| `alluxio.zookeeper.auth.enabled` | `true` | If true, enable client-side ZooKeeper authentication. |
| `alluxio.zookeeper.connection.timeout` | `15s` | Connection timeout for Alluxio (job) masters to select the leading (job) master when connecting to ZooKeeper. |
| `alluxio.zookeeper.election.path` | `/alluxio/election` | Election directory in ZooKeeper. |
| `alluxio.zookeeper.enabled` | `false` | If true, set up master fault-tolerant mode using ZooKeeper. |
| `alluxio.zookeeper.job.election.path` | `/job_election` | N/A |
| `alluxio.zookeeper.job.leader.path` | `/job_leader` | N/A |
| `alluxio.zookeeper.leader.connection.error.policy` | `SESSION` | The connection error policy defines how errors on ZooKeeper connections are treated in leader election. The STANDARD policy treats every connection event as a failure. The SESSION policy relies on ZooKeeper sessions for judging failures, helping the leader to retain its status as long as its session is protected. |
| `alluxio.zookeeper.leader.inquiry.retry` | `10` | The number of retries to inquire the leader from ZooKeeper. |
| `alluxio.zookeeper.leader.path` | `/alluxio/leader` | Leader directory in ZooKeeper. |
| `alluxio.zookeeper.session.timeout` | `60s` | Session timeout to use when connecting to ZooKeeper. |
| `aws.accessKeyId` | | The access key of the S3 bucket. |
| `aws.secretKey` | | The secret key of the S3 bucket. |
| `fs.cos.access.key` | | The access key of the COS bucket. |
| `fs.cos.app.id` | | The app id of the COS bucket. |
| `fs.cos.connection.max` | `1024` | The maximum number of COS connections. |
| `fs.cos.connection.timeout` | `50sec` | The timeout of connecting to COS. |
| `fs.cos.region` | | The region name of the COS bucket. |
| `fs.cos.secret.key` | | The secret key of the COS bucket. |
| `fs.cos.socket.timeout` | `50sec` | The timeout of the COS socket. |
| `fs.gcs.accessKeyId` | | The access key of the GCS bucket. |
| `fs.gcs.secretAccessKey` | | The secret key of the GCS bucket. |
| `fs.kodo.accesskey` | | The access key of the Kodo bucket. |
| `fs.kodo.secretkey` | | The secret key of the Kodo bucket. |
| `fs.oss.accessKeyId` | | The access key of the OSS bucket. |
| `fs.oss.accessKeySecret` | | The secret key of the OSS bucket. |
| `fs.oss.endpoint` | | The endpoint of the OSS bucket. |
| `fs.swift.auth.method` | | Choice of authentication method: [tempauth (default), swiftauth, keystone, keystonev3]. |
| `fs.swift.auth.url` | | Authentication URL for the REST server, e.g., http://server:8090/auth/v1.0. |
| `fs.swift.password` | | The password used for user:tenant authentication. |
| `fs.swift.region` | | Service region when using Keystone authentication. |
| `fs.swift.simulation` | | Whether to simulate a single-node Swift backend for testing purposes: true or false (default). |
| `fs.swift.tenant` | | Swift tenant for authentication. |
| `fs.swift.user` | | Swift user for authentication. |
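
As noted in the descriptions above, a handful of properties (`alluxio.conf.dir`, `alluxio.logs.dir`, `alluxio.logserver.hostname`, `alluxio.logserver.logs.dir`, `alluxio.logserver.port`, and `alluxio.site.conf.dir`) take effect only as JVM system properties. A sketch of how these might be appended to `$ALLUXIO_JAVA_OPTS` in conf/alluxio-env.sh (the path and hostname below are illustrative):

```bash
# conf/alluxio-env.sh -- path and hostname are illustrative
ALLUXIO_JAVA_OPTS="${ALLUXIO_JAVA_OPTS} -Dalluxio.logs.dir=/var/log/alluxio"
ALLUXIO_JAVA_OPTS="${ALLUXIO_JAVA_OPTS} -Dalluxio.logserver.hostname=logserver-host"
```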

Master Configuration

The master configuration specifies information regarding the master node, such as the address and the port number.
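
As one example of how several of these properties combine, the daily metadata backup feature is controlled by the `alluxio.master.daily.backup.*` and `alluxio.master.backup.directory` entries in the table below. A minimal sketch with illustrative values:

```properties
# conf/alluxio-site.properties -- values are illustrative
alluxio.master.daily.backup.enabled=true
alluxio.master.daily.backup.time=05:00
alluxio.master.daily.backup.files.retained=3
alluxio.master.backup.directory=/alluxio_backups
```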

| Property Name | Default | Description |
|---|---|---|
| `alluxio.master.audit.logging.enabled` | `false` | Set to true to enable file system master audit logging. |
| `alluxio.master.audit.logging.queue.capacity` | `10000` | Capacity of the queue used by audit logging. |
| `alluxio.master.backup.abandon.timeout` | `1min` | Duration after which the leader will abandon the backup if it has not received a heartbeat from the backup-worker. |
| `alluxio.master.backup.connect.interval.max` | `30sec` | Maximum delay between each connection attempt to the backup-leader. |
| `alluxio.master.backup.connect.interval.min` | `1sec` | Minimum delay between each connection attempt to the backup-leader. |
| `alluxio.master.backup.delegation.enabled` | `false` | Whether to delegate journals to stand-by masters in an HA cluster. |
| `alluxio.master.backup.directory` | `/alluxio_backups` | Default directory for writing master metadata backups. This path is an absolute path of the root UFS. For example, if the root UFS directory is hdfs://host:port/alluxio/data, the default backup directory will be hdfs://host:port/alluxio_backups. |
| `alluxio.master.backup.entry.buffer.count` | `10000` | How many journal entries to buffer during a backup. |
| `alluxio.master.backup.heartbeat.interval` | `2sec` | Interval at which the follower updates the leader on an ongoing backup. |
| `alluxio.master.backup.state.lock.exclusive.duration` | `0ms` | The Alluxio master will allow only exclusive locking of the state-lock for this duration. This duration starts after the masters are started for the first time. User RPCs will fail to acquire the state-lock during this phase, and a backup is guaranteed to take the state-lock meanwhile. |
| `alluxio.master.backup.state.lock.forced.duration` | `15min` | Exclusive locking of the state-lock will time out after this duration is spent in the forced phase. |
| `alluxio.master.backup.state.lock.interrupt.cycle.enabled` | `true` | This controls whether RPCs that are waiting on/holding the state-lock in shared mode will be interrupted while the state-lock is taken exclusively. |
| `alluxio.master.backup.state.lock.interrupt.cycle.interval` | `30sec` | The interval at which RPCs that are waiting on/holding the state-lock in shared mode will be interrupted while the state-lock is taken exclusively. |
| `alluxio.master.backup.transport.timeout` | `30sec` | Request timeout for backup messaging. |
| `alluxio.master.bind.host` | `0.0.0.0` | The hostname that the Alluxio master binds to. |
| `alluxio.master.daily.backup.enabled` | `false` | Whether or not to enable daily primary master metadata backups. |
| `alluxio.master.daily.backup.files.retained` | `3` | The maximum number of backup files to keep in the backup directory. |
| `alluxio.master.daily.backup.state.lock.grace.mode` | `FORCED` | Grace mode helps take the state-lock exclusively for backup with minimum disruption to existing RPCs. This low-impact locking phase is called the grace-cycle. Two modes are supported: TIMEOUT and FORCED. TIMEOUT means exclusive locking will time out if it cannot acquire the lock within the grace-cycle. FORCED means the state-lock will be taken forcefully if the grace-cycle fails to acquire it. The forced phase might trigger interrupting of existing RPCs if that is enabled. |
| `alluxio.master.daily.backup.state.lock.sleep.duration` | `10m` | The duration that controls how long the lock waiter sleeps within a single grace-cycle. |
| `alluxio.master.daily.backup.state.lock.timeout` | `12h` | The max duration for a grace-cycle. |
| `alluxio.master.daily.backup.state.lock.try.duration` | `30s` | The duration that controls how long the state-lock is tried within a single grace-cycle. |
| `alluxio.master.daily.backup.time` | `05:00` | Default UTC time for writing daily master metadata backups. The accepted time format is hour:minute, based on a 24-hour clock (e.g., 05:30, 06:00, and 22:04). Backing up metadata requires a pause in master metadata changes, so please set this value to an off-peak time to avoid interfering with other users of the system. |
| `alluxio.master.embedded.journal.addresses` | | A comma-separated list of journal addresses for all masters in the cluster. The format is `hostname1:port1,hostname2:port2,...`. When left unset, Alluxio uses `${alluxio.master.hostname}:${alluxio.master.embedded.journal.port}` by default. |
| `alluxio.master.embedded.journal.appender.batch.size` | `512KB` | Amount of data that is appended from the leader to followers in a single heartbeat. Setting higher values might require increasing the election timeout due to increased network delay. Setting lower values might stall knowledge propagation between the leader and followers. |
| `alluxio.master.embedded.journal.bind.host` | | Used to bind embedded journal servers to a proxied host. The proxy hostname will still make use of `alluxio.master.embedded.journal.port` as the bind port. |
| `alluxio.master.embedded.journal.election.timeout` | `10s` | The election timeout for the embedded journal. When this period elapses without a master receiving any messages, the master will attempt to become the primary. The election timeout is waited initially while the cluster is forming, so larger values for the election timeout will cause longer start-up times. Smaller values might introduce instability to leadership. |
| `alluxio.master.embedded.journal.heartbeat.interval` | `3s` | The period between sending heartbeats from the embedded journal primary to followers. This should be less than half of the election timeout (`alluxio.master.embedded.journal.election.timeout`), because the election is driven by heartbeats. |
| `alluxio.master.embedded.journal.port` | `19200` | The port to use for embedded journal communication with other masters. |
| `alluxio.master.embedded.journal.shutdown.timeout` | `10sec` | Maximum time to wait for the embedded journal to stop on shutdown. |
| `alluxio.master.embedded.journal.storage.level` | `DISK` | The storage level for storing embedded journal logs. Use DISK for maximum durability. Use MAPPED for better performance, but with some risk of losing state in case of power loss or host failure. Use MEMORY for optimal performance, but no state persistence across cluster restarts. |
| `alluxio.master.embedded.journal.transport.max.inbound.message.size` | `100MB` | The maximum size of a message that can be sent to the embedded journal server node. |
| `alluxio.master.embedded.journal.transport.request.timeout.ms` | `5sec` | Timeout for requests between embedded journal masters. |
| `alluxio.master.embedded.journal.triggered.snapshot.wait.timeout` | `2hour` | Maximum time to wait for a triggered snapshot to finish. |
| `alluxio.master.embedded.journal.write.timeout` | `30sec` | Maximum time to wait for a write/flush on the embedded journal. |
| `alluxio.master.file.access.time.journal.flush.interval` | `1h` | The minimum interval between asynchronous flushes of file access time update journal entries. Setting it to a non-positive value will make the journal update synchronous. Asynchronous updates reduce the performance impact of tracking access time but can lose some access time updates when the master stops unexpectedly. |
| `alluxio.master.file.access.time.update.precision` | `1d` | The file last access time is precise up to this value. Setting it to a non-positive value will update the last access time on every file access operation. A longer precision helps reduce the performance impact of tracking access time by reducing the amount of metadata writes that occur while reading the same group of files repetitively. |
| `alluxio.master.file.access.time.updater.shutdown.timeout` | `1sec` | Maximum time to wait for the access updater to stop on shutdown. |
| `alluxio.master.filesystem.liststatus.result.message.length` | `10000` | Count of items in each list-status response message. |
| `alluxio.master.format.file.prefix` | `_format` | The file prefix of the file generated in the journal directory when the journal is formatted. The master will search for a file with this prefix when determining if the journal is formatted. |
| `alluxio.master.heartbeat.timeout` | `10min` | Timeout between the leader master and a standby master indicating a lost master. |
| `alluxio.master.hostname` | | The hostname of the Alluxio master. |
| `alluxio.master.journal.checkpoint.period.entries` | `2000000` | The number of journal entries to write before creating a new journal checkpoint. |
| `alluxio.master.journal.flush.batch.time` | `5ms` | Time to wait for batching journal writes. |
| `alluxio.master.journal.flush.timeout` | `5min` | The amount of time to keep retrying journal writes before giving up and shutting down the master. |
| `alluxio.master.journal.folder` | `${alluxio.work.dir}/journal` | The path to store master journal logs. When using the UFS journal this could be a URI like hdfs://namenode:port/alluxio/journal. When using the embedded journal this must be a local path. |
| `alluxio.master.journal.gc.period` | `2min` | Frequency with which to scan for and delete stale journal checkpoints. |
| `alluxio.master.journal.gc.threshold` | `5min` | Minimum age for garbage collecting checkpoints. |
| `alluxio.master.journal.init.from.backup` | | A URI for a backup to initialize the journal from. When the master becomes primary, if it sees that its journal is freshly formatted, it will restore its state from the backup. When running multiple masters, this property must be configured on all masters, since it isn't known during startup which master will become the first primary. |
| `alluxio.master.journal.log.size.bytes.max` | `10MB` | If a log file is bigger than this value, it will rotate to the next file. |
| `alluxio.master.journal.retry.interval` | `1sec` | The amount of time to sleep between retrying journal flushes. |
| `alluxio.master.journal.tailer.shutdown.quiet.wait.time` | `5sec` | Before the standby master shuts down its tailer thread, there should be no update to the leader master's journal during this specified time period. |
| `alluxio.master.journal.tailer.sleep.time` | `1sec` | Time for the standby master to sleep when it cannot find anything new in the leader master's journal. |
| `alluxio.master.journal.temporary.file.gc.threshold` | `30min` | Minimum age for garbage collecting temporary checkpoint files. |
| `alluxio.master.journal.type` | `EMBEDDED` | The type of journal to use. Valid options are UFS (store the journal in a UFS), EMBEDDED (use a journal embedded in the masters), and NOOP (do not use a journal). |
| `alluxio.master.journal.ufs.option` | | The configuration to use for journal operations. |
| `alluxio.master.jvm.monitor.enabled` | `false` | Whether to start the JVM monitor thread on the master. |
| `alluxio.master.keytab.file` | | Kerberos keytab file for the Alluxio master. |
| `alluxio.master.lock.pool.concurrency.level` | `100` | Maximum concurrency level for the lock pool. |
| `alluxio.master.lock.pool.high.watermark` | `1000000` | High watermark of the lock pool size. When the size grows over the high watermark, a background thread starts evicting unused locks from the pool. |
| `alluxio.master.lock.pool.initsize` | `1000` | Initial size of the lock pool for master inodes. |
| `alluxio.master.lock.pool.low.watermark` | `500000` | Low watermark of the lock pool size. When the size grows over the high watermark, a background thread will try to evict unused locks until the size reaches the low watermark. |
| `alluxio.master.log.config.report.heartbeat.interval` | `1h` | The interval for periodically logging the configuration check report. |
| `alluxio.master.lost.worker.file.detection.interval` | `10sec` | The interval between Alluxio master detections to find lost workers and files based on updates from Alluxio workers. |
| `alluxio.master.metadata.sync.concurrency.level` | `6` | The maximum number of concurrent sync tasks running for a given sync operation. |
| `alluxio.master.metadata.sync.executor.pool.size` | | The number of threads which can concurrently execute metadata sync operations. |
| `alluxio.master.metadata.sync.ufs.prefetch.pool.size` | | The number of threads which can concurrently fetch metadata from UFSes during a metadata sync operation. |
| `alluxio.master.metastore` | `HEAP` | The type of metastore to use, either HEAP or ROCKS. The heap metastore keeps all metadata on-heap, while the rocks metastore stores some metadata on heap and some metadata on disk. The rocks metastore has the advantage of being able to support a large namespace (1 billion plus files) without needing a massive heap size. |
| `alluxio.master.metastore.dir` | `${alluxio.work.dir}/metastore` | The metastore work directory. Only some metastores need disk. |
| `alluxio.master.metastore.inode.cache.evict.batch.size` | `1000` | The batch size for evicting entries from the inode cache. |
| `alluxio.master.metastore.inode.cache.high.water.mark.ratio` | `0.85` | The high water mark for the inode cache, as a ratio of high water mark to total cache size. If this is 0.85 and the max size is 10 million, the high water mark value is 8.5 million. When the cache reaches the high water mark, the eviction process will evict down to the low water mark. |
| `alluxio.master.metastore.inode.cache.low.water.mark.ratio` | `0.8` | The low water mark for the inode cache, as a ratio of low water mark to total cache size. If this is 0.8 and the max size is 10 million, the low water mark value is 8 million. When the cache reaches the high water mark, the eviction process will evict down to the low water mark. |
| `alluxio.master.metastore.inode.cache.max.size` | `10000000` | The number of inodes to cache on-heap. This only applies to off-heap metastores, e.g. ROCKS. Set this to 0 to disable the on-heap inode cache. |
| `alluxio.master.metastore.inode.enumerator.buffer.count` | `10000` | The number of entries to buffer during read-ahead enumeration. |
| `alluxio.master.metastore.inode.inherit.owner.and.group` | `true` | Whether to inherit the owner/group from the parent when creating a new inode path if empty. |
| `alluxio.master.metastore.inode.iteration.crawler.count` | Use {CPU core count} for enumeration | The number of threads used during inode tree enumeration. |
| `alluxio.master.metastore.iterator.readahead.size` | `64MB` | The read-ahead size (in bytes) for metastore iterators. |
| `alluxio.master.metrics.service.threads` | `5` | The number of threads in the metrics master executor pool for parallel processing of metrics submitted by workers or clients and updating cluster metrics. |
| `alluxio.master.metrics.time.series.interval` | `5min` | Interval at which the master records metrics information. This affects the granularity of the metrics graphed in the UI. |
| `alluxio.master.mount.table.root.alluxio` | `/` | Alluxio root mount point. |
| `alluxio.master.mount.table.root.aws.accessKeyId` | | N/A |
| `alluxio.master.mount.table.root.aws.secretKeyId` | | N/A |
| `alluxio.master.mount.table.root.option` | | Configuration for the UFS of the Alluxio root mount point. |
| `alluxio.master.mount.table.root.readonly` | `false` | Whether the Alluxio root mount point is readonly. |
| `alluxio.master.mount.table.root.shared` | `true` | Whether the Alluxio root mount point is shared. |
| `alluxio.master.mount.table.root.ufs` | `${alluxio.work.dir}/underFSStorage` | The storage address of the UFS at the Alluxio root mount point. |
| `alluxio.master.network.max.inbound.message.size` | `100MB` | The maximum size of a message that can be sent to the Alluxio master. |
| `alluxio.master.periodic.block.integrity.check.interval` | `1hr` | The period for the block integrity check; disabled if <= 0. |
| `alluxio.master.periodic.block.integrity.check.repair` | `false` | Whether the system should delete orphaned blocks found during the periodic integrity check. This is an experimental feature. |
| `alluxio.master.persistence.blacklist` | | Patterns to blacklist from persisting, comma separated, string match, no regex. This affects any async persist call (including ASYNC_THROUGH writes and CLI persist) but does not affect CACHE_THROUGH writes. Users may want to specify temporary files in the blacklist to avoid unnecessary I/O and errors. Some examples are `.staging` and `.tmp`. |
| `alluxio.master.persistence.checker.interval` | `1s` | How often the master checks persistence status for files written using ASYNC_THROUGH. |
| `alluxio.master.persistence.initial.interval` | `1s` | How often the master persistence checker checks persistence status for files written using ASYNC_THROUGH. |
| `alluxio.master.persistence.max.interval` | `1hr` | Max wait interval for the master persistence checker when checking persistence status for files written using ASYNC_THROUGH. |
| `alluxio.master.persistence.max.total.wait.time` | `1day` | Total wait time for the master persistence checker when checking persistence status for files written using ASYNC_THROUGH. |
| `alluxio.master.persistence.scheduler.interval` | `1s` | How often the master schedules persistence jobs for files written using ASYNC_THROUGH. |
| `alluxio.master.principal` | | Kerberos principal for the Alluxio master. |
| `alluxio.master.replication.check.interval` | `1min` | How often the master runs the background process to check the replication level of files. |
| `alluxio.master.rpc.addresses` | | A list of comma-separated host:port RPC addresses where the client should look for masters when using multiple masters without Zookeeper. This property is not used when Zookeeper is enabled, since Zookeeper already stores the master addresses. |
| `alluxio.master.rpc.executor.core.pool.size` | `0` | The number of threads to keep in the thread pool of the master RPC executor service. By default it is the same as the parallelism level, but may be set to a larger value to reduce dynamic overhead if tasks regularly block. A smaller value (for example 0) is equivalent to the default. |
| `alluxio.master.rpc.executor.keepalive` | `60sec` | The keep-alive time of a thread in the master RPC executor service: how long after last use before the thread is terminated (and replaced if necessary). |
| `alluxio.master.rpc.executor.max.pool.size` | `500` | The maximum number of threads allowed for the master RPC executor service. When the maximum is reached, attempts to replace blocked threads fail. |
| `alluxio.master.rpc.executor.min.runnable` | `1` | The minimum allowed number of core threads that are not blocked. To ensure progress, when too few unblocked threads exist and unexecuted tasks may exist, new threads are constructed up to the value of `alluxio.master.rpc.executor.max.pool.size`. A value of 1 ensures liveness. A larger value might improve throughput but might also increase overhead. |
| `alluxio.master.rpc.executor.parallelism` | 2 * {CPU core count} | The parallelism level of the master RPC executor service. |
| `alluxio.master.rpc.port` | `19998` | The port for the Alluxio master's RPC service. |
| `alluxio.master.shell.backup.state.lock.grace.mode` | `TIMEOUT` | Grace mode helps take the state-lock exclusively for backup with minimum disruption to existing RPCs. This low-impact locking phase is called the grace-cycle. Two modes are supported: TIMEOUT and FORCED. TIMEOUT means exclusive locking will time out if it cannot acquire the lock within the grace-cycle. FORCED means the state-lock will be taken forcefully if the grace-cycle fails to acquire it. The forced phase might trigger interrupting of existing RPCs if that is enabled. |
| `alluxio.master.shell.backup.state.lock.sleep.duration` | `0` | The duration that controls how long the lock waiter sleeps within a single grace-cycle. |
| `alluxio.master.shell.backup.state.lock.timeout` | `1m` | The max duration for a grace-cycle. |
| `alluxio.master.shell.backup.state.lock.try.duration` | `1m` | The duration that controls how long the state-lock is tried within a single grace-cycle. |
| `alluxio.master.standby.heartbeat.interval` | `2min` | The heartbeat interval between the Alluxio primary master and standby masters. |
| `alluxio.master.startup.block.integrity.check.enabled` | `true` | Whether the system should be checked on startup for orphaned blocks (blocks having no corresponding files but still taking system resources due to various system failures). Orphaned blocks will be deleted during master startup if this property is true. This property is available since 1.7.1. |
| `alluxio.master.tieredstore.global.level0.alias` | `MEM` | The name of the highest storage tier in the entire system. |
| `alluxio.master.tieredstore.global.level1.alias` | `SSD` | The name of the second highest storage tier in the entire system. |
| `alluxio.master.tieredstore.global.level2.alias` | `HDD` | The name of the third highest storage tier in the entire system. |
| `alluxio.master.tieredstore.global.levels` | `3` | The total number of storage tiers in the system. |
| `alluxio.master.tieredstore.global.mediumtype` | `MEM, SSD, HDD` | The list of medium types supported in the system. |
| `alluxio.master.ttl.checker.interval` | `1hour` | How often to periodically check for and delete files with an expired TTL value. |
| `alluxio.master.ufs.active.sync.event.rate.interval` | `60sec` | The time interval used to estimate the incoming event rate. |
| `alluxio.master.ufs.active.sync.interval` | `30sec` | Time interval at which to periodically actively sync the UFS. |
| `alluxio.master.ufs.active.sync.max.activities` | `10` | Max number of changes in a directory to be considered for active syncing. |
| `alluxio.master.ufs.active.sync.max.age` | `10` | The maximum number of intervals to wait to find a quiet period before the directories must be synced. |
| `alluxio.master.ufs.active.sync.poll.batch.size` | `1024` | The number of event batches that should be submitted together to a single thread for processing. |
| `alluxio.master.ufs.active.sync.poll.timeout` | `10sec` | Max time to wait before timing out a polling operation. |
| `alluxio.master.ufs.active.sync.retry.timeout` | `10sec` | The max total duration to retry failed active sync operations. A large duration is useful to handle transient failures such as an unresponsive under storage, but can lock the inode tree being synced for longer. |
| `alluxio.master.ufs.active.sync.thread.pool.size` | | The number of threads used by the active sync provider to process active sync events. A higher number allows the master to use more CPU to process events from an event stream in parallel. If this value is too low, Alluxio may fall behind processing events. Defaults to # of processors / 2. |
| `alluxio.master.ufs.block.location.cache.capacity` | `1000000` | The capacity of the UFS block locations cache. This cache caches UFS block locations for files that are persisted but not in Alluxio space, so that listing the status of these files does not need to repeatedly ask the UFS for their block locations. If this is set to 0, the cache will be disabled. |
| `alluxio.master.ufs.path.cache.capacity` | `100000` | The capacity of the UFS path cache. This cache is used to approximate the ONCE metadata load behavior (see `alluxio.user.file.metadata.load.type`). Larger caches will consume more memory, but will better approximate the ONCE behavior. |
| `alluxio.master.ufs.path.cache.threads` | `64` | The maximum size of the thread pool for asynchronously processing paths for the UFS path cache. A greater number of threads will decrease the amount of staleness in the async cache, but may impact performance. If this is set to 0, the cache will be disabled, and `alluxio.user.file.metadata.load.type=ONCE` will behave like ALWAYS. |
| `alluxio.master.unsafe.direct.persist.object.enabled` | `true` | When set to false, writing files using ASYNC_THROUGH or the persist CLI with object stores as the UFS will first create temporary objects suffixed by ".alluxio.TIMESTAMP.tmp" in the object store before they are committed to the final UFS path. When set to true, files will be put to the destination path directly in the object store without staging with a temp suffix. Enabling this optimization by directly persisting files can significantly improve the efficiency of writing to an object store by copying less data, since renames in an object store can be slow; however, it leaves a short vulnerability window for undefined behavior if a file written using ASYNC_THROUGH is renamed or removed before the async persist operation completes while the same file path was reused for other new files in Alluxio. |
| `alluxio.master.update.check.enabled` | `true` | Whether to check for update availability. |
| `alluxio.master.update.check.interval` | `7day` | The interval at which to check for update availability. |
| `alluxio.master.web.bind.host` | `0.0.0.0` | The hostname the Alluxio master web UI binds to. |
| `alluxio.master.web.hostname` | | The hostname of the Alluxio Master web UI. |
| `alluxio.master.web.port` | `19999` | The port the Alluxio web UI runs on. |
| `alluxio.master.whitelist` | `/` | A comma-separated list of prefixes of paths which are cacheable. Alluxio will try to cache a cacheable file when it is read for the first time. |
| `alluxio.master.worker.connect.wait.time` | `5sec` | The Alluxio master will wait a period of time after start-up for all workers to register before it starts accepting client requests. This property determines the wait time. |
| `alluxio.master.worker.info.cache.refresh.time` | `10sec` | The worker information list will be refreshed after being cached for this time period. If the refresh time is too long, operations on the job servers or clients may fail because of stale worker info. If it is too short, continuously updating worker information may cause lock contention in the block master. |
| `alluxio.master.worker.timeout` | `5min` | Timeout between master and worker indicating a lost worker. |
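
Several of the journal properties above work together when configuring an embedded-journal HA deployment. A sketch, assuming three masters with illustrative hostnames:

```properties
# conf/alluxio-site.properties on each master -- hostnames are illustrative
alluxio.master.hostname=master1
alluxio.master.journal.type=EMBEDDED
alluxio.master.embedded.journal.addresses=master1:19200,master2:19200,master3:19200
```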

Worker Configuration

The worker configuration specifies information regarding the worker nodes, such as the address and the port number.
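
Two of the properties below, `alluxio.worker.data.server.domain.socket.address` and `alluxio.worker.data.server.domain.socket.as.uuid`, are typically set together to enable short-circuit reads over a UNIX domain socket, as their descriptions explain. A minimal sketch (the socket directory is illustrative):

```properties
# conf/alluxio-site.properties -- the socket directory is illustrative
alluxio.worker.data.server.domain.socket.address=/opt/domain
alluxio.worker.data.server.domain.socket.as.uuid=true
```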

Property NameDefaultDescription
alluxio.worker.allocator.classalluxio.worker.block.allocator.MaxFreeAllocatorThe strategy that a worker uses to allocate space among storage directories in certain storage layer. Valid options include: alluxio.worker.block.allocator.MaxFreeAllocator, alluxio.worker.block.allocator.GreedyAllocator, alluxio.worker.block.allocator.RoundRobinAllocator.
alluxio.worker.bind.host0.0.0.0The hostname Alluxio’s worker node binds to.
alluxio.worker.block.annotator.classalluxio.worker.block.annotator.LRUAnnotatorThe strategy that a worker uses to annotate blocks in order to have an ordered view of them during internalmanagement tasks such as eviction and promotion/demotion. Valid options include: alluxio.worker.block.annotator.LRFUAnnotator, alluxio.worker.block.annotator.LRUAnnotator,
alluxio.worker.block.annotator.lrfu.attenuation.factor2.0A attenuation factor in [2, INF) to control the behavior of LRFU annotator.
alluxio.worker.block.annotator.lrfu.step.factor0.25A factor in [0, 1] to control the behavior of LRFU: smaller value makes LRFU more similar to LFU; and larger value makes LRFU closer to LRU.
alluxio.worker.block.heartbeat.interval1secThe interval between block workers’ heartbeats to update block status, storage health and other workers’ information to Alluxio Master.
alluxio.worker.block.heartbeat.timeout${alluxio.worker.master.connect.retry.timeout}The timeout value of block workers’ heartbeats. If the worker can’t connect to master before this interval expires, the worker will exit.
alluxio.worker.block.master.client.pool.size11The block master client pool size on the Alluxio workers.
alluxio.worker.container.hostnameThe container hostname if worker is running in a container.
alluxio.worker.data.folder/alluxioworker/A relative path within each storage directory used as the data folder for Alluxio worker to put data for tiered store.
alluxio.worker.data.folder.permissionsrwxrwxrwxThe permission set for the worker data folder. If short circuit is used this folder should be accessible by all users (rwxrwxrwx).
alluxio.worker.data.folder.tmp.tmp_blocksA relative path in alluxio.worker.data.folder used to store the temporary data for uncommitted files.
alluxio.worker.data.server.classalluxio.worker.grpc.GrpcDataServerSelects the networking stack to run the worker with. Valid options are: alluxio.worker.grpc.GrpcDataServer.
alluxio.worker.data.server.domain.socket.addressThe path to the domain socket. Short-circuit reads make use of a UNIX domain socket when this is set (non-empty). This is a special path in the file system that allows the client and the AlluxioWorker to communicate. You will need to set a path to this socket. The AlluxioWorker needs to be able to create the path. If alluxio.worker.data.server.domain.socket.as.uuid is set, the path should be the home directory for the domain socket. The full path for the domain socket with be {path}/{uuid}.
alluxio.worker.data.server.domain.socket.as.uuidfalseIf true, the property alluxio.worker.data.server.domain.socket.addressis the path to the home directory for the domain socket and a unique identifier is used as the domain socket name. If false, the property is the absolute path to the UNIX domain socket.
alluxio.worker.data.tmp.subdir.max1024The maximum number of sub-directories allowed to be created in ${alluxio.worker.data.tmp.folder}.
alluxio.worker.evictor.classThe strategy that a worker uses to evict block files when a storage layer runs out of space. Valid options include alluxio.worker.block.evictor.LRFUEvictor, alluxio.worker.block.evictor.GreedyEvictor, alluxio.worker.block.evictor.LRUEvictor, alluxio.worker.block.evictor.PartialLRUEvictor.
alluxio.worker.file.buffer.size1MBThe buffer size for worker to write data into the tiered storage.
alluxio.worker.free.space.timeout10secThe duration for which a worker will wait for eviction to make space available for a client write request.
alluxio.worker.hostnameThe hostname of Alluxio worker.
alluxio.worker.jvm.monitor.enabledfalseWhether to enable start JVM monitor thread on worker.
alluxio.worker.keytab.fileKerberos keytab file for Alluxio worker.
alluxio.worker.management.backoff.strategyANYDefines the backoff scope respected by background tasks. Supported values are ANY / DIRECTORY. ANY: Management tasks will backoff from worker when there is any user I/O.This mode will ensure low management task overhead in order to favor immediate user I/O performance. However, making progress on management tasks will require quite periods on the worker.DIRECTORY: Management tasks will backoff from directories with ongoing user I/O.This mode will give better chance of making progress on management tasks.However, immediate user I/O throughput might be reduced due to increased management task activity.
alluxio.worker.management.block.transfer.concurrency.limitUse {CPU core count}/2 threads block transferPuts a limit to how many block transfers are executed concurrently during management.
alluxio.worker.management.load.detection.cool.down.time10secManagement tasks will not run for this long after load detected. Any user I/O will still register as a load for this period of time after it is finished. Short durations might cause interference between user I/O and background tier management tasks. Long durations might cause starvation for background tasks.
alluxio.worker.management.task.thread.countUse {CPU core count} threads for all management tasksThe number of threads for management task executor
alluxio.worker.management.tier.align.enabledtrueWhether to align tiers based on access pattern.
alluxio.worker.management.tier.align.range100Maximum number of blocks to consider from one tier for a single alignment task.
alluxio.worker.management.tier.align.reserved.bytes1GBThe amount of space that is reserved from each storage directory for internal management tasks.
alluxio.worker.management.tier.promote.enabledtrueWhether to promote blocks to higher tiers.
alluxio.worker.management.tier.promote.quota.percent90Max percentage of each tier that could be used for promotions. Promotions will be stopped to a tier once its used space go over this value. (0 means never promote, and, 100 means always promote.
alluxio.worker.management.tier.promote.range100Maximum number of blocks to consider from one tier for a single promote task.
alluxio.worker.management.tier.swap.restore.enabledtrueWhether to run management swap-restore task when tier alignment cannot make progress.
alluxio.worker.master.connect.retry.timeout1hourRetry period before workers give up on connecting to master and exit.
alluxio.worker.memory.size2/3 of total system memory, or 1GB if system memory size cannot be determinedMemory capacity of each worker node.
alluxio.worker.network.async.cache.manager.threads.max8The maximum number of threads used to cache blocks asynchronously in the data server.
alluxio.worker.network.block.reader.threads.max2048The maximum number of threads used to read blocks in the data server.
alluxio.worker.network.block.writer.threads.max1024The maximum number of threads used to write blocks in the data server.
alluxio.worker.network.flowcontrol.window2MBThe HTTP2 flow control window used by worker gRPC connections. Larger value will allow more data to be buffered but will use more memory.
alluxio.worker.network.keepalive.time30secThe amount of time for data server (for block reads and block writes) to wait for a response before pinging the client to see if it is still alive.
alluxio.worker.network.keepalive.timeout30secThe maximum time for a data server (for block reads and block writes) to wait for a keepalive response before closing the connection.
alluxio.worker.network.max.inbound.message.size4MBThe max inbound message size used by worker gRPC connections.
alluxio.worker.network.netty.boss.threads1How many threads to use for accepting new requests.
alluxio.worker.network.netty.channelEPOLLNetty channel type: NIO or EPOLL. If EPOLL is not available, this will automatically fall back to NIO.
alluxio.worker.network.netty.shutdown.quiet.period2secThe quiet period. When the netty server is shutting down, it will ensure that no RPCs occur during the quiet period. If an RPC occurs, then the quiet period will restart before shutting down the netty server.
alluxio.worker.network.netty.watermark.high32KBDetermines how many bytes can be in the write queue before switching to non-writable.
alluxio.worker.network.netty.watermark.low8KBOnce the high watermark limit is reached, the queue must be flushed down to the low watermark before switching back to writable.
alluxio.worker.network.netty.worker.threads0How many threads to use for processing requests. Zero defaults to #cpuCores * 2.
alluxio.worker.network.reader.buffer.size4MBWhen a client reads from a remote worker, the maximum amount of data not yet received by the client before the worker pauses sending more data. If this value is lower than the read chunk size, read performance may be impacted as the worker waits more often for the buffer to free up. A higher value will increase the memory consumed by each read request.
alluxio.worker.network.reader.max.chunk.size.bytes2MBWhen a client reads from a remote worker, the maximum chunk size.
alluxio.worker.network.shutdown.timeout15secMaximum amount of time to wait until the worker gRPC server is shutdown (regardless of the quiet period).
alluxio.worker.network.writer.buffer.size.messages8When a client writes to a remote worker, the maximum number of data messages to buffer by the server for each request.
alluxio.worker.network.zerocopy.enabledtrueWhether zero copy is enabled on worker when processing data streams.
alluxio.worker.principalKerberos principal for Alluxio worker.
alluxio.worker.rpc.port29999The port for Alluxio worker’s RPC service.
alluxio.worker.session.timeout1minTimeout for the connection between a worker and a client, after which the session is considered lost.
alluxio.worker.storage.checker.enabledtrueWhether periodic storage health checker is enabled on Alluxio workers.
alluxio.worker.tieredstore.block.lock.readers1000The max number of concurrent readers for a block lock.
alluxio.worker.tieredstore.block.locks1000Total number of block locks for an Alluxio block worker. Larger value leads to finer locking granularity, but uses more space.
alluxio.worker.tieredstore.free.ahead.bytes0Amount to free ahead when worker storage is full. Higher values will help decrease CPU utilization under peak storage. Lower values will increase storage utilization.
alluxio.worker.tieredstore.level0.aliasMEMThe alias of the top storage tier on this worker. It must match one of the global storage tiers from the master configuration. An alias that is lower in the global hierarchy cannot be placed above an alias with a higher global position in the worker hierarchy; so by default, SSD cannot come before MEM on any worker.
alluxio.worker.tieredstore.level0.dirs.mediumtype${alluxio.worker.tieredstore.level0.alias}A list of media types (e.g., “MEM,SSD,SSD”) for each storage directory on the top storage tier specified by alluxio.worker.tieredstore.level0.dirs.path.
alluxio.worker.tieredstore.level0.dirs.path/mnt/ramdisk on Linux, /Volumes/ramdisk on OSXThe path of storage directory for the top storage tier. Note for MacOS the value should be /Volumes/.
alluxio.worker.tieredstore.level0.dirs.quota${alluxio.worker.memory.size}The capacity of the top storage tier.
alluxio.worker.tieredstore.level0.watermark.high.ratio0.95The high watermark of the space in the top storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level0.watermark.low.ratio0.7The low watermark of the space in the top storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level1.aliasThe alias of the second storage tier on this worker.
alluxio.worker.tieredstore.level1.dirs.mediumtype${alluxio.worker.tieredstore.level1.alias}A list of media types (e.g., “MEM,SSD,SSD”) for each storage directory on the second storage tier specified by alluxio.worker.tieredstore.level1.dirs.path.
alluxio.worker.tieredstore.level1.dirs.pathThe path of storage directory for the second storage tier.
alluxio.worker.tieredstore.level1.dirs.quotaThe capacity of the second storage tier.
alluxio.worker.tieredstore.level1.watermark.high.ratio0.95The high watermark of the space in the second storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level1.watermark.low.ratio0.7The low watermark of the space in the second storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level2.aliasThe alias of the third storage tier on this worker.
alluxio.worker.tieredstore.level2.dirs.mediumtype${alluxio.worker.tieredstore.level2.alias}A list of media types (e.g., “MEM,SSD,SSD”) for each storage directory on the third storage tier specified by alluxio.worker.tieredstore.level2.dirs.path.
alluxio.worker.tieredstore.level2.dirs.pathThe path of storage directory for the third storage tier.
alluxio.worker.tieredstore.level2.dirs.quotaThe capacity of the third storage tier.
alluxio.worker.tieredstore.level2.watermark.high.ratio0.95The high watermark of the space in the third storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.level2.watermark.low.ratio0.7The low watermark of the space in the third storage tier (a value between 0 and 1).
alluxio.worker.tieredstore.levels1The number of storage tiers on the worker.
alluxio.worker.ufs.block.open.timeout5minTimeout to open a block from UFS.
alluxio.worker.ufs.instream.cache.enabledtrueEnable caching for seekable under storage input streams, so that subsequent seek operations on the same file will reuse the cached input stream. This improves positioned read performance, as the open operation on some under file systems can be expensive. Note that the cached input stream may become stale if the UFS file is modified without notifying Alluxio.
alluxio.worker.ufs.instream.cache.expiration.time5minCached UFS instream expiration time.
alluxio.worker.ufs.instream.cache.max.size5000The max entries in the UFS instream cache.
alluxio.worker.web.bind.host0.0.0.0The hostname Alluxio worker’s web server binds to.
alluxio.worker.web.hostnameThe hostname Alluxio worker’s web UI binds to.
alluxio.worker.web.port30000The port Alluxio worker’s web UI runs on.
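
For example, the tiered storage properties above can be combined in alluxio-site.properties to configure a worker with a memory tier backed by an SSD tier. This is a minimal sketch: the paths and capacities below are illustrative, not recommendations.

alluxio.worker.tieredstore.levels=2
alluxio.worker.tieredstore.level0.alias=MEM
alluxio.worker.tieredstore.level0.dirs.path=/mnt/ramdisk
alluxio.worker.tieredstore.level0.dirs.quota=16GB
alluxio.worker.tieredstore.level0.watermark.high.ratio=0.95
alluxio.worker.tieredstore.level0.watermark.low.ratio=0.7
alluxio.worker.tieredstore.level1.alias=SSD
alluxio.worker.tieredstore.level1.dirs.path=/mnt/ssd
alluxio.worker.tieredstore.level1.dirs.quota=500GB

With this layout, blocks land in MEM first; the management tasks above (tier alignment and promotion) then move blocks between MEM and SSD based on access patterns.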

User Configuration

The user configuration specifies values regarding file system access.

Property NameDefaultDescription
alluxio.user.app.idThe custom id to use for labeling this client’s info, such as metrics. If unset, a random long will be used. This value is displayed in the client logs on initialization. Note that using the same app id will cause client info to be aggregated, so different applications must set their own ids or leave this value unset to use a randomly generated id.
alluxio.user.block.avoid.eviction.policy.reserved.size.bytes0MBThe portion of space reserved in a worker when using the LocalFirstAvoidEvictionPolicy class as block location policy.
alluxio.user.block.master.client.pool.gc.interval120secThe interval at which block master client GC checks occur.
alluxio.user.block.master.client.pool.gc.threshold120secA block master client is closed if it has been idle for more than this threshold.
alluxio.user.block.master.client.pool.size.max10The maximum number of block master clients cached in the block master client pool.
alluxio.user.block.master.client.pool.size.min0The minimum number of block master clients cached in the block master client pool. For long running processes, this should be set to zero.
alluxio.user.block.read.retry.max.duration2minN/A
alluxio.user.block.read.retry.sleep.base250msN/A
alluxio.user.block.read.retry.sleep.max2secN/A
alluxio.user.block.remote.read.buffer.size.bytes8MBThe size of the file buffer to read data from remote Alluxio worker.
alluxio.user.block.size.bytes.default64MBDefault block size for Alluxio files.
alluxio.user.block.worker.client.pool.gc.threshold300secA block worker client is closed if it has been idle for more than this threshold.
alluxio.user.block.worker.client.pool.max1024The maximum number of block worker clients cached in the block worker client pool.
alluxio.user.block.write.location.policy.classalluxio.client.block.policy.LocalFirstPolicyThe default location policy for choosing workers for writing a file’s blocks.
alluxio.user.client.cache.async.write.enabledtrueIf this is enabled, cache data asynchronously.
alluxio.user.client.cache.async.write.threads16Number of threads to asynchronously cache data.
alluxio.user.client.cache.dir/tmp/alluxio_cacheThe directory where client-side cache is stored.
alluxio.user.client.cache.enabledfalseIf this is enabled, data will be cached on Alluxio client.
alluxio.user.client.cache.evictor.classalluxio.client.file.cache.evictor.LRUCacheEvictorThe strategy that client uses to evict local cached pages when running out of space. Currently the only valid option provided is alluxio.client.file.cache.evictor.LRUCacheEvictor.
alluxio.user.client.cache.evictor.lfu.logbase2.0The log base for client cache LFU evictor bucket index.
alluxio.user.client.cache.local.store.file.buckets1000The number of file buckets for the local page store of the client-side cache. It is recommended to set this to a high value if the number of unique files is expected to be high (# files / file buckets <= 100,000).
alluxio.user.client.cache.page.size1MBSize of each page in client-side cache.
alluxio.user.client.cache.size512MBThe maximum size of the client-side cache.
alluxio.user.client.cache.store.typeLOCALThe type of page store to use for client-side cache. Can be either LOCAL or ROCKS. The LOCAL page store stores all pages in a directory, the ROCKS page store utilizes rocksDB to persist the data.
alluxio.user.conf.cluster.default.enabledtrueWhen this property is true, an Alluxio client will load the default values of configuration properties set by Alluxio master.
alluxio.user.conf.sync.interval1minThe time period of the client-master heartbeat used to update the configuration from the meta master if necessary.
alluxio.user.date.format.patternMM-dd-yyyy HH:mm:ss:SSSDisplay formatted date in cli command and web UI by given date format pattern.
alluxio.user.file.buffer.bytes8MBThe size of the file buffer to use for file system reads/writes.
alluxio.user.file.copyfromlocal.block.location.policy.classalluxio.client.block.policy.RoundRobinPolicyThe default location policy for choosing workers for writing a file’s blocks using copyFromLocal command.
alluxio.user.file.create.ttl-1Time to live for files created by a user, no ttl by default.
alluxio.user.file.create.ttl.actionDELETEWhen a file's TTL expires, the action to perform on it. Options: DELETE (default) or FREE.
alluxio.user.file.delete.uncheckedfalseWhether to check if the UFS contents are in sync with Alluxio before attempting to delete persisted directories recursively.
alluxio.user.file.master.client.pool.gc.interval120secThe interval at which file system master client GC checks occur.
alluxio.user.file.master.client.pool.gc.threshold120secA fs master client is closed if it has been idle for more than this threshold.
alluxio.user.file.master.client.pool.size.max10The maximum number of fs master clients cached in the fs master client pool.
alluxio.user.file.master.client.pool.size.min0The minimum number of fs master clients cached in the fs master client pool. For long running processes, this should be set to zero.
alluxio.user.file.metadata.load.typeONCEThe behavior of loading metadata from UFS. When information about a path is requested and the path does not exist in Alluxio, metadata can be loaded from the UFS. Valid options are ALWAYS, NEVER, and ONCE. ALWAYS will always access UFS to see if the path exists in the UFS. NEVER will never consult the UFS. ONCE will access the UFS the “first” time (according to a cache), but not after that. This parameter is ignored if a metadata sync is performed, via the parameter “alluxio.user.file.metadata.sync.interval”
alluxio.user.file.metadata.sync.interval-1The interval for syncing UFS metadata before invoking an operation on a path. -1 means no sync will occur. 0 means Alluxio will always sync the metadata of the path before an operation. If you specify a time interval, Alluxio will (best effort) not re-sync a path within that time interval. Syncing the metadata for a path must interact with the UFS, so it is an expensive operation. If a sync is performed for an operation, the configuration of “alluxio.user.file.metadata.load.type” will be ignored.
alluxio.user.file.passive.cache.enabledtrueWhether to cache files to local Alluxio workers when the files are read from remote workers (not UFS).
alluxio.user.file.persist.on.renamefalseWhether or not to asynchronously persist any files which have been renamed. This is helpful when working with compute frameworks which use rename to commit results.
alluxio.user.file.persistence.initial.wait.time0Time to wait before starting the persistence job. When the value is set to -1, the file will be persisted by rename operation or persist CLI but will not be automatically persisted in other cases. This is to avoid the heavy object copy in rename operation when alluxio.user.file.writetype.default is set to ASYNC_THROUGH. This value should be smaller than the value of alluxio.master.persistence.max.total.wait.time
alluxio.user.file.readtype.defaultCACHEDefault read type when creating Alluxio files. Valid options are CACHE_PROMOTE (move data to the highest tier if already in Alluxio storage, write data into the highest tier of local Alluxio if data needs to be read from under storage), CACHE (write data into the highest tier of local Alluxio if data needs to be read from under storage), NO_CACHE (no data interaction with Alluxio; if the read is from Alluxio, data migration or eviction will not occur).
alluxio.user.file.replication.durable1The target replication level of a file created by ASYNC_THROUGH writes before this file is persisted.
alluxio.user.file.replication.max-1The target max replication level of a file in Alluxio space. Setting this property to a negative value means no upper limit.
alluxio.user.file.replication.min0The target min replication level of a file in Alluxio space.
alluxio.user.file.reserved.bytes${alluxio.user.block.size.bytes.default}The size to reserve on workers for file system writes. Using a smaller value will improve concurrency for writes smaller than the block size.
alluxio.user.file.sequential.pread.threshold2MBAn upper bound on the client buffer size for positioned read to hint at the sequential nature of reads. For reads with a buffer size greater than this threshold, the read op is treated to be sequential and the worker may handle the read differently. For instance, cold reads from the HDFS ufs may use a different HDFS client API.
alluxio.user.file.short.circuit.enabledN/A
alluxio.user.file.target.mediaPreferred media type while storing file’s blocks.
alluxio.user.file.ufs.tier.enabledfalseWhen workers run out of available memory, whether the client can skip writing data to Alluxio and fall back to writing to the UFS without stopping the application. This property only works when the write type is ASYNC_THROUGH.
alluxio.user.file.waitcompleted.poll1secThe time interval to poll a file for its completion status when using waitCompleted.
alluxio.user.file.write.tier.default0The default tier for choosing where to write a block. Valid option is any integer. Non-negative values identify tiers starting from the top going down (0 identifies the first tier, 1 identifies the second tier, and so on). If the provided value is greater than the number of tiers, it identifies the last tier. Negative values identify tiers starting from the bottom going up (-1 identifies the last tier, -2 identifies the second to last tier, and so on). If the absolute value of the provided value is greater than the number of tiers, it identifies the first tier.
alluxio.user.file.writetype.defaultASYNC_THROUGHDefault write type when creating Alluxio files. Valid options are MUST_CACHE (write will only go to Alluxio and must be stored in Alluxio), CACHE_THROUGH (try to cache, write to UnderFS synchronously), THROUGH (no cache, write to UnderFS synchronously), ASYNC_THROUGH (write to cache, write to UnderFS asynchronously, replicated alluxio.user.file.replication.durable times in Alluxio before data is persisted).
alluxio.user.hostnameThe hostname to use for an Alluxio client.
alluxio.user.local.reader.chunk.size.bytes8MBWhen a client reads from a local worker, the maximum data chunk size.
alluxio.user.local.writer.chunk.size.bytes64KBWhen a client writes to a local worker, the maximum data chunk size.
alluxio.user.logging.threshold10sLogging a client RPC when it takes more time than the threshold.
alluxio.user.logs.dir${alluxio.logs.dir}/userThe path to store logs of Alluxio shell. To change its value, one can set environment variable $ALLUXIO_USER_LOGS_DIR. Note: overwriting this property will only work when it is passed as a JVM system property (e.g., appending “-Dalluxio.user.logs.dir”=<NEW_VALUE>” to $ALLUXIO_JAVA_OPTS). Setting it in alluxio-site.properties will not work.
alluxio.user.metadata.cache.enabledfalseIf this is enabled, metadata of paths will be cached. The cached metadata will be evicted when it expires after alluxio.user.metadata.cache.expiration.time or the cache size is over the limit of alluxio.user.metadata.cache.max.size.
alluxio.user.metadata.cache.expiration.time10minMetadata will expire and be evicted after being cached for this time period. Only valid if the filesystem is alluxio.client.file.MetadataCachingBaseFileSystem.
alluxio.user.metadata.cache.max.size100000Maximum number of paths with cached metadata. Only valid if the filesystem is alluxio.client.file.MetadataCachingBaseFileSystem.
alluxio.user.metrics.collection.enabledfalseWhether to collect client-side metrics and heartbeat them to the master.
alluxio.user.metrics.heartbeat.interval10secThe time period of client master heartbeat to send the client-side metrics.
alluxio.user.network.data.timeoutThe maximum time for an Alluxio client to wait for a data response (e.g. block reads and block writes) from Alluxio worker.
alluxio.user.network.flowcontrol.windowThe HTTP2 flow control window used by user gRPC connections. A larger value will allow more data to be buffered but will use more memory.
alluxio.user.network.keepalive.timeThe amount of time for a gRPC client (for block reads and block writes) to wait for a response before pinging the server to see if it is still alive.
alluxio.user.network.keepalive.timeoutThe maximum time for a gRPC client (for block reads and block writes) to wait for a keepalive response before closing the connection.
alluxio.user.network.max.inbound.message.sizeThe max inbound message size used by user gRPC connections.
alluxio.user.network.netty.channelType of netty channels. If EPOLL is not available, this will automatically fall back to NIO.
alluxio.user.network.netty.worker.threadsHow many threads to use for remote block worker client to read from remote block workers.
alluxio.user.network.reader.buffer.size.messagesWhen a client reads from a remote worker, the maximum number of messages to buffer by the client. A message can be either a command response, a data chunk, or a gRPC stream event such as complete or error.
alluxio.user.network.reader.chunk.size.bytesWhen a client reads from a remote worker, the maximum chunk size.
alluxio.user.network.rpc.flowcontrol.window2MBThe HTTP2 flow control window used by user rpc connections. A larger value will allow more data to be buffered but will use more memory.
alluxio.user.network.rpc.keepalive.time9223372036854775807The amount of time for an rpc client to wait for a response before pinging the server to see if it is still alive.
alluxio.user.network.rpc.keepalive.timeout30secThe maximum time for an rpc client to wait for a keepalive response before closing the connection.
alluxio.user.network.rpc.max.connections1The maximum number of physical connections to be used per target host.
alluxio.user.network.rpc.max.inbound.message.size100MBThe max inbound message size used by user rpc connections.
alluxio.user.network.rpc.netty.channelEPOLLType of netty channels used by rpc connections. If EPOLL is not available, this will automatically fall back to NIO.
alluxio.user.network.rpc.netty.worker.threads0How many threads to use for rpc client to read from remote workers.
alluxio.user.network.streaming.flowcontrol.window2MBThe HTTP2 flow control window used by user streaming connections. A larger value will allow more data to be buffered but will use more memory.
alluxio.user.network.streaming.keepalive.time9223372036854775807The amount of time for a streaming client to wait for a response before pinging the server to see if it is still alive.
alluxio.user.network.streaming.keepalive.timeout30secThe maximum time for a streaming client to wait for a keepalive response before closing the connection.
alluxio.user.network.streaming.max.connections64The maximum number of physical connections to be used per target host.
alluxio.user.network.streaming.max.inbound.message.size100MBThe max inbound message size used by user streaming connections.
alluxio.user.network.streaming.netty.channelEPOLLType of netty channels used by streaming connections. If EPOLL is not available, this will automatically fall back to NIO.
alluxio.user.network.streaming.netty.worker.threads0How many threads to use for streaming client to read from remote workers.
alluxio.user.network.writer.buffer.size.messagesWhen a client writes to a remote worker, the maximum number of messages to buffer by the client. A message can be either a command response, a data chunk, or a gRPC stream event such as complete or error.
alluxio.user.network.writer.chunk.size.bytesWhen a client writes to a remote worker, the maximum chunk size.
alluxio.user.network.writer.close.timeoutThe timeout to close a writer client.
alluxio.user.network.writer.flush.timeoutThe timeout to wait for flush to finish in a data writer.
alluxio.user.network.zerocopy.enabledWhether zero copy is enabled on client when processing data streams.
alluxio.user.rpc.retry.base.sleep50msAlluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the base time in the exponential backoff.
alluxio.user.rpc.retry.max.duration2minAlluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the maximum duration to retry for before giving up. Note that, this value is set to 5s for fs and fsadmin CLIs.
alluxio.user.rpc.retry.max.sleep3secAlluxio client RPCs automatically retry for transient errors with an exponential backoff. This property determines the maximum wait time in the backoff.
alluxio.user.short.circuit.enabledtrueIf set to true, short circuit read/write is enabled, allowing clients to read/write data directly without going through Alluxio workers when the data is local.
alluxio.user.short.circuit.preferredfalseWhen both short circuit and domain socket are enabled, prefer to use short circuit.
alluxio.user.streaming.data.timeout30secThe maximum time for an Alluxio client to wait for a data response (e.g. block reads and block writes) from Alluxio worker.
alluxio.user.streaming.reader.buffer.size.messages16When a client reads from a remote worker, the maximum number of messages to buffer by the client. A message can be either a command response, a data chunk, or a gRPC stream event such as complete or error.
alluxio.user.streaming.reader.chunk.size.bytes1MBWhen a client reads from a remote worker, the maximum chunk size.
alluxio.user.streaming.writer.buffer.size.messages16When a client writes to a remote worker, the maximum number of messages to buffer by the client. A message can be either a command response, a data chunk, or a gRPC stream event such as complete or error.
alluxio.user.streaming.writer.chunk.size.bytes1MBWhen a client writes to a remote worker, the maximum chunk size.
alluxio.user.streaming.writer.close.timeout30minThe timeout to close a writer client.
alluxio.user.streaming.writer.flush.timeout30minThe timeout to wait for flush to finish in a data writer.
alluxio.user.streaming.zerocopy.enabledtrueWhether zero copy is enabled on client when processing data streams.
alluxio.user.ufs.block.location.all.fallback.enabledtrueWhether to return all workers as block locations if UFS block locations are not co-located with any Alluxio workers or the location list is empty.
alluxio.user.ufs.block.read.concurrency.max2147483647The maximum concurrent readers for one UFS block on one Block Worker.
alluxio.user.ufs.block.read.location.policyalluxio.client.block.policy.LocalFirstPolicyWhen an Alluxio client reads a file from the UFS, it delegates the read to an Alluxio worker. The client uses this policy to choose which worker to read through. Built-in choices: [alluxio.client.block.policy.DeterministicHashPolicy, alluxio.client.block.policy.LocalFirstAvoidEvictionPolicy, alluxio.client.block.policy.LocalFirstPolicy, alluxio.client.block.policy.MostAvailableFirstPolicy, alluxio.client.block.policy.RoundRobinPolicy, alluxio.client.block.policy.SpecificHostPolicy].
alluxio.user.ufs.block.read.location.policy.deterministic.hash.shards1When alluxio.user.ufs.block.read.location.policy is set to alluxio.client.block.policy.DeterministicHashPolicy, this specifies the number of hash shards.
alluxio.user.worker.list.refresh.interval2minThe interval used to refresh the live worker list on the client.
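
For example, the client-side cache properties above can be combined to enable local caching in an application's alluxio-site.properties. This is a minimal sketch: the store type, directory, and page size use the documented defaults, and the cache size is illustrative.

alluxio.user.client.cache.enabled=true
alluxio.user.client.cache.store.type=LOCAL
alluxio.user.client.cache.dir=/tmp/alluxio_cache
alluxio.user.client.cache.size=2GB
alluxio.user.client.cache.page.size=1MB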

Resource Manager Configuration

When running Alluxio with resource managers like Mesos and YARN, Alluxio has additional configuration options.

Property NameDefaultDescription
alluxio.integration.master.resource.cpu1The number of CPUs to run an Alluxio master for the YARN framework.
alluxio.integration.master.resource.mem1024MBThe amount of memory to run an Alluxio master for the YARN framework.
alluxio.integration.mesos.alluxio.jar.urlhttp://downloads.alluxio.io/downloads/files/${alluxio.version}/alluxio-${alluxio.version}-bin.tar.gzURL to download an Alluxio distribution from during Mesos deployment.
alluxio.integration.mesos.jdk.pathjdk1.8.0_151If installing Java from a remote URL during Mesos deployment, this must be set to the directory name of the untarred JDK.
alluxio.integration.mesos.jdk.urlLOCALA URL from which to install the JDK during Mesos deployment. Defaults to LOCAL, which tells Mesos to use the local JDK on the system. When using this property, alluxio.integration.mesos.jdk.path must also be set correctly.
alluxio.integration.mesos.master.nameAlluxioMasterThe name of the master process to use within Mesos.
alluxio.integration.mesos.master.node.count1The number of Alluxio master processes to run within Mesos.
alluxio.integration.mesos.principalalluxioThe Mesos principal for the Alluxio Mesos Framework.
alluxio.integration.mesos.role*Mesos role for the Alluxio Mesos Framework.
alluxio.integration.mesos.secretSecret token for authenticating with Mesos.
alluxio.integration.mesos.userThe Mesos user for the Alluxio Mesos Framework. Defaults to the current user.
alluxio.integration.mesos.worker.nameAlluxioWorkerThe name of the worker process to use within Mesos.
alluxio.integration.worker.resource.cpu1The number of CPUs to run an Alluxio worker for the YARN framework.
alluxio.integration.worker.resource.mem1024MBThe amount of memory to run an Alluxio worker for the YARN framework.
alluxio.integration.yarn.workers.per.host.max1The number of workers to run on an Alluxio host for the YARN framework.
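
For example, to request larger containers when deploying on YARN, the resource properties above could be overridden as follows. This is a minimal sketch: the CPU and memory figures are illustrative, not recommendations.

alluxio.integration.master.resource.cpu=2
alluxio.integration.master.resource.mem=4096MB
alluxio.integration.worker.resource.cpu=4
alluxio.integration.worker.resource.mem=8192MB
alluxio.integration.yarn.workers.per.host.max=1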

Security Configuration

The security configuration specifies information regarding the security features, such as authentication and file permission. Settings for authentication take effect for master, worker, and user. Settings for file permission only take effect for master. See Security for more information about security features.

Property NameDefaultDescription
alluxio.security.authentication.custom.provider.classThe class to provide customized authentication implementation, when alluxio.security.authentication.type is set to CUSTOM. It must implement the interface ‘alluxio.security.authentication.AuthenticationProvider’.
alluxio.security.authentication.typeSIMPLEThe authentication mode. Currently three modes are supported: NOSASL, SIMPLE, CUSTOM. The default value SIMPLE indicates that simple authentication is enabled: the server trusts whoever the client claims to be.
alluxio.security.authorization.permission.enabledtrueWhether to enable access control based on file permission.
alluxio.security.authorization.permission.supergroupsupergroupThe super group of Alluxio file system. All users in this group have super permission.
alluxio.security.authorization.permission.umask022The umask of creating file and directory. The initial creation permission is 777, and the difference between directory and file is 111. So for default umask value 022, the created directory has permission 755 and file has permission 644.
alluxio.security.group.mapping.cache.timeout1minTime for cached group mapping to expire.
alluxio.security.group.mapping.classalluxio.security.group.provider.ShellBasedUnixGroupsMappingThe class to provide the user-to-groups mapping service. The master uses it to look up the various group memberships of a given user. It must implement the interface ‘alluxio.security.group.GroupMappingService’. The default implementation executes the ‘groups’ shell command to fetch the group memberships of a given user.
alluxio.security.login.impersonation.usernameHDFS_USERWhen alluxio.security.authentication.type is set to SIMPLE or CUSTOM, a user application uses this property to indicate the IMPERSONATED user requesting the Alluxio service. If it is not set explicitly, or set to NONE, impersonation will not be used. A special value of ‘HDFS_USER’ can be specified to impersonate the Hadoop client user.
alluxio.security.login.usernameWhen alluxio.security.authentication.type is set to SIMPLE or CUSTOM, user application uses this property to indicate the user requesting Alluxio service. If it is not set explicitly, the OS login user will be used.
alluxio.security.stale.channel.purge.interval3dayThe interval after which inactive client channels are regarded as unauthenticated. Such channels will reauthenticate with their target master upon being used for new RPCs.
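
For example, a deployment that keeps the default SIMPLE authentication but tightens default permissions could set the following in alluxio-site.properties. This is a minimal sketch: the umask value is illustrative. Following the umask rule above, a umask of 027 yields permission 750 for new directories and 640 for new files.

alluxio.security.authentication.type=SIMPLE
alluxio.security.authorization.permission.enabled=true
alluxio.security.authorization.permission.supergroup=supergroup
alluxio.security.authorization.permission.umask=027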