Features and Improvements

The following list shows in detail which features have been added or improved in ArangoDB 3.3. ArangoDB 3.3 also contains several bugfixes that are not listed here.

Datacenter-to-datacenter replication (DC2DC)

Every company needs a disaster recovery plan for all important systems. This is true from small units like single processes running in some container to the largest distributed architectures. For databases in particular this usually involves a mixture of fault-tolerance, redundancy, regular backups and emergency plans. The larger a data store, the more difficult it is to come up with a good strategy.

Therefore, it is desirable to be able to run a distributed database in one data-center and replicate all transactions to another data-center in some way. Often, transaction logs are shipped over the network to replicate everything in another, identical system in the other data-center. Some distributed data stores have built-in support for multiple data-center awareness and can replicate between data-centers in a fully automatic fashion.

ArangoDB 3.3 takes an evolutionary step forward by introducing multi-data-center support, which is asynchronous data-center to data-center replication. Our solution is asynchronous and scales to arbitrary cluster sizes, provided your network link between the data-centers has enough bandwidth. It is fault-tolerant without a single point of failure and includes a lot of metrics for monitoring in a production scenario.

DC2DC is available in the Enterprise edition.

Encrypted backups

Arangodump can now create encrypted backups using AES256 for encryption. The encryption key can be read from a file or from a generator program. It works in single server and cluster mode.

Example for a non-encrypted backup (everyone with access to the backup will be able to read it):

  arangodump --collection "secret" dump

In order to create an encrypted backup, add the --encryption.keyfile option when invoking arangodump:

  arangodump --collection "secret" dump --encryption.keyfile ~/SECRET-KEY

The key must be exactly 32 bytes long (required by the AES block cipher).
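One way to produce such a key is to write 32 random bytes to a file, for example with OpenSSL (assuming it is available; any other method that yields exactly 32 bytes works just as well):

  openssl rand -out ~/SECRET-KEY 32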

Note that arangodump will not store the key anywhere. It is the responsibility of the user to find a safe place for the key. However, arangodump will store the used encryption method in a file named ENCRYPTION in the dump directory. That way arangorestore can later find out whether it is dealing with an encrypted dump or not.
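For example, for the encrypted dump created above, inspecting that file should show the cipher in use, which is also the one referenced in the arangorestore error message further below:

  cat dump/ENCRYPTION
  aes-256-ctr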

Trying to restore the encrypted dump without specifying the key will fail:

  arangorestore --collection "secret-collection" dump --create-collection true

arangorestore will complain with:

  the dump data seems to be encrypted with aes-256-ctr, but no key information was specified to decrypt the dump
  it is recommended to specify either --encryption.keyfile or --encryption.key-generator when invoking arangorestore with an encrypted dump

It is required to use the exact same key when restoring the data. Again, this is done by providing the --encryption.keyfile parameter:

  arangorestore --collection "secret-collection" dump --create-collection true --encryption.keyfile ~/SECRET-KEY

Using a different key will lead to the backup being non-recoverable.

Note that encrypted backups can be used together with the already existing RocksDB encryption-at-rest feature, but they can also be used for the MMFiles engine, which does not have encryption-at-rest.

Encrypted backups are available in the Enterprise edition.

Server-level replication

ArangoDB has supported asynchronous replication since version 1.4, but replicating from a master server with multiple databases required manual setup on the slave for each individual database to replicate. When a new database was created on the master, one needed to take action on the slave to ensure that data for that database actually got replicated. Replication on the slave also was not aware of when a database was dropped on the master.

3.3 adds server-level replication, which will replicate the current and future databases from the master to the slave automatically after the initial setup.

In order to set up global replication on a 3.3 slave for all databases of a given 3.3 master, there is now the so-called globalApplier. It has the same interface as the existing applier, but it will replicate from all databases of the master and not just a single one.

In order to start the replication on the slave and make it replicate all databases from a given master, use these commands on the slave:

  var replication = require("@arangodb/replication");
  replication.setupReplicationGlobal({
    endpoint: "tcp://127.0.0.1:8529",
    username: "root",
    password: "",
    autoStart: true
  });

To check if the applier is running, also use the globalApplier object:

  replication.globalApplier.state().state

The server-level replication requires both the master and slave servers to be ArangoDB version 3.3 or higher.

Asynchronous failover

A resilient setup can now easily be achieved by running a pair of connected servers, of which one instance becomes the master and the other an asynchronously replicating slave, with automatic failover between them.

Two servers are connected via asynchronous replication. One of the servers is elected leader, and the other one is made a follower automatically. At startup, the two servers fight for leadership. The follower will automatically start replication from the master for all available databases, using the server-level replication introduced in 3.3.

When the master goes down, this is automatically detected by an agency instance, which is also started in this mode. This instance will make the previous follower stop its replication and make it the new leader.

The follower will automatically deny all read and write requests from client applications. Only the replication itself is allowed to access the follower's data until the follower becomes a new leader.

When sending a request to read or write data on a follower, the follower will always respond with HTTP 503 (Service Unavailable) and provide the address of the current leader. Client applications and drivers can use this information to then make a follow-up request to the proper leader:

  HTTP/1.1 503 Service Unavailable
  X-Arango-Endpoint: http://[::1]:8531
  ....

Client applications can also detect who the current leader and the followers are by calling the /_api/cluster/endpoints REST API. This API is accessible on leaders and followers alike.
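For example, a client could query this API directly over HTTP; the host and port used here are placeholders for whichever of the two servers is reachable:

  curl http://127.0.0.1:8529/_api/cluster/endpoints

The response is a JSON document with a list of endpoints; the current leader is expected to be listed first, so clients can direct follow-up requests there.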

The ArangoDB starter supports starting two servers with asynchronous replication and failover out of the box.
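For instance, a resilient pair can be brought up by running the starter on each machine in its resilient single-server mode; the mode name and the host addresses below are assumptions and may differ depending on the starter version:

  arangodb --starter.mode=resilientsingle --starter.join hostA,hostB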

The arangojs driver for JavaScript, the Go driver and the Java driver for ArangoDB support automatic failover in case the currently accessed server endpoint responds with HTTP 503.

RocksDB throttling

ArangoDB 3.3 allows write operations to the RocksDB engine to be throttled, in order to prevent longer write stalls. The throttling is adaptive, meaning that it automatically adapts to the actual write rate. This results in much more stable response times, which is better for client applications and cluster health tests, because timeouts caused by write stalls are less likely to occur and the server is thus not mistakenly assumed to be down.

Blog article: RocksDB smoothing for ArangoDB customers

Faster shard creation in cluster

When using a cluster, one normally wants resilience, so replicationFactor is set to at least 2. The number of shards is often set to rather high values when creating collections.

Creating a collection in the cluster will make the coordinator store the setup metadata of the new collection in the agency first. Subsequently, all database servers of the cluster will detect that there is work to do and will begin creating the shards. This will first happen for the shard leaders. For each shard leader that finishes with the setup, the synchronous replication with its followers is then established. That will make sure that every future data modification will not become effective on the leader only, but also on all the followers.

In 3.3 this setup protocol has been given some shortcuts for the initial shard creation, which speeds up collection creation by roughly 50 to 60 percent.
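As an illustration, the speedup is most noticeable when creating a collection with a higher shard count and a replication factor of 2 or more, for example via the HTTP API (the endpoint address and the values shown are placeholders):

  curl -X POST --data '{"name": "test", "numberOfShards": 16, "replicationFactor": 2}' \
    http://127.0.0.1:8529/_api/collection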

LDAP authentication

The LDAP authentication module in the Enterprise edition has been enhanced. The following options have been added to it:

  • the option --server.local-authentication controls whether the local _users collection is also used for looking up users. This is also the default behavior. If the authentication shall be restricted to just the LDAP directory, the option can be set to false, and arangod will then not make any queries to its _users collection when looking up users.

  • the option --server.authentication-timeout controls the expiration time for cached LDAP user information entries in arangod.

  • basic role support has been added for the LDAP module in the Enterprise edition. New configuration options for LDAP in 3.3 are:

    • --ldap.roles-attribute-name
    • --ldap.roles-transformation
    • --ldap.roles-search
    • --ldap.roles-include
    • --ldap.roles-exclude
    • --ldap.superuser-role

Please refer to LDAP for a detailed explanation.
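A minimal sketch of how the role options might be combined when starting arangod; the attribute and role names below are hypothetical, and the usual LDAP connection options (server address, bind credentials) still need to be provided:

  # hypothetical values; adjust to the layout of your LDAP directory
  arangod \
    --ldap.enabled true \
    --ldap.roles-attribute-name memberOf \
    --ldap.roles-include "^arango-" \
    --ldap.roles-exclude "^arango-test" \
    --ldap.superuser-role arango-admins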

Miscellaneous features

  • when creating a collection in the cluster, there is now an optional parameter enforceReplicationFactor: when set, this parameter enforces that the collection will only be created if there are enough database servers available for the desired replicationFactor.
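A sketch of how this might be passed when creating a collection over the HTTP API; whether it is supplied as a query parameter as shown here is an assumption, so please check the collection creation documentation for the exact placement:

  curl -X POST --data '{"name": "important", "numberOfShards": 4, "replicationFactor": 3}' \
    "http://127.0.0.1:8529/_api/collection?enforceReplicationFactor=true"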

  • AQL DISTINCT no longer changes the order of previous (sorted) results

Previously the implementation of AQL DISTINCT stored all encountered values in a hash table internally. When done, the final results were returned in the order dictated by the hash table that was used to store the keys. This order was more or less unpredictable. Though this was documented behavior, it was inconvenient for end users.

3.3 does not change the sort order of the result anymore when DISTINCT is used.

  • Several AQL functions have been implemented in C++, which can help save memory and CPU time for converting the function arguments and results. The following functions have been ported:

    • LEFT
    • RIGHT
    • SUBSTRING
    • TRIM
    • MATCHES
  • The ArangoShell prompt substitution characters have been extended. Now the following extra substitutions can be used for the arangosh prompt:

    • ‘%t’: current time as timestamp
    • ‘%a’: elapsed time since ArangoShell start in seconds
    • ‘%p’: duration of last command in seconds

For example, to show the execution time of the last command executed in arangosh in the shell’s prompt, start arangosh using:
  arangosh --console.prompt "%E@%d %p> "
  • There are new startup options for the logging to aid debugging and error reporting:

    • --log.role: will show the one-letter code of the server role (A = agent, C = coordinator, …). This is especially useful when aggregating logs.

The existing roles used in logs are:

  - U: undefined/unclear (used at startup)
  - S: single server
  - C: coordinator
  - P: primary
  - A: agent
  • --log.line-number true: this option will now additionally show the name of the C++ function that triggered the log message (file name and line number were already logged in previous versions)

  • --log.thread-name true: this new option will log the name of the ArangoDB thread that triggered the log message. Will have meaningful output on Linux only
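For example, all three options can be combined when starting the server to get the most context per log line (the database directory is a placeholder):

  arangod --log.role true --log.line-number true --log.thread-name true \
    --database.directory /path/to/data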

  • make the ArangoShell (arangosh) refill its collection cache when a yet-unknown collection is first accessed. This fixes the following problem that occurred when working with the shell while a new collection was added in another shell or by another process:
  arangosh1> db._collections();   // shell1 lists all collections
  arangosh2> db._create("test");  // shell2 now creates a new collection 'test'
  arangosh1> db.test.insert({});  // shell1 is not aware of the collection created
                                  // in shell2, so the insert will fail