Cluster Administration

This section includes information related to the administration of an ArangoDB Cluster.

For a general introduction to the ArangoDB Cluster, please refer to the Cluster chapter.

There is also a detailed Cluster Administration Course available for download.

Please check the following talks as well:

  1. 10th April 2018: Fundamentals and Best Practices of ArangoDB Cluster Administration, by Kaveh Vahedipour, ArangoDB Cluster Team (Online Meetup Page & Video)
  2. 29th May 2018: Fundamentals and Best Practices of ArangoDB Cluster Administration: Part II, by Kaveh Vahedipour, ArangoDB Cluster Team (Online Meetup Page & Video)

Enabling synchronous replication

For an introduction to Synchronous Replication in the Cluster, please refer to the Cluster Architecture section.

Synchronous replication can be enabled per collection. When creating a collection you may specify the number of replicas using the replicationFactor parameter. The default value is set to 1, which effectively disables synchronous replication among DBServers.

Whenever you specify a replicationFactor greater than 1 when creating a collection, synchronous replication will be activated for this collection. The Cluster will determine suitable leaders and followers for every requested shard (numberOfShards) within the Cluster.

Example:

  127.0.0.1:8530@_system> db._create("test", {"replicationFactor": 3})

In the above case, any write operation will require 3 replicas to report success from now on.
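
To double-check the replication settings of an existing collection, you can inspect its properties from arangosh. This is a minimal sketch using the test collection created above:

  127.0.0.1:8530@_system> db._collection("test").properties();

Among other attributes, the returned object includes replicationFactor, numberOfShards and shardKeys.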

Preparing growth

You may create a collection with a higher replication factor than available DBServers. When additional DBServers become available, the shards are automatically replicated to the newly available DBServers.

To create a collection with a higher replication factor than available DBServers, set the option enforceReplicationFactor to false when creating the collection from the ArangoShell (the option is not available from the web interface), e.g.:

  db._create("test", { replicationFactor: 4 }, { enforceReplicationFactor: false });

The default value for enforceReplicationFactor is true.

Note: multiple replicas of the same shard can never coexist on the same DBServer instance.

Sharding

For an introduction to Sharding in the Cluster, please refer to the Cluster Architecture section.

The number of shards can be configured at collection creation time, e.g. in the web UI or via the ArangoDB Shell:

  127.0.0.1:8529@_system> db._create("sharded_collection", {"numberOfShards": 4});

To configure custom hashing on another attribute (the default is _key):

  127.0.0.1:8529@_system> db._create("sharded_collection", {"numberOfShards": 4, "shardKeys": ["country"]});

In the example above, ‘country’ has been used as shardKeys. This can be useful to keep the data of every country in one shard, which results in better performance for queries working on a per-country basis.

It is also possible to specify multiple shardKeys.
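
For illustration, a collection sharded by two attributes could be created as follows (the collection name sharded_collection2 and the attribute city are only examples):

  127.0.0.1:8529@_system> db._create("sharded_collection2", {"numberOfShards": 4, "shardKeys": ["country", "city"]});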

Note however that if you change the shard keys from their default ["_key"], then finding a document in the collection by its primary key involves a request to every single shard. Furthermore, in this case one can no longer prescribe the primary key value of a new document but must use the automatically generated one. This latter restriction comes from the fact that ensuring uniqueness of the primary key would be very inefficient if the user could specify the primary key.

On which DBServer in a Cluster a particular shard is kept is undefined. There is no option to configure an affinity based on certain shard keys.

Unique indexes (hash, skiplist, persistent) on sharded collections are only allowed if the fields used to determine the shard key are also included in the list of attribute paths for the index:

shardKeys | indexKeys | allowed?
a         | a         | allowed
a         | b         | not allowed
a         | a, b      | allowed
a, b      | a         | not allowed
a, b      | b         | not allowed
a, b      | a, b      | allowed
a, b      | a, b, c   | allowed
a, b, c   | a, b      | not allowed
a, b, c   | a, b, c   | allowed
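
For example, given the sharded_collection created above (shard key country), a unique index is accepted only if country is among the indexed fields. A sketch in arangosh (the persistent index type and the extra attribute city are just illustrative choices):

  // allowed: the shard key "country" is included in the indexed fields
  db.sharded_collection.ensureIndex({ type: "persistent", fields: ["country", "city"], unique: true });

  // not allowed: the shard key is missing, so this call is rejected with an error
  db.sharded_collection.ensureIndex({ type: "persistent", fields: ["city"], unique: true });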

Sharding strategy

The shardingStrategy attribute determines the sharding strategy to use for the collection. Since ArangoDB 3.4 there are different sharding strategies to select from when creating a new collection. The selected shardingStrategy value will remain fixed for the collection and cannot be changed afterwards. This is important so that the collection keeps its sharding settings and documents already distributed to shards can always be found using the same initial sharding algorithm.

The available sharding strategies are:

  • community-compat: default sharding used by the ArangoDB Community Edition before version 3.4
  • enterprise-compat: default sharding used by the ArangoDB Enterprise Edition before version 3.4
  • enterprise-smart-edge-compat: default sharding used by smart edge collections in the ArangoDB Enterprise Edition before version 3.4
  • hash: default sharding used for new collections starting from version 3.4 (excluding smart edge collections)
  • enterprise-hash-smart-edge: default sharding used for new smart edge collections starting from version 3.4

If no sharding strategy is specified, the default will be hash for all collections, and enterprise-hash-smart-edge for all smart edge collections (requires the Enterprise Edition of ArangoDB). Manually overriding the sharding strategy does not yet provide a benefit, but it may later in case other sharding strategies are added.
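
To select a strategy explicitly, pass the shardingStrategy option when creating the collection. A minimal sketch in arangosh (the collection name strategy_example is only illustrative):

  127.0.0.1:8529@_system> db._create("strategy_example", {"numberOfShards": 4, "shardingStrategy": "hash"});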

Moving/Rebalancing shards

A shard can be moved from one DBServer to another, and the entire shard distribution can be rebalanced using the corresponding buttons in the web UI.
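
Besides the web UI buttons, a shard move can also be triggered through the cluster administration HTTP API. The following is only a sketch, assuming the /_admin/cluster/moveShard endpoint; the database, collection, shard and server names are placeholders that you would look up first (e.g. via /_admin/cluster/shardDistribution):

  var arango = require("internal").arango;
  // All values below are placeholders for illustration only
  arango.POST("/_admin/cluster/moveShard", JSON.stringify({
    database: "_system",
    collection: "sharded_collection",
    shard: "s100001",
    fromServer: "DBServer0001",
    toServer: "DBServer0002"
  }));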

Replacing/Removing a Coordinator

Coordinators are effectively stateless and can be replaced, added and removed without further consideration beyond the needs of the particular installation.

To take a Coordinator out of service, stop the Coordinator's instance by issuing kill -SIGTERM <pid>.

About 15 seconds later, the cluster UI on any other Coordinator will mark the Coordinator in question as failed. Almost simultaneously, a trash bin icon will appear to the right of the name of the Coordinator. Clicking that icon will remove the Coordinator from the coordinator registry.

Any new Coordinator instance that is informed of where to find any/all Agents (--cluster.agency-endpoint <some agent endpoint>) will be integrated as a new Coordinator into the cluster. You may also just restart the Coordinator as before and it will reintegrate itself into the cluster.

Replacing/Removing a DBServer

DBServers are where the data of an ArangoDB cluster is stored. They do not publish a web UI and are not meant to be accessed by anything other than Coordinators (to perform client requests) or other DBServers (to uphold replication and resilience).

The clean way of removing a DBServer is to first relieve it of all its responsibilities for shards. This applies to followers as well as leaders of shards. The requirement for this operation is that no collection in any of the databases has a replicationFactor greater than or equal to the current number of DBServers minus one. Cleaning out DBServer004, for example, would work as follows, when issued to any Coordinator of the cluster:

curl <coord-ip:coord-port>/_admin/cluster/cleanOutServer -d '{"server":"DBServer004"}'
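
The same request can also be sent from within arangosh when connected to a Coordinator. A minimal sketch (DBServer004 is the example server name from above):

  var arango = require("internal").arango;
  // Schedules the clean-out job; the response typically contains an id identifying the job
  arango.POST("/_admin/cluster/cleanOutServer", JSON.stringify({ server: "DBServer004" }));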

After the DBServer has been cleaned out, you will find a trash bin icon to the right of the name of the DBServer on any Coordinator's UI. Clicking on it will remove the DBServer in question from the cluster.

Firing up any DBServer from a clean data directory, specifying any/all of the Agency endpoints, will integrate the new DBServer into the cluster.

To distribute shards onto the new DBServer, click the Distribute Shards button at the bottom of the Shards page in every database.

The clean-out process can be monitored using the following script, which periodically prints the number of shards that still need to be moved. It is basically a countdown to when the process finishes.

Save the code below to a file named serverCleanMonitor.js:

  /* Count the shards whose leader is still the given DBServer and print the
     number periodically until it reaches zero. */
  var dblist = db._databases();
  var internal = require("internal");
  var arango = internal.arango;
  var server = ARGUMENTS[0];
  var sleep = ARGUMENTS[1] | 0;

  if (!server) {
    print("\nNo server name specified. Provide it like:\n\narangosh <options> -- DBServerXXXX");
    process.exit();
  }

  if (sleep <= 0) sleep = 10;
  console.log("Checking shard distribution every %d seconds...", sleep);

  var count;
  do {
    count = 0;
    for (dbase in dblist) {
      // Ask the Coordinator for the shard distribution of each database
      var sd = arango.GET("/_db/" + dblist[dbase] + "/_admin/cluster/shardDistribution");
      var collections = sd.results;
      for (collection in collections) {
        var current = collections[collection].Current;
        for (shard in current) {
          if (current[shard].leader == server) {
            ++count;
          }
        }
      }
    }
    console.log("Shards to be moved away from node %s: %d", server, count);
    if (count == 0) break;
    internal.wait(sleep);
  } while (count > 0);

This script has to be executed in arangosh by issuing the following command:

  arangosh --server.username <username> --server.password <password> --javascript.execute <path/to/serverCleanMonitor.js> -- DBServer<number>

The output should be similar to the one below:

  arangosh --server.username root --server.password pass --javascript.execute ~/serverCleanMonitor.js -- DBServer0002
  [7836] INFO Checking shard distribution every 10 seconds...
  [7836] INFO Shards to be moved away from node DBServer0002: 9
  [7836] INFO Shards to be moved away from node DBServer0002: 4
  [7836] INFO Shards to be moved away from node DBServer0002: 1
  [7836] INFO Shards to be moved away from node DBServer0002: 0

The current status is logged every 10 seconds. You may adjust the interval by passing a number after the DBServer name, e.g. arangosh <options> -- DBServer0002 60 for a check every 60 seconds.

Once the count is 0 all shards of the underlying DBServer have been movedand the cleanOutServer process has finished.