SmartJoins

Introduced in: v3.4.5, v3.5.0

This feature is only available in the Enterprise Edition.

SmartJoins allow executing co-located join operations among identically sharded collections.

Cluster joins without being smart

When doing joins in an ArangoDB cluster, data has to be exchanged between different servers. Joins between different collections in a cluster normally require roundtrips between the shards of these collections for fetching the data. Requests are routed through an extra coordinator hop.

For example, with two collections c1 and c2 with 4 shards each, the coordinator will initially contact the 4 shards of c1. In order to perform the join, the DBServer nodes which manage the actual data of c1 need to pull the data from the other collection, c2. This causes extra roundtrips via the coordinator, which will then pull the data for c2 from the responsible shards:

  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1");
  Query String:
   FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1

  Execution plan:
   Id   NodeType                  Site   Est.   Comment
    1   SingletonNode             DBS       1   * ROOT
    3   EnumerateCollectionNode   DBS       0   - FOR doc2 IN c2   /* full collection scan, 4 shard(s) */
   14   RemoteNode                COOR      0   - REMOTE
   15   GatherNode                COOR      0   - GATHER
    8   ScatterNode               COOR      0   - SCATTER
    9   RemoteNode                DBS       0   - REMOTE
    7   IndexNode                 DBS       0   - FOR doc1 IN c1   /* primary index scan, 4 shard(s) */
   10   RemoteNode                COOR      0   - REMOTE
   11   GatherNode                COOR      0   - GATHER
    6   ReturnNode                COOR      0   - RETURN doc1

This is the general query execution, and it makes sense if there is no further information available about how the data is actually distributed to the individual shards. It works in case c1 and c2 have different numbers of shards, or use different shard keys or strategies. However, it comes with the additional cost of having to do 4 x 4 requests to perform the join.
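To make that fallback concrete, here is a minimal sketch using two hypothetical collections d1 and d2 whose shard counts differ. The optimizer cannot prove co-location for them, so any join between them has to use the general execution shown above:

  arangosh> // d1 and d2 use different numbers of shards, so their data
  arangosh> // cannot be co-located per shard key value
  arangosh> db._create("d1", {numberOfShards: 4, shardKeys: ["_key"]});
  arangosh> db._create("d2", {numberOfShards: 3, shardKeys: ["_key"]});
  arangosh> // this join falls back to the general execution with the extra
  arangosh> // coordinator hop, i.e. 4 x 3 shard-to-shard requests
  arangosh> db._explain("FOR doc1 IN d1 FOR doc2 IN d2 FILTER doc1._key == doc2._key RETURN doc1");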

Sharding two collections identically using distributeShardsLike

In the specific case that the two collections have the same number of shards, the data of the two collections can be co-located on the same server for the same shard key values. In this case the extra hop via the coordinator will not be necessary.

The query optimizer will remove the extra hop for the join in case it can prove that data for the two collections is co-located.

The first step is thus to make the two collections shard their data alike. This can be achieved by making the distributeShardsLike attribute of one of the collections refer to the other collection.

Here is an example setup for this, using arangosh:

  arangosh> db._create("c1", {numberOfShards: 4, shardKeys: ["_key"]});
  arangosh> db._create("c2", {shardKeys: ["_key"], distributeShardsLike: "c1"});
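To verify the setup, the relationship can be inspected via the collection properties (properties() is a standard arangosh call; the exact output shape may vary by version):

  arangosh> // should report "c1", confirming that c2 follows c1's sharding
  arangosh> db.c2.properties().distributeShardsLike;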

Now the collections c1 and c2 will not only have the same shard keys, but they will also locate their data for the same shard key values on the same server.

Let’s check how the data actually gets distributed now. We first confirm that the two collections have 4 shards each, which in this example are evenly distributed across two servers:

  arangosh> db.c1.shards(true)
  {
    "s2011661" : [
      "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
    ],
    "s2011662" : [
      "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
    ],
    "s2011663" : [
      "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
    ],
    "s2011664" : [
      "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
    ]
  }
  arangosh> db.c2.shards(true)
  {
    "s2011666" : [
      "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
    ],
    "s2011667" : [
      "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
    ],
    "s2011668" : [
      "PRMR-64d19f43-3aa0-4abb-81f6-4b9966d32175"
    ],
    "s2011669" : [
      "PRMR-5f30caa0-4c93-4fdd-98f3-a2130c1447df"
    ]
  }

Because we have told both collections to distribute their data alike, their shards will now also be populated alike:

  arangosh> for (i = 0; i < 100; ++i) {
    db.c1.insert({ _key: "test" + i });
    db.c2.insert({ _key: "test" + i });
  }
  arangosh> db.c1.count(true);
  {
    "s2011664" : 22,
    "s2011661" : 21,
    "s2011663" : 27,
    "s2011662" : 30
  }
  arangosh> db.c2.count(true);
  {
    "s2011669" : 22,
    "s2011666" : 21,
    "s2011668" : 27,
    "s2011667" : 30
  }

We can see that shard 1 of c1 (“s2011664”) has the same number of documents as shard 1 of c2 (“s2011669”), that shard 2 of c1 (“s2011661”) has the same number of documents as shard 2 of c2 (“s2011666”), etc. Additionally, we can see from the shard-to-server distribution above that the corresponding shards from c1 and c2 always reside on the same node. This is a precondition for running joins locally, and thanks to the effects of distributeShardsLike it is now satisfied!
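A quick sanity check for this precondition can be scripted in arangosh. This is only a sketch: it assumes that the shard names of the two collections sort into corresponding order, which holds for the sequentially numbered shard ids above:

  arangosh> var leaders = function (col) {
    var shards = col.shards(true);
    return Object.keys(shards).sort().map(function (s) { return shards[s][0]; });
  };
  arangosh> // should print true: corresponding shards share the same responsible server
  arangosh> JSON.stringify(leaders(db.c1)) === JSON.stringify(leaders(db.c2));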

SmartJoins using distributeShardsLike

With the two collections in place like this, an AQL query with a FILTER condition that compares the shard key of one collection against the shard key of the other collection by equality is eligible for the query optimizer’s “smart-join” optimization:

  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1");
  Query String:
   FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2._key RETURN doc1

  Execution plan:
   Id   NodeType                  Site   Est.   Comment
    1   SingletonNode             DBS       1   * ROOT
    3   EnumerateCollectionNode   DBS       0   - FOR doc2 IN c2   /* full collection scan, 4 shard(s) */
    7   IndexNode                 DBS       0   - FOR doc1 IN c1   /* primary index scan, 4 shard(s) */
   10   RemoteNode                COOR      0   - REMOTE
   11   GatherNode                COOR      0   - GATHER
    6   ReturnNode                COOR      0   - RETURN doc1

As can be seen above, the extra hop via the coordinator is gone here, which will mean less cluster-internal traffic and a faster response time.

SmartJoins will also work if the shard key of the second collection is not _key, and even for non-unique shard key values, e.g.:

  arangosh> db._create("c1", {numberOfShards: 4, shardKeys: ["_key"]});
  arangosh> db._create("c2", {shardKeys: ["parent"], distributeShardsLike: "c1"});
  arangosh> db.c2.ensureIndex({ type: "hash", fields: ["parent"] });
  arangosh> for (i = 0; i < 100; ++i) {
    db.c1.insert({ _key: "test" + i });
    for (j = 0; j < 10; ++j) {
      db.c2.insert({ parent: "test" + i });
    }
  }
  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1");
  Query String:
   FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1

  Execution plan:
   Id   NodeType                  Site   Est.   Comment
    1   SingletonNode             DBS       1   * ROOT
    3   EnumerateCollectionNode   DBS    2000   - FOR doc2 IN c2   /* full collection scan, 4 shard(s) */
    7   IndexNode                 DBS    2000   - FOR doc1 IN c1   /* primary index scan, 4 shard(s) */
   10   RemoteNode                COOR   2000   - REMOTE
   11   GatherNode                COOR   2000   - GATHER
    6   ReturnNode                COOR   2000   - RETURN doc1

All of the above examples used only two collections. SmartJoins will also work when joining more than two collections which have the same data distribution enforced via their distributeShardsLike attribute, and which use the shard keys as the join criteria as shown above.
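For illustration, a third collection c3 (a hypothetical addition to the setup above; the resulting execution plan is omitted here) could be joined in like this:

  arangosh> db._create("c3", {shardKeys: ["_key"], distributeShardsLike: "c1"});
  arangosh> for (i = 0; i < 100; ++i) {
    db.c3.insert({ _key: "test" + i });
  }
  arangosh> // c1, c2 and c3 share the same data distribution, so the
  arangosh> // smart-join optimization can apply to the whole chain
  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FOR doc3 IN c3 FILTER doc1._key == doc2.parent && doc1._key == doc3._key RETURN doc1");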

SmartJoins using smartJoinAttribute

In case the join on the second collection must be performed on a non-shard-key attribute, there is the option to specify a smartJoinAttribute for the collection. Note that setting distributeShardsLike is still required in this case, and that only a single shardKeys attribute can be used. Additionally, the single attribute name specified in the shardKeys attribute must then end with a colon character.

This smartJoinAttribute must be populated for all documents in the collection, and must always contain a string value. The value of the _key attribute for each document must consist of the value of the smartJoinAttribute, a colon character and then some other user-defined key component.

The setup thus becomes:

  arangosh> db._create("c1", {numberOfShards: 4, shardKeys: ["_key"]});
  arangosh> db._create("c2", {shardKeys: ["_key:"], smartJoinAttribute: "parent", distributeShardsLike: "c1"});
  arangosh> db.c2.ensureIndex({ type: "hash", fields: ["parent"] });
  arangosh> for (i = 0; i < 100; ++i) {
    db.c1.insert({ _key: "test" + i });
    db.c2.insert({ _key: "test" + i + ":" + "ownKey" + i, parent: "test" + i });
  }

Failure to populate the smartJoinAttribute, or populating it with a non-string value, will lead to the document being rejected on insert, update or replace. Similarly, failure to prefix a document’s _key attribute value with the value of the smartJoinAttribute will also lead to the document being rejected:

  arangosh> db.c2.insert({ parent: 123 });
  JavaScript exception in file './js/client/modules/@arangodb/arangosh.js' at 99,7: ArangoError 4008: smart join attribute not given or invalid
  arangosh> db.c2.insert({ _key: "123:test1", parent: "124" });
  JavaScript exception in file './js/client/modules/@arangodb/arangosh.js' at 99,7: ArangoError 4007: shard key value must be prefixed with the value of the smart join attribute
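For contrast, a document that satisfies both rules, i.e. a string value in the smartJoinAttribute and a _key prefixed with that value plus a colon, is accepted (a minimal sketch):

  arangosh> // accepted: "parent" is a string, and _key starts with "test1:"
  arangosh> db.c2.insert({ _key: "test1:myOwnKey", parent: "test1" });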

The join can now be performed via the collection’s smartJoinAttribute:

  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1")
  Query String:
   FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == doc2.parent RETURN doc1

  Execution plan:
   Id   NodeType                  Site   Est.   Comment
    1   SingletonNode             DBS       1   * ROOT
    3   EnumerateCollectionNode   DBS     101   - FOR doc2 IN c2   /* full collection scan, 4 shard(s) */
    7   IndexNode                 DBS     101   - FOR doc1 IN c1   /* primary index scan, 4 shard(s) */
   10   RemoteNode                COOR    101   - REMOTE
   11   GatherNode                COOR    101   - GATHER
    6   ReturnNode                COOR    101   - RETURN doc1

Restricting SmartJoins to a single shard

If a FILTER condition is used on one of the shard keys, the optimizer will also try to restrict the queries to just the required shards:

  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == 'test' && doc1._key == doc2.value RETURN doc1");
  Query String:
   FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key == 'test' && doc1._key == doc2.value
   RETURN doc1

  Execution plan:
   Id   NodeType        Site   Est.   Comment
    1   SingletonNode   DBS       1   * ROOT
    8   IndexNode       DBS       1   - FOR doc1 IN c1   /* primary index scan, shard: s2010246 */
    7   IndexNode       DBS       1   - FOR doc2 IN c2   /* primary index scan, scan only, shard: s2010253 */
   12   RemoteNode      COOR      1   - REMOTE
   13   GatherNode      COOR      1   - GATHER
    6   ReturnNode      COOR      1   - RETURN doc1

Limitations

The SmartJoin optimization is currently triggered only for data selection queries, but not for any data-manipulation operations such as INSERT, UPDATE, REPLACE, REMOVE or UPSERT, nor for traversals, subqueries or views.

It will only be applied when joining two collections with an identical sharding setup. This requires the second collection to be created with its distributeShardsLike attribute pointing to the first collection.

It is restricted to be used with simple shard key attributes (such as _key, productId), but not with nested attributes (e.g. name.first). There should be exactly one shard key attribute defined for each collection.

Finally, the SmartJoin optimization requires that the collections are joined on theirshard key attributes (or smartJoinAttribute) using an equality comparison.
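As an illustration, a query like the following compares the shard keys with a range operator instead of equality, so it is not eligible and will execute as a general cluster join (a sketch; the plan output is omitted):

  arangosh> // not eligible for the smart-join optimization: < instead of ==
  arangosh> db._explain("FOR doc1 IN c1 FOR doc2 IN c2 FILTER doc1._key < doc2._key RETURN doc1");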