Region Based Sharding

This guide follows on from the Get Started guides. Please make sure that you have a local installation ready. You should also have already gone through the MoveTables and Resharding tutorials. The commands in this guide also assume you have set up the shell aliases contained in this example's env.sh.

Introduction

Having gone through the Resharding tutorial, you should be familiar with VSchema and Vindexes. In this tutorial, we will perform resharding on an existing keyspace using a location-based vindex. We will create 4 shards: -40, 40-80, 80-c0, c0-. The location will be denoted by a country column in the customer table.
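
The region_json vindex used in this guide builds each keyspace id from two parts: a leading region byte derived from the country column via a country-to-region map, and a hash of the id column. Only the leading byte is compared against the shard ranges, so all rows for a given country land on the same shard. A sketch, using values you will see in the lookup table later in this guide:

  # how region_json builds a keyspace id (region_bytes = 1):
  #   region byte for 'United States'      = 0x01  (from the country map)
  #   hash of id 1                         = 0x166B40B44ABA4BD6
  #   keyspace id for (1, 'United States') = 0x01166B40B44ABA4BD6
  # the first byte 0x01 falls in the range [0x00, 0x40), i.e. shard -40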

Create and Start the Cluster

Start by copying the region_sharding examples included with Vitess to your preferred location and running the 101_initial_cluster.sh script:

  cp -r <vitess source path>/examples ~/my-vitess-example/examples
  cp -r <vitess source path>/web ~/my-vitess-example
  cd ~/my-vitess-example/examples/region_sharding
  ./101_initial_cluster.sh
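
If the script completes successfully, you will have an unsharded keyspace called main with a single shard 0. As a quick sanity check (a sketch, assuming the env.sh aliases are loaded), you can list the keyspaces and confirm that main appears in the output:

  $ vtctldclient GetKeyspaces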

Initial Schema

The 101 script created the customer table in the unsharded main keyspace. This is the table that we will be sharding by country.

We can connect to our new cluster, using the mysql alias set up by env.sh, to confirm our current schema:

  $ mysql --binary-as-hex=false
  ...
  mysql> show databases;
  +--------------------+
  | Database           |
  +--------------------+
  | main               |
  | information_schema |
  | mysql              |
  | sys                |
  | performance_schema |
  +--------------------+
  5 rows in set (0.00 sec)

  mysql> use main;
  Database changed
  mysql> show tables;
  +----------------+
  | Tables_in_main |
  +----------------+
  | customer       |
  +----------------+
  1 row in set (0.00 sec)

  mysql> show create table customer\G
  *************************** 1. row ***************************
         Table: customer
  Create Table: CREATE TABLE `customer` (
    `id` int NOT NULL,
    `fullname` varbinary(256) DEFAULT NULL,
    `nationalid` varbinary(256) DEFAULT NULL,
    `country` varbinary(256) DEFAULT NULL,
    PRIMARY KEY (`id`)
  ) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
  1 row in set (0.00 sec)

Creating Test Data

Let’s now create some test data:

  $ mysql < ./insert_customers.sql
  $ mysql --table < ./show_initial_data.sql
  +----+------------------+-------------+---------------+
  | id | fullname         | nationalid  | country       |
  +----+------------------+-------------+---------------+
  |  1 | Philip Roth      | 123-456-789 | United States |
  |  2 | Gary Shteyngart  | 234-567-891 | United States |
  |  3 | Margaret Atwood  | 345-678-912 | Canada        |
  |  4 | Alice Munro      | 456-789-123 | Canada        |
  |  5 | Albert Camus     | 912-345-678 | France        |
  |  6 | Colette          | 102-345-678 | France        |
  |  7 | Hermann Hesse    | 304-567-891 | Germany       |
  |  8 | Cornelia Funke   | 203-456-789 | Germany       |
  |  9 | Cixin Liu        | 789-123-456 | China         |
  | 10 | Jian Ma          | 891-234-567 | China         |
  | 11 | Haruki Murakami  | 405-678-912 | Japan         |
  | 12 | Banana Yoshimoto | 506-789-123 | Japan         |
  | 13 | Arundhati Roy    | 567-891-234 | India         |
  | 14 | Shashi Tharoor   | 678-912-345 | India         |
  | 15 | Andrea Hirata    | 607-891-234 | Indonesia     |
  | 16 | Ayu Utami        | 708-912-345 | Indonesia     |
  +----+------------------+-------------+---------------+

Prepare For Resharding

Now that we have some data in our unsharded main keyspace, let’s go ahead and perform the setup needed for resharding. The initial vschema is unsharded and simply lists the customer table:

  $ vtctldclient GetVSchema main
  {
    "sharded": false,
    "vindexes": {},
    "tables": {
      "customer": {
        "type": "",
        "column_vindexes": [],
        "auto_increment": null,
        "columns": [],
        "pinned": "",
        "column_list_authoritative": false,
        "source": ""
      }
    },
    "require_explicit_routing": false
  }

We are next going to prepare for a sharded vschema in the cluster by editing the main_vschema_sharded.json file and updating the region_map key's value to point to the filesystem path where the countries.json file resides on your machine. For example (relative paths are OK):

  1. "region_map": "./countries.json",

We then run the 201 script:

  ./201_main_sharded.sh

That script applies our sharded vschema, as defined in the main_vschema_sharded.json file, and creates a lookup vindex using the CreateLookupVindex command with the definition found in the lookup_vindex.json file.
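
Under the hood, the 201 script runs something along these lines (a sketch; see the script itself for the exact invocations):

  # apply the sharded vschema to the main keyspace
  vtctldclient ApplyVSchema --vschema-file=main_vschema_sharded.json main
  # create and backfill the lookup vindex defined in lookup_vindex.json
  vtctlclient CreateLookupVindex -- --tablet_types=PRIMARY main "$(cat lookup_vindex.json)"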

That file is where we both define the lookup vindex and associate it with the customer table in the main keyspace:

  $ cat ./lookup_vindex.json
  {
    "sharded": true,
    "vindexes": {
      "customer_region_lookup": {
        "type": "consistent_lookup_unique",
        "params": {
          "table": "main.customer_lookup",
          "from": "id",
          "to": "keyspace_id"
        },
        "owner": "customer"
      }
    },
    "tables": {
      "customer": {
        "column_vindexes": [
          {
            "column": "id",
            "name": "customer_region_lookup"
          }
        ]
      }
    }
  }

Now if we look at the main keyspace’s vschema again we can see that it now includes the region_vdx vindex and a lookup vindex called customer_region_lookup:

  $ vtctldclient GetVSchema main
  {
    "sharded": true,
    "vindexes": {
      "customer_region_lookup": {
        "type": "consistent_lookup_unique",
        "params": {
          "from": "id",
          "table": "main.customer_lookup",
          "to": "keyspace_id"
        },
        "owner": "customer"
      },
      "hash": {
        "type": "hash",
        "params": {},
        "owner": ""
      },
      "region_vdx": {
        "type": "region_json",
        "params": {
          "region_bytes": "1",
          "region_map": "./countries.json"
        },
        "owner": ""
      }
    },
    "tables": {
      "customer": {
        "type": "",
        "column_vindexes": [
          {
            "column": "",
            "name": "region_vdx",
            "columns": [
              "id",
              "country"
            ]
          },
          {
            "column": "id",
            "name": "customer_region_lookup",
            "columns": []
          }
        ],
        "auto_increment": null,
        "columns": [],
        "pinned": "",
        "column_list_authoritative": false,
        "source": ""
      },
      "customer_lookup": {
        "type": "",
        "column_vindexes": [
          {
            "column": "id",
            "name": "hash",
            "columns": []
          }
        ],
        "auto_increment": null,
        "columns": [],
        "pinned": "",
        "column_list_authoritative": false,
        "source": ""
      }
    },
    "require_explicit_routing": false
  }

Notice that the vschema shows a hash vindex type for the lookup table. This is automatically created by the CreateLookupVindex workflow, along with the backing table needed to hold the vindex data, which is then populated with the correct rows (for additional details on this command see the associated user guide). We can see that by checking our main database/keyspace again:

  mysql> show tables;
  +-------------------+
  | Tables_in_vt_main |
  +-------------------+
  | customer          |
  | customer_lookup   |
  +-------------------+
  2 rows in set (0.00 sec)

  mysql> describe customer_lookup;
  +-------------+----------------+------+-----+---------+-------+
  | Field       | Type           | Null | Key | Default | Extra |
  +-------------+----------------+------+-----+---------+-------+
  | id          | int(11)        | NO   | PRI | NULL    |       |
  | keyspace_id | varbinary(128) | YES  |     | NULL    |       |
  +-------------+----------------+------+-----+---------+-------+
  2 rows in set (0.01 sec)

  mysql> select id, hex(keyspace_id) from customer_lookup;
  +----+--------------------+
  | id | hex(keyspace_id)   |
  +----+--------------------+
  |  1 | 01166B40B44ABA4BD6 |
  |  2 | 0106E7EA22CE92708F |
  |  3 | 024EB190C9A2FA169C |
  |  4 | 02D2FD8867D50D2DFE |
  |  5 | 4070BB023C810CA87A |
  |  6 | 40F098480AC4C4BE71 |
  |  7 | 41FB8BAAAD918119B8 |
  |  8 | 41CC083F1E6D9E85F6 |
  |  9 | 80692BB9BF752B0F58 |
  | 10 | 80594764E1A2B2D98E |
  | 11 | 81AEFC44491CFE474C |
  | 12 | 81D3748269B7058A0E |
  | 13 | C062DCE203C602F358 |
  | 14 | C0ACBFDA0D70613FC4 |
  | 15 | C16A8B56ED414942B8 |
  | 16 | C15B711BC4CEEBF2EE |
  +----+--------------------+
  16 rows in set (0.01 sec)
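
You can also inspect the vindexes from the vtgate session itself; a sketch, assuming your Vitess version supports the SHOW VSCHEMA statements:

  mysql> show vschema vindexes on main.customer;

The output lists each vindex on the table along with its type, params, and owner.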

Now that the sharded vschema and lookup vindex and its backing table are ready, we can start tablets that will be used for our new sharded main keyspace:

  ./202_new_tablets.sh

Now we have a tablet for our original unsharded main keyspace (shard 0) and one tablet for each of the 4 shards we'll be using when we reshard the main keyspace:

  $ vtctldclient GetTablets --keyspace=main
  zone1-0000000100 main 0 primary localhost:15100 localhost:17100 [] 2023-01-24T04:31:08Z
  zone1-0000000200 main -40 primary localhost:15200 localhost:17200 [] 2023-01-24T04:45:38Z
  zone1-0000000300 main 40-80 primary localhost:15300 localhost:17300 [] 2023-01-24T04:45:38Z
  zone1-0000000400 main 80-c0 primary localhost:15400 localhost:17400 [] 2023-01-24T04:45:38Z
  zone1-0000000500 main c0- primary localhost:15500 localhost:17500 [] 2023-01-24T04:45:38Z

In this example we are deploying 1 tablet per shard and have thus disabled the semi-sync durability policy; in typical production setups each shard would consist of 3 or more tablets.
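
If you adapt this example to run 3 or more tablets per shard, you would typically re-enable semi-sync durability for the keyspace; a sketch using vtctldclient:

  vtctldclient SetKeyspaceDurabilityPolicy --durability-policy=semi_sync main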

Perform Resharding

Now that our new tablets are up, we can go ahead with the resharding:

  ./203_reshard.sh

This script executes one command:

  vtctlclient Reshard -- --source_shards '0' --target_shards '-40,40-80,80-c0,c0-' --tablet_types=PRIMARY Create main.main2regions

This step copies all the data from our source main/0 shard to our new main target shards and sets up a VReplication workflow to keep the tables on the target in sync with the source.

You can learn more about what the VReplication Reshard command does and how it works in the reference page and the Resharding user-guide.

We can check the correctness of the copy using the VDiff command and the <keyspace>.<workflow> name we used for the Reshard command above:

  $ vtctlclient VDiff -- main.main2regions create
  VDiff 044e8da0-9ba4-11ed-8bc7-920702940ee0 scheduled on target shards, use show to view progress
  $ vtctlclient VDiff -- --format=json main.main2regions show last
  {
    "Workflow": "main2regions",
    "Keyspace": "main",
    "State": "completed",
    "UUID": "044e8da0-9ba4-11ed-8bc7-920702940ee0",
    "RowsCompared": 32,
    "HasMismatch": false,
    "Shards": "-40,40-80,80-c0,c0-",
    "StartedAt": "2023-01-24 05:00:26",
    "CompletedAt": "2023-01-24 05:00:27"
  }

We can take a look at the VReplication workflow’s progress and status using the Progress action:

  $ vtctlclient Reshard -- Progress main.main2regions
  The following vreplication streams exist for workflow main.main2regions:
  id=1 on 40-80/zone1-0000000300: Status: Running. VStream Lag: 0s.
  id=1 on -40/zone1-0000000200: Status: Running. VStream Lag: 0s.
  id=1 on 80-c0/zone1-0000000400: Status: Running. VStream Lag: 0s.
  id=1 on c0-/zone1-0000000500: Status: Running. VStream Lag: 0s.

We now have a running stream from the source tablet (100) to each of our new main target shards that will keep the tables up to date with the source shard (0).

You can see greater detail about the VReplication workflow and the individual streams using the following command:

  vtctlclient Workflow -- main.main2regions show

Cutover

Once the VReplication workflow’s copy phase is complete, we can start cutting-over traffic. This is done via the SwitchTraffic actions included in the following scripts:

  ./204_switch_reads.sh
  ./205_switch_writes.sh
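
These scripts wrap the Reshard SwitchTraffic action, first for the read-only tablet types and then for the primaries; roughly (a sketch; check the scripts for the exact flags):

  # 204: switch reads
  vtctlclient Reshard -- --tablet_types=rdonly,replica SwitchTraffic main.main2regions
  # 205: switch writes
  vtctlclient Reshard -- --tablet_types=primary SwitchTraffic main.main2regions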

Now we can look at how our data is sharded, e.g. by looking at what’s stored on the main/-40 shard:

  mysql> show vitess_tablets;
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  | Cell  | Keyspace | Shard | TabletType | State   | Alias            | Hostname  | PrimaryTermStartTime |
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  | zone1 | main     | -40   | PRIMARY    | SERVING | zone1-0000000200 | localhost | 2023-01-24T04:45:38Z |
  | zone1 | main     | 0     | PRIMARY    | SERVING | zone1-0000000100 | localhost | 2023-01-24T04:31:08Z |
  | zone1 | main     | 40-80 | PRIMARY    | SERVING | zone1-0000000300 | localhost | 2023-01-24T04:45:38Z |
  | zone1 | main     | 80-c0 | PRIMARY    | SERVING | zone1-0000000400 | localhost | 2023-01-24T04:45:38Z |
  | zone1 | main     | c0-   | PRIMARY    | SERVING | zone1-0000000500 | localhost | 2023-01-24T04:45:38Z |
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  5 rows in set (0.00 sec)

  mysql> use main/-40;
  Database changed
  mysql> select * from customer;
  +----+-----------------+-------------+---------------+
  | id | fullname        | nationalid  | country       |
  +----+-----------------+-------------+---------------+
  |  1 | Philip Roth     | 123-456-789 | United States |
  |  2 | Gary Shteyngart | 234-567-891 | United States |
  |  3 | Margaret Atwood | 345-678-912 | Canada        |
  |  4 | Alice Munro     | 456-789-123 | Canada        |
  +----+-----------------+-------------+---------------+
  4 rows in set (0.01 sec)

  mysql> select id, hex(keyspace_id) from customer_lookup;
  +----+--------------------+
  | id | hex(keyspace_id)   |
  +----+--------------------+
  |  1 | 01166B40B44ABA4BD6 |
  |  2 | 0106E7EA22CE92708F |
  +----+--------------------+
  2 rows in set (0.00 sec)

You can see that only data from the US and Canada exists in the customer table on this shard. If you look at the other shards (40-80, 80-c0, and c0-), you will see that each contains 4 rows in the customer table.

The lookup table, however, has a different number of rows per shard. This is because we are using a hash vindex type to shard the lookup table which means that it is distributed differently from the customer table. We can see an example of this if we look at the next shard, 40-80:

  mysql> use main/40-80;
  Database changed
  mysql> select * from customer;
  +----+----------------+-------------+---------+
  | id | fullname       | nationalid  | country |
  +----+----------------+-------------+---------+
  |  5 | Albert Camus   | 912-345-678 | France  |
  |  6 | Colette        | 102-345-678 | France  |
  |  7 | Hermann Hesse  | 304-567-891 | Germany |
  |  8 | Cornelia Funke | 203-456-789 | Germany |
  +----+----------------+-------------+---------+
  4 rows in set (0.00 sec)

  mysql> select id, hex(keyspace_id) from customer_lookup;
  +----+--------------------+
  | id | hex(keyspace_id)   |
  +----+--------------------+
  |  3 | 024EB190C9A2FA169C |
  |  5 | 4070BB023C810CA87A |
  |  9 | 80692BB9BF752B0F58 |
  | 10 | 80594764E1A2B2D98E |
  | 13 | C062DCE203C602F358 |
  | 15 | C16A8B56ED414942B8 |
  | 16 | C15B711BC4CEEBF2EE |
  +----+--------------------+
  7 rows in set (0.00 sec)
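
With traffic fully switched, you can drop the shard targeting and query the keyspace as a whole; vtgate routes point lookups on id through customer_region_lookup to the single owning shard. A sketch, using a row from the data above:

  mysql> use main;
  Database changed
  mysql> select * from customer where id = 5;
  +----+--------------+-------------+---------+
  | id | fullname     | nationalid  | country |
  +----+--------------+-------------+---------+
  |  5 | Albert Camus | 912-345-678 | France  |
  +----+--------------+-------------+---------+
  1 row in set (0.00 sec)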

Cleanup

Now that our resharding work is complete, we can tear down and delete the old main/0 source shard:

  ./206_down_shard_0.sh
  ./207_delete_shard_0.sh
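
The 206 script stops the shard 0 tablet, and the 207 script then removes the now-empty shard from the topology; the latter boils down to something like this (a sketch, not the exact script):

  vtctldclient DeleteShards --recursive main/0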

All we have now is the sharded main keyspace; the original unsharded shard 0 no longer exists:

  $ vtctldclient GetTablets
  zone1-0000000200 main -40 primary localhost:15200 localhost:17200 [] 2023-01-24T04:45:38Z
  zone1-0000000300 main 40-80 primary localhost:15300 localhost:17300 [] 2023-01-24T04:45:38Z
  zone1-0000000400 main 80-c0 primary localhost:15400 localhost:17400 [] 2023-01-24T04:45:38Z
  zone1-0000000500 main c0- primary localhost:15500 localhost:17500 [] 2023-01-24T04:45:38Z

Teardown

Once you are done playing with the example, you can tear the cluster down and remove all of its resources completely:

  ./301_teardown.sh
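
The teardown script stops all of the cluster processes. If you also want to reclaim the disk space the example used, you can remove the data directory it created (a sketch; the exact path depends on the VTDATAROOT setting in your env.sh):

  rm -rf ./vtdataroot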