Move Tables

This guide follows on from the Get Started guides. Please make sure that you have a Kubernetes Operator or local installation ready, and that you have only run the "101" step of the examples, for example 101_initial_cluster.sh in the local example. The commands in this guide also assume you have set up the shell aliases from the example, e.g. env.sh in the local example.

MoveTables is a VReplication workflow that enables you to move all or a subset of tables between keyspaces without downtime. For example, after initially deploying Vitess, your single commerce schema may grow so large that it needs to be split into multiple keyspaces (often referred to as vertical or functional sharding).

All of the command options and parameters are listed in our reference page for MoveTables.
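As a quick orientation before we begin: the vtctlclient invocations used throughout this guide follow the general shape sketched below, where (roughly speaking) the standalone -- separates vtctlclient's own arguments from the MoveTables options and action. This is an illustrative summary, not the full grammar; see the reference page for details.

  # Illustrative shape of the commands used in this guide:
  #   vtctlclient MoveTables -- [--options] <action> <target_keyspace>.<workflow_name>
  # Actions used below include Create, Progress, SwitchTraffic, and Complete.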

As a stepping stone towards splitting a single table across multiple servers (sharding), it usually makes sense to first split from having a single monolithic keyspace (commerce) to having multiple keyspaces (commerce and customer). For example, in our hypothetical ecommerce system we may know that the customer and corder tables are closely related and both growing quickly.

Let’s start by simulating this situation by loading sample data:

  # On local and operator installs:
  $ mysql < ../common/insert_commerce_data.sql

We can look at what we just inserted:

  # On local and operator installs:
  $ mysql --table < ../common/select_commerce_data.sql
  Using commerce
  Customer
  +-------------+--------------------+
  | customer_id | email              |
  +-------------+--------------------+
  |           1 | alice@domain.com   |
  |           2 | bob@domain.com     |
  |           3 | charlie@domain.com |
  |           4 | dan@domain.com     |
  |           5 | eve@domain.com     |
  +-------------+--------------------+
  Product
  +----------+-------------+-------+
  | sku      | description | price |
  +----------+-------------+-------+
  | SKU-1001 | Monitor     |   100 |
  | SKU-1002 | Keyboard    |    30 |
  +----------+-------------+-------+
  COrder
  +----------+-------------+----------+-------+
  | order_id | customer_id | sku      | price |
  +----------+-------------+----------+-------+
  |        1 |           1 | SKU-1001 |   100 |
  |        2 |           2 | SKU-1002 |    30 |
  |        3 |           3 | SKU-1002 |    30 |
  |        4 |           4 | SKU-1002 |    30 |
  |        5 |           5 | SKU-1002 |    30 |
  +----------+-------------+----------+-------+

Notice that all of the tables are currently in the commerce schema/keyspace.

Planning to Move Tables

In this scenario, we are going to add the customer keyspace in addition to the commerce keyspace we already have. This new keyspace will be backed by its own set of mysqld instances. We will then move the customer and corder tables from the commerce keyspace to the newly created customer keyspace while the product table will remain in the commerce keyspace. This operation happens online, which means that it does not block either read or write operations to the tables, except for a very small window during the final cut-over.

Show our current tablets

  $ mysql -e "show vitess_tablets"
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  | Cell  | Keyspace | Shard | TabletType | State   | Alias            | Hostname  | PrimaryTermStartTime |
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  | zone1 | commerce | 0     | PRIMARY    | SERVING | zone1-0000000100 | localhost | 2023-01-04T17:59:37Z |
  | zone1 | commerce | 0     | REPLICA    | SERVING | zone1-0000000101 | localhost |                      |
  | zone1 | commerce | 0     | RDONLY     | SERVING | zone1-0000000102 | localhost |                      |
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+

As you can see, we have three tablets running, with tablet ids 100, 101, and 102. These ids are used throughout the examples to form the tablet aliases such as zone1-0000000100.

Create New Tablets

The first step in our MoveTables operation is to deploy new tablets for our customer keyspace. By the convention used in our examples, we are going to use tablet ids 200-202, as the commerce keyspace previously used 100-102. Once the tablets have started, we will wait for the operator (k8s install) or vtorc (local install) to promote one of the new tablets to PRIMARY before proceeding:

Using Operator

  $ kubectl apply -f 201_customer_tablets.yaml

After a few minutes, all of the pods should be in the Running state:

  $ kubectl get pods
  example-commerce-x-x-zone1-vtorc-c13ef6ff-5d658d78d8-dvmnn   1/1   Running   1 (4m39s ago)   65d
  example-etcd-faf13de3-1                                      1/1   Running   1 (4m39s ago)   65d
  example-etcd-faf13de3-2                                      1/1   Running   1 (4m39s ago)   65d
  example-etcd-faf13de3-3                                      1/1   Running   1 (4m39s ago)   65d
  example-vttablet-zone1-1250593518-17c58396                   3/3   Running   1 (27s ago)     32s
  example-vttablet-zone1-2469782763-bfadd780                   3/3   Running   3 (4m39s ago)   65d
  example-vttablet-zone1-2548885007-46a852d0                   3/3   Running   3 (4m39s ago)   65d
  example-vttablet-zone1-3778123133-6f4ed5fc                   3/3   Running   1 (26s ago)     32s
  example-zone1-vtadmin-c03d7eae-7dcd4d75c7-szbwv              2/2   Running   2 (4m39s ago)   65d
  example-zone1-vtctld-1d4dcad0-6b9cd54f8f-jmdt9               1/1   Running   2 (4m39s ago)   65d
  example-zone1-vtgate-bc6cde92-856d44984b-lqfvg               1/1   Running   2 (4m6s ago)    65d
  vitess-operator-8df7cc66b-6vtk6                              1/1   Running   0               55s

Again, the operator will promote one of the tablets to PRIMARY implicitly for you.

Make sure that you restart the port-forward after the pods have finished launching:

  $ killall kubectl
  $ ./pf.sh &

Using a Local Deployment

  $ ./201_customer_tablets.sh

Show All Tablets

  $ mysql -e "show vitess_tablets"
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  | Cell  | Keyspace | Shard | TabletType | State   | Alias            | Hostname  | PrimaryTermStartTime |
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+
  | zone1 | commerce | 0     | PRIMARY    | SERVING | zone1-0000000100 | localhost | 2023-01-04T17:59:37Z |
  | zone1 | commerce | 0     | REPLICA    | SERVING | zone1-0000000101 | localhost |                      |
  | zone1 | commerce | 0     | RDONLY     | SERVING | zone1-0000000102 | localhost |                      |
  | zone1 | customer | 0     | PRIMARY    | SERVING | zone1-0000000201 | localhost | 2023-01-04T18:00:22Z |
  | zone1 | customer | 0     | REPLICA    | SERVING | zone1-0000000200 | localhost |                      |
  | zone1 | customer | 0     | RDONLY     | SERVING | zone1-0000000202 | localhost |                      |
  +-------+----------+-------+------------+---------+------------------+-----------+----------------------+

The MoveTables operation we are about to perform does not change actual query routing yet. We will use the SwitchTraffic action later to do that.

Start the Move

In this step we will create the MoveTables workflow, which copies the tables from the commerce keyspace into customer. This operation does not block any database activity; the MoveTables operation is performed online:

  $ vtctlclient MoveTables -- --source commerce --tables 'customer,corder' Create customer.commerce2customer

A few things to note:

  • In a real-world situation this process can take hours or even days to complete depending on the size of the table.
  • The workflow name (commerce2customer in this case) is arbitrary; you can name it whatever you like. You will use this name for the other MoveTables actions, such as the upcoming SwitchTraffic step.
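Once the Create action returns, you can confirm that the workflow exists by listing the workflows in the target keyspace (the same listall sub-command is used again during cleanup later in this guide):

  # Should list the commerce2customer workflow:
  $ vtctlclient Workflow customer listall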

Check Routing Rules (Optional)

To see what happens under the covers, let’s look at the routing rules that the MoveTables operation created. These are instructions used by a VTGate to determine which backend keyspace to send requests to for a given table — even when using a fully qualified table name such as commerce.customer:

  $ vtctldclient GetRoutingRules
  {
    "rules": [
      {
        "fromTable": "customer.customer@rdonly",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "commerce.corder@rdonly",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "customer",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "customer.customer@replica",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "corder@replica",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "customer.corder",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "commerce.corder@replica",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "customer@rdonly",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "commerce.customer@replica",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "corder",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "corder@rdonly",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "customer.corder@rdonly",
        "toTables": [
          "commerce.corder"
        ]
      },
      {
        "fromTable": "customer@replica",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "customer.customer",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "commerce.customer@rdonly",
        "toTables": [
          "commerce.customer"
        ]
      },
      {
        "fromTable": "customer.corder@replica",
        "toTables": [
          "commerce.corder"
        ]
      }
    ]
  }

The MoveTables operation has created routing rules that explicitly route queries against the customer and corder tables (including the fully qualified names customer.customer and customer.corder) to the respective tables in the commerce keyspace, so that for now all requests go to the original keyspace. This removes any ambiguity about where to route requests while MoveTables creates the new copies of the tables in the customer keyspace: all requests for those tables keep going to the original tables in the commerce keyspace, and any changes made to them after the MoveTables operation is executed are faithfully replicated to the new copies in the customer keyspace.
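As a quick sanity check of these rules (not part of the example scripts), you can query through vtgate using either qualified table name; because both names currently route to the same underlying table in commerce, both should return identical rows:

  # Both of these currently route to the same physical table in commerce:
  $ mysql --table -e "select * from commerce.customer"
  $ mysql --table -e "select * from customer.customer"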

Monitoring Progress (Optional)

In this example there are only a few rows in the tables, so the MoveTables operation only takes seconds. If the tables were large, however, you may need to monitor the progress of the operation. You can get a basic summary of the progress using the Progress action:

  $ vtctlclient MoveTables -- Progress customer.commerce2customer
  Copy Completed.
  The following vreplication streams exist for workflow customer.commerce2customer:
  id=1 on 0/zone1-0000000201: Status: Running. VStream Lag: 0s.

You can get detailed status and progress information using the Workflow show command:

  $ vtctlclient Workflow customer.commerce2customer show
  {
    "Workflow": "commerce2customer",
    "SourceLocation": {
      "Keyspace": "commerce",
      "Shards": [
        "0"
      ]
    },
    "TargetLocation": {
      "Keyspace": "customer",
      "Shards": [
        "0"
      ]
    },
    "MaxVReplicationLag": 1,
    "MaxVReplicationTransactionLag": 1,
    "Frozen": false,
    "ShardStatuses": {
      "0/zone1-0000000201": {
        "PrimaryReplicationStatuses": [
          {
            "Shard": "0",
            "Tablet": "zone1-0000000201",
            "ID": 1,
            "Bls": {
              "keyspace": "commerce",
              "shard": "0",
              "filter": {
                "rules": [
                  {
                    "match": "customer",
                    "filter": "select * from customer"
                  },
                  {
                    "match": "corder",
                    "filter": "select * from corder"
                  }
                ]
              }
            },
            "Pos": "7e765c5c-8c59-11ed-9d2e-7c501ea4de6a:1-83",
            "StopPos": "",
            "State": "Running",
            "DBName": "vt_customer",
            "TransactionTimestamp": 0,
            "TimeUpdated": 1672857697,
            "TimeHeartbeat": 1672857697,
            "TimeThrottled": 0,
            "ComponentThrottled": "",
            "Message": "",
            "Tags": "",
            "WorkflowType": "MoveTables",
            "WorkflowSubType": "None",
            "CopyState": null
          }
        ],
        "TabletControls": null,
        "PrimaryIsServing": true
      }
    },
    "SourceTimeZone": "",
    "TargetTimeZone": ""
  }

Validate Correctness (Optional)

We can use VDiff to perform a logical diff between the source and target to confirm that they are fully in sync:

  $ vtctlclient VDiff -- --v2 customer.commerce2customer create
  {
    "UUID": "d050262e-8c5f-11ed-ac72-920702940ee0"
  }
  $ vtctlclient VDiff -- --v2 --format=json --verbose customer.commerce2customer show last
  {
    "Workflow": "commerce2customer",
    "Keyspace": "customer",
    "State": "completed",
    "UUID": "d050262e-8c5f-11ed-ac72-920702940ee0",
    "RowsCompared": 10,
    "HasMismatch": false,
    "Shards": "0",
    "StartedAt": "2023-01-04 18:44:26",
    "CompletedAt": "2023-01-04 18:44:26",
    "TableSummary": {
      "corder": {
        "TableName": "corder",
        "State": "completed",
        "RowsCompared": 5,
        "MatchingRows": 5,
        "MismatchedRows": 0,
        "ExtraRowsSource": 0,
        "ExtraRowsTarget": 0
      },
      "customer": {
        "TableName": "customer",
        "State": "completed",
        "RowsCompared": 5,
        "MatchingRows": 5,
        "MismatchedRows": 0,
        "ExtraRowsSource": 0,
        "ExtraRowsTarget": 0
      }
    },
    "Reports": {
      "corder": {
        "0": {
          "TableName": "corder",
          "ProcessedRows": 5,
          "MatchingRows": 5,
          "MismatchedRows": 0,
          "ExtraRowsSource": 0,
          "ExtraRowsTarget": 0
        }
      },
      "customer": {
        "0": {
          "TableName": "customer",
          "ProcessedRows": 5,
          "MatchingRows": 5,
          "MismatchedRows": 0,
          "ExtraRowsSource": 0,
          "ExtraRowsTarget": 0
        }
      }
    }
  }

This can take a long time to complete on very large tables.
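For a long-running diff, you do not have to wait for completion to check on it; a sketch, using the same show sub-command as above to display all diff reports for the workflow rather than just the last one:

  # Check on diffs for this workflow while they run (sketch):
  $ vtctlclient VDiff -- --v2 --format=json customer.commerce2customer show all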

Switching Traffic

Once the MoveTables operation has completed its copy phase and is in the "running" (replicating) phase, the first step in making the changes live is to switch all query serving traffic from the old commerce keyspace to the customer keyspace for the tables we moved. Queries against the other tables will continue to route to the commerce keyspace.

  $ vtctlclient MoveTables -- SwitchTraffic customer.commerce2customer
  SwitchTraffic was successful for workflow customer.commerce2customer
  Start State: Reads Not Switched. Writes Not Switched
  Current State: All Reads Switched. Writes Switched

While we have switched all traffic in this example, you can also switch non-primary reads and writes separately by specifying the --tablet_types parameter to SwitchTraffic.
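For example, a common pattern is to switch only the non-primary read traffic first, let it soak, and then switch writes. A sketch of what that might look like:

  # Switch only replica and rdonly reads first (sketch):
  $ vtctlclient MoveTables -- --tablet_types=rdonly,replica SwitchTraffic customer.commerce2customer
  # Later, switch the remaining (primary) traffic:
  $ vtctlclient MoveTables -- SwitchTraffic customer.commerce2customer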

Check the Routing Rules (Optional)

If we now look at the routing rules after the SwitchTraffic step, we will see that all queries against the customer and corder tables will get routed to the customer keyspace:

  $ vtctldclient GetRoutingRules
  {
    "rules": [
      {
        "from_table": "commerce.corder@rdonly",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "corder@rdonly",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "customer.corder@replica",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "commerce.corder@replica",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "customer.corder@rdonly",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "customer@replica",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "customer.customer@replica",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "corder@replica",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "commerce.customer@rdonly",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "customer@rdonly",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "customer.customer@rdonly",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "commerce.customer@replica",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "corder",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "commerce.corder",
        "to_tables": [
          "customer.corder"
        ]
      },
      {
        "from_table": "customer",
        "to_tables": [
          "customer.customer"
        ]
      },
      {
        "from_table": "commerce.customer",
        "to_tables": [
          "customer.customer"
        ]
      }
    ]
  }

Reverting the Switch (Optional)

As part of the SwitchTraffic operation, Vitess automatically sets up a reverse VReplication workflow (unless you supply the --reverse_replication false flag) that copies changes now applied to the moved tables in the target keyspace (customer and corder in the customer keyspace) back to the original source tables in the source commerce keyspace. This allows us to reverse or revert the cutover using the ReverseTraffic action, without data loss, even after we have started writing to the new customer keyspace. Note that this reverse workflow is created in the original source keyspace and is given the name of the original workflow with _reverse appended. So in our example, where the MoveTables workflow was in the customer keyspace and called commerce2customer, the reverse workflow is in the commerce keyspace and called commerce2customer_reverse. We can see the details of this auto-created workflow using the Workflow show command:

  $ vtctlclient Workflow commerce.commerce2customer_reverse show
  {
    "Workflow": "commerce2customer_reverse",
    "SourceLocation": {
      "Keyspace": "customer",
      "Shards": [
        "0"
      ]
    },
    "TargetLocation": {
      "Keyspace": "commerce",
      "Shards": [
        "0"
      ]
    },
    "MaxVReplicationLag": 1,
    "MaxVReplicationTransactionLag": 1,
    "Frozen": false,
    "ShardStatuses": {
      "0/zone1-0000000100": {
        "PrimaryReplicationStatuses": [
          {
            "Shard": "0",
            "Tablet": "zone1-0000000100",
            "ID": 1,
            "Bls": {
              "keyspace": "customer",
              "shard": "0",
              "filter": {
                "rules": [
                  {
                    "match": "customer",
                    "filter": "select * from `customer`"
                  },
                  {
                    "match": "corder",
                    "filter": "select * from `corder`"
                  }
                ]
              }
            },
            "Pos": "9fb1be70-8c59-11ed-9ef5-c05f9df6f7f3:1-2361",
            "StopPos": "",
            "State": "Running",
            "DBName": "vt_commerce",
            "TransactionTimestamp": 1672858428,
            "TimeUpdated": 1672859207,
            "TimeHeartbeat": 1672859207,
            "TimeThrottled": 0,
            "ComponentThrottled": "",
            "Message": "",
            "Tags": "",
            "WorkflowType": "MoveTables",
            "WorkflowSubType": "None",
            "CopyState": null
          }
        ],
        "TabletControls": [
          {
            "tablet_type": 1,
            "denied_tables": [
              "corder",
              "customer"
            ]
          }
        ],
        "PrimaryIsServing": true
      }
    },
    "SourceTimeZone": "",
    "TargetTimeZone": ""
  }
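If you did need to back out at this point, note that the ReverseTraffic action is run against the original workflow name, not the _reverse one. A sketch:

  # Route traffic for the moved tables back to commerce (sketch):
  $ vtctlclient MoveTables -- ReverseTraffic customer.commerce2customer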

Finalize and Cleanup

The final step is to complete the migration using the Complete action. This will (by default) remove the routing rules that were created and DROP the original tables in the source keyspace (commerce). Along with freeing up space on the original tablets, this is an important step to eliminate potential future confusion. If you have a misconfiguration down the line and accidentally route queries for the customer and corder tables to the commerce keyspace, it is much better to return a "table not found" error than to return incorrect or stale data:

  $ vtctlclient MoveTables -- Complete customer.commerce2customer
  Complete was successful for workflow customer.commerce2customer
  Start State: All Reads Switched. Writes Switched
  Current State: Workflow Not Found

This command will return an error if you have not already switched all traffic.

After this step is complete, you should see an error if you try to query the moved tables in the original commerce keyspace:

  # Expected to fail!
  $ mysql < ../common/select_commerce_data.sql
  Using commerce
  Customer
  ERROR 1146 (42S02) at line 4: target: commerce.0.primary: vttablet: rpc error: code = NotFound desc = Table 'vt_commerce.customer' doesn't exist (errno 1146) (sqlstate 42S02) (CallerID: userData1): Sql: "select * from customer", BindVars: {}

  # Expected to be empty
  $ vtctldclient GetRoutingRules
  {
    "rules": []
  }

  # Workflow is gone
  $ vtctlclient Workflow customer listall
  No workflows found in keyspace customer

  # Reverse workflow is also gone
  $ vtctlclient Workflow commerce listall
  No workflows found in keyspace commerce

This confirms that the data and routing rules have been properly cleaned up. Note that the Complete process also cleans up the reverse VReplication workflow mentioned above.
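As an aside: if you had instead decided to abandon the migration before switching any traffic, the Cancel action (rather than Complete) tears the workflow down so you can start over; see the MoveTables reference page for how target-side data is handled. A sketch:

  # Abandon a workflow whose traffic has not been switched (sketch):
  $ vtctlclient MoveTables -- Cancel customer.commerce2customer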

Next Steps

Congratulations! You've successfully moved tables between keyspaces. The next step to try out is sharding one of your keyspaces using Resharding.