Migrate a Sharded Cluster to Different Hardware

The tutorial is specific to MongoDB 4.2. For earlier versions of MongoDB, refer to the corresponding version of the MongoDB Manual.

Starting in MongoDB 3.2, config servers for sharded clusters can be deployed as a replica set. The replica set config servers must run the WiredTiger storage engine. MongoDB 3.2 deprecates the use of three mirrored mongod instances for config servers.

This procedure moves the components of the sharded cluster to a new hardware system without downtime for reads and writes.

Important

While the migration is in progress, do not attempt to change the Sharded Cluster Metadata. Do not use any operation that modifies the cluster metadata in any way. For example, do not create or drop databases, create or drop collections, or use any sharding commands.

Disable the Balancer

Disable the balancer to stop chunk migration and do not perform any metadata write operations until the process finishes. If a migration is in progress, the balancer will complete the in-progress migration before stopping.

To disable the balancer, connect to one of the cluster's mongos instances and issue the following method: [1]

  sh.stopBalancer()

To check the balancer state, issue the sh.getBalancerState() method.
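Before proceeding, you can confirm the balancer's state from the same mongos shell; a minimal hedged check (the exact return shape of these helpers can vary by version):

  sh.getBalancerState()    // false once the balancer is disabled
  sh.isBalancerRunning()   // reports whether a balancing round is still in progress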

For more information, see Disable the Balancer.

[1] Starting in MongoDB 4.2, sh.stopBalancer() also disables auto-splitting for the sharded cluster.

Migrate Each Config Server Separately

Changed in version 3.4.

Starting in MongoDB 3.2, config servers for sharded clusters can be deployed as a replica set (CSRS) instead of three mirrored config servers (SCCC). Using a replica set for the config servers improves consistency across the config servers, since MongoDB can take advantage of the standard replica set read and write protocols for the config data. In addition, using a replica set for config servers allows a sharded cluster to have more than 3 config servers since a replica set can have up to 50 members. To deploy config servers as a replica set, the config servers must run the WiredTiger storage engine.

In version 3.4, MongoDB removes support for SCCC config servers.

The following restrictions apply to a replica set configuration when usedfor config servers:

  • Must have zero arbiters.
  • Must have no delayed members.
  • Must build indexes (i.e. no member should have the buildIndexes setting set to false).
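As an illustration, the following is a hedged sketch of initiating a config server replica set that satisfies these restrictions; the set name and hostnames are hypothetical, and the members rely on the defaults (voting, non-delayed, buildIndexes: true):

  rs.initiate({
    _id: "configReplSet",    // hypothetical replica set name
    configsvr: true,         // marks the set as a config server replica set
    members: [
      { _id: 0, host: "cfg1.example.net:27019" },
      { _id: 1, host: "cfg2.example.net:27019" },
      { _id: 2, host: "cfg3.example.net:27019" }
    ]
  })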

For each member of the config server replica set:

Important

Replace the secondary members before replacing the primary.

Start the replacement config server.

Start a mongod instance, specifying the --configsvr, --replSet, and --bind_ip options, and other options as appropriate to your deployment.

Warning

Before binding to a non-localhost (e.g. publicly accessible) IP address, ensure you have secured your cluster from unauthorized access. For a complete list of security recommendations, see Security Checklist. At minimum, consider enabling authentication and hardening network infrastructure.

  mongod --configsvr --replSet <replicaSetName> --bind_ip localhost,<hostname(s)|ip address(es)>
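If you manage the member through a configuration file instead of command-line options, a hedged sketch of the equivalent settings (the port shown is the conventional config server port; adjust the placeholders to your deployment):

  sharding:
    clusterRole: configsvr
  replication:
    replSetName: <replicaSetName>
  net:
    bindIp: localhost,<hostname(s)|ip address(es)>
    port: 27019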

Add the new config server to the replica set.

Connect a mongo shell to the primary of the config server replica set and use rs.add() to add the new member.

Tip

When a newly added secondary has its votes and priority settings greater than zero, during its initial sync, the secondary still counts as a voting member even though it cannot serve reads nor become primary because its data is not yet consistent.

This can lead to a case where a majority of the voting members are online but no primary can be elected. To avoid such situations, consider adding the new secondary initially with priority: 0 and votes: 0. Then, once the member has transitioned into SECONDARY state, use rs.reconfig() to update its priority and votes.

  rs.add( { host: "<hostnameNew>:<portNew>", priority: 0, votes: 0 } )

The initial sync process copies all the data from one member of the config server replica set to the new member without restarting.

mongos instances automatically recognize the change in the config server replica set members without restarting.

Update the newly added config server’s votes and priority settings.

  • Ensure that the new member has reached SECONDARY state. To check the state of the replica set members, run rs.status():

    rs.status()
  • Reconfigure the replica set to update the votes and priority of the new member:

    var cfg = rs.conf();

    cfg.members[n].priority = 1; // Substitute the correct array index for the new member
    cfg.members[n].votes = 1; // Substitute the correct array index for the new member

    rs.reconfig(cfg)

where n is the array index of the new member in the members array.
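If you prefer not to count array positions by hand, a hedged sketch that looks up the index by hostname (assumes a shell with ES6 array support; the host placeholder is the member added above):

  var cfg = rs.conf();
  var n = cfg.members.findIndex(function (m) { return m.host === "<hostnameNew>:<portNew>"; });
  cfg.members[n].priority = 1;
  cfg.members[n].votes = 1;
  rs.reconfig(cfg)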

Warning

  • The rs.reconfig() shell method can force the current primary to step down, which causes an election. When the primary steps down, the mongod closes all client connections. While this typically takes 10-20 seconds, try to make these changes during scheduled maintenance periods.
  • Avoid reconfiguring replica sets that contain members of different MongoDB versions as validation rules may differ across MongoDB versions.

Shut down the member to replace.

If replacing the primary member, step down the primary before shutting it down.
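A minimal sketch of this step from a mongo shell connected to the member being replaced (the stepDown call applies only when that member is the primary):

  rs.stepDown()                               // only if this member is the current primary
  db.getSiblingDB("admin").shutdownServer()   // clean shutdown via the shutdown command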

Remove the member to replace from the config server replica set.

Upon completion of initial sync of the replacement config server, from a mongo shell connected to the primary, use rs.remove() to remove the old member.

  rs.remove("<hostnameOld>:<portOld>")

mongos instances automatically recognize the change in the config server replica set members without restarting.

Restart the mongos Instances

Changed in version 3.2: With replica set config servers, the mongos instances specify in the --configdb or sharding.configDB setting the config server replica set name and at least one of the replica set members. The mongos instances for the sharded cluster must specify the same config server replica set name but can specify different members of the replica set.

If a mongos instance specifies a migrated replica set member in the --configdb or sharding.configDB setting, update the config server setting for the next time you restart the mongos instance.
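For example, a hedged sketch of the setting with a hypothetical replica set name and hostnames; list only members that have not been migrated away:

  mongos --configdb configReplSet/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019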

For more information, see Connect a mongos to the Sharded Cluster.

Migrate the Shards

Migrate the shards one at a time. For each shard, follow the appropriateprocedure in this section.

Migrate a Replica Set Shard

To migrate a replica set shard, migrate each member separately. First migrate the non-primary members, and then migrate the primary last.

If the replica set has two voting members, add an arbiter to the replica set to ensure the set keeps a majority of its votes available during the migration. You can remove the arbiter after completing the migration.
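A hedged sketch of adding, and later removing, a temporary arbiter from a mongo shell connected to the shard's primary; the arbiter host is hypothetical:

  rs.addArb("arb1.example.net:27018")   // before the migration
  // ... perform the migration ...
  rs.remove("arb1.example.net:27018")   // after the migration completes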

Migrate a Member of a Replica Set Shard

  • Shut down the mongod process. To ensure a clean shutdown, use the shutdown command.

  • Move the data directory (i.e., the dbPath) to the new machine.

  • Restart the mongod process at the new location (a hedged restart sketch follows this list).

  • Connect to the replica set’s current primary.

  • If the hostname of the member has changed, use rs.reconfig() to update the replica set configuration document with the new hostname.

For example, the following sequence of commands updates the hostname for the instance at position 2 in the members array:

  cfg = rs.conf()
  cfg.members[2].host = "pocatello.example.net:27018"
  rs.reconfig(cfg)

For more information on updating the configuration document, see Examples.

  • To confirm the new configuration, issue rs.conf().

  • Wait for the member to recover. To check the member's state, issue rs.status().
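As a hedged sketch of the restart step above, assuming the data directory was moved to /data/shard1 and the member now listens on a new hostname (a shard replica set member on MongoDB 3.4 or later is typically started with --shardsvr as well):

  mongod --shardsvr --replSet <replicaSetName> --dbpath /data/shard1 --bind_ip localhost,<new hostname> --port 27018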

Migrate the Primary in a Replica Set Shard

While migrating the replica set's primary, the set must elect a new primary. The failover process renders the replica set unavailable to perform reads or accept writes for the duration of the election, which typically completes quickly. If possible, plan the migration during a maintenance window. Step down the primary to allow the normal failover process:

  rs.stepDown()

You can check the output of rs.status() to confirm the change in status.

Re-Enable the Balancer

To complete the migration, re-enable the balancer to resume chunk migrations.

Connect to one of the cluster's mongos instances and issue the sh.startBalancer() method: [2]

  sh.startBalancer()

To check the balancer state, issue the sh.getBalancerState() method.
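As before, a quick hedged check from the mongos shell:

  sh.getBalancerState()   // true once the balancer is re-enabled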

For more information, see Enable the Balancer.

[2] Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.