Upgrade a Sharded Cluster to 4.2

Important

Before you attempt any upgrade, familiarize yourself with the content of this document.

If you need guidance on upgrading to 4.2, MongoDB offers major version upgrade services to help ensure a smooth transition without interruption to your MongoDB application.

Upgrade Recommendations and Checklists

When upgrading, consider the following:

Upgrade Version Path

To upgrade an existing MongoDB deployment to 4.2, you must be running a 4.0-series release.

To upgrade from a version earlier than the 4.0-series, you must successively upgrade major releases until you have upgraded to a 4.0-series release. For example, if you are running a 3.6-series release, you must upgrade first to 4.0 before you can upgrade to 4.2.

Preparedness

Before beginning your upgrade, see the Compatibility Changes in MongoDB 4.2 document to ensure that your applications and deployments are compatible with MongoDB 4.2. Resolve any incompatibilities in your deployment before starting the upgrade.

Before upgrading MongoDB, always test your application in a staging environment before deploying the upgrade to your production environment.

Downgrade Consideration

Once upgraded to 4.2, if you need to downgrade, we recommend downgrading to the latest patch release of 4.0.

Read Concern Majority (3-Member Primary-Secondary-Arbiter Architecture)

Starting in MongoDB 3.6, MongoDB enables support for "majority" read concern by default.

You can disable read concern "majority" to prevent storage cache pressure from immobilizing a three-member replica set with a primary-secondary-arbiter (PSA) architecture or a sharded cluster with three-member PSA shards.

Note

Disabling "majority" read concern affects support for transactions on sharded clusters. Specifically:

  • A transaction cannot use read concern "snapshot" if the transaction involves a shard that has disabled read concern "majority".
  • A transaction that writes to multiple shards errors if any of the transaction's read or write operations involves a shard that has disabled read concern "majority".

However, disabling "majority" read concern does not affect transactions on replica sets. For transactions on replica sets, you can specify read concern "majority" (or "snapshot" or "local") for multi-document transactions even if read concern "majority" is disabled.
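As a sketch, the two sharded-cluster restrictions above can be expressed as a predicate (a hypothetical helper for illustration, not a MongoDB API):

```javascript
// Hypothetical helper (not a MongoDB API): given a sharded transaction's
// read concern level, whether it writes to multiple shards, and whether any
// involved shard has disabled read concern "majority", decide if it can run.
function shardedTxnAllowed(readConcernLevel, writesToMultipleShards, anyShardMajorityDisabled) {
  if (!anyShardMajorityDisabled) return true;
  if (readConcernLevel === "snapshot") return false; // cannot use "snapshot"
  if (writesToMultipleShards) return false;          // multi-shard writes error
  return true;
}
```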

Disabling "majority" read concern disables support for change streams for MongoDB 4.0 and earlier. For MongoDB 4.2+, disabling read concern "majority" has no effect on change streams availability.

Once upgraded to 4.2, you can use change streams for your deployment even with read concern "majority" disabled.

For more information, see Disable Read Concern Majority.

Change Stream Resume Tokens

MongoDB 4.2 uses the version 1 (i.e. v1) change stream resume tokens, introduced in version 4.0.7.

The resume token _data type depends on the MongoDB version and, in some cases, the feature compatibility version (fcv) at the time the change stream is opened or resumed (i.e. a change in fcv value does not affect the resume tokens for already opened change streams):

MongoDB Version            Feature Compatibility Version   Resume Token _data Type
MongoDB 4.2 and later      "4.2" or "4.0"                  Hex-encoded string (v1)
MongoDB 4.0.7 and later    "4.0" or "3.6"                  Hex-encoded string (v1)
MongoDB 4.0.6 and earlier  "4.0"                           Hex-encoded string (v0)
MongoDB 4.0.6 and earlier  "3.6"                           BinData
MongoDB 3.6                "3.6"                           BinData
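The table can be read as a lookup; a minimal sketch (hypothetical helper, not a MongoDB API) that takes a [major, minor, patch] version triple and the fcv string:

```javascript
// Hypothetical lookup mirroring the table above: resume token _data type
// given a MongoDB version triple and the featureCompatibilityVersion string.
function resumeTokenDataType([major, minor, patch], fcv) {
  if (major > 4 || (major === 4 && minor >= 2)) return "Hex-encoded string (v1)";
  if (major === 4 && minor === 0 && patch >= 7) return "Hex-encoded string (v1)";
  if (major === 4 && minor === 0) {
    // 4.0.6 and earlier: the token type depends on the fcv.
    return fcv === "4.0" ? "Hex-encoded string (v0)" : "BinData";
  }
  return "BinData"; // MongoDB 3.6
}
```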

When upgrading from MongoDB 4.0.6 or earlier to MongoDB 4.2

During the upgrade process, the members of the sharded cluster will continue to produce v0 tokens until the first mongos instance is upgraded. The upgraded mongos instances will begin producing v1 change stream resume tokens. These tokens cannot be used to resume a stream on a mongos which has not yet been upgraded.

Prerequisites

All Members Version

To upgrade a sharded cluster to 4.2, all members of the cluster must be at least version 4.0. The upgrade process checks all components of the cluster and will produce warnings if any component is running a version earlier than 4.0.

MMAPv1 to WiredTiger Storage Engine

MongoDB 4.2 removes support for the deprecated MMAPv1 storage engine.

If your 4.0 deployment uses MMAPv1, you must change the 4.0 deployment to the WiredTiger storage engine before upgrading to MongoDB 4.2. For details, see Change Sharded Cluster to WiredTiger.

Review Current Configuration

With MongoDB 4.2, the mongod and mongos processes will not start with MMAPv1 Specific Configuration Options. Previous versions of MongoDB running WiredTiger ignored MMAPv1 configuration options if they were specified. With MongoDB 4.2, you must remove these options from your configuration.

Feature Compatibility Version

The 4.0 sharded cluster must have featureCompatibilityVersion set to 4.0.

To ensure that all members of the sharded cluster have featureCompatibilityVersion set to 4.0, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip

For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

  db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes "featureCompatibilityVersion" : { "version" : "4.0" }.
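For example, after collecting the getParameter result from each member, the check could be scripted along these lines (a hypothetical helper, not part of the mongo shell):

```javascript
// Hypothetical helper: given the getParameter results collected from every
// shard and config server replica set member, confirm each reports the
// expected featureCompatibilityVersion.
function allMembersAtFcv(results, expected) {
  return results.every(
    r => r.featureCompatibilityVersion &&
         r.featureCompatibilityVersion.version === expected
  );
}
```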

To set or update featureCompatibilityVersion, run the following command on the mongos:

  db.adminCommand( { setFeatureCompatibilityVersion: "4.0" } )

For more information, see setFeatureCompatibilityVersion.

Replica Set Member State

For shards and config servers, ensure that no replica set member is in ROLLBACK or RECOVERING state.
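One way to script this check is to scan an rs.status() result for the blocking states (a hypothetical helper; the field names follow the shape of rs.status() output):

```javascript
// Hypothetical helper: given an rs.status()-shaped document, list the
// members whose state blocks the upgrade (ROLLBACK or RECOVERING).
function membersBlockingUpgrade(status) {
  return status.members
    .filter(m => m.stateStr === "ROLLBACK" || m.stateStr === "RECOVERING")
    .map(m => m.name);
}
```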

Back up the config Database

Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.

Hashed Indexes on PowerPC

For PowerPC Only

For hashed indexes, MongoDB 4.2 ensures that the hashed value for the floating point value 2^63 on PowerPC is consistent with other platforms.

Although a hashed index on a field that may contain floating point values greater than 2^63 is an unsupported configuration, clients may still insert documents where the indexed field has the value 2^63.

  • If the current MongoDB 4.0 sharded cluster on PowerPC has hashed shard key values for 2^63, then, before upgrading:

    • Make a backup of the documents; e.g. use mongoexport with the --query option to select the documents with 2^63 in the shard key field.
    • Delete the documents with the 2^63 value. After you upgrade following the procedure below, you will import the deleted documents.
  • If an existing MongoDB 4.0 collection on PowerPC has a hashed index entry for the value 2^63 that is not used as the shard key, you also have the option to drop the index before upgrading and then re-create it after the upgrade is complete.

To list all hashed indexes for your deployment and find documents whose indexed field contains the value 2^63, see the Hashed Indexes and PowerPC check.
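As a note on the value itself: 2^63 is a power of two and therefore exactly representable as a double, which is how a filter value for it could be built (the field name below is a hypothetical placeholder):

```javascript
// 2^63 is exactly representable as a double (it is a power of two), even
// though it exceeds the largest signed 64-bit integer, 2^63 - 1.
const TWO_POW_63 = Math.pow(2, 63); // 9223372036854775808
// Hypothetical filter document for selecting the affected documents,
// assuming the hashed (shard key) field is named "hashedField":
const query = { hashedField: TWO_POW_63 };
```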

Download 4.2 Binaries

Use Package Manager

If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 4.2 using your package manager.

Follow the appropriate 4.2 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.

Download 4.2 Binaries Manually

If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.

See 4.2 installation instructions for more information.

Upgrade Process

Disable the Balancer.

Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

  sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

  sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.

Upgrade the config servers.

  • Upgrade the secondary members of the replica set one at a time:

    • Shut down the secondary mongod instance and replace the 4.0 binary with the 4.2 binary.

    • Start the 4.2 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any other options as used by the deployment.

  mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

  sharding:
    clusterRole: configsvr
  replication:
    replSetName: <string>
  net:
    port: <port>
    bindIp: localhost,<ip address>
  storage:
    dbPath: <path>

Include any other settings as appropriate for your deployment.

  • Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.

Repeat for each secondary member.

  • Step down the replica set primary.

    • Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
  rs.stepDown()
  • When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 4.2 binary.

  • Start the 4.2 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:

  mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

  sharding:
    clusterRole: configsvr
  replication:
    replSetName: <string>
  net:
    port: <port>
    bindIp: localhost,<ip address>
  storage:
    dbPath: <path>

Include any other configuration as appropriate for your deployment.

Upgrade the shards.

Upgrade the shards one at a time.

For each shard replica set:

  • Upgrade the secondary members of the replica set one at a time:

    • Shut down the mongod instance and replace the 4.0 binary with the 4.2 binary.

    • Start the 4.2 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:

  mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

  sharding:
    clusterRole: shardsvr
  replication:
    replSetName: <string>
  net:
    port: <port>
    bindIp: localhost,<ip address>
  storage:
    dbPath: <path>

Include any other configuration as appropriate for your deployment.

  • Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.

Repeat for each secondary member.

  • Step down the replica set primary.

Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

  rs.stepDown()
  • When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, upgrade the stepped-down primary:

    • Shut down the stepped-down primary and replace the mongod binary with the 4.2 binary.

    • Start the 4.2 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:

  mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>

If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

  sharding:
    clusterRole: shardsvr
  replication:
    replSetName: <string>
  net:
    port: <port>
    bindIp: localhost,<ip address>
  storage:
    dbPath: <path>

Include any other configuration as appropriate for your deployment.

Upgrade the mongos instances.

Replace each mongos instance with the 4.2 binary and restart. Include any other configuration as appropriate for your deployment.

Note

The --bind_ip option must be specified when the sharded cluster members run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

  mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<ip address>
  • If upgrading from MongoDB 4.0.6 or earlier: once a mongos instance for the deployment is upgraded, that mongos instance starts to produce v1 change stream resume tokens. These tokens cannot be used to resume a stream on a mongos instance which has not yet been upgraded.

Re-enable the balancer.

Using a 4.2 mongo shell, connect to a mongos in the cluster and run sh.startBalancer() to re-enable the balancer:

  sh.startBalancer()

Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.

If you do not wish to enable auto-splitting while the balancer is enabled, you must also run sh.disableAutoSplit().

For more information about re-enabling the balancer, see Enable the Balancer.

Enable backwards-incompatible 4.2 features.

At this point, you can run the 4.2 binaries without the 4.2 features that are incompatible with 4.0.

To enable these 4.2 features, set the feature compatibility version (FCV) to 4.2.

Tip

Enabling these backwards-incompatible features can complicate the downgrade process, since you must remove any persisted backwards-incompatible features before you downgrade.

It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period. When you are confident that the likelihood of downgrade is minimal, enable these features.

On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:

  db.adminCommand( { setFeatureCompatibilityVersion: "4.2" } )

This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos, as the operation is idempotent.

Note

Starting in MongoDB 4.0, the mongos binary will crash when attempting to connect to mongod instances whose feature compatibility version (fCV) is greater than that of the mongos. For example, you cannot connect a MongoDB 4.0 version mongos to a 4.2 sharded cluster with fCV set to 4.2. You can, however, connect a MongoDB 4.0 version mongos to a 4.2 sharded cluster with fCV set to 4.0.

Post Upgrade

  • TLS Options Replace Deprecated SSL Options
  • Starting in MongoDB 4.2, MongoDB deprecates the SSL options for the mongod, the mongos, and the mongo shell, as well as the corresponding net.ssl configuration file options.

To avoid deprecation messages, use the new TLS options for the mongod, the mongos, and the mongo shell.

  • For the command-line TLS options, refer to the mongod, mongos, and mongo shell pages.
  • For the corresponding mongod and mongos configuration file options, refer to the configuration file page.
  • For the connection string tls options, refer to the connection string page.
  • 4.2-Compatible Drivers Retry Writes by Default
  • The official MongoDB 3.6 and 4.0-compatible drivers required including the retryWrites=true option in the connection string to enable retryable writes for that connection.

The official MongoDB 4.2-compatible drivers enable retryable writes by default. Applications upgrading to the 4.2-compatible drivers that require retryable writes may omit the retryWrites=true option. Applications upgrading to the 4.2-compatible drivers that require disabling retryable writes must include retryWrites=false in the connection string.
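A minimal illustration of the connection-string difference (the hosts and database name below are placeholders):

```javascript
// With 4.2-compatible drivers, retryable writes are enabled by default, so
// the option only needs to appear in the URI when opting out.
const uri = "mongodb://host1:27017,host2:27017/mydb?replicaSet=rs0";
const uriNoRetry = uri + "&retryWrites=false"; // explicitly disable retryable writes
```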

  • PowerPC and Hashed Index Value of 2^63
  • If, on PowerPC, you found hashed index fields with the value 2^63:

    • If you deleted the documents, replace them from the export (done as part of the prerequisites).
    • If you dropped the hashed index before upgrading, recreate the index.

Additional Upgrade Procedures