Downgrade 4.0 Sharded Cluster to 3.6

Before you attempt any downgrade, familiarize yourself with the content of this document.

Downgrade Path

Once upgraded to 4.0, if you need to downgrade, we recommend downgrading to the latest patch release of 3.6.

Change Stream Consideration

MongoDB 4.0 introduces new hex-encoded string change stream resume tokens:

The resume token _data type depends on the MongoDB version and, in some cases, the feature compatibility version (fcv) at the time of the change stream's opening/resumption (i.e. a change in fcv value does not affect the resume tokens for already opened change streams):

MongoDB Version             Feature Compatibility Version   Resume Token _data Type
MongoDB 4.0.7 and later     "4.0" or "3.6"                  Hex-encoded string (v1)
MongoDB 4.0.6 and earlier   "4.0"                           Hex-encoded string (v0)
MongoDB 4.0.6 and earlier   "3.6"                           BinData
MongoDB 3.6                 "3.6"                           BinData
  • When downgrading from MongoDB 4.0.7 or greater, clients cannot use the resume tokens returned from the 4.0.7+ deployment. To resume a change stream, clients will need to use a resume token from before the upgrade to 4.0.7 (if available). Otherwise, clients will need to start a new change stream.
  • When downgrading from MongoDB 4.0.6 or earlier, clients can use BinData resume tokens returned from the 4.0 deployment, but not the v0 tokens.
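To make the token rules above concrete, the following mongo shell sketch captures a resume token before the downgrade and replays it afterward. The collection name and variable names are hypothetical; whether the saved token is usable after the downgrade depends on the version/fcv rules in the table above.

```javascript
// Hypothetical example: db.inventory is a placeholder collection.
// Before the downgrade, capture the resume token of an observed event.
var cursor = db.inventory.watch();
// ... after some write occurs on db.inventory ...
var lastToken = cursor.next()._id;   // the resume token of that event

// After the downgrade to 3.6, attempt to resume from the saved token.
// Per the table above, this only works if the token is a BinData token
// (i.e. produced on 4.0.6 or earlier while fcv was "3.6").
var resumed = db.inventory.watch([], { resumeAfter: lastToken });
```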

Create Backup

Optional but Recommended. Create a backup of your database.

Prerequisites

Before downgrading the binaries, you must downgrade the feature compatibility version and remove any 4.0 features incompatible with 3.6 or earlier versions as outlined below. These steps are necessary only if featureCompatibilityVersion has ever been set to "4.0".

1. Downgrade Feature Compatibility Version

  • Connect a mongo shell to the mongos instance.

  • Downgrade the featureCompatibilityVersion to "3.6".

  db.adminCommand({setFeatureCompatibilityVersion: "3.6"})

The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.

To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip

For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

  db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes:

  "featureCompatibilityVersion" : { "version" : "3.6" }

If any member returns a featureCompatibilityVersion that includes either a version value of "4.0" or a targetVersion field, wait for the member to reflect version "3.6" before proceeding.

For more information on the returned featureCompatibilityVersion value, see View FeatureCompatibilityVersion.
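Checking every member by hand is error-prone; one way to automate it is a small shell loop over the deployment's members. The hostnames and ports below are placeholders for your shard and config server members, and the sketch assumes the legacy mongo shell is on the PATH (add authentication options as needed for your deployment).

```shell
#!/bin/sh
# Sketch: print featureCompatibilityVersion for every member.
# Replace the host list with your own shard and config server members.
for host in shard01a.example.net:27018 shard01b.example.net:27018 \
            cfg1.example.net:27019 cfg2.example.net:27019; do
  printf '%s: ' "$host"
  mongo --host "$host" --quiet --eval \
    'var r = db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 });
     print(JSON.stringify(r.featureCompatibilityVersion))'
done
```

Every member should report { "version" : "3.6" } with no targetVersion field before you proceed.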

2. Remove Backwards Incompatible Persisted Features

Remove all persisted 4.0 features that are incompatible with 3.6. For example, if you have defined any view definitions, document validators, or partial index filters that use 4.0 query features such as the aggregation $convert operator, you must remove them.

If you have users with only SCRAM-SHA-256 credentials, you should create SCRAM-SHA-1 credentials for these users before downgrading. To update a user who only has SCRAM-SHA-256 credentials, run db.updateUser() with mechanisms set to SCRAM-SHA-1 only and the pwd set to the password:

  db.updateUser(
     "reportUser256",
     {
        mechanisms: [ "SCRAM-SHA-1" ],
        pwd: <newpwd>
     }
  )
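To find the affected users in the first place, one option is to query the internal admin.system.users collection, where each user's credentials are stored keyed by mechanism. This is a sketch, not a documented procedure; run it as a sufficiently privileged user.

```javascript
// Hypothetical check: list users that have no SCRAM-SHA-1 credentials and
// would therefore be unable to authenticate after the downgrade to 3.6.
var admin = db.getSiblingDB("admin");
admin.system.users.find(
  { "credentials.SCRAM-SHA-1": { $exists: false } },
  { user: 1, db: 1 }
).forEach(printjson);
```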

Procedure

Downgrade a Sharded Cluster

Warning

Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.

Note

If you ran MongoDB 4.0 with authenticationMechanisms that included SCRAM-SHA-256, omit SCRAM-SHA-256 when restarting with the 3.6 binary.

Download the latest 3.6 binaries.

Using either a package manager or a manual download, get the latest release in the 3.6 series. If using a package manager, add a new repository for the 3.6 binaries, then perform the actual downgrade process.

Once upgraded to 4.0, if you need to downgrade, we recommend downgrading to the latest patch release of 3.6.

Disable the Balancer.

Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

  sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer's current state.

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

  sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.
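Putting the stop-and-verify steps together, a mongo shell sketch along these lines can wait out any in-progress migration before confirming the balancer is off (the polling loop is an assumption about how you might script the wait, not a documented requirement):

```javascript
sh.stopBalancer();

// Poll until any in-progress migration round finishes.
while (sh.isBalancerRunning()) {
  sleep(1000);  // mongo shell helper; waits one second
}

// Should now return false, confirming the balancer is disabled.
sh.getBalancerState();
```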

Downgrade the mongos instances.

Downgrade the binaries and restart.

Downgrade each shard, one at a time.

Downgrade the shards one at a time. If the shards are replica sets, for each shard:

  • Downgrade the secondary members of the replica set one at a time:

    • Shut down the mongod instance and replace the 4.0 binary with the 3.6 binary.

    • Start the 3.6 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

  mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start:

  sharding:
     clusterRole: shardsvr
  net:
     port: <port>
     bindIp: localhost,<hostname(s)|ip address(es)>
  storage:
     dbPath: <path>
  • Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.

Repeat for each secondary member.
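The state check above can be scripted rather than eyeballed in the full rs.status() output. This is a sketch; the hostname is a placeholder for the member you just restarted.

```javascript
// Confirm the restarted member has returned to SECONDARY before moving on.
var target = "shard01b.example.net:27018";  // placeholder hostname:port
var member = rs.status().members.filter(function (m) {
  return m.name === target;
})[0];
print(member.stateStr);  // proceed only once this prints "SECONDARY"
```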

  • Step down the replica set primary.

Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

  rs.stepDown()
  • When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

    • Shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.

    • Start the 3.6 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

  mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.6 binary:

  sharding:
     clusterRole: shardsvr
  net:
     port: <port>
     bindIp: localhost,<hostname(s)|ip address(es)>
  storage:
     dbPath: <path>

Downgrade the config servers.

If the config servers are replica sets:

  • Downgrade the secondary members of the replica set one at a time:

    • Shut down the secondary mongod instance and replace the 4.0 binary with the 3.6 binary.

    • Start the 3.6 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

  mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.6 binary:

  sharding:
     clusterRole: configsvr
  net:
     port: <port>
     bindIp: localhost,<hostname(s)|ip address(es)>
  storage:
     dbPath: <path>

Include any other configuration as appropriate for your deployment.

  • Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.

Repeat for each secondary member.

  • Step down the replica set primary.

    • Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:
  rs.stepDown()
  • When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.

  • Start the 3.6 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

  mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.6 binary:

  sharding:
     clusterRole: configsvr
  net:
     port: <port>
     bindIp: localhost,<hostname(s)|ip address(es)>
  storage:
     dbPath: <path>

Re-enable the balancer.

Once the downgrade of sharded cluster components is complete, re-enable the balancer.
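For example, from a mongo shell connected to a mongos:

```javascript
sh.setBalancerState(true);   // or equivalently, sh.startBalancer()
sh.getBalancerState();       // true confirms the balancer is enabled again
```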

Note

The MongoDB 3.6 deployment can use the BinData resume tokens returned from a change stream opened against the 4.0 deployment, but not the v0 or the v1 hex-encoded string resume tokens.