Downgrade 3.6 Sharded Cluster to 3.4
Before you attempt any downgrade, familiarize yourself with the content of this document.
Downgrade Path
Once upgraded to 3.6, if you need to downgrade, we recommend downgrading to the latest patch release of 3.4.
Create Backup
Optional but Recommended. Create a backup of your database.
Considerations
While the downgrade is in progress, you cannot make changes to the collection metadata. For example, during the downgrade, do not do any of the following:

- sh.enableSharding()
- sh.shardCollection()
- sh.addShard()
- db.createCollection()
- db.collection.drop()
- db.dropDatabase()
- any operation that creates a database
- any other operation that modifies the cluster metadata in any way. See Sharding Reference for a complete list of sharding commands. Note, however, that not all commands on the Sharding Reference page modify the cluster metadata.
Prerequisites
Before downgrading the binaries, you must downgrade the feature compatibility version and remove any 3.6 features incompatible with 3.4 or earlier versions as outlined below. These steps are necessary only if featureCompatibilityVersion has ever been set to "3.6".
1. Downgrade Feature Compatibility Version
Connect a mongo shell to a mongos instance and downgrade the featureCompatibilityVersion to "3.4":

    db.adminCommand({setFeatureCompatibilityVersion: "3.4"})

The setFeatureCompatibilityVersion command performs writes to an internal system collection and is idempotent. If for any reason the command does not complete successfully, retry the command on the mongos instance.
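Because the command is idempotent, a simple retry loop is safe. The sketch below illustrates that property only; `runAdminCommand` is an injected stand-in for `db.adminCommand()` on a mongos connection, not a real driver API.

```javascript
// Sketch only: retrying the idempotent setFeatureCompatibilityVersion
// command. `runAdminCommand` stands in for db.adminCommand() on a
// mongos connection; it is injected so the retry logic is self-contained.
function setFcvWithRetry(runAdminCommand, maxAttempts) {
  const cmd = { setFeatureCompatibilityVersion: "3.4" };
  let lastError = null;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const res = runAdminCommand(cmd);
      if (res.ok === 1) {
        return res; // command applied (or re-applied) successfully
      }
      lastError = new Error("command returned ok: " + res.ok);
    } catch (e) {
      // Transient failure: retrying is safe because the command is idempotent.
      lastError = e;
    }
  }
  throw lastError;
}
```

In the mongo shell itself you would simply re-issue the `db.adminCommand(...)` call shown above.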
To ensure that all members of the sharded cluster reflect the updated featureCompatibilityVersion, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:
Tip
For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.
    db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
All members should return a result that includes:
    "featureCompatibilityVersion" : { "version" : "3.4" }
If any member returns a featureCompatibilityVersion that includes either a version value of "3.6" or a targetVersion field, wait for the member to reflect version "3.4" before proceeding.
For more information on the returned featureCompatibilityVersion value, see View FeatureCompatibilityVersion.
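The check above can be mechanized. The following sketch inspects a document shaped like the result of the getParameter command and reports whether this member has finished the downgrade; the string-form branch for pre-3.6 servers is an assumption noted in the comment.

```javascript
// Sketch: decide from a getParameter result document whether a member
// has finished the featureCompatibilityVersion downgrade. `doc` is
// shaped like the result of:
//   db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )
function fcvDowngradeComplete(doc) {
  const fcv = doc.featureCompatibilityVersion;
  if (typeof fcv === "string") {
    // Assumption: members already running pre-3.6 binaries report a
    // bare string rather than a subdocument.
    return fcv === "3.4";
  }
  // A targetVersion field means the downgrade is still in flight on
  // this member; a version of "3.6" means it has not started.
  return fcv.version === "3.4" && fcv.targetVersion === undefined;
}
```

Run it against the result from each shard member and each config server member; proceed only when every member returns true.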
2. Remove Backwards Incompatible Persisted Features
Remove all persisted features that are incompatible with 3.4. For example, if you have defined any view definitions, document validators, or partial index filters that use 3.6 query features such as $jsonSchema or $expr, you must remove them.
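One way to audit for these is to walk each view definition, validator, or partialFilterExpression document looking for 3.6-only operator keys. A minimal sketch, covering only the two operators named above (the list is not exhaustive):

```javascript
// Sketch: recursively scan a query/validator document for 3.6-only
// operators that would block a downgrade to 3.4. Returns the dotted
// paths at which each offending operator appears.
function find36Operators(doc, path, found) {
  path = path || "";
  found = found || [];
  const incompatible = ["$jsonSchema", "$expr"]; // not an exhaustive list
  if (doc === null || typeof doc !== "object") {
    return found;
  }
  for (const key of Object.keys(doc)) {
    if (incompatible.indexOf(key) !== -1) {
      found.push(path + key);
    }
    find36Operators(doc[key], path + key + ".", found);
  }
  return found;
}
```

You might apply this to the `options.validator` and `options.pipeline` fields returned by `db.getCollectionInfos()`, and to each index's `partialFilterExpression`.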
Procedure
Downgrade a Sharded Cluster
Warning
Before proceeding with the downgrade procedure, ensure that all members, including delayed replica set members in the sharded cluster, reflect the prerequisite changes. That is, check the featureCompatibilityVersion and the removal of incompatible features for each node before downgrading.
Download the latest 3.4 binaries.
Using either a package manager or a manual download, get the latest release in the 3.4 series. If using a package manager, add a new repository for the 3.4 binaries, then perform the actual downgrade process.
Disable the Balancer.
Turn off the balancer as described inDisable the Balancer.
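In the mongo shell, disabling the balancer is done with `sh.stopBalancer()` (or `sh.setBalancerState(false)`), and `sh.getBalancerState()` reports whether it is enabled. A small polling sketch, with the state probe injected so it is self-contained; in the shell the probe would be `sh.getBalancerState()` with a `sleep()` between polls:

```javascript
// Sketch: poll until the balancer reports itself disabled.
// `getBalancerState` stands in for sh.getBalancerState() in the mongo
// shell; a real loop would also sleep between polls.
function waitForBalancerOff(getBalancerState, maxPolls) {
  for (let i = 0; i < maxPolls; i++) {
    if (getBalancerState() === false) {
      return true; // balancer is off; safe to continue the downgrade
    }
  }
  return false; // gave up; investigate before proceeding
}
```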
Downgrade the mongos instances.
Downgrade the binaries and restart.
Downgrade each shard, one at a time.
Downgrade the shards one at a time. If the shards are replica sets, for each shard:
Downgrade the secondary members of the replica set one at a time:

Perform a clean shut down of the mongod process.

Note

If you do not perform a clean shut down, errors may result that prevent the mongod process from starting.

Forcibly terminating the mongod process may cause inaccurate results for db.collection.count() and db.stats() as well as lengthen startup time the next time that the mongod process is restarted.

Invoking sudo service mongod stop does not guarantee a clean shutdown. This service script forcibly stops the mongod process if it takes longer than five minutes to shut down.
Replace the 3.6 binary with the 3.4 binary.
Start the 3.4 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --shardsvr --port <port> --dbpath <path> \
      --bind_ip localhost,<hostname(s)|ip address(es)>
Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start:

    sharding:
      clusterRole: shardsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, you can issue rs.status() in the mongo shell.

Repeat for each secondary member.
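The state check reads the `members` array of the rs.status() result, where each entry carries a `name` (host:port) and a `stateStr`. A sketch of that check, over a status-shaped document (the hostnames in the test are hypothetical):

```javascript
// Sketch: check from an rs.status()-shaped document whether a given
// member (identified by its "host:port" name) is back in SECONDARY state.
function memberIsSecondary(statusDoc, hostPort) {
  for (const m of statusDoc.members) {
    if (m.name === hostPort) {
      return m.stateStr === "SECONDARY";
    }
  }
  return false; // member not found in the replica set status
}
```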
Step down the replica set primary.

Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, downgrade the stepped-down primary:

Shut down the stepped-down primary and replace the mongod binary with the 3.4 binary.

Start the 3.4 binary with the --shardsvr and --port command line options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --shardsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
Or if using a configuration file, update the file to include sharding.clusterRole: shardsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:

    sharding:
      clusterRole: shardsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
Downgrade the config servers.
If the config servers are replica sets:
Downgrade the secondary members of the replica set one at a time:

Shut down the secondary mongod instance and replace the 3.6 binary with the 3.4 binary.

Start the 3.4 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:

    sharding:
      clusterRole: configsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
Wait for the member to recover to SECONDARY state before downgrading the next secondary member. To check the member's state, issue rs.status() in the mongo shell.

Repeat for each secondary member.
Step down the replica set primary.

Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()
When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.4 binary.

Start the 3.4 binary with both the --configsvr and --port options. Include any other configuration as appropriate for your deployment, e.g. --bind_ip.

    mongod --configsvr --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>
If using a configuration file, update the file to specify sharding.clusterRole: configsvr, net.port, and any other configuration as appropriate for your deployment, e.g. net.bindIp, and start the 3.4 binary:

    sharding:
      clusterRole: configsvr
    net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
    storage:
      dbPath: <path>
Re-enable the balancer.
Once the downgrade of sharded cluster components is complete, re-enable the balancer.