Upgrade a Sharded Cluster to 3.6

Note

  • MongoDB 3.6 is not tested on APFS, the new filesystem in macOS 10.13, and may encounter errors.

  • Starting in MongoDB 3.6.13, the MongoDB 3.6-series removes support for Ubuntu 16.04 POWER/PPC64LE.

For earlier MongoDB Enterprise versions that support Ubuntu 16.04 POWER/PPC64LE:

Due to a lock elision bug present in older versions of the glibc package on Ubuntu 16.04 for POWER, you must upgrade the glibc package to at least glibc 2.23-0ubuntu5 before running MongoDB. Systems with older versions of the glibc package will experience database server crashes and misbehavior due to random memory corruption, and are unsuitable for production deployments of MongoDB.

Important

Before you attempt any upgrade, please familiarize yourself with the content of this document.

If you need guidance on upgrading to 3.6, MongoDB offers major version upgrade services to help ensure a smooth transition without interruption to your MongoDB application.

Upgrade Recommendations and Checklists

When upgrading, consider the following:

Upgrade Version Path

To upgrade an existing MongoDB deployment to 3.6, you must be running a 3.4-series release.

To upgrade from a version earlier than the 3.4-series, you must successively upgrade major releases until you have upgraded to a 3.4-series release. For example, if you are running a 3.2-series release, you must upgrade first to 3.4 before you can upgrade to 3.6.

Preparedness

Before beginning your upgrade, see the Compatibility Changes in MongoDB 3.6 document to ensure that your applications and deployments are compatible with MongoDB 3.6. Resolve the incompatibilities in your deployment before starting the upgrade.

Before upgrading MongoDB, always test your application in a staging environment before deploying the upgrade to your production environment.

Downgrade Consideration

Once upgraded to 3.6, if you need to downgrade, we recommend downgrading to the latest patch release of 3.4.

Default Bind to Localhost

Starting in MongoDB 3.6, mongod and mongos instances bind to localhost by default. Remote clients, including other members of the replica set, cannot connect to an instance bound only to localhost. To override and bind to other IP addresses, use the net.bindIp configuration file setting or the --bind_ip command-line option to specify a list of IP addresses.

The upgrade process will require that you specify the net.bindIp setting (or --bind_ip) if your sharded cluster members are run on different hosts or if you wish remote clients to connect to your sharded cluster.

Warning

Before binding to a non-localhost (e.g. publicly accessible) IP address, ensure you have secured your cluster from unauthorized access. For a complete list of security recommendations, see Security Checklist. At minimum, consider enabling authentication and hardening network infrastructure.

For more information, see Localhost Binding Compatibility Changes.

Shard Replica Sets

Starting in MongoDB 3.6, mongod instances with the shard server role must be replica set members.

To upgrade your sharded cluster to version 3.6, the shard servers must be running as a replica set. To convert an existing shard standalone instance to a shard replica set, see Convert a Shard Standalone to a Shard Replica Set.

Drivers

For MongoDB 3.6.0 - 3.6.3 binaries, you should upgrade your drivers to 3.6 feature compatible drivers only after you have upgraded the MongoDB binaries and updated the feature compatibility version of the sharded cluster to 3.6.

For more information, see SERVER-33763.

Read Concern Majority

Starting in MongoDB 3.6, MongoDB enables support for “majority” read concern by default.

For MongoDB 3.6.1 - 3.6.x, you can disable read concern “majority” to prevent the storage cache pressure from immobilizing a deployment with a primary-secondary-arbiter (PSA) architecture. Disabling “majority” read concern also disables support for change streams.

For more information, see Disable Read Concern Majority.

Prerequisites

    • Version 3.4 or Greater
    • To upgrade a sharded cluster to 3.6, all members of the cluster must be at least version 3.4. The upgrade process checks all components of the cluster and will produce warnings if any component is running a version earlier than 3.4.
    • Feature Compatibility Version
    • The 3.4 sharded cluster must have featureCompatibilityVersion set to 3.4.

To ensure that all members of the sharded cluster have featureCompatibilityVersion set to 3.4, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip

For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

   db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes "featureCompatibilityVersion": "3.4".
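For example, on a member still at feature compatibility version 3.4, the command above returns the version as a plain string field (output abbreviated; additional fields may also appear):

```javascript
{ "featureCompatibilityVersion" : "3.4", "ok" : 1 }
```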

To set or update featureCompatibilityVersion, run the following command on the mongos:

   db.adminCommand( { setFeatureCompatibilityVersion: "3.4" } )

For more information, see setFeatureCompatibilityVersion.

    • Shard Aware
    • The shards in the 3.4 sharded clusters must be shard aware (i.e. the shards must have received their shardIdentity document, located in the admin.system.version collection):

      • For sharded clusters that started as 3.4, the shards are shard aware.

      • For 3.4 sharded clusters that were upgraded from the 3.2-series, when you update featureCompatibilityVersion from 3.2 to 3.4, the config server attempts to send the shards their respective shardIdentity document every 30 seconds until success. You must wait until all shards receive the documents.

      • To check whether a shard replica set member has received its shardIdentity document, issue the find command against the system.version collection in the admin database and check for a document where "_id" : "shardIdentity".

For an example of a shardIdentity document:

   {
      "_id" : "shardIdentity",
      "clusterId" : ObjectId("2bba123c6eeedcd192b19024"),
      "shardName" : "shard2",
      "configsvrConnectionString" : "configDbRepl/alpha.example.net:28100,beta.example.net:28100,charlie.example.net:28100"
   }
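The check described above can be issued from a mongo shell connected to the shard replica set member; this is a sketch (it requires a live connection to the member):

```javascript
// query the admin database's system.version collection for the shardIdentity document
db.getSiblingDB("admin").system.version.find( { "_id" : "shardIdentity" } )
```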
    • Disable the balancer

    • Back up the config Database
    • Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.
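One way to take that backup, assuming the mongodump tool is available and the config server replica set is reachable (the replica set name, host names, and output directory below are placeholders), is:

```shell
# dump only the config database from the config server replica set
mongodump --host configDbRepl/cfg1.example.net:27019,cfg2.example.net:27019 --db config --out /backups/config-pre-3.6
```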

Download 3.6 Binaries

Use Package Manager

If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 3.6 using your package manager.

Follow the appropriate 3.6 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.

Download 3.6 Binaries Manually

If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.

See 3.6 installation instructions for more information.

Upgrade Process

Disable the Balancer.

Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

   sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer’s current state.
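The state check mentioned in the note can be issued directly in the mongo shell connected to the mongos:

```javascript
sh.isBalancerRunning()
```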

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

   sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.

Upgrade the config servers.

  • Upgrade the secondary members of the replica set one at a time:

    • Shut down the secondary mongod instance and replace the 3.4 binary with the 3.6 binary.

    • Start the 3.6 binary with the --configsvr, --replSet, and --port options. Include any other options as used by the deployment.

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

   mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:

   sharding:
      clusterRole: configsvr
   replication:
      replSetName: <string>
   net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
   storage:
      dbPath: <path>

Include any other settings as appropriate for your deployment.

  • Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member’s state, issue rs.status() in the mongo shell.

Repeat for each secondary member.
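As a convenience while waiting, you can print each member’s state from the mongo shell (a sketch; the full rs.status() output is the authoritative source):

```javascript
// print "host : state" for every replica set member
rs.status().members.forEach( function(m) { print(m.name + " : " + m.stateStr) } )
```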

  • Step down the replica set primary.

    • Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

   rs.stepDown()

  • When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.

  • Start the 3.6 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:

   mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:

   sharding:
      clusterRole: configsvr
   replication:
      replSetName: <string>
   net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
   storage:
      dbPath: <path>

Include any other configuration as appropriate for your deployment.

Upgrade the shards.

Upgrade the shards one at a time.

For each shard replica set:

  • Upgrade the secondary members of the replica set one at a time:

    • Shut down the mongod instance and replace the 3.4 binary with the 3.6 binary.

    • Start the 3.6 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:

   mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:

   sharding:
      clusterRole: shardsvr
   replication:
      replSetName: <string>
   net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
   storage:
      dbPath: <path>

Include any other configuration as appropriate for your deployment.

  • Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member’s state, you can issue rs.status() in the mongo shell.

Repeat for each secondary member.

  • Step down the replica set primary.

Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

   rs.stepDown()

  • When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, upgrade the stepped-down primary:

    • Shut down the stepped-down primary and replace the mongod binary with the 3.6 binary.

    • Start the 3.6 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:

   mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<hostname(s)|ip address(es)>

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 3.6 binary:

   sharding:
      clusterRole: shardsvr
   replication:
      replSetName: <string>
   net:
      port: <port>
      bindIp: localhost,<hostname(s)|ip address(es)>
   storage:
      dbPath: <path>

Include any other configuration as appropriate for your deployment.

Upgrade the mongos instances.

Replace each mongos instance with the 3.6 binary and restart. Include any other configuration as appropriate for your deployment.

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

   mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<hostname(s)|ip address(es)>

Re-enable the balancer.

Using a 3.6 mongo shell, connect to a mongos in the cluster and run sh.setBalancerState() to re-enable the balancer:

   sh.setBalancerState(true)

The 3.4 and earlier mongo shell is not compatible with 3.6 clusters.

For more information about re-enabling the balancer, see Enable the Balancer.

Enable backwards-incompatible 3.6 features.

At this point, you can run the 3.6 binaries without the 3.6 features that are incompatible with 3.4. That is, you can run the 3.6 sharded cluster with feature compatibility version set to 3.4.

Important

For MongoDB 3.6.0 - 3.6.3, you should upgrade your drivers to 3.6 feature compatible drivers only after you have updated the feature compatibility version of the sharded cluster to 3.6. For more information, see SERVER-33763.

To enable these 3.6 features, set the feature compatibility version (FCV) to 3.6.

Note

Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.

It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period to ensure the likelihood of downgrade is minimal. When you are confident that the likelihood of downgrade is minimal, enable these features.

On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:

   db.adminCommand( { setFeatureCompatibilityVersion: "3.6" } )

This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.

Restart mongos instances.

After changing the featureCompatibilityVersion, all mongos instances need to be restarted to pick up the changes in the causal consistency behavior.

Additional Upgrade Procedures