Upgrade a Sharded Cluster to 4.2

Important

Before you attempt any upgrade, please familiarize yourself with the content of this document.

If you need guidance on upgrading to 4.2, MongoDB offers major version upgrade services to help ensure a smooth transition without interruption to your MongoDB application.

Upgrade Recommendations and Checklists

When upgrading, consider the following:

Upgrade Version Path

To upgrade an existing MongoDB deployment to 4.2, you must be running a 4.0-series release.

To upgrade from a version earlier than the 4.0-series, you must successively upgrade major releases until you have upgraded to 4.0-series. For example, if you are running a 3.6-series, you must upgrade first to 4.0 before you can upgrade to 4.2.

Preparedness

Before beginning your upgrade, see the Compatibility Changes in MongoDB 4.2 document to ensure that your applications and deployments are compatible with MongoDB 4.2. Resolve the incompatibilities in your deployment before starting the upgrade.

Before upgrading MongoDB, always test your application in a staging environment before deploying the upgrade to your production environment.

Downgrade Consideration

Once upgraded to 4.2, if you need to downgrade, we recommend downgrading to the latest patch release of 4.0.

Read Concern Majority (3-Member Primary-Secondary-Arbiter Architecture)

Starting in MongoDB 3.6, MongoDB enables support for "majority" read concern by default.

You can disable read concern "majority" to prevent storage cache pressure from immobilizing a three-member replica set with a primary-secondary-arbiter (PSA) architecture or a sharded cluster with three-member PSA shards.

Note

Disabling "majority" read concern affects support for transactions on sharded clusters. Specifically:

  • A transaction cannot use read concern "snapshot" if the transaction involves a shard that has disabled read concern “majority”.
  • A transaction that writes to multiple shards errors if any of the transaction’s read or write operations involves a shard that has disabled read concern "majority".

However, it does not affect transactions on replica sets. For transactions on replica sets, you can specify read concern "majority" (or "snapshot" or "local") for multi-document transactions even if read concern "majority" is disabled.

Disabling "majority" read concern prevents collMod commands which modify an index from rolling back. If such an operation needs to be rolled back, you must resync the affected nodes with the primary node.

Disabling "majority" read concern disables support for Change Streams for MongoDB 4.0 and earlier. For MongoDB 4.2+, disabling read concern "majority" has no effect on change streams availability.

When upgraded to 4.2 with read concern “majority” disabled, you can use change streams for your deployment.

For more information, see Disable Read Concern Majority.

Change Stream Resume Tokens

MongoDB 4.2 uses version 1 (i.e. v1) change stream resume tokens, introduced in version 4.0.7.

The resume token _data type depends on the MongoDB version and, in some cases, the feature compatibility version (FCV) at the time the change stream was opened or resumed (a change in FCV value does not affect the resume tokens of already-open change streams):

MongoDB Version             Feature Compatibility Version   Resume Token _data Type
MongoDB 4.2 and later       "4.2" or "4.0"                  Hex-encoded string (v1)
MongoDB 4.0.7 and later     "4.0" or "3.6"                  Hex-encoded string (v1)
MongoDB 4.0.6 and earlier   "4.0"                           Hex-encoded string (v0)
MongoDB 4.0.6 and earlier   "3.6"                           BinData
MongoDB 3.6                 "3.6"                           BinData

When upgrading from MongoDB 4.0.6 or earlier to MongoDB 4.2

During the upgrade process, the members of the sharded cluster continue to produce v0 tokens until the first mongos instance is upgraded. The upgraded mongos instances begin producing v1 change stream resume tokens. These tokens cannot be used to resume a stream on a mongos that has not yet been upgraded.
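As a sketch of what resuming entails (the mongos address, database, and collection below are hypothetical), the _id of each change stream event is the resume token, which an application persists so it can resume the stream later:

```shell
# Sketch: capture a resume token from a change stream.
# Host, database, and collection names are placeholders -- adjust for your deployment.
mongo --host mongos.example.net:27017 mydb <<'EOF'
var cs = db.orders.watch();
// hasNext() blocks until a change event arrives.
if (cs.hasNext()) {
  var event = cs.next();
  // event._id is the resume token; persist it to resume the stream later,
  // but only against a mongos that produces the same token version.
  printjson(event._id);
}
EOF
```

A stored token can then be passed back via `db.orders.watch([], { resumeAfter: savedToken })` to pick up where the stream left off.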

Prerequisites

All Members Version

To upgrade a sharded cluster to 4.2, all members of the cluster must be at least version 4.0. The upgrade process checks all components of the cluster and produces warnings if any component is running a version earlier than 4.0.

MMAPv1 to WiredTiger Storage Engine

MongoDB 4.2 removes support for the deprecated MMAPv1 storage engine.

If your 4.0 deployment uses MMAPv1, you must change the 4.0 deployment to WiredTiger Storage Engine before upgrading to MongoDB 4.2. For details, see Change Sharded Cluster to WiredTiger.

Review Current Configuration

With MongoDB 4.2, the mongod and mongos processes will not start with MMAPv1-specific configuration options. Previous versions of MongoDB running WiredTiger ignored MMAPv1 configuration options if they were specified. With MongoDB 4.2, you must remove these options from your configuration.

Feature Compatibility Version

The 4.0 sharded cluster must have featureCompatibilityVersion set to 4.0.

To ensure that all members of the sharded cluster have featureCompatibilityVersion set to 4.0, connect to each shard replica set member and each config server replica set member and check the featureCompatibilityVersion:

Tip

For a sharded cluster that has access control enabled, to run the following command against a shard replica set member, you must connect to the member as a shard local user.

db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } )

All members should return a result that includes "featureCompatibilityVersion" : { "version" : "4.0" }.
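To check every member without connecting to each one interactively, a small loop can run the same command against each host (the hostnames below are placeholders for your shard and config server members):

```shell
# Placeholder member list -- substitute your shard and config server members.
for member in shard1a.example.net:27018 shard1b.example.net:27018 cfg1.example.net:27019; do
  echo "== $member =="
  # Each member should print 4.0 before the upgrade proceeds.
  mongo --host "$member" --quiet --eval \
    'db.adminCommand( { getParameter: 1, featureCompatibilityVersion: 1 } ).featureCompatibilityVersion.version'
done
```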

To set or update featureCompatibilityVersion, run the following command on the mongos:

db.adminCommand( { setFeatureCompatibilityVersion: "4.0" } )

For more information, see setFeatureCompatibilityVersion.

Replica Set Member State

For shards and config servers, ensure that no replica set member is in ROLLBACK or RECOVERING state.
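One way to verify this is to inspect the rs.status() output of each replica set; a sketch (the seed host is a placeholder):

```shell
# Placeholder seed host -- run once against each shard replica set and
# once against the config server replica set.
mongo --host shard1a.example.net:27018 --quiet --eval '
  rs.status().members.forEach(function (m) {
    if (m.stateStr === "ROLLBACK" || m.stateStr === "RECOVERING") {
      print("NOT READY: " + m.name + " is in " + m.stateStr);
    }
  });
  print("check complete");'
```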

Back up the config Database

Optional but Recommended. As a precaution, take a backup of the config database before upgrading the sharded cluster.
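For example, mongodump can capture just the config database (the replica set name, hosts, and output path below are placeholders):

```shell
# Placeholder config server replica set, hosts, and output directory.
mongodump \
  --host "csReplSet/cfg1.example.net:27019,cfg2.example.net:27019,cfg3.example.net:27019" \
  --db config \
  --out /backups/config-pre-4.2-upgrade
```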

Hashed Indexes on PowerPC

For PowerPC Only

For hashed indexes, MongoDB 4.2 ensures that the hashed value for the floating point value 2^63 on PowerPC is consistent with other platforms.

Although a hashed index on a field that may contain floating point values greater than 2^63 is an unsupported configuration, clients may still insert documents where the indexed field has the value 2^63.

  • If the current MongoDB 4.0 sharded cluster on PowerPC has hashed shard key values for 2^63, then, before upgrading:

    1. Make a backup of the documents; e.g., run mongoexport with the --query option to select the documents with 2^63 in the shard key field.
    2. Delete the documents with the 2^63 value.

    After you upgrade following the procedure below, re-import the deleted documents.

  • If an existing MongoDB 4.0 collection on PowerPC has a hashed index entry for the value 2^63 that is not used as the shard key, you also have the option to drop the index before upgrading and then re-create it after the upgrade is complete.

To list all hashed indexes for your deployment and find documents whose indexed field contains the value 2^63, see the Hashed Indexes and PowerPC check.
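A sketch of the export-and-delete step (the database, collection, and field names are hypothetical; 2^63 = 9223372036854775808):

```shell
# Hypothetical names -- adjust database, collection, field, host, and paths.
# Export the affected documents first...
mongoexport --host mongos.example.net:27017 --db mydb --collection items \
  --query '{ "hashedField": 9223372036854775808 }' \
  --out /backups/items-2pow63.json

# ...then delete them so no hashed value of 2^63 remains before the upgrade.
mongo --host mongos.example.net:27017 mydb --eval \
  'db.items.deleteMany( { hashedField: 9223372036854775808 } )'
```

After the upgrade completes, the exported file can be re-imported with mongoimport.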

Download 4.2 Binaries

Use Package Manager

If you installed MongoDB from the MongoDB apt, yum, dnf, or zypper repositories, you should upgrade to 4.2 using your package manager.

Follow the appropriate 4.2 installation instructions for your Linux system. This will involve adding a repository for the new release, then performing the actual upgrade process.

Download 4.2 Binaries Manually

If you have not installed MongoDB using a package manager, you can manually download the MongoDB binaries from the MongoDB Download Center.

See 4.2 installation instructions for more information.

Upgrade Process

1

Disable the Balancer.

Connect a mongo shell to a mongos instance in the sharded cluster, and run sh.stopBalancer() to disable the balancer:

sh.stopBalancer()

Note

If a migration is in progress, the system will complete the in-progress migration before stopping the balancer. You can run sh.isBalancerRunning() to check the balancer’s current state.

To verify that the balancer is disabled, run sh.getBalancerState(), which returns false if the balancer is disabled:

sh.getBalancerState()

For more information on disabling the balancer, see Disable the Balancer.

2

Upgrade the config servers.

  1. Upgrade the secondary members of the replica set one at a time:

    1. Shut down the secondary mongod instance and replace the 4.0 binary with the 4.2 binary.

    2. Start the 4.2 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any other options as used by the deployment.

      mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
      

      If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

      sharding:
         clusterRole: configsvr
      replication:
         replSetName: <string>
      net:
         port: <port>
         bindIp: localhost,<ip address>
      storage:
         dbPath: <path>
      

      Include any other settings as appropriate for your deployment.

    3. Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member’s state, issue rs.status() in the mongo shell.

      Repeat for each secondary member.

  2. Step down the replica set primary.

    1. Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

      rs.stepDown()
      
    2. When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the stepped-down primary and replace the mongod binary with the 4.2 binary.

    3. Start the 4.2 binary with the --configsvr, --replSet, --port, and --bind_ip options. Include any optional command line options used by the previous deployment:

      mongod --configsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
      

      If using a configuration file, update the file to specify sharding.clusterRole: configsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

      sharding:
         clusterRole: configsvr
      replication:
         replSetName: <string>
      net:
         port: <port>
         bindIp: localhost,<ip address>
      storage:
         dbPath: <path>
      

      Include any other configuration as appropriate for your deployment.

3

Upgrade the shards.

Upgrade the shards one at a time.

For each shard replica set:

  1. Upgrade the secondary members of the replica set one at a time:

    1. Shut down the mongod instance and replace the 4.0 binary with the 4.2 binary.

    2. Start the 4.2 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:

      mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
      

      If using a configuration file, update the file to include sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

      sharding:
         clusterRole: shardsvr
      replication:
         replSetName: <string>
      net:
         port: <port>
         bindIp: localhost,<ip address>
      storage:
         dbPath: <path>
      

      Include any other configuration as appropriate for your deployment.

    3. Wait for the member to recover to SECONDARY state before upgrading the next secondary member. To check the member’s state, you can issue rs.status() in the mongo shell.

      Repeat for each secondary member.

  2. Step down the replica set primary.

    Connect a mongo shell to the primary and use rs.stepDown() to step down the primary and force an election of a new primary:

    rs.stepDown()
    
  3. When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, upgrade the stepped-down primary:

    1. Shut down the stepped-down primary and replace the mongod binary with the 4.2 binary.

    2. Start the 4.2 binary with the --shardsvr, --replSet, --port, and --bind_ip options. Include any additional command line options as appropriate for your deployment:

      mongod --shardsvr --replSet <replSetName> --port <port> --dbpath <path> --bind_ip localhost,<ip address>
      

      If using a configuration file, update the file to specify sharding.clusterRole: shardsvr, replication.replSetName, net.port, and net.bindIp, then start the 4.2 binary:

      sharding:
         clusterRole: shardsvr
      replication:
         replSetName: <string>
      net:
         port: <port>
         bindIp: localhost,<ip address>
      storage:
         dbPath: <path>
      

      Include any other configuration as appropriate for your deployment.

4

Upgrade the mongos instances.

Replace each mongos instance with the 4.2 binary and restart. Include any other configuration as appropriate for your deployment.

Note

The --bind_ip option must be specified when the sharded cluster members are run on different hosts or if remote clients connect to the sharded cluster. For more information, see Localhost Binding Compatibility Changes.

mongos --configdb csReplSet/<rsconfigsver1:port1>,<rsconfigsver2:port2>,<rsconfigsver3:port3> --bind_ip localhost,<ip address>

If upgrading from MongoDB 4.0.6 or earlier:

Once a mongos instance for the deployment is upgraded, that mongos instance starts to produce v1 change stream resume tokens. These tokens cannot be used to resume a stream on a mongos instance that has not yet been upgraded.

5

Re-enable the balancer.

Using a 4.2 mongo shell, connect to a mongos in the cluster and run sh.startBalancer() to re-enable the balancer:

sh.startBalancer()

Starting in MongoDB 4.2, sh.startBalancer() also enables auto-splitting for the sharded cluster.

If you do not wish to enable auto-splitting while the balancer is enabled, you must also run sh.disableAutoSplit().
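For example, to re-enable the balancer while keeping auto-splitting off (the mongos address is a placeholder):

```shell
# Placeholder mongos address -- re-enable balancing, then turn auto-splitting back off.
mongo --host mongos.example.net:27017 --eval 'sh.startBalancer(); sh.disableAutoSplit();'
```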

For more information about re-enabling the balancer, see Enable the Balancer.

6

Enable backwards-incompatible 4.2 features.

At this point, you can run the 4.2 binaries without the 4.2 features that are incompatible with 4.0.

To enable these 4.2 features, set the feature compatibility version (FCV) to 4.2.

Tip

Enabling these backwards-incompatible features can complicate the downgrade process since you must remove any persisted backwards-incompatible features before you downgrade.

It is recommended that after upgrading, you allow your deployment to run without enabling these features for a burn-in period. When you are confident that the likelihood of downgrade is minimal, enable these features.

On a mongos instance, run the setFeatureCompatibilityVersion command in the admin database:

db.adminCommand( { setFeatureCompatibilityVersion: "4.2" } )

This command must perform writes to an internal system collection. If for any reason the command does not complete successfully, you can safely retry the command on the mongos as the operation is idempotent.
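Because the operation is idempotent, a simple retry wrapper is safe; a sketch (the mongos address is a placeholder):

```shell
# Placeholder mongos address -- retry until the command reports ok: 1.
until mongo --host mongos.example.net:27017 --quiet --eval \
    'db.adminCommand( { setFeatureCompatibilityVersion: "4.2" } ).ok' | grep -qx 1; do
  echo "setFeatureCompatibilityVersion did not complete; retrying in 5s..." >&2
  sleep 5
done
```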

Note

Starting in MongoDB 4.0, the mongos binary will crash when attempting to connect to mongod instances whose feature compatibility version (FCV) is greater than that of the mongos. For example, you cannot connect a MongoDB 4.0 version mongos to a 4.2 sharded cluster with FCV set to 4.2. You can, however, connect a MongoDB 4.0 version mongos to a 4.2 sharded cluster with FCV set to 4.0.

Post Upgrade

TLS Options Replace Deprecated SSL Options

Starting in MongoDB 4.2, MongoDB deprecates the SSL options for the mongod, the mongos, and the mongo shell as well as the corresponding net.ssl Options configuration file options.

To avoid deprecation messages, use the new TLS options for the mongod, the mongos, and the mongo shell.

4.2-Compatible Drivers Retry Writes by Default

The official MongoDB 3.6 and 4.0-compatible drivers required including the retryWrites=true option in the connection string to enable retryable writes for that connection.

The official MongoDB 4.2-compatible drivers enable Retryable Writes by default. Applications upgrading to the 4.2-compatible drivers that require retryable writes may omit the retryWrites=true option. Applications upgrading to the 4.2-compatible drivers that require disabling retryable writes must include retryWrites=false in the connection string.
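As an illustration (the host and database in the connection string are placeholders), an application that must keep retryable writes disabled after moving to a 4.2-compatible driver would connect with a string like:

```shell
# Placeholder connection string; with 4.2-compatible drivers,
# retryWrites=true is the default and may be omitted...
base="mongodb://app.example.net:27017/mydb"

# ...while opting out must now be explicit:
echo "${base}?retryWrites=false"
```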

PowerPC and Hashed Index Value of 2^63

If, on PowerPC, you found a hashed index field with the value 2^63:

  • If you deleted the documents, re-import them from the export created as part of the prerequisites.
  • If you dropped the hashed index before upgrading, re-create it.

Additional Upgrade Procedures