Downgrade MongoDB from 3.2

Before you attempt any downgrade, familiarize yourself with the content of this document, particularly the Downgrade Recommendations and Checklist and the procedure for downgrading sharded clusters.

Downgrade Recommendations and Checklist

When downgrading, consider the following:

Downgrade Path

To downgrade, use the latest release in the 3.0 series.

Preparedness

Before downgrading MongoDB, always test your application in a staging environment before deploying the downgrade to your production environment.

Procedures

Follow the appropriate downgrade procedure for your deployment: to downgrade a sharded cluster, see Downgrade a 3.2 Sharded Cluster; to downgrade a replica set, see Downgrade a 3.2 Replica Set; to downgrade a standalone instance, see Downgrade a Standalone mongod Instance.

Prerequisites

Text Index Version Check

If you have version 3 text indexes (i.e. the default version for text indexes in MongoDB 3.2), drop the version 3 text indexes before downgrading MongoDB. After the downgrade, recreate the dropped text indexes.

To determine the version of your text indexes, run db.collection.getIndexes() to view index specifications. For text indexes, the method returns the version information in the field textIndexVersion. For example, the following shows that the text index on the quotes collection is version 3.

{
   "v" : 1,
   "key" : {
      "_fts" : "text",
      "_ftsx" : 1
   },
   "name" : "quote_text_translation.quote_text",
   "ns" : "test.quotes",
   "weights" : {
      "quote" : 1,
      "translation.quote" : 1
   },
   "default_language" : "english",
   "language_override" : "language",
   "textIndexVersion" : 3
}
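
A version 3 text index must be dropped before the downgrade and rebuilt afterwards. A minimal sketch based on the sample index above (the collection, index, and field names come from that sample; substitute your own):

db.quotes.dropIndex( "quote_text_translation.quote_text" )

// After restarting with the 3.0 binary, recreate the index;
// MongoDB 3.0 builds it as a version 2 text index.
db.quotes.createIndex( { quote: "text", "translation.quote": "text" } )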

2dsphere Index Version Check

If you have version 3 2dsphere indexes (i.e. the default version for 2dsphere indexes in MongoDB 3.2), drop the version 3 2dsphere indexes before downgrading MongoDB. After the downgrade, recreate the 2dsphere indexes.

To determine the version of your 2dsphere indexes, run db.collection.getIndexes() to view index specifications. For 2dsphere indexes, the method returns the version information in the field 2dsphereIndexVersion. For example, the following shows that the 2dsphere index on the locations collection is version 3.

{
   "v" : 1,
   "key" : {
      "geo" : "2dsphere"
   },
   "name" : "geo_2dsphere",
   "ns" : "test.locations",
   "sparse" : true,
   "2dsphereIndexVersion" : 3
}
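
As with text indexes, a version 3 2dsphere index must be dropped before the downgrade and rebuilt afterwards. A minimal sketch based on the sample index above:

db.locations.dropIndex( "geo_2dsphere" )

// After restarting with the 3.0 binary, recreate the index;
// the original was sparse, so pass sparse: true explicitly.
db.locations.createIndex( { geo: "2dsphere" }, { sparse: true } )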

Partial Indexes Check

Partial indexes, introduced in MongoDB 3.2, are not supported by MongoDB 3.0. Before downgrading MongoDB, drop any partial indexes.
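
Partial indexes carry a partialFilterExpression field in their specifications, which makes them straightforward to find. A hedged sketch, using a hypothetical restaurants collection:

// List indexes whose specification contains a partialFilterExpression.
db.restaurants.getIndexes().filter( function( idx ) {
   return idx.hasOwnProperty( "partialFilterExpression" );
} )

// Drop each partial index by name before downgrading.
db.restaurants.dropIndex( "<indexName>" )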

Downgrade a Standalone mongod Instance

The following steps outline the procedure to downgrade a standalone mongod from version 3.2 to 3.0.

1. Download the latest 3.0 binaries.

For the downgrade, use the latest release in the 3.0 series.

2. Restart with the latest 3.0 mongod instance.

Important

If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

Shut down your mongod instance. Replace the existing binary with the downloaded mongod binary and restart.
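
A minimal sketch, assuming a hypothetical data directory of /data/db and the WiredTiger storage engine (mongod --shutdown is available on Linux; running db.shutdownServer() from the mongo shell is an alternative):

mongod --shutdown --dbpath /data/db

# Replace the mongod binary with the latest 3.0 binary, then:
mongod --dbpath /data/db --storageEngine wiredTiger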

Downgrade a 3.2 Replica Set

The following steps outline a “rolling” downgrade process for the replica set. The “rolling” downgrade process minimizes downtime by downgrading the members individually while the other members are available:

1. Downgrade the protocolVersion.

Connect a mongo shell to the current primary and downgrade the replication protocol:

cfg = rs.conf();
cfg.protocolVersion=0;
rs.reconfig(cfg);
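
Optionally, confirm that the change took effect; after the reconfiguration, rs.conf() should report the downgraded protocol version:

rs.conf().protocolVersion   // expected to report 0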

2. Downgrade secondary members of the replica set.

Downgrade each secondary member of the replica set, one at a time:

  1. Shut down the mongod. See Stop mongod Processes for instructions on safely terminating mongod processes.

  2. Replace the 3.2 binary with the 3.0 binary and restart.

    Important

    If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

  3. Wait for the member to recover to SECONDARY state before downgrading the next secondary. To check the member’s state, use the rs.status() method in the mongo shell.
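
For example, the following sketch prints each member's name and state:

rs.status().members.forEach( function( member ) {
   print( member.name + " : " + member.stateStr );
} )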

3. Step down the primary.

Use rs.stepDown() in the mongo shell to step down the primary and force the normal failover procedure.

rs.stepDown()

rs.stepDown() expedites the failover procedure and is preferable to shutting down the primary directly.

4. Replace and restart the former primary mongod.

When rs.status() shows that the primary has stepped down and another member has assumed PRIMARY state, shut down the previous primary and replace the mongod binary with the 3.0 binary and start the new instance.

Important

If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

Replica set failover is not instant but will render the set unavailable to writes and interrupt reads until the failover process completes. Typically this takes 10 seconds or more. You may wish to plan the downgrade during a predetermined maintenance window.

Downgrade a 3.2 Sharded Cluster

Requirements

While the downgrade is in progress, you cannot make changes to the collection metadata. For example, during the downgrade, do not do any of the following:

  - sh.enableSharding()
  - sh.shardCollection()
  - sh.addShard()
  - db.createCollection()
  - db.collection.drop()
  - db.dropDatabase()
  - any operation that creates a database
  - any other operation that modifies the cluster metadata in any way

Downgrade a Sharded Cluster with SCCC Config Servers

1. Disable the Balancer.

Turn off the balancer in the sharded cluster, as described in Disable the Balancer.
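
For example, connect a mongo shell to a mongos and run sh.stopBalancer(), which disables the balancer and waits for any in-progress balancing round to finish:

sh.stopBalancer()

// Confirm that the balancer is disabled.
sh.getBalancerState()   // expected to return false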

2. Downgrade each shard, one at a time.

For each replica set shard:

  1. Downgrade the protocolVersion.
  2. Downgrade the mongod secondaries before downgrading the primary.
  3. To downgrade the primary, run replSetStepDown and then downgrade.

For details on downgrading a replica set, see Downgrade a 3.2 Replica Set.

3. Downgrade the SCCC config servers.

If the sharded cluster uses 3 mirrored mongod instances for the config servers, downgrade all three instances in reverse order of their listing in the --configdb option for mongos. For example, if mongos has the following --configdb listing:

--configdb confserver1,confserver2,confserver3

Downgrade first confserver3, then confserver2, and lastly confserver1.

If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

mongod --configsvr --dbpath <path> --port <port> --storageEngine <storageEngine>

4. Downgrade the mongos instances.

Downgrade the binaries and restart.

5. Re-enable the balancer.

Once the downgrade of sharded cluster components is complete, re-enable the balancer.
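
For example, connect a mongo shell to a mongos and run:

sh.setBalancerState(true)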

Downgrade a Sharded Cluster with CSRS Config Servers

1. Disable the Balancer.

Turn off the balancer in the sharded cluster, as described in Disable the Balancer.

2. Check the minOpTimeUpdaters value.

If the sharded cluster uses CSRS, check the minOpTimeUpdaters value for each shard to see whether it is zero. A minOpTimeUpdaters value of zero indicates that no migrations are in progress. A non-zero value indicates either that a migration is in progress or that a previously completed migration failed to clear the value; in that case, the value should be cleared before downgrading.

To check the value, for each shard, connect to the primary member (or if a shard is a standalone, connect to the standalone) and query the system.version collection in the admin database for the minOpTimeRecovery document:

use admin
db.system.version.findOne( { _id: "minOpTimeRecovery" }, { minOpTimeUpdaters: 1 } )

If minOpTimeUpdaters is non-zero, clear the value by stepping down the current primary. The value is cleared when a new primary gets elected.

rs.stepDown()

If the shard is a standalone, restart the shard to clear the value.

3. Prepare CSRS Config Servers for downgrade.

If the sharded cluster uses CSRS:

  1. Remove any extra secondary members so that the replica set consists of only a primary and two secondaries, and reconfigure the set so that only the primary can vote and is eligible to become primary; i.e. the other two members have 0 for votes and priority.

    Connect a mongo shell to the primary and run:

    rs.reconfig(
       {
          "_id" : <name>,
          "configsvr" : true,
          "protocolVersion" : NumberLong(1),
          "members" : [
             {
                "_id" : 0,
                "host" : "<host1>:<port1>",
                "priority" : 1,
                "votes" : 1
             },
             {
                "_id" : 1,
                "host" : "<host2>:<port2>",
                "priority" : 0,
                "votes" : 0
             },
             {
                "_id" : 2,
                "host" : "<host3>:<port3>",
                "priority" : 0,
                "votes" : 0
             }
          ]
       }
    )
    
  2. Step down the primary using replSetStepDown against the admin database, allowing enough time for the secondaries to catch up (the secondaryCatchUpPeriodSecs option below allows up to 300 seconds).

    Connect a mongo shell to the primary and run:

    db.adminCommand( { replSetStepDown: 360, secondaryCatchUpPeriodSecs: 300 })
    
  3. Shut down all members of the config server replica set, the mongos instances, and the shards.

  4. If you are rolling back to MMAPv1 (a consolidated sketch of this sequence follows the list):

    1. Start a CSRS member as a standalone; i.e. without the --replSet option or, if using a configuration file, replication.replSetName.

    2. Run mongodump to dump the config database, then shut down the CSRS member.

      mongodump --db "config"
      

      Include all other options as required by your deployment.

    3. Create a data directory for the new mongod instance that will run with the MMAPv1 storage engine. mongod must have read and write permissions for the data directory.

      mongod with MMAPv1 will not start with data files created with a different storage engine.

    4. Restart the mongod as an MMAPv1 standalone; i.e. with --storageEngine mmapv1 and without the --replSet option or, if using a configuration file, replication.replSetName.

    5. Use mongorestore --drop to restore the config dump to the new MMAPv1 mongod.

      mongorestore --db "config" --drop /path/to/dump/config
      
    6. Repeat for each member of the CSRS.

    Optionally, once the sharded cluster is online and working as expected, delete the WiredTiger data directories.

  5. Restart each config server as a standalone 3.2 mongod; i.e. without the --replSet option or, if using a configuration file, replication.replSetName.

    mongod --configsvr --dbpath <path> --port <port> --storageEngine <storageEngine>
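
For the MMAPv1 rollback in sub-step 4, the following consolidated sketch shows the sequence for one CSRS member; the paths and port are hypothetical, and any other options required by your deployment must be added:

# Start the CSRS member as a standalone (no --replSet);
# run it in the background or in a separate shell.
mongod --configsvr --dbpath /data/configdb --port 27019

# Dump the config database, then shut the member down.
mongodump --port 27019 --db config --out /backup/configdump
mongod --shutdown --dbpath /data/configdb

# Create a fresh data directory and restart with MMAPv1.
mkdir -p /data/configdb-mmapv1
mongod --configsvr --dbpath /data/configdb-mmapv1 --port 27019 --storageEngine mmapv1

# Restore the config dump into the MMAPv1 instance.
mongorestore --port 27019 --db config --drop /backup/configdump/config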
    
4. Update the protocolVersion for each shard.

Restart each replica set shard and update the protocolVersion.

Connect a mongo shell to the current primary and downgrade the replication protocol:

cfg = rs.conf();
cfg.protocolVersion=0;
rs.reconfig(cfg);

5. Downgrade the mongos instances.

Important

Because the config servers have changed from a replica set to three mirrored mongod instances, update the --configdb setting for each mongos. All mongos instances must use the same --configdb string.
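
For example, reusing the hypothetical host names from the SCCC example above:

mongos --configdb confserver1,confserver2,confserver3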

Downgrade the binaries and restart.

6. Downgrade Config Servers.

Downgrade the binaries and restart. Downgrade in reverse order of their listing in the --configdb option for mongos.

If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

mongod --configsvr --dbpath <path> --port <port> --storageEngine <storageEngine>

7. Downgrade each shard, one at a time.

For each shard, remove the minOpTimeRecovery document from the admin.system.version collection using the following remove operation. If the shard is a replica set, issue the operation on the shard's primary:

use admin
db.system.version.remove(
   { _id: "minOpTimeRecovery" },
   { writeConcern: { w: "majority", wtimeout: 30000 } }
)

Note

If the cluster is running with authentication enabled, you must have a user with the proper privileges to remove the minOpTimeRecovery document from the admin.system.version collection. The following operations create a downgrade user on the admin database with those privileges; substitute your own password for the <password> placeholder:

use admin

db.createRole({
  role: "downgrade_csrs",
  privileges: [
     { resource: { db: "admin", collection: "system.version" }, actions: [ "remove" ] }
  ],
  roles: [ ]
});

db.createUser({
  user: "downgrade",
  pwd: "<password>",
  roles: [
    { role: "downgrade_csrs", db: "admin" }
  ]
});

For each replica set shard, downgrade the mongod binaries and restart. If your mongod instance is using the WiredTiger storage engine, you must include the --storageEngine option (or storage.engine if using the configuration file) with the 3.0 binary.

  1. Downgrade the mongod secondaries before downgrading the primary.
  2. To downgrade the primary, run replSetStepDown and then downgrade.

For details on downgrading a replica set, see Downgrade a 3.2 Replica Set.

Optionally, drop the local database from the SCCC members if it exists; it is left over from their earlier operation as a CSRS replica set.

8. Re-enable the balancer.

Once the downgrade of sharded cluster components is complete, re-enable the balancer.