• "majority"

Read Concern "majority"

For read operations not associated with multi-document transactions, read concern "majority" guarantees that the data read has been acknowledged by a majority of the replica set members (i.e. the documents read are durable and guaranteed not to roll back).

Each node maintains in-memory a view of the data at the majority-commit point; the majority-commit point is calculated by the primary. To fulfill read concern "majority", the node returns data from this view; this is comparable in performance cost to other read concerns.
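
For example, a read operation can request this level on a per-query basis. The following mongosh sketch assumes a hypothetical restaurants collection and filter; it is illustrative only:

    // Read with "majority" read concern: the returned documents have been
    // acknowledged by a majority of the replica set members.
    db.restaurants.find({ rating: { $gte: 4 } }).readConcern("majority")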

For operations in multi-document transactions, read concern "majority" provides its guarantees only if the transaction commits with write concern "majority". Otherwise, the "majority" read concern provides no guarantees about the data read in transactions.

To use a read concern level of "majority", replica sets must use the WiredTiger storage engine.

You can disable read concern "majority" for a deployment with a three-member primary-secondary-arbiter (PSA) architecture; however, this has implications for change streams (in MongoDB 4.0 and earlier only) and transactions on sharded clusters. For more information, see Disable Read Concern Majority.

Regardless of the read concern level, the most recent data on a node may not reflect the most recent version of the data in the system.

Example

Consider the following timeline of a write operation Write0 to a three-member replica set:

Note

For simplification, the example assumes:

  • All writes prior to Write0 have been successfully replicated to all members.
  • Writeprev is the previous write before Write0.
  • No other writes have occurred after Write0.

Timeline of a write operation to a three-member replica set.

Time | Event | Most Recent Write (Primary / Secondary1 / Secondary2) | Most Recent w: "majority" Write (Primary / Secondary1 / Secondary2)
t0 | Primary applies Write0 | Write0 / Writeprev / Writeprev | Writeprev / Writeprev / Writeprev
t1 | Secondary1 applies Write0 | Write0 / Write0 / Writeprev | Writeprev / Writeprev / Writeprev
t2 | Secondary2 applies Write0 | Write0 / Write0 / Write0 | Writeprev / Writeprev / Writeprev
t3 | Primary is aware of successful replication to Secondary1 and sends acknowledgement to the client | Write0 / Write0 / Write0 | Write0 / Writeprev / Writeprev
t4 | Primary is aware of successful replication to Secondary2 | Write0 / Write0 / Write0 | Write0 / Writeprev / Writeprev
t5 | Secondary1 receives notice (through the regular replication mechanism) to update its snapshot of its most recent w: "majority" write | Write0 / Write0 / Write0 | Write0 / Write0 / Writeprev
t6 | Secondary2 receives notice (through the regular replication mechanism) to update its snapshot of its most recent w: "majority" write | Write0 / Write0 / Write0 | Write0 / Write0 / Write0

Then, the following table summarizes the state of the data that a read operation with "majority" read concern would see at time T.


Read Target | Time T | State of Data
Primary | Before t3 | Data reflects Writeprev
Primary | After t3 | Data reflects Write0
Secondary1 | Before t5 | Data reflects Writeprev
Secondary1 | After t5 | Data reflects Write0
Secondary2 | Before or at t6 | Data reflects Writeprev
Secondary2 | After t6 | Data reflects Write0

Storage Engine Support

Read concern "majority" is available for theWiredTiger storage engine.

Tip

The serverStatus command returns the storageEngine.supportsCommittedReads field, which indicates whether the storage engine supports "majority" read concern.
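
For instance, the following mongosh one-liner reads that field directly from the serverStatus output; it assumes you are already connected to the mongod you want to inspect:

    // true if the storage engine supports "majority" read concern
    db.serverStatus().storageEngine.supportsCommittedReads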

Causally Consistent Sessions

Read concern majority is available for use with causally consistent sessions.
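
As a rough sketch, a causally consistent session in mongosh might pair majority-acknowledged writes with "majority" reads as shown below; the test database, items collection, and sample document are hypothetical:

    // Start a session with causal consistency enabled (the default).
    const session = db.getMongo().startSession({ causalConsistency: true })
    const items = session.getDatabase("test").items

    // Write with majority acknowledgement, then read it back with
    // "majority" read concern within the same session.
    items.insertOne({ sku: "abc123", qty: 100 }, { writeConcern: { w: "majority" } })
    items.find({ sku: "abc123" }).readConcern("majority")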

Read Concern "majority" and Transactions

Note

You set the read concern at the transaction level, not at the individual operation level. To set the read concern for transactions, see Transactions and Read Concern.

For operations in multi-document transactions, read concern "majority" provides its guarantees only if the transaction commits with write concern "majority". Otherwise, the "majority" read concern provides no guarantees about the data read in transactions.
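
For illustration, a transaction that pairs "majority" read concern with "majority" write concern might be set up in mongosh as follows; the test database, orders collection, and filter values are hypothetical:

    const session = db.getMongo().startSession()
    const orders = session.getDatabase("test").orders

    // Read and write concern are set on the transaction itself, not on the
    // individual operations inside it.
    session.startTransaction({
      readConcern: { level: "majority" },
      writeConcern: { w: "majority" }
    })
    orders.updateOne({ _id: 1 }, { $set: { status: "shipped" } })
    session.commitTransaction()
    session.endSession()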

Read Concern "majority" and Aggregation

Starting in MongoDB 4.2, you can specify read concern level "majority" for an aggregation that includes an $out stage.

In MongoDB 4.0 and earlier, you cannot use "majority" read concern for an aggregation that includes the $out stage.
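
For example, in MongoDB 4.2 or later an aggregation with an $out stage can request "majority" read concern as in the sketch below; the sales collection, field names, and output collection name are hypothetical:

    db.sales.aggregate(
      [
        { $group: { _id: "$region", total: { $sum: "$amount" } } },
        { $out: "salesSummary" }
      ],
      { readConcern: { level: "majority" } }
    )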

Read Your Own Writes

Changed in version 3.6.

Starting in MongoDB 3.6, you can use causally consistent sessions to read your own writes, if the writes request acknowledgement.

Prior to MongoDB 3.6, to ensure that a single thread can read its own writes, you must issue your write operation with { w: "majority" } write concern and then use either "majority" or "linearizable" read concern for the read operations.
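
A minimal mongosh sketch of that pre-3.6 pattern, assuming a hypothetical orders collection, is:

    // Write with majority acknowledgement...
    db.orders.insertOne(
      { _id: 10, status: "created" },
      { writeConcern: { w: "majority" } }
    )

    // ...then read with "majority" (or "linearizable") read concern from the
    // same thread to see that write.
    db.orders.find({ _id: 10 }).readConcern("majority")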

Disable Read Concern Majority

For 3-Member Primary-Secondary-Arbiter Architecture

You can disable read concern "majority" if you have a three-member replica set with a primary-secondary-arbiter (PSA) architecture or a sharded cluster with three-member PSA shards.

Note

If you are using a deployment other than a 3-member PSA, you do notneed to disable read concern majority.

With a three-member PSA architecture, cache pressure increases if any data-bearing node is down. To prevent the storage cache pressure from immobilizing a deployment with a PSA architecture, you can disable read concern "majority" by setting either:

  • the replication.enableMajorityReadConcern configuration file setting to false, or
  • the --enableMajorityReadConcern command-line option to false.

To check if read concern "majority" is disabled, you can run db.serverStatus() on the mongod instances and check the storageEngine.supportsCommittedReads field. If false, read concern "majority" is disabled.

Important

In general, avoid disabling "majority" read concern unless necessary. However, if you have a three-member replica set with a primary-secondary-arbiter (PSA) architecture or a sharded cluster with three-member PSA shards, you can disable it to prevent the storage cache pressure from immobilizing the deployment.