Replica Set Members

A replica set in MongoDB is a group of mongod processes that provide redundancy and high availability. The members of a replica set are:

  • Primary. The primary receives all write operations.
  • Secondaries. Secondaries replicate operations from the primary to maintain an identical data set. Secondaries may have additional configurations for special usage profiles. For example, secondaries may be non-voting or priority 0.

You can also maintain an arbiter as part of a replica set. Arbiters do not keep a copy of the data. However, arbiters play a role in the elections that select a primary if the current primary is unavailable.

The minimum recommended configuration for a replica set is a three-member replica set with three data-bearing members: one primary and two secondary members. You may alternatively deploy a three-member replica set with two data-bearing members: a primary, a secondary, and an arbiter, but replica sets with at least three data-bearing members offer better redundancy.
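A three-member deployment like the recommended one above can be started by initiating the set with a configuration document. The following mongosh sketch assumes the mongod processes were started with --replSet rs0; the replica set name and host names are placeholders:

```javascript
// Run in mongosh against one of the members. The _id must match the
// --replSet name the mongod processes were started with; the host
// names below are hypothetical.
rs.initiate({
  _id: "rs0",
  members: [
    { _id: 0, host: "mongodb0.example.net:27017" },
    { _id: 1, host: "mongodb1.example.net:27017" },
    { _id: 2, host: "mongodb2.example.net:27017" }
  ]
})
```

After initiation, the members hold an election and one of them becomes the primary.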

A replica set can have up to 50 members but only 7 voting members.

Primary

The primary is the only member in the replica set that receives write operations. MongoDB applies write operations on the primary and then records the operations on the primary’s oplog. Secondary members replicate this log and apply the operations to their data sets.
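The oplog described above is an ordinary capped collection in the local database, so it can be inspected directly from mongosh. A minimal sketch, for read-only inspection on any member:

```javascript
// View the five most recent oplog entries, newest first.
// $natural sorts by insertion order within the capped collection.
db.getSiblingDB("local").oplog.rs.find().sort({ $natural: -1 }).limit(5)
```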

In the following three-member replica set, the primary accepts all write operations. Then the secondaries replicate the oplog to apply to their data sets.

Diagram of default routing of reads and writes to the primary.

All members of the replica set can accept read operations. However, by default, an application directs its read operations to the primary member. See Read Preference for details on changing the default read behavior.
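One common way to change the default read behavior is through the connection string. The fragment below is illustrative; the host names and replica set name are placeholders:

```javascript
// Connection string directing reads to a secondary when one is
// available, falling back to the primary otherwise.
const uri = "mongodb://mongodb0.example.net:27017,mongodb1.example.net:27017," +
            "mongodb2.example.net:27017/?replicaSet=rs0&readPreference=secondaryPreferred"
```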

The replica set can have at most one primary.[2] If the current primary becomes unavailable, an election determines the new primary. See Replica Set Elections for more details.

Secondaries

A secondary maintains a copy of the primary’s data set. To replicate data, a secondary applies operations from the primary’s oplog to its own data set in an asynchronous process. [1] A replica set can have one or more secondaries.

The following three-member replica set has two secondary members. The secondaries replicate the primary’s oplog and apply the operations to their data sets.

Diagram of a 3 member replica set that consists of a primary and two secondaries.

Although clients cannot write data to secondaries, clients can read data from secondary members. See Read Preference for more information on how clients direct read operations to replica sets.

A secondary can become a primary. If the current primary becomes unavailable, the replica set holds an election to choose which of the secondaries becomes the new primary.

See Replica Set Elections for more details.

You can configure a secondary member for a specific purpose. You can configure a secondary to:

  • Prevent it from becoming a primary in an election, which allows it to reside in a secondary data center or to serve as a cold standby. See Priority 0 Replica Set Members.
  • Prevent applications from reading from it, which allows it to run applications that require separation from normal traffic. See Hidden Replica Set Members.
  • Keep a running “historical” snapshot for use in recovery from certain errors, such as unintentionally deleted databases. See Delayed Replica Set Members.
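These special-purpose behaviors correspond to per-member settings in the replica set configuration. The mongosh sketch below is illustrative; the member index and the one-hour delay are example values, and in versions before MongoDB 5.0 the delay field is named slaveDelay rather than secondaryDelaySecs:

```javascript
// Fetch the current configuration, adjust one member, and reapply it.
cfg = rs.conf()
cfg.members[1].priority = 0              // never eligible to become primary
cfg.members[1].hidden = true             // invisible to client applications
cfg.members[1].secondaryDelaySecs = 3600 // apply oplog entries one hour late
rs.reconfig(cfg)
```

A hidden or delayed member must also have priority 0, since it should never be elected primary.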
[1] Starting in version 4.2 (also available starting in 4.0.6), secondary members of a replica set now log oplog entries that take longer than the slow operation threshold to apply. These slow oplog messages are logged for the secondaries in the diagnostic log under the REPL component with the text applied op: <oplog entry> took <num>ms. These slow oplog entries depend only on the slow operation threshold. They do not depend on the log levels (either at the system or component level), the profiling level, or the slow operation sample rate. The profiler does not capture slow oplog entries.

Arbiter

An arbiter does not have a copy of the data set and cannot become a primary. Replica sets may have arbiters to add a vote in elections for primary. Arbiters always have exactly 1 election vote, and thus allow replica sets to have an uneven number of voting members without the overhead of an additional member that replicates data.

Changed in version 3.6: Starting in MongoDB 3.6, arbiters have priority 0. When you upgrade a replica set to MongoDB 3.6, if the existing configuration has an arbiter with priority 1, MongoDB 3.6 reconfigures the arbiter to have priority 0.

Important

Do not run an arbiter on systems that also host the primary or the secondary members of the replica set.

To add an arbiter, see Add an Arbiter to Replica Set.
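Adding an arbiter is typically a single mongosh call against the primary. The host name below is a placeholder, and newer MongoDB versions may require the cluster-wide write concern to be set before an arbiter can be added:

```javascript
// Run in mongosh while connected to the primary.
// The host:port is hypothetical.
rs.addArb("mongodb-arbiter.example.net:27017")
```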

For the following MongoDB versions, pv1 increases the likelihood of w:1 rollbacks compared to pv0 (no longer supported in MongoDB 4.0+) for replica sets with arbiters:

  • MongoDB 3.4.1
  • MongoDB 3.4.0
  • MongoDB 3.2.11 or earlier

See Replica Set Protocol Version.

[2] In some circumstances, two nodes in a replica set may transiently believe that they are the primary, but at most one of them will be able to complete writes with { w: "majority" } write concern. The node that can complete { w: "majority" } writes is the current primary, and the other node is a former primary that has not yet recognized its demotion, typically due to a network partition. When this occurs, clients that connect to the former primary may observe stale data despite having requested read preference primary, and new writes to the former primary will eventually roll back.
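The majority write concern referenced in the footnote can be requested per operation. A minimal sketch in mongosh, where the collection and document are illustrative:

```javascript
// A write acknowledged by a majority of voting, data-bearing members
// survives the failover scenario described above; wtimeout bounds the
// wait in milliseconds.
db.products.insertOne(
  { sku: "abc123", qty: 100 },
  { writeConcern: { w: "majority", wtimeout: 5000 } }
)
```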