Replica Sets Distributed Across Two or More Data Centers

Overview

While replica sets provide basic protection against single-instance failure, replica sets whose members are all located in a single data center are susceptible to data center failures. Power outages, network interruptions, and natural disasters can all affect replica sets whose members are located in a single facility.

Distributing replica set members across geographically distinct data centers adds redundancy and provides fault tolerance if one of the data centers is unavailable.

Distribution of the Members

To protect your data in case of a data center failure, keep at least one member in an alternate data center. If possible, use an odd number of data centers, and choose a distribution of members that maximizes the likelihood that even with a loss of a data center, the remaining replica set members can form a majority or, at a minimum, provide a copy of your data.
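The majority rule above can be sketched with a small helper. This is a hypothetical illustration for reasoning about data center loss, not a MongoDB API:

```javascript
// A replica set can elect a primary only if a strict majority of its
// voting members remain able to communicate with each other.
// Hypothetical helper, not part of MongoDB.
function canElectPrimary(survivingVotingMembers, totalVotingMembers) {
  return survivingVotingMembers > Math.floor(totalVotingMembers / 2);
}

// Three members split 2 + 1 across two data centers:
canElectPrimary(2, 3); // losing the smaller site leaves a majority → true
canElectPrimary(1, 3); // losing the larger site does not → false (read-only)
```

The same check explains why odd member counts are preferred: with four members split 2 + 2, neither surviving half can form a majority.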

Examples

Three-member Replica Set

For example, for a three-member replica set, some possible distributions of members include:

  • Two data centers: two members to Data Center 1 and one member to Data Center 2. If one of the members of the replica set is an arbiter, distribute the arbiter to Data Center 1 with a data-bearing member.
    • If Data Center 1 goes down, the replica set becomes read-only.
    • If Data Center 2 goes down, the replica set remains writeable as the members in Data Center 1 can hold an election.
  • Three data centers: one member to Data Center 1, one member to Data Center 2, and one member to Data Center 3.
    • If any data center goes down, the replica set remains writeable as the remaining members can hold an election.
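The two-data-center distribution above can be expressed as a replica set configuration document like the following sketch. The hostnames are hypothetical placeholders; in mongosh you would pass such a document to rs.initiate():

```javascript
// Sketch: three-member replica set split 2 + 1 across two data centers.
// Hostnames are hypothetical placeholders.
const cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-node1.example.net:27017" }, // Data Center 1
    { _id: 1, host: "dc1-node2.example.net:27017" }, // Data Center 1
    { _id: 2, host: "dc2-node1.example.net:27017" }  // Data Center 2
  ]
};
// In mongosh: rs.initiate(cfg)
```

For the three-data-center variant, move the second Data Center 1 member to its own site.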

Note

Distributing replica set members across two data centers provides benefit over a single data center. In a two data center distribution:

  • If one of the data centers goes down, the data is still available for reads, unlike a single data center distribution.
  • If the data center with a minority of the members goes down, the replica set can still serve write operations as well as read operations.
  • However, if the data center with the majority of the members goes down, the replica set becomes read-only.

If possible, distribute members across at least three data centers. For config server replica sets (CSRS), the best practice is to distribute across three (or more, depending on the number of members) centers. If the cost of the third data center is prohibitive, one distribution possibility is to evenly distribute the data-bearing members across the two data centers and store the remaining member (either a data-bearing member or an arbiter, to ensure an odd number of members) in the cloud if your company policy allows.
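The two-data-centers-plus-cloud fallback described above might look like the following configuration sketch: four data-bearing members split evenly, plus a cloud-hosted arbiter to keep the number of voting members odd. Hostnames are hypothetical placeholders:

```javascript
// Sketch: evenly distributed data-bearing members across two data
// centers, plus a cloud-hosted arbiter for an odd number of voting
// members. Hostnames are hypothetical placeholders.
const cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-node1.example.net:27017" },  // Data Center 1
    { _id: 1, host: "dc1-node2.example.net:27017" },  // Data Center 1
    { _id: 2, host: "dc2-node1.example.net:27017" },  // Data Center 2
    { _id: 3, host: "dc2-node2.example.net:27017" },  // Data Center 2
    { _id: 4, host: "cloud-arbiter.example.net:27017",
      arbiterOnly: true }                             // cloud-hosted arbiter
  ]
};
```

The arbiter votes in elections but holds no data, so losing either data center still leaves three of five voting members available on the surviving side plus the cloud.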

Five-member Replica Set

For a replica set with 5 members, some possible distributions of members include:

  • Two data centers: three members to Data Center 1 and two members to Data Center 2.
    • If Data Center 1 goes down, the replica set becomes read-only.
    • If Data Center 2 goes down, the replica set remains writeable as the members in Data Center 1 can create a majority.
  • Three data centers: two members to Data Center 1, two members to Data Center 2, and one member to Data Center 3.
    • If any data center goes down, the replica set remains writeable as the remaining members can hold an election.


For example, the following 5 member replica set distributes its members across three data centers.

Diagram of a 5 member replica set distributed across three data centers.
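The 2 + 2 + 1 distribution in the diagram might be expressed as a configuration sketch like this, with hypothetical placeholder hostnames:

```javascript
// Sketch: five-member replica set distributed 2 + 2 + 1 across three
// data centers. Hostnames are hypothetical placeholders.
const cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-node1.example.net:27017" }, // Data Center 1
    { _id: 1, host: "dc1-node2.example.net:27017" }, // Data Center 1
    { _id: 2, host: "dc2-node1.example.net:27017" }, // Data Center 2
    { _id: 3, host: "dc2-node2.example.net:27017" }, // Data Center 2
    { _id: 4, host: "dc3-node1.example.net:27017" }  // Data Center 3
  ]
};
// Losing any single data center leaves at least 3 of 5 members, a majority.
```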

Electability of Members

Some members of the replica set, such as members that have networking constraints or limited resources, should not be able to become primary in a failover. Configure members that should not become primary to have priority 0.

In some cases, you may prefer that the members in one data center be elected primary before the members in the other data centers. You can modify the priority of the members such that the members in the one data center have a higher priority than the members in the other data centers.

In the following example, the replica set members in Data Center 1 have a higher priority than the members in Data Centers 2 and 3; the members in Data Center 2 have a higher priority than the member in Data Center 3:

Diagram of a 5 member replica set distributed across three data centers. Replica set includes members with priority 0.5 and priority 0.
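The graded priorities described above (the diagram shows priority 0.5 and priority 0 members) could be set in the configuration as in this sketch. Member priority defaults to 1, and a priority 0 member can never become primary; hostnames are hypothetical placeholders:

```javascript
// Sketch: graded priorities so Data Center 1 members are preferred as
// primary, Data Center 2 members are the fallback, and the Data Center 3
// member never becomes primary. Hostnames are hypothetical placeholders.
const cfg = {
  _id: "rs0",
  members: [
    { _id: 0, host: "dc1-node1.example.net:27017", priority: 1 },   // preferred
    { _id: 1, host: "dc1-node2.example.net:27017", priority: 1 },   // preferred
    { _id: 2, host: "dc2-node1.example.net:27017", priority: 0.5 }, // fallback
    { _id: 3, host: "dc2-node2.example.net:27017", priority: 0.5 }, // fallback
    { _id: 4, host: "dc3-node1.example.net:27017", priority: 0 }    // never primary
  ]
};
// To adjust priorities on a running replica set in mongosh:
//   cfg = rs.conf(); cfg.members[4].priority = 0; rs.reconfig(cfg)
```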

Connectivity

Verify that your network configuration allows communication among all members; i.e., each member must be able to connect to every other member.

See also

Deploy a Geographically Redundant Replica Set, Deploy a Replica Set, Add an Arbiter to Replica Set, and Add Members to a Replica Set.