Change the Size of the Oplog
This procedure changes the size of the oplog [1] on each member of a replica set using the replSetResizeOplog command, starting with the secondary members before proceeding to the primary.
Perform these steps on each secondary replica set member first. Once you have changed the oplog size for all secondary members, perform these steps on the primary.
A. Connect to the replica set member
Connect to the replica set member using the mongo shell:
- mongo --host <hostname>:<port>
Note
If the replica set enforces authentication, you must authenticate as a user with privileges to modify the local database, such as a user with the clusterManager or clusterAdmin role.
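For example, an authenticated connection might look like the following sketch. The username and authentication database shown here are placeholders; substitute values from your own deployment:

```shell
# Placeholder credentials: replace the host, port, and user with your own.
# -p with no argument prompts for the password instead of exposing it on the command line.
mongo --host <hostname>:<port> -u myClusterManagerUser --authenticationDatabase admin -p
```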
B. (Optional) Verify the current size of the oplog
To view the current size of the oplog, switch to the local database and run db.collection.stats() against the oplog.rs collection. stats() displays the oplog size as maxSize.
- use local
- db.oplog.rs.stats().maxSize
The maxSize field displays the collection size in bytes.
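Because maxSize is reported in bytes, dividing by 1024 twice converts it to megabytes. A quick sketch of the conversion in plain JavaScript; the 16 GB starting value is only an example, not output from a real deployment:

```javascript
// Example value only: pretend stats().maxSize returned a 16 GB oplog.
const maxSizeBytes = 17179869184; // 16 * 1024 * 1024 * 1024

// Convert bytes to megabytes for comparison with the size parameter
// used by replSetResizeOplog, which is expressed in megabytes.
const maxSizeMB = maxSizeBytes / (1024 * 1024);
console.log(maxSizeMB); // 16384
```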
C. Change the oplog size of the replica set member
To resize the oplog, run the replSetResizeOplog command, passing the desired size in megabytes as the size parameter. The specified size must be greater than 990 megabytes.
The following operation changes the oplog size of the replica set member to 16 gigabytes, or 16000 megabytes.
- db.adminCommand({replSetResizeOplog: 1, size: 16000})
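To confirm the change, you can repeat the check from the previous step. Note that replSetResizeOplog takes megabytes while stats().maxSize reports bytes, so a 16000 megabyte oplog reports 16000 × 1024 × 1024 bytes. A sketch, run from the mongo shell while connected to the member:

```javascript
// Run in the mongo shell (requires a live connection); not standalone JavaScript.
db.getSiblingDB("local").oplog.rs.stats().maxSize
// A 16000 MB oplog reports 16777216000 bytes.
```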
[1] Starting in MongoDB 4.0, the oplog can grow past its configured size limit to avoid deleting the majority commit point.
D. (Optional) Compact oplog.rs to reclaim disk space
Reducing the size of the oplog does not automatically reclaim the disk space allocated to the original oplog size. You must run compact against the oplog.rs collection in the local database to reclaim disk space. There are no benefits to running compact on the oplog.rs collection after increasing the oplog size.
Important
The replica set member cannot replicate oplog entries while the compact operation is ongoing. While compact runs, the member may fall so far behind the primary that it cannot resume replication. The likelihood of a member becoming “stale” during the compact procedure increases with cluster write throughput, and may be further exacerbated by the reduced oplog size.
Consider scheduling a maintenance window during which writes are throttled or stopped to mitigate the risk of the member becoming “stale” and requiring a full resync.
Do not run compact against the primary replica set member. Connect a mongo shell directly to the primary (not the replica set) and run rs.stepDown(). If successful, the primary steps down. From the mongo shell, run the compact command on the now-secondary member.
The following operation runs the compact command against the oplog.rs collection:
- use local
- db.runCommand({ "compact": "oplog.rs" })
For clusters enforcing authentication, authenticate as a user with the compact privilege action on the local database and the oplog.rs collection. For complete documentation on compact authentication requirements, see compact Required Privileges.