Bulk Write Operations

Overview

MongoDB provides clients the ability to perform write operations in bulk. Bulk write operations affect a single collection. MongoDB allows applications to determine the acceptable level of acknowledgement required for bulk write operations.

New in version 3.2.

The db.collection.bulkWrite() method provides the ability to perform bulk insert, update, and remove operations. MongoDB also supports bulk insert through the db.collection.insertMany() method.
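
For example, a minimal insertMany() sketch (the collection and document contents here are illustrative):

   db.characters.insertMany( [
      { "char" : "Sima", "class" : "cleric", "lvl" : 2 },
      { "char" : "Norok", "class" : "rogue", "lvl" : 5 }
   ] )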

Ordered vs Unordered Operations

Bulk write operations can be either ordered or unordered.

With an ordered list of operations, MongoDB executes the operations serially. If an error occurs during the processing of one of the write operations, MongoDB will return without processing any remaining write operations in the list. See Ordered Bulk Write.

With an unordered list of operations, MongoDB can execute the operations in parallel, but this behavior is not guaranteed. If an error occurs during the processing of one of the write operations, MongoDB will continue to process remaining write operations in the list. See Unordered Bulk Write.

Executing an ordered list of operations on a sharded collection will generally be slower than executing an unordered list since with an ordered list, each operation must wait for the previous operation to finish.

By default, bulkWrite() performs ordered operations. To specify unordered write operations, set ordered : false in the options document.

See Execution of Operations.
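
For example, the following minimal sketch passes an options document with ordered : false; the documents and the majority write concern are illustrative, assuming a characters collection like the one used in the examples below:

   try {
      db.characters.bulkWrite(
         [
            { insertOne : { "document" : { "_id" : 6, "char" : "Aurek", "class" : "bard", "lvl" : 2 } } },
            { insertOne : { "document" : { "_id" : 7, "char" : "Sane", "class" : "oracle", "lvl" : 1 } } }
         ],
         { ordered : false, writeConcern : { w : "majority" } }
      );
   } catch (e) {
      print(e);
   }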

bulkWrite() Methods

bulkWrite() supports the following write operations:

  • insertOne
  • updateOne
  • updateMany
  • replaceOne
  • deleteOne
  • deleteMany

Each write operation is passed to bulkWrite() as a document in an array.

For example, the following performs multiple write operations:

The characters collection contains the following documents:

  1. { "_id" : 1, "char" : "Brisbane", "class" : "monk", "lvl" : 4 },
  2. { "_id" : 2, "char" : "Eldon", "class" : "alchemist", "lvl" : 3 },
  3. { "_id" : 3, "char" : "Meldane", "class" : "ranger", "lvl" : 3 }

The following bulkWrite() performs multiple operations on the collection:

   try {
      db.characters.bulkWrite(
         [
            { insertOne :
               {
                  "document" :
                  {
                     "_id" : 4, "char" : "Dithras", "class" : "barbarian", "lvl" : 4
                  }
               }
            },
            { insertOne :
               {
                  "document" :
                  {
                     "_id" : 5, "char" : "Taeln", "class" : "fighter", "lvl" : 3
                  }
               }
            },
            { updateOne :
               {
                  "filter" : { "char" : "Eldon" },
                  "update" : { $set : { "status" : "Critical Injury" } }
               }
            },
            { deleteOne :
               { "filter" : { "char" : "Brisbane" } }
            },
            { replaceOne :
               {
                  "filter" : { "char" : "Meldane" },
                  "replacement" : { "char" : "Tanys", "class" : "oracle", "lvl" : 4 }
               }
            }
         ]
      );
   }
   catch (e) {
      print(e);
   }

The operation returns the following:

   {
      "acknowledged" : true,
      "deletedCount" : 1,
      "insertedCount" : 2,
      "matchedCount" : 2,
      "upsertedCount" : 0,
      "insertedIds" : {
         "0" : 4,
         "1" : 5
      },
      "upsertedIds" : {

      }
   }

For more examples, see bulkWrite() Examples.

Strategies for Bulk Inserts to a Sharded Collection

Large bulk insert operations, including initial data inserts or routine data import, can affect sharded cluster performance. For bulk inserts, consider the following strategies:

Pre-Split the Collection

If the sharded collection is empty, then the collection has only one initial chunk, which resides on a single shard. MongoDB must then take time to receive data, create splits, and distribute the split chunks to the available shards. To avoid this performance cost, you can pre-split the collection, as described in Split Chunks in a Sharded Cluster.
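
For instance, the following sketch pre-splits an empty sharded collection at a few chunk boundaries, assuming (for illustration only) that test.characters is sharded on { _id : 1 } and that _id values will span a known range:

   sh.splitAt( "test.characters", { "_id" : 1000 } )
   sh.splitAt( "test.characters", { "_id" : 2000 } )
   sh.splitAt( "test.characters", { "_id" : 3000 } )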

Unordered Writes to mongos

To improve write performance to sharded clusters, use bulkWrite() with the optional parameter ordered set to false. mongos can attempt to send the writes to multiple shards simultaneously. For empty collections, first pre-split the collection as described in Split Chunks in a Sharded Cluster.
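
As a sketch, the following bulk-loads illustrative documents with ordered : false so that mongos is free to distribute the inserts across shards (the collection, field names, and document count are assumptions for illustration):

   var docs = [];
   for (var i = 0; i < 10000; i++) {
      docs.push({ "char" : "npc_" + i, "class" : "commoner", "lvl" : 1 });
   }
   db.characters.insertMany(docs, { ordered : false });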

Avoid Monotonic Throttling

If your shard key increases monotonically during an insert, then all inserted data goes to the last chunk in the collection, which will always end up on a single shard. Therefore, the insert capacity of the cluster will never exceed the insert capacity of that single shard.

If your insert volume is larger than what a single shard can process, and if you cannot avoid a monotonically increasing shard key, then consider the following modifications to your application:

  • Reverse the binary bits of the shard key. This preserves the information and avoids correlating insertion order with increasing sequence of values.
  • Swap the first and last 16-bit words to “shuffle” the inserts.

Example

The following example, in C++, swaps the leading and trailing 16-bit word of BSON ObjectIds generated so they are no longer monotonically increasing.

   #include <algorithm>

   using namespace mongo;

   OID make_an_id() {
      OID x = OID::gen();
      const unsigned char *p = x.getData();
      // swap the leading and trailing 16-bit words of the 12-byte ObjectId
      std::swap( (unsigned short&) p[0], (unsigned short&) p[10] );
      return x;
   }

   void foo() {
      // create an object
      BSONObj o = BSON( "_id" << make_an_id() << "x" << 3 << "name" << "jane" );
      // now we may insert o into a sharded collection
   }

See also

Shard Keys for information on choosing a shard key. Also see Shard Key Internals (in particular, Choosing a Shard Key).