Aggregation Examples

There are several ways to perform aggregations in MongoDB. These examples cover the aggregation framework and map/reduce.

Setup

To start, we’ll insert some example data which we can perform aggregations on:

  >>> from pymongo import MongoClient
  >>> db = MongoClient().aggregation_example
  >>> result = db.things.insert_many([{"x": 1, "tags": ["dog", "cat"]},
  ...                                 {"x": 2, "tags": ["cat"]},
  ...                                 {"x": 2, "tags": ["mouse", "cat", "dog"]},
  ...                                 {"x": 3, "tags": []}])
  >>> result.inserted_ids
  [ObjectId('...'), ObjectId('...'), ObjectId('...'), ObjectId('...')]

Aggregation Framework

This example shows how to use the aggregate() method with the aggregation framework. We’ll perform a simple aggregation to count the number of occurrences of each tag in the tags array, across the entire collection. To achieve this we pass three operations to the pipeline: first, unwind the tags array; then group by the tags and sum them up; finally, sort by count.

As Python dictionaries don’t maintain order, you should use SON or collections.OrderedDict where explicit ordering is required, e.g. “$sort”:

Note

aggregate requires server version >= 2.1.0.
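If you are unsure which version your server is running, one way to check it (a small sketch assuming the PyMongo 3.x setup used in these examples) is to ask the client for its server info:

  >>> server_version = db.client.server_info()["version"]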

  >>> from bson.son import SON
  >>> pipeline = [
  ...     {"$unwind": "$tags"},
  ...     {"$group": {"_id": "$tags", "count": {"$sum": 1}}},
  ...     {"$sort": SON([("count", -1), ("_id", -1)])}
  ... ]
  >>> import pprint
  >>> pprint.pprint(list(db.things.aggregate(pipeline)))
  [{u'_id': u'cat', u'count': 3},
   {u'_id': u'dog', u'count': 2},
   {u'_id': u'mouse', u'count': 1}]

To run an explain plan for this aggregation use the command() method:

  >>> db.command('aggregate', 'things', pipeline=pipeline, explain=True)
  {u'ok': 1.0, u'stages': [...]}

As well as simple aggregations, the aggregation framework provides projection capabilities to reshape the returned data. Using projections and aggregation, you can add computed fields, create new virtual sub-objects, and extract sub-fields into the top level of results.
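As a minimal sketch (the doubled field below is purely illustrative, not part of the example data), a $project stage can add a computed field while reshaping each document:

  >>> pipeline = [
  ...     {"$project": {"_id": 0, "x": 1, "doubled": {"$multiply": ["$x", 2]}}}
  ... ]
  >>> docs = list(db.things.aggregate(pipeline))

Each document in docs keeps its x value and gains a server-computed doubled field; the _id field is suppressed.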

See also

The full documentation for MongoDB’s aggregation framework

Map/Reduce

Another option for aggregation is to use the map/reduce framework. Here we will define map and reduce functions to count the number of occurrences of each tag in the tags array, across the entire collection, just as we did above.

Our map function just emits a single (key, 1) pair for each tag in the array:

  >>> from bson.code import Code
  >>> mapper = Code("""
  ...     function () {
  ...         this.tags.forEach(function(z) {
  ...             emit(z, 1);
  ...         });
  ...     }
  ...     """)

The reduce function sums over all of the emitted values for a given key:

  >>> reducer = Code("""
  ...     function (key, values) {
  ...         var total = 0;
  ...         for (var i = 0; i < values.length; i++) {
  ...             total += values[i];
  ...         }
  ...         return total;
  ...     }
  ...     """)

Note

We can’t just return values.length as the reduce function might be called iteratively on the results of other reduce steps.
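To see why, here is a small pure-Python sketch (illustrative only, not part of the map/reduce API): suppose the count for 'cat' arrives in two batches whose partial sums are then reduced again.

  >>> partial = [sum([1, 1]), sum([1])]  # partial results from two earlier reduce calls
  >>> sum(partial)                       # re-reducing the partial sums gives the right total
  3
  >>> len(partial)                       # the analogue of values.length here is wrong
  2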

Finally, we call map_reduce() and iterate over the result collection:

  >>> result = db.things.map_reduce(mapper, reducer, "myresults")
  >>> for doc in result.find():
  ...     pprint.pprint(doc)
  ...
  {u'_id': u'cat', u'value': 3.0}
  {u'_id': u'dog', u'value': 2.0}
  {u'_id': u'mouse', u'value': 1.0}

Advanced Map/Reduce

PyMongo’s API supports all of the features of MongoDB’s map/reduce engine. One interesting feature is the ability to get more detailed results when desired, by passing full_response=True to map_reduce(). This returns the full response to the map/reduce command, rather than just the result collection:

  >>> pprint.pprint(
  ...     db.things.map_reduce(mapper, reducer, "myresults", full_response=True))
  {...u'counts': {u'emit': 6, u'input': 4, u'output': 3, u'reduce': 2},
   u'ok': ...,
   u'result': u'...',
   u'timeMillis': ...}

All of the optional map/reduce parameters are also supported; simply pass them as keyword arguments. In this example we use the query parameter to limit the documents that will be mapped over:

  >>> results = db.things.map_reduce(
  ...     mapper, reducer, "myresults", query={"x": {"$lt": 2}})
  >>> for doc in results.find():
  ...     pprint.pprint(doc)
  ...
  {u'_id': u'cat', u'value': 1.0}
  {u'_id': u'dog', u'value': 1.0}

You can use SON or collections.OrderedDict to specify a different database in which to store the result collection:

  >>> from bson.son import SON
  >>> pprint.pprint(
  ...     db.things.map_reduce(
  ...         mapper,
  ...         reducer,
  ...         out=SON([("replace", "results"), ("db", "outdb")]),
  ...         full_response=True))
  {...u'counts': {u'emit': 6, u'input': 4, u'output': 3, u'reduce': 2},
   u'ok': ...,
   u'result': {u'collection': ..., u'db': ...},
   u'timeMillis': ...}

See also

The full list of options for MongoDB’s map/reduce engine