Bucket aggregations

Bucket aggregations categorize sets of documents as buckets. The type of bucket aggregation determines whether a given document falls into a bucket or not.

You can use bucket aggregations to implement faceted navigation (usually placed as a sidebar on a search result landing page) to help your users narrow down the results.

Terms

The terms aggregation dynamically creates a bucket for each unique term of a field.

The following example uses the terms aggregation to find the number of documents per response code in web log data:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "response_codes": {
      "terms": {
        "field": "response.keyword",
        "size": 10
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "response_codes" : {
    "doc_count_error_upper_bound" : 0,
    "sum_other_doc_count" : 0,
    "buckets" : [
      {
        "key" : "200",
        "doc_count" : 12832
      },
      {
        "key" : "404",
        "doc_count" : 801
      },
      {
        "key" : "503",
        "doc_count" : 441
      }
    ]
  }
}
}

Each unique term is returned in the key field. doc_count specifies the number of documents in each bucket. By default, the buckets are sorted in descending order of doc_count.

The response also includes two keys named doc_count_error_upper_bound and sum_other_doc_count.

The terms aggregation returns the top unique terms. So, if the data has many unique terms, then some of them might not appear in the results. The sum_other_doc_count field is the sum of the documents that are left out of the response. In this case, the number is 0 because all the unique values appear in the response.

The doc_count_error_upper_bound field represents the maximum possible count for a unique value that’s left out of the final results. Use this field to estimate the error margin for the count.

The count might not be accurate. The coordinating node that's responsible for the aggregation prompts each shard for its top unique terms. Imagine a scenario where the size parameter is 3. The terms aggregation requests the top 3 unique terms from each shard. The coordinating node takes each of the results and aggregates them to compute the final result. If a shard has a term that's not part of its top 3, it won't show up in the response.

This is especially true if size is set to a low number. Because the default size is 10, an error is unlikely to happen. If you don't need high accuracy and want to improve performance, you can reduce the size.
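
If you need more accurate counts without reducing size, you can instead raise the shard_size parameter of the terms aggregation so that each shard reports more candidate terms to the coordinating node. The following is a minimal sketch; the size and shard_size values are arbitrary illustrations, not recommendations:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "response_codes": {
      "terms": {
        "field": "response.keyword",
        "size": 3,
        "shard_size": 25
      }
    }
  }
}

A larger shard_size makes doc_count and doc_count_error_upper_bound more reliable at the cost of extra work on each shard.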

Account for pre-aggregated data

While the doc_count field reports the number of individual documents aggregated in a bucket, doc_count by itself cannot correctly account for documents that store pre-aggregated data. To account for pre-aggregated data and accurately calculate the number of documents in a bucket, you can use the _doc_count field to store the number of aggregated documents in a single summary field. When a document includes the _doc_count field, all bucket aggregations recognize its value and increase the bucket doc_count cumulatively. Keep these considerations in mind when using the _doc_count field:

  • The field does not support nested arrays; only positive integers can be used.
  • If a document does not contain the _doc_count field, aggregation uses the document to increase the count by 1.

OpenSearch features that rely on an accurate document count illustrate the importance of using the _doc_count field. To see how this field can be used to support other search tools, refer to Index rollups, a feature of the Index Management (IM) plugin that stores documents with pre-aggregated data in rollup indexes.

Example usage

PUT /my_index/_doc/1
{
  "response_code": 404,
  "date": "2022-08-05",
  "_doc_count": 20
}

PUT /my_index/_doc/2
{
  "response_code": 404,
  "date": "2022-08-06",
  "_doc_count": 10
}

PUT /my_index/_doc/3
{
  "response_code": 200,
  "date": "2022-08-06",
  "_doc_count": 300
}

GET /my_index/_search
{
  "size": 0,
  "aggs": {
    "response_codes": {
      "terms": {
        "field": "response_code"
      }
    }
  }
}

Sample response

{
  "took" : 20,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 3,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  },
  "aggregations" : {
    "response_codes" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : 200,
          "doc_count" : 300
        },
        {
          "key" : 404,
          "doc_count" : 30
        }
      ]
    }
  }
}

Multi-terms

Similar to the terms bucket aggregation, you can also search for multiple terms using the multi_terms aggregation. Multi-terms aggregations are useful when you need to sort by document count, or when you need to sort by a metric aggregation on a composite key and get the top n results. For example, you could search for a specific number of documents (e.g., 1,000) and the number of servers per location that show CPU usage greater than 90%. The top n results would be returned for this multi-terms query.

The multi_terms aggregation consumes more memory than a terms aggregation, so its performance might be slower.

Multi-terms aggregation parameters

  • multi_terms - Indicates a multi-terms aggregation that gathers buckets of documents together based on criteria specified by multiple terms.
  • size - Specifies the number of buckets to return. Default is 10.
  • order - Indicates the order in which to sort the buckets. By default, buckets are ordered according to document count per bucket. If the buckets contain the same document count, order can be explicitly set to the term value instead of document count (e.g., set order to "max-cpu").
  • doc_count - Specifies the number of documents to be returned in each bucket. By default, the top 10 terms are returned.

Sample request

GET sample-index100/_search
{
  "size": 0,
  "aggs": {
    "hot": {
      "multi_terms": {
        "terms": [
          {
            "field": "region"
          },
          {
            "field": "host"
          }
        ],
        "order": { "max-cpu": "desc" }
      },
      "aggs": {
        "max-cpu": { "max": { "field": "cpu" } }
      }
    }
  }
}

Sample response

{
  "took": 118,
  "timed_out": false,
  "_shards": {
    "total": 1,
    "successful": 1,
    "skipped": 0,
    "failed": 0
  },
  "hits": {
    "total": {
      "value": 8,
      "relation": "eq"
    },
    "max_score": null,
    "hits": []
  },
  "aggregations": {
    "hot": {
      "doc_count_error_upper_bound": 0,
      "sum_other_doc_count": 0,
      "buckets": [
        {
          "key": [
            "dub",
            "h1"
          ],
          "key_as_string": "dub|h1",
          "doc_count": 2,
          "max-cpu": {
            "value": 90.0
          }
        },
        {
          "key": [
            "dub",
            "h2"
          ],
          "key_as_string": "dub|h2",
          "doc_count": 2,
          "max-cpu": {
            "value": 70.0
          }
        },
        {
          "key": [
            "iad",
            "h2"
          ],
          "key_as_string": "iad|h2",
          "doc_count": 2,
          "max-cpu": {
            "value": 50.0
          }
        },
        {
          "key": [
            "iad",
            "h1"
          ],
          "key_as_string": "iad|h1",
          "doc_count": 2,
          "max-cpu": {
            "value": 15.0
          }
        }
      ]
    }
  }
}

sampler, diversified_sampler

If you’re aggregating over millions of documents, you can use a sampler aggregation to reduce its scope to a small sample of documents for a faster response. The sampler aggregation selects the samples by top-scoring documents.

The results are approximate but closely represent the distribution of the real data. The sampler aggregation significantly improves query performance, but the estimated responses are not entirely reliable.

The basic syntax is:

"aggs": {
  "SAMPLE": {
    "sampler": {
      "shard_size": 100
    },
    "aggs": {...}
  }
}

The shard_size property tells OpenSearch how many documents (at most) to collect from each shard.

The following example limits the number of documents collected on each shard to 1,000 and then buckets the documents by a terms aggregation:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sample": {
      "sampler": {
        "shard_size": 1000
      },
      "aggs": {
        "terms": {
          "terms": {
            "field": "agent.keyword"
          }
        }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "sample" : {
    "doc_count" : 1000,
    "terms" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
          "doc_count" : 368
        },
        {
          "key" : "Mozilla/5.0 (X11; Linux i686) AppleWebKit/534.24 (KHTML, like Gecko) Chrome/11.0.696.50 Safari/534.24",
          "doc_count" : 329
        },
        {
          "key" : "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)",
          "doc_count" : 303
        }
      ]
    }
  }
}
}

The diversified_sampler aggregation lets you reduce bias in the distribution of the sample pool. You can use the field setting to control the maximum number of documents collected on any one shard that share a common value:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sample": {
      "diversified_sampler": {
        "shard_size": 1000,
        "field": "response.keyword"
      },
      "aggs": {
        "terms": {
          "terms": {
            "field": "agent.keyword"
          }
        }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "sample" : {
    "doc_count" : 3,
    "terms" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
          "doc_count" : 2
        },
        {
          "key" : "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)",
          "doc_count" : 1
        }
      ]
    }
  }
}
}

significant_terms, significant_text

The significant_terms aggregation lets you spot unusual or interesting term occurrences in a filtered subset relative to the rest of the data in an index.

A foreground set is the set of documents that you filter. A background set is a set of all documents in an index. The significant_terms aggregation examines all documents in the foreground set and finds a score for significant occurrences in contrast to the documents in the background set.

In the sample web log data, each document has a field containing the user-agent of the visitor. This example searches for all requests from an iOS operating system. A regular terms aggregation on this foreground set returns Firefox because it has the greatest number of documents within this bucket. On the other hand, a significant_terms aggregation returns Internet Explorer (IE) because IE appears significantly more often in the foreground set than in the background set.

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "query": {
    "terms": {
      "machine.os.keyword": [
        "ios"
      ]
    }
  },
  "aggs": {
    "significant_response_codes": {
      "significant_terms": {
        "field": "agent.keyword"
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "significant_response_codes" : {
    "doc_count" : 2737,
    "bg_count" : 14074,
    "buckets" : [
      {
        "key" : "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)",
        "doc_count" : 818,
        "score" : 0.01462731514608217,
        "bg_count" : 4010
      },
      {
        "key" : "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1",
        "doc_count" : 1067,
        "score" : 0.009062566630410223,
        "bg_count" : 5362
      }
    ]
  }
}
}

If the significant_terms aggregation doesn't return any results, you might not have filtered the results with a query. Alternatively, the distribution of terms in the foreground set might be the same as in the background set, implying that there isn't anything unusual in the foreground set.

The significant_text aggregation is similar to the significant_terms aggregation, but it's for raw text fields. Significant text measures the change in popularity between the foreground and background sets using statistical analysis. For example, it might suggest Tesla when you look for its stock acronym TSLA.

The significant_text aggregation re-analyzes the source text on the fly, filtering noisy data like duplicate paragraphs, boilerplate headers and footers, and so on, which might otherwise skew the results.

Re-analyzing high-cardinality datasets can be a very CPU-intensive operation. We recommend using the significant_text aggregation inside a sampler aggregation to limit the analysis to a small selection of top-matching documents, for example 200.

You can set the following parameters (a combined sketch follows this list):

  • min_doc_count - Return terms only if they occur in more than the configured number of top hits. We recommend not setting min_doc_count to 1 because it tends to return terms that are typos or misspellings. Finding more than one instance of a term helps reinforce that the significance is not the result of a one-off accident. The default value of 3 is used to provide a minimum weight of evidence.
  • shard_size - Setting a high value increases stability (and accuracy) at the expense of computational performance.
  • shard_min_doc_count - If your text contains many low-frequency words and you're not interested in these (for example, typos), you can set the shard_min_doc_count parameter to filter out candidate terms at the shard level that, with reasonable certainty, will not reach the required min_doc_count even after merging the local significant text frequencies. The default value is 1, which has no impact until you explicitly set it. We recommend setting this value much lower than the min_doc_count value.
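
As a combined sketch, the following request sets all three parameters at once. The articles index and body field are hypothetical placeholders, and the parameter values are illustrations, not recommendations:

GET articles/_search
{
  "query": {
    "match": { "body": "fraud" }
  },
  "aggregations": {
    "keywords": {
      "significant_text": {
        "field": "body",
        "min_doc_count": 4,
        "shard_min_doc_count": 2,
        "shard_size": 200
      }
    }
  }
}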

Assume that you have the complete works of Shakespeare indexed in an OpenSearch cluster. You can find significant texts in relation to the word “breathe” in the text_entry field:

GET shakespeare/_search
{
  "query": {
    "match": {
      "text_entry": "breathe"
    }
  },
  "aggregations": {
    "my_sample": {
      "sampler": {
        "shard_size": 100
      },
      "aggregations": {
        "keywords": {
          "significant_text": {
            "field": "text_entry",
            "min_doc_count": 4
          }
        }
      }
    }
  }
}

Sample response

  1. "aggregations" : {
  2. "my_sample" : {
  3. "doc_count" : 59,
  4. "keywords" : {
  5. "doc_count" : 59,
  6. "bg_count" : 111396,
  7. "buckets" : [
  8. {
  9. "key" : "breathe",
  10. "doc_count" : 59,
  11. "score" : 1887.0677966101694,
  12. "bg_count" : 59
  13. },
  14. {
  15. "key" : "air",
  16. "doc_count" : 4,
  17. "score" : 2.641295376716233,
  18. "bg_count" : 189
  19. },
  20. {
  21. "key" : "dead",
  22. "doc_count" : 4,
  23. "score" : 0.9665839666414213,
  24. "bg_count" : 495
  25. },
  26. {
  27. "key" : "life",
  28. "doc_count" : 5,
  29. "score" : 0.9090787433467572,
  30. "bg_count" : 805
  31. }
  32. ]
  33. }
  34. }
  35. }
  36. }

The most significant texts in relation to breathe are air, dead, and life.

The significant_text aggregation has the following limitations:

  • Doesn't support child aggregations because child aggregations come at a high memory cost. As a workaround, you can add a follow-up query using a terms aggregation with an include clause and a child aggregation (see the sketch after this list).
  • Doesn't support nested objects because it works with the document JSON source.
  • The counts of documents might have some (typically small) inaccuracies because they're based on summing the samples returned from each shard. You can use the shard_size parameter to fine-tune the trade-off between accuracy and performance. By default, shard_size is set to -1 to automatically estimate the number of shards and the size parameter.
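
As a sketch of the workaround mentioned in the first limitation, you can feed the keys returned by a significant_terms or significant_text run into a follow-up terms aggregation's include clause and attach the child aggregation there. This example reuses the agent.keyword keys from the earlier significant_terms sample response; the avg sub-aggregation on bytes is an illustrative assumption, not part of the original example:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "significant_agents": {
      "terms": {
        "field": "agent.keyword",
        "include": [
          "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1; SV1; .NET CLR 1.1.4322)",
          "Mozilla/5.0 (X11; Linux x86_64; rv:6.0a1) Gecko/20110421 Firefox/6.0a1"
        ]
      },
      "aggs": {
        "avg_bytes": {
          "avg": { "field": "bytes" }
        }
      }
    }
  }
}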

For both significant_terms and significant_text aggregations, the default source of statistical information for background term frequencies is the entire index. You can narrow this scope with a background filter for more focus:

GET shakespeare/_search
{
  "query": {
    "match": {
      "text_entry": "breathe"
    }
  },
  "aggregations": {
    "my_sample": {
      "sampler": {
        "shard_size": 100
      },
      "aggregations": {
        "keywords": {
          "significant_text": {
            "field": "text_entry",
            "background_filter": {
              "term": {
                "speaker": "JOHN OF GAUNT"
              }
            }
          }
        }
      }
    }
  }
}

missing

If you have documents in your index that don’t contain the aggregating field at all or the aggregating field has a value of NULL, use the missing parameter to specify the name of the bucket such documents should be placed in.

The following example adds any missing values to a bucket named “N/A”:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "response_codes": {
      "terms": {
        "field": "response.keyword",
        "size": 10,
        "missing": "N/A"
      }
    }
  }
}

Because the default value of the min_doc_count parameter is 1 and the "N/A" bucket contains no documents in this dataset, the response doesn't include the "N/A" bucket. Set the min_doc_count parameter to 0 to see the "N/A" bucket in the response:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "response_codes": {
      "terms": {
        "field": "response.keyword",
        "size": 10,
        "missing": "N/A",
        "min_doc_count": 0
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "response_codes" : {
    "doc_count_error_upper_bound" : 0,
    "sum_other_doc_count" : 0,
    "buckets" : [
      {
        "key" : "200",
        "doc_count" : 12832
      },
      {
        "key" : "404",
        "doc_count" : 801
      },
      {
        "key" : "503",
        "doc_count" : 441
      },
      {
        "key" : "N/A",
        "doc_count" : 0
      }
    ]
  }
}
}

histogram, date_histogram

The histogram aggregation buckets documents based on a specified interval.

With histogram aggregations, you can easily visualize the distribution of values in a given range of documents. OpenSearch doesn't return an actual graph, of course; that's what OpenSearch Dashboards is for. But it returns a JSON response that you can use to construct your own graph.

The following example buckets the bytes field in 10,000-byte intervals:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "number_of_bytes": {
      "histogram": {
        "field": "bytes",
        "interval": 10000
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "number_of_bytes" : {
    "buckets" : [
      {
        "key" : 0.0,
        "doc_count" : 13372
      },
      {
        "key" : 10000.0,
        "doc_count" : 702
      }
    ]
  }
}
}

The date_histogram aggregation uses date math to generate histograms for time-series data.

For example, you can find how many hits your website gets per month:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "logs_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "interval": "month"
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "logs_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595
      }
    ]
  }
}
}

The response contains three months' worth of logs. If you graph these values, you can see the peaks and valleys of request traffic to your website month over month.

range, date_range, ip_range

The range aggregation lets you define the range for each bucket.

For example, you can find the number of bytes between 1000 and 2000, 2000 and 3000, and 3000 and 4000. Within the range parameter, you define each range as an object in the ranges array.

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "number_of_bytes_distribution": {
      "range": {
        "field": "bytes",
        "ranges": [
          {
            "from": 1000,
            "to": 2000
          },
          {
            "from": 2000,
            "to": 3000
          },
          {
            "from": 3000,
            "to": 4000
          }
        ]
      }
    }
  }
}

The response includes the from key values and excludes the to key values:

Sample response

...
"aggregations" : {
  "number_of_bytes_distribution" : {
    "buckets" : [
      {
        "key" : "1000.0-2000.0",
        "from" : 1000.0,
        "to" : 2000.0,
        "doc_count" : 805
      },
      {
        "key" : "2000.0-3000.0",
        "from" : 2000.0,
        "to" : 3000.0,
        "doc_count" : 1369
      },
      {
        "key" : "3000.0-4000.0",
        "from" : 3000.0,
        "to" : 4000.0,
        "doc_count" : 1422
      }
    ]
  }
}
}

The date_range aggregation is conceptually the same as the range aggregation, except that it lets you perform date math. For example, you can get all documents from the last 10 days. To make the dates more readable, include a format parameter:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "number_of_bytes": {
      "date_range": {
        "field": "@timestamp",
        "format": "MM-yyyy",
        "ranges": [
          {
            "from": "now-10d/d",
            "to": "now"
          }
        ]
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "number_of_bytes" : {
    "buckets" : [
      {
        "key" : "03-2021-03-2021",
        "from" : 1.6145568E12,
        "from_as_string" : "03-2021",
        "to" : 1.615451329043E12,
        "to_as_string" : "03-2021",
        "doc_count" : 0
      }
    ]
  }
}
}

The ip_range aggregation is for IP addresses. It works on ip type fields. You can define the IP ranges and masks in the CIDR notation.

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "access": {
      "ip_range": {
        "field": "ip",
        "ranges": [
          {
            "from": "1.0.0.0",
            "to": "126.158.155.183"
          },
          {
            "mask": "1.0.0.0/8"
          }
        ]
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "access" : {
    "buckets" : [
      {
        "key" : "1.0.0.0/8",
        "from" : "1.0.0.0",
        "to" : "2.0.0.0",
        "doc_count" : 98
      },
      {
        "key" : "1.0.0.0-126.158.155.183",
        "from" : "1.0.0.0",
        "to" : "126.158.155.183",
        "doc_count" : 7184
      }
    ]
  }
}
}

If you add a document with a malformed field to an index that contains an ip_range field in its mappings, OpenSearch rejects the entire document by default. You can set ignore_malformed to true to specify that OpenSearch should ignore malformed fields instead. The default is false.

...
"mappings": {
  "properties": {
    "ips": {
      "type": "ip_range",
      "ignore_malformed": true
    }
  }
}

filter, filters

A filter aggregation is a query clause, exactly like a search query, such as a match, term, or range query. You can use the filter aggregation to narrow down the entire set of documents to a specific set before creating buckets.

The following example shows the avg aggregation running within the context of a filter. The avg aggregation only aggregates the documents that match the range query:

GET opensearch_dashboards_sample_data_ecommerce/_search
{
  "size": 0,
  "aggs": {
    "low_value": {
      "filter": {
        "range": {
          "taxful_total_price": {
            "lte": 50
          }
        }
      },
      "aggs": {
        "avg_amount": {
          "avg": {
            "field": "taxful_total_price"
          }
        }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "low_value" : {
    "doc_count" : 1633,
    "avg_amount" : {
      "value" : 38.363175998928355
    }
  }
}
}

A filters aggregation is the same as the filter aggregation, except that it lets you use multiple filter aggregations. While the filter aggregation results in a single bucket, the filters aggregation returns multiple buckets, one for each of the defined filters.

To create a bucket for all the documents that didn't match any of the filter queries, set the other_bucket property to true:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "200_os": {
      "filters": {
        "other_bucket": true,
        "filters": [
          {
            "term": {
              "response.keyword": "200"
            }
          },
          {
            "term": {
              "machine.os.keyword": "osx"
            }
          }
        ]
      },
      "aggs": {
        "avg_amount": {
          "avg": {
            "field": "bytes"
          }
        }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "200_os" : {
    "buckets" : [
      {
        "doc_count" : 12832,
        "avg_amount" : {
          "value" : 5897.852711970075
        }
      },
      {
        "doc_count" : 2825,
        "avg_amount" : {
          "value" : 5620.347256637168
        }
      },
      {
        "doc_count" : 1017,
        "avg_amount" : {
          "value" : 3247.0963618485744
        }
      }
    ]
  }
}
}

global

The global aggregation lets you break out of the aggregation context of a filter aggregation. Even if you have included a filter query that narrows down a set of documents, the global aggregation aggregates on all documents as if the filter query wasn't there. It ignores the filter aggregation and implicitly assumes the match_all query.

The following example returns the avg value of the taxful_total_price field from all documents in the index:

GET opensearch_dashboards_sample_data_ecommerce/_search
{
  "size": 0,
  "query": {
    "range": {
      "taxful_total_price": {
        "lte": 50
      }
    }
  },
  "aggs": {
    "total_avg_amount": {
      "global": {},
      "aggs": {
        "avg_price": {
          "avg": {
            "field": "taxful_total_price"
          }
        }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "total_avg_amount" : {
    "doc_count" : 4675,
    "avg_price" : {
      "value" : 75.05542864304813
    }
  }
}
}

You can see that the average value for the taxful_total_price field is 75.05, not the 38.36 seen in the filter example, where the range query narrowed down the documents.

geo_distance, geohash_grid

The geo_distance aggregation groups documents into concentric circles based on distances from an origin geo_point field. It’s the same as the range aggregation, except that it works on geo locations.

For example, you can use the geo_distance aggregation to find all pizza places within 1 km of you. The search results are limited to the 1 km radius you specify, but you can also add another bucket for results found within 2 km.

You can only use the geo_distance aggregation on fields mapped as geo_point.

A point is a single geographical coordinate, such as your current location shown by your smartphone. A point in OpenSearch is represented as follows:

{
  "location": {
    "type": "point",
    "coordinates": {
      "lat": 83.76,
      "lon": -81.2
    }
  }
}

You can also specify the latitude and longitude as an array [-81.20, 83.76] or as a string "83.76, -81.20".
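
For reference, here is a minimal sketch of a geo_point mapping along with the three equivalent ways to index the same point; my-geo-index and location are placeholder names:

PUT my-geo-index
{
  "mappings": {
    "properties": {
      "location": { "type": "geo_point" }
    }
  }
}

PUT my-geo-index/_doc/1
{
  "location": { "lat": 83.76, "lon": -81.2 }
}

PUT my-geo-index/_doc/2
{
  "location": [-81.2, 83.76]
}

PUT my-geo-index/_doc/3
{
  "location": "83.76, -81.2"
}

Note that the array form lists longitude first, while the string form lists latitude first.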

This table lists the relevant fields of a geo_distance aggregation:

  • field - Specifies the geo point field on which to compute the distances. Required.
  • origin - Specifies the geo point from which the distances are computed. Required.
  • ranges - Specifies a list of ranges in which to collect documents based on their distance from the target point. Required.
  • unit - Defines the units used in the ranges array. The unit defaults to m (meters), but you can switch to other units like km (kilometers), mi (miles), in (inches), yd (yards), cm (centimeters), and mm (millimeters). Optional.
  • distance_type - Specifies how OpenSearch calculates the distance. The default is sloppy_arc (faster but less accurate), but it can also be set to arc (slower but most accurate) or plane (fastest but least accurate). Because of high error margins, use plane only for small geographic areas. Optional.

The syntax is as follows:

{
  "aggs": {
    "aggregation_name": {
      "geo_distance": {
        "field": "field_1",
        "origin": "x, y",
        "ranges": [
          {
            "to": "value_1"
          },
          {
            "from": "value_2",
            "to": "value_3"
          },
          {
            "from": "value_4"
          }
        ]
      }
    }
  }
}

This example forms buckets from the following distances from a geo-point field:

  • Less than 10 km
  • From 10 to 20 km
  • From 20 to 50 km
  • From 50 to 100 km
  • Above 100 km

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "position": {
      "geo_distance": {
        "field": "geo.coordinates",
        "origin": {
          "lat": 83.76,
          "lon": -81.2
        },
        "ranges": [
          {
            "to": 10
          },
          {
            "from": 10,
            "to": 20
          },
          {
            "from": 20,
            "to": 50
          },
          {
            "from": 50,
            "to": 100
          },
          {
            "from": 100
          }
        ]
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "position" : {
    "buckets" : [
      {
        "key" : "*-10.0",
        "from" : 0.0,
        "to" : 10.0,
        "doc_count" : 0
      },
      {
        "key" : "10.0-20.0",
        "from" : 10.0,
        "to" : 20.0,
        "doc_count" : 0
      },
      {
        "key" : "20.0-50.0",
        "from" : 20.0,
        "to" : 50.0,
        "doc_count" : 0
      },
      {
        "key" : "50.0-100.0",
        "from" : 50.0,
        "to" : 100.0,
        "doc_count" : 0
      },
      {
        "key" : "100.0-*",
        "from" : 100.0,
        "doc_count" : 14074
      }
    ]
  }
}
}

The geohash_grid aggregation buckets documents for geographical analysis. It organizes a geographical region into a grid of smaller regions of different sizes or precisions. Lower values of precision represent larger geographical areas and higher values represent smaller, more precise geographical areas.

The number of results returned by a query might be far too many to display each geo point individually on a map. The geohash_grid aggregation buckets nearby geo points together by calculating the Geohash for each point, at the level of precision that you define (between 1 and 12; the default is 5). To learn more about Geohash, see Wikipedia.

The web logs example data is spread over a large geographical area, so you can use a lower precision value. You can zoom in on this map by increasing the precision value:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "geo_hash": {
      "geohash_grid": {
        "field": "geo.coordinates",
        "precision": 4
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "geo_hash" : {
    "buckets" : [
      {
        "key" : "c1cg",
        "doc_count" : 104
      },
      {
        "key" : "dr5r",
        "doc_count" : 26
      },
      {
        "key" : "9q5b",
        "doc_count" : 20
      },
      {
        "key" : "c20g",
        "doc_count" : 19
      },
      {
        "key" : "dr70",
        "doc_count" : 18
      }
      ...
    ]
  }
}
}

You can visualize the aggregated response on a map using OpenSearch Dashboards.

The more accurate you want the aggregation to be, the more resources OpenSearch consumes, because of the number of buckets that the aggregation has to calculate. By default, OpenSearch does not generate more than 10,000 buckets. You can change this behavior by using the size attribute, but keep in mind that the performance might suffer for very wide queries consisting of thousands of buckets.
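
For example, to cap the response at the three largest grid cells, you might set size on the geohash_grid aggregation. This is a sketch; the value 3 is an arbitrary illustration:

GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "geo_hash": {
      "geohash_grid": {
        "field": "geo.coordinates",
        "precision": 4,
        "size": 3
      }
    }
  }
}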

adjacency_matrix

The adjacency_matrix aggregation lets you define filter expressions and returns a matrix of the intersecting filters where each non-empty cell in the matrix represents a bucket. You can find how many documents fall within any combination of filters.

Use the adjacency_matrix aggregation to discover how concepts are related by visualizing the data as graphs.

For example, in the sample eCommerce dataset, to analyze how the different manufacturing companies are related:

GET opensearch_dashboards_sample_data_ecommerce/_search
{
  "size": 0,
  "aggs": {
    "interactions": {
      "adjacency_matrix": {
        "filters": {
          "grpA": {
            "match": {
              "manufacturer.keyword": "Low Tide Media"
            }
          },
          "grpB": {
            "match": {
              "manufacturer.keyword": "Elitelligence"
            }
          },
          "grpC": {
            "match": {
              "manufacturer.keyword": "Oceanavigations"
            }
          }
        }
      }
    }
  }
}

Sample response

{
  ...
  "aggregations" : {
    "interactions" : {
      "buckets" : [
        {
          "key" : "grpA",
          "doc_count" : 1553
        },
        {
          "key" : "grpA&grpB",
          "doc_count" : 590
        },
        {
          "key" : "grpA&grpC",
          "doc_count" : 329
        },
        {
          "key" : "grpB",
          "doc_count" : 1370
        },
        {
          "key" : "grpB&grpC",
          "doc_count" : 299
        },
        {
          "key" : "grpC",
          "doc_count" : 1218
        }
      ]
    }
  }
}

Let’s take a closer look at the result:

{
  "key" : "grpA&grpB",
  "doc_count" : 590
}

  • grpA: Products manufactured by Low Tide Media.
  • grpB: Products manufactured by Elitelligence.
  • 590: Number of products that are manufactured by both.

You can use OpenSearch Dashboards to represent this data with a network graph.

nested, reverse_nested

The nested aggregation lets you aggregate on fields inside a nested object. The nested type is a specialized version of the object data type that allows arrays of objects to be indexed in a way that lets them be queried independently of each other.

With the object type, all the data is stored in the same document, so matches for a search can go across sub documents. For example, imagine a logs index with pages mapped as an object datatype:

PUT logs/_doc/0
{
  "response": "200",
  "pages": [
    {
      "page": "landing",
      "load_time": 200
    },
    {
      "page": "blog",
      "load_time": 500
    }
  ]
}

OpenSearch merges all sub-properties of the entity relations, so internally the document looks something like this:

{
  "logs": {
    "pages": ["landing", "blog"],
    "load_time": ["200", "500"]
  }
}

So, if you wanted to search this index for pages=landing and load_time=500, this document would match the criteria even though the load_time value for landing is 200.

If you want to make sure such cross-object matches don’t happen, map the field as a nested type:

PUT logs
{
  "mappings": {
    "properties": {
      "pages": {
        "type": "nested",
        "properties": {
          "page": { "type": "text" },
          "load_time": { "type": "double" }
        }
      }
    }
  }
}

Nested documents let you index the same JSON document while keeping your pages in separate Lucene documents, so that only searches like pages=landing and load_time=200 return the expected result. Internally, nested objects index each object in the array as a separate hidden document, meaning that each nested object can be queried independently of the others.

You have to specify a nested path relative to the parent that contains the nested documents:

GET logs/_search
{
  "query": {
    "match": { "response": "200" }
  },
  "aggs": {
    "pages": {
      "nested": {
        "path": "pages"
      },
      "aggs": {
        "min_load_time": { "min": { "field": "pages.load_time" } }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "pages" : {
    "doc_count" : 2,
    "min_load_time" : {
      "value" : 200.0
    }
  }
}
}

You can also aggregate values from nested documents back to their parent; this aggregation is called reverse_nested. You can use reverse_nested to aggregate a field from the parent document after grouping by the field from the nested object. The reverse_nested aggregation "joins back" to the root page and gets the load_time for each of your variations.

The reverse_nested aggregation is a sub-aggregation inside a nested aggregation. It accepts a single option named path. This option defines how many steps backwards in the document hierarchy OpenSearch takes to calculate the aggregations.
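
As an illustration, suppose the mapping had a second nested level, such as a hypothetical pages.comments field (not part of the example above). A reverse_nested aggregation with "path": "pages" would then step back only to the pages level rather than all the way to the root document:

GET logs/_search
{
  "size": 0,
  "aggs": {
    "comments": {
      "nested": { "path": "pages.comments" },
      "aggs": {
        "back_to_pages": {
          "reverse_nested": { "path": "pages" },
          "aggs": {
            "avg_page_load_time": {
              "avg": { "field": "pages.load_time" }
            }
          }
        }
      }
    }
  }
}

In the example below, reverse_nested is used with no path, so it joins all the way back to the root document: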

GET logs/_search
{
  "query": {
    "match": { "response": "200" }
  },
  "aggs": {
    "pages": {
      "nested": {
        "path": "pages"
      },
      "aggs": {
        "top_pages_per_load_time": {
          "terms": {
            "field": "pages.load_time"
          },
          "aggs": {
            "comment_to_logs": {
              "reverse_nested": {},
              "aggs": {
                "min_load_time": {
                  "min": {
                    "field": "pages.load_time"
                  }
                }
              }
            }
          }
        }
      }
    }
  }
}

Sample response

...
"aggregations" : {
  "pages" : {
    "doc_count" : 2,
    "top_pages_per_load_time" : {
      "doc_count_error_upper_bound" : 0,
      "sum_other_doc_count" : 0,
      "buckets" : [
        {
          "key" : 200.0,
          "doc_count" : 1,
          "comment_to_logs" : {
            "doc_count" : 1,
            "min_load_time" : {
              "value" : null
            }
          }
        },
        {
          "key" : 500.0,
          "doc_count" : 1,
          "comment_to_logs" : {
            "doc_count" : 1,
            "min_load_time" : {
              "value" : null
            }
          }
        }
      ]
    }
  }
}
}

The response shows the logs index has one page with a load_time of 200 and one with a load_time of 500.