Pipeline aggregations

With pipeline aggregations, you can chain aggregations by piping the results of one aggregation as an input to another for a more nuanced output.

You can use pipeline aggregations to compute complex statistical and mathematical measures like derivatives, moving averages, cumulative sums, and so on.

Pipeline aggregation syntax

A pipeline aggregation uses the buckets_path property to access the results of other aggregations. The buckets_path property has a specific syntax:

```
buckets_path = <AGG_NAME>[<AGG_SEPARATOR>,<AGG_NAME>]*[<METRIC_SEPARATOR>,<METRIC>];
```

where:

  • AGG_NAME is the name of the aggregation.
  • AGG_SEPARATOR separates aggregations. It’s represented as >.
  • METRIC_SEPARATOR separates an aggregation from its metrics. It’s represented as . (a period).
  • METRIC is the name of the metric; it's required for multi-value metric aggregations.

For example, my_sum.sum selects the sum metric of an aggregation called my_sum. popular_tags>my_sum.sum selects the sum metric of the my_sum aggregation nested inside the popular_tags aggregation.

You can also specify the following additional parameters:

  • gap_policy: Real-world data can contain gaps or null values. The gap_policy property specifies how to handle such missing data. Set it to skip to ignore missing values and continue from the next available value, or to insert_zeros to replace missing values with zero and continue running.
  • format: The type of format for the output value. For example, yyyy-MM-dd for a date value.
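As an illustrative sketch (the aggregation and path names here are hypothetical, not from the examples below), both parameters can be set alongside buckets_path on a pipeline aggregation:

```json
"max_monthly_bytes": {
  "max_bucket": {
    "buckets_path": "visits_per_month>monthly_sum",
    "gap_policy": "insert_zeros",
    "format": "#,##0.00"
  }
}
```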

Quick example

To sum the values of the sum_total_memory sub-aggregation across all buckets returned by the number_of_bytes histogram:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "number_of_bytes": {
      "histogram": {
        "field": "bytes",
        "interval": 10000
      },
      "aggs": {
        "sum_total_memory": {
          "sum": {
            "field": "phpmemory"
          }
        }
      }
    },
    "sum_copies": {
      "sum_bucket": {
        "buckets_path": "number_of_bytes>sum_total_memory"
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "number_of_bytes" : {
    "buckets" : [
      {
        "key" : 0.0,
        "doc_count" : 13372,
        "sum_total_memory" : {
          "value" : 9.12664E7
        }
      },
      {
        "key" : 10000.0,
        "doc_count" : 702,
        "sum_total_memory" : {
          "value" : 0.0
        }
      }
    ]
  },
  "sum_copies" : {
    "value" : 9.12664E7
  }
}
}
```

Types of pipeline aggregations

Pipeline aggregations are of two types:

Sibling aggregations

Sibling aggregations take the output of a nested aggregation and produce new buckets or new aggregations at the same level as the nested buckets.

The input to a sibling aggregation must be a multi-bucket aggregation (one that has multiple grouped values for a certain field), and the specified metric must be a numeric value.

min_bucket, max_bucket, sum_bucket, and avg_bucket are common sibling aggregations.

Parent aggregations

Parent aggregations take the output of an outer aggregation and produce new buckets or new aggregations at the same level as the existing buckets.

Parent aggregations must have min_doc_count set to 0 (default for histogram aggregations) and the specified metric must be a numeric value. If min_doc_count is greater than 0, some buckets are omitted, which might lead to incorrect results.

derivative and cumulative_sum are common parent aggregations.

avg_bucket, sum_bucket, min_bucket, max_bucket

The avg_bucket, sum_bucket, min_bucket, and max_bucket aggregations are sibling aggregations that calculate the average, sum, minimum, and maximum values of a metric in each bucket of a previous aggregation.

The following example creates a date histogram with a one-month interval. The sum sub-aggregation calculates the sum of all bytes for each month. Finally, the avg_bucket aggregation uses this sum to calculate the average number of bytes per month:

```json
POST opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "visits_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        }
      }
    },
    "avg_monthly_bytes": {
      "avg_bucket": {
        "buckets_path": "visits_per_month>sum_of_bytes"
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "visits_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "sum_of_bytes" : {
          "value" : 3.8880434E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "sum_of_bytes" : {
          "value" : 3.1445055E7
        }
      }
    ]
  },
  "avg_monthly_bytes" : {
    "value" : 2.6575229666666668E7
  }
}
}
```

In a similar fashion, you can calculate the sum_bucket, min_bucket, and max_bucket values for the bytes per month.

stats_bucket, extended_stats_bucket

The stats_bucket aggregation is a sibling aggregation that returns a variety of stats (count, min, max, avg, and sum) for the buckets of a previous aggregation.

The following example returns the basic stats for the buckets returned by the sum_of_bytes aggregation nested into the visits_per_month aggregation:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "visits_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        }
      }
    },
    "stats_monthly_bytes": {
      "stats_bucket": {
        "buckets_path": "visits_per_month>sum_of_bytes"
      }
    }
  }
}
```

Sample response

```json
...
"stats_monthly_bytes" : {
  "count" : 3,
  "min" : 9400200.0,
  "max" : 3.8880434E7,
  "avg" : 2.6575229666666668E7,
  "sum" : 7.9725689E7
}
}
}
```

The extended_stats_bucket aggregation is an extended version of the stats_bucket aggregation. Apart from the basic stats, extended_stats_bucket also provides stats such as sum_of_squares, variance, and std_deviation.
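The request has the same shape as the stats_bucket request; only the aggregation type changes. The following sketch (using the aggregation name stats_monthly_visits, to match the sample response below) shows one way to write it:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "visits_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        }
      }
    },
    "stats_monthly_visits": {
      "extended_stats_bucket": {
        "buckets_path": "visits_per_month>sum_of_bytes"
      }
    }
  }
}
```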

Sample response

```json
"stats_monthly_visits" : {
  "count" : 3,
  "min" : 9400200.0,
  "max" : 3.8880434E7,
  "avg" : 2.6575229666666668E7,
  "sum" : 7.9725689E7,
  "sum_of_squares" : 2.588843392021381E15,
  "variance" : 1.5670496550438025E14,
  "variance_population" : 1.5670496550438025E14,
  "variance_sampling" : 2.3505744825657038E14,
  "std_deviation" : 1.251818539183616E7,
  "std_deviation_population" : 1.251818539183616E7,
  "std_deviation_sampling" : 1.5331583357780447E7,
  "std_deviation_bounds" : {
    "upper" : 5.161160045033899E7,
    "lower" : 1538858.8829943463,
    "upper_population" : 5.161160045033899E7,
    "lower_population" : 1538858.8829943463,
    "upper_sampling" : 5.723839638222756E7,
    "lower_sampling" : -4087937.0488942266
  }
}
}
}
```

bucket_script, bucket_selector

The bucket_script aggregation is a parent aggregation that executes a script to perform per-bucket calculations on the results of a previous aggregation. The metrics must be numeric, and the script must return a numeric value.

Use the script parameter to add your script. The script can be inline, in a file, or in an index. To enable inline scripting, add the following line to your opensearch.yml file in the config folder:

```yml
script.inline: on
```

The buckets_path property consists of multiple entries. Each entry maps a variable name (the key), which you can reference in the script, to a metric path (the value).

The basic syntax is:

```json
{
  "bucket_script": {
    "buckets_path": {
      "my_var1": "the_sum",
      "my_var2": "the_value_count"
    },
    "script": "params.my_var1 / params.my_var2"
  }
}
```

The following example runs a sum aggregation on the buckets generated by a histogram of bytes. For each 10,000-byte bucket, a bucket_script then calculates the percentage of total RAM that belongs to documents with a zip extension:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "histogram": {
        "field": "bytes",
        "interval": 10000
      },
      "aggs": {
        "total_ram": {
          "sum": {
            "field": "machine.ram"
          }
        },
        "ext-type": {
          "filter": {
            "term": {
              "extension.keyword": "zip"
            }
          },
          "aggs": {
            "total_ram": {
              "sum": {
                "field": "machine.ram"
              }
            }
          }
        },
        "ram-percentage": {
          "bucket_script": {
            "buckets_path": {
              "machineRam": "ext-type>total_ram",
              "totalRam": "total_ram"
            },
            "script": "params.machineRam / params.totalRam"
          }
        }
      }
    }
  }
}
```

Sample response

```json
"aggregations" : {
  "sales_per_month" : {
    "buckets" : [
      {
        "key" : 0.0,
        "doc_count" : 13372,
        "ext-type" : {
          "doc_count" : 1558,
          "total_ram" : {
            "value" : 2.0090783268864E13
          }
        },
        "total_ram" : {
          "value" : 1.7214228922368E14
        },
        "ram-percentage" : {
          "value" : 0.11671032934131736
        }
      },
      {
        "key" : 10000.0,
        "doc_count" : 702,
        "ext-type" : {
          "doc_count" : 116,
          "total_ram" : {
            "value" : 1.622423896064E12
          }
        },
        "total_ram" : {
          "value" : 9.015136354304E12
        },
        "ram-percentage" : {
          "value" : 0.17996665078608862
        }
      }
    ]
  }
}
}
```

The RAM percentage is calculated and appended at the end of each bucket.

The bucket_selector aggregation is a script-based aggregation that selects the buckets returned by a histogram (or date_histogram) aggregation. Use it when you don’t want certain buckets in the output based on conditions that you supply.

The bucket_selector aggregation executes a script to decide if a bucket stays in the parent multi-bucket aggregation.

The basic syntax is:

```json
{
  "bucket_selector": {
    "buckets_path": {
      "my_var1": "the_sum",
      "my_var2": "the_value_count"
    },
    "script": "params.my_var1 > params.my_var2"
  }
}
```

Note that the script must return a Boolean value that determines whether the bucket is kept.

The following example calculates the sum of bytes per month and then checks whether that sum is greater than 20,000. If it is, the bucket is retained in the bucket list. Otherwise, it’s removed from the final output.

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "bytes_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "total_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "bytes_bucket_filter": {
          "bucket_selector": {
            "buckets_path": {
              "totalBytes": "total_bytes"
            },
            "script": "params.totalBytes > 20000"
          }
        }
      }
    }
  }
}
```

Sample response

```json
"aggregations" : {
  "bytes_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "total_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "total_bytes" : {
          "value" : 3.8880434E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "total_bytes" : {
          "value" : 3.1445055E7
        }
      }
    ]
  }
}
}
```

bucket_sort

The bucket_sort aggregation is a parent aggregation that sorts buckets of a previous aggregation.

You can specify several sort fields together with the corresponding sort order. Additionally, you can sort each bucket based on its key, count, or sub-aggregations. You can also truncate the buckets by setting the from and size parameters.

Syntax

```json
{
  "bucket_sort": {
    "sort": [
      { "sort_field_1": { "order": "asc" } },
      { "sort_field_2": { "order": "desc" } },
      "sort_field_3"
    ],
    "from": 1,
    "size": 3
  }
}
```

The following example sorts the buckets of a date_histogram aggregation based on the computed total_bytes values. The buckets are sorted in descending order so that the buckets with the highest number of bytes are returned first.

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "total_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "bytes_bucket_sort": {
          "bucket_sort": {
            "sort": [
              { "total_bytes": { "order": "desc" } }
            ],
            "size": 3
          }
        }
      }
    }
  }
}
```

Sample response

```json
"aggregations" : {
  "sales_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "total_bytes" : {
          "value" : 3.8880434E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "total_bytes" : {
          "value" : 3.1445055E7
        }
      },
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "total_bytes" : {
          "value" : 9400200.0
        }
      }
    ]
  }
}
}
```

You can also use this aggregation to truncate the resulting buckets without sorting them. To do so, use the from and/or size parameters without sort.
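For example, the following sketch (a fragment that would replace the bucket_sort sub-aggregation in the preceding request) skips the first bucket and keeps the next two, without reordering them:

```json
"bytes_bucket_truncate": {
  "bucket_sort": {
    "from": 1,
    "size": 2
  }
}
```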

cumulative_sum

The cumulative_sum aggregation is a parent aggregation that calculates the cumulative sum of each bucket of a previous aggregation.

A cumulative sum is a sequence of partial sums of a given sequence. For example, the cumulative sums of the sequence {a,b,c,…} are a, a+b, a+b+c, and so on. You can use the cumulative sum to visualize the rate of change of a field over time.

The following example calculates the cumulative sum of bytes on a monthly basis:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "no-of-bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "cumulative_bytes": {
          "cumulative_sum": {
            "buckets_path": "no-of-bytes"
          }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "sales_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "no-of-bytes" : {
          "value" : 9400200.0
        },
        "cumulative_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "no-of-bytes" : {
          "value" : 3.8880434E7
        },
        "cumulative_bytes" : {
          "value" : 4.8280634E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "no-of-bytes" : {
          "value" : 3.1445055E7
        },
        "cumulative_bytes" : {
          "value" : 7.9725689E7
        }
      }
    ]
  }
}
}
```

derivative

The derivative aggregation is a parent aggregation that calculates 1st order and 2nd order derivatives of each bucket of a previous aggregation.

In mathematics, the derivative of a function measures its sensitivity to change. In other words, a derivative evaluates the rate of change of some function with respect to some variable. To learn more about derivatives, see Wikipedia.

You can use derivatives to calculate the rate of change of numeric values compared to previous time periods.

The 1st order derivative indicates whether a metric is increasing or decreasing, and by how much it’s increasing or decreasing.

The following example calculates the 1st order derivative for the sum of bytes per month. The 1st order derivative is the difference between the number of bytes in the current month and the previous month:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "number_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "bytes_deriv": {
          "derivative": {
            "buckets_path": "number_of_bytes"
          }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "sales_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "number_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "number_of_bytes" : {
          "value" : 3.8880434E7
        },
        "bytes_deriv" : {
          "value" : 2.9480234E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "number_of_bytes" : {
          "value" : 3.1445055E7
        },
        "bytes_deriv" : {
          "value" : -7435379.0
        }
      }
    ]
  }
}
}
```

The 2nd order derivative is a double derivative or a derivative of the derivative. It indicates how the rate of change of a quantity is itself changing. It’s the difference between the 1st order derivatives of adjacent buckets.

To calculate a 2nd order derivative, chain one derivative aggregation to another:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "sales_per_month": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "number_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "bytes_deriv": {
          "derivative": {
            "buckets_path": "number_of_bytes"
          }
        },
        "bytes_2nd_deriv": {
          "derivative": {
            "buckets_path": "bytes_deriv"
          }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "sales_per_month" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "number_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "number_of_bytes" : {
          "value" : 3.8880434E7
        },
        "bytes_deriv" : {
          "value" : 2.9480234E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "number_of_bytes" : {
          "value" : 3.1445055E7
        },
        "bytes_deriv" : {
          "value" : -7435379.0
        },
        "bytes_2nd_deriv" : {
          "value" : -3.6915613E7
        }
      }
    ]
  }
}
}
```

The first bucket doesn’t have a 1st order derivative because a derivative needs at least two points for comparison. The first and second buckets don’t have a 2nd order derivative because a 2nd order derivative needs at least two data points from the 1st order derivative.

The 1st order derivative for the “2020-11-01” bucket is 2.9480234E7, and for the “2020-12-01” bucket it's -7435379. So, the 2nd order derivative of the “2020-12-01” bucket is -3.6915613E7 (-7435379 - 2.9480234E7).

Theoretically, you could continue chaining derivative aggregations to calculate the third, the fourth, and even higher-order derivatives. That would, however, provide little to no value for most datasets.

moving_avg

A moving_avg aggregation is a parent aggregation that calculates the moving average metric.

The moving_avg aggregation finds the series of averages of different windows (subsets) of a dataset. A window’s size represents the number of data points covered by the window on each iteration (specified by the window property and set to 5 by default). On each iteration, the algorithm calculates the average of all data points that fit into the window and then slides forward by excluding the oldest data point of the previous window and including the next data point in the series.

For example, given the data [1, 5, 8, 23, 34, 28, 7, 23, 20, 19], you can calculate a simple moving average with a window size of 5 as follows:

```
(1 + 5 + 8 + 23 + 34) / 5 = 14.2
(5 + 8 + 23 + 34 + 28) / 5 = 19.6
(8 + 23 + 34 + 28 + 7) / 5 = 20
and so on...
```

For more information, see Wikipedia.

You can use the moving_avg aggregation either to smooth out short-term fluctuations or to highlight longer-term trends or cycles in your time-series data.

Specify a small window size (for example, window: 10) that closely follows the data to smooth out small-scale fluctuations. Alternatively, specify a larger window size (for example, window: 100) that lags behind the actual data by a substantial amount to smooth out all higher-frequency fluctuations or random noise, making lower-frequency trends more visible.

The following example nests a moving_avg aggregation into a date_histogram aggregation:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "my_date_histogram": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": { "field": "bytes" }
        },
        "moving_avg_of_sum_of_bytes": {
          "moving_avg": { "buckets_path": "sum_of_bytes" }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "my_date_histogram" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "sum_of_bytes" : {
          "value" : 3.8880434E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "sum_of_bytes" : {
          "value" : 3.1445055E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.4140317E7
        }
      }
    ]
  }
}
}
```

You can also use the moving_avg aggregation to predict future buckets. To predict buckets, add the predict property and set it to the number of predictions that you want to see.

The following example adds five predictions to the preceding query:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "my_date_histogram": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "moving_avg_of_sum_of_bytes": {
          "moving_avg": {
            "buckets_path": "sum_of_bytes",
            "predict": 5
          }
        }
      }
    }
  }
}
```

Sample response

```json
"aggregations" : {
  "my_date_histogram" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "sum_of_bytes" : {
          "value" : 3.8880434E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "sum_of_bytes" : {
          "value" : 3.1445055E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.4140317E7
        }
      },
      {
        "key_as_string" : "2021-01-01T00:00:00.000Z",
        "key" : 1609459200000,
        "doc_count" : 0,
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.6575229666666668E7
        }
      },
      {
        "key_as_string" : "2021-02-01T00:00:00.000Z",
        "key" : 1612137600000,
        "doc_count" : 0,
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.6575229666666668E7
        }
      },
      {
        "key_as_string" : "2021-03-01T00:00:00.000Z",
        "key" : 1614556800000,
        "doc_count" : 0,
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.6575229666666668E7
        }
      },
      {
        "key_as_string" : "2021-04-01T00:00:00.000Z",
        "key" : 1617235200000,
        "doc_count" : 0,
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.6575229666666668E7
        }
      },
      {
        "key_as_string" : "2021-05-01T00:00:00.000Z",
        "key" : 1619827200000,
        "doc_count" : 0,
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.6575229666666668E7
        }
      }
    ]
  }
}
}
```

The moving_avg aggregation supports five models: simple, linear, exponentially weighted, holt-linear, and holt-winters. These models differ in how the values in the window are weighted. As data points become “older” (that is, as the window slides away from them), they might be weighted differently. You can specify a model of your choice by setting the model property, which holds the name of the model, along with the settings object, which you can use to provide model properties. For more information about these models, see Wikipedia.

A simple model first calculates the sum of all data points in the window, and then divides that sum by the size of the window. In other words, a simple model calculates a simple arithmetic mean for each window in your dataset.

The following example uses a simple model with a window size of 30:

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "my_date_histogram": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "moving_avg_of_sum_of_bytes": {
          "moving_avg": {
            "buckets_path": "sum_of_bytes",
            "window": 30,
            "model": "simple"
          }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "my_date_histogram" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "sum_of_bytes" : {
          "value" : 3.8880434E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "sum_of_bytes" : {
          "value" : 3.1445055E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.4140317E7
        }
      }
    ]
  }
}
}
```

The following example uses a holt model. You can set the rate at which the importance of data points decays using the alpha and beta settings. The default value of alpha is 0.3, and the default value of beta is 0.1. You can set either to any float value between 0 and 1, inclusive.

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "my_date_histogram": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "sum_of_bytes": {
          "sum": {
            "field": "bytes"
          }
        },
        "moving_avg_of_sum_of_bytes": {
          "moving_avg": {
            "buckets_path": "sum_of_bytes",
            "model": "holt",
            "settings": {
              "alpha": 0.6,
              "beta": 0.4
            }
          }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "my_date_histogram" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "sum_of_bytes" : {
          "value" : 3.8880434E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "sum_of_bytes" : {
          "value" : 3.1445055E7
        },
        "moving_avg_of_sum_of_bytes" : {
          "value" : 2.70883404E7
        }
      }
    ]
  }
}
}
```

serial_diff

The serial_diff aggregation is a parent pipeline aggregation that computes the difference between metric values in the current bucket and a previous bucket separated by a specified time lag.

You can use the serial_diff aggregation to find the change in the data between time periods rather than the absolute values.

Use the lag parameter (a positive, non-zero integer) to specify which previous bucket to subtract from the current one. If you don’t specify the lag parameter, OpenSearch sets it to 1.

Let’s say that the population of a city grows with time. If you use the serial differencing aggregation with a lag of one day, you can see the daily growth. Similarly, you can compute a series of differences of the weekly averages of a total price.

```json
GET opensearch_dashboards_sample_data_logs/_search
{
  "size": 0,
  "aggs": {
    "my_date_histogram": {
      "date_histogram": {
        "field": "@timestamp",
        "calendar_interval": "month"
      },
      "aggs": {
        "the_sum": {
          "sum": {
            "field": "bytes"
          }
        },
        "thirtieth_difference": {
          "serial_diff": {
            "buckets_path": "the_sum",
            "lag": 30
          }
        }
      }
    }
  }
}
```

Sample response

```json
...
"aggregations" : {
  "my_date_histogram" : {
    "buckets" : [
      {
        "key_as_string" : "2020-10-01T00:00:00.000Z",
        "key" : 1601510400000,
        "doc_count" : 1635,
        "the_sum" : {
          "value" : 9400200.0
        }
      },
      {
        "key_as_string" : "2020-11-01T00:00:00.000Z",
        "key" : 1604188800000,
        "doc_count" : 6844,
        "the_sum" : {
          "value" : 3.8880434E7
        }
      },
      {
        "key_as_string" : "2020-12-01T00:00:00.000Z",
        "key" : 1606780800000,
        "doc_count" : 5595,
        "the_sum" : {
          "value" : 3.1445055E7
        }
      }
    ]
  }
}
}
```