Resource Limits and Preventing Abusive Reads/Writes

This operational guide provides an overview of how to set resource limits on M3 components to prevent abusive reads/writes from impacting the availability or performance of M3 in a production environment.

M3DB

Configuring limits

The best way to get started protecting M3DB nodes is to set a few resource limits in the top-level limits config stanza for M3DB.

The primary limit is on total bytes recently read from disk across all queries, since this most directly causes memory pressure. Reading time series data that is already in memory (either because it is cached or because it is being actively written) costs much less than reading historical time series data, which must be read from disk. By specifically limiting bytes read from disk, and excluding bytes already in memory, we can apply a limit that most accurately reflects increased memory pressure on the database nodes. To set a limit, use the maxRecentlyQueriedSeriesDiskBytesRead stanza to define a policy for how much historical time series data can be read over a given lookback time window. The value specifies the maximum number of bytes read from disk allowed within the lookback period.

You can use the Prometheus query rate(query_limit_total_disk_bytes_read[1m]) to determine how many bytes are read from disk per second by your cluster today to inform an appropriate limit. Make sure to multiply that number by the lookback period to get your desired max value. For instance, if the query shows that you frequently read 100MB per second safely with your deployment and you want to use the default lookback of 15s then you would multiply 100MB by 15 to get 1.5GB as a max value with a 15s lookback.
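
To carry that worked example into configuration, a minimal sketch of the resulting stanza might look like the following (the value is purely illustrative and should be derived from your own measured disk read rate):

  limits:
    maxRecentlyQueriedSeriesDiskBytesRead:
      # Illustrative only: 100MB/s read from disk * 15s lookback = 1.5GB.
      value: 1500000000
      lookback: 15s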

The secondary limit is on the total volume of time series data recently read across all queries (whether in memory or not), since even querying data already in memory in an unbounded manner can overwhelm a database node. When using M3DB for metrics workloads, queries arrive as a set of matchers that select time series based on certain dimensions. The primary mechanism to protect against these matchers matching huge amounts of data in an unbounded way is to set a maximum limit on the number of time series blocks allowed to be matched and consequently read in a given time window. Use the maxRecentlyQueriedSeriesBlocks stanza to set a maximum value and a lookback time window that determines the duration over which the limit is enforced.

You can use the Prometheus query rate(query_limit_total_docs_matched[1m]) to determine how many time series blocks are queried per second by your cluster today to inform an appropriate limit. Make sure to multiply that number by the lookback period to get your desired max value. For instance, if the query shows that you frequently query 10,000 time series blocks per second safely with your deployment and you want to use the default lookback of 15s, then you would multiply 10,000 by 15 to get 150,000 as a max value with a 15s lookback.
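
Applying the same arithmetic to this limit, a sketch using the example numbers above (again illustrative, to be replaced with values based on your own measurements):

  limits:
    maxRecentlyQueriedSeriesBlocks:
      # Illustrative only: 10,000 blocks/s matched * 15s lookback = 150,000.
      value: 150000
      lookback: 15s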

Annotated configuration

  limits:
    # If set, will enforce a maximum cap on disk read bytes for time series that
    # resides historically on disk (and are not already in memory).
    maxRecentlyQueriedSeriesDiskBytesRead:
      # Value sets the maximum disk bytes read for historical data.
      value: 0
      # Lookback sets the time window that this limit is enforced over, every
      # lookback period the global count is reset to zero and when the limit
      # is reached it will reject any further time series blocks being matched
      # and read until the lookback period resets.
      lookback: 15s
    # If set, will enforce a maximum cap on time series blocks matched for
    # queries searching time series by dimensions.
    maxRecentlyQueriedSeriesBlocks:
      # Value sets the maximum time series blocks matched, use your block
      # settings to understand how many datapoints that may actually translate
      # to (e.g. 2 hour blocks for unaggregated data with 30s scrape interval
      # will translate to 240 datapoints per single time series block matched).
      value: 0
      # Lookback sets the time window that this limit is enforced over, every
      # lookback period the global count is reset to zero and when the limit
      # is reached it will reject any further time series blocks being matched
      # and read until the lookback period resets.
      lookback: 15s
    # If set then will limit the number of parallel write batch requests to the
    # database and return errors if hit.
    maxOutstandingWriteRequests: 0
    # If set then will limit the number of parallel read requests to the
    # database and return errors if hit.
    # Note since reads can be so variable in terms of how expensive they are
    # it is not always very useful to use this config to prevent resource
    # exhaustion from reads.
    maxOutstandingReadRequests: 0

M3 Query and M3 Coordinator

Deployment

Protecting the ingestion of metrics from being impacted by queries can first and foremost be done by deploying M3 Query and M3 Coordinator independently. That is, use a dedicated deployment of M3 Coordinator instances for writes to M3, and a dedicated deployment of M3 Query instances for queries to M3.

This ensures when M3 Query instances become busy and are starved of resources serving an unexpected query load, they will not interrupt the flow of metrics being ingested to M3.
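
To make the separation concrete, a sketch of how a caller such as Prometheus might be pointed at the two deployments, sending writes to the dedicated M3 Coordinator instances and reads to the dedicated M3 Query instances (the hostnames below are hypothetical placeholders; 7201 is the default listen port):

  remote_write:
    # Writes go to the dedicated M3 Coordinator deployment.
    - url: "http://m3coordinator-write.example.com:7201/api/v1/prom/remote/write"
  remote_read:
    # Reads go to the dedicated M3 Query deployment.
    - url: "http://m3query-read.example.com:7201/api/v1/prom/remote/read"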

Configuring limits

To protect against individual queries using too many resources, you can specify some sane limits in the M3 Query (and consequently M3 Coordinator) configuration file under the top level limits config stanza.

There are two types of limits:

  • Per query time series limit
  • Per query time series * blocks limit (docs limit)

When either of these limits is hit, you can define the behavior you would like: either return an error, or return a partial result with the response header M3-Results-Limited detailing the limit that was hit and a warning included in the response body.

Annotated configuration

  limits:
    # If set will override default limits set per query.
    perQuery:
      # If set limits the number of time series returned for any given
      # individual storage node per query, before returning result to query
      # service.
      maxFetchedSeries: 0
      # If set limits the number of index documents matched for any given
      # individual storage node per query, before returning result to query
      # service.
      # This equates to the number of time series * number of blocks, so for
      # 100 time series matching 4 hours of data for a namespace using a 2 hour
      # block size, that would result in matching 200 index documents.
      maxFetchedDocs: 0
      # If true this results in causing a query error if the query exceeds
      # the series or blocks limit for any given individual storage node per query.
      requireExhaustive: true
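
As a worked example of how these two limits relate, suppose you want to cap any single query at roughly 10,000 time series over up to 12 hours of data in a namespace with a 2 hour block size: 12 hours spans 6 blocks, so a matching docs limit would be 10,000 * 6 = 60,000. A sketch with these purely illustrative values:

  limits:
    perQuery:
      # Illustrative only: cap any single query at 10,000 time series.
      maxFetchedSeries: 10000
      # Illustrative only: 10,000 series * 6 blocks (12h of data at a 2 hour block size).
      maxFetchedDocs: 60000
      # Return a partial result and the M3-Results-Limited header instead of an error.
      requireExhaustive: false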

Headers

The following headers can also be used to override configured limits on a per-request basis (to allow for different limits depending on the caller):

  • M3-Limit-Max-Series:
    If this header is set it will override any configured per query time series limit. If the limit is hit, it will either return a partial result or an error based on the require exhaustive configuration set.

  • M3-Limit-Max-Docs:
    If this header is set it will override any configured per query time series * blocks limit (docs limit). If the limit is hit, it will either return a partial result or an error based on the require exhaustive configuration set.

  • M3-Limit-Require-Exhaustive:
    If this header is set it will override any configured require exhaustive setting. If “true” it will return an error if the query hits a configured limit (such as the series or docs limit) instead of a partial result. Otherwise if “false” it will return a partial result of the time series already matched, with the response header M3-Results-Limited detailing the limit that was hit and a warning included in the response body.
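
As an example, a caller that reads from M3 via Prometheus remote read could attach these headers to run under different limits than other callers. A hypothetical sketch (the URL is a placeholder, and the headers field assumes a Prometheus version that supports custom remote read headers):

  remote_read:
    - url: "http://m3query-read.example.com:7201/api/v1/prom/remote/read"
      # Hypothetical per-caller overrides: tighter series cap, partial results allowed.
      headers:
        M3-Limit-Max-Series: "50000"
        M3-Limit-Require-Exhaustive: "false"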