Policy Performance

High Performance Policy Decisions

For low-latency/high-performance use cases, e.g. microservice API authorization, policy evaluation has a budget on the order of 1 millisecond. Not all use cases require that kind of performance, and OPA is powerful enough that you can write expressive policies that take longer than 1 millisecond to evaluate. But for high-performance use cases, there is a fragment of the policy language that has been engineered to evaluate quickly. Even as the size of the policies grows, the performance for this fragment can be nearly constant-time.

Linear fragment

The linear fragment of the language is all of those policies where evaluation amounts to walking over the policy once. This means there is no search required to make a policy decision. Any variables you use can be assigned at most one value.

For example, the following rule has one local variable user, and that variable can only be assigned one value. Intuitively, evaluating this rule requires checking each of the conditions in the body, and if there were N of these rules, evaluation would only require walking over each of them as well.

```rego
package linear

allow {
    some user
    input.method = "GET"
    input.path = ["accounts", user]
    input.user = user
}
```

Use objects over arrays

One common mistake people make is using arrays when they could use objects. For example, below is an array of ID/first-name/last-name objects where the ID is unique, and you are looking up the first name and last name for a given ID.


```rego
# DO NOT DO THIS.
# Array of objects where each object has a unique identifier
d = [{"id": "a123", "first": "alice", "last": "smith"},
     {"id": "a456", "first": "bob", "last": "jones"},
     {"id": "a789", "first": "clarice", "last": "johnson"},
]

# search through all elements of the array to find the ID
d[i].id == "a789"
d[i].first ...
```

Instead, use a dictionary where the key is the ID and the value is the first-name/last-name. Given the ID, you can lookup the name information directly.


```rego
# DO THIS INSTEAD OF THE ABOVE
# Use an object whose keys are the IDs of the objects.
# Looking up an object given its ID requires NO search.
d = {"a123": {"first": "alice", "last": "smith"},
     "a456": {"first": "bob", "last": "jones"},
     "a789": {"first": "clarice", "last": "johnson"},
}

# no search required
d["a789"].first ...
```

Use indexed statements

The linear-time fragment ensures that the cost of evaluation is no larger than the size of the policy. OPA lets you write non-linear policies, because sometimes you need to, and because sometimes it’s convenient. The blog on partial evaluation describes one mechanism for converting non-linear policies into linear policies.

But as the size of the policy grows, the cost of evaluation grows with it. Sometimes the policy can grow large enough that even the linear-fragment fails to meet the performance budget.

In the linear fragment, OPA includes special algorithms that index rules efficiently, sometimes making evaluation constant-time, even as the policy grows. The more effective the indexing is, the fewer rules need to be evaluated.

Here is an example policy from the rule-indexing blog, which gives the details of these algorithms. See the rest of this section for details on indexed statements.

```rego
package indexed

default allow = false

allow {
    some user
    input.method = "GET"
    input.path = ["accounts", user]
    input.user = user
}

allow {
    input.method = "GET"
    input.path = ["accounts", "report"]
    roles[input.user][_] = "admin"
}

allow {
    input.method = "POST"
    input.path = ["accounts"]
    roles[input.user][_] = "admin"
}

roles = {
    "bob": ["admin", "hr"],
    "alice": ["procurement"],
}
```

For example, the query:

```
allow
```

with the input:

```json
{
    "user": "bob",
    "path": ["accounts", "bob"],
    "method": "GET"
}
```

evaluates to:

```
true
```

Equality statements

For simple equality statements (= and ==) to be indexed, one side must be a non-nested reference that does not contain any variables and the other side must be a variable, scalar, or array (which may contain scalars and variables). For example:

| Expression | Indexed | Reason |
| --- | --- | --- |
| `input.x = "foo"` | yes | n/a |
| `input.x.y = "bar"` | yes | n/a |
| `input.x = ["foo", i]` | yes | n/a |
| `input.x[i] = "foo"` | no | reference contains variables |
| `input.x[input.y] = "foo"` | no | reference is nested |
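To make the table concrete, here is a hypothetical sketch (the package and rule names are invented for illustration, not taken from the original docs) showing indexable equality statements in context:

```rego
package equality_indexing_example

# Hypothetical rules for illustration. Each body starts with equality
# statements whose references (input.method, input.path) are non-nested
# and variable-free, so the indexer can select candidate rules directly
# from the input values instead of evaluating every rule body.
allow {
    input.method = "GET"
    input.path = "public"
}

allow {
    input.method = "POST"
    input.path = "submissions"
}
```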

Glob statements

For glob.match(pattern, delimiter, match) statements to be indexed, the pattern must be recognized by the indexer and the match must be a non-nested reference that does not contain any variables. The indexer recognizes patterns containing the normal glob (*) operator but not the super glob (**) or character pattern matching operators.

| Expression | Indexed | Reason |
| --- | --- | --- |
| `glob.match("foo:*:bar", [":"], input.x)` | yes | n/a |
| `glob.match("foo:**:bar", [":"], input.x)` | no | pattern contains `**` |
| `glob.match("foo:*:bar", [":"], input.x[i])` | no | match contains variable(s) |
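As a hedged sketch (the rule and field names are invented), an indexable glob statement might appear in a rule like this:

```rego
package glob_indexing_example

# Hypothetical rule for illustration. The pattern uses only the normal
# glob operator (*), and the match operand (input.resource) is a
# non-nested, variable-free reference, so the statement can be indexed.
allow {
    glob.match("accounts:*:read", [":"], input.resource)
}
```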

Comprehension Indexing

Rego does not support mutation. As a result, certain operations like "group by" require the use of comprehensions to aggregate values. To avoid O(n^2) runtime complexity in queries/rules that perform group-by, OPA may compute and memoize the entire collection produced by a comprehension at once. This ensures that runtime complexity is O(n), where n is the size of the collection that group-by/aggregation is being performed on.

For example, suppose the policy must check whether the number of ports exposed on an interface exceeds some threshold (e.g., any interface may expose up to 100 ports). The policy is given the port-to-interface mapping as a JSON array under input:

```json
{
    "exposed": [
        {
            "interface": "eth0",
            "port": 8080
        },
        {
            "interface": "eth0",
            "port": 8081
        },
        {
            "interface": "eth1",
            "port": 443
        },
        {
            "interface": "lo1",
            "port": 5000
        }
    ]
}
```

In this case, the policy must count the number of ports exposed on each interface. To do this, the policy must first aggregate/group the ports by the interface name. Conceptually, the policy should generate a document like this:

```json
{
    "exposed_ports_by_interface": {
        "eth0": [8080, 8081],
        "eth1": [443],
        "lo1": [5000]
    }
}
```

Since multiple ports could be exposed on a single interface, the policy must use a comprehension to aggregate the port values by the interface names. To implement this logic in Rego, we would write:

```rego
some i
intf := input.exposed[i].interface
ports := [port | some j; input.exposed[j].interface == intf; port := input.exposed[j].port]
```

Without comprehension indexing, this query would be O(n^2) where n is the size of input.exposed. However, with comprehension indexing, the query remains O(n) because OPA only computes the comprehension once. In this case, the comprehension is evaluated and all possible values of ports are computed at once. These values are indexed by the assignments of intf.

To implement the policy above we could write:

```rego
deny[msg] {
    some i
    count(exposed_ports_by_interface[i]) > 100
    msg := sprintf("interface '%v' exposes too many ports", [i])
}

exposed_ports_by_interface := {intf: ports |
    some i
    intf := input.exposed[i].interface
    ports := [port |
        some j
        input.exposed[j].interface == intf
        port := input.exposed[j].port
    ]
}
```

Indices can be built for comprehensions (nested or not) that generate collections (i.e., arrays, sets, or objects) based on variables in an outer query. In the example above:

  • intf is the variable in the outer query.
  • [port | some j; input.exposed[j].interface == intf; port := input.exposed[j].port] is the comprehension.
  • ports is the variable the collection is assigned to.

In order to be indexed, comprehensions must meet the following conditions:

  1. The comprehension appears in an assignment or unification statement.
  2. The expression containing the comprehension does not include a with statement.
  3. The expression containing the comprehension is not negated.
  4. The comprehension body is safe when considered independent from the outer query.
  5. The comprehension body closes over at least one variable in the outer query and none of these variables appear as outputs in references or walk() calls or inside nested comprehensions.
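For contrast with the negative examples that follow, here is a hypothetical comprehension (names invented for illustration, not from the original docs) that satisfies all five conditions:

```rego
package comprehension_indexing_example

# Hypothetical grouping rule for illustration. The inner array
# comprehension appears in an assignment (condition 1), its expression has
# no `with` statement (2) and is not negated (3), its body is safe when
# considered on its own because it only references `input` (4), and it
# closes over exactly one outer-query variable, `region`, which is not a
# reference output (5).
servers_by_region := {region: names |
    some i
    region := input.servers[i].region
    names := [name |
        some j
        input.servers[j].region == region
        name := input.servers[j].name
    ]
}
```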

The following examples show cases that are NOT indexed:

```rego
not_indexed_because_missing_assignment {
    x := input[_]
    [y | some y; x == input[y]]
}

not_indexed_because_includes_with {
    x := input[_]
    ys := [y | some y; x := input[y]] with input as {}
}

not_indexed_because_negated {
    x := input[_]
    not data.arr = [y | some y; x := input[y]]
}

not_indexed_because_safety {
    obj := input.foo.bar
    x := obj[_]
    ys := [y | some y; x == obj[y]]
}

not_indexed_because_no_closure {
    ys := [y | x := input[y]]
}

not_indexed_because_reference_operand_closure {
    x := input[y].x
    ys := [y | x == input[y].z[_]]
}

not_indexed_because_nested_closure {
    x = 1
    y = 2
    _ = [i |
        x == input.foo[i]
        _ = [j | y == input.bar[j]]]
}
```

The 4th and 5th restrictions may be relaxed in the future.

Profiling

You can also profile your policies using opa eval. The profiler is useful if you need to understand why policy evaluation is slow.

The opa eval command provides the following profiler options:

| Option | Detail | Default |
| --- | --- | --- |
| `--profile` | Enables expression profiling and outputs profiler results. | off |
| `--profile-sort` | Criteria to sort the expression profiling results. This option implies `--profile`. | total_time_ns => num_eval => num_redo => file => line |
| `--profile-limit` | Desired number of profiling results sorted on the given criteria. This option implies `--profile`. | 10 |

Sort criteria for the profile results

  • total_time_ns - Results are displayed in decreasing order of expression evaluation time
  • num_eval - Results are displayed in decreasing order of the number of times an expression is evaluated
  • num_redo - Results are displayed in decreasing order of the number of times an expression is re-evaluated (redo)
  • file - Results are sorted in reverse alphabetical order based on the Rego source filename
  • line - Results are displayed in decreasing order of expression line number in the source file

When sort criteria are not provided, total_time_ns has the highest priority and line the lowest.

Example Policy

The profiling examples shown later on this page use the sample policy below.

```rego
package rbac

# Example input request
input = {
    "subject": "bob",
    "resource": "foo123",
    "action": "write",
}

# Example RBAC configuration.
bindings = [
    {
        "user": "alice",
        "roles": ["dev", "test"],
    },
    {
        "user": "bob",
        "roles": ["test"],
    },
]

roles = [
    {
        "name": "dev",
        "permissions": [
            {"resource": "foo123", "action": "write"},
            {"resource": "foo123", "action": "read"},
        ],
    },
    {
        "name": "test",
        "permissions": [{"resource": "foo123", "action": "read"}],
    },
]

# Example RBAC policy implementation.
default allow = false

allow {
    some role_name
    user_has_role[role_name]
    role_has_permission[role_name]
}

user_has_role[role_name] {
    binding := bindings[_]
    binding.user == input.subject
    role_name := binding.roles[_]
}

role_has_permission[role_name] {
    role := roles[_]
    role_name := role.name
    perm := role.permissions[_]
    perm.resource == input.resource
    perm.action == input.action
}
```

Example: Display ALL profile results with default ordering criteria

```bash
opa eval --data rbac.rego --profile --format=pretty 'data.rbac.allow'
```

Sample Output

```
false

+----------+----------+----------+-----------------+
|   TIME   | NUM EVAL | NUM REDO |    LOCATION     |
+----------+----------+----------+-----------------+
| 47.148µs |        1 |        1 | data.rbac.allow |
| 28.965µs |        1 |        1 | rbac.rego:11    |
| 24.384µs |        1 |        1 | rbac.rego:41    |
| 23.064µs |        2 |        1 | rbac.rego:47    |
| 15.525µs |        1 |        1 | rbac.rego:38    |
| 14.137µs |        1 |        2 | rbac.rego:46    |
| 13.927µs |        1 |        0 | rbac.rego:42    |
| 13.568µs |        1 |        1 | rbac.rego:55    |
| 12.982µs |        1 |        0 | rbac.rego:56    |
| 12.763µs |        1 |        2 | rbac.rego:52    |
+----------+----------+----------+-----------------+
+------------------------------+----------+
|            METRIC            |  VALUE   |
+------------------------------+----------+
| timer_rego_module_compile_ns |  1871613 |
| timer_rego_query_compile_ns  |    82290 |
| timer_rego_query_eval_ns     |   257952 |
| timer_rego_query_parse_ns    | 12337169 |
+------------------------------+----------+
```

As seen from the above table, all results are displayed. The profile results are sorted on the default sort criteria.

Example: Display top 5 profile results

```bash
opa eval --data rbac.rego --profile-limit 5 --format=pretty 'data.rbac.allow'
```

Sample Output

```
+----------+----------+----------+-----------------+
|   TIME   | NUM EVAL | NUM REDO |    LOCATION     |
+----------+----------+----------+-----------------+
| 46.329µs |        1 |        1 | data.rbac.allow |
| 26.656µs |        1 |        1 | rbac.rego:11    |
| 24.206µs |        2 |        1 | rbac.rego:47    |
| 23.235µs |        1 |        1 | rbac.rego:41    |
| 18.242µs |        1 |        1 | rbac.rego:38    |
+----------+----------+----------+-----------------+
```

The profile results are sorted on the default sort criteria. The --profile option is implied and does not need to be provided.

Example: Display top 5 profile results based on the number of times an expression is evaluated

```bash
opa eval --data rbac.rego --profile-limit 5 --profile-sort num_eval --format=pretty 'data.rbac.allow'
```

Sample Profile Output

```
+----------+----------+----------+-----------------+
|   TIME   | NUM EVAL | NUM REDO |    LOCATION     |
+----------+----------+----------+-----------------+
| 26.675µs |        2 |        1 | rbac.rego:47    |
|  9.274µs |        2 |        1 | rbac.rego:53    |
| 43.356µs |        1 |        1 | data.rbac.allow |
| 22.467µs |        1 |        1 | rbac.rego:41    |
| 22.425µs |        1 |        1 | rbac.rego:11    |
+----------+----------+----------+-----------------+
```

As seen from the above table, the results are arranged first in decreasing order of the number of evaluations; if two expressions have been evaluated the same number of times, the remaining default criteria are applied since no other sort criteria were provided. In this case: total_time_ns => num_redo => file => line. The --profile option is implied and does not need to be provided.

Example: Display top 5 profile results based on the number of times an expression is evaluated and the number of times an expression is re-evaluated

```bash
opa eval --data rbac.rego --profile-limit 5 --profile-sort num_eval,num_redo --format=pretty 'data.rbac.allow'
```

Sample Profile Output

```
+----------+----------+----------+-----------------+
|   TIME   | NUM EVAL | NUM REDO |    LOCATION     |
+----------+----------+----------+-----------------+
| 22.892µs |        2 |        1 | rbac.rego:47    |
|  8.831µs |        2 |        1 | rbac.rego:53    |
| 13.767µs |        1 |        2 | rbac.rego:46    |
| 10.78µs  |        1 |        2 | rbac.rego:52    |
| 42.338µs |        1 |        1 | data.rbac.allow |
+----------+----------+----------+-----------------+
```

As seen from the above table, results are first arranged based on the number of evaluations, then the number of re-evaluations, and finally the default criteria. In this case: total_time_ns => file => line. The --profile-sort option accepts repeated or comma-separated values for the criteria. The order of the criteria on the command line determines their priority.

Another way to get the same output as above would be the following:

```bash
opa eval --data rbac.rego --profile-limit 5 --profile-sort num_eval --profile-sort num_redo --format=pretty 'data.rbac.allow'
```

Benchmarking Queries

OPA provides CLI options to benchmark a single query via the opa bench command. It evaluates the query similarly to opa eval, but repeats the evaluation (in its most efficient form) a number of times and reports metrics.

Example: Benchmark rbac allow

Using the same policy source as shown above:

```bash
$ opa bench --data rbac.rego 'data.rbac.allow'
```

Will result in an output similar to:

```
+-------------------------------------------+------------+
| samples                                   |      27295 |
| ns/op                                     |      45032 |
| B/op                                      |      20977 |
| allocs/op                                 |        382 |
| histogram_timer_rego_query_eval_ns_stddev |      25568 |
| histogram_timer_rego_query_eval_ns_99.9%  |     335906 |
| histogram_timer_rego_query_eval_ns_99.99% |     336493 |
| histogram_timer_rego_query_eval_ns_mean   |      40355 |
| histogram_timer_rego_query_eval_ns_median |      35846 |
| histogram_timer_rego_query_eval_ns_99%    |     133936 |
| histogram_timer_rego_query_eval_ns_90%    |      44780 |
| histogram_timer_rego_query_eval_ns_95%    |      50815 |
| histogram_timer_rego_query_eval_ns_min    |      31284 |
| histogram_timer_rego_query_eval_ns_max    |     336493 |
| histogram_timer_rego_query_eval_ns_75%    |      38254 |
| histogram_timer_rego_query_eval_ns_count  |      27295 |
+-------------------------------------------+------------+
```

These results capture metrics over the sample runs, measuring only query evaluation. All time spent preparing to evaluate (loading, parsing, compiling, etc.) is omitted.

Note: all */op results are an average over the number of samples (or N in the JSON format)

Options for opa bench

| Option | Detail | Default |
| --- | --- | --- |
| `--benchmem` | Report memory allocations with benchmark results. | true |
| `--metrics` | Report additional query performance metrics. | true |
| `--count` | Number of times to repeat the benchmark. | 1 |

Benchmarking OPA Tests

There is also a --bench option for opa test which will perform benchmarking on OPA unit tests. This evaluates any loaded tests as benchmarks. Additional time for test-specific actions is included, so the timing will typically be longer than what is seen with opa bench. The primary use case is not absolute timing, but tracking relative timing as policies change.

Options for opa test --bench

| Option | Detail | Default |
| --- | --- | --- |
| `--benchmem` | Report memory allocations with benchmark results. | true |
| `--count` | Number of times to repeat the benchmark. | 1 |

Example Tests

Adding a unit test file for the policy source as shown above:

```rego
package rbac

test_user_has_role_dev {
    user_has_role["dev"] with input as {"subject": "alice"}
}

test_user_has_role_negative {
    not user_has_role["super-admin"] with input as {"subject": "alice"}
}
```

Which when run normally will output something like:

```
$ opa test -v ./rbac.rego ./rbac_test.rego
data.rbac.test_user_has_role_dev: PASS (605.076µs)
data.rbac.test_user_has_role_negative: PASS (318.047µs)
--------------------------------------------------------------------------------
PASS: 2/2
```

Example: Benchmark rbac unit tests

```bash
opa test -v --bench ./rbac.rego ./rbac_test.rego
```

Results in output:

```
data.rbac.test_user_has_role_dev       44749  27677 ns/op  23146 timer_rego_query_eval_ns/op  12303 B/op  229 allocs/op
data.rbac.test_user_has_role_negative  44526  26348 ns/op  22033 timer_rego_query_eval_ns/op  12470 B/op  235 allocs/op
--------------------------------------------------------------------------------
PASS: 2/2
```

Example: Benchmark rbac unit tests and compare with benchstat

The benchmark output format defaults to pretty, but a gobench format is also supported which complies with the Golang Benchmark Data Format. This allows the use of tools like benchstat to gain additional insight into benchmark results and to diff between benchmark results.

Example:

```bash
opa test -v --bench --count 10 --format gobench ./rbac.rego ./rbac_test.rego | tee ./old.txt
```

This will result in an old.txt file and output similar to:

```
BenchmarkDataRbacTestUserHasRoleDev       45152  26323 ns/op  22026 timer_rego_query_eval_ns/op  12302 B/op  229 allocs/op
BenchmarkDataRbacTestUserHasRoleNegative  45483  26253 ns/op  21986 timer_rego_query_eval_ns/op  12470 B/op  235 allocs/op
--------------------------------------------------------------------------------
PASS: 2/2
.
.
```
Repeated 10 times (as specified by the --count flag).

This format can then be loaded by benchstat:

```bash
benchstat ./old.txt
```

Output:

```
name                             time/op
DataRbacTestUserHasRoleDev       29.8µs ±18%
DataRbacTestUserHasRoleNegative  32.0µs ±35%

name                             timer_rego_query_eval_ns/op
DataRbacTestUserHasRoleDev       25.0k ±18%
DataRbacTestUserHasRoleNegative  26.7k ±35%

name                             alloc/op
DataRbacTestUserHasRoleDev       12.3kB ± 0%
DataRbacTestUserHasRoleNegative  12.5kB ± 0%

name                             allocs/op
DataRbacTestUserHasRoleDev       229 ± 0%
DataRbacTestUserHasRoleNegative  235 ± 0%
```

If a change is later introduced that alters performance, we can run the benchmarks again:

```bash
opa test -v --bench --count 10 --format gobench ./rbac.rego ./rbac_test.rego | tee ./new.txt
```

```
BenchmarkDataRbacTestUserHasRoleDev       27415  43671 ns/op  39301 timer_rego_query_eval_ns/op  17201 B/op  379 allocs/op
BenchmarkDataRbacTestUserHasRoleNegative  27583  44743 ns/op  40152 timer_rego_query_eval_ns/op  17369 B/op  385 allocs/op
--------------------------------------------------------------------------------
PASS: 2/2
.
.
```

(Repeated 10 times)

Then we can compare the results via:

```bash
benchstat ./old.txt ./new.txt
```

```
name                             old time/op  new time/op  delta
DataRbacTestUserHasRoleDev       29.8µs ±18%  47.4µs ±15%  +59.06%  (p=0.000 n=9+10)
DataRbacTestUserHasRoleNegative  32.0µs ±35%  47.1µs ±14%  +47.48%  (p=0.000 n=10+9)

name                             old timer_rego_query_eval_ns/op  new timer_rego_query_eval_ns/op  delta
DataRbacTestUserHasRoleDev       25.0k ±18%                       42.6k ±15%                       +70.51%  (p=0.000 n=9+10)
DataRbacTestUserHasRoleNegative  26.7k ±35%                       42.3k ±14%                       +58.15%  (p=0.000 n=10+9)

name                             old alloc/op  new alloc/op  delta
DataRbacTestUserHasRoleDev       12.3kB ± 0%   17.2kB ± 0%   +39.81%  (p=0.000 n=10+10)
DataRbacTestUserHasRoleNegative  12.5kB ± 0%   17.4kB ± 0%   +39.28%  (p=0.000 n=10+10)

name                             old allocs/op  new allocs/op  delta
DataRbacTestUserHasRoleDev       229 ± 0%       379 ± 0%       +65.50%  (p=0.000 n=10+10)
DataRbacTestUserHasRoleNegative  235 ± 0%       385 ± 0%       +63.83%  (p=0.000 n=10+10)
```

The delta column gives clear feedback that the evaluations have slowed down considerably.

Note that for benchstat you will want to run with --count to repeat the benchmarks a number of times (5-10 is usually enough). The tool requires several data points; otherwise the p-value will not show meaningful changes and the delta will be ~.

Resource Utilization

Policy evaluation is typically CPU-bound unless the policies have to pull additional data on-the-fly using built-in functions like http.send() (in which case evaluation likely becomes I/O-bound). Policy evaluation is currently single-threaded. If you are embedding OPA as a library, it is your responsibility to dispatch concurrent queries to different Goroutines/threads. If you are running the OPA server, it will parallelize concurrent requests and use as many cores as possible. You can limit the number of cores that OPA can consume by setting the GOMAXPROCS environment variable when starting OPA.

Memory usage scales with the size of the policy (i.e., Rego) and data (e.g., JSON) that you load into OPA. Raw JSON data loaded into OPA uses approximately 20x more memory compared to the same data stored in a compact, serialized format (e.g., on disk). This increased memory usage is due to the need to load the JSON data into Go data structures like maps, slices, and strings so that it can be evaluated. For example, if you load 8MB worth of JSON data representing 100,000 permission objects specifying subject/action/resource triplets, OPA would consume approximately 160MB of RAM.

Memory usage also scales linearly with the number of rules loaded into OPA. For example, loading 10,000 rules that implement an ACL-style authorization policy consumes approximately 130MB of RAM while 100,000 rules implementing the same policy (but with 10x more tuples to check) consumes approximately 1.1GB of RAM.

Optimization Levels

The --optimize (or -O) flag on the opa build command controls how bundles are optimized.

Optimization applies partial evaluation to precompute known values in the policy. The goal of partial evaluation is to convert non-linear-time policies into linear-time policies.

By specifying the --optimize flag, users can control how much time and resources are spent attempting to optimize the bundle. Generally, higher optimization levels require more time and resources. Currently, OPA supports three optimization levels. The exact optimizations applied in each level may change over time.

-O=0 (default)

By default optimizations are disabled.

-O=1 (recommended)

Policies are partially evaluated. Rules that DO NOT depend on unknowns (directly or indirectly) are evaluated, and the virtual documents they produce are inlined into call sites. Virtual documents that are required at evaluation time are not inlined. For example, if a base or virtual document is targeted by a with statement in the policy, the document will not be inlined.

Rules that depend on unknowns (directly or indirectly) are also partially evaluated; however, the virtual documents they produce ARE NOT inlined into call sites. The output policy should be structurally similar to the input policy.

The opa build command automatically marks the input document as unknown. In addition to the input document, if opa build is invoked with the -b/--bundle flag, any data references NOT prefixed by the .manifest roots are also marked as unknown.
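As a hypothetical sketch of what inlining means (the policy below is invented for illustration, not taken from the OPA docs), consider a rule that does not depend on any unknowns:

```rego
package optimization_example

# Hypothetical policy for illustration. `allowed_methods` does not depend
# on any unknowns (such as input), so at -O=1 its value can be computed at
# build time and inlined into its call site in `allow`.
allowed_methods := {"GET", "HEAD"}

allow {
    # After optimization this behaves as if the literal set
    # {"GET", "HEAD"} were written here directly.
    allowed_methods[input.method]
}
```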

-O=2 (aggressive)

Same as -O=1, except virtual documents produced by rules that depend on unknowns may be inlined into call sites. In addition, more aggressive inlining is applied within rules. This includes copy propagation and inlining of certain negated statements that would otherwise generate support rules.

Key Takeaways

For high-performance use cases:

  • Write your policies to minimize iteration and search.
    • Use objects instead of arrays when you have a unique identifier for the elements of the array.
    • Consider partial evaluation to compile non-linear policies to linear policies.
  • Write your policies with indexed statements so that rule-indexing is effective.
  • Use the profiler to help identify portions of the policy that would benefit the most from improved performance.
  • Use the benchmark tools to help get real world timing data and detect policy performance changes.