Setting Quotas

Overview

A resource quota, defined by a **ResourceQuota** object, provides constraints that limit aggregate resource consumption per project. It can limit the quantity of objects that can be created in a project by type, as well as the total amount of compute resources and storage that may be consumed by resources in that project.

See the Developer Guide for more on compute resources.
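For orientation, a minimal ResourceQuota object might look like the following. This is an illustrative sketch; the name and values are placeholders, and the individual fields are described in the tables and samples below.

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: example-quota        # illustrative name
spec:
  hard:
    pods: "10"               # at most 10 pods in a non-terminal state in the project
    requests.cpu: "4"        # sum of CPU requests across those pods cannot exceed 4 cores
    requests.memory: 8Gi     # sum of memory requests across those pods cannot exceed 8Gi
```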

Resources Managed by Quota

The following describes the set of compute resources and object types that may be managed by a quota.

A pod is in a terminal state if its status.phase is Failed or Succeeded.

Table 1. Compute Resources Managed by Quota
Resource Name | Description

cpu

The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably.

memory

The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably.

ephemeral-storage

The sum of local ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

requests.cpu

The sum of CPU requests across all pods in a non-terminal state cannot exceed this value. cpu and requests.cpu are the same value and can be used interchangeably.

requests.memory

The sum of memory requests across all pods in a non-terminal state cannot exceed this value. memory and requests.memory are the same value and can be used interchangeably.

requests.ephemeral-storage

The sum of ephemeral storage requests across all pods in a non-terminal state cannot exceed this value. ephemeral-storage and requests.ephemeral-storage are the same value and can be used interchangeably. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

limits.cpu

The sum of CPU limits across all pods in a non-terminal state cannot exceed this value.

limits.memory

The sum of memory limits across all pods in a non-terminal state cannot exceed this value.

limits.ephemeral-storage

The sum of ephemeral storage limits across all pods in a non-terminal state cannot exceed this value. This resource is available only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

Table 2. Storage Resources Managed by Quota
Resource Name | Description

requests.storage

The sum of storage requests across all persistent volume claims in any state cannot exceed this value.

persistentvolumeclaims

The total number of persistent volume claims that can exist in the project.

<storage-class-name>.storageclass.storage.k8s.io/requests.storage

The sum of storage requests across all persistent volume claims in any state that have a matching storage class cannot exceed this value.

<storage-class-name>.storageclass.storage.k8s.io/persistentvolumeclaims

The total number of persistent volume claims with a matching storage class that can exist in the project.

Table 3. Object Counts Managed by Quota
Resource Name | Description

pods

The total number of pods in a non-terminal state that can exist in the project.

replicationcontrollers

The total number of replication controllers that can exist in the project.

resourcequotas

The total number of resource quotas that can exist in the project.

services

The total number of services that can exist in the project.

secrets

The total number of secrets that can exist in the project.

configmaps

The total number of ConfigMap objects that can exist in the project.

persistentvolumeclaims

The total number of persistent volume claims that can exist in the project.

openshift.io/imagestreams

The total number of image streams that can exist in the project.

You can configure an object count quota for these standard namespaced resource types using the count/<resource>.<group> syntax while creating a quota.

    $ oc create quota <name> --hard=count/<resource>.<group>=<quota> (1)

(1) <resource> is the name of the resource, and <group> is the API group, if applicable. Use the kubectl api-resources command for a list of resources and their associated API groups.

Setting Resource Quota for Extended Resources

Overcommitment of resources is not allowed for extended resources, so you must specify requests and limits for the same extended resource in a quota. Currently, only quota items with the prefix requests. are allowed for extended resources. The following is an example scenario of how to set resource quota for the GPU resource nvidia.com/gpu.

Procedure

  1. Determine how many GPUs are available on a node in your cluster. For example:

    # oc describe node ip-172-31-27-209.us-west-2.compute.internal | egrep 'Capacity|Allocatable|gpu'
                        openshift.com/gpu-accelerator=true
    Capacity:
     nvidia.com/gpu:  2
    Allocatable:
     nvidia.com/gpu:  2
      nvidia.com/gpu  0           0

    In this example, 2 GPUs are available.

  2. Set a quota in the namespace nvidia. In this example, the quota is 1:

    # cat gpu-quota.yaml
    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: gpu-quota
      namespace: nvidia
    spec:
      hard:
        requests.nvidia.com/gpu: 1
  3. Create the quota:

    # oc create -f gpu-quota.yaml
    resourcequota/gpu-quota created
  4. Verify that the namespace has the correct quota set:

    # oc describe quota gpu-quota -n nvidia
    Name:                    gpu-quota
    Namespace:               nvidia
    Resource                 Used  Hard
    --------                 ----  ----
    requests.nvidia.com/gpu  0     1
  5. Run a pod that asks for a single GPU:

    # oc create -f gpu-pod.yaml

    apiVersion: v1
    kind: Pod
    metadata:
      generateName: gpu-pod-
      namespace: nvidia
    spec:
      restartPolicy: OnFailure
      containers:
      - name: rhel7-gpu-pod
        image: rhel7
        env:
          - name: NVIDIA_VISIBLE_DEVICES
            value: all
          - name: NVIDIA_DRIVER_CAPABILITIES
            value: "compute,utility"
          - name: NVIDIA_REQUIRE_CUDA
            value: "cuda>=5.0"
        command: ["sleep"]
        args: ["infinity"]
        resources:
          limits:
            nvidia.com/gpu: 1
  6. Verify that the pod is running:

    # oc get pods
    NAME            READY     STATUS    RESTARTS   AGE
    gpu-pod-s46h7   1/1       Running   0          1m
  7. Verify that the quota Used counter is correct:

    # oc describe quota gpu-quota -n nvidia
    Name:                    gpu-quota
    Namespace:               nvidia
    Resource                 Used  Hard
    --------                 ----  ----
    requests.nvidia.com/gpu  1     1
  8. Attempt to create a second GPU pod in the nvidia namespace. Creating one is technically possible on the node, because it has 2 GPUs:

    # oc create -f gpu-pod.yaml
    Error from server (Forbidden): error when creating "gpu-pod.yaml": pods "gpu-pod-f7z2w" is forbidden: exceeded quota: gpu-quota, requested: requests.nvidia.com/gpu=1, used: requests.nvidia.com/gpu=1, limited: requests.nvidia.com/gpu=1

    This Forbidden error message is expected because you have a quota of 1 GPU and this pod tried to allocate a second GPU, which exceeds its quota.

Quota Scopes

Each quota can have an associated set of scopes. A quota will only measure usage for a resource if it matches the intersection of enumerated scopes.

Adding a scope to a quota restricts the set of resources to which that quota can apply. Specifying a resource outside of the allowed set results in a validation error.

Scope | Description

Terminating

Match pods where spec.activeDeadlineSeconds >= 0.

NotTerminating

Match pods where spec.activeDeadlineSeconds is nil.

BestEffort

Match pods that have best effort quality of service for either cpu or memory. See the Quality of Service Classes for more on committing compute resources.

NotBestEffort

Match pods that do not have best effort quality of service for cpu and memory.

The BestEffort scope restricts a quota to limiting the following resources:

  • **pods**

The Terminating, NotTerminating, and NotBestEffort scopes restrict a quota to tracking the following resources:

  • **pods**

  • **memory**

  • **requests.memory**

  • **limits.memory**

  • **cpu**

  • **requests.cpu**

  • **limits.cpu**

  • **ephemeral-storage**

  • **requests.ephemeral-storage**

  • **limits.ephemeral-storage**

Ephemeral storage requests and limits apply only if you enabled the ephemeral storage technology preview. This feature is disabled by default.

Quota Enforcement

After a resource quota for a project is first created, the project restricts the ability to create any new resources that may violate a quota constraint until it has calculated updated usage statistics.

After a quota is created and usage statistics are updated, the project accepts the creation of new content. When you create or modify resources, your quota usage is incremented immediately upon the request to create or modify the resource.

When you delete a resource, your quota use is decremented during the next full recalculation of quota statistics for the project. A configurable amount of time determines how long it takes to reduce quota usage statistics to their current observed system value.

If project modifications exceed a quota usage limit, the server denies the action, and an appropriate error message is returned to the user explaining the quota constraint violated, and what their currently observed usage stats are in the system.

Requests Versus Limits

When allocating compute resources, each container may specify a request and a limit for CPU, memory, and ephemeral storage. Quotas can restrict any of these values.

If the quota has a value specified for **requests.cpu** or **requests.memory**, then it requires that every incoming container make an explicit request for those resources. If the quota has a value specified for **limits.cpu** or **limits.memory**, then it requires that every incoming container specify an explicit limit for those resources.
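As an illustration, the following hypothetical pod spec (the name and image are placeholders) sets explicit requests and limits for each container, so it would be admitted under a quota that constrains any combination of requests.cpu, requests.memory, limits.cpu, or limits.memory:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: quota-aware-pod               # illustrative name
spec:
  containers:
  - name: app
    image: registry.example.com/app   # illustrative image
    resources:
      requests:
        cpu: 100m       # explicit request, required if the quota sets requests.cpu
        memory: 128Mi   # explicit request, required if the quota sets requests.memory
      limits:
        cpu: 200m       # explicit limit, required if the quota sets limits.cpu
        memory: 256Mi   # explicit limit, required if the quota sets limits.memory
```

A container that omits one of these values, in a project whose quota constrains it, is rejected at admission time.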

Sample Resource Quota Definitions

core-object-counts.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: core-object-counts
    spec:
      hard:
        configmaps: "10" (1)
        persistentvolumeclaims: "4" (2)
        replicationcontrollers: "20" (3)
        secrets: "10" (4)
        services: "10" (5)

(1) The total number of ConfigMap objects that can exist in the project.
(2) The total number of persistent volume claims (PVCs) that can exist in the project.
(3) The total number of replication controllers that can exist in the project.
(4) The total number of secrets that can exist in the project.
(5) The total number of services that can exist in the project.

openshift-object-counts.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: openshift-object-counts
    spec:
      hard:
        openshift.io/imagestreams: "10" (1)

(1) The total number of image streams that can exist in the project.

compute-resources.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-resources
    spec:
      hard:
        pods: "4" (1)
        requests.cpu: "1" (2)
        requests.memory: 1Gi (3)
        requests.ephemeral-storage: 2Gi (4)
        limits.cpu: "2" (5)
        limits.memory: 2Gi (6)
        limits.ephemeral-storage: 4Gi (7)

(1) The total number of pods in a non-terminal state that can exist in the project.
(2) Across all pods in a non-terminal state, the sum of CPU requests cannot exceed 1 core.
(3) Across all pods in a non-terminal state, the sum of memory requests cannot exceed 1Gi.
(4) Across all pods in a non-terminal state, the sum of ephemeral storage requests cannot exceed 2Gi.
(5) Across all pods in a non-terminal state, the sum of CPU limits cannot exceed 2 cores.
(6) Across all pods in a non-terminal state, the sum of memory limits cannot exceed 2Gi.
(7) Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed 4Gi.

besteffort.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: besteffort
    spec:
      hard:
        pods: "1" (1)
      scopes:
      - BestEffort (2)

(1) The total number of pods in a non-terminal state with BestEffort quality of service that can exist in the project.
(2) Restricts the quota to only matching pods that have BestEffort quality of service for either memory or CPU.

compute-resources-long-running.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-resources-long-running
    spec:
      hard:
        pods: "4" (1)
        limits.cpu: "4" (2)
        limits.memory: "2Gi" (3)
        limits.ephemeral-storage: "4Gi" (4)
      scopes:
      - NotTerminating (5)

(1) The total number of pods in a non-terminal state.
(2) Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
(3) Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
(4) Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value.
(5) Restricts the quota to only matching pods where spec.activeDeadlineSeconds is set to nil. Build pods fall under NotTerminating unless the RestartNever policy is applied.

compute-resources-time-bound.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-resources-time-bound
    spec:
      hard:
        pods: "2" (1)
        limits.cpu: "1" (2)
        limits.memory: "1Gi" (3)
        limits.ephemeral-storage: "1Gi" (4)
      scopes:
      - Terminating (5)

(1) The total number of pods in a non-terminal state.
(2) Across all pods in a non-terminal state, the sum of CPU limits cannot exceed this value.
(3) Across all pods in a non-terminal state, the sum of memory limits cannot exceed this value.
(4) Across all pods in a non-terminal state, the sum of ephemeral storage limits cannot exceed this value.
(5) Restricts the quota to only matching pods where spec.activeDeadlineSeconds >= 0. For example, this quota charges for build or deployer pods, but not long-running pods like a web server or database.

storage-consumption.yaml

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: storage-consumption
    spec:
      hard:
        persistentvolumeclaims: "10" (1)
        requests.storage: "50Gi" (2)
        gold.storageclass.storage.k8s.io/requests.storage: "10Gi" (3)
        silver.storageclass.storage.k8s.io/requests.storage: "20Gi" (4)
        silver.storageclass.storage.k8s.io/persistentvolumeclaims: "5" (5)
        bronze.storageclass.storage.k8s.io/requests.storage: "0" (6)
        bronze.storageclass.storage.k8s.io/persistentvolumeclaims: "0" (7)

(1) The total number of persistent volume claims in a project.
(2) Across all persistent volume claims in a project, the sum of storage requested cannot exceed this value.
(3) Across all persistent volume claims in a project, the sum of storage requested in the gold storage class cannot exceed this value.
(4) Across all persistent volume claims in a project, the sum of storage requested in the silver storage class cannot exceed this value.
(5) Across all persistent volume claims in a project, the total number of claims in the silver storage class cannot exceed this value.
(6) Across all persistent volume claims in a project, the sum of storage requested in the bronze storage class cannot exceed this value. When this is set to 0, the bronze storage class cannot request storage.
(7) Across all persistent volume claims in a project, the total number of claims in the bronze storage class cannot exceed this value. When this is set to 0, the bronze storage class cannot create claims.

Creating a Quota

To create a quota, first define the quota in a file, such as the examples in Sample Resource Quota Definitions. Then, use that file to create the quota and apply it to a project:

    $ oc create -f <resource_quota_definition> [-n <project_name>]

For example:

    $ oc create -f core-object-counts.yaml -n demoproject

Creating Object Count Quotas

You can create an object count quota for all OKD standard namespaced resource types, such as BuildConfig and DeploymentConfig. An object count quota places a defined quota on all standard namespaced resource types.

When using a resource quota, an object is charged against the quota if it exists in server storage. These types of quotas are useful to protect against exhaustion of storage resources.

To configure an object count quota for a resource, run the following command:

    $ oc create quota <name> --hard=count/<resource>.<group>=<quota>,count/<resource>.<group>=<quota>

For example:

    $ oc create quota test --hard=count/deployments.extensions=2,count/replicasets.extensions=4,count/pods=3,count/secrets=4
    resourcequota "test" created

    $ oc describe quota test
    Name:                         test
    Namespace:                    quota
    Resource                      Used  Hard
    --------                      ----  ----
    count/deployments.extensions  0     2
    count/pods                    0     3
    count/replicasets.extensions  0     4
    count/secrets                 0     4

This example limits the listed resources to the hard limit in the project where the quota is created.
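The same object count limits can also be expressed declaratively in a ResourceQuota definition. This is an illustrative equivalent of the oc create quota command above, not an additional required step:

```yaml
apiVersion: v1
kind: ResourceQuota
metadata:
  name: test
spec:
  hard:
    # count/<resource>.<group> keys limit stored object counts by type
    count/deployments.extensions: "2"
    count/replicasets.extensions: "4"
    count/pods: "3"
    count/secrets: "4"
```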

Viewing a Quota

You can view usage statistics related to any hard limits defined in a project’s quota by navigating in the web console to the project’s Quota page.

You can also use the CLI to view quota details:

  1. First, get the list of quotas defined in the project. For example, for a project called demoproject:

    $ oc get quota -n demoproject
    NAME                 AGE
    besteffort           11m
    compute-resources    2m
    core-object-counts   29m
  2. Then, describe the quota you are interested in, for example the core-object-counts quota:

    $ oc describe quota core-object-counts -n demoproject
    Name:                   core-object-counts
    Namespace:              demoproject
    Resource                Used  Hard
    --------                ----  ----
    configmaps              3     10
    persistentvolumeclaims  0     4
    replicationcontrollers  3     20
    secrets                 9     10
    services                2     10

Configuring Quota Synchronization Period

When a set of resources is deleted, the synchronization time frame of resources is determined by the **resource-quota-sync-period** setting in the /etc/origin/master/master-config.yaml file.

Before quota usage is restored, a user may encounter problems when attempting to reuse the resources. You can change the **resource-quota-sync-period** setting so that quota usage is recalculated after the desired amount of time (in seconds) and the resources become available again:

    kubernetesMasterConfig:
      apiLevels:
      - v1beta3
      - v1
      apiServerArguments: null
      controllerArguments:
        resource-quota-sync-period:
        - "10s"

After making any changes, restart the master services to apply them.

    # master-restart api
    # master-restart controllers

Adjusting the regeneration time can be helpful for creating resources and determining resource usage when automation is used.

The resource-quota-sync-period setting balances responsiveness against system performance: reducing the sync period can place a heavy load on the master.

Accounting for Quota in Deployment Configurations

If a quota has been defined for your project, see Deployment Resources for considerations on any deployment configurations.

Require Explicit Quota to Consume a Resource

If a resource is not managed by quota, a user has no restriction on the amount of resource that can be consumed. For example, if there is no quota on storage related to the gold storage class, the amount of gold storage a project can create is unbounded.

For high-cost compute or storage resources, administrators may want to require an explicit quota be granted in order to consume a resource. For example, if a project was not explicitly given quota for storage related to the gold storage class, users of that project would not be able to create any storage of that type.

To require explicit quota to consume a particular resource, add the following stanza to the master-config.yaml file:

    admissionConfig:
      pluginConfig:
        ResourceQuota:
          configuration:
            apiVersion: resourcequota.admission.k8s.io/v1alpha1
            kind: Configuration
            limitedResources:
            - resource: persistentvolumeclaims (1)
              matchContains:
              - gold.storageclass.storage.k8s.io/requests.storage (2)

(1) The group/resource whose consumption is limited by default.
(2) The name of the resource tracked by quota associated with the group/resource to limit by default.

In the above example, the quota system will intercept every operation that creates or updates a PersistentVolumeClaim. It checks what resources understood by quota would be consumed, and if there is no covering quota for those resources in the project, the request is denied. In this example, if a user creates a PersistentVolumeClaim that uses storage associated with the gold storage class, and there is no matching quota in the project, the request is denied.
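For example, with the limitedResources stanza above in place, an illustrative claim like the following (the name and size are placeholders) is denied at admission time unless the project has a quota covering gold.storageclass.storage.k8s.io/requests.storage:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gold-claim            # illustrative name
spec:
  storageClassName: gold      # matches the limited gold storage class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi            # charged against gold...requests.storage, if quota exists
```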

Known Issues

  • Invalid objects can cause quota resources for a project to become exhausted. Quota is incremented in admission prior to validation of the resource. As a result, quota can be incremented even if the pod is not ultimately persisted. This will be resolved in a future release. (BZ1485375)