Operator Lifecycle Manager concepts and resources

This guide provides an overview of the concepts that drive Operator Lifecycle Manager (OLM) in OKD.

What is Operator Lifecycle Manager?

Operator Lifecycle Manager (OLM) helps users install, update, and manage the lifecycle of Kubernetes native applications (Operators) and their associated services running across their OKD clusters. It is part of the Operator Framework, an open source toolkit designed to manage Operators in an effective, automated, and scalable way.

Figure 1. Operator Lifecycle Manager workflow

OLM runs by default in OKD 4, which aids cluster administrators in installing, upgrading, and granting access to Operators running on their cluster. The OKD web console provides management screens for cluster administrators to install Operators, as well as grant specific projects access to use the catalog of Operators available on the cluster.

For developers, a self-service experience allows provisioning and configuring instances of databases, monitoring, and big data services without having to be subject matter experts, because the Operator has that knowledge baked into it.

OLM resources

The following custom resource definitions (CRDs) are defined and managed by Operator Lifecycle Manager (OLM):

Table 1. CRDs managed by OLM and Catalog Operators

Resource                    | Short name | Description
----------------------------|------------|------------
ClusterServiceVersion (CSV) | csv        | Application metadata. For example: name, version, icon, required resources.
CatalogSource               | catsrc     | A repository of CSVs, CRDs, and packages that define an application.
Subscription                | sub        | Keeps CSVs up to date by tracking a channel in a package.
InstallPlan                 | ip         | Calculated list of resources to be created to automatically install or upgrade a CSV.
OperatorGroup               | og         | Configures all Operators deployed in the same namespace as the OperatorGroup object to watch for their custom resource (CR) in a list of namespaces or cluster-wide.
OperatorCondition           | -          | Creates a communication channel between OLM and an Operator it manages. Operators can write to the Spec.Conditions array to communicate complex states to OLM.

Cluster service version

A cluster service version (CSV) represents a specific version of a running Operator on an OKD cluster. It is a YAML manifest created from Operator metadata that assists Operator Lifecycle Manager (OLM) in running the Operator in the cluster.

OLM requires this metadata about an Operator to ensure that it can be kept running safely on a cluster, and to provide information about how updates should be applied as new versions of the Operator are published. This is similar to packaging software for a traditional operating system; think of the packaging step for OLM as the stage at which you make your rpm, deb, or apk bundle.

A CSV includes the metadata that accompanies an Operator container image, used to populate user interfaces with information such as its name, version, description, labels, repository link, and logo.

A CSV is also a source of technical information required to run the Operator, such as which custom resources (CRs) it manages or depends on, RBAC rules, cluster requirements, and install strategies. This information tells OLM how to create required resources and set up the Operator as a deployment.
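
As an illustration of the shape of this metadata, the following is a minimal sketch of a CSV; the Operator name, owned custom resource, and RBAC rules are hypothetical, and a real CSV carries considerably more detail:

  apiVersion: operators.coreos.com/v1alpha1
  kind: ClusterServiceVersion
  metadata:
    name: example-operator.v1.0.1
    namespace: operators
  spec:
    displayName: Example Operator
    version: 1.0.1
    customresourcedefinitions:
      owned:
        - name: examples.app.example.com   # CR type this Operator manages
          kind: Example
          version: v1alpha1
          displayName: Example
    install:
      strategy: deployment
      spec:
        permissions:                       # namespace-scoped RBAC rules the Operator needs
          - serviceAccountName: example-operator
            rules:
              - apiGroups: [""]
                resources: ["pods", "services"]
                verbs: ["get", "list", "watch", "create", "update"]
        deployments:                       # deployment OLM creates to run the Operator
          - name: example-operator
            spec:
              replicas: 1
              selector:
                matchLabels:
                  name: example-operator
              template:
                metadata:
                  labels:
                    name: example-operator
                spec:
                  serviceAccountName: example-operator
                  containers:
                    - name: example-operator
                      image: quay.io/example-org/example-operator:v1.0.1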

Catalog source

A catalog source represents a store of metadata, typically by referencing an index image stored in a container registry. Operator Lifecycle Manager (OLM) queries catalog sources to discover and install Operators and their dependencies. OperatorHub in the OKD web console also displays the Operators provided by catalog sources.

Cluster administrators can view the full list of Operators provided by an enabled catalog source on a cluster by using the Administration → Cluster Settings → Configuration → OperatorHub page in the web console.

The spec of a CatalogSource object indicates how to construct a pod or how to communicate with a service that serves the Operator Registry gRPC API.

Example CatalogSource object

  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    generation: 1
    name: example-catalog (1)
    namespace: olm (2)
    annotations:
      olm.catalogImageTemplate: (3)
        "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
  spec:
    displayName: Example Catalog (4)
    image: quay.io/example-org/example-catalog:v1 (5)
    priority: -400 (6)
    publisher: Example Org
    sourceType: grpc (7)
    grpcPodConfig:
      securityContextConfig: <security_mode> (8)
      nodeSelector: (9)
        custom_label: <label>
      priorityClassName: system-cluster-critical (10)
      tolerations: (11)
        - key: "key1"
          operator: "Equal"
          value: "value1"
          effect: "NoSchedule"
    updateStrategy:
      registryPoll: (12)
        interval: 30m0s
  status:
    connectionState:
      address: example-catalog.olm.svc:50051
      lastConnect: 2021-08-26T18:14:31Z
      lastObservedState: READY (13)
    latestImageRegistryPoll: 2021-08-26T18:46:25Z (14)
    registryService: (15)
      createdAt: 2021-08-26T16:16:37Z
      port: 50051
      protocol: grpc
      serviceName: example-catalog
      serviceNamespace: olm
(1) Name for the CatalogSource object. This value is also used as part of the name for the related pod that is created in the requested namespace.
(2) Namespace to create the catalog in. To make the catalog available cluster-wide in all namespaces, set this value to olm. The default Red Hat-provided catalog sources also use the olm namespace. Otherwise, set the value to a specific namespace to make the Operator only available in that namespace.
(3) Optional: To avoid cluster upgrades potentially leaving Operator installations in an unsupported state or without a continued update path, you can enable automatically changing your Operator catalog's index image version as part of cluster upgrades.

Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag. The annotation overwrites the spec.image field at run time. See the “Image template for custom catalog sources” section for more details.

(4) Display name for the catalog in the web console and CLI.
(5) Index image for the catalog. Optionally, can be omitted when using the olm.catalogImageTemplate annotation, which sets the pull spec at run time.
(6) Weight for the catalog source. OLM uses the weight for prioritization during dependency resolution. A higher weight indicates the catalog is preferred over lower-weighted catalogs.
(7) Source types include the following:
  • grpc with an image reference: OLM pulls the image and runs the pod, which is expected to serve a compliant API.

  • grpc with an address field: OLM attempts to contact the gRPC API at the given address. This should not be used in most cases.

  • configmap: OLM parses config map data and runs a pod that can serve the gRPC API over it.

(8) Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OKD release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
(9) Optional: For grpc type catalog sources, overrides the default node selector for the pod serving the content in spec.image, if defined.
(10) Optional: For grpc type catalog sources, overrides the default priority class name for the pod serving the content in spec.image, if defined. Kubernetes provides system-cluster-critical and system-node-critical priority classes by default. Setting the field to empty ("") assigns the pod the default priority. Other priority classes can be defined manually.
(11) Optional: For grpc type catalog sources, overrides the default tolerations for the pod serving the content in spec.image, if defined.
(12) Automatically check for new versions at a given interval to stay up to date.
(13) Last observed state of the catalog connection. For example:
  • READY: A connection is successfully established.

  • CONNECTING: A connection attempt is in progress.

  • TRANSIENT_FAILURE: A temporary problem has occurred while attempting to establish a connection, such as a timeout. The state will eventually switch back to CONNECTING and try again.

See States of Connectivity in the gRPC documentation for more details.

(14) Latest time the container registry storing the catalog image was polled to ensure the image is up to date.
(15) Status information for the catalog's Operator Registry service.

Referencing the name of a CatalogSource object in a subscription instructs OLM where to search to find a requested Operator:

Example Subscription object referencing a catalog source

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: example-operator
    namespace: example-namespace
  spec:
    channel: stable
    name: example-operator
    source: example-catalog
    sourceNamespace: olm

Image template for custom catalog sources

Operator compatibility with the underlying cluster can be expressed by a catalog source in various ways. One way, which is used for the default Red Hat-provided catalog sources, is to identify image tags for index images that are specifically created for a particular platform release, for example OKD 4.

During a cluster upgrade, the index image tags for the default Red Hat-provided catalog sources are updated automatically by the Cluster Version Operator (CVO) so that Operator Lifecycle Manager (OLM) pulls the updated version of the catalog. For example, during an upgrade from OKD 4.14 to 4.15, the spec.image field in the CatalogSource object for the redhat-operators catalog is updated from:

  registry.redhat.io/redhat/redhat-operator-index:v4.14

to:

  registry.redhat.io/redhat/redhat-operator-index:v4.15

However, the CVO does not automatically update image tags for custom catalogs. To ensure users are left with a compatible and supported Operator installation after a cluster upgrade, custom catalogs should also be kept updated to reference an updated index image.

Starting in OKD 4.9, cluster administrators can set the olm.catalogImageTemplate annotation in the CatalogSource object for custom catalogs to an image reference that includes a template. The following Kubernetes version variables are supported for use in the template:

  • kube_major_version

  • kube_minor_version

  • kube_patch_version

You must specify the Kubernetes cluster version and not an OKD cluster version, as the latter is not currently available for templating.

Provided that you have created and pushed an index image with a tag specifying the updated Kubernetes version, setting this annotation enables the index image versions in custom catalogs to be automatically changed after a cluster upgrade. The annotation value is used to set or update the image reference in the spec.image field of the CatalogSource object. This helps avoid cluster upgrades leaving Operator installations in unsupported states or without a continued update path.

You must ensure that the index image with the updated tag, in whichever registry it is stored in, is accessible by the cluster at the time of the cluster upgrade.

Example catalog source with an image template

  apiVersion: operators.coreos.com/v1alpha1
  kind: CatalogSource
  metadata:
    generation: 1
    name: example-catalog
    namespace: openshift-marketplace
    annotations:
      olm.catalogImageTemplate:
        "quay.io/example-org/example-catalog:v{kube_major_version}.{kube_minor_version}"
  spec:
    displayName: Example Catalog
    image: quay.io/example-org/example-catalog:v1.28
    priority: -400
    publisher: Example Org
If the spec.image field and the olm.catalogImageTemplate annotation are both set, the spec.image field is overwritten by the resolved value from the annotation. If the annotation does not resolve to a usable pull spec, the catalog source falls back to the set spec.image value.

If the spec.image field is not set and the annotation does not resolve to a usable pull spec, OLM stops reconciliation of the catalog source and sets it into a human-readable error condition.

For an OKD 4 cluster, which uses Kubernetes 1.28, the olm.catalogImageTemplate annotation in the preceding example resolves to the following image reference:

  quay.io/example-org/example-catalog:v1.28

For future releases of OKD, you can create updated index images for your custom catalogs that target the later Kubernetes version that is used by the later OKD version. With the olm.catalogImageTemplate annotation set before the upgrade, upgrading the cluster to the later OKD version would then automatically update the catalog’s index image as well.

Catalog health requirements

Operator catalogs on a cluster are interchangeable from the perspective of installation resolution; a Subscription object might reference a specific catalog, but dependencies are resolved using all catalogs on the cluster.

For example, if Catalog A is unhealthy, a subscription referencing Catalog A could resolve a dependency in Catalog B, which the cluster administrator might not have been expecting, because B normally had a lower catalog priority than A.

As a result, OLM requires that all catalogs within a given global namespace (for example, the default openshift-marketplace namespace or a custom global namespace) are healthy. When a catalog is unhealthy, all Operator installation or update operations within its shared global namespace fail with a CatalogSourcesUnhealthy condition. If these operations were permitted in an unhealthy state, OLM might make resolution and installation decisions that were unexpected to the cluster administrator.
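
For example, a subscription that cannot progress because a catalog in its global namespace is unhealthy surfaces this condition in its status. The following is a minimal sketch of what that might look like; the reason and message values are illustrative, not taken from a real cluster:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: example-operator
    namespace: example-namespace
  status:
    conditions:
      - type: CatalogSourcesUnhealthy        # set while a catalog in the global namespace is unhealthy
        status: "True"
        reason: UnhealthyCatalogSourceFound  # illustrative reason
        message: 'targeted catalogsource example-namespace/example-catalog unhealthy'  # illustrative message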

As a cluster administrator, if you observe an unhealthy catalog and want to consider the catalog as invalid and resume Operator installations, see the “Removing custom catalogs” or “Disabling the default OperatorHub catalog sources” sections for information about removing the unhealthy catalog.

Subscription

A subscription, defined by a Subscription object, represents an intention to install an Operator. It is the custom resource that relates an Operator to a catalog source.

Subscriptions describe which channel of an Operator package to subscribe to, and whether to perform updates automatically or manually. If set to automatic, the subscription ensures Operator Lifecycle Manager (OLM) manages and upgrades the Operator to ensure that the latest version is always running in the cluster.

Example Subscription object

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: example-operator
    namespace: example-namespace
  spec:
    channel: stable
    name: example-operator
    source: example-catalog
    sourceNamespace: olm

This Subscription object defines the name and namespace of the Operator, as well as the catalog from which the Operator data can be found. The channel, such as alpha, beta, or stable, helps determine which Operator stream should be installed from the catalog source.

The names of channels in a subscription can differ between Operators, but the naming scheme should follow a common convention within a given Operator. For example, channel names might follow a minor release update stream for the application provided by the Operator (1.2, 1.3) or a release frequency (stable, fast).

In addition to being easily visible from the OKD web console, it is possible to identify when there is a newer version of an Operator available by inspecting the status of the related subscription. The value associated with the currentCSV field is the newest version that is known to OLM, and installedCSV is the version that is installed on the cluster.
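
For example, the following sketch shows a subscription that uses automatic approval together with the status fields described above; the version numbers are hypothetical:

  apiVersion: operators.coreos.com/v1alpha1
  kind: Subscription
  metadata:
    name: example-operator
    namespace: example-namespace
  spec:
    channel: stable
    installPlanApproval: Automatic   # set to Manual to require approval of each install plan
    name: example-operator
    source: example-catalog
    sourceNamespace: olm
  status:
    currentCSV: example-operator.v1.0.2    # newest version known to OLM (hypothetical)
    installedCSV: example-operator.v1.0.1  # version currently installed (hypothetical)

When currentCSV is newer than installedCSV, an update is available for the Operator.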

Install plan

An install plan, defined by an InstallPlan object, describes a set of resources that Operator Lifecycle Manager (OLM) creates to install or upgrade to a specific version of an Operator. The version is defined by a cluster service version (CSV).

To install an Operator, a cluster administrator, or a user who has been granted Operator installation permissions, must first create a Subscription object. A subscription represents the intent to subscribe to a stream of available versions of an Operator from a catalog source. The subscription then creates an InstallPlan object to facilitate the installation of the resources for the Operator.

The install plan must then be approved according to one of the following approval strategies:

  • If the subscription’s spec.installPlanApproval field is set to Automatic, the install plan is approved automatically.

  • If the subscription’s spec.installPlanApproval field is set to Manual, the install plan must be manually approved by a cluster administrator or user with proper permissions.

After the install plan is approved, OLM creates the specified resources and installs the Operator in the namespace that is specified by the subscription.

Example InstallPlan object

  apiVersion: operators.coreos.com/v1alpha1
  kind: InstallPlan
  metadata:
    name: install-abcde
    namespace: operators
  spec:
    approval: Automatic
    approved: true
    clusterServiceVersionNames:
      - my-operator.v1.0.1
    generation: 1
  status:
    ...
    catalogSources: []
    conditions:
      - lastTransitionTime: '2021-01-01T20:17:27Z'
        lastUpdateTime: '2021-01-01T20:17:27Z'
        status: 'True'
        type: Installed
    phase: Complete
    plan:
      - resolving: my-operator.v1.0.1
        resource:
          group: operators.coreos.com
          kind: ClusterServiceVersion
          manifest: >-
            ...
          name: my-operator.v1.0.1
          sourceName: redhat-operators
          sourceNamespace: openshift-marketplace
          version: v1alpha1
        status: Created
      - resolving: my-operator.v1.0.1
        resource:
          group: apiextensions.k8s.io
          kind: CustomResourceDefinition
          manifest: >-
            ...
          name: webservers.web.servers.org
          sourceName: redhat-operators
          sourceNamespace: openshift-marketplace
          version: v1beta1
        status: Created
      - resolving: my-operator.v1.0.1
        resource:
          group: ''
          kind: ServiceAccount
          manifest: >-
            ...
          name: my-operator
          sourceName: redhat-operators
          sourceNamespace: openshift-marketplace
          version: v1
        status: Created
      - resolving: my-operator.v1.0.1
        resource:
          group: rbac.authorization.k8s.io
          kind: Role
          manifest: >-
            ...
          name: my-operator.v1.0.1-my-operator-6d7cbc6f57
          sourceName: redhat-operators
          sourceNamespace: openshift-marketplace
          version: v1
        status: Created
      - resolving: my-operator.v1.0.1
        resource:
          group: rbac.authorization.k8s.io
          kind: RoleBinding
          manifest: >-
            ...
          name: my-operator.v1.0.1-my-operator-6d7cbc6f57
          sourceName: redhat-operators
          sourceNamespace: openshift-marketplace
          version: v1
        status: Created
    ...

Operator groups

An Operator group, defined by the OperatorGroup resource, provides multitenant configuration to OLM-installed Operators. An Operator group selects target namespaces in which to generate required RBAC access for its member Operators.

The set of target namespaces is provided by a comma-delimited string stored in the olm.targetNamespaces annotation of a cluster service version (CSV). This annotation is applied to the CSV instances of member Operators and is projected into their deployments.
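
As a minimal sketch, an Operator group that targets a single namespace might look like the following; the object and namespace names are hypothetical:

  apiVersion: operators.coreos.com/v1
  kind: OperatorGroup
  metadata:
    name: example-operatorgroup
    namespace: example-namespace
  spec:
    targetNamespaces:
      - example-namespace   # namespaces in which member Operators watch for their CRs

For member Operators, OLM projects olm.targetNamespaces: example-namespace into their CSVs and deployments. If spec.targetNamespaces is omitted and no selector is set, the Operator group selects all namespaces and its member Operators are effectively cluster-wide.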

Operator conditions

As part of its role in managing the lifecycle of an Operator, Operator Lifecycle Manager (OLM) infers the state of an Operator from the state of Kubernetes resources that define the Operator. While this approach provides some level of assurance that an Operator is in a given state, there are many instances where an Operator might need to communicate information to OLM that could not be inferred otherwise. This information can then be used by OLM to better manage the lifecycle of the Operator.

OLM provides a custom resource definition (CRD) called OperatorCondition that allows Operators to communicate conditions to OLM. A set of supported conditions, when present in the Spec.Conditions array of an OperatorCondition resource, influences how OLM manages the Operator.

By default, the Spec.Conditions array is not present in an OperatorCondition object until it is added by a user or as a result of custom Operator logic.
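
For example, an Operator can report that it is temporarily not safe to upgrade. The following is a minimal sketch, assuming the v2 OperatorCondition API and the supported Upgradeable condition type; the reason and message are illustrative:

  apiVersion: operators.coreos.com/v2
  kind: OperatorCondition
  metadata:
    name: my-operator
    namespace: operators
  spec:
    conditions:
      - type: Upgradeable               # supported condition type; status "False" blocks OLM upgrades
        status: "False"
        reason: "MigrationInProgress"   # illustrative reason
        message: "The Operator is running a data migration and cannot be upgraded yet"  # illustrative message
        lastTransitionTime: "2021-08-26T18:14:31Z"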
