Installing an Operator from a catalog in OLM 1.0 (Technology Preview)

Cluster administrators can add catalogs, or curated collections of Operators and Kubernetes extensions, to their clusters. Operator authors publish their products to these catalogs. When you add a catalog to your cluster, you have access to the versions, patches, and over-the-air updates of the Operators and extensions that are published to the catalog.

In the current Technology Preview release of Operator Lifecycle Manager (OLM) 1.0, you manage catalogs and Operators declaratively from the CLI using custom resources (CRs).

OLM 1.0 is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope.

Prerequisites

  • Access to an OKD cluster using an account with cluster-admin permissions

    For OKD 4.14, documented procedures for OLM 1.0 are CLI-based only. Alternatively, administrators can create and view related objects in the web console by using normal methods, such as the Import YAML and Search pages. However, the existing OperatorHub and Installed Operators pages do not yet display OLM 1.0 components.

  • The TechPreviewNoUpgrade feature set enabled on the cluster

    Enabling the TechPreviewNoUpgrade feature set cannot be undone and prevents minor version updates. This feature set is not recommended on production clusters. For one way to enable the feature set, see the sketch after this list.

  • The OpenShift CLI (oc) installed on your workstation
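
If the TechPreviewNoUpgrade feature set is not yet enabled, one way to enable it is to patch the cluster FeatureGate resource. The following command is a minimal sketch; because the change cannot be reverted, confirm that the cluster is intended for Technology Preview use before running it:

  $ oc patch featuregate cluster --type=merge \
      -p '{"spec":{"featureSet":"TechPreviewNoUpgrade"}}'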

About catalogs in OLM 1.0

You can discover installable content by querying a catalog for Kubernetes extensions, such as Operators and controllers, by using the catalogd component. Catalogd is a Kubernetes extension that unpacks catalog content for on-cluster clients and is part of the Operator Lifecycle Manager (OLM) 1.0 suite of microservices. Currently, catalogd unpacks catalog content that is packaged and distributed as container images.
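
Before you add catalogs, you can confirm that the catalogd component is running. The following commands are a quick sketch, assuming the default openshift-catalogd namespace used throughout this document:

  $ oc get pods -n openshift-catalogd
  $ oc get catalog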

Red Hat-provided Operator catalogs in OLM 1.0

Operator Lifecycle Manager (OLM) 1.0 does not include Red Hat-provided Operator catalogs by default. If you want to add a Red Hat-provided catalog to your cluster, create a custom resource (CR) for the catalog and apply it to the cluster. The following CR examples show how to create catalog resources for OLM 1.0.

If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see “Creating a pull secret for catalogs hosted on a secure registry”.

Example Red Hat Operators catalog

  apiVersion: catalogd.operatorframework.io/v1alpha1
  kind: Catalog
  metadata:
    name: redhat-operators
  spec:
    source:
      type: image
      image:
        ref: registry.redhat.io/redhat/redhat-operator-index:v4
        pullSecret: <pull_secret_name>

Example Certified Operators catalog

  apiVersion: catalogd.operatorframework.io/v1alpha1
  kind: Catalog
  metadata:
    name: certified-operators
  spec:
    source:
      type: image
      image:
        ref: registry.redhat.io/redhat/certified-operator-index:v4
        pullSecret: <pull_secret_name>

Example Community Operators catalog

  apiVersion: catalogd.operatorframework.io/v1alpha1
  kind: Catalog
  metadata:
    name: community-operators
  spec:
    source:
      type: image
      image:
        ref: registry.redhat.io/redhat/community-operator-index:v4
        pullSecret: <pull_secret_name>

The following command adds a catalog to your cluster:

Command syntax

  $ oc apply -f <catalog_name>.yaml (1)

  (1) Specifies the catalog CR, such as redhat-operators.yaml.

Creating a pull secret for catalogs hosted on a secure registry

If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace.

Currently, catalogd cannot read global pull secrets from OKD clusters. Catalogd can read references to secrets only in the namespace where it is deployed.
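
Because catalogd cannot use the global pull secret directly, one option is to copy your existing registry.redhat.io credentials from the cluster's global pull secret into the openshift-catalogd namespace. The following commands are a sketch only; they assume the global pull secret already contains an entry for registry.redhat.io and reuse the redhat-cred secret name from the examples in this section:

  $ oc get secret/pull-secret -n openshift-config \
      --template='{{index .data ".dockerconfigjson" | base64decode}}' > authfile.json
  $ oc create secret generic redhat-cred \
      --from-file=.dockerconfigjson=authfile.json \
      --type=kubernetes.io/dockerconfigjson \
      --namespace=openshift-catalogd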

Prerequisites

  • Login credentials for the secure registry

  • Docker or Podman installed on your workstation

Procedure

  • If you already have a .dockercfg file with login credentials for the secure registry, create a pull secret by running the following command:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockercfg=<file_path>/.dockercfg \
        --type=kubernetes.io/dockercfg \
        --namespace=openshift-catalogd

    Example command

    $ oc create secret generic redhat-cred \
        --from-file=.dockercfg=/home/<username>/.dockercfg \
        --type=kubernetes.io/dockercfg \
        --namespace=openshift-catalogd

  • If you already have a $HOME/.docker/config.json file with login credentials for the secure registry, create a pull secret by running the following command:

    $ oc create secret generic <pull_secret_name> \
        --from-file=.dockerconfigjson=<file_path>/.docker/config.json \
        --type=kubernetes.io/dockerconfigjson \
        --namespace=openshift-catalogd

    Example command

    $ oc create secret generic redhat-cred \
        --from-file=.dockerconfigjson=/home/<username>/.docker/config.json \
        --type=kubernetes.io/dockerconfigjson \
        --namespace=openshift-catalogd

  • If you do not have a Docker configuration file with login credentials for the secure registry, create a pull secret by running the following command:

    $ oc create secret docker-registry <pull_secret_name> \
        --docker-server=<registry_server> \
        --docker-username=<username> \
        --docker-password=<password> \
        --docker-email=<email> \
        --namespace=openshift-catalogd

    Example command

    $ oc create secret docker-registry redhat-cred \
        --docker-server=registry.redhat.io \
        --docker-username=username \
        --docker-password=password \
        --docker-email=user@example.com \
        --namespace=openshift-catalogd
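
    After the secret is created, you can confirm that it exists in the namespace that catalogd watches. The following check is a quick sketch using the redhat-cred name from the examples above:

    $ oc get secret redhat-cred -n openshift-catalogd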

Adding a catalog to a cluster

To add a catalog to a cluster, create a catalog custom resource (CR) and apply it to the cluster.

Prerequisites

  • If you want to use a catalog that is hosted on a secure registry, such as Red Hat-provided Operator catalogs from registry.redhat.io, you must have a pull secret scoped to the openshift-catalogd namespace. For more information, see “Creating a pull secret for catalogs hosted on a secure registry”.

Procedure

  1. Create a catalog custom resource (CR), similar to the following example:

    Example redhat-operators.yaml

    apiVersion: catalogd.operatorframework.io/v1alpha1
    kind: Catalog
    metadata:
      name: redhat-operators
    spec:
      source:
        type: image
        image:
          ref: registry.redhat.io/redhat/redhat-operator-index:v4 (1)
          pullSecret: <pull_secret_name> (2)

    (1) Specify the catalog’s image in the spec.source.image field.
    (2) If your catalog is hosted on a secure registry, such as registry.redhat.io, you must create a pull secret scoped to the openshift-catalogd namespace.

  2. Add the catalog to your cluster by running the following command:

    $ oc apply -f redhat-operators.yaml

    Example output

    catalog.catalogd.operatorframework.io/redhat-operators created

Verification

  • Run the following commands to verify the status of your catalog:

    1. Check if your catalog is available by running the following command:

      $ oc get catalog

      Example output

      NAME               AGE
      redhat-operators   20s

    2. Check the status of your catalog by running the following command:

      $ oc describe catalog

      Example output

      Name:         redhat-operators
      Namespace:
      Labels:       <none>
      Annotations:  <none>
      API Version:  catalogd.operatorframework.io/v1alpha1
      Kind:         Catalog
      Metadata:
        Creation Timestamp:  2024-01-10T16:18:38Z
        Finalizers:
          catalogd.operatorframework.io/delete-server-cache
        Generation:        1
        Resource Version:  57057
        UID:               128db204-49b3-45ee-bfea-a2e6fc8e34ea
      Spec:
        Source:
          Image:
            Pull Secret:  redhat-cred
            Ref:          registry.redhat.io/redhat/redhat-operator-index:v4.15
          Type:           image
      Status: (1)
        Conditions:
          Last Transition Time:  2024-01-10T16:18:55Z
          Message:
          Reason:                UnpackSuccessful (2)
          Status:                True
          Type:                  Unpacked
        Content URL:             http://catalogd-catalogserver.openshift-catalogd.svc/catalogs/redhat-operators/all.json
        Observed Generation:     1
        Phase:                   Unpacked (3)
        Resolved Source:
          Image:
            Last Poll Attempt:  2024-01-10T16:18:51Z
            Ref:                registry.redhat.io/redhat/redhat-operator-index:v4.15
            Resolved Ref:       registry.redhat.io/redhat/redhat-operator-index@sha256:7b536ae19b8e9f74bb521c4a61e5818e036ac1865a932f2157c6c9a766b2eea5 (4)
          Type:                 image
      Events:                   <none>

      (1) Describes the status of the catalog.
      (2) Displays the reason the catalog is in the current state.
      (3) Displays the phase of the installation process.
      (4) Displays the image reference of the catalog.
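
    If you only need the catalog’s content URL, for example to query it from another client, you can read it from the catalog status directly. This is a short sketch based on the contentURL status field shown in the previous output:

      $ oc get catalog redhat-operators -o jsonpath='{.status.contentURL}'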

Finding Operators to install from a catalog

After you add a catalog to your cluster, you can query the catalog to find Operators and extensions to install. Before you can query catalogs, you must port forward the catalog server service.

Prerequisites

  • You have added a catalog to your cluster.

  • You have installed the jq CLI tool.

Procedure

  1. Port forward the catalog server service in the openshift-catalogd namespace by running the following command:

    $ oc -n openshift-catalogd port-forward svc/catalogd-catalogserver 8080:80

  2. Download the catalog’s JSON file locally by running the following command:

    $ curl -L http://localhost:8080/catalogs/<catalog_name>/all.json \
        -C - -o /<path>/<catalog_name>.json

    Example command

    $ curl -L http://localhost:8080/catalogs/redhat-operators/all.json \
        -C - -o /home/<username>/catalogs/rhoc.json

  3. Get a list of the Operators and extensions from the local catalog file by running the following command:

    $ jq -s '.[] | select(.schema == "olm.package") | .name' \
        /<path>/<filename>.json

    Example command

    $ jq -s '.[] | select(.schema == "olm.package") | .name' \
        /home/<username>/catalogs/rhoc.json

    Example output

    "3scale-operator"
    "advanced-cluster-management"
    "amq-broker-rhel8"
    "amq-online"
    "amq-streams"
    "amq7-interconnect-operator"
    "ansible-automation-platform-operator"
    "ansible-cloud-addons-operator"
    "apicast-operator"
    "aws-efs-csi-driver-operator"
    "aws-load-balancer-operator"
    "bamoe-businessautomation-operator"
    "bamoe-kogito-operator"
    "bare-metal-event-relay"
    "businessautomation-operator"
    ...

  4. Inspect the contents of an Operator or extension’s metadata by running the following command:

    $ jq -s '.[] | select( .schema == "olm.package") | select( .name == "<package_name>")' \
        <catalog_name>.json

    Example command

    $ jq -s '.[] | select( .schema == "olm.package") | select( .name == "serverless-operator")' \
        rhoc.json

    Example output

    {
      "defaultChannel": "stable",
      "icon": {
        "base64data": "PHN2ZyB4bWxu...",
        "mediatype": "image/svg+xml"
      },
      "name": "serverless-operator",
      "schema": "olm.package"
    }
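
    You can also query the catalog without saving it to a file by piping the port-forwarded service output straight into jq. The following one-liner is a sketch that counts the packages in the redhat-operators catalog, assuming the port forward from step 1 is still running:

    $ curl -sL http://localhost:8080/catalogs/redhat-operators/all.json | \
        jq -s '[ .[] | select(.schema == "olm.package") ] | length'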

Common catalog queries

You can query catalogs by using the jq CLI tool.

Table 1. Common package queries

Query: Available packages in a catalog

  $ jq -s '.[] | select( .schema == "olm.package") | .name' <catalog_name>.json

Query: Package metadata

  $ jq -s '.[] | select( .schema == "olm.package") | select( .name == "<package_name>")' <catalog_name>.json

Query: Catalog blobs in a package

  $ jq -s '.[] | select( .package == "<package_name>")' <catalog_name>.json

Table 2. Common channel queries

Query: Channels in a package

  $ jq -s '.[] | select( .schema == "olm.channel") | select( .package == "<package_name>") | .name' <catalog_name>.json

Query: Latest version in a channel or upgrade path

  $ jq -s '.[] | select( .schema == "olm.channel") | select( .name == "<channel>") | select( .package == "<package_name>")' <catalog_name>.json

Table 3. Common bundle queries

Query: Bundles in a package

  $ jq -s '.[] | select( .schema == "olm.bundle") | select( .package == "<package_name>") | .name' <catalog_name>.json

Query: Bundle dependencies or available APIs

  $ jq -s '.[] | select( .schema == "olm.bundle") | select( .name == "<bundle_name>") | select( .package == "<package_name>")' <catalog_name>.json
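
For example, to list every bundle entry in a specific channel, you can combine the channel query with the entries field of the olm.channel schema. This is a sketch that assumes the quay-operator package and the rhoc.json file downloaded in the previous procedure:

  $ jq -s '.[] | select( .schema == "olm.channel") | select( .package == "quay-operator") | {channel: .name, entries: [.entries[].name]}' rhoc.json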

About target versions in OLM 1.0

In Operator Lifecycle Manager (OLM) 1.0, cluster administrators set the target version of an Operator declaratively in the Operator’s custom resource (CR).

If you specify a channel in the Operator’s CR, OLM 1.0 installs the latest release from the specified channel. When updates are published to the specified channel, OLM 1.0 automatically updates to the latest release from the channel.

Example CR with a specified channel

  apiVersion: operators.operatorframework.io/v1alpha1
  kind: Operator
  metadata:
    name: quay-example
  spec:
    packageName: quay-operator
    channel: stable-3.8 (1)

  (1) Installs the latest release published to the specified channel. Updates to the channel are automatically installed.

If you specify the Operator’s target version in the CR, OLM 1.0 installs the specified version. When the target version is specified in the Operator’s CR, OLM 1.0 does not change the target version when updates are published to the catalog.

If you want to update the version of the Operator that is installed on the cluster, you must manually update the Operator’s CR. Specifying an Operator’s target version pins the Operator’s version to the specified release.

Example CR with the target version specified

  apiVersion: operators.operatorframework.io/v1alpha1
  kind: Operator
  metadata:
    name: quay-example
  spec:
    packageName: quay-operator
    version: 3.8.12 (1)

  (1) Specifies the target version. If you want to update the version of the Operator that is installed on the cluster, you must manually update this field in the Operator’s CR to the desired target version.

If you want to change the installed version of an Operator, edit the Operator’s CR to the desired target version.

In previous versions of OLM, Operator authors could define upgrade edges to prevent you from updating to unsupported versions. In its current state of development, OLM 1.0 does not enforce upgrade edge definitions. You can specify any version of an Operator, and OLM 1.0 attempts to apply the update.
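
Because upgrade edges are not validated, it is a good idea to confirm that a target version actually exists in the catalog before pinning it. The following query is a sketch that reuses the bundle query from the tables above, assuming the rhoc.json catalog file and the quay-operator package:

  $ jq -s '.[] | select( .schema == "olm.bundle") | select( .package == "quay-operator") | .name' rhoc.json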

You can inspect an Operator’s catalog contents, including available versions and channels, by running the following command:

Command syntax

  $ oc get package <catalog_name>-<package_name> -o yaml

After you create or update a CR, create or configure the Operator by running the following command:

Command syntax

  $ oc apply -f <extension_name>.yaml
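
For example, if the quay-example CR shown above is saved as quay-example.yaml (the file name is an assumption for illustration), the command would be the following:

Example command

  $ oc apply -f quay-example.yaml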

Troubleshooting

  • If you specify a target version or channel that does not exist, you can run the following command to check the status of your Operator:

    $ oc get operator.operators.operatorframework.io <operator_name> -o yaml

    Example output

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"999.99.9"}}
      creationTimestamp: "2023-10-19T18:39:37Z"
      generation: 3
      name: quay-example
      resourceVersion: "51505"
      uid: 2558623b-8689-421c-8ed5-7b14234af166
    spec:
      packageName: quay-operator
      version: 999.99.9
    status:
      conditions:
      - lastTransitionTime: "2023-10-19T18:50:34Z"
        message: package 'quay-operator' at version '999.99.9' not found
        observedGeneration: 3
        reason: ResolutionFailed
        status: "False"
        type: Resolved
      - lastTransitionTime: "2023-10-19T18:50:34Z"
        message: installation has not been attempted as resolution failed
        observedGeneration: 3
        reason: InstallationStatusUnknown
        status: Unknown
        type: Installed
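
    To check only the resolution result instead of reading the full object, you can extract the Resolved condition message with a JSONPath query. This is a sketch, assuming the quay-example Operator name used in this document:

    $ oc get operator.operators.operatorframework.io quay-example \
        -o jsonpath='{.status.conditions[?(@.type=="Resolved")].message}'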

Updating an Operator

You can update your Operator by manually editing your Operator’s custom resource (CR) and applying the changes.

Prerequisites

  • You have a catalog installed.

  • You have an Operator installed.

Procedure

  1. Inspect your Operator’s package contents to find which channels and versions are available for updating by running the following command:

    $ oc get package <catalog_name>-<package_name> -o yaml

    Example command

    $ oc get package redhat-operators-quay-operator -o yaml

  2. Edit your Operator’s CR to update the version to 3.9.1, as shown in the following example:

    Example test-operator.yaml CR

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      name: quay-example
    spec:
      packageName: quay-operator
      version: 3.9.1 (1)

    (1) Update the version to 3.9.1.

  3. Apply the update to the cluster by running the following command:

    $ oc apply -f test-operator.yaml

    Example output

    operator.operators.operatorframework.io/quay-example configured

    Alternatively, you can patch and apply the change to your Operator's version from the CLI by running the following command:

    $ oc patch operator.operators.operatorframework.io/quay-example -p \
        '{"spec":{"version":"3.9.1"}}' \
        --type=merge

    Example output

    operator.operators.operatorframework.io/quay-example patched

Verification

  • Verify that the channel and version updates have been applied by running the following command:

    $ oc get operator.operators.operatorframework.io/quay-example -o yaml

    Example output

    apiVersion: operators.operatorframework.io/v1alpha1
    kind: Operator
    metadata:
      annotations:
        kubectl.kubernetes.io/last-applied-configuration: |
          {"apiVersion":"operators.operatorframework.io/v1alpha1","kind":"Operator","metadata":{"annotations":{},"name":"quay-example"},"spec":{"packageName":"quay-operator","version":"3.9.1"}}
      creationTimestamp: "2023-10-19T18:39:37Z"
      generation: 2
      name: quay-example
      resourceVersion: "47423"
      uid: 2558623b-8689-421c-8ed5-7b14234af166
    spec:
      packageName: quay-operator
      version: 3.9.1 (1)
    status:
      conditions:
      - lastTransitionTime: "2023-10-19T18:39:37Z"
        message: resolved to "registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09"
        observedGeneration: 2
        reason: Success
        status: "True"
        type: Resolved
      - lastTransitionTime: "2023-10-19T18:39:46Z"
        message: installed from "registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09"
        observedGeneration: 2
        reason: Success
        status: "True"
        type: Installed
      installedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09
      resolvedBundleResource: registry.redhat.io/quay/quay-operator-bundle@sha256:4864bc0d5c18a84a5f19e5e664b58d3133a2ac2a309c6b5659ab553f33214b09

    (1) Verify that the version is updated to 3.9.1.

Deleting an Operator

You can delete an Operator and its custom resource definitions (CRDs) by deleting the Operator’s custom resource (CR).

Prerequisites

  • You have a catalog installed.

  • You have an Operator installed.

Procedure

  • Delete an Operator and its CRDs by running the following command:

    $ oc delete operator.operators.operatorframework.io quay-example

    Example output

    operator.operators.operatorframework.io "quay-example" deleted

Verification

  • Run the following commands to verify that your Operator and its resources were deleted:

    • Verify that the Operator is deleted by running the following command:

      $ oc get operator.operators.operatorframework.io

      Example output

      No resources found

    • Verify that the Operator’s system namespace is deleted by running the following command:

      $ oc get ns quay-operator-system

      Example output

      Error from server (NotFound): namespaces "quay-operator-system" not found
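
    • Optionally, confirm that the Operator's CRDs were removed as well. The following check is a generic sketch that filters the cluster's CRDs by name and assumes the Quay Operator's CRD names contain the string quay:

      $ oc get crds | grep -i quay

      If the deletion succeeded, the command returns no output.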