Managing custom catalogs

Cluster administrators and Operator catalog maintainers can create and manage custom catalogs packaged using the bundle format on Operator Lifecycle Manager (OLM) in OKD.

Kubernetes periodically deprecates certain APIs that are removed in subsequent releases. As a result, Operators are unable to use removed APIs starting with the version of OKD that uses the Kubernetes version that removed the API.

If your cluster is using custom catalogs, see Controlling Operator compatibility with OKD versions for more details about how Operator authors can update their projects to help avoid workload issues and prevent incompatible upgrades.

File-based catalogs

File-based catalogs are the latest iteration of the catalog format in Operator Lifecycle Manager (OLM). The format is a plain-text (JSON or YAML), declarative config evolution of the earlier SQLite database format, and it is fully backwards compatible.

As of OKD 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. The default Red Hat-provided Operator catalogs for OKD 4.6 through 4.10 were released in the deprecated SQLite database format.

The opm subcommands, flags, and functionality related to the SQLite database format are also deprecated and will be removed in a future release. These features are still supported and must be used for catalogs that use the deprecated SQLite database format.

Many of the opm subcommands and flags for working with the SQLite database format, such as opm index prune, do not work with the file-based catalog format. For more information about working with file-based catalogs, see Operator Framework packaging format and Mirroring images for a disconnected installation using the oc-mirror plugin.

Creating a file-based catalog image

You can create a catalog image that uses the plain text file-based catalog format (JSON or YAML), which replaces the deprecated SQLite database format. The opm CLI provides tooling to initialize a catalog in the file-based format, render new records into it, and validate the catalog.

Prerequisites

  • opm

  • podman version 1.9.3+

  • A bundle image built and pushed to a registry that supports Docker v2-2

Procedure

  1. Initialize a catalog for a file-based catalog:

    1. Create a directory for the catalog:

      $ mkdir <operator_name>-index
    2. Create a Dockerfile that can build a catalog image:

      Example <operator_name>-index.Dockerfile

      # The base image is expected to contain
      # /bin/opm (with a serve subcommand) and /bin/grpc_health_probe
      FROM quay.io/openshift/origin-operator-registry:4.9.0
      # Configure the entrypoint and command
      ENTRYPOINT ["/bin/opm"]
      CMD ["serve", "/configs"]
      # Copy declarative config root into image at /configs
      ADD <operator_name>-index /configs
      # Set DC-specific label for the location of the DC root directory
      # in the image
      LABEL operators.operatorframework.io.index.configs.v1=/configs

      The Dockerfile must be in the same parent directory as the catalog directory that you created in the previous step:

      Example directory structure

      .
      ├── <operator_name>-index
      └── <operator_name>-index.Dockerfile
    3. Populate the catalog with your package definition:

      $ opm init <operator_name> \            (1)
          --default-channel=preview \         (2)
          --description=./README.md \         (3)
          --icon=./operator-icon.svg \        (4)
          --output yaml \                     (5)
          > <operator_name>-index/index.yaml  (6)
      (1) Operator, or package, name.
      (2) Channel that subscriptions default to if unspecified.
      (3) Path to the Operator’s README.md or other documentation.
      (4) Path to the Operator’s icon.
      (5) Output format: JSON or YAML.
      (6) Path for creating the catalog configuration file.

      This command generates an olm.package declarative config blob in the specified catalog configuration file.
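
For reference, the generated blob in the catalog configuration file resembles the following. The field values shown here are illustrative placeholders, not output from a real run:

```yaml
---
defaultChannel: preview
description: |
  <contents_of_README.md>
icon:
  base64data: <base64_encoded_icon>
  mediatype: image/svg+xml
name: <operator_name>
schema: olm.package
```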

  2. Add a bundle to the catalog:

    $ opm render <registry>/<namespace>/<bundle_image_name>:<tag> \  (1)
        --output=yaml \
        >> <operator_name>-index/index.yaml  (2)
    (1) Pull spec for the bundle image.
    (2) Path to the catalog configuration file.

    The opm render command generates a declarative config blob from the provided catalog images and bundle images.

    Channels must contain at least one bundle.
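
The rendered blob appended to the catalog file is an olm.bundle entry. A trimmed sketch, with illustrative placeholder values:

```yaml
---
schema: olm.bundle
name: <operator_name>.v0.1.0
package: <operator_name>
image: <registry>/<namespace>/<bundle_image_name>:<tag>
properties:
- type: olm.package
  value:
    packageName: <operator_name>
    version: 0.1.0
```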

  3. Add a channel entry for the bundle. For example, modify the following example to your specifications, and add it to your <operator_name>-index/index.yaml file:

    Example channel entry

    ---
    schema: olm.channel
    package: <operator_name>
    name: preview
    entries:
    - name: <operator_name>.v0.1.0 (1)
    (1) Ensure that you include the period (.) after <operator_name> but before the v in the version. Otherwise, the entry will fail to pass the opm validate command.
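
As you add more bundles, later channel entries can declare upgrade edges with fields such as replaces. For example, the following sketch uses illustrative version numbers:

```yaml
---
schema: olm.channel
package: <operator_name>
name: preview
entries:
- name: <operator_name>.v0.1.0
- name: <operator_name>.v0.2.0
  replaces: <operator_name>.v0.1.0
```
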
  4. Validate the file-based catalog:

    1. Run the opm validate command against the catalog directory:

      $ opm validate <operator_name>-index
    2. Check that the error code is 0:

      $ echo $?

      Example output

      0
  5. Build the catalog image:

    $ podman build . \
        -f <operator_name>-index.Dockerfile \
        -t <registry>/<namespace>/<catalog_image_name>:<tag>
  6. Push the catalog image to a registry:

    1. If required, authenticate with your target registry:

      $ podman login <registry>
    2. Push the catalog image:

      $ podman push <registry>/<namespace>/<catalog_image_name>:<tag>

SQLite-based catalogs

The SQLite database format for Operator catalogs is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.

Creating a SQLite-based index image

You can create an index image based on the SQLite database format by using the opm CLI.

Prerequisites

  • opm

  • podman version 1.9.3+

  • A bundle image built and pushed to a registry that supports Docker v2-2

Procedure

  1. Start a new index:

    $ opm index add \
        --bundles <registry>/<namespace>/<bundle_image_name>:<tag> \  (1)
        --tag <registry>/<namespace>/<index_image_name>:<tag> \       (2)
        [--binary-image <registry_base_image>]                        (3)
    (1) Comma-separated list of bundle images to add to the index.
    (2) The image tag that you want the index image to have.
    (3) Optional: An alternative registry base image to use for serving the catalog.
  2. Push the index image to a registry.

    1. If required, authenticate with your target registry:

      $ podman login <registry>
    2. Push the index image:

      $ podman push <registry>/<namespace>/<index_image_name>:<tag>

Updating a SQLite-based index image

After configuring OperatorHub to use a catalog source that references a custom index image, cluster administrators can keep the available Operators on their cluster up to date by adding bundle images to the index image.

You can update an existing index image using the opm index add command.

Prerequisites

  • opm

  • podman version 1.9.3+

  • An index image built and pushed to a registry.

  • An existing catalog source referencing the index image.

Procedure

  1. Update the existing index by adding bundle images:

    $ opm index add \
        --bundles <registry>/<namespace>/<new_bundle_image>@sha256:<digest> \        (1)
        --from-index <registry>/<namespace>/<existing_index_image>:<existing_tag> \  (2)
        --tag <registry>/<namespace>/<existing_index_image>:<updated_tag> \          (3)
        --pull-tool podman                                                           (4)
    (1) The --bundles flag specifies a comma-separated list of additional bundle images to add to the index.
    (2) The --from-index flag specifies the previously pushed index.
    (3) The --tag flag specifies the image tag to apply to the updated index image.
    (4) The --pull-tool flag specifies the tool used to pull container images.

    where:

    <registry>

    Specifies the hostname of the registry, such as quay.io or mirror.example.com.

    <namespace>

    Specifies the namespace of the registry, such as ocs-dev or abc.

    <new_bundle_image>

    Specifies the new bundle image to add to the registry, such as ocs-operator.

    <digest>

    Specifies the SHA image ID, or digest, of the bundle image, such as c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41.

    <existing_index_image>

    Specifies the previously pushed image, such as abc-redhat-operator-index.

    <existing_tag>

    Specifies a previously pushed image tag, such as 4.12.

    <updated_tag>

    Specifies the image tag to apply to the updated index image, such as 4.12.1.

    Example command

    $ opm index add \
        --bundles quay.io/ocs-dev/ocs-operator@sha256:c7f11097a628f092d8bad148406aa0e0951094a03445fd4bc0775431ef683a41 \
        --from-index mirror.example.com/abc/abc-redhat-operator-index:4.12 \
        --tag mirror.example.com/abc/abc-redhat-operator-index:4.12.1 \
        --pull-tool podman
  2. Push the updated index image:

      $ podman push <registry>/<namespace>/<existing_index_image>:<updated_tag>
  3. After Operator Lifecycle Manager (OLM) automatically polls the index image referenced in the catalog source at its regular interval, verify that the new packages are successfully added:

      $ oc get packagemanifests -n openshift-marketplace

Filtering a SQLite-based index image

An index image, based on the Operator bundle format, is a containerized snapshot of an Operator catalog. You can filter, or prune, an index of all but a specified list of packages, which creates a copy of the source index containing only the Operators that you want.

Prerequisites

  • podman version 1.9.3+

  • grpcurl (third-party command-line tool)

  • opm

  • Access to a registry that supports Docker v2-2

Procedure

  1. Authenticate with your target registry:

      $ podman login <target_registry>
  2. Determine the list of packages you want to include in your pruned index.

    1. Run the source index image that you want to prune in a container. For example:

      $ podman run -p50051:50051 \
          -it quay.io/operatorhubio/catalog:latest

      Example output

      Trying to pull quay.io/operatorhubio/catalog:latest...
      Getting image source signatures
      Copying blob ae8a0c23f5b1 done
      ...
      INFO[0000] serving registry database=/database/index.db port=50051
    2. In a separate terminal session, use the grpcurl command to get a list of the packages provided by the index:

      $ grpcurl -plaintext localhost:50051 api.Registry/ListPackages > packages.out
    3. Inspect the packages.out file and identify which package names from this list you want to keep in your pruned index. For example:

      Example snippets of packages list

      ...
      {
        "name": "couchdb-operator"
      }
      ...
      {
        "name": "eclipse-che"
      }
      ...
      {
        "name": "etcd"
      }
      ...
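
To turn the packages.out listing into a plain list of package names, standard text tools are enough. The following is a sketch run against a small illustrative sample of the grpcurl output:

```shell
# Sample of the grpcurl ListPackages output (entries are illustrative)
cat > packages.out <<'EOF'
{
  "name": "couchdb-operator"
}
{
  "name": "etcd"
}
EOF

# Extract the value of each "name" field, one package per line
grep '"name"' packages.out | sed 's/.*"name": *"\([^"]*\)".*/\1/'
```
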
    4. In the terminal session where you executed the podman run command, press Ctrl+C to stop the container process.

  3. Run the following command to prune the source index of all but the specified packages:

    $ opm index prune \
        -f quay.io/operatorhubio/catalog:latest \                (1)
        -p couchdb-operator,eclipse-che,etcd \                   (2)
        [-i quay.io/openshift/origin-operator-registry:4.9.0] \  (3)
        -t <target_registry>:<port>/<namespace>/catalog:latest   (4)
    (1) Index to prune.
    (2) Comma-separated list of packages to keep.
    (3) Required only for IBM Power and IBM Z images: Operator Registry base image with the tag that matches the target OKD cluster major and minor version.
    (4) Custom tag for the new index image being built.
  4. Run the following command to push the new index image to your target registry:

    $ podman push <target_registry>:<port>/<namespace>/catalog:latest

    where <namespace> is any existing namespace on the registry.

Catalog sources and pod security admission

Pod security admission was introduced in OKD 4.11 to ensure pod security standards. Catalog sources built using the SQLite-based catalog format and a version of the opm CLI tool released before OKD 4.11 cannot run under restricted pod security enforcement.

In OKD 4.12, namespaces do not have restricted pod security enforcement by default and the default catalog source security mode is set to legacy.

Default restricted enforcement for all namespaces is planned for inclusion in a future OKD release. When restricted enforcement occurs, the security context of the pod specification for catalog source pods must match the restricted pod security standard. If your catalog source image requires a different pod security standard, the pod security admissions label for the namespace must be explicitly set.
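
For reference, a catalog source pod that satisfies the restricted pod security standard sets a security context along the following lines. This is a sketch; the container name is illustrative:

```yaml
spec:
  securityContext:
    runAsNonRoot: true
    seccompProfile:
      type: RuntimeDefault
  containers:
  - name: registry-server
    securityContext:
      allowPrivilegeEscalation: false
      capabilities:
        drop:
        - ALL
```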

If you do not want to run your SQLite-based catalog source pods as restricted, you do not need to update your catalog source in OKD 4.12.

However, it is recommended that you take action now to ensure your catalog sources run under restricted pod security enforcement. If you do not take action to ensure your catalog sources run under restricted pod security enforcement, your catalog sources might not run in future OKD releases.

As a catalog author, you can enable compatibility with restricted pod security enforcement by completing either of the following actions:

  • Migrate your catalog to the file-based catalog format.

  • Update your catalog image with a version of the opm CLI tool released with OKD 4.11 or later.

The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. As of OKD 4.11, the default Red Hat-provided Operator catalog is released in the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.

If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can configure your catalog to run with elevated permissions.


Migrating SQLite database catalogs to the file-based catalog format

You can update your deprecated SQLite database format catalogs to the file-based catalog format.

Prerequisites

  • SQLite database catalog source

  • Cluster administrator permissions

  • Latest version of the opm CLI tool released with OKD 4.12 on workstation

Procedure

  1. Migrate your SQLite database catalog to a file-based catalog by running the following command:

    $ opm migrate <registry_image> <fbc_directory>
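
The migration writes the file-based catalog into <fbc_directory>, typically one subdirectory per package. Roughly, and noting that exact file names can vary by opm version:

```
<fbc_directory>
└── <operator_name>
    └── catalog.json
```
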
  2. Generate a Dockerfile for your file-based catalog by running the following command:

    $ opm generate dockerfile <fbc_directory> \
        --binary-image registry.redhat.io/openshift4/ose-operator-registry:v4.12

Next steps

  • The generated Dockerfile can be built, tagged, and pushed to your registry.
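
For example, the build and push can be sketched with podman. The Dockerfile and image names below are placeholders; substitute the file name that opm generated and your own registry details:

```shell
$ podman build . \
    -f <fbc_directory>.Dockerfile \
    -t <registry>/<namespace>/<catalog_image_name>:<tag>

$ podman push <registry>/<namespace>/<catalog_image_name>:<tag>
```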


Rebuilding SQLite database catalog images

You can rebuild your SQLite database catalog image with the latest version of the opm CLI tool that is released with your version of OKD.

Prerequisites

  • SQLite database catalog source

  • Cluster administrator permissions

  • Latest version of the opm CLI tool released with OKD 4.12 on workstation

Procedure

  • Run the following command to rebuild your catalog with a more recent version of the opm CLI tool:

    $ opm index add --binary-image \
        registry.redhat.io/openshift4/ose-operator-registry:v4.12 \
        --from-index <your_registry_image> \
        --bundles "" -t <your_registry_image>

Configuring catalogs to run with elevated permissions

If you do not want to update your SQLite database catalog image or migrate your catalog to the file-based catalog format, you can perform the following actions to ensure your catalog source runs when the default pod security enforcement changes to restricted:

  • Manually set the catalog security mode to legacy in your catalog source definition. This action ensures your catalog runs with legacy permissions even if the default catalog security mode changes to restricted.

  • Label the catalog source namespace for baseline or privileged pod security enforcement.

The SQLite database catalog format is deprecated, but still supported by Red Hat. In a future release, the SQLite database format will not be supported, and catalogs will need to migrate to the file-based catalog format. File-based catalogs are compatible with restricted pod security enforcement.

Prerequisites

  • SQLite database catalog source

  • Cluster administrator permissions

  • Target namespace that supports running pods with the elevated pod security admission standard of baseline or privileged

Procedure

  1. Edit the CatalogSource definition by setting the spec.grpcPodConfig.securityContextConfig label to legacy, as shown in the following example:

    Example CatalogSource definition

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-catsrc
      namespace: my-ns
    spec:
      sourceType: grpc
      grpcPodConfig:
        securityContextConfig: legacy
      image: my-image:latest

    In OKD 4.12, the spec.grpcPodConfig.securityContextConfig field is set to legacy by default. In a future release of OKD, it is planned that the default setting will change to restricted. If your catalog cannot run under restricted enforcement, it is recommended that you manually set this field to legacy.

  2. Edit your <namespace>.yaml file to add elevated pod security admission standards to your catalog source namespace, as shown in the following example:

    Example <namespace>.yaml file

    apiVersion: v1
    kind: Namespace
    metadata:
      ...
      labels:
        security.openshift.io/scc.podSecurityLabelSync: "false" (1)
        openshift.io/cluster-monitoring: "true"
        pod-security.kubernetes.io/enforce: baseline (2)
      name: "<namespace_name>"
    (1) Turn off pod security label synchronization by adding the security.openshift.io/scc.podSecurityLabelSync=false label to the namespace.
    (2) Apply the pod security admission pod-security.kubernetes.io/enforce label. Set the label to baseline or privileged. Use the baseline pod security profile unless other workloads in the namespace require a privileged profile.

Adding a catalog source to a cluster

Adding a catalog source to an OKD cluster enables the discovery and installation of Operators for users. Cluster administrators can create a CatalogSource object that references an index image. OperatorHub uses catalog sources to populate the user interface.

Prerequisites

  • An index image built and pushed to a registry.

Procedure

  1. Create a CatalogSource object that references your index image.

    1. Modify the following to your specifications and save it as a catalogSource.yaml file:

      apiVersion: operators.coreos.com/v1alpha1
      kind: CatalogSource
      metadata:
        name: my-operator-catalog
        namespace: olm (1)
        annotations:
          olm.catalogImageTemplate: (2)
            "<registry>/<namespace>/<index_image_name>:v{kube_major_version}.{kube_minor_version}.{kube_patch_version}"
      spec:
        sourceType: grpc
        grpcPodConfig:
          securityContextConfig: <security_mode> (3)
        image: <registry>/<namespace>/<index_image_name>:<tag> (4)
        displayName: My Operator Catalog
        publisher: <publisher_name> (5)
        updateStrategy:
          registryPoll: (6)
            interval: 30m
      (1) If you want the catalog source to be available globally to users in all namespaces, specify the olm namespace. Otherwise, you can specify a different namespace for the catalog to be scoped and available only for that namespace.
      (2) Optional: Set the olm.catalogImageTemplate annotation to your index image name and use one or more of the Kubernetes cluster version variables as shown when constructing the template for the image tag.
      (3) Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OKD release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
      (4) Specify your index image.
      (5) Specify your name or an organization name publishing the catalog.
      (6) Catalog sources can automatically check for new versions to keep up to date.
    2. Use the file to create the CatalogSource object:

      $ oc apply -f catalogSource.yaml
  2. Verify the following resources are created successfully.

    1. Check the pods:

      $ oc get pods -n olm

      Example output

      NAME                                   READY   STATUS    RESTARTS   AGE
      my-operator-catalog-6njx6              1/1     Running   0          28s
      marketplace-operator-d9f549946-96sgr   1/1     Running   0          26h
    2. Check the catalog source:

      $ oc get catalogsource -n olm

      Example output

      NAME                  DISPLAY               TYPE   PUBLISHER   AGE
      my-operator-catalog   My Operator Catalog   grpc               5s
    3. Check the package manifest:

      $ oc get packagemanifest -n olm

      Example output

      NAME             CATALOG               AGE
      jaeger-product   My Operator Catalog   93s

You can now install the Operators from the OperatorHub page on your OKD web console.
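
Operators from the new catalog can also be subscribed to from the CLI by creating a Subscription object. A minimal sketch, assuming the catalog was created in the olm namespace as shown above; the channel name is illustrative, and the target namespace must already contain a suitable OperatorGroup:

```yaml
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: jaeger-product
  namespace: olm
spec:
  channel: stable
  name: jaeger-product
  source: my-operator-catalog
  sourceNamespace: olm
```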


Accessing images for Operators from private registries

If certain images relevant to Operators managed by Operator Lifecycle Manager (OLM) are hosted in an authenticated container image registry, also known as a private registry, OLM and OperatorHub are unable to pull the images by default. To enable access, you can create a pull secret that contains the authentication credentials for the registry. By referencing one or more pull secrets in a catalog source, OLM can handle placing the secrets in the Operator and catalog namespace to allow installation.

Other images required by an Operator or its Operands might require access to private registries as well. OLM does not handle placing the secrets in target tenant namespaces for this scenario, but authentication credentials can be added to the global cluster pull secret or individual namespace service accounts to enable the required access.

The following types of images should be considered when determining whether Operators managed by OLM have appropriate pull access:

Index images

A CatalogSource object can reference an index image, which uses the Operator bundle format and is a catalog packaged as a container image hosted in an image registry. If an index image is hosted in a private registry, a secret can be used to enable pull access.

Bundle images

Operator bundle images are metadata and manifests packaged as container images that represent a unique version of an Operator. If any bundle images referenced in a catalog source are hosted in one or more private registries, a secret can be used to enable pull access.

Operator and Operand images

If an Operator installed from a catalog source uses a private image, either for the Operator image itself or one of the Operand images it watches, the Operator will fail to install because the deployment will not have access to the required registry authentication. Referencing secrets in a catalog source does not enable OLM to place the secrets in target tenant namespaces in which Operands are installed.

Instead, the authentication details can be added to the global cluster pull secret in the openshift-config namespace, which provides access to all namespaces on the cluster. Alternatively, if providing access to the entire cluster is not permissible, the pull secret can be added to the default service accounts of the target tenant namespaces.

Prerequisites

  • At least one of the following hosted in a private registry:

    • An index image or catalog image.

    • An Operator bundle image.

    • An Operator or Operand image.

Procedure

  1. Create a secret for each required private registry.

    1. Log in to the private registry to create or update your registry credentials file:

      $ podman login <registry>:<port>

      The file path of your registry credentials can be different depending on the container tool used to log in to the registry. For the podman CLI, the default location is ${XDG_RUNTIME_DIR}/containers/auth.json. For the docker CLI, the default location is /root/.docker/config.json.

    2. It is recommended to include credentials for only one registry per secret, and manage credentials for multiple registries in separate secrets. Multiple secrets can be included in a CatalogSource object in later steps, and OKD will merge the secrets into a single virtual credentials file for use during an image pull.

      A registry credentials file can, by default, store details for more than one registry or for multiple repositories in one registry. Verify the current contents of your file. For example:

      File storing credentials for multiple registries

      {
        "auths": {
          "registry.redhat.io": {
            "auth": "FrNHNydQXdzclNqdg=="
          },
          "quay.io": {
            "auth": "fegdsRib21iMQ=="
          },
          "https://quay.io/my-namespace/my-user/my-image": {
            "auth": "eWfjwsDdfsa221=="
          },
          "https://quay.io/my-namespace/my-user": {
            "auth": "feFweDdscw34rR=="
          },
          "https://quay.io/my-namespace": {
            "auth": "frwEews4fescyq=="
          }
        }
      }

      Because this file is used to create secrets in later steps, ensure that you are storing details for only one registry per file. This can be accomplished by using either of the following methods:

      • Use the podman logout <registry> command to remove credentials for additional registries until only the one registry you want remains.

      • Edit your registry credentials file and separate the registry details to be stored in multiple files. For example:

        File storing credentials for one registry

        {
          "auths": {
            "registry.redhat.io": {
              "auth": "FrNHNydQXdzclNqdg=="
            }
          }
        }

        File storing credentials for another registry

        {
          "auths": {
            "quay.io": {
              "auth": "Xd2lhdsbnRib21iMQ=="
            }
          }
        }
    3. Create a secret in the openshift-marketplace namespace that contains the authentication credentials for a private registry:

      $ oc create secret generic <secret_name> \
          -n openshift-marketplace \
          --from-file=.dockerconfigjson=<path/to/registry/credentials> \
          --type=kubernetes.io/dockerconfigjson

      Repeat this step to create additional secrets for any other required private registries, updating the --from-file flag to specify another registry credentials file path.

  2. Create or update an existing CatalogSource object to reference one or more secrets:

    apiVersion: operators.coreos.com/v1alpha1
    kind: CatalogSource
    metadata:
      name: my-operator-catalog
      namespace: openshift-marketplace
    spec:
      sourceType: grpc
      secrets: (1)
      - "<secret_name_1>"
      - "<secret_name_2>"
      grpcPodConfig:
        securityContextConfig: <security_mode> (2)
      image: <registry>:<port>/<namespace>/<image>:<tag>
      displayName: My Operator Catalog
      publisher: <publisher_name>
      updateStrategy:
        registryPoll:
          interval: 30m
    (1) Add a spec.secrets section and specify any required secrets.
    (2) Specify the value of legacy or restricted. If the field is not set, the default value is legacy. In a future OKD release, it is planned that the default value will be restricted. If your catalog cannot run with restricted permissions, it is recommended that you manually set this field to legacy.
  3. If any Operator or Operand images that are referenced by a subscribed Operator require access to a private registry, you can either provide access to all namespaces in the cluster, or individual target tenant namespaces.

    • To provide access to all namespaces in the cluster, add authentication details to the global cluster pull secret in the openshift-config namespace.

      Cluster resources must adjust to the new global pull secret, which can temporarily limit the usability of the cluster.

      1. Extract the .dockerconfigjson file from the global pull secret:

        $ oc extract secret/pull-secret -n openshift-config --confirm
      2. Update the .dockerconfigjson file with your authentication credentials for the required private registry or registries and save it as a new file:

        $ cat .dockerconfigjson | \
            jq --compact-output '.auths["<registry>:<port>/<namespace>/"] |= . + {"auth":"<token>"}' \  (1)
            > new_dockerconfigjson
        (1) Replace <registry>:<port>/<namespace> with the private registry details and <token> with your authentication credentials.
      3. Update the global pull secret with the new file:

        $ oc set data secret/pull-secret -n openshift-config \
            --from-file=.dockerconfigjson=new_dockerconfigjson
    • To update an individual namespace, add a pull secret to the service account for the Operator that requires access in the target tenant namespace.

      1. Recreate the secret that you created for the openshift-marketplace namespace in the tenant namespace:

        $ oc create secret generic <secret_name> \
            -n <tenant_namespace> \
            --from-file=.dockerconfigjson=<path/to/registry/credentials> \
            --type=kubernetes.io/dockerconfigjson
      2. Verify the name of the service account for the Operator by searching the tenant namespace:

        $ oc get sa -n <tenant_namespace> (1)
        (1) If the Operator was installed in an individual namespace, search that namespace. If the Operator was installed for all namespaces, search the openshift-operators namespace.

        Example output

        NAME            SECRETS   AGE
        builder         2         6m1s
        default         2         6m1s
        deployer        2         6m1s
        etcd-operator   2         5m18s (1)
        (1) Service account for an installed etcd Operator.
      3. Link the secret to the service account for the Operator:

        $ oc secrets link <operator_sa> \
            -n <tenant_namespace> \
            <secret_name> \
            --for=pull


Disabling the default OperatorHub catalog sources

Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OKD installation. As a cluster administrator, you can disable the set of default catalogs.

Procedure

  • Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object:

    $ oc patch OperatorHub cluster --type json \
        -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]'
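
Equivalently, the patch above results in an OperatorHub object whose spec looks like the following:

```yaml
apiVersion: config.openshift.io/v1
kind: OperatorHub
metadata:
  name: cluster
spec:
  disableAllDefaultSources: true
```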

Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, delete, disable, and enable individual sources.

Removing custom catalogs

As a cluster administrator, you can remove custom Operator catalogs that have been previously added to your cluster by deleting the related catalog source.

Procedure

  1. In the Administrator perspective of the web console, navigate to Administration → Cluster Settings.

  2. Click the Configuration tab, and then click OperatorHub.

  3. Click the Sources tab.

  4. Select the Options menu (kebab) for the catalog that you want to remove, and then click Delete CatalogSource.