Configuring the registry for Red Hat OpenShift Data Foundation

To configure the OpenShift image registry on bare metal and vSphere to use Red Hat OpenShift Data Foundation storage, you must install OpenShift Data Foundation and then configure the image registry to use Ceph or Noobaa storage.

Configuring the Image Registry Operator to use Ceph RGW storage with Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry:

  • Ceph, a shared and distributed file system and on-premises object storage

  • NooBaa, providing a Multicloud Object Gateway

This section describes how to configure the image registry to use Ceph RGW storage.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the OKD web console.

  • You installed the oc CLI.

  • You installed the OpenShift Data Foundation Operator to provide object storage and Ceph RGW object storage.

Procedure

  1. Create the object bucket claim using the ocs-storagecluster-ceph-rgw storage class. For example:

    cat <<EOF | oc apply -f -
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: rgwtest
      namespace: openshift-storage (1)
    spec:
      storageClassName: ocs-storagecluster-ceph-rgw
      generateBucketName: rgwtest
    EOF

    (1) Alternatively, you can use the openshift-image-registry namespace.
  2. Get the bucket name by entering the following command:

    $ bucket_name=$(oc get obc -n openshift-storage rgwtest -o jsonpath='{.spec.bucketName}')
  3. Get the AWS credentials by entering the following commands:

    $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage rgwtest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode)
    $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage rgwtest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode)
  4. Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket in the openshift-image-registry project by entering the following command:

    $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry
  5. Create a reencrypt route for Ceph RGW by entering the following command:

    $ oc create route reencrypt <route_name> --service=rook-ceph-rgw-ocs-storagecluster-cephobjectstore --port=https -n openshift-storage

    Then get the route host by entering the following command:

    $ route_host=$(oc get route <route_name> -n openshift-storage -o=jsonpath='{.spec.host}')
  6. Create a config map that uses an ingress certificate by entering the following commands:

    $ oc extract secret/router-certs-default -n openshift-ingress --confirm
    $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config
  7. Configure the image registry to use the Ceph RGW object storage by entering the following command (an optional verification sketch follows the procedure):

    $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge
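
After you complete this procedure, you can optionally verify the result. The following commands are a minimal verification sketch, not part of the documented procedure; they assume the rgwtest claim created above and the default image-registry cluster Operator name. Confirm that the object bucket claim is bound, that the registry configuration references the new bucket, and that the image-registry cluster Operator is available:

  # Optional verification sketch; resource names follow the steps above
  $ oc get obc -n openshift-storage rgwtest
  $ oc get config.image/cluster -o jsonpath='{.spec.storage.s3.bucket}{"\n"}'
  $ oc get clusteroperator image-registry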

Configuring the Image Registry Operator to use Noobaa storage with Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry:

  • Ceph, a shared and distributed file system and on-premises object storage

  • NooBaa, providing a Multicloud Object Gateway

This section describes how to configure the image registry to use Noobaa storage.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the OKD web console.

  • You installed the oc CLI.

  • You installed the OpenShift Data Foundation Operator to provide object storage and Noobaa object storage.

Procedure

  1. Create the object bucket claim using the openshift-storage.noobaa.io storage class. For example:

    cat <<EOF | oc apply -f -
    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: noobaatest
      namespace: openshift-storage (1)
    spec:
      storageClassName: openshift-storage.noobaa.io
      generateBucketName: noobaatest
    EOF

    (1) Alternatively, you can use the openshift-image-registry namespace.
  2. Get the bucket name by entering the following command:

    $ bucket_name=$(oc get obc -n openshift-storage noobaatest -o jsonpath='{.spec.bucketName}')
  3. Get the AWS credentials by entering the following commands:

    $ AWS_ACCESS_KEY_ID=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_ACCESS_KEY_ID:" | head -n1 | awk '{print $2}' | base64 --decode)
    $ AWS_SECRET_ACCESS_KEY=$(oc get secret -n openshift-storage noobaatest -o yaml | grep -w "AWS_SECRET_ACCESS_KEY:" | head -n1 | awk '{print $2}' | base64 --decode)
  4. Create the secret image-registry-private-configuration-user with the AWS credentials for the new bucket in the openshift-image-registry project by entering the following command:

    $ oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=${AWS_ACCESS_KEY_ID} --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=${AWS_SECRET_ACCESS_KEY} --namespace openshift-image-registry
  5. Get the route host by entering the following command:

    $ route_host=$(oc get route s3 -n openshift-storage -o=jsonpath='{.spec.host}')
  6. Create a config map that uses an ingress certificate by entering the following commands:

    $ oc extract secret/router-certs-default -n openshift-ingress --confirm
    $ oc create configmap image-registry-s3-bundle --from-file=ca-bundle.crt=./tls.crt -n openshift-config
  7. Configure the image registry to use the Noobaa object storage by entering the following command (an optional verification sketch follows the procedure):

    $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","s3":{"bucket":'\"${bucket_name}\"',"region":"us-east-1","regionEndpoint":'\"https://${route_host}\"',"virtualHostedStyle":false,"encrypt":false,"trustedCA":{"name":"image-registry-s3-bundle"}}}}}' --type=merge
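
After you complete this procedure, you can optionally verify the configuration. The following commands are a minimal verification sketch, not part of the documented procedure; they assume the noobaatest claim created above and the default image-registry cluster Operator name. Confirm that the object bucket claim is bound, that the registry configuration references the new bucket and endpoint, and that the image-registry cluster Operator is available:

  # Optional verification sketch; resource names follow the steps above
  $ oc get obc -n openshift-storage noobaatest
  $ oc get config.image/cluster -o jsonpath='{.spec.storage.s3.regionEndpoint}{"\n"}'
  $ oc get clusteroperator image-registry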

Configuring the Image Registry Operator to use CephFS storage with Red Hat OpenShift Data Foundation

Red Hat OpenShift Data Foundation integrates multiple storage types that you can use with the OpenShift image registry:

  • Ceph, a shared and distributed file system and on-premises object storage

  • NooBaa, providing a Multicloud Object Gateway

This section describes how to configure the image registry to use CephFS storage.

CephFS uses persistent volume claim (PVC) storage. It is not recommended to use PVCs for image registry storage if other options, such as Ceph RGW or Noobaa, are available.

Prerequisites

  • You have access to the cluster as a user with the cluster-admin role.

  • You have access to the OKD web console.

  • You installed the oc CLI.

  • You installed the OpenShift Data Foundation Operator to provide object storage and CephFS file storage.

Procedure

  1. Create a PVC to use the cephfs storage class. For example:

    cat <<EOF | oc apply -f -
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: registry-storage-pvc
      namespace: openshift-image-registry
    spec:
      accessModes:
      - ReadWriteMany
      resources:
        requests:
          storage: 100Gi
      storageClassName: ocs-storagecluster-cephfs
    EOF
  2. Configure the image registry to use the CephFS file system storage by entering the following command (an optional verification sketch follows the procedure):

    $ oc patch config.image/cluster -p '{"spec":{"managementState":"Managed","replicas":2,"storage":{"managementState":"Unmanaged","pvc":{"claim":"registry-storage-pvc"}}}}' --type=merge
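
After you complete this procedure, you can optionally confirm the result. The following commands are a minimal verification sketch, not part of the documented procedure; they use the PVC name from the example manifest above. Confirm that the claim is bound and that the registry pods in the openshift-image-registry namespace are running:

  # Optional verification sketch; uses the PVC name from the example above
  $ oc get pvc registry-storage-pvc -n openshift-image-registry
  $ oc get pods -n openshift-image-registry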

Additional resources