Installing a stand-alone deployment of OpenShift container image registry

OKD is a fully-featured enterprise solution that includes an integrated container image registry called OpenShift Container Registry (OCR). Instead of deploying OKD as a full Platform-as-a-Service environment for developers, you can install OCR as a stand-alone container image registry to run on-site or in the cloud.

When installing a stand-alone deployment of OCR, a cluster of masters and nodes is still installed, similar to a typical OKD installation. Then, the container image registry is deployed to run on the cluster. This stand-alone deployment option is useful for administrators who want a container image registry but do not require the full OKD environment, which includes the developer-focused web console and application build and deployment tools.

A stand-alone OCR provides the following benefits:

  • Administrators can deploy and manage a registry separately from a full OKD environment, and use one registry to support multiple OKD clusters.

  • Administrators can run their registry in isolation to satisfy their own security or compliance requirements.

Minimum hardware requirements

Installing a stand-alone OCR has the following hardware requirements:

  • Physical or virtual system or an instance running on a public or private IaaS.

  • Base OS: Fedora 21, CentOS 7.4, or RHEL 7.5 or later with the “Minimal” installation option and the latest packages from the RHEL 7 Extras channel, or RHEL Atomic Host 7.4.5 or later.

  • NetworkManager 1.0 or later.

  • 2 vCPU.

  • Minimum 16 GB RAM.

  • Minimum 15 GB hard disk space for the file system containing /var/.

  • An additional minimum 15 GB unallocated space for Docker’s storage back end; see Configuring Docker Storage for details.
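
For example, on RHEL 7 you can point docker-storage-setup at the unallocated device before starting Docker. A minimal sketch, assuming a spare block device at /dev/vdb (the device name is a placeholder for this example; see Configuring Docker Storage for the full procedure):

    # Describe the storage setup; /dev/vdb is a hypothetical unallocated device
    $ cat <<EOF > /etc/sysconfig/docker-storage-setup
    DEVS=/dev/vdb
    VG=docker-vg
    EOF
    # Apply the configuration before enabling Docker
    $ docker-storage-setup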

OKD supports servers with x86_64 or IBM POWER architecture. If you use IBM POWER servers to host cluster nodes, all cluster nodes must be IBM POWER servers.

To meet the /var/ file system sizing requirements in RHEL Atomic Host, you must modify the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.
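
You can confirm how much space backs /var/ before and after making that change. A minimal sketch, assuming an LVM-backed root file system with free space in its volume group (the volume group and logical volume names below are placeholders):

    # Check the file system that contains /var/
    $ df -h /var
    # Grow the backing logical volume and its file system in one step
    $ sudo lvextend -r -L +15G /dev/rootvg/root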

Supported system topologies

The following system topologies are supported for stand-alone OCR:

All-in-one

A single host that includes the master, node, and registry components.

Multiple masters (Highly-Available)

Three hosts with all components (master, node, and registry) included on each, with the masters configured for native high availability.

Installing the OpenShift Container Registry

  1. Review the full cluster installation process, starting with Planning Your Installation. Installing OCR uses the same process but requires a few specific settings in the inventory file. The installation documentation includes a comprehensive list of available Ansible variables for the inventory file.

  2. Complete the host preparation steps.
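
    As a condensed sketch, host preparation on a CentOS 7 host typically looks like the following; the repository and package names here assume the CentOS origin repositories, and the host preparation documentation has the authoritative list:

      # On every cluster host: container runtime and base tools
      $ yum install -y docker wget git net-tools
      $ systemctl enable --now docker
      # On the host that runs the playbooks: Ansible and the OKD 3.11 playbooks
      $ yum install -y centos-release-openshift-origin311
      $ yum install -y openshift-ansible

    Installing the openshift-ansible package places the playbooks under /usr/share/ansible/openshift-ansible, which the commands in step 4 assume.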

  3. Create an inventory file, by default located at /etc/ansible/hosts:

    To install a stand-alone OCR, you must set deployment_subtype=registry in the [OSEv3:vars] section of the inventory file.

    Use the following example inventory files for the different supported system topologies:

    All-in-one stand-alone OpenShift Container Registry inventory file

    # Create an OSEv3 group that contains the masters and nodes groups
    [OSEv3:children]
    masters
    nodes
    etcd

    # Set variables common for all OSEv3 hosts
    [OSEv3:vars]
    # SSH user, this user should allow ssh based auth without requiring a password
    ansible_ssh_user=root
    openshift_master_default_subdomain=apps.test.example.com
    # If ansible_ssh_user is not root, ansible_become must be set to true
    #ansible_become=true
    openshift_deployment_type=openshift-enterprise
    deployment_subtype=registry (1)
    openshift_hosted_infra_selector="" (2)
    # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
    #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

    # host group for masters
    [masters]
    registry.example.com

    # host group for etcd
    [etcd]
    registry.example.com

    # host group for nodes
    [nodes]
    registry.example.com openshift_node_group_name='node-config-all-in-one'

    (1) Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OKD environment.
    (2) Allows the registry and its web console to be scheduled on the single host.
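
    If you uncomment the htpasswd identity provider, you also need a password file on the master. A minimal sketch, assuming the htpasswd utility from the httpd-tools package and the openshift_master_htpasswd_file variable supported by openshift-ansible (verify the variable name against your openshift-ansible version):

      # Create a password file with a single user; "developer" is a placeholder
      $ yum install -y httpd-tools
      $ htpasswd -c -b /etc/origin/master/htpasswd developer changeme
      # Then point the inventory at the file:
      # openshift_master_htpasswd_file=/etc/origin/master/htpasswd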

    Multiple masters (highly-available) stand-alone OpenShift Container Registry inventory file

    # Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
    # The lb group lets Ansible configure HAProxy as the load balancing solution.
    # Comment lb out if your load balancer is pre-configured.
    [OSEv3:children]
    masters
    nodes
    etcd
    lb

    # Set variables common for all OSEv3 hosts
    [OSEv3:vars]
    ansible_ssh_user=root
    openshift_deployment_type=openshift-enterprise
    deployment_subtype=registry (1)
    openshift_master_default_subdomain=apps.test.example.com
    # Uncomment the following to enable htpasswd authentication; defaults to
    # DenyAllPasswordIdentityProvider.
    #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

    # Native high availability cluster method with optional load balancer.
    # If no lb group is defined, the installer assumes that a load balancer has
    # been preconfigured. For installation, the value of
    # openshift_master_cluster_hostname must resolve to the load balancer,
    # or to one or all of the masters defined in the inventory if no load
    # balancer is present.
    openshift_master_cluster_method=native
    openshift_master_cluster_hostname=openshift-internal.example.com
    openshift_master_cluster_public_hostname=openshift-cluster.example.com

    # apply updated node-config-compute group defaults
    openshift_node_groups=[{'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{'key': 'kubeletArguments.pods-per-core','value': ['20']}, {'key': 'kubeletArguments.max-pods','value': ['250']}, {'key': 'kubeletArguments.image-gc-high-threshold', 'value':['90']}, {'key': 'kubeletArguments.image-gc-low-threshold', 'value': ['80']}]}]

    # enable ntp on masters to ensure proper failover
    openshift_clock_enabled=true

    # host group for masters
    [masters]
    master1.example.com
    master2.example.com
    master3.example.com

    # host group for etcd
    [etcd]
    etcd1.example.com
    etcd2.example.com
    etcd3.example.com

    # Specify load balancer host
    [lb]
    lb.example.com

    # host group for nodes, includes region info
    [nodes]
    master[1:3].example.com openshift_node_group_name='node-config-master-infra'
    node1.example.com openshift_node_group_name='node-config-compute'
    node2.example.com openshift_node_group_name='node-config-compute'

    (1) Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OKD environment.
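
    Before running the playbooks against this inventory, confirm that openshift_master_cluster_hostname resolves as the comments above describe. A quick check with standard DNS tools, using the placeholder hostnames from this example:

      $ dig +short openshift-internal.example.com
      # Expect the load balancer address, or the master addresses if no lb group is used
      $ dig +short openshift-cluster.example.com
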
  4. Install the stand-alone OCR. The process is similar to the full cluster installation process.

    The host that you run the Ansible playbook on must have at least 75 MiB of free memory per host in the inventory file.
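
    For example, the multiple-master inventory above lists nine unique hosts, so the control host needs roughly 9 × 75 MiB ≈ 675 MiB free. You can check with free from procps-ng:

      $ free -m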

    1. Before you deploy a new cluster, change to the playbook directory and run the prerequisites.yml playbook:

      $ cd /usr/share/ansible/openshift-ansible
      $ ansible-playbook [-i /path/to/inventory] \ (1)
          playbooks/prerequisites.yml
      (1) If your inventory file is not in the default /etc/ansible/hosts location, specify the path to it with the -i option.

      You must run this playbook only one time.

    2. To initiate installation, change to the playbook directory and run the deploy_cluster.yml playbook:

      $ cd /usr/share/ansible/openshift-ansible
      $ ansible-playbook [-i /path/to/inventory] \ (1)
          playbooks/deploy_cluster.yml
      (1) If your inventory file is not in the default /etc/ansible/hosts location, specify the path to it with the -i option.
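
    After the playbook completes, you can run a quick smoke test from a master host. A minimal sketch using standard oc commands; in OKD 3.x the registry components run in the default project, though exact pod and route names can vary by configuration:

      $ oc get nodes
      $ oc get pods -n default
      $ oc get routes -n default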