Installing a Stand-alone Deployment of OpenShift Container Registry

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]

About OpenShift Container Registry

OKD is a fully-featured enterprise solution that includes an integrated container registry called OpenShift Container Registry (OCR). Alternatively, instead of deploying OKD as a full PaaS environment for developers, you can install OCR as a stand-alone container registry to run on-premise or in the cloud.

When installing a stand-alone deployment of OCR, a cluster of masters and nodes is still installed, similar to a typical OKD installation. Then, the container registry is deployed to run on the cluster. This stand-alone deployment option is useful for administrators who want a container registry but do not require the full OKD environment, which includes the developer-focused web console and application build and deployment tools.

OCR provides the following capabilities:

  • A user-focused registry web console, Cockpit.

  • Secured traffic by default, served via TLS.

  • Global identity provider authentication.

  • A project namespace model to enable teams to collaborate through role-based access control (RBAC) authorization.

  • A Kubernetes-based cluster to manage services.

  • An image abstraction called image streams to enhance image management.

Administrators may want to deploy a stand-alone OCR to manage a single registry that supports multiple OKD clusters. A stand-alone OCR also enables administrators to separate their registry to satisfy their own security or compliance requirements.

Minimum Hardware Requirements

Installing a stand-alone OCR has the following hardware requirements:

  • Physical or virtual system, or an instance running on a public or private IaaS.

  • Base OS: Fedora 21, CentOS 7.4, or RHEL 7.4 or 7.5 with the “Minimal” installation option and the latest packages from the RHEL 7 Extras channel, or RHEL Atomic Host 7.4.5 or later.

  • NetworkManager 1.0 or later

  • 2 vCPU.

  • Minimum 16 GB RAM.

  • Minimum 15 GB hard disk space for the file system containing /var/.

  • An additional minimum 15 GB unallocated space to be used for Docker’s storage back end; see Configuring Docker Storage for details and the sketch after this list.
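
One common way to dedicate that unallocated space to Docker is through docker-storage-setup, configured before Docker is started for the first time. The following is a minimal sketch, assuming the space lives on a spare block device named /dev/vdb; the device name and volume group name are illustrative, so follow Configuring Docker Storage for the authoritative procedure and supported storage drivers:

  # cat <<EOF > /etc/sysconfig/docker-storage-setup
  DEVS=/dev/vdb
  VG=docker-vg
  EOF
  # systemctl enable docker
  # systemctl start docker

When Docker first starts, docker-storage-setup creates the volume group and LVM thin pool for container storage on the listed device.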

OKD supports servers with x86_64 or IBM POWER architecture. If you use IBM POWER servers to host cluster nodes, all hosts in the cluster must be IBM POWER servers; you cannot mix the two architectures in one cluster.

Meeting the /var/ file system sizing requirements in RHEL Atomic Host requires making changes to the default configuration. See Managing Storage in Red Hat Enterprise Linux Atomic Host for instructions on configuring this during or after installation.
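
On RHEL Atomic Host, the root volume that contains /var/ can be grown through the same configuration file, because docker-storage-setup (the container-storage-setup utility) also manages the root logical volume. A hedged sketch, assuming the root volume group still has free space; verify the option name and procedure against Managing Storage in Red Hat Enterprise Linux Atomic Host:

  # echo "ROOT_SIZE=16G" >> /etc/sysconfig/docker-storage-setup
  # docker-storage-setup
  # df -h /var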

Supported System Topologies

The following system topologies are supported for stand-alone OCR:

All-in-one

A single host that includes the master, node, etcd, and registry components.

Multiple Masters (Highly-Available)

Three hosts with all components included on each (master, node, etcd, and registry), with the masters configured for native high-availability.

Host Preparation

Before installing stand-alone OCR, all of the same steps detailed in the Host Preparation topic for installing a full OKD PaaS must be performed. This includes registering and subscribing the host(s) to the proper repositories, installing or updating certain packages, and setting up Docker and its storage requirements.
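
As a condensed illustration of those steps on a CentOS 7 host, the commands below install the base utilities, the OKD 3.11 repository, and Docker. The repository package name (centos-release-openshift-origin311) is an assumption to verify against the Host Preparation topic, as is the exact base package list:

  # yum install -y wget git net-tools bind-utils yum-utils \
      iptables-services bridge-utils bash-completion kexec-tools sos psacct
  # yum install -y centos-release-openshift-origin311
  # yum install -y openshift-ansible docker
  # systemctl enable docker
  # systemctl start docker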

Follow the steps in the Host Preparation topic, then continue to Stand-alone Registry Installation Methods.

Installing Using Ansible

When installing stand-alone OCR, the steps are mostly the same as installing a full OKD cluster using Ansible, as described in the full cluster installation process. The main difference is that you must set deployment_subtype=registry in the inventory file within the [OSEv3:vars] section for the playbooks to follow the registry installation path.

See the following example inventory files for the different supported system topologies:

All-in-one Stand-alone OpenShift Container Registry Inventory File

  # Create an OSEv3 group that contains the masters and nodes groups
  [OSEv3:children]
  masters
  nodes
  etcd

  # Set variables common for all OSEv3 hosts
  [OSEv3:vars]
  # SSH user, this user should allow ssh based auth without requiring a password
  ansible_ssh_user=root
  openshift_master_default_subdomain=apps.test.example.com
  # If ansible_ssh_user is not root, ansible_become must be set to true
  #ansible_become=true
  openshift_deployment_type=origin
  deployment_subtype=registry (1)
  openshift_hosted_infra_selector="" (2)
  # uncomment the following to enable htpasswd authentication; defaults to DenyAllPasswordIdentityProvider
  #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

  # host group for masters
  [masters]
  registry.example.com

  # host group for etcd
  [etcd]
  registry.example.com

  # host group for nodes
  [nodes]
  registry.example.com openshift_node_group_name='node-config-all-in-one'
(1) Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OKD environment.
(2) Allows the registry and its web console to be scheduled on the single host.
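
If you uncomment the htpasswd identity provider line, you can also seed users at install time. A sketch using openshift-ansible’s openshift_master_htpasswd_users variable; the user name is a placeholder, and the hash value is one generated with htpasswd -nb <user> <password>:

  # Hypothetical example: one admin user, value is an htpasswd-generated hash
  openshift_master_htpasswd_users={'admin': '<htpasswd hash>'}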

Multiple Masters (Highly-Available) Stand-alone OpenShift Container Registry Inventory File

  # Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
  # The lb group lets Ansible configure HAProxy as the load balancing solution.
  # Comment lb out if your load balancer is pre-configured.
  [OSEv3:children]
  masters
  nodes
  etcd
  lb

  # Set variables common for all OSEv3 hosts
  [OSEv3:vars]
  ansible_ssh_user=root
  openshift_deployment_type=origin
  deployment_subtype=registry (1)
  openshift_master_default_subdomain=apps.test.example.com
  # Uncomment the following to enable htpasswd authentication; defaults to
  # DenyAllPasswordIdentityProvider.
  #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

  # Native high availability cluster method with optional load balancer.
  # If no lb group is defined, the installer assumes that a load balancer has
  # been preconfigured. For installation, the value of
  # openshift_master_cluster_hostname must resolve to the load balancer,
  # or to one or all of the masters defined in the inventory if no load
  # balancer is present.
  openshift_master_cluster_method=native
  openshift_master_cluster_hostname=openshift-internal.example.com
  openshift_master_cluster_public_hostname=openshift-cluster.example.com

  # apply updated node-config-compute group defaults
  openshift_node_groups=[{'name': 'node-config-compute', 'labels': ['node-role.kubernetes.io/compute=true'], 'edits': [{'key': 'kubeletArguments.pods-per-core','value': ['20']}, {'key': 'kubeletArguments.max-pods','value': ['250']}, {'key': 'kubeletArguments.image-gc-high-threshold', 'value':['90']}, {'key': 'kubeletArguments.image-gc-low-threshold', 'value': ['80']}]}]

  # enable ntp on masters to ensure proper failover
  openshift_clock_enabled=true

  # host group for masters
  [masters]
  master1.example.com
  master2.example.com
  master3.example.com

  # host group for etcd
  [etcd]
  etcd1.example.com
  etcd2.example.com
  etcd3.example.com

  # Specify load balancer host
  [lb]
  lb.example.com

  # host group for nodes, includes region info
  [nodes]
  master[1:3].example.com openshift_node_group_name='node-config-master-infra'
  node1.example.com openshift_node_group_name='node-config-compute'
  node2.example.com openshift_node_group_name='node-config-compute'
(1) Set deployment_subtype=registry to ensure installation of stand-alone OCR and not a full OKD environment.

After you have configured Ansible by defining an inventory file in /etc/ansible/hosts:

  1. Run the prerequisites.yml playbook to configure base packages and Docker. Run this playbook only once, before deploying a new cluster. Use the following command, specifying -i if your inventory file is located somewhere other than /etc/ansible/hosts:

    The host that you run the Ansible playbook on must have at least 75MiB of free memory per host in the inventory.

    # ansible-playbook [-i /path/to/inventory] \
        ~/openshift-ansible/playbooks/prerequisites.yml
  2. Run the deploy_cluster.yml playbook to initiate the installation:

    # ansible-playbook [-i /path/to/inventory] \
        ~/openshift-ansible/playbooks/deploy_cluster.yml
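
When the playbook completes, you can sanity-check the deployment from a master host. In OpenShift 3.x the registry runs as the docker-registry deployment in the default namespace, so a docker-registry pod in the Running state indicates success:

  # oc get nodes
  # oc get pods -n default

To push or pull images, authenticate with a token, substituting your registry’s route or service address for the <registry-host> placeholder (the default registry port in 3.x is 5000):

  # docker login -u $(oc whoami) -p $(oc whoami -t) <registry-host>:5000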

For more detailed usage information on the cluster installation process, including a comprehensive list of available Ansible variables, see the full documentation starting with Planning Your Installation.