Docker tasks

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]

OKD uses Docker to run applications in pods that are composed of any number of containers.

As a cluster administrator, you sometimes need to apply extra Docker configuration so that elements of the OKD installation run efficiently.

Increasing Docker storage

Increasing the amount of available storage ensures continued deployment without outages. To do so, a free partition with an appropriate amount of free capacity must be made available.
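
Before evacuating and reconfiguring a node, it can help to confirm how much unallocated capacity the host actually has. The following is a minimal check, assuming the LVM-backed Docker storage used in the examples in this section: lsblk lists block devices and partitions with their sizes, pvs shows the physical volumes already claimed by LVM, and vgs shows the free space remaining in each volume group (the docker_vol name is only illustrative):

    # lsblk
    # pvs
    # vgs docker_vol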

Evacuating the node

Procedure

  1. From a master instance, or as a cluster administrator, allow the evacuation of any pod from the node and disable scheduling of other pods on that node:

    $ NODE=ose-app-node01.example.com
    $ oc adm manage-node ${NODE} --schedulable=false
    NAME                         STATUS                     AGE       VERSION
    ose-app-node01.example.com   Ready,SchedulingDisabled   20m       v1.6.1+5115d708d7
    $ oc adm drain ${NODE} --ignore-daemonsets
    node "ose-app-node01.example.com" already cordoned
    pod "perl-1-build" evicted
    pod "perl-1-3lnsh" evicted
    pod "perl-1-9jzd8" evicted
    node "ose-app-node01.example.com" drained

    If there are containers running with local volumes that will not migrate, run the following command: oc adm drain ${NODE} --ignore-daemonsets --delete-local-data.

  2. List the pods on the node to verify that they have been removed:

    $ oc adm manage-node ${NODE} --list-pods
    Listing matched pods on node: ose-app-node01.example.com
    NAME      READY     STATUS    RESTARTS   AGE
  3. Repeat the previous two steps for each node.

For more information on evacuating and draining pods or nodes, see Node maintenance.

Increasing storage

You can increase Docker storage in two ways: attaching a new disk, or extending the existing disk.

Increasing storage with a new disk

Prerequisites
  • A new disk must be available to the existing instance that requires more storage. In the following steps, the original disk is labeled /dev/xvdb, and the new disk is labeled /dev/xvdd, as shown in the /etc/sysconfig/docker-storage-setup file:

    # vi /etc/sysconfig/docker-storage-setup
    DEVS="/dev/xvdb /dev/xvdd"

    The process may differ depending on the underlying OKD infrastructure.

Procedure
  1. Stop the docker service:

    # systemctl stop docker
  2. Stop the node service by removing the pod definition and rebooting the host:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  3. Run the docker-storage-setup command to extend the volume groups and logical volumes associated with container storage:

    # docker-storage-setup
    INFO: Volume group backing root filesystem could not be determined
    INFO: Device /dev/xvdb is already partitioned and is part of volume group docker_vol
    INFO: Device node /dev/xvdd1 exists.
      Physical volume "/dev/xvdd1" successfully created.
      Volume group "docker_vol" successfully extended
  4. Start the Docker services:

    # systemctl start docker
    # vgs
      VG          #PV #LV #SN Attr   VSize  VFree
      docker_vol    2   1   0 wz--n- 64.99g <55.00g
  5. Restart the node service by rebooting the host:

    # systemctl restart atomic-openshift-node.service
  6. A benefit of adding a disk, compared to creating a new volume group and re-running docker-storage-setup, is that the images that were in use on the system still exist after the new storage has been added:

    # docker images
    REPOSITORY                                           TAG             IMAGE ID       CREATED          SIZE
    docker-registry.default.svc:5000/tet/perl            latest          8b0b0106fb5e   13 minutes ago   627.4 MB
    registry.access.redhat.com/rhscl/perl-524-rhel7      <none>          912b01ac7570   6 days ago       559.5 MB
    registry.access.redhat.com/openshift3/ose-deployer   v3.6.173.0.21   89fd398a337d   5 weeks ago      970.2 MB
    registry.access.redhat.com/openshift3/ose-pod        v3.6.173.0.21   63accd48a0d7   5 weeks ago      208.6 MB
  7. With the increased storage capacity, make the node schedulable again so that it can accept new incoming pods.

    As a cluster administrator, run the following from a master instance:

    $ oc adm manage-node ${NODE} --schedulable=true
    ose-master01.example.com       Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master02.example.com       Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master03.example.com       Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-infra-node01.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node02.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node03.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-app-node01.example.com     Ready                      24m       v1.6.1+5115d708d7
    ose-app-node02.example.com     Ready                      24m       v1.6.1+5115d708d7

Increasing storage with an existing disk

  1. Evacuate the node following the previous steps.

  2. Stop the docker service:

    # systemctl stop docker
  3. Stop the node service by removing the pod definition:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  4. Resize the existing disk as desired. How to do this depends on your environment; one possible approach is sketched after this procedure.

  5. Verify that the /etc/sysconfig/container-storage-setup file is correctly configured for the resized disk by checking the device name, size, and so on.

  6. Run the docker-storage-setup command to reconfigure the storage for the resized disk:

    # docker-storage-setup
    INFO: Volume group backing root filesystem could not be determined
    INFO: Device /dev/xvdb is already partitioned and is part of volume group docker_vol
    INFO: Device node /dev/xvdd1 exists.
      Physical volume "/dev/xvdd1" successfully created.
      Volume group "docker_vol" successfully extended
  7. Start the Docker services:

    # systemctl start docker
    # vgs
      VG          #PV #LV #SN Attr   VSize  VFree
      docker_vol    2   1   0 wz--n- 64.99g <55.00g
  8. Restart the node service by rebooting the host:

    # systemctl restart atomic-openshift-node.service
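
How the disk itself is resized depends on the environment. As one hedged example, on a cloud instance where the block device backing /dev/xvdb has already been enlarged through the provider, the partition and the LVM physical volume can be grown to match. The growpart command is provided by the cloud-utils-growpart package, and the device, partition, and volume group names below are the illustrative ones used earlier:

    # growpart /dev/xvdb 1
    # pvresize /dev/xvdb1
    # vgs docker_vol

After pvresize, vgs reports the additional free space, which docker-storage-setup can then allocate.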

Changing the storage backend

As services and file systems advance, changing the storage backend may be necessary to take advantage of new features. The following steps provide an example of changing a device mapper backend to an overlay2 storage backend. overlay2 offers increased speed and density over traditional device mapper.

Evacuating the node

  1. From a master instance, or as a cluster administrator, allow the evacuation of any pod from the node and disable scheduling of other pods on that node:

    $ NODE=ose-app-node01.example.com
    $ oc adm manage-node ${NODE} --schedulable=false
    NAME                         STATUS                     AGE       VERSION
    ose-app-node01.example.com   Ready,SchedulingDisabled   20m       v1.6.1+5115d708d7
    $ oc adm drain ${NODE} --ignore-daemonsets
    node "ose-app-node01.example.com" already cordoned
    pod "perl-1-build" evicted
    pod "perl-1-3lnsh" evicted
    pod "perl-1-9jzd8" evicted
    node "ose-app-node01.example.com" drained

    If there are containers running with local volumes that will not migrate, run the following command: oc adm drain ${NODE} --ignore-daemonsets --delete-local-data.

  2. List the pods on the node to verify that they have been removed:

    $ oc adm manage-node ${NODE} --list-pods
    Listing matched pods on node: ose-app-node01.example.com
    NAME      READY     STATUS    RESTARTS   AGE

    For more information on evacuating and draining pods or nodes, see Node maintenance.

  3. With no containers currently running on the instance, stop the docker service:

    # systemctl stop docker
  4. Stop the node service by removing the pod definition:

    # mkdir -p /etc/origin/node/pods-stopped
    # mv /etc/origin/node/pods/* /etc/origin/node/pods-stopped/
  5. Verify the volume group name, logical volume name, and physical volume name, then remove the existing container storage:

    # vgs
      VG          #PV #LV #SN Attr   VSize   VFree
      docker_vol    1   1   0 wz--n- <25.00g 15.00g
    # lvs
      LV       VG         Attr       LSize   Pool Origin Data% Meta% Move Log Cpy%Sync Convert
      dockerlv docker_vol -wi-ao---- <10.00g
    # lvremove /dev/docker_vol/docker-pool -y
    # vgremove docker_vol -y
    # pvs
      PV         VG         Fmt  Attr PSize   PFree
      /dev/xvdb1 docker_vol lvm2 a--  <25.00g 15.00g
    # pvremove /dev/xvdb1 -y
    # rm -Rf /var/lib/docker/*
    # rm -f /etc/sysconfig/docker-storage
  6. Modify the /etc/sysconfig/docker-storage-setup file to specify the STORAGE_DRIVER:

    When a system is upgraded from Red Hat Enterprise Linux 7.3 to 7.4, the docker service attempts to use /var with a STORAGE_DRIVER of extfs. Using extfs as the STORAGE_DRIVER causes errors; see the related Red Hat bug for more information about the error.

    DEVS=/dev/xvdb
    VG=docker_vol
    DATA_SIZE=95%VG
    STORAGE_DRIVER=overlay2
    CONTAINER_ROOT_LV_NAME=dockerlv
    CONTAINER_ROOT_LV_MOUNT_PATH=/var/lib/docker
    CONTAINER_ROOT_LV_SIZE=100%FREE
  7. Set up the storage:

    # docker-storage-setup
  8. Start the docker service:

    # systemctl start docker
  9. Restart the node service by rebooting the host:

    # systemctl restart atomic-openshift-node.service
  10. With the storage backend changed to overlay2, make the node schedulable again so that it can accept new incoming pods.

    From a master instance, or as a cluster administrator:

    $ oc adm manage-node ${NODE} --schedulable=true
    ose-master01.example.com       Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master02.example.com       Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-master03.example.com       Ready,SchedulingDisabled   24m       v1.6.1+5115d708d7
    ose-infra-node01.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node02.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-infra-node03.example.com   Ready                      24m       v1.6.1+5115d708d7
    ose-app-node01.example.com     Ready                      24m       v1.6.1+5115d708d7
    ose-app-node02.example.com     Ready                      24m       v1.6.1+5115d708d7
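
After the node is schedulable again, you can optionally confirm that docker is using the new backend. A minimal check, assuming the standard docker info command (the exact output format can vary between Docker versions):

    # docker info | grep 'Storage Driver'
     Storage Driver: overlay2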

Managing Docker certificates

An OKD internal registry is created as a pod. However, images may be pulled from external registries if desired. By default, registries listen on TCP port 5000. A registry can either secure its exposed images with TLS or run without encrypting its traffic.

Docker interprets .crt files as CA certificates and .cert files as client certificates. Any CA extensions must be .crt.
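
As an illustration of this naming convention, a per-registry directory under /etc/docker/certs.d might contain files such as the following (the file names are hypothetical; only the .crt, .cert, and .key extensions matter to Docker):

    /etc/docker/certs.d/myregistry.example.com/ca.crt        CA certificate for the registry
    /etc/docker/certs.d/myregistry.example.com/client.cert   client certificate
    /etc/docker/certs.d/myregistry.example.com/client.key    client certificate key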

Installing a certificate authority certificate for external registries

In order to use OKD with an external registry, the registry certificate authority (CA) certificate must be trusted by all the nodes that can pull images from the registry.

Depending on the Docker version, the process to trust a Docker registry varies. The latest versions of Docker’s root certificate authorities are merged with system defaults. Prior to docker version 1.13, the system default certificate is used only when no other custom root certificates exist.

Procedure
  1. Copy the CA certificate to /etc/pki/ca-trust/source/anchors/:

    $ sudo cp myregistry.example.com.crt /etc/pki/ca-trust/source/anchors/
  2. Extract and add the CA certificate to the list of trusted certificate authorities:

    $ sudo update-ca-trust extract
  3. Verify the SSL certificate using the openssl command:

    $ openssl verify myregistry.example.com.crt
    myregistry.example.com.crt: OK
  4. Once the certificate is in place and the trust is updated, restart the docker service to ensure the new certificates are properly set:

    $ sudo systemctl restart docker.service

For Docker versions prior to 1.13, perform the following additional steps to trust the certificate authorities:

  1. On every node, create a new directory in /etc/docker/certs.d where the name of the directory is the host name of the Docker registry:

    $ sudo mkdir -p /etc/docker/certs.d/myregistry.example.com

    The port number is not required unless the Docker registry cannot be accessed without one. In that case, include the port in the directory name, for example: myregistry.example.com:port

  2. If the Docker registry is accessed by its IP address, create a new directory within /etc/docker/certs.d on every node, where the name of the directory is the IP address of the Docker registry:

    $ sudo mkdir -p /etc/docker/certs.d/10.10.10.10
  3. Copy the CA certificate to the newly created Docker directories from the previous steps:

    $ sudo cp myregistry.example.com.crt \
        /etc/docker/certs.d/myregistry.example.com/ca.crt
    $ sudo cp myregistry.example.com.crt /etc/docker/certs.d/10.10.10.10/ca.crt
  4. Once the certificates have been copied, restart the docker service to ensure the new certificates are used:

    $ sudo systemctl restart docker.service
Docker certificates backup

When performing a node host backup, ensure that the certificates for external registries are included.

Procedure
  1. If using /etc/docker/certs.d, copy all the certificates included in the directory and store the files:

    $ sudo tar -czvf docker-registry-certs-$(hostname)-$(date +%Y%m%d).tar.gz /etc/docker/certs.d/
  2. If using the system trust store, back up the certificates before adding them to the system trust. After they have been added, you can extract them for restoration with the trust command. Identify the system trust CAs and note the pkcs11 ID:

    $ trust list
    ...[OUTPUT OMITTED]...
    pkcs11:id=%a5%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert
        type: certificate
        label: MyCA
        trust: anchor
        category: authority
    ...[OUTPUT OMITTED]...
  3. Extract the certificate in pem format and give it a name, for example, myca.crt:

    $ trust extract --format=pem-bundle \
        --filter="%a5%b3%e1%2b%2b%49%b6%d7%73%a1%aa%94%f5%01%e7%73%65%4c%ac%50;type=cert" myca.crt
  4. Verify the certificate has been properly extracted via openssl:

    $ openssl verify myca.crt
  5. Repeat the procedure for all the required certificates and store the files in a remote location.

Docker certificates restore

If the Docker certificates for the external registries are deleted or corrupted, restore them using the same steps as the installation method, using the files from the backups performed previously.
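
For example, assuming the backup archive and the extracted CA file from the previous procedure, a hedged restore could look like the following (the archive name matches the backup naming used above; adjust the host name and date):

    $ sudo tar -xzvf docker-registry-certs-<hostname>-<date>.tar.gz -C /
    $ sudo cp myca.crt /etc/pki/ca-trust/source/anchors/
    $ sudo update-ca-trust extract
    $ sudo systemctl restart docker.service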

Managing Docker registries

You can configure OKD to use external docker registries to pull images. However, you can use configuration files to allow or deny certain images or registries.

If the external registry uses certificates to encrypt its network traffic, it is considered a secure registry. Otherwise, traffic between the registry and the host is plain text and not encrypted, meaning it is an insecure registry.

Docker search external registries

By default, the docker daemon can pull images from any registry, but the search operation is performed against docker.io and registry.access.redhat.com. The daemon can be configured to search for images in other registries using the --add-registry option.

The ability to search images from the Red Hat Registry registry.access.redhat.com exists by default in the Red Hat Enterprise Linux docker package.

Procedure
  1. To allow users to search for images using docker search with other registries, add those registries to the /etc/containers/registries.conf file under the registries parameter:

    registries:
      - registry.access.redhat.com
      - my.registry.example.com

    Prior to OKD version 3.6, this was accomplished using /etc/sysconfig/docker with the following options:

    ADD_REGISTRY="--add-registry=registry.access.redhat.com --add-registry=my.registry.example.com"

    The first registry added is the first registry searched.

  2. Restart the docker daemon to allow for my.registry.example.com to be used:

    $ sudo systemctl restart docker.service

    Restarting the docker daemon causes the docker containers to restart.

  3. Using the Ansible installer, this can be configured using the openshift_docker_additional_registries variable in the Ansible hosts file:

    openshift_docker_additional_registries=registry.access.redhat.com,my.registry.example.com

Docker external registries whitelist and blacklist

Docker can be configured to block operations from external registries by configuring the registries and block_registries flags for the docker daemon.

Procedure
  1. Add the allowed registries to the /etc/containers/registries.conf file with the registries flag:

    registries:
      - registry.access.redhat.com
      - my.registry.example.com

    Prior to 3.6, the /etc/sysconfig/docker file is modified instead:

    ADD_REGISTRY="--add-registry=registry.access.redhat.com --add-registry=my.registry.example.com"

    The docker.io registry can be added using the same method.

  2. Block the rest of the registries:

    block_registries:
      - all
  3. Block the rest of the registries in older versions:

    BLOCK_REGISTRY='--block-registry=all'
  4. Restart the docker daemon:

    $ sudo systemctl restart docker.service

    Restarting the docker daemon causes the docker containers to restart.

  5. In this example, the docker.io registry has been blacklisted, so any operation regarding that registry fails:

    $ sudo docker pull hello-world
    Using default tag: latest
    Trying to pull repository registry.access.redhat.com/hello-world ...
    Trying to pull repository my.registry.example.com/hello-world ...
    Trying to pull repository registry.access.redhat.com/hello-world ...
    unknown: Not Found
    $ sudo docker pull docker.io/hello-world
    Using default tag: latest
    Trying to pull repository docker.io/library/hello-world ...
    All endpoints blocked.

    Add docker.io back to the registries variable by modifying the file again and restarting the service.

    registries:
      - registry.access.redhat.com
      - my.registry.example.com
      - docker.io
    block_registries:
      - all

    or

    ADD_REGISTRY="--add-registry=registry.access.redhat.com --add-registry=my.registry.example.com --add-registry=docker.io"
    BLOCK_REGISTRY='--block-registry=all'
  6. Restart the Docker service:

    $ sudo systemctl restart docker
  7. Verify that the image can now be pulled:

    $ sudo docker pull docker.io/hello-world
    Using default tag: latest
    Trying to pull repository docker.io/library/hello-world ...
    latest: Pulling from docker.io/library/hello-world
    9a0669468bf7: Pull complete
    Digest: sha256:0e06ef5e1945a718b02a8c319e15bae44f47039005530bc617a5d071190ed3fc
  8. If using an external registry is required, modify the docker daemon configuration file on every node host that needs to use that registry, and create a blacklist on those nodes to prevent malicious containers from being executed.

    Using the Ansible installer, this can be configured using the openshift_docker_additional_registries and openshift_docker_blocked_registries variables in the Ansible hosts file:

    openshift_docker_additional_registries=registry.access.redhat.com,my.registry.example.com
    openshift_docker_blocked_registries=all

Secure registries

In order to pull images from an external registry, the registry certificates must be trusted; otherwise the image pull operation fails.

In order to do so, see the Installing a Certificate Authority Certificate for External Registries section.

If using a whitelist, the external registries should be added to the registries variable, as explained above.

Insecure registries

External registries that use untrusted certificates, or no certificates at all, should be avoided.

However, any insecure registries must be added using the --insecure-registry option so that the docker daemon can pull images from the repository. This works like the --add-registry option, except that the docker operation is not verified.

The registry should be added using both options to enable search and, if there is a blacklist, to perform other operations, such as pulling images.

For testing purposes, the following example shows how to add a localhost insecure registry.

Procedure
  1. Modify the /etc/containers/registries.conf configuration file to add the localhost insecure registry:

    registries:
      - registry.access.redhat.com
      - my.registry.example.com
      - docker.io
    insecure_registries:
      - localhost:5000
    block_registries:
      - all

    Prior to 3.6, modify the /etc/sysconfig/docker configuration file to add the localhost registry:

    ADD_REGISTRY="--add-registry=registry.access.redhat.com --add-registry=my.registry.example.com --add-registry=docker.io --add-registry=localhost:5000"
    INSECURE_REGISTRY="--insecure-registry=localhost:5000"
    BLOCK_REGISTRY='--block-registry=all'
  2. Restart the docker daemon to use the registry:

    $ sudo systemctl restart docker.service

    Restarting the docker daemon causes the docker containers to be restarted.

  3. Run a Docker registry pod at localhost:

    $ sudo docker run -p 5000:5000 registry:2
  4. Pull an image:

    $ sudo docker pull openshift/hello-openshift
  5. Tag the image:

    $ sudo docker tag docker.io/openshift/hello-openshift:latest localhost:5000/hello-openshift-local:latest
  6. Push the image to the local registry:

    $ sudo docker push localhost:5000/hello-openshift-local:latest
  7. Using the Ansible installer, this can be configured using the openshift_docker_additional_registries, openshift_docker_blocked_registries, and openshift_docker_insecure_registries variables in the Ansible hosts file:

    openshift_docker_additional_registries=registry.access.redhat.com,my.registry.example.com,localhost:5000
    openshift_docker_insecure_registries=localhost:5000
    openshift_docker_blocked_registries=all

    You can also set the openshift_docker_insecure_registries variable to the IP address of the host. 0.0.0.0/0 is not a valid setting.

Authenticated registries

Using authenticated registries with docker requires the docker daemon to log in to the registry. With OKD, a different set of steps must be performed, because users cannot run docker login commands on the host. Authenticated registries can be used to limit the images users can pull or to limit who can access the external registries.

If an external docker registry requires authentication, create a special secret in the project that uses that registry and then use that secret to perform the docker operations.

Procedure
  1. Create a dockercfg secret in the project where the user is going to log in to the docker registry:

    $ oc project <my_project>
    $ oc create secret docker-registry <my_registry> --docker-server=<my.registry.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
  2. If a .dockercfg file exists, create the secret using the oc command:

    $ oc create secret generic <my_registry> --from-file=.dockercfg=<path/to/.dockercfg> --type=kubernetes.io/dockercfg
  3. Alternatively, if a $HOME/.docker/config.json file exists, create the secret from that file:

    $ oc create secret generic <my_registry> --from-file=.dockerconfigjson=<path/to/.docker/config.json> --type=kubernetes.io/dockerconfigjson
  4. Use the dockercfg secret to pull images from the authenticated registry by linking the secret to the service account performing the pull operations. The default service account to pull images is named default:

    $ oc secrets link default <my_registry> --for=pull
  5. For pushing images using the S2I feature, the dockercfg secret is mounted in the S2I pod, so it needs to be linked to the proper service account that performs the build. The default service account used to build images is named builder:

    $ oc secrets link builder <my_registry>
  6. In the buildconfig, the secret should be specified for push or pull operations:

    "type": "Source",
    "sourceStrategy": {
        "from": {
            "kind": "DockerImage",
            "name": "my.registry.example.com/myproject/myimage:stable"
        },
        "pullSecret": {
            "name": "mydockerregistry"
        },
    ...[OUTPUT ABBREVIATED]...
    "output": {
        "to": {
            "kind": "DockerImage",
            "name": "my.registry.example.com/myproject/myimage:latest"
        },
        "pushSecret": {
            "name": "mydockerregistry"
        },
    ...[OUTPUT ABBREVIATED]...
  7. If the external registry delegates authentication to external services, create both dockercfg secrets: one for the registry, using the registry URL, and one for the external authentication system, using its own URL. Both secrets must be added to the service accounts.

    $ oc project <my_project>
    $ oc create secret docker-registry <my_registry> --docker-server=<my.registry.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
    $ oc create secret docker-registry <my_docker_registry_ext_auth> --docker-server=<my.authsystem.example.com> --docker-username=<username> --docker-password=<my_password> --docker-email=<me@example.com>
    $ oc secrets link default <my_registry> --for=pull
    $ oc secrets link default <my_docker_registry_ext_auth> --for=pull
    $ oc secrets link builder <my_registry>
    $ oc secrets link builder <my_docker_registry_ext_auth>

ImagePolicy admission plug-in

An admission control plug-in intercepts requests to the API and performs checks based on the configured rules, allowing or denying certain actions. OKD can limit the images allowed to run in the environment using the ImagePolicy admission plug-in, which can control:

  • The source of images: which registries can be used to pull images

  • Image resolution: force pods to run with immutable digests to ensure the image does not change due to a re-tag

  • Container image label restrictions: force an image to have or not have particular labels

  • Image annotation restrictions: force an image in the integrated container registry to have or not have particular annotations

The ImagePolicy admission plug-in is currently considered beta.

Procedure
  1. If the ImagePolicy plug-in is enabled, modify the /etc/origin/master/master-config.yaml file on every master node so that the external registries are allowed to be used:

    admissionConfig:
      pluginConfig:
        openshift.io/ImagePolicy:
          configuration:
            kind: ImagePolicyConfig
            apiVersion: v1
            executionRules:
            - name: allow-images-from-other-registries
              onResources:
              - resource: pods
              - resource: builds
              matchRegistries:
              - docker.io
              - <my.registry.example.com>
              - registry.access.redhat.com

    Enabling ImagePolicy requires users to specify the registry when deploying an application, for example oc new-app docker.io/kubernetes/guestbook instead of oc new-app kubernetes/guestbook; otherwise it fails.

  2. To enable the admission plug-ins at installation time, the openshift_master_admission_plugin_config variable can be used with a JSON-formatted string containing the entire pluginConfig configuration:

    openshift_master_admission_plugin_config={"openshift.io/ImagePolicy":{"configuration":{"kind":"ImagePolicyConfig","apiVersion":"v1","executionRules":[{"name":"allow-images-from-other-registries","onResources":[{"resource":"pods"},{"resource":"builds"}],"matchRegistries":["docker.io","my.registry.example.com","registry.access.redhat.com"]}]}}}

    There is a current issue, to be fixed in OKD 3.6.1, where pods cannot be deployed using default templates when ImagePolicy is enabled, and they give the following error message: Failed create | Error creating: Pod "" is invalid: spec.containers[0].image: Forbidden: this image is prohibited by policy.

    See the Image Policy is not working as expected Red Hat Knowledgebase article for a workaround.

Import images from external registries

Application developers can import images to create imagestreams using the oc import-image command, and OKD can be configured to allow or deny image imports from external registries.
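
For example, an import from one of the registries allowed in the configuration below might look like the following; the image stream and image names are hypothetical, and --confirm creates the image stream if it does not already exist:

    $ oc import-image myimage --from=my.registry.example.com/myproject/myimage:latest --confirm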

Procedure
  1. To configure the allowed registries where users can import images, add the following to the /etc/origin/master/master-config.yaml file:

    imagePolicyConfig:
      allowedRegistriesForImport:
      - domainName: docker.io
      - domainName: '*.docker.io'
      - domainName: '*.redhat.com'
      - domainName: 'my.registry.example.com'
  2. To import images from an external authenticated registry, create a secret within the desired project.

  3. Although not recommended, if the external authenticated registry is insecure or the certificates cannot be trusted, the oc import-image command can be used with the --insecure=true option.

    If the external authenticated registry is secure, the registry certificate must be trusted on the master hosts, because they run the registry import controller.

    Copy the certificate to /etc/pki/ca-trust/source/anchors/:

    $ sudo cp <my.registry.example.com.crt> /etc/pki/ca-trust/source/anchors/<my.registry.example.com.crt>
  4. Run the update-ca-trust command:

    $ sudo update-ca-trust
  5. Restart the master services on all the master hosts:

    $ sudo master-restart api
    $ sudo master-restart controllers
  6. The certificate for the external registry should be trusted in the OKD registry:

    $ for i in pem openssl java; do
        oc create configmap ca-trust-extracted-${i} --from-file /etc/pki/ca-trust/extracted/${i}
        oc set volume dc/docker-registry --add -m /etc/pki/ca-trust/extracted/${i} --configmap-name=ca-trust-extracted-${i} --name ca-trust-extracted-${i}
      done

    There is no official procedure currently for adding the certificate to the registry pod, but the above workaround can be used.

    This workaround creates configmaps with all the trusted certificates from the system running those commands, so the recommendation is to run it from a clean system where just the required certificates are trusted.

  7. Alternatively, modify the registry image so that it trusts the proper certificates by rebuilding the image with a Dockerfile such as:

    FROM registry.access.redhat.com/openshift3/ose-docker-registry:v3.6
    ADD <my.registry.example.com.crt> /etc/pki/ca-trust/source/anchors/
    USER 0
    RUN update-ca-trust extract
    USER 1001
  8. Rebuild the image, push it to a docker registry, and use that image as spec.template.spec.containers["name":"registry"].image in the registry deploymentconfig:

    $ oc patch dc docker-registry -p '{"spec":{"template":{"spec":{"containers":[{"name":"registry","image":"myregistry.example.com/openshift3/ose-docker-registry:latest"}]}}}}'

To add the imagePolicyConfig configuration at installation, the openshift_master_image_policy_config variable can be used with a JSON-formatted string containing the entire imagePolicyConfig configuration, for example:

    openshift_master_image_policy_config={"imagePolicyConfig":{"allowedRegistriesForImport":[{"domainName":"docker.io"},{"domainName":"*.docker.io"},{"domainName":"*.redhat.com"},{"domainName":"my.registry.example.com"}]}}

For more information about the ImagePolicy, see the ImagePolicy admission plug-in section.

OKD registry integration

You can install OKD as a stand-alone container registry to provide only the registry capabilities, but with the advantages of running in an OKD platform.

For more information about the OKD registry, see Installing a Stand-alone Deployment of OpenShift Container Registry.

To integrate the OKD registry, all of the previous sections apply. From the OKD point of view, it is treated as an external registry, but there are some extra tasks to perform, because it is a multi-tenant registry and the OKD authorization model applies. Because the registry is independent, it does not create a matching project in its own environment when a new project is created in the cluster.

Connect the registry project with the cluster

As the registry is a full OKD environment with a registry pod and a web interface, the process to create a new project in the registry is performed with the oc new-project or oc create commands, or through the web interface.

Once the project has been created, the usual service accounts (builder, default, and deployer) are created automatically, and the project administrator user is granted permissions. Different users can be authorized to push and pull images, as can "anonymous" users.

There can be several use cases, such as allowing all users to pull images from this new project within the registry. However, if you want a 1:1 project relationship between OKD and the registry, where users can push and pull images from that specific project, some steps are required.

The registry web console shows a token to be used for pull and push operations, but the token shown there is a session token, so it expires. Creating a service account with specific permissions allows the administrator to limit the permissions of the service account so that, for example, different service accounts can be used for pushing and pulling images. A user then does not have to deal with token expiration, secret recreation, and other such tasks, because service account tokens do not expire.

Procedure
  1. Create a new project:

    $ oc new-project <my_project>
  2. Create a registry project:

    $ oc new-project <registry_project>
  3. Create a service account in the registry project:

    $ oc create serviceaccount <my_serviceaccount> -n <registry_project>
  4. Give permissions to push and pull images using the registry-editor role:

    $ oc adm policy add-role-to-user registry-editor -z <my_serviceaccount> -n <registry_project>

    If only pull permissions are required, the registry-viewer role can be used.

  5. Get the service account token:

    $ TOKEN=$(oc sa get-token <my_serviceaccount> -n <registry_project>)
  6. Use the token as the password to create a dockercfg secret:

    $ oc create secret docker-registry <my_registry> \
        --docker-server=<myregistry.example.com> --docker-username=<notused> --docker-password=${TOKEN} --docker-email=<me@example.com>
  7. Use the dockercfg secret to pull images from the registry by linking the secret to the service account performing the pull operations. The default service account to pull images is named default:

    $ oc secrets link default <my_registry> --for=pull
  8. For pushing images using the S2I feature, the dockercfg secret is mounted in the S2I pod, so it needs to be linked to the proper service account that performs the build. The default service account used to build images is named builder:

    $ oc secrets link builder <my_registry>
  9. In the buildconfig, the secret should be specified for push or pull operations:

    "type": "Source",
    "sourceStrategy": {
        "from": {
            "kind": "DockerImage",
            "name": "<myregistry.example.com/registry_project/my_image:stable>"
        },
        "pullSecret": {
            "name": "<my_registry>"
        },
    ...[OUTPUT ABBREVIATED]...
    "output": {
        "to": {
            "kind": "DockerImage",
            "name": "<myregistry.example.com/registry_project/my_image:latest>"
        },
        "pushSecret": {
            "name": "<my_registry>"
        },
    ...[OUTPUT ABBREVIATED]...