Configuring for Google Compute Engine

You can configure OKD to access an existing Google Compute Engine (GCE) infrastructure, including using GCE volumes as persistent storage for application data.

Before you begin

Configuring authorization for Google Cloud Platform

Roles

Configuring GCP for OKD requires the following GCP role:

roles/owner

Needed for creating service accounts, cloud storage, instances, images, templates, Cloud DNS entries, and to deploy load balancers and health checks.

Delete permissions might also be required if the user is expected to redeploy the environment during testing phases.

You can also create a service account to avoid using personal user accounts when deploying GCP objects.

See the Understanding roles section of the GCP documentation for more information, including steps for how to configure roles.

Scopes and service accounts

GCP uses scopes to determine if an authenticated identity is authorized to perform operations within a resource. For example, an application with a read-only scope access token can only read data, while an application with a read-write scope access token can also modify data.

Scopes are defined at the GCP API level as URLs, for example https://www.googleapis.com/auth/compute.readonly.

You can specify scopes by using the --scopes=[SCOPE,...] option when creating instances, or you can use the --no-scopes option to create the instance without scopes if you do not want the instance to access the GCP API.
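For example, a minimal sketch of creating an instance with a restricted scope by using the gcloud CLI (the instance name and zone are placeholders):

    $ gcloud compute instances create okd-node-1 \
        --zone us-west1-a \
        --scopes https://www.googleapis.com/auth/compute.readonly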

See the Scopes section of the GCP documentation for more information.

All GCP projects include a default [PROJECT_NUMBER]-compute@developer.gserviceaccount.com service account with project editor permissions.

By default, a newly created instance is automatically enabled to run as the default service account with a default set of access scopes, such as read-only access to Cloud Storage and write access to logging.

You can specify a different service account with the --service-account=SERVICE_ACCOUNT option when creating the instance, or explicitly disable service accounts for the instance by using the --no-service-account option with the gcloud CLI.
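For example, a hedged sketch of creating a dedicated service account and starting an instance that runs as it (the account, instance, zone, and project names are placeholders):

    $ gcloud iam service-accounts create okd-sa --display-name "OKD service account"
    $ gcloud compute instances create okd-node-1 \
        --zone us-west1-a \
        --service-account okd-sa@<project_id>.iam.gserviceaccount.com \
        --scopes cloud-platform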

See the Creating a new service account section of the GCP documentation for more information.

Google Compute Engine objects

Integrating OKD with Google Compute Engine (GCE) requires the following components or services.

A GCP project

A GCP project is the base level organizing entity that forms the basis for creating, enabling, and using all GCP services. This includes managing APIs, enabling billing, adding and removing collaborators, and managing permissions.

See the project resource section in the GCP documentation for more information.

Project IDs are unique identifiers and must be unique across all of Google Cloud Platform. This means you cannot use myproject as a project ID if someone else has already created a project with that ID.
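For example, you can create a project with a unique ID by using the gcloud CLI (the project ID and name shown are only illustrative):

    $ gcloud projects create okd-cluster-4711 --name "OKD cluster"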

Billing

You cannot create new resources unless billing is attached to the project. The new project can be linked to an existing billing account, or you can enter new billing information.

See Create, Modify, or Close Your Billing Account in the GCP documentation for more information.

Cloud identity and access management

Deploying OKD requires the proper permissions. A user must be able to create service accounts, cloud storage, instances, images, templates, Cloud DNS entries, and deploy load balancers and health checks. Delete permissions are also helpful in order to be able to redeploy the environment while testing.

You can create service accounts with specific permissions, then use them to deploy infrastructure components instead of regular users. You can also create roles to limit access to different users or service accounts.
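For example, a minimal sketch of creating a deployment service account and granting it the role listed earlier in this topic (the account name is a placeholder):

    $ gcloud iam service-accounts create okd-deployer --display-name "OKD deployer"
    $ gcloud projects add-iam-policy-binding <project_id> \
        --member serviceAccount:okd-deployer@<project_id>.iam.gserviceaccount.com \
        --role roles/owner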

GCP instances use service accounts to allow applications to call GCP APIs. For example, OKD node hosts can call the GCP disk API to provide a persistent volume to an application.

Access control for the various infrastructure and service resources, as well as fine-grained roles, is available using the IAM service. For more information, see the access control overview section of the GCP documentation.

SSH keys

GCP injects SSH public keys as authorized keys so that you can log in to the created instances by using SSH. You can configure the SSH keys per instance or per project.

You can use existing SSH keys. GCP metadata can help with storing the SSH keys that are injected at boot time in the instances to allow SSH access.
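For example, a sketch of adding an existing public key to the project metadata so that it is injected into the project's instances (the user name and key path are placeholders; each line of the ssh-keys file uses the USERNAME:PUBLIC_KEY format):

    $ printf 'cloud-user:%s\n' "$(cat ~/.ssh/id_rsa.pub)" > ssh-keys.txt
    $ gcloud compute project-info add-metadata --metadata-from-file ssh-keys=ssh-keys.txt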

See the Metadata section of the GCP documentation for more information.

GCP regions and zones

GCP has a global infrastructure that covers regions and availability zones. While deploying OKD in GCP across different zones can help avoid single points of failure, there are some caveats regarding storage.

GCP disks are created within a zone. Therefore, if an OKD node host goes down in zone “A” and the pods move to zone “B”, the persistent storage cannot be attached to those pods because the disks are in a different zone.

Deploying a single-zone or multizone OKD environment is an important decision to make before installing OKD. If you deploy a multizone environment, the recommended setup is to use three different zones in a single region.
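For example, you can list the available regions and the zones in a candidate region before choosing where to deploy (the region is a placeholder):

    $ gcloud compute regions list
    $ gcloud compute zones list --filter="region:us-west1"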

See the GCP documentation on regions and zones and the Kubernetes documentation on multiple zones for more information.

External IP address

For GCP instances to communicate with the Internet, you must attach an external IP address to the instance. An external IP address is also required to communicate with instances deployed in GCP from outside the Virtual Private Cloud (VPC) network.

Requiring an external IP address for internet access is a limitation of the provider. You can configure firewall rules to block incoming external traffic to instances that do not need it.
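For example, a sketch of reserving a static external IP address and assigning it when you create an instance (the names, region, and zone are placeholders):

    $ gcloud compute addresses create okd-master-ip --region us-west1
    $ gcloud compute instances create okd-master-1 \
        --zone us-west1-a \
        --address okd-master-ip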

See the GCP documentation on external IP address for more information.

Cloud DNS

GCP cloud DNS is a DNS service used to publish domain names to the global DNS using GCP DNS servers.

The public cloud DNS zone requires a domain name that you purchased either through Google’s “Domains” service or through a third-party provider. When you create the zone, you must add the name servers provided by Google to the registrar.
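For example, a sketch of creating a public managed zone and retrieving the Google name servers to configure at your registrar (the zone name and domain are placeholders):

    $ gcloud dns managed-zones create okd-public-zone \
        --dns-name "example.com." \
        --description "Public zone for OKD"
    $ gcloud dns managed-zones describe okd-public-zone   # shows the nameServers to add at the registrar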

See the GCP documentation on Cloud DNS for more information.

GCP VPC networks have an internal DNS service that automatically resolves internal host names.

The internal fully qualified domain name (FQDN) for an instance follows the [HOST_NAME].c.[PROJECT_ID].internal format.

See the GCP documentation on Internal DNS for more information.

Load balancing

The GCP load balancing service enables the distribution of traffic across multiple instances in the GCP cloud.

GCP offers several types of load balancing, including network (TCP/UDP), HTTP(S), SSL proxy, TCP proxy, and internal load balancing.

HTTPS and TCP proxy load balancing are the only options that can use HTTPS health checks for master nodes, which check the status of /healthz.

Because HTTPS load balancing requires a custom certificate, this implementation uses TCP Proxy load balancing to simplify the process.
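For example, a hedged sketch of creating an HTTPS health check against the master /healthz endpoint (the health check name is a placeholder):

    $ gcloud compute health-checks create https okd-master-healthz \
        --port 443 \
        --request-path /healthz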

See the GCP documentation on Load balancing for more information.

Instance sizes

A successful OKD environment requires that some minimum hardware requirements are met:

Table 1. Instance sizes

Role     Size
Master   n1-standard-8
Node     n1-standard-4

GCP allows you to create custom instance sizes to fit different requirements. See Creating an Instance with a Custom Machine Type for more information, or see Machine types and OKD Minimum Hardware Requirements for more information about instance sizes.
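For example, a sketch of creating an instance with a custom machine type by using the gcloud CLI (the instance name, zone, and sizing are placeholders):

    $ gcloud compute instances create okd-node-custom \
        --zone us-west1-a \
        --custom-cpu 6 \
        --custom-memory 24GB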

Storage Options

By default, each GCP instance has a small root persistent disk that contains the operating system. When applications running on the instance require more storage space, you can add additional storage options to the instance:

  • Standard persistent disks

  • SSD persistent disks

  • Local SSDs

  • Cloud storage buckets

For more information, see the GCP documentation on Storage options.
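For example, a sketch of creating an SSD persistent disk and attaching it to an existing instance (the disk name, size, instance, and zone are placeholders):

    $ gcloud compute disks create okd-node-storage --size 100GB --type pd-ssd --zone us-west1-a
    $ gcloud compute instances attach-disk okd-node-1 --disk okd-node-storage --zone us-west1-a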

Configuring OKD for GCE

You can configure OKD for GCE in two ways: by using Ansible (Option 1) or by manually configuring the hosts (Option 2).

Option 1: Configuring OKD for GCP using Ansible

You can configure OKD for Google Cloud Platform (GCP) by modifying the Ansible inventory file at installation time or after installation.

Procedure

  1. At minimum, you must define the openshift_cloudprovider_kind, openshift_gcp_project, and openshift_gcp_prefix parameters, as well as the optional openshift_gcp_multizone parameter for multizone deployments and openshift_gcp_network_name if you are not using the default network name.

    Add the following section to the Ansible inventory file at installation to configure your OKD environment for GCP:

    [OSEv3:vars]
    openshift_cloudprovider_kind=gce
    openshift_gcp_project=<projectid> (1)
    openshift_gcp_prefix=<uid> (2)
    openshift_gcp_multizone=False (3)
    openshift_gcp_network_name=<network name> (4)

    (1) Provide the GCP project ID where the existing instances are running. This ID is generated when you create the project in the Google Cloud Platform Console.
    (2) Provide a unique string to identify each OKD cluster. This must be unique across GCP.
    (3) Optionally, set to True to trigger multizone deployments on GCP. Set to False by default.
    (4) Optionally, provide the network name if you are not using the default network.

    Installing with Ansible also creates and configures the following files to fit your GCP environment:

    • /etc/origin/cloudprovider/gce.conf

    • /etc/origin/master/master-config.yaml

    • /etc/origin/node/node-config.yaml

  2. If you are running load balancer services using GCP, the Compute Engine VM node instances require the ocp suffix. For example, if the value of the openshift_gcp_prefix parameter is set to mycluster, you must tag the nodes with myclusterocp. See Adding and Removing Network Tags for more information on how to add network tags to Compute Engine VM instances.
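    For example, a sketch of tagging an existing node instance by using the gcloud CLI (the instance name and zone are placeholders):

        $ gcloud compute instances add-tags okd-node-1 --tags myclusterocp --zone us-west1-a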

  3. Optionally, you can configure multizone support.

    The cluster installation process configures single-zone support by default, but you can configure for multiple zones to avoid single-point-of-failures.

    Because GCP disks are created within a zone, deploying OKD in GCP on different zones can cause problems with storage. If an OKD node host goes down in zone “A” and the pods move to zone “B”, the persistent storage cannot be attached to those pods because the disks are now in a different zone. See Multiple zone limitations in the Kubernetes documentation for more information.

    To enable multizone support using the Ansible inventory file, add the following parameter:

    [OSEv3:vars]
    openshift_gcp_multizone=true

    To return to single-zone support, set the openshift_gcp_multizone value to false and rerun the installation playbook with your updated Ansible inventory file.
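    For example, assuming the inventory file is used with a standard openshift-ansible checkout, the rerun might look like the following (the inventory path is a placeholder):

        $ ansible-playbook -i <inventory_file> playbooks/deploy_cluster.yml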

Option 2: Manually configuring OKD for GCE

Manually configuring master hosts for GCE

Perform the following procedure on all master hosts.

Procedure

  1. Add the GCE parameters to the apiServerArguments and controllerArguments sections of the master configuration file, located at /etc/origin/master/master-config.yaml by default:

    apiServerArguments:
      cloud-provider:
        - "gce"
      cloud-config:
        - "/etc/origin/cloudprovider/gce.conf"
    controllerArguments:
      cloud-provider:
        - "gce"
      cloud-config:
        - "/etc/origin/cloudprovider/gce.conf"
  2. When you configure OKD for GCP using Ansible, the /etc/origin/cloudprovider/gce.conf file is created automatically. Because you are manually configuring OKD for GCP, you must create the file and enter the following:

    [Global]
    project-id = <project-id> (1)
    network-name = <network-name> (2)
    node-tags = <node-tags> (3)
    node-instance-prefix = <instance-prefix> (4)
    multizone = true (5)

    (1) Provide the GCP project ID where the existing instances are running.
    (2) Provide the network name if not using the default.
    (3) Provide the tag for the GCP nodes. It must contain ocp as a suffix. For example, if the value of the node-instance-prefix parameter is set to mycluster, the nodes must be tagged with myclusterocp.
    (4) Provide a unique string to identify your OKD cluster.
    (5) Set to true to trigger multizone deployments on GCP. Set to false by default.

    The cluster installation process configures single-zone support by default.

    Deploying OKD in GCP on different zones can be helpful to avoid single points of failure, but can cause problems with storage. This is because GCP disks are created within a zone. If an OKD node host goes down in zone “A” and its pods move to zone “B”, the persistent storage cannot be attached to those pods because the disks are now in a different zone. See Multiple zone limitations in the Kubernetes documentation for more information.

    For running load balancer services using GCP, the Compute Engine VM node instances require the ocp suffix: <openshift_gcp_prefix>ocp. For example, if the value of the openshift_gcp_prefix parameter is set to mycluster, you must tag the nodes with myclusterocp. See Adding and Removing Network Tags for more information on how to add network tags to Compute Engine VM instances.

  3. Restart the OKD host services:

    # master-restart api
    # master-restart controllers
    # systemctl restart atomic-openshift-node

To return to single-zone support, set the multizone value to false and restart the master and node host services.

Manually configuring node hosts for GCE

Perform the following on all node hosts.

Procedure

  1. Edit the appropriate node configuration map and update the contents of the kubeletArguments section:

    kubeletArguments:
      cloud-provider:
        - "gce"
      cloud-config:
        - "/etc/origin/cloudprovider/gce.conf"

    The nodeName must match the instance name in GCP in order for the cloud provider integration to work properly. The name must also be RFC1123 compliant.
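    For example, you can compare the node names registered in OKD with the instance names in GCP:

        $ oc get nodes -o wide
        $ gcloud compute instances list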

  2. Restart the OKD services on all nodes.

    # systemctl restart atomic-openshift-node

Configuring the OKD registry for GCP

Google Cloud Platform (GCP) provides cloud object storage that the OKD container image registry can use to store container images.

For more information, see Cloud Storage in the GCP documentation.

Prerequisites

You must create the bucket to host the registry images before the installation. The following commands create a regional bucket using the configured service account:

  gsutil mb -c regional -l <region> gs://ocp-registry-bucket
  cat <<EOF > labels.json
  {
    "ocp-cluster": "mycluster"
  }
  EOF
  gsutil label set labels.json gs://ocp-registry-bucket
  rm -f labels.json

A bucket’s data is automatically encrypted using a Google-managed key by default. To specify a different key to encrypt the data, see the Data Encryption Options available in GCP.

See the Creating storage buckets documentation for more information.

Procedure

To configure the Ansible inventory file for the registry to use a Google Cloud Storage (GCS) bucket:

  [OSEv3:vars]
  # GCP Provider Configuration
  openshift_hosted_registry_storage_provider=gcs
  openshift_hosted_registry_storage_kind=object
  openshift_hosted_registry_replicas=1 (1)
  openshift_hosted_registry_storage_gcs_bucket=<bucket_name> (2)
  openshift_hosted_registry_storage_gcs_keyfile=<bucket_keyfile> (3)
  openshift_hosted_registry_storage_gcs_rootdirectory=<registry_directory> (4)

(1) The number of replicas to configure.
(2) The bucket name to use for registry storage.
(3) The path on the installer host where the bucket's keyfile is located, if you use a custom key file to encrypt the data.
(4) The directory used to store the data. Defaults to /registry.

For more information, see Cloud Storage in the GCP documentation.

Manually configuring OKD registry for GCP

To use GCP object storage, edit the registry’s configuration file and mount it to the registry pod.

See the Google Cloud Storage Driver documentation for more information about storage driver configuration files.

Procedure

  1. Export the current /etc/registry/config.yml file:

    $ oc get secret registry-config \
        -o jsonpath='{.data.config\.yml}' -n default | base64 -d \
        >> config.yml.old
  2. Create a new configuration file from the old /etc/registry/config.yml file:

    $ cp config.yml.old config.yml
  3. Edit the file to include the GCP parameters. Specify the bucket and keyfile in the storage section of a registry’s configuration file:

    storage:
      delete:
        enabled: true
      cache:
        blobdescriptor: inmemory
      gcs:
        bucket: ocp-registry (1)
        keyfile: mykeyfile (2)

    (1) Replace with the GCP bucket name.
    (2) A private service account key file in JSON format. If using the Google Application Default Credentials, do not specify the keyfile parameter.
  4. Delete the registry-config secret:

    $ oc delete secret registry-config -n default
  5. Recreate the secret to reference the updated configuration file:

    $ oc create secret generic registry-config \
        --from-file=config.yml -n default
  6. Redeploy the registry to read the updated configuration:

    $ oc rollout latest docker-registry -n default

Verify the registry is using GCP object storage

To verify whether the registry is using GCP bucket storage:

Procedure

  1. After a successful registry deployment using GCP storage, the registry deploymentconfig does not show whether the registry is using an emptyDir volume or GCP bucket storage; the registry-storage volume still appears as an emptyDir:

    $ oc describe dc docker-registry -n default
    ...
    Mounts:
      ...
      /registry from registry-storage (rw)
    Volumes:
      registry-storage:
        Type: EmptyDir (1)
    ...

    (1) The temporary directory that shares a pod's lifetime.
  2. Check whether the /registry mount point is empty. This is the volume that GCP storage uses:

    $ oc exec \
        $(oc get pod -l deploymentconfig=docker-registry \
        -o=jsonpath='{.items[0].metadata.name}') -i -t -- ls -l /registry
    total 0
  3. If it is empty, it is because the GCP bucket configuration is performed in the registry-config secret:

    $ oc describe secret registry-config
    Name:         registry-config
    Namespace:    default
    Labels:       <none>
    Annotations:  <none>

    Type:  Opaque

    Data
    ====
    config.yml:  398 bytes
  4. The installer creates a config.yml file with the desired configuration using the extended registry capabilities as seen in Storage in the installation documentation. To view the configuration file, including the storage section where the storage bucket configuration is stored:

    $ oc exec \
        $(oc get pod -l deploymentconfig=docker-registry \
        -o=jsonpath='{.items[0].metadata.name}') \
        cat /etc/registry/config.yml

    version: 0.1
    log:
      level: debug
    http:
      addr: :5000
    storage:
      delete:
        enabled: true
      cache:
        blobdescriptor: inmemory
      gcs:
        bucket: ocp-registry
    auth:
      openshift:
        realm: openshift
    middleware:
      registry:
        - name: openshift
      repository:
        - name: openshift
          options:
            pullthrough: True
            acceptschema2: True
            enforcequota: False
      storage:
        - name: openshift

    Or you can view the secret:

    $ oc get secret registry-config -o jsonpath='{.data.config\.yml}' | base64 -d
    version: 0.1
    log:
      level: debug
    http:
      addr: :5000
    storage:
      delete:
        enabled: true
      cache:
        blobdescriptor: inmemory
      gcs:
        bucket: ocp-registry
    auth:
      openshift:
        realm: openshift
    middleware:
      registry:
        - name: openshift
      repository:
        - name: openshift
          options:
            pullthrough: True
            acceptschema2: True
            enforcequota: False
      storage:
        - name: openshift

    You can verify that any image push was successful by viewing Storage in the GCP console, then clicking Browser and selecting the bucket, or by running the gsutil command:

    $ gsutil ls gs://ocp-registry/
    gs://ocp-registry/docker/

    $ gsutil du gs://ocp-registry/
    7660385  gs://ocp-registry/docker/registry/v2/blobs/sha256/03/033565e6892e5cc6dd03187d00a4575720a928db111274e0fbf31b410a093c10/data
    7660385  gs://ocp-registry/docker/registry/v2/blobs/sha256/03/033565e6892e5cc6dd03187d00a4575720a928db111274e0fbf31b410a093c10/
    7660385  gs://ocp-registry/docker/registry/v2/blobs/sha256/03/
    ...

If using an emptyDir volume, the /registry mountpoint looks similar to the following:

  $ oc exec \
      $(oc get pod -l deploymentconfig=docker-registry \
      -o=jsonpath='{.items[0].metadata.name}') -i -t -- df -h /registry
  Filesystem   Size   Used   Avail   Use%   Mounted on
  /dev/sdc     30G    226M   30G     1%     /registry

  $ oc exec \
      $(oc get pod -l deploymentconfig=docker-registry \
      -o=jsonpath='{.items[0].metadata.name}') -i -t -- ls -l /registry
  total 0
  drwxr-sr-x. 3 1000000000 1000000000 22 Jun 19 12:24 docker

Configuring OKD to use GCP storage

OKD can use GCP storage through the persistent volume mechanism. OKD creates the disk in GCP and attaches the disk to the correct instance.

GCP disks use the ReadWriteOnce access mode, which means the volume can be mounted as read-write by a single node. See the Access modes section of the Architecture guide for more information.

Procedure

  1. OKD creates the following storageclass, which uses the gce-pd provisioner, when you set the openshift_cloudprovider_kind=gce and openshift_gcp_* variables in the Ansible inventory. If you configured OKD without using Ansible and the storageclass was not created at installation time, you can create it manually:

    $ oc get --export storageclass standard -o yaml
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      annotations:
        storageclass.kubernetes.io/is-default-class: "true"
      creationTimestamp: null
      name: standard
      selfLink: /apis/storage.k8s.io/v1/storageclasses/standard
    parameters:
      type: pd-standard
    provisioner: kubernetes.io/gce-pd
    reclaimPolicy: Delete
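    For example, a minimal PersistentVolumeClaim that uses this storageclass might look like the following sketch (the claim name and requested size are illustrative):

        $ oc create -f - <<EOF
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: gce-pvc-example
        spec:
          accessModes:
          - ReadWriteOnce
          resources:
            requests:
              storage: 10Gi
          storageClassName: standard
        EOF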

    After you request a PV by using the storageclass shown in the previous step, OKD creates a disk in the GCP infrastructure. To verify that the disk was created:

    $ gcloud compute disks list | grep kubernetes
    kubernetes-dynamic-pvc-10ded514-7625-11e8-8c52-42010af00003  us-west1-b  10  pd-standard  READY

About Red Hat OpenShift Container Storage

Red Hat OpenShift Container Storage (RHOCS) is a provider-agnostic persistent storage solution for OKD, either in-house or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OKD for deployment, management, and monitoring, regardless of whether it is installed on OKD (converged) or with OKD (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS 3.11 Deployment Guide.

Using the GCP external load balancer as a service

You can configure OKD to use the GCP load balancer by exposing services externally using a LoadBalancer service. OKD creates the load balancer in GCP and creates the necessary firewall rules.

Procedure

  1. Create a new application:

    $ oc new-app openshift/hello-openshift
  2. Expose the load balancer service:

    $ oc expose dc hello-openshift --name='hello-openshift-external' --type='LoadBalancer'

    This command creates a LoadBalancer service similar to the following example:

    apiVersion: v1
    kind: Service
    metadata:
      labels:
        app: hello-openshift
      name: hello-openshift-external
    spec:
      externalTrafficPolicy: Cluster
      ports:
      - name: port-1
        nodePort: 30714
        port: 8080
        protocol: TCP
        targetPort: 8080
      - name: port-2
        nodePort: 30122
        port: 8888
        protocol: TCP
        targetPort: 8888
      selector:
        app: hello-openshift
        deploymentconfig: hello-openshift
      sessionAffinity: None
      type: LoadBalancer
  3. To verify that the service has been created:

    $ oc get svc
    NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                         AGE
    hello-openshift            ClusterIP      172.30.62.10     <none>          8080/TCP,8888/TCP               20m
    hello-openshift-external   LoadBalancer   172.30.147.214   35.230.97.224   8080:31521/TCP,8888:30843/TCP   19m

    The LoadBalancer type and External IP values indicate that the service is using GCP load balancers to expose the application.

OKD creates the required objects in the GCP infrastructure, such as:

  • Firewall rules:

    $ gcloud compute firewall-rules list | grep k8s
    k8s-4612931a3a47c204-node-http-hc         my-net   INGRESS   1000   tcp:10256
    k8s-fw-a1a8afaa7762811e88c5242010af0000   my-net   INGRESS   1000   tcp:8080,tcp:8888

    These firewall rules are applied to instances tagged with <openshift_gcp_prefix>ocp. For example, if the value of the openshift_gcp_prefix parameter is set to mycluster, you must tag the nodes with myclusterocp. See Adding and Removing Network Tags for more information on how to add network tags to Compute Engine VM instances.

  • Health checks:

    $ gcloud compute http-health-checks list | grep k8s
    k8s-4612931a3a47c204-node   10256   /healthz
  • A load balancer:

    $ gcloud compute target-pools list | grep k8s
    a1a8afaa7762811e88c5242010af0000   us-west1   NONE   k8s-4612931a3a47c204-node

    $ gcloud compute forwarding-rules list | grep a1a8afaa7762811e88c5242010af0000
    a1a8afaa7762811e88c5242010af0000   us-west1   35.230.97.224   TCP   us-west1/targetPools/a1a8afaa7762811e88c5242010af0000

To verify that the load balancer is properly configured, run the following command from an external host:

  $ curl 35.230.97.224:8080
  Hello OpenShift!