Configuring for Azure

You can configure OKD to use Microsoft Azure load balancers and disks for persistent application data.

Before you begin

Configuring authorization for Microsoft Azure

Azure roles

Configuring Microsoft Azure for OKD requires the following Microsoft Azure role:

Contributor

To create and manage all types of Microsoft Azure resources.

See the Classic subscription administrator roles vs. Azure RBAC roles vs. Azure AD administrator roles documentation for more information.

Permissions

Configuring Microsoft Azure for OKD requires a service principal, which allows the creation and management of Kubernetes service load balancers and disks for persistent storage. The service principal values are defined at installation time and deployed to the Azure configuration file, located at /etc/origin/cloudprovider/azure.conf on OKD master and node hosts.

Procedure

  1. Using the Azure CLI, obtain the account subscription ID:

       # az account list
       [
         {
           "cloudName": "AzureCloud",
           "id": "<subscription>", (1)
           "isDefault": false,
           "name": "Pay-As-You-Go",
           "state": "Enabled",
           "tenantId": "<tenant-id>",
           "user": {
             "name": "admin@example.com",
             "type": "user"
           }
         }
       ]

     (1) The subscription ID to use to create the new permissions.
  2. Create the service principal with the Microsoft Azure role of contributor, scoped to the Microsoft Azure subscription and the resource group. Record the output values for use when defining the inventory. Use the <subscription> value from the previous step in place of <subscription> below:

       # az ad sp create-for-rbac --name openshiftcloudprovider \
         --password <secret> --role contributor \
         --scopes /subscriptions/<subscription>/resourceGroups/<resource-group>

       Retrying role assignment creation: 1/36
       Retrying role assignment creation: 2/36
       {
         "appId": "<app-id>",
         "displayName": "openshiftcloudprovider",
         "name": "http://openshiftcloudprovider",
         "password": "<secret>",
         "tenant": "<tenant-id>"
       }
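The <subscription> value copied by hand in the first step can also be captured programmatically. This is a sketch, not part of the official procedure: the az --query form uses the CLI's standard JMESPath filtering, and the sed line demonstrates the same extraction against saved JSON output. The subscription ID in the demonstration is a made-up placeholder:

```shell
# With the Azure CLI available, query the ID field directly:
#   az account list --query "[0].id" --output tsv
#
# The same field can be pulled from saved "az account list" JSON with sed.
# The ID below is a placeholder, not a real subscription:
json='"id": "00000000-1111-2222-3333-444444444444",'
subscription=$(printf '%s\n' "$json" | sed -n 's/.*"id": "\([^"]*\)".*/\1/p')
echo "$subscription"
```

The captured value can then be substituted into the --scopes argument of az ad sp create-for-rbac.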

Configuring Microsoft Azure objects

Integrating OKD with Microsoft Azure requires the following components or services to create a highly available and full-featured environment.

To ensure that the appropriate number of instances can be launched, request an increase in CPU quota from Microsoft before creating instances.

A resource group

Resource groups contain all Microsoft Azure components for a deployment, including networking, load balancers, virtual machines, and DNS. Quotas and permissions can be applied to resource groups to control and manage resources deployed on Microsoft Azure. Resource groups are created and defined per geographic region. All resources created for an OKD environment should be within the same geographic region and within the same resource group.

See Azure Resource Manager overview for more information.

Azure Virtual Networks

Azure Virtual Networks are used to isolate Azure cloud networks from one another. Instances and load balancers use the virtual network to allow communication with each other and to and from the Internet. The virtual network allows for the creation of one or many subnets to be used by components within a resource group. You can also connect virtual networks to various VPN services, allowing communication with on-premises services.

See What is Azure Virtual Network? for more information.

Azure DNS

Azure offers a managed DNS service that provides internal and Internet-accessible host name and load balancer resolution. The reference environment uses a DNS zone to host three DNS A records to allow for mapping of public IPs to OKD resources and a bastion.

See What is Azure DNS? for more information.

Load balancing

Azure load balancers allow network connectivity for scaling and high availability of services running on virtual machines within the Azure environment.

See What is Azure Load Balancer? for more information.

Storage Account

Storage Accounts allow resources, such as virtual machines, to access the different types of storage components offered by Microsoft Azure. During installation, the storage account defines the location of the object-based blob storage used for the OKD registry.

See Introduction to Azure Storage for more information, or the Configuring the OKD registry for Microsoft Azure section for steps to create the storage account for the registry.

Service Principal

Azure offers the ability to create service accounts, which access, manage, or create components within Azure. The service account grants API access to specific services. For example, a service principal allows Kubernetes or OKD instances to request persistent storage and load balancers. Service principals allow for granular access to be given to instances or users for specific functions.

See Application and service principal objects in Azure Active Directory for more information.

Availability Sets

Availability sets ensure that the deployed VMs are distributed across multiple isolated hardware nodes in a cluster. The distribution helps to ensure that when maintenance on the cloud provider hardware occurs, instances will not all run on one specific node.

You should segment instances to different availability sets based on their role. For example, one availability set containing three master hosts, one availability set containing infrastructure hosts, and one availability set containing application hosts. This allows for segmentation and the ability to use external load balancers within OKD.

See Manage the availability of Linux virtual machines for more information.

Network Security Groups

Network Security Groups (NSGs) provide a list of rules to either allow or deny traffic to resources deployed within an Azure Virtual Network. NSGs use numeric priority values and rules to define what items are allowed to communicate with each other. You can place restrictions on where communication is allowed to occur, such as within only the virtual network, from load balancers, or from everywhere.

Priority values allow administrators to control the order in which port communication is allowed or denied.

See Plan virtual networks for more information.

Instance sizes

A successful OKD environment requires instances that meet certain minimum hardware requirements.

See the Minimum Hardware Requirements section in the OKD documentation or Sizes for Cloud Services for more information.

The Azure configuration file

Configuring OKD for Azure requires the /etc/origin/cloudprovider/azure.conf file on each master and node host.

If the file does not exist, you can create it.

  tenantId: <> (1)
  subscriptionId: <> (2)
  aadClientId: <> (3)
  aadClientSecret: <> (4)
  aadTenantId: <> (5)
  resourceGroup: <> (6)
  cloud: <> (7)
  location: <> (8)
  vnetName: <> (9)
  securityGroupName: <> (10)
  primaryAvailabilitySetName: <> (11)

(1) The AAD tenant ID for the subscription that the cluster is deployed in.
(2) The Azure subscription ID that the cluster is deployed in.
(3) The client ID for an AAD application with RBAC access to talk to Azure RM APIs.
(4) The client secret for an AAD application with RBAC access to talk to Azure RM APIs.
(5) Ensure this is the same as the tenant ID (optional).
(6) The Azure resource group name that the Azure VM belongs to.
(7) The specific cloud region. For example, AzurePublicCloud.
(8) The compact-style Azure region. For example, southeastasia (optional).
(9) The virtual network containing instances and used when creating load balancers.
(10) The security group name associated with instances and load balancers.
(11) The availability set to use when creating resources such as load balancers (optional).
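For reference, a filled-in azure.conf might look like the following. The credential, resource group, and location values mirror the example inventory later in this document; the vnetName, securityGroupName, and primaryAvailabilitySetName values are illustrative placeholders:

```yaml
tenantId: 422r3f91-21fe-4esb-vad5-d96dfeooee5d
subscriptionId: 6003c1c9-d10d-4366-86cc-e3ddddcooe2d
aadClientId: v9c97ead-1v7E-4175-93e3-623211bed834
aadClientSecret: s3r3tR3gistryN0special
aadTenantId: 422r3f91-21fe-4esb-vad5-d96dfeooee5d
resourceGroup: openshift
cloud: AzurePublicCloud
location: eastus
vnetName: openshift-vnet                 # illustrative
securityGroupName: openshift-nsg        # illustrative
primaryAvailabilitySetName: ocp-app-instances  # illustrative
```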

The NIC used for accessing the instance must have an internal DNS name set. Otherwise, the node cannot rejoin the cluster, build logs are not displayed in the console, and oc rsh does not work correctly.

Example inventory for OKD on Microsoft Azure

The example inventory below assumes that the following items have been created:

  • A resource group

  • An Azure virtual network

  • One or more network security groups that contain the required OKD ports

  • A storage account

  • A service principal

  • Two load balancers

  • Two or more DNS entries for the routers and for the OKD web console

  • Three Availability Sets

  • Three master instances

  • Three infrastructure instances

  • One or more application instances

The inventory below uses the default storageclass to create persistent volumes to be used by the metrics, logging, and service catalog components managed by a service principal. The registry uses Microsoft Azure blob storage.

If the Microsoft Azure instances use managed disks, provide the following variable in the inventory:

openshift_storageclass_parameters={'kind': 'managed', 'storageaccounttype': 'Premium_LRS'}

or

openshift_storageclass_parameters={'kind': 'managed', 'storageaccounttype': 'Standard_LRS'}

This ensures that the storageclass creates the correct disk type for PVs for the instances deployed. If unmanaged disks are used, the storageclass uses the shared parameter, allowing unmanaged disks to be created for PVs.
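With the managed-disk variables set, the provisioned storageclass would be expected to carry parameters like the following sketch (compare with the kind: Shared parameters of the default azure-standard storageclass shown later in this document):

```yaml
parameters:
  kind: Managed
  storageaccounttype: Premium_LRS
```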

  [OSEv3:children]
  masters
  etcd
  nodes

  [OSEv3:vars]
  ansible_ssh_user=cloud-user
  ansible_become=true

  #cloudprovider
  openshift_cloudprovider_kind=azure
  openshift_cloudprovider_azure_client_id=v9c97ead-1v7E-4175-93e3-623211bed834
  openshift_cloudprovider_azure_client_secret=s3r3tR3gistryN0special
  openshift_cloudprovider_azure_tenant_id=422r3f91-21fe-4esb-vad5-d96dfeooee5d
  openshift_cloudprovider_azure_subscription_id=6003c1c9-d10d-4366-86cc-e3ddddcooe2d
  openshift_cloudprovider_azure_resource_group=openshift
  openshift_cloudprovider_azure_location=eastus
  #endcloudprovider

  oreg_auth_user=service_account (1)
  oreg_auth_password=service_account_token (1)

  openshift_master_api_port=443
  openshift_master_console_port=443
  openshift_hosted_router_replicas=3
  openshift_hosted_registry_replicas=1
  openshift_master_cluster_method=native
  openshift_master_cluster_hostname=openshift-master.example.com
  openshift_master_cluster_public_hostname=openshift-master.example.com
  openshift_master_default_subdomain=apps.openshift.example.com
  openshift_deployment_type=openshift-enterprise

  openshift_master_identity_providers=[{'name': 'idm', 'challenge': 'true', 'login': 'true', 'kind': 'LDAPPasswordIdentityProvider', 'attributes': {'id': ['dn'], 'email': ['mail'], 'name': ['cn'], 'preferredUsername': ['uid']}, 'bindDN': 'uid=admin,cn=users,cn=accounts,dc=example,dc=com', 'bindPassword': 'ldapadmin', 'ca': '/etc/origin/master/ca.crt', 'insecure': 'false', 'url': 'ldap://ldap.example.com/cn=users,cn=accounts,dc=example,dc=com?uid?sub?(memberOf=cn=ose-user,cn=groups,cn=accounts,dc=example,dc=com)'}]

  networkPluginName=redhat/ovs-networkpolicy
  openshift_examples_modify_imagestreams=true

  # Storage Class change to use managed storage
  openshift_storageclass_parameters={'kind': 'managed', 'storageaccounttype': 'Standard_LRS'}

  # service catalog
  openshift_enable_service_catalog=true
  openshift_hosted_etcd_storage_kind=dynamic
  openshift_hosted_etcd_storage_volume_name=etcd-vol
  openshift_hosted_etcd_storage_access_modes=["ReadWriteOnce"]
  openshift_hosted_etcd_storage_volume_size=SC_STORAGE
  openshift_hosted_etcd_storage_labels={'storage': 'etcd'}

  # metrics
  openshift_metrics_install_metrics=true
  openshift_metrics_cassandra_storage_type=dynamic
  openshift_metrics_storage_volume_size=20Gi
  openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"}

  # logging
  openshift_logging_install_logging=true
  openshift_logging_es_pvc_dynamic=true
  openshift_logging_storage_volume_size=50Gi
  openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"}
  openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"}

  # Setup azure blob registry storage
  openshift_hosted_registry_storage_kind=object
  openshift_hosted_registry_storage_azure_blob_accountkey=uZdkVlbca6xzwBqK8VDz15/loLUoc8I6cPfP31ZS+QOSxL6ylWT6CLrcadSqvtNTMgztxH4CGjYfVnRNUhvMiA==
  openshift_hosted_registry_storage_provider=azure_blob
  openshift_hosted_registry_storage_azure_blob_accountname=registry
  openshift_hosted_registry_storage_azure_blob_container=registry
  openshift_hosted_registry_storage_azure_blob_realm=core.windows.net

  [masters]
  ocp-master-1
  ocp-master-2
  ocp-master-3

  [etcd]
  ocp-master-1
  ocp-master-2
  ocp-master-3

  [nodes]
  ocp-master-1 openshift_node_group_name="node-config-master"
  ocp-master-2 openshift_node_group_name="node-config-master"
  ocp-master-3 openshift_node_group_name="node-config-master"
  ocp-infra-1 openshift_node_group_name="node-config-infra"
  ocp-infra-2 openshift_node_group_name="node-config-infra"
  ocp-infra-3 openshift_node_group_name="node-config-infra"
  ocp-app-1 openshift_node_group_name="node-config-compute"
(1) If you use a container registry that requires authentication, such as the default container image registry, specify the credentials for that account. See Accessing and Configuring the Red Hat Registry.

Configuring OKD for Microsoft Azure

You can configure OKD for Microsoft Azure in two ways:

Configuring OKD for Azure using Ansible

You can configure OKD for Azure at installation time or by running the Ansible playbooks against the inventory file after installation.

Add the following to the Ansible inventory file located at /etc/ansible/hosts by default to configure your OKD environment for Microsoft Azure:

  [OSEv3:vars]
  openshift_cloudprovider_kind=azure
  openshift_cloudprovider_azure_client_id=<app_ID> (1)
  openshift_cloudprovider_azure_client_secret=<secret> (2)
  openshift_cloudprovider_azure_tenant_id=<tenant_ID> (3)
  openshift_cloudprovider_azure_subscription_id=<subscription> (4)
  openshift_cloudprovider_azure_resource_group=<resource_group> (5)
  openshift_cloudprovider_azure_location=<location> (6)

(1) The app ID value for the service principal.
(2) The secret containing the password for the service principal.
(3) The tenant in which the service principal exists.
(4) The subscription used by the service principal.
(5) The resource group where the service account exists.
(6) The Microsoft Azure location where the resource group exists.

Installing with Ansible also creates and configures the following files to fit your Microsoft Azure environment:

  • /etc/origin/cloudprovider/azure.conf

  • /etc/origin/master/master-config.yaml

  • /etc/origin/node/node-config.yaml

Manually configuring OKD for Microsoft Azure

Manually configuring master hosts for Microsoft Azure

Perform the following on all master hosts.

Procedure

  1. Edit the master configuration file located at /etc/origin/master/master-config.yaml by default on all masters and update the contents of the apiServerArguments and controllerArguments sections:

       kubernetesMasterConfig:
         ...
         apiServerArguments:
           cloud-provider:
           - "azure"
           cloud-config:
           - "/etc/origin/cloudprovider/azure.conf"
         controllerArguments:
           cloud-provider:
           - "azure"
           cloud-config:
           - "/etc/origin/cloudprovider/azure.conf"

     When triggering a containerized installation, only the /etc/origin and /var/lib/origin directories are mounted to the master and node container. Therefore, ensure master-config.yaml is in the /etc/origin/master directory instead of /etc/.

  2. When you configure OKD for Microsoft Azure using Ansible, the /etc/origin/cloudprovider/azure.conf file is created automatically. Because you are manually configuring OKD for Microsoft Azure, you must create the file on all node instances and include the following:

       tenantId: <tenant_ID> (1)
       subscriptionId: <subscription> (2)
       aadClientId: <app_ID> (3)
       aadClientSecret: <secret> (4)
       aadTenantId: <tenant_ID> (5)
       resourceGroup: <resource_group> (6)
       location: <location> (7)

     (1) The tenant in which the service principal exists.
     (2) The subscription used by the service principal.
     (3) The appID value for the service principal.
     (4) The secret containing the password for the service principal.
     (5) The tenant in which the service principal exists.
     (6) The resource group where the service account exists.
     (7) The Microsoft Azure location where the resource group exists.
  3. Restart the OKD master services:

       # master-restart api
       # master-restart controllers

Manually configuring node hosts for Microsoft Azure

Perform the following on all node hosts.

Procedure

  1. Edit the appropriate node configuration map and update the contents of the kubeletArguments section:

       kubeletArguments:
         cloud-provider:
         - "azure"
         cloud-config:
         - "/etc/origin/cloudprovider/azure.conf"

The NIC used for accessing the instance must have an internal DNS name set. Otherwise, the node cannot rejoin the cluster, build logs are not displayed in the console, and oc rsh does not work correctly.

  2. Restart the OKD services on all nodes:

       # systemctl restart atomic-openshift-node

Configuring the OKD registry for Microsoft Azure

Microsoft Azure provides object cloud storage that OKD can use to store container images using the OKD container image registry.

For more information, see Cloud Storage in the Azure documentation.

You can configure the registry either using Ansible or manually by configuring the registry configuration file.

Prerequisites

You must create a storage account to host the registry images before installation. The following procedure creates the storage account that is used during installation for image storage.

You can use Microsoft Azure blob storage for storing container images. The OKD registry uses blob storage so that the registry can grow dynamically in size without intervention from an administrator.

  1. Create an Azure storage account:

       az storage account create \
         --name <account_name> \
         --resource-group <resource_group> \
         --location <location> \
         --sku Standard_LRS

     This creates an account key. To view the account key:

       az storage account keys list \
         --account-name <account-name> \
         --resource-group <resource-group> \
         --output table

       KeyName  Permissions  Value
       key1     Full         <account-key>
       key2     Full         <extra-account-key>

Only one account key value is required for the configuration of the OKD registry.
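The key1 value can also be captured without copying it from the table by hand. This is a sketch: the --query flag below is the Azure CLI's standard JMESPath filtering, and the awk line demonstrates the same extraction against saved table output using a made-up key value:

```shell
# Query the first key directly:
#   az storage account keys list --account-name <account-name> \
#     --resource-group <resource-group> --query "[0].value" --output tsv
#
# Or parse saved table output; the key values below are placeholders:
table='KeyName Permissions Value
key1 Full SAMPLEKEY==
key2 Full OTHERKEY=='
account_key=$(printf '%s\n' "$table" | awk '$1 == "key1" { print $3 }')
echo "$account_key"
```

The captured value can then be supplied to openshift_hosted_registry_storage_azure_blob_accountkey in the inventory, or pasted into the registry configuration file.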

Option 1: Configuring the OKD registry for Azure using Ansible

Procedure

  1. Configure the Ansible inventory for the registry to use the storage account:

       [OSEv3:vars]
       # Azure Registry Configuration
       openshift_hosted_registry_replicas=1 (1)
       openshift_hosted_registry_storage_kind=object
       openshift_hosted_registry_storage_azure_blob_accountkey=<account_key> (2)
       openshift_hosted_registry_storage_provider=azure_blob
       openshift_hosted_registry_storage_azure_blob_accountname=<account_name> (3)
       openshift_hosted_registry_storage_azure_blob_container=<registry> (4)
       openshift_hosted_registry_storage_azure_blob_realm=core.windows.net

     (1) The number of replicas to configure.
     (2) The account key associated with the <account-name>.
     (3) The storage account name.
     (4) The directory used to store the data. registry by default.

Option 2: Manually configuring OKD registry for Microsoft Azure

To use Microsoft Azure object storage, edit the registry's configuration file and mount it to the registry pod.

Procedure

  1. Export the current config.yml:

       $ oc get secret registry-config \
           -o jsonpath='{.data.config\.yml}' -n default | base64 -d \
           >> config.yml.old

  2. Create a new configuration file from the old config.yml:

       $ cp config.yml.old config.yml
  3. Edit the file to include the Azure parameters:

       storage:
         delete:
           enabled: true
         cache:
           blobdescriptor: inmemory
         azure:
           accountname: <account-name> (1)
           accountkey: <account-key> (2)
           container: registry (3)
           realm: core.windows.net (4)

     (1) Replace with the storage account name.
     (2) The account key associated with the <account-name>.
     (3) The directory used to store the data. registry by default.
     (4) The storage realm. core.windows.net by default.
  4. Delete the registry-config secret:

       $ oc delete secret registry-config -n default

  5. Recreate the secret to reference the updated configuration file:

       $ oc create secret generic registry-config \
           --from-file=config.yml -n default

  6. Redeploy the registry to read the updated configuration:

       $ oc rollout latest docker-registry -n default

Verifying the registry is using blob object storage

To verify if the registry is using Microsoft Azure blob storage:

Procedure

  1. After a successful registry deployment, the registry deploymentconfig will always show that the registry is using an emptydir instead of Microsoft Azure blob storage:

       $ oc describe dc docker-registry -n default
       ...
       Mounts:
         ...
         /registry from registry-storage (rw)
       Volumes:
         registry-storage:
           Type: EmptyDir (1)
       ...

     (1) The temporary directory that shares a pod's lifetime.
  2. Check if the /registry mount point is empty. This is the volume Microsoft Azure storage will use:

       $ oc exec \
           $(oc get pod -l deploymentconfig=docker-registry \
           -o=jsonpath='{.items[0].metadata.name}') -i -t -- ls -l /registry
       total 0
  3. If it is empty, it is because the Microsoft Azure blob configuration is performed in the registry-config secret:

       $ oc describe secret registry-config
       Name:         registry-config
       Namespace:    default
       Labels:       <none>
       Annotations:  <none>

       Type:  Opaque

       Data
       ====
       config.yml:  398 bytes
  4. The installer creates a config.yml file with the desired configuration using the extended registry capabilities as seen in Storage in the installation documentation. To view the configuration file, including the storage section where the storage bucket configuration is stored:

       $ oc exec \
           $(oc get pod -l deploymentconfig=docker-registry \
           -o=jsonpath='{.items[0].metadata.name}') \
           cat /etc/registry/config.yml
       version: 0.1
       log:
         level: debug
       http:
         addr: :5000
       storage:
         delete:
           enabled: true
         cache:
           blobdescriptor: inmemory
         azure:
           accountname: registry
           accountkey: uZekVBJBa6xzwAqK8EDz15/hoHUoc8I6cPfP31ZS+QOSxLfo7WT7CLrVPKaqvtNTMgztxH7CGjYfpFRNUhvMiA==
           container: registry
           realm: core.windows.net
       auth:
         openshift:
           realm: openshift
       middleware:
         registry:
         - name: openshift
         repository:
         - name: openshift
           options:
             pullthrough: True
             acceptschema2: True
             enforcequota: False
         storage:
         - name: openshift
     Or you can view the secret:

       $ oc get secret registry-config -o jsonpath='{.data.config\.yml}' | base64 -d
       version: 0.1
       log:
         level: debug
       http:
         addr: :5000
       storage:
         delete:
           enabled: true
         cache:
           blobdescriptor: inmemory
         azure:
           accountname: registry
           accountkey: uZekVBJBa6xzwAqK8EDz15/hoHUoc8I6cPfP31ZS+QOSxLfo7WT7CLrVPKaqvtNTMgztxH7CGjYfpFRNUhvMiA==
           container: registry
           realm: core.windows.net
       auth:
         openshift:
           realm: openshift
       middleware:
         registry:
         - name: openshift
         repository:
         - name: openshift
           options:
             pullthrough: True
             acceptschema2: True
             enforcequota: False
         storage:
         - name: openshift

If using an emptyDir volume, the /registry mountpoint looks like the following:

  $ oc exec \
      $(oc get pod -l deploymentconfig=docker-registry \
      -o=jsonpath='{.items[0].metadata.name}') -i -t -- df -h /registry
  Filesystem  Size  Used  Avail  Use%  Mounted on
  /dev/sdc    30G   226M  30G    1%    /registry

  $ oc exec \
      $(oc get pod -l deploymentconfig=docker-registry \
      -o=jsonpath='{.items[0].metadata.name}') -i -t -- ls -l /registry
  total 0
  drwxr-sr-x. 3 1000000000 1000000000 22 Jun 19 12:24 docker

Configuring OKD to use Microsoft Azure storage

OKD can use Microsoft Azure storage through the persistent volume mechanism. OKD creates the disk in the resource group and attaches the disk to the correct instance.

Procedure

  1. The following storageclass is created when you configure the Azure cloud provider at installation using the openshift_cloudprovider_kind=azure and openshift_cloudprovider_azure_* variables in the Ansible inventory:

       $ oc get --export storageclass azure-standard -o yaml
       apiVersion: storage.k8s.io/v1
       kind: StorageClass
       metadata:
         annotations:
           storageclass.kubernetes.io/is-default-class: "true"
         creationTimestamp: null
         name: azure-standard
       parameters:
         kind: Shared
         storageaccounttype: Standard_LRS
       provisioner: kubernetes.io/azure-disk
       reclaimPolicy: Delete
       volumeBindingMode: Immediate

    If you did not use Ansible to enable OKD and Microsoft Azure integration, you can create the storageclass manually. See the Dynamic provisioning and creating storage classes section for more information.

  2. Currently, the default storageclass kind is shared which means that the Microsoft Azure instances must use unmanaged disks. You can optionally modify this by allowing instances to use managed disks by providing the openshift_storageclass_parameters={'kind': 'Managed', 'storageaccounttype': 'Premium_LRS'} or openshift_storageclass_parameters={'kind': 'Managed', 'storageaccounttype': 'Standard_LRS'} variables in the Ansible inventory file at installation.

Microsoft Azure disks use the ReadWriteOnce access mode, which means the volume can be mounted as read-write by a single node. See the Access modes section of the Architecture guide for more information.
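As a sketch, a persistent volume claim that dynamically provisions an Azure disk through the azure-standard storageclass might look like the following; the claim name and requested size are illustrative:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-claim            # illustrative name
spec:
  accessModes:
  - ReadWriteOnce              # Azure disks mount read-write on a single node
  storageClassName: azure-standard
  resources:
    requests:
      storage: 5Gi             # illustrative size
```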

About Red Hat OpenShift Container Storage

Red Hat OpenShift Container Storage (RHOCS) is a provider-agnostic persistent storage solution for OKD, either in-house or in hybrid clouds. As a Red Hat storage solution, RHOCS is completely integrated with OKD for deployment, management, and monitoring, regardless of whether it is installed on OKD (converged) or with OKD (independent). OpenShift Container Storage is not limited to a single availability zone or node, which makes it more likely to survive an outage. You can find complete instructions for using RHOCS in the RHOCS 3.11 Deployment Guide.

Using the Microsoft Azure external load balancer as a service

OKD can leverage the Microsoft Azure load balancer by exposing services externally using a LoadBalancer service. OKD creates the load balancer in Microsoft Azure and creates the proper firewall rules.

Currently, a bug causes extra variables to be included in the Microsoft Azure infrastructure when using it as a cloud provider and when using it as an external load balancer.

Prerequisites

Ensure that the Azure configuration file located at /etc/origin/cloudprovider/azure.conf is correctly configured with the appropriate objects. See the Manually configuring OKD for Microsoft Azure section for an example /etc/origin/cloudprovider/azure.conf file.

Once the values are added, restart the OKD services on all hosts:

  # systemctl restart atomic-openshift-node
  # master-restart api
  # master-restart controllers

Deploying a sample application using a load balancer

Procedure

  1. Create a new application:

       $ oc new-app openshift/hello-openshift

  2. Expose the load balancer service:

       $ oc expose dc hello-openshift --name='hello-openshift-external' --type='LoadBalancer'

     This creates a LoadBalancer service similar to the following:

       apiVersion: v1
       kind: Service
       metadata:
         labels:
           app: hello-openshift
         name: hello-openshift-external
       spec:
         externalTrafficPolicy: Cluster
         ports:
         - name: port-1
           nodePort: 30714
           port: 8080
           protocol: TCP
           targetPort: 8080
         - name: port-2
           nodePort: 30122
           port: 8888
           protocol: TCP
           targetPort: 8888
         selector:
           app: hello-openshift
           deploymentconfig: hello-openshift
         sessionAffinity: None
         type: LoadBalancer
  3. Verify that the service has been created:

       $ oc get svc
       NAME                       TYPE           CLUSTER-IP       EXTERNAL-IP     PORT(S)                         AGE
       hello-openshift            ClusterIP      172.30.223.255   <none>          8080/TCP,8888/TCP               1m
       hello-openshift-external   LoadBalancer   172.30.99.54     40.121.42.180   8080:30714/TCP,8888:30122/TCP   4m

     The LoadBalancer type and EXTERNAL-IP fields indicate that the service is using Microsoft Azure load balancers to expose the application.

This creates the following required objects in the Azure infrastructure:

  • A load balancer:

       az network lb list -o table
       Location    Name         ProvisioningState    ResourceGroup    ResourceGuid
       ----------  -----------  -------------------  ---------------  ------------------------------------
       eastus      kubernetes   Succeeded            refarch-azr      30ec1980-b7f5-407e-aa4f-e570f06f168d
       eastus      OcpMasterLB  Succeeded            refarch-azr      acb537b2-8a1a-45d2-aae1-ea9eabfaea4a
       eastus      OcpRouterLB  Succeeded            refarch-azr      39087c4c-a5dc-457e-a5e6-b25359244422

To verify that the load balancer is properly configured, run the following from an external host:

  $ curl 40.121.42.180:8080 (1)
  Hello OpenShift!

(1) Replace with the EXTERNAL-IP value and port number from the verification step above.