Configuring for Red Hat Virtualization

You can configure OKD for Red Hat Virtualization by creating a bastion virtual machine and using it to install OKD.

Creating the bastion virtual machine

Create a bastion virtual machine in Red Hat Virtualization to install OKD.

Procedure

  1. Log in to the Manager machine by using SSH.

  2. Create a temporary bastion installation directory, for example, /bastion_installation, for the installation files.

  3. Create an encrypted /bastion_installation/secure_vars.yaml file with ansible-vault and record the password:

    # ansible-vault create secure_vars.yaml
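    If you need to review or change the encrypted values later, use the ansible-vault view or edit subcommands, which prompt for the same vault password:

      # ansible-vault view secure_vars.yaml
      # ansible-vault edit secure_vars.yaml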
  4. Add the following parameter values to the secure_vars.yaml file:

    engine_password: <Manager_password> (1)
    bastion_root_password: <bastion_root_password> (2)
    rhsub_user: <Red_Hat_Subscription_Manager_username> (3)
    rhsub_pass: <Red_Hat_Subscription_Manager_password>
    rhsub_pool: <Red_Hat_Subscription_Manager_pool_id> (4)
    root_password: <OpenShift_node_root_password> (5)
    engine_cafile: <RHVM_CA_certificate> (6)
    oreg_auth_user: <image_registry_authentication_username> (7)
    oreg_auth_password: <image_registry_authentication_password>

    (1) Password for logging in to the Administration Portal.
    (2) Root password for the bastion virtual machine.
    (3) Red Hat Subscription Manager credentials.
    (4) Pool ID of the Red Hat Virtualization Manager subscription pool.
    (5) OKD root password.
    (6) Red Hat Virtualization Manager CA certificate. The engine_cafile value is required if you are not running the playbook from the Manager machine. The default location of the Manager CA certificate is /etc/pki/ovirt-engine/ca.pem.
    (7) If you are using an image registry that requires authentication, add the credentials.
  5. Save the file.

  6. Obtain the Red Hat Enterprise Linux KVM Guest Image download link:

    1. Navigate to Red Hat Customer Portal: Download Red Hat Enterprise Linux.

    2. In the Product Software tab, locate the Red Hat Enterprise Linux KVM Guest Image.

    3. Right-click Download Now, copy the link, and save it.

      The link is time-sensitive and must be copied just before you create the bastion virtual machine.
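      Because the link expires quickly, you can check it immediately before use. A HEAD request with curl, substituting your copied link for the placeholder, should return an HTTP 200 status:

        # curl -sI '<RHEL_KVM_guest_image_download_link>' | head -n 1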

  7. Create the /bastion_installation/create-bastion-machine-playbook.yaml file with the following content and update its parameter values:

    ---
    - name: Create a bastion machine
      hosts: localhost
      connection: local
      gather_facts: false
      no_log: true
      vars:
        engine_url: https://<Manager_FQDN>/ovirt-engine/api (1)
        engine_user: admin@internal
        engine_password: "{{ engine_password }}"
        engine_cafile: /etc/pki/ovirt-engine/ca.pem
        qcow_url: <RHEL_KVM_guest_image_download_link> (2)
        template_cluster: Default
        template_name: rhelguest7
        template_memory: 4GiB
        template_cpu: 2
        wait_for_ip: true
        debug_vm_create: false
        vms:
          - name: rhel-bastion
            cluster: "{{ template_cluster }}"
            profile:
              cores: 2
            template: "{{ template_name }}"
            root_password: "{{ root_password }}"
            ssh_key: "{{ lookup('file', '/root/.ssh/id_rsa_ssh_ocp_admin.pub') }}"
            state: running
            cloud_init:
              custom_script: |
                rh_subscription:
                  username: "{{ rhsub_user }}"
                  password: "{{ rhsub_pass }}"
                  auto-attach: true
                  disable-repo: ['*']
                  # 'rhel-7-server-rhv-4.2-manager-rpms' supports RHV 4.2 and 4.3
                  enable-repo: ['rhel-7-server-rpms', 'rhel-7-server-extras-rpms', 'rhel-7-server-ansible-2.7-rpms', 'rhel-7-server-ose-3.11-rpms', 'rhel-7-server-supplementary-rpms', 'rhel-7-server-rhv-4.2-manager-rpms']
                packages:
                  - ansible
                  - ovirt-ansible-roles
                  - openshift-ansible
                  - python-ovirt-engine-sdk4
      pre_tasks:
        - name: Create an ssh key-pair for OpenShift admin
          user:
            name: root
            generate_ssh_key: yes
            ssh_key_file: .ssh/id_rsa_ssh_ocp_admin
      roles:
        - oVirt.image-template
        - oVirt.vm-infra

    - name: post installation tasks on the bastion machine
      hosts: rhel-bastion
      tasks:
        - name: create ovirt-engine PKI dir
          file:
            state: directory
            dest: /etc/pki/ovirt-engine/
        - name: Copy the engine ca cert to the bastion machine
          copy:
            src: "{{ engine_cafile }}"
            dest: "{{ engine_cafile }}"
        - name: Copy the secured vars to the bastion machine
          copy:
            src: secure_vars.yaml
            dest: secure_vars.yaml
            decrypt: false
        - file:
            state: directory
            path: /root/.ssh
        - name: copy the OpenShift_admin keypair to the bastion machine
          copy:
            src: "{{ item }}"
            dest: "{{ item }}"
            mode: 0600
          with_items:
            - /root/.ssh/id_rsa_ssh_ocp_admin
            - /root/.ssh/id_rsa_ssh_ocp_admin.pub

    (1) FQDN of the Manager machine.
    (2) <qcow_url> is the download link of the Red Hat Enterprise Linux KVM Guest Image. The Red Hat Enterprise Linux KVM Guest Image includes the cloud-init package, which is required by this playbook. If you are not using Red Hat Enterprise Linux, download the cloud-init package and install it manually before running this playbook.
  8. Create the bastion virtual machine:

    # ansible-playbook -i localhost create-bastion-machine-playbook.yaml -e @secure_vars.yaml --ask-vault-pass
  9. Log in to the Administration Portal.

  10. Click Compute → Virtual Machines to verify that the rhel-bastion virtual machine was created successfully.

Installing OKD with the bastion virtual machine

Install OKD by using the bastion virtual machine in Red Hat Virtualization.

Procedure

  1. Log in to rhel-bastion.
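    For example, from the Manager machine you can connect with the key pair that the bastion playbook created, substituting the bastion virtual machine's address for the placeholder:

      # ssh -i /root/.ssh/id_rsa_ssh_ocp_admin root@<rhel-bastion_address>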

  2. Create an install_ocp.yaml file that contains the following content:

    ---
    - name: OpenShift on RHV
      hosts: localhost
      connection: local
      gather_facts: false
      vars_files:
        - vars.yaml
        - secure_vars.yaml
      pre_tasks:
        - ovirt_auth:
            url: "{{ engine_url }}"
            username: "{{ engine_user }}"
            password: "{{ engine_password }}"
            insecure: "{{ engine_insecure }}"
            ca_file: "{{ engine_cafile | default(omit) }}"
      roles:
        - role: openshift_ovirt

    - import_playbook: setup_dns.yaml
    - import_playbook: /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
    - import_playbook: /usr/share/ansible/openshift-ansible/playbooks/openshift-node/network_manager.yml
    - import_playbook: /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
  3. Create a setup_dns.yaml file that contains the following content:

    - hosts: masters
      strategy: free
      tasks:
        - shell: "echo {{ ansible_default_ipv4.address }} {{ inventory_hostname }} etcd.{{ inventory_hostname.split('.', 1)[1] }} openshift-master.{{ inventory_hostname.split('.', 1)[1] }} openshift-public-master.{{ inventory_hostname.split('.', 1)[1] }} docker-registry-default.apps.{{ inventory_hostname.split('.', 1)[1] }} webconsole.openshift-web-console.svc registry-console-default.apps.{{ inventory_hostname.split('.', 1)[1] }} >> /etc/hosts"
          when: openshift_ovirt_all_in_one is defined | ternary((openshift_ovirt_all_in_one | bool), false)
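    For example, for a hypothetical all-in-one master named master0.example.com with the address 10.0.0.10, this task appends the following single line to /etc/hosts (the domain part is everything after the first dot of the inventory host name):

      10.0.0.10 master0.example.com etcd.example.com openshift-master.example.com openshift-public-master.example.com docker-registry-default.apps.example.com webconsole.openshift-web-console.svc registry-console-default.apps.example.com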
  4. Create an /etc/ansible/openshift_3_11.hosts Ansible inventory file that contains the following content:

    [workstation]
    localhost ansible_connection=local

    [all:vars]
    openshift_ovirt_dns_zone="{{ public_hosted_zone }}"
    openshift_web_console_install=true
    openshift_master_overwrite_named_certificates=true
    openshift_master_cluster_hostname="openshift-master.{{ public_hosted_zone }}"
    openshift_master_cluster_public_hostname="openshift-public-master.{{ public_hosted_zone }}"
    openshift_master_default_subdomain="{{ public_hosted_zone }}"
    openshift_public_hostname="{{ openshift_master_cluster_public_hostname }}"
    openshift_deployment_type=openshift-enterprise
    openshift_service_catalog_image_version="{{ openshift_image_tag }}"

    [OSEv3:vars]
    # General variables
    debug_level=1
    containerized=false
    ansible_ssh_user=root
    os_firewall_use_firewalld=true
    openshift_enable_excluders=false
    openshift_install_examples=false
    openshift_clock_enabled=true
    openshift_debug_level="{{ debug_level }}"
    openshift_node_debug_level="{{ node_debug_level | default(debug_level,true) }}"
    osn_storage_plugin_deps=[]
    openshift_master_bootstrap_auto_approve=true
    openshift_master_bootstrap_auto_approver_node_selector={"node-role.kubernetes.io/master":"true"}
    osm_controller_args={"experimental-cluster-signing-duration": ["20m"]}
    osm_default_node_selector="node-role.kubernetes.io/compute=true"
    openshift_enable_service_catalog=false

    # Docker
    container_runtime_docker_storage_type=overlay2
    openshift_docker_use_system_container=false

    [OSEv3:children]
    nodes
    masters
    etcd
    lb

    [masters]
    [nodes]
    [etcd]
    [lb]
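    The [masters], [nodes], [etcd], and [lb] groups are empty placeholders at this point. You can verify that the file parses correctly with the standard ansible-inventory command:

      # ansible-inventory -i /etc/ansible/openshift_3_11.hosts --list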
  5. Obtain the Red Hat Enterprise Linux KVM Guest Image download link:

    1. Navigate to Red Hat Customer Portal: Download Red Hat Enterprise Linux.

    2. In the Product Software tab, locate the Red Hat Enterprise Linux KVM Guest Image.

    3. Right-click Download Now, copy the link, and save it.

      Do not use the link that you copied when you created the bastion virtual machine. The download link is time-sensitive and must be copied just before you run the installation playbook.

  6. Create the vars.yaml file with the following content and update its parameter values:

    ---
    # For detailed documentation of variables, see
    # openshift_ovirt: https://github.com/openshift/openshift-ansible/tree/master/roles/openshift_ovirt#role-variables
    # openshift installation: https://github.com/openshift/openshift-ansible/tree/master/inventory
    engine_url: https://<Manager_FQDN>/ovirt-engine/api (1)
    engine_user: admin@internal
    engine_password: "{{ engine_password }}"
    engine_insecure: false
    engine_cafile: /etc/pki/ovirt-engine/ca.pem

    openshift_ovirt_vm_manifest:
      - name: 'master'
        count: 1
        profile: 'master_vm'
      - name: 'compute'
        count: 0
        profile: 'node_vm'
      - name: 'lb'
        count: 0
        profile: 'node_vm'
      - name: 'etcd'
        count: 0
        profile: 'node_vm'
      - name: infra
        count: 0
        profile: node_vm

    # Currently, only all-in-one installation (`openshift_ovirt_all_in_one: true`) is supported.
    # Multi-node installation (master and node VMs installed separately) will be supported in a future release.
    openshift_ovirt_all_in_one: true

    openshift_ovirt_cluster: Default
    openshift_ovirt_data_store: data
    openshift_ovirt_ssh_key: "{{ lookup('file', '/root/.ssh/id_rsa_ssh_ocp_admin.pub') }}"

    public_hosted_zone:
    # Uncomment to disable install-time checks, for smaller scale installations
    #openshift_disable_check: memory_availability,disk_availability,docker_image_availability

    qcow_url: <RHEL_KVM_guest_image_download_link> (2)
    image_path: /var/tmp
    template_name: rhelguest7
    template_cluster: "{{ openshift_ovirt_cluster }}"
    template_memory: 4GiB
    template_cpu: 1
    template_disk_storage: "{{ openshift_ovirt_data_store }}"
    template_disk_size: 100GiB
    template_nics:
      - name: nic1
        profile_name: ovirtmgmt
        interface: virtio

    debug_vm_create: false
    wait_for_ip: true
    vm_infra_wait_for_ip_retries: 30
    vm_infra_wait_for_ip_delay: 20

    node_item: &node_item
      cluster: "{{ openshift_ovirt_cluster }}"
      template: "{{ template_name }}"
      memory: "8GiB"
      cores: "2"
      high_availability: true
      disks:
        - name: docker
          size: 15GiB
          interface: virtio
          storage_domain: "{{ openshift_ovirt_data_store }}"
        - name: openshift
          size: 30GiB
          interface: virtio
          storage_domain: "{{ openshift_ovirt_data_store }}"
      state: running
      cloud_init:
        root_password: "{{ root_password }}"
        authorized_ssh_keys: "{{ openshift_ovirt_ssh_key }}"
        custom_script: "{{ cloud_init_script_node | to_nice_yaml }}"

    openshift_ovirt_vm_profile:
      master_vm:
        <<: *node_item
        memory: 16GiB
        cores: "{{ vm_cores | default(4) }}"
        disks:
          - name: docker
            size: 15GiB
            interface: virtio
            storage_domain: "{{ openshift_ovirt_data_store }}"
          - name: openshift_local
            size: 30GiB
            interface: virtio
            storage_domain: "{{ openshift_ovirt_data_store }}"
          - name: etcd
            size: 25GiB
            interface: virtio
            storage_domain: "{{ openshift_ovirt_data_store }}"
        cloud_init:
          root_password: "{{ root_password }}"
          authorized_ssh_keys: "{{ openshift_ovirt_ssh_key }}"
          custom_script: "{{ cloud_init_script_master | to_nice_yaml }}"
      node_vm:
        <<: *node_item
      etcd_vm:
        <<: *node_item
      lb_vm:
        <<: *node_item

    cloud_init_script_node: &cloud_init_script_node
      packages:
        - ovirt-guest-agent
      runcmd:
        - sed -i 's/# ignored_nics =.*/ignored_nics = docker0 tun0 /' /etc/ovirt-guest-agent.conf
        - systemctl enable ovirt-guest-agent
        - systemctl start ovirt-guest-agent
        - mkdir -p /var/lib/docker
        - mkdir -p /var/lib/origin/openshift.local.volumes
        - /usr/sbin/mkfs.xfs -L dockerlv /dev/vdb
        - /usr/sbin/mkfs.xfs -L ocplv /dev/vdc
      mounts:
        - [ '/dev/vdb', '/var/lib/docker', 'xfs', 'defaults,gquota' ]
        - [ '/dev/vdc', '/var/lib/origin/openshift.local.volumes', 'xfs', 'defaults,gquota' ]
      power_state:
        mode: reboot
        message: cloud init finished - boot and install openshift
        condition: True

    cloud_init_script_master:
      <<: *cloud_init_script_node
      runcmd:
        - sed -i 's/# ignored_nics =.*/ignored_nics = docker0 tun0 /' /etc/ovirt-guest-agent.conf
        - systemctl enable ovirt-guest-agent
        - systemctl start ovirt-guest-agent
        - mkdir -p /var/lib/docker
        - mkdir -p /var/lib/origin/openshift.local.volumes
        - mkdir -p /var/lib/etcd
        - /usr/sbin/mkfs.xfs -L dockerlv /dev/vdb
        - /usr/sbin/mkfs.xfs -L ocplv /dev/vdc
        - /usr/sbin/mkfs.xfs -L etcdlv /dev/vdd
      mounts:
        - [ '/dev/vdb', '/var/lib/docker', 'xfs', 'defaults,gquota' ]
        - [ '/dev/vdc', '/var/lib/origin/openshift.local.volumes', 'xfs', 'defaults,gquota' ]
        - [ '/dev/vdd', '/var/lib/etcd', 'xfs', 'defaults,gquota' ]

    (1) FQDN of the Manager machine.
    (2) <qcow_url> is the download link of the Red Hat Enterprise Linux KVM Guest Image. The Red Hat Enterprise Linux KVM Guest Image includes the cloud-init package, which is required by this playbook. If you are not using Red Hat Enterprise Linux, download the cloud-init package and install it manually before running this playbook.
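    The profiles in this file are built with YAML anchors and merge keys: node_item: &node_item defines a reusable mapping, and <<: *node_item merges it into each profile, with any key defined locally overriding the merged value. A minimal illustration of the mechanism, with hypothetical values:

      defaults: &defaults
        memory: 8GiB
        cores: "2"
      master_vm:
        <<: *defaults    # merges memory and cores from the anchor
        memory: 16GiB    # a local key overrides the merged value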
  7. Install OKD:

    # export ANSIBLE_ROLES_PATH="/usr/share/ansible/roles/:/usr/share/ansible/openshift-ansible/roles"
    # export ANSIBLE_JINJA2_EXTENSIONS="jinja2.ext.do"
    # ansible-playbook -i /etc/ansible/openshift_3_11.hosts install_ocp.yaml -e @vars.yaml -e @secure_vars.yaml --ask-vault-pass
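    When the playbooks finish, you can confirm that the cluster is up from the master virtual machine, for example:

      # oc get nodes

    All nodes should report a Ready status.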
  8. Create DNS entries for the routers, one for each infrastructure instance.

  9. Configure round-robin routing so that the router can pass traffic to the applications.

  10. Create a DNS entry for the OKD web console, specifying the IP address of the load balancer node. For an illustration of these records, see the sketch after this procedure.
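A minimal BIND-style sketch of these records, assuming a hypothetical public_hosted_zone of example.com and placeholder addresses: repeating the wildcard record under apps.example.com for each infrastructure instance provides round-robin routing for the router host names, while the web console entry points at the load balancer node:

  openshift-public-master.example.com. IN A 203.0.113.10   ; load balancer node (web console)
  *.apps.example.com.                  IN A 203.0.113.21   ; infrastructure instance 1
  *.apps.example.com.                  IN A 203.0.113.22   ; infrastructure instance 2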