Example Inventory Files

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4].

Overview

After getting to know the basics of configuring your own inventory file, you can review the following example inventories, which describe various environment topologies, including using multiple masters for high availability. You can choose an example that matches your requirements, modify it to match your own environment, and use it as your inventory file when running the installation.

The following example inventories use the default set of node groups when setting openshift_node_group_name per host in the [nodes] group. To define and use your own custom node group definitions, the openshift_node_groups variable must also be set; see Defining Node Groups and Host Mappings for details.
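
For example, here is a minimal sketch of a custom node group definition, set in the [OSEv3:vars] section and referenced per host in [nodes]. The node-config-compute-storage group name and its region=storage label are illustrative, not part of the default set, and a full inventory would also need to define any default groups still in use (such as node-config-master):

  # Illustrative only: a custom group that layers an extra label on the
  # compute role. Every group referenced under [nodes] must be defined
  # when openshift_node_groups is set.
  openshift_node_groups=[{'name': 'node-config-compute-storage', 'labels': ['node-role.kubernetes.io/compute=true', 'region=storage']}]

  [nodes]
  node3.example.com openshift_node_group_name='node-config-compute-storage'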

Single Master Examples

You can configure an environment with a single master and multiple nodes, and either a single etcd instance or multiple external etcd hosts.

Moving from a single master cluster to multiple masters after installation is not supported.

Single Master, Single etcd, and Multiple Nodes

The following table describes an example environment for a single master (with a single etcd instance running as a static pod on the same host), two nodes for hosting user applications, and two nodes with the node-role.kubernetes.io/infra=true label for hosting dedicated infrastructure:

  Host Name                  Component/Role(s) to Install
  master.example.com         Master, etcd, and node
  node1.example.com          Compute node
  node2.example.com          Compute node
  infra-node1.example.com    Infrastructure node
  infra-node2.example.com    Infrastructure node

You can see these example hosts present in the [masters], [etcd], and [nodes] sections of the following example inventory file:

Single Master, Single etcd, and Multiple Nodes Inventory File

  # Create an OSEv3 group that contains the masters, nodes, and etcd groups
  [OSEv3:children]
  masters
  nodes
  etcd

  # Set variables common for all OSEv3 hosts
  [OSEv3:vars]
  # SSH user, this user should allow ssh based auth without requiring a password
  ansible_ssh_user=root

  # If ansible_ssh_user is not root, ansible_become must be set to true
  #ansible_become=true

  openshift_deployment_type=origin

  # uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
  #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

  # host group for masters
  [masters]
  master.example.com

  # host group for etcd
  [etcd]
  master.example.com

  # host group for nodes, includes region info
  [nodes]
  master.example.com openshift_node_group_name='node-config-master'
  node1.example.com openshift_node_group_name='node-config-compute'
  node2.example.com openshift_node_group_name='node-config-compute'
  infra-node1.example.com openshift_node_group_name='node-config-infra'
  infra-node2.example.com openshift_node_group_name='node-config-infra'

See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OKD 3.9.

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
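
After saving the inventory, you would typically run the installation playbooks against it. As a minimal sketch, assuming the openshift-ansible repository is cloned to ~/openshift-ansible and checked out at the branch matching your release:

  # Run from the openshift-ansible checkout; the -i flag can be omitted
  # when the inventory is at the default /etc/ansible/hosts location.
  $ cd ~/openshift-ansible
  $ ansible-playbook -i /etc/ansible/hosts playbooks/prerequisites.yml
  $ ansible-playbook -i /etc/ansible/hosts playbooks/deploy_cluster.yml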

Single Master, Multiple etcd, and Multiple Nodes

The following table describes an example environment for a single master, three etcd hosts, two nodes for hosting user applications, and two nodes with the node-role.kubernetes.io/infra=true label for hosting dedicated infrastructure:

  Host Name                  Component/Role(s) to Install
  master.example.com         Master and node
  etcd1.example.com          etcd
  etcd2.example.com          etcd
  etcd3.example.com          etcd
  node1.example.com          Compute node
  node2.example.com          Compute node
  infra-node1.example.com    Dedicated infrastructure node
  infra-node2.example.com    Dedicated infrastructure node

You can see these example hosts present in the [masters], [nodes], and [etcd] sections of the following example inventory file:

Single Master, Multiple etcd, and Multiple Nodes Inventory File

  # Create an OSEv3 group that contains the masters, nodes, and etcd groups
  [OSEv3:children]
  masters
  nodes
  etcd

  # Set variables common for all OSEv3 hosts
  [OSEv3:vars]
  ansible_ssh_user=root
  openshift_deployment_type=origin

  # uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
  #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

  # host group for masters
  [masters]
  master.example.com

  # host group for etcd
  [etcd]
  etcd1.example.com
  etcd2.example.com
  etcd3.example.com

  # host group for nodes, includes region info
  [nodes]
  master.example.com openshift_node_group_name='node-config-master'
  node1.example.com openshift_node_group_name='node-config-compute'
  node2.example.com openshift_node_group_name='node-config-compute'
  infra-node1.example.com openshift_node_group_name='node-config-infra'
  infra-node2.example.com openshift_node_group_name='node-config-infra'

See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OKD 3.9.

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

Multiple Masters Examples

You can configure an environment with multiple masters, multiple etcd hosts, and multiple nodes. Configuring multiple masters for high availability (HA) ensures that the cluster has no single point of failure.

Moving from a single master cluster to multiple masters after installation is not supported.

When configuring multiple masters, the cluster installation process supports the native high availability (HA) method. This method leverages the native HA master capabilities built into OKD and can be combined with any load balancing solution.

If a host is defined in the [lb] section of the inventory file, Ansible installs and configures HAProxy automatically as the load balancing solution. If no host is defined, it is assumed you have pre-configured an external load balancing solution of your choice to balance the master API (port 8443) on all master hosts.

This HAProxy load balancer is intended to demonstrate the API server's HA mode and is not recommended for production environments. If you are deploying to a cloud provider, Red Hat recommends deploying a cloud-native TCP-based load balancer or taking other steps to provide a highly available load balancer.

For an external load balancing solution, you must have the following (a configuration sketch follows this list):

  • A pre-created load balancer virtual IP (VIP) configured for SSL passthrough.

  • A VIP listening on the port specified by the openshift_master_api_port value (8443 by default) and proxying back to all master hosts on that port.

  • A domain name for the VIP registered in DNS.

    • The domain name will become the value of both openshift_master_cluster_public_hostname and openshift_master_cluster_hostname in the OKD installer.
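
As a minimal sketch, the [OSEv3:vars] entries relevant to a pre-configured external load balancer might look like the following; the hostnames are illustrative, and no [lb] group is defined:

  # No [lb] group: an external load balancer is assumed to be pre-configured.
  openshift_master_cluster_method=native
  openshift_master_cluster_hostname=openshift-internal.example.com
  openshift_master_cluster_public_hostname=openshift-cluster.example.com
  # The port the masters serve the API on; the VIP must listen on and
  # proxy back to this same port.
  openshift_master_api_port=8443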

See the External Load Balancer Integrations example on GitHub for more information. For more on the high availability master architecture, see Kubernetes Infrastructure.

The cluster installation process does not currently support multiple HAProxy load balancers in an active-passive setup. See the Load Balancer Administration documentation for post-installation amendments.

To configure multiple masters, refer to the following examples.

Multiple Masters Using Native HA with External Clustered etcd

The following table describes an example environment for three masters using the native HA method, one HAProxy load balancer, three etcd hosts, two nodes for hosting user applications, and two nodes with the node-role.kubernetes.io/infra=true label for hosting dedicated infrastructure:

  Host Name                  Component/Role(s) to Install
  master1.example.com        Master (clustered using native HA) and node
  master2.example.com        Master (clustered using native HA) and node
  master3.example.com        Master (clustered using native HA) and node
  lb.example.com             HAProxy to load balance API master endpoints
  etcd1.example.com          etcd
  etcd2.example.com          etcd
  etcd3.example.com          etcd
  node1.example.com          Compute node
  node2.example.com          Compute node
  infra-node1.example.com    Dedicated infrastructure node
  infra-node2.example.com    Dedicated infrastructure node

You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:

Multiple Masters Using HAProxy Inventory File

  # Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
  # The lb group lets Ansible configure HAProxy as the load balancing solution.
  # Comment lb out if your load balancer is pre-configured.
  [OSEv3:children]
  masters
  nodes
  etcd
  lb

  # Set variables common for all OSEv3 hosts
  [OSEv3:vars]
  ansible_ssh_user=root
  openshift_deployment_type=origin

  # uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
  #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

  # Native high availability cluster method with optional load balancer.
  # If no lb group is defined, the installer assumes that a load balancer has
  # been preconfigured. For installation, the value of
  # openshift_master_cluster_hostname must resolve to the load balancer
  # or to one or all of the masters defined in the inventory if no load
  # balancer is present.
  openshift_master_cluster_method=native
  openshift_master_cluster_hostname=openshift-internal.example.com
  openshift_master_cluster_public_hostname=openshift-cluster.example.com

  # apply updated node defaults
  openshift_node_groups=[{'name': 'node-config-all-in-one', 'labels': ['node-role.kubernetes.io/master=true', 'node-role.kubernetes.io/infra=true', 'node-role.kubernetes.io/compute=true'], 'edits': [{ 'key': 'kubeletArguments.pods-per-core','value': ['20']}]}]

  # enable ntp on masters to ensure proper failover
  openshift_clock_enabled=true

  # host group for masters
  [masters]
  master1.example.com
  master2.example.com
  master3.example.com

  # host group for etcd
  [etcd]
  etcd1.example.com
  etcd2.example.com
  etcd3.example.com

  # Specify load balancer host
  [lb]
  lb.example.com

  # host group for nodes, includes region info
  [nodes]
  master[1:3].example.com openshift_node_group_name='node-config-master'
  node1.example.com openshift_node_group_name='node-config-compute'
  node2.example.com openshift_node_group_name='node-config-compute'
  infra-node1.example.com openshift_node_group_name='node-config-infra'
  infra-node2.example.com openshift_node_group_name='node-config-infra'

See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OKD 3.9.

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.

Multiple Masters Using Native HA with Co-located Clustered etcd

The following table describes an example environment for three masters using the native HA method (with etcd running as a static pod on each host), one HAProxy load balancer, two nodes for hosting user applications, and two nodes with the node-role.kubernetes.io/infra=true label for hosting dedicated infrastructure:

  Host Name                  Component/Role(s) to Install
  master1.example.com        Master (clustered using native HA) and node, with etcd running as a static pod
  master2.example.com        Master (clustered using native HA) and node, with etcd running as a static pod
  master3.example.com        Master (clustered using native HA) and node, with etcd running as a static pod
  lb.example.com             HAProxy to load balance API master endpoints
  node1.example.com          Compute node
  node2.example.com          Compute node
  infra-node1.example.com    Dedicated infrastructure node
  infra-node2.example.com    Dedicated infrastructure node

You can see these example hosts present in the [masters], [etcd], [lb], and [nodes] sections of the following example inventory file:

Multiple Masters Using Native HA with Co-located Clustered etcd Inventory File

  # Create an OSEv3 group that contains the master, nodes, etcd, and lb groups.
  # The lb group lets Ansible configure HAProxy as the load balancing solution.
  # Comment lb out if your load balancer is pre-configured.
  [OSEv3:children]
  masters
  nodes
  etcd
  lb

  # Set variables common for all OSEv3 hosts
  [OSEv3:vars]
  ansible_ssh_user=root
  openshift_deployment_type=origin

  # uncomment the following to enable htpasswd authentication; defaults to AllowAllPasswordIdentityProvider
  #openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]

  # Native high availability cluster method with optional load balancer.
  # If no lb group is defined, the installer assumes that a load balancer has
  # been preconfigured. For installation, the value of
  # openshift_master_cluster_hostname must resolve to the load balancer
  # or to one or all of the masters defined in the inventory if no load
  # balancer is present.
  openshift_master_cluster_method=native
  openshift_master_cluster_hostname=openshift-internal.example.com
  openshift_master_cluster_public_hostname=openshift-cluster.example.com

  # host group for masters
  [masters]
  master1.example.com
  master2.example.com
  master3.example.com

  # host group for etcd
  [etcd]
  master1.example.com
  master2.example.com
  master3.example.com

  # Specify load balancer host
  [lb]
  lb.example.com

  # host group for nodes, includes region info
  [nodes]
  master[1:3].example.com openshift_node_group_name='node-config-master'
  node1.example.com openshift_node_group_name='node-config-compute'
  node2.example.com openshift_node_group_name='node-config-compute'
  infra-node1.example.com openshift_node_group_name='node-config-infra'
  infra-node2.example.com openshift_node_group_name='node-config-infra'

See Configuring Node Host Labels to ensure you understand the default node selector requirements and node label considerations beginning in OKD 3.9.

To use this example, modify the file to match your environment and specifications, and save it as /etc/ansible/hosts.
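
After any of these installations completes, one way to confirm that each host received the labels of its node group (a sketch; run with cluster-admin credentials on a master host) is:

  $ oc get nodes --show-labels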