Persistent Storage Using GlusterFS

You are viewing documentation for a release that is no longer supported. The latest supported version of version 3 is [3.11]. For the most recent version 4, see [4]

Overview

GlusterFS can be configured to provide persistent storage and dynamic provisioning for OKD. It can be used both containerized within OKD (Containerized GlusterFS) and non-containerized on its own nodes (External GlusterFS).

Containerized GlusterFS

With Containerized GlusterFS, GlusterFS runs containerized directly on OKD nodes. This allows for compute and storage instances to be scheduled and run from the same set of hardware.

Architecture - Containerized GlusterFS

Figure 1. Architecture - Containerized GlusterFS

External GlusterFS

With External GlusterFS, GlusterFS runs on its own dedicated nodes and is managed by an instance of heketi, the GlusterFS volume management REST service. This heketi service can run either standalone or containerized. Containerization allows for an easy mechanism to provide high-availability to the service. This documentation will focus on the configuration where heketi is containerized.

Standalone GlusterFS

If you have a standalone GlusterFS cluster available in your environment, you can make use of volumes on that cluster using OKD’s GlusterFS volume plug-in. This is a conventional deployment in which applications run on dedicated compute nodes (an OKD cluster) and storage is provided from its own dedicated nodes.

Architecture - Standalone GlusterFS Cluster Using OKD’s GlusterFS Volume Plug-in

Figure 2. Architecture - Standalone GlusterFS Cluster Using OKD’s GlusterFS Volume Plug-in

See the GlusterFS Installation Guide and the GlusterFS Administration Guide for more on GlusterFS.

High availability of storage in the infrastructure is left to the underlying storage provider.

GlusterFS Volumes

GlusterFS volumes present a POSIX-compliant filesystem and are composed of one or more “bricks” across one or more nodes in their cluster. A brick is simply a directory on a given storage node and is typically the mount point for a block storage device. GlusterFS handles distribution and replication of files across a given volume’s bricks per that volume’s configuration.

It is recommended to use heketi for most common volume management operations, such as create, delete, and resize. OKD expects heketi to be present when using the GlusterFS provisioner. By default, heketi creates three-way replica volumes, that is, volumes in which each file has three copies across three different nodes. As such, it is recommended that any GlusterFS cluster that will be used by heketi have at least three nodes available.
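
For illustration, typical volume lifecycle operations through heketi look like the following sketch. This assumes heketi-cli is installed and pointed at a running heketi server, for example through the HEKETI_CLI_SERVER environment variable, along with any required credentials; sizes are in GB:

  1. $ heketi-cli volume create --size=10
  2. $ heketi-cli volume list
  3. $ heketi-cli volume expand --volume=<volume_id> --expand-size=5
  4. $ heketi-cli volume delete <volume_id>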

There are many features available for GlusterFS volumes, but they are beyond the scope of this documentation.

gluster-block Volumes

gluster-block volumes are volumes that can be mounted over iSCSI. This is done by creating a file on an existing GlusterFS volume and then presenting that file as a block device via an iSCSI target. Such GlusterFS volumes are called block-hosting volumes.

gluster-block volumes present a sort of trade-off. Being consumed as iSCSI targets, gluster-block volumes can only be mounted by one node/client at a time which is in contrast to GlusterFS volumes which can be mounted by multiple nodes/clients. Being files on the backend, however, allows for operations which are typically costly on GlusterFS volumes (e.g. metadata lookups) to be converted to ones which are typically much faster on GlusterFS volumes (e.g. reads and writes). This leads to potentially substantial performance improvements for certain workloads.

At this time, it is recommended to only use gluster-block volumes for OpenShift Logging and OpenShift Metrics storage.

Gluster S3 Storage

The Gluster S3 service allows user applications to access GlusterFS storage via an S3 interface. The service binds to two GlusterFS volumes, one for object data and one for object metadata, and translates incoming S3 REST requests into filesystem operations on the volumes. It is recommended to run the service as a pod inside OKD.

At this time, use and installation of the Gluster S3 service is in tech preview.
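
As an illustration only, assuming the Gluster S3 service has been deployed, exposed through a route, and provisioned with S3 account credentials that are configured in the client, a generic S3 client could then access it. For example, listing a bucket (the route and bucket names are placeholders):

  1. $ aws s3 ls s3://<bucket> --endpoint-url http://<gluster_s3_route>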

Considerations

This section covers a few topics that should be taken into consideration when using GlusterFS with OKD.

Software Prerequisites

To access GlusterFS volumes, the mount.glusterfs command must be available on all schedulable nodes. For RPM-based systems, the glusterfs-fuse package must be installed:

  1. # yum install glusterfs-fuse

If glusterfs-fuse is already installed on the nodes, ensure that the latest version is installed:

  1. # yum update glusterfs-fuse
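
To confirm that a node satisfies this prerequisite, for example:

  1. # rpm -q glusterfs-fuse
  2. # command -v mount.glusterfs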

Hardware Requirements

Any nodes used in a Containerized GlusterFS or External GlusterFS cluster are considered storage nodes. Storage nodes can be grouped into distinct cluster groups, though a single node cannot be in multiple groups. For each group of storage nodes:

  • A minimum of three storage nodes per group is required.

  • Each storage node must have a minimum of 8 GB of RAM. This is to allow running the GlusterFS pods, as well as other applications and the underlying operating system.

    • Each GlusterFS volume also consumes memory on every storage node in its storage cluster, which is about 30 MB. The total amount of RAM should be determined based on how many concurrent volumes are desired or anticipated.
  • Each storage node must have at least one raw block device with no present data or metadata. These block devices will be used in their entirety for GlusterFS storage. Make sure the following are not present:

    • Partition tables (GPT or MSDOS)

    • Filesystems or residual filesystem signatures

    • LVM2 signatures of former Volume Groups and Logical Volumes

    • LVM2 metadata of LVM2 physical volumes

    If in doubt, wipefs -a <device> should clear any of the above.
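
For example, to inspect a candidate device before and after wiping (the device path is a placeholder):

  1. # lsblk -f /dev/<device>   # list any filesystem signatures on the device and its partitions
  2. # wipefs /dev/<device>     # with no options, wipefs only reports detected signatures
  3. # wipefs -a /dev/<device>  # erase all detected signatures, leaving the device bare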

It is recommended to plan for two clusters: one dedicated to storage for infrastructure applications (such as an OpenShift Container Registry) and one dedicated to storage for general applications. This would require a total of six storage nodes. This recommendation is made to avoid potential impacts on performance in I/O and volume creation.

Storage Sizing

Every GlusterFS cluster must be sized based on the needs of the anticipated applications that will use its storage. For example, there are sizing guides available for both OpenShift Logging and OpenShift Metrics.

Some additional things to consider are:

  • For each Containerized GlusterFS or External GlusterFS cluster, the default behavior is to create GlusterFS volumes with three-way replication. As such, the total storage to plan for should be the desired capacity times three.

    • As an example, each heketi instance creates a heketidbstorage volume that is 2 GB in size, requiring a total of 6 GB of raw storage across three nodes in the storage cluster. This capacity is always required and should be taken into consideration for sizing calculations.

    • Applications like an integrated OpenShift Container Registry share a single GlusterFS volume across multiple instances of the application.

  • gluster-block volumes require the presence of a GlusterFS block-hosting volume with enough capacity to hold the full size of any given block volume’s capacity.

    • By default, if no such block-hosting volume exists, one will be automatically created at a set size. The default for this size is 100 GB. If there is not enough space in the cluster to create the new block-hosting volume, the creation of the block volume will fail. Both the auto-create behavior and the auto-created volume size are configurable.

    • Applications with multiple instances that use gluster-block volumes, such as OpenShift Logging and OpenShift Metrics, will use one volume per instance.

  • The Gluster S3 service binds to two GlusterFS volumes. In a default cluster installation, these volumes are 1 GB each, consuming a total of 6 GB of raw storage.
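
As a worked sizing sketch, using the figures above (the application volume count and size are assumptions for illustration only):

  1. # 20 application volumes of 10 GB each, three-way replication:
  2. #   20 x 10 GB x 3 = 600 GB raw
  3. # heketidbstorage (2 GB, three-way replicated):
  4. #   2 GB x 3 = 6 GB raw
  5. # Gluster S3 volumes, if deployed (2 x 1 GB, three-way replicated):
  6. #   2 x 1 GB x 3 = 6 GB raw
  7. # Total raw capacity to plan for: approximately 612 GB across the storage nodes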

Volume Operation Behaviors

Volume operations, such as create and delete, can be impacted by a variety of environmental circumstances and can in turn affect applications as well.

  • If an application pod requests a dynamically provisioned GlusterFS persistent volume claim (PVC), then extra time might need to be allowed for the volume to be created and bound to the corresponding PVC. This affects the startup time for the application pod.

    Creation time of GlusterFS volumes scales linearly depending on the number of volumes. As an example, given 100 volumes in a cluster using recommended hardware specifications, each volume took approximately 6 seconds to be created, allocated, and bound to a pod.

  • When a PVC is deleted, that action will trigger the deletion of the underlying GlusterFS volume. While PVCs will disappear immediately from the oc get pvc output, this does not mean the volume has been fully deleted. A GlusterFS volume can only be considered deleted when it does not show up in the command-line outputs for heketi-cli volume list and gluster volume list.

    The time to delete the GlusterFS volume and recycle its storage depends on and scales linearly with the number of active GlusterFS volumes. While pending volume deletes do not affect running applications, storage administrators should be aware of and be able to estimate how long they will take, especially when tuning resource consumption at scale.
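
For example, to confirm that a deleted volume is fully gone, verify that it no longer appears in either listing. This assumes heketi-cli is configured to reach the heketi service and that the gluster command is run on one of the storage nodes:

  1. # heketi-cli volume list
  2. # gluster volume list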

Volume Security

This section covers GlusterFS volume security, including Portable Operating System Interface [for Unix] (POSIX) permissions and SELinux considerations. Understanding the basics of Volume Security, POSIX permissions, and SELinux is presumed.

POSIX Permissions

GlusterFS volumes present POSIX-compliant file systems. As such, access permissions can be managed using standard command-line tools such as chmod and chown.

For Containerized GlusterFS and External GlusterFS, it is also possible to specify a group ID that will own the root of the volume at volume creation time. For static provisioning, this is specified as part of the heketi-cli volume creation command:

  1. $ heketi-cli volume create --size=100 --gid=10001000

The PersistentVolume that will be associated with this volume must be annotated with the group ID so that pods consuming the PersistentVolume can have access to the file system. This annotation takes the form of:


  1. pv.beta.kubernetes.io/gid: "<GID>"

For dynamic provisioning, the provisioner automatically generates and applies a group ID. It is possible to control the range from which this group ID is selected using the gidMin and gidMax StorageClass parameters (see Dynamic Provisioning). The provisioner also takes care of annotating the generated PersistentVolume with the group ID.
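
For example, a minimal sketch of a StorageClass that restricts the generated group IDs to a specific range (the StorageClass name and resturl value are placeholders; gidMin and gidMax are the parameters described above):

  1. $ oc create -f - <<EOF
  2. apiVersion: storage.k8s.io/v1
  3. kind: StorageClass
  4. metadata:
  5.   name: glusterfs-custom-gid
  6. provisioner: kubernetes.io/glusterfs
  7. parameters:
  8.   resturl: "http://<heketi_service>:8080"
  9.   gidMin: "40000"
  10.   gidMax: "50000"
  11. EOF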

SELinux

By default, SELinux does not allow writing from a pod to a remote GlusterFS server. To enable writing to GlusterFS volumes with SELinux on, run the following on each node running GlusterFS:

  1. $ sudo setsebool -P virt_sandbox_use_fusefs on (1)
  2. $ sudo setsebool -P virt_use_fusefs on
1The -P option makes the boolean persistent between reboots.

The virt_sandbox_use_fusefs boolean is defined by the docker-selinux package. If you get an error saying it is not defined, ensure that this package is installed.

If you use Atomic Host, the SELinux booleans are cleared when you upgrade Atomic Host. When you upgrade Atomic Host, you must set these boolean values again.
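
To check the current values of these booleans, for example after an Atomic Host upgrade:

  1. $ getsebool virt_sandbox_use_fusefs virt_use_fusefs
  2. virt_sandbox_use_fusefs --> on
  3. virt_use_fusefs --> on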

Installation

For standalone GlusterFS, there is no component installation required to use it with OKD. OKD comes with a built-in GlusterFS volume driver, allowing it to make use of existing volumes on existing clusters. See provisioning for more on how to make use of existing volumes.

For Containerized GlusterFS and External GlusterFS, it is recommended to use the cluster installation process to install the required components.

External GlusterFS: Installing GlusterFS Nodes

For External GlusterFS, each GlusterFS node must have the appropriate system configurations (e.g. firewall ports, kernel modules) and the GlusterFS services must be running. The services should not be further configured, and should not have formed a Trusted Storage Pool.

The installation of GlusterFS nodes is beyond the scope of this documentation. For more information, see the GlusterFS Installation Guide.

Using the Installer

The cluster installation process can be used to install one or both of two GlusterFS node groups:

  • glusterfs: A general storage cluster for use by user applications.

  • glusterfs_registry: A dedicated storage cluster for use by infrastructure applications such as an integrated OpenShift Container Registry.

It is recommended to deploy both groups to avoid potential impacts on performance in I/O and volume creation. Both of these are defined in the inventory hosts file.

The definition of the clusters is done by including the relevant names in the [OSEv3:children] group, creating similarly named groups, and then populating the groups with the node information. The clusters can then be configured through a variety of variables in the [OSEv3:vars] group. glusterfs variables begin with openshift_storage_glusterfs_ and glusterfs_registry variables begin with openshift_storage_glusterfs_registry_. A few other variables, such as openshift_hosted_registry_storage_kind, interact with the GlusterFS clusters.
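
For example, a minimal sketch of this inventory layout (host names and devices are placeholders; complete examples follow later in this section):

  1. [OSEv3:children]
  2. masters
  3. nodes
  4. glusterfs
  5. glusterfs_registry
  6. 
  7. [OSEv3:vars]
  8. ...
  9. openshift_storage_glusterfs_namespace=app-storage
  10. openshift_storage_glusterfs_registry_namespace=infra-storage
  11. 
  12. [glusterfs]
  13. node11.example.com glusterfs_devices='[ "/dev/xvdc" ]'
  14. 
  15. [glusterfs_registry]
  16. node14.example.com glusterfs_devices='[ "/dev/xvdc" ]'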

To prevent GlusterFS pods from being upgraded after an outage, which could result in a cluster running mixed GlusterFS versions, it is recommended to specify the image name and version tag for all containerized components. The relevant variables are:

  • openshift_storage_glusterfs_image

  • openshift_storage_glusterfs_block_image

  • openshift_storage_glusterfs_s3_image

  • openshift_storage_glusterfs_heketi_image

  • openshift_storage_glusterfs_registry_image

  • openshift_storage_glusterfs_registry_block_image

  • openshift_storage_glusterfs_registry_s3_image

  • openshift_storage_glusterfs_registry_heketi_image

The image variables for gluster-block and gluster-s3 are only necessary if the corresponding deployment variables (the variables ending in _block_deploy and _s3_deploy) are true.
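
For example, a sketch of pinning the GlusterFS and heketi images for both clusters (the image names and the tag are placeholders; use the images and version appropriate for your deployment):

  1. [OSEv3:vars]
  2. ...
  3. openshift_storage_glusterfs_image=docker.io/gluster/gluster-centos:<tag>
  4. openshift_storage_glusterfs_heketi_image=docker.io/heketi/heketi:<tag>
  5. openshift_storage_glusterfs_registry_image=docker.io/gluster/gluster-centos:<tag>
  6. openshift_storage_glusterfs_registry_heketi_image=docker.io/heketi/heketi:<tag>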

For a complete list of variables, see the GlusterFS role README on GitHub.

Once the variables are configured, there are several playbooks available depending on the circumstances of the installation:

  • The main playbook for cluster installations can be used to deploy the GlusterFS clusters in tandem with an initial installation of OKD.

    • This includes deploying an integrated OpenShift Container Registry that uses GlusterFS storage.

    • This does not include OpenShift Logging or OpenShift Metrics, as that is currently still a separate step. See Containerized GlusterFS for OpenShift Logging and Metrics for more information.

  • playbooks/openshift-glusterfs/config.yml can be used to deploy the clusters onto an existing OKD installation.

  • playbooks/openshift-glusterfs/registry.yml can be used to deploy the clusters onto an existing OKD installation. In addition, this will deploy an integrated OpenShift Container Registry which uses GlusterFS storage.

    There must not be a pre-existing registry in the OKD cluster.

  • playbooks/openshift-glusterfs/uninstall.yml can be used to remove existing clusters matching the configuration in the inventory hosts file. This is useful for cleaning up the OKD environment in the case of a failed deployment due to configuration errors.

    The GlusterFS playbooks are not guaranteed to be idempotent.

    Running the playbooks more than once for a given installation is currently not supported without deleting the entire GlusterFS installation (including disk data) and starting over.

Example: Basic Containerized GlusterFS Installation

  1. In your inventory file, include the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    1. [OSEv3:vars]
    2. ...
    3. openshift_storage_glusterfs_namespace=app-storage
    4. openshift_storage_glusterfs_storageclass=true
    5. openshift_storage_glusterfs_storageclass_default=false
    6. openshift_storage_glusterfs_block_deploy=true
    7. openshift_storage_glusterfs_block_host_vol_size=100
    8. openshift_storage_glusterfs_block_storageclass=true
    9. openshift_storage_glusterfs_block_storageclass_default=false
  2. Add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. glusterfs
  3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

    1. <hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    1. [glusterfs]
    2. node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    3. node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    4. node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
  4. Add the hosts listed under [glusterfs] to the [nodes] group:

    1. [nodes]
    2. ...
    3. node11.example.com openshift_node_group_name="node-config-compute"
    4. node12.example.com openshift_node_group_name="node-config-compute"
    5. node13.example.com openshift_node_group_name="node-config-compute"

    The preceding steps only provide some of the options that must be added to the inventory file. Use the complete inventory file to deploy GlusterFS.

  5. Run the installation playbook and provide the relative path for the inventory file as an option.

    • For a new OKD installation:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
      2. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    • For an installation onto an existing OKD cluster:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Example: Basic External GlusterFS Installation

  1. In your inventory file, include the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    1. [OSEv3:vars]
    2. ...
    3. openshift_storage_glusterfs_namespace=app-storage
    4. openshift_storage_glusterfs_storageclass=true
    5. openshift_storage_glusterfs_storageclass_default=false
    6. openshift_storage_glusterfs_block_deploy=true
    7. openshift_storage_glusterfs_block_host_vol_size=100
    8. openshift_storage_glusterfs_block_storageclass=true
    9. openshift_storage_glusterfs_block_storageclass_default=false
    10. openshift_storage_glusterfs_is_native=false
    11. openshift_storage_glusterfs_heketi_is_native=true
    12. openshift_storage_glusterfs_heketi_executor=ssh
    13. openshift_storage_glusterfs_heketi_ssh_port=22
    14. openshift_storage_glusterfs_heketi_ssh_user=root
    15. openshift_storage_glusterfs_heketi_ssh_sudo=false
    16. openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
  2. Add glusterfs in the [OSEv3:children] section to enable the [glusterfs] group:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. glusterfs
  3. Add a [glusterfs] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:

    1. <hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    1. [glusterfs]
    2. gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    3. gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    4. gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'

    The preceding steps only provide some of the options that must be added to the inventory file. Use the complete inventory file to deploy GlusterFS.

  4. Run the installation playbook and provide the relative path for the inventory file as an option.

    • For a new OKD installation:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
      2. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    • For an installation onto an existing OKD cluster:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Example: Containerized GlusterFS with an Integrated OpenShift Container Registry

  1. In your inventory file, set the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    1. [OSEv3:vars]
    2. ...
    3. openshift_hosted_registry_storage_kind=glusterfs (1)
    4. openshift_hosted_registry_storage_volume_size=5Gi
    5. openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
    1Running the integrated OpenShift Container Registry on infrastructure nodes is recommended. Infrastructure nodes are nodes dedicated to running applications deployed by administrators to provide services for the OKD cluster.
  2. Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. glusterfs_registry
  3. Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

    1. <hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    1. [glusterfs_registry]
    2. node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    3. node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    4. node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
  4. Add the hosts listed under [glusterfs_registry] to the [nodes] group:

    1. [nodes]
    2. ...
    3. node11.example.com openshift_node_group_name="node-config-infra"
    4. node12.example.com openshift_node_group_name="node-config-infra"
    5. node13.example.com openshift_node_group_name="node-config-infra"

    The preceding steps only provide some of the options that must be added to the inventory file. Use the complete inventory file to deploy GlusterFS.

  5. Run the installation playbook and provide the relative path for the inventory file as an option.

    • For a new OKD installation:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
      2. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    • For an installation onto an existing OKD cluster:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Example: Containerized GlusterFS for OpenShift Logging and Metrics

  1. In your inventory file, set the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    1. [OSEv3:vars]
    2. ...
    3. openshift_metrics_install_metrics=true
    4. openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    5. openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    6. openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    7. openshift_metrics_storage_kind=dynamic
    8. openshift_metrics_storage_volume_size=10Gi
    9. openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block" (2)
    10. openshift_logging_install_logging=true
    11. openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    12. openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    13. openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    14. openshift_logging_storage_kind=dynamic
    15. openshift_logging_es_pvc_size=10Gi (3)
    16. openshift_logging_elasticsearch_storage_type=pvc (4)
    17. openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block" (2)
    18. openshift_storage_glusterfs_registry_namespace=infra-storage
    19. openshift_storage_glusterfs_registry_block_deploy=true
    20. openshift_storage_glusterfs_registry_block_host_vol_size=100
    21. openshift_storage_glusterfs_registry_block_storageclass=true
    22. openshift_storage_glusterfs_registry_block_storageclass_default=false
    1It is recommended to run the integrated OpenShift Container Registry, Logging, and Metrics on nodes dedicated to “infrastructure” applications, that is applications deployed by administrators to provide services for the OKD cluster.
    2Specify the StorageClass to be used for Logging and Metrics. This name is generated from the name of the target GlusterFS cluster, for example glusterfs-<name>-block. In this example, <name> defaults to registry.
    3OpenShift Logging requires that a PVC size be specified. The supplied value is only an example, not a recommendation.
    4If using Persistent Elasticsearch Storage, set the storage type to pvc.

    See the GlusterFS role README for details on these and other variables.

  2. Add glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs_registry] group:

    1. [OSEv3:children]
    2. masters
    3. nodes
    4. glusterfs_registry
  3. Add a [glusterfs_registry] section with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

    1. <hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    1. [glusterfs_registry]
    2. node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    3. node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    4. node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
  4. Add the hosts listed under [glusterfs_registry] to the [nodes] group:

    1. [nodes]
    2. ...
    3. node11.example.com openshift_node_group_name="node-config-infra"
    4. node12.example.com openshift_node_group_name="node-config-infra"
    5. node13.example.com openshift_node_group_name="node-config-infra"

    The preceding steps only provide some of the options that must be added to the inventory file. Use the complete inventory file to deploy GlusterFS.

  5. Run the installation playbook and provide the relative path for the inventory file as an option.

    • For a new OKD installation:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
      2. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    • For an installation onto an existing OKD cluster:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Example: Containerized GlusterFS for Applications, Registry, Logging, and Metrics

  1. In your inventory file, set the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    1. [OSEv3:vars]
    2. ...
    3. openshift_hosted_registry_storage_kind=glusterfs (1)
    4. openshift_hosted_registry_storage_volume_size=5Gi
    5. openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
    6. openshift_metrics_install_metrics=true
    7. openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    8. openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    9. openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    10. openshift_metrics_storage_kind=dynamic
    11. openshift_metrics_storage_volume_size=10Gi
    12. openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block" (2)
    13. openshift_logging_install_logging=true
    14. openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    15. openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    16. openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    17. openshift_logging_storage_kind=dynamic
    18. openshift_logging_es_pvc_size=10Gi (3)
    19. openshift_logging_elasticsearch_storage_type=pvc (4)
    20. openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block" (2)
    21. openshift_storage_glusterfs_namespace=app-storage
    22. openshift_storage_glusterfs_storageclass=true
    23. openshift_storage_glusterfs_storageclass_default=false
    24. openshift_storage_glusterfs_block_deploy=true
    25. openshift_storage_glusterfs_block_host_vol_size=100 (5)
    26. openshift_storage_glusterfs_block_storageclass=true
    27. openshift_storage_glusterfs_block_storageclass_default=false
    28. openshift_storage_glusterfs_registry_namespace=infra-storage
    29. openshift_storage_glusterfs_registry_block_deploy=true
    30. openshift_storage_glusterfs_registry_block_host_vol_size=100
    31. openshift_storage_glusterfs_registry_block_storageclass=true
    32. openshift_storage_glusterfs_registry_block_storageclass_default=false
    1Running the integrated OpenShift Container Registry, Logging, and Metrics on infrastructure nodes is recommended. Infrastructure nodes are nodes dedicated to running applications deployed by administrators to provide services for the OKD cluster.
    2Specify the StorageClass to be used for Logging and Metrics. This name is generated from the name of the target GlusterFS cluster, for example glusterfs-<name>-block. In this example, <name> defaults to registry.
    3Specifying a PVC size is required for OpenShift Logging. The supplied value is only an example, not a recommendation.
    4If using Persistent Elasticsearch Storage, set the storage type to pvc.
    5Size, in GB, of GlusterFS volumes that will be automatically created to host glusterblock volumes. This variable is used only if there is not enough space available for a glusterblock volume create request. This value represents an upper limit on the size of glusterblock volumes unless you manually create larger GlusterFS block-hosting volumes.
  2. Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:

    1. [OSEv3:children]
    2. ...
    3. glusterfs
    4. glusterfs_registry
  3. Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Specifying the variable takes the form:

    1. <hostname_or_ip> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    1. [glusterfs]
    2. node11.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    3. node12.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    4. node13.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    5. [glusterfs_registry]
    6. node14.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    7. node15.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    8. node16.example.com glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
  4. Add the hosts listed under [glusterfs] and [glusterfs_registry] to the [nodes] group:

    1. [nodes]
    2. ...
    3. node11.example.com openshift_node_group_name='node-config-compute' (1)
    4. node12.example.com openshift_node_group_name='node-config-compute' (1)
    5. node13.example.com openshift_node_group_name='node-config-compute' (1)
    6. node14.example.com openshift_node_group_name='node-config-infra' (1)
    7. node15.example.com openshift_node_group_name='node-config-infra' (1)
    8. node16.example.com openshift_node_group_name='node-config-infra' (1)
    1The nodes are marked to denote whether they will allow general applications or infrastructure applications to be scheduled on them. It is up to the administrator to configure how applications will be constrained.

    The preceding steps only provide some of the options that must be added to the inventory file. Use the complete inventory file to deploy GlusterFS.

  5. Run the installation playbook and provide the relative path for the inventory file as an option.

    • For a new OKD installation:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
      2. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    • For an installation onto an existing OKD cluster:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Example: External GlusterFS for Applications, Registry, Logging, and Metrics

  1. In your inventory file, set the following variables in the [OSEv3:vars] section, and adjust them as required for your configuration:

    1. [OSEv3:vars]
    2. ...
    3. openshift_hosted_registry_storage_kind=glusterfs (1)
    4. openshift_hosted_registry_storage_volume_size=5Gi
    5. openshift_hosted_registry_selector='node-role.kubernetes.io/infra=true'
    6. openshift_metrics_install_metrics=true
    7. openshift_metrics_hawkular_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    8. openshift_metrics_cassandra_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    9. openshift_metrics_heapster_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    10. openshift_metrics_storage_kind=dynamic
    11. openshift_metrics_storage_volume_size=10Gi
    12. openshift_metrics_cassandra_pvc_storage_class_name="glusterfs-registry-block" (2)
    13. openshift_logging_install_logging=true
    14. openshift_logging_kibana_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    15. openshift_logging_curator_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    16. openshift_logging_es_nodeselector={"node-role.kubernetes.io/infra": "true"} (1)
    17. openshift_logging_storage_kind=dynamic
    18. openshift_logging_es_pvc_size=10Gi (3)
    19. openshift_logging_elasticsearch_storage_type=pvc (4)
    20. openshift_logging_es_pvc_storage_class_name="glusterfs-registry-block" (2)
    21. openshift_storage_glusterfs_namespace=app-storage
    22. openshift_storage_glusterfs_storageclass=true
    23. openshift_storage_glusterfs_storageclass_default=false
    24. openshift_storage_glusterfs_block_deploy=true
    25. openshift_storage_glusterfs_block_host_vol_size=100 (5)
    26. openshift_storage_glusterfs_block_storageclass=true
    27. openshift_storage_glusterfs_block_storageclass_default=false
    28. openshift_storage_glusterfs_is_native=false
    29. openshift_storage_glusterfs_heketi_is_native=true
    30. openshift_storage_glusterfs_heketi_executor=ssh
    31. openshift_storage_glusterfs_heketi_ssh_port=22
    32. openshift_storage_glusterfs_heketi_ssh_user=root
    33. openshift_storage_glusterfs_heketi_ssh_sudo=false
    34. openshift_storage_glusterfs_heketi_ssh_keyfile="/root/.ssh/id_rsa"
    35. openshift_storage_glusterfs_registry_namespace=infra-storage
    36. openshift_storage_glusterfs_registry_block_deploy=true
    37. openshift_storage_glusterfs_registry_block_host_vol_size=100
    38. openshift_storage_glusterfs_registry_block_storageclass=true
    39. openshift_storage_glusterfs_registry_block_storageclass_default=false
    40. openshift_storage_glusterfs_registry_is_native=false
    41. openshift_storage_glusterfs_registry_heketi_is_native=true
    42. openshift_storage_glusterfs_registry_heketi_executor=ssh
    43. openshift_storage_glusterfs_registry_heketi_ssh_port=22
    44. openshift_storage_glusterfs_registry_heketi_ssh_user=root
    45. openshift_storage_glusterfs_registry_heketi_ssh_sudo=false
    46. openshift_storage_glusterfs_registry_heketi_ssh_keyfile="/root/.ssh/id_rsa"
    1It is recommended to run the integrated OpenShift Container Registry on nodes dedicated to “infrastructure” applications, that is applications deployed by administrators to provide services for the OKD cluster. It is up to the administrator to select and label nodes for infrastructure applications.
    2Specify the StorageClass to be used for Logging and Metrics. This name is generated from the name of the target GlusterFS cluster, for example glusterfs-<name>-block. In this example, <name> defaults to registry.
    3OpenShift Logging requires that a PVC size be specified. The supplied value is only an example, not a recommendation.
    4If using Persistent Elasticsearch Storage, set the storage type to pvc.
    5Size, in GB, of GlusterFS volumes that will be automatically created to host glusterblock volumes. This variable is used only if there is not enough space available for a glusterblock volume create request. This value represents an upper limit on the size of glusterblock volumes unless you manually create larger GlusterFS block-hosting volumes.
  2. Add glusterfs and glusterfs_registry in the [OSEv3:children] section to enable the [glusterfs] and [glusterfs_registry] groups:

    1. [OSEv3:children]
    2. ...
    3. glusterfs
    4. glusterfs_registry
  3. Add [glusterfs] and [glusterfs_registry] sections with entries for each storage node that will host the GlusterFS storage. For each node, set glusterfs_devices to a list of raw block devices that will be completely managed as part of a GlusterFS cluster. There must be at least one device listed. Each device must be bare, with no partitions or LVM PVs. Also, set glusterfs_ip to the IP address of the node. Specifying the variable takes the form:

    1. <hostname_or_ip> glusterfs_ip=<ip_address> glusterfs_devices='[ "</path/to/device1/>", "</path/to/device2>", ... ]'

    For example:

    1. [glusterfs]
    2. gluster1.example.com glusterfs_ip=192.168.10.11 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    3. gluster2.example.com glusterfs_ip=192.168.10.12 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    4. gluster3.example.com glusterfs_ip=192.168.10.13 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    5. [glusterfs_registry]
    6. gluster4.example.com glusterfs_ip=192.168.10.14 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    7. gluster5.example.com glusterfs_ip=192.168.10.15 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'
    8. gluster6.example.com glusterfs_ip=192.168.10.16 glusterfs_devices='[ "/dev/xvdc", "/dev/xvdd" ]'

    The preceding steps only provide some of the options that must be added to the inventory file. Use the complete inventory file to deploy GlusterFS.

  4. Run the installation playbook and provide the relative path for the inventory file as an option.

    • For a new OKD installation:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml
      2. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
    • For an installation onto an existing OKD cluster:

      1. ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/config.yml

Uninstall Containerized GlusterFS

For Containerized GlusterFS, an OKD install comes with a playbook to uninstall all resources and artifacts from the cluster. To use the playbook, provide the original inventory file that was used to install the target instance of Containerized GlusterFS and run the following playbook:

  1. # ansible-playbook -i <path_to_inventory_file> /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml

In addition, the playbook supports the use of a variable called openshift_storage_glusterfs_wipe which, when enabled, destroys any data on the block devices that were used for GlusterFS backend storage. To use the openshift_storage_glusterfs_wipe variable:

  1. # ansible-playbook -i <path_to_inventory_file> \
  2.     -e "openshift_storage_glusterfs_wipe=true" \
  3.     /usr/share/ansible/openshift-ansible/playbooks/openshift-glusterfs/uninstall.yml

This procedure destroys data. Proceed with caution.

Provisioning

GlusterFS volumes can be provisioned either statically or dynamically. Static provisioning is available with all configurations. Only Containerized GlusterFS and External GlusterFS support dynamic provisioning.

Static Provisioning

  1. To enable static provisioning, first create a GlusterFS volume. See the GlusterFS Administration Guide for information on how to do this using the gluster command-line interface or the heketi project site for information on how to do this using heketi-cli. For this example, the volume will be named myVol1.

  2. Define the following Service and Endpoints in gluster-endpoints.yaml:

    1. ---
    2. apiVersion: v1
    3. kind: Service
    4. metadata:
    5.   name: glusterfs-cluster (1)
    6. spec:
    7.   ports:
    8.   - port: 1
    9. ---
    10. apiVersion: v1
    11. kind: Endpoints
    12. metadata:
    13.   name: glusterfs-cluster (1)
    14. subsets:
    15. - addresses:
    16.   - ip: 192.168.122.221 (2)
    17.   ports:
    18.   - port: 1 (3)
    19. - addresses:
    20.   - ip: 192.168.122.222 (2)
    21.   ports:
    22.   - port: 1 (3)
    23. - addresses:
    24.   - ip: 192.168.122.223 (2)
    25.   ports:
    26.   - port: 1 (3)
    1These names must match.
    2The ip values must be the actual IP addresses of a GlusterFS server, not hostnames.
    3The port number is ignored.
  3. From the OKD master host, create the Service and Endpoints:

    1. $ oc create -f gluster-endpoints.yaml
    2. service "glusterfs-cluster" created
    3. endpoints "glusterfs-cluster" created
  4. Verify that the Service and Endpoints were created:

    1. $ oc get services
    2. NAME CLUSTER_IP EXTERNAL_IP PORT(S) SELECTOR AGE
    3. glusterfs-cluster 172.30.205.34 <none> 1/TCP <none> 44s
    4. $ oc get endpoints
    5. NAME ENDPOINTS AGE
    6. docker-registry 10.1.0.3:5000 4h
    7. glusterfs-cluster 192.168.122.221:1,192.168.122.222:1,192.168.122.223:1 11s
    8. kubernetes 172.16.35.3:8443 4d

    Endpoints are unique per project. Each project accessing the GlusterFS volume needs its own Endpoints.

  5. In order to access the volume, the container must run with either a user ID (UID) or group ID (GID) that has access to the file system on the volume. This information can be discovered in the following manner:

    1. $ mkdir -p /mnt/glusterfs/myVol1
    2. $ mount -t glusterfs 192.168.122.221:/myVol1 /mnt/glusterfs/myVol1
    3. $ ls -lnZ /mnt/glusterfs/
    4. drwxrwx---. 592 590 system_u:object_r:fusefs_t:s0 myVol1 (1) (2)
    1The UID is 592.
    2The GID is 590.
  6. Define the following PersistentVolume (PV) in gluster-pv.yaml:

    1. apiVersion: v1
    2. kind: PersistentVolume
    3. metadata:
    4.   name: gluster-default-volume (1)
    5.   annotations:
    6.     pv.beta.kubernetes.io/gid: "590" (2)
    7. spec:
    8.   capacity:
    9.     storage: 2Gi (3)
    10.   accessModes: (4)
    11.   - ReadWriteMany
    12.   glusterfs:
    13.     endpoints: glusterfs-cluster (5)
    14.     path: myVol1 (6)
    15.     readOnly: false
    16.   persistentVolumeReclaimPolicy: Retain
    1The name of the volume.
    2The GID on the root of the GlusterFS volume.
    3The amount of storage allocated to this volume.
    4accessModes are used as labels to match a PV and a PVC. They currently do not define any form of access control.
    5The Endpoints resource previously created.
    6The GlusterFS volume that will be accessed.
  7. From the OKD master host, create the PV:

    1. $ oc create -f gluster-pv.yaml
  8. Verify that the PV was created:

    1. $ oc get pv
    2. NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
    3. gluster-default-volume <none> 2147483648 RWX Available 2s
  9. Create a PersistentVolumeClaim (PVC) that will bind to the new PV in gluster-claim.yaml:

    1. apiVersion: v1
    2. kind: PersistentVolumeClaim
    3. metadata:
    4.   name: gluster-claim (1)
    5. spec:
    6.   accessModes:
    7.   - ReadWriteMany (2)
    8.   resources:
    9.     requests:
    10.       storage: 1Gi (3)
    1The claim name is referenced by the pod under its volumes section.
    2Must match the accessModes of the PV.
    3This claim will look for PVs offering 1Gi or greater capacity.
  10. From the OKD master host, create the PVC:

    1. $ oc create -f gluster-claim.yaml
  11. Verify that the PV and PVC are bound:

    1. $ oc get pv
    2. NAME LABELS CAPACITY ACCESSMODES STATUS CLAIM REASON AGE
    3. gluster-pv <none> 1Gi RWX Bound gluster-claim 37s
    4. $ oc get pvc
    5. NAME LABELS STATUS VOLUME CAPACITY ACCESSMODES AGE
    6. gluster-claim <none> Bound gluster-pv 1Gi RWX 24s

PVCs are unique per project. Each project accessing the GlusterFS volume needs its own PVC. PVs are not scoped to a single project, but each PV can be bound to only one PVC at a time, so sharing the same GlusterFS volume across projects requires a separate PV and PVC for each project.

Dynamic Provisioning

  1. To enable dynamic provisioning, first create a StorageClass object definition. The definition below is based on the minimum requirements needed for this example to work with OKD. See Dynamic Provisioning and Creating Storage Classes for additional parameters and specification definitions.

    1. kind: StorageClass
    2. apiVersion: storage.k8s.io/v1
    3. metadata:
    4.   name: glusterfs
    5. provisioner: kubernetes.io/glusterfs
    6. parameters:
    7.   resturl: "http://10.42.0.0:8080" (1)
    8.   restauthenabled: "false" (2)
    1The heketi server URL.
    2Since authentication is not turned on in this example, set to false.
  2. From the OKD master host, create the StorageClass:

    1. # oc create -f gluster-storage-class.yaml
    2. storageclass "glusterfs" created
  3. Create a PVC using the newly-created StorageClass. For example:

    1. apiVersion: v1
    2. kind: PersistentVolumeClaim
    3. metadata:
    4.   name: gluster1
    5. spec:
    6.   accessModes:
    7.   - ReadWriteMany
    8.   resources:
    9.     requests:
    10.       storage: 30Gi
    11.   storageClassName: glusterfs
  4. From the OKD master host, create the PVC:

    1. # oc create -f glusterfs-dyn-pvc.yaml
    2. persistentvolumeclaim "gluster1" created
  5. View the PVC to see that the volume was dynamically created and bound to the PVC:

    1. # oc get pvc
    2. NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
    3. gluster1 Bound pvc-78852230-d8e2-11e6-a3fa-0800279cf26f 30Gi RWX glusterfs 42s
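
To consume the dynamically provisioned volume, reference the PVC from a pod specification. The following is a minimal sketch only; the pod name, container image, and mount path are illustrative:

  1. $ oc create -f - <<EOF
  2. apiVersion: v1
  3. kind: Pod
  4. metadata:
  5.   name: gluster-app-pod
  6. spec:
  7.   containers:
  8.   - name: app
  9.     image: busybox
  10.     command: ["sleep", "3600"]
  11.     volumeMounts:
  12.     - name: gluster-vol
  13.       mountPath: /mnt/gluster
  14.   volumes:
  15.   - name: gluster-vol
  16.     persistentVolumeClaim:
  17.       claimName: gluster1
  18. EOF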