Installing log storage

You can use the OpenShift CLI (oc) or the OKD web console to deploy a log store on your OKD cluster.

The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

Deploying a Loki log store

You can use the Loki Operator to deploy an internal Loki log store on your OKD cluster. After installing the Loki Operator, you must configure Loki object storage by creating a secret, and then create a LokiStack custom resource (CR).

Loki deployment sizing

Sizing for Loki follows the format of <N>x.<size>, where the value <N> is the number of instances and <size> specifies performance capabilities.

Table 1. Loki sizing
|                                        | 1x.demo       | 1x.extra-small     | 1x.small           | 1x.medium          |
|----------------------------------------|---------------|--------------------|--------------------|--------------------|
| Data transfer                          | Demo use only | 100GB/day          | 500GB/day          | 2TB/day            |
| Queries per second (QPS)               | Demo use only | 1-25 QPS at 200ms  | 25-50 QPS at 200ms | 25-75 QPS at 200ms |
| Replication factor                     | None          | 2                  | 2                  | 2                  |
| Total CPU requests                     | None          | 14 vCPUs           | 34 vCPUs           | 54 vCPUs           |
| Total CPU requests if using the ruler  | None          | 16 vCPUs           | 42 vCPUs           | 70 vCPUs           |
| Total memory requests                  | None          | 31Gi               | 67Gi               | 139Gi              |
| Total memory requests if using the ruler | None        | 35Gi               | 83Gi               | 171Gi              |
| Total disk requests                    | 40Gi          | 430Gi              | 430Gi              | 590Gi              |
| Total disk requests if using the ruler | 80Gi          | 750Gi              | 750Gi              | 910Gi              |
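The size is set through the spec.size field of the LokiStack custom resource; complete examples appear later in this section. A minimal excerpt:

    # Excerpt from a LokiStack CR; not a complete resource
    spec:
      size: 1x.small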

Installing the Loki Operator by using the OKD web console

To install and configure logging on your OKD cluster, additional Operators must be installed. This can be done from the Operator Hub within the web console.

OKD Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator’s logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.

Prerequisites

  • You have access to a supported object store (AWS S3, Google Cloud Storage, Azure, Swift, Minio, OpenShift Data Foundation).

  • You have administrator permissions.

  • You have access to the OKD web console.

Procedure

  1. In the OKD web console Administrator perspective, go to Operators → OperatorHub.

  2. Type Loki Operator in the Filter by keyword field. Click Loki Operator in the list of available Operators, and then click Install.

    The Community Loki Operator is not supported by Red Hat.

  3. Select stable or stable-x.y as the Update channel.

    The stable channel only provides updates to the most recent release of logging. To continue receiving updates for prior releases, you must change your subscription channel to stable-x.y, where x.y represents the major and minor version of logging you have installed. For example, stable-5.7.

    The Loki Operator must be deployed to the global operator group namespace openshift-operators-redhat, so the Installation mode and Installed Namespace are already selected. If this namespace does not already exist, it is created for you.

  4. Select Enable operator-recommended cluster monitoring on this namespace.

    This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.

  5. For Update approval select Automatic, then click Install.

    If the approval strategy in the subscription is set to Automatic, the update process initiates as soon as a new Operator version is available in the selected channel. If the approval strategy is set to Manual, you must manually approve pending updates.

Verification

  1. Go to Operators → Installed Operators.

  2. Make sure the openshift-logging project is selected.

  3. In the Status column, verify that you see green checkmarks with InstallSucceeded and the text Up to date.

An Operator might display a Failed status before the installation finishes. If the Operator install completes with an InstallSucceeded message, refresh the page.
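If you prefer to verify from a terminal, you can check the installed ClusterServiceVersion and the namespace label from the CLI. A minimal sketch, assuming you are logged in with the OpenShift CLI (oc) as a user with administrator permissions:

    # List the ClusterServiceVersions installed in the Operator namespace
    $ oc get csv -n openshift-operators-redhat

    # Confirm that the namespace carries the cluster-monitoring label set during installation
    $ oc get namespace openshift-operators-redhat --show-labels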

Creating a secret for Loki object storage by using the web console

To configure Loki object storage, you must create a secret. You can create a secret by using the OKD web console.

Prerequisites

  • You have administrator permissions.

  • You have access to the OKD web console.

  • You installed the Loki Operator.

Procedure

  1. Go to Workloads → Secrets in the Administrator perspective of the OKD web console.

  2. From the Create drop-down list, select From YAML.

  3. Create a secret that uses the access_key_id and access_key_secret fields to specify your credentials and the bucketnames, endpoint, and region fields to define the object storage location. AWS is used in the following example:

    Example Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: logging-loki-s3
      namespace: openshift-logging
    stringData:
      access_key_id: AKIAIOSFODNN7EXAMPLE
      access_key_secret: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
      bucketnames: s3-bucket-name
      endpoint: https://s3.eu-central-1.amazonaws.com
      region: eu-central-1


Creating a LokiStack custom resource by using the web console

You can create a LokiStack custom resource (CR) by using the OKD web console.

Prerequisites

  • You have administrator permissions.

  • You have access to the OKD web console.

  • You installed the Loki Operator.

Procedure

  1. Go to the Operators → Installed Operators page. Click the All instances tab.

  2. From the Create new drop-down list, select LokiStack.

  3. Select YAML view, and then use the following template to create a LokiStack CR:

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki (1)
      namespace: openshift-logging
    spec:
      size: 1x.small (2)
      storage:
        schemas:
        - version: v12
          effectiveDate: '2022-06-01'
        secret:
          name: logging-loki-s3 (3)
          type: s3 (4)
      storageClassName: <storage_class_name> (5)
      tenants:
        mode: openshift-logging

    (1) Use the name logging-loki.
    (2) Specify the deployment size. In the logging subsystem 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium.
    (3) Specify the secret used for your log storage.
    (4) Specify the corresponding storage type.
    (5) Enter the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command.
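    Before you fill in <storage_class_name>, you can list the storage classes available on your cluster, as noted in callout 5:

    $ oc get storageclasses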

Installing the Loki Operator by using the CLI

To install and configure logging on your OKD cluster, additional Operators must be installed. This can be done from the OKD CLI.

OKD Operators use custom resources (CR) to manage applications and their components. High-level configuration and settings are provided by the user within a CR. The Operator translates high-level directives into low-level actions, based on best practices embedded within the Operator’s logic. A custom resource definition (CRD) defines a CR and lists all the configurations available to users of the Operator. Installing an Operator creates the CRDs, which are then used to generate CRs.

Prerequisites

  • You have administrator permissions.

  • You installed the OpenShift CLI (oc).

  • You have access to a supported object store. For example: AWS S3, Google Cloud Storage, Azure, Swift, Minio, or OpenShift Data Foundation.

Procedure

  1. Create a Subscription object:

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: loki-operator
      namespace: openshift-operators-redhat (1)
    spec:
      channel: stable (2)
      name: loki-operator
      source: redhat-operators (3)
      sourceNamespace: openshift-marketplace

    (1) You must specify the openshift-operators-redhat namespace.
    (2) Specify stable, or stable-5.<y> as the channel.
    (3) Specify redhat-operators. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object you created when you configured the Operator Lifecycle Manager (OLM).
  2. Apply the Subscription object:

    $ oc apply -f <filename>.yaml
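The Subscription object assumes that the openshift-operators-redhat namespace and an OperatorGroup for it already exist. If they do not, you can create them before applying the Subscription. The following minimal sketch mirrors the Namespace and OperatorGroup objects shown in the Elasticsearch CLI procedure later in this section:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-operators-redhat
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true"
    ---
    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-operators-redhat
      namespace: openshift-operators-redhat
    spec: {}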

Creating a secret for Loki object storage by using the CLI

To configure Loki object storage, you must create a secret. You can do this by using the OpenShift CLI (oc).

Prerequisites

  • You have administrator permissions.

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

Procedure

  • Create a secret in the directory that contains your certificate and key files by running the following command:

    $ oc create secret generic -n openshift-logging <your_secret_name> \
      --from-file=tls.key=<your_key_file> \
      --from-file=tls.crt=<your_crt_file> \
      --from-file=ca-bundle.crt=<your_bundle_file> \
      --from-literal=username=<your_username> \
      --from-literal=password=<your_password>

Use generic or opaque secrets for best results.

Verification

  • Verify that a secret was created by running the following command:

    $ oc get secrets
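    To inspect a specific secret without printing its values, you can also describe it. For example, using the placeholder name from the previous step:

    $ oc describe secret <your_secret_name> -n openshift-logging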


Creating a LokiStack custom resource by using the CLI

You can create a LokiStack custom resource (CR) by using the OpenShift CLI (oc).

Prerequisites

  • You have administrator permissions.

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

Procedure

  1. Create a LokiStack CR:

    Example LokiStack CR

    apiVersion: loki.grafana.com/v1
    kind: LokiStack
    metadata:
      name: logging-loki
      namespace: openshift-logging
    spec:
      size: 1x.small (1)
      storage:
        schemas:
        - version: v12
          effectiveDate: "2022-06-01"
        secret:
          name: logging-loki-s3 (2)
          type: s3 (3)
      storageClassName: <storage_class_name> (4)
      tenants:
        mode: openshift-logging

    (1) Specify the deployment size. In the logging subsystem 5.8 and later versions, the supported size options for production instances of Loki are 1x.extra-small, 1x.small, or 1x.medium.
    (2) Specify the name of your log store secret.
    (3) Specify the type of your log store secret.
    (4) Specify the name of a storage class for temporary storage. For best performance, specify a storage class that allocates block storage. Available storage classes for your cluster can be listed by using the oc get storageclasses command.
  2. Apply the LokiStack CR:

    $ oc apply -f <filename>.yaml

Verification

  • Verify the installation by listing the pods in the openshift-logging project. Run the following command and observe the output:

    $ oc get pods -n openshift-logging

    Confirm that you see several pods for components of the logging subsystem, similar to the following list:

    Example output

    NAME                                           READY   STATUS    RESTARTS   AGE
    cluster-logging-operator-78fddc697-mnl82       1/1     Running   0          14m
    collector-6cglq                                2/2     Running   0          45s
    collector-8r664                                2/2     Running   0          45s
    collector-8z7px                                2/2     Running   0          45s
    collector-pdxl9                                2/2     Running   0          45s
    collector-tc9dx                                2/2     Running   0          45s
    collector-xkd76                                2/2     Running   0          45s
    logging-loki-compactor-0                       1/1     Running   0          8m2s
    logging-loki-distributor-b85b7d9fd-25j9g       1/1     Running   0          8m2s
    logging-loki-distributor-b85b7d9fd-xwjs6       1/1     Running   0          8m2s
    logging-loki-gateway-7bb86fd855-hjhl4          2/2     Running   0          8m2s
    logging-loki-gateway-7bb86fd855-qjtlb          2/2     Running   0          8m2s
    logging-loki-index-gateway-0                   1/1     Running   0          8m2s
    logging-loki-index-gateway-1                   1/1     Running   0          7m29s
    logging-loki-ingester-0                        1/1     Running   0          8m2s
    logging-loki-ingester-1                        1/1     Running   0          6m46s
    logging-loki-querier-f5cf9cb87-9fdjd           1/1     Running   0          8m2s
    logging-loki-querier-f5cf9cb87-fp9v5           1/1     Running   0          8m2s
    logging-loki-query-frontend-58c579fcb7-lfvbc   1/1     Running   0          8m2s
    logging-loki-query-frontend-58c579fcb7-tjf9k   1/1     Running   0          8m2s
    logging-view-plugin-79448d8df6-ckgmx           1/1     Running   0          46s
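You can also check the status of the LokiStack resource itself. A short sketch, assuming the CR name logging-loki used in this section:

    $ oc get lokistack logging-loki -n openshift-logging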

Loki object storage

The Loki Operator supports AWS S3, as well as other S3 compatible object stores such as Minio and OpenShift Data Foundation. Azure, GCS, and Swift are also supported.

The recommended nomenclature for Loki storage secrets is logging-loki-<your_storage_provider>.

The following table shows the type values within the LokiStack custom resource (CR) for each storage provider. For more information, see the section on your storage provider.

Table 2. Secret type quick reference
| Storage provider           | Secret type value |
|----------------------------|-------------------|
| AWS                        | s3                |
| Azure                      | azure             |
| Google Cloud               | gcs               |
| Minio                      | s3                |
| OpenShift Data Foundation  | s3                |
| Swift                      | swift             |

AWS storage

Prerequisites

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

  • You created a bucket on AWS.

Procedure

  • Create an object storage secret with the name logging-loki-aws by running the following command:

    $ oc create secret generic logging-loki-aws \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<aws_bucket_endpoint>" \
      --from-literal=access_key_id="<aws_access_key_id>" \
      --from-literal=access_key_secret="<aws_access_key_secret>" \
      --from-literal=region="<aws_region_of_your_bucket>"

Azure storage

Prerequisites

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

  • You created a bucket on Azure.

Procedure

  • Create an object storage secret with the name logging-loki-azure by running the following command:

    $ oc create secret generic logging-loki-azure \
      --from-literal=container="<azure_container_name>" \
      --from-literal=environment="<azure_environment>" \ (1)
      --from-literal=account_name="<azure_account_name>" \
      --from-literal=account_key="<azure_account_key>"

    (1) Supported environment values are AzureGlobal, AzureChinaCloud, AzureGermanCloud, or AzureUSGovernment.

Google Cloud Platform storage

Prerequisites

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

  • You created a project on Google Cloud Platform (GCP).

  • You created a bucket in the same project.

  • You created a service account in the same project for GCP authentication.

Procedure

  1. Copy the service account credentials received from GCP into a file called key.json.

  2. Create an object storage secret with the name logging-loki-gcs by running the following command:

    $ oc create secret generic logging-loki-gcs \
      --from-literal=bucketname="<bucket_name>" \
      --from-file=key.json="<path/to/key.json>"

Minio storage

Prerequisites

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

  • You have Minio deployed on your cluster.

  • You created a bucket on Minio.

Procedure

  • Create an object storage secret with the name logging-loki-minio by running the following command:

    $ oc create secret generic logging-loki-minio \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="<minio_bucket_endpoint>" \
      --from-literal=access_key_id="<minio_access_key_id>" \
      --from-literal=access_key_secret="<minio_access_key_secret>"

OpenShift Data Foundation storage

Prerequisites

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

  • You deployed OpenShift Data Foundation.

Procedure

  1. Create an ObjectBucketClaim custom resource in the openshift-logging namespace:

    apiVersion: objectbucket.io/v1alpha1
    kind: ObjectBucketClaim
    metadata:
      name: loki-bucket-odf
      namespace: openshift-logging
    spec:
      generateBucketName: loki-bucket-odf
      storageClassName: openshift-storage.noobaa.io
  2. Get the bucket properties from the associated ConfigMap object by running the following commands:

    BUCKET_HOST=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_HOST}')
    BUCKET_NAME=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_NAME}')
    BUCKET_PORT=$(oc get -n openshift-logging configmap loki-bucket-odf -o jsonpath='{.data.BUCKET_PORT}')
  3. Get the bucket access keys from the associated secret by running the following commands:

    ACCESS_KEY_ID=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d)
    SECRET_ACCESS_KEY=$(oc get -n openshift-logging secret loki-bucket-odf -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d)
  4. Create an object storage secret with the name logging-loki-odf by running the following command:

    $ oc create -n openshift-logging secret generic logging-loki-odf \
      --from-literal=access_key_id="<access_key_id>" \
      --from-literal=access_key_secret="<secret_access_key>" \
      --from-literal=bucketnames="<bucket_name>" \
      --from-literal=endpoint="https://<bucket_host>:<bucket_port>"
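If you exported the bucket properties and access keys into the shell variables shown in the previous steps, you can substitute the variables directly instead of typing the values. A sketch, assuming the same shell session:

    $ oc create -n openshift-logging secret generic logging-loki-odf \
      --from-literal=access_key_id="${ACCESS_KEY_ID}" \
      --from-literal=access_key_secret="${SECRET_ACCESS_KEY}" \
      --from-literal=bucketnames="${BUCKET_NAME}" \
      --from-literal=endpoint="https://${BUCKET_HOST}:${BUCKET_PORT}"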

Swift storage

Prerequisites

  • You installed the Loki Operator.

  • You installed the OpenShift CLI (oc).

  • You created a bucket on Swift.

Procedure

  • Create an object storage secret with the name logging-loki-swift by running the following command:

    $ oc create secret generic logging-loki-swift \
      --from-literal=auth_url="<swift_auth_url>" \
      --from-literal=username="<swift_usernameclaim>" \
      --from-literal=user_domain_name="<swift_user_domain_name>" \
      --from-literal=user_domain_id="<swift_user_domain_id>" \
      --from-literal=user_id="<swift_user_id>" \
      --from-literal=password="<swift_password>" \
      --from-literal=domain_id="<swift_domain_id>" \
      --from-literal=domain_name="<swift_domain_name>" \
      --from-literal=container_name="<swift_container_name>"
  • You can optionally provide project-specific data, region, or both by running the following command:

    $ oc create secret generic logging-loki-swift \
      --from-literal=auth_url="<swift_auth_url>" \
      --from-literal=username="<swift_usernameclaim>" \
      --from-literal=user_domain_name="<swift_user_domain_name>" \
      --from-literal=user_domain_id="<swift_user_domain_id>" \
      --from-literal=user_id="<swift_user_id>" \
      --from-literal=password="<swift_password>" \
      --from-literal=domain_id="<swift_domain_id>" \
      --from-literal=domain_name="<swift_domain_name>" \
      --from-literal=container_name="<swift_container_name>" \
      --from-literal=project_id="<swift_project_id>" \
      --from-literal=project_name="<swift_project_name>" \
      --from-literal=project_domain_id="<swift_project_domain_id>" \
      --from-literal=project_domain_name="<swift_project_domain_name>" \
      --from-literal=region="<swift_region>"

Deploying an Elasticsearch log store

You can use the OpenShift Elasticsearch Operator to deploy an internal Elasticsearch log store on your OKD cluster.

The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

Storage considerations for Elasticsearch

A persistent volume is required for each Elasticsearch deployment configuration. On OKD this is achieved using persistent volume claims (PVCs).

If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.

The OpenShift Elasticsearch Operator names the PVCs using the Elasticsearch resource name.

Fluentd ships any logs from the systemd journal and from /var/log/containers/*.log to Elasticsearch.

Elasticsearch requires sufficient memory to perform large merge operations. If it does not have enough memory, it becomes unresponsive. To avoid this problem, evaluate how much application log data you need, and allocate approximately double that amount of free storage capacity.

By default, when storage capacity reaches 85% full, Elasticsearch stops allocating new data to the node. At 90%, Elasticsearch attempts to relocate existing shards from that node to other nodes if possible. However, if no node has disk usage below the 85% threshold, Elasticsearch effectively rejects the creation of new indices and the cluster status becomes RED.

These low and high watermark values are Elasticsearch defaults in the current release. You can modify these default values. Although the alerts use the same default values, you cannot change these values in the alerts.
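These watermarks correspond to the Elasticsearch settings cluster.routing.allocation.disk.watermark.low and cluster.routing.allocation.disk.watermark.high. The following is only a rough sketch of changing them through the Elasticsearch cluster settings API; it assumes direct access to the Elasticsearch HTTP endpoint, which a managed OKD deployment does not expose by default, so treat the endpoint and the example values as illustrative:

    # Illustrative only: raise the low and high disk watermarks
    $ curl -X PUT "https://<elasticsearch_endpoint>:9200/_cluster/settings" \
      -H 'Content-Type: application/json' \
      -d '{"persistent": {"cluster.routing.allocation.disk.watermark.low": "87%", "cluster.routing.allocation.disk.watermark.high": "92%"}}'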

Installing the OpenShift Elasticsearch Operator by using the web console

The OpenShift Elasticsearch Operator creates and manages the Elasticsearch cluster used by OpenShift Logging.

Prerequisites

  • Elasticsearch is a memory-intensive application. Each Elasticsearch node needs at least 16GB of memory for both memory requests and limits, unless you specify otherwise in the ClusterLogging custom resource.

    The initial set of OKD nodes might not be large enough to support the Elasticsearch cluster. You must add additional nodes to the OKD cluster to run with the recommended or higher memory, up to a maximum of 64GB for each Elasticsearch node.

    Elasticsearch nodes can operate with a lower memory setting, though this is not recommended for production environments.

  • Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.

    If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.

Procedure

  1. In the OKD web console, click Operators → OperatorHub.

  2. Click OpenShift Elasticsearch Operator from the list of available Operators, and click Install.

  3. Ensure that All namespaces on the cluster is selected under Installation mode.

  4. Ensure that openshift-operators-redhat is selected under Installed Namespace.

    You must specify the openshift-operators-redhat namespace. The openshift-operators namespace might contain Community Operators, which are untrusted and could publish a metric with the same name as an OKD metric, which would cause conflicts.

  5. Select Enable operator recommended cluster monitoring on this namespace.

    This option sets the openshift.io/cluster-monitoring: "true" label in the Namespace object. You must select this option to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.

  6. Select stable-5.x as the Update channel.

  7. Select an Update approval strategy:

    • The Automatic strategy allows Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available.

    • The Manual strategy requires a user with appropriate credentials to approve the Operator update.

  8. Click Install.

Verification

  1. Verify that the OpenShift Elasticsearch Operator is installed by switching to the Operators → Installed Operators page.

  2. Ensure that OpenShift Elasticsearch Operator is listed in all projects with a Status of Succeeded.

Installing the OpenShift Elasticsearch Operator by using the CLI

You can use the OpenShift CLI (oc) to install the OpenShift Elasticsearch Operator.

Prerequisites

  • Ensure that you have the necessary persistent storage for Elasticsearch. Note that each Elasticsearch node requires its own storage volume.

    If you use a local volume for persistent storage, do not use a raw block volume, which is described with volumeMode: block in the LocalVolume object. Elasticsearch cannot use raw block volumes.

    Elasticsearch is a memory-intensive application. By default, OKD installs three Elasticsearch nodes with memory requests and limits of 16 GB. This initial set of three OKD nodes might not have enough memory to run Elasticsearch within your cluster. If you experience memory issues that are related to Elasticsearch, add more Elasticsearch nodes to your cluster rather than increasing the memory on existing nodes.

  • Ensure that you have downloaded the pull secret from the Red Hat OpenShift Cluster Manager as shown in “Obtaining the installation program” in the installation documentation for your platform.

    If you have the pull secret, add the redhat-operators catalog to the OperatorHub custom resource (CR) as shown in Configuring OKD to use Red Hat Operators.

  • You have administrator permissions.

  • You have installed the OpenShift CLI (oc).

Procedure

  1. Create a Namespace object as a YAML file:

    apiVersion: v1
    kind: Namespace
    metadata:
      name: openshift-operators-redhat (1)
      annotations:
        openshift.io/node-selector: ""
      labels:
        openshift.io/cluster-monitoring: "true" (2)

    (1) You must specify the openshift-operators-redhat namespace. To prevent possible conflicts with metrics, configure the Prometheus Cluster Monitoring stack to scrape metrics from the openshift-operators-redhat namespace and not the openshift-operators namespace. The openshift-operators namespace might contain community Operators, which are untrusted and could publish a metric with the same name as an OKD metric, which would cause conflicts.
    (2) String. You must specify this label as shown to ensure that cluster monitoring scrapes the openshift-operators-redhat namespace.
  2. Apply the Namespace object by running the following command:

    $ oc apply -f <filename>.yaml
  3. Create an OperatorGroup object as a YAML file:

    apiVersion: operators.coreos.com/v1
    kind: OperatorGroup
    metadata:
      name: openshift-operators-redhat
      namespace: openshift-operators-redhat (1)
    spec: {}

    (1) You must specify the openshift-operators-redhat namespace.
  4. Apply the OperatorGroup object by running the following command:

    $ oc apply -f <filename>.yaml
  5. Create a Subscription object to subscribe the namespace to the OpenShift Elasticsearch Operator:

    Example Subscription

    apiVersion: operators.coreos.com/v1alpha1
    kind: Subscription
    metadata:
      name: elasticsearch-operator
      namespace: openshift-operators-redhat (1)
    spec:
      channel: stable-x.y (2)
      installPlanApproval: Automatic (3)
      source: redhat-operators (4)
      sourceNamespace: openshift-marketplace
      name: elasticsearch-operator

    (1) You must specify the openshift-operators-redhat namespace.
    (2) Specify stable, or stable-x.y as the channel. See the following note.
    (3) Automatic allows the Operator Lifecycle Manager (OLM) to automatically update the Operator when a new version is available. Manual requires a user with appropriate credentials to approve the Operator update.
    (4) Specify redhat-operators. If your OKD cluster is installed on a restricted network, also known as a disconnected cluster, specify the name of the CatalogSource object created when you configured the Operator Lifecycle Manager (OLM).

    Specifying stable installs the current version of the latest stable release. Using stable with installPlanApproval: Automatic automatically upgrades your Operators to the latest stable major and minor release.

    Specifying stable-x.y installs the current minor version of a specific major release. Using stable-x.y with installPlanApproval: Automatic automatically upgrades your Operators to the latest stable minor release within the major release.

  6. Apply the subscription by running the following command:

    $ oc apply -f <filename>.yaml

    The OpenShift Elasticsearch Operator is installed to the openshift-operators-redhat namespace and copied to each project in the cluster.

Verification

  1. Run the following command:

    $ oc get csv --all-namespaces
  2. Observe the output and confirm that the OpenShift Elasticsearch Operator exists in each namespace.

    Example output

    NAMESPACE                      NAME                            DISPLAY                            VERSION   REPLACES                        PHASE
    default                        elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    kube-node-lease                elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    kube-public                    elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    kube-system                    elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    non-destructive-test           elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    openshift-apiserver-operator   elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    openshift-apiserver            elasticsearch-operator.v5.8.1   OpenShift Elasticsearch Operator   5.8.1     elasticsearch-operator.v5.8.0   Succeeded
    ...

Configuring log storage

You can configure which log storage type your logging subsystem uses by modifying the ClusterLogging custom resource (CR).

Prerequisites

  • You have administrator permissions.

  • You have installed the OpenShift CLI (oc).

  • You have installed the Red Hat OpenShift Logging Operator and an internal log store that is either the LokiStack or Elasticsearch.

  • You have created a ClusterLogging CR.

The OpenShift Elasticsearch Operator is deprecated and is planned to be removed in a future release. Red Hat provides bug fixes and support for this feature during the current release lifecycle, but this feature no longer receives enhancements. As an alternative to using the OpenShift Elasticsearch Operator to manage the default log storage, you can use the Loki Operator.

Procedure

  1. Modify the ClusterLogging CR logStore spec:

    ClusterLogging CR example

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
    # ...
    spec:
    # ...
      logStore:
        type: <log_store_type> (1)
        elasticsearch: (2)
          nodeCount: <integer>
          resources: {}
          storage: {}
          redundancyPolicy: <redundancy_type> (3)
        lokistack: (4)
          name: {}
    # ...

    (1) Specify the log store type. This can be either lokistack or elasticsearch.
    (2) Optional configuration options for the Elasticsearch log store.
    (3) Specify the redundancy type. This value can be ZeroRedundancy, SingleRedundancy, MultipleRedundancy, or FullRedundancy.
    (4) Optional configuration options for LokiStack.

    Example ClusterLogging CR to specify LokiStack as the log store

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: lokistack
        lokistack:
          name: logging-loki
    # ...
  2. Apply the ClusterLogging CR by running the following command:

    $ oc apply -f <filename>.yaml
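For comparison, a ClusterLogging CR that uses Elasticsearch as the log store might look like the following sketch. The node count, storage size, and memory request shown here are illustrative values, not recommendations from this document:

    apiVersion: logging.openshift.io/v1
    kind: ClusterLogging
    metadata:
      name: instance
      namespace: openshift-logging
    spec:
      managementState: Managed
      logStore:
        type: elasticsearch
        elasticsearch:
          nodeCount: 3
          storage:
            storageClassName: <storage_class_name>
            size: 200G
          resources:
            requests:
              memory: 16Gi
          redundancyPolicy: SingleRedundancy
    # ...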