Configuring persistent storage

Metering is a deprecated feature. Deprecated functionality is still included in OKD and continues to be supported; however, it will be removed in a future release of this product and is not recommended for new deployments.

For the most recent list of major functionality that has been deprecated or removed within OKD, refer to the Deprecated and removed features section of the OKD release notes.

Metering requires persistent storage to persist data collected by the Metering Operator and to store the results of reports. A number of different storage providers and storage formats are supported. Select your storage provider and modify the example configuration files to configure persistent storage for your metering installation.

Storing data in Amazon S3

Metering can use an existing Amazon S3 bucket or create a bucket for storage.

Metering does not manage or delete any S3 bucket data. You must manually clean up S3 buckets that are used to store metering data.
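
For example, if you no longer need the data, one way to remove it with the AWS CLI might be the following command, where the bucket name is illustrative:

  $ aws s3 rm s3://operator-metering-data --recursive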

Procedure

  1. Edit the spec.storage section in the s3-storage.yaml file:

    Example s3-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "s3"
          s3:
            bucket: "bucketname/path/" (1)
            region: "us-west-1" (2)
            secretName: "my-aws-secret" (3)
            # Set to false if you want to provide an existing bucket, instead of
            # having metering create the bucket on your behalf.
            createBucket: true (4)

    (1) Specify the name of the bucket where you would like to store your data. Optional: Specify the path within the bucket.
    (2) Specify the region of your bucket.
    (3) The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
    (4) Set this field to false if you want to provide an existing S3 bucket, or if you do not want to provide IAM credentials that have CreateBucket permissions.
  2. Use the following Secret object as a template:

    Example AWS Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-aws-secret
    data:
      aws-access-key-id: "dGVzdAo="
      aws-secret-access-key: "c2VjcmV0Cg=="

    The values of the aws-access-key-id and aws-secret-access-key must be base64 encoded.

  3. Create the secret:

    $ oc create secret -n openshift-metering generic my-aws-secret \
      --from-literal=aws-access-key-id=my-access-key \
      --from-literal=aws-secret-access-key=my-secret-key

    This command automatically base64 encodes your aws-access-key-id and aws-secret-access-key values.
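
If you prefer to fill in the Secret object template directly instead of using oc create secret, you can base64 encode the values yourself, for example:

  $ echo -n "my-access-key" | base64
  $ echo -n "my-secret-key" | base64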

The aws-access-key-id and aws-secret-access-key credentials must have read and write access to the bucket. The following aws/read-write.json file shows an IAM policy that grants the required permissions:

Example aws/read-write.json file

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "1",
        "Effect": "Allow",
        "Action": [
          "s3:AbortMultipartUpload",
          "s3:DeleteObject",
          "s3:GetObject",
          "s3:HeadBucket",
          "s3:ListBucket",
          "s3:ListMultipartUploadParts",
          "s3:PutObject"
        ],
        "Resource": [
          "arn:aws:s3:::operator-metering-data/*",
          "arn:aws:s3:::operator-metering-data"
        ]
      }
    ]
  }

If spec.storage.hive.s3.createBucket is set to true or left unset in your s3-storage.yaml file, use the aws/read-write-create.json file instead, which also contains permissions for creating and deleting buckets:

Example aws/read-write-create.json file

  {
    "Version": "2012-10-17",
    "Statement": [
      {
        "Sid": "1",
        "Effect": "Allow",
        "Action": [
          "s3:AbortMultipartUpload",
          "s3:DeleteObject",
          "s3:GetObject",
          "s3:HeadBucket",
          "s3:ListBucket",
          "s3:CreateBucket",
          "s3:DeleteBucket",
          "s3:ListMultipartUploadParts",
          "s3:PutObject"
        ],
        "Resource": [
          "arn:aws:s3:::operator-metering-data/*",
          "arn:aws:s3:::operator-metering-data"
        ]
      }
    ]
  }
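
How you grant these permissions depends on how you manage IAM in your environment. As one possible approach, you can attach the policy inline to an IAM user with the AWS CLI; the user name and policy name below are illustrative:

  $ aws iam put-user-policy \
    --user-name metering-user \
    --policy-name metering-read-write-create \
    --policy-document file://aws/read-write-create.json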

Storing data in S3-compatible storage

You can use S3-compatible storage such as Noobaa.

Procedure

  1. Edit the spec.storage section in the s3-compatible-storage.yaml file:

    Example s3-compatible-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "s3Compatible"
          s3Compatible:
            bucket: "bucketname" (1)
            endpoint: "http://example:port-number" (2)
            secretName: "my-aws-secret" (3)

    (1) Specify the name of your S3-compatible bucket.
    (2) Specify the endpoint for your storage.
    (3) The name of a secret in the metering namespace containing the AWS credentials in the data.aws-access-key-id and data.aws-secret-access-key fields. See the example Secret object below for more details.
  2. Use the following Secret object as a template:

    Example S3-compatible Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-aws-secret
    data:
      aws-access-key-id: "dGVzdAo="
      aws-secret-access-key: "c2VjcmV0Cg=="
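
As with Amazon S3, you can create this secret with the oc CLI instead of base64 encoding the values yourself, for example:

  $ oc create secret -n openshift-metering generic my-aws-secret \
    --from-literal=aws-access-key-id=my-access-key \
    --from-literal=aws-secret-access-key=my-secret-key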

Storing data in Microsoft Azure

To store data in Azure blob storage, you must use an existing container.

Procedure

  1. Edit the spec.storage section in the azure-blob-storage.yaml file:

    Example azure-blob-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "azure"
          azure:
            container: "bucket1" (1)
            secretName: "my-azure-secret" (2)
            rootDirectory: "/testDir" (3)

    (1) Specify the container name.
    (2) Specify a secret in the metering namespace. See the example Secret object below for more details.
    (3) Optional: Specify the directory where you would like to store your data.
  2. Use the following Secret object as a template:

    Example Azure Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-azure-secret
    data:
      azure-storage-account-name: "dGVzdAo="
      azure-secret-access-key: "c2VjcmV0Cg=="
  3. Create the secret:

    $ oc create secret -n openshift-metering generic my-azure-secret \
      --from-literal=azure-storage-account-name=my-storage-account-name \
      --from-literal=azure-secret-access-key=my-secret-key
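
If you need to look up your storage account access keys, one possible way to list them with the Azure CLI is shown below; the account and resource group names are illustrative:

  $ az storage account keys list \
    --account-name my-storage-account-name \
    --resource-group my-resource-group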

Storing data in Google Cloud Storage

To store your data in Google Cloud Storage, you must use an existing bucket.

Procedure

  1. Edit the spec.storage section in the gcs-storage.yaml file:

    Example gcs-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "gcs"
          gcs:
            bucket: "metering-gcs/test1" (1)
            secretName: "my-gcs-secret" (2)

    (1) Specify the name of the bucket. You can optionally specify the directory within the bucket where you would like to store your data.
    (2) Specify a secret in the metering namespace. See the example Secret object below for more details.
  2. Use the following Secret object as a template:

    Example Google Cloud Storage Secret object

    apiVersion: v1
    kind: Secret
    metadata:
      name: my-gcs-secret
    data:
      gcs-service-account.json: "c2VjcmV0Cg=="
  3. Create the secret:

    $ oc create secret -n openshift-metering generic my-gcs-secret \
      --from-file gcs-service-account.json=/path/to/my/service-account-key.json
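
If you do not already have a service account key file, one possible way to generate one with the gcloud CLI is shown below; the service account and project names are illustrative:

  $ gcloud iam service-accounts keys create /path/to/my/service-account-key.json \
    --iam-account=metering@my-project.iam.gserviceaccount.com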

Storing data in shared volumes

Metering does not configure storage by default. However, you can use any ReadWriteMany persistent volume (PV) or any storage class that provisions a ReadWriteMany PV for metering storage.

NFS is not recommended for use in production. Using an NFS server on RHEL as a storage back end might not meet metering requirements or provide the performance that the Metering Operator needs to work properly.

Other NFS implementations in the marketplace, such as Parallel Network File System (pNFS), might not have these issues. pNFS is an NFS implementation with distributed and parallel capability. Contact the individual NFS implementation vendor for more information on any testing that might have been completed against OKD core components.

Procedure

  1. Modify the shared-storage.yaml file to use a ReadWriteMany persistent volume for storage:

    Example shared-storage.yaml file

    apiVersion: metering.openshift.io/v1
    kind: MeteringConfig
    metadata:
      name: "operator-metering"
    spec:
      storage:
        type: "hive"
        hive:
          type: "sharedPVC"
          sharedPVC:
            claimName: "metering-nfs" (1)
            # Uncomment the lines below to provision a new PVC using the specified storageClass. (2)
            # createPVC: true
            # storageClass: "my-nfs-storage-class"
            # size: 5Gi

    Select one of the configuration options below:

    (1) Set storage.hive.sharedPVC.claimName to the name of an existing ReadWriteMany persistent volume claim (PVC). This configuration is necessary if you do not have dynamic volume provisioning or want to have more control over how the persistent volume is created.
    (2) Set storage.hive.sharedPVC.createPVC to true and set the storage.hive.sharedPVC.storageClass to the name of a storage class with ReadWriteMany access mode. This configuration uses dynamic volume provisioning to create a volume automatically.
  2. Create the resource objects that are required to deploy an NFS server for metering. Use the oc create -f <file-name>.yaml command to create each object from its YAML file. Example commands are shown after this procedure.

    1. Configure a PersistentVolume resource object:

      Example nfs_persistentvolume.yaml file

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: nfs
        labels:
          role: nfs-server
      spec:
        capacity:
          storage: 5Gi
        accessModes:
        - ReadWriteMany
        storageClassName: nfs-server (1)
        nfs:
          path: "/"
          server: REPLACEME
        persistentVolumeReclaimPolicy: Delete

      (1) Must exactly match the [kind: StorageClass].metadata.name field value.
    2. Configure a Pod resource object with the nfs-server role:

      Example nfs_server.yaml file

      apiVersion: v1
      kind: Pod
      metadata:
        name: nfs-server
        labels:
          role: nfs-server
      spec:
        containers:
        - name: nfs-server
          image: <image_name> (1)
          imagePullPolicy: IfNotPresent
          ports:
          - name: nfs
            containerPort: 2049
          securityContext:
            privileged: true
          volumeMounts:
          - mountPath: "/mnt/data"
            name: local
        volumes:
        - name: local
          emptyDir: {}
      (1) Specify your NFS server image.
    3. Configure a Service resource object with the nfs-server role:

      Example nfs_service.yaml file

      apiVersion: v1
      kind: Service
      metadata:
        name: nfs-service
        labels:
          role: nfs-server
      spec:
        ports:
        - name: 2049-tcp
          port: 2049
          protocol: TCP
          targetPort: 2049
        selector:
          role: nfs-server
        sessionAffinity: None
        type: ClusterIP
    4. Configure a StorageClass resource object:

      Example nfs_storageclass.yaml file

      apiVersion: storage.k8s.io/v1
      kind: StorageClass
      metadata:
        name: nfs-server (1)
      provisioner: example.com/nfs
      parameters:
        archiveOnDelete: "false"
      reclaimPolicy: Delete
      volumeBindingMode: Immediate

      (1) Must exactly match the [kind: PersistentVolume].spec.storageClassName field value.

Configuration of your NFS storage, and any relevant resource objects, will vary depending on the NFS server image that you use for metering storage.
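
For example, assuming the file names used above, you might create the resource objects as follows; the namespace is illustrative:

  $ oc create -f nfs_server.yaml -n openshift-metering
  $ oc create -f nfs_service.yaml -n openshift-metering
  $ oc create -f nfs_persistentvolume.yaml
  $ oc create -f nfs_storageclass.yaml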