Troubleshooting

You can debug Velero custom resources (CRs) by using the OpenShift CLI tool or the Velero CLI tool. The Velero CLI tool provides more detailed logs and information.

You can check installation issues, backup and restore CR issues, and Restic issues.

You can collect logs and CR information by using the must-gather tool.

You can obtain the Velero CLI tool by:

  • Downloading the Velero CLI tool

  • Accessing the Velero binary in the Velero deployment in the cluster

Downloading the Velero CLI tool

You can download and install the Velero CLI tool by following the instructions on the Velero documentation page.

The page includes instructions for:

  • macOS by using Homebrew

  • GitHub

  • Windows by using Chocolatey

Prerequisites

  • You have access to a Kubernetes cluster, v1.16 or later, with DNS and container networking enabled.

  • You have installed kubectl locally.

Procedure

  1. Open a browser and navigate to “Install the CLI” on the Velero website.

  2. Follow the appropriate procedure for macOS, GitHub, or Windows.

  3. Download the Velero version appropriate for your version of OADP and OKD.

OADP-Velero-OKD version relationship

OADP version | Velero version | OKD version
-------------|----------------|---------------
1.1.0        | 1.9            | 4.9 and later
1.1.1        | 1.9            | 4.9 and later
1.1.2        | 1.9            | 4.9 and later
1.1.3        | 1.9            | 4.9 and later
1.1.4        | 1.9            | 4.9 and later
1.1.5        | 1.9            | 4.9 and later
1.1.6        | 1.9            | 4.11 and later
1.1.7        | 1.9            | 4.11 and later
1.2.0        | 1.11           | 4.11 and later
1.2.1        | 1.11           | 4.11 and later
1.2.2        | 1.11           | 4.11 and later
1.2.3        | 1.11           | 4.11 and later
1.3.0        | 1.12           | 4.12 and later

Accessing the Velero binary in the Velero deployment in the cluster

You can use a shell command to access the Velero binary in the Velero deployment in the cluster.

Prerequisites

  • Your DataProtectionApplication custom resource has a status of Reconcile complete.

Procedure

  • Enter the following command to set the needed alias:

    1. $ alias velero='oc -n openshift-adp exec deployment/velero -c velero -it -- ./velero'
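    To confirm that the alias works, you can run a lightweight Velero command through it, for example:

    $ velero version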

Debugging Velero resources with the OpenShift CLI tool

You can debug a failed backup or restore by checking Velero custom resources (CRs) and the Velero pod log with the OpenShift CLI tool.

Velero CRs

Use the oc describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:

  1. $ oc describe <velero_cr> <cr_name>
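For example, to summarize a Backup CR in the openshift-adp namespace, you might run the following command; the CR name is a placeholder:

  $ oc describe backup <backup_cr_name> -n openshift-adp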

Velero pod logs

Use the oc logs command to retrieve the Velero pod logs:

  1. $ oc logs pod/<velero>
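If you do not know the exact pod name, you can list the Velero pod by its label and stream the logs from the deployment instead. The following is a sketch that assumes the default openshift-adp namespace and the app.kubernetes.io/name=velero label used elsewhere in this document:

  $ oc get pods -n openshift-adp -l app.kubernetes.io/name=velero
  $ oc logs -f -n openshift-adp deployment/velero -c velero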

Velero pod debug logs

You can specify the Velero log level in the DataProtectionApplication resource as shown in the following example.

This option is available starting from OADP 1.0.3.

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: velero-sample
  spec:
    configuration:
      velero:
        logLevel: warning

The following logLevel values are available:

  • trace

  • debug

  • info

  • warning

  • error

  • fatal

  • panic

It is recommended to use debug for most logs.

Debugging Velero resources with the Velero CLI tool

You can debug Backup and Restore custom resources (CRs) and retrieve logs with the Velero CLI tool.

The Velero CLI tool provides more detailed information than the OpenShift CLI tool.

Syntax

Use the oc exec command to run a Velero CLI command:

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    <backup_restore_cr> <command> <cr_name>

Example

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql

Help option

Use the velero --help option to list all Velero CLI commands:

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    --help

Describe command

Use the velero describe command to retrieve a summary of warnings and errors associated with a Backup or Restore CR:

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    <backup_restore_cr> describe <cr_name>

Example

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql

Logs command

Use the velero logs command to retrieve the logs of a Backup or Restore CR:

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    <backup_restore_cr> logs <cr_name>

Example

  $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
    restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf

Pods crash or restart due to lack of memory or CPU

If a Velero or Restic pod crashes due to a lack of memory or CPU, you can set specific resource requests for either of those resources.

Setting resource requests for a Velero pod

You can use the configuration.velero.podConfig.resourceAllocations specification field in the oadp_v1alpha1_dpa.yaml file to set specific resource requests for a Velero pod.

Procedure

  • Set the cpu and memory resource requests in the YAML file:

    Example Velero file

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    ...
    configuration:
      velero:
        podConfig:
          resourceAllocations: (1)
            requests:
              cpu: 200m
              memory: 256Mi

    (1) The resourceAllocations listed are for average usage.

Setting resource requests for a Restic pod

You can use the configuration.restic.podConfig.resourceAllocations specification field to set specific resource requests for a Restic pod.

Procedure

  • Set the cpu and memory resource requests in the YAML file:

    Example Restic file

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    ...
    configuration:
      restic:
        podConfig:
          resourceAllocations: (1)
            requests:
              cpu: 1000m
              memory: 16Gi

    (1) The resourceAllocations listed are for average usage.

The values for the resource request fields must follow the same format as Kubernetes resource requirements. Also, if you do not specify configuration.velero.podConfig.resourceAllocations or configuration.restic.podConfig.resourceAllocations, the default resources specification for a Velero pod or a Restic pod is as follows:

  requests:
    cpu: 500m
    memory: 128Mi

Issues with Velero and admission webhooks

Velero has limited abilities to resolve admission webhook issues during a restore. If you have workloads with admission webhooks, you might need to use an additional Velero plugin or make changes to how you restore the workload.

Typically, workloads with admission webhooks require you to create a resource of a specific kind first. This is especially true if your workload has child resources because admission webhooks typically block child resources.

For example, creating or restoring a top-level object such as service.serving.knative.dev typically creates child resources automatically. If you do this first, you will not need to use Velero to create and restore these resources. This avoids the problem of child resources being blocked by an admission webhook that Velero might use.

Restoring workarounds for Velero backups that use admission webhooks

This section describes the additional steps required to restore resources for several types of Velero backups that use admission webhooks.

Restoring Knative resources

You might encounter problems using Velero to back up Knative resources that use admission webhooks.

You can avoid such problems by restoring the top level Service resource first whenever you back up and restore Knative resources that use admission webhooks.

Procedure

  • Restore the top level service.serving.knative.dev Service resource:

    $ velero restore <restore_name> \
      --from-backup=<backup_name> --include-resources \
      service.serving.knative.dev

Restoring IBM AppConnect resources

If you experience issues when you use Velero to restore an IBM® AppConnect resource that has an admission webhook, you can run the checks in this procedure.

Procedure

  1. Check if you have any mutating admission plugins of kind: MutatingWebhookConfiguration in the cluster:

    1. $ oc get mutatingwebhookconfigurations
  2. Examine the YAML file of each kind: MutatingWebhookConfiguration to ensure that none of its rules block creation of the objects that are experiencing issues. For more information, see the official Kubernetes documentation.

  3. Check that any spec.version in type: Configuration.appconnect.ibm.com/v1beta1 used at backup time is supported by the installed Operator.

Velero plugins returning “received EOF, stopping recv loop” message

Velero plugins are started as separate processes. After the Velero operation has completed, either successfully or not, they exit. Receiving a received EOF, stopping recv loop message in the debug logs indicates that a plugin operation has completed. It does not mean that an error has occurred.

Installation issues

You might encounter issues caused by using invalid directories or incorrect credentials when you install the Data Protection Application.

Backup storage contains invalid directories

The Velero pod log displays the error message, Backup storage contains invalid top-level directories.

Cause

The object storage contains top-level directories that are not Velero directories.

Solution

If the object storage is not dedicated to Velero, you must specify a prefix for the bucket by setting the spec.backupLocations.velero.objectStorage.prefix parameter in the DataProtectionApplication manifest.
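The following is a minimal sketch of where the prefix is set in the DataProtectionApplication manifest. The provider, bucket, and prefix values are placeholders:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: <dpa_name>
  spec:
    backupLocations:
    - velero:
        provider: aws
        objectStorage:
          bucket: <bucket_name>
          prefix: <prefix>
  # ...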

Incorrect AWS credentials

The oadp-aws-registry pod log displays the error message, InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.

The Velero pod log displays the error message, NoCredentialProviders: no valid providers in chain.

Cause

The credentials-velero file used to create the Secret object is incorrectly formatted.

Solution

Ensure that the credentials-velero file is correctly formatted, as in the following example:

Example credentials-velero file

  [default] (1)
  aws_access_key_id=AKIAIOSFODNN7EXAMPLE (2)
  aws_secret_access_key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

  (1) AWS default profile.
  (2) Do not enclose the values with quotation marks (" or ').
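After you correct the credentials-velero file, recreate the Secret object from it. The following is a sketch that assumes the default cloud-credentials Secret name and cloud key used by OADP for AWS; adjust both if your DataProtectionApplication references a different credential Secret or key:

  $ oc delete secret cloud-credentials -n openshift-adp
  $ oc create secret generic cloud-credentials -n openshift-adp --from-file cloud=credentials-velero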

OADP Operator issues

The OpenShift API for Data Protection (OADP) Operator might encounter issues caused by problems it is not able to resolve.

OADP Operator fails silently

The S3 buckets of an OADP Operator might be empty, but when you run the command oc get po -n <OADP_Operator_namespace>, you see that the Operator has a status of Running. In such a case, the Operator is said to have failed silently because it incorrectly reports that it is running.

Cause

The problem is caused when cloud credentials provide insufficient permissions.

Solution

Retrieve a list of backup storage locations (BSLs) and check the manifest of each BSL for credential issues.

Procedure

  1. Run one of the following commands to retrieve a list of BSLs:

    1. Using the OpenShift CLI:

      1. $ oc get backupstoragelocation -A
    2. Using the Velero CLI:

      1. $ velero backup-location get -n <OADP_Operator_namespace>
  2. Using the list of BSLs, run the following command to display the manifest of each BSL, and examine each manifest for an error.

    1. $ oc get backupstoragelocation -n <namespace> -o yaml

Example result

  apiVersion: v1
  items:
  - apiVersion: velero.io/v1
    kind: BackupStorageLocation
    metadata:
      creationTimestamp: "2023-11-03T19:49:04Z"
      generation: 9703
      name: example-dpa-1
      namespace: openshift-adp-operator
      ownerReferences:
      - apiVersion: oadp.openshift.io/v1alpha1
        blockOwnerDeletion: true
        controller: true
        kind: DataProtectionApplication
        name: example-dpa
        uid: 0beeeaff-0287-4f32-bcb1-2e3c921b6e82
      resourceVersion: "24273698"
      uid: ba37cd15-cf17-4f7d-bf03-8af8655cea83
    spec:
      config:
        enableSharedConfig: "true"
        region: us-west-2
      credential:
        key: credentials
        name: cloud-credentials
      default: true
      objectStorage:
        bucket: example-oadp-operator
        prefix: example
      provider: aws
    status:
      lastValidationTime: "2023-11-10T22:06:46Z"
      message: "BackupStorageLocation \"example-dpa-1\" is unavailable: rpc
        error: code = Unknown desc = WebIdentityErr: failed to retrieve credentials\ncaused
        by: AccessDenied: Not authorized to perform sts:AssumeRoleWithWebIdentity\n\tstatus
        code: 403, request id: d3f2e099-70a0-467b-997e-ff62345e3b54"
      phase: Unavailable
  kind: List
  metadata:
    resourceVersion: ""

OADP timeouts

Extending a timeout allows complex or resource-intensive processes to complete successfully without premature termination. This configuration can reduce the likelihood of errors, retries, or failures.

Ensure that you balance timeout extensions in a logical manner so that you do not configure excessively long timeouts that might hide underlying issues in the process. Carefully consider and monitor an appropriate timeout value that meets the needs of the process and the overall system performance.

The following are various OADP timeouts, with instructions of how and when to implement these parameters:

Restic timeout

timeout defines the Restic timeout. The default value is 1h.

Use the Restic timeout for the following scenarios:

  • For Restic backups with total PV data usage that is greater than 500GB.

  • If backups are timing out with the following error:

    1. level=error msg="Error backing up item" backup=velero/monitoring error="timed out waiting for all PodVolumeBackups to complete"

Procedure

  • Edit the values in the spec.configuration.restic.timeout block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_name>
    spec:
      configuration:
        restic:
          timeout: 1h
    # ...

Velero resource timeout

resourceTimeout defines how long to wait for several Velero resources before timeout occurs, such as Velero custom resource definition (CRD) availability, volumeSnapshot deletion, and repository availability. The default is 10m.

Use the resourceTimeout for the following scenarios:

  • For backups with total PV data usage that is greater than 1TB. This parameter is used as a timeout value when Velero tries to clean up or delete the Container Storage Interface (CSI) snapshots, before marking the backup as complete.

    • A sub-task of this cleanup tries to patch the VolumeSnapshotContent (VSC), and this timeout can be used for that task.
  • To create or ensure a backup repository is ready for filesystem based backups for Restic or Kopia.

  • To check if the Velero CRD is available in the cluster before restoring the custom resource (CR) or resource from the backup.

Procedure

  • Edit the values in the spec.configuration.velero.resourceTimeout block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_name>
    spec:
      configuration:
        velero:
          resourceTimeout: 10m
    # ...

Data Mover timeout

timeout is a user-supplied timeout to complete VolumeSnapshotBackup and VolumeSnapshotRestore. The default value is 10m.

Use the Data Mover timeout for the following scenarios:

  • If creation of VolumeSnapshotBackups (VSBs) and VolumeSnapshotRestores (VSRs) times out after 10 minutes.

  • For large scale environments with total PV data usage that is greater than 500GB. Set the timeout to 1h.

  • With the VolumeSnapshotMover (VSM) plugin.

  • Only with OADP 1.1.x.

Procedure

  • Edit the values in the spec.features.dataMover.timeout block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_name>
    spec:
      features:
        dataMover:
          timeout: 10m
    # ...

CSI snapshot timeout

CSISnapshotTimeout specifies how long to wait during creation for the CSI VolumeSnapshot status to become ReadyToUse before returning a timeout error. The default value is 10m.

Use the CSISnapshotTimeout for the following scenarios:

  • With the CSI plugin.

  • For very large storage volumes that may take longer than 10 minutes to snapshot. Adjust this timeout if timeouts are found in the logs.

Typically, the default value for CSISnapshotTimeout does not require adjustment, because the default setting can accommodate large storage volumes.

Procedure

  • Edit the values in the spec.csiSnapshotTimeout block of the Backup CR manifest, as in the following example:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: <backup_name>
    spec:
      csiSnapshotTimeout: 10m
    # ...

Velero default item operation timeout

defaultItemOperationTimeout defines how long to wait on asynchronous BackupItemActions and RestoreItemActions to complete before timing out. The default value is 1h.

Use the defaultItemOperationTimeout for the following scenarios:

  • Only with Data Mover 1.2.x.

  • To specify the amount of time a particular backup or restore should wait for the Asynchronous actions to complete. In the context of OADP features, this value is used for the Asynchronous actions involved in the Container Storage Interface (CSI) Data Mover feature.

  • When defaultItemOperationTimeout is defined in the Data Protection Application (DPA), it applies to both backup and restore operations. You can use itemOperationTimeout to define only the backup or only the restore of those CRs, as described in the following "Item operation timeout - restore" and "Item operation timeout - backup" sections.

Procedure

  • Edit the values in the spec.configuration.velero.defaultItemOperationTimeout block of the DataProtectionApplication CR manifest, as in the following example:

    apiVersion: oadp.openshift.io/v1alpha1
    kind: DataProtectionApplication
    metadata:
      name: <dpa_name>
    spec:
      configuration:
        velero:
          defaultItemOperationTimeout: 1h
    # ...

Item operation timeout - restore

ItemOperationTimeout specifies the time that is used to wait for RestoreItemAction operations. The default value is 1h.

Use the restore ItemOperationTimeout for the following scenarios:

  • Only with Data Mover 1.2.x.

  • For Data Mover uploads and downloads to or from the BackupStorageLocation. If the restore action is not completed when the timeout is reached, it is marked as failed. If Data Mover operations are failing because of timeouts on large storage volumes, increase this timeout setting.

Procedure

  • Edit the values in the Restore.spec.itemOperationTimeout block of the Restore CR manifest, as in the following example:

    apiVersion: velero.io/v1
    kind: Restore
    metadata:
      name: <restore_name>
    spec:
      itemOperationTimeout: 1h
    # ...

Item operation timeout - backup

ItemOperationTimeout specifies the time used to wait for asynchronous BackupItemAction operations. The default value is 1h.

Use the backup ItemOperationTimeout for the following scenarios:

  • Only with Data Mover 1.2.x.

  • For Data Mover uploads and downloads to or from the BackupStorageLocation. If the backup action is not completed when the timeout is reached, it is marked as failed. If Data Mover operations are failing because of timeouts on large storage volumes, increase this timeout setting.

Procedure

  • Edit the values in the Backup.spec.itemOperationTimeout block of the Backup CR manifest, as in the following example:

    apiVersion: velero.io/v1
    kind: Backup
    metadata:
      name: <backup_name>
    spec:
      itemOperationTimeout: 1h
    # ...

Backup and Restore CR issues

You might encounter these common issues with Backup and Restore custom resources (CRs).

Backup CR cannot retrieve volume

The Backup CR displays the error message, InvalidVolume.NotFound: The volume ‘vol-xxxx’ does not exist.

Cause

The persistent volume (PV) and the snapshot locations are in different regions.

Solution

  1. Edit the value of the spec.snapshotLocations.velero.config.region key in the DataProtectionApplication manifest so that the snapshot location is in the same region as the PV, as shown in the sketch after this procedure.

  2. Create a new Backup CR.
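The following sketch shows where the region key sits in the DataProtectionApplication manifest. The provider and region values are placeholders; set the region to the region of the PV:

  apiVersion: oadp.openshift.io/v1alpha1
  kind: DataProtectionApplication
  metadata:
    name: <dpa_name>
  spec:
    snapshotLocations:
    - velero:
        provider: aws
        config:
          region: <pv_region>
  # ...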

Backup CR status remains in progress

The status of a Backup CR remains in the InProgress phase and does not complete.

Cause

If a backup is interrupted, it cannot be resumed.

Solution

  1. Retrieve the details of the Backup CR:

    $ oc -n openshift-adp exec deployment/velero -c velero -- ./velero \
      backup describe <backup>
  2. Delete the Backup CR:

    1. $ oc delete backup <backup> -n openshift-adp

    You do not need to clean up the backup location because a Backup CR in progress has not uploaded files to object storage.

  3. Create a new Backup CR.

Backup CR status remains in PartiallyFailed

The status of a Backup CR without Restic in use remains in the PartiallyFailed phase and does not complete. A snapshot of the affiliated PVC is not created.

Cause

If the backup is created based on the CSI snapshot class, but the label is missing, the CSI snapshot plugin fails to create a snapshot. As a result, the Velero pod logs an error similar to the following:

  1. time="2023-02-17T16:33:13Z" level=error msg="Error backing up item" backup=openshift-adp/user1-backup-check5 error="error executing custom action (groupResource=persistentvolumeclaims, namespace=busy1, name=pvc1-user1): rpc error: code = Unknown desc = failed to get volumesnapshotclass for storageclass ocs-storagecluster-ceph-rbd: failed to get volumesnapshotclass for provisioner openshift-storage.rbd.csi.ceph.com, ensure that the desired volumesnapshot class has the velero.io/csi-volumesnapshot-class label" logSource="/remote-source/velero/app/pkg/backup/backup.go:417" name=busybox-79799557b5-vprq

Solution

  1. Delete the Backup CR:

    1. $ oc delete backup <backup> -n openshift-adp
  2. If required, clean up the stored data on the BackupStorageLocation to free up space.

  3. Apply label velero.io/csi-volumesnapshot-class=true to the VolumeSnapshotClass object:

    1. $ oc label volumesnapshotclass/<snapclass_name> velero.io/csi-volumesnapshot-class=true
  4. Create a new Backup CR.

Restic issues

You might encounter these issues when you back up applications with Restic.

Restic permission error for NFS data volumes with root_squash enabled

The Restic pod log displays the error message: controller=pod-volume-backup error="fork/exec /usr/bin/restic: permission denied".

Cause

If your NFS data volumes have root_squash enabled, Restic maps to nfsnobody and does not have permission to create backups.

Solution

You can resolve this issue by creating a supplemental group for Restic and adding the group ID to the DataProtectionApplication manifest:

  1. Create a supplemental group for Restic on the NFS data volume.

  2. Set the setgid bit on the NFS directories so that group ownership is inherited (see the example after this procedure).

  3. Add the spec.configuration.restic.supplementalGroups parameter and the group ID to the DataProtectionApplication manifest, as in the following example:

    spec:
      configuration:
        restic:
          enable: true
          supplementalGroups:
          - <group_id> (1)

    (1) Specify the supplemental group ID.
  4. Wait for the Restic pods to restart so that the changes are applied.
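For step 2 of this procedure, you can set the setgid bit on the NFS export directory with a standard chmod command on the NFS server; the path is a placeholder:

  # chmod g+s /<path_to_nfs_export>

With the setgid bit set, files and subdirectories created under the export inherit the directory's group, so the supplemental group added to the Restic pods retains access to the data.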

Restic Backup CR cannot be recreated after bucket is emptied

If you create a Restic Backup CR for a namespace, empty the object storage bucket, and then recreate the Backup CR for the same namespace, the recreated Backup CR fails.

The velero pod log displays the following error message: stderr=Fatal: unable to open config file: Stat: The specified key does not exist.\nIs there a repository at the following location?.

Cause

Velero does not recreate or update the Restic repository from the ResticRepository manifest if the Restic directories are deleted from object storage. See Velero issue 4421 for more information.

Solution

  • Remove the related Restic repository from the namespace by running the following command:

    $ oc delete resticrepository -n openshift-adp <name_of_the_restic_repository>

    In the following error log, mysql-persistent is the problematic Restic repository.

    1. time="2021-12-29T18:29:14Z" level=info msg="1 errors
    2. encountered backup up item" backup=velero/backup65
    3. logSource="pkg/backup/backup.go:431" name=mysql-7d99fc949-qbkds
    4. time="2021-12-29T18:29:14Z" level=error msg="Error backing up item"
    5. backup=velero/backup65 error="pod volume backup failed: error running
    6. restic backup, stderr=Fatal: unable to open config file: Stat: The
    7. specified key does not exist.\nIs there a repository at the following
    8. location?\ns3:http://minio-minio.apps.mayap-oadp-
    9. veleo-1234.qe.devcluster.openshift.com/mayapvelerooadp2/velero1/
    10. restic/mysql-persistent\n: exit status 1" error.file="/remote-source/
    11. src/github.com/vmware-tanzu/velero/pkg/restic/backupper.go:184"
    12. error.function="github.com/vmware-tanzu/velero/
    13. pkg/restic.(*backupper).BackupPodVolumes"
    14. logSource="pkg/backup/backup.go:435" name=mysql-7d99fc949-qbkds
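If you are not sure of the repository name, you can list the ResticRepository CRs in the namespace first:

    $ oc get resticrepositories -n openshift-adp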

Restic restore partially failing on OCP 4.14 due to changed PSA policy

OpenShift Container Platform 4.14 enforces a Pod Security Admission (PSA) policy that can hinder the readiness of pods during a Restic restore process.

If a SecurityContextConstraints (SCC) resource is not found when a pod is created, and the PSA policy on the pod is not set up to meet the required standards, pod admission is denied.

This issue arises due to the resource restore order of Velero.

Sample error

  1. \"level=error\" in line#2273: time=\"2023-06-12T06:50:04Z\"
  2. level=error msg=\"error restoring mysql-869f9f44f6-tp5lv: pods\\\
  3. "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity\\\
  4. "restricted:v1.24\\\": privil eged (container \\\"mysql\\\
  5. " must not set securityContext.privileged=true),
  6. allowPrivilegeEscalation != false (containers \\\
  7. "restic-wait\\\", \\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\
  8. "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\
  9. "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\
  10. "RuntimeDefault\\\" or \\\"Localhost\\\")\" logSource=\"/remote-source/velero/app/pkg/restore/restore.go:1388\" restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n
  11. velero container contains \"level=error\" in line#2447: time=\"2023-06-12T06:50:05Z\"
  12. level=error msg=\"Namespace todolist-mariadb,
  13. resource restore error: error restoring pods/todolist-mariadb/mysql-869f9f44f6-tp5lv: pods \\\
  14. "mysql-869f9f44f6-tp5lv\\\" is forbidden: violates PodSecurity \\\"restricted:v1.24\\\": privileged (container \\\
  15. "mysql\\\" must not set securityContext.privileged=true),
  16. allowPrivilegeEscalation != false (containers \\\
  17. "restic-wait\\\",\\\"mysql\\\" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers \\\
  18. "restic-wait\\\", \\\"mysql\\\" must set securityContext.capabilities.drop=[\\\"ALL\\\"]), seccompProfile (pod or containers \\\
  19. "restic-wait\\\", \\\"mysql\\\" must set securityContext.seccompProfile.type to \\\
  20. "RuntimeDefault\\\" or \\\"Localhost\\\")\"
  21. logSource=\"/remote-source/velero/app/pkg/controller/restore_controller.go:510\"
  22. restore=openshift-adp/todolist-backup-0780518c-08ed-11ee-805c-0a580a80e92c\n]",

Solution

  1. In your DPA custom resource (CR), check or set the restore-resource-priorities field on the Velero server to ensure that securitycontextconstraints is listed in order before pods in the list of resources:

    1. $ oc get dpa -o yaml

    Example DPA CR

    # ...
    configuration:
      restic:
        enable: true
      velero:
        args:
          restore-resource-priorities: 'securitycontextconstraints,customresourcedefinitions,namespaces,storageclasses,volumesnapshotclass.snapshot.storage.k8s.io,volumesnapshotcontents.snapshot.storage.k8s.io,volumesnapshots.snapshot.storage.k8s.io,datauploads.velero.io,persistentvolumes,persistentvolumeclaims,serviceaccounts,secrets,configmaps,limitranges,pods,replicasets.apps,clusterclasses.cluster.x-k8s.io,endpoints,services,-,clusterbootstraps.run.tanzu.vmware.com,clusters.cluster.x-k8s.io,clusterresourcesets.addons.cluster.x-k8s.io' (1)
        defaultPlugins:
        - gcp
        - openshift

    (1) If you have an existing restore resource priority list, ensure you combine that existing list with the complete list.
  2. Ensure that the security standards for the application pods are aligned, as provided in Fixing PodSecurity Admission warnings for deployments, to prevent deployment warnings. If the application is not aligned with security standards, an error can occur regardless of the SCC.

This solution is temporary, and ongoing discussions are in progress to address it. 

Using the must-gather tool

You can collect logs, metrics, and information about OADP custom resources by using the must-gather tool.

The must-gather data must be attached to all customer cases.

Prerequisites

  • You must be logged in to the OKD cluster as a user with the cluster-admin role.

  • You must have the OpenShift CLI (oc) installed.

Procedure

  1. Navigate to the directory where you want to store the must-gather data.

  2. Run the oc adm must-gather command for one of the following data collection options:

    1. $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1

    The data is saved as must-gather/must-gather.tar.gz. You can upload this file to a support case on the Red Hat Customer Portal.

    $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1 \
      -- /usr/bin/gather_metrics_dump

    This operation can take a long time. The data is saved as must-gather/metrics/prom_data.tar.gz.

Combining options when using the must-gather tool

Currently, it is not possible to combine must-gather scripts, for example specifying a timeout threshold while permitting insecure TLS connections. In some situations, you can get around this limitation by setting up internal variables on the must-gather command line, such as the following example:

  1. $ oc adm must-gather --image=brew.registry.redhat.io/rh-osbs/oadp-oadp-mustgather-rhel8:1.1.1-8 -- skip_tls=true /usr/bin/gather_with_timeout <timeout_value_in_seconds>

In this example, set the skip_tls variable before running the gather_with_timeout script. The result is a combination of gather_with_timeout and gather_without_tls.

The only other variables that you can specify this way are the following:

  • logs_since, with a default value of 72h

  • request_timeout, with a default value of 0s
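For example, to limit collected logs to the last 24 hours while also applying a one-hour timeout, you can set logs_since in the same way, reusing the gather_with_timeout script shown above (the image tag is an example):

  $ oc adm must-gather --image=registry.redhat.io/oadp/oadp-mustgather-rhel8:v1.1 -- logs_since=24h /usr/bin/gather_with_timeout 3600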

OADP Monitoring

OKD provides a monitoring stack that allows users and administrators to effectively monitor and manage their clusters, as well as to monitor and analyze the workload performance of user applications and services running on the clusters, including receiving alerts if an event occurs.

OADP monitoring setup

The OADP Operator leverages the OpenShift User Workload Monitoring, provided by the OpenShift Monitoring stack, to retrieve metrics from the Velero service endpoint. The monitoring stack allows you to create user-defined alerting rules or query metrics by using the OpenShift Metrics query front end.

With User Workload Monitoring enabled, you can configure and use any Prometheus-compatible third-party UI, such as Grafana, to visualize Velero metrics.

Monitoring metrics requires that you enable monitoring for user-defined projects and create a ServiceMonitor resource to scrape the metrics from the OADP service endpoint, which resides in the openshift-adp namespace.

Prerequisites

  • You have access to an OKD cluster using an account with cluster-admin permissions.

  • You have created a cluster monitoring config map.

Procedure

  1. Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring namespace:

    1. $ oc edit configmap cluster-monitoring-config -n openshift-monitoring
  2. Add or enable the enableUserWorkload option in the data section’s config.yaml field:

    apiVersion: v1
    data:
      config.yaml: |
        enableUserWorkload: true (1)
    kind: ConfigMap
    metadata:
    # ...

    (1) Add this option or set it to true.
  3. Wait a short period of time to verify the User Workload Monitoring Setup by checking if the following components are up and running in the openshift-user-workload-monitoring namespace:

    1. $ oc get pods -n openshift-user-workload-monitoring

    Example output

    NAME                                   READY   STATUS    RESTARTS   AGE
    prometheus-operator-6844b4b99c-b57j9   2/2     Running   0          43s
    prometheus-user-workload-0             5/5     Running   0          32s
    prometheus-user-workload-1             5/5     Running   0          32s
    thanos-ruler-user-workload-0           3/3     Running   0          32s
    thanos-ruler-user-workload-1           3/3     Running   0          32s
  4. Verify the existence of the user-workload-monitoring-config ConfigMap in the openshift-user-workload-monitoring namespace. If it exists, skip the remaining steps in this procedure.

    1. $ oc get configmap user-workload-monitoring-config -n openshift-user-workload-monitoring

    Example output

    1. Error from server (NotFound): configmaps "user-workload-monitoring-config" not found
  5. Create a user-workload-monitoring-config ConfigMap object for the User Workload Monitoring, and save it under the 2_configure_user_workload_monitoring.yaml file name:

    Example ConfigMap object

    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: user-workload-monitoring-config
      namespace: openshift-user-workload-monitoring
    data:
      config.yaml: |
  6. Apply the 2_configure_user_workload_monitoring.yaml file:

    $ oc apply -f 2_configure_user_workload_monitoring.yaml

    Example output

    configmap/user-workload-monitoring-config created

Creating OADP service monitor

OADP provides an openshift-adp-velero-metrics-svc service which is created when the DPA is configured. The service monitor used by the user workload monitoring must point to the defined service.

Get details about the service by running the following commands:

Procedure

  1. Ensure the openshift-adp-velero-metrics-svc service exists. It should contain the app.kubernetes.io/name=velero label, which is used as the selector for the ServiceMonitor object.

    1. $ oc get svc -n openshift-adp -l app.kubernetes.io/name=velero

    Example output

    NAME                               TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
    openshift-adp-velero-metrics-svc   ClusterIP   172.30.38.244   <none>        8085/TCP   1h
  2. Create a ServiceMonitor YAML file that matches the existing service label, and save the file as 3_create_oadp_service_monitor.yaml. The service monitor is created in the openshift-adp namespace where the openshift-adp-velero-metrics-svc service resides.

    Example ServiceMonitor object

    apiVersion: monitoring.coreos.com/v1
    kind: ServiceMonitor
    metadata:
      labels:
        app: oadp-service-monitor
      name: oadp-service-monitor
      namespace: openshift-adp
    spec:
      endpoints:
      - interval: 30s
        path: /metrics
        targetPort: 8085
        scheme: http
      selector:
        matchLabels:
          app.kubernetes.io/name: "velero"
  3. Apply the 3_create_oadp_service_monitor.yaml file:

    1. $ oc apply -f 3_create_oadp_service_monitor.yaml

    Example output

    1. servicemonitor.monitoring.coreos.com/oadp-service-monitor created

Verification

  • Confirm that the new service monitor is in an Up state by using the Administrator perspective of the OKD web console:

    1. Navigate to the Observe → Targets page.

    2. Ensure the Filter is unselected or that the User source is selected and type openshift-adp in the Text search field.

    3. Verify that the Status for the service monitor is Up.

      Figure 1. OADP metrics targets

Creating an alerting rule

The OKD monitoring stack allows you to receive alerts configured by using alerting rules. To create an alerting rule for the OADP project, use one of the metrics that are scraped with the user workload monitoring.

Procedure

  1. Create a PrometheusRule YAML file with the sample OADPBackupFailing alert and save it as 4_create_oadp_alert_rule.yaml.

    Sample OADPBackupFailing alert

    apiVersion: monitoring.coreos.com/v1
    kind: PrometheusRule
    metadata:
      name: sample-oadp-alert
      namespace: openshift-adp
    spec:
      groups:
      - name: sample-oadp-backup-alert
        rules:
        - alert: OADPBackupFailing
          annotations:
            description: 'OADP had {{$value | humanize}} backup failures over the last 2 hours.'
            summary: OADP has issues creating backups
          expr: |
            increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h]) > 0
          for: 5m
          labels:
            severity: warning

    In this sample, the alert displays under the following conditions:

    • There is an increase of new failing backups during the last 2 hours that is greater than 0, and the state persists for at least 5 minutes.

    • If the time of the first increase is less than 5 minutes, the alert is in a Pending state, after which it turns into a Firing state.

  2. Apply the 4_create_oadp_alert_rule.yaml file, which creates the PrometheusRule object in the openshift-adp namespace:

    1. $ oc apply -f 4_create_oadp_alert_rule.yaml

    Example output

    1. prometheusrule.monitoring.coreos.com/sample-oadp-alert created

Verification

  • After the Alert is triggered, you can view it in the following ways:

    • In the Developer perspective, select the Observe menu.

    • In the Administrator perspective, under the Observe → Alerting menu, select User in the Filter box. Otherwise, by default only the Platform alerts are displayed.

      Figure 2. OADP backup failing alert

List of available metrics

The following is a list of metrics provided by OADP, together with their types.

Metric name | Description | Type
------------|-------------|-----
kopia_content_cache_hit_bytes | Number of bytes retrieved from the cache | Counter
kopia_content_cache_hit_count | Number of times content was retrieved from the cache | Counter
kopia_content_cache_malformed | Number of times malformed content was read from the cache | Counter
kopia_content_cache_miss_count | Number of times content was not found in the cache and fetched | Counter
kopia_content_cache_missed_bytes | Number of bytes retrieved from the underlying storage | Counter
kopia_content_cache_miss_error_count | Number of times content could not be found in the underlying storage | Counter
kopia_content_cache_store_error_count | Number of times content could not be saved in the cache | Counter
kopia_content_get_bytes | Number of bytes retrieved using GetContent() | Counter
kopia_content_get_count | Number of times GetContent() was called | Counter
kopia_content_get_error_count | Number of times GetContent() was called and the result was an error | Counter
kopia_content_get_not_found_count | Number of times GetContent() was called and the result was not found | Counter
kopia_content_write_bytes | Number of bytes passed to WriteContent() | Counter
kopia_content_write_count | Number of times WriteContent() was called | Counter
velero_backup_attempt_total | Total number of attempted backups | Counter
velero_backup_deletion_attempt_total | Total number of attempted backup deletions | Counter
velero_backup_deletion_failure_total | Total number of failed backup deletions | Counter
velero_backup_deletion_success_total | Total number of successful backup deletions | Counter
velero_backup_duration_seconds | Time taken to complete backup, in seconds | Histogram
velero_backup_failure_total | Total number of failed backups | Counter
velero_backup_items_errors | Total number of errors encountered during backup | Gauge
velero_backup_items_total | Total number of items backed up | Gauge
velero_backup_last_status | Last status of the backup. A value of 1 is success, 0 is failure | Gauge
velero_backup_last_successful_timestamp | Last time a backup ran successfully, Unix timestamp in seconds | Gauge
velero_backup_partial_failure_total | Total number of partially failed backups | Counter
velero_backup_success_total | Total number of successful backups | Counter
velero_backup_tarball_size_bytes | Size, in bytes, of a backup | Gauge
velero_backup_total | Current number of existent backups | Gauge
velero_backup_validation_failure_total | Total number of validation failed backups | Counter
velero_backup_warning_total | Total number of warned backups | Counter
velero_csi_snapshot_attempt_total | Total number of CSI attempted volume snapshots | Counter
velero_csi_snapshot_failure_total | Total number of CSI failed volume snapshots | Counter
velero_csi_snapshot_success_total | Total number of CSI successful volume snapshots | Counter
velero_restore_attempt_total | Total number of attempted restores | Counter
velero_restore_failed_total | Total number of failed restores | Counter
velero_restore_partial_failure_total | Total number of partially failed restores | Counter
velero_restore_success_total | Total number of successful restores | Counter
velero_restore_total | Current number of existent restores | Gauge
velero_restore_validation_failed_total | Total number of failed restores failing validations | Counter
velero_volume_snapshot_attempt_total | Total number of attempted volume snapshots | Counter
velero_volume_snapshot_failure_total | Total number of failed volume snapshots | Counter
velero_volume_snapshot_success_total | Total number of successful volume snapshots | Counter

Viewing metrics using the Observe UI

You can view metrics in the OKD web console from the Administrator or Developer perspective. You must have access to the openshift-adp project.

Procedure

  • Navigate to the Observe → Metrics page:

    • If you are using the Developer perspective, follow these steps:

      1. Select Custom query, or click on the Show PromQL link.

      2. Type the query and click Enter.

    • If you are using the Administrator perspective, type the expression in the text field and select Run Queries.

      Figure 3. OADP metrics query
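For example, to chart how many backups have failed recently, you can enter a query that is built from one of the metrics listed above, such as the following:

  increase(velero_backup_failure_total{job="openshift-adp-velero-metrics-svc"}[2h])

This is the same expression that the sample OADPBackupFailing alerting rule uses; any other metric from the list, such as velero_backup_total, can be queried the same way.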