Performing advanced Compliance Operator tasks

The Compliance Operator includes options for advanced users for the purpose of debugging or integration with existing tooling.

Using the ComplianceSuite and ComplianceScan objects directly

While it is recommended that users take advantage of the ScanSetting and ScanSettingBinding objects to define the suites and scans, there are valid use cases to define the ComplianceSuite objects directly:

  • Specifying only a single rule to scan. This can be useful for debugging together with the debug: true attribute, which increases the OpenSCAP scanner verbosity; limiting the test to a single rule keeps the amount of debug output manageable.

  • Providing a custom nodeSelector. For a remediation to be applicable, the nodeSelector must match an existing MachineConfigPool.

  • Pointing the Scan to a bespoke config map with a tailoring file.

  • For testing or development when the overhead of parsing profiles from bundles is not required.

The following example shows a ComplianceSuite that scans the worker machines with only a single rule:

  apiVersion: compliance.openshift.io/v1alpha1
  kind: ComplianceSuite
  metadata:
    name: workers-compliancesuite
  spec:
    scans:
      - name: workers-scan
        profile: xccdf_org.ssgproject.content_profile_moderate
        content: ssg-rhcos4-ds.xml
        contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
        debug: true
        rule: xccdf_org.ssgproject.content_rule_no_direct_root_logins
        nodeSelector:
          node-role.kubernetes.io/worker: ""

The ComplianceSuite object and the ComplianceScan objects referred to above specify several attributes in a format that OpenSCAP expects.

To find out the profile, content, or rule values, you can start by creating a similar suite from ScanSetting and ScanSettingBinding objects, or by inspecting the objects parsed from the ProfileBundle objects, such as Rules or Profiles. Those objects contain the xccdf_org identifiers that you can use to refer to them from a ComplianceSuite.
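
For example, you can list the parsed Profile and Rule objects and read their identifiers from there; the rule name and the id field used in the last command are illustrative and depend on the content bundles loaded in your cluster:

  $ oc get -n openshift-compliance profiles.compliance
  $ oc get -n openshift-compliance rules.compliance
  $ oc get -n openshift-compliance rules.compliance rhcos4-no-direct-root-logins -o jsonpath='{.id}'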

Setting PriorityClass for ScanSetting scans

In large-scale environments, the default PriorityClass can be too low to guarantee that pods run scans on time. For clusters that must maintain compliance or guarantee automated scanning, it is recommended to set the priorityClass attribute so that the Compliance Operator is always given priority in resource-constrained situations.

Procedure

  • Set the priorityClass attribute in the ScanSetting object:

    apiVersion: compliance.openshift.io/v1alpha1
    strictNodeScan: true
    metadata:
      name: default
      namespace: openshift-compliance
    priorityClass: compliance-high-priority (1)
    kind: ScanSetting
    showNotApplicable: false
    rawResultStorage:
      nodeSelector:
        node-role.kubernetes.io/master: ''
      pvAccessModes:
        - ReadWriteOnce
      rotation: 3
      size: 1Gi
      tolerations:
        - effect: NoSchedule
          key: node-role.kubernetes.io/master
          operator: Exists
        - effect: NoExecute
          key: node.kubernetes.io/not-ready
          operator: Exists
          tolerationSeconds: 300
        - effect: NoExecute
          key: node.kubernetes.io/unreachable
          operator: Exists
          tolerationSeconds: 300
        - effect: NoSchedule
          key: node.kubernetes.io/memory-pressure
          operator: Exists
    schedule: 0 1 * * *
    roles:
      - master
      - worker
    scanTolerations:
      - operator: Exists
    (1) If the PriorityClass referenced in the ScanSetting cannot be found, the Operator will leave the PriorityClass empty, issue a warning, and continue scheduling scans without a PriorityClass.
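
The ScanSetting above references a PriorityClass named compliance-high-priority, which must already exist in the cluster. If it does not, you can create one with the standard Kubernetes PriorityClass API; the name matches the ScanSetting above, and the value shown is only an example:

    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: compliance-high-priority
    value: 99   # example value; choose it relative to the other priority classes in your cluster
    globalDefault: false
    description: "Priority class for Compliance Operator scan pods."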

Using raw tailored profiles

While the TailoredProfile CR enables the most common tailoring operations, the XCCDF standard allows even more flexibility in tailoring OpenSCAP profiles. In addition, if your organization has used OpenSCAP previously, you might have an existing XCCDF tailoring file that you can reuse.

The ComplianceSuite object contains an optional tailoringConfigMap attribute that you can point to a custom tailoring file. The value of the tailoringConfigMap attribute is the name of a config map, which must contain a key called tailoring.xml whose value is the tailoring contents.

Procedure

  1. Create the ConfigMap object from a file:

    $ oc -n openshift-compliance \
        create configmap nist-moderate-modified \
        --from-file=tailoring.xml=/path/to/the/tailoringFile.xml
  2. Reference the tailoring file in a scan that belongs to a suite:

    apiVersion: compliance.openshift.io/v1alpha1
    kind: ComplianceSuite
    metadata:
      name: workers-compliancesuite
    spec:
      debug: true
      scans:
        - name: workers-scan
          profile: xccdf_org.ssgproject.content_profile_moderate
          content: ssg-rhcos4-ds.xml
          contentImage: registry.redhat.io/compliance/openshift-compliance-content-rhel8@sha256:45dc...
          debug: true
          tailoringConfigMap:
            name: nist-moderate-modified
          nodeSelector:
            node-role.kubernetes.io/worker: ""
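
Optionally, you can confirm that the config map contains the expected tailoring.xml key before the scan runs, for example:

    $ oc -n openshift-compliance describe configmap nist-moderate-modified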

Performing a rescan

Typically you will want to re-run a scan on a defined schedule, such as every Monday or daily. It can also be useful to re-run a scan once after fixing a problem on a node. To perform a single rescan, add the compliance.openshift.io/rescan= annotation to the scan:

  $ oc -n openshift-compliance \
      annotate compliancescans/rhcos4-e8-worker compliance.openshift.io/rescan=
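
To follow the progress of the rescan, you can watch the scan phases; the scan name matches the one annotated above:

  $ oc get -n openshift-compliance compliancescans/rhcos4-e8-worker -w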

A rescan generates four additional MachineConfig (mc) objects for the rhcos-moderate profile:

  $ oc get mc

Example output

  75-worker-scan-chronyd-or-ntpd-specify-remote-server
  75-worker-scan-configure-usbguard-auditbackend
  75-worker-scan-service-usbguard-enabled
  75-worker-scan-usbguard-allow-hid-and-hub

When the default-auto-apply scan setting is used, remediations are applied automatically and outdated remediations are updated automatically. If there are remediations that were not applied due to dependencies, or remediations that had become outdated, rescanning applies them and might trigger a reboot. Only remediations that use MachineConfig objects trigger reboots. If there are no updates or dependencies to apply, no reboot occurs.
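
For reference, a suite inherits this behavior when its ScanSettingBinding references the default-auto-apply ScanSetting that ships with the Operator. A minimal sketch, with an illustrative binding name and profile:

  apiVersion: compliance.openshift.io/v1alpha1
  kind: ScanSettingBinding
  metadata:
    name: moderate-auto-apply   # illustrative name
    namespace: openshift-compliance
  profiles:
    - name: rhcos4-moderate     # illustrative profile
      kind: Profile
      apiGroup: compliance.openshift.io/v1alpha1
  settingsRef:
    name: default-auto-apply
    kind: ScanSetting
    apiGroup: compliance.openshift.io/v1alpha1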

Setting custom storage size for results

While custom resources such as ComplianceCheckResult represent an aggregated result of one check across all scanned nodes, it can be useful to review the raw results as produced by the scanner. The raw results are produced in the ARF format and can be large (tens of megabytes per node), so it is impractical to store them in a Kubernetes resource backed by the etcd key-value store. Instead, every scan creates a persistent volume (PV) that defaults to 1GB in size. Depending on your environment, you might want to increase the PV size accordingly. This is done using the rawResultStorage.size attribute that is exposed in both the ScanSetting and ComplianceScan resources.

A related parameter is rawResultStorage.rotation, which controls how many scans are retained in the PV before the older scans are rotated out. The default value is 3; setting the rotation policy to 0 disables rotation. Given the default rotation policy and an estimate of 100MB per raw ARF scan report, you can calculate the right PV size for your environment.
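
For example, a scan that covers 20 nodes at roughly 100MB per raw ARF report, with the default rotation of 3, needs on the order of 20 × 100MB × 3 ≈ 6GB of raw result storage, so a rawResultStorage.size comfortably above that estimate leaves headroom. These numbers are only an illustration; the actual report size depends on the content and the nodes being scanned.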

Using custom result storage values

Because OKD can be deployed in a variety of public clouds or bare metal, the Compliance Operator cannot determine available storage configurations. By default, the Compliance Operator will try to create the PV for storing results using the default storage class of the cluster, but a custom storage class can be configured using the rawResultStorage.storageClassName attribute.

If your cluster does not specify a default storage class, this attribute must be set.

Configure the ScanSetting custom resource to use the standard storage class, create persistent volumes that are 10Gi in size, and keep the last 10 results:

Example ScanSetting CR

  apiVersion: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  metadata:
    name: default
    namespace: openshift-compliance
  rawResultStorage:
    storageClassName: standard
    rotation: 10
    size: 10Gi
  roles:
    - worker
    - master
  scanTolerations:
    - effect: NoSchedule
      key: node-role.kubernetes.io/master
      operator: Exists
  schedule: '0 1 * * *'
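
If you are not sure which storage class names are available in your cluster, you can list them first:

  $ oc get storageclass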

Applying remediations generated by suite scans

Although you can use the autoApplyRemediations boolean parameter in a ComplianceSuite object, you can alternatively annotate the object with compliance.openshift.io/apply-remediations. This allows the Operator to apply all of the created remediations.

Procedure

  • Apply the compliance.openshift.io/apply-remediations annotation by running:
  $ oc -n openshift-compliance \
      annotate compliancesuites/workers-compliancesuite compliance.openshift.io/apply-remediations=
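
To review the remediations created for the suite and their current state, you can list them by the suite label, for example:

  $ oc get -n openshift-compliance complianceremediations \
      -l compliance.openshift.io/suite=workers-compliancesuite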

Automatically update remediations

In some cases, a scan with newer content might mark remediations as OUTDATED. As an administrator, you can apply the compliance.openshift.io/remove-outdated annotation to apply new remediations and remove the outdated ones.

Procedure

  • Apply the compliance.openshift.io/remove-outdated annotation:
  $ oc -n openshift-compliance \
      annotate compliancesuites/workers-compliancesuite compliance.openshift.io/remove-outdated=

Alternatively, set the autoUpdateRemediations flag in a ScanSetting or ComplianceSuite object to update the remediations automatically.
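
A minimal sketch of these flags in a ScanSetting object; the name is illustrative, and other ScanSetting fields such as rawResultStorage and scanTolerations are omitted here for brevity:

  apiVersion: compliance.openshift.io/v1alpha1
  kind: ScanSetting
  metadata:
    name: auto-update-settings   # illustrative name
    namespace: openshift-compliance
  autoApplyRemediations: true
  autoUpdateRemediations: true
  schedule: '0 1 * * *'
  roles:
    - worker
    - master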

Creating a custom SCC for the Compliance Operator

In some environments, you must create a custom Security Context Constraints (SCC) file to ensure the correct permissions are available to the Compliance Operator api-resource-collector.

Prerequisites

  • You must have admin privileges.

Procedure

  1. Define the SCC in a YAML file named restricted-adjusted-compliance.yaml:

    SecurityContextConstraints object definition

    allowHostDirVolumePlugin: false
    allowHostIPC: false
    allowHostNetwork: false
    allowHostPID: false
    allowHostPorts: false
    allowPrivilegeEscalation: true
    allowPrivilegedContainer: false
    allowedCapabilities: null
    apiVersion: security.openshift.io/v1
    defaultAddCapabilities: null
    fsGroup:
      type: MustRunAs
    kind: SecurityContextConstraints
    metadata:
      name: restricted-adjusted-compliance
    priority: 30 (1)
    readOnlyRootFilesystem: false
    requiredDropCapabilities:
      - KILL
      - SETUID
      - SETGID
      - MKNOD
    runAsUser:
      type: MustRunAsRange
    seLinuxContext:
      type: MustRunAs
    supplementalGroups:
      type: RunAsAny
    users:
      - system:serviceaccount:openshift-compliance:api-resource-collector (2)
    volumes:
      - configMap
      - downwardAPI
      - emptyDir
      - persistentVolumeClaim
      - projected
      - secret
    (1) The priority of this SCC must be higher than any other SCC that applies to the system:authenticated group.
    (2) Service account used by the Compliance Operator scanner pod.
  2. Create the SCC:

    $ oc create -n openshift-compliance -f restricted-adjusted-compliance.yaml

    Example output

    securitycontextconstraints.security.openshift.io/restricted-adjusted-compliance created

Verification

  1. Verify the SCC was created:

    $ oc get -n openshift-compliance scc restricted-adjusted-compliance

    Example output

    NAME                             PRIV    CAPS         SELINUX     RUNASUSER        FSGROUP     SUPGROUP   PRIORITY   READONLYROOTFS   VOLUMES
    restricted-adjusted-compliance   false   <no value>   MustRunAs   MustRunAsRange   MustRunAs   RunAsAny   30         false            ["configMap","downwardAPI","emptyDir","persistentVolumeClaim","projected","secret"]
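
  2. Optionally, confirm which SCC a Compliance Operator pod was actually admitted under by checking the openshift.io/scc annotation on the pod; the pod name below is a placeholder:

    $ oc -n openshift-compliance describe pod <api-resource-collector-pod> | grep openshift.io/scc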
