Special Resource Operator

Learn about the Special Resource Operator (SRO) and how you can use it to build and manage driver containers for loading kernel modules and device drivers on nodes in an OKD cluster.

The Special Resource Operator is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process.

For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/.

About the Special Resource Operator

The Special Resource Operator (SRO) helps you manage the deployment of kernel modules and drivers on an existing OKD cluster. The SRO can be used for a case as simple as building and loading a single kernel module, or as complex as deploying the driver, device plug-in, and monitoring stack for a hardware accelerator.

For loading kernel modules, the SRO is designed around the use of driver containers. Driver containers are increasingly being used in cloud-native environments, especially when run on pure container operating systems, to deliver hardware drivers to the host. Driver containers extend the kernel stack beyond the out-of-the-box software and hardware features of a specific kernel. Driver containers work on various container-capable Linux distributions. With driver containers, the host operating system stays clean and there is no clash between different library versions or binaries on the host.

The functions described require a connected environment with a constant connection to the network. These functions are not available for disconnected environments.

Installing the Special Resource Operator

As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI or the web console.

Installing the Special Resource Operator by using the CLI

As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OpenShift CLI.

Prerequisites

  • You have a running OKD cluster.

  • You installed the OpenShift CLI (oc).

  • You are logged into the OpenShift CLI as a user with cluster-admin privileges.

Procedure

  1. Install the SRO in the openshift-operators namespace:

    1. Create the following Subscription CR and save the YAML in the sro-sub.yaml file:

      Example Subscription CR

      apiVersion: operators.coreos.com/v1alpha1
      kind: Subscription
      metadata:
        name: openshift-special-resource-operator
        namespace: openshift-operators
      spec:
        channel: "stable"
        installPlanApproval: Automatic
        name: openshift-special-resource-operator
        source: redhat-operators
        sourceNamespace: openshift-marketplace
    2. Create the subscription object by running the following command:

      $ oc create -f sro-sub.yaml
    3. Switch to the openshift-operators project:

      $ oc project openshift-operators

Verification

  • To verify that the Operator deployment is successful, run:

    $ oc get pods

    Example output

    NAME                                                   READY   STATUS    RESTARTS   AGE
    nfd-controller-manager-7f4c5f5778-4lvvk                2/2     Running   0          89s
    special-resource-controller-manager-6dbf7d4f6f-9kl8h   2/2     Running   0          81s

    A successful deployment shows a Running status.
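    Optionally, as an additional check that is not part of the original procedure, you can confirm that the Operator's ClusterServiceVersion reports the Succeeded phase. This assumes the Operator was installed through Operator Lifecycle Manager as described above:

    $ oc get csv -n openshift-operators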

Installing the Special Resource Operator by using the web console

As a cluster administrator, you can install the Special Resource Operator (SRO) by using the OKD web console.

Procedure

  1. Log in to the OKD web console.

  2. Install the Special Resource Operator:

    1. In the OKD web console, click Operators → OperatorHub.

    2. Choose Special Resource Operator from the list of available Operators, and then click Install.

    3. On the Install Operator page, select A specific namespace on the cluster, select the openshift-operators namespace that was used in the previous section, and then click Install.

Verification

To verify that the Special Resource Operator installed successfully:

  1. Navigate to the Operators → Installed Operators page.

  2. Ensure that Special Resource Operator is listed in the openshift-operators project with a Status of InstallSucceeded.

    During installation, an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message.

  3. If the Operator does not appear as installed, troubleshoot further:

    1. Navigate to the Operators → Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failures or errors under Status.

    2. Navigate to the Workloads → Pods page and check the logs for pods in the openshift-operators project.

Using the Special Resource Operator

The Special Resource Operator (SRO) is used to manage the build and deployment of a driver container. The objects required to build and deploy the container can be defined in a Helm chart.

The example in this section uses the simple-kmod SpecialResource object to point to a ConfigMap object that is created to store the Helm charts.
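For orientation, the procedure below produces a chart layout along these lines; the file names match the steps that follow:

  chart/
  └── simple-kmod-0.0.1/
      ├── Chart.yaml
      └── templates/
          ├── 0000-buildconfig.yaml
          └── 1000-driver-container.yaml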

Building and running the simple-kmod SpecialResource by using a config map

In this example, the simple-kmod kernel module shows how the Special Resource Operator (SRO) manages a driver container. The container is defined in the Helm chart templates that are stored in a config map.

Prerequisites

  • You have a running OKD cluster.

  • You set the Image Registry Operator state to Managed for your cluster. See the example check after this list.

  • You installed the OpenShift CLI (oc).

  • You are logged into the OpenShift CLI as a user with cluster-admin privileges.

  • You installed the Node Feature Discovery (NFD) Operator.

  • You installed the SRO.

  • You installed the Helm CLI (helm).
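To confirm the Image Registry Operator state listed in the prerequisites, you can run a check such as the following. This is a sketch rather than part of the original procedure, and it assumes the default cluster image registry configuration resource:

  $ oc get configs.imageregistry.operator.openshift.io cluster -o jsonpath='{.spec.managementState}'

If the prerequisite is met, the command prints Managed.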

Procedure

  1. To create a simple-kmod SpecialResource object, define an image stream and build config to build the image, and a service account, role, role binding, and daemon set to run the container. The service account, role, and role binding are required to run the daemon set with the privileged security context so that the kernel module can be loaded.

    1. Create a templates directory, and change into it:

      $ mkdir -p chart/simple-kmod-0.0.1/templates
      $ cd chart/simple-kmod-0.0.1/templates
    2. Save this YAML template for the image stream and build config in the templates directory as 0000-buildconfig.yaml:

      apiVersion: image.openshift.io/v1
      kind: ImageStream
      metadata:
        labels:
          app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} (1)
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}} (1)
      spec: {}
      ---
      apiVersion: build.openshift.io/v1
      kind: BuildConfig
      metadata:
        labels:
          app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} (1)
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverBuild}} (1)
        annotations:
          specialresource.openshift.io/wait: "true"
          specialresource.openshift.io/driver-container-vendor: simple-kmod
          specialresource.openshift.io/kernel-affine: "true"
      spec:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        runPolicy: "Serial"
        triggers:
          - type: "ConfigChange"
          - type: "ImageChange"
        source:
          git:
            ref: {{.Values.specialresource.spec.driverContainer.source.git.ref}}
            uri: {{.Values.specialresource.spec.driverContainer.source.git.uri}}
          type: Git
        strategy:
          dockerStrategy:
            dockerfilePath: Dockerfile.SRO
            buildArgs:
              - name: "IMAGE"
                value: {{ .Values.driverToolkitImage }}
              {{- range $arg := .Values.buildArgs }}
              - name: {{ $arg.name }}
                value: {{ $arg.value }}
              {{- end }}
              - name: KVER
                value: {{ .Values.kernelFullVersion }}
        output:
          to:
            kind: ImageStreamTag
            name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}} (1)
      (1) The templates such as {{.Values.specialresource.metadata.name}} are filled in by the SRO, based on fields in the SpecialResource CR and variables known to the Operator such as {{.Values.KernelFullVersion}}.
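      For illustration only: assuming the SpecialResource is named simple-kmod and {{.Values.groupName.driverContainer}} resolves to driver-container, which is consistent with the pod names shown in the verification output later in this section, the output stanza above would render to something like the following, where <kernel_full_version> stands in for the node kernel version:

      output:
        to:
          kind: ImageStreamTag
          name: simple-kmod-driver-container:v<kernel_full_version>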
    3. Save the following YAML template for the RBAC resources and daemon set in the templates directory as 1000-driver-container.yaml:

      apiVersion: v1
      kind: ServiceAccount
      metadata:
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: Role
      metadata:
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      rules:
      - apiGroups:
        - security.openshift.io
        resources:
        - securitycontextconstraints
        verbs:
        - use
        resourceNames:
        - privileged
      ---
      apiVersion: rbac.authorization.k8s.io/v1
      kind: RoleBinding
      metadata:
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      roleRef:
        apiGroup: rbac.authorization.k8s.io
        kind: Role
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
      subjects:
      - kind: ServiceAccount
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        namespace: {{.Values.specialresource.spec.namespace}}
      ---
      apiVersion: apps/v1
      kind: DaemonSet
      metadata:
        labels:
          app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        annotations:
          specialresource.openshift.io/wait: "true"
          specialresource.openshift.io/state: "driver-container"
          specialresource.openshift.io/driver-container-vendor: simple-kmod
          specialresource.openshift.io/kernel-affine: "true"
          specialresource.openshift.io/from-configmap: "true"
      spec:
        updateStrategy:
          type: OnDelete
        selector:
          matchLabels:
            app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
        template:
          metadata:
            # Mark this pod as a critical add-on; when enabled, the critical add-on scheduler
            # reserves resources for critical add-on pods so that they can be rescheduled after
            # a failure. This annotation works in tandem with the toleration below.
            annotations:
              scheduler.alpha.kubernetes.io/critical-pod: ""
            labels:
              app: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
          spec:
            serviceAccount: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
            serviceAccountName: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
            containers:
            - image: image-registry.openshift-image-registry.svc:5000/{{.Values.specialresource.spec.namespace}}/{{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}:v{{.Values.kernelFullVersion}}
              name: {{.Values.specialresource.metadata.name}}-{{.Values.groupName.driverContainer}}
              imagePullPolicy: Always
              command: ["/sbin/init"]
              lifecycle:
                preStop:
                  exec:
                    command: ["/bin/sh", "-c", "systemctl stop kmods-via-containers@{{.Values.specialresource.metadata.name}}"]
              securityContext:
                privileged: true
            nodeSelector:
              node-role.kubernetes.io/worker: ""
              feature.node.kubernetes.io/kernel-version.full: "{{.Values.KernelFullVersion}}"
    4. Change into the chart/simple-kmod-0.0.1 directory:

      $ cd ..
    5. Save the following YAML for the chart as Chart.yaml in the chart/simple-kmod-0.0.1 directory:

      apiVersion: v2
      name: simple-kmod
      description: Simple kmod will deploy a simple kmod driver-container
      icon: https://avatars.githubusercontent.com/u/55542927
      type: application
      version: 0.0.1
      appVersion: 1.0.0
  2. From the chart directory, create the chart using the helm package command:

    $ helm package simple-kmod-0.0.1/

    Example output

    Successfully packaged chart and saved it to: /data/<username>/git/<github_username>/special-resource-operator/yaml-for-docs/chart/simple-kmod-0.0.1/simple-kmod-0.0.1.tgz
  3. Create a config map to store the chart files:

    1. Create a directory for the config map files:

      $ mkdir cm
    2. Copy the Helm chart into the cm directory:

      $ cp simple-kmod-0.0.1.tgz cm/simple-kmod-0.0.1.tgz
    3. Create an index file specifying the Helm repo that contains the Helm chart:

      $ helm repo index cm --url=cm://simple-kmod/simple-kmod-chart
    4. Create a namespace for the objects defined in the Helm chart:

      $ oc create namespace simple-kmod
    5. Create the config map object:

      $ oc create cm simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/simple-kmod-0.0.1.tgz -n simple-kmod
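      Optionally, you can confirm that the chart archive and index file are stored in the config map. This check is not part of the original procedure:

      $ oc get cm simple-kmod-chart -n simple-kmod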
  4. Use the following SpecialResource manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. Save this YAML as simple-kmod-configmap.yaml:

    apiVersion: sro.openshift.io/v1beta1
    kind: SpecialResource
    metadata:
      name: simple-kmod
    spec:
      #debug: true (1)
      namespace: simple-kmod
      chart:
        name: simple-kmod
        version: 0.0.1
        repository:
          name: example
          url: cm://simple-kmod/simple-kmod-chart (2)
      set:
        kind: Values
        apiVersion: sro.openshift.io/v1beta1
        kmodNames: ["simple-kmod", "simple-procfs-kmod"]
        buildArgs:
        - name: "KMODVER"
          value: "SRO"
        driverContainer:
          source:
            git:
              ref: "master"
              uri: "https://github.com/openshift-psap/kvc-simple-kmod.git"
    (1) Optional: Uncomment the #debug: true line to have the YAML files in the chart printed in full in the Operator logs and to verify that the logs are created and templated properly.
    (2) The spec.chart.repository.url field tells the SRO to look for the chart in a config map.
  5. From a command line, create the SpecialResource file:

    $ oc create -f simple-kmod-configmap.yaml

To remove the simple-kmod kernel module from the node, delete the simple-kmod SpecialResource API object using the oc delete command. The kernel module is unloaded when the driver container pod is deleted.
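For example, the following command removes the object created in this procedure; the resource kind and name are the ones used above:

  $ oc delete specialresource simple-kmod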

Verification

The simple-kmod resources are deployed in the simple-kmod namespace as specified in the object manifest. After a short time, the build pod for the simple-kmod driver container starts running. The build completes after a few minutes, and then the driver container pods start running.

  1. Use the oc get pods command to display the status of the build pods:

    $ oc get pods -n simple-kmod

    Example output

    NAME                                                  READY   STATUS      RESTARTS   AGE
    simple-kmod-driver-build-12813789169ac0ee-1-build    0/1     Completed   0          7m12s
    simple-kmod-driver-container-12813789169ac0ee-mjsnh  1/1     Running     0          8m2s
    simple-kmod-driver-container-12813789169ac0ee-qtkff  1/1     Running     0          8m2s
  2. Use the oc logs command, along with the build pod name obtained from the oc get pods command above, to display the logs of the simple-kmod driver container image build:

    $ oc logs pod/simple-kmod-driver-build-12813789169ac0ee-1-build -n simple-kmod
  3. To verify that the simple-kmod kernel modules are loaded, execute the lsmod command in one of the driver container pods that was returned from the oc get pods command above:

    $ oc exec -n simple-kmod -it pod/simple-kmod-driver-container-12813789169ac0ee-mjsnh -- lsmod | grep simple

    Example output

    simple_procfs_kmod    16384  0
    simple_kmod           16384  0

The sro_kind_completed_info SRO Prometheus metric provides information about the status of the different objects being deployed, which can be useful to troubleshoot SRO CR installations. The SRO also provides other types of metrics that you can use to watch the health of your environment.
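As a minimal example, and assuming that the SRO metrics are scraped by the cluster monitoring stack, a Prometheus query such as the following lists objects that have not yet been created successfully:

  sro_kind_completed_info == 0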

Building and running the simple-kmod SpecialResource for a hub-and-spoke topology

You can use the Special Resource Operator (SRO) on a hub-and-spoke deployment in which Red Hat Advanced Cluster Management (RHACM) connects a hub cluster to one or more managed clusters.

This example procedure shows how the SRO builds driver containers on the hub cluster. The SRO watches hub cluster resources to identify the OKD versions for the Helm charts that it uses to create the resources that it delivers to the spoke clusters.

Prerequisites

  • You have a running OKD cluster.

  • You installed the OpenShift CLI (oc).

  • You are logged into the OpenShift CLI as a user with cluster-admin privileges.

  • You installed the SRO.

  • You installed the Helm CLI (helm).

  • You installed Red Hat Advanced Cluster Management (RHACM).

  • You configured a container registry.

Procedure

  1. Create a templates directory by running the following command:

    $ mkdir -p charts/acm-simple-kmod-0.0.1/templates
  2. Change to the templates directory by running the following command:

    $ cd charts/acm-simple-kmod-0.0.1/templates
  3. Create template files for the BuildConfig, Policy, and PlacementRule resources.

    1. Save this YAML template for the image stream and build config in the templates directory as 0001-buildconfig.yaml.

      apiVersion: build.openshift.io/v1
      kind: BuildConfig
      metadata:
        labels:
          app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
        name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
        annotations:
          specialresource.openshift.io/wait: "true"
      spec:
        nodeSelector:
          node-role.kubernetes.io/worker: ""
        runPolicy: "Serial"
        triggers:
          - type: "ConfigChange"
          - type: "ImageChange"
        source:
          dockerfile: |
            FROM {{ .Values.driverToolkitImage }} as builder
            WORKDIR /build/
            RUN git clone -b {{.Values.specialResourceModule.spec.set.git.ref}} {{.Values.specialResourceModule.spec.set.git.uri}}
            WORKDIR /build/simple-kmod
            RUN make all install KVER={{ .Values.kernelFullVersion }}
            FROM registry.redhat.io/ubi8/ubi-minimal
            RUN microdnf -y install kmod
            COPY --from=builder /etc/driver-toolkit-release.json /etc/
            COPY --from=builder /lib/modules/{{ .Values.kernelFullVersion }}/* /lib/modules/{{ .Values.kernelFullVersion }}/
        strategy:
          dockerStrategy:
            dockerfilePath: Dockerfile.SRO
            buildArgs:
              - name: "IMAGE"
                value: {{ .Values.driverToolkitImage }}
              {{- range $arg := .Values.buildArgs }}
              - name: {{ $arg.name }}
                value: {{ $arg.value }}
              {{- end }}
              - name: KVER
                value: {{ .Values.kernelFullVersion }}
        output:
          to:
            kind: DockerImage
            name: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
    2. Save this YAML template for the ACM policy in the templates directory as 0002-policy.yaml.

      apiVersion: policy.open-cluster-management.io/v1
      kind: Policy
      metadata:
        name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
        annotations:
          policy.open-cluster-management.io/categories: CM Configuration Management
          policy.open-cluster-management.io/controls: CM-2 Baseline Configuration
          policy.open-cluster-management.io/standards: NIST-CSF
      spec:
        remediationAction: enforce
        disabled: false
        policy-templates:
        - objectDefinition:
            apiVersion: policy.open-cluster-management.io/v1
            kind: ConfigurationPolicy
            metadata:
              name: config-{{.Values.specialResourceModule.metadata.name}}-ds
            spec:
              remediationAction: enforce
              severity: low
              namespaceselector:
                exclude:
                - kube-*
                include:
                - '*'
              object-templates:
              - complianceType: musthave
                objectDefinition:
                  apiVersion: v1
                  kind: Namespace
                  metadata:
                    name: {{.Values.specialResourceModule.spec.namespace}}
              - complianceType: mustonlyhave
                objectDefinition:
                  apiVersion: v1
                  kind: ServiceAccount
                  metadata:
                    name: {{.Values.specialResourceModule.metadata.name}}
                    namespace: {{.Values.specialResourceModule.spec.namespace}}
              - complianceType: mustonlyhave
                objectDefinition:
                  apiVersion: rbac.authorization.k8s.io/v1
                  kind: Role
                  metadata:
                    name: {{.Values.specialResourceModule.metadata.name}}
                    namespace: {{.Values.specialResourceModule.spec.namespace}}
                  rules:
                  - apiGroups:
                    - security.openshift.io
                    resources:
                    - securitycontextconstraints
                    verbs:
                    - use
                    resourceNames:
                    - privileged
              - complianceType: mustonlyhave
                objectDefinition:
                  apiVersion: rbac.authorization.k8s.io/v1
                  kind: RoleBinding
                  metadata:
                    name: {{.Values.specialResourceModule.metadata.name}}
                    namespace: {{.Values.specialResourceModule.spec.namespace}}
                  roleRef:
                    apiGroup: rbac.authorization.k8s.io
                    kind: Role
                    name: {{.Values.specialResourceModule.metadata.name}}
                  subjects:
                  - kind: ServiceAccount
                    name: {{.Values.specialResourceModule.metadata.name}}
                    namespace: {{.Values.specialResourceModule.spec.namespace}}
              - complianceType: musthave
                objectDefinition:
                  apiVersion: apps/v1
                  kind: DaemonSet
                  metadata:
                    labels:
                      app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                    name: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                    namespace: {{.Values.specialResourceModule.spec.namespace}}
                  spec:
                    updateStrategy:
                      type: OnDelete
                    selector:
                      matchLabels:
                        app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                    template:
                      metadata:
                        annotations:
                          scheduler.alpha.kubernetes.io/critical-pod: ""
                        labels:
                          app: {{ printf "%s-%s" .Values.specialResourceModule.metadata.name .Values.kernelFullVersion | replace "." "-" | replace "_" "-" | trunc 63 }}
                      spec:
                        serviceAccount: {{.Values.specialResourceModule.metadata.name}}
                        serviceAccountName: {{.Values.specialResourceModule.metadata.name}}
                        containers:
                        - image: {{.Values.registry}}/{{.Values.specialResourceModule.metadata.name}}-{{.Values.groupName.driverContainer}}:{{.Values.kernelFullVersion}}
                          name: {{.Values.specialResourceModule.metadata.name}}
                          imagePullPolicy: Always
                          command: [sleep, infinity]
                          lifecycle:
                            preStop:
                              exec:
                                command: ["modprobe", "-r", "-a", "simple-kmod", "simple-procfs-kmod"]
                          securityContext:
                            privileged: true
    3. Save this YAML template for the placement of policies in the templates directory as 0003-policy.yaml.

      apiVersion: apps.open-cluster-management.io/v1
      kind: PlacementRule
      metadata:
        name: {{.Values.specialResourceModule.metadata.name}}-placement
      spec:
        clusterConditions:
        - status: "True"
          type: ManagedClusterConditionAvailable
        clusterSelector:
          matchExpressions:
          - key: name
            operator: NotIn
            values:
            - local-cluster
      ---
      apiVersion: policy.open-cluster-management.io/v1
      kind: PlacementBinding
      metadata:
        name: {{.Values.specialResourceModule.metadata.name}}-binding
      placementRef:
        apiGroup: apps.open-cluster-management.io
        kind: PlacementRule
        name: {{.Values.specialResourceModule.metadata.name}}-placement
      subjects:
      - apiGroup: policy.open-cluster-management.io
        kind: Policy
        name: policy-{{.Values.specialResourceModule.metadata.name}}-ds
    4. Change into the charts/acm-simple-kmod-0.0.1 directory by running the following command:

      $ cd ..
    5. Save the following YAML template for the chart as Chart.yaml in the charts/acm-simple-kmod-0.0.1 directory:

      apiVersion: v2
      name: acm-simple-kmod
      description: Build ACM enabled simple-kmod driver with SpecialResourceOperator
      icon: https://avatars.githubusercontent.com/u/55542927
      type: application
      version: 0.0.1
      appVersion: 1.6.4
  4. From the charts directory, create the chart by running the following command:

    $ helm package acm-simple-kmod-0.0.1/

    Example output

    Successfully packaged chart and saved it to: <directory>/charts/acm-simple-kmod-0.0.1.tgz
  5. Create a config map to store the chart files.

    1. Create a directory for the config map files by running the following command:

      $ mkdir cm
    2. Copy the Helm chart into the cm directory by running the following command:

      $ cp acm-simple-kmod-0.0.1.tgz cm/acm-simple-kmod-0.0.1.tgz
    3. Create an index file specifying the Helm repository that contains the Helm chart by running the following command:

      $ helm repo index cm --url=cm://acm-simple-kmod/acm-simple-kmod-chart
    4. Create a namespace for the objects defined in the Helm chart by running the following command:

      $ oc create namespace acm-simple-kmod
    5. Create the config map object by running the following command:

      $ oc create cm acm-simple-kmod-chart --from-file=cm/index.yaml --from-file=cm/acm-simple-kmod-0.0.1.tgz -n acm-simple-kmod
  6. Use the following SpecialResourceModule manifest to deploy the simple-kmod object using the Helm chart that you created in the config map. Save this YAML file as acm-simple-kmod.yaml:

    apiVersion: sro.openshift.io/v1beta1
    kind: SpecialResourceModule
    metadata:
      name: acm-simple-kmod
    spec:
      namespace: acm-simple-kmod
      chart:
        name: acm-simple-kmod
        version: 0.0.1
        repository:
          name: acm-simple-kmod
          url: cm://acm-simple-kmod/acm-simple-kmod-chart
      set:
        kind: Values
        apiVersion: sro.openshift.io/v1beta1
        buildArgs:
        - name: "KMODVER"
          value: "SRO"
        registry: <your_registry> (1)
        git:
          ref: master
          uri: https://github.com/openshift-psap/kvc-simple-kmod.git
      watch:
      - path: "$.metadata.labels.openshiftVersion"
        apiVersion: cluster.open-cluster-management.io/v1
        kind: ManagedCluster
        name: spoke1
    (1) Specify the URL for a registry that you have configured.
  7. Create the special resource module by running the following command:

    $ oc apply -f acm-simple-kmod.yaml

Verification

  1. Check the status of the build pods by running the following command:

    $ KUBECONFIG=~/hub/auth/kubeconfig oc get pod -n acm-simple-kmod

    Example output

    NAME                                                    READY   STATUS      RESTARTS   AGE
    acm-simple-kmod-4-18-0-305-34-2-el8-4-x86-64-1-build    0/1     Completed   0          42m
  2. Check that the policies have been created by running the following command:

    $ KUBECONFIG=~/hub/auth/kubeconfig oc get placementrules,placementbindings,policies -n acm-simple-kmod

    Example output

    NAME                                                                       AGE   REPLICAS
    placementrule.apps.open-cluster-management.io/acm-simple-kmod-placement   40m

    NAME                                                                         AGE
    placementbinding.policy.open-cluster-management.io/acm-simple-kmod-binding   40m

    NAME                                                                  REMEDIATION ACTION   COMPLIANCE STATE   AGE
    policy.policy.open-cluster-management.io/policy-acm-simple-kmod-ds   enforce              Compliant          40m
  3. Check that the resources have been reconciled by running the following command:

    $ KUBECONFIG=~/hub/auth/kubeconfig oc get specialresourcemodule acm-simple-kmod -o json | jq -r '.status'

    Example output

    {
      "versions": {
        "quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6a3330ef5a178435721ff4efdde762261a9c55212e9b4534385e04037693fbe4": {
          "complete": true
        }
      }
    }
  4. Check that the resources are running in the spoke by running the following command:

    $ KUBECONFIG=~/spoke1/kubeconfig oc get ds,pod -n acm-simple-kmod

    Example output

    NAME                                                          DESIRED   CURRENT   READY   UP-TO-DATE   AVAILABLE   NODE SELECTOR   AGE
    daemonset.apps/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64   3         3         3       3            3           <none>          26m

    NAME                                                     READY   STATUS    RESTARTS   AGE
    pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-brw78   1/1     Running   0          26m
    pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-fqh5h   1/1     Running   0          26m
    pod/acm-simple-kmod-4-18-0-305-45-1-el8-4-x86-64-m9sfd   1/1     Running   0          26m

Prometheus Special Resource Operator metrics

The Special Resource Operator (SRO) exposes the following Prometheus metrics through the metrics service:

sro_used_nodes

Returns the nodes that are running pods created by an SRO custom resource (CR). This metric is available for DaemonSet and Deployment objects only.

sro_kind_completed_info

Represents whether a kind of object defined by the Helm charts in an SRO CR has been uploaded to the cluster successfully (value 1) or not (value 0). Examples of objects are DaemonSet, Deployment, or BuildConfig.

sro_states_completed_info

Represents whether the SRO has finished processing a CR successfully (value 1) or has not yet processed the CR (value 0).

sro_managed_resources_total

Returns the number of SRO CRs in the cluster, regardless of their state.
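For example, assuming the metrics are scraped by cluster monitoring, a Prometheus query such as the following flags any SRO CRs that the Operator has not finished processing:

  sro_states_completed_info == 0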

Additional resources