Managing automatic boot source updates

You can manage automatic updates for the following boot sources:

  • Red Hat boot sources

  • Custom boot sources

Boot sources can make virtual machine (VM) creation more accessible and efficient for users. If automatic boot source updates are enabled, the Containerized Data Importer (CDI) imports, polls, and updates the images so that they are ready to be cloned for new VMs. By default, CDI automatically updates Red Hat boot sources.

Managing Red Hat boot source updates

You can opt out of automatic updates for all system-defined boot sources by disabling the enableCommonBootImageImport feature gate. If you disable this feature gate, all DataImportCron objects are deleted. This does not remove previously imported boot source objects that store operating system images, though administrators can delete them manually.

When the enableCommonBootImageImport feature gate is disabled, DataSource objects are reset so that they no longer point to the original boot source. An administrator can manually provide a boot source by creating a new persistent volume claim (PVC) or volume snapshot for the DataSource object, then populating it with an operating system image.
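
For example, an administrator who has already created and populated a PVC with an operating system image can point a DataSource object at that PVC with a manifest similar to the following sketch. The names and namespace are placeholders, not values that OKD Virtualization creates for you:

  apiVersion: cdi.kubevirt.io/v1beta1
  kind: DataSource
  metadata:
    name: <boot_source_name>
    namespace: <golden_images_namespace>
  spec:
    source:
      pvc:
        name: <populated_pvc_name>
        namespace: <golden_images_namespace>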

Managing automatic updates for all system-defined boot sources

Disabling automatic boot source imports and updates can lower resource usage. In disconnected environments, disabling automatic boot source updates prevents CDIDataImportCronOutdated alerts from filling up logs.

To disable automatic updates for all system-defined boot sources, turn off the enableCommonBootImageImport feature gate by setting the value to false. Setting this value to true re-enables the feature gate and turns automatic updates back on.

Custom boot sources are not affected by this setting.

Procedure

  • Toggle the feature gate for automatic boot source updates by editing the HyperConverged custom resource (CR).

    • To disable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to false. For example:

      $ oc patch hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged --type json \
        -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": false}]'
    • To re-enable automatic boot source updates, set the spec.featureGates.enableCommonBootImageImport field in the HyperConverged CR to true. For example:

      $ oc patch hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged --type json \
        -p '[{"op": "replace", "path": "/spec/featureGates/enableCommonBootImageImport", "value": true}]'

Managing custom boot source updates

Custom boot sources that are not provided by OKD Virtualization are not controlled by the feature gate. You must manage them individually by editing the HyperConverged custom resource (CR).

You must configure a storage class. Otherwise, the cluster cannot receive automated updates for custom boot sources. See Defining a storage class for details.

Configuring a storage class for custom boot source updates

Specify a new default storage class in the HyperConverged custom resource (CR).

Boot sources are created from storage using the default storage class. If your cluster does not have a default storage class, you must define one before configuring automatic updates for custom boot sources.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
  2. Define a new storage class by entering a value in the storageClassName field:

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      dataImportCronTemplates:
      - metadata:
          name: rhel8-image-cron
        spec:
          template:
            spec:
              storageClassName: <new_storage_class> (1)
    # ...
    (1) Define the storage class.
  3. Remove the storageclass.kubernetes.io/is-default-class annotation from the current default storage class.

    1. Retrieve the name of the current default storage class by running the following command:

      $ oc get storageclass

      Example output

      NAME                           PROVISIONER                        RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
      csi-manila-ceph                manila.csi.openstack.org           Delete          Immediate              false                  11d
      hostpath-csi-basic (default)   kubevirt.io.hostpath-provisioner   Delete          WaitForFirstConsumer   false                  11d (1)
      ...
      (1) In this example, the current default storage class is named hostpath-csi-basic.
    2. Remove the annotation from the current default storage class by running the following command:

      $ oc patch storageclass <current_default_storage_class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}' (1)
      (1) Replace <current_default_storage_class> with the storageClassName value of the default storage class.
  4. Set the new storage class as the default by running the following command:

    $ oc patch storageclass <new_storage_class> -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}' (1)
    (1) Replace <new_storage_class> with the storageClassName value that you added to the HyperConverged CR.
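
To confirm the change, you can list the storage classes again and check that the (default) marker now appears only next to the new storage class:

    $ oc get storageclass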

Enabling automatic updates for custom boot sources

OKD Virtualization automatically updates system-defined boot sources by default, but does not automatically update custom boot sources. You must manually enable automatic updates by editing the HyperConverged custom resource (CR).

Prerequisites

  • The cluster has a default storage class.

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
  2. Edit the HyperConverged CR, adding the appropriate template and boot source in the dataImportCronTemplates section. For example:

    Example custom resource

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      dataImportCronTemplates:
      - metadata:
          name: centos7-image-cron
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true" (1)
        spec:
          schedule: "0 */12 * * *" (2)
          template:
            spec:
              source:
                registry: (3)
                  url: docker://quay.io/containerdisks/centos:7-2009
              storage:
                resources:
                  requests:
                    storage: 10Gi
          managedDataSource: centos7 (4)
          retentionPolicy: "None" (5)
    (1) This annotation is required for storage classes with volumeBindingMode set to WaitForFirstConsumer.
    (2) Schedule for the job specified in cron format.
    (3) Use this field to create a data volume from a registry source. Use the default pod pullMethod and not node pullMethod, which is based on the node docker cache. The node docker cache is useful when a registry image is available via Container.Image, but the CDI importer is not authorized to access it.
    (4) For the custom image to be detected as an available boot source, the name of the image’s managedDataSource must match the name of the template’s DataSource, which is found under spec.dataVolumeTemplates.spec.sourceRef.name in the VM template YAML file.
    (5) Use All to retain data volumes and data sources when the cron job is deleted. Use None to delete data volumes and data sources when the cron job is deleted.
  3. Save the file.
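
After the change is reconciled, you can verify that a DataImportCron and its managed DataSource exist for the custom boot source. The target namespace depends on your cluster configuration, so the following commands are examples only:

    $ oc get dataimportcron -A
    $ oc get datasource -A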

Enabling volume snapshot boot sources

Enable volume snapshot boot sources by setting the dataImportCronSourceFormat parameter in the StorageProfile associated with the storage class that stores operating system base images. Although DataImportCron was originally designed to maintain only PVC sources, VolumeSnapshot sources scale better than PVC sources for certain storage types.

Use volume snapshots with storage that is proven to scale better when cloning from a single snapshot.

Prerequisites

  • You must have access to a volume snapshot with the operating system image.

  • The storage must support snapshotting.
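
As a quick way to confirm that the storage supports snapshotting, you can list the available VolumeSnapshotClass objects and check that one matches the CSI driver behind your storage class. This assumes that the volume snapshot CRDs and controller are installed on the cluster:

    $ oc get volumesnapshotclass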

Procedure

  1. Open the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:

    $ oc edit storageprofile <storage_class>
  2. Review the dataImportCronSourceFormat specification of the StorageProfile to confirm whether the boot source uses a PVC or a volume snapshot by default.

  3. Edit the storage profile, if needed, by updating the dataImportCronSourceFormat specification to snapshot.

    Example storage profile

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: StorageProfile
    metadata:
    # ...
    spec:
      dataImportCronSourceFormat: snapshot

Verification

  1. View the storage profile object that corresponds to the storage class used to provision boot sources by running the following command:

    $ oc get storageprofile <storage_class> -o yaml
  2. Confirm that the dataImportCronSourceFormat specification of the StorageProfile is set to snapshot, and that any DataSource objects that the DataImportCron points to now reference volume snapshots.
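
    For example, you can inspect the source of a DataSource object directly. A spec.source.snapshot entry indicates a volume snapshot source, while spec.source.pvc indicates a PVC source. The object name and namespace in the following command are placeholders:

      $ oc get datasource <data_source_name> -n <namespace> -o jsonpath='{.spec.source}'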

You can now use these boot sources to create virtual machines.

Disabling automatic updates for a single boot source

You can disable automatic updates for an individual boot source, whether it is custom or system-defined, by editing the HyperConverged custom resource (CR).

Procedure

  1. Open the HyperConverged CR in your default editor by running the following command:

    $ oc edit hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged
  2. Disable automatic updates for an individual boot source by editing the spec.dataImportCronTemplates field.

    Custom boot source

    • Remove the boot source from the spec.dataImportCronTemplates field. Automatic updates are disabled for custom boot sources by default.

    System-defined boot source

    1. Add the boot source to spec.dataImportCronTemplates.

      Automatic updates are enabled by default for system-defined boot sources, but these boot sources are not listed in the CR unless you add them.

    2. Set the value of the dataimportcrontemplate.kubevirt.io/enable annotation to false.

      For example:

      apiVersion: hco.kubevirt.io/v1beta1
      kind: HyperConverged
      metadata:
        name: kubevirt-hyperconverged
      spec:
        dataImportCronTemplates:
        - metadata:
            annotations:
              dataimportcrontemplate.kubevirt.io/enable: 'false'
            name: rhel8-image-cron
      # ...
  3. Save the file.
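
To confirm that automatic updates are disabled, you can list the DataImportCron objects across all namespaces; the entry for the disabled boot source should no longer be present:

    $ oc get dataimportcron -A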

Verifying the status of a boot source

You can determine if a boot source is system-defined or custom by viewing the HyperConverged custom resource (CR).

Procedure

  1. View the contents of the HyperConverged CR by running the following command:

    $ oc get hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged -o yaml

    Example output

    apiVersion: hco.kubevirt.io/v1beta1
    kind: HyperConverged
    metadata:
      name: kubevirt-hyperconverged
    spec:
      # ...
    status:
      # ...
      dataImportCronTemplates:
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: centos-7-image-cron
        spec:
          garbageCollect: Outdated
          managedDataSource: centos7
          schedule: 55 8/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  url: docker://quay.io/containerdisks/centos:7-2009
              storage:
                resources:
                  requests:
                    storage: 30Gi
            status: {}
        status:
          commonTemplate: true (1)
      # ...
      - metadata:
          annotations:
            cdi.kubevirt.io/storage.bind.immediate.requested: "true"
          name: user-defined-dic
        spec:
          garbageCollect: Outdated
          managedDataSource: user-defined-centos-stream8
          schedule: 55 8/12 * * *
          template:
            metadata: {}
            spec:
              source:
                registry:
                  pullMethod: node
                  url: docker://quay.io/containerdisks/centos-stream:8
              storage:
                resources:
                  requests:
                    storage: 30Gi
            status: {}
        status: {} (2)
      # ...
    (1) Indicates a system-defined boot source.
    (2) Indicates a custom boot source.
  2. Verify the status of the boot source by reviewing the status.dataImportCronTemplates.status field.

    • If the field contains commonTemplate: true, it is a system-defined boot source.

    • If the status.dataImportCronTemplates.status field has the value {}, it is a custom boot source.
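
If you prefer a compact view instead of reading the full YAML, a jsonpath query similar to the following prints each boot source name next to its status stanza. Entries that report commonTemplate:true are system-defined:

    $ oc get hyperconverged kubevirt-hyperconverged -n kubevirt-hyperconverged \
      -o jsonpath='{range .status.dataImportCronTemplates[*]}{.metadata.name}{"\t"}{.status}{"\n"}{end}'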