Importing virtual machine images with data volumes

Use the Containerized Data Importer (CDI) to import a virtual machine image into a persistent volume claim (PVC) by using a data volume. You can attach a data volume to a virtual machine for persistent storage.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or built into a container disk and stored in a container registry.

When you import a disk image into a PVC, the disk image is expanded to use the full storage capacity that is requested in the PVC. To use this space, the disk partitions and file systems in the virtual machine might need to be expanded.

The resizing procedure varies based on the operating system installed on the virtual machine. See the operating system documentation for details.
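
For example, on a Linux guest whose root file system is ext4 on the first partition of /dev/vda, the expansion might look like the following sketch. The device names and the tools used (growpart from cloud-utils, resize2fs) are assumptions and differ by distribution, disk layout, and file system type.

    $ sudo growpart /dev/vda 1     # grow partition 1 to fill the enlarged disk
    $ sudo resize2fs /dev/vda1     # grow the ext4 file system to fill the partition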

Prerequisites

CDI supported operations matrix

This matrix shows the supported CDI operations for content types against endpoints, and which of these operations requires scratch space.

| Content types | HTTP | HTTPS | HTTP basic auth | Registry | Upload |
| --- | --- | --- | --- | --- | --- |
| KubeVirt (QCOW2) | ✓ QCOW2, ✓ GZ, ✓ XZ | ✓ QCOW2*, ✓ GZ, ✓ XZ | ✓ QCOW2, ✓ GZ, ✓ XZ | ✓ QCOW2, □ GZ, □ XZ | ✓ QCOW2, ✓ GZ, ✓ XZ |
| KubeVirt (RAW) | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, ✓ GZ, ✓ XZ | ✓ RAW, □ GZ, □ XZ | ✓ RAW, ✓ GZ, ✓ XZ* |

✓ Supported operation

□ Unsupported operation

* Requires scratch space

** Requires scratch space if a custom certificate authority is required
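
For operations that require scratch space, CDI requests a temporary scratch PVC while the operation runs. If the default storage class is not suitable for scratch space, you can point CDI at a different one through the CDI custom resource. The following is a minimal sketch, assuming a storage class named local exists in the cluster.

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: CDI
    metadata:
      name: cdi
    spec:
      config:
        scratchSpaceStorageClass: local   # assumed storage class name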

CDI now uses the OKD cluster-wide proxy configuration.
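
The cluster-wide proxy is defined by the Proxy object named cluster, and CDI importer pods pick up its settings automatically. The following trimmed sketch uses placeholder values only.

    apiVersion: config.openshift.io/v1
    kind: Proxy
    metadata:
      name: cluster
    spec:
      httpProxy: http://proxy.example.com:3128    # placeholder proxy URL
      httpsProxy: http://proxy.example.com:3128   # placeholder proxy URL
      noProxy: .cluster.local,.svc                # hosts that bypass the proxy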

About data volumes

DataVolume objects are custom resources that are provided by the Containerized Data Importer (CDI) project. Data volumes orchestrate import, clone, and upload operations that are associated with an underlying persistent volume claim (PVC). Data volumes are integrated with OKD Virtualization, and they prevent a virtual machine from being started before the PVC has been prepared.
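
For orientation, the following is a minimal sketch of a standalone DataVolume manifest that imports an image from an HTTP endpoint into a 10Gi PVC. The name and URL are placeholders; the same source and pvc fields appear under dataVolumeTemplates in the virtual machine example later in this section.

    apiVersion: cdi.kubevirt.io/v1beta1
    kind: DataVolume
    metadata:
      name: example-dv                               # placeholder name
    spec:
      source:
        http:
          url: "http://www.example.com/path/to/data" # placeholder endpoint
      pvc:
        accessModes:
        - ReadWriteOnce
        resources:
          requests:
            storage: 10Gi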

Importing a virtual machine image into a persistent volume claim by using a data volume

You can import a virtual machine image into a persistent volume claim (PVC) by using a data volume.

The virtual machine image can be hosted at an HTTP or HTTPS endpoint, or the image can be built into a container disk and stored in a container registry.

To create a virtual machine from an imported virtual machine image, specify the image or container disk endpoint in the VirtualMachine configuration file before you create the virtual machine.

Prerequisites

  • You have installed the OpenShift CLI (oc).

  • Your cluster has at least one available persistent volume.

  • To import a virtual machine image you must have the following:

    • A virtual machine disk image in RAW, ISO, or QCOW2 format, optionally compressed by using xz or gz.

    • An HTTP endpoint where the image is hosted, along with any authentication credentials needed to access the data source. For example: http://www.example.com/path/to/data

  • To import a container disk you must have the following:

    • A container disk built from a virtual machine image stored in your container image registry, along with any authentication credentials needed to access the data source. For example: docker://registry.example.com/container-image (see the sketch after this list).
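
If you still need to build such a container disk, the following is a minimal sketch using podman. The local image name fedora-cloud.qcow2 and the registry path are placeholders; the disk image is added under /disk/ with UID and GID 107, the ownership that KubeVirt container disks expect.

    $ cat > Dockerfile <<'EOF'
    FROM scratch
    ADD --chown=107:107 fedora-cloud.qcow2 /disk/
    EOF
    $ podman build -t registry.example.com/container-image:latest .
    $ podman push registry.example.com/container-image:latest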

Procedure

  1. Optional: If your data source requires authentication credentials, edit the endpoint-secret.yaml file, and apply the updated configuration to the cluster:

    apiVersion: v1
    kind: Secret
    metadata:
      name: <endpoint-secret>
      labels:
        app: containerized-data-importer
    type: Opaque
    data:
      accessKeyId: "" (1)
      secretKey: "" (2)

    1 Optional: your key or user name, base64 encoded
    2 Optional: your secret or password, base64 encoded

    $ oc apply -f endpoint-secret.yaml
  2. Edit the virtual machine configuration file, specifying the data source for the virtual machine image that you want to import. In this example, a Fedora image is imported from an HTTP source:

    apiVersion: kubevirt.io/v1
    kind: VirtualMachine
    metadata:
      creationTimestamp: null
      labels:
        kubevirt.io/vm: vm-fedora-datavolume
      name: vm-fedora-datavolume
    spec:
      dataVolumeTemplates:
      - metadata:
          creationTimestamp: null
          name: fedora-dv
        spec:
          pvc:
            accessModes:
            - ReadWriteOnce
            resources:
              requests:
                storage: 10Gi
            storageClassName: local
          source:
            http: (1)
              url: "https://download.fedoraproject.org/pub/fedora/linux/releases/33/Cloud/x86_64/images/Fedora-Cloud-Base-33-1.2.x86_64.qcow2" (2)
              secretRef: "" (3)
              certConfigMap: "" (4)
        status: {}
      running: true
      template:
        metadata:
          creationTimestamp: null
          labels:
            kubevirt.io/vm: vm-fedora-datavolume
        spec:
          domain:
            devices:
              disks:
              - disk:
                  bus: virtio
                name: datavolumedisk1
            machine:
              type: "" (5)
            resources:
              requests:
                memory: 1.5Gi
          terminationGracePeriodSeconds: 60
          volumes:
          - dataVolume:
              name: fedora-dv
            name: datavolumedisk1
    status: {}

    1 The source type to import the image from. This example uses an HTTP endpoint. To import a container disk from a registry, replace http with registry (see the sketch after this procedure).
    2 The source of the virtual machine image that you want to import. This example references a virtual machine image at an HTTP endpoint. An example of a container registry endpoint is url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest".
    3 The secretRef parameter is optional.
    4 The certConfigMap parameter is required for communicating with servers that use self-signed certificates or certificates not signed by the system CA bundle. The referenced config map must be in the same namespace as the data volume.
    5 Specify type: dataVolume or type: "". If you specify any other value for type, such as persistentVolumeClaim, a warning is displayed and the virtual machine does not start.
  3. Create the virtual machine:

    $ oc create -f vm-<name>-datavolume.yaml

    The oc create command creates the data volume and the virtual machine. The CDI controller creates an underlying PVC with the correct annotation, and the import process begins. When the import completes, the data volume status changes to Succeeded, and the virtual machine is allowed to start.

    Data volume provisioning happens in the background, so there is no need to monitor it. You can start the virtual machine, but it will not run until the import is complete.
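
As noted in callout 1 above, importing a container disk instead of an HTTP-hosted image changes only the data volume source. The following sketch shows the replacement source block, reusing the demo container-disk URL from callout 2; secretRef and certConfigMap remain optional.

    source:
      registry:
        url: "docker://kubevirt/fedora-cloud-container-disk-demo:latest"
        secretRef: ""       # optional registry credentials secret
        certConfigMap: ""   # optional CA config map for the registry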

Verification

  1. The importer pod downloads the virtual machine image or container disk from the specified URL and stores it on the provisioned PV. View the status of the importer pod by running the following command:

    $ oc get pods
  2. Monitor the data volume status until it shows Succeeded by running the following command:

    $ oc describe dv <datavolume-name> (1)
    1 The name of the data volume as specified under dataVolumeTemplates.metadata.name in the virtual machine configuration file. In the example configuration above, this is fedora-dv.
  3. To verify that provisioning is complete and that the VMI has started, try accessing its serial console by running the following command:

    $ virtctl console <vm-fedora-datavolume>

Additional resources