Moving a local virtual machine disk to a different node

Virtual machines that use local volume storage can be moved so that they run on a specific node.

You might want to move the virtual machine to a specific node for the following reasons:

  • The current node has limitations in its local storage configuration.

  • The new node is better optimized for the workload of that virtual machine.

To move a virtual machine that uses local storage, you must clone the underlying volume by using a data volume. After the cloning operation is complete, you can edit the virtual machine configuration so that it uses the new data volume, or add the new data volume to another virtual machine.

When you enable preallocation globally, or for a single data volume, the Containerized Data Importer (CDI) preallocates disk space during cloning. Preallocation enhances write performance. For more information, see Using preallocation for data volumes.
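As a sketch, preallocation can be requested for a single data volume by setting the preallocation field in its spec (the names below are the placeholders used elsewhere in this procedure):

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: <clone-datavolume>
spec:
  preallocation: true  # CDI preallocates disk space during the cloning operation
  source:
    pvc:
      name: "<source-vm-disk>"
      namespace: "<source-namespace>"
```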

Users without the cluster-admin role require additional permissions to clone volumes across namespaces.
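One way to grant cross-namespace cloning, sketched here under the assumption that your cluster follows standard CDI RBAC conventions, is a cluster role that allows the create verb on the datavolumes/source subresource (the role name is a placeholder):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: datavolume-cloner  # placeholder name
rules:
- apiGroups: ["cdi.kubevirt.io"]
  resources: ["datavolumes/source"]
  verbs: ["create"]
```

Binding this role to a user in the source namespace allows that user's data volumes to clone PVCs from that namespace.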

Cloning a local volume to another node

You can move a virtual machine disk so that it runs on a specific node by cloning the underlying persistent volume claim (PVC).

To ensure the virtual machine disk is cloned to the correct node, you must either create a new persistent volume (PV) or identify one on the correct node. Apply a unique label to the PV so that it can be referenced by the data volume.

The destination PV must be the same size or larger than the source PVC. If the destination PV is smaller than the source PVC, the cloning operation fails.
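Kubernetes sizes such as 10Gi are quantities, so the comparison is numeric, not textual. A minimal shell sketch of the check, assuming only the Gi and Mi binary suffixes occur (to_bytes is a hypothetical helper, not an oc command):

```shell
# to_bytes: convert a Kubernetes quantity such as 10Gi or 500Mi to bytes.
# Sketch only: handles just the Gi and Mi binary suffixes.
to_bytes() {
  case "$1" in
    *Gi) echo $(( ${1%Gi} * 1024 * 1024 * 1024 )) ;;
    *Mi) echo $(( ${1%Mi} * 1024 * 1024 )) ;;
    *)   echo "$1" ;;
  esac
}

src=$(to_bytes 10Gi)   # size requested by the source PVC
dst=$(to_bytes 20Gi)   # capacity of the destination PV
if [ "$dst" -ge "$src" ]; then
  echo "destination PV is large enough"
fi
```

The two sizes themselves can be read from the cluster, for example with oc get pvc -o jsonpath='{.spec.resources.requests.storage}' for the source PVC and oc get pv -o jsonpath='{.spec.capacity.storage}' for the destination PV.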

Prerequisites

  • The virtual machine must not be running. Power down the virtual machine before cloning the virtual machine disk.

Procedure

  1. Either create a new local PV on the node, or identify a local PV already on the node:

    • Create a local PV that includes the nodeAffinity.nodeSelectorTerms parameters. The following manifest creates a 10Gi local PV on node01.

      kind: PersistentVolume
      apiVersion: v1
      metadata:
        name: <destination-pv> (1)
        annotations:
      spec:
        accessModes:
        - ReadWriteOnce
        capacity:
          storage: 10Gi (2)
        local:
          path: /mnt/local-storage/local/disk1 (3)
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname
                operator: In
                values:
                - node01 (4)
        persistentVolumeReclaimPolicy: Delete
        storageClassName: local
        volumeMode: Filesystem

      (1) The name of the PV.
      (2) The size of the PV. You must allocate enough space, or the cloning operation fails. The size must be the same as or larger than the source PVC.
      (3) The mount path on the node.
      (4) The name of the node where you want to create the PV.
    • Identify a PV that already exists on the target node. You can identify the node where a PV is provisioned by viewing the nodeAffinity field in its configuration:

      $ oc get pv <destination-pv> -o yaml

      The following snippet shows that the PV is on node01:

      Example output

      ...
      spec:
        nodeAffinity:
          required:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/hostname (1)
                operator: In
                values:
                - node01 (2)
      ...

      (1) The kubernetes.io/hostname key uses the node hostname to select a node.
      (2) The hostname of the node.
  2. Add a unique label to the PV:

    $ oc label pv <destination-pv> node=node01
  3. Create a data volume manifest that references the following:

    • The PVC name and namespace of the virtual machine.

    • The label you applied to the PV in the previous step.

    • The size of the destination PV.

      apiVersion: cdi.kubevirt.io/v1beta1
      kind: DataVolume
      metadata:
        name: <clone-datavolume> (1)
      spec:
        source:
          pvc:
            name: "<source-vm-disk>" (2)
            namespace: "<source-namespace>" (3)
        pvc:
          accessModes:
          - ReadWriteOnce
          selector:
            matchLabels:
              node: node01 (4)
          resources:
            requests:
              storage: <10Gi> (5)

      (1) The name of the new data volume.
      (2) The name of the source PVC. If you do not know the PVC name, you can find it in the virtual machine configuration: spec.volumes.persistentVolumeClaim.claimName.
      (3) The namespace where the source PVC exists.
      (4) The label that you applied to the PV in the previous step.
      (5) The size of the destination PV.
  4. Start the cloning operation by applying the data volume manifest to your cluster:

    $ oc apply -f <clone-datavolume.yaml>

The data volume clones the PVC of the virtual machine to the PV on the specified node.
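Once the clone completes (the data volume reports a Succeeded phase, which you can watch with oc get dv <clone-datavolume>), the virtual machine can be edited to use the new data volume. A minimal sketch of the relevant volumes entry in the VirtualMachine manifest, where the volume name rootdisk is a placeholder that must match the disk name in the VM's domain devices:

```yaml
spec:
  template:
    spec:
      volumes:
      - name: rootdisk              # placeholder volume name
        dataVolume:
          name: <clone-datavolume>  # the cloned data volume created above
```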