Troubleshooting Driver Plug-ins

Overview

This section contains solutions to common problems that you might encounter while configuring the driver plug-ins for Minishift.

KVM/libvirt

Undefining virsh snapshots fails

If you use virsh on KVM/libvirt to create snapshots in your development workflow and then use minishift delete to delete the snapshots along with the VM, you might encounter the following error:

  $ minishift delete
  Deleting the Minishift VM...
  Error deleting the VM: [Code-55] [Domain-10] Requested operation is not valid: cannot delete inactive domain with 4 snapshots

Cause: The snapshots are stored in ~/.minishift/machines, but the definitions are stored in /var/lib/libvirt/qemu/snapshot/minishift.
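
If you are not sure which snapshots still have definitions registered for the minishift domain, you can list them with virsh before deleting them (an optional check; the snapshot names depend on your workflow):

  $ sudo virsh snapshot-list minishift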

Workaround: To delete the snapshots, perform the following steps. A combined sketch of these commands is shown after the procedure.

  1. Delete the definitions:

     $ sudo virsh snapshot-delete --metadata minishift <snapshot-name>
  2. Undefine the Minishift domain:

     $ sudo virsh undefine minishift

     You can now run minishift delete to delete the VM and restart Minishift.
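
If the domain has several snapshots, the following sketch combines the two steps. It assumes the snapshot metadata is registered under the minishift domain and that the snapshot names contain no spaces:

  # Delete the metadata for every snapshot of the minishift domain,
  # then undefine the domain itself.
  for snap in $(sudo virsh snapshot-list minishift --name); do
      sudo virsh snapshot-delete --metadata minishift "$snap"
  done
  sudo virsh undefine minishift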

If these steps do not resolve the issue, you can also use the following command to delete the snapshots:

  $ rm -rf ~/.minishift/machines

To avoid this issue in the future, it is recommended to create snapshots without metadata by specifying the --no-metadata flag. For example:

  $ sudo virsh snapshot-create-as --domain vm1 overlay1 --diskspec vda,file=/export/overlay1.qcow2 --disk-only --atomic --no-metadata

Error creating new host: dial tcp: missing address

The problem is likely that the libvirtd service is not running. You can check this with the following command:

  $ systemctl status libvirtd

If libvirtd is not running, start it and enable it to start on boot:

  $ sudo systemctl start libvirtd
  $ sudo systemctl enable libvirtd
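
If your distribution ships a systemd version that supports the --now option, the two commands can be combined into one:

  $ sudo systemctl enable --now libvirtd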

Failed to connect socket to '/var/run/libvirt/virtlogd-sock'

The problem is likely that the virtlogd service is not running. You can check this with the following command:

  $ systemctl status virtlogd

If virtlogd is not running, start it and enable it to start on boot:

  $ sudo systemctl start virtlogd
  $ sudo systemctl enable virtlogd

Domain 'minishift' already exists...

If this error appears when you run minishift start, ensure that you used minishift delete to delete the VMs that you created earlier. If this fails and you want to completely clean up Minishift and start fresh, do the following (a combined sketch of these commands follows the procedure):

  1. Check if any existing Minishift VMs are running:

     $ sudo virsh list --all
  2. If any Minishift VM is running, stop it:

     $ sudo virsh destroy minishift
  3. Delete the VM:

     $ sudo virsh undefine minishift
  4. Delete the .minishift/machines directory:

     $ rm -rf ~/.minishift/machines
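
As a convenience, the same cleanup can be run as a short shell sketch. It assumes the domain is named minishift; virsh destroy simply reports an error if the VM is already stopped:

  # Stop and remove the minishift libvirt domain and the local machine state.
  sudo virsh destroy minishift || true
  sudo virsh undefine minishift
  rm -rf ~/.minishift/machines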

If all of these steps fail, you might want to uninstall Minishift and do a fresh install.

xhyve

Could not create vmnet interface

The problem is likely that the xhyve driver is not able to clean up vmnet when a VM is removed. vmnet.framework determines the IP address based on the following files:

  • /var/db/dhcpd_leases

  • /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist

To reset the Minishift-specific IP database, remove the minishift entry section from the dhcpd_leases file and reboot your system. The entry looks similar to the following:

  {
      ip_address=192.168.64.2
      hw_address=1,2:51:8:22:87:a6
      identifier=1,2:51:8:22:87:a6
      lease=0x585e6e70
      name=minishift
  }

You can completely reset the IP database by removing the files manually, but this is very risky.
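
If you accept that risk, a full reset amounts to deleting both files and rebooting. Note that this discards the lease and network settings of every vmnet-based VM on the system, not only Minishift:

  $ sudo rm /var/db/dhcpd_leases
  $ sudo rm /Library/Preferences/SystemConfiguration/com.apple.vmnet.plist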

Error detecting VBox version: exit status 126

This error is caused by system files left by an improper uninstallation of a previously-installed VirtualBox application.

Cause: Uninstalling VirtualBox by removing it from Applications only removes the .app bundle, not the other system-wide files installed by the VirtualBox installation package. The libmachine library detects the presence of these system-wide files and forces the use of VirtualBox, which is no longer present.

Workaround: Uninstall VirtualBox via the VirtualBox installer package (.pkg) file contained in the VirtualBox disk image (.dmg).
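
As an optional check, you can verify whether a leftover VBoxManage binary is still present on your system, which is typically what libmachine detects (exit status 126 usually indicates a file that exists but cannot be executed):

  $ which VBoxManage
  $ ls -l "$(which VBoxManage)"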

VirtualBox

Error machine does not exist

If you use Windows, ensure that you set the --vm-driver virtualbox flag in the minishift start command. Alternatively, the problem might be an outdated version of VirtualBox.

To avoid this issue, it is recommended to use VirtualBox 5.1.12 or later.
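
For example, the start command and an optional version check (assuming VBoxManage is on your PATH) look like this:

  C:\> minishift start --vm-driver virtualbox
  C:\> VBoxManage --version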

Hyper-V

Hyper-V commands must be run as an Administrator

If you run Minishift with Hyper-V on Windows as a normal user or as a user with Administrator privileges, you might encounter the following error:

  Error starting the VM: Error creating the VM. Error with pre-create check: "Hyper-V commands must be run as an Administrator".

Workaround: You can either add yourself to the Hyper-V Administrators group, which is recommended, or run the shell in an elevated mode.

If you are using PowerShell, you can add yourself to the Hyper-V Administrators group as follows:

  1. As an administrator, run the following command:

     PS> ([adsi]"WinNT://./Hyper-V Administrators,group").Add("WinNT://$env:UserDomain/$env:Username,user")
  2. Log out and log back in for the change to take effect.
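
To verify that your account was added, you can list the members of the group with the built-in net command:

  PS> net localgroup "Hyper-V Administrators"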

You can also use the GUI to add yourself to the Hyper-V Administrators group as follows:

  1. Click the Start button and choose Computer Management.

  2. In the Computer Management window, select Local Users and Groups and then double-click Groups.

  3. Double-click the Hyper-V Administrators group. The Hyper-V Administrators Properties dialog box is displayed.

  4. Add your account to the Hyper-V Administrators group, then log off and log back in for the change to take effect.

Now you can run the Hyper-V commands as a normal user.
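
As an optional check, you can confirm that Hyper-V commands work in a non-elevated shell by listing the local VMs with the Hyper-V PowerShell module:

  PS> Get-VM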

Minishift running with Hyper-V fails when connected to OpenVPN

If you use Minishift with Hyper-V through an external virtual switch while you are connected to a VPN such as OpenVPN, Minishift might fail to provision the VM.

Cause: Hyper-V networking might not route the network traffic in both directions properly when connected to a VPN.

Workaround: Stop the VM from Hyper-V Manager, disconnect from the VPN, and then try again.
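
If you prefer PowerShell over the Hyper-V Manager GUI, you can stop the VM with the Hyper-V module instead (assuming the VM is named minishift):

  PS> Stop-VM -Name minishift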