Libvirt
Installation
Debian
sudo apt-get install qemu-kvm libvirt-daemon-system libvirt-clients # "libvirt-bin" on older releases
sudo usermod -a -G kvm,libvirt alice
sudo deb-systemd-invoke start virtlockd.socket virtlockd.service
Fedora
sudo dnf install virt-manager qemu-kvm libvirt # libvirt seems to be optional
sudo usermod -a -G libvirt alice
Permissions
For already existing VMs or disk images, grant access to everyone in the libvirt group:
sudo find /opt/vm/ -exec setfacl -m u:libvirt-qemu:rwX -m g:libvirt:rwX '{}' +
Network
Networking should already be enabled these days:
$ sudo virsh net-edit default
<network>
  <name>default</name>
  <uuid>cc3cf216-c5b0-4e0f-bb5c-94420b5054f6</uuid>
  <forward mode='nat'>
    <nat ipv6='yes'/>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address='12:34:00:aa:bb:cc'/>
  <ip address='192.168.122.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.122.2' end='192.168.122.254'/>
      <host mac='08:00:27:01:02:03' name='foobar' ip='192.168.122.10'/>
    </dhcp>
  </ip>
  <ip family='ipv6' address='fd01:1:1:1::' prefix='64'/>
</network>
Note: the above should also provide IPv6 connectivity[3] via an ULA range.
This will edit /etc/libvirt/qemu/networks/default.xml, but the network needs to be reloaded for the changes to take effect:
sudo virsh net-destroy default && sudo virsh net-start default
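Instead of net-edit plus a destroy/start cycle, single DHCP host entries can also be added on the fly with virsh net-update. A minimal sketch, reusing the example MAC/name/IP from the XML above; with DRY_RUN=1 the helper only prints the command instead of running it:

```shell
# Sketch: pin a guest's IP in the default network's DHCP via net-update.
# The MAC/name/IP below are the example values from the network XML above;
# DRY_RUN=1 makes the helper echo the virsh command instead of executing it.
add_dhcp_host() {
  local mac=$1 name=$2 ip=$3
  ${DRY_RUN:+echo} virsh net-update default add ip-dhcp-host \
    "<host mac='${mac}' name='${name}' ip='${ip}'/>" --live --config
}

DRY_RUN=1 add_dhcp_host 08:00:27:01:02:03 foobar 192.168.122.10
```

With --live --config the entry takes effect immediately and survives a network restart, so no net-destroy/net-start cycle is needed.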
- VirtualNetworking
- KVM libvirt assign static guest IP addresses using DHCP on the virtual machine
- Libvirt: Bridged network
virt-manager
The libvirtd process can be made accessible to a GUI program named virt-manager:
$ grep ^[a-z] /etc/libvirt/libvirtd.conf
listen_tls = 0
listen_tcp = 1
listen_addr = "127.0.0.1"
But that's not necessary: we can use the qemu+ssh protocol to connect to libvirtd right away:
virt-manager -c 'qemu+ssh://root@host/system'
Usage
Creation
After KVM has been installed, let's create a Debian virtual machine:
virt-install --name debian --memory 2048 --vcpus 2 \
--os-variant debian11 \
--metadata description="Debian" \
--disk /dev/vg0/debian.disk0 \
--network bridge=virbr0,mac=08:00:27:e2:12:34 \
--graphics vnc,port=5901,listen=127.0.0.1 \
--console pty,target_type=serial \
--video virtio \
--cdrom ../debian.iso
- Use osinfo-query os to list all --os-variant options.
- Use --import without --cdrom to import (define) an already installed machine.
As Fedora comes with SELinux enabled, we may have to grant extra permissions[4][5] to the locations above. This also works if LVM is involved:[6]
sudo chcon -t svirt_image_t /dev/vg0/debian.disk0
sudo setfacl -m g:libvirt:rw /dev/vg0/debian.disk0
For access to NFS shares, we might need:
sudo setsebool -P virt_use_nfs 1
One can also use virt-builder to install various distributions:
virt-builder opensuse-tumbleweed -o /dev/test/opensuse.disk0 --arch x86_64 --format raw --root-password password:s3cr3t
Once created we can change some parameters with virt-customize:
virt-customize -a /dev/test/opensuse.disk0 --root-password password:s3cr3t --hostname foobar
When using virt-builder[7] we need to import the installation into Libvirt too:
virt-install --name opensuse --memory 2048 --disk path=/dev/test/opensuse.disk0,format=raw --os-variant opensusetumbleweed --import
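The build-then-import steps can be combined into one helper. A sketch; the distro name, disk path, domain name and --os-variant value are all placeholders, and the root password is hard-coded only for the example:

```shell
# Sketch: virt-builder followed by virt-install --import in one function.
# All arguments are placeholders; the password is for illustration only.
build_and_import() {
  local distro=$1 disk=$2 name=$3 variant=$4
  virt-builder "$distro" -o "$disk" --arch x86_64 --format raw \
    --root-password password:s3cr3t &&
  virt-install --name "$name" --memory 2048 \
    --disk path="$disk",format=raw --os-variant "$variant" --import
}

# Usage: build_and_import opensuse-tumbleweed /dev/test/opensuse.disk0 opensuse opensusetumbleweed
```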
General
The newly installed virtual machine can be controlled with virsh[8].
List virtual machines:
$ virsh list
 Id    Name                           State
----------------------------------------------------
 2     debian                         running
Connect the virtual machine's serial console:
$ virsh console debian # The guest needs to enable its Serial Console
Connected to domain debian
Escape character is ^]
Shutdown/reboot/start a virtual machine:
virsh shutdown debian
virsh reboot debian
virsh start debian
For the shutdown command to work, the system needs to support certain ACPI calls. Linux systems should run acpid. Windows systems should have the "Shutdown: Allow system to be shut down without having to log on" local security policy[9] enabled (via Computer Configuration => Windows Settings => Security Settings => Local Policies => Security Options). Also, the Power Button should be configured to react to shutdown signals, even for a virtual system with no real power button.[10]
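Since an ACPI shutdown may be ignored by a misconfigured guest, a "shutdown with fallback" helper can be useful. A sketch (the domain name and timeout are assumptions): ask for a shutdown, poll the domain state, and hard-stop the guest if it has not powered off within the timeout.

```shell
# Sketch: graceful shutdown with a destroy fallback. Polls "virsh domstate"
# once per second until the guest reports "shut off" or the timeout expires.
graceful_stop() {
  local vm=$1 timeout=${2:-60} i
  virsh shutdown "$vm" || return 1
  for ((i = 0; i < timeout; i++)); do
    [ "$(virsh domstate "$vm")" = "shut off" ] && return 0
    sleep 1
  done
  virsh destroy "$vm"   # hard power-off as a last resort
}

# Usage: graceful_stop debian 120
```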
Core dump a virtual machine:
virsh dump debian --file debian.core # May not work due to non-migratable devices
Send a sysrq sequence to a guest:[11]
virsh send-key debian KEY_LEFTALT KEY_SYSRQ KEY_H
External disks
Temporarily attaching (and detaching) external disks to a (running) virtual machine can be cumbersome, but can be simplified with a pre-configured XML file.[12][13] Let's connect, for example, a USB drive to our host machine:
$ dmesg | grep idVen | tail -1
usb 1-1: New USB device found, idVendor=0781, idProduct=5406, bcdDevice= 2.00
$ lsusb -d 0781:
Bus 001 Device 008: ID 0781:5406 SanDisk Corp. Cruzer Micro U3
Prepare our XML file:
$ cat sandisk.xml
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x0781'/>
    <product id='0x5406'/>
  </source>
</hostdev>
With that in place, we can now attach the disk via the XML file:
virsh attach-device --domain debian --file sandisk.xml
We may have to give ourselves write permissions to that device first:
sudo setfacl -m u:${USER}:rw /dev/bus/usb/001/008
sudo chcon -t svirt_image_t /dev/bus/usb/001/008 # When SELinux is involved
If needed, a udev rule could be created, but then the device will only be attached to/detached from a particular machine:
$ cat /etc/udev/rules.d/90-libvirt-usb.rules
ACTION=="add", \
  SUBSYSTEM=="usb", \
  ENV{ID_VENDOR_ID}=="0781", \
  ENV{ID_MODEL_ID}=="5406", \
  RUN+="/usr/bin/virsh attach-device debian /usr/local/etc/sandisk.xml"

ACTION=="remove", \
  SUBSYSTEM=="usb", \
  ENV{ID_VENDOR_ID}=="0781", \
  ENV{ID_MODEL_ID}=="5406", \
  RUN+="/usr/bin/virsh detach-device debian /usr/local/etc/sandisk.xml"
To detach, run:
virsh detach-device --domain debian --file sandisk.xml
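The hostdev XML can also be generated from a vendor:product pair (as printed by lsusb) instead of being written by hand. A sketch; 0781:5406 is the SanDisk drive from the example above:

```shell
# Sketch: emit a libvirt <hostdev> fragment for a given vendor:product pair.
usb_hostdev_xml() {
  local vendor=${1%%:*} product=${1##*:}
  cat <<EOF
<hostdev mode='subsystem' type='usb'>
  <source>
    <vendor id='0x${vendor}'/>
    <product id='0x${product}'/>
  </source>
</hostdev>
EOF
}

usb_hostdev_xml 0781:5406 > sandisk.xml
```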
Storage Pools
For some reason new storage pools are created when new machines are defined:
virt-install --name foobar --ram 2048 --vcpus 2 --os-type debian9 \
  --disk /opt/vm/debian9.qcow2 \
  --cdrom /mnt/nfs/debian-9.4.0-amd64-netinst.iso [...]
And once the VM is defined:
$ virsh pool-list
 Name      State    Autostart
-------------------------------
 default   active   yes
 nfs       active   yes
 debian9   active   yes

$ for p in nfs debian9; do virsh pool-dumpxml ${p} | grep path; done
    <path>/mnt/nfs/</path>
    <path>/opt/vm/</path>
As a temporary workaround, let's destroy (and undefine) all non-default pools:
for p in $(virsh pool-list --name | grep -vw default); do
  virsh pool-destroy $p && virsh pool-undefine $p
done
Domain Snapshots
TBD - do not use as is!
After creating a disk-only snapshot we could not delete the same snapshot any more:[14]
$ virsh snapshot-create --domain foobar --disk-only
Domain snapshot 1573619216 created

$ virsh snapshot-delete --domain foobar --snapshotname 1573619216
error: Failed to delete snapshot 1573619216
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
We first have to merge back the changes into the parent device[15] before we can delete the snapshot:
$ virsh domblklist foobar
Target Source
------------------------------------------------------------
vda /opt/vm/foobar.1573619216
sda -
$ virsh start foobar
Domain foobar started
$ virsh blockcommit foobar /opt/vm/foobar.1573619216 --verbose --pivot
Block commit: [100 %]
Successfully pivoted
$ virsh domblklist foobar
Target Source
-------------------------------------------------------
vda /opt/vm/foobar.qcow2
sda -
While the snapshot file is still present, it is no longer active and can be disassociated from the VM:
$ virsh snapshot-list --domain foobar
Name Creation Time State
---------------------------------------------------
1573619216 2019-11-12 20:26:56 -0800 shutoff
$ virsh snapshot-delete --domain foobar --snapshotname 1573619216 --metadata
Domain snapshot 1573619216 deleted
The physical snapshot file itself can be removed too:
rm /opt/vm/foobar.1573619216
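The blockcommit/snapshot-delete/rm sequence above can be wrapped into one helper. A sketch, assuming (as in the example) that the overlay file is named after the snapshot:

```shell
# Sketch: merge an external disk-only snapshot back into its parent, drop
# the snapshot metadata, and remove the now-inactive overlay file.
flatten_snapshot() {
  local vm=$1 snap=$2 overlay=$3
  virsh blockcommit "$vm" "$overlay" --verbose --pivot &&
  virsh snapshot-delete --domain "$vm" --snapshotname "$snap" --metadata &&
  rm -f "$overlay"
}

# Usage: flatten_snapshot foobar 1573619216 /opt/vm/foobar.1573619216
```

Note that, as shown above, the domain must be running for blockcommit to pivot.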
LVM Snapshots
When working with LVM devices, we can use its snapshot feature too. For example, let's create a snapshot for a VM's root device and use that to boot from:
sudo lvcreate -l 20%ORIGIN -s -n ubuntu0.snap0 vg0/ubuntu0.disk0
virsh attach-disk --domain ubuntu0 --source /dev/vg0/ubuntu0.snap0 --target vdc --config
virsh edit ubuntu0
[...]
> <target dev='vda' bus='virtio'/>
> <boot order='2'/>
>
> <target dev='vdc' bus='virtio'/>
> <boot order='1'/>
When starting the VM, its root device will be vg0/ubuntu0.snap0, which comes in handy if we want to do something to the old root device, vg0/ubuntu0.disk0, still accessible via vda. For example, convert it from Ext4 to Btrfs:
e2fsck -fvy /dev/vda1
btrfs-convert /dev/vda1
Re-install the bootloader, GRUB:
mount -t btrfs /dev/vda1 /mnt/
mount -t devtmpfs dev /mnt/dev/
mount -t proc proc /mnt/proc/
mount -t sysfs sys /mnt/sys/
chroot /mnt/
grub-install -v --recheck /dev/vda
grub-mkconfig -o /boot/grub/grub.cfg
^D
umount /mnt/{dev,proc,sys,}
Somewhat out of scope in this Libvirt context, but because it's a nice example: we can use these snapshots in combination with Btrfs to provide an alternative boot volume. Let's assume we are inside our Ubuntu VM:
sudo mkdir /mnt/snap
sudo btrfs subvolume snapshot / /mnt/snap/root
Now we should have a new snapshot volume:
$ sudo btrfs subvolume list /
ID 304 gen 740 top level 5 path mnt/snap/root
Create some kind of control file:
echo "Hello, world" > /foo.txt
Power off the VM and modify its configuration to boot from an LVM snapshot again:
sudo lvremove vg0/ubuntu0.snap0
sudo lvcreate -l 20%ORIGIN -s -n ubuntu0.snap0 vg0/ubuntu0.disk0
virsh attach-disk --domain ubuntu0 --source /dev/vg0/ubuntu0.snap0 --target vdc --config
virsh edit ubuntu0
[...]
> <target dev='vdc' bus='virtio'/>
> <boot order='1'/>
Start the VM with root=/dev/vdc1 init=/bin/sh because we just need a working Linux environment. Here, we can look at the snapshotted volume:
$ mount -t btrfs -o subvolid=304 /dev/vda1 /mnt/
$ cat /mnt/foo.txt
cat: /mnt/foo.txt: No such file or directory
As expected, because this file was created after the snapshot was taken. Let's make ID 304 the default volume for our next boot:
btrfs subvolume set-default 304 /mnt
umount /mnt
Power off the VM, remove the LVM snapshot again, start the VM, and our control file /foo.txt should be gone again.
Passthrough filesystems
$ virsh edit VM
[...]
<filesystem type='mount' accessmode='passthrough'>
<source dir='/path/to/host/directory'/>
<target dir='foobar'/>
<readonly/>
</filesystem>
And then, in the guest, we'll finally have a chance to use the 9p[16] file system:
sudo mount -t 9p -o trans=virtio,ro foobar /mnt
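To make the mount persistent across reboots, an /etc/fstab entry could look like this (a sketch; the foobar tag matches the <target dir='foobar'/> definition above):

```
foobar  /mnt  9p  trans=virtio,ro  0  0
```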
Performance
Disks
Block devices are always preferred over image files. When this is not possible we can tune[17] disk performance somewhat.
Convert existing disk images:
qemu-img convert -p -O qcow2 -o preallocation=metadata,lazy_refcounts=on,nocow=on disk.qcow2 disk_new.qcow2
Detach and re-attach with:
virsh detach-disk --domain ${VM} --target vda --persistent
virsh attach-disk --domain ${VM} --target vda --persistent --source $(pwd)/disk_new.qcow2 --cache writeback --subdriver qcow2
Bugs
- RHBZ #678555 - systemd should not purge application created cgroups, even if they contain no processes
- RHBZ #452422 - qemu: could not open disk image
- RHBZ #527736 - Storage driver can't create storage pool volumes on a FAT32 hard disk
Config
Links
- Qemu
- Fedora: Getting started with virtualization
- Fedora: How to debug Virtualization problems
- virtme
- Installing Virtual Machines with virt-install, plus copy pastable distro install one-liners (2015-08-02)
References
- ↑ Debian #747568 - Unknown lvalue 'ControlGroup' in section 'Service'
- ↑ Debian #758688 - Unit virtlockd.service cannot be reloaded because it is inactive
- ↑ IPv6 NAT based network
- ↑ VirtManager: could not open disk image
- ↑ OpenNebula – KVM QEMU could not open disk image disk.0: Permission denied
- ↑ SELinux preventing libvirtd to work with LVM devices
- ↑ Virt-builder
- ↑ Virsh Command Reference
- ↑ Configure security policy settings
- ↑ Preparing Virtual Machine for Virsh Shutdown
- ↑ Send Magic SysRq to a KVM guest using virsh
- ↑ How to auto-hotplug usb devices to libvirt VMs (Update 1)
- ↑ RHEL 6 Virtualization Administration Guide: Attaching and Updating a Device with virsh
- ↑ I created an external snapshot, but libvirt will not let me delete or revert to it
- ↑ Unable to delete KVM snapshot
- ↑ v9fs: Plan 9 Resource Sharing for Linux
- ↑ Incredibly low KVM disk performance (qcow2 disk files + virtio)