Xen
PV vs. HVM
With the paravirtualized (PV) approach, the guest OS has to be Xen-aware so that it can pass all its ring 0 operations to the hypervisor (HV), which has actual hardware access through device drivers.
In HVM (Hardware Virtual Machine) mode, with Intel's VT-x ("Vanderpool") or AMD's SVM (AMD-V, "Pacifica"), the HV can run Xen-agnostic guests (e.g. Microsoft Windows) on virtualized hardware. The processor's virtualization extensions handle the guest's kernel mode calls.
In Linux, /proc/cpuinfo lists virtualization features[1]:
$ grep -E 'vmx|svm' /proc/cpuinfo
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat \
pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm syscall nx lm \
constant_tsc pni monitor ds_cpl vmx est tm2 ssse3 cx16 xtpr lahf_lm
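The same check can be wrapped in a tiny helper that also tells Intel and AMD apart; a sketch (the helper name is ours, the vmx/svm flag names are standard):

```shell
# Print which hardware virtualization extension a cpuinfo flags line
# advertises, if any.
detect_virt() {
    case " $1 " in
        *" vmx "*) echo "Intel VT-x" ;;
        *" svm "*) echo "AMD-V" ;;
        *)         echo "none - PV guests only" ;;
    esac
}

# In practice, feed it the real flags line:
#   detect_virt "$(sed -n 's/^flags[[:space:]]*: //p' /proc/cpuinfo | head -1)"
detect_virt "fpu vme de pse vmx est tm2"   # prints: Intel VT-x
```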
Prerequisites
Before we can start our DomU domains, we have to configure xend, the node control daemon running in userspace.
$ grep ^\( /etc/xen/xend-config.sxp
(logfile /var/log/xen/xend.log)
(loglevel INFO)
(xend-relocation-server no)
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
(network-script network-bridge)
(vif-script vif-bridge)
(dom0-min-mem 128)
(dom0-cpus 0)
(vncpasswd )
These are pretty much the default settings, except for the network-script and vif-script options:
- (network-script network-bridge), (vif-script vif-bridge) - The Dom0 creates a bridge interface for the VM, passing all traffic to/from the domU. This way we can access the VM from the outside world with a real IP address.
- (network-script network-route), (vif-script vif-route) - Same effect as creating a bridge, but done at the IP layer.
- (network-script network-nat), (vif-script vif-nat) - Network traffic is NAT'ed between the VM and the outside world. We don't want this :-\
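Whichever mode is chosen, the network-script and vif-script lines must use the same backend (bridge, route, or nat). A quick consistency check could look like this (function name and approach are our own sketch):

```shell
# Check that network-script and vif-script in a xend-config.sxp agree.
# Reads the config on stdin.
check_net_pair() {
    cfg=$(cat)
    net=$(printf '%s\n' "$cfg" | sed -n 's/^(network-script network-\([a-z]*\)).*/\1/p')
    vif=$(printf '%s\n' "$cfg" | sed -n 's/^(vif-script vif-\([a-z]*\)).*/\1/p')
    if [ "$net" = "$vif" ]; then
        echo "consistent: $net"
    else
        echo "mismatch: $net vs $vif"
    fi
}

# Demo on an inline snippet; in practice:
#   check_net_pair < /etc/xen/xend-config.sxp
printf '(network-script network-bridge)\n(vif-script vif-bridge)\n' | check_net_pair
```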
Xend is much more powerful than this, and if we ever get around to covering Xen management tools, we will have to explain its configuration too. Until then it's just:
$ grep ^\( /etc/xen/xend-config-xenapi.sxp
(xend-relocation-server yes)
(xend-relocation-hosts-allow '^localhost$ ^localhost\\.localdomain$')
(network-script network-bridge)
(vif-script vif-bridge)
(dom0-min-mem 196)
(dom0-cpus 0)
(vncpasswd )
DomU: HVM
Here our DomU will be fully virtualized and won't know anything about Xen. Let's see what a Windows XP DomU configuration looks like:
$ grep ^[a-z] /etc/xen/winxp.cfg
kernel = '/usr/lib/xen/boot/hvmloader'
builder = 'hvm'
device_model = '/usr/lib/xen/bin/qemu-dm'
memory = '1024'
vcpus = '2'
disk = [ 'phy:/dev/vg01/winxp0,hda,w', 'phy:/dev/loop0,ioemu:hdc:cdrom,r' ]
name = 'winxp'
dhcp = 'dhcp'
vif = [ 'bridge=eth0,mac=00:16:3e:38:d4:15' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
boot = 'c'
vnc = 1
vncviewer = 0
vnclisten = '192.168.10.1'
sdl = 0
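Each entry in the disk list follows the pattern backend:device,guest-device,mode. A small helper (purely illustrative, name is ours) splits such a spec into its fields:

```shell
# Split a Xen disk spec like 'phy:/dev/vg01/winxp0,hda,w' into its
# three comma-separated fields: backend device, guest device, mode.
parse_disk() {
    IFS=, read -r src dev mode <<EOF
$1
EOF
    echo "backend=${src#phy:} guest=${dev} mode=${mode}"
}

parse_disk 'phy:/dev/vg01/winxp0,hda,w'
# prints: backend=/dev/vg01/winxp0 guest=hda mode=w
```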
DomU: ParavirtOps
Our DomU is Xen-aware, so it's possible to host the kernel in the Dom0; the DomU could even run the same kernel as the Dom0. Notice that we don't have to specify builder and device_model, and can point to a real kernel too:
$ grep ^[a-z] /etc/xen/linux.cfg
kernel = '/mnt/xen/linux/vmlinux'
memory = '256'
vcpus = '2'
root = '/dev/xvda1'
disk = [ 'phy:/dev/vg01/linux0,xvda,w' ]
name = 'linux'
vif = [ 'bridge=eth0,mac=00:16:3e:38:d4:07' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
extra = 'console=hvc0 init=/sbin/init rootfstype=ext4 rootflags=nobarrier'
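Alternatively, instead of keeping the kernel in the Dom0 filesystem, Xen ships the pygrub bootloader, which pulls the kernel and initrd from the DomU's own disk at start time. Replacing the kernel line would then look like this (the binary's path may vary by distribution):

```
bootloader = '/usr/bin/pygrub'
```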
We're even passing boot parameters to the DomU, hey!
Installation
Usage
Now we can start (and control) our new virtual machine:
$ xm create /etc/xen/winxp.cfg
$ xm list
Name            ID  Mem(MiB)  VCPUs  State   Time(s)
Domain-0         0       239      1  r-----   3160.4
winxp            2      1024      2  -b----     92.5
linux            3       256      1  r-----    312.1
$ xm console linux
linux-domU$ uptime
 23:59:36 up 10 min, 1 user, load average: 0.79, 0.35, 0.12
linux-domU$ ^]
To connect to the Windows console, we can use vncviewer:
$ vncviewer 192.168.10.1:1
Otherwise Windows can be accessed via RDP over the network as usual.
Attaching disks to (running) domains, listing them, and detaching again:
$ xl block-attach linux 'format=raw, vdev=xvde, access=rw, target=/dev/sde'
$ xl block-list linux
Vdev BE handle state evt-ch ring-ref BE-path
51712 0 1 4 0 0 /local/domain/0/backend/vbd/3/51712
51808 0 1 4 -1 -1 /local/domain/0/backend/vbd/3/51808
$ readlink -f $(xenstore-read /local/domain/0/backend/vbd/3/51808/params)
/dev/sde
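The backend path printed by xl block-list follows a fixed xenstore layout, /local/domain/&lt;backend-domid&gt;/backend/vbd/&lt;guest-domid&gt;/&lt;vdev&gt;. A tiny helper (the function name is ours) makes that explicit:

```shell
# Build the xenstore backend path for a virtual block device.
# $1 = backend domid (usually 0), $2 = guest domid, $3 = vdev number
vbd_path() {
    echo "/local/domain/$1/backend/vbd/$2/$3"
}

vbd_path 0 3 51808
# prints: /local/domain/0/backend/vbd/3/51808
```

The result matches the BE-path column above and can be fed straight to xenstore-read.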
Find out the matching Xen device:
$ grep /dev/sde /etc/xen/linux.cfg
'format=raw, vdev=xvde, access=rw, target=/dev/sde',
$ xl block-detach linux xvde
Or, to generate some kind of summary:
$ xl block-list linux | awk '!/^Vdev/ {print $1, $NF}' | \
    while read -r v p; do
        d=$(xenstore-read ${p}/params)
        echo "v: ${v} - $(grep ${d} /etc/xen/linux.cfg | awk '{print $2}') -- ${d}"
    done
v: 51712 - vdev=xvde, -- /dev/sde
v: 51744 - vdev=xvdf, -- /dev/sdf
v: 51760 - vdev=xvdg, -- /dev/vg0/test
Links
- Xen Virtualization Essentials (virtuatopia.com)
- Xen paravirt_ops for upstream Linux kernel (xensource.com)
- bderzhavets: Xen Virtualization on Linux and Solaris
- CRE092: Virtualisierung
- 2009's Virtualization Techniques Compared (Archive)
- DomU Support for Xen