This document describes how to build an HVM IO domain. The hypervisor is based on xvm-3.3, and domain 0 is based on onnv_121.
1. Upgrade domain 0
1) Install Nevada build 121 onto a system that supports Intel Virtualization Technology for Directed I/O. Make sure Intel VT-d is enabled in the BIOS settings.
2) Build the BFU archive from onnv_121 with patches.
$ hg clone ssh://[email protected]/hg/onnv/onnv-gate
$ cd onnv-gate
$ hg update -C onnv_121
$ hg qinit
$ hg qimport pci-device-reservation
$ hg qpush
$ hg qimport xen-hcall-extension
$ hg qpush
Download the closed binary archives from http://dlc.sun.com/osol/on/downloads/b121/ and unpack the tarballs under onnv-gate.
$ bzcat on-closed-bins-nd.i386.tar.bz2 | tar xf -
$ bzcat on-closed-bins.i386.tar.bz2 | tar xf -
Set up the env script and run nightly:
$ cp usr/src/tools/env/opensolaris.sh ./
$ edit opensolaris.sh
$ nightly opensolaris.sh
3) BFU the system installed in step 1 with the archive built in step 2.
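Before BFUing, it can help to sanity-check that the nightly actually produced archives. A minimal sketch, assuming the usual $CODEMGR_WS/archives/i386/nightly layout and those archive names (both assumptions; adjust for your workspace):

```shell
# Hypothetical pre-BFU sanity check. The archive directory layout
# ($CODEMGR_WS/archives/i386/nightly) and the archive names are
# assumptions; adjust them for your workspace before running bfu(1).
check_bfu_archives() {
  dir=$1
  for f in generic.root generic.usr; do
    if [ ! -e "$dir/$f" ]; then
      echo "missing $f in $dir" >&2
      return 1
    fi
  done
  echo "archives look complete; run: bfu $dir"
}
```

On the target system, bfu is then pointed at that archive directory.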
2. Upgrade the xVM packages
1) Pull the xvm-3.3 code base into a local directory
$ wget http://dlc.sun.com/osol/on/downloads/b121/xvm-src.tar.bz2
Or,
Pull it from ssh://[email protected]/hg/xen-gate/xvm-3.3+(sub-gates)
2) Import the patches for xen.hg and qemu.hg
$ cd xen.hg
$ hg qpush -a
$ hg qimport python-pci-aug27
$ hg qpush
$ cd qemu.hg
$ hg qpush -a
$ hg qimport pci-passthrough-aug27
$ hg qpush
3) Build the packages
$ export XVM_WS=`pwd`
$ ./sunos.hg/bin/build-all full
# svcadm disable xvm/domains xvm/console xvm/xend xvm/store xvm/virtd
# pkgrm SUNWlibvirt SUNWlibvirtr SUNWurlgrabber SUNWvdisk SUNWvirtinst SUNWxvmdomr SUNWxvmdomu SUNWxvmh SUNWxvmhvm SUNWxvmr SUNWxvmu
# pkgadd -d . SUNWlibvirt SUNWlibvirtr SUNWurlgrabber SUNWvdisk SUNWvirtinst SUNWxvmdomr SUNWxvmdomu SUNWxvmh SUNWxvmhvm SUNWxvmr SUNWxvmu
# svcadm enable xvm/domains xvm/console xvm/xend xvm/store xvm/virtd
# reboot
# svccfg -s xend setprop start/privileges = all
# svcadm refresh xend
# svcadm restart xend
# edit [/rpool]/boot/grub/menu.lst
title Solaris xVM
findroot (rootfs0,0,a)
kernel$ /boot/$ISADIR/xen.gz iommu=1
module$ /platform/i86xpv/kernel/$ISADIR/unix /platform/i86xpv/kernel/$ISADIR/unix -B pci-reserve="[1:0:0],[1:0:1],[3:0:0]"
module$ /platform/i86pc/$ISADIR/boot_archive
Here, "iommu=1" enables Intel VT-d support in the hypervisor, and -B pci-reserve="[1:0:0],[1:0:1],[3:0:0]" tells domain 0 to reserve the PCI devices with those bus:device:function numbers for pass-through.
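The bracketed pci-reserve triples and the hex bus:dev.fn notation that lspci and xm use are easy to mix up. A small helper sketch for converting between them (the function name is mine, and it assumes the pci-reserve triples are decimal, which the sample values above don't distinguish):

```shell
# Convert an lspci-style BDF ("01:00.0", hex fields) into the
# [bus:device:function] form used by the pci-reserve boot property.
# Helper name is hypothetical, not part of any xVM tool; triples
# are assumed to be decimal.
bdf_to_reserve() {
  bus=${1%%:*}
  rest=${1#*:}
  dev=${rest%%.*}
  fn=${rest#*.}
  printf '[%d:%d:%d]\n' "0x$bus" "0x$dev" "0x$fn"
}

bdf_to_reserve 01:00.0   # -> [1:0:0]
bdf_to_reserve 03:00.0   # -> [3:0:0]
```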
3. Create an HVM guest with PCI device pass-through
1) Below is a sample xm configuration file for PCI device pass-through.
#### start of the configure file, pt-sample.py ####
import os, re
arch = os.uname()[4]
if re.search('64', arch):
    arch_libdir = 'lib64'
else:
    arch_libdir = 'lib'
kernel = "/usr/lib/xen/boot/hvmloader"
builder='hvm'
memory = 512
shadow_memory = 8
name = "r48"
vcpus=2
pci = [ '01:00.0' ]
disk = [ 'file:/export/home/allen/iodomain/images/ia32e_rhel4u8.img,hdc,w' ]
device_model = '/usr/' + arch_libdir + '/xen/bin/qemu-dm'
boot='c'
vnc=1
vnclisten="0.0.0.0"
vncconsole=1
vncpasswd=''
nographic=0
stdvga=0
serial='null'
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'preserve'
#### end of the configure file, pt-sample.py ####
Here, "pci = [ '01:00.0' ]" tells xm to pass the PCI device with BDF 01:00.0 through to the guest.
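The xm pci list wants the hex bus:dev.fn form, while the grub-side pci-reserve property uses [bus:device:function] triples. A hedged sketch for mapping a reserve entry back to the xm form (function name is mine; the triples are assumed to be decimal):

```shell
# Turn a pci-reserve entry ("[1:0:0]", assumed decimal) into the hex
# bus:dev.fn string used in the xm "pci" list. Helper name is
# hypothetical, not part of any xVM tool.
reserve_to_bdf() {
  s=${1#\[}
  s=${s%\]}
  bus=${s%%:*}
  rest=${s#*:}
  dev=${rest%%:*}
  fn=${rest##*:}
  printf '%02x:%02x.%x\n' "$bus" "$dev" "$fn"
}

reserve_to_bdf '[1:0:0]'   # -> 01:00.0
```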
2) Create the HVM guest domain
# xm create -c pt-sample.py
You will find the pass-through device when you log in to the guest (for example, in the guest's lspci output).
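Note that inside the guest the device usually shows up under a guest-assigned BDF rather than the host's 01:00.0. A small sketch for checking, given lspci output from within the guest (the function name and the sample BDF are illustrative):

```shell
# Hypothetical check: read `lspci` output on stdin and confirm a device
# with the given guest-visible BDF is present. Run inside the guest, e.g.
#   lspci | guest_has_device 00:05.0
guest_has_device() {
  if grep -q "^$1 "; then
    echo "device $1 is visible in the guest"
  else
    echo "device $1 not found" >&2
    return 1
  fi
}
```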