http://blog.linuxing.org/2010/01/how-to-build-a-low-cost-san/
Krishna Kumar
April 9, 2009
In today's world there is an obvious need for information sharing in every department, and network storage can help us meet this growing challenge. Here in this article we focus on building a SAN with the features discussed below.
There are several options available for building a reliable SAN, but most are complex and expensive: iSCSI, NBD, ENBD and Fiber Channel. iSCSI, NBD and ENBD work on top of the TCP/IP stack, which adds considerable overhead. Luckily there is a protocol that can serve our purpose at an affordable cost and with less overhead. All we typically need is some dual-port Gig-E cards, a Gigabit Ethernet switch and some disks. This very simple and lightweight protocol is known as ATA over Ethernet (AoE), and it ships with the Linux kernel as a kernel module. AoE does not rely on network layers above Ethernet, such as the IP and TCP that iSCSI requires. While this makes AoE potentially faster than iSCSI, with less load on the host to handle the additional protocols, it also means that AoE is not routable outside a LAN. AoE is intended for SANs only; in this regard it is more comparable to Fibre Channel over Ethernet than to iSCSI. It exports block devices (for example SATA hard disks) over the network with very high throughput when coupled with a quality Ethernet switch, which maximizes throughput and minimizes collisions through integrity checking and packet ordering. When using AoE in a large, scalable enterprise environment, we can take the help of Red Hat cluster-aware tools such as CLVM, GFS, DLM, DRBD and Heartbeat.
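To make the idea concrete, here is a minimal sketch of the workflow described in the rest of this article (the device name, shelf/slot numbers, interface and host names are illustrative): a server exports a local disk with an AoE target such as vblade, and a client running the aoe kernel driver sees it under /dev/etherd as an ordinary block device.
[root@server vblade-19]# ./vbladed 0 0 eth0 /dev/sdb
[root@client ]# modprobe aoe
[root@client ]# aoe-stat
[root@client ]# mkfs.ext3 /dev/etherd/e0.0
[root@client ]# mount /dev/etherd/e0.0 /mnt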
Cost Comparison
Technology | Speed | Server Interface | Switch | Cabling | Storage/TB |
AoE | 2Gb | $99 | $15-$30 | $25-$35 | $400-$500 |
iSCSI | 1Gb | $500-$1000 | $400-$600 | $25-$35 | $1000-$5000 |
Fiber Channel | 4Gb | $1200-$2000 | $800-$3600 | $175-$225 | $4000-$10000 |
The following AoE targets (servers) are available under the GPL: kvblade, aoeserver, vblade-kernel, vblade, ggaoed, qaoed and sqaoed.
You can export your block devices over the network with any of the available targets. The question is how to export them in a well-configured and manageable way that helps us achieve our goals. The following table lists the configuration features each target offers, either on the command line or in a configuration file. The acronyms used in the table are: KV = kvblade, AOES = aoeserver, VB-KER = vblade-kernel, VB = vblade, GGOLD = ggaoed (base version), GGNW = ggaoed (updated version), QD = qaoed, SD = sqaoed.
A short description of each feature is given in the Terms and Terminology section.
Features
FEATURE | KV | AOES | VB-KER | VB | GGOLD | GGNW | QD | SD |
Shelf | Y | Y | Y | Y | Y | Y | Y | Y |
Slot | Y | Y | Y | Y | Y | Y | Y | Y |
Interface | Y | Y | Y | Y | Y | Y | Y | Y |
Device-path | Y | Y | Y | Y | Y | Y | Y | Y |
Conf-file | N | N | N | N | Y | Y | Y | N |
MTU | N | N | N | N | Y | Y | Y | N |
Mac-filtering | N | Y | N | Y | Y | Y | Y | N |
ACL-listing | N | N | N | N | Y | Y | Y | N |
Buffer Count | N | N | Y | Y | Y | Y | N | N |
Sectors | N | N | Y | N | N | N | N | N |
Queuing | N | N | N | N | Y | Y | N | N |
Logging-info | N | N | N | N | Y | Y | Y | N |
Direct-Mode | N | N | N | Y | Y | Y | Y | N |
Sync-Mode | N | N | N | Y | N | N | N | N |
Read-only Mode | N | N | N | Y | Y | Y | Y | N |
UUID | N | N | N | N | Y | Y | Y | N |
Write Cache | N | N | N | N | N | N | Y | N |
Policy | N | N | N | N | Y | Y | Y | N |
Trace-i/o | N | N | N | N | Y | Y | N | N |
Jumbo-Frames | Y | Y | Y | Y | Y | Y | Y | Y |
On GPL | Y | Y | Y | Y | Y | Y | Y | Y |
Reliability | Low | Med | Low | High | High | Med | High | Med |
Usability | Low | Med | Low | High | High | Med | High | Med |
This section briefly describes each target and its setup procedure. The objective of the setup is to export a block device over the network. If you don't have a spare block device, you can treat a loop device as a block device using the losetup command. In this section all the experiments are done on loop devices; we can replace /dev/loop0 with /dev/sda2 if we have spare disks on our box.
Kvblade is a kernel module implementing the target side of the AoE protocol. Users can command the module through sysfs to export block devices on specified network interfaces. The loopback device should be used as an intermediary for exporting regular files with kvblade.
Download kvblade alpha-3:
[root@node1 ]# wget http://downloads.sourceforge.net/aoetools/kvblade-alpha-3.tgz
Untar this package:
[root@node1 ]# tar -xzvf kvblade-alpha-3.tgz
[root@node1 ]# cd kvblade-alpha-3
Compilation of this kernel module was done on FC7 (fedora 7):
[root@node1 kvblade-alpha-3]# uname -r
2.6.21-7.fc7xen
Make the following changes in the kvblade.c file: at line 3, change linux/config.h to linux/autoconf.h; at line 66, change ATA_SERNO_LEN to ATA_ID_SERNO_LEN. The Linux kernel keeps changing its file names, APIs and data structures, so these changes are required for the module to compile successfully on this kernel.
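If you prefer not to edit the file by hand, the same two changes can be applied with sed (a sketch; it assumes the header and macro names appear in kvblade.c exactly as spelled here):
[root@node1 kvblade-alpha-3]# sed -i 's|linux/config.h|linux/autoconf.h|' kvblade.c
[root@node1 kvblade-alpha-3]# sed -i 's/ATA_SERNO_LEN/ATA_ID_SERNO_LEN/g' kvblade.c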
[root@node1 ]# make
[root@node1 kvblade-alpha-3]# make install
[root@node1 kvblade-alpha-3]# insmod kvblade.ko
Now kvblade has been cleanly compiled and installed on your system. Just run the following command to verify it:
[root@node1 kvblade-alpha-3]# lsmod | grep kvblade
kvblade 17992 0
If you don't have free devices like /dev/sda4 or /dev/sda5, you can treat a loop device as your block device using the following commands:
[root@node1 kvblade-alpha-3]# dd if=/dev/zero of=file.img bs=1M count=400
[root@node1 kvblade-alpha-3]# losetup /dev/loop0 file.img
[root@node1 kvblade-alpha-3]# ./kvadd 0 0 eth0 /dev/loop0
Now you have exported your block device over the network.
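As a quick sanity check (a sketch, assuming a second machine on the same LAN already has the aoe driver and aoetools installed, as described later in the client section), the export should appear as shelf 0, slot 0:
[root@aoeclient ]# modprobe aoe
[root@aoeclient ]# aoe-discover
[root@aoeclient ]# aoe-stat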
Aoeserver is an in-kernel ATA over Ethernet storage target driver used to emulate a Coraid EtherDrive blade. It is partly based on vblade and the aoe client from the Linux 2.6 kernel.
Download the source:
[root@node1 ]# svn checkout http://aoeserver.googlecode.com/svn/trunk/ aoeserver
[root@node1 ]# cd aoeserver/aoeserver
Compilation of this kernel module was done on FC10 (Fedora 10) with kernel version 2.6.27.5-117.fc10.i686.
Comment out lines 362 and 380 of the aoeserver/linux/drivers/block/aoeserver/aoeproc.c file, so that both read:
at line 362: /* remove_proc_entry(PROCFSNAME, &proc_root) */
at line 380: /* remove_proc_entry(PROCFSNAME, &proc_root) */
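The two lines can also be commented out non-interactively (a sketch; the line numbers are taken from the revision used here and may differ in a later checkout, and the path should be adjusted to your working directory):
[root@node1 aoeserver]# sed -i -e '362s|^|/* |' -e '362s|$| */|' -e '380s|^|/* |' -e '380s|$| */|' linux/drivers/block/aoeserver/aoeproc.c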
Now run the make command:
[root@node1 aoeserver]# make
[root@node1 aoeserver]# make install
Insert the module:
[root@node1 aoeserver]# sh load.sh
[root@node1 aoeserver]# dd if=/dev/zero of=file1.img bs=1M count=400
[root@node1 aoeserver]# losetup /dev/loop1 file1.img
[root@node1 aoeserver]# echo add /dev/loop1 0 1 eth0 > /proc/aoeserver
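If the module also reports its current exports through the same proc interface (an assumption; check the module's documentation), they can be listed with:
[root@node1 aoeserver]# cat /proc/aoeserver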
Vblade-kernel is an AoE target emulator implemented as a kernel module for Linux 2.6.* kernels.
Download vblade kernel:
[root@node1 krishna]# wget http://lpk.com.price.ru/~lelik/AoE/vblade-kernel-0.3.4.tar.gz
[root@node1 krishna]# tar -xzvf vblade-kernel-0.3.4.tar.gz
[root@node1 krishna]# cd vblade-kernel-0.3.4
This kernel module was compiled on FC7 with kernel version 2.6.21-7.fc7xen. Make the following change in the vblade.h file: at line 27, change linux/config.h to linux/autoconf.h. Make the following change in main.c: replace (skb_linearize(skb, GFP_ATOMIC) < 0) with (skb_linearize(skb)).
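These edits can be scripted with sed as well (a sketch; it assumes the expressions appear in the sources with exactly this spelling and spacing):
[root@node1 vblade-kernel-0.3.4]# sed -i 's|linux/config.h|linux/autoconf.h|' vblade.h
[root@node1 vblade-kernel-0.3.4]# sed -i 's/skb_linearize(skb, GFP_ATOMIC) < 0/skb_linearize(skb)/' main.c
With the edits in place, run make to compile: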
[root@node1 vblade-kernel-0.3.4]# make
[root@node1 vblade-kernel-0.3.4]# insmod vb.ko
[root@node1 vblade-kernel-0.3.4]# dd if=/dev/zero of=file2.img bs=1M count=200
[root@node1 vblade-kernel-0.3.4]# losetup /dev/loop2 file2.img
[root@node1 vblade-kernel-0.3.4]# echo add do /dev/loop2 > /sys/vblade/drives
[root@node1 vblade-kernel-0.3.4]# echo add eth0 0 2 32 8 > /sys/vblade/do/ports
Here eth0 is the interface, 0 is the shelf, 2 is the slot, 32 is the buffer count (the length of the request queue) and 8 is the maximum number of sectors per request.
Vblade is the virtual EtherDrive (R) blade, a program that makes a seekable file available over an Ethernet local area network (LAN) via the ATA over Ethernet (AoE) protocol. The seekable file is typically a block device like /dev/md0, but even regular files will work; sparse files can be especially convenient. When vblade exports the block storage over AoE it becomes a storage target. Another host on the same LAN can access the storage if it has a compatible aoe kernel driver.
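Since sparse files are mentioned above: instead of fully allocating the backing file with dd as done below, a sparse file of the same apparent size can be created almost instantly (a sketch; the file name is illustrative):
[root@node1 krishna]# dd if=/dev/zero of=sparse3.img bs=1M count=1 seek=199
[root@node1 krishna]# du -h --apparent-size sparse3.img; du -h sparse3.img
The first du shows the apparent size (200 MB), the second shows how little space is actually allocated.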
Download vblade:
[root@node1 krishna]# wget http://downloads.sourceforge.net/aoetools/vblade-19.tgz
[root@node1 krishna]# tar -xzvf vblade-19.tgz
[root@node1 krishna]# cd vblade-19
[root@node1 krishna]# make
[root@node1 vblade-19]# make install
[root@node1 vblade-19]# dd if=/dev/zero of=newfile3.img bs=1M count=200
[root@node1 vblade-19]# losetup /dev/loop3 newfile3.img
[root@node1 vblade-19]# ./vbladed 0 3 eth0 /dev/loop3
Ggaoed is an AoE (ATA over Ethernet) target implementation for Linux. It utilizes Linux kernel AIO, memory-mapped sockets and other Linux features to maximize performance. Ggaoed comes in two flavours: the base version and the updated version.
This is the base version, ggaoed-0.9.tar.gz (release 0.9), released in July 2008. It can be downloaded from the following site: http://code.google.com/p/ggaoed/downloads/list
Running ggaoed requires Linux kernel 2.6.22 or later, so it is best to run ggaoed on a box with FC10 installed. A few additional development packages are also needed in order to build ggaoed.
Run the following commands to compile ggaoed:
[root@node1 ggaoed-0.9]# ./configure
[root@node1 ggaoed-0.9]# make
[root@node1 ggaoed-0.9]# make install
Sample Configuration File (ggaoed.conf)
The format of a sample configuration file is as follows:
example file: ggaoed.conf
[sdc]
path = /dev/sda2
shelf = 0
slot = 0
broadcast = true
read-only = true
queue-length = 64
direct-io = true
Here we have taken /dev/sda2, which is unmounted. If you don't have an unmounted partition available, you can create a block device with the dd and losetup commands, as shown below.
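For example (the file and loop-device names are illustrative):
[root@node1 ggaoed-0.9]# dd if=/dev/zero of=newfile4.img bs=1M count=400
[root@node1 ggaoed-0.9]# losetup /dev/loop4 newfile4.img
Then set path = /dev/loop4 in ggaoed.conf instead of /dev/sda2.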
[root@node1 ggaoed-0.9]# ./ggaoed -c ggaoed.conf -d
The updated version of ggaoed was released in March 2009, following the revised AoE protocol specification of February 2009. It ships with updated AoE protocol features such as MAC filtering and ACL listing.
Download the source:
[root@node1 Desktop]# svn checkout http://ggaoed.googlecode.com/svn/trunk/ ggaoed
[root@node1 Desktop]# cd ggaoed/
[root@node1 ggaoed]# autoreconf --install
Comment out line 26 of Makefile.am so that it reads: #SUBDIRS += doc
[root@node1 ggaoed]# ./configure
[root@node1 ggaoed]# make
[root@node1 ggaoed]# make install
First create the directory for the config file by running the following command:
[root@node1 ]# mkdir -p /usr/local/var/ggaoed
Format of config file:
# Sample ggaoed configuration file
[defaults]
pid-file = /var/run/ggaoed.pid

[eth0]
mtu = 1500

[testdev]
path = /dev/loop0
direct-io = true
queue-length = 16
shelf = 0
slot = 0
interfaces = eth0
broadcast = true
read-only = true
[root@node1 ]# dd if=/dev/zero of=newfile.img bs=1M count=200
[root@node1 ]# losetup /dev/loop0 newfile.img
[root@node1 ]# ./ggaoed -c ggaoed.conf -d
Qaoed is a multithreaded ATA over Ethernet daemon that is easy to use, yet highly configurable. Qaoed also comes in two flavours: qaoed and sqaoed.
You can download qaoed with the following command:
[root@node1 Desktop]# svn checkout http://qaoed.googlecode.com/svn/trunk/ qaoed
[root@node1 qaoed]# make
The block devices can be created as follows:
[root@node1 qaoed]# dd if=/dev/zero of=newfile6.img bs=1M count=200
[root@node1 qaoed]# losetup /dev/loop6 newfile6.img
The configuration file (qaoed.conf) looks as follows:
apisocket = /tmp/qaoedsocket;

default {
    shelf = 0;        /* Shelf */
    slot = 6;         /* Autoincremented slot numbers */
    interface = eth0;

    device {
        target = /dev/loop6;
    }
}
[root@node1 qaoed]# ./qaoed -c qaoed.conf
Fubra decided to evaluate various platforms to help support their next-generation network infrastructure (http://code.fubra.com/wiki/PortingQaoed). One of the platforms they evaluated is the Sun T5420, a high-end UltraSPARC T2 dual-processor machine with support for 64 threads per processor. It was decided to port the qaoed daemon to the Solaris platform, on both x86 and UltraSPARC. Sqaoed was the result of stripping out all the pthreads and reordering parts of the qaoed sources.
Download the source from the following site:
[root@node1 sqaoed]# svn co http://svn.fubra.com/storage/sqaoed/ trunk
[root@node1 sqaoed]# cd trunk/trunk
[root@node1 sqaoed]# make
[root@node1 sqaoed]# make install
[root@node1 trunk]# dd if=/dev/zero of=newfile7.img bs=1M count=200
[root@node1 trunk]# losetup /dev/loop7 newfile7.img
[root@node1 trunk]# ./sqaoed eth0 /dev/loop7 0 0
On the client side you must have the aoe driver and aoe-tools installed on your system (refer to the section AoE Tools and Commands). You will find the corresponding exported block devices in the /dev/etherd directory. Just run the following commands:
[root@aoeclient trunk]# modprobe aoe
[root@aoeclient trunk]# lsmod | grep aoe
[root@aoeclient trunk]# aoe-stat
[root@aoeclient trunk]# ls -l /dev/etherd/
[root@aoeclient trunk]# mkfs.ext3 /dev/etherd/e0.0
[root@aoeclient trunk]# mount /dev/etherd/e0.0 /mnt
So finally you have access to your exported block device and are free to perform any read or write operations on it.
In this experiment I have used the following hardware to measure disk I/O: a 160 GB SATA hard disk and an Intel(R) Xeon(R) CPU E5450 @ 3.00 GHz.
My network supports jumbo frames, so the interface MTU is raised accordingly:
[root@sigma13 ]# ifconfig eth0 mtu 9000
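A quick way to confirm that the whole path (NIC, switch and target host) really passes jumbo frames is to send an unfragmentable packet of nearly the full MTU; 8972 bytes of ICMP payload plus 28 bytes of headers gives a 9000-byte frame (replace the address with that of your AoE target host):
[root@sigma13 ]# ping -M do -s 8972 -c 3 192.168.1.10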
I have used both FC7 and FC10 (Fedora) because it is not possible to compile all the targets against the same kernel.
After successfully setting up all the available AoE targets, it is time to measure the disk I/O of the exported block devices. The available options for measuring block device I/O are hdparm, iostat, dd and dedicated I/O tools such as fio, bonnie, iozone and iometer.
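Before running a full fio job, a rough throughput number can be obtained with plain dd against the exported device (a sketch; e0.0 stands for whichever exported device you want to test, and the write test destroys its contents):
[root@aoeclient ]# dd if=/dev/zero of=/dev/etherd/e0.0 bs=1M count=100 oflag=direct
[root@aoeclient ]# dd if=/dev/etherd/e0.0 of=/dev/null bs=1M count=100 iflag=direct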
Here we have used fio (an I/O measurement tool) and plotted the performance graphs. I have put the following configuration in the surface-scan file:
[global]
thread=1
bs=128k
direct=1
ioengine=sync
verify=meta
verify_pattern=0xaa555aa5
verify_interval=512

[write-phase]
filename=/dev/etherd/e0.1  ; or use a full disk, for example /dev/sda
rw=write
fill_device=1
do_verify=0

[verify-phase]
stonewall
create_serialize=0
filename=/dev/etherd/e0.1
rw=read
do_verify=1
bs=128k
After setting the appropriate parameters in this file, just run the following command against each of the available AoE targets to collect the data for the following graphs:
[root@aoeclient ]# fio surface-scan
The performance graph of AoE targets with fio in case of write operation is as follows:
[Graph: fio write throughput for each AoE target.]
Here the X-axis denotes block size in kilobytes and the Y-axis denotes throughput in KB/sec (kilobytes per second).
The performance graph of AoE targets with fio in case of read operation is as follows:
[Graph: fio read throughput for each AoE target.]
We can set up bonnie-64 with the following commands:
[root@sigma13 bonnie-64-read-only]# mount /dev/etherd/e0.0 /mnt
[root@sigma13 bonnie-64-read-only]# ./Bonnie -d /mnt/ -s 128
The performance graph of AoE targets with Bonnie in case of write operation is as follows:
[Graph: Bonnie write throughput for each AoE target.] Here the X-axis denotes file size in megabytes and the Y-axis denotes throughput in MB/sec.
There are some tools and commands available to analyze AoE traffic on the network. Run the following commands to download and install these tools:
[root@node1 kk]# wget http://support.coraid.com/support/linux/aoe6-70.tar.gz
[root@node1 kk]# tar -xzvf aoe6-70.tar.gz
[root@node1 kk]# cd aoe6-70
[root@node1 aoe6-70]# make
[root@node1 aoe6-70]# make install
Now you have installed the necessary aoe-tools and can use the following commands:
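Among others, the aoetools package provides (a partial list; the exact set depends on the version you installed): aoe-discover (trigger discovery of AoE devices), aoe-stat (print the status of discovered devices), aoe-interfaces (restrict AoE to particular network interfaces), aoe-revalidate (revalidate the disk size of an AoE device), aoe-flush (flush down or unused devices out of the driver), aoe-mkdevs and aoe-mkshelf (create device nodes on systems without udev), and aoe-version (print AoE-related version information).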
So far we have seen the available targets, their features and their performance. Now it is time to build a SAN from the available disks and a Gigabit Ethernet switch; the steps follow below. While writing this article I did not have spare hard disks, so I performed the experiment on 200 MB loop devices. In an actual setup these loop devices would be replaced by real 200 GB hard disks. The basic diagram of our SAN is as follows:
[Diagram: two AoE servers (server0 and server1), each exporting two block devices through a Gigabit Ethernet switch to a client that assembles them into a RAID 10 array.]
[root@server0]# dd if=/dev/zero of=file1.img bs=1M count=200
[root@server0]# dd if=/dev/zero of=file2.img bs=1M count=200
[root@server0]# losetup /dev/loop0 file1.img
[root@server0]# losetup /dev/loop1 file2.img
[root@server0 vblade-19]# losetup -a
[root@server0 vblade-19]# ./vbladed 0 0 eth0 /dev/loop0
[root@server0 vblade-19]# ./vbladed 1 0 eth0 /dev/loop1
[root@server1]# dd if=/dev/zero of=file1.img bs=1M count=200
[root@server1]# dd if=/dev/zero of=file2.img bs=1M count=200
[root@server1]# losetup /dev/loop0 file1.img
[root@server1]# losetup /dev/loop1 file2.img
[root@server1 vblade-19]# losetup -a
[root@server1 vblade-19]# ./vbladed 0 1 eth0 /dev/loop0
[root@server1 vblade-19]# ./vbladed 1 1 eth0 /dev/loop1
Make sure that on the client side you have the latest AoE driver and the corresponding tools installed. You can check with the following command:
[root@client1 krishna]# lsmod | grep aoe
If you don't have the aoe driver on your box, you can download it from the following mirror: http://support.coraid.com/support/linux/
[root@client1 krishna]# aoe-version
aoetools: 29
installed aoe driver: 70
running aoe driver: 70
Now run the following commands to access the exported block devices on the client:
[root@client1 krishna]# modprobe aoe
[root@client1 krishna]# aoe-stat
e0.0 0.209GB eth0 1024 up
e0.1 0.209GB eth0 1024 up
e1.0 0.209GB eth0 1024 up
e1.1 0.209GB eth0 1024 up
So, you can see the exported block devices on your box.
Mirroring from e0.0 and e0.1:
[root@client1 krishna]# mdadm -C /dev/md0 -l 1 -n 2 /dev/etherd/e0.0 /dev/etherd/e0.1
Mirroring from e1.0 and e1.1:
[root@client1 krishna]# mdadm -C /dev/md1 -l 1 -n 2 /dev/etherd/e1.0 /dev/etherd/e1.1
Create the stripe over the mirrors:
[root@client1 krishna]# mdadm -C /dev/md2 -l 0 -n 2 /dev/md0 /dev/md1
So now we have the following RAID array configuration:
[root@client1 krishna]# cat /proc/mdstat
Personalities : [raid1] [raid0]
md2 : active raid0 md1[1] md0[0]
409344 blocks 64k chunks
md1 : active raid1 etherd/e1.1[1] etherd/e1.0[0]
204736 blocks [2/2] [UU]
md0 : active raid1 etherd/e0.1[1] etherd/e0.0[0]
204736 blocks [2/2] [UU]
unused devices:
Convert the RAID into an LVM physical volume:
[root@client1 krishna]# pvcreate /dev/md2
Create an extendible LVM volume group:
[root@client1 krishna]# vgcreate volgrp /dev/md2
[root@client1 krishna]# pvs
PV VG Fmt Attr PSize PFree
/dev/md2 volgrp lvm2 a- 396.00M 396.00M
[root@client1 krishna]# vgs
VG #PV #LV #SN Attr VSize VFree
volgrp 1 0 0 wz--n- 396.00M 396.00M
Create a logical volume using all the space:
[root@client1 aoedoc]# lvcreate -L 300M -n logicalvol volgrp
[root@client1 aoedoc]# lvs
LV VG Attr LSize
logicalvol volgrp -wi-a- 300.00M
So finally we have created our logical volume of 300 MB.
Create a filesystem:
[root@client1 aoedoc]# mkfs.ext3 /dev/volgrp/logicalvol
[root@client1 aoedoc]# mount -t ext3 /dev/volgrp/logicalvol /mnt/
[root@client1 aoedoc]# cd /mnt/
[root@client1 mnt]# mkdir aoe
[root@client1 mnt]# touch aoefile
[root@client1 mnt]# ls
aoe aoefile lost+found
So finally your SAN is ready. If you want to resize the volume group or add more disks, first unmount the filesystem and then use vgextend and resize2fs; a sketch of this is shown below.
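A minimal sketch of growing the volume later (it assumes a new device has been exported and shows up as e2.0, and that the logical volume carries the ext3 filesystem created above; in a real setup you would mirror the new device first, as was done for the others):
[root@client1 ]# umount /mnt
[root@client1 ]# pvcreate /dev/etherd/e2.0
[root@client1 ]# vgextend volgrp /dev/etherd/e2.0
[root@client1 ]# lvextend -L +100M /dev/volgrp/logicalvol
[root@client1 ]# e2fsck -f /dev/volgrp/logicalvol
[root@client1 ]# resize2fs /dev/volgrp/logicalvol
[root@client1 ]# mount /dev/volgrp/logicalvol /mnt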
A Xen-AoE system takes maximum advantage of high-performance commodity computing power and efficient IP-SAN technology to deliver a virtualization solution that provides high availability and performance at the best possible value and low cost. Xen-AoE is a cluster server architecture which makes more efficient use of CPU, disk and memory resources than traditional servers, with zero single points of failure, ensuring high availability and greater server maintainability. It does this by combining commodity hardware, AoE-exported storage and Xen live migration, as demonstrated in the following lab.
We have three Xen machines booted on the Xen kernel of FC7. If your PC does not have a Xen kernel, please select the virtualization option during installation and install it with the Xen kernel. The objective of this lab is to export block devices from node1 to node2 and create a DomU (guest OS) on node2. This DomU will be created on the block devices exported from node1, it will contain a minimal Debian Linux, and one can easily live-migrate this DomU from the second node (node2) to the third node (node3).
The first step of the Xen-AoE setup is to export two 4 GB block devices from node1 to node2:
[root@node1 Desktop]# dd if=/dev/zero of=newfile1.img bs=1M count=4000
[root@node1 Desktop]# dd if=/dev/zero of=newfile2.img bs=1M count=4000
[root@node1 Desktop]# losetup /dev/loop0 newfile1.img
[root@node1 Desktop]# losetup /dev/loop1 newfile2.img
[root@node1 Desktop]# losetup -a
[root@node1 Desktop]# pvcreate /dev/loop0
[root@node1 Desktop]# pvcreate /dev/loop1
[root@node1 Desktop]# pvs
[root@node1 Desktop]# vgcreate vgnode1.0 /dev/loop0
[root@node1 Desktop]# vgcreate vgnode1.1 /dev/loop1
[root@node1 Desktop]# vgs
[root@node1 Desktop]# lvcreate -L3G -n lvnode1.0 vgnode1.0
[root@node1 Desktop]# lvcreate -L3G -n lvnode1.1 vgnode1.1
[root@node1 Desktop]# lvs
[root@node1 vblade]# ./vbladed 0 0 eth0 lvnode1.0
[root@node1 vblade]# ./vbladed 0 1 eth0 lvnode1.1
[root@node2 Desktop]# modprobe aoe
[root@node2 Desktop]# aoe-stat
[root@node2 Desktop]# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/etherd/e0.0 /dev/etherd/e0.1
[root@node2 Desktop]# cat /proc/mdstat
[root@node2 Desktop]# mdadm --detail /dev/md0
[root@node2 Desktop]# mkfs.ext3 /dev/md0
Debootstrap is used to create a Debian base system from scratch, without requiring the availability of dpkg or apt. It does this by downloading .deb files from a mirror site and carefully unpacking them into a directory which can eventually be chrooted into.
[root@node2 Desktop]# yum install debootstrap
[root@node2 Desktop]# mkdir /debian
[root@node2 Desktop]# mount /dev/md0 /debian
[root@node2 Desktop]# debootstrap --arch i386 etch /debian
Some extra, slightly tricky steps are needed to get a console on your DomU. If you skip them you may not get a terminal: mingetty will keep respawning and you will never see a login screen. So be careful to get the console working; you can search the net for available solutions, for example:
https://lists.linux-foundation.org/pipermail/virtualization/2007-January/005478.html
[root@node2 Desktop]# cp /etc/passwd /etc/shadow /debian/etc
[root@node2 Desktop]# echo '/dev/sda1 / ext3 defaults 1 1' > /debian/etc/fstab
[root@node2 Desktop]# sed -i -e 's/^\([2-6]:\)/#\1/' /debian/etc/inittab
Create a new initrd (initial RAM disk) without SCSI modules, and then use it to boot the guest Linux operating system. If you skip this step the boot may fail; in my case I got a "kernel panic" message. The initrd can be created with the following command:
[root@node2 Desktop]# mkinitrd --omit-scsi-modules --with=xennet --with=xenblk --preload=xenblk initrd-$(uname -r)-no-scsi.img $(uname -r)
The DomU configuration file (saved, for example, as /etc/xen/debian) looks as follows:
kernel = '/boot/vmlinuz-2.6.21-7.fc7xen'
ramdisk = '/boot/initrd-2.6.21-7.fc7xen-no-scsi.img'
memory = '238'
name = 'debian'
root = '/dev/sda1 ro'
extra = 'console=tty1 xencons=tty1'
dhcp = 'dhcp'
vif = [ '' ]
disk = [ 'phy:md0,sda1,w' ]
on_poweroff = 'destroy'
on_reboot = 'restart'
on_crash = 'restart'
Now everything is in place to create the new Debian guest OS.
[root@node2 /]# umount /dev/md0
[root@node2 /]# xm create -c debian
[root@node2 /]# xm list
So finally your DomU has been created with Debian Linux. If everything went fine you will get a terminal; otherwise search the net for the console workarounds mentioned above.
Now it's time to migrate this virtual machine to node3, so run the following command:
[root@node2 ]# xm migrate --live debian node3
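After the migration completes, the domain should show up in the domain list on node3 and disappear from node2 (this assumes xend on both nodes is configured to accept relocation requests):
[root@node3 ]# xm list
[root@node2 ]# xm list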
Fiber Channel and iSCSI customers are often looking for more than just storage. AoE is a simple network protocol whose description is only eleven pages long, yet it provides enough structure to build flexible, simple and scalable storage solutions from inexpensive hardware such as commodity disks and Gigabit Ethernet switches. So if you want a small SAN with no extra features on a low budget, AoE is a better choice than iSCSI or Fiber Channel. There has always been a huge demand for low-cost, flexible storage, and it is not easy to grow the number of drives in either Fiber Channel or iSCSI based SANs; the ATA over Ethernet (AoE) protocol solves this issue to a large extent. However, Fiber Channel solutions remain the fastest of the three, while iSCSI solutions are still the most reliable.
Sectors: max. number of sectors per request.