First, make sure your CPU supports hardware virtualization. Run the following command to check the CPU flags; if the output contains the svm flag (AMD-V; on Intel CPUs look for vmx instead), virtualization is supported:
[root@localhost ~]# egrep '(vmx|svm)' --color=always /proc/cpuinfo
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy misalignsse
flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt rdtscp lm 3dnowext 3dnow pni cx16 lahf_lm cmp_legacy svm extapic cr8_legacy misalignsse
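A quick way to turn this into a yes/no check (a small sketch using only standard tools, nothing specific to this particular host):

egrep -c '(vmx|svm)' /proc/cpuinfo

If the count is greater than 0, the CPU exposes AMD-V (svm) or Intel VT-x (vmx) and can run KVM; a result of 0 means hardware virtualization is either unsupported or disabled in the BIOS.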
Install the packages required to run KVM:
yum install kvm kmod-kvm qemu kvm-qemu-img virt-viewer virt-manager libvirt libvirt-python python-virtinst

Alternatively, the following single command also works:

yum groupinstall KVM
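As a quick sanity check after the installation (a sketch assuming an AMD host, matching the svm flag shown above; on Intel hardware load kvm-intel instead of kvm-amd), the KVM modules can be loaded and libvirtd started:

modprobe kvm
modprobe kvm-amd
service libvirtd start
chkconfig libvirtd on
lsmod | grep kvm

The last command should list the kvm and kvm_amd (or kvm_intel) modules.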
In addition, if you later want to use virt-manager to install and manage virtual machines graphically, CentOS 5.6 needs a few configuration changes. Since I selected the GNOME desktop during installation, I wanted to manage this machine graphically through Xmanager 3.0. The procedure is as follows:
1. Add XDMCP support so that the server listens on port 177
Edit /etc/gdm/custom.conf and add Enable=1 under the [xdmcp] section.
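The relevant part of /etc/gdm/custom.conf then looks like this:

[xdmcp]
Enable=1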
2. Disable iptables and SELinux on the server itself
3. Make sure the server boots into the graphical interface, i.e. runs at runlevel 5
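A minimal sketch of the usual CentOS 5 commands for steps 2 and 3 (adapt them to your own security requirements; permanently disabling the firewall and SELinux is only sensible on a trusted internal network):

service iptables stop
chkconfig iptables off
setenforce 0

Then set SELINUX=disabled in /etc/selinux/config so the change survives a reboot, and confirm that /etc/inittab contains the line id:5:initdefault: so the server starts at runlevel 5.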
The changes below adjust CentOS 5.6's PAM authentication modules:
vim /etc/pam.d/login
Comment out the first line so it reads:
#auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so

vim /etc/pam.d/remote
Comment out the first line so it reads:
#auth required pam_securetty.so
Reboot the server, then verify that the port is open with lsof -i:177.
Next, add a br0 bridge device on the server so that the virtual machines can connect directly to the physical network. The steps are as follows:
# cp /etc/sysconfig/network-scripts/ifcfg-eth0 /root/.
# cd /etc/sysconfig/network-scripts/
# cp ifcfg-eth0 ifcfg-br0
1. If your network card is configured with a static IP address, your original network script file should look similar to the following example:
DEVICE=eth0
BOOTPROTO=static
HWADDR=00:14:5E:C2:1E:40
IPADDR=10.10.1.152
NETMASK=255.255.255.0
ONBOOT=yes
The following shows the contents of the network configuration scripts for eth0 and br0. Edit your scripts as shown in the following example.

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
TYPE=Ethernet
HWADDR=00:14:5E:C2:1E:40
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0

/etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
TYPE=Bridge
NM_CONTROLLED=no
BOOTPROTO=static
IPADDR=10.10.1.152
NETMASK=255.255.255.0
ONBOOT=yes
The first listing above is the network script file for the network card (eth0). The pre-existing information about this network card stays the same, but three items are added:
TYPE
The device type.
NM_CONTROLLED=no
Specifies that the card is not controlled by the Network Manager. In order for the bridge to work, only one device can be controlled by the Network Manager.
BRIDGE=br0
Associates this card with the bridge.
The second listing is the network script for the bridge (br0). The following changes are reflected:
DEVICE
The device name.
TYPE
The device type. Bridge is case-sensitive and must be added exactly as represented here with an upper case 'B' and lower case 'ridge'.
NM_CONTROLLED=no
Specifies that the bridge is not controlled by the Network Manager. In order for the bridge to work, only one device can be controlled by the Network Manager.
The other settings are retained from the network card configuration file.
Note: There should not be a hardware address in this file. These values set up the bridge to behave like the network card, with the ifcfg-br0 file acting as an extension of the ifcfg-eth0 file, whose BRIDGE=br0 entry points to the ifcfg-br0 file.
2. If your network card is configured with a dynamic IP address, your original network script file should look similar to the following example:
DEVICE=eth0
BOOTPROTO=dhcp
HWADDR=00:14:5E:C2:1E:40
ONBOOT=yes
The following shows the contents of the configuration scripts for eth0 and br0. Edit your scripts as shown in the following example.

/etc/sysconfig/network-scripts/ifcfg-eth0:

DEVICE=eth0
TYPE=Ethernet
HWADDR=00:14:5E:C2:1E:40
ONBOOT=yes
NM_CONTROLLED=no
BRIDGE=br0

/etc/sysconfig/network-scripts/ifcfg-br0:

DEVICE=br0
TYPE=Bridge
NM_CONTROLLED=no
BOOTPROTO=dhcp
ONBOOT=yes
After editing the files, restart the network service so that the new bridge comes up:

# service network restart
Reload the kernel parameters with the sysctl command:
# sysctl -p
net.ipv4.ip_forward = 0
...
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0
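The bridge-nf values shown above come from /etc/sysctl.conf. If they are missing on your system, the following lines (a sketch of the commonly recommended settings, not something specific to this host) can be appended to /etc/sysctl.conf so that bridged guest traffic is not filtered by the host's iptables:

net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 0
net.bridge.bridge-nf-call-arptables = 0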
You can also see this bridge by running the following command:
brctl show
bridge name     bridge id               STP enabled     interfaces
virbr0          8000.000000000000       yes
br0             8000.000e0cb30550       no              eth0
Next, we can create a CentOS 5.6 virtual machine directly on the server with the graphical virt-manager tool. Since the installation uses the physical machine's own optical drive, it first needs to be mounted:
mount /dev/cdrom /mnt
Remember to put the CentOS 5.6 DVD into the drive first.
Then run virt-manager in a terminal to create the virtual machine; the procedure is graphical and very similar to VMware Server, so it is not described step by step here.
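If you prefer to create the first guest from the command line instead of through virt-manager, a hypothetical virt-install invocation might look like the following (the guest name centos1 and image directory match the clone commands below; the 1024 MB of RAM and 10 GB disk size are just example values):

virt-install --connect qemu:///system --name centos1 --ram 1024 --vcpus 1 \
    -f /datata/kvm/centos1.img -s 10 --cdrom /dev/cdrom \
    --os-type linux --accelerate --vnc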
After the first virtual machine was created successfully, I needed three more for testing. They can be cloned from it by running the following commands in a terminal:
virt-clone --connect=qemu:///system -o centos1 -n centos2 -f /datata/kvm/centos2.img
virt-clone --connect=qemu:///system -o centos1 -n centos3 -f /datata/kvm/centos3.img
virt-clone --connect=qemu:///system -o centos1 -n centos4 -f /datata/kvm/centos4.img
Here -o is the name of the existing (source) virtual machine, -n the name of the new virtual machine, and -f the path of the new virtual machine's disk image.
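The cloned guests can then be started and checked either from virt-manager or with virsh, for example (assuming the guest names used above):

virsh --connect qemu:///system start centos2
virsh --connect qemu:///system start centos3
virsh --connect qemu:///system start centos4
virsh --connect qemu:///system list --all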
Once they are all running, the virt-manager window looks as follows:
After the system has been running stably for a while, we can check the load with the uptime command; it is not particularly high.
[root@kvm centos10m]# uptime
00:43:08 up 2:04, 1 user, load average: 0.92, 0.77, 0.81
With that, four 64-bit CentOS 5.6 guests are running successfully on this server, and the KVM environment is complete.