Note: if the VM's virtual disk on vSphere uses a SCSI interface, the virtio modules must be installed into the initramfs and loaded; an IDE disk does not need them. (If the disk interface is IDE and virtio is installed and loaded anyway, the VM will still fail to boot in OpenStack!)
I. Install the virtio modules
/sbin/dracut --force --verbose --add-drivers "virtio virtio_ring virtio_pci" /boot/initramfs-3.10.0-327.el7.x86_64.img 3.10.0-327.el7.x86_64
modprobe virtio
modprobe virtio_pci
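The kernel version and initramfs path are hard-coded above; they can be derived from the running kernel instead. A minimal sketch (`build_dracut_cmd` is a hypothetical helper name, not from the original; it only prints the command so it can be reviewed before running):

```shell
#!/bin/sh
# Build the dracut command for the currently running kernel instead of
# hard-coding 3.10.0-327.el7.x86_64. The helper only prints the command.
build_dracut_cmd() {
    kver="$1"    # e.g. the output of `uname -r`
    printf '/sbin/dracut --force --verbose --add-drivers "virtio virtio_ring virtio_pci" /boot/initramfs-%s.img %s\n' "$kver" "$kver"
}

build_dracut_cmd "$(uname -r)"
# After rebuilding the initramfs, load the drivers into the running kernel too:
# modprobe virtio && modprobe virtio_pci
```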
II. Export an OVF template from vCenter with ovftool
1. Download the bundle from http://ftp.tucha13.net/pub/software/VMware-ovftool-4.1.0/VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
2. Make it executable: chmod a+x VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
3. Run the installer: ./VMware-ovftool-4.1.0-2459827-lin.x86_64.bundle
4. Export the VM as an OVF template:
[root@compute04 5_126]# ovftool --disableVerification --noSSLVerify --powerOffSource vi://user:password@ip/path/to/Resources/05.xxxx/5.126-xxxx1-centos7 5_126.ovf
Opening VI source: vi://administrator%40vc.com@ip/path/to/Resources/05.xxxxxx5.126-xxxxxx1-centos7
Powering off VM: 5.126-1-centos7
Opening OVF target: 5_126.ovf
Writing OVF package: 5_126.ovf
Transfer Completed
Completed successfully
[root@compute04 5_126]# ls
5_126-disk1.vmdk  5_126-disk2.vmdk  5_126.mf  5_126.ovf
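Note in the output that the user name appears as `administrator%40vc.com`: an `@` inside the user name portion of the `vi://` locator must be percent-encoded, or ovftool cannot tell it apart from the user/host separator. A small sketch of that encoding step (`encode_vi_user` is a hypothetical helper; only `@` is handled here, other reserved characters in the user name or password would need encoding too):

```shell
#!/bin/sh
# Percent-encode '@' in a vCenter user name for use inside a vi:// locator.
encode_vi_user() {
    printf '%s' "$1" | sed 's/@/%40/g'
}

USER_ENC=$(encode_vi_user 'administrator@vc.com')
# "vcenter-host" and the inventory path below are placeholders:
echo "vi://${USER_ENC}@vcenter-host/path/to/Resources/05.xxxx/5.126-xxxx1-centos7"
```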
III. Create the volumes
1. Create the system volume
Note: the volume you create must be at least as large as the VM's virtual disk.
# cinder create --display-name 5_126_os 20
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2017-04-01T06:16:33.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 2be0eaee-ca53-4d03-96a4-caae1c011a55 |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | 5_126_os                             |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | f3419d1896284d15af004b1ad6222a9a     |
| replication_status             | disabled                             |
| size                           | 20                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 0e28446136b742a8849c6a54675e6ee8     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+
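`cinder create` takes the size in whole GiB, and the note above requires it to be at least the source disk's virtual size (which `qemu-img info <vmdk>` reports in bytes). A sketch of the round-up arithmetic (`bytes_to_gib_ceil` is a hypothetical helper name):

```shell
#!/bin/sh
# Round a byte count up to whole GiB, the unit `cinder create` expects.
# Feed it the "virtual size ... bytes" value from `qemu-img info disk.vmdk`.
bytes_to_gib_ceil() {
    echo $(( ($1 + 1073741823) / 1073741824 ))
}

bytes_to_gib_ceil 21474836480   # exactly 20 GiB -> 20
bytes_to_gib_ceil 21474836481   # one byte over  -> 21
```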
2. Import the system disk
# cinder set-bootable 2be0eaee-ca53-4d03-96a4-caae1c011a55 true
# cinder list
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
|                  ID                  |   Status  |      Name      | Size | Volume Type | Bootable |             Attached to              |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
| 231b0af3-5a28-4ded-a211-517bbc0c5e41 | available | tomcat_volume1 |  10  |      -      |  false   |                                      |
| 2be0eaee-ca53-4d03-96a4-caae1c011a55 | available | 5_126_os       |  20  |      -      |   true   |                                      |
| 2ea90de4-fad9-4185-af93-6e0580edc846 |   in-use  | test02         |  20  |      -      |  false   | 340a66bf-2c0c-45e3-91c0-7108931a78e3 |
+--------------------------------------+-----------+----------------+------+-------------+----------+--------------------------------------+
[root@compute04 5_126]# rbd -p volumes rm volume-2be0eaee-ca53-4d03-96a4-caae1c011a55
2017-04-01 14:21:17.472336 7f85e4ce17c0 -1 asok(0x410f9c0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.admin.10234.68221936.asok': (2) No such file or directory
Removing image: 100% complete...done.
[root@compute04 5_126]# qemu-img convert -p /path/to/5_126-disk1.vmdk -O rbd rbd:volumes/volume-2be0eaee-ca53-4d03-96a4-caae1c011a55
    (100.00/100%)
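The per-volume import is the same short command sequence every time, so it can be scripted. A sketch under stated assumptions: `run` and `import_disk` are hypothetical helpers I'm introducing, and with `DRY_RUN=1` the commands are only printed, which makes the sequence reviewable without a live Cinder/Ceph cluster; the RBD image backing a Cinder volume is named `volume-<id>` as the session above shows.

```shell
#!/bin/sh
# With DRY_RUN=1 each command is echoed instead of executed.
run() { if [ "${DRY_RUN:-0}" = "1" ]; then echo "$@"; else "$@"; fi; }

import_disk() {
    vol_id="$1"; vmdk="$2"
    # Cinder pre-created an empty RBD image named volume-<id>; remove it,
    # then convert the vmdk into an image with that exact name.
    run rbd -p volumes rm "volume-${vol_id}"
    run qemu-img convert -p "$vmdk" -O rbd "rbd:volumes/volume-${vol_id}"
}

DRY_RUN=1
run cinder set-bootable 2be0eaee-ca53-4d03-96a4-caae1c011a55 true
import_disk 2be0eaee-ca53-4d03-96a4-caae1c011a55 /path/to/5_126-disk1.vmdk
```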
3. Create the data volume
# cinder create --display-name 5_126_data 20
+--------------------------------+--------------------------------------+
|            Property            |                Value                 |
+--------------------------------+--------------------------------------+
| attachments                    | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2017-04-01T06:28:07.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 675d155f-43be-4465-8e48-44a5ec3c12bf |
| metadata                       | {}                                   |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | 5_126_data                           |
| os-vol-host-attr:host          | None                                 |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | f3419d1896284d15af004b1ad6222a9a     |
| replication_status             | disabled                             |
| size                           | 20                                   |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | creating                             |
| updated_at                     | None                                 |
| user_id                        | 0e28446136b742a8849c6a54675e6ee8     |
| volume_type                    | None                                 |
+--------------------------------+--------------------------------------+
[root@compute04 5_126]# rbd -p volumes rm volume-675d155f-43be-4465-8e48-44a5ec3c12bf
2017-04-01 14:29:10.726135 7fe708cf27c0 -1 asok(0x3e0a9c0) AdminSocketConfigObs::init: failed: AdminSocket::bind_and_listen: failed to bind the UNIX domain socket to '/var/run/ceph/guests/ceph-client.admin.10554.65055728.asok': (2) No such file or directory
Removing image: 100% complete...done.

4. Import the data disk

[root@compute04 5_126]# qemu-img convert -p /path/to/5_126/5_126-disk2.vmdk -O rbd rbd:volumes/volume-675d155f-43be-4465-8e48-44a5ec3c12bf
    (100.00/100%)
IV. Boot a VM from the volume
1. Boot a VM from the system volume
# nova boot --flavor 2 boot_vol --boot-volume 2be0eaee-ca53-4d03-96a4-caae1c011a55 --availability-zone nova:compute07 --security-groups default --nic net-id=4df49be9-ace6-413a-9c6b-0ec055056c76
+--------------------------------------+----------------------------------------------------------------------------------+
| Property                             | Value                                                                            |
+--------------------------------------+----------------------------------------------------------------------------------+
| OS-DCF:diskConfig                    | MANUAL                                                                           |
| OS-EXT-AZ:availability_zone          | nova                                                                             |
| OS-EXT-SRV-ATTR:host                 | -                                                                                |
| OS-EXT-SRV-ATTR:hostname             | boot-vol                                                                         |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | -                                                                                |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000073                                                                |
| OS-EXT-SRV-ATTR:kernel_id            |                                                                                  |
| OS-EXT-SRV-ATTR:launch_index         | 0                                                                                |
| OS-EXT-SRV-ATTR:ramdisk_id           |                                                                                  |
| OS-EXT-SRV-ATTR:reservation_id       | r-1ww9vdyg                                                                       |
| OS-EXT-SRV-ATTR:root_device_name     | -                                                                                |
| OS-EXT-SRV-ATTR:user_data            | -                                                                                |
| OS-EXT-STS:power_state               | 0                                                                                |
| OS-EXT-STS:task_state                | scheduling                                                                       |
| OS-EXT-STS:vm_state                  | building                                                                         |
| OS-SRV-USG:launched_at               | -                                                                                |
| OS-SRV-USG:terminated_at             | -                                                                                |
| accessIPv4                           |                                                                                  |
| accessIPv6                           |                                                                                  |
| adminPass                            | Xk4PCWqDzksU                                                                     |
| config_drive                         |                                                                                  |
| created                              | 2017-04-01T06:31:09Z                                                             |
| description                          | -                                                                                |
| flavor                               | m1.small (2)                                                                     |
| hostId                               |                                                                                  |
| host_status                          |                                                                                  |
| id                                   | e9001768-5f7a-4bdb-b4ab-9bb53be6361b                                             |
| image                                | Attempt to boot from volume - no image supplied                                  |
| key_name                             | -                                                                                |
| locked                               | False                                                                            |
| metadata                             | {}                                                                               |
| name                                 | boot_vol                                                                         |
| os-extended-volumes:volumes_attached | [{"id": "2be0eaee-ca53-4d03-96a4-caae1c011a55", "delete_on_termination": false}] |
| progress                             | 0                                                                                |
| security_groups                      | default                                                                          |
| status                               | BUILD                                                                            |
| tenant_id                            | f3419d1896284d15af004b1ad6222a9a                                                 |
| updated                              | 2017-04-01T06:31:10Z                                                             |
| user_id                              | 0e28446136b742a8849c6a54675e6ee8                                                 |
+--------------------------------------+----------------------------------------------------------------------------------+
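The instance comes back in `BUILD`/`scheduling` state and the data volume can only be attached once it settles. A minimal polling sketch, assuming the status is read by some external command: `wait_active` is a hypothetical helper that takes the status-printing command as its arguments, so the real `nova show` call can be slotted in (or stubbed for testing).

```shell
#!/bin/sh
# Poll a status-printing command until it reports ACTIVE (up to 5 tries).
wait_active() {
    for i in 1 2 3 4 5; do
        status=$("$@")
        [ "$status" = "ACTIVE" ] && { echo ACTIVE; return 0; }
        sleep 1
    done
    echo "gave up, last status: $status" >&2
    return 1
}

# Real use might look like this (the awk field index is an assumption about
# the `nova show` table layout, verify against your deployment):
# wait_active sh -c "nova show e9001768-5f7a-4bdb-b4ab-9bb53be6361b | awk '/ status /{print \$4}'"
wait_active echo ACTIVE
```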
2. Attach the data volume
# nova volume-attach e9001768-5f7a-4bdb-b4ab-9bb53be6361b 675d155f-43be-4465-8e48-44a5ec3c12bf
+----------+--------------------------------------+
| Property | Value                                |
+----------+--------------------------------------+
| device   | /dev/vdb                             |
| id       | 675d155f-43be-4465-8e48-44a5ec3c12bf |
| serverId | e9001768-5f7a-4bdb-b4ab-9bb53be6361b |
| volumeId | 675d155f-43be-4465-8e48-44a5ec3c12bf |
+----------+--------------------------------------+
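The `/dev/vdb` in the attach output is the device name the hypervisor requested, but guest device naming can shift, so it is worth confirming inside the guest before touching the disk. A tiny sketch (`check_disk` is a hypothetical helper; it only tests for a block-device node):

```shell
#!/bin/sh
# Confirm the attached volume's device node exists inside the guest.
check_disk() {
    if [ -b "$1" ]; then echo "present"; else echo "absent"; fi
}

# Run inside the guest; should report "present" once the attach settles.
check_disk /dev/vdb
```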
V. Verify the result
# ssh [email protected]
The authenticity of host '10.1.200.105 (10.1.200.105)' can't be established.
ECDSA key fingerprint is 63:ae:8b:0c:4b:3f:92:73:18:d4:47:db:cf:ff:1a:e2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '10.1.200.105' (ECDSA) to the list of known hosts.
[email protected]'s password:
[root@host-10-1-200-105 ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/centos-root   14G  986M   13G   7% /
devtmpfs                 910M     0  910M   0% /dev
tmpfs                    920M     0  920M   0% /dev/shm
tmpfs                    920M  8.4M  912M   1% /run
tmpfs                    920M     0  920M   0% /sys/fs/cgroup
/dev/vdb                  16G   33M   16G   1% /mnt
/dev/vda1                497M  124M  374M  25% /boot
tmpfs                    184M     0  184M   0% /run/user/0
[root@host-10-1-200-105 ~]# cd /tmp/
[root@host-10-1-200-105 tmp]# cd /mnt/
[root@host-10-1-200-105 mnt]# ls
123
[root@host-10-1-200-105 mnt]#