Keep the nova uid and gid consistent across the compute nodes and the controller node.
Check nova's uid and gid with the id nova command:
[root@compute1 ~]# id nova
uid=162(nova) gid=162(nova) groups=162(nova),99(nobody),107(qemu)
If they differ, change them with usermod -u <uid> nova and groupmod -g <gid> nova.
Make sure all nova-related files use the same uid and gid.
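If the IDs on a compute node differ, a minimal sketch of aligning them with the controller (assuming uid/gid 162 as in the output above, and that nova-compute can be stopped briefly):
systemctl stop openstack-nova-compute.service
groupmod -g 162 nova
usermod -u 162 -g 162 nova
# re-own files that may still carry the old uid/gid (typical paths for a packaged install)
chown -R nova:nova /var/lib/nova /var/log/nova
systemctl start openstack-nova-compute.service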
I. Install the libvirt component
On all compute nodes:
yum install libvirt
It should already be installed; install it if not.
Adjust the related configuration on the compute nodes.
Edit the nova configuration file (/etc/nova/nova.conf) and add the following option under the [libvirt] section:
live_migration_flag="VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"
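For reference, a sketch of how the [libvirt] section might look after the change (virt_type=kvm is an assumption; keep whatever values your deployment already uses):
[libvirt]
virt_type = kvm
live_migration_flag = "VIR_MIGRATE_UNDEFINE_SOURCE,VIR_MIGRATE_PEER2PEER,VIR_MIGRATE_LIVE,VIR_MIGRATE_PERSIST_DEST,VIR_MIGRATE_TUNNELLED"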
Restart openstack-nova-compute.service:
systemctl restart openstack-nova-compute.service
vim /etc/sysconfig/libvirtd
Change it to:
LIBVIRTD_CONFIG=/etc/libvirt/libvirtd.conf
LIBVIRTD_ARGS="--listen"
vim /etc/libvirt/libvirtd.conf
Change it to:
listen_tls=0
listen_tcp=1
auth_tcp="none"
Restart libvirtd:
systemctl restart libvirtd.service
Check the listening port:
ss -ntl | grep 16509
LISTEN 0 30 *:16509 *:*
Test:
On compute1:
virsh -c qemu+tcp://compute2/system
On compute2:
virsh -c qemu+tcp://compute1/system
If the connection succeeds without a password prompt, the configuration is correct.
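A non-interactive variant of the same check (run from compute1; an instance list, even an empty one, returned without a password prompt means TCP access works):
virsh -c qemu+tcp://compute2/system list --all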
II. Install and configure the NFS server (192.168.100.10)
Make sure all nodes can ping each other.
vim /etc/hosts
192.168.100.10 controller
192.168.100.20 compute1
192.168.100.30 compute2
1. NFS server on the controller node
yum install -y nfs-utils
Create a shared directory yourself:
mkdir /xxx
or simply use /var/lib/nova/instances directly.
Create and edit the exports file, adding:
vim /etc/exports
/var/lib/nova/instances compute1(rw,sync,fsid=0,no_root_squash)
/var/lib/nova/instances compute2(rw,sync,fsid=0,no_root_squash)
vim /etc/idmapd.conf
Change it to:
Domain = controller   (the hostname of the controller node)
Start the services:
systemctl enable rpcbind.service nfs-server.service
systemctl start rpcbind.service nfs-server.service
systemctl start nfs-lock nfs-idmap
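Optionally verify the exports on the controller before continuing (a quick sanity check, not strictly required):
exportfs -r                # re-read /etc/exports
showmount -e localhost     # should list /var/lib/nova/instances for compute1 and compute2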
2. NFS client on all compute nodes
yum install -y nfs-utils
systemctl enable rpcbind.service
systemctl start rpcbind.service
showmount -e 192.168.100.10
Mount the share:
mount -t nfs 192.168.100.10:/var/lib/nova/instances /var/lib/nova/instances
Fix the ownership:
cd /var/lib/nova/
chown -R nova:nova instances
Test:
Create a file under /var/lib/nova/instances on the NFS server and check whether it appears under /var/lib/nova/instances on the compute nodes.
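A minimal sketch of that test (test_nfs is a throwaway file name):
# on the controller (NFS server)
touch /var/lib/nova/instances/test_nfs
# on compute1 and compute2, the file should be visible
ls -l /var/lib/nova/instances/test_nfs
# clean up afterwards (on the controller)
rm -f /var/lib/nova/instances/test_nfs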
Set up automatic mounting.
Add the following line to /etc/fstab on the compute nodes (192.168.100.10 is the controller node IP):
192.168.100.10:/var/lib/nova/instances /var/lib/nova/instances nfs defaults 0 0
Check the mounted directory with df -h:
192.168.100.10:/var/lib/nova/instances 28G 7.5G 20G 28% /var/lib/nova/instances
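To confirm the fstab entry itself works without rebooting, one option (only if no instances are running on the node yet) is to unmount and remount:
umount /var/lib/nova/instances
mount -a                   # re-mounts everything listed in /etc/fstab
df -h | grep instances     # the NFS share should appear again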
III. Test live migration of an instance from the dashboard
Do this yourself.
IV. Test live migration from the command line
List all instances:
nova list
+--------------------------------------+------+--------+------------+-------------+------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+------+--------+------------+-------------+------------------------+
| 42b59dd6-7615-4132-9cf4-927ca5f44985 | aa | ACTIVE | - | Running | selfservice=172.16.1.3 |
+--------------------------------------+------+--------+------------+-------------+------------------------+
Show the instance to be migrated:
nova show 42b59dd6-7615-4132-9cf4-927ca5f44985
List the available compute nodes:
nova-manage service list
Check the resources on the target node:
nova-manage service describe_resource compute2
Note: run the migration command below on the node where the instance currently resides. On success there is no output at all; otherwise an error such as ERROR (BadRequest): Compute service of compute2 is unavailable at this time. (400) is reported.
nova live-migration 42b59dd6-7615-4132-9cf4-927ca5f44985 compute2
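Once the command returns, one way to confirm the instance actually moved is to check its host attribute again (OS-EXT-SRV-ATTR:host should now report compute2 and the status should return to ACTIVE):
nova show 42b59dd6-7615-4132-9cf4-927ca5f44985 | grep -E "OS-EXT-SRV-ATTR:host|status"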