Deploying a semi-HA glusterized oVirt 3.3 Infrastructure

Time has passed since I last played with oVirt; the ever so "amazing" OpenStack caught my attention, but although it's got the momentum, it's just not there yet in terms of easy deployment (maybe in a few months). So after a few weeks of playing with OpenStack, we're back to oVirt.

There are a few issues I have with oVirt, like the slow, clunky interface, whereas OpenStack has a lovely, simple HTML5 Bootstrap design (although there is talk of redesigning the oVirt UI).

I guess you can compare the two with the classic pets versus farm animals analogy:

oVirt -> Pets -> Sick Pet (VM) -> Heal It -> Return to Health

OpenStack -> Farm Animals -> Sick Animal (VM) -> Replace It

In theory the two platforms should work hand in hand, and that's what Red Hat is currently working towards: many of the new oVirt features take advantage of OpenStack's fast-paced development. However, most people don't have the kind of hardware to deploy both OpenStack and oVirt/RHEV side by side (unless you've got the $$$$ to spend).

Getting a proper OpenStack infrastructure that can withstand node failures becomes expensive and tedious to configure and manage. oVirt, on the other hand, "just works" once you get the initial bits going, and you can do it with minimal hardware.

I did this all on two CentOS 6.4 hosts.

Controller

eth0 (management/gluster): 172.16.0.11
eth1 (VM Data): n/a
eth2 (Public Network): 192.168.0.11

Compute

eth0 (management/gluster): 172.16.0.12
eth1 (VM Data): n/a


yum -y install wget screen nano
yum -y update
yum install http://resources.ovirt.org/releases/ovirt-release-el6-8-1.noarch.rpm -y
yum install http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm -y

nano /etc/hosts
172.16.0.11 hv01 hv01.lab.example.net
172.16.0.12 hv02 hv02.lab.example.net

# Don't install vdsm-gluster here because it seems to fail the install later on
yum install -y glusterfs glusterfs-fuse glusterfs-server

mkfs.xfs -f -i size=512 /dev/mapper/vg_gluster-lv_gluster1
echo "/dev/mapper/vg_gluster-lv_gluster1 /data1  xfs     defaults        1 2" >> /etc/fstab
mkdir -p /data1/
mount -a

curl https://raw.github.com/gluster/glusterfs/master/extras/group-virt.example -o /var/lib/glusterd/groups/virt

gluster volume create DATA replica 2 hv01.lab.example.net:/data1/ hv02.lab.example.net:/data1/
gluster volume start DATA
gluster volume set DATA auth.allow 172.16.0.*
gluster volume set DATA group virt
gluster volume set DATA storage.owner-uid 36
gluster volume set DATA storage.owner-gid 36
# Help to avoid split brain
gluster volume set DATA cluster.quorum-type auto
gluster volume set DATA performance.cache-size 1GB

chown 36:36 /data1

mkdir -p /storage/iso
# NOTE: the STORAGE volume itself (brick /storage/iso) presumably needs to be created and
# started the same way as DATA before the following set commands will work.
gluster volume set STORAGE auth.allow 172.16.0.*
gluster volume set STORAGE storage.owner-uid 36
gluster volume set STORAGE storage.owner-gid 36

# /etc/sysconfig/network-scripts/ifcfg-eth1 - MAKE SURE eth1 is set to "onboot=yes"
DEVICE=eth1
TYPE=Ethernet
ONBOOT=yes
NM_CONTROLLED="no"
BOOTPROTO=none

The above is much the same initial config as in my Getting Started with Multi-Node OpenStack RDO Havana + Gluster Backend + Neutron VLAN post, as the two setups are very similar.
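
One thing worth calling out: the gluster volume create step above only succeeds once both hosts are in the same trusted pool. A minimal sanity check, assuming glusterd is already enabled and running on both hosts, would be something like:

# run on hv01
service glusterd start && chkconfig glusterd on
gluster peer probe hv02.lab.example.net
gluster peer status        # hv02 should show as "Peer in Cluster (Connected)"
gluster volume info DATA   # both bricks should be listed after the volume is created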

Now it's time to install the ovirt-engine. I'm looking forward to the hosted-engine solution, which is being released soon and will allow us to put the engine on a VM within the cluster!

# Engine Only
yum install -y ovirt-engine

# Follow the prompts - I found installing the All In One and putting VDSM on the engine had a few issues, so I do it later.
engine-setup

# I run everything in a NAT environment, so externally I need to use a proxy. However I still can't quite get this to work properly.
engine-config -s SpiceProxyDefault=http://proxy:8080
service ovirt-engine restart
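
A quick sanity check afterwards is to read the value back and confirm the engine came up again:

engine-config -g SpiceProxyDefault   # confirm the proxy setting stuck
service ovirt-engine status          # engine should be running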

Here's a quick list of steps I do to get my environment running:

  • Modify your cluster to include the Gluster Service

  • Install the new hosts (hv01 and hv02). Do one at a time, and when you're installing hv01 (the one with the engine), uncheck "configure iptables".

    • Overwrite your iptables rules for the engine+host in /etc/sysconfig/iptables with this file: https://gist.github.com/andrewklau/7623169/raw/2c967d9870a4523ed0de402329a908b7df23c0b8/ovirt-engine-vdsm-iptables (an illustrative subset of the rules is sketched just after this list).


  • If you get an "install failed", it generally relates to vdsm-gluster. I have had two cases:

    • One time it was installed and the engine complained, so I ran yum -y remove vdsm-gluster and re-ran the install.

    • The other time it wasn't installed and the engine complained, so I ran yum -y install vdsm-gluster (don't ask me why this happens).


  • Now you should have 2 Hosts installed.

  • Remove ovirtmgmt as a VM network, go to the hosts tab and click setup networks. Edit ovirtmgmt and press resync.

    • While you're here you may as well create a few VM networks. I created 10 networks, each on its own VLAN. (I used the same VLAN switch config from my previous post, Mikrotik CRS125-24G-1S-RM with OpenStack Neutron.)

    • Save your network settings from the oVirt UI (this component is a little finicky because it runs connectivity checks so you don't lose access to the host; you may have to try a few times and wait quite a while).
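
For reference, the iptables gist linked above boils down to opening the usual oVirt and gluster ports. The lines below are only an illustrative subset, assuming the default port assignments plus the 50152+ brick range from the glusterd base-port workaround further down; treat the gist itself as the authoritative version:

# illustrative /etc/sysconfig/iptables excerpt - not the full gist
-A INPUT -p tcp --dport 22 -j ACCEPT                     # ssh
-A INPUT -p tcp -m multiport --dports 80,443 -j ACCEPT   # engine web UI/API (engine host only)
-A INPUT -p tcp --dport 54321 -j ACCEPT                  # vdsm
-A INPUT -p tcp --dport 16514 -j ACCEPT                  # libvirt TLS
-A INPUT -p tcp --dport 5900:6923 -j ACCEPT              # SPICE/VNC consoles
-A INPUT -p tcp --dport 49152:49216 -j ACCEPT            # live migration
-A INPUT -p tcp --dport 24007:24008 -j ACCEPT            # gluster management
-A INPUT -p tcp --dport 50152:50252 -j ACCEPT            # gluster bricks (after the base-port change)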


Redundant Gluster Deployment

Now we make the gluster mount point redundant using Keepalived. A floating virtual IP means that if one host goes offline, the other can still serve the gluster volume, so we keep a semi-HA infrastructure (the last piece being ovirt-hosted-engine, which is still in the works).

yum install -y keepalived

cat /dev/null > /etc/keepalived/keepalived.conf
nano /etc/keepalived/keepalived.conf

# Node1 (copy this on HV01)
vrrp_instance VI_1 {
    interface ovirtmgmt
    state MASTER
    virtual_router_id 10
    priority 100   # master 100
    virtual_ipaddress {
        172.16.0.5
    }
}

# Node2 (copy this on HV02)
vrrp_instance VI_1 {
    interface ovirtmgmt
    state BACKUP
    virtual_router_id 10
    priority 99    # master 100
    virtual_ipaddress {
        172.16.0.5
    }
}

service keepalived start
chkconfig keepalived on

# Work-around until libvirtd fixes the port conflict (http://review.gluster.org/#/c/6147/)
# With this workaround, remember to open port 50152 up to however many bricks you'll be using.
# My iptables gist file above is already updated. The oVirt host-deploy script will not apply
# the correct rules, so you need to do it manually!
nano /etc/glusterfs/glusterd.vol
volume management
    type mgmt/glusterd
    option working-directory /var/lib/glusterd
    option transport-type socket,rdma
    option transport.socket.keepalive-time 10
    option transport.socket.keepalive-interval 2
    option transport.socket.read-fail-log off
    option base-port 50152
end-volume

chkconfig glusterd on
service glusterd restart
service glusterfsd restart
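
Before pointing oVirt at the volume, it's worth confirming which host currently holds the floating IP and doing a throwaway mount through it (the /mnt/test path is just an example):

ip addr show ovirtmgmt | grep 172.16.0.5        # should only appear on the current MASTER
mkdir -p /mnt/test
mount -t glusterfs 172.16.0.5:/DATA /mnt/test   # mount the DATA volume through the floating IP
umount /mnt/test
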
  • Go to the datacenter and create a new DATA domain. I used a POSIX datacenter and mounted the volume as glusterfs. I'm eagerly looking forward to RHEL 6.5, which will allow the VMs to access the gluster volumes natively (libgfapi) for a huge performance boost.

  • Create an ISO domain, again using POSIX, and mount it as a gluster volume. You can alternatively choose local storage or NFS.

    • Upload an ISO from the command line (there is talk of finally allowing this to be done through the UI).

wget http://mirror.centos.org/centos/6/isos/x86_64/CentOS-6.4-x86_64-minimal.iso  engine-iso-uploader upload -i ISO CentOS-6.4-x86_64-minimal.iso  
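
If you're unsure of the ISO domain's name, engine-iso-uploader can list the available ISO storage domains first (it will prompt for the engine admin password):

engine-iso-uploader list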

oVirt has so many hidden gems that, 5 months later, I'm still discovering them. Check out all the amazing features like:

  • Snapshots

  • VM Watchdog and HA

  • Vibrant and Active Community

  • Strong integration into OpenStack

  • Option to upgrade to the supported RHEV (which we plan on doing when things take off).

If you don't think oVirt is ready just yet, check out these amazing features which I'm looking forward to:

  • libgfapi in RHEL 6.5 (to be released), which will allow native gluster access to VM images.

  • ovirt-hosted-engine, for a truly HA environment! The engine is hosted as a VM within the cluster infrastructure and gets managed and brought up on a different node if its current one fails.

  • oVirt UI redesign (YES PLEASE! I hate the old clunky UI)

Tinker, configure, play!

