Build a simple nginx high-availability setup with corosync + pacemaker, then mount an NFS export onto nginx's web root directory.
Since nginx was installed from source, it has to be wrapped in a systemd unit before pacemaker can manage it. So first, write an nginx unit file:
[root@centosa system]# cat /etc/systemd/system/nginx.service
[Unit]
Description=nginx
After=network.target

[Service]
Type=forking
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/usr/local/nginx/sbin/nginx -s reload
ExecStop=/usr/local/nginx/sbin/nginx -s quit
PrivateTmp=true

[Install]
WantedBy=multi-user.target
Then stop the nginx instance that is already running and start it again with systemctl start nginx.
Do this on both nodes, and also set it to start at boot: systemctl enable nginx
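Spelled out, that step looks roughly like this on both nodes (the daemon-reload and the quit of the old instance are my assumption; only start/enable are stated above):

# pick up the new unit file
systemctl daemon-reload
# stop the nginx instance that was started outside systemd
/usr/local/nginx/sbin/nginx -s quit
# start and enable it through systemd instead
systemctl start nginx
systemctl enable nginx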
As before, crm is used here to set up the nginx high availability.
First stop nginx, then let the cluster bring it up.
1. Add the nginx resource:
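Roughly (run on both nodes, assuming nginx is currently running under systemd):

# make sure only pacemaker starts nginx from now on
systemctl stop nginx
# open the crmsh configure shell for the steps below
crm configure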
crm(live)configure# primitive nginx systemd:nginx
crm(live)configure# verify
crm(live)configure# commit
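After the commit, the resource should show up as Started on one of the nodes; either of these one-shot checks works (standard crmsh / pacemaker commands, output omitted here):

crm status
crm_mon -1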
2. Add the NFS mount resource, then colocate it with nginx and set the start order:
crm(live)configure# primitive nfs-server ocf:heartbeat:Filesystem params device=192.168.40.145:/nfsdata directory=/usr/local/nginx/html/ fstype=nfs op start timeout=100 op stop timeout=100
crm(live)configure# verify
crm(live)configure# colocation nginx_with_nfs-server inf: nginx nfs-server
crm(live)configure# order nginx_after_nfs-server Mandatory: nfs-server nginx
crm(live)configure# verify
crm(live)configure# commit
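Before committing the Filesystem resource it is worth confirming that each node can actually mount the export by hand (this assumes nfs-utils is installed on both nodes; /mnt is just a throwaway mount point):

# run on each node, then unmount again
mount -t nfs 192.168.40.145:/nfsdata /mnt
ls /mnt
umount /mnt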
3. Add a VIP resource, colocate it with nginx, and set the start order:
crm(live)configure# primitive vip ocf:heartbeat:IPaddr params ip=192.168.40.100 op monitor interval=20 timeout=20 on-fail=restart
crm(live)configure# colocation vip_with_nginx inf: nginx vip
crm(live)configure# order vip_after_nginx Mandatory: nginx vip
crm(live)configure# verify
crm(live)configure# commit
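Once committed, the IPaddr agent adds 192.168.40.100 on the active node; a quick way to confirm it, on whichever node currently holds the resources:

ip addr show | grep 192.168.40.100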
4. View the configuration:
crm(live)configure# show
node 1: centosa \
        attributes standby=on
node 2: centosb \
        attributes standby=off
primitive nfs-server Filesystem \
        params device="192.168.40.145:/nfsdata" directory="/usr/local/nginx/html/" fstype=nfs \
        op start timeout=100 interval=0 \
        op stop timeout=100 interval=0
primitive nginx systemd:nginx
primitive vip IPaddr \
        params ip=192.168.40.100 \
        op monitor interval=20 timeout=20 on-fail=restart
order nginx_after_nfs-server Mandatory: nfs-server nginx
colocation nginx_with_nfs-server inf: nginx nfs-server
order vip_after_nginx Mandatory: nginx vip
colocation vip_with_nginx inf: nginx vip
property cib-bootstrap-options: \
        have-watchdog=false \
        dc-version=1.1.16-12.el7_4.4-94ff4df \
        cluster-infrastructure=corosync \
        cluster-name=mycluster \
        stonith-enabled=false \
        default-action-timeout=100s
5. Check the status:
crm(live)# status
Stack: corosync
Current DC: centosb (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Wed Oct 25 23:12:30 2017
Last change: Wed Oct 25 23:12:25 2017 by root via cibadmin on centosa

2 nodes configured
3 resources configured

Online: [ centosa centosb ]

Full list of resources:

 nginx          (systemd:nginx):                Started centosa
 nfs-server     (ocf::heartbeat:Filesystem):    Started centosa
 vip            (ocf::heartbeat:IPaddr):        Started centosa
To summarize the startup order (mount NFS first, then start nginx, then bring up the VIP):
nfs-server > nginx > vip
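As a side note, the same colocation and ordering could also be expressed more compactly with a resource group, which starts its members in the listed order and keeps them on the same node. This is just an alternative sketch (the group name webservice is made up), not what was configured above:

crm(live)configure# group webservice nfs-server nginx vip
crm(live)configure# verify
crm(live)configure# commit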
Check the mount:
[root@centosa ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/cl-root       18G  3.6G   15G  20% /
devtmpfs                 478M     0  478M   0% /dev
tmpfs                    489M   39M  450M   8% /dev/shm
tmpfs                    489M  6.7M  482M   2% /run
tmpfs                    489M     0  489M   0% /sys/fs/cgroup
/dev/sda1               1014M  168M  847M  17% /boot
tmpfs                     98M     0   98M   0% /run/user/0
192.168.40.145:/nfsdata   17G  3.6G   14G  21% /usr/local/nginx/html
The share is mounted right at nginx's web root directory.
Now go to the NFS server and look at the shared files:
[root@centos1 nfsdata]# ls
index.htm  index.html
[root@centos1 nfsdata]# cat index.html
hh
Access the VIP directly:
[root@centos1 nfsdata]# curl http://192.168.40.100
hh
Now put the active node into standby and keep refreshing status: the resources stop in the configured order and then start on the other node in the same order:
crm(live)# node standby
crm(live)# status
Stack: corosync
Current DC: centosb (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Wed Oct 25 23:20:50 2017
Last change: Wed Oct 25 23:20:39 2017 by root via crm_attribute on centosa

2 nodes configured
3 resources configured

Node centosa: standby
Online: [ centosb ]

Full list of resources:

 nginx          (systemd:nginx):                Started centosb
 nfs-server     (ocf::heartbeat:Filesystem):    Started centosb
 vip            (ocf::heartbeat:IPaddr):        Started centosb
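To watch the failover from a client's point of view at the same time, a simple loop against the VIP is enough (my own helper, run from any machine that can reach 192.168.40.100):

while true; do curl -s --max-time 1 http://192.168.40.100; sleep 1; done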
Check the mount on the other node:
[root@centosb ~]# df -h
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/cl-root       18G  4.4G   14G  25% /
devtmpfs                 226M     0  226M   0% /dev
tmpfs                    237M   54M  183M  23% /dev/shm
tmpfs                    237M   13M  224M   6% /run
tmpfs                    237M     0  237M   0% /sys/fs/cgroup
/dev/sda1               1014M  173M  841M  18% /boot
tmpfs                     48M     0   48M   0% /run/user/0
192.168.40.145:/nfsdata   17G  3.6G   14G  21% /usr/local/nginx/html
Access the VIP:
[root@centos1 nfsdata]# curl http://192.168.40.100
hh
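When the test is finished, the standby node can be brought back with crmsh (whether the resources move back depends on stickiness and location constraints, neither of which is set here):

crm node online centosa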
Done.