Project: LVS + Keepalived + Nginx + Tomcat high-availability cluster
Project topology:
Experiment approach:
Key points:
1. Overview: in this architecture, keepalived provides hot standby for the LVS load schedulers. The cluster contains at least two hot-standby load schedulers and two web node servers.
2. Key point: with plain LVS, forwarding rules for the ip_vs kernel module are written with the ipvsadm tool. With keepalived + LVS, ipvsadm is not needed: the virtual_server/real_server settings in the keepalived configuration file replace the manually written rules.
3. Keepalived node health checks: keepalived can check a port on each real server and act on the result; the notify_down option specifies what to run when a real server fails its check.
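The TCP health check behaves roughly like the bash sketch below (an illustration only, with a hypothetical check_port helper; keepalived itself implements this in C): probe the real server's port with a timeout, and treat a failed connection as the trigger for the notify_down action.

```shell
#!/bin/bash
# Rough sketch of keepalived's TCP_CHECK logic for one real server (illustration only).
# check_port HOST PORT prints UP when a TCP connection succeeds within 3s, else DOWN.
check_port() {
    local host=$1 port=$2
    if timeout 3 bash -c "echo >/dev/tcp/${host}/${port}" 2>/dev/null; then
        echo UP
    else
        echo DOWN   # this is where keepalived would run the notify_down script
    fi
}
check_port 127.0.0.1 1   # port 1 is almost always closed, so this prints DOWN
```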
Experiment steps:
Install and configure the two back-end Tomcat servers (both are configured identically; only one is shown);
Install and configure the two nginx servers (both are configured identically; only one is shown);
Install keepalived and LVS on the two front-end load schedulers (both are configured identically; only one is shown);
Configure and start keepalived on the master scheduler;
Configure and start keepalived on the backup scheduler;
Configure the LVS-DR network parameters on the two nginx servers (both are configured identically; only one is shown);
Test client access to the cluster;
Install and configure the mysql service on the back-end storage host;
Install and configure the nfs service on the back-end storage host, upload the dynamic and static projects, and share them over nfs;
Mount and serve the nfs-shared static web content on the two nginx servers (both are configured identically; only one is shown);
Mount and serve the nfs-shared dynamic project (a supermarket management system written in Java) on the two Tomcat servers (both are configured identically; only one is shown);
Configure the back-end mysql database;
Test client access to the static web content;
Test client access to the dynamic site;
Simulate a failure of nginx1, test client access, and check the alert e-mail;
Simulate a failure of the master scheduler and test client access;
Install and configure the two back-end Tomcat servers (both are configured identically; only one is shown);
[root@tm1 ~]# ls
apache-tomcat-9.0.10.tar.gz jdk-8u171-linux-x64.tar.gz
[root@tm1 ~]# rpm -qa |grep java
[root@tm1 ~]# tar zxvf jdk-8u171-linux-x64.tar.gz
[root@tm1 ~]# mv jdk1.8.0_171/ /usr/local/java
[root@tm1 ~]# ls /usr/local/java
bin db javafx-src.zip lib man release THIRDPARTYLICENSEREADME-JAVAFX.txt
COPYRIGHT include jre LICENSE README.html src.zip THIRDPARTYLICENSEREADME.txt
[root@tm1 ~]# cat <<'END' >>/etc/profile
export JAVA_HOME=/usr/local/java
export PATH=$PATH:/usr/local/java/bin
END
[root@tm1 ~]# source /etc/profile
[root@tm1 ~]# java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
[root@tm1 ~]# tar zxvf apache-tomcat-9.0.10.tar.gz
[root@tm1 ~]# mv apache-tomcat-9.0.10 /usr/local/tomcat
[root@tm1 ~]# ls /usr/local/tomcat
bin conf lib LICENSE logs NOTICE RELEASE-NOTES RUNNING.txt temp webapps work
[root@tm1 ~]# /usr/local/tomcat/bin/startup.sh ##start apache-tomcat
[root@tm1 ~]# netstat -utpln |grep 8080
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN 14758/java
Install and configure the two nginx servers (both are configured identically; only one is shown);
[root@ng1 ~]# yum -y install pcre-devel zlib-devel
[root@ng1 ~]# useradd -M -s /sbin/nologin nginx
[root@ng1 ~]# tar zxvf nginx-1.12.2.tar.gz -C /usr/src/
[root@ng1 ~]# cd /usr/src/nginx-1.12.2/
[root@ng1 nginx-1.12.2]# ./configure --prefix=/usr/local/nginx --user=nginx --group=nginx --with-http_stub_status_module
[root@ng1 nginx-1.12.2]# make && make install
[root@ng1 nginx-1.12.2]# cd
[root@ng1 ~]# ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
[root@ng1 ~]# vi /usr/lib/systemd/system/nginx.service
[Unit]
Description=nginxapi
After=network.target
[Service]
Type=forking
PIDFile=/usr/local/nginx/logs/nginx.pid
ExecStart=/usr/local/nginx/sbin/nginx
ExecReload=/bin/kill -s HUP $MAINPID
ExecStop=/bin/kill -s QUIT $MAINPID
PrivateTmp=false
[Install]
WantedBy=multi-user.target
[root@ng1 ~]# vi /usr/local/nginx/conf/nginx.conf
upstream tomserver {                ##inside the http block
    server 192.168.100.105:8080 weight=1;
    server 192.168.100.106:8080 weight=1;
}
location ~ \.(asp|aspx|php|jsp|do|js|css|png|jpg)$ {    ##inside the server block
    proxy_pass http://tomserver;
}
[root@ng1 ~]# systemctl start nginx
[root@ng1 ~]# systemctl enable nginx
[root@ng1 ~]# netstat -utpln |grep nginx
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3538/nginx: master
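With two equal weight=1 entries, the tomserver upstream alternates requests between the two Tomcat back ends. The resulting distribution can be sketched in bash (an illustration only, not nginx internals):

```shell
#!/bin/bash
# Simulate equal-weight round-robin over the two Tomcat backends (illustration only).
backends=("192.168.100.105:8080" "192.168.100.106:8080")
declare -A hits
for ((i = 0; i < 10; i++)); do
    pick=${backends[i % ${#backends[@]}]}   # equal weights reduce to simple alternation
    hits[$pick]=$(( ${hits[$pick]:-0} + 1 ))
done
echo "${hits[${backends[0]}]} ${hits[${backends[1]}]}"   # prints "5 5"
```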
Install keepalived and LVS on the two front-end load schedulers (both are configured identically; only one is shown);
[root@ld1 ~]# yum -y install kernel-devel openssl-devel popt-devel
[root@ld1 ~]# ls keepalived-1.2.13.tar.gz
keepalived-1.2.13.tar.gz
[root@ld1 ~]# tar zxvf keepalived-1.2.13.tar.gz -C /usr/src/
[root@ld1 ~]# cd /usr/src/keepalived-1.2.13/
[root@ld1 keepalived-1.2.13]# ./configure --prefix=/usr/local/keepalived
[root@ld1 keepalived-1.2.13]# make && make install
[root@ld1 keepalived-1.2.13]# cd
[root@ld1 ~]# mkdir -p /etc/keepalived
[root@ld1 ~]# cp /usr/local/keepalived/etc/keepalived/keepalived.conf /etc/keepalived/
[root@ld1 ~]# cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
[root@ld1 ~]# cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
[root@ld1 ~]# cp /usr/local/keepalived/sbin/keepalived /usr/sbin/
[root@ld1 ~]# chmod 755 /etc/init.d/keepalived
Configure and start keepalived on the master scheduler;
[root@ld1 ~]# vi /etc/keepalived/keepalived.conf
global_defs {
router_id HA_TEST_R1 ##name of this server
}
vrrp_instance VI_1 { ##define a VRRP hot-standby instance
state MASTER ##MASTER marks the primary server
interface eth0 ##physical interface that carries the VIP
virtual_router_id 1 ##virtual router ID
priority 100 ##priority; a higher number wins the election
advert_int 1 ##advertisement interval in seconds (heartbeat)
authentication { ##authentication settings
auth_type PASS ##authentication type
auth_pass 123456 ##password string
}
virtual_ipaddress {
192.168.100.95 ##floating address (VIP)
}
}
virtual_server 192.168.100.95 80 { ##the VIP address
delay_loop 5 ##check the real servers every 5 seconds
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.100.103 80 { ##web node 1, here nginx1
weight 1
notify_down /etc/keepalived/check.sh ##script run when the real server fails its check
TCP_CHECK {
connect_port 80
connect_timeout 3 ##connection timeout
nb_get_retry 3 ##number of retries
delay_before_retry 4 ##delay between retries
}
}
real_server 192.168.100.104 80 { ##web node 2, here nginx2
weight 1
notify_down /etc/keepalived/check.sh ##script run when the real server fails its check
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 4
}
}
}
[root@ld1 ~]# vi /etc/keepalived/check.sh
#!/bin/bash
echo -e " nginx1(192.168.100.103) or nginx2(192.168.100.104) is down on $(date +%F-%T)" >/root/check_httpd.log
cat /root/check_httpd.log |/usr/local/bin/sendEmail -o message-charset=utf8 -f [email protected] -t [email protected] -s smtp.163.com -u "It's up to it" -xu [email protected] -xp 854365897huhu
:wq
[root@ld1 ~]# chmod +x /etc/keepalived/check.sh
[root@ld1 ~]# modprobe ip_vs ##load the ip_vs kernel module
[root@ld1 ~]# lsmod |grep ip_vs
[root@ld1 ~]# echo "modprobe ip_vs" >>/etc/rc.local
[root@ld1 ~]# chmod +x /etc/rc.local
[root@ld1 ~]# /etc/init.d/keepalived start
Reloading systemd: [ OK ]
Starting keepalived (via systemctl): [ OK ]
[root@ld1 ~]# ip a |grep 192.168.100.95
inet 192.168.100.95/32 scope global eth0
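The VIP lands on ld1 because VRRP elects the instance with the highest priority (100 on the master vs. 99 on the backup). The election rule can be sketched as follows (an illustration with a hypothetical elect_master helper; real VRRP negotiates via multicast advertisements):

```shell
#!/bin/bash
# Sketch of the VRRP election rule: the highest priority becomes MASTER (illustration only).
# elect_master takes "name:priority" pairs and prints the winner's name.
elect_master() {
    printf '%s\n' "$@" | sort -t: -k2,2nr | head -1 | cut -d: -f1
}
elect_master HA_TEST_R1:100 HA_TEST_R2:99   # prints HA_TEST_R1
```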
Configure and start keepalived on the backup scheduler;
[root@ld2 ~]# vi /etc/keepalived/keepalived.conf
global_defs {
router_id HA_TEST_R2 ##name of this server
}
vrrp_instance VI_1 { ##define a VRRP hot-standby instance
state BACKUP ##BACKUP marks the standby server
interface eth0 ##physical interface that carries the VIP
virtual_router_id 1 ##virtual router ID
priority 99 ##priority; a higher number wins the election
advert_int 1 ##advertisement interval in seconds (heartbeat)
authentication { ##authentication settings
auth_type PASS ##authentication type
auth_pass 123456 ##password string
}
virtual_ipaddress {
192.168.100.95 ##floating address (VIP)
}
}
virtual_server 192.168.100.95 80 { ##the VIP address
delay_loop 5 ##check the real servers every 5 seconds
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.100.103 80 { ##web node 1, here nginx1
weight 1
notify_down /etc/keepalived/check.sh ##script run when the real server fails its check
TCP_CHECK {
connect_port 80
connect_timeout 3 ##connection timeout
nb_get_retry 3 ##number of retries
delay_before_retry 4 ##delay between retries
}
}
real_server 192.168.100.104 80 { ##web node 2, here nginx2
weight 1
notify_down /etc/keepalived/check.sh ##script run when the real server fails its check
TCP_CHECK {
connect_port 80
connect_timeout 3
nb_get_retry 3
delay_before_retry 4
}
}
}
[root@ld2 ~]# vi /etc/keepalived/check.sh
#!/bin/bash
echo -e " nginx1(192.168.100.103) or nginx2(192.168.100.104) is down on $(date +%F-%T)" >/root/check_httpd.log
cat /root/check_httpd.log |/usr/local/bin/sendEmail -o message-charset=utf8 -f [email protected] -t [email protected] -s smtp.163.com -u "It's up to it" -xu [email protected] -xp ############
:wq
[root@ld2 ~]# chmod +x /etc/keepalived/check.sh
[root@ld2 ~]# modprobe ip_vs ##load the ip_vs kernel module
[root@ld2 ~]# lsmod |grep ip_vs
[root@ld2 ~]# echo "modprobe ip_vs" >>/etc/rc.local
[root@ld2 ~]# chmod +x /etc/rc.local
[root@ld2 ~]# /etc/init.d/keepalived start
Reloading systemd: [ OK ]
Starting keepalived (via systemctl): [ OK ]
[root@ld2 ~]# ip a |grep 192.168.100.95 ##no output: the backup holds no VIP while the master is alive
Configure the LVS-DR network parameters on the two nginx servers (both are configured identically; only one is shown);
[root@ng1 ~]# cat <<END >/etc/sysconfig/network-scripts/ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.100.95
NETMASK=255.255.255.255
ONBOOT=yes
NAME=lo:0
END
[root@ng1 ~]# systemctl restart network
[root@ng1 ~]# ip a |grep 95
inet 192.168.100.95/32 brd 192.168.100.95 scope global lo:0
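Note: LVS-DR real servers usually also need ARP suppression so they do not answer ARP requests for the VIP configured on lo:0. The original transcript does not show this step, so treat the fragment below as an assumption to verify on your kernel; append it to /etc/sysctl.conf and apply with `sysctl -p`:

```
net.ipv4.conf.all.arp_ignore = 1        # reply to ARP only for addresses on the receiving interface
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2      # use the best local address when sourcing ARP
net.ipv4.conf.lo.arp_announce = 2
```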
Test client access to the cluster;
Access the static web content and check the server logs:
Access the dynamic site and check the server logs:
Install and configure the mysql service on the back-end storage host;
[root@st ~]# yum -y install mariadb-server mysql
[root@st ~]# systemctl start mariadb
[root@st ~]# systemctl enable mariadb
Created symlink from /etc/systemd/system/multi-user.target.wants/mariadb.service to /usr/lib/systemd/system/mariadb.service.
[root@st ~]# mysqladmin -uroot password 123123 ##set the root password to 123123
[root@st ~]# mysql -uroot -p123123
MariaDB [(none)]> exit
Install and configure the nfs service on the back-end storage host, upload the dynamic and static projects, and share them over nfs;
[root@st ~]# for i in rpcbind nfs;do systemctl enable $i; done
[root@st ~]# mkdir /opt/nginx
[root@st ~]# chmod 777 /opt/nginx/
[root@st ~]# echo "this is a beautiful page!!!" >>/opt/nginx/index.html ##prepare the static page for nginx
[root@st ~]# mkdir /opt/tom
[root@st ~]# chmod 777 /opt/tom/
[root@st ~]# ls /opt/tom/ ##the supermarket project source has been uploaded here
WebRoot
[root@st ~]# vi /opt/tom/WebRoot/WEB-INF/classes/database.properties
url=jdbc:mysql://192.168.100.107:3306/smbms?useUnicode=true&characterEncoding=utf-8
user=linuxfan
password=123123
:wq
[root@st ~]# vi /etc/exports
/opt/nginx 192.168.100.0/24(rw,sync,no_root_squash)
/opt/tom 192.168.100.0/24(rw,sync,no_root_squash)
[root@st ~]# systemctl start rpcbind
[root@st ~]# systemctl start nfs
Job for nfs-server.service failed because the control process exited with error code. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
[root@st ~]# kill -HUP `cat /run/gssproxy.pid`
[root@st ~]# systemctl start nfs
[root@st ~]# systemctl enable rpcbind nfs
[root@st ~]# showmount -e 192.168.100.107
Export list for 192.168.100.107:
/opt/tom 192.168.100.0/24
/opt/nginx 192.168.100.0/24
Mount and serve the nfs-shared static web content on the two nginx servers (both are configured identically; only one is shown);
[root@ng1 ~]# yum -y install nfs-utils rpcbind
[root@ng1 ~]# systemctl start rpcbind
[root@ng1 ~]# systemctl start nfs
Job for nfs-server.service failed because the control process exited with error code. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
[root@ng1 ~]# kill -HUP `cat /run/gssproxy.pid`
[root@ng1 ~]# systemctl start nfs
[root@ng1 ~]# systemctl enable rpcbind nfs
[root@ng1 ~]# showmount -e 192.168.100.107
Export list for 192.168.100.107:
/opt/tom 192.168.100.0/24
/opt/nginx 192.168.100.0/24
[root@ng1 ~]# echo "192.168.100.107:/opt/nginx /usr/local/nginx/html/ nfs defaults,_netdev 0 0" >>/etc/fstab
[root@ng1 ~]# mount -a
[root@ng1 ~]# ls /usr/local/nginx/html/
index.html
[root@ng1 ~]# mount |tail -1
192.168.100.107:/opt/nginx on /usr/local/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.103,local_lock=none,addr=192.168.100.107,_netdev)
Mount and serve the nfs-shared dynamic project (a supermarket management system written in Java) on the two Tomcat servers (both are configured identically; only one is shown);
[root@tm1 ~]# yum -y install nfs-utils rpcbind
[root@tm1 ~]# systemctl start rpcbind
[root@tm1 ~]# systemctl start nfs
Job for nfs-server.service failed because the control process exited with error code. See "systemctl status nfs-server.service" and "journalctl -xe" for details.
[root@tm1 ~]# kill -HUP `cat /run/gssproxy.pid`
[root@tm1 ~]# systemctl start nfs
[root@tm1 ~]# systemctl enable rpcbind nfs
[root@tm1 ~]# showmount -e 192.168.100.107
Export list for 192.168.100.107:
/opt/tom 192.168.100.0/24
/opt/nginx 192.168.100.0/24
[root@tm1 ~]# echo "192.168.100.107:/opt/tom /usr/local/tomcat/webapps/ nfs defaults,_netdev 0 0" >>/etc/fstab
[root@tm1 ~]# mount -a
[root@tm1 ~]# ls /usr/local/tomcat/webapps/
WebRoot
[root@tm1 ~]# mount |tail -1
192.168.100.107:/opt/tom on /usr/local/tomcat/webapps type nfs4 (rw,relatime,vers=4.1,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.100.105,local_lock=none,addr=192.168.100.107,_netdev)
Configure the back-end mysql database;
[root@st ~]# ls smbms_db.sql
smbms_db.sql
[root@st ~]# mysql -uroot -p123123 <smbms_db.sql ##import the smbms database
[root@st ~]# mysql -uroot -p123123
MariaDB [(none)]> show databases;
+--------------------+
| Database |
+--------------------+
| information_schema |
| mysql |
| performance_schema |
| smbms |
| test |
+--------------------+
5 rows in set (0.00 sec)
MariaDB [(none)]> grant all on smbms.* to 'linuxfan'@'192.168.100.%' identified by "123123";
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> flush privileges;
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> exit
Bye
Test client access to the static web content;
Test client access to the dynamic site;
If pages cannot be reached after logging in, try restarting Tomcat;
Simulate a failure of nginx1, test client access, and check the alert e-mail;
Simulate a failure of the master scheduler and test client access;