Implementing haproxy+keepalived load balancing and high availability with SaltStack

7. Setting up the keepalived service

Lab environment:

  Salt role   Host (minion ID)          Host IP        Service 1   Service 2
  master      test1 (www.westos.org)    172.25.1.11    nginx       -
  minion      test2 (test2)             172.25.1.12    httpd       -
  minion      test3 (test3)             172.25.1.13    haproxy     keepalived (master)
  minion      test4 (test4)             172.25.1.14    haproxy     keepalived (backup)
Note: test3 and test4 need to install the haproxy high-availability software, so the high-availability yum repository must be configured on them; see the yum repo configuration at the beginning of this article.
Alternatively, just copy test1's yum repo file over to them:
[root@test1 ~]# scp /etc/yum.repos.d/rhel-source.repo root@test3:/etc/yum.repos.d/
[root@test1 ~]# scp /etc/yum.repos.d/rhel-source.repo root@test4:/etc/yum.repos.d/
First, add host test4 as a minion:
[root@test4 ~]# yum install -y salt-minion
[root@test4 ~]# vim /etc/salt/minion

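Point the new minion at the master. Assuming the same setup as the other minions, the relevant line in /etc/salt/minion is:

master: 172.25.1.11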

[root@test4 ~]# /etc/init.d/salt-minion start

[root@test1 ~]# salt-key -A

[root@test1 ~]# salt-key -L
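The expected key list looks like this (a sketch; the minion IDs follow the table above):

Accepted Keys:
test2
test3
test4
www.westos.org
Denied Keys:
Unaccepted Keys:
Rejected Keys: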

[root@test1 salt]# ls
apache  haproxy  nginx  pkgs  top.sls  users
[root@test1 salt]# cd pkgs/                 //this directory holds the package states needed for source builds, so they don't have to be written out repeatedly
[root@test1 pkgs]# ls
make.sls
[root@test1 pkgs]# cat make.sls
make-gcc:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - openssl-devel
      - gcc
[root@test1 pkgs]# cd ..
[root@test1 salt]# ls
apache  haproxy  nginx  pkgs  top.sls  users
[root@test1 salt]# mkdir keepalived                //create the keepalived directory
[root@test1 salt]# cd keepalived/
[root@test1 keepalived]# mkdir files         //holds the source tarball and the configuration files
[root@test1 files]# cd
[root@test1 ~]# ls                //note: the source tarball was downloaded to /root in advance
keepalived-2.0.6.tar.gz
[root@test1 ~]# mv keepalived-2.0.6.tar.gz /srv/salt/keepalived/files/                   //move it into the files directory
[root@test1 ~]# cd -
/srv/salt/keepalived/files
[root@test1 files]# ls
keepalived-2.0.6.tar.gz
[root@test1 files]# cd ..
[root@test1 keepalived]# vim install.sls                      //write the state that builds keepalived from source

include:
  - pkgs.make

keepalived-install:
  file.managed:
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 &&  ./configure --prefix=/usr/local/keepalived --with-init=SYSV &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/keepalived



[root@test1 keepalived]# salt test4 state.sls keepalived.install              //push the state to test4 in one step

Check the result on test4:

[root@test4 ~]# cd /mnt/
[root@test4 mnt]# ls
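Assuming the state ran cleanly, the tarball has been pushed and unpacked:

keepalived-2.0.6  keepalived-2.0.6.tar.gz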

Now it's time to fetch the configuration files. keepalived needs two of them, the init script keepalived and the main config keepalived.conf, so both are copied back to test1:

[root@test4 mnt]# cd /usr/local/keepalived/etc/rc.d/init.d
[root@test4 init.d]# ls
keepalived
[root@test4 init.d]# scp keepalived root@test1:/srv/salt/keepalived/files/
[root@test4 init.d]# cd /usr/local/keepalived/etc/keepalived
[root@test4 keepalived]# ls
keepalived.conf  samples
[root@test4 keepalived]# scp keepalived.conf root@test1:/srv/salt/keepalived/files/

[root@test1 keepalived]# cd files/                  //back on test1, confirm the files have arrived
[root@test1 files]# ls
keepalived  keepalived-2.0.6.tar.gz  keepalived.conf

[root@test1 files]# cd ..
[root@test1 keepalived]# vim install.sls

include:
  - pkgs.make

keepalived-install:
  file.managed:
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 &&  ./configure --prefix=/usr/local/keepalived --with-init=SYSV &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/keepalived

/etc/keepalived:
  file.directory:
    - mode: 755

/etc/sysconfig/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/etc/sysconfig/keepalived

/sbin/keepalived:
  file.symlink:
    - target: /usr/local/keepalived/sbin/keepalived

The keepalived installation state is now complete.
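As a quick sanity check, re-push the state and confirm the symlinks landed on test4 (a sketch, output abbreviated):

[root@test1 keepalived]# salt test4 state.sls keepalived.install
[root@test4 ~]# ls -l /sbin/keepalived /etc/sysconfig/keepalived
lrwxrwxrwx 1 root root ... /etc/sysconfig/keepalived -> /usr/local/keepalived/etc/sysconfig/keepalived
lrwxrwxrwx 1 root root ... /sbin/keepalived -> /usr/local/keepalived/sbin/keepalived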

A new wrinkle appears: test3 and test4 both need keepalived installed, with test3 running as master and test4 as backup.

Parts of the configuration therefore have to become variables, which is where the Pillar and Jinja modules come in.

That completes keepalived's install configuration; next up is service.sls.

[root@test1 salt]# cd /srv/salt/keepalived/files/

[root@test1 files]# vim keepalived.conf //with Jinja, turn the state and priority parameters in keepalived.conf into variables

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived.localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state {{ STATE }}
    interface eth0
    virtual_router_id 21
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.1.100/24        //the virtual IP is 172.25.1.100/24
    }
}

[root@test1 files]# cd ..
[root@test1 keepalived]# vim service.sls

include:
  - keepalived.install

/etc/keepalived/keepalived.conf:
  file.managed:
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja                          //to use Jinja, declare "- template: jinja" in the state file
    - context:
        STATE: {{ pillar['state'] }}           //assign the two variables; unlike before, the values here come from Pillar
        PRIORITY: {{ pillar['priority'] }}

kp-service:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - mode: 755
  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: /etc/keepalived/keepalived.conf

Enable the pillar service on the master side (if it is already enabled, skip this step):

[root@test1 keepalived]# vim /etc/salt/master
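The standard edit is to uncomment the stock pillar_roots setting in /etc/salt/master:

pillar_roots:
  base:
    - /srv/pillar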

[root@test1 keepalived]# /etc/init.d/salt-master restart

[root@test1 keepalived]# cd /srv/pillar/

[root@test1 pillar]# mkdir keepalived/

[root@test1 pillar]# cd keepalived/

[root@test1 keepalived]# vim install.sls

{% if grains['fqdn'] == 'test3' %}
state: MASTER
priority: 100
{% elif grains['fqdn'] == 'test4' %}
state: BACKUP
priority: 50
{% endif %}

[root@test1 keepalived]# cd ..

[root@test1 pillar]# vim top.sls

base:
  '*':
    - keepalived.install
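To confirm the pillar data reaches the right minions, refresh and query it (a sketch; output abbreviated):

[root@test1 pillar]# salt '*' saltutil.refresh_pillar
[root@test1 pillar]# salt test3 pillar.items
test3:
    ----------
    priority:
        100
    state:
        MASTER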

[root@test1 pillar]# cd ../salt/

test1 and test2 can be matched with either the grains or the pillar module; grains is used here:

[root@test1 salt]# vim /etc/salt/minion
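Based on the 'roles:nginx' matcher in the final top.sls below (an inference, since the edit itself isn't shown), the grain added on test1 would be:

grains:
  roles:
    - nginx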

[root@test1 salt]# /etc/init.d/salt-minion restart

[root@test2 ~]# vim /etc/salt/minion
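And on test2, to match 'roles:apache':

grains:
  roles:
    - apache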

[root@test2 salt]# /etc/init.d/salt-minion restart

[root@test1 salt]# vim top.sls

base:
  'test3':
    - haproxy.service
    - keepalived.service
  'test4':
    - haproxy.service
    - keepalived.service
  'roles:apache':
    - match: grain                //hosts matched by grain must have the roles grain configured (see the minion edits above)
    - apache.service
  'roles:nginx':
    - match: grain
    - nginx.service

8. Push everything with one highstate to reach the final goal

[root@test1 salt]# salt '*' state.highstate

9. Check that the services are running on test1 through test4:
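One way to check all four hosts from the master (a sketch; the exact process list will vary):

[root@test1 salt]# salt '*' cmd.run 'ps ax | grep -E "nginx|httpd|haproxy|keepalived" | grep -v grep'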

At this point haproxy+keepalived load balancing and high availability are fully in place.

10. Testing

First, write a test web page on test1:

[root@test1 apache]# cd /usr/local/nginx/html/
[root@test1 html]# vim index.html
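Any content that identifies test1 will do; for example (this page body is an assumption):

www.westos.org - nginx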

test2's index.html was already pushed over during the highstate, so it can be tested as-is.

To be sure, take a look:

[root@test2 html]# pwd
/var/www/html
[root@test2 html]# ls
index.html
[root@test2 html]# cat index.html

westos

Test 1: load balancing

Run curl against the VIP from the physical host; the responses alternate between the backends, confirming that load balancing works:
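A sketch of the expected round-robin responses (the physical host's prompt and test1's page body are assumptions):

[root@foundation1 ~]# curl 172.25.1.100
www.westos.org - nginx
[root@foundation1 ~]# curl 172.25.1.100
westos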

Test 2: high availability

Look at the IP addresses on test3 and test4 (keepalived):
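Checking the interfaces shows the VIP on test3 only (a sketch, output abbreviated):

[root@test3 haproxy]# ip addr show eth0
    ...
    inet 172.25.1.13/24 brd 172.25.1.255 scope global eth0
    inet 172.25.1.100/24 scope global secondary eth0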

test3 holds the virtual IP, because test3's keepalived runs as master while test4's runs as backup.

What happens if we now stop keepalived on test3?

[root@test3 haproxy]# /etc/init.d/keepalived stop

test4 takes over the virtual IP, and service continues uninterrupted.
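Verifying on test4 (a sketch, output abbreviated):

[root@test4 salt]# ip addr show eth0
    ...
    inet 172.25.1.14/24 brd 172.25.1.255 scope global eth0
    inet 172.25.1.100/24 scope global secondary eth0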

If keepalived is stopped on both test3 and test4, the service becomes unavailable:

[root@test4 salt]# /etc/init.d/keepalived stop

With testing done, restore the setup by restarting keepalived on test3 and test4.

Test 3: haproxy's backend health checks

With access working normally, stop the apache service on test2:

[root@test2 html]# /etc/init.d/httpd stop

Now access the VIP again:
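Expected responses (a sketch; only test1's page comes back now):

[root@foundation1 ~]# curl 172.25.1.100
www.westos.org - nginx
[root@foundation1 ~]# curl 172.25.1.100
www.westos.org - nginx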

Only test1 is reachable now, and no errors are returned: haproxy health-checks its backends and takes the failed test2 out of rotation rather than continuing to balance to it, which keeps client access working normally.
