Implementing highly available load balancing with the SaltStack automation tool and Keepalived

Building on the previous experiment, "Implementing load balancing with the SaltStack automation tool", in which server3 was configured to load-balance httpd on server4 and nginx on server5, we continue and add high availability:
(All host IPs in this article are on the 172.25.17.0/24 network, with hostnames matching the IPs; for example, 172.25.17.3 is server3. All salt and python packages must be downloaded separately and added to your yum repository.)
Environment: RHEL 6.5
Configuration:
server3: 172.25.17.3  services: keepalived, salt-master, haproxy
server4: 172.25.17.4  services: httpd, salt-minion
server5: 172.25.17.5  services: nginx, salt-minion
server6: 172.25.17.6  services: keepalived, salt-minion, haproxy
Overall approach:

On top of the existing httpd (server4) and nginx (server5) load balancing, add a new minion, server6. On server3, an install.sls state is created and pushed to server6 to install keepalived there. The keepalived configuration file and init script are then sent from server6 back to server3, where a second state, service.sls, handles the keepalived configuration. A pillar file defines per-host variables keyed on hostname, and those variables are applied to the keepalived configuration file on server3 through a Jinja template. Finally, top.sls assigns the appropriate state files to each host, and pushing it rolls out the highly available load balancer.

Implementation

The /srv directory structure on server3:

[root@server3 srv]# tree .
.
├── pillar
│   ├── keepalived
│   │   └── install.sls
│   ├── top.sls
│   └── web
│       └── install.sls
└── salt
    ├── files
    ├── _grains
    │   └── my_grains.py
    ├── haproxy
    │   ├── files
    │   │   └── haproxy.cfg
    │   └── install.sls
    ├── httpd
    │   ├── files
    │   │   └── httpd.conf
    │   ├── install.sls
    │   └── lib.sls
    ├── keepalived
    │   ├── files
    │   │   ├── keepalived
    │   │   ├── keepalived-2.0.6.tar.gz
    │   │   └── keepalived.conf
    │   ├── install.sls
    │   └── service.sls
    ├── nginx
    │   ├── files
    │   │   ├── nginx
    │   │   ├── nginx-1.14.0.tar.gz
    │   │   └── nginx.conf
    │   ├── install.sls
    │   ├── make.sls
    │   ├── nginx.sls
    │   └── service.sls
    └── top.sls

1. To implement high availability with keepalived, first set up a new salt-minion host, server6, and point its master at 172.25.17.3 (see the earlier article for the detailed steps). Also add the LoadBalancer channel to server6's yum configuration, otherwise keepalived cannot be installed:

[rhel-source]
name=Red Hat Enterprise Linux $releasever - $basearch - Source
baseurl=http://172.25.17.250/source6.5
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-redhat-release

[saltstack]
name=saltstack
baseurl=http://172.25.17.250/rhel6
gpgcheck=0

[LoadBalancer]   # add the LoadBalancer channel
name=LoadBalancer
baseurl=http://172.25.17.250/source6.5/LoadBalancer
gpgcheck=0
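
With the repository in place, accept server6's key on the master and verify connectivity (salt-key and test.ping are standard Salt commands; this assumes server6's key is still pending acceptance):

[root@server3 ~]# salt-key -A          # accept the pending minion key from server6
[root@server3 ~]# salt server6 test.ping
server6:
    True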

2. On server3, create the keepalived directory and its files subdirectory under /srv/salt, and put the keepalived source tarball there:

[root@server3 nginx]# cd /srv/salt/
[root@server3 salt]# mkdir keepalived/files -p
[root@server3 salt]# ls keepalived/files/
keepalived-2.0.6.tar.gz

Enter the keepalived directory and create the state file install.sls:

[root@server3 salt]# cd keepalived/
[root@server3 keepalived]# vim install.sls

Its contents, which build and install keepalived on server6:

include:
  - nginx.make    # pull in make.sls, which installs the build dependencies

kp-install:
  file.managed:
    - name: /mnt/keepalived-2.0.6.tar.gz
    - source: salt://keepalived/files/keepalived-2.0.6.tar.gz
  cmd.run:
    - name: cd /mnt && tar zxf keepalived-2.0.6.tar.gz && cd keepalived-2.0.6 && ./configure --prefix=/usr/local/keepalived --with-init=SYSV &> /dev/null && make &> /dev/null && make install &> /dev/null
    - creates: /usr/local/keepalived

The contents of make.sls:

make-depends:
  pkg.installed:
    - pkgs:
      - pcre-devel
      - openssl-devel
      - gcc
      - mailx
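
Before pushing for real, you can preview what the state would change; test=True is Salt's built-in dry-run flag:

[root@server3 keepalived]# salt server6 state.sls keepalived.install test=True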

Push install.sls to server6 to install keepalived:

[root@server3 keepalived]# salt server6 state.sls keepalived.install
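
If the run succeeds, the install prefix should exist on server6; a quick check through Salt's cmd.run module (the listing should show the installed tree, e.g. etc/ and sbin/):

[root@server3 keepalived]# salt server6 cmd.run 'ls /usr/local/keepalived'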

3. Once keepalived is installed on server6, send its configuration file to /srv/salt/keepalived/files on server3:

[root@server6 init.d]# cd /usr/local/keepalived/etc/keepalived/
[root@server6 keepalived]# ls
keepalived.conf  samples
[root@server6 keepalived]# scp keepalived.conf server3:/srv/salt/keepalived/files/

Send the keepalived init script to the same directory as well:

[root@server6 yum.repos.d]# cd /usr/local/keepalived/etc/rc.d/init.d/
[root@server6 init.d]# scp keepalived server3:/srv/salt/keepalived/files
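
As an aside, Salt can pull files from a minion back to the master without scp: the cp.push function does this, provided file_recv: True is set in the master config. The file lands under the master's cache directory rather than /srv, so it still needs to be moved into /srv/salt/keepalived/files afterwards:

[root@server6 init.d]# salt-call cp.push /usr/local/keepalived/etc/keepalived/keepalived.conf
# arrives on the master under:
# /var/cache/salt/master/minions/server6/files/usr/local/keepalived/etc/keepalived/keepalived.conf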

4. On server3, create the state file service.sls in the /srv/salt/keepalived directory:

[root@server3 keepalived]# vim service.sls

Its contents:

include:
  - keepalived.install   # pull in the install.sls state created above

/etc/keepalived/keepalived.conf:
  file.managed:
    - source: salt://keepalived/files/keepalived.conf
    - template: jinja    # render through the Jinja template engine
    - context:
      STATE: {{ pillar['state'] }}         # pillar variables, defined in step 5 (they can also be defined first and referenced here afterwards)
      VRID: {{ pillar['vrid'] }}
      PRIORITY: {{ pillar['priority'] }}

kp-service:
  file.managed:
    - name: /etc/init.d/keepalived
    - source: salt://keepalived/files/keepalived
    - mode: 755
  service.running:
    - name: keepalived
    - reload: True
    - watch:
      - file: /etc/keepalived/keepalived.conf

5. Go to /srv/pillar, create a keepalived directory, copy install.sls over from the web directory, and edit it:

[root@server3 keepalived]# cd /srv/pillar/
[root@server3 pillar]# ls
top.sls  web
[root@server3 pillar]# mkdir keepalived
[root@server3 pillar]# cd keepalived/
[root@server3 keepalived]# cp ../web/install.sls .
[root@server3 keepalived]# vim install.sls 

Edit install.sls to define the variables:

{% if grains['fqdn'] == 'server3' %}
state: MASTER     # variables for the master node
vrid: 17
priority: 100
{% elif grains['fqdn'] == 'server6' %}
state: BACKUP     # variables for the backup node
vrid: 17
priority: 50
{% endif %}
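
These values only reach the minions once /srv/pillar/top.sls includes the new pillar file. A minimal sketch of the top file (the targets here are an assumption; adjust them to match your setup), followed by a pillar refresh and a check with pillar.items:

base:
  '*':
    - web.install
    - keepalived.install

[root@server3 pillar]# salt '*' saltutil.refresh_pillar
[root@server3 pillar]# salt server3 pillar.items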

6. On server3, edit the keepalived configuration file:

[root@server3 keepalived]# vim files/keepalived.conf 

Modify it as follows:

! Configuration File for keepalived

global_defs {
   notification_email {
     root@localhost
   }
   notification_email_from keepalived@localhost
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id LVS_DEVEL
   vrrp_skip_check_adv_addr
   #vrrp_strict
   vrrp_garp_interval 0
   vrrp_gna_interval 0
}

vrrp_instance VI_1 {
    state {{ STATE }}   # filled in from the Jinja context
    interface eth0
    virtual_router_id {{ VRID }}
    priority {{ PRIORITY }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        172.25.17.100    # the VIP
    }
}
Delete everything after this block.
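
With steps 5 and 6 in place, each node should see its own values; pillar.item is a standard Salt function, and server3 should report MASTER/17/100 while server6 reports BACKUP/17/50:

[root@server3 keepalived]# salt -L 'server3,server6' pillar.item state vrid priority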

7. Edit top.sls in the /srv/salt directory:

[root@server3 salt]# vim top.sls 

Contents:

base:
  'server3':       # each host gets its own set of states
    - haproxy.install
    - keepalived.service
  'server6':
    - haproxy.install
    - keepalived.service
  'server4':
    - httpd.install
  'server5':
    - nginx.service

Then apply the highstate to all minions:

[root@server3 salt]# salt '*' state.highstate
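
Once the highstate completes, it is worth confirming that keepalived is actually running on both HA nodes (service.status is a standard Salt execution function):

[root@server3 salt]# salt -L 'server3,server6' service.status keepalived
server3:
    True
server6:
    True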

Testing:

Access the VIP to verify load balancing:
[Screenshots 1 and 2: requests to the VIP are answered by the backend servers in turn]
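
The same test can be driven from the command line on any client in the 172.25.17.0/24 network (a sketch; it assumes each backend serves a page identifying itself, so with round-robin balancing consecutive requests should alternate between the httpd and nginx responses):

[root@client ~]# curl 172.25.17.100     # answered by server4 (httpd)
[root@client ~]# curl 172.25.17.100     # answered by server5 (nginx)
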
High-availability test:
Stop the keepalived service on server3:

[root@server3 salt]# /etc/init.d/keepalived stop
Stopping keepalived:                                       [  OK  ]

Load balancing still works: with the service stopped on server3, server6 has taken over, and the VIP is now attached to server6:

[root@server6 sysconfig]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:f2:65:82 brd ff:ff:ff:ff:ff:ff
    inet 172.25.17.6/24 brd 172.25.17.255 scope global eth0
    inet 172.25.17.100/32 scope global eth0   # the VIP is now on server6
    inet6 fe80::5054:ff:fef2:6582/64 scope link 
       valid_lft forever preferred_lft forever
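
If keepalived is started again on server3, the higher-priority MASTER preempts the BACKUP by default (VRRP preemption is enabled unless nopreempt is configured), and the VIP should move back; something like:

[root@server3 salt]# /etc/init.d/keepalived start
Starting keepalived:                                       [  OK  ]
[root@server3 salt]# ip addr show eth0 | grep 172.25.17.100
    inet 172.25.17.100/32 scope global eth0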
