Ansible in Practice: Nginx High-Availability Proxy for an LNMP WordPress Stack
author: JevonWei
Copyright notice: original work
blog: http://119.23.52.191/
—
Lab environment: an Nginx reverse proxy sits at the front. Static resources pass through a cache server on their way to the backend web cluster, while dynamic resources go to the backend directly; the static/dynamic split can be handled either by the Nginx proxy or by Varnish. The web servers hand dynamic requests to PHP, and the dynamic data is stored in a MySQL relational database configured with master-slave replication. To verify that the whole architecture works, WordPress is deployed on the web servers. To avoid a single point of failure, the front-end Nginx proxies use keepalived for high availability, which improves the resilience of the service.
Possible extensions: to raise availability further, the web cluster can be split into separate static and dynamic groups to get a true static/dynamic separation, with a Varnish cluster caching the static resources for faster access. The front-end proxy could also be replaced with HAProxy or LVS, and a backup scheme could be added for the MySQL databases.
For the Varnish static/dynamic separation, see http://www.cnblogs.com/JevonWei/p/7499417.html
Network topology
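The original topology diagram is not reproduced here; the plain-text sketch below is reconstructed from the host list that follows.
                        clients
                           |
        VIPs 172.16.252.100 / 172.16.252.10 (keepalived)
             /                               \
  nginx1.danran.com                   nginx2.danran.com
  Nginx proxy :80 + Varnish :6081     Nginx proxy :80 + Varnish :6081
  172.16.252.207                      172.16.252.103
             \                               /
  web1.danran.com                     web2.danran.com
  Nginx :8080 + PHP-FPM               Nginx :8080 + PHP-FPM
  MySQL master                        MySQL slave
  172.16.252.184                      172.16.252.67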
Host environment
Ansible 172.16.252.82
Nginx_A proxy 172.16.252.207
Nginx_B proxy 172.16.252.103
Keepalived_A 172.16.252.207
Keepalived_B 172.16.252.103
Nginx+PHP_A 172.16.252.184
Nginx+PHP_B 172.16.252.67
Mysql_Master 172.16.252.184
Mysql_Slave 172.16.252.67
Role placement on hosts
Nginx_A and Keepalived_A run on nginx1.danran.com
Nginx_B and Keepalived_B run on nginx2.danran.com
Nginx+PHP_A and Mysql_Master run on web1.danran.com
Nginx+PHP_B and Mysql_Slave run on web2.danran.com
Lab preparation
All nodes keep their clocks synchronized
Hostnames resolve so the nodes can reach each other by name
Nodes connect to each other over SSH with key-based authentication
Time synchronization
[root@ansible ~]# ntpdate 172.16.0.1
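The clocks need syncing on every node, not only on the Ansible host; once key-based SSH is in place (set up below), the same command can be pushed to all nodes with an ad-hoc run, for example:
[root@ansible ~]# ansible all -m command -a 'ntpdate 172.16.0.1'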
Hostname resolution between nodes
Edit the /etc/hosts file as below, or use DNS resolution instead
[root@ansible ~]# vim /etc/hosts
172.16.252.184 web1.danran.com
172.16.252.67 web2.danran.com
172.16.252.82 ansible.danran.com
172.16.252.207 nginx1.danran.com
172.16.252.103 nginx2.danran.com
[root@ansible ~]# scp /etc/hosts nginx1.danran.com:/etc/
[root@ansible ~]# scp /etc/hosts nginx2.danran.com:/etc/
[root@ansible ~]# scp /etc/hosts web1.danran.com:/etc/
[root@ansible ~]# scp /etc/hosts web2.danran.com:/etc/
Key-based SSH between nodes
[root@ansible ~]# ssh-keygen -t rsa -P ""
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
8e:bb:44:d7:25:df:1b:3e:9b:fa:22:15:b5:6b:e4:19 root@ansible
The key's randomart image is:
+--[ RSA 2048]----+
| |
| . |
| . .. . |
| . +..E |
| . S . .+o+ |
| . + ..=o |
| o . . .+ |
| . . . . + |
| o. ..++ |
+-----------------+
[root@ansible ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
[root@ansible ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
[root@ansible ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
[root@ansible ~]# ssh-copy-id -i .ssh/id_rsa.pub [email protected]
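To confirm that key-based access works for every node, the ping module gives a quick round-trip test:
[root@ansible ~]# ansible all -m ping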
Ansible playbook
[root@ansible ~]# vim ansible.yml
- hosts: websrvs
  remote_user: root
  roles:
    - nginx_web
- hosts: proxy
  remote_user: root
  roles:
    - nginx_proxy
- hosts: keepalive
  remote_user: root
  roles:
    - keepalive
- hosts: varnish
  remote_user: root
  roles:
    - varnish
- hosts: php-fpm
  remote_user: root
  roles:
    - php-fpm
- hosts: mysql
  remote_user: root
  roles:
    - mariadb
- hosts: websrvs
  remote_user: root
  roles:
    - wordpress
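Before touching the live hosts, the playbook can be checked for syntax errors and previewed in check mode, for example:
[root@ansible ~]# ansible-playbook --syntax-check ansible.yml
[root@ansible ~]# ansible-playbook --check ansible.yml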
Ansible inventory file
[root@ansible ~]# vim /etc/ansible/hosts
[websrvs]
172.16.252.184
172.16.252.67
[proxy]
172.16.252.207
172.16.252.103
[keepalive]
172.16.252.207 start1=MASTER start2=BACKUP priority1=100 priority2=90
172.16.252.103 start1=BACKUP start2=MASTER priority1=90 priority2=100
[varnish]
172.16.252.207
172.16.252.103
[php-fpm]
172.16.252.184
172.16.252.67
[mysql]
172.16.252.184 serverid=1 log="log_bin = master-log"
172.16.252.67 serverid=2 log="relay-log = master-log"
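To confirm that each group expands to the intended hosts, list them before running the playbook, e.g.:
[root@ansible ~]# ansible websrvs --list-hosts
[root@ansible ~]# ansible keepalive --list-hosts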
Defining the roles
keepalive
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir keepalive/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim keepalive/tasks/main.yml
- name: install keepalived
  yum: name=keepalived state=latest
- name: install conf
  template: src=keepalived.j2 dest=/etc/keepalived/keepalived.conf
  tags: conf
  notify: restart keepalived
- name: start keepalived
  service: name=keepalived state=started
[root@ansible roles]# vim keepalive/handlers/main.yml
- name: restart keepalived
  service: name=keepalived state=restarted
[root@ansible roles]# vim keepalive/templates/keepalived.j2
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server 127.0.0.1
    smtp_connect_timeout 30
    router_id keepaliveA
    vrrp_mcast_group4 224.103.5.5
}

vrrp_instance VI_A {
    state {{ start1 }}
    interface {{ ansible_default_ipv4.alias }}
    virtual_router_id 51
    priority {{ priority1 }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass qr8hQHuL
    }
    virtual_ipaddress {
        172.16.252.100/32
    }
}

vrrp_instance VI_B {
    state {{ start2 }}
    interface {{ ansible_default_ipv4.alias }}
    virtual_router_id 52
    priority {{ priority2 }}
    advert_int 1
    authentication {
        auth_type PASS
        auth_pass eHTQgK0n
    }
    virtual_ipaddress {
        172.16.252.10/32
    }
}
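After this role runs, each proxy node should hold one of the two virtual IPs. A quick check on the nodes (the interface is whatever ansible_default_ipv4.alias resolves to on your hosts):
[root@nginx1 ~]# ip addr show | grep -E '172.16.252.100|172.16.252.10'
[root@nginx2 ~]# ip addr show | grep -E '172.16.252.100|172.16.252.10'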
nginx_web
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir nginx_web/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim nginx_web/tasks/main.yml
- name: install nginx
  yum: name=nginx state=latest
  when: ansible_os_family == "RedHat"
- name: install conf
  template: src=vhost1.conf.j2 dest=/etc/nginx/conf.d/vhost1.conf
  tags: conf
  notify: restart nginx
- name: install site home directory
  file: path={{ ngxroot }} state=directory
- name: install index page
  copy: src=index.html dest={{ ngxroot }}/
- name: start nginx
  service: name=nginx state=started
[root@ansible roles]# vim nginx_web/handlers/main.yml
- name: restart nginx
  service: name=nginx state=restarted
[root@ansible roles]# vim nginx_web/vars/main.yml
ngxroot: /blog
[root@ansible roles]# vim nginx_web/templates/vhost1.conf.j2
server {
    listen 8080;
    root "/blog/wordpress";
    index index.php index.html;

    location ~ .*\.(php|php5)?$ {
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        include fastcgi.conf;
    }
}
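Once this role has been applied, each backend can be probed directly on port 8080; before the wordpress role runs, a 404 is expected because the vhost root points at /blog/wordpress, which does not exist yet:
[root@ansible ~]# curl -I http://172.16.252.184:8080/
[root@ansible ~]# curl -I http://172.16.252.67:8080/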
nginx_proxy
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir nginx_proxy/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim nginx_proxy/tasks/main.yml
- name: install nginx
  yum: name=nginx state=latest
  when: ansible_os_family == "RedHat"
- name: install conf
  template: src=proxy.conf.j2 dest=/etc/nginx/conf.d/vhost1.conf
  tags: conf
  notify: restart nginx
- name: install nginx.conf
  copy: src=nginx.conf dest=/etc/nginx/nginx.conf
- name: start nginx
  service: name=nginx state=started
[root@ansible roles]# vim nginx_proxy/handlers/main.yml
- name: restart nginx
  service: name=nginx state=restarted
[root@ansible roles]# vim nginx_proxy/templates/proxy.conf.j2
upstream websrv {
    server 172.16.252.207:6081;
    server 172.16.252.103:6081;
}
server {
    listen 80 default_server;
    server_name www.jevon.com;

    location / {
        proxy_pass http://websrv/;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
    }
}
[root@ansible roles]# vim nginx_proxy/files/nginx.conf \\ remove the built-in default server designation so that the newly defined virtual host becomes the default host
server {
    listen 80;
}
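With the proxy configuration in place, a request to port 80 on either proxy node should be forwarded to the websrv upstream (and through Varnish on port 6081 once that role has run), for example:
[root@ansible ~]# curl -I http://172.16.252.207/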
varnish
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir varnish/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim varnish/tasks/main.yml
- name: install varnish
  yum: name=varnish state=latest
- name: install conf
  copy: src=default.vcl dest=/etc/varnish/
  tags: varconf
  notify: restart varnish
- name: start varnish
  service: name=varnish state=started
[root@ansible roles]# vim varnish/handlers/main.yml
- name: restart varnish
  service: name=varnish state=restarted
[root@ansible roles]# vim varnish/files/default.vcl
vcl 4.0;
import directors;

backend web1 {
    .host = "172.16.252.184";
    .port = "8080";
}
backend web2 {
    .host = "172.16.252.67";
    .port = "8080";
}

sub vcl_init {
    new websrv = directors.round_robin();
    websrv.add_backend(web1);
    websrv.add_backend(web2);
}

sub vcl_purge {
    return (synth(200, "Purge Finished"));
}

acl purges {
    "172.16.252.110";
    "127.0.0.0"/8;
}

sub vcl_recv {
    if (req.method == "PURGE") {
        if (client.ip !~ purges) {
            return (synth(403, "Purging not allowed for " + client.ip));
        }
        return (purge);
    }
    # both branches currently point at the same director; see the post linked above for a full static/dynamic split
    if (req.url ~ "(?i)\.(jpg|jpeg|png|gif)$") {
        set req.backend_hint = websrv.backend();
    } else {
        set req.backend_hint = websrv.backend();
    }
    if (req.restarts == 0) {
        if (req.http.X-Forwarded-For) {
            set req.http.X-Forwarded-For = req.http.X-Forwarded-For + "," + client.ip;
        } else {
            set req.http.X-Forwarded-For = client.ip;
        }
    }
}

sub vcl_backend_response {
    unset beresp.http.X-Powered-By;
    if (bereq.url ~ "\.(css|js|png|gif|jp(e?)g|swf|ico|txt|eot|svg|woff)") {
        unset beresp.http.cookie;
        set beresp.http.cache-control = "public, max-age=3600";
    }
    if (beresp.status != 200 && beresp.status != 404) {
        set beresp.uncacheable = true;
        set beresp.ttl = 120s;
        return (deliver);
    }
    set beresp.ttl = 1h;
    set beresp.grace = 30s;
    return (deliver);
}

sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Cache = "Hit Via " + server.ip;
    } else {
        set resp.http.X-Cache = "Miss from " + server.ip;
    }
}
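The X-Cache header set in vcl_deliver makes hits and misses visible, and a cached object can be invalidated from an address allowed by the purges ACL (127.0.0.0/8 covers the Varnish host itself), for example:
[root@ansible ~]# curl -sI http://172.16.252.207:6081/ | grep X-Cache
[root@nginx1 ~]# curl -X PURGE http://127.0.0.1:6081/index.html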
php-fpm
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir php-fpm/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim php-fpm/tasks/main.yml
- name: install {{ item }} package
  yum: name={{ item }} state=latest
  with_items:
    - php-fpm
    - php-mysql
- name: start php-fpm
  service: name=php-fpm state=started enabled=yes
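To confirm that php-fpm is listening on the 127.0.0.1:9000 address referenced by fastcgi_pass in vhost1.conf.j2, an ad-hoc check can be run against the php-fpm group, e.g.:
[root@ansible ~]# ansible php-fpm -m shell -a 'ss -tnl | grep 9000'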
mariadb
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir mariadb/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim mariadb/tasks/main.yml
- name: install mariadb
  yum: name=mariadb-server state=latest
- name: install conf
  template: src=server.j2 dest=/etc/my.cnf.d/server.cnf
  tags: conf
  notify: restart mariadb
- name: start mariadb
  service: name=mariadb state=started enabled=yes
- name: grant replication user on master
  shell: /usr/bin/mysql -e "GRANT REPLICATION SLAVE, REPLICATION CLIENT ON *.* TO 'repluser'@'172.16.%.%' IDENTIFIED BY 'replpass';"
  when: ansible_hostname == "web1"
- name: flush privileges on master
  shell: /usr/bin/mysql -e "flush privileges;"
  when: ansible_hostname == "web1"
- name: configure replication on slave
  shell: /usr/bin/mysql -e "CHANGE MASTER TO MASTER_HOST='172.16.252.184', MASTER_USER='repluser', MASTER_PASSWORD='replpass', MASTER_LOG_FILE='master-log.000003', MASTER_LOG_POS=245;"
  when: ansible_hostname == "web2"
- name: start slave
  shell: /usr/bin/mysql -e "start slave;"
  when: ansible_hostname == "web2"
- name: create wordpress database
  shell: /usr/bin/mysql -e "create database blog;"
- name: grant wordpress privileges
  shell: /usr/bin/mysql -e "grant all on blog.* to 'blog'@'localhost' identified by 'blog';"
[root@ansible roles]# vim mariadb/handlers/main.yml
- name: restart mariadb
  service: name=mariadb state=restarted
[root@ansible roles]# vim mariadb/templates/server.j2
[mysqld]
server-id = {{ serverid }}
{{ log }}
innodb_file_per_table = ON
skip_name_resolve = ON
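After this role runs, replication can be verified on both nodes; on the slave, Slave_IO_Running and Slave_SQL_Running should both show Yes:
[root@web1 ~]# mysql -e 'SHOW MASTER STATUS;'
[root@web2 ~]# mysql -e 'SHOW SLAVE STATUS\G'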
wordpress
[root@ansible ~]# cd /etc/ansible/roles/
[root@ansible roles]# mkdir wordpress/{files,templates,tasks,handlers,vars,meta,defaults} -pv
[root@ansible roles]# vim wordpress/tasks/main.yml
- name: install unzip
  yum: name=unzip state=latest
- name: copy file
  copy: src=wordpress-4.8.1-zh_CN.zip dest=/blog
- name: command unzip
  command: /usr/bin/unzip -o /blog/wordpress-4.8.1-zh_CN.zip -d /blog
- name: copy conf
  copy: src=wp-config.php dest=/blog/wordpress/
- name: mv conf
  command: mv /blog/wordpress/wp-config-sample.php /blog/wordpress/wp-config.php
- name: set database name in wp-config.php
  command: sed -ri 's/database_name_here/blog/' /blog/wordpress/wp-config.php
- name: set database user in wp-config.php
  command: sed -ri 's/username_here/blog/' /blog/wordpress/wp-config.php
- name: set database password in wp-config.php
  command: sed -ri 's/password_here/blog/' /blog/wordpress/wp-config.php
[root@ansible roles]# ls wordpress/files/
wordpress-4.8.1-zh_CN.zip
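A quick sanity check on one of the web hosts confirms that the database settings were substituted into wp-config.php:
[root@web1 ~]# grep -E 'DB_NAME|DB_USER|DB_PASSWORD' /blog/wordpress/wp-config.php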
Run the playbook
[root@ansible ~]# ansible-playbook ansible.yml
.....
.....
PLAY RECAP *********************************************************************
172.16.252.103 : ok=15 changed=4 unreachable=0 failed=0
172.16.252.184 : ok=20 changed=3 unreachable=0 failed=0
172.16.252.207 : ok=14 changed=2 unreachable=0 failed=0
172.16.252.67 : ok=20 changed=3 unreachable=0 failed=0
Access test
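For example, browsing or curling the keepalived virtual IP should return the WordPress installation page, served through the Nginx proxy, Varnish, and the web backends:
[root@ansible ~]# curl -I http://172.16.252.100/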