ansible <host-pattern> [-m module_name] [-a args]
A playbook is a list made up of one or more "plays".
A play maps a predefined group of hosts onto roles, expressed as tasks defined in Ansible.
A playbook lets you customize the configuration and execute the specified steps in order.
Playbooks are described and defined in YAML format.
name attribute: the name of each play
hosts attribute: the managed servers each play targets, using the same host pattern as ad-hoc commands
tasks attribute: the concrete tasks each play performs, expressed as a list
become attribute: add the become-related attributes if privilege escalation is needed
become_user attribute: when escalating, the user to escalate to
remote_user attribute: the user used to connect to the remote node, i.e. the user that executes the operations on the remote server. If not specified, it defaults to the user currently running ansible-playbook.
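A minimal sketch tying these attributes together (the host group web, the user alice, and the nginx package are placeholders):

---
- name: demo play
  hosts: web
  remote_user: alice
  become: yes
  become_user: root
  tasks:
    - name: make sure nginx is installed
      yum: name=nginx state=present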
Dictionary {}: made up of multiple keys and values
List []: made up of multiple elements
Example: using YAML to represent a family (see the sketch below)
Three common data formats: XML, JSON, YAML
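A small sketch of both structures, describing a family in YAML (all names made up):

family:
  father: Tom        # dictionary: key/value pairs
  mother: Mary
  children:          # list: multiple elements
    - Jack
    - Lily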
ansible-playbook output colors:
Green: executed successfully, nothing changed
Yellow: executed successfully, with changes
Red: execution failed
Jinja2 is a template engine written in Python, used to write template files.
Comment: {# comment text #}
Variable reference: {{ var }}
Logic expression: {% %}
The iterated item is referenced through the fixed built-in variable "item".
In a task, use with_items to give the list of elements to iterate over.
loop replaces with_items (see the sketch below).
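A minimal iteration sketch; the package names are placeholders:

- name: install several packages (with_items style)
  yum: name={{ item }} state=present
  with_items:
    - vim
    - wget
- name: the same task written with the newer loop keyword
  yum: name={{ item }} state=present
  loop:
    - vim
    - wget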
systemctl stop firewalld.service
systemctl disable firewalld.service
sed -i "s/SELINUX=enforcing/SELINUX=disabled/g" /etc/selinux/config
setenforce 0
Set the hostnames of the four virtual machines to ansible, node1, node2 and node3.
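One way to do this, run on each machine in turn (hostnames as planned above):
hostnamectl set-hostname ansible   # on the control node
hostnamectl set-hostname node1     # on node1; node2 and node3 likewise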
On the Ansible node, use the provided ansible.tar.gz package
to configure a local repository and install the ansible service.
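The package presumably has to be unpacked first so it can serve as a repository; a sketch, assuming the tarball contains an ansible/ repo directory matching the baseurl below:
tar -zxvf ansible.tar.gz -C /opt/   # yields /opt/ansible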
vim /etc/yum.repos.d/local.repo
[ansible]
name=ansible
baseurl=file:///opt/ansible
enabled=1
gpgcheck=0
yum install ansible -y
Configure passwordless (key-based) login
[[email protected] ~]# ssh-keygen -t rsa
RSA is one cryptographic algorithm; DSA is another. RSA is the one commonly used for key-based login.
In production environments, SSH login is normally key-based.
Copy the local public key to the managed nodes:
[[email protected] ~]# ssh-copy-id [email protected]
Test connectivity:
ssh 192.168.0.47
Define the target hosts in the Ansible inventory:
[hosts]
node1
node2
node3
# hosts group defined in the inventory
Add name resolution by editing the /etc/hosts file:
172.30.11.13 node1
172.30.11.14 node2
172.30.11.15 node3
ansible all -m ping
# matches all hosts
ansible all --list-hosts
# lists all managed hosts
First, create a project directory under /opt:
mkdir -p /opt/gpmall_ansible/roles/
cd /opt/gpmall_ansible/roles/
ansible-galaxy init init
ansible-galaxy init jar
ansible-galaxy init kafka
ansible-galaxy init mariadb
ansible-galaxy init nginx
ansible-galaxy init redis
ansible-galaxy init zabbix
ansible-galaxy init zookeeper
# each role skeleton is generated automatically
tree /opt/gpmall_ansible/
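ansible-galaxy generates the same skeleton for every role; abridged tree output, one role shown:
/opt/gpmall_ansible/
└── roles
    └── init
        ├── defaults
        ├── files
        ├── handlers
        ├── meta
        ├── tasks
        ├── templates
        ├── tests
        └── vars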
Create the group_vars directory
Under the project directory /opt/gpmall_ansible, create a group_vars directory,
and create a file named all inside it:
cd /opt/gpmall_ansible/
mkdir group_vars
cd group_vars/
touch all
Create the installation entry file:
cd ..
touch install_gpmall_cluster.yaml
Write the Playbook
init role
Deploys the base environment and uploads the gpmall-repo.tar.gz package.
- name: Selinux Config Setenforce
  shell: getenforce
  register: info
- name: when_Selinux
  shell: setenforce 0
  when: info.stdout == 'Enforcing'
- name: stop firewalld
  shell: systemctl stop firewalld
  ignore_errors: yes
- name: copy packages
  copy: src=gpmall-repo.tar.gz dest=/opt
- name: tar gpmall
  shell: tar -zxvf /opt/gpmall-repo.tar.gz -C /opt/
- name: move repos
  shell: mv /etc/yum.repos.d/* /media
- name: create local.repo
  copy: src=local.repo dest=/etc/yum.repos.d/
- name: install jdk
  yum: name={{ item }} state=present
  with_items:
    - java-1.8.0-openjdk
    - java-1.8.0-openjdk-devel
- name: copy hosts
  template: src=hosts.j2 dest=/etc/hosts
Write the local.repo and hosts.j2 files:
vi local.repo
[gpmall]
name=gpmall
baseurl=file:///opt/gpmall-repo
gpgcheck=0
enabled=1
vi hosts.j2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
{{hostip1}} {{hostname1}}
{{hostip2}} {{hostname2}}
{{hostip3}} {{hostname3}}
Declare the variables:
vi /opt/gpmall_ansible/group_vars/all
hostip1: 172.30.11.13
hostip2: 172.30.11.14
hostip3: 172.30.11.15
hostname1: node1
hostname2: node2
hostname3: node3
mariadb role
Installs the database service and configures it as a MariaDB Galera Cluster.
- name: install mariadb
  yum: name=mariadb-server state=present
- name: start mariadb
  shell: systemctl start mariadb
- name: enable mariadb
  shell: systemctl enable mariadb
- name: install expect
  yum: name=expect state=present
- name: mysql_secure_installation
  template: src=mysql_secure_installation.sh.j2 dest=/opt/mysql_secure_installation.sh
- name: sh shell
  shell: sh /opt/mysql_secure_installation.sh
- name: config mariadb
  template: src=server.cnf.j2 dest=/etc/my.cnf.d/server.cnf
- name: grant privileges
  shell: mysql -uroot -p{{ DB_PASS }} -e "grant all privileges on *.* to 'root'@'%' identified by '{{ DB_PASS }}';"
- name: stop db
  shell: systemctl stop mariadb
- name: new cluster
  shell: galera_new_cluster
  when: ansible_fqdn == "node1"
- name: restart db
  shell: systemctl restart mariadb
  when: ansible_fqdn == "node2"
- name: restart db
  shell: systemctl restart mariadb
  when: ansible_fqdn == "node3"
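A hedged way to verify the cluster once the role has run (on any node; the password is DB_PASS from group_vars/all):
mysql -uroot -p123456 -e "show status like 'wsrep_cluster_size';"
# a healthy three-node Galera cluster reports wsrep_cluster_size = 3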
Write the server.cnf.j2 file:
#
# These groups are read by MariaDB server.
# Use it for options that only the server (but not clients) should see
#
# See the examples of server my.cnf files in /usr/share/mysql/
#
# this is read by the standalone daemon and embedded servers
[server]
# this is only for the mysqld standalone daemon
[mysqld]
#
# * Galera-related settings
#
[galera]
wsrep_on=ON
wsrep_provider=/usr/lib64/galera/libgalera_smm.so
wsrep_cluster_address="gcomm://{{hostip1}},{{hostip2}},{{hostip3}}"
{% if ansible_fqdn == "node1" %}
wsrep_node_name= {{hostname1}}
wsrep_node_address={{hostip1}}
{% elif ansible_fqdn == "node2" %}
wsrep_node_name= {{hostname2}}
wsrep_node_address={{hostip2}}
{% else %}
wsrep_node_name= {{hostname3}}
wsrep_node_address={{hostip3}}
{% endif %}
binlog_format=row
default_storage_engine=InnoDB
innodb_autoinc_lock_mode=2
wsrep_slave_threads=1
innodb_flush_log_at_trx_commit=0
innodb_buffer_pool_size=120M
wsrep_sst_method=rsync
wsrep_causal_reads=ON
#
# Allow server to accept connections on all interfaces.
#
{% if ansible_fqdn == "node1" %}
bind-address={{hostip1}}
{% elif ansible_fqdn == "node2" %}
bind-address={{hostip2}}
{% else %}
bind-address={{hostip3}}
{% endif %}
#
# Optional setting
#wsrep_slave_threads=1
#innodb_flush_log_at_trx_commit=0
# this is only for embedded server
[embedded]
# This group is only read by MariaDB servers, not by MySQL.
# If you use the same .cnf file for MySQL and MariaDB,
# you can put MariaDB-only options here
[mariadb]
# This group is only read by MariaDB-10.3 servers.
# If you use the same .cnf file for MariaDB of different versions,
# use this group for options that older servers don't understand
[mariadb-10.3]
Write mysql_secure_installation.sh.j2:
#!/bin/bash
# mysql_secure_installation.sh.j2
# check that the script is being run as root
if [ $(id -u) != "0" ]; then
    echo "Error: You must be root to run this script, please use root to install"
    exit 1
fi
expect -c "
spawn /usr/bin/mysql_secure_installation
expect \"Enter current password for root (enter for none):\"
send \"\r\"
expect \"Set root password?\"
send \"y\r\"
expect \"New password:\"
send \"{{DB_PASS}}\r\"
expect \"Re-enter new password:\"
send \"{{DB_PASS}}\r\"
expect \"Remove anonymous users?\"
send \"y\r\"
expect \"Disallow root login remotely?\"
send \"n\r\"
expect \"Remove test database and access to it?\"
send \"y\r\"
expect \"Reload privilege tables now?\"
send \"y\r\"
expect eof
"
# "expect eof" ends the interaction started by spawn
Edit the /opt/gpmall_ansible/group_vars/all file:
# cat all
hostip1: 172.30.11.13
hostip2: 172.30.11.14
hostip3: 172.30.11.15
hostname1: node1
hostname2: node2
hostname3: node3
DB_PASS: 123456
redis role
- name: install redis
  yum: name=redis state=present
  when: ansible_fqdn == "node1"
- name: config redis
  copy: src=redis.conf dest=/etc/redis.conf
  when: ansible_fqdn == "node1"
- name: start redis
  shell: systemctl restart redis
  when: ansible_fqdn == "node1"
Copy the redis.conf file into the files directory at the same level as the tasks directory.
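A quick hedged smoke test once the role has run (assuming redis.conf allows remote connections):
ansible node1 -m shell -a "redis-cli ping"
# a healthy instance answers PONG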
zookeeper role: a distributed service (coordination) framework
- name: copy zookeeper.tar.gz
  copy: src=zookeeper-3.4.14.tar.gz dest=/opt
- name: tar zookeeper
  shell: tar -zxvf /opt/zookeeper-3.4.14.tar.gz -C /opt
- name: delete zoo_sample.cfg
  shell: rm -rf /opt/zookeeper-3.4.14/conf/zoo_sample.cfg
- name: config zoo.cfg
  template: src=zoo.cfg.j2 dest=/opt/zookeeper-3.4.14/conf/zoo.cfg
- name: mkdir /tmp/zookeeper
  shell: mkdir -p /tmp/zookeeper
- name: config myid
  template: src=myid.j2 dest=/tmp/zookeeper/myid
- name: start zookeeper
  shell: sh /opt/zookeeper-3.4.14/bin/zkServer.sh start
Copy the zookeeper-3.4.14.tar.gz package into the files directory at the same level as the tasks directory.
Copy the zoo.cfg.j2 and myid.j2 files into the templates directory.
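Once the role has run on all three nodes, a hedged status check:
ansible all -m shell -a "sh /opt/zookeeper-3.4.14/bin/zkServer.sh status"
# one node should report Mode: leader, the other two Mode: follower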
#cat myid.j2
{% if ansible_fqdn == "node1" %}
1
{% elif ansible_fqdn == "node2" %}
2
{% else %}
3
{% endif %}
# cat zoo.cfg.j2
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1={{hostip1}}:2888:3888
server.2={{hostip2}}:2888:3888
server.3={{hostip3}}:2888:3888
kafka role: message queue
- name: copy kafka.tgz
  copy: src=kafka_2.11-1.1.1.tgz dest=/opt
- name: tar kafka
  shell: tar -zxvf /opt/kafka_2.11-1.1.1.tgz -C /opt
- name: config
  template: src=server.properties.j2 dest=/opt/kafka_2.11-1.1.1/config/server.properties
- name: start kafka
  shell: sh /opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon /opt/kafka_2.11-1.1.1/config/server.properties
Copy the kafka_2.11-1.1.1.tgz package into the files directory at the same level as the tasks directory.
Copy the server.properties.j2 file into the templates directory.
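A hedged smoke test after the role runs (the topic name test is made up; this Kafka version still takes --zookeeper):
/opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --create --zookeeper node1:2181 --replication-factor 3 --partitions 1 --topic test
/opt/kafka_2.11-1.1.1/bin/kafka-topics.sh --list --zookeeper node1:2181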
jar role
- name: copy sql
  copy: src=gpmall.sql dest=/opt/gpmall.sql
  when: ansible_fqdn == "node1"
- name: config db
  shell: mysql -uroot -p{{ DB_PASS }} -e "create database gpmall;"
  when: ansible_fqdn == "node1"
- name: source gpmall.sql
  shell: mysql -uroot -p{{ DB_PASS }} gpmall < /opt/gpmall.sql
  when: ansible_fqdn == "node1"
- name: copy jar
  copy: src=user-provider-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node2"
- name: copy jar
  copy: src=user-provider-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node3"
- name: copy jar
  copy: src=shopping-provider-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node2"
- name: copy jar
  copy: src=shopping-provider-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node3"
- name: copy jar
  copy: src=gpmall-shopping-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node2"
- name: copy jar
  copy: src=gpmall-shopping-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node3"
- name: copy jar
  copy: src=gpmall-user-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node2"
- name: copy jar
  copy: src=gpmall-user-0.0.1-SNAPSHOT.jar dest=/opt
  when: ansible_fqdn == "node3"
- name: copy hosts
  template: src=hosts.j2 dest=/etc/hosts
  when: ansible_fqdn == "node2"
- name: copy hosts
  template: src=hosts.j2 dest=/etc/hosts
  when: ansible_fqdn == "node3"
- name: start jar
  shell: nohup java -jar /opt/user-provider-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node2"
- name: start jar
  shell: nohup java -jar /opt/user-provider-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node3"
- name: start jar
  shell: nohup java -jar /opt/shopping-provider-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node2"
- name: start jar
  shell: nohup java -jar /opt/shopping-provider-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node3"
- name: start jar
  shell: nohup java -jar /opt/gpmall-shopping-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node2"
- name: start jar
  shell: nohup java -jar /opt/gpmall-shopping-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node3"
- name: start jar
  shell: nohup java -jar /opt/gpmall-user-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node2"
- name: start jar
  shell: nohup java -jar /opt/gpmall-user-0.0.1-SNAPSHOT.jar &
  when: ansible_fqdn == "node3"
Copy gpmall.sql and the four gpmall jar packages into the files directory at the same level as the tasks directory.
Copy the hosts.j2 file into the templates directory.
# cat hosts.j2
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
{{hostip1}} redis.mall
{{hostip1}} mysql.mall
{{hostip1}} kafka1.mall
{{hostip2}} kafka2.mall
{{hostip3}} kafka3.mall
{{hostip1}} zk1.mall
{{hostip2}} zk2.mall
{{hostip3}} zk3.mall
nginx role
- name: install nginx
  yum: name=nginx state=present
  when: ansible_fqdn == "node1"
- name: delete html
  shell: rm -rf /usr/share/nginx/html/*
  when: ansible_fqdn == "node1"
- name: copy file
  copy: src=dist dest=/opt
  when: ansible_fqdn == "node1"
- name: cp dist
  shell: cp -rvf /opt/dist/* /usr/share/nginx/html/
  when: ansible_fqdn == "node1"
- name: config default.config
  template: src=default.conf.j2 dest=/etc/nginx/conf.d/default.conf
  when: ansible_fqdn == "node1"
- name: start nginx
  service: name=nginx state=started enabled=true
  when: ansible_fqdn == "node1"
So the dist directory needs to be copied into the files directory at the same level as the tasks directory.
Copy the default.conf.j2 file into the templates directory.
upstream myuser {
    server {{hostip2}}:8082;
    server {{hostip3}}:8082;
    ip_hash;
}
upstream myshopping {
    server {{hostip2}}:8081;
    server {{hostip3}}:8081;
    ip_hash;
}
upstream mycashier {
    server {{hostip2}}:8083;
    server {{hostip3}}:8083;
    ip_hash;
}
server {
    listen       80;
    server_name  localhost;
    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;
    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    location /user {
        proxy_pass http://myuser;
    }
    location /shopping {
        proxy_pass http://myshopping;
    }
    location /cashier {
        proxy_pass http://mycashier;
    }
    #error_page  404              /404.html;
    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }
    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}
    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}
    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
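With nginx up on node1, a hedged end-to-end check from the control node:
curl -I http://node1/
# expect HTTP/1.1 200 OK, with the gpmall front page served from /usr/share/nginx/html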
Run the Playbook
Edit the playbook entry file install_gpmall_cluster.yaml:
---
- hosts: hosts
  remote_user: root
  roles:
    - init
    - mariadb
    - redis
    - zookeeper
    - kafka
    - jar
    - nginx
Check the playbook syntax:
ansible-playbook install_gpmall_cluster.yaml --syntax-check
# answer yes three times at the SSH host-key prompts, then run:
ansible-playbook install_gpmall_cluster.yaml -vvv
Module execution is idempotent; this is Ansible's idempotency.
It means running the playbook multiple times is safe: the result is the same every time.
Each task should have a name, which is used in the playbook's output. The name can be omitted, in which case the action result is used for the output instead.
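A small sketch of idempotency in practice (the package name is a placeholder):
- name: install vim
  yum: name=vim state=present
# the first run reports changed=1; running the same playbook again reports ok=1 changed=0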