Automated Operations Tools: SaltStack

Table of Contents

    • Saltstack
      • 1.1 Introduction to automated operations
      • 1.2 Installing SaltStack
      • 1.3 Starting the Salt services
      • 1.4 Configuring SaltStack authentication
      • 1.5 Running remote commands with SaltStack
      • 1.6 grains
      • 1.7 pillar
      • 1.8 Installing and configuring httpd
      • 1.9 Managing files
      • 2.0 Managing directories
      • 2.1 Managing remote commands
      • 2.2 Managing cron jobs
      • 2.3 Other commands
      • 2.4 Using salt-ssh
      • 2.5 Introduction to Ansible
      • 2.6 Installing Ansible
      • 2.7 Running remote commands with Ansible
      • 2.8 Copying files or directories with Ansible
      • 2.9 Running scripts remotely with Ansible
      • 3.0 Managing cron with Ansible
      • 3.1 Installing packages and managing services with Ansible
      • 3.5 Using Ansible playbooks
      • 3.6 Loops in playbooks
      • 3.7 Conditionals in playbooks
      • 3.8 Handlers in playbooks
      • 3.9 Installing nginx with a playbook
      • 4.0 Managing config files with a playbook

Saltstack


1.1 Introduction to automated operations

Puppet (www.puppetlabs.com) is written in Ruby, uses a client/server (C/S) architecture, supports multiple platforms, and can manage config files, users, cron jobs, packages, system services, and more. It comes in a free community edition and a paid enterprise edition; the enterprise edition supports graphical configuration.

SaltStack (site https://saltstack.com, docs at docs.saltstack.com) is written in Python, uses a C/S architecture, and supports multiple platforms. It is lighter than Puppet, very fast at remote command execution, easier to configure and use, and can do almost everything Puppet can.

Ansible (www.ansible.com) is an even simpler automation tool, also written in Python, that needs no agent installed on the managed hosts. It can configure systems, deploy software, and run commands in bulk.

1.2 Installing SaltStack

SaltStack documentation: https://docs.saltstack.com/en/latest/topics/index.html

SaltStack can execute commands remotely over salt-ssh, much like Ansible, but it also supports a C/S (master/minion) mode, which is what we cover below. Prepare two machines:

salt-master:192.168.137.30
salt-minion:192.168.137.40

Preparation: set up /etc/hosts, and install the SaltStack yum repository on both machines.
[root@linux-node3 ~]# yum -y install https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm

[root@linux-node4 ~]# cat /etc/hosts
127.0.0.1   localhost linux-node4.com localhost4 
192.168.137.40 linux-node4.com
192.168.137.30 linux-node3.com

**On the salt-master:**
[root@linux-node3 ~]# yum install -y salt-master salt-minion
[root@linux-node3 ~]# rpm -qa | grep salt-master 
salt-master-2017.7.2-1.el7.noarch
[root@linux-node3 ~]# rpm -qa | grep salt-minion
salt-minion-2017.7.2-1.el7.noarch
[root@linux-node3 ~]#

**On the salt-minion:**
[root@linux-node4 ~]# yum install -y salt-minion
[root@linux-node4 ~]# rpm -qa | grep salt-minion
salt-minion-2017.7.2-1.el7.noarch
[root@linux-node4 ~]#

1.3 Starting the Salt services

Configuration on the salt-master (it runs a minion as well):
vim /etc/salt/minion // add:
master: linux-node3.com
[root@linux-node3 ~]# systemctl start salt-minion
[root@linux-node3 ~]# systemctl start salt-master
[root@linux-node3 ~]# ps -ef | grep salt-minion
root       4215      1  0 16:07 ?        00:00:00 /usr/bin/python /usr/bin/salt-minion
root       4218   4215  2 16:07 ?        00:00:00 /usr/bin/python /usr/bin/salt-minion
root       4226   4218  0 16:07 ?        00:00:00 /usr/bin/python /usr/bin/salt-minion
root       5030   3975  0 16:08 pts/1    00:00:00 grep --color=auto salt-minion
[root@linux-node3 ~]# ps -ef | grep salt-master
root       4299      1  4 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
root       4304   4299  0 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
root       4309   4299  0 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
root       4310   4299  0 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
root       4313   4299  2 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
root       4314   4299  0 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
root       4315   4314 23 16:08 ?        00:00:03 /usr/bin/python /usr/bin/salt-master
root       4322   4314  8 16:08 ?        00:00:01 /usr/bin/python /usr/bin/salt-master
root       4323   4314  8 16:08 ?        00:00:01 /usr/bin/python /usr/bin/salt-master
root       4324   4314  8 16:08 ?        00:00:01 /usr/bin/python /usr/bin/salt-master
root       4325   4314  8 16:08 ?        00:00:01 /usr/bin/python /usr/bin/salt-master
root       4326   4314  8 16:08 ?        00:00:01 /usr/bin/python /usr/bin/salt-master
root       5437   3975  0 16:08 pts/1    00:00:00 grep --color=auto salt-master
root       5438   4326  0 16:08 ?        00:00:00 /usr/bin/python /usr/bin/salt-master
[root@linux-node3 ~]# netstat -lntp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      963/sshd            
tcp        0      0 0.0.0.0:4505            0.0.0.0:*               LISTEN      4309/python         
tcp        0      0 127.0.0.1:25            0.0.0.0:*               LISTEN      1238/master         
tcp        0      0 0.0.0.0:4506            0.0.0.0:*               LISTEN      4315/python         
tcp6       0      0 :::22                   :::*                    LISTEN      963/sshd            
tcp6       0      0 ::1:25                  :::*                    LISTEN      1238/master         
[root@linux-node3 ~]#

Configuration on the salt-minion:
vim /etc/salt/minion // add:
master: linux-node3.com

[root@linux-node4 ~]# systemctl start salt-minion
[root@linux-node4 ~]# ps -ef | grep salt-minion
root       4462      1  4 14:44 ?        00:00:00 /usr/bin/python /usr/bin/salt-minion
root       4465   4462 62 14:44 ?        00:00:06 /usr/bin/python /usr/bin/salt-minion
root       4473   4465  0 14:44 ?        00:00:00 /usr/bin/python /usr/bin/salt-minion
root       4537   2436  0 14:44 pts/0    00:00:00 grep --color=auto salt-minion
[root@linux-node4 ~]# 

The master listens on two ports: 4505 (the publish port) and 4506 (the port used for replies/communication with the minions).

1.4 Configuring SaltStack authentication

The master and minions need a secure, encrypted channel to communicate, so authentication must be configured; this is done with key pairs.

On first start, a minion generates minion.pem and minion.pub under /etc/salt/pki/minion/ (.pub is the public key) and sends the public key to the master.

On first start, the master also generates a key pair under /etc/salt/pki/master. When the master receives a minion's public key, you accept it with the salt-key tool; once accepted, it is stored under /etc/salt/pki/master/minions/. The minion in turn receives the master's public key and stores it as /etc/salt/pki/minion/minion_master.pub.

On the salt-master:
[root@linux-node3 minion]# ll /etc/salt/pki/minion/
总用量 12
-rw-r--r-- 1 root root  450 11月  2 16:38 minion_master.pub
-r-------- 1 root root 1678 11月  2 16:07 minion.pem
-rw-r--r-- 1 root root  450 11月  2 16:07 minion.pub
[root@linux-node3 minion]# ll /etc/salt/pki/master/
总用量 8
-r-------- 1 root root 1674 11月  2 16:08 master.pem
-rw-r--r-- 1 root root  450 11月  2 16:08 master.pub
drwxr-xr-x 2 root root   29 11月  2 16:38 minions
drwxr-xr-x 2 root root    6 11月  2 16:08 minions_autosign
drwxr-xr-x 2 root root    6 11月  2 16:08 minions_denied
drwxr-xr-x 2 root root    6 11月  2 16:38 minions_pre
drwxr-xr-x 2 root root    6 11月  2 16:08 minions_rejected
[root@linux-node3 minion]# 

On the salt-minion:
[root@linux-node4 ~]# ll /etc/salt/pki/minion/
总用量 8
-r-------- 1 root root 1678 11月  3 11:04 minion.pem
-rw-r--r-- 1 root root  450 11月  3 11:04 minion.pub
[root@linux-node4 ~]#

This exchange is driven by the salt-key tool.
Run: salt-key -a linux-node3.com  // -a takes a hostname and accepts that host's key
[root@linux-node3 ~]# salt-key 
Accepted Keys:
linux-node4.com
Denied Keys:
Unaccepted Keys:
linux-node3.com
Rejected Keys:
[root@linux-node3 ~]#

Options:
-a  accept the key of the named host
-A  accept all pending keys
-r  reject the named host's key
-R  reject all keys
-d  delete the named host's key
-D  delete all keys
-y  skip the confirmation prompt (answer y automatically)

[root@linux-node3 ~]# salt-key -a linux-node3.com
The following keys are going to be accepted:
Unaccepted Keys:
linux-node3.com
Proceed? [n/Y] y
Key for minion linux-node3.com accepted.
[root@linux-node3 ~]# salt-key 
Accepted Keys:
linux-node3.com
linux-node4.com
Denied Keys:
Unaccepted Keys:
Rejected Keys:
[root@linux-node3 ~]#     

1.5 Running remote commands with SaltStack

Syntax: salt '*' test.ping  // here * means every minion whose key has been accepted; you can also name a single one
[root@linux-node3 ~]# salt '*' test.ping
linux-node4.com:
    True
linux-node3.com:
    True
[root@linux-node3 ~]# salt '*' cmd.run 'ifconfig eth0 | grep inet'
linux-node3.com:
        inet 192.168.137.30  netmask 255.255.255.0  broadcast 192.168.137.255
        inet6 fe80::20c:29ff:fe8a:34e2  prefixlen 64  scopeid 0x20
linux-node4.com:
        inet 192.168.137.40  netmask 255.255.255.0  broadcast 192.168.137.255
        inet6 fe80::59b3:9200:1bdd:d776  prefixlen 64  scopeid 0x20
[root@linux-node3 ~]#

Syntax: salt 'hostname' test.ping
[root@linux-node3 ~]# salt 'linux-node4.com' test.ping
linux-node4.com:
    True

Syntax: salt '*' cmd.run "hostname"
[root@linux-node3 ~]# salt 'linux-node[34].com' cmd.run "hostname"
linux-node4.com:
    linux-node4.com
linux-node3.com:
    linux-node3.com
[root@linux-node3 ~]#

[root@linux-node3 ~]# salt -L 'linux-node3.com,linux-node4.com' test.ping
linux-node4.com:
    True
linux-node3.com:
    True
[root@linux-node3 ~]# salt -E 'linux-node[34]+' test.ping 
linux-node3.com:
    True
linux-node4.com:
    True

Note: the * must refer to clients whose keys have been accepted on the master (check with salt-key), normally the minion id values we set. Targeting supports globs, lists, and regular expressions. With the two clients linux-node3.com and linux-node4.com, you can write salt 'linux-*', salt 'linux-node[34].com', salt -L 'linux-node3.com,linux-node4.com',
or salt -E 'linux-node(3|4).com'. A list is multiple machines separated by commas and requires -L; a regex requires -E. You can also target by grains with -G and by pillar with -I.
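To make the matching rules concrete, here is a purely local illustration of the glob and regex semantics using plain bash and grep, not real salt commands (the minion IDs are the two from this setup plus a hypothetical web1.com):

```shell
# Which minion IDs would 'linux-*' (glob) and 'linux-node[34]\.com' (regex) hit?
for m in linux-node3.com linux-node4.com web1.com; do
  case "$m" in
    linux-*) echo "glob matched: $m" ;;   # same pattern as: salt 'linux-*' test.ping
  esac
done

# same pattern as: salt -E 'linux-node[34]\.com' test.ping
printf 'linux-node3.com\nlinux-node4.com\nweb1.com\n' | grep -E '^linux-node[34]\.com$'
```

Only linux-node3.com and linux-node4.com survive both filters; web1.com matches neither pattern.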

1.6 grains

Grains are pieces of information collected when the minion starts: OS type, NIC IPs, kernel version, CPU architecture, and so on.

salt 'linux-node4.com' grains.ls  // (hostname, then the function) lists the names of all grains
[root@linux-node3 ~]# salt 'linux-node4.com' grains.ls
linux-node4.com:
- SSDs
- biosreleasedate
- biosversion
- cpu_flags
- cpu_model
- cpuarch
- disks
- dns
- domain
- fqdn
- fqdn_ip4
- fqdn_ip6
......

salt 'linux-node4.com' grains.items  // lists all grains together with their values
[root@linux-node3 ~]# salt 'linux-node4.com' grains.items
linux-node4.com:
----------
SSDs:
biosreleasedate:
    07/02/2015
biosversion:
    6.00
cpu_flags:
    - fpu
    - vme
    - de
    - pse
    - tsc
    - msr
    - pae
..........

Grains are not dynamic; they do not update in real time but are collected when the minion starts.
The information grains collect can be used to drive configuration-management decisions.
Grains also support custom entries.

Custom grains

cat /etc/salt/grains  // add:

role: nginx
env: test
# restart the minion service
systemctl restart salt-minion
Result:
[root@linux-node4 ~]# cat /etc/salt/grains 
role: nginx
env: test

[root@linux-node4 ~]#
[root@linux-node3 ~]# salt '*' grains.item role env
linux-node4.com:
----------
   env:
      test
   role:
      nginx
linux-node3.com:
----------
   env:
   role:
// a grain shows a value only on minions where it is defined; otherwise it is empty

On the master:
Fetch the grains:
salt '*' grains.item role env
Grains attributes can also be used to target command execution:
salt -G role:nginx cmd.run 'hostname'
[root@linux-node3 ~]# salt -G role:nginx cmd.run "hostname"
linux-node4.com:
    linux-node4.com
[root@linux-node3 ~]#

1.7 pillar

Unlike grains, pillar is defined on the master, holding data targeted at specific minions. Sensitive data such as passwords can live in pillar, and it can also define variables.

// pillar is defined on the master; grains are defined on the minion

Configuring custom pillar
vim /etc/salt/master
Find the following and uncomment it (remove the leading # signs):
pillar_roots:
  base: # two spaces before this line
    - /srv/pillar # four spaces before this line
mkdir /srv/pillar
vim /srv/pillar/test.sls  // contents:
conf: /etc/test.conf

vi /srv/pillar/top.sls  // contents; this file is the entry point
base:
  'linux-node3.com': # two spaces before this line; which machines to target
    - test # four spaces before this line; which pillar files to load
    - test1
Restart the master:
systemctl restart salt-master
After changing pillar files, refresh pillar on the minions so they pick up the new state:
salt '*' saltutil.refresh_pillar
[root@linux-node3 pillar]# ls 
test1.sls  test.sls  top.sls
[root@linux-node3 pillar]#
[root@linux-node3 ~]# salt '*' saltutil.refresh_pillar
linux-node4.com:
    True
linux-node3.com:
    True
[root@linux-node3 ~]#
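For completeness: the original never shows test1.sls, but judging from the pillar.item output further below (which reports dir: /data/123 on linux-node3.com), its contents are presumably something like:

```yaml
# /srv/pillar/test1.sls (assumed contents, inferred from the output below)
dir: /data/123
```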

Verify: salt '*' pillar.item conf
[root@linux-node3 ~]# salt '*' pillar.item conf
linux-node4.com:
    ----------
    conf:
        /etc/test.conf
linux-node3.com:
    ----------
    conf:
[root@linux-node3 ~]# 
[root@linux-node3 pillar]# salt '*' pillar.item conf dir
linux-node4.com:
    ----------
    conf:
    dir:
linux-node3.com:
    ----------
    conf:
        /etc/test.conf
    dir:
        /data/123
[root@linux-node3 pillar]#
Pillar can also be used as a matching target, e.g. salt -I 'conf:/etc/test.conf' test.ping
// top.sls entries can be written separately or combined; they simply partition the content

[root@linux-node3 pillar]# salt -I 'conf:/etc/test.conf' cmd.run "w"
linux-node3.com:
     18:32:34 up  7:34,  3 users,  load average: 0.01, 0.10, 0.13
    USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
    root     tty1                      10:59    7:32m  0.18s  0.18s -bash
    root     pts/0    192.168.137.1    11:00    5:27m  0.10s  0.10s -bash
    root     pts/1    192.168.137.1    11:25    2.00s  1.30s  0.79s /usr/bin/python /usr/bin/salt -I conf:/etc/test.conf cmd.run w
[root@linux-node3 pillar]#
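As mentioned above, pillar data can also act as variables. State files are rendered with Jinja before being parsed as YAML, so a state can reference a pillar value like this (a minimal sketch, not from the original setup; the state id conf_file is made up):

```yaml
# /srv/salt/conf_file.sls -- manage whatever path the 'conf' pillar names
conf_file:
  file.managed:
    - name: {{ pillar['conf'] }}
    - user: root
    - group: root
    - mode: 600
```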

1.8 Installing and configuring httpd

On the master: vi /etc/salt/master // search for file_roots
and uncomment the following:
file_roots:
  base: # two leading spaces
    - /srv/salt # four leading spaces
mkdir /srv/salt ; cd /srv/salt
vi /srv/salt/top.sls  // add:
base:
  '*':  # two leading spaces
    - httpd # four leading spaces
This means: run the httpd state on all clients.
[root@linux-node3 salt]# ll
总用量 4
-rw-r--r-- 1 root root 25 11月  3 18:42 top.sls
[root@linux-node3 salt]# cat top.sls 
base:
  '*':
    - httpd
[root@linux-node3 salt]# 
Restart: systemctl restart salt-master

On the master: vi /srv/salt/httpd.sls  // add the following; this is the httpd state itself
httpd-service:
  pkg.installed:
    - names:    // if there is only one package, you can write `- name: httpd` on a single line instead
      - httpd
      - httpd-devel
  service.running:
    - name: httpd
    - enable: True
[root@linux-node3 salt]# cat httpd.sls 
httpd-service:
  pkg.installed:
    - names:
      - httpd
      - htpd-devel
  service.running:
    - name: httpd
    - enable: True
[root@linux-node3 salt]# 

Explanation: httpd-service is a custom id. pkg.installed is the package-install function, followed by the packages to install. service.running is another function that keeps the named service running; enable means start on boot. Note that the file shown above contains a typo, htpd-devel instead of httpd-devel, which is why the state run below fails for that package.
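For the single-package case mentioned in the comment above, the state would collapse to (a sketch):

```yaml
httpd-service:
  pkg.installed:
    - name: httpd
  service.running:
    - name: httpd
    - enable: True
```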
Run: salt 'linux-node3.com' state.highstate // this can be slow, since the client is running yum install httpd httpd-devel
[root@linux-node3 salt]# salt 'linux-node3.com' state.highstate
linux-node3.com:
----------
          ID: httpd-service
    Function: pkg.installed
        Name: httpd
      Result: True
     Comment: All specified packages are already installed
     Started: 18:53:04.843006
    Duration: 1231.107 ms
     Changes:   
----------
          ID: httpd-service
    Function: pkg.installed
        Name: htpd-devel
      Result: False
     Comment: Error occurred installing package(s). Additional info follows:
              
              errors:
                  - Running scope as unit run-30321.scope.
                    Loaded plugins: fastestmirror
                    Loading mirror speeds from cached hostfile
                     * base: mirrors.btte.net
                     * extras: mirrors.btte.net
                     * updates: mirrors.btte.net
                    No package htpd-devel available.
                    Error: Nothing to do
     Started: 18:53:06.074365
    Duration: 23709.405 ms
     Changes:   
----------
          ID: httpd-service
    Function: service.running
        Name: httpd
      Result: True
     Comment: The service httpd is already running
     Started: 18:53:29.784871
    Duration: 64.621 ms
     Changes:   

Summary for linux-node3.com
------------
Succeeded: 2
Failed:    1
------------
Total states run:     3
Total run time:  25.005 s
ERROR: Minions returned with non-zero exit code
[root@linux-node3 salt]# ps -ef | grep httpd
root      30070      1  0 18:52 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    30073  30070  0 18:52 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    30074  30070  0 18:52 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    30075  30070  0 18:52 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    30076  30070  0 18:52 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
apache    30078  30070  0 18:52 ?        00:00:00 /usr/sbin/httpd -DFOREGROUND
root      30392   1488  0 18:53 pts/1    00:00:00 grep --color=auto httpd
[root@linux-node3 salt]#

1.9 Managing files

On the master: vi /srv/salt/test.sls  // add:
file_test:
  file.managed:
    - name: /tmp/linux-node3.com
    - source: salt://test/123/passwd.txt
    - user: root
    - group: root
    - mode: 600
Explanation: file_test on the first line is a custom name for this state section and can be referenced from other sections. source says where to copy the file from; salt://test/123/passwd.txt maps to /srv/salt/test/123/passwd.txt.

[root@linux-node3 salt]# mkdir -p test/123
[root@linux-node3 salt]# cp /etc/passwd /srv/salt/test/123/passwd.txt
vi /srv/salt/top.sls // change to:
[root@linux-node3 salt]# cat top.sls 
base:
  '*':
    - test
[root@linux-node3 salt]#
Check on linux-node4.com that /tmp/linux-node3.com exists (the state's name is the destination path, so the file is called linux-node3.com even on node4), and verify its content and permissions.

[root@linux-node3 salt]# salt 'linux-node4.com' state.highstate
linux-node4.com:
----------
          ID: file_test
    Function: file.managed
        Name: /tmp/linux-node3.com
      Result: True
     Comment: File /tmp/linux-node3.com updated
     Started: 14:42:39.019891
    Duration: 42.042 ms
     Changes:   
              ----------
              diff:
                  New file

Summary for linux-node4.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time:  42.042 ms
[root@linux-node3 salt]#

[root@linux-node4 ~]# ll -lt /tmp/linux-node3.com 
-rw------- 1 root root 1199 11月  6 14:42 /tmp/linux-node3.com

2.0 Managing directories

On the master: vi /srv/salt/test_dir.sls  // add:
file_dir:
  file.recurse:
    - name: /tmp/testdir
    - source: salt://test/123
    - user: root
    - file_mode: 640
    - dir_mode: 750
    - mkdir: True
    - clean: True // with clean: True, files or directories removed from the source are also removed from the target; without it they are left in place
Update top.sls: vi /srv/salt/top.sls // change to:
base:
  '*':
    - test_dir 
Run: salt 'linux-node4.com' state.highstate
Check on linux-node4.com that /tmp/testdir exists, along with its files, subdirectories, and permissions.
Note one caveat: if the source directory contains empty directories, those are not created on the client.
[root@linux-node3 salt]# salt 'linux-node4.com' state.highstate
linux-node4.com:
----------
          ID: file_dir
    Function: file.recurse
        Name: /tmp/testdir
      Result: True
     Comment: Recursively updated /tmp/testdir
     Started: 15:29:15.530231
    Duration: 145.051 ms
     Changes:   
              ----------
              /tmp/testdir/passwd.txt:
                  ----------
                  diff:
                      New file
                  mode:
                      0640

Summary for linux-node4.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 145.051 ms
[root@linux-node3 salt]#

[root@linux-node4 ~]# ll /tmp/testdir/
-rw-r----- 1 root root 1199 11月  6 15:29 passwd.txt
[root@linux-node4 ~]#
[root@linux-node4 ~]# ll /tmp/
-rw------- 1 root root 1199 11月  6 14:42 linux-node3.com
drwxr-x--- 2 root root   24 11月  6 15:29 testdir

2.1 Managing remote commands

On the master: vi /srv/salt/shell_test.sls  // add:
shell_test:
  cmd.script:
    - source: salt://test/1.sh
    - user: root
vi /srv/salt/test/1.sh // add:
#!/bin/bash
touch /tmp/111.txt
if [ ! -d /tmp/1233 ]
then
    mkdir /tmp/1233
fi
Update top.sls:
base:
  '*':
    - shell_test
Run: salt 'linux-node4.com' state.highstate
[root@linux-node3 test]# salt 'linux-node4.com' state.highstate
linux-node4.com:
----------
          ID: shell_test
    Function: cmd.script
      Result: True
     Comment: Command 'shell_test' run
     Started: 15:38:44.044931
    Duration: 137.472 ms
     Changes:   
              ----------
              pid:
                  3047
              retcode:
                  0
              stderr:
              stdout:

Summary for linux-node4.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 137.472 ms
[root@linux-node3 test]#

Check that /tmp/111.txt and /tmp/1233 now exist:
[root@linux-node4 ~]# ll /tmp/
-rw-r--r-- 1 root root    0 11月  6 15:38 111.txt
drwxr-xr-x 2 root root    6 11月  6 15:38 1233

2.2 Managing cron jobs

On the master: vi /srv/salt/cron_test.sls  // add:
cron_test:
  cron.present:
    - name: /bin/touch /tmp/111.txt
    - user: root
    - minute: '*'
    - hour: 20
    - daymonth: '*'
    - month: '*'
    - dayweek: '*'
Note that * must be quoted. (Also note that minute: '*' with hour: 20 runs the job every minute from 20:00 to 20:59.) You could also manage cron with the file.managed module, since system cron jobs exist as config files. To remove the cron entry, you use:
cron.absent:
  - name: /bin/touch /tmp/111.txt
The two cannot coexist: to remove a cron entry, the earlier cron.present for it must be dropped first.
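Written out in full, the removal state would look like this (same custom id, a sketch assuming the job above):

```yaml
cron_test:
  cron.absent:
    - name: /bin/touch /tmp/111.txt
    - user: root
```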

Update top.sls:
base:
  '*':
    - cron_test
Run: salt 'linux-node4.com' state.highstate
[root@linux-node3 salt]# salt 'linux-node4.com' state.highstate
linux-node4.com:
----------
          ID: cron_test
    Function: cron.present
        Name: /bin/touch /tmp/111.txt
      Result: True
     Comment: Cron /bin/touch /tmp/111.txt added to root's crontab
     Started: 16:10:05.476043
    Duration: 133.512 ms
     Changes:   
              ----------
              root:
                  /bin/touch /tmp/111.txt

Summary for linux-node4.com
------------
Succeeded: 1 (changed=1)
Failed:    0
------------
Total states run:     1
Total run time: 133.512 ms
[root@linux-node3 salt]#

Check cron on linux-node4.com; you will see the marker: # Lines below here are managed by Salt, do not edit

[root@linux-node4 ~]# crontab -l
# Lines below here are managed by Salt, do not edit
# SALT_CRON_IDENTIFIER:/bin/touch /tmp/111.txt
* 20 * * * /bin/touch /tmp/111.txt
[root@linux-node4 ~]# 

Do not edit these lines by hand, or Salt will no longer be able to modify or delete the cron entry.

2.3 Other commands

cp.get_file copies a file from the master to the clients

[root@linux-node3 test]# salt '*' cp.get_file salt://test/1.sh /tmp/test.sh
linux-node4.com:
    /tmp/test.sh
linux-node3.com:
    /tmp/test.sh
[root@linux-node3 test]# ll /tmp/
drwxr-x--- 2 root   root     24 11月  6 15:28 testdir
-rw-r--r-- 1 root   root     79 11月  6 16:14 test.sh

[root@linux-node4 ~]# ll /tmp/
drwxr-x--- 2 root root   24 11月  6 15:29 testdir
-rw-r--r-- 1 root root   79 11月  6 16:14 test.sh

salt '*' cp.get_file salt://test/1.txt /tmp/123.txt
cp.get_dir copies a directory:

salt '*' cp.get_dir salt://test/conf /tmp/ // the conf directory is created on the client automatically, so do not append conf to the destination; writing /tmp/conf/ would create another conf under /tmp/conf/

salt-run manage.up lists the minions that are up
[root@linux-node3 test]# salt-run manage.up
- linux-node3.com
- linux-node4.com
[root@linux-node3 test]#

salt '*' cmd.script salt://test/1.sh  runs a shell script that lives on the master, from the command line
[root@linux-node3 test]# salt '*' cmd.script salt://test/1.sh
linux-node4.com:
    ----------
    pid:
        3276
    retcode:
        0
    stderr:
    stdout:
linux-node3.com:
    ----------
    pid:
        10595
    retcode:
        0
    stderr:
    stdout:
[root@linux-node3 test]#
[root@linux-node3 test]# cat 1.sh 
#!/bin/bash
touch /tmp/111.txt
if [ ! -d /tmp/1233 ];then
	mkdir /tmp/1233
fi

[root@linux-node3 test]# ll /tmp/
总用量 20
-rw-r--r-- 1 root   root      0 11月  6 16:19 111.txt
drwxr-xr-x 2 root   root      6 11月  6 16:19 1233

2.4 Using salt-ssh

salt-ssh needs no key acceptance on the client side, and the client does not need salt-minion installed; it is similar to pssh/expect.
Installation is simple: yum install -y https://repo.saltstack.com/yum/redhat/salt-repo-latest-2.el7.noarch.rpm
yum install -y salt-ssh
vi /etc/salt/roster // add:
linux-node3.com:
  host: 192.168.137.30
  user: root
  passwd: 123456
linux-node4.com:
  host: 192.168.137.40
  user: root
  passwd: 123456
salt-ssh --key-deploy '*' -r 'w' // on the first run this pushes the local public key to the remote machines; after that you can remove the passwords from the roster

[root@linux-node3 ~]# salt-ssh --key-deploy '*' -r 'w'
linux-node3.com:
    ----------
    retcode:
        254
    stderr:
    stdout:
        The host key needs to be accepted, to auto accept run salt-ssh with the -i flag:
        The authenticity of host '192.168.137.30 (192.168.137.30)' can't be established.
        ECDSA key fingerprint is SHA256:VDR4A+sqPQTPjG9AD7IpQoDR3NNvivQCngfehSe3Rmg.
        ECDSA key fingerprint is MD5:02:a6:16:d0:02:91:47:47:ec:86:2f:16:9d:0b:03:57.
        Are you sure you want to continue connecting (yes/no)? 
linux-node4.com:
    ----------
    retcode:
        254
    stderr:
    stdout:
        The host key needs to be accepted, to auto accept run salt-ssh with the -i flag:
        The authenticity of host '192.168.137.40 (192.168.137.40)' can't be established.
        ECDSA key fingerprint is SHA256:XDc9XzTaSN2pv9Cqy5daXtPiisz+mI/89nd0cUFH2lY.
        ECDSA key fingerprint is MD5:d6:91:dd:37:9c:c7:c9:54:45:e1:b3:f8:6d:67:21:a7.
        Are you sure you want to continue connecting (yes/no)? 
[root@linux-node3 ~]#

At the prompt shown above, type yes to verify the host keys (or, as the output itself suggests, run salt-ssh with the -i flag to auto-accept them), then run the command again:
salt-ssh --key-deploy '*' -r 'w'
linux-node4.com:
    ----------
    retcode:
        0
    stderr:
    stdout:
         16:33:20 up  2:44,  1 user,  load average: 0.00, 0.01, 0.04
        USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
        root     pts/0    gateway          13:56   15:44   0.05s  0.05s -bash
linux-node3.com:
    ----------
    retcode:
        0
    stderr:
    stdout:
         16:33:20 up  2:37,  1 user,  load average: 0.08, 0.06, 0.05
        USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
        root     pts/0    192.168.137.1    13:56    0.00s  0.61s  0.01s /usr/bin/python /usr/bin/salt-ssh --key-deploy * -r w
[root@linux-node3 ~]#

2.5 Introduction to Ansible

No client-side agent is needed; Ansible communicates over sshd.

It is module-based, and modules can be written in any language.

Modules can be used from the command line, and you can also write playbooks in YAML, which are easy to write and read.
Installation is trivial; on CentOS it can be installed straight from yum.

A UI (browser-based) is available at www.ansible.com/tower; it is a paid product.

Official docs: http://docs.ansible.com/ansible/latest/index.html

Ansible has been acquired by Red Hat; it is a very popular open-source project on GitHub: https://github.com/ansible/ansible

A decent introductory e-book: https://ansible-book.gitbooks.io/ansible-first-book/

2.6 Installing Ansible

Prepare two machines, linux-node4.com and linux-node3.com.
Ansible only needs to be installed on linux-node3.com.
yum list | grep ansible shows the stock repos already carry ansible 2.4.
yum install -y ansible ansible-doc
Generate a key pair on linux-node3.com: ssh-keygen -t rsa
Copy the public key to linux-node4.com to set up key-based authentication.
vi /etc/ansible/hosts // add:
[testhost]
127.0.0.1
192.168.137.40
Explanation: testhost is a custom host-group name; the two IPs below it are the machines in the group.
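The inventory format also allows group-level connection variables; a hedged sketch (the ansible_user setting below is illustrative, not part of the original setup):

```ini
[testhost]
127.0.0.1
192.168.137.40

[testhost:vars]
ansible_user=root
```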

2.7 Running remote commands with Ansible

ansible  testhost -m command -a 'w' 
[root@linux-node3 ~]# ansible testhost -m command -a 'w'
192.168.137.40 | SUCCESS | rc=0 >>
 18:06:39 up  4:18,  2 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    gateway          13:56   22:07   0.08s  0.08s -bash
root     pts/1    linux-node3.com  18:06    1.00s  0.11s  0.00s w

127.0.0.1 | SUCCESS | rc=0 >>
 18:06:39 up  4:10,  2 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    192.168.137.1    13:56    7.00s  1.57s  0.00s ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/21f0e6a9ae -tt 127.0.0.1 /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1509962798.4-141273491557180/command.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1509962798.4-141273491557180/" > /dev
root     pts/2    localhost        18:06    0.00s  0.10s  0.00s w

[root@linux-node3 ~]#

This runs the command across the whole group. testhost is the host-group name, -m names the module, and -a gives the command. You can also target a single machine by IP:
ansible 127.0.0.1 -m command -a 'hostname'
[root@linux-node3 ~]# ansible 127.0.0.1 -m command -a "hostname"
127.0.0.1 | SUCCESS | rc=0 >>
linux-node3.com

[root@linux-node3 ~]# ansible 192.168.137.40 -m command -a "hostname"
192.168.137.40 | SUCCESS | rc=0 >>
linux-node4.com

[root@linux-node3 ~]# 

Error: "msg": "Aborting, target uses selinux but python bindings (libselinux-python) aren't installed!"
Fix: yum install -y libselinux-python
The shell module does the same job:
ansible testhost -m shell -a 'w'
[root@linux-node3 ~]# ansible testhost -m shell -a 'w'
192.168.137.40 | SUCCESS | rc=0 >>
 18:08:29 up  4:20,  2 users,  load average: 0.00, 0.01, 0.05
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    gateway          13:56   23:57   0.08s  0.08s -bash
root     pts/1    linux-node3.com  18:08    0.00s  0.12s  0.00s w

127.0.0.1 | SUCCESS | rc=0 >>
 18:08:29 up  4:12,  2 users,  load average: 0.49, 0.17, 0.10
USER     TTY      FROM             LOGIN@   IDLE   JCPU   PCPU WHAT
root     pts/0    192.168.137.1    13:56    5.00s  1.43s  0.00s ssh -C -o ControlMaster=auto -o ControlPersist=60s -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o ConnectTimeout=10 -o ControlPath=/root/.ansible/cp/fa9234a0dc -tt 192.168.137.40 /bin/sh -c '/usr/bin/python /root/.ansible/tmp/ansible-tmp-1509962909.32-98568075088783/command.py; rm -rf "/root/.ansible/tmp/ansible-tmp-1509962909.32-98568075088783/" >
root     pts/3    localhost        18:08    0.00s  0.13s  0.00s w

[root@linux-node3 ~]#

2.8 Copying files or directories with Ansible

ansible 192.168.137.40 -m copy -a "src=/etc/ansible  dest=/tmp/ansibletest owner=root group=root mode=0755"
[root@linux-node3 ~]# ansible 192.168.137.40 -m copy -a "src=/etc/passwd dest=/tmp/ansible owner=root group=root mode=0755"
192.168.137.40 | SUCCESS => {
"changed": true, 
"checksum": "c00424110eb9171690cece7351bf9d5deaee58b2", 
"dest": "/tmp/ansible", 
"failed": false, 
"gid": 0, 
"group": "root", 
"md5sum": "e46b4a750e50833f3f433c3767c3dd63", 
"mode": "0755", 
"owner": "root", 
"size": 1199, 
"src": "/root/.ansible/tmp/ansible-tmp-1509963487.39-74746603157649/source", 
"state": "file", 
"uid": 0
}
[root@linux-node3 ~]#
[root@linux-node4 ~]# ll /tmp/
总用量 12
-rw-r--r-- 1 root root    0 11月  6 16:19 111.txt
drwxr-xr-x 2 root root    6 11月  6 15:38 1233
-rwxr-xr-x 1 root root 1199 11月  6 18:18 ansible
Note: a source directory is placed under the destination directory; if the destination directory does not exist, it is created automatically. When copying a file, if dest names something different from the source and it is not an existing directory, the file is effectively renamed on copy. Conversely, if dest is a directory that already exists on the target machine, the file is copied into that directory.

ansible testhost -m copy -a "src=/etc/passwd dest=/tmp/123"
Here /tmp/123 ends up identical to /etc/passwd on the source machine; but if /tmp/123 already exists as a directory on the target, a passwd file is created inside /tmp/123.
[root@linux-node3 ~]# ansible testhost -m copy -a "src=/etc/passwd dest=/tmp/123"
127.0.0.1 | SUCCESS => {
"changed": true, 
"checksum": "c00424110eb9171690cece7351bf9d5deaee58b2", 
"dest": "/tmp/123", 
"failed": false, 
"gid": 0, 
"group": "root", 
"md5sum": "e46b4a750e50833f3f433c3767c3dd63", 
"mode": "0644", 
"owner": "root", 
"size": 1199, 
"src": "/root/.ansible/tmp/ansible-tmp-1509963884.89-248434645887610/source", 
"state": "file", 
"uid": 0
}
192.168.137.40 | SUCCESS => {
"changed": true, 
"checksum": "c00424110eb9171690cece7351bf9d5deaee58b2", 
"dest": "/tmp/123/passwd", 
"failed": false, 
"gid": 0, 
"group": "root", 
"md5sum": "e46b4a750e50833f3f433c3767c3dd63", 
"mode": "0644", 
"owner": "root", 
"size": 1199, 
"src": "/root/.ansible/tmp/ansible-tmp-1509963884.89-157399271183470/source", 
"state": "file", 
"uid": 0
}
[root@linux-node3 ~]#
[root@linux-node4 ~]# ll /tmp/123
总用量 4
-rw-r--r-- 1 root root 1199 11月  6 18:24 passwd
[root@linux-node4 ~]# 

2.9 Running scripts remotely with Ansible

First create a shell script:
vim /tmp/test.sh  // contents:
#!/bin/bash
echo `date` > /tmp/ansible_test.txt
Then push the script out to all the machines:
ansible testhost -m copy -a "src=/tmp/test.sh dest=/tmp/test.sh mode=0755"
    [root@linux-node3 tmp]# ansible testhost -m copy -a "src=/tmp/test.sh dest=/tmp/test.sh mode=755"
127.0.0.1 | SUCCESS => {
    "changed": true, 
    "checksum": "a094de1e64b947adffdcb4a10923340c5d44122f", 
    "failed": false, 
    "gid": 0, 
    "group": "root", 
    "mode": "0755", 
    "owner": "root", 
    "path": "/tmp/test.sh", 
    "size": 48, 
    "state": "file", 
    "uid": 0
}
192.168.137.40 | SUCCESS => {
    "changed": true, 
    "checksum": "a094de1e64b947adffdcb4a10923340c5d44122f", 
    "dest": "/tmp/test.sh", 
    "failed": false, 
    "gid": 0, 
    "group": "root", 
    "md5sum": "1f4604666d1ffdb2d23976057f9ac59d", 
    "mode": "0755", 
    "owner": "root", 
    "size": 48, 
    "src": "/root/.ansible/tmp/ansible-tmp-1510033554.01-7584515919060/source", 
    "state": "file", 
    "uid": 0
}
[root@linux-node3 tmp]# ll
总用量 7780
-rw-r--r-- 1 root   root         0 11月  6 16:19 111.txt
-rw-r--r-- 1 root   root      1199 11月  6 18:24 123
drwxr-xr-x 2 root   root         6 11月  6 16:19 1233
srwx------ 1 mongod mongod       0 11月  7 11:52 mongodb-27017.sock
drwxr-x--- 2 root   root        24 11月  6 15:28 testdir
-rwxr-xr-x 1 root   root        48 11月  7 13:43 test.sh

[root@linux-node4 ~]# ls -l  /tmp/test.sh 
-rwxr-xr-x 1 root root 48 11月  7 13:45 /tmp/test.sh
Finally, run the script everywhere:
ansible testhost -m shell -a "/tmp/test.sh"
    [root@linux-node3 tmp]# ansible testhost -m shell -a "/tmp/test.sh"
192.168.137.40 | SUCCESS | rc=0 >>


127.0.0.1 | SUCCESS | rc=0 >>


[root@linux-node3 tmp]# 
The shell module also supports remote commands with pipes:
ansible testhost -m shell -a "cat /etc/passwd | wc -l"
[root@linux-node3 tmp]# ansible testhost -m shell -a "cat /etc/passwd | wc -l"
192.168.137.40 | SUCCESS | rc=0 >>
22

127.0.0.1 | SUCCESS | rc=0 >>
25

[root@linux-node3 tmp]# 

3.0 Managing cron with Ansible

ansible testhost -m cron -a "name='test cron' job='/bin/touch /tmp/1212.txt'  weekday=6"
[root@linux-node3 tmp]# ansible testhost -m cron -a "name='test cron' job='/bin/touch /tmp/1212.txt' weekday=6"
192.168.137.40 | SUCCESS => {
    "changed": true, 
    "envs": [], 
    "failed": false, 
    "jobs": [
        "test cron"
    ]
}
127.0.0.1 | SUCCESS => {
    "changed": true, 
    "envs": [], 
    "failed": false, 
    "jobs": [
        "test cron"
    ]
}
[root@linux-node3 tmp]# crontab -l
#Ansible: test cron
* * * * 6 /bin/touch /tmp/1212.txt
[root@linux-node3 tmp]#

[root@linux-node4 ~]# crontab -l
#Ansible: test cron
* * * * 6 /bin/touch /tmp/1212.txt
[root@linux-node4 ~]# 
Note: once a cron job is managed by an automation tool, do not edit it by hand; otherwise the tool can no longer keep it in sync.

To delete the cron job, just add the field state=absent:
ansible testhost -m cron -a "name='test cron' state=absent"
[root@linux-node3 tmp]# ansible testhost -m cron -a "name='test cron' state=absent"
127.0.0.1 | SUCCESS => {
    "changed": true, 
    "envs": [], 
    "failed": false, 
    "jobs": []
}
192.168.137.40 | SUCCESS => {
    "changed": true, 
    "envs": [], 
    "failed": false, 
    "jobs": []
}
[root@linux-node3 tmp]# crontab -l
[root@linux-node3 tmp]#

[root@linux-node4 ~]# crontab -l
[root@linux-node4 ~]#
The other time fields are: minute, hour, day, month (weekday was shown above).
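For reference, what the cron module writes is an ordinary crontab line preceded by an `#Ansible:` marker comment carrying the `name=` value. A local sketch of the generated format, using hypothetical values (minute=30 hour=2 weekday=1; fields not given default to `*`):

```shell
# Hypothetical field values for illustration only
name="test cron"; minute=30; hour=2; weekday=1
printf '#Ansible: %s\n%s %s * * %s /bin/touch /tmp/1212.txt\n' \
    "$name" "$minute" "$hour" "$weekday"
# prints:
# #Ansible: test cron
# 30 2 * * 1 /bin/touch /tmp/1212.txt
```

The marker comment is how ansible finds and updates (or removes) its own entries later, which is exactly why hand-editing them breaks synchronization.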

3.1 Installing packages and managing services with ansible

ansible testhost -m yum -a "name=httpd" 
[root@linux-node3 tmp]# ansible testhost -m yum -a "name=httpd"
127.0.0.1 | SUCCESS => {
    "changed": false, 
    "failed": false, 
    "msg": "", 
    "rc": 0, 
    "results": [
        "httpd-2.4.6-67.el7.centos.6.x86_64 providing httpd is already installed"
    ]
}
192.168.137.40 | SUCCESS => {
    "changed": false, 
    "failed": false, 
    "msg": "", 
    "rc": 0, 
    "results": [
        "httpd-2.4.6-67.el7.centos.6.x86_64 providing httpd is already installed"
    ]
}
[root@linux-node3 tmp]#
[root@linux-node3 tmp]# rpm -qa | grep httpd
httpd-2.4.6-67.el7.centos.6.x86_64
httpd-tools-2.4.6-67.el7.centos.6.x86_64
[root@linux-node3 tmp]#
After name you can also add state=installed/removed.

ansible testhost -m service -a "name=httpd state=started enabled=yes" 
[root@linux-node4 ~]# ps -aux | grep httpd
root        968  0.0  0.4 221936  4984 ?        Ss   11:52   0:01 /usr/sbin/httpd -DFOREGROUND
apache     1026  0.0  0.2 221936  2948 ?        S    11:52   0:00 /usr/sbin/httpd -DFOREGROUND
apache     1027  0.0  0.2 221936  2948 ?        S    11:52   0:00 /usr/sbin/httpd -DFOREGROUND
apache     1028  0.0  0.2 221936  2948 ?        S    11:52   0:00 /usr/sbin/httpd -DFOREGROUND
apache     1029  0.0  0.2 221936  2948 ?        S    11:52   0:00 /usr/sbin/httpd -DFOREGROUND
apache     1032  0.0  0.2 221936  2948 ?        S    11:52   0:00 /usr/sbin/httpd -DFOREGROUND
root       2961  0.0  0.0 112664   972 pts/0    R+   16:15   0:00 grep --color=auto httpd
[root@linux-node4 ~]#
Here name is the service name on the CentOS system, which you can look up with chkconfig --list.

Using the Ansible documentation:
ansible-doc -l   # list all modules

ansible-doc cron  # view the documentation for a given module

3.5 Using ansible playbooks

A playbook is essentially the ad-hoc module calls written into a configuration file. Example:
vi  /etc/ansible/test.yml //add the following content
---
- hosts: 192.168.137.40
  remote_user: root
  tasks:
    - name: test_playbook
      shell: touch /tmp/tomcat.sh
Notes: the first line must be three dashes. The hosts parameter specifies which hosts to operate on; multiple hosts can be separated by commas, or you can use a host group defined in /etc/ansible/hosts.

The remote_user parameter specifies which user to log in to the remote hosts as.

tasks defines the task list; the name under it describes the task and is printed during execution; shell is the name of the ansible module being called.

Run: ansible-playbook test.yml
[root@linux-node3 ansible]# ansible-playbook test.yml

PLAY [192.168.137.40] ***************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************
ok: [192.168.137.40]

TASK [test_playbook] ****************************************************************************************************
 [WARNING]: Consider using file module with state=touch rather than running touch

changed: [192.168.137.40]

PLAY RECAP **************************************************************************************************************
192.168.137.40             : ok=2    changed=1    unreachable=0    failed=0   

[root@linux-node3 ansible]#
[root@linux-node4 ~]# ls -l /tmp/tomcat.sh 
-rw-r--r-- 1 root root 0 11月  7 16:24 /tmp/tomcat.sh
An example that creates a user:
vi /etc/ansible/create_user.yml //add the following content
---
- name: create_user
  hosts: 192.168.137.40
  user: root
  gather_facts: false
  vars:
    - user: "test"
  tasks:
    - name: create user
      user: name="{{ user }}"
Notes: the top-level name summarizes what the playbook does and is printed during execution; it can be omitted. gather_facts controls whether the setup module runs first to collect host facts; enable it when later tasks use those facts. vars defines variables, here a user variable with the value test; note that variable values must be quoted. user invokes the user module; name is a parameter of that module, and the new account's name is taken from the user variable defined above.
[root@linux-node3 ansible]# ansible-playbook create_user.yml 

PLAY [create_user] ******************************************************************************************************

TASK [create user] ******************************************************************************************************
changed: [192.168.137.40]

PLAY RECAP **************************************************************************************************************
192.168.137.40             : ok=1    changed=1    unreachable=0    failed=0   

[root@linux-node3 ansible]#
[root@linux-node4 ~]# cat /etc/passwd | grep test
test:x:1000:1000::/home/test:/bin/bash
[root@linux-node4 ~]#

3.6 Loops in playbooks

vi /etc/ansible/while.yml //add the following content
---
- hosts: 192.168.137.40
  user: root
  tasks:
    - name: change mode for files
      file: path=/tmp/{{ item }} state=touch mode=600
      with_items:
        - 1.txt
        - 2.txt
        - 3.txt
Note: with_items is the list of items being looped over.
Run: ansible-playbook while.yml
[root@linux-node3 ansible]# ansible-playbook while.yml 

PLAY [192.168.137.40] ***************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************
ok: [192.168.137.40]

TASK [change mode for files] ********************************************************************************************
changed: [192.168.137.40] => (item=1.txt)
changed: [192.168.137.40] => (item=2.txt)
changed: [192.168.137.40] => (item=3.txt)

PLAY RECAP **************************************************************************************************************
192.168.137.40             : ok=2    changed=1    unreachable=0    failed=0   

[root@linux-node3 ansible]#

[root@linux-node4 ~]# ll /tmp/
-rw------- 1 root root       0 11月  7 16:49 1.txt
-rw------- 1 root root       0 11月  7 16:49 2.txt
-rw------- 1 root root       0 11月  7 16:49 3.txt
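What the loop did on each target is equivalent to the following local sketch (using a demo_ prefix so it does not clobber the files created above):

```shell
# One iteration per with_items entry: touch the file, then set mode 600
for f in 1.txt 2.txt 3.txt; do
    touch "/tmp/demo_$f"
    chmod 600 "/tmp/demo_$f"
done
ls -l /tmp/demo_1.txt /tmp/demo_2.txt /tmp/demo_3.txt
```

The file module does both steps in one go per item: state=touch creates the file and mode=600 sets its permissions.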

3.7 Conditionals in playbooks

vi /etc/ansible/when.yml //add the following content
---
- hosts: testhost
  user: root
  gather_facts: True
  tasks:
    - name: use when
      shell: touch /tmp/when.txt
      when: ansible_ens33.ipv4.address == "192.168.137.40"
Note: ansible 192.168.137.40 -m setup displays all the facts gathered for that host:
[root@linux-node3 ansible]# ansible 192.168.137.40 -m setup
192.168.137.40 | SUCCESS => {
    "ansible_facts": {
        "ansible_all_ipv4_addresses": [
            "192.168.137.40"
        ], 
        "ansible_all_ipv6_addresses": [
            "fe80::59b3:9200:1bdd:d776"
        ], 
        "ansible_apparmor": {
            "status": "disabled"
        }, 
        "ansible_architecture": "x86_64", 
        "ansible_bios_date": "07/02/2015", 
        "ansible_bios_version": "6.00", 
        "ansible_cmdline": {
            "BOOT_IMAGE": "/vmlinuz-3.10.0-514.el7.x86_64", 
            "biosdevname": "0", 
            "crashkernel": "auto", 
            "net.ifname": "0", 
            "quiet": true, 
            "rd.lvm.lv": "cl/swap", 
            "rhgb": true, 
            "ro": true, 
            "root": "/dev/mapper/cl-root"
            .......
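Any key in that facts output can drive a `when:` condition. As a hedged variant (the task and file name are illustrative), a condition matching on the address list shown above could be written like this:

```shell
# Write a demo playbook; the when: expression tests membership in the
# ansible_all_ipv4_addresses fact list shown in the setup output
cat > /tmp/when_demo.yml <<'EOF'
---
- hosts: testhost
  user: root
  gather_facts: True
  tasks:
    - name: run only on node4
      shell: touch /tmp/when.txt
      when: "'192.168.137.40' in ansible_all_ipv4_addresses"
EOF
cat /tmp/when_demo.yml
```

Matching on the full address list avoids depending on the interface name (ens33) being the same on every host.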
            
            

3.8 Handlers in playbooks

A handler is an operation to run after a task has changed something on the server, for example reloading a service after its configuration file was modified.
vi /etc/ansible/handlers.yml //add the following content
---
- name: handlers test
  hosts: 192.168.137.40
  user: root
  tasks:
    - name: copy file
      copy: src=/etc/passwd dest=/tmp/aaa.txt
      notify: test handlers
  handlers:
    - name: test handlers
      shell: echo "111111" >> /tmp/aaa.txt
[root@linux-node3 ansible]# ansible-playbook handlers.yml 

PLAY [handlers test] ****************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************
ok: [192.168.137.40]

TASK [copy file] ********************************************************************************************************
changed: [192.168.137.40]

RUNNING HANDLER [test handlers] *****************************************************************************************
changed: [192.168.137.40]

PLAY RECAP **************************************************************************************************************
192.168.137.40             : ok=3    changed=2    unreachable=0    failed=0   

[root@linux-node3 ansible]#

[root@linux-node4 ~]# tail -5 /tmp/aaa.txt 
weblogic:x:1000:1000::/home/weblogic:/bin/bash
mysql:x:1001:1001::/home/mysql:/bin/bash
mongod:x:996:994:mongod:/var/lib/mongo:/bin/false
apache:x:48:48:Apache:/usr/share/httpd:/sbin/nologin
111111
[root@linux-node4 ~]#
Note: the handler is only triggered when the copy task actually makes a change. If the source and destination files are already identical, the shell command under handlers will not run. This pattern is well suited to restarting a service after a config file change.
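That changed-then-notify logic can be sketched locally (paths here are demo stand-ins; /etc/passwd plays the role of the managed config file):

```shell
src=/etc/passwd
dest=/tmp/demo_aaa.txt
rm -f "$dest"
if ! cmp -s "$src" "$dest"; then       # dest differs -> the task is "changed"
    cp "$src" "$dest"                  # the copy task
    echo "handler: reloading service"  # the notified handler runs
else
    echo "unchanged, handler skipped"  # handler does not run
fi
```

Run it a second time and the else branch fires: nothing changed, so no handler. That is exactly the idempotent behavior the copy/notify pair gives you.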

3.9 Installing nginx with a playbook

First compile, install, and package nginx on one machine, then use ansible to push it out.
cd /etc/ansible   # enter the ansible configuration directory

mkdir  nginx_install   # create an nginx_install directory for easier management

cd nginx_install

mkdir -p roles/{common,install}/{handlers,files,meta,tasks,templates,vars}

Notes: there are two roles under the roles directory: common for preparatory steps and install for installing nginx. Each role contains several directories: handlers holds operations to run when something changes, typically restarting a service after a config change; files holds files used during installation; meta holds descriptive information such as role dependencies; tasks holds the core task files; templates holds template files such as config files and init scripts; vars holds variable definitions.
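The brace expansion in that mkdir command is bash shorthand for a nested loop; spelled out explicitly (and run in a throwaway directory so this sketch is harmless):

```shell
cd "$(mktemp -d)"
# Equivalent to: mkdir -p roles/{common,install}/{handlers,files,meta,tasks,templates,vars}
for role in common install; do
    for sub in handlers files meta tasks templates vars; do
        mkdir -p "roles/$role/$sub"
    done
done
find roles -mindepth 2 -type d | sort   # 12 leaf directories, 2 roles x 6 subdirs
```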

Compile and install nginx on a machine beforehand, with its init script and configuration file set up.
Once installed, tar up the nginx directory and place it under /etc/ansible/nginx_install/roles/install/files/ as nginx.tar.gz:
[root@linux-node3 local]# tar -zcvf nginx.tar.gz --exclude "nginx.conf" --exclude "vhost" nginx/
// --exclude filters out the files we do not want in the archive
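A throwaway demo of those two --exclude patterns (file names are illustrative stand-ins for a real nginx tree):

```shell
cd "$(mktemp -d)"
mkdir -p nginx/conf nginx/vhost
touch nginx/conf/nginx.conf nginx/conf/mime.types nginx/vhost/a.conf
tar -zcf nginx.tar.gz --exclude "nginx.conf" --exclude "vhost" nginx/
tar -ztf nginx.tar.gz   # nginx.conf and the vhost/ directory are gone
```

nginx.conf and vhost are excluded precisely because they will be delivered separately from templates, so the tarball holds only the parts that are identical on every host.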
The init script and configuration file both go under /etc/ansible/nginx_install/roles/install/templates:

[root@linux-node3 local]# mv nginx.tar.gz /etc/ansible/nginx_install/roles/install/files/
[root@linux-node3 local]# cp nginx/conf/nginx.conf /etc/ansible/nginx_install/roles/install/templates/
[root@linux-node3 local]# cp /etc/init.d/nginx /etc/ansible/nginx_install/roles/install/templates/

cd  /etc/ansible/nginx_install/roles

Define the tasks for common; nginx needs a few dependency packages:
vim  ./common/tasks/main.yml //content as follows
- name: Install initializtion require software
  yum: name={{ item }} state=installed
  with_items:
    - zlib-devel
    - pcre-devel
The same dependencies can also be installed in a single task instead of a loop:
- name: Install initializtion require software
  yum: name=zlib-devel,pcre-devel state=installed
Define the variables:
vim /etc/ansible/nginx_install/roles/install/vars/main.yml //content as follows
nginx_user: www
nginx_port: 80
nginx_basedir: /usr/local/nginx
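These vars are referenced elsewhere as {{ nginx_user }} and so on. As a hedged illustration (this template file name is made up, not part of the role above), a jinja2 template consumed via the template module could look like:

```shell
# Write a demo jinja2 template; the quoted heredoc keeps {{ }} literal
cat > /tmp/demo_nginx.conf.j2 <<'EOF'
user  {{ nginx_user }};
server {
    listen {{ nginx_port }};
    root   {{ nginx_basedir }}/html;
}
EOF
grep -c '{{' /tmp/demo_nginx.conf.j2   # three lines reference a variable
```

When the template module renders this file, each {{ ... }} placeholder is replaced with the value from vars/main.yml before the result is copied to the target host.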
First, copy all the files that will be used to the target machines.
//1. If the deployed nginx fails to start with "../sbin/nginx: error while loading shared libraries: libcrypto.so.6: cannot open shared object file: No such file or directory", the fix is: yum install openssl*
//2. If nginx reports "nginx: [emerg] getpwnam("nginx") failed", the fix is: useradd -s /sbin/nologin -M nginx
vim   /etc/ansible/nginx_install/roles/install/tasks/copy.yml //content as follows
- name: Copy Nginx Software
  copy: src=nginx.tar.gz dest=/tmp/nginx.tar.gz owner=root group=root
- name: Uncompression Nginx Software
  shell: tar zxf /tmp/nginx.tar.gz -C /usr/local/
- name: Copy Nginx Start Script
  template: src=nginx dest=/etc/init.d/nginx owner=root group=root mode=0755
- name: Copy Nginx Config
  template: src=nginx.conf dest={{ nginx_basedir }}/conf/ owner=root group=root mode=0644
Next, create the user, start the service, and delete the tarball:
vim   /etc/ansible/nginx_install/roles/install/tasks/install.yml //content as follows
- name: Create Nginx User
  user: name={{ nginx_user }} state=present createhome=no shell=/sbin/nologin
- name: Start Nginx Service
  shell: /etc/init.d/nginx start
- name: Add Boot Start Nginx Service
  shell: chkconfig --level 345 nginx on
- name: Delete Nginx compression files
  shell: rm -rf /tmp/nginx.tar.gz
Then create main.yml to call copy.yml and install.yml:
vim   /etc/ansible/nginx_install/roles/install/tasks/main.yml //content as follows
- include: copy.yml
- include: install.yml
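Recent Ansible releases deprecate bare `include`; an equivalent main.yml using the static `import_tasks` form would be (written to a demo path here rather than into the role):

```shell
cat > /tmp/demo_main.yml <<'EOF'
- import_tasks: copy.yml
- import_tasks: install.yml
EOF
cat /tmp/demo_main.yml
```

import_tasks is resolved statically at playbook parse time, which matches how include was used here; include_tasks is the dynamic alternative.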
That completes the two roles, common and install. Next, define an entry playbook:

vim  /etc/ansible/nginx_install/install.yml  //content as follows
---
- hosts: testhost
  remote_user: root
  gather_facts: True
  roles:
    - common
    - install
Run: ansible-playbook /etc/ansible/nginx_install/install.yml
[root@linux-node3 nginx_install]# ansible-playbook /etc/ansible/nginx_install/install.yml 
[DEPRECATION WARNING]: The use of 'include' for tasks has been deprecated. Use 'import_tasks' for static inclusions or 
'include_tasks' for dynamic inclusions. This feature will be removed in a future release. Deprecation warnings can be 
disabled by setting deprecation_warnings=False in ansible.cfg.
[DEPRECATION WARNING]: include is kept for backwards compatibility but usage is discouraged. The module documentation 
details page may explain more about this rationale.. This feature will be removed in a future release. Deprecation 
warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.

PLAY [testhost] *********************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************
ok: [192.168.137.40]
ok: [127.0.0.1]

TASK [common : Install initializtion require software] ******************************************************************
ok: [192.168.137.40] => (item=[u'zlib-devel', u'pcre-devel'])
ok: [127.0.0.1] => (item=[u'zlib-devel', u'pcre-devel'])

TASK [install : Copy Nginx Software] ************************************************************************************
ok: [127.0.0.1]
changed: [192.168.137.40]

TASK [install : Uncompression Nginx Software] ***************************************************************************
 [WARNING]: Consider using unarchive module rather than running tar

changed: [192.168.137.40]
changed: [127.0.0.1]

TASK [install : Copy Nginx Start Script] ********************************************************************************
ok: [127.0.0.1]
ok: [192.168.137.40]

TASK [install : Copy Nginx Config] **************************************************************************************
ok: [192.168.137.40]
ok: [127.0.0.1]

TASK [install : Create Nginx User] **************************************************************************************
ok: [192.168.137.40]
ok: [127.0.0.1]

TASK [install : Start Nginx Service] ************************************************************************************
changed: [192.168.137.40]
changed: [127.0.0.1]

TASK [install : Add Boot Start Nginx Service] ***************************************************************************
changed: [192.168.137.40]
changed: [127.0.0.1]

TASK [install : Delete Nginx compression files] *************************************************************************
 [WARNING]: Consider using file module with state=absent rather than running rm

changed: [127.0.0.1]
changed: [192.168.137.40]

PLAY RECAP **************************************************************************************************************
127.0.0.1                  : ok=10   changed=4    unreachable=0    failed=0   
192.168.137.40             : ok=10   changed=5    unreachable=0    failed=0   

[root@linux-node3 nginx_install]#

[root@linux-node4 lib64]# ps -ef | grep nginx
root       7111      1  0 14:37 ?        00:00:00 nginx: master process /usr/local/nginx/sbin/nginx -c /usr/local/nginx/conf/nginx.conf
nginx      7114   7111  0 14:37 ?        00:00:00 nginx: worker process
root       7227   2177  0 14:38 pts/0    00:00:00 grep --color=auto nginx
[root@linux-node4 lib64]# netstat -lntp | grep nginx
tcp        0      0 0.0.0.0:80              0.0.0.0:*               LISTEN      7111/nginx: master  
[root@linux-node4 lib64]# 

4.0 Managing configuration files with playbooks

In production, most of the work is managing configuration files; installing packages is mostly a one-time step when setting up the environment. Let's write a playbook that manages the nginx configuration files.
mkdir  -p /etc/ansible/nginx_config/roles/{new,old}/{files,handlers,vars,tasks}

Here new is used when pushing updates and old when rolling back. files holds nginx.conf and the vhost directory; handlers holds the command that reloads the nginx service.

For rollback to work, back up the old configuration before running the playbook. So be strict about managing the old config files: never casually edit configs directly on production machines, and make sure the configs under new/files match what is live.
Start by putting nginx.conf and the vhost directory under files:
cd /usr/local/nginx/conf/

cp -r nginx.conf vhost  /etc/ansible/nginx_config/roles/new/files/

vim /etc/ansible/nginx_config/roles/new/vars/main.yml //define the variables
 nginx_basedir: /usr/local/nginx

vim /etc/ansible/nginx_config/roles/new/handlers/main.yml //define the handler that reloads the nginx service

- name: restart nginx
  shell: /etc/init.d/nginx reload

vim /etc/ansible/nginx_config/roles/new/tasks/main.yml //these are the core tasks

- name: Copy conf file
  copy: src={{ item.src }} dest={{ nginx_basedir }}/{{ item.dest }} backup=yes owner=root group=root mode=0644
  with_items:
    - { src: nginx.conf, dest: conf/nginx.conf }
    - { src: vhost, dest: conf/ }
  notify: restart nginx
vim /etc/ansible/nginx_config/update.yml // finally, define the entry playbook
---
- hosts: testhost
  user: root
  roles:
  - new
Run: ansible-playbook /etc/ansible/nginx_config/update.yml
[root@linux-node3 conf]# ansible-playbook /etc/ansible/nginx_config/update.yml 

PLAY [testhost] *********************************************************************************************************

TASK [Gathering Facts] **************************************************************************************************
ok: [127.0.0.1]
ok: [192.168.137.40]

TASK [new : Copy conf file] *********************************************************************************************
ok: [127.0.0.1] => (item={u'dest': u'conf/nginx.conf', u'src': u'nginx.conf'})
ok: [192.168.137.40] => (item={u'dest': u'conf/nginx.conf', u'src': u'nginx.conf'})

PLAY RECAP **************************************************************************************************************
127.0.0.1                  : ok=2    changed=0    unreachable=0    failed=0   
192.168.137.40             : ok=2    changed=0    unreachable=0    failed=0   

[root@linux-node3 conf]#
For rollback, the corresponding role is old; seed it from new:
rsync -av  /etc/ansible/nginx_config/roles/new/ /etc/ansible/nginx_config/roles/old/
Rolling back simply overwrites with the old configs and reloads the nginx service. Before every change to the nginx configs, back them up into old, i.e. into /etc/ansible/nginx_config/roles/old/files.
vim /etc/ansible/nginx_config/rollback.yml // define the rollback entry playbook
---
- hosts: testhost
  user: root
  roles:
  - old 
