Ansible Playbook Detailed Tutorial (Notes)



Running a playbook

ansible-playbook -i "inventory_file" playbook.yml -f 10   (parallelism level 10)

Passing extra variables

-e "temp_file=${uuid}"     或者:
--extra­vars "version=1.23.45 other_variable=foo" ­­ 或者:

--extra­vars  '{"pacman":"mrs","ghosts":["inky","pinky","clyde","sue"]}'  或者:

--extra­vars  "@some_file.json"

Before running a playbook, if you want to see which hosts it will affect, you can do this:

ansible-playbook playbook.yml --list-hosts


Practical examples: https://github.com/ansible/ansible-examples

A playbook.yml file can look like this:

---
- hosts: webservers,webservers2
  vars:
    http_port: 80
    max_clients: 200
  remote_user: root
  tasks:
  - name: ensure apache is at the latest version
    yum: pkg=httpd state=latest
    remote_user: yourname
  - name: write the apache config file
    template: src=/srv/httpd.j2 dest=/etc/httpd.conf
    notify:
    - restart apache
  - name: ensure apache is running
    service: name=httpd state=started
  handlers:
    - name: restart apache
      service: name=httpd state=restarted

Example inventory file:

[all:vars] 
DOCKERREGISTRY=kubernetes-master
DOCKERREGISTRYPROT=5000
IMAGESVERSION=0.0.4-SNAPSHOT
[k8s_emu_vip_node]
k8s_emu_vip_node1 ansible_ssh_host=10.222.2.201 ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=gsta123 ansible_ssh_extra_args="-o StrictHostKeyChecking=no"

[k8s_emu_node]
k8s_emu_node1 ansible_ssh_host=1.1.1.1 ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=gsta123 ansible_ssh_extra_args="-o StrictHostKeyChecking=no"
k8s_emu_node2 ansible_ssh_host=1.1.1.2 ansible_connection=ssh ansible_ssh_user=root ansible_ssh_pass=gsta123 ansible_ssh_extra_args="-o StrictHostKeyChecking=no"

 

Operations can be performed inside {{ }}: here the lookup plugin reads the file named by network_test_conf and parses it as YAML into a variable

{{ lookup('file', network_test_conf) | from_yaml }}
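
A minimal sketch of storing the loaded data in a fact so later tasks can reference it (the set_fact task and the variable name network_test_params are illustrative additions, not from the original note):

- name: Load test parameters from the YAML file into a fact
  set_fact:
    network_test_params: "{{ lookup('file', network_test_conf) | from_yaml }}"

Later tasks can then use {{ network_test_params }} like any other variable.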

 

Registering nodes: adding hosts and groups with add_host

- name: Register ansible node list
  add_host:
    hostname: '{{item["name"]}}'
    groups: 'fio-server'
    # ansible_ssh_host: '{{item.addresses[network_name] | selectattr("OS-EXT-IPS:type","equalto", "floating")| map(attribute="addr") | join(",")}}'
    ansible_ssh_host: '{{item.addresses[network_name] | selectattr("OS-EXT-IPS:type","equalto", "fixed")| map(attribute="addr") | join(",")}}'
    ansible_ssh_port: "22"
    ansible_ssh_user: '{{image_ssh_user}}'
    ansible_ssh_pass: '{{image_ssh_pass}}'
    ansible_ssh_extra_args: "-o StrictHostKeyChecking=no"
    ansible_sftp_extra_args: "-o StrictHostKeyChecking=no"
    #ansible_ssh_extra_args: "-o ProxyCommand='{{proxy}}' -o StrictHostKeyChecking=no"
    #ansible_sftp_extra_args: "-o ProxyCommand='{{proxy}}' -o StrictHostKeyChecking=no"
  with_items: "{{result.json.servers}}"

copy module [copies files from the local machine to the managed hosts]

- name: Copy the keyfile for authentication
  copy: src=/wjf/weijunfeng dest={{ mongodb_datadir_prefix }}/secret owner=mongod group=mongod mode=0400

fetch module: copies files from the managed hosts back to the local machine. flat: "yes" means no per-host directory is created automatically; if it is false, a directory named after each host is created automatically.

fetch:
  src: "/home/fio-result"
  dest: "./fio-test-result/{{ansible_ssh_host}}/"
  flat: "yes"

Creating a file or directory (file module):

- name: Create data directory for mongoc configuration server
  file: path={{ mongodb_datadir_prefix }}/configdb state=directory owner=mongod group=mongod

path: the path; generally used when creating or deleting files and directories

state: the operation to perform:

       directory creates a directory,

       link creates a symlink (link also needs a source and destination path; see the sketch after this list),

       touch creates a file,

       absent deletes the file
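
A minimal sketch of the link case (both paths are hypothetical):

- name: Point the 'current' symlink at a release directory
  file: src=/opt/app/releases/1.0 dest=/opt/app/current state=link

Here src is what the link points to and dest is where the link itself is created.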

Running shell commands

- name: Initialize the replication set
  shell: /usr/bin/mongo --port "{{ mongod_port }}" /tmp/repset_init.js

Pausing / waiting

- name: pause
  pause: seconds=20

service module, for example starting nginx

- name: Make sure nginx starts on boot
  service: name=nginx state=started enabled=yes

enabled: note this one; it controls whether the service is started automatically at boot (roughly the equivalent of chkconfig on / systemctl enable). It is independent of state, which controls whether the service is running right now.

Unarchive module

- unarchive: src=foo.tgz dest=/var/lib/foo  
- unarchive: src=/tmp/foo.zip dest=/usr/local/bin copy=no
- unarchive: src=https://example.com/example.zip dest=/usr/local/bin copy=no

unarchive module

Used to extract archives. The module has the following options (a combined sketch follows the option list):

copy: whether to copy the archive to the remote host before extracting; defaults to yes. If no, the archive must already exist on the target host.

creates: a filename; if that file already exists, the unarchive step is not executed

dest: a path on the remote host where the archive is extracted

group: group ownership of the extracted files or directories

list_files: if yes, the files inside the archive are listed; defaults to no (option added in 2.0)

mode: permissions of the extracted files

src: if copy is yes, the source path of the archive to copy

owner: owner of the extracted files or directories
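
A sketch combining several of these options; all paths, owner and group names here are hypothetical:

- name: Unpack an application archive only if it has not been unpacked yet
  unarchive:
    src: /tmp/myapp.tgz
    dest: /opt/myapp
    creates: /opt/myapp/bin/start.sh
    owner: myapp
    group: myapp
    mode: "0755"

Because of creates, re-running the play leaves an already-extracted tree untouched.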

yum module

- name: remove epel if installed
  yum:
    name: epel-release
    state: absent
  ignore_errors: true

wait_for module:

Waits for something to happen, such as a database or web container starting up (a path-based sketch follows the option list below).

- name: wait for dnsmasq port 53
  wait_for:
    port: 53
    timeout: 10

port: wait until the given port is up

path: wait until the given file has been created

host: defaults to 127.0.0.1; set it to wait on another remote server

timeout is in seconds

state: defaults to started, i.e. wait for something to start or be created; waiting for deletion or shutdown is also possible. For a port, started ensures the port is open and stopped ensures it is closed; for a file, present or started ensures the file exists, while absent ensures it does not exist.
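
A short sketch of the path/absent combination mentioned above (the lock-file path is hypothetical):

- name: Wait until the application's lock file disappears before continuing
  wait_for:
    path: /var/run/myapp.lock
    state: absent
    timeout: 60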

Git module

Module for working with git repositories

- name: ANSISTRANO | GIT | Update remote repository
  git:
    repo: "{{ ansistrano_git_repo }}"
    dest: "{{ ansistrano_deploy_to }}/repo"
    version: "{{ ansistrano_git_branch }}"
    accept_hostkey: true
    update: yes
    force: yes
  register: ansistrano_git_result_update
  when: ansistrano_git_identity_key_path|trim == '' and ansistrano_git_identity_key_remote_path|trim == ''

repo: address of the git repository

dest: the directory the repository is checked out into

version: which version (branch, tag, commit) to check out

accept_hostkey: if ssh_opts contains "-o StrictHostKeyChecking=no", this parameter can be omitted; if set to true or yes, the host key is added

update: whether to pull new revisions

force: if yes, the local repository is always overwritten by the remote

get_url module

In other words, a download operation:

- name: codelivery | download | Download artifact
  get_url:
    url: "{{ codelivery_product_url }}"
    dest: "{{ codelivery_releases_dir }}/{{ codelivery_product_url | basename }}"
    force_basic_auth: "{{ codelivery_download_force_basic_auth | default(omit) }}"
    headers: "{{ codelivery_download_headers | default(omit) }}"

url: the HTTP address

dest: the path on the target machine to download the file to

force_basic_auth: whether to send credentials before the server asks for them

headers: request headers

uri module

A more powerful HTTP request module than get_url: it can issue GET, POST, PUT and other request methods, and it can also process the response status and body (a sketch follows the example below).

 

- name: codelivery | healthcheck | urlcheck status==200?
  uri:
    url: "http://{{ codelivery_urlcheck_addr }}:{{ codelivery_urlcheck_port }}{{ codelivery_urlcheck_url }}"
    method: GET
    headers:
      Host: "{{ codelivery_urlcheck_host }}"
    timeout: 10
    status_code: 200
    return_content: no
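
A minimal sketch of handling the response body: register the result and enable return_content, then read it from the registered variable (the URL and variable names are hypothetical):

- name: Query a health endpoint and keep the response body
  uri:
    url: "http://127.0.0.1:8080/health"
    method: GET
    return_content: yes
  register: health_result

- debug: msg="service reports {{ health_result.content }}"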

debug module: printing out a variable

- debug: msg="heat_failed_reason={{reason.stdout}}"   
when: result.stdout=="CREATE_FAILED"

Commonly used facts (a conditional-use sketch follows the list)

ansible_distribution=Ubuntu

ansible_distribution_version=14.04

ansible_distribution_major_version: the major version number of the OS

ansible_os_family: the OS family ('RedHat', 'Debian', 'FreeBSD')
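
A minimal sketch of using these facts in conditionals (the package names are only illustrative):

- name: install Apache on RedHat-family hosts
  yum: name=httpd state=present
  when: ansible_os_family == 'RedHat'

- name: install Apache on Debian-family hosts
  apt: name=apache2 state=present
  when: ansible_os_family == 'Debian'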

Defining a local variable and assigning it a value (set_fact)

- name: Define nginx_user.
  set_fact:
    nginx_user: "{{ __nginx_user }}"
  when: nginx_user is not defined

Running commands through sudo is supported:

---
- hosts: webservers
  remote_user: yourname
  sudo: yes

You can also log in and then sudo to a user other than root:

---
- hosts: webservers
  remote_user: yourname
  sudo: yes
  sudo_user: postgres

If a password is needed for sudo, pass the --ask-sudo-pass (-K) option when running ansible-playbook. If a playbook that uses sudo seems to hang, it is probably stuck at the sudo prompt; press Control-C to kill the stuck task and run it again.
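
For example (the playbook name here is only an illustration):

ansible-playbook playbook.yml --ask-sudo-pass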

service module

tasks:
- name: make sure apache is running
  service: name=httpd state=started

command and shell do not use key=value style parameters: [running commands, linux commands]

tasks:
- name: disable selinux
  command: /sbin/setenforce 0

When using the command and shell modules, the return code matters: if a command exits with a non-zero code even when it succeeds, Ansible will treat the task as failed. You can ignore that as shown in the examples below [ignoring errors, skipping failures]:

tasks:
- name: run this command and ignore the result
  shell: /usr/bin/somecommand || /bin/true

or:

tasks:
- name: run this command and ignore the result
  shell: /usr/bin/somecommand
  ignore_errors: True

Assuming a variable 'vhost' has been defined under 'vars', you can use it with {{ }} like this [using variables] (a sketch defining vhost follows the task below):

tasks:
- name: create a virtual host file for {{ vhost }}
  template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}
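
A minimal sketch showing where such a variable could come from; the value of vhost is hypothetical:

- hosts: webservers
  vars:
    vhost: "example.internal"
  tasks:
    - name: create a virtual host file for {{ vhost }}
      template: src=somefile.j2 dest=/etc/httpd/conf.d/{{ vhost }}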

The entries listed under 'notify' are handlers; for example, restart two services when a file's contents change: [run on change]

- name: template configuration file
  template: src=template.j2 dest=/etc/foo.conf
  notify:
  - restart memcached
  - restart apache

Handlers are also lists of tasks, referenced by name; they are not really different from regular tasks. Handlers are notified by a notifier; if nothing notifies a handler, it does not run. No matter how many tasks notify it, a handler runs only once, after all tasks in the play have completed. Handler example:

handlers:
  - name: restart memcached
    service: name=memcached state=restarted
  - name: restart apache
    service: name=apache state=restarted
  - include: handlers/handlers.yml

The include directive

A task include file, foo.yml, consists of an ordinary list of tasks, like this:

---
# possibly saved as tasks/foo.yml
- name: placeholder foo
  command: /bin/foo
- name: placeholder bar
  command: /bin/bar

Include directives can be mixed in with regular tasks, so you can use them like this (parameters can be appended): [include with parameters]

tasks:
  - include: tasks/foo.yml wp_user=timmy

In Ansible 1.4 and later, the include syntax can be more streamlined:

tasks:
  - { include: wordpress.yml, wp_user: timmy, ssh_keys: [ 'keys/one.txt', 'keys/two.txt' ] }

Starting with version 1.0, Ansible supports another syntax for passing variables to include files, which supports structured variables:

tasks:

  - include: wordpress.yml
    vars:
      wp_user: timmy
      some_list_variable:
        - alpha
        - beta
        - gamma

Roles

Roles build on a known file structure to automatically load certain vars_files, tasks and handlers, which lets you group content by role.

The roles directory structure looks like this:

site.yml
webservers.yml
fooservers.yml
roles/
   common/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/
   webservers/
     files/
     templates/
     tasks/
     handlers/
     vars/
     defaults/
     meta/

Roles are used in a playbook like this:

---
- hosts: webservers
  roles:
    - common
    - webservers

This playbook specifies the following behaviors for a role 'x':

  • If roles/x/tasks/main.yml exists, the tasks listed in it are added to the play
  • If roles/x/handlers/main.yml exists, the handlers listed in it are added to the play
  • If roles/x/vars/main.yml exists, the variables listed in it are added to the play
  • If roles/x/meta/main.yml exists, the "role dependencies" listed in it are added to the roles list (1.3 and later)
  • Any copy tasks can reference files in roles/x/files/ without specifying the path
  • Any script tasks can reference scripts in roles/x/files/ without specifying the path
  • Any template tasks can reference files in roles/x/templates/ without specifying the path
  • Any include tasks can reference files in roles/x/tasks/ without specifying the path

In Ansible 1.4 and later, you can set the roles_path configuration option to control where roles are searched for. Use it to check out all your common roles to one location so they can be conveniently shared across multiple playbook projects:

roles_path = /opt/mysite/roles
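
In ansible.cfg this setting sits under the [defaults] section, e.g. this minimal sketch:

[defaults]
roles_path = /opt/mysite/roles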

Files missing from the roles directory are simply ignored; for instance, it is fine if a role has no 'vars/' directory.

Roles with parameters

---
- hosts: webservers
  roles:
    - common
    - { role: foo_app_instance, dir: '/opt/a', port: 5000 }
    - { role: foo_app_instance, dir: '/opt/b', port: 5001 }

Roles with a condition (when)

---
- hosts: webservers
  roles:
    - { role: some_role, when: "ansible_os_family == 'RedHat'" }

Assigning tags to roles

---
- hosts: webservers
  roles:
    - { role: foo, tags: ["bar", "baz"] }

If the play still contains a 'tasks' section, those tasks are executed only after all roles have been applied.

If you want to define tasks that run before and after the roles, you can do this:

---
- hosts: webservers
  pre_tasks:
    - shell: echo 'hello'
  roles:
    - { role: some_role }
  tasks:
    - shell: echo 'still busy'
  post_tasks:
    - shell: echo 'goodbye'

Role default variables

To create default variables, just add a defaults/main.yml file under the role directory. These variables have the lowest priority of all available variables and can be overridden by variables defined anywhere else, including inventory variables (a minimal sketch of such a file follows).
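
A minimal sketch of such a file; the variable names and values are hypothetical:

---
# roles/x/defaults/main.yml
nginx_user: www-data
nginx_worker_processes: 2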

Role dependencies

"Role dependencies" let a role automatically pull other roles in. They are stored in the meta/main.yml file under the role directory, which should contain a list of roles and the parameters to pass to them. An example roles/myapp/meta/main.yml:

---
dependencies:
  - { role: common, some_parameter: 3 }
  - { role: apache, port: 80 }
  - { role: postgres, dbname: blarg, other_parameter: 12 }

Role dependencies can also be specified as a source-control repository or a tar file, using comma-separated fields: the path, an optional version (tag, commit, branch, etc.), and an optional friendly role name (otherwise Ansible tries to derive the role name from the repository or archive file name):

---
dependencies:
  - { role: 'git+http://git.example.com/repos/role-foo,v1.1,foo' }
  - { role: '/path/to/tar/file.tgz,,friendly-name' }

Role dependencies always run before the role that lists them, and they are resolved recursively. By default, a role added as a dependency is added only once: if another role lists the same role as a dependency, it will not run again. This default can be changed by adding allow_duplicates: yes to the meta/main.yml file. For example, a role named 'car' could add a role named 'wheel' to its dependencies as follows:

---
dependencies:
  - { role: wheel, n: 1 }
  - { role: wheel, n: 2 }
  - { role: wheel, n: 3 }
  - { role: wheel, n: 4 }

And the meta/main.yml of the wheel role contains:

---
allow_duplicates: yes
dependencies:
  - { role: tire }
  - { role: brake }

The resulting order of execution is:

tire(n=1)
brake(n=1)
wheel(n=1)
tire(n=2)
brake(n=2)
wheel(n=2)
...
car

YAML syntax requires that if a value begins with {{ foo }}, the whole line must be quoted, so YAML does not mistake it for the start of a dictionary.

This will not work:

- hosts: app_servers
  vars:
    app_path: {{ base_path }}/22

Do this instead:

- hosts: app_servers
  vars:
    app_path: "{{ base_path }}/22"

Facts are information gathered from the remote systems, for example a remote host's IP address or operating system. The setup command shown further below lists which facts are available: [inspecting host information]

If you do not need any fact data about your hosts and already know everything about your systems, you can turn off fact gathering. This helps Ansible scale when pushing to a large number of systems, and is also handy when using Ansible on experimental platforms. In any playbook this is done as follows: [disabling facts]

- hosts: whatever
  gather_facts: no

Getting the hostname: {{ ansible_nodename }}

 

ansible hostname -m setup   

The command output looks like this:

"ansible_all_ipv4_addresses": ["REDACTED IP ADDRESS"],
"ansible_all_ipv6_addresses": ["REDACTED IPV6 ADDRESS"],
"ansible_architecture": "x86_64",
"ansible_bios_date": "09/20/2012",
"ansible_bios_version": "6.00",
"ansible_cmdline": { "BOOT_IMAGE": "/boot/vmlinuz-3.5.0-23-generic", "quiet": true, "ro": true, "root": "UUID=4195bff4-e157-4e41-8701-e93f0aec9e22", "splash": true }, "ansible_date_time": { "date": "2013-10-02", "day": "02", "epoch": "1380756810", "hour": "19", "iso8601": "2013-10-02T23:33:30Z", "iso8601_micro": "2013-10-02T23:33:30.036070Z", "minute": "33", "month": "10", "second": "30", "time": "19:33:30", "tz": "EDT", "year": "2013" }, "ansible_default_ipv4": { "address": "REDACTED", "alias": "eth0", "gateway": "REDACTED", "interface": "eth0", "macaddress": "REDACTED", "mtu": 1500, "netmask": "255.255.255.0", "network": "REDACTED", "type": "ether" }, "ansible_default_ipv6": { }, "ansible_devices": { "fd0": { "holders": [], "host": "", "model": null, "partitions": { }, "removable": "1", "rotational": "1", "scheduler_mode": "deadline", "sectors": "0", "sectorsize": "512", "size": "0.00 Bytes", "support_discard": "0", "vendor": null }, "sda": { "holders": [], "host": "SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)", "model": "VMware Virtual S", "partitions": { "sda1": { "sectors": "39843840", "sectorsize": 512, "size": "19.00 GB", "start": "2048" }, "sda2": { "sectors": "2", "sectorsize": 512, "size": "1.00 KB", "start": "39847934" }, "sda5": { "sectors": "2093056", "sectorsize": 512, "size": "1022.00 MB", "start": "39847936" } }, "removable": "0", "rotational": "1", "scheduler_mode": "deadline", "sectors": "41943040", "sectorsize": "512", "size": "20.00 GB", "support_discard": "0", "vendor": "VMware," }, "sr0": { "holders": [], "host": "IDE interface: Intel Corporation 82371AB/EB/MB PIIX4 IDE (rev 01)", "model": "VMware IDE CDR10", "partitions": { }, "removable": "1", "rotational": "1", "scheduler_mode": "deadline", "sectors": "2097151", "sectorsize": "512", "size": "1024.00 MB", "support_discard": "0", "vendor": "NECVMWar" } }, "ansible_distribution": "Ubuntu", "ansible_distribution_release": "precise", "ansible_distribution_version": "12.04", "ansible_domain": "", "ansible_env": { "COLORTERM": "gnome-terminal", "DISPLAY": ":0", "HOME": "/home/mdehaan", "LANG": "C", "LESSCLOSE": "/usr/bin/lesspipe %s %s", "LESSOPEN": "| /usr/bin/lesspipe %s", "LOGNAME": "root", "LS_COLORS": 
"rs=0:di=01;34:ln=01;36:mh=00:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.tlz=01;31:*.txz=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.lz=01;31:*.xz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.war=01;31:*.ear=01;31:*.sar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.webm=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.cgm=01;35:*.emf=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:", "MAIL": "/var/mail/root", "OLDPWD": "/root/ansible/docsite", "PATH": "/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin", "PWD": "/root/ansible", "SHELL": "/bin/bash", "SHLVL": "1", "SUDO_COMMAND": "/bin/bash", "SUDO_GID": "1000", "SUDO_UID": "1000", "SUDO_USER": "mdehaan", "TERM": "xterm", "USER": "root", "USERNAME": "root", "XAUTHORITY": "/home/mdehaan/.Xauthority", "_": "/usr/local/bin/ansible" }, "ansible_eth0": { "active": true, "device": "eth0", "ipv4": { "address": "REDACTED", "netmask": "255.255.255.0", "network": "REDACTED" }, "ipv6": [{ "address": "REDACTED", "prefix": "64", "scope": "link" }], "macaddress": "REDACTED", "module": "e1000", "mtu": 1500, "type": "ether" }, "ansible_form_factor": "Other", "ansible_fqdn": "ubuntu2.example.com", "ansible_hostname": "ubuntu2", "ansible_interfaces": ["lo", "eth0"], "ansible_kernel": "3.5.0-23-generic", "ansible_lo": { "active": true, "device": "lo", "ipv4": { "address": "127.0.0.1", "netmask": "255.0.0.0", "network": "127.0.0.0" }, "ipv6": [{ "address": "::1", "prefix": "128", "scope": "host" }], "mtu": 16436, "type": "loopback" }, "ansible_lsb": { "codename": "precise", "description": "Ubuntu 12.04.2 LTS", "id": "Ubuntu", "major_release": "12", "release": "12.04" }, "ansible_machine": "x86_64", "ansible_memfree_mb": 74, "ansible_memtotal_mb": 991, "ansible_mounts": [{ "device": "/dev/sda1", "fstype": "ext4", "mount": "/", "options": "rw,errors=remount-ro", "size_available": 15032406016, "size_total": 20079898624 }], "ansible_nodename": "ubuntu2.example.com", "ansible_os_family": "Debian", "ansible_pkg_mgr": "apt", "ansible_processor": ["Intel(R) Core(TM) i7 CPU 860 @ 2.80GHz"], "ansible_processor_cores": 1, "ansible_processor_count": 1, "ansible_processor_threads_per_core": 1, "ansible_processor_vcpus": 1, "ansible_product_name": "VMware Virtual Platform", "ansible_product_serial": "REDACTED", "ansible_product_uuid": "REDACTED", "ansible_product_version": "None", "ansible_python_version": "2.7.3", "ansible_selinux": false, "ansible_ssh_host_key_dsa_public": "REDACTED KEY VALUE""ansible_ssh_host_key_ecdsa_public": "REDACTED KEY VALUE""ansible_ssh_host_key_rsa_public": "REDACTED KEY VALUE""ansible_swapfree_mb": 665, "ansible_swaptotal_mb": 1021, 
"ansible_system": "Linux", "ansible_system_vendor": "VMware, Inc.", "ansible_user_id": "root", "ansible_userspace_architecture": "x86_64", "ansible_userspace_bits": "64", "ansible_virtualization_role": "guest", "ansible_virtualization_type": "VMware"

 

Local Facts

If a remotely managed machine has an /etc/ansible/facts.d directory, any file in it ending in .fact can supply local facts to Ansible. Such a file may be JSON, INI, or any executable that returns JSON.

For example, suppose there is a /etc/ansible/facts.d/preferences.fact file:

[general]
asdf=1
bar=2

This produces a hash fact named "general" with members 'asdf' and 'bar'. It can be verified like this:

ansible  -m setup -a "filter=ansible_local"

You will then see the following fact added:

"ansible_local": {         
"preferences": {
"general": {
"asdf" : "1",
"bar" : "2" } } }

This data can then be accessed in a template or playbook as:

{{ ansible_local.preferences.general.asdf }}

 

The ansible_local namespace keeps user-supplied facts, and variables defined in playbooks, from overwriting system fact values.

If a playbook copies a custom fact and then needs to use it, explicitly re-run the setup module so the fact becomes usable within that same playbook; otherwise the custom fact is only available in the next play. Example:

- hosts: webservers
  tasks:
    - name: create directory for ansible custom facts
      file: state=directory recurse=yes path=/etc/ansible/facts.d
    - name: install custom ipmi fact
      copy: src=ipmi.fact dest=/etc/ansible/facts.d
    - name: re-read facts after adding custom fact
      setup: filter=ansible_local

In this pattern you could also write a fact module instead; that is simply one more option available to you.

Fact caching

Referencing one server's variables from another server is possible (an ansible.cfg caching sketch follows the example):

{{ hostvars['asdf.example.com']['ansible_os_family'] }}
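
Fact caching itself is configured in ansible.cfg; a minimal sketch using the jsonfile backend (the cache directory and timeout values here are arbitrary choices):

[defaults]
gathering = smart
fact_caching = jsonfile
fact_caching_connection = /tmp/ansible_fact_cache
fact_caching_timeout = 86400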

Registered variables

register

- hosts: web_servers
  tasks:
    - shell: /usr/bin/foo
      register: foo_result
      ignore_errors: True

    - shell: /usr/bin/bar
      when: foo_result.rc == 5

 

Magic variables, and how to access information about other hosts

Ansible automatically provides several variables even if you never define them. The important ones are 'hostvars', 'group_names' and 'groups'; since these names are reserved, you should not override them, and 'environment' is also reserved. hostvars lets you access variables of other hosts, including facts gathered from them. If a host has not yet been contacted in the current playbook (or any play of the run), you can read its variables but not its facts. If, say, a database server wants to use a 'fact' from another node, or an inventory variable assigned to that node, it is easy to do so in a template or even an action line:

{{ hostvars['test.example.com']['ansible_distribution'] }}

Additionally, group_names is a list (array) of all the groups the current host belongs to, so Jinja2 syntax can be used in templates to vary output based on the host's group membership (or role):

{% if 'webserver' in group_names %}   
   # some part of a configuration file that only applies to webservers
{% endif %}

groups is a list of all groups (and hosts) in the inventory. It can be used to enumerate all hosts in a group, for example:

{% for host in groups['app_servers'] %}    
  # something that applies to all app servers.
{% endfor %}

A frequently used idiom is collecting the IP addresses of all hosts in a group:

{% for host in groups['app_servers'] %}   
   {{ hostvars[host]['ansible_eth0']['ipv4']['address'] }}
{% endfor %}

 

External variable files:

---
- hosts: all
  remote_user: root
  vars:
    favcolor: blue
  vars_files:
    - /vars/external_vars.yml

  tasks:
    - name: this is just a placeholder
      command: /bin/echo foo

 

This keeps the risk of exposing sensitive data low when you share your playbook source.

Each variable file is a simple YAML file, like this:

--- 
# in the above example, this would be vars/external_vars.yml
somevar: somevalue
password: magic

 

Variable precedence

* extra vars (-e in the command line) always win
* then comes connection variables defined in inventory (ansible_ssh_user, etc)
* then comes "most everything else" (command line switches, vars in play, included vars, role vars, etc)
* then comes the rest of the variables defined in inventory
* then comes facts discovered about a system
* then "role defaults", which are the most "defaulty" and lose in priority to everything


when

- include: tasks/sometasks.yml
  when: "'reticulating splines' in output"

Or applied to a role:

- hosts: webservers
  roles:
    - { role: debian_stock_config, when: ansible_os_family == 'Debian' }

Selecting files and templates based on variables

Loops

- name: add several users
  user: name={{ item }} state=present groups=wheel
  with_items:
    - testuser1
    - testuser2

or

with_items: "{{somelist}}"

or

- name: add several users
  user: name={{ item.name }} state=present groups={{ item.groups }}
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }

Nested loops

- name: give users access to multiple databases
  mysql_user: name={{ item[0] }} priv={{ item[1] }}.*:ALL append_privs=yes password=foo
  with_nested:
    - [ 'alice', 'bob' ]
    - [ 'clientdb', 'employeedb', 'providerdb' ]

 

Looping over hashes

New in version 1.5.

Suppose you have the following variables:

---
users:
  alice:
    name: Alice Appleworth
    telephone: 123-456-7890
  bob:
    name: Bob Bananarama
    telephone: 987-654-3210

To print each user's name and phone number, loop over the elements of the hash with with_dict:

tasks:
  - name: Print phone records
    debug: msg="User {{ item.key }} is {{ item.value.name }} ({{ item.value.telephone }})"
    with_dict: "{{users}}"

Looping over a list of files

with_fileglob matches files in a single directory, non-recursively, by pattern. For example:

---
- hosts: all
  tasks:
    # first ensure our target directory exists
    - file: dest=/etc/fooapp state=directory
    # copy each file over that matches the given pattern
    - copy: src={{ item }} dest=/etc/fooapp/ owner=root mode=600
      with_fileglob:
        - /playbooks/files/fooapp/*

Looping over subelements

Suppose you want to do something with a set of users, such as creating those users and allowing them to log in with a given set of SSH keys.

How? First assume you have data defined as follows, which could be loaded via "vars_files" or a "group_vars/all" file:

---
users:
  - name: alice
    authorized:
      - /tmp/alice/onekey.pub
      - /tmp/alice/twokey.pub
    mysql:
      password: mysql-password
      hosts:
        - "%"
        - "127.0.0.1"
        - "::1"
        - "localhost"
      privs:
        - "*.*:SELECT"
        - "DB1.*:ALL"
  - name: bob
    authorized:
      - /tmp/bob/id_rsa.pub
    mysql:
      password: other-mysql-password
      hosts:
        - "db1"
      privs:
        - "*.*:SELECT"
        - "DB2.*:ALL"

It can then be done like this:

- user: name={{ item.name }} state=present generate_ssh_key=yes
  with_items: "{{users}}"

- authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
  with_subelements:
    - users
    - authorized

Given the mysql hosts and the privs subkey lists defined above, we can also iterate over lists inside nested subkeys:

- name: Setup MySQL users
  mysql_user: name={{ item.0.name }} password={{ item.0.mysql.password }} host={{ item.1 }} priv={{ item.0.mysql.privs | join('/') }}
  with_subelements:
    - users
    - mysql.hosts

Subelements walks a list of hashes (aka dictionaries) and then traverses a list with a given key inside of those records.

You can also add a third element to the subelements list: a dictionary of flags. Currently a 'skip_missing' flag is available; when set to True, the lookup plugin skips list entries that do not contain the given subkey. Without the flag, or with it set to False, the plugin fails and points out the missing subkey.

This is how the keys are fetched in the authorized_key pattern above.
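
A minimal sketch of passing that flag as the third element, reusing the users/authorized data above:

- authorized_key: "user={{ item.0.name }} key='{{ lookup('file', item.1) }}'"
  with_subelements:
    - users
    - authorized
    - skip_missing: True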

 

Looping over integer sequences

with_sequence generates a sequence of items in ascending numerical order. You can specify a start value, an end value, and an optional stride.

Arguments can also be given as key=value pairs; when used this way, 'format' is a printf-style string.

Numbers can be specified in decimal, hexadecimal (0x3f8) or octal (0600). Negative numbers are not supported. Some examples:

---
- hosts: all
  tasks:
    # create groups
    - group: name=evens state=present
    - group: name=odds state=present
    # create some test users
    - user: name={{ item }} state=present groups=evens
      with_sequence: start=0 end=32 format=testuser%02x
    # create a series of directories with even numbers for some reason
    - file: dest=/var/stuff/{{ item }} state=directory
      with_sequence: start=4 end=16 stride=2
    # a simpler way to use the sequence plugin
    # create 4 groups
    - group: name=group{{ item }} state=present
      with_sequence: count=4

 

Random choices

The 'random_choice' feature can be used to pick something at random. It is not a load balancer (there are modules for that), but it can sometimes act as a crude one, for example inside a conditional:

- debug: msg={{ item }}
  with_random_choice:
    - "go through the door"
    - "drink from the goblet"
    - "press the red button"
    - "do nothing"

Do-Until loops

Sometimes you want to retry a task until a certain condition is met, for example: [retrying a task]

- action: shell /usr/bin/foo
  register: result
  until: result.stdout.find("all systems go") != -1
  retries: 5
  delay: 10

The example above runs the shell module over and over until the module's stdout contains the string "all systems go", or until the task has been retried 5 times with a delay of 10 seconds. The default values for "retries" and "delay" are 3 and 5.

The task returns the result of the final attempt; the results of individual retries can be viewed with the -vv option. The registered variable also gets a new attribute, 'attempts', holding the number of attempts made for the task.
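
A minimal sketch of reading that attribute, reusing the registered variable from the example above:

- debug: msg="command settled after {{ result.attempts }} attempts"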

 

Finding the first matched file

This is not really a loop, but it is close. It is useful when you want to reference a file that is chosen from a set of candidate files according to given criteria, where some of the file names are built from variables. In that scenario you can do the following: [dynamic file names]

- name: INTERFACES | Create Ansible header for /etc/network/interfaces
  template: src={{ item }} dest=/etc/foo.conf
  with_first_found:
    - "{{ansible_virtualization_type}}_foo.conf"
    - "default_foo.conf"

 

A fuller version of this feature also lets you configure search paths. For example:

- name: some configuration template
  template: src={{ item }} dest=/etc/file.cfg mode=0444 owner=root group=root
  with_first_found:
    - files:
        - "{{inventory_hostname}}/etc/file.cfg"
      paths:
        - ../../../templates.overwrites
        - ../../../templates
    - files:
        - etc/file.cfg
      paths:
        - templates

 

Asynchronous actions and polling

By default, tasks in a playbook block the connection until the task finishes on every node. This is not always desirable, for example when an operation takes longer than the SSH timeout.

The easiest workaround is to kick the tasks off and then poll until they are done. Concretely, as in the example below, Ansible starts the task and moves on to the next task instead of waiting, then checks every 5 seconds whether it has finished, with an overall limit of 45 seconds. The async value is an upper bound on the task's run time: if the task takes longer than that, it is considered failed. If async is not set at all, the task runs synchronously.

You can also use async mode for operations that run very long and would otherwise risk timing out.

To launch a task asynchronously, specify its maximum runtime and how frequently to poll its status. If you do not specify a value for poll, the default polling interval is 10 seconds:

---
- hosts: all
  remote_user: root
  tasks:

  - name: simulate long running op (15 sec), wait for up to 45 sec, poll every 5 sec
    command: /bin/sleep 15
    async: 45
    poll: 5

async has no default value; if you leave out the async keyword, the task runs synchronously, which is Ansible's default behavior.

 

Alternatively, if you do not need to wait for the task to complete, set poll to 0 for "fire and forget":

---
- hosts: all
  remote_user: root
  tasks:
  - name: simulate long running op, allow to run for 45 sec, fire and forget
    command: /bin/sleep 15
    async: 45
    poll: 0

 

For "fire and forget, check on it later", run the task like this:

---
# Requires ansible 1.8+
- name: 'YUM - fire and forget task'
  yum: name=docker-io state=installed
  async: 1000
  poll: 0
  register: yum_sleeper

- name: 'YUM - check on fire and forget task'
  async_status: jid={{ yum_sleeper.ansible_job_id }}
  register: job_result
  until: job_result.finished
  retries: 30

If the async: value is too small, the "check on it later" task may fail because the temporary status file for async_status: has not been written yet when the later task tries to read it.

 

delegate_to: run a task on a selected host

- name: add back to load balancer pool
  command: /usr/bin/add_back_to_pool {{ inventory_hostname }}
  delegate_to: 127.0.0.1

Local execution (local_action)

---
# ...
tasks:
  - name: recursively copy files from management server to target
    local_action: command rsync -a /path/to/files {{ inventory_hostname }}:/path/to/target/

 

Run Once

New in version 1.7.

Sometimes you only want a task to run once, on a single host. This can be configured with "run_once":

---
# ...
tasks:
  # ...
  - command: /opt/application/upgrade_db.py
    run_once: true
  # ...

 

This can be paired with "delegate_to" to specify which host should run the task:

- command: /opt/application/upgrade_db.py
  run_once: true
  delegate_to: web01.example.org

When "run_once" is not used together with "delegate_to", the task runs on the first host of the set targeted by the play, e.g. webservers[0] if the play specifies "hosts: webservers".

This approach is similar to, though blunter than, applying a conditional, as in the following example:

- command: /opt/application/upgrade_db.py
  when: inventory_hostname == webservers[0]

 

Local playbooks

It is sometimes useful to run a playbook locally rather than over SSH. Putting a playbook in crontab to enforce a system's configuration can be useful, as can running a playbook from within an OS installer such as an Anaconda kickstart.

To run an entire play locally, simply set "hosts:" to "127.0.0.1" and run the playbook like this:

ansible-playbook playbook.yml --connection=local

Alternatively, a local connection can be used for a single play inside a playbook, even if the other plays in the playbook use the default remote connection:

- hosts: 127.0.0.1
  connection: local

environment: for example, going through a proxy

 

- hosts: all
  remote_user: root
  tasks:
    - apt: name=cobbler state=installed
      environment:
        http_proxy: http://proxy.example.com:8080

 

The environment can also be stored in a variable and referenced like this:

- hosts: all
  remote_user: root
  # here we make a variable named "proxy_env" that is a dictionary
  vars:
    proxy_env:
      http_proxy: http://proxy.example.com:8080
  tasks:
    - apt: name=cobbler state=installed
      environment: proxy_env

 

Specifying failure conditions

[defining what counts as failure]

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  failed_when: "'FAILED' in command_result.stderr"

Prior to Ansible 1.4, the same thing could be accomplished as follows:

- name: this command prints FAILED when it fails
  command: /usr/bin/example-command -x -y -z
  register: command_result
  ignore_errors: True

- name: fail the play if the previous command did not succeed
  fail: msg="the command failed"
  when: "'FAILED' in command_result.stderr"

 

Overriding the changed result (not reporting 'changed' status)

tasks:
  - shell: /usr/bin/billybass --mode="take me to the river"
    register: bass_result
    changed_when: "bass_result.rc != 2"

  # this will never report 'changed' status
  - shell: wall 'beep'
    changed_when: False

 

Tags

If you have a large playbook, it is useful to be able to run only a specific part of its configuration rather than the whole thing.

Both plays and tasks support a "tags:" attribute for this reason.

Example:

tasks:
  - yum: name={{ item }} state=installed
    with_items:
      - httpd
      - memcached
    tags:
      - packages
  - template: src=templates/src.j2 dest=/etc/foo.conf
    tags:
      - configuration

If you only want to run the "configuration" and "packages" parts of a very long playbook, you can do this:

ansible-playbook example.yml --tags "configuration,packages"

Conversely, to run everything in the playbook except tasks with a particular tag:

ansible-playbook example.yml --skip-tags "notification"

Tags can also be applied to roles:

roles:
  - { role: webserver, port: 5000, tags: [ 'web', 'foo' ] }

And tags can also be used on basic include statements:

- include: foo.yml tags=web,foo

 

 

Starting a playbook at a given task, and running a playbook step by step

The following ways of running a playbook are useful for testing or debugging new playbooks.

Start-at-task

To start executing a playbook at a particular task, use the --start-at-task option:

ansible-playbook playbook.yml --start-at-task="install packages"

The command above starts executing your playbook at the task named "install packages".

Running a playbook step by step

You can also execute a playbook interactively with the --step option:

ansible-playbook playbook.yml --step

Ansible then stops before each task and asks whether it should be executed.

For example, if a task is named "configure ssh", the playbook run pauses there and asks:

Perform task: configure ssh (y/n/c):

Answering "y" runs the task, "n" skips it, and "c" continues running all remaining tasks without asking again.

 

Calling the Ansible API from Python

 

#!/usr/bin/env python2

import sys
import json
import shutil
import pprint
from collections import namedtuple
from ansible.parsing.dataloader import DataLoader
from ansible.vars.manager import VariableManager
from ansible.inventory.manager import InventoryManager
from ansible.playbook.play import Play
from ansible.executor.task_queue_manager import TaskQueueManager
from ansible.plugins.callback import CallbackBase
import ansible.constants as C
from ansible.inventory.group import Group
from ansible.inventory.host import Host
def get_info(username,password,resource):
    class ResultCallback(CallbackBase):
        def __init__(self, *args, **kwargs):
            self.info = {}
        def v2_runner_on_ok(self, result, **kwargs):
            host = result._host
            self.info[host.name] = result._result

    Options = namedtuple('Options', ['connection', 'module_path', 'forks', 'become', 'become_method', 'become_user', 'check', 'diff'])
    options = Options(connection='local', module_path=['/to/mymodules'], forks=10, become=None, become_method=None, become_user=None, check=False, diff=False)
    loader = DataLoader()
    results_callback = ResultCallback()
    #inventory = InventoryManager(loader=loader, sources=[tempFileName])
    inventory = InventoryManager(loader=loader)
    inventory.add_group("default")
    for host in resource:
        inventory.add_host(host=host,port=22,group='default')
    variable_manager = VariableManager(loader=loader, inventory=inventory)

    for host in resource:        
        host = inventory.get_host(hostname=host)
        variable_manager.set_host_variable(host=host,varname='ansible_ssh_host',value=host)
        variable_manager.set_host_variable(host=host,varname='ansible_ssh_user',value=username)
        variable_manager.set_host_variable(host=host,varname='ansible_ssh_pass',value=password)
        variable_manager.set_host_variable(host=host,varname='ansible_connection',value='local')
    play_source =  dict(
        name = "Ansible Play",
        hosts = resource,
        gather_facts = 'no',
        tasks = [
            dict(action=dict(module='setup', args=''))
         ]
        )
    play = Play().load(play_source, variable_manager=variable_manager, loader=loader)
    tqm = None
    try:
        tqm = TaskQueueManager(
              inventory=inventory,
              variable_manager=variable_manager,
              loader=loader,
              options=options,
              passwords=None,
              stdout_callback=results_callback,
          )
        result = tqm.run(play)
    finally:
        if tqm is not None:
            # only clean up the task queue manager if it was actually created
            tqm.cleanup()
        shutil.rmtree(C.DEFAULT_LOCAL_TMP, True)

    return results_callback.info

def handle_info(rawdata, node):
    # data dictionary
    data = {}
    detail = {}

    # read sysinfo
    cpu_count = rawdata[node]['ansible_facts']['ansible_processor_count']
    cpu_cores = rawdata[node]['ansible_facts']['ansible_processor_cores']
    cpu_model = list(set(rawdata[node]['ansible_facts']['ansible_processor'][2::3]))
    cpu_vcpus = rawdata[node]['ansible_facts']['ansible_processor_vcpus']
    # memtotal = rawdata[node]['ansible_facts']['ansible_memtotal_mb']
    # memfree = rawdata[node]['ansible_facts']['ansible_memfree_mb']
    memory = rawdata[node]['ansible_facts']['ansible_memory_mb']['real']
    disk_info = rawdata[node]['ansible_facts']['ansible_devices']
    interfaces_info = rawdata[node]['ansible_facts']['ansible_interfaces']
    
    for i in range(len(interfaces_info)):
        tmp = "ansible_" + interfaces_info[i].replace("-", "_")
        if rawdata[node]['ansible_facts'][tmp].has_key('type'):
            if rawdata[node]['ansible_facts'][tmp]['type'] == "bridge":
                continue
        detail[interfaces_info[i]] = {}
        detail[interfaces_info[i]]['active'] = rawdata[node]['ansible_facts'][tmp]['active']
        detail[interfaces_info[i]]['mtu'] = rawdata[node]['ansible_facts'][tmp]['mtu']
        detail[interfaces_info[i]]['promisc'] = rawdata[node]['ansible_facts'][tmp]['promisc']
        if rawdata[node]['ansible_facts'][tmp].has_key('speed'):
            detail[interfaces_info[i]]['speed'] = rawdata[node]['ansible_facts'][tmp]['speed']
        if rawdata[node]['ansible_facts'][tmp].has_key('type'):
            detail[interfaces_info[i]]['type'] = rawdata[node]['ansible_facts'][tmp]['type']
        

    # store sysinfo into data dictionary
    
    
    cpu = {}
    cpu['cpu_number'] = cpu_count
    cpu['cpu_cores'] = cpu_cores
    cpu['cpu_model'] = cpu_model
    cpu['cpu_vcpus'] = cpu_vcpus
    data['cpu'] = cpu
    data['memory'] = memory
    
    disk = {}
    disk['number'] = len(disk_info)
    disk['info'] = {}
    for key in disk_info:
        disk['info'][key] = {}
        disk['info'][key]['size'] = disk_info[key]['size']
    data['disk'] = disk
    
    interfaces = {}
    interfaces['number'] = len(interfaces_info)
    interfaces['info'] = detail
    data['interfaces'] = interfaces

    return data

def handle_infos(rawdata):
    info = []
    for key in rawdata:
        temp = {}
        temp['IP_ADDR'] = key
        temp['OBJ_ATTRS'] = handle_info(rawdata, key)
        info.append(temp)
    return info

if __name__ == '__main__':
    if len(sys.argv) != 4:
        print "need parameter:username,password,list of hosts(split with ',')!!!"
        sys.exit()
    hosts=sys.argv[3].split(",")
    raw_data = get_info(sys.argv[1],sys.argv[2],hosts)
    data = handle_infos(raw_data)
    data = json.dumps(data)
    pprint.pprint(data)

Good night ( ̄o ̄) . z Z

 

Reposted from: https://www.cnblogs.com/aaaaaaa/p/10757843.html
