Basic Usage of Ansible Playbooks
What you will learn
- How to run a playbook
- How to write a playbook
- How to use roles
Using Playbooks
Base environment
### 64-bit Ubuntu 16.04 LTS; create a CentOS LXC container named web to simulate a managed node
# ssh-keygen -t rsa
# apt-get install lxc
# apt-get install yum
# lxc-create -n centos -t centos -- -R 7
### Change the root password of the centos template
# chroot /var/lib/lxc/centos/rootfs passwd
# lxc-copy -n centos -N web -B aufs -s
# lxc-start -n web -d
### Enter the container
# lxc-console -n web
### Run the following commands inside the container; set the IP address to 10.0.3.200
# vi ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
HOSTNAME=centos
NM_CONTROLLED=no
TYPE=Ethernet
NAME=eth0
IPADDR=10.0.3.200
NETMASK=255.255.255.0
GATEWAY=10.0.3.1
DNS1=114.114.114.114
A simple playbook
# mkdir playbook
# cd playbook
# vim hosts
[web]
192.168.124.240

# vim site.yml
- name: Sample
  hosts: web
  # Gather host facts
  gather_facts: True
  tasks:
    # Create sample.txt on the Ansible managed node
    - name: Web
      command: /bin/sh -c "echo 'web' > ~/sample.txt"
    # Create sample.txt on the Ansible control host
    - name: Local Web
      local_action: command /bin/sh -c "echo 'local web' > ~/sample.txt"
Run the playbook
# ansible-playbook -i hosts site.yml
Sample playbook
Download the sample
### Clone the Ansible examples onto the host
$ git clone https://github.com/ansible/ansible-examples.git
Edit the sample configuration files
$ cd ansible-examples/tomcat-standalone
$ vim hosts
[tomcat-servers]
10.0.3.200
### Configure the SSH login password
$ vim group_vars/tomcat-servers
# Here are variables related to the Tomcat installation
http_port: 8080
https_port: 8443

# This will configure a default manager-gui user:
admin_username: admin
admin_password: 123456

ansible_ssh_pass: 123456
Run the playbook
### If it fails, just re-run it, adding the --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry option shown in the error message
# ansible-playbook -i hosts site.yml
Troubleshooting
- Problem 1
TASK [selinux : Install libselinux-python] *************************************
fatal: [10.0.3.200]: FAILED! => {"changed": false, "failed": true, "msg": "Failure talking to yum: Cannot retrieve metalink for repository: epel/x86_64. Please verify its path and try again"}
        to retry, use: --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry
Solution
### Run yum update once inside the container to refresh the repository cache; no packages need to be installed
- Problem 2
TASK [tomcat : insert firewalld rule for tomcat http port] *********************
fatal: [10.0.3.200]: FAILED! => {"changed": false, "failed": true, "msg": "firewalld and its python 2 module are required for this module"}

RUNNING HANDLER [tomcat : restart tomcat] **************************************
        to retry, use: --limit @/home/ubuntu/ansible-examples/tomcat-standalone/site.retry
Solution
### Install firewalld in the container
# yum search firewalld |grep python
python-firewall.noarch : Python2 bindings for firewalld
# yum install python-firewall.noarch
# systemctl enable firewalld
# systemctl start firewalld
Using roles
Standard role structure
# tree ansible-sshd/
ansible-sshd/
├── CHANGELOG
├── defaults
│ └── main.yml
├── handlers
│ └── main.yml
├── LICENSE
├── meta
│   ├── 10_top.j2
│   ├── 20_middle.j2
│   ├── 30_bottom.j2
│   ├── main.yml
│   ├── make_option_list
│   ├── options_body
│   └── options_match
├── README.md
├── tasks
│   └── main.yml
├── templates
│   └── sshd_config.j2
├── tests
│   ├── inventory
│   ├── roles
│   │   └── ansible-sshd -> ../../.
│   └── test.yml
├── Vagrantfile
└── vars
    ├── Amazon.yml
    ├── Archlinux.yml
    ├── Debian_8.yml
    ├── Debian.yml
    ├── default.yml
    ├── Fedora.yml
    ├── FreeBSD.yml
    ├── OpenBSD.yml
    ├── RedHat_6.yml
    ├── RedHat_7.yml
    ├── Suse.yml
    ├── Ubuntu_12.yml
    ├── Ubuntu_14.yml
    └── Ubuntu_16.yml
Directory | Description |
---|---|
defaults | Default variables for the role; should contain a main.yml file |
handlers | Should contain a main.yml file defining the handlers used by this role; any other handler files pulled in with include should also live in this directory |
meta | Should contain a main.yml file defining the role's special settings and its dependencies |
tasks | Should contain at least a main.yml file defining the role's task list; this file may include other task files located in this directory |
templates | The template module automatically looks for Jinja2 template files in this directory |
vars | Variables used by this role |
files | Files used by modules such as copy or script |
tests | Example usage of the role in a playbook |
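To see how these directories fit together, here is a minimal sketch of a made-up role (the role name, file names, and values are illustrative, not taken from ansible-sshd):

### roles/myrole/defaults/main.yml
myrole_port: 8080

### roles/myrole/tasks/main.yml
- name: Render the service configuration
  template:
    src: myservice.conf.j2        # looked up automatically under templates/
    dest: /etc/myservice.conf
  notify: restart myservice       # handler defined in handlers/main.yml

### roles/myrole/handlers/main.yml
- name: restart myservice
  service:
    name: myservice
    state: restarted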
Using a role
# cat ansible-sshd/tests/test.yml
---
- hosts: localhost
  become: true
  roles:
    - ansible-sshd

# cd ansible-sshd/tests/
# ansible-playbook test.yml
Task execution order within a role
### First the content of main.yml under meta is applied (any role dependencies it lists run first)
### Then the content of main.yml under tasks is executed
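As a sketch, assuming a hypothetical role that depends on a common role, the dependencies declared in meta/main.yml run before the role's own tasks:

### roles/myrole/meta/main.yml -- processed first; the listed dependencies run before this role's tasks
dependencies:
  - role: common

### roles/myrole/tasks/main.yml -- runs afterwards
- name: Do this role's own work
  debug:
    msg: "the common role has already been applied"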
Advanced Usage of Ansible Playbooks
File operations
Creating files
- file
Sets attributes of files, symlinks, or directories, or removes them
### With state: directory, the directory is created automatically if it does not exist; with state: file, a missing file is NOT created automatically
- name: Create log dir
file:
path: "{{ item.src }}" state: directory with_items: "{{ log_dirs }}" when: is_metal | bool tags: - common-log - name: Mask lxc-net systemd service file: src: /dev/null path: /etc/systemd/system/lxc-net.service state: link when: - ansible_service_mgr == 'systemd' tags: - lxc-files - lxc-net
Modifying files
- lineinfile
Checks whether a particular line is present in a file, or replaces an existing line using a back-referenced regular expression
- name: Extra lxc config
lineinfile:
dest: "/var/lib/lxc/{{ inventory_hostname }}/config" line: "{{ item.split('=')[0] }} = {{ item.split('=', 1)[1] }}" insertafter: "^{{ item.split('=')[0] }}" backup: "true" with_items: "{{ extra_container_config | default([]) }}" delegate_to: "{{ physical_host }}" register: _ec when: not is_metal | bool tags: - common-lxc
- replace
A multi-line counterpart of lineinfile: it replaces all content in the file that matches a regular expression with the supplied block of text
### Insert a block of content between the [ml2] and [ml2_type_vlan] sections of the ml2_conf.ini file
- name: Enable ovn in neutron-server
  replace:
    dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
    regexp: '\[ml2\][\S\s]*(?=\[ml2_type_vlan\])'
    replace: |+
      [ml2]
      type_drivers = local,flat,vlan,geneve
      tenant_network_types = geneve
      mechanism_drivers = ovn
      extension_drivers = port_security
      overlay_ip_version = 4

      [ml2_type_geneve]
      vni_ranges = 1:65536
      max_header_size = 38

      [ovn]
      ovn_nb_connection = tcp:{{ api_interface_address }}:{{ ovn_northdb_port }}
      ovn_sb_connection = tcp:{{ api_interface_address }}:{{ ovn_sourthdb_port }}
      ovn_l3_mode = False
      ovn_l3_scheduler = chance
      ovn_native_dhcp = True
      neutron_sync_mode = repair
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in groups['network']
  notify:
    - Restart neutron-server container
- ini_file
Modifies INI-format files
### Set the external_network_bridge option in the [DEFAULT] section of l3_agent.ini to br-ex
- name: Set the external network bridge
vars:
agent: "{{ 'neutron-aas-agent' if enable_neutron_aas | bool else 'neutron-l3-agent' }}"
  ini_file:
    dest: "{{ node_config_directory }}/{{ agent }}/l3_agent.ini"
    section: "DEFAULT"
    option: "external_network_bridge"
    value: "{{ neutron_bridge_name | default('br-ex') }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item }}"
  with_items: "{{ groups['neutron-server'] }}"
  notify:
    - Restart {{ agent }} container
- assemble
Assembles multiple files into a single file
### Assemble the files under /etc/haproxy/conf.d into /etc/haproxy/haproxy.cfg
- name: Regenerate haproxy configuration
assemble:
src: "/etc/haproxy/conf.d"
dest: "/etc/haproxy/haproxy.cfg" notify: Restart haproxy tags: - haproxy-general-config
Loops
- with_items
The standard loop, used for repetitive tasks; {{ item }} is expanded much like a macro
- name: add several users
user:
name: "{{ item.name }}"
    state: present
    groups: "{{ item.groups }}"
  with_items:
    - { name: 'testuser1', groups: 'wheel' }
    - { name: 'testuser2', groups: 'root' }
- with_nested
Nested loops
### Update the corresponding options in the ml2_conf.ini file on every host in the neutron-server group
- name: Enable ovn in neutron-server
vars:
params:
      - { section: 'ml2', option: 'type_drivers', value: 'local,flat,vlan,geneve' }
      - { section: 'ml2', option: 'tenant_network_types', value: 'geneve' }
      - { section: 'ml2', option: 'mechanism_drivers', value: 'ovn' }
      - { section: 'ml2', option: 'extension_drivers', value: 'port_security' }
      - { section: 'ml2', option: 'overlay_ip_version', value: '4' }
      - { section: 'securitygroup', option: 'enable_security_group', value: 'True' }
  ini_file:
    dest: "{{ node_config_directory }}/neutron-server/ml2_conf.ini"
    section: "{{ item[0].section }}"
    option: "{{ item[0].option }}"
    value: "{{ item[0].value }}"
    backup: yes
  when:
    - action == "deploy"
    - inventory_hostname in ovn_central_address
  delegate_to: "{{ item[1] }}"
  with_nested:
    - "{{ params }}"
    - "{{ groups['neutron-server'] }}"
  notify:
    - Restart neutron-server container
Flow control
- tags
Assigns tags to tasks
tasks:
- yum: name={{ item }} state=installed
with_items:
- httpd
- memcached
tags:
- packages
- template: src=templates/src.j2 dest=/etc/foo.conf
  tags:
    - configuration

### When running the playbook, you can run only the tasks with the given tags, or skip them
# ansible-playbook example.yml --tags "configuration,packages"
# ansible-playbook example.yml --skip-tags "notification"
- failed_when
Controls when a task is treated as failed and the playbook aborts
- name: Check if firewalld is installed
command: rpm -q firewalld
register: firewalld_check
  failed_when: firewalld_check.rc > 1
  when: ansible_os_family == 'RedHat'
- pre_tasks/post_tasks
Defines tasks to run before and after the roles of a play are executed
- name: Install the aodh components
hosts: aodh_all
gather_facts: "{{ gather_facts | default(True) }}" max_fail_percentage: 20 user: root pre_tasks: - include: common-tasks/os-lxc-container-setup.yml - include: common-tasks/rabbitmq-vhost-user.yml static: no vars: user: "{{ aodh_rabbitmq_userid }}" password: "{{ aodh_rabbitmq_password }}" vhost: "{{ aodh_rabbitmq_vhost }}" _rabbitmq_host_group: "{{ aodh_rabbitmq_host_group }}" when: - inventory_hostname == groups['aodh_api'][0] - groups[aodh_rabbitmq_host_group] | length > 0 - include: common-tasks/os-log-dir-setup.yml vars: log_dirs: - src: "/openstack/log/{{ inventory_hostname }}-aodh" dest: "/var/log/aodh" - include: common-tasks/mysql-db-user.yml static: no vars: user_name: "{{ aodh_galera_user }}" password: "{{ aodh_container_db_password }}" login_host: "{{ aodh_galera_address }}" db_name: "{{ aodh_galera_database }}" when: inventory_hostname == groups['aodh_all'][0] - include: common-tasks/package-cache-proxy.yml roles: - role: "os_aodh" aodh_venv_tag: "{{ openstack_release }}" aodh_venv_download_url: "{{ openstack_repo_url }}/venvs/{{ openstack_release }}/{{ ansible_distribution | lower }}/aodh-{{ openstack_release }}-{{ ansible_architecture | lower }}.tgz" - role: "openstack_openrc" tags: - openrc - role: "rsyslog_client" rsyslog_client_log_rotate_file: aodh_log_rotate rsyslog_client_log_dir: "/var/log/aodh" rsyslog_client_config_name: "99-aodh-rsyslog-client.conf" tags: - rsyslog vars: is_metal: "{{ properties.is_metal|default(false) }}" aodh_rabbitmq_userid: aodh aodh_rabbitmq_vhost: /aodh aodh_rabbitmq_servers: "{{ rabbitmq_servers }}" aodh_rabbitmq_port: "{{ rabbitmq_port }}" aodh_rabbitmq_use_ssl: "{{ rabbitmq_use_ssl }}" tags: - aodh
Task delegation
- delegate_to
Runs the current task on a different host
### This snippet is part of a playbook that runs inside a container; it needs to make sure the corresponding directories exist on the physical host carrying the container, so delegation is used to step out of the container and run the task on that host
- name: Ensure mount directories exists
file:
path: "{{ item['mount_path'] }}" state: "directory" with_items: - "{{ lxc_default_bind_mounts | default([]) }}" - "{{ list_of_bind_mounts | default([]) }}" delegate_to: "{{ physical_host }}" when: - not is_metal | bool tags: - common-lxc
- local_action
Runs the task on the Ansible control host (the host where ansible-playbook is executed)
- name: Check if the git cache exists on deployment host
  local_action:
    module: stat
    path: "{{ repo_build_git_cache }}"
  register: _local_git_cache
  when: repo_build_git_cache is defined
Users and groups
- group
Creates a group
### Create the system group haproxy; present creates it if it does not exist, absent removes it if it does
- name: Create the haproxy system group
group:
name: "haproxy" state: "present" system: "yes" tags: - haproxy-group
- user
Creates a user
### Create the haproxy:haproxy user together with its home directory
- name: Create the haproxy system user
user:
name: "haproxy"
group: "haproxy" comment: "haproxy user" shell: "/bin/false" system: "yes" createhome: "yes" home: "/var/lib/haproxy" tags: - haproxy-user
Miscellaneous
- authorized_key
Adds an SSH authorized key for a user
- name: Create authorized keys file from host vars
authorized_key:
user: "{{ repo_service_user_name }}" key: "{{ hostvars[item]['repo_pubkey'] | b64decode }}" with_items: "{{ groups['repo_all'] }}" when: hostvars[item]['repo_pubkey'] is defined tags: - repo-key - repo-key-store
- slurp
Reads a file from the remote host; the returned content is base64-encoded
### Read the contents of the id_rsa.pub file and store them in the variable repo_pub
- name: Get public key contents and store as var
  slurp:
    src: "{{ repo_service_home_folder }}/.ssh/id_rsa.pub"
  register: repo_pub
  changed_when: false
  tags:
    - repo-key
    - repo-key-create
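Because the slurped content is base64-encoded, it has to be decoded before use; for example, reusing the repo_pub variable registered above (the debug task is just for illustration):

- name: Show the decoded public key
  debug:
    msg: "{{ repo_pub.content | b64decode }}"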
- uri
Makes web requests, similar to running a curl command
- name: test proxy URL for connectivity
uri:
url: "{{ repo_pkg_cache_url }}/acng-report.html" method: "HEAD" register: proxy_check failed_when: false tags: - common-proxy
- wait_for
Waits for a port or a file to become available
- name: Wait for container ssh
wait_for:
port: "22"
delay: "{{ ssh_delay }}" search_regex: "OpenSSH" host: "{{ ansible_host }}" delegate_to: "{{ physical_host }}" register: ssh_wait_check until: ssh_wait_check | success retries: 3 when: - (_mc is defined and _mc | changed) or (_ec is defined and _ec | changed) - not is_metal | bool tags: - common-lxc
- command
Runs a command on the managed node
### With ignore_errors set to true, the playbook does not abort even if the command fails
- name: Check if clean is needed
command: docker exec openvswitch_vswitchd ovs-vsctl br-exists br-tun
  register: result
  ignore_errors: True
Switching users
### With become, Ansible first switches to the apache user and then runs the command; the default become_user is root (if Ansible is already configured for passwordless root login, become is not needed)
- name: Run a command as the apache user
command: somecommand
  become: true
  become_user: apache
Checking whether a list is empty
### pip_wheel_install is a list variable
- name: Install wheel packages
shell: cd /tmp/wheels && pip install {{ item }}*
  with_items:
    - "{{ pip_wheel_install | default([]) }}"
  when: pip_wheel_install > 0
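An equivalent and more explicit emptiness test uses the length filter (a sketch with the same variable):

- name: Install wheel packages
  shell: cd /tmp/wheels && pip install {{ item }}*
  with_items:
    - "{{ pip_wheel_install | default([]) }}"
  when: pip_wheel_install | default([]) | length > 0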
Using Jinja2 with Ansible
Common techniques
- ternary
Chooses the return value based on whether the expression is true or false
- name: Set container backend to "dir" or "lvm" based on whether the lxc VG was found
  set_fact:
    lxc_container_backing_store: "{{ (vg_result.rc != 0) | ternary('dir', 'lvm') }}"
  when: vg_result.rc is defined
  tags:
    - lxc-container-vg-detect
If vg_result.rc is non-zero this returns dir, otherwise lvm
- if expressions
Chooses the return value based on whether a condition is true or false
- name: Set the external network bridge
vars:
agent: "{{ 'neutron-aas-agent' if enable_neutron_aas | bool else 'neutron-l3-agent' }}"
ini_file:
dest: "{{ node_config_directory }}/{{ agent }}/l3_agent.ini" section: "DEFAULT" option: "external_network_bridge" value: "{{ neutron_bridge_name | default('br-ex') }}" backup: yes when: - action == "deploy" - inventory_hostname in ovn_central_address delegate_to: "{{ item }}" with_items: "{{ groups['neutron-server'] }}" notify: - Restart {{ agent }} container
- Jinja2 in when
Inside a when expression it is not recommended to read variables with the {{ }} syntax; if the variable is a string, the | string pipe filter can be used to get its value
- name: Checking free port for OVN
vars:
service: "{{ neutron_services[item.name] }}"
wait_for:
host: "{{ hostvars[inventory_hostname]['ansible_' + api_interface]['ipv4']['address'] }}" port: "{{ item.port }}" connect_timeout: 1 state: stopped when: - container_facts[ item.facts | string ] is not defined - service.enabled | bool - service.host_in_groups | bool with_items: - { name: "ovn-nb-db-server", port: "{{ ovn_northdb_port }}", facts: "ovn_nb_db" } - { name: "ovn-sb-db-server", port: "{{ ovn_sourthdb_port }}", facts: "ovn_sb_db" }
Basic Usage of Ansible
What you will learn
- How to set up an Ansible environment
- How to run ansible commands
- How to configure the Inventory
Environment
Role | Operating system | Network address |
---|---|---|
Control machine | Ubuntu 14.04 LTS | 192.168.200.250 |
Managed nodes | Ubuntu 16.04 LTS | 192.168.200.11, 192.168.200.12 |
Control machine setup
- Install Ansible
$ sudo apt-get install software-properties-common
$ sudo apt-add-repository ppa:ansible/ansible
$ sudo apt-get update
$ sudo apt-get install ansible
- Configure the managed nodes
### Edit the configuration file and add the managed nodes' network addresses
# vim /etc/ansible/hosts
...
[webservers]
192.168.200.11

[compute]
192.168.200.12
...
- Generate an SSH key pair
# ssh-keygen -t rsa
# ssh-agent bash
# ssh-add ~/.ssh/id_rsa
Managed node setup
- Add the control machine's public key
# ssh-keygen -t rsa
# scp [email protected]:~/.ssh/id_rsa.pub ./
# cat id_rsa.pub >> ~/.ssh/authorized_keys
# chmod 600 ~/.ssh/authorized_keys
- Install Python
# apt-get install python
Test ansible commands
# ansible all -m ping
192.168.200.11 | success >> {
"changed": false,
"ping": "pong" } 192.168.200.12 | success >> { "changed": false, "ping": "pong" } # ansible compute -m ping 192.168.200.12 | success >> { "changed": false, "ping": "pong" } # ansible webservers -m ping 192.168.200.11 | success >> { "changed": false, "ping": "pong" } # ansible all -a "/bin/echo hello" 192.168.200.12 | success | rc=0 >> hello 192.168.200.11 | success | rc=0 >> hello
Configuring the Inventory
Static inventory
This is the approach used earlier: hosts and groups listed in the /etc/ansible/hosts file.
Dynamic inventory
Here an external script produces the host list and returns it to the ansible command in the format Ansible expects. Note that the script generating the JSON must support two options:
- --list: returns a JSON hash/dictionary containing all managed groups. Each group's value should be either a hash/dictionary of the hosts/IPs it contains, possibly along with child groups and group variables, or simply a list of hosts/IPs, for example:

{
    "databases": {
        "hosts": ["host1.example.com", "host2.example.com"],
        "vars": { "a": true }
    },
    "webservers": ["host2.example.com", "host3.example.com"],
    "atlanta": {
        "hosts": ["host1.example.com", "host4.example.com", "host5.example.com"],
        "vars": { "b": false },
        "children": ["marietta", "5points"]
    },
    "marietta": ["host6.example.com"],
    "5points": ["host7.example.com"]
}
- --host: returns either an empty JSON hash/dictionary, or a hash/dictionary of variables to be made available to templates and playbooks. Returning variables is optional; if the script does not want to do this, returning an empty hash/dictionary is enough:

{
    "favcolor": "red",
    "ntpserver": "wolf.example.com",
    "monitoring": "pack.example.com"
}
Write a sample script named inventory-script:
[web]
192.168.200.100
Invoke it as follows:
### By default Ansible obtains the host list by calling the script with the --list option
# ansible web -i inventory-script -m ping