Ansible (Part 3): Installing a ZooKeeper Cluster with Ansible

1. Software preparation

[root@node1 soft]# ll
total 37535744 
-rw-r--r-- 1 root root 255201772 Sep 19 10:15 flink-1.9.0-bin-scala_2.11.tgz
-rw-r--r-- 1 root root 195094741 Jul  6 00:09 jdk-8u221-linux-x64.tar.gz
-rw-r--r-- 1 root root  37535744 Sep 20 14:49 zookeeper-3.4.14.tar.gz

2. Inventory (hosts) configuration

[jdk]
172.17.16.4
172.17.16.12
172.17.16.13
# The index set here is eventually written into each node's myid file
[zookeeper]
172.17.16.4  ansible_index=0
172.17.16.12 ansible_index=1
172.17.16.13 ansible_index=2
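The mapping above drives two things in the playbook below: every node appends the same `server.N` lines to `zoo.cfg`, while each node writes only its own `ansible_index` into `data/myid`. A minimal Python sketch of that mapping (the host list is copied from the inventory; everything else is illustrative, not part of the playbook):

```python
# [zookeeper] group from the inventory: IP -> ansible_index
hosts = {
    "172.17.16.4": 0,
    "172.17.16.12": 1,
    "172.17.16.13": 2,
}

# Every node gets the same server.N lines appended to zoo.cfg (task 5)...
server_lines = [
    f"server.{idx}={ip}:2888:3888"
    for ip, idx in sorted(hosts.items(), key=lambda kv: kv[1])
]

# ...while each node writes only its own index into data/myid (task 7).
def myid_for(ip):
    return str(hosts[ip])

print("\n".join(server_lines))
print("myid on 172.17.16.12:", myid_for("172.17.16.12"))
```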

3. JDK installation

See the earlier post on batch JDK installation with Ansible.

4. Writing the playbook

(Note: the `name` variable below is unused and shadows a reserved Ansible name, which is what produces the "Found variable using reserved name: name" warning in the run output; renaming it, e.g. to `app_name`, avoids the warning.)

- hosts: zookeeper
  remote_user: root
  vars:
    name: "zookeeper"
    install_path: /data/cloud
    work_path: /data/work
  tasks:
    - name: "1. Initialize working directories"
      shell: mkdir -p {{install_path}}/zookeeper &&
             mkdir -p {{work_path}}/zookeeper/data &&
             mkdir -p {{work_path}}/zookeeper/logs &&
             rm -rf {{work_path}}/zookeeper/data/* &&
             rm -rf {{work_path}}/zookeeper/logs/*
    - name: "2. Copy the installation package"
      copy: src=/data/soft/zookeeper-3.4.14.tar.gz dest={{install_path}}/zookeeper.tar.gz
    - name: "3. Unpack the installation package"
      shell: tar -zxvf {{install_path}}/zookeeper.tar.gz -C {{install_path}}/zookeeper --strip-components 1
    - name: "4. Edit the configuration file"
      shell: cd {{install_path}}/zookeeper/conf &&
             cp zoo_sample.cfg zoo.cfg &&
             sed -i '/^dataDir=.*/d' zoo.cfg &&
             echo "dataDir={{work_path}}/zookeeper/data" >> zoo.cfg &&
             echo "dataLogDir={{work_path}}/zookeeper/logs" >> zoo.cfg &&
             sed -i '/^ZOOBINDIR=/a\ZOO_LOG_DIR={{work_path}}/zookeeper/logs/' {{install_path}}/zookeeper/bin/zkServer.sh
    - name: "5. Append server entries"
      shell: echo "server.{{item.value.ansible_index}}={{item.key}}:2888:3888" >> {{install_path}}/zookeeper/conf/zoo.cfg
      with_dict: "{{hostvars}}"
      no_log: True
    - name: "6. Get the hostname"
      command: /bin/hostname
      register: hostname
      ignore_errors: True
    - name: "7. Write myid on each node"
      shell: echo "{{item.value.ansible_index}}" > {{work_path}}/zookeeper/data/myid
      with_dict: "{{hostvars}}"
      when: item.value.ansible_hostname == hostname.stdout
      no_log: True
    - name: "8. Clean up the installation package"
      shell: rm -rf {{install_path}}/zookeeper.tar.gz
    - name: "9. Start ZooKeeper"
      shell: cd {{install_path}}/zookeeper/ && bin/zkServer.sh start
    - name: "10. Check cluster status"
      shell: cd {{install_path}}/zookeeper/ && bin/zkServer.sh status
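After tasks 4 and 5, each node's `zoo.cfg` should look roughly like this (the `tickTime`/`initLimit`/`syncLimit`/`clientPort` values are whatever `zoo_sample.cfg` ships with in 3.4.14; the lines at the bottom are the ones appended by the playbook):

```
tickTime=2000
initLimit=10
syncLimit=5
clientPort=2181
dataDir=/data/work/zookeeper/data
dataLogDir=/data/work/zookeeper/logs
server.0=172.17.16.4:2888:3888
server.1=172.17.16.12:2888:3888
server.2=172.17.16.13:2888:3888
```

Note that the appended `dataDir`/`dataLogDir` lines end up after `clientPort`, since the original `dataDir` line is deleted with `sed` and the replacements are echoed onto the end of the file.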

5. Running the installation

[root@node1 yaml]# ansible-playbook zookeeper.yaml 
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group 
names by default, this will change, but still be user configurable on deprecation. This feature will be 
removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in 
ansible.cfg.
 [WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

 [WARNING]: Found variable using reserved name: name


PLAY [zookeeper] *******************************************************************************************

TASK [Gathering Facts] *************************************************************************************
ok: [172.17.16.4]
ok: [172.17.16.12]
ok: [172.17.16.13]

TASK [1. Initialize working directories] ***************************************************************
 [WARNING]: Consider using the file module with state=directory rather than running 'mkdir'.  If you need
to use command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.

changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

TASK [2. Copy the installation package] ****************************************************************
changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

TASK [3. Unpack the installation package] **************************************************************
 [WARNING]: Consider using the unarchive module rather than running 'tar'.  If you need to use command
because unarchive is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.

changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

TASK [4. Edit the configuration file] ******************************************************************
changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

TASK [5. Append server entries] ************************************************************************
changed: [172.17.16.4] => (item=None)
changed: [172.17.16.12] => (item=None)
changed: [172.17.16.13] => (item=None)
changed: [172.17.16.4] => (item=None)
changed: [172.17.16.13] => (item=None)
changed: [172.17.16.12] => (item=None)
changed: [172.17.16.4] => (item=None)
changed: [172.17.16.4]
changed: [172.17.16.13] => (item=None)
changed: [172.17.16.13]
changed: [172.17.16.12] => (item=None)
changed: [172.17.16.12]

TASK [6. Get the hostname] *****************************************************************************
changed: [172.17.16.4]
changed: [172.17.16.12]
changed: [172.17.16.13]

TASK [7. Write myid on each node] **********************************************************************
skipping: [172.17.16.4] => (item=None) 
skipping: [172.17.16.13] => (item=None) 
skipping: [172.17.16.13] => (item=None) 
changed: [172.17.16.4] => (item=None)
changed: [172.17.16.12] => (item=None)
skipping: [172.17.16.4] => (item=None) 
changed: [172.17.16.4]
skipping: [172.17.16.12] => (item=None) 
skipping: [172.17.16.12] => (item=None) 
changed: [172.17.16.13] => (item=None)
changed: [172.17.16.12]
changed: [172.17.16.13]

TASK [8. Clean up the installation package] ************************************************************
 [WARNING]: Consider using the file module with state=absent rather than running 'rm'.  If you need to use
command because file is insufficient you can add 'warn: false' to this command task or set
'command_warnings=False' in ansible.cfg to get rid of this message.

changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

TASK [9. Start ZooKeeper] ******************************************************************************
changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

TASK [10. Check cluster status] ************************************************************************
changed: [172.17.16.4]
changed: [172.17.16.13]
changed: [172.17.16.12]

PLAY RECAP *************************************************************************************************
172.17.16.12               : ok=11   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
172.17.16.13               : ok=11   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
172.17.16.4                : ok=10   changed=10   unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   

6. Verifying the cluster

[root@node1 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@node2 bin]# ./zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg
Mode: follower
[root@node3 bin]# ./zkServer.sh status 
ZooKeeper JMX enabled by default
Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg
Mode: leader
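A healthy three-node ensemble shows exactly one leader, with the remaining nodes as followers. As a quick illustration, the `Mode:` line of the status output can also be checked programmatically; the captured outputs below are abbreviated copies of the logs above, and `role_of` is a hypothetical helper, not part of ZooKeeper:

```python
# Abbreviated "zkServer.sh status" output per node, as captured above.
outputs = {
    "node1": "Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg\nMode: follower",
    "node2": "Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg\nMode: follower",
    "node3": "Using config: /data/cloud/zookeeper/bin/../conf/zoo.cfg\nMode: leader",
}

def role_of(status_text):
    """Return the value after 'Mode:' in zkServer.sh status output."""
    for line in status_text.splitlines():
        if line.startswith("Mode:"):
            return line.split("Mode:", 1)[1].strip()
    return "unknown"

roles = {node: role_of(text) for node, text in outputs.items()}

# A healthy 3-node ensemble elects exactly one leader.
assert list(roles.values()).count("leader") == 1
print(roles)  # → {'node1': 'follower', 'node2': 'follower', 'node3': 'leader'}
```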
