Two ways to build a ZooKeeper pseudo-cluster: by hand, or with Docker compose

This article shows how to build a zookeeper cluster on a single machine.
Method one runs three zk servers on the local machine, using three ports and three copies of the zk configuration file.
Method two builds the zk cluster on the local machine with docker compose.

Method one: build a zookeeper cluster by hand

The cluster has 3 nodes, listening on ports 2181, 2182 and 2183.

  1. Prepare the configuration files for the 3 nodes.
    zoo.cfg lives in /usr/local/etc/zookeeper by default. Create zoo1.cfg, zoo2.cfg and zoo3.cfg in that directory.
    Each node needs its own dataDir and clientPort.
    The server.1, server.2 and server.3 entries are the cluster membership, listing the 3 nodes that make up the cluster.
    The format is server.A=B:C:D, where A is a number identifying the server, B is the server's IP address, C is the port the zookeeper servers use to communicate with each other, and D is the port used for leader election.
zookeeper $ cat zoo1.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/var/run/zookeeper/zk1/data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=localhost:2287:3387
server.2=localhost:2288:3388
server.3=localhost:2289:3389
zookeeper $ cat zoo2.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/var/run/zookeeper/zk2/data
# the port at which the clients will connect
clientPort=2182
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=localhost:2287:3387
server.2=localhost:2288:3388
server.3=localhost:2289:3389
zookeeper $ cat zoo3.cfg 
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial 
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between 
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just 
# example sakes.
dataDir=/usr/local/var/run/zookeeper/zk3/data
# the port at which the clients will connect
clientPort=2183
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the 
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=localhost:2287:3387
server.2=localhost:2288:3388
server.3=localhost:2289:3389
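The three config files above differ only in dataDir and clientPort, so they can be generated from one template instead of edited by hand. A minimal sketch; the output directory ./zk-conf is an assumption for illustration, while the article writes the real files to /usr/local/etc/zookeeper:

```shell
# Generate zoo1.cfg..zoo3.cfg from one template; only dataDir and
# clientPort vary per node. OUT is illustrative -- the real files
# belong in /usr/local/etc/zookeeper.
OUT="./zk-conf"
mkdir -p "$OUT"
for i in 1 2 3; do
  cat > "$OUT/zoo$i.cfg" <<EOF
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/usr/local/var/run/zookeeper/zk$i/data
clientPort=$((2180 + i))
server.1=localhost:2287:3387
server.2=localhost:2288:3388
server.3=localhost:2289:3389
EOF
done
grep clientPort "$OUT"/zoo?.cfg
```

The comment lines from the stock zoo.cfg are omitted here; only the settings that matter for the cluster are kept.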
  2. Create the data directories and myid files
# create the following data directories
/usr/local/var/run/zookeeper/zk1/data
/usr/local/var/run/zookeeper/zk2/data
/usr/local/var/run/zookeeper/zk3/data

# create the following myid files
data $ cat /usr/local/var/run/zookeeper/zk1/data/myid
1
data $ cat /usr/local/var/run/zookeeper/zk2/data/myid
2
data $ cat /usr/local/var/run/zookeeper/zk3/data/myid
3
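The directories and myid files can be created in one loop. A minimal sketch, with BASE set to a local directory for illustration; substitute /usr/local/var/run/zookeeper to match the article:

```shell
# Create each node's dataDir and write its id into myid.
# The id in myid must match the N in the server.N line of the config.
# BASE is an assumption; the article uses /usr/local/var/run/zookeeper.
BASE="./zk-cluster"
for i in 1 2 3; do
  mkdir -p "$BASE/zk$i/data"
  echo "$i" > "$BASE/zk$i/data/myid"
done
```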
  3. Start the 3 zookeeper servers
$ zkServer start /usr/local/etc/zookeeper/zoo1.cfg 
ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo1.cfg
Starting zookeeper ... STARTED

$ zkServer start /usr/local/etc/zookeeper/zoo2.cfg 
ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo2.cfg
Starting zookeeper ... STARTED

$ zkServer start /usr/local/etc/zookeeper/zoo3.cfg 
ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo3.cfg
Starting zookeeper ... STARTED
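Typing the zkServer command three times gets tedious; a small helper script can start, stop, or query all three nodes at once. A sketch under the assumption that zkServer is on PATH (as in the Homebrew-style install above); zk-all.sh is a hypothetical name, and the script is only written here, to be run where ZooKeeper is installed:

```shell
# Write a helper that runs zkServer once per config file.
# Hypothetical script name; assumes zkServer is on PATH.
cat > ./zk-all.sh <<'EOF'
#!/bin/sh
# usage: ./zk-all.sh start|stop|status
for i in 1 2 3; do
  zkServer "$1" "/usr/local/etc/zookeeper/zoo$i.cfg"
done
EOF
chmod +x ./zk-all.sh
```

With this in place, ./zk-all.sh start brings up all three nodes, and ./zk-all.sh status replicates step 4 below in one command.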
  4. Check each node's status: 1 leader, 2 followers
$ zkServer status /usr/local/etc/zookeeper/zoo1.cfg 
ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo1.cfg
Mode: follower

$ zkServer status /usr/local/etc/zookeeper/zoo2.cfg 
ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo2.cfg
Mode: follower

$ zkServer status /usr/local/etc/zookeeper/zoo3.cfg 
ZooKeeper JMX enabled by default
Using config: /usr/local/etc/zookeeper/zoo3.cfg
Mode: leader
  5. Connect to the cluster and log in to each node
$ zkCli -server localhost:2181
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is enabled

WATCHER::

WatchedEvent state:SyncConnected type:None path:null

$ zkCli -server localhost:2182
$ zkCli -server localhost:2183

# create a node on zk1; it is immediately visible on zk2 and zk3
[zk: localhost:2181(CONNECTED) 1] create /test-zk1 "test-zk1"
Created /test-zk1

[zk: localhost:2182(CONNECTED) 1] get /test-zk1
test-zk1
cZxid = 0x100000007
ctime = Sat Mar 13 22:05:00 CST 2021
mZxid = 0x100000007
mtime = Sat Mar 13 22:05:00 CST 2021
pZxid = 0x100000007
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 8
numChildren = 0

[zk: localhost:2183(CONNECTED) 0] get /test-zk1
test-zk1
cZxid = 0x100000007
ctime = Sat Mar 13 22:05:00 CST 2021
mZxid = 0x100000007
mtime = Sat Mar 13 22:05:00 CST 2021
pZxid = 0x100000007
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 8
numChildren = 0
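In practice a client is usually handed all three endpoints in one connection string, so it can fail over to another node if the one it is connected to goes down. A sketch that only prints the command, since it needs the running cluster from the steps above:

```shell
# All three client ports in one connection string; the client connects to
# one and reconnects to another on failure. Printed, not executed, here.
SERVERS="localhost:2181,localhost:2182,localhost:2183"
echo "zkCli -server $SERVERS"
```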

Method two: build a zookeeper cluster with docker compose

Method one requires preparing 3 config files and starting the zk service 3 times, which is tedious. With docker compose, the zk cluster can be brought up in containers much more quickly.

  1. Pull the zookeeper image with docker. If you are new to docker compose, see the posts "Docker installation and internals"
    and "Introduction to docker compose and common commands" in the docker series.
$ docker pull zookeeper:3.4.13

$ docker images |grep zookeeper
zookeeper           3.4.13              4ebfb9474e72        23 months ago       150 MB
  2. Prepare docker-compose.yaml
    Note that yaml has a very strict syntax: the characters : and - must be followed by a space, or parsing will fail.
$ cat docker-compose.yaml 
version: '3.1'
services:
  zoo1:
    image: zookeeper:3.4.13
    restart: always
    hostname: zoo1
    container_name: zoo1
    ports:
      - "2181:2181"
    environment:
      ZOO_MY_ID: 1
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo2:
    image: zookeeper:3.4.13
    restart: always
    hostname: zoo2
    container_name: zoo2
    ports:
      - "2182:2181"
    environment:
      ZOO_MY_ID: 2
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888

  zoo3:
    image: zookeeper:3.4.13
    restart: always
    hostname: zoo3
    container_name: zoo3
    ports:
      - "2183:2181"
    environment:
      ZOO_MY_ID: 3
      ZOO_SERVERS: server.1=zoo1:2888:3888 server.2=zoo2:2888:3888 server.3=zoo3:2888:3888
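Given how strict the yaml syntax is, it is worth validating the file before starting anything; docker-compose config parses the file and fails on syntax errors. The command is printed rather than run in this sketch, since docker-compose may not be installed everywhere:

```shell
# "docker-compose config" parses docker-compose.yaml and reports syntax
# errors before anything is started. Printed as a dry run here.
CMD="docker-compose -f docker-compose.yaml config"
echo "$CMD"
```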
  3. Start and test
# start the docker containers
$ docker-compose up
Creating network "zookeeper_default" with the default driver
Creating zoo1
Creating zoo2
Creating zoo3

# check: the 3 containers have been created
$ docker ps
CONTAINER ID        IMAGE               COMMAND                  CREATED             STATUS              PORTS                                        NAMES
b7bd4fb6e17b        zookeeper:3.4.13    "/docker-entrypoin..."   32 hours ago        Up 2 minutes        2888/tcp, 3888/tcp, 0.0.0.0:2182->2181/tcp   zoo2
2b0dde93bea9        zookeeper:3.4.13    "/docker-entrypoin..."   32 hours ago        Up 2 minutes        2888/tcp, 3888/tcp, 0.0.0.0:2183->2181/tcp   zoo3
9c3d91824726        zookeeper:3.4.13    "/docker-entrypoin..."   32 hours ago        Up 2 minutes        2888/tcp, 0.0.0.0:2181->2181/tcp, 3888/tcp   zoo1

# check the cluster status: one leader, two followers
$ docker exec -it zoo1 bash ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower

$ docker exec -it zoo2 bash ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: leader

$ docker exec -it zoo3 bash ./bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /conf/zoo.cfg
Mode: follower
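Besides exec-ing into each container, the same status can be read from the host through the mapped ports with ZooKeeper's four-letter-word commands; srvr reports the node's mode. A sketch that only prints the commands, since it assumes the containers from docker-compose up are running:

```shell
# Query each node's mode via the "srvr" four-letter word on the
# host-mapped client ports. Dry run: the commands are printed,
# to be run (or piped to sh) where the cluster is up.
CMDS=$(for port in 2181 2182 2183; do
  echo "echo srvr | nc localhost $port | grep Mode"
done)
echo "$CMDS"
```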
  4. Enter the cluster nodes inside the containers and inspect the cluster data
# log in to zoo2 and look at the data
##############
zookeeper $ docker exec -it zoo2 /bin/bash

bash-4.4# pwd
/zookeeper-3.4.13

bash-4.4# ./bin/zkCli.sh -server localhost:2181
Connecting to localhost:2181
2021-03-12 19:22:04,234 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.13-2d71af4dbe22557fda74f9a9b4309b15a7487f03, built on 06/29/2018 04:05 GMT
......

[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]

# log in to zoo1 and look at the data
################
$ docker exec -it zoo1 /bin/bash

bash-4.4# ./bin/zkCli.sh -server localhost:2181

[zk: localhost:2181(CONNECTED) 1] ls /
[zookeeper]

# create a node on zoo2; it is visible on zoo1
#################
#zoo2
[zk: localhost:2181(CONNECTED) 2] create /zoo2 "zoo2-create"       
Created /zoo2

#zoo1
[zk: localhost:2181(CONNECTED) 1] ls /
[zoo2, zookeeper]
