1. Cluster planning
Deploy ZooKeeper on all three node servers: node1, node2, and node3.
| node1 | node2 | node3 |
| --- | --- | --- |
| Zookeeper | Zookeeper | Zookeeper |
2. Configure the hostname mapping
[victor@node1 ~]$ sudo vim /etc/hosts
192.168.2.101 node1
192.168.2.102 node2
192.168.2.103 node3
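The same mappings have to resolve on node2 and node3 as well, since the server.N entries in zoo.cfg below rely on them; edit /etc/hosts on each server and verify, for example:
[victor@node1 ~]$ ping -c 1 node2
[victor@node1 ~]$ ping -c 1 node3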
3. Installation approach
Install and configure ZooKeeper on node1, then distribute it to node2 and node3 with scp (step 10).
On node2 and node3, change the myid file (step 11) and set up the environment variables; a minimal sketch of the latter follows.
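A sketch of the environment-variable setup, assuming the install path from step 4; the file name /etc/profile.d/zookeeper.sh is only an illustration (repeat it on node2 and node3 after step 10):
[victor@node1 ~]$ sudo vim /etc/profile.d/zookeeper.sh
# hypothetical profile snippet; adjust the path to your layout
export ZOOKEEPER_HOME=/opt/module/zookeeper-3.4.10
export PATH=$PATH:$ZOOKEEPER_HOME/bin
[victor@node1 ~]$ source /etc/profile.d/zookeeper.sh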
4. Extract the ZooKeeper tarball into /opt/module/
[victor@node1 software]$ tar -xzvf zookeeper-3.4.10.tar.gz -C /opt/module/
5. Create the zkdata directory under /opt/module/zookeeper-3.4.10/
[victor@node1 zookeeper-3.4.10]$ cd /opt/module/zookeeper-3.4.10/
[victor@node1 zookeeper-3.4.10]$ pwd
/opt/module/zookeeper-3.4.10
[victor@node1 zookeeper-3.4.10]$ mkdir zkdata
6. Rename zoo_sample.cfg in /opt/module/zookeeper-3.4.10/conf/ to zoo.cfg
[victor@node1 ~]$ cd /opt/module/zookeeper-3.4.10/conf/
[victor@node1 conf]$ pwd
/opt/module/zookeeper-3.4.10/conf
[victor@node1 conf]$ mv zoo_sample.cfg zoo.cfg
7. Configure the zoo.cfg file
[victor@node1 conf]$ vim zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/opt/module/zookeeper-3.4.10/zkdata
dataLogDir=/opt/module/zookeeper-3.4.10/zkdata/logs
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
maxClientCnxns=300
Note: maxClientCnxns defaults to 60 concurrent connections per client IP; once a client goes over the configured limit, the server rejects the extra connections and logs a "Too many connections from ..." error.
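For reference, the timeouts above are counted in ticks: with tickTime=2000 ms, initLimit=10 gives a follower 20 s to connect to and sync with the leader, and syncLimit=5 allows 10 s of lag before the follower is dropped. Once the servers are running (step 12), the effective settings can be read back with the conf four-letter-word command, assuming nc is installed and the command is not blocked by the 4lw whitelist:
[victor@node1 ~]$ echo conf | nc node1 2181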
8. Create the myid file and write its content
[victor@node1 zookeeper-3.4.10]$ cd /opt/module/zookeeper-3.4.10/zkdata/
[victor@node1 zkdata]$ touch myid
[victor@node1 zkdata]$ echo 1 > myid
[victor@node1 zkdata]$ cat myid
1
9. ZooKeeper log configuration
See: https://www.jianshu.com/p/05bab8d5419c
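Separately from the linked article, in 3.4.x zkServer.sh reads ZOO_LOG_DIR and ZOO_LOG4J_PROP (via bin/zkEnv.sh), so zookeeper.out and the rolling log files can be redirected by exporting these before the start command in step 12; the target directory is only an example:
[victor@node1 zookeeper-3.4.10]$ export ZOO_LOG_DIR=/opt/module/zookeeper-3.4.10/logs
[victor@node1 zookeeper-3.4.10]$ export ZOO_LOG4J_PROP="INFO,ROLLINGFILE"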
10. Use scp to distribute ZooKeeper from node1 to node2 and node3
[victor@node1 ~]$ scp -r /opt/module/zookeeper-3.4.10 root@node2:/opt/module/
[victor@node1 ~]$ scp -r /opt/module/zookeeper-3.4.10 root@node3:/opt/module/
11. Modify myid on node2 and node3
node2
[victor@node2 ~]$ echo 2 > /opt/module/zookeeper-3.4.10/zkdata/myid
[victor@node2 zkdata]$ cat myid
2
node3
[victor@node3 ~]$ echo 3 > /opt/module/zookeeper-3.4.10/zkdata/myid
[victor@node3 zkdata]$ cat myid
3
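To confirm that every node ended up with a unique id, the three files can be checked from node1 in one loop (assuming SSH access to all hosts):
[victor@node1 ~]$ for h in node1 node2 node3; do ssh $h cat /opt/module/zookeeper-3.4.10/zkdata/myid; done
1
2
3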
12. Start ZooKeeper on node1, node2, and node3
[victor@node1 zookeeper-3.4.10]$ bin/zkServer.sh start
[victor@node2 zookeeper-3.4.10]$ bin/zkServer.sh start
[victor@node3 zookeeper-3.4.10]$ bin/zkServer.sh start
**Note: When only the first server has been started, it reports "This ZooKeeper instance is not currently serving requests". This message appears whenever the running nodes amount to half of the cluster or fewer, i.e. no quorum has formed yet, and it goes away once a majority is up. Also, since every write request is appended to the transaction log and flushed to disk, ZooKeeper is best run on local disks with reasonably good I/O.**
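Optionally, all three servers can be started from node1 in a single loop instead of logging in to each host; this assumes SSH access and that JAVA_HOME is visible to non-interactive shells on every node:
[victor@node1 ~]$ for h in node1 node2 node3; do ssh $h /opt/module/zookeeper-3.4.10/bin/zkServer.sh start; done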
13. Check the ZooKeeper status on node1, node2, and node3
node1
[victor@node1 zookeeper-3.4.10]$ bin/zkServer.sh status
JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
node2
[victor@node2 zookeeper-3.4.10]$ bin/zkServer.sh status
JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
node3
[victor@node3 zookeeper-3.4.10]$ bin/zkServer.sh status
JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
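As a final check, open a client session against the ensemble; a single server address works, but listing all three lets the client fail over:
[victor@node1 zookeeper-3.4.10]$ bin/zkCli.sh -server node1:2181,node2:2181,node3:2181
[zk: node1:2181,node2:2181,node3:2181(CONNECTED) 0] ls /
[zookeeper]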
14. Finding out why a server fails to start
To see the exception thrown at startup, run the server in the foreground:
[victor@node3 zookeeper-3.4.10]$ bin/zkServer.sh start-foreground
To print the full Java command line that zkServer.sh uses (useful for re-running it by hand and inspecting the errors):
[victor@node3 zookeeper-3.4.10]$ bin/zkServer.sh print-cmd
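It is also worth looking at zookeeper.out, which in 3.4.x ends up in the directory where zkServer.sh was invoked unless ZOO_LOG_DIR is set (see step 9):
[victor@node3 zookeeper-3.4.10]$ tail -n 100 zookeeper.out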
15. Purging old ZooKeeper logs and snapshots
See: https://www.jianshu.com/p/00799191cba3
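Besides the manual cleanup described in the linked article, 3.4.x can purge old snapshots and transaction logs by itself; a sketch of the relevant zoo.cfg settings (the values are examples):
# keep only the 3 most recent snapshots and the transaction logs they need
autopurge.snapRetainCount=3
# run the purge task every 24 hours (0 disables it, which is the default)
autopurge.purgeInterval=24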
16. Viewing ZooKeeper transaction logs
[victor@node1 zookeeper]$ java -classpath .:zookeeper-3.4.10.jar:lib/slf4j-api-1.6.1.jar org.apache.zookeeper.server.LogFormatter zkdata/version-2/log.400000001
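Snapshot files in the same directory can be dumped in a similar way with SnapshotFormatter; the snapshot file name below is only a placeholder, substitute a real file from zkdata/version-2/:
[victor@node1 zookeeper]$ java -classpath .:zookeeper-3.4.10.jar:lib/slf4j-api-1.6.1.jar org.apache.zookeeper.server.SnapshotFormatter zkdata/version-2/snapshot.400000000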