[Distributed Systems: Installing ZooKeeper]

ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives on top of which distributed applications can build synchronization, configuration maintenance, naming, and similar services.
Design goals
1. Eventual consistency: no matter which server a client connects to, it sees the same view of the data. This is ZooKeeper's most important guarantee.
2. Reliability: the service is simple, robust, and performs well; if a message m is accepted by one server, it will eventually be accepted by all servers.
3. Timeliness: ZooKeeper guarantees that a client learns about server updates, or about a server failure, within a bounded time interval. Because of network latency and similar factors, however, it cannot guarantee that two clients see a freshly updated value at exactly the same moment; a client that needs the latest data should call sync() before reading (see the sketch after this list).
4. Wait-free: slow or failed clients must not interfere with the requests of fast clients, so every client's requests are handled without being blocked by others.
5. Atomicity: an update either succeeds or fails; there is no intermediate state.
6. Ordering: this covers total order and partial order. Total order means that if message a is published before message b on one server, then a is published before b on every server. Partial order means that if the same sender publishes message b after message a, then a is ordered before b.
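A minimal sketch of the sync-before-read pattern mentioned in point 3, using the zkCli.sh shell that ships with ZooKeeper (the znode path /myapp/config is just an example, not from this article):

./zkCli.sh -server 127.0.0.1:2181
# inside the CLI: ask this server to catch up with the leader, then read
sync /myapp/config
get /myapp/config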

ZooKeeper can be installed in three ways: standalone, pseudo-distributed (several instances on one machine), and fully distributed (one instance per machine).

 

1. Standalone installation

[root@node1 bin]# wget http://apache.opencas.org/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@node1 zk]# ls
zookeeper-3.4.6.tar.gz
[root@node1 zk]# tar -zxvf zookeeper-3.4.6.tar.gz
[root@node1 zk]# ls
zookeeper-3.4.6  zookeeper-3.4.6.tar.gz
[root@node1 zk]# mv zookeeper-3.4.6 zookeeper
[root@node1 zk]# cd zookeeper
[root@node1 zookeeper]# ls
bin        CHANGES.txt  contrib     docs             ivy.xml  LICENSE.txt  README_packaging.txt  recipes  zookeeper-3.4.6.jar      zookeeper-3.4.6.jar.md5
build.xml  conf         dist-maven  ivysettings.xml  lib      NOTICE.txt   README.txt            src      zookeeper-3.4.6.jar.asc  zookeeper-3.4.6.jar.sha1
[root@node1 zookeeper]# cd conf/
[root@node1 conf]# ls
configuration.xsl  log4j.properties  zoo_sample.cfg
[root@node1 conf]# cp zoo_sample.cfg  zoo.cfg
[root@node1 conf]# cat zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/tmp/zookeeper
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
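For reference, the limits above are expressed in ticks; a quick expansion into milliseconds (the session-timeout bounds are ZooKeeper's defaults, not printed in the file):

# initLimit = 10 ticks * 2000 ms = 20000 ms  (time a follower has to connect to and sync with the leader)
# syncLimit =  5 ticks * 2000 ms = 10000 ms  (max delay between a request and its acknowledgement)
# client session timeouts are negotiated between 2*tickTime = 4000 ms and 20*tickTime = 40000 ms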

[root@node1 conf]# java -version
java version "1.7.0_45"
OpenJDK Runtime Environment (rhel-2.4.3.3.el6-i386 u45-b15)
OpenJDK Client VM (build 24.45-b08, mixed mode, sharing)
[root@node1 conf]# cd ../
[root@node1 zookeeper]# cd bin/
[root@node1 bin]# ls
README.txt  zkCleanup.sh  zkCli.cmd  zkCli.sh  zkEnv.cmd  zkEnv.sh  zkServer.cmd  zkServer.sh
[root@node1 bin]# ./zkServer.sh
JMX enabled by default
Using config: /opt/zk/zookeeper/bin/../conf/zoo.cfg
Usage: ./zkServer.sh {start|start-foreground|stop|restart|status|upgrade|print-cmd}
[root@node1 bin]# ./zkServer.sh  start
JMX enabled by default
Using config: /opt/zk/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@node1 bin]# jps
4341 Jps
4325 QuorumPeerMain
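With the standalone server running, it can be checked and exercised straight away. A minimal sketch; the znode /test and its value are examples, not part of the original transcript:

./zkServer.sh status                 # should report Mode: standalone
./zkCli.sh -server 127.0.0.1:2181    # opens an interactive CLI shell
# inside the CLI:
create /test "hello"
get /test
quit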



2. Pseudo-distributed installation

Installation tip: make three copies of the installation and, in each copy's zoo.cfg, change clientPort to a distinct value (2181, 2182, 2183 work well) and write that instance's ID into its myid file; the ID must match the number in the corresponding server.X line. In a server.X=host:port1:port2 entry, the first port is used by followers to connect to the leader and the second is used for leader election, so on a single machine these two ports must also differ per instance, as in the following entries:


server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890

[root@node1 conf]# vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=../zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890
~
"zoo.cfg" 33L, 1007C written
[root@node1 conf]# cd ../
[root@node1 zookeeper]# mkdir zkdata
[root@node1 zookeeper]# cd ../
[root@node1 zk]# clear
[root@node1 zk]# ls
zookeeper  zookeeper-3.4.6.tar.gz
[root@node1 zk]# cp -R zookeeper zookeeper2
[root@node1 zk]# cp -R zookeeper zookeeper3
[root@node1 zk]# jps
4372 Jps
4325 QuorumPeerMain
[root@node1 zk]# kill -9 4325
[root@node1 zk]# jps
4381 Jps

[root@node1 zk]# echo 1 > zookeeper/zkdata/myid
[root@node1 zk]# echo 2 > zookeeper2/zkdata/myid
[root@node1 zk]# echo 3 > zookeeper3/zkdata/myid


[root@node1 zk]# cat  zookeeper3/zkdata/myid
3
[root@node1 zk]# cat  zookeeper2/zkdata/myid
2
[root@node1 zk]# cat  zookeeper/zkdata/myid
1
[root@node1 zk]#
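One step the transcript does not show: zookeeper2 and zookeeper3 were copied with clientPort=2181 still in their zoo.cfg, so they would clash with the first instance. A minimal sketch of the remaining edits, run from the same /opt/zk directory as above (sed is just a shortcut, editing the files by hand works equally well, and the absolute dataDir values are suggestions rather than part of the original):

sed -i 's/^clientPort=2181$/clientPort=2182/' zookeeper2/conf/zoo.cfg
sed -i 's/^clientPort=2181$/clientPort=2183/' zookeeper3/conf/zoo.cfg
# a relative dataDir resolves against whatever directory zkServer.sh is started
# from, so pinning it to an absolute per-instance path is safer:
sed -i 's|^dataDir=.*|dataDir=/opt/zk/zookeeper/zkdata|'  zookeeper/conf/zoo.cfg
sed -i 's|^dataDir=.*|dataDir=/opt/zk/zookeeper2/zkdata|' zookeeper2/conf/zoo.cfg
sed -i 's|^dataDir=.*|dataDir=/opt/zk/zookeeper3/zkdata|' zookeeper3/conf/zoo.cfg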

 

 

Start and verify. Because start-foreground keeps each server attached to the terminal, run each command in its own terminal (or use start instead):
/opt/zk/zookeeper/bin/zkServer.sh start-foreground
/opt/zk/zookeeper2/bin/zkServer.sh start-foreground
/opt/zk/zookeeper3/bin/zkServer.sh start-foreground
Once all three are up, run jps to check the processes, as sketched below.
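If all three instances came up, jps should show three QuorumPeerMain processes, and zkServer.sh status reports each instance's role. A minimal verification sketch, assuming the /opt/zk layout above:

jps                                           # expect three QuorumPeerMain entries plus Jps
/opt/zk/zookeeper/bin/zkServer.sh  status     # one instance reports Mode: leader
/opt/zk/zookeeper2/bin/zkServer.sh status     # the other two report Mode: follower
/opt/zk/zookeeper3/bin/zkServer.sh status
/opt/zk/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183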

 

 

3. Distributed installation

Step 1: Modify the zoo.cfg from the standalone installation as shown below. For a distributed installation only the server.X lines at the end need to change: point them at the IP addresses of the three machines.

[root@node1 conf]# vi zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=../zkdata
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

server.1=192.168.1.1:2888:3888
server.2=192.168.1.2:2888:3888
server.3=192.168.1.3:2888:3888
~

Step 2: Distribute the installation directory to the other two machines with scp, as sketched below.
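A minimal sketch of the copy, assuming the same /opt/zk layout on every node and that 192.168.1.2 and 192.168.1.3 are the other two machines from zoo.cfg (the target directory must already exist):

scp -r /opt/zk/zookeeper root@192.168.1.2:/opt/zk/
scp -r /opt/zk/zookeeper root@192.168.1.3:/opt/zk/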

 

Step 3: Edit each machine's myid file; the number must match that machine's server.X entry in zoo.cfg (for example, the host configured as server.1=192.168.1.1 gets a myid of 1), as shown below for node1.

[root@node1 zk]# echo 1 > zookeeper/zkdata/myid
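On the other two machines the ID must match their own server.X lines. The corresponding commands, run locally on each host from the same /opt/zk directory (if the zkdata directory does not exist yet, create it first with mkdir -p zookeeper/zkdata), would be:

# on the machine configured as server.2 (192.168.1.2):
echo 2 > zookeeper/zkdata/myid
# on the machine configured as server.3 (192.168.1.3):
echo 3 > zookeeper/zkdata/myid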

 

Step 4: Start the server on every machine and verify:

/opt/zk/zookeeper/bin/zkServer.sh start-foreground

Then check each node's role as sketched below.
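A minimal verification sketch, run once the servers are up on all three machines (the nc checks assume the four-letter-word commands are enabled, which is the default in 3.4.x):

/opt/zk/zookeeper/bin/zkServer.sh status      # Mode: leader on exactly one node, follower on the others
echo ruok | nc 192.168.1.1 2181               # a healthy server replies "imok"
echo stat | nc 192.168.1.1 2181               # prints version, connections, and this node's Mode
/opt/zk/zookeeper/bin/zkCli.sh -server 192.168.1.1:2181,192.168.1.2:2181,192.168.1.3:2181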
