ZooKeeper cluster mode, installed on the following three machines:
10.43.159.237 zdh-237
10.43.159.238 zdh-238
10.43.159.239 zdh-239
Kerberos server:
10.43.159.240 zdh-240
Kerberos clients:
zdh-237, zdh-238, zdh-239
Create the zookeeper account (user/password zookeeper/zdh1234):
useradd -g hadoop -s /bin/bash -md /home/zookeeper zookeeper
Fetch the ZooKeeper package:
scp garrison@zdh-237:/home/garrison/backup/zookeeper-3.5.1-alpha.tar.gz .
Extract it:
tar -zxvf zookeeper-3.5.1-alpha.tar.gz
In the zookeeper-3.5.1-alpha/conf/ directory run:
mv zoo_sample.cfg zoo.cfg
Edit zoo.cfg:
dataDir=/home/zookeeper/zookeeper-3.5.1-alpha/dataDir
clientPort=12181
At the end of the file, add the quorum and leader-election ports for the cluster:
server.1=zdh-237:12888:13888
server.2=zdh-238:12888:13888
server.3=zdh-239:12888:13888
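Putting the pieces together, the relevant part of zoo.cfg looks roughly like this (tickTime/initLimit/syncLimit keep the sample file's defaults; the dataDir path assumes the layout used in this guide):

```
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/zookeeper/zookeeper-3.5.1-alpha/dataDir
clientPort=12181
server.1=zdh-237:12888:13888
server.2=zdh-238:12888:13888
server.3=zdh-239:12888:13888
```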
Create the dataDir directory:
mkdir ~/zookeeper-3.5.1-alpha/dataDir
Then create an empty myid file inside it:
touch ~/zookeeper-3.5.1-alpha/dataDir/myid
Finally write this node's ID into the file:
echo 1 > ~/zookeeper-3.5.1-alpha/dataDir/myid
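The myid value must match this host's server.N entry in zoo.cfg. A hypothetical helper (an assumption, not part of the original setup) can derive it from zoo.cfg so the same provisioning script works unchanged on every node:

```shell
# Hypothetical helper: print N from the server.N=host:... line in zoo.cfg
# that matches this host, i.e. the value that belongs in dataDir/myid.
derive_myid() {
  # $1 = this host's name, $2 = path to zoo.cfg
  awk -F'[=:]' -v h="$1" '/^server\./ { if ($2 == h) { sub(/^server\./, "", $1); print $1 } }' "$2"
}

# Usage sketch (paths assumed from the layout above):
#   derive_myid "$(hostname)" ~/zookeeper-3.5.1-alpha/conf/zoo.cfg \
#     > ~/zookeeper-3.5.1-alpha/dataDir/myid
```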
Configure the Java environment variables (e.g. in ~/.bashrc):
export JAVA_HOME=/usr/java/jdk1.7.0_80
export ZOOKEEPER_HOME=~/zookeeper-3.5.1-alpha
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$CLASSPATH:$ZOOKEEPER_HOME/conf
Add aliases to jump to frequently used directories:
alias conf='cd ~/zookeeper-3.5.1-alpha/conf'
alias logs='cd ~/zookeeper-3.5.1-alpha/logs'
On the Kerberos server (zdh-240), create the principals and export them to a keytab:
kadmin.local
addprinc -randkey zookeeper/[email protected]
addprinc -randkey zookeeper/[email protected]
addprinc -randkey zookeeper/[email protected]
xst -k zookeeper.keytab zookeeper/[email protected]
xst -k zookeeper.keytab zookeeper/[email protected]
xst -k zookeeper.keytab zookeeper/[email protected]
exit
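The addprinc/xst pairs above follow one pattern per host. As a convenience (an assumption, not in the original doc), a small generator can emit them for any host list, which makes adding nodes less error-prone:

```shell
# Hypothetical generator: emit the addprinc/xst command pairs used above
# for each host, with the ZDH.COM realm assumed from this guide.
gen_kadmin_cmds() {
  for h in "$@"; do
    echo "addprinc -randkey zookeeper/${h}@ZDH.COM"
    echo "xst -k zookeeper.keytab zookeeper/${h}@ZDH.COM"
  done
}

# Usage sketch, piping the commands into kadmin.local on the KDC:
#   gen_kadmin_cmds zdh-237 zdh-238 zdh-239 | kadmin.local
```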
Copy the keytab into the ZooKeeper conf directory:
scp zookeeper.keytab zookeeper@zdh-237:/home/zookeeper/zookeeper-3.5.1-alpha/conf
Enable SASL authentication by adding the following to zoo.cfg:
authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider
jaasLoginRenew=3600000
kerberos.removeHostFromPrincipal=true
kerberos.removeRealmFromPrincipal=true
Point the JVM at the JAAS configuration (e.g. in conf/java.env, which zkEnv.sh sources):
export JVMFLAGS="-Djava.security.auth.login.config=/home/zookeeper/zookeeper-3.5.1-alpha/conf/jaas.conf"
Create conf/jaas.conf:
Server {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/zookeeper/zookeeper-3.5.1-alpha/conf/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/[email protected]";
};
Client {
com.sun.security.auth.module.Krb5LoginModule required
useKeyTab=true
keyTab="/home/zookeeper/zookeeper-3.5.1-alpha/conf/zookeeper.keytab"
storeKey=true
useTicketCache=false
principal="zookeeper/[email protected]";
};
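If the client is given its own identity instead of reusing the server's entry, its jaas.conf would look roughly like this (the zkcli principal and keytab path are hypothetical examples, not from this guide):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="/home/zkclient/zkcli.keytab"
  storeKey=true
  useTicketCache=false
  principal="[email protected]";
};
```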
Note: if you change jaas.conf you must restart zkServer, otherwise zkClient cannot connect.
This is likely because zkClient and zkServer share the same JAAS configuration here; in practice the client
should be configured with its own keytab for access rather than the server's entry, for example by
creating a separate user on another machine to act as the client.
Copy the installation to the other nodes:
scp -r zookeeper-3.5.1-alpha zookeeper@zdh-238:/home/zookeeper
Then log in to zdh-238 as zookeeper and change the principal in jaas.conf
from zookeeper/[email protected] to zookeeper/[email protected].
Also set the myid value to match this node's server.N entry; every node's ID must be unique:
echo 2 > ~/zookeeper-3.5.1-alpha/dataDir/myid
Repeat the same steps on zdh-239 and any other nodes,
and remember to set the environment variables on each node as well.
In the bin directory, start ZooKeeper:
./zkServer.sh start
Check the status:
./zkServer.sh status
Stop ZooKeeper:
./zkServer.sh stop
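When checking all three nodes, the interesting line in the `zkServer.sh status` output is the role. A small helper (an assumption, not part of the doc) can extract it:

```shell
# Hypothetical helper: pull "leader"/"follower"/"standalone" out of
# `zkServer.sh status` output read on stdin.
parse_mode() {
  sed -n 's/^Mode: //p'
}

# Usage sketch:
#   ./zkServer.sh status 2>/dev/null | parse_mode
```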
Verify that a client can log in to the Kerberos-enabled zkServer:
zkCli.sh -server zdh-237:12181
Note: do not connect with zkCli.sh -server 10.43.159.237:12181;
authentication will fail, because zdh-237 and 10.43.159.237 are different principals from the Kerberos server's point of view.
If client login fails with an error about the clock skew being too great,
synchronize the clocks across the cluster machines; Kerberos authentication tolerates only a small time difference.
The AdminServer is available at:
http://10.43.159.237:8080/commands/
Its port defaults to 8080 and can be changed in zoo.cfg:
admin.serverPort=18080
Authenticate with the keytab, then exercise the cluster from zkCli:
kinit -kt zookeeper.keytab zookeeper/[email protected]
deleteall /storm
create /znode1
getAcl /znode1
setAcl /znode1 sasl:zookeeper/[email protected]:cdwra
delete /test
ls /
Inspecting zkCli.sh shows that debug logging is controlled by the ZOO_LOG4J_PROP variable in zkEnv.sh:
ZOO_LOG4J_PROP="DEBUG,CONSOLE"
To open a remote debug port for the client, add the following to zkEnv.sh:
CLIENT_JVMFLAGS=" ${JAVA_OPTS} -Xdebug -Xrunjdwp:transport=dt_socket,address=1077,server=y,suspend=y "
The server side is configured similarly (via SERVER_JVMFLAGS).
If the ZooKeeper server runs under a user name other than zookeeper, for example zookeeperkrb,
add the following to zkEnv.sh so the client knows which server service name to request:
CLIENT_JVMFLAGS=" -Dzookeeper.sasl.client.username=zookeeperkrb $CLIENT_JVMFLAGS"
The service name the client uses is initialized in the following code, with a default of zookeeper:
org.apache.zookeeper.ClientCnxn.SendThread.startConnect()
String principalUserName = System.getProperty("zookeeper.sasl.client.username", "zookeeper");