This post demonstrates how to go from 1 ZooKeeper node + 1 Kafka broker to a cluster of 3 ZooKeeper nodes + 2 Kafka brokers.
Kafka depends on ZooKeeper, so first download ZooKeeper and Kafka:
$ wget http://mirrors.hust.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
$ gzip -d zookeeper-3.4.6.tar.gz
$ tar -xvf zookeeper-3.4.6.tar
$ wget http://apache.fayea.com/apache-mirror/kafka/0.8.1.1/kafka_2.8.0-0.8.1.1.tgz
$ gtar xvzf kafka_2.8.0-0.8.1.1.tgz
On CentOS, local experiments may run into puzzling problems, usually because the hostname cannot be resolved correctly. To avoid this, first check the local hostname:
$ hostname
HOME
Then add a local resolution entry for it to the /etc/hosts file:
127.0.0.1 HOME
Rename zoo_sample.cfg under zookeeper/conf/ to zoo.cfg; ZooKeeper reads this configuration file by default. The defaults are fine for now, nothing needs to be changed yet.
$ mv zookeeper-3.4.6/conf/zoo_sample.cfg zookeeper-3.4.6/conf/zoo.cfg
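For reference, the sample defaults that matter here are roughly the following (a quick check, assuming an unmodified zoo_sample.cfg from the 3.4.6 release):
$ grep -E '^(tickTime|dataDir|clientPort)' zookeeper-3.4.6/conf/zoo.cfg
tickTime=2000
dataDir=/tmp/zookeeper
clientPort=2181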
Start the ZooKeeper service; once it starts successfully it listens on port 2181.
$ zookeeper-3.4.6/bin/zkServer.sh start
JMX enabled by default
Using config: /home/wj/event/zookeeper-3.4.6/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
Start the Kafka service; once it starts successfully it listens on port 9092.
$ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server.properties
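As a quick sanity check (assuming netstat is available on the machine), verify that both ports are listening before continuing:
# ZooKeeper should be listening on 2181 and Kafka on 9092
$ netstat -lnt | grep -E '2181|9092'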
Now let's run a quick test.
# Connect to ZooKeeper and create a topic named test. replication-factor and partitions will be explained later; set both to 1 for now.
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
Created topic "test".
# List the topics that have been created
$ bin/kafka-topics.sh --list --zookeeper localhost:2181
test
# Show the attributes of this topic
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test
Topic:test PartitionCount:1 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 0 Replicas: 0 Isr: 0
# The producer connects to the Kafka broker and publishes a message
$ bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Hello World
The consumer connects to ZooKeeper and fetches the messages:
$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Hello World
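Next, add a second broker on the same machine to form a small Kafka cluster. A simple way to do this is to copy the default configuration to the server-2.properties file used below and then edit the copy; the settings that must differ between the two brokers are shown next:
$ cp kafka_2.8.0-0.8.1.1/config/server.properties kafka_2.8.0-0.8.1.1/config/server-2.properties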
# The id of the broker. This must be set to a unique integer for each broker.
# Each Kafka broker must be given its own unique id.
broker.id=1
############################# Socket Server Settings #############################
# The port the socket server listens on.
# Since several brokers run on the same machine here, each one uses a different port.
port=19092
# Hostname the broker will bind to. If not set, the server will bind to all interfaces.
# With several network interfaces, different brokers could also be bound to different interfaces.
#host.name=localhost
############################# Log Basics #############################
# A comma separated list of directories under which to store log files.
# Since several brokers run on the same machine here, each one needs its own log directory to avoid conflicts.
log.dirs=/tmp/kafka-logs-2
Then start the second broker with the new configuration file:
$ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server-2.properties
Now create a new topic. replication-factor indicates how many brokers should hold a copy of the topic; here it is set to 2, meaning the topic is stored on two brokers.
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test2
Then check the attributes of this topic.
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2
Topic:test2 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: test2 Partition: 0 Leader: 0 Replicas: 0,1 Isr: 0,1
In the describe output, Leader is the id of the broker currently handling reads and writes for the partition, Replicas lists the brokers that hold a copy of it, and Isr (in-sync replicas) lists the replicas currently caught up with the leader. Now publish a new message to test2 to verify that it works:
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
HHH
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
HHH
# Kill broker 0 (the one started with server.properties, listening on 9092) to simulate a broker failure
$ ps aux | grep server.properties
user 2620 1.5 5.6 2082704 192424 pts/1 Sl+ 08:57 0:25 java
$ kill 2620
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2
Topic:test2 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: test2 Partition: 0 Leader: 1 Replicas: 0,1 Isr: 1
The describe output shows that the leader has switched to broker 1 and that broker 1 is now the only in-sync replica.
# Publish another message to test2 through broker 1 (listening on port 19092)
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-producer.sh --broker-list localhost:19092 --topic test2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
Another message
# Consuming still works fine
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic test2
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
HHH
Another message
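To restore the cluster after this experiment, broker 0 can simply be started again with its original configuration; once it has caught up, --describe should list both brokers in the Isr column again (a sketch of the expected behaviour, not captured output):
$ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server.properties
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test2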
Likewise, ZooKeeper itself needs to be clustered to avoid a single point of failure. Because ZooKeeper elects a new leader by majority vote when a node fails, at least three ZooKeeper nodes are needed to form a cluster, and an odd number (rather than an even one) is preferred.
Below is a demonstration of the simplest possible ZooKeeper cluster on a single machine; for more detail see http://myjeeva.com/zookeeper-cluster-setup.html
#!/bin/sh
# Download
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
gzip -d zookeeper-3.4.6.tar.gz
tar xvf zookeeper-3.4.6.tar
# Rename zoo_sample.cfg to zoo.cfg
mv zookeeper-3.4.6/conf/zoo_sample.cfg zookeeper-3.4.6/conf/zoo.cfg
# Create a new directory
sudo mkdir /usr/zookeeper-cluster
sudo chown -R jerry:jerry /usr/zookeeper-cluster
# Three subdirectories, one for each ZooKeeper instance
mkdir /usr/zookeeper-cluster/server1
mkdir /usr/zookeeper-cluster/server2
mkdir /usr/zookeeper-cluster/server3
# Three directories holding each instance's data files
mkdir /usr/zookeeper-cluster/data
mkdir /usr/zookeeper-cluster/data/server1
mkdir /usr/zookeeper-cluster/data/server2
mkdir /usr/zookeeper-cluster/data/server3
# Three directories holding each instance's log files
mkdir /usr/zookeeper-cluster/log
mkdir /usr/zookeeper-cluster/log/server1
mkdir /usr/zookeeper-cluster/log/server2
mkdir /usr/zookeeper-cluster/log/server3
# In each data directory, create a myid file containing a unique server id; it is referenced by the configuration below
echo '1' > /usr/zookeeper-cluster/data/server1/myid
echo '2' > /usr/zookeeper-cluster/data/server2/myid
echo '3' > /usr/zookeeper-cluster/data/server3/myid
# Copy the ZooKeeper installation three times
cp -rf zookeeper-3.4.6/* /usr/zookeeper-cluster/server1
cp -rf zookeeper-3.4.6/* /usr/zookeeper-cluster/server2
cp -rf zookeeper-3.4.6/* /usr/zookeeper-cluster/server3
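An optional sanity check that each data directory got the matching id:
$ cat /usr/zookeeper-cluster/data/server1/myid /usr/zookeeper-cluster/data/server2/myid /usr/zookeeper-cluster/data/server3/myid
1
2
3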
Then edit each ZooKeeper instance's zoo.cfg.
Set dataDir and dataLogDir to each instance's own directories, and make sure clientPort does not clash with the other ZooKeeper instances (since in this demo all three instances are installed on one server).
Finally, add the following lines:
server.1=0.0.0.0:2888:3888
server.2=0.0.0.0:12888:13888
server.3=0.0.0.0:22888:23888
The format of these lines is server.X=IP:port1:port2, where:
X is the server id written to the myid file in that instance's data directory.
IP is the address the ZooKeeper instance binds to; since this is a demo, they are all on localhost.
port1 is the quorum port.
port2 is the leader election port.
Because the three ZooKeeper instances run on the same machine, each one must use different port numbers to avoid conflicts.
The modified files look like this:
/usr/zookeeper-cluster/server1/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/zookeeper-cluster/data/server1
dataLogDir=/usr/zookeeper-cluster/log/server1
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=0.0.0.0:2888:3888
server.2=0.0.0.0:12888:13888
server.3=0.0.0.0:22888:23888
/usr/zookeeper-cluster/server2/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/zookeeper-cluster/data/server2
dataLogDir=/usr/zookeeper-cluster/log/server2
# the port at which the clients will connect
clientPort=12181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=0.0.0.0:2888:3888
server.2=0.0.0.0:12888:13888
server.3=0.0.0.0:22888:23888
/usr/zookeeper-cluster/server3/conf/zoo.cfg
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/zookeeper-cluster/data/server3
dataLogDir=/usr/zookeeper-cluster/log/server3
# the port at which the clients will connect
clientPort=22181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=0.0.0.0:2888:3888
server.2=0.0.0.0:12888:13888
server.3=0.0.0.0:22888:23888
Then start the three ZooKeeper instances one by one:
$ /usr/zookeeper-cluster/server1/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/zookeeper-cluster/server1/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
$ /usr/zookeeper-cluster/server2/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/zookeeper-cluster/server2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
$ /usr/zookeeper-cluster/server3/bin/zkServer.sh start
JMX enabled by default
Using config: /usr/zookeeper-cluster/server3/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
After they are up, check each instance's status. Below you can see that server2 has been elected leader, while the other two instances are followers.
$ /usr/zookeeper-cluster/server1/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/zookeeper-cluster/server1/bin/../conf/zoo.cfg
Mode: follower
$ /usr/zookeeper-cluster/server2/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/zookeeper-cluster/server2/bin/../conf/zoo.cfg
Mode: leader
$ /usr/zookeeper-cluster/server3/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/zookeeper-cluster/server3/bin/../conf/zoo.cfg
Mode: follower
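Optionally, a client connection to any member of the ensemble can be tested with the bundled CLI, for example against the second instance on its non-default client port:
$ /usr/zookeeper-cluster/server2/bin/zkCli.sh -server localhost:12181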
Next, modify the Kafka configuration files
kafka_2.8.0-0.8.1.1/config/server.properties and kafka_2.8.0-0.8.1.1/config/server-2.properties,
adding the addresses of all three ZooKeeper instances to zookeeper.connect, as follows:
zookeeper.connect=localhost:2181,localhost:12181,localhost:22181
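A simple check that both broker configuration files picked up the change:
$ grep '^zookeeper.connect=' kafka_2.8.0-0.8.1.1/config/server.properties kafka_2.8.0-0.8.1.1/config/server-2.properties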
Start the two Kafka brokers:
$ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server.properties
$ kafka_2.8.0-0.8.1.1/bin/kafka-server-start.sh kafka_2.8.0-0.8.1.1/config/server-2.properties
Now verify the setup by consuming from each of the three ZooKeeper addresses; the output should be identical in every case.
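The transcript below reads a topic named test3 that already holds two messages; its creation and the production of those messages are not shown in the original log, but they would look roughly like this (the partition and replication settings are assumptions for illustration):
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 1 --topic test3
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test3
fhsjdfhdsa
fjdsljfdsadsfdas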
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test3 --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
fhsjdfhdsa
fjdsljfdsadsfdas
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:12181 --topic test3 --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
fhsjdfhdsa
fjdsljfdsadsfdas
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:22181 --topic test3 --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
fhsjdfhdsa
fjdsljfdsadsfdas
Now simulate the leader going down by simply killing server2's ZooKeeper process:
$ ps aux | grep server2
user 2493 1.0 1.8 1661116 53792 pts/0 Sl 14:46 0:02 java
$ kill 2493
Query the status of each ZooKeeper instance again; the leader has changed:
$ /usr/zookeeper-cluster/server3/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/zookeeper-cluster/server3/bin/../conf/zoo.cfg
Mode: leader
$ /usr/zookeeper-cluster/server1/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/zookeeper-cluster/server1/bin/../conf/zoo.cfg
Mode: follower
$ /usr/zookeeper-cluster/server2/bin/zkServer.sh status
JMX enabled by default
Using config: /usr/zookeeper-cluster/server2/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.
Verify once more: the Kafka cluster still works fine.
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test3 --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
fhsjdfhdsa
fjdsljfdsadsfdas
$ kafka_2.8.0-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper localhost:22181 --topic test3 --from-beginning
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
fhsjdfhdsa
fjdsljfdsadsfdas
Creating new topics through any of the surviving ZooKeeper nodes also still works:
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic test5
Created topic "test5".
$ kafka_2.8.0-0.8.1.1/bin/kafka-topics.sh --create --zookeeper localhost:22181 --replication-factor 2 --partitions 2 --topic test6
Created topic "test6".
Original post: http://blog.csdn.net/wangjia184/article/details/37921183