Kafka cluster:
broker 0 (10.11.32.76 hadooptest76.bj)
broker 1 (10.11.32.77 hadooptest77.bj)
broker 2 (10.11.32.81 hadooptest81.bj)
Task: decommission broker 2 without interrupting the business and without losing data.
1. Prerequisite: Kafka must be started with JMX_PORT=9999 (pick any free port)
In the ZooKeeper client, run get /kafka/brokers/ids/[0,1,2] (a single digit, i.e. your broker id). If the jmx_port value in the returned registration is -1, the broker was started without JMX_PORT. In that case the steps below are unnecessary: simply restart the cluster with the setting added, and also set auto.create.topics.enable=false in server.properties.
auto.create.topics.enable (default: true)
Whether topics may be created automatically. When set to true, producing to, consuming from, or fetching metadata for a non-existent topic creates it automatically with the default replication factor and partition count. Keeping it disabled during the migration prevents a stray client from recreating a topic on the broker being removed.
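The jmx_port check above boils down to reading one field of the broker's registration znode. A minimal Python sketch; the sample registration string is an assumption modeled on the 0.8.x znode format, not output captured from this cluster:

```python
import json

# Hypothetical broker registration as stored under /kafka/brokers/ids/2
# (0.8.x stores a small JSON document like this in the znode).
registration = '{"jmx_port":-1,"host":"hadooptest81.bj","version":1,"port":9092}'

def jmx_enabled(znode_data):
    """Return True if the broker registered a real JMX port (anything but -1)."""
    return json.loads(znode_data).get("jmx_port", -1) != -1

if not jmx_enabled(registration):
    print("broker was started without JMX_PORT; restart it with JMX_PORT set")
```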
Configuring the JMX service
Method 1:
(Some articles online recommend this, but it throws an error; do not do it this way.)
The Kafka server does not open a JMX port by default; the user has to configure it.
vim bin/kafka-run-class.sh
# add one line at the very top
JMX_PORT=8060
(
Do not add it this way, otherwise you will get:
Error: Exception thrown by the agent : java.rmi.server.ExportException: Port already in use: 8060; nested exception is:
java.net.BindException: Address already in use (Bind failed)
Every JVM launched through kafka-run-class.sh, including the command-line tools, would try to bind the same port.
)
Method 2 (recommended):
Prefix the Kafka start command with JMX_PORT:
JMX_PORT=9999 /usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-server-start.sh
I use a custom start script:
cat start-kafka-all.sh
#!/bin/bash
# Start Kafka on every host listed in iplist ("ip hostname" pairs, one per line).
while read i1 i2
do
ssh -T $i1 << TT
hostname
JMX_PORT=9999 /usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-server-start.sh /usr/local/kafka_2.9.2-0.8.1.1/config/server.properties >/usr/local/kafka_2.9.2-0.8.1.1/kafka.out &
TT
done < iplist
Contents of the iplist file:
cat iplist
10.11.32.76 hadooptest76.bj
10.11.32.77 hadooptest77.bj
10.11.32.81 hadooptest81.bj
2. Moving topics
Create a topic:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --create --topic test01 --partitions 6 --replication-factor 2 --zookeeper 10.11.32.76:2181
View it:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --list --zookeeper 10.11.32.76:2181
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper 10.11.32.76:2181 --topic test01
Producer:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-console-producer.sh --broker-list 10.11.32.76:9092 --topic test01
Consumer:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-console-consumer.sh --zookeeper 10.11.32.76:2181 --topic test01 --from-beginning
Create the topics-to-move.json file:
[root@hadooptest76 mv_topic]# cat topics-to-move.json
{"topics": [
{"topic": "test01"},
{"topic": "test02"},
{"topic": "test03"}
],
"version":1
}
# test01, test02 and test03 are the topic names
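The file above is just a small JSON document; a minimal Python sketch that generates it from a topic list (same topics and file name as above):

```python
import json

topics = ["test01", "test02", "test03"]

# Structure expected by kafka-reassign-partitions.sh --topics-to-move-json-file.
payload = {"topics": [{"topic": t} for t in topics], "version": 1}

with open("topics-to-move.json", "w") as f:
    json.dump(payload, f)

print(json.dumps(payload))
```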
Generate the reassignment plan:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-reassign-partitions.sh --zookeeper 10.11.32.76:2181 --topics-to-move-json-file topics-to-move.json --broker-list "0,1" --generate
This prints two JSON blobs: "Current partition replica assignment" and "Proposed partition reassignment configuration".
The JSON under "Proposed partition reassignment configuration" is the plan that redistributes the partitions onto brokers 0 and 1.
Save that proposed JSON into a file named reassignment-node.json (the file name does not matter, and it does not have to end in .json, as long as the content is valid JSON), then execute the reassign plan.
For example:
[root@hadooptest76 mv_topic]# /usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-reassign-partitions.sh --zookeeper 10.11.32.76:2181 --topics-to-move-json-file topics-to-move.json --broker-list "0,1" --generate
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test01","partition":2,"replicas":[2,0]},{"topic":"test01","partition":3,"replicas":[0,2]},{"topic":"test01","partition":5,"replicas":[2,1]},{"topic":"test02","partition":5,"replicas":[1,0]},{"topic":"test03","partition":1,"replicas":[2,0]},{"topic":"test01","partition":4,"replicas":[1,0]},{"topic":"test01","partition":0,"replicas":[0,1]},{"topic":"test02","partition":0,"replicas":[0,1]},{"topic":"test02","partition":4,"replicas":[0,1]},{"topic":"test02","partition":3,"replicas":[1,0]},{"topic":"test03","partition":2,"replicas":[0,1]},{"topic":"test03","partition":5,"replicas":[0,2]},{"topic":"test02","partition":2,"replicas":[0,1]},{"topic":"test01","partition":1,"replicas":[1,2]},{"topic":"test03","partition":4,"replicas":[2,1]},{"topic":"test03","partition":0,"replicas":[1,2]},{"topic":"test02","partition":1,"replicas":[1,0]},{"topic":"test03","partition":3,"replicas":[1,0]}]}
Proposed partition reassignment configuration
{"version":1,"partitions":[{"topic":"test01","partition":2,"replicas":[0,1]},{"topic":"test01","partition":3,"replicas":[1,0]},{"topic":"test01","partition":5,"replicas":[1,0]},{"topic":"test02","partition":5,"replicas":[0,1]},{"topic":"test03","partition":1,"replicas":[1,0]},{"topic":"test01","partition":4,"replicas":[0,1]},{"topic":"test01","partition":0,"replicas":[0,1]},{"topic":"test02","partition":0,"replicas":[1,0]},{"topic":"test02","partition":4,"replicas":[1,0]},{"topic":"test02","partition":3,"replicas":[0,1]},{"topic":"test03","partition":5,"replicas":[1,0]},{"topic":"test03","partition":2,"replicas":[0,1]},{"topic":"test02","partition":2,"replicas":[1,0]},{"topic":"test03","partition":4,"replicas":[0,1]},{"topic":"test01","partition":1,"replicas":[1,0]},{"topic":"test03","partition":0,"replicas":[0,1]},{"topic":"test02","partition":1,"replicas":[0,1]},{"topic":"test03","partition":3,"replicas":[1,0]}]}
Create the reassignment-node.json file:
[root@hadooptest76 mv_topic]# vi reassignment-node.json
{"version":1,"partitions":[{"topic":"test01","partition":2,"replicas":[0,1]},{"topic":"test01","partition":3,"replicas":[1,0]},{"topic":"test01","partition":5,"replicas":[1,0]},{"topic":"test02","partition":5,"replicas":[0,1]},{"topic":"test03","partition":1,"replicas":[1,0]},{"topic":"test01","partition":4,"replicas":[0,1]},{"topic":"test01","partition":0,"replicas":[0,1]},{"topic":"test02","partition":0,"replicas":[1,0]},{"topic":"test02","partition":4,"replicas":[1,0]},{"topic":"test02","partition":3,"replicas":[0,1]},{"topic":"test03","partition":5,"replicas":[1,0]},{"topic":"test03","partition":2,"replicas":[0,1]},{"topic":"test02","partition":2,"replicas":[1,0]},{"topic":"test03","partition":4,"replicas":[0,1]},{"topic":"test01","partition":1,"replicas":[1,0]},{"topic":"test03","partition":0,"replicas":[0,1]},{"topic":"test02","partition":1,"replicas":[0,1]},{"topic":"test03","partition":3,"replicas":[1,0]}]}
Now everything is in place; start the migration:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-reassign-partitions.sh --zookeeper 10.11.32.76:2181 --reassignment-json-file reassignment-node.json --execute
If there are many topics this can take a while. Wait for the reassignment to complete, then verify that all topics have been moved onto the designated brokers.
Check the topic again after a while:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test01
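The --describe output prints one line per partition, so confirming the migration amounts to checking that broker 2 no longer appears in any Replicas column. A sketch of that check; the sample lines below mirror the 0.8.x output format, which is an assumption here, not output captured from this cluster:

```python
# Sample lines as printed by kafka-topics.sh --describe (format assumed from 0.8.x).
describe_output = (
    "Topic:test01\tPartition: 0\tLeader: 0\tReplicas: 0,1\tIsr: 0,1\n"
    "Topic:test01\tPartition: 1\tLeader: 1\tReplicas: 1,0\tIsr: 1,0\n"
)

def brokers_in_use(output):
    """Collect every broker id referenced by a Replicas: column."""
    brokers = set()
    for line in output.splitlines():
        for field in line.split("\t"):
            if field.startswith("Replicas:"):
                ids = field.split(":", 1)[1]
                brokers.update(int(b) for b in ids.split(","))
    return brokers

print(brokers_in_use(describe_output))  # broker 2 should be absent
```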
3. Take the broker node offline (this step is mandatory, even once the topics have been rebalanced)
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-run-class.sh kafka.admin.ShutdownBroker --zookeeper 10.11.32.76:2181,10.11.32.77:2181,10.11.32.81:2181/kafka --broker 2 --num.retries 3 --retry.interval.ms 600
4. Re-elect leaders and rebalance
After the node has gone offline, run a rebalance:
/usr/local/kafka_2.9.2-0.8.1.1/bin/kafka-preferred-replica-election.sh --zookeeper 10.11.32.76:2181
Once it succeeds, describe the topics again and you will see that Leader and Replicas are back to normal.
References:
Kafka cluster expansion and partition redistribution
https://www.iteblog.com/archives/1611.html
Official wiki
https://cwiki.apache.org/confluence/display/KAFKA/Replication+tools#Replicationtools-1.ControlledShutdown
Practical notes on Kafka migration
http://www.cnblogs.com/dycg/p/3922352.html
Decommissioning a broker in Apache Kafka
http://blog.csdn.net/lizhitao/article/details/42266327
Smooth rolling restart of a Kafka cluster
http://blog.csdn.net/clarencezi/article/details/42271037
Kafka study notes, part 4: common Kafka commands
http://blog.csdn.net/code52/article/details/50935849
Kafka server deployment and configuration tuning
http://blog.csdn.net/lizhitao/article/details/42180265
Kafka scale-out
http://zhouxinyu1991.blog.51cto.com/6095086/1876616