./kafka-topics.sh --zookeeper localhost:2181 --alter --partitions 3 --topic test
WARNING: If partitions are increased for a topic that has a key, the partition logic or ordering of the messages will be affected
Adding partitions succeeded!
Note: the partition count can only be increased, never decreased; attempting to decrease it fails with "The number of partitions for a topic can only be increased".
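For example, a hypothetical attempt to shrink the topic back to a single partition would be rejected with exactly that error:
./kafka-topics.sh --zookeeper localhost:2181 --alter --partitions 1 --topic test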
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
Topic:test PartitionCount:3 ReplicationFactor:1 Configs:
Topic: test Partition: 0 Leader: 1 Replicas: 1 Isr: 1
Topic: test Partition: 1 Leader: 2 Replicas: 2 Isr: 2
Topic: test Partition: 2 Leader: 0 Replicas: 0 Isr: 0
You can see that the topic test now has three partitions. Partition 0 has a single replica, 1; the leader of that replica set is broker 1, and partition 0's Isr (In-Sync Replicas, the replicas that stay sufficiently in sync with the leader) is also 1.
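As a related check, kafka-topics.sh (at least in the versions I have used) can also list only the partitions whose Isr has fallen behind the replica set:
./kafka-topics.sh --zookeeper localhost:2181 --describe --under-replicated-partitions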
Create a new file containing JSON in the following format:
{
  "version": 1,
  "partitions": [
    {
      "topic": "test",
      "partition": 0,
      "replicas": [1, 2, 0]
    },
    {
      "topic": "test",
      "partition": 1,
      "replicas": [2, 0, 1]
    },
    {
      "topic": "test",
      "partition": 2,
      "replicas": [0, 1, 2]
    }
  ]
}
This assigns three replicas to each of the three partitions; the values 0, 1 and 2 are the broker.id values of the brokers that will hold them.
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --topics-to-move-json-file ../config/topic-reassignment.json --broker-list "0,1,2" --generate
The --broker-list parameter is required to specify the broker ids in the cluster. In my own tests, though, this did not seem to work well: it never wrote out the JSON data, and I was not sure why.
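A likely explanation (my guess, based on the Kafka versions I have used): --generate does not read the reassignment file shown above but a separate "topics to move" file that only lists topic names, e.g.:
{
  "version": 1,
  "topics": [
    { "topic": "test" }
  ]
}
With such a file, --generate prints a proposed reassignment JSON that can be saved and passed to --execute; here the reassignment file was written by hand instead, so the next step goes straight to --execute.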
Note: before running the command, make sure every partition referenced in the configuration already exists.
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file ../config/topic-reassignment.json --execute
Current partition replica assignment
{"version":1,"partitions":[{"topic":"test","partition":2,"replicas":[0,1,2]},{"topic":"test","partition":1,"replicas":[2,1,0]},{"topic":"test","partition":0,"replicas":[1,2,0]}]}
Save this to use as the --reassignment-json-file option during rollback
Successfully started reassignment of partitions.
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file ../config/topic-reassignment.json --verify
Status of partition reassignment:
Reassignment of partition [test,0] completed successfully
Reassignment of partition [test,1] completed successfully
Reassignment of partition [test,2] completed successfully
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
Topic:test PartitionCount:3 ReplicationFactor:3 Configs:
Topic: test Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,0,2
Topic: test Partition: 1 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
Topic: test Partition: 2 Leader: 0 Replicas: 0,1,2 Isr: 0,2,1
As you can see, partition 0 now has three replicas, 1, 2 and 0; its Isr set is 1,0,2 and its leader is broker 1.
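In this output each partition's leader already matches the first replica in its list. If leadership ends up skewed after a reassignment, ZooKeeper-based Kafka releases also ship a helper that moves leadership back to the preferred (first-listed) replica; a minimal invocation would be:
./kafka-preferred-replica-election.sh --zookeeper localhost:2181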
The cause of the error I ran into here was that the JSON configuration file I wrote earlier was malformed, and I executed it anyway. That created the /admin/reassign_partitions node in ZooKeeper; because the mistaken reassignment never completed, the leftover node blocked every subsequent reassignment. The only option was to log into ZooKeeper from the command line and delete the node directly:
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper, admin, consumers, config, controller, brokers, controller_epoch]
[zk: localhost:2181(CONNECTED) 1] ls /admin/reassign_partitions
[]
[zk: localhost:2181(CONNECTED) 2] get /admin/reassign_partitions
{"version":1,"partitions":[]}
cZxid = 0xd00008216
ctime = Mon Oct 26 14:47:30 CST 2015
mZxid = 0xd00008216
mtime = Mon Oct 26 14:47:30 CST 2015
pZxid = 0xd00008216
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x0
dataLength = 29
numChildren = 0
[zk: localhost:2181(CONNECTED) 3] rmr /admin/reassign_partitions
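Note that on newer ZooKeeper releases (3.5 and later) the rmr command has been replaced by deleteall, so the equivalent cleanup there would be:
deleteall /admin/reassign_partitions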
After that, rerunning the expansion script completed without errors.
How to use the scripts: place the following two scripts in Kafka's bin directory. The first script takes the ZooKeeper address as its only argument.
#!/bin/bash
# For each topic: record single-replica topics in ../topic.txt and raise any
# partition count below 3 up to 3.
zookeeper=$1
topics=`./kafka-topics.sh --zookeeper $zookeeper --list`
for i in $topics;do
    # ReplicationFactor and PartitionCount are parsed from the topic summary line
    replicsNum=`./kafka-topics.sh --zookeeper $zookeeper --describe --topic $i|grep ReplicationFactor|awk '{print $3}'|awk -F: '{print $2}'`
    PartitionCount=`./kafka-topics.sh --zookeeper $zookeeper --describe --topic $i|grep PartitionCount|awk '{print $2}'|awk -F: '{print $2}'`
    echo "topic: $i"
    echo "replics: $replicsNum"
    echo "partitions: $PartitionCount"
    # Single-replica topics are queued for the reassignment script below
    if [ "$replicsNum" == 1 ];then
        echo $i >> ../topic.txt
    fi
    # Topics with fewer than 3 partitions are expanded to 3
    if [ "$PartitionCount" -lt 3 ];then
        ./kafka-topics.sh --zookeeper $zookeeper --alter --partitions 3 --topic $i
    fi
done
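Assuming the script above is saved in bin as, say, check_topics.sh (the file name is arbitrary), it is run with the ZooKeeper address as its only argument; it appends single-replica topic names to ../topic.txt and expands any topic with fewer than three partitions:
./check_topics.sh localhost:2181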
You first need to create a leaders.txt file in the Kafka installation directory (one level above bin) containing the cluster's broker ids:
[root@VM_0_17_centos bin]# cat ../leaders.txt
0
1
2
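If you are unsure which broker ids are registered, they can be read from ZooKeeper with the zookeeper-shell.sh script that ships in the same bin directory, for example:
./zookeeper-shell.sh localhost:2181 ls /brokers/ids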
The second script likewise takes the ZooKeeper address as its input parameter.
#!/bin/bash
# Build ../config/topic-reassignment.json: for every topic listed in ../topic.txt,
# keep the current leader as the first replica and add the remaining brokers from
# ../leaders.txt as followers.
ZK_HOST=$1
topics=`cat ../topic.txt`
IFS=$'\n'
echo '{"version":1,"partitions":[' > ../config/topic-reassignment.json
for i in $topics;do
    echo "write file for $i"
    leaders=`./kafka-topics.sh --zookeeper $ZK_HOST --describe --topic $i|grep Leader`
    for leader in $leaders;do
        partition=`echo $leader |awk '{print $4}'`
        leader=`echo $leader |awk '{print $6}'`
        # To pick the follower at random from the remaining brokers (still excluding
        # the broker holding the leader), use instead:
        # follower=`grep -vxF $leader ../leaders.txt | shuf -n1`
        follower=`grep -vxF $leader ../leaders.txt`
        follower1=`echo $follower | awk '{print $1}'`
        follower2=`echo $follower | awk '{print $2}'`
        echo '{"topic":"'$i'","partition":'$partition',"replicas":['$leader','$follower1','$follower2']},' >> ../config/topic-reassignment.json
    done
done
# Strip the trailing comma from the last entry so the generated JSON is valid
sed -i '$ s/,$//' ../config/topic-reassignment.json
echo ']}' >> ../config/topic-reassignment.json
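Assuming this generator is saved in bin as, say, build_reassignment.sh (hypothetical name), it takes the same ZooKeeper address and writes ../config/topic-reassignment.json:
./build_reassignment.sh localhost:2181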
Finally, run kafka-reassign-partitions.sh to expand the replicas:
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file ../config/topic-reassignment.json --execute