Parameter | Description |
---|---|
replica.lag.time.max.ms | If a Follower in the ISR has not sent a fetch request to the Leader or synced data for longer than this threshold, it is kicked out of the ISR. Default 30s. |
auto.leader.rebalance.enable | Default true. Automatic Leader Partition balancing. |
leader.imbalance.per.broker.percentage | Default 10%. The ratio of imbalanced leaders allowed per broker. If a broker exceeds this value, the controller triggers leader rebalancing. |
leader.imbalance.check.interval.seconds | Default 300 seconds. The interval at which leader balance is checked. |
log.segment.bytes | Kafka stores its log in chunks (segments); this sets the size of each chunk. Default 1G. |
log.index.interval.bytes | Default 4kb. Every time 4kb of log data (.log) has been written, Kafka records one entry in the index file. |
log.retention.hours | How long Kafka retains data, in hours. Default 7 days. |
log.retention.minutes | Data retention time at minute granularity. Off by default. |
log.retention.ms | Data retention time at millisecond granularity. Off by default. |
log.retention.check.interval.ms | How often to check whether data has outlived its retention time. Default 5 minutes. |
log.retention.bytes | Default -1, meaning unlimited. Once the total log size exceeds this value, the earliest segment is deleted. |
log.cleanup.policy | Default delete, applying the deletion policy to all data; if set to compact, the compaction policy applies to all data. |
num.io.threads | Default 8. The number of threads that write to disk. Should take about 50% of the total core count. |
num.replica.fetchers | The number of replica fetcher threads. Should take about 1/3 of 50% of the total core count. |
log.flush.interval.messages | The number of messages after which the page cache is forcibly flushed to disk. Default is the maximum long value, 9223372036854775807. Generally best left alone and managed by the OS. |
log.flush.interval.ms | How often data is flushed to disk. Default null. Generally best left alone and managed by the OS. |
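For reference, a minimal sketch of how a few of these could be overridden in server.properties; the values shown just restate the defaults from the table and are not tuning advice:
# Excerpt from server.properties (illustrative only; values are the defaults)
replica.lag.time.max.ms=30000
auto.leader.rebalance.enable=true
leader.imbalance.per.broker.percentage=10
leader.imbalance.check.interval.seconds=300
log.segment.bytes=1073741824
log.retention.hours=168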
However, we notice that after the new node comes online, the topics created earlier still live only on the original nodes, so we draw up a list of topics to rebalance.
[atguigu@hadoop102 kafka]$ vim topics-to-move.json
// Paste the following content
{
"topics": [
{"topic": "first"}
],
"version": 1
}
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2,3" --generate
Current partition replica assignment
{"version":1,
"partitions":[
{"topic":"first","partition":0,"replicas":[0,2,1],"log_dirs":["any","any","any"]},
{"topic":"first","partition":1,"replicas":[2,1,0],"log_dirs":["any","any","any"]},
{"topic":"first","partition":2,"replicas":[1,0,2],"log_dirs":["any","any","any"]}
]
}
Proposed partition reassignment configuration
{"version":1,
"partitions":[
{"topic":"first","partition":0,"replicas":[2,3,0],"log_dirs":["any","any","any"]},
{"topic":"first","partition":1,"replicas":[3,0,1],"log_dirs":["any","any","any"]},
{"topic":"first","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]}
]
}
[atguigu@hadoop102 kafka]$ vim increase-replication-factor.json
// Paste the following content
{"version":1,"partitions":[
{"topic":"first","partition":0,"replicas":[2,3,0],"log_dirs":["any","any","any"]},
{"topic":"first","partition":1,"replicas":[3,0,1],"log_dirs":["any","any","any"]},
{"topic":"first","partition":2,"replicas":[0,1,2],"log_dirs":["any","any","any"]}
]
}
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --execute
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --verify
Status of partition reassignment:
Reassignment of partition first-0 is complete.
Reassignment of partition first-1 is complete.
Reassignment of partition first-2 is complete.
Clearing broker-level throttles on brokers 0,1,2,3
Clearing topic-level throttles on topic first
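In production, a large reassignment can saturate the network between brokers. kafka-reassign-partitions.sh accepts a --throttle option (in bytes/sec) to cap replication traffic during the move; the 50 MB/s limit below is only an example value:
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --execute --throttle 50000000
Running --verify afterwards lifts the throttle again, as the "Clearing broker-level throttles" line above shows.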
To decommission node4 (the node hosting broker 3), the operations for retiring the old node are as follows:
[atguigu@hadoop102 kafka]$ vim topics-to-move.json
{
"topics": [
{"topic": "first"}
],
"version": 1
}
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --topics-to-move-json-file topics-to-move.json --broker-list "0,1,2" --generate
Current partition replica assignment
{"version":1,"partitions":[
{"topic":"first","partition":0,"replicas":[2,0,1],"log_dirs":["any","any","any"]},
{"topic":"first","partition":1,"replicas":[3,1,2],"log_dirs":["any","any","any"]},
{"topic":"first","partition":2,"replicas":[0,2,3],"log_dirs":["any","any","any"]}
]
}
Proposed partition reassignment configuration
{"version":1,"partitions":[
{"topic":"first","partition":0,"replicas":[2,0,1],"log_dirs":["any","any","any"]},
{"topic":"first","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"first","partition":2,"replicas":[1,2,0],"log_dirs":["any","any","any"]}
]
}
[atguigu@hadoop102 kafka]$ vim increase-replication-factor.json
{"version":1,"partitions":[
{"topic":"first","partition":0,"replicas":[2,0,1],"log_dirs":["any","any","any"]},
{"topic":"first","partition":1,"replicas":[0,1,2],"log_dirs":["any","any","any"]},
{"topic":"first","partition":2,"replicas":[1,2,0],"log_dirs":["any","any","any"]}
]
}
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --execute
[atguigu@hadoop102 kafka]$ bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --verify
Status of partition reassignment:
Reassignment of partition first-0 is complete.
Reassignment of partition first-1 is complete.
Reassignment of partition first-2 is complete.
Clearing broker-level throttles on brokers 0,1,2,3
Clearing topic-level throttles on topic first
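Before stopping the broker, it is worth confirming that broker 3 no longer appears in any Replicas or Isr column; a quick check with the describe command used earlier:
[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic first
If broker 3 is gone from the output, it is safe to shut it down.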
Finally, stop the broker on node4:
bin/kafka-server-stop.sh
All replicas of a partition are collectively called the AR (Assigned Replicas). The ISR is the subset of replicas that stay in sync with the Leader; a Follower that falls behind for longer than the threshold set by the replica.lag.time.max.ms parameter (default 30s) is removed from the ISR. When the Leader fails, a new Leader is elected from the ISR.
In a Kafka cluster, the Controller of one broker is elected as the Controller Leader, which is responsible for managing broker registration and removal, partition replica assignment for all topics, Leader election, and similar work.
The Controller keeps its information in sync through Zookeeper.
The Leader election flow is as follows:
Follower failure:
Leader failure:
Note: this only guarantees data consistency between replicas; it does not guarantee that data is never lost or never duplicated.
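The current Leader and ISR of every partition can be inspected at any time with the describe command used throughout this section, which is a convenient way to watch ISR membership change:
[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic first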
If the Kafka cluster has only 4 broker nodes and a topic is created with more partitions than there are servers, how does Kafka distribute the replica storage underneath?
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --partitions 16 --replication-factor 3 --topic second
[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic second
Topic: second Partition: 0 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
Topic: second Partition: 1 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: second Partition: 2 Leader: 2 Replicas: 2,3,0 Isr: 2,3,0
Topic: second Partition: 3 Leader: 3 Replicas: 3,0,1 Isr: 3,0,1
Topic: second Partition: 4 Leader: 0 Replicas: 0,2,3 Isr: 0,2,3
Topic: second Partition: 5 Leader: 1 Replicas: 1,3,0 Isr: 1,3,0
Topic: second Partition: 6 Leader: 2 Replicas: 2,0,1 Isr: 2,0,1
Topic: second Partition: 7 Leader: 3 Replicas: 3,1,2 Isr: 3,1,2
Topic: second Partition: 8 Leader: 0 Replicas: 0,3,1 Isr: 0,3,1
Topic: second Partition: 9 Leader: 1 Replicas: 1,0,2 Isr: 1,0,2
Topic: second Partition: 10 Leader: 2 Replicas: 2,1,3 Isr: 2,1,3
Topic: second Partition: 11 Leader: 3 Replicas: 3,2,0 Isr: 3,2,0
Topic: second Partition: 12 Leader: 0 Replicas: 0,1,2 Isr: 0,1,2
Topic: second Partition: 13 Leader: 1 Replicas: 1,2,3 Isr: 1,2,3
Topic: second Partition: 14 Leader: 2 Replicas: 2,3,0 Isr: 2,3,0
Topic: second Partition: 15 Leader: 3 Replicas: 3,0,1 Isr: 3,0,1
Replicas are assigned this way for load balancing, so that every node ends up holding an even share of replicas.
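To see how evenly the leaders came out, you can count leaders per broker from the describe output; a small shell sketch (the counts shown are what the assignment above would produce):
[atguigu@hadoop102 kafka]$ bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic second | grep -o "Leader: [0-9]*" | sort | uniq -c
      4 Leader: 0
      4 Leader: 1
      4 Leader: 2
      4 Leader: 3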
In production, servers differ in configuration and performance, but Kafka creates partition replicas purely by its own code rules, which can leave individual servers under much heavier storage pressure. So replica placement sometimes needs to be adjusted by hand.
Requirement: create a new topic named three with 4 partitions and 2 replicas, and store all of its replicas on the two servers broker0 and broker1.
The steps for manually adjusting replica placement are as follows:
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --partitions 4 --replication-factor 2 --topic three
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic three
vim increase-replication-factor.json
// Paste the following content
{
"version":1,
"partitions":[
{"topic":"three","partition":0,"replicas":[0,1]},
{"topic":"three","partition":1,"replicas":[0,1]},
{"topic":"three","partition":2,"replicas":[1,0]},
{"topic":"three","partition":3,"replicas":[1,0]}
]
}
bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --execute
bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --verify
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic three
Normally, Kafka itself spreads Leader Partitions evenly across the machines, so that every machine sees a uniform read/write throughput. But if some brokers go down, the Leader Partitions become overly concentrated on the few brokers still running, putting their read/write request load far too high; once the failed brokers restart they hold only follower partitions and see very little traffic, leaving the cluster load unbalanced.
Three parameters control this behavior:
- auto.leader.rebalance.enable, default true: automatic Leader Partition balancing.
- leader.imbalance.per.broker.percentage, default 10%: the ratio of imbalanced leaders allowed per broker. If a broker exceeds this value, the controller triggers leader rebalancing.
- leader.imbalance.check.interval.seconds, default 300s: the interval at which leader balance is checked.
Suppose the cluster has only one topic, laid out as in the figure below:
Take broker0: the preferred replica of partition 2 (the first replica in its AR) is node 0, yet node 0 is not that partition's Leader, so the imbalance count is 1. Broker0 appears in the AR of 4 partitions in total, so its imbalance ratio is 1/4 > 10% and rebalancing is needed.
broker2 and broker3 have the same imbalance ratio as broker0 and also need rebalancing; broker1's imbalance count is 0, so it needs none.
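Besides the automatic mechanism, a preferred-replica leader election can also be triggered by hand. A hedged sketch using the kafka-leader-election.sh tool shipped with recent Kafka versions, electing the preferred leader for every partition at once:
[atguigu@hadoop102 kafka]$ bin/kafka-leader-election.sh --bootstrap-server hadoop102:9092 --election-type preferred --all-topic-partitions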
Parameter | Description |
---|---|
auto.leader.rebalance.enable | Default true. Automatic Leader Partition balancing. In production, leader re-election is fairly costly and can hurt performance, so setting this to false to turn it off is recommended. |
leader.imbalance.per.broker.percentage | Default 10%. The ratio of imbalanced leaders allowed per broker. If a broker exceeds this value, the controller triggers leader rebalancing. |
leader.imbalance.check.interval.seconds | Default 300 seconds. The interval at which leader balance is checked. |
In production, a topic's importance may rise to the point where we consider increasing its replica count. Increasing the replica count requires drafting a plan first and then executing against that plan.
1) Create the topic
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --partitions 3 --replication-factor 1 --topic four
2) Manually increase the replica placement
(1) Create a replica placement plan (all replicas are placed on broker0, broker1 and broker2).
vim increase-replication-factor.json
// Paste the following content
{"version":1,"partitions":[
{"topic":"four","partition":0,"replicas":[0,1,2]},
{"topic":"four","partition":1,"replicas":[0,1,2]},
{"topic":"four","partition":2,"replicas":[0,1,2]}
]
}
(2) Execute the replica placement plan
bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --execute
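(3) As with the earlier reassignments, the plan can then be checked with --verify and the final layout confirmed with --describe:
bin/kafka-reassign-partitions.sh --bootstrap-server hadoop102:9092 --reassignment-json-file increase-replication-factor.json --verify
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --describe --topic four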
Topic is a logical concept, while partition is a physical one: each partition corresponds to one log file, and that log file stores the data produced by the Producer. Produced data is continuously appended to the end of the log file. To keep an oversized log file from slowing down data lookup, Kafka uses a segmentation-and-index scheme that splits each partition into multiple segments.
Each segment consists of an ".index" file, a ".log" file and a ".timeindex" file. These files sit in one directory, named after the topic name plus the partition number, e.g. first-0.
Where exactly is topic data stored on disk?
bin/kafka-console-producer.sh --bootstrap-server hadoop102:9092 --topic first
>hello world
[atguigu@hadoop104 first-1]$ ls
00000000000000000092.index
00000000000000000092.log
00000000000000000092.snapshot
00000000000000000092.timeindex
leader-epoch-checkpoint
partition.metadata
[atguigu@hadoop104 first-1]$ cat 00000000000000000092.log
\CYnF|©|©ÿ"hello world
The log file is stored in Kafka's binary format, so cat prints mostly unreadable bytes; Kafka ships a tool for viewing it instead:
[atguigu@hadoop104 first-1]$ kafka-run-class.sh kafka.tools.DumpLogSegments --files ./00000000000000000000.index
Dumping ./00000000000000000000.index
offset: 3 position: 152
[atguigu@hadoop104 first-1]$ kafka-run-class.sh kafka.tools.DumpLogSegments --files ./00000000000000000000.log
Dumping datas/first-0/00000000000000000000.log
Starting offset: 0
baseOffset: 0 lastOffset: 1 count: 2 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 0 CreateTime: 1636338440962 size: 75 magic: 2 compresscodec: none crc: 2745337109 isvalid: true
baseOffset: 2 lastOffset: 2 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 75 CreateTime: 1636351749089 size: 77 magic: 2 compresscodec: none crc: 273943004 isvalid: true
baseOffset: 3 lastOffset: 3 count: 1 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 152 CreateTime: 1636351749119 size: 77 magic: 2 compresscodec: none crc: 106207379 isvalid: true
baseOffset: 4 lastOffset: 8 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 229 CreateTime: 1636353061435 size: 141 magic: 2 compresscodec: none crc: 157376877 isvalid: true
baseOffset: 9 lastOffset: 13 count: 5 baseSequence: -1 lastSequence: -1 producerId: -1 producerEpoch: -1 partitionLeaderEpoch: 0 isTransactional: false isControl: false position: 370 CreateTime: 1636353204051 size: 146 magic: 2 compresscodec: none crc: 4058582827 isvalid: true
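The dump above prints only batch-level metadata. To also see the message payloads, DumpLogSegments accepts a --print-data-log flag; a sketch (output omitted):
[atguigu@hadoop104 first-1]$ kafka-run-class.sh kafka.tools.DumpLogSegments --print-data-log --files ./00000000000000000000.log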
The .index and .log files explained:
Note: log storage parameter configuration
Parameter | Description |
---|---|
log.segment.bytes | Kafka stores its log in chunks (segments); this sets the size of each chunk. Default 1G. |
log.index.interval.bytes | Default 4kb. Every time 4kb of log data (.log) has been written, Kafka records one entry in the index file. This yields a sparse index. |
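A quick sanity check on what "sparse" means here: at one index entry per 4 KiB of log, a full 1 GiB segment holds at most about 262,144 entries, so the index stays tiny relative to the log:
$ echo $((1024 * 1024 * 1024 / 4096))
262144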
Kafka keeps log data for 7 days by default; the retention time can be changed with the following parameters:
- log.retention.hours: hours, the lowest priority; default 7 days.
- log.retention.minutes: minutes, higher priority than hours.
- log.retention.ms: milliseconds, the highest priority.
- log.retention.check.interval.ms: how often expiry is checked; default 5 minutes.
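Retention can also be overridden per topic rather than broker-wide, using kafka-configs.sh; a hedged sketch setting first to a 3-day retention (the value 259200000 ms is only illustrative):
bin/kafka-configs.sh --bootstrap-server hadoop102:9092 --alter --entity-type topics --entity-name first --add-config retention.ms=259200000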
So what happens to a log once it has outlived the configured retention time? Kafka offers two log cleanup policies: delete and compact.
delete (log deletion): remove expired data.
log.cleanup.policy = delete applies the deletion policy to all data.
compact (log compaction): for different values with the same key, keep only the latest version.
log.cleanup.policy = compact applies the compaction policy to all data. After compaction, the offsets may no longer be contiguous; for example, offset 6 is missing in the figure above. A consumer that asks for one of these missing offsets gets the message at the next higher offset that still exists, so asking for offset 6 actually returns the message at offset 7, and consumption proceeds from there.
This policy only suits special scenarios. For instance, if the message key is a user id and the value is the user's profile, compaction leaves the message set holding the latest profile of every user.
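A hedged example of enabling compaction when a topic is created (the topic name user-profiles here is hypothetical):
bin/kafka-topics.sh --bootstrap-server hadoop102:9092 --create --partitions 1 --replication-factor 1 --topic user-profiles --config cleanup.policy=compact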
Kafka's producer writes its data into the log file by always appending at the end of the file, i.e. sequential writes.
Figures on the official site show that on the same disk, sequential writes can reach 600 MB/s while random writes manage only 100 KB/s. This comes down to the mechanical structure of the disk: sequential writes are fast because they save the large amount of time otherwise spent seeking with the disk head.
Zero copy: Kafka hands all data transformation work to the Kafka producers and consumers. The Kafka Broker's application layer does not care about the stored data itself, so transfers need not pass through the application layer, which keeps them highly efficient.
PageCache: Kafka relies heavily on the PageCache provided by the underlying operating system. When the upper layer issues a write, the OS merely puts the data into the PageCache; when a read occurs, the PageCache is searched first, and only on a miss does the read go to disk. In effect, the PageCache turns as much free memory as possible into a disk cache.
Parameter | Description |
---|---|
log.flush.interval.messages | The number of messages after which the page cache is forcibly flushed to disk. Default is the maximum long value, 9223372036854775807. Generally best left alone and managed by the OS. |
log.flush.interval.ms | How often data is flushed to disk. Default null. Generally best left alone and managed by the OS. |