Download Kafka from the official website and upload it to your virtual machine. If this is unfamiliar, learn Linux first; the order in which you learn things matters.
Startup mode | Description | Notes |
---|---|---|
ZooKeeper-based startup | Traditional startup mode | Requires a separate ZooKeeper service (default port: 2181) |
KRaft-based startup | Kafka's built-in consensus mechanism | Recommended for Kafka 3.x; gradually replacing ZooKeeper |
⚠ Only one of the two modes can be used; they cannot be used at the same time!
The Kafka download ships with ZooKeeper. If it appears to be missing, the upload may have dropped files; download and upload it again.
cd bin/
# Start ZooKeeper first, then Kafka (run both in the background with &)
./zookeeper-server-start.sh ../config/zookeeper.properties &
./kafka-server-start.sh ../config/server.properties &
# Stop Kafka first, then ZooKeeper
./kafka-server-stop.sh ../config/server.properties
./zookeeper-server-stop.sh ../config/zookeeper.properties
# Check that the processes and listening ports are up
ps -ef | grep zookeeper
netstat -nlpt
cd apache-zookeeper-3.8.3-bin/bin/
./zkServer.sh start
cd apache-zookeeper-3.8.3-bin/conf
cp zoo_sample.cfg zoo.cfg
# Add or modify in zoo.cfg
admin.serverPort=8888 # the default 8080 often conflicts; changing it is recommended
./kafka-storage.sh random-uuid
Or view the UUID of an already-formatted storage directory:
./kafka-storage.sh info -c ../config/server.properties
./kafka-storage.sh format -t <cluster UUID> -c ../config/kraft/server.properties
Note: you can supply a custom UUID here, but the UUID from the previous format is persisted, so you must first delete the files under kraft-combined-logs.
You can check it with:
./kafka-storage.sh info -c ../config/kraft/server.properties
Example:
./kafka-storage.sh format -t MQqryG5apsSGGiJC0tR4ssDg -c ../config/kraft/server.properties
./kafka-server-start.sh ../config/kraft/server.properties &
./kafka-server-stop.sh ../config/kraft/server.properties
docker pull apache/kafka:3.9.0
# Start in the foreground
docker run -p 9092:9092 apache/kafka:3.9.0
# Start in the background (recommended)
docker run -d -p 9092:9092 apache/kafka:3.9.0
If the default port 8080 conflicts, change it to another port (such as 8888) in zoo.cfg.
Topic concept: a topic is similar to a folder; producers write events to a topic, and consumers read events from it.
Common commands:
# Show the script's usage
./kafka-topics.sh
# Create a topic
./kafka-topics.sh --create --topic quickstart-events --bootstrap-server localhost:9092
# List all topics
./kafka-topics.sh --list --bootstrap-server localhost:9092
# Delete a topic
./kafka-topics.sh --delete --topic quickstart-events --bootstrap-server localhost:9092
# Describe a topic in detail
./kafka-topics.sh --describe --topic quickstart-events --bootstrap-server localhost:9092
# Change the number of partitions of a topic
./kafka-topics.sh --alter --topic quickstart-events --partitions 5 --bootstrap-server localhost:9092
# Send messages to a topic with the console producer
./kafka-console-producer.sh --topic quickstart-events --bootstrap-server localhost:9092
Press Ctrl+C to exit.
# Read messages from a topic (including historical messages)
./kafka-console-consumer.sh --topic quickstart-events --from-beginning --bootstrap-server localhost:9092
⚠ Note: "--from-beginning" means reading from the very beginning, including historical messages, not only the latest ones!
Kafka listens on localhost:9092 by default, so external connections fail after starting it with Docker.
# Changes to make in server.properties
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://192.168.11.128:9092
0.0.0.0 is a wildcard address meaning "listen on all IPs".
# Enter the Kafka container
docker exec -it <container_id> /bin/bash
# Locate the configuration file path
cd /etc/kafka/docker
# Copy the configuration file to the host
docker cp <container_id>:/etc/kafka/docker/server.properties ./
# After editing server.properties, run the container with the config directory mounted
docker run -d \
--volume /opt/kafka/docker:/mnt/shared/config \
-p 9092:9092 \
apache/kafka:3.9.0
Tool | Website | Notes |
---|---|---|
Offset Explorer | https://www.kafkatool.com/ | Formerly known as Kafka Tool |
CMAK | https://github.com/yahoo/CMAK | Formerly known as Kafka Manager |
EFAK | https://www.kafka-eagle.org/ | Formerly known as Kafka Eagle; developed in China |
Edit conf/application.conf:
kafka-manager.zkhosts="192.168.11.128:2181"
cmak.zkhosts="192.168.11.128:2181"
cd bin
./cmak -Dconfig.file=../conf/application.conf -java-home /usr/local/jdk-11.0.22
http://192.168.11.128:9000/
Note: CMAK only supports Kafka clusters started with ZooKeeper!
wget https://github.com/smartloli/kafka-eagle-bin/archive/v3.0.1.tar.gz
tar -zxvf kafka-eagle-bin-3.0.1.tar.gz
cd kafka-eagle-bin-3.0.1
tar -zxvf efak-web-3.0.1-bin.tar.gz
cd efak-web-3.0.1
Edit the configuration file (conf/system-config.properties):
cluster1.zk.list=127.0.0.1:2181
efak.driver=com.mysql.cj.jdbc.Driver
efak.url=jdbc:mysql://127.0.0.1:3306/ke?useUnicode=true&characterEncoding=UTF-8&zeroDateTimeBehavior=convertToNull
efak.username=root
efak.password=123456
vi /etc/profile
# Add the following
export KE_HOME=/usr/local/efak-web-3.0.1
export PATH=$KE_HOME/bin:$PATH
source /etc/profile
cd $KE_HOME/bin
./ke.sh start    # start
./ke.sh stop     # stop
./ke.sh status   # check status
URL: http://192.168.11.128:8048/
Account: admin
Password: 123456
It only supports ZooKeeper; it claims to support KRaft, but in practice it does not.
Add the following dependency to pom.xml:
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
Add the Kafka configuration to application.yml:
spring:
  kafka:
    # Kafka connection address (ip + port)
    bootstrap-servers: 192.168.11.128:9092
    # Consumer configuration (there are 24 options; only some are shown here)
    consumer:
      auto-offset-reset: latest
    template:
      default-topic: default-topic
Notes:
- bootstrap-servers: the Kafka connection address.
- consumer.auto-offset-reset: the consumer offset reset policy; latest means consumption starts from the latest offset.
- template.default-topic: the default topic. Topics are created automatically when you send to them, and consuming also creates the topic automatically.
Plain send methods:
public void sendEvent() {
kafkaTemplate.send("quickstart-events", "hello=kafka");
}
Or:
public void sendEvent() {
kafkaTemplate.send("quickstart-events", 0, System.currentTimeMillis(), "k2", "hello=kafka");
}
Building a Message object:
public void sendEvent2() {
Message<String> message = MessageBuilder.withPayload("hello")
.setHeader(KafkaHeaders.TOPIC, "quickstart-events")
.build();
kafkaTemplate.send(message);
}
Sending with headers:
public void sendEvent3() {
Headers headers = new RecordHeaders();
headers.add("phone", "123".getBytes(StandardCharsets.UTF_8));
headers.add("email", "123".getBytes(StandardCharsets.UTF_8));
ProducerRecord<String, String> record = new ProducerRecord<>(
"quickstart-events",
0,
System.currentTimeMillis(),
"k1",
"hello-kafka",
headers
);
kafkaTemplate.send(record); // the record must actually be sent
}
Sending to the default topic:
public void sendEvent5() {
kafkaTemplate.sendDefault(0, System.currentTimeMillis(), "k1", "hello");
}
Difference between kafkaTemplate.send(...) and kafkaTemplate.sendDefault(...):
- send() requires the target topic to be specified explicitly, while sendDefault() uses the default topic from the configuration.
- send() suits scenarios where the topic is chosen dynamically; sendDefault() suits a fixed topic.
Both send() and sendDefault() return a CompletableFuture<SendResult<K, V>>.
Option 1: blocking retrieval:
public void sendEvent6() {
CompletableFuture<SendResult<String, String>> completableFuture = kafkaTemplate.send("qui", "hello");
try {
SendResult<String, String> result = completableFuture.get();
if (result.getRecordMetadata() != null) {
System.out.println(result.getRecordMetadata().topic());
}
} catch (InterruptedException | ExecutionException e) {
throw new RuntimeException(e);
}
}
Option 2: callback retrieval:
public void sendEvent6() {
CompletableFuture<SendResult<String, String>> completableFuture = kafkaTemplate.send("qui", "hello");
completableFuture.thenAccept(result -> {
if (result != null && result.getRecordMetadata() != null) {
System.out.println("✅ 发送成功!");
System.out.println("Topic: " + result.getRecordMetadata().topic());
}
});
completableFuture.exceptionally(ex -> {
System.err.println("❌ 发送失败!");
ex.printStackTrace();
return null;
});
System.out.println(" sendEvent6 方法调用结束(非阻塞发送)");
}
Injection via the @Resource annotation:
@Resource
private KafkaTemplate<String, String> kafkaTemplate;
@Resource
private KafkaTemplate<String, Object> kafkaTemplate2;
Creation via a @Bean method (this is essentially what Spring Boot's auto-configuration does):
@Bean
@ConditionalOnMissingBean(KafkaTemplate.class)
public KafkaTemplate<?, ?> kafkaTemplate(ProducerFactory<Object, Object> kafkaProducerFactory,
ProducerListener<Object, Object> kafkaProducerListener,
ObjectProvider<RecordMessageConverter> messageConverter) {
KafkaTemplate<Object, Object> kafkaTemplate = new KafkaTemplate<>(kafkaProducerFactory);
messageConverter.ifUnique(kafkaTemplate::setMessageConverter);
kafkaTemplate.setProducerListener(kafkaProducerListener);
kafkaTemplate.setDefaultTopic(this.properties.getTemplate().getDefaultTopic());
return kafkaTemplate;
}
Problem: sending an object as the message payload directly causes a serialization exception.
Solutions (a minimal YAML sketch follows this list):
- Use JsonSerializer to serialize the object to a byte array:
  value-serializer: org.springframework.kafka.support.serializer.JsonSerializer
- Use ToStringSerializer to convert the object to a byte array via its toString():
  value-serializer: org.springframework.kafka.support.serializer.ToStringSerializer
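A minimal sketch of the first option, assuming the producer settings live under spring.kafka as in the earlier configuration (the key serializer shown here is an illustrative choice):
spring:
  kafka:
    producer:
      # keys stay as plain strings; values are serialized to JSON
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.springframework.kafka.support.serializer.JsonSerializer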
Option 1: via the command line. Specify partitions and the replication factor when creating the topic with the CLI tool:
./kafka-topics.sh --create --topic myTopic --partitions 3 --replication-factor 1 --bootstrap-server 127.0.0.1:9092
Option 2: in code:
@Configuration
public class KafkaConfig {
@Bean
public NewTopic newTopic() {
return new NewTopic("heTopic", 5, (short) 1);
}
}
Create the producer factory:
public ProducerFactory<String, ?> producerFactory() {
return new DefaultKafkaProducerFactory<>(producerConfigs());
}
Create the KafkaTemplate:
public KafkaTemplate<String, ?> kafkaTemplate() {
return new KafkaTemplate<>(producerFactory());
}
Producer configuration properties:
public Map<String, Object> producerConfigs() {
Map<String, Object> props = new HashMap<>();
props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, keySerializer);
props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, valueSerializer);
props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, RoundRobinPartitioner.class);
return props;
}
Partitioning strategies:
- Default (DefaultPartitioner): with no key, a partition is assigned without regard to a key; with a key, the partition is the key's hash modulo the number of partitions.
- Round-robin: set PARTITIONER_CLASS_CONFIG to RoundRobinPartitioner.class or RoundRobinPartitioner.class.getName().
- Custom: a class implementing the Partitioner interface, e.g. CustomerPartitioner, with the partitioning logic in its partition() method.
Interception principle: implement the ProducerInterceptor interface.
Example interceptor methods (a hedged sketch of such an interceptor class follows the configuration snippet below):
- onSend(ProducerRecord record): called before the message is sent; can modify the message or log it.
- onAcknowledgement(RecordMetadata metadata, Exception exception): called when the message is acknowledged (or when sending fails).
Configuring the interceptor: in the producerConfigs() method, add the interceptor via ProducerConfig.INTERCEPTOR_CLASSES_CONFIG:
props.put(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG, "com.example.kafka.interceptor.CustomerProducerInterceptor");
// equivalently: CustomerProducerInterceptor.class.getName()
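One possible implementation of the CustomerProducerInterceptor referenced above; a minimal sketch, assuming String keys and values:
package com.example.kafka.interceptor;

import org.apache.kafka.clients.producer.ProducerInterceptor;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

import java.util.Map;

public class CustomerProducerInterceptor implements ProducerInterceptor<String, String> {

    @Override
    public ProducerRecord<String, String> onSend(ProducerRecord<String, String> record) {
        // called before the record is sent; log it and return it unchanged (or a modified copy)
        System.out.println("About to send: " + record.key() + " - " + record.value());
        return record;
    }

    @Override
    public void onAcknowledgement(RecordMetadata metadata, Exception exception) {
        // called when the broker acknowledges the record, or when sending fails
        if (exception != null) {
            System.err.println("Send failed: " + exception.getMessage());
        } else {
            System.out.println("Acknowledged: " + metadata.topic() + "-" + metadata.partition() + "@" + metadata.offset());
        }
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}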
Consumers listen for messages on specific topics via the @KafkaListener annotation. Basic usage:
@Component
public class EventConsumer {
@KafkaListener(topics = {"quickstart-events"}, concurrency = "3", groupId = "hello-group")
public void onEvent(String message) {
System.out.println("Received: " + message);
}
}
@Payload and @Header:
- @Payload: the annotated parameter receives the message body.
- @Header: receives a value from the message headers.
Example code:
@KafkaListener(topics = {"quickstart-events"}, groupId = "hello-group")
public void onEvent(@Payload String message,
@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
@Header(KafkaHeaders.RECEIVED_PARTITION) String partition) {
System.out.println("读取到的" + message + ", topic: " + topic + ", partition: " + partition);
}
A ConsumerRecord parameter gives access to all of the record's information; adding @Payload to it does not cause an error, but it is not idiomatic.
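For instance, a listener that takes only the ConsumerRecord; a minimal sketch reusing the earlier topic and group:
@KafkaListener(topics = {"quickstart-events"}, groupId = "hello-group")
public void onEvent(ConsumerRecord<String, String> record) {
    // topic(), partition(), offset(), key(), value() and headers() are all available here
    System.out.println("Received record: " + record.topic() + "-" + record.partition()
            + "@" + record.offset() + " = " + record.value());
}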
When handling object messages you may run into deserialization failures, surfacing as a JsonMappingException complaining that the class is not in the trusted packages.
Solution: send the object as a JSON string and convert it back on the consumer side:
String userJSON = JSONUtils.toJSON(user);
User user = JSONUtils.toBean(userJSON, User.class);
The JSONUtils helper class:
package com.bjpowernode.util;
import com.fasterxml.jackson.core.JsonProcessingException;
import com.fasterxml.jackson.databind.ObjectMapper;
public class JSONUtils {
private static final ObjectMapper OBJECTMAPPER = new ObjectMapper();
public static String toJSON(Object object) {
try {
return OBJECTMAPPER.writeValueAsString(object);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
public static <T> T toBean(String json, Class<T> clazz) {
try {
return OBJECTMAPPER.readValue(json, clazz);
} catch (JsonProcessingException e) {
throw new RuntimeException(e);
}
}
}
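A minimal sketch tying JSONUtils to the send/receive flow; the User class, the topic name "user-topic", and the group id "user-group" are illustrative assumptions, and imports are omitted as in the surrounding examples:
@Component
public class UserEventHandler {

    @Resource
    private KafkaTemplate<String, String> kafkaTemplate;

    // producer side: serialize the object to a JSON string before sending
    public void sendUser(User user) {
        kafkaTemplate.send("user-topic", JSONUtils.toJSON(user));
    }

    // consumer side: receive the JSON string and convert it back into an object
    @KafkaListener(topics = {"user-topic"}, groupId = "user-group")
    public void onUserEvent(String userJSON) {
        User user = JSONUtils.toBean(userJSON, User.class);
        System.out.println("Received user: " + user);
    }
}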
The listened topic and group name can be configured dynamically via placeholders defined in the YML file:
kafka:
  topic:
    name: helloTopic
  consumer:
    name: helloGroup
@Component
public class EventConsumer {
@KafkaListener(topics = {"${kafka.topic.name}"}, groupId = "${kafka.consumer.name}")
public void onEvent(String message) {
System.out.println("读取到的" + message);
}
}
By default, messages are acknowledged automatically. To acknowledge messages manually, configure manual ack mode and call ack.acknowledge() in the code:
Configuration:
spring:
  kafka:
    listener:
      ack-mode: manual
Code example:
@KafkaListener(topics = {"quickstart-events"}, groupId = "hello-group")
public void onEvent(@Payload String message,
@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
@Header(KafkaHeaders.RECEIVED_PARTITION) String partition,
ConsumerRecord<String,String> record,
Acknowledgment ack) {
System.out.println("读取到的" + message + ", topic: " + topic + ", partition: " + partition);
System.out.println("读取到的" + record.toString());
ack.acknowledge(); // 手动确认消息
}
You can listen to specific partitions of a topic and also set initial offsets:
@KafkaListener(groupId = "${kafka.consumer.name}",
topicPartitions = {
@TopicPartition(
topic = "${kafka.topic.name}",
partitions={"0","1","2"},
partitionOffsets = {
@PartitionOffset(partition = "3",initialOffset = "2"),
@PartitionOffset(partition = "4",initialOffset = "2")
}
)
})
public void onEvent(@Payload String message,
@Header(KafkaHeaders.RECEIVED_TOPIC) String topic,
@Header(KafkaHeaders.RECEIVED_PARTITION) String partition,
ConsumerRecord<String,String> record,
Acknowledgment ack) {
System.out.println("读取到的" + message + ", topic: " + topic + ", partition: " + partition);
System.out.println("读取到的" + record.toString());
ack.acknowledge();
}
Batch consumption requires setting the listener type to batch, configuring max-poll-records, and changing the listener method signature:
spring:
  kafka:
    listener:
      type: batch
    consumer:
      max-poll-records: 20
@Component
public class EventConsumer {
@KafkaListener(topics = {"${kafka.topic.name}"}, groupId = "${kafka.consumer.name}")
public void onEvent(List<ConsumerRecord<String,String>> records) {
for (ConsumerRecord<String, String> record : records) {
System.out.println("读取到的" + record.value());
}
}
}
A consumer interceptor implements Kafka's ConsumerInterceptor interface, which provides the following methods:
- onConsume(ConsumerRecords records): runs before the records are handed to the consumer; can pre-process the messages.
- onCommit(Map offsets): runs after the consumer commits offsets; useful for logging or other bookkeeping.
- close(): called when the interceptor is closed.
- configure(Map configs): called when the interceptor is initialized, mainly to read configuration.
We mainly care about onConsume and onCommit; close and configure can be implemented as needed.
Example code:
package com.example.kafka.interceptor;
import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import java.util.Map;
public class CustomConsumerInterceptor implements ConsumerInterceptor<String, String> {
@Override
public ConsumerRecords<String, String> onConsume(ConsumerRecords<String, String> records) {
System.out.println("Intercepting consumed records...");
for (ConsumerRecord<String, String> record : records) {
System.out.println("Consumed Record: " + record.key() + " - " + record.value());
}
return records; // return the records (modified or unchanged)
}
@Override
public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
System.out.println("Committing offsets: " + offsets);
}
@Override
public void close() {
System.out.println("Closing interceptor...");
}
@Override
public void configure(Map<String, ?> configs) {
System.out.println("Configuring interceptor with configs: " + configs);
}
}
The consumer interceptor must be registered through the consumer configuration: add it to the consumer properties used to build the ConsumerFactory.
props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, CustomConsumerInterceptor.class.getName());
Full configuration example:
import com.example.kafka.interceptor.CustomConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import java.util.HashMap;
import java.util.Map;
@Configuration
public class KafkaConfig {
private String bootstrapServers = "localhost:9092";
@Bean
public ConsumerFactory<String, String> consumerFactory() {
Map<String, Object> props = new HashMap<>();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "test-group");
// register the custom interceptor
props.put(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG, CustomConsumerInterceptor.class.getName());
return new DefaultKafkaConsumerFactory<>(props);
}
}
Spring Boot injects a default consumer factory and Kafka listener container factory (built from the yml file), but they do not include the custom interceptor, so these default beans must be overridden manually.
If you do not use @Configuration and @Bean, the default consumer factory is still injected, but it will not contain the interceptor; you must therefore define and override the consumer factory explicitly.
Likewise, the default Kafka listener container factory must be overridden so that it uses our custom consumer factory.
Full code:
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
@Configuration
public class KafkaListenerConfig {
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
ConsumerFactory<String, String> consumerFactory) {
ConcurrentKafkaListenerContainerFactory<String, String> factory =
new ConcurrentKafkaListenerContainerFactory<>();
factory.setConsumerFactory(consumerFactory); // 使用自定义消费者工厂
return factory;
}
}
After the application starts, you can check which consumer factories and Kafka listener container factories are actually registered through the ApplicationContext.
Example code:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.context.ApplicationContext;
import org.springframework.kafka.config.KafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import java.util.Map;
@SpringBootApplication
public class KafkaBaseApplication {
public static void main(String[] args) {
ApplicationContext context = SpringApplication.run(KafkaBaseApplication.class, args);
// 获取所有类型为ConsumerFactory的bean
Map<String, ConsumerFactory> consumerFactories = context.getBeansOfType(ConsumerFactory.class);
consumerFactories.forEach((k, v) -> {
System.out.println("ConsumerFactory Bean Name: " + k + ", Bean Instance: " + v);
});
// 获取所有类型为KafkaListenerContainerFactory的bean
Map<String, KafkaListenerContainerFactory> listenerFactories = context.getBeansOfType(KafkaListenerContainerFactory.class);
listenerFactories.forEach((k, v) -> {
System.out.println("KafkaListenerContainerFactory Bean Name: " + k + ", Bean Instance: " + v);
});
}
}
In the event consumer, explicitly specify the custom Kafka listener container factory.
Example code:
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Component;
import java.util.List;
@Component
public class EventConsumer {
@KafkaListener(topics = "${kafka.topic.name}", groupId = "${kafka.consumer.group-id}",
containerFactory = "kafkaListenerContainerFactory")
public void onEvent(List<ConsumerRecord<String, String>> records) {
for (ConsumerRecord<String, String> record : records) {
System.out.println("Received message: Key = " + record.key() + ", Value = " + record.value());
}
}
}
In summary:
- Implement the ConsumerInterceptor interface and override onConsume and onCommit.
- In the ConsumerFactory configuration, add the interceptor class name to INTERCEPTOR_CLASSES_CONFIG.
- Use the containerFactory attribute to point the listener at the custom Kafka listener container factory.
Message forwarding is implemented with the @SendTo annotation:
@Component
public class EventConsumer {
@KafkaListener(topics = {"at"}, groupId = "ag", containerFactory = "ourKafkaListenerContainerFactory")
@SendTo(value = "bt") // the returned value is forwarded to topic "bt"
public String onEvent(ConsumerRecord<String,String> record) {
System.out.println("Received: " + record.value());
return record.value();
}
}
Another consumer listens for the forwarded messages:
@Component
public class EventConsumer {
@KafkaListener(topics = {"bt"}, groupId = "bg", containerFactory = "ourKafkaListenerContainerFactory")
public void onEvent(ConsumerRecord<String,String> record) {
System.out.println("读取到的" + record.value());
}
}
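Note: for @SendTo forwarding, the listener container factory generally needs a reply template; a hedged sketch extending the container factory bean shown earlier (the bean name ourKafkaListenerContainerFactory matches the listeners above):
@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> ourKafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory,
        KafkaTemplate<String, String> kafkaTemplate) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    factory.setReplyTemplate(kafkaTemplate); // used to send the returned value on to topic "bt"
    return factory;
}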
Kafka provides several partition assignment strategies for distributing a topic's partitions evenly across the consumers in a consumer group. The default strategy is RangeAssignor, but other built-in strategies or a custom one can be used.
RangeAssignor
Principle: partitions are assigned in contiguous ranges based on the number of partitions and the number of consumers.
Example:
- Topic myTopic has 10 partitions (p0 - p9).
- Consumers: consumer1, consumer2, consumer3.
Assignment:
- consumer1 gets 4 partitions (p0-p3).
- consumer2 and consumer3 each get 3 partitions (p4-p6 and p7-p9).
Characteristics: the earlier consumers can carry a slightly heavier load (consumer1 here).
Other strategies (a hedged configuration sketch follows this list):
- RoundRobinAssignor: assigns partitions one by one across consumers in round-robin order.
- StickyAssignor: aims for a balanced assignment while keeping as many existing assignments as possible across rebalances.
- CooperativeStickyAssignor: similar to StickyAssignor, but supports cooperative (incremental) rebalancing, allowing consumers to notify the coordinator before leaving the group.
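A minimal sketch of selecting a non-default strategy through the consumer properties, mirroring the consumerFactory() style used earlier (bootstrap address and group id are illustrative):
// hedged sketch: choose RoundRobinAssignor as the consumer partition assignment strategy
public Map<String, Object> consumerConfigsWithAssignor() {
    Map<String, Object> props = new HashMap<>();
    props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative address
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroup");                 // illustrative group id
    props.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
            RoundRobinAssignor.class.getName());                          // org.apache.kafka.clients.consumer.RoundRobinAssignor
    return props;
}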
If you need more flexible logic for choosing which partition a record is written to, you can implement a custom (producer-side) Partitioner class. For example:
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;
import org.apache.kafka.common.utils.Utils;
import java.util.List;
import java.util.Map;
import java.util.concurrent.ThreadLocalRandom;
public class CustomPartitioner implements Partitioner {
@Override
public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
int numPartitions = partitions.size();
if (keyBytes == null) {
return ThreadLocalRandom.current().nextInt(numPartitions);
} else {
return Math.abs(Utils.murmur2(keyBytes)) % numPartitions;
}
}
@Override
public void close() {}
@Override
public void configure(Map<String, ?> configs) {}
}
Then specify this partitioner in the producer configuration:
props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, CustomPartitioner.class.getName());
Producer offset:
- Each record appended to a partition's log is assigned a monotonically increasing offset; the producer can read it from the RecordMetadata returned by a send.
Consumer offset:
- Consumer offsets are stored in the internal topic __consumer_offsets, which has 50 partitions by default.
- The partition that holds a group's offsets is derived from the group id:
String groupid = "myGroup";
int partition = Math.abs(groupid.hashCode()) % 50; // assume the result is 39
So the offset information for myGroup is stored in partition 39 of __consumer_offsets.
- Offsets can be committed manually with commitSync() or commitAsync() (a minimal sketch follows).
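A minimal sketch of manual commits with the plain Kafka consumer client (outside Spring); the topic, group id, and address are illustrative:
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class ManualCommitExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "127.0.0.1:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "myGroup");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false"); // disable auto-commit
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("quickstart-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                records.forEach(r -> System.out.println(r.offset() + ": " + r.value()));
                consumer.commitSync(); // or consumer.commitAsync() for a non-blocking commit
            }
        }
    }
}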
ZooKeeper plugin limitations:
Command-line tool: use kafka-consumer-groups.sh to view a consumer group's offset information:
kafka-consumer-groups.sh --bootstrap-server 127.0.0.1:9092 --group myGroup --describe
By default, Kafka stores its message logs under the /tmp/kafka-logs directory. To change the log storage path, edit server.properties:
cd /usr/local/kafka_2.13-3.7.0/config
vim server.properties
Find log.dirs=/tmp/kafka-logs and change it to the new path.
Files inside a partition's log directory:
- 00000000000000000000.index
- 00000000000000000000.log
- 00000000000000000000.timeindex
- 00000000000000000006.snapshot, used for failure recovery.
- leader-epoch-checkpoint, which records the partition leader's epochs and their start offsets.
- partition.metadata, which stores metadata for that specific partition.
Summary:
- Kafka provides multiple partition assignment strategies (RangeAssignor, RoundRobinAssignor, StickyAssignor, etc.); choose one according to your needs.
- Consumer offsets are stored in the __consumer_offsets topic, not in the partition's log files themselves.
- kafka-consumer-groups.sh can be used to view a consumer group's offset information.
- Log data is stored per partition in segment files, which makes it easier to manage and query.