1. Questions about the Streams API
1. Clarified: the difference between the Kafka Streams API and the plain consumer/producer APIs (see the plain-consumer sketch after the link)
https://stackoverflow.com/questions/44014975/kafka-consumer-api-vs-streams-api
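In short: the consumer/producer APIs give you low-level control over polling, offset commits, and partitioning, while Kafka Streams builds on both and adds the DSL, state stores, windowing, joins, and managed fault tolerance. For contrast with the Streams topology under question 4 below, a minimal plain-consumer sketch (broker address, topic, and group name are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class PlainConsumerExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "plain-consumer");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            while (true) {
                // You own the poll loop, commit strategy (auto-commit here),
                // and any rebalance handling; Streams hides all of this.
                for (ConsumerRecord<String, String> r : consumer.poll(Duration.ofMillis(500))) {
                    System.out.println(r.key() + ": " + r.value());
                }
            }
        }
    }
}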
2. How to read from and write to a database inside a Kafka Streams foreach (a sketch follows)
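foreach is just a terminal side effect, so you can open a JDBC connection inside it, but those writes sit outside any Streams delivery guarantee (at-least-once at best; Kafka Connect's JDBC sink is the usual alternative). A minimal sketch, assuming a pooled DataSource and a hypothetical events(k, v) table:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.apache.kafka.streams.kstream.KStream;

public class ForeachJdbcSketch {

    // `dataSource` is an assumption: wire in any JDBC pool (HikariCP etc.);
    // the events(k, v) table is likewise made up for illustration.
    static void attachDbWriter(KStream<String, String> stream, DataSource dataSource) {
        stream.foreach((key, value) -> {
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps =
                         con.prepareStatement("INSERT INTO events(k, v) VALUES (?, ?)")) {
                ps.setString(1, key);
                ps.setString(2, value);
                ps.executeUpdate();
            } catch (SQLException e) {
                // An uncaught exception kills the stream thread; consider
                // logging plus a dead-letter topic instead of rethrowing.
                throw new RuntimeException(e);
            }
        });
    }
}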
3. How to make Kafka Streams consume from a specified partition offset
https://stackoverflow.com/questions/48344981/how-to-always-consume-from-latest-offset-in-kafka-streams?rq=1
The gist of that answer: there is no ready-made API for this, and being pointed at the Kafka command-line tools is awkward for an application. A programmatic workaround is sketched below.
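One workaround (a sketch, not an official Streams feature; broker, topic, partition, and offset values are placeholders): while the Streams application is completely stopped, commit the desired start offsets for its consumer group with a plain KafkaConsumer, since the application.id doubles as the group.id. This is essentially what the kafka-consumer-groups offset-reset tooling does under the hood; it only works while the group has no active members.

import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResetStreamsOffsets {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Must match the Streams application.id, which doubles as the group.id.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "streamProcessor");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("input-topic", 0);
            // Commit offset 42 for partition 0; when the Streams app restarts,
            // its consumer group resumes from the committed offset.
            consumer.commitSync(Collections.singletonMap(tp, new OffsetAndMetadata(42L)));
        }
    }
}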
4. How to get the topic / partition / offset inside Kafka Streams
https://stackoverflow.com/questions/40807346/how-can-i-get-the-offset-value-in-kstream
https://stackoverflow.com/questions/48333706/how-to-get-current-kafka-topic-inside-kafka-stream?rq=1
OK, so reading them works (cleaned-up example below, based on the answers above); the remaining problem is that you still can't start consuming from a chosen position.
// Cleaned-up version of the snippet from the answers above. The Transformer
// re-keys each record with its source topic; inside it, context.partition()
// and context.offset() are available the same way as context.topic().
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Transformer;
import org.apache.kafka.streams.processor.ProcessorContext;

public class TopicDetailsExample {

    public static void main(final String[] args) {
        final Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "streamProcessor");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.STATE_DIR_CONFIG, "state-store");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(StreamsConfig.COMMIT_INTERVAL_MS_CONFIG, 10 * 1000);
        props.put(StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG, 0);

        final List<String> inputTopicList = Collections.singletonList("input-topic");
        final String outputTopic = "output-topic";

        final StreamsBuilder streamsBuilder = new StreamsBuilder();
        final KStream<String, String> textLines = streamsBuilder.stream(inputTopicList);

        // Re-key every record with the topic it came from, then print it.
        textLines.transform(TopicDetailsTransformer::new)
                 .foreach((key, value) -> System.out.println(key + ": " + value));
        textLines.to(outputTopic);

        final KafkaStreams streams = new KafkaStreams(streamsBuilder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }

    private static class TopicDetailsTransformer
            implements Transformer<String, String, KeyValue<String, String>> {

        private ProcessorContext context;

        @Override
        public void init(final ProcessorContext context) {
            this.context = context;
        }

        @Override
        public KeyValue<String, String> transform(final String recordKey, final String recordValue) {
            // Return the source topic name as the new key.
            return KeyValue.pair(context.topic(), recordValue);
        }

        @Override
        public void close() {
            // Not needed.
        }
    }
}
5. How to capture offset limits before re-running a Kafka Streams application for reprocessing, and how to remove the limits afterwards
https://stackoverflow.com/questions/55182923/kafka-streams-how-to-get-offset-limits-prior-to-application-for-reprocessing-an
https://cwiki.apache.org/confluence/display/KAFKA/KIP-95%3A+Incremental+Batch+Processing+for+Kafka+Streams
Like question 3, there is no built-in solution; KIP-95 (linked above) is the relevant proposal. A partial workaround is sketched after the next line.
Overall, Kafka Streams still has a lot of gaps for this kind of requirement; not friendly at all.
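One partial workaround for question 5 (a sketch under assumptions, not a Streams feature; requires a kafka-clients version with the Admin interface, and the topic/partition are placeholders): snapshot the partitions' end offsets with AdminClient before starting the reprocessing run, and have your own processing logic treat them as the upper limit.

import java.util.Collections;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListOffsetsResult;
import org.apache.kafka.clients.admin.OffsetSpec;
import org.apache.kafka.common.TopicPartition;

public class SnapshotOffsetLimit {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            TopicPartition tp = new TopicPartition("input-topic", 0);
            Map<TopicPartition, ListOffsetsResult.ListOffsetsResultInfo> end =
                    admin.listOffsets(Collections.singletonMap(tp, OffsetSpec.latest())).all().get();
            // Records with offset < limit belong to the batch being reprocessed;
            // "removing the limit" is then just your own code ignoring it.
            long limit = end.get(tp).offset();
            System.out.println("offset limit for " + tp + " = " + limit);
        }
    }
}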
6. How Kafka handles expired messages and expired offsets, and the impact on consumers
https://stackoverflow.com/questions/39131465/how-does-an-offset-expire-for-an-apache-kafka-consumer-group
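My reading of the answers: the broker deletes committed offsets of inactive groups after offsets.retention.minutes, and records themselves are deleted per the topic's retention settings; a consumer whose committed offset is gone or out of range falls back to its auto.offset.reset policy. A small sketch (group name is a placeholder) for checking whether a group's offsets are still there:

import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

public class CheckCommittedOffsets {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (Admin admin = Admin.create(props)) {
            // If the broker has expired the group's offsets
            // (offsets.retention.minutes), this map comes back empty and the
            // group's consumers will fall back to auto.offset.reset.
            Map<TopicPartition, OffsetAndMetadata> committed =
                    admin.listConsumerGroupOffsets("streamProcessor")
                         .partitionsToOffsetAndMetadata().get();
            System.out.println("committed offsets: " + committed);
        }
    }
}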
7. A Kafka consumer re-received some messages it had already processed
https://stackoverflow.com/questions/51012225/kafka-consumer-is-getting-few-not-all-old-messages-that-was-already-processed?rq=1
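The usual causes are expired committed offsets (question 6) or a rebalance happening before offsets were committed; with at-least-once delivery some duplicates are expected, so the standard advice is to commit only after processing and keep the processing idempotent. A minimal manual-commit sketch (broker, topic, and group names are placeholders):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ManualCommitConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "manual-commit-demo");
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "false");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> r : records) {
                    System.out.println(r.offset() + ": " + r.value()); // "process"
                }
                // Commit only after processing: duplicates on crash, but no loss.
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        }
    }
}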