Kafka Usage Notes

Contents

Preparation

Zookeeper and Kafka

Starting the Services

Creating and Inspecting a Topic

Java Examples

Step 1: Add the POM Dependencies

Step 2: Producer

Step 3: Consumer

Kafka Stream Processing


Note: this article draws on the following references:

二十分钟快速上手Kafka开发(Java示例) - 走看看

java实现kafka消息发送和接收 - 君子笑而不语 - 博客园

Official documentation (English): Apache Kafka

Parameter reference (Chinese): kafka生产者和消费者的具体交互以及核心参数详解 - CSDN

Preparation

Zookeeper and Kafka

Download the Zookeeper archive from "Zookeeper Download" and the Kafka archive from "Kafka Download", then unpack each with tar xzvf xxx.tar.gz.

Starting the Services

Start the Zookeeper service. From the Zookeeper installation directory, run:

bin/zkServer.sh start-foreground

Start the Kafka service. From the Kafka installation directory, run:

bin/kafka-server-start.sh config/server.properties
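
If you would rather not keep two terminals open, both services can run in the background: zkServer.sh start (without start-foreground) daemonizes Zookeeper, and kafka-server-start.sh accepts a -daemon flag:

bin/kafka-server-start.sh -daemon config/server.properties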

Creating and Inspecting a Topic

Run the following command to create a topic named order-events:

bin/kafka-topics.sh --create --topic order-events --bootstrap-server localhost:9092

Inspect the order-events topic:

bin/kafka-topics.sh --describe --topic order-events --bootstrap-server localhost:9092
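
By default the topic is created with the broker's num.partitions setting (1 out of the box). To control partitioning and replication explicitly, the same tool accepts --partitions and --replication-factor; the values here are only illustrative:

bin/kafka-topics.sh --create --topic order-events --partitions 3 --replication-factor 1 --bootstrap-server localhost:9092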

Java Examples

Step 1: Add the POM Dependencies


<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>0.11.0.0</version>
</dependency>
<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-streams</artifactId>
    <version>0.11.0.0</version>
</dependency>

Step 2: Producer

Note that the producer and consumer examples below use a topic named test5; create it the same way as order-events above.

package com.roncoo.example.kafka;

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerDemo {

    private final KafkaProducer<Integer, String> producer;

    public final static String TOPIC = "test5";

    private ProducerDemo() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "xxx:9092,xxx:9092,xxx:9092"); // xxx: broker IPs
        props.put("acks", "all"); // a record counts as "committed" only after all in-sync replicas have acknowledged it
        props.put("retries", 0); // 0 disables retries; a large value retries until the send succeeds (or you notice a problem)
        props.put("batch.size", 16384); // the producer batches records to reduce request count; this is the batch size in bytes
        // once a batch reaches batch.size it is sent immediately, regardless of linger.ms below
        props.put("linger.ms", 1); // instead of sending each record immediately, wait up to 1 ms so more records can join the batch
        props.put("buffer.memory", 33554432); // total memory the producer may use to buffer unsent records
        props.put("key.serializer",
                "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

        producer = new KafkaProducer<>(props);
    }

    public void produce() {
        int messageNo = 1;
        final int COUNT = 5;

        while (messageNo < COUNT) { // sends COUNT - 1 = 4 messages
            String key = String.valueOf(messageNo);
            String data = String.format("hello KafkaProducer message %s from hubo 06291018 ", key);

            try {
                // no key is set, so records are spread round-robin across partitions
                producer.send(new ProducerRecord<>(TOPIC, data));
            } catch (Exception e) {
                e.printStackTrace();
            }

            messageNo++;
        }

        producer.close();
    }

    public static void main(String[] args) {
        new ProducerDemo().produce();
    }
}
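
To verify that the messages arrived, you can read the test5 topic back with the console consumer that ships with Kafka:

bin/kafka-console-consumer.sh --topic test5 --from-beginning --bootstrap-server localhost:9092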

Step 3: Consumer

package com.roncoo.example.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class UserKafkaConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "xxx:9092,xxx:9092,xxx:9092"); // xxx: broker IPs of the cluster
        properties.put("group.id", "jd-group");
        properties.put("enable.auto.commit", "true");
        properties.put("auto.commit.interval.ms", "1000");
        properties.put("auto.offset.reset", "latest");
        properties.put("session.timeout.ms", "30000");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> kafkaConsumer = new KafkaConsumer<>(properties);
        kafkaConsumer.subscribe(Arrays.asList("test5"));
        while (true) {
            ConsumerRecords<String, String> records = kafkaConsumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("-----------------");
                System.out.printf("offset = %d, value = %s", record.offset(), record.value());
                System.out.println();
            }
        }
    }
}
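
The consumer above relies on auto-commit, so offsets may be acknowledged before the records are actually processed. Below is a minimal sketch of the at-least-once alternative with manual commits; it assumes the same test5 topic, and the broker addresses are placeholders just like above:

package com.roncoo.example.kafka;

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "xxx:9092,xxx:9092,xxx:9092"); // placeholder broker IPs
        properties.put("group.id", "jd-group");
        properties.put("enable.auto.commit", "false"); // commit offsets ourselves
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Arrays.asList("test5"));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
                }
                // Commit only after the whole batch has been handled; a crash
                // mid-batch then causes reprocessing rather than lost records.
                if (!records.isEmpty()) {
                    consumer.commitSync();
                }
            }
        } finally {
            consumer.close();
        }
    }
}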

Kafka Stream Processing

Kafka can also serve as a reliable data source that feeds an event stream into a real-time processing component, as the following code shows:

package cc.lovesq.kafkamsg;

import cc.lovesq.model.BookInfo;
import cc.lovesq.util.TimeUtil;
import com.alibaba.fastjson.JSONObject;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Materialized;
import org.apache.kafka.streams.kstream.Printed;
import org.springframework.stereotype.Component;

import javax.annotation.PostConstruct;
import java.util.Properties;

/**
 * @Description Kafka event stream
 * @Date 2021/2/4 8:17 PM
 * @Created by qinshu
 */
@Component
public class KafkaMessageStream {

    private static Log log = LogFactory.getLog(KafkaMessageStream.class);

    @PostConstruct
    public void init() {
        Properties properties = new Properties();
        properties.put(StreamsConfig.APPLICATION_ID_CONFIG, "orderCount");
        properties.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        properties.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        properties.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());

        StreamsBuilder streamBuilder = new StreamsBuilder();
        KStream<String, String> source = streamBuilder.stream("order-events");

        // Count how many times each goodsId appears among the orders
        KStream<String, String> result = source.filter(
                (key, value) -> value.startsWith("{") && value.endsWith("}") // keep only JSON-looking payloads
        ).mapValues(
                value -> JSONObject.parseObject(value, BookInfo.class)
        ).mapValues(
                bookInfo -> bookInfo.getGoods().getGoodsId().toString()
        ).groupBy((key, value) -> value).count(Materialized.as("goods-order-count")
        ).mapValues(value -> Long.toString(value)).toStream();

        result.print(Printed.toSysOut());

        new Thread(
                () -> {
                    TimeUtil.sleepInSecs(10);
                    KafkaStreams streams = new KafkaStreams(streamBuilder.build(), properties);
                    streams.start();
                    log.info("stream-start ...");
                    TimeUtil.sleepInSecs(10);
                    streams.close();
                }
        ).start();
    }
}

A topic named goods-order-count must also be created ahead of time:

bin/kafka-topics.sh --create --topic goods-order-count --bootstrap-server localhost:9092
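
To give the stream something to count, you can publish a few order events to order-events with the console producer. The JSON shape below is only an assumption inferred from the accessor chain in the code (bookInfo.getGoods().getGoodsId()); adjust it to match the real BookInfo model:

bin/kafka-console-producer.sh --topic order-events --bootstrap-server localhost:9092
> {"goods":{"goodsId":1001}}
> {"goods":{"goodsId":1001}}
> {"goods":{"goodsId":2002}}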
