Kafka 1.1.0: implementing a producer and consumer with the Java API

一、Environment setup:

1、Add the Kafka dependencies to the Maven project:

    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka_2.12</artifactId>
        <version>1.1.0</version>
    </dependency>
    <dependency>
        <groupId>org.apache.kafka</groupId>
        <artifactId>kafka-clients</artifactId>
        <version>1.1.0</version>
    </dependency>
2、Create a KafkaProperties class:
public class KafkaProperties {
    // broker address
    public static final String broker = "192.168.80.140:9092";
    // topic name
    public static final String topic = "kafka_api";
}
3、Check that the local machine can telnet to the Kafka node (e.g. telnet 192.168.80.140 9092); if the connection fails, disable the firewall on the broker host. A quick connectivity check in Java is sketched below.
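If telnet is not available on the client machine, the same check can be done with a plain TCP connection from Java. This is only a minimal sketch and not part of the original setup; the class name BrokerPortCheck is made up, and the broker address is the one used throughout this post.

import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class BrokerPortCheck {
    public static void main(String[] args) {
        // Same effect as "telnet 192.168.80.140 9092": try to open a TCP connection to the broker port.
        try (Socket socket = new Socket()) {
            socket.connect(new InetSocketAddress("192.168.80.140", 9092), 3000);
            System.out.println("broker port is reachable");
        } catch (IOException e) {
            System.out.println("cannot reach broker port, check the firewall: " + e.getMessage());
        }
    }
}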

4、Edit the server.properties configuration file on the Kafka broker:

The hostname and port configured here are the ones advertised to producers and consumers. If advertised.listeners is not set, the listeners value is used; if listeners is not configured either, Kafka falls back to java.net.InetAddress.getCanonicalHostName() for the hostname and port, which essentially means localhost.
"PLAINTEXT" is the protocol; the possible values are PLAINTEXT and SSL. The hostname can be an IP address, or "0.0.0.0" to listen on all network interfaces; an empty hostname means only the default interface is used.
If advertised.listeners is not configured, the listeners value is advertised to producers and consumers instead; this exchange happens when producers and consumers fetch metadata (see the sketch after the config lines below).
1) listeners=PLAINTEXT://192.168.80.140:9092
2) advertised.listeners=PLAINTEXT://192.168.80.140:9092 (since 0.9.x this replaces advertised.host.name and advertised.host.port)
3) zookeeper.connect=192.168.80.140:2181
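To see which address the broker actually advertises, the AdminClient from kafka-clients can fetch the cluster metadata. The sketch below is only an illustration, not part of the original post; the class name AdvertisedListenerCheck is made up, and the host/port it prints is whatever advertised.listeners (or, failing that, listeners) resolves to.

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.Node;
import java.util.Properties;

public class AdvertisedListenerCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.80.140:9092");
        try (AdminClient adminClient = AdminClient.create(props)) {
            // The host/port printed here is what the broker returns in its metadata,
            // i.e. the advertised.listeners (or listeners) value configured above.
            for (Node node : adminClient.describeCluster().nodes().get()) {
                System.out.println("broker " + node.idString() + " -> " + node.host() + ":" + node.port());
            }
        }
    }
}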


二、Java API

1、Implementing the producer (Producer)


import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class MyProducer extends Thread {
    private final String topic;
    private final KafkaProducer<String, String> producer;

    public MyProducer(String topic) {
        this.topic = topic;

        Properties properties = new Properties();
        properties.put("bootstrap.servers", KafkaProperties.broker);
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producer = new KafkaProducer<>(properties);
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true) {
            // Send one message every 200 ms.
            String message = "message" + messageNo;
            System.out.println("send = " + message);
            producer.send(new ProducerRecord<>(topic, message));
            messageNo++;

            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }

    public static void main(String[] args) {
        new MyProducer(KafkaProperties.topic).start();
    }
}

Producer console output:

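Beyond the fire-and-forget send() used above, the producer API also accepts a callback that reports the partition and offset assigned by the broker. The following is a minimal, self-contained sketch rather than part of the original example; the class name CallbackProducerDemo is made up, and it reuses the KafkaProperties class from step 2.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import java.util.Properties;

public class CallbackProducerDemo {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", KafkaProperties.broker);
        properties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        properties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        KafkaProducer<String, String> producer = new KafkaProducer<>(properties);
        // Send one record and print the partition/offset reported by the broker.
        producer.send(new ProducerRecord<>(KafkaProperties.topic, "hello with callback"),
                (metadata, exception) -> {
                    if (exception != null) {
                        exception.printStackTrace();            // delivery failed
                    } else {
                        System.out.println("partition = " + metadata.partition()
                                + ", offset = " + metadata.offset());
                    }
                });
        producer.close();  // flush pending records and release resources
    }
}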

2、Implementing the consumer (Consumer)


import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;

public class MyConsumer extends Thread {
    private final KafkaConsumer<String, String> consumer;

    public MyConsumer() {
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", KafkaProperties.broker);
        properties.setProperty("group.id", "test");
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Arrays.asList(KafkaProperties.topic));
    }

    @Override
    public void run() {
        while (true) {
            // Poll the broker for new records, waiting up to 100 ms per poll.
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("offset = " + record.offset() + ", value = " + record.value());
            }
        }
    }

    public static void main(String[] args) {
        new MyConsumer().start();
    }
}

Consumer console output:

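The consumer above relies on the default enable.auto.commit=true. If offsets should only be committed after a batch has actually been processed, auto-commit can be switched off and commitSync() called manually. This is a minimal sketch and not part of the original example; the class name ManualCommitConsumerDemo is made up, and it reuses the KafkaProperties class from step 2.

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;

public class ManualCommitConsumerDemo {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.setProperty("bootstrap.servers", KafkaProperties.broker);
        properties.setProperty("group.id", "test");
        properties.setProperty("enable.auto.commit", "false");  // disable automatic offset commits
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(properties);
        consumer.subscribe(Arrays.asList(KafkaProperties.topic));
        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(100);
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("offset = " + record.offset() + ", value = " + record.value());
                }
                consumer.commitSync();  // commit offsets only after the batch has been processed
            }
        } finally {
            consumer.close();
        }
    }
}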

