Sending and receiving messages with Kafka's Java client

Kafka producer (sending side):

package com.zwz.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class KafkaProducerDemo extends Thread {

    private final KafkaProducer<Integer, String> producer;

    private final String topic;

    public KafkaProducerDemo(String topic) {

        Properties properties = new Properties();
        properties.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.159.138:9092");  // Kafka broker host and port
        properties.put(ProducerConfig.CLIENT_ID_CONFIG, "KafkaProducerDemo");  // logical client id, shown in broker logs and metrics
        properties.put(ProducerConfig.ACKS_CONFIG, "-1");  // wait for all in-sync replicas to acknowledge (see the acks discussion below)
        properties.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerSerializer");
        properties.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer");

        producer = new KafkaProducer<>(properties);
        this.topic = topic;

    }


    @Override
    public void run() {

        int num = 0;
        while (num < 50) {

            String message = "message_" + num;
            System.out.println("begin send message: " + message);
            // No key is given, so records are spread across partitions by the producer.
            producer.send(new ProducerRecord<>(topic, message));
            num++;
            try {

                Thread.sleep(1000);

            } catch (InterruptedException e) {
                e.printStackTrace();
            }

        }

    }


    public static void main(String[] args) {

        new KafkaProducerDemo("test").start();

    }

}


Kafka consumer (receiving side):

package com.zwz.test;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class KafkaConsumerDemo extends Thread {

    private final KafkaConsumer<Integer, String> kafkaConsumer;

    public KafkaConsumerDemo(String topic) {

        Properties prop = new Properties();
        prop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.159.138:9092");  // Kafka broker host and port
        prop.put(ConsumerConfig.GROUP_ID_CONFIG, "KafkaConsumerDemo");  // consumer group id
        prop.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");  // commit offsets automatically in the background
        prop.put(ConsumerConfig.AUTO_COMMIT_INTERVAL_MS_CONFIG, "1000");  // auto-commit interval in milliseconds
        prop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.IntegerDeserializer");  // key deserializer
        prop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringDeserializer");  // value deserializer
        prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");  // with no committed offset, start from the earliest message

        kafkaConsumer = new KafkaConsumer<>(prop);
        kafkaConsumer.subscribe(Collections.singletonList(topic));

    }


    @Override
    public void run() {

        while (true) {

            // poll(long) is deprecated since kafka-clients 2.0; newer code uses poll(Duration.ofMillis(1000)).
            ConsumerRecords<Integer, String> records = kafkaConsumer.poll(1000);
            for (ConsumerRecord<Integer, String> record : records) {
                System.out.println("message receive: " + record.value());
            }

        }

    }


    public static void main(String[] args) {

        new KafkaConsumerDemo("test").start();

    }

}


On the producer side, ProducerConfig.ACKS_CONFIG (acks), set to "-1" above, controls how many acknowledgments the producer requires before a send is considered successful:

acks=0: the producer does not wait for any acknowledgment from the broker. This gives the highest throughput, but messages can be lost without the producer ever noticing.

acks=1: the producer waits only for the partition leader in the Kafka cluster to confirm the write; the followers replicate it asynchronously. If the leader fails before replication completes, the message is lost.

acks=all (-1): the producer waits until every replica in the ISR (the in-sync replica set of the partition, not every node in the cluster) has confirmed the write. This is the safest setting, yet data can still be lost if the ISR has shrunk to the leader alone (guard against this with min.insync.replicas). A callback sketch showing how the producer observes these acknowledgments follows below.
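To make the acknowledgment visible in code, send() accepts a callback that fires once the configured acks requirement is met or fails. The following is a minimal sketch assuming the same broker address and topic as the demos above; the class name AcksCallbackDemo is made up for illustration:

package com.zwz.test;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class AcksCallbackDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.159.138:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all");  // equivalent to "-1": wait for every ISR replica
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.IntegerSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<Integer, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "hello"), (metadata, exception) -> {
                if (exception != null) {
                    exception.printStackTrace();  // the broker never confirmed the write
                } else {
                    System.out.println("acked: partition=" + metadata.partition()
                            + ", offset=" + metadata.offset());
                }
            });
        }  // try-with-resources closes the producer, flushing pending records first
    }
}

If the broker cannot satisfy acks=all (for example, too few in-sync replicas when min.insync.replicas is set), the callback's exception argument is non-null instead of the send failing silently.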


AUTO_OFFSET_RESET_CONFIG (auto.offset.reset)

This setting only takes effect when the group id has no committed offset yet, e.g. a brand-new group id:

earliest: start from the earliest message still retained in the topic; the offset is reset to the beginning of the log.

latest: start from the end of the log, so the consumer only receives messages produced after it subscribes.

none: do not reset at all; the consumer throws an exception if no committed offset exists for its group.

Once a group has committed offsets, consumption resumes from those offsets and this setting is ignored. A small replay sketch follows below.
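A quick way to see auto.offset.reset in action is to read the demo topic with a fresh group id and earliest, which replays the topic from the beginning. This is a minimal sketch assuming the same broker and topic as above; the group id "replay-demo" and class name OffsetResetDemo are placeholders:

package com.zwz.test;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.util.Collections;
import java.util.Properties;

public class OffsetResetDemo {

    public static void main(String[] args) {
        Properties prop = new Properties();
        prop.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "192.168.159.138:9092");
        // A group id that has never committed offsets, so auto.offset.reset applies.
        prop.put(ConsumerConfig.GROUP_ID_CONFIG, "replay-demo");
        prop.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");  // replay from the beginning
        prop.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, "true");     // offsets get committed as we consume
        prop.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.IntegerDeserializer");
        prop.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<Integer, String> consumer = new KafkaConsumer<>(prop)) {
            consumer.subscribe(Collections.singletonList("test"));
            ConsumerRecords<Integer, String> records = consumer.poll(5000);
            for (ConsumerRecord<Integer, String> record : records) {
                System.out.println("offset=" + record.offset() + ", value=" + record.value());
            }
        }  // close() performs a final offset commit when auto-commit is enabled
    }
}

Running it a second time with the same group id starts from the committed offset instead, because the group now has offsets and the reset policy no longer applies.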

