Re-consuming data in Kafka

There are two ways to re-consume data in Kafka:
  • the low-level API (SimpleConsumer)
  • AUTO_OFFSET_RESET_CONFIG (high-level API)
Approach 1: low-level API
import java.nio.ByteBuffer;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

import kafka.api.FetchRequest;
import kafka.api.FetchRequestBuilder;
import kafka.javaapi.FetchResponse;
import kafka.javaapi.PartitionMetadata;
import kafka.javaapi.TopicMetadata;
import kafka.javaapi.TopicMetadataRequest;
import kafka.javaapi.TopicMetadataResponse;
import kafka.javaapi.consumer.SimpleConsumer;
import kafka.javaapi.message.ByteBufferMessageSet;
import kafka.message.Message;
import kafka.message.MessageAndOffset;

public class MylowerConsumer {
    public static void main(String[] args) {
        //1. broker nodes
        ArrayList<String> list = new ArrayList<>();
        list.add("hadoop102");
        list.add("hadoop103");
        list.add("hadoop105");
        //2. topic
        String topic = "first";
        //3. partition
        int partition = 0;
        //4. offset: with the low-level API, re-consumption is done by setting the offset explicitly;
        //   every fetch starts reading from this position
        long offset = 0;

        //1. find the leader of the partition
        String leader = getLeader(list, topic, partition);
        //2. fetch the data
        getData(leader, topic, partition, offset);
    }

    private static void getData(String leader, String topic, int partition, long offset) {
        //1. create a SimpleConsumer connected to the partition leader
        SimpleConsumer consumer = new SimpleConsumer(leader,
                9092,
                1000,
                1024 * 1024,
                "getData");
        //2. build the fetch request, starting at the given offset
        FetchRequestBuilder builder = new FetchRequestBuilder();
        FetchRequest request = builder.addFetch(topic, partition, offset, 1024 * 1024).build();
        //3. send the request and get the response
        FetchResponse response = consumer.fetch(request);
        //4. extract the message set for this topic/partition from the response
        ByteBufferMessageSet messageAndOffsets = response.messageSet(topic, partition);
        //5. iterate over the messages
        for (MessageAndOffset messageAndOffset : messageAndOffsets) {
            //the offset of this message
            long newOffset = messageAndOffset.offset();
            //the message payload
            Message message = messageAndOffset.message();
            ByteBuffer byteBuffer = message.payload();
            byte[] bytes = new byte[byteBuffer.limit()];
            byteBuffer.get(bytes);
            System.out.println("value:" + new String(bytes) + " offset:" + newOffset);
        }
        consumer.close();
    }

    private static String getLeader(ArrayList<String> list, String topic, int partition) {
        //1. try each broker until one answers with metadata
        for (String host : list) {
            SimpleConsumer consumer = new SimpleConsumer(host,
                    9092,
                    1000,
                    1024 * 1024,
                    "getLeader");
            //2. build the metadata request for the topic
            TopicMetadataRequest request = new TopicMetadataRequest(Arrays.asList(topic));
            //3. send the request and get the response
            TopicMetadataResponse metadataResponse = consumer.send(request);
            //4. parse the response
            List<TopicMetadata> topicsMetadata = metadataResponse.topicsMetadata();
            //5. iterate over the topic metadata
            for (TopicMetadata topicMetadata : topicsMetadata) {
                //6. get the partition metadata for this topic
                List<PartitionMetadata> partitionsMetadata = topicMetadata.partitionsMetadata();
                //7. find the requested partition and return its leader host
                for (PartitionMetadata partitionMetadata : partitionsMetadata) {
                    if (partitionMetadata.partitionId() == partition) {
                        String leader = partitionMetadata.leader().host();
                        consumer.close();
                        return leader;
                    }
                }
            }
            consumer.close();
        }
        return null;
    }
}
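Note that SimpleConsumer belongs to the legacy Scala client and was deprecated and eventually removed in later Kafka releases. For comparison only, the same idea of re-reading from an explicit offset can be expressed with the modern KafkaConsumer by assigning the partition manually and seeking to the desired offset. The sketch below is not part of the original post; the broker address and the group id "seek-demo" are assumptions, so adjust them to your cluster.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;
import org.apache.kafka.common.serialization.StringDeserializer;

public class SeekConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        //assumed broker address; adjust to your environment
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop102:9092");
        //hypothetical group id for this demo
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "seek-demo");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            TopicPartition tp = new TopicPartition("first", 0);
            //manual assignment: no subscription, no group rebalance
            consumer.assign(Collections.singletonList(tp));
            //re-read from offset 0, analogous to the offset parameter above
            consumer.seek(tp, 0L);
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(2));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println("value:" + record.value() + " offset:" + record.offset());
            }
        }
    }
}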
Approach 2: AUTO_OFFSET_RESET_CONFIG

/**
* auto.offset.reset
*/
public static final String AUTO_OFFSET_RESET_CONFIG = "auto.offset.reset";
public static final String AUTO_OFFSET_RESET_DOC = "What to do when there is no initial offset in Kafka or if the current offset does not exist any more on the server (e.g. because that data has been deleted):

  • earliest: automatically reset the offset to the earliest offset
  • latest: automatically reset the offset to the latest offset
  • none: throw exception to the consumer if no previous offset is found for the consumer's group
  • anything else: throw exception to the consumer.
";

In the high-level API, AUTO_OFFSET_RESET_CONFIG only takes effect when the consumer uses a new group (so no committed offset exists for that group) or when the current offset no longer exists on the server (e.g. the data has expired or been deleted).
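As an illustration, here is a minimal sketch (not from the original post) of re-consuming with the high-level API: switch to a fresh group.id so no committed offset exists, and set auto.offset.reset to earliest so consumption restarts from the beginning of the log. The broker addresses and the group id "reconsume-demo" are assumptions.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class ResetOffsetConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        //assumed broker list; adjust to your environment
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "hadoop102:9092,hadoop103:9092");
        //a brand-new group id, so no committed offset exists for this group
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "reconsume-demo");
        //with no committed offset, "earliest" makes the consumer start from the earliest available offset
        props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("first"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.println("value:" + record.value() + " offset:" + record.offset());
                }
            }
        }
    }
}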
