Kafka Environment Setup and Practice (2): Kafka API in Practice

Kafka Environment Setup and Practice (1): Installing Kafka           http://zilongzilong.iteye.com/blog/2267913
Kafka Environment Setup and Practice (2): Kafka API in Practice      http://zilongzilong.iteye.com/blog/2267924

1. Add the dependency to the Maven project

<dependency>
	<groupId>org.apache.kafka</groupId>
	<artifactId>kafka-clients</artifactId>
	<version>0.9.0.0</version>
</dependency>

2. Integrating Kafka with Spring

spring-kafka.xml for the Spring integration:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
	xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://www.springframework.org/schema/beans
		http://www.springframework.org/schema/beans/spring-beans.xsd">

	<!-- producer used by ProducerUtil below -->
	<bean id="testProducer" class="org.apache.kafka.clients.producer.KafkaProducer">
		<constructor-arg>
			<map>
				<entry key="bootstrap.servers" value="kafka0:9092,kafka1:9092,kafka2:9092"/>
				<entry key="acks" value="all"/>
				<entry key="retries" value="0"/>
				<entry key="batch.size" value="16384"/>
				<entry key="linger.ms" value="1"/>
				<entry key="buffer.memory" value="33554432"/>
				<entry key="key.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
				<entry key="value.serializer" value="org.apache.kafka.common.serialization.StringSerializer"/>
				<entry key="partitioner.class" value="com.***.kafka.Partitioner.RandomPartitioner"/>
			</map>
		</constructor-arg>
	</bean>

	<!-- consumer for group1 -->
	<bean id="group1Consumer" class="org.apache.kafka.clients.consumer.KafkaConsumer">
		<constructor-arg>
			<map>
				<entry key="bootstrap.servers" value="kafka0:9092,kafka1:9092,kafka2:9092"/>
				<entry key="group.id" value="group1"/>
				<entry key="enable.auto.commit" value="true"/>
				<entry key="auto.commit.interval.ms" value="1000"/>
				<entry key="session.timeout.ms" value="30000"/>
				<entry key="key.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
				<entry key="value.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
			</map>
		</constructor-arg>
	</bean>

	<!-- consumer for group2 -->
	<bean id="group2Consumer" class="org.apache.kafka.clients.consumer.KafkaConsumer">
		<constructor-arg>
			<map>
				<entry key="bootstrap.servers" value="kafka0:9092,kafka1:9092,kafka2:9092"/>
				<entry key="group.id" value="group2"/>
				<entry key="enable.auto.commit" value="true"/>
				<entry key="auto.commit.interval.ms" value="1000"/>
				<entry key="session.timeout.ms" value="30000"/>
				<entry key="key.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
				<entry key="value.deserializer" value="org.apache.kafka.common.serialization.StringDeserializer"/>
			</map>
		</constructor-arg>
	</bean>

</beans>
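If the clients are built without Spring, the same producer settings can be set up in plain Java. The class below is only an equivalent sketch (ProducerFactory is a name used here for illustration, not something from the original project); the property keys mirror the entries in spring-kafka.xml above, minus the custom partitioner.

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.Producer;

public class ProducerFactory {

	// Builds a producer with the same settings as the "testProducer" bean above.
	public static Producer<String, String> create() {
		Properties props = new Properties();
		props.put("bootstrap.servers", "kafka0:9092,kafka1:9092,kafka2:9092");
		props.put("acks", "all");
		props.put("retries", "0");
		props.put("batch.size", "16384");
		props.put("linger.ms", "1");
		props.put("buffer.memory", "33554432");
		props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
		props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
		return new KafkaProducer<String, String>(props);
	}
}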

3. Using the producer

A custom partitioning strategy class:

import java.util.List;
import java.util.Map;
import java.util.Random;

import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.PartitionInfo;

public class RandomPartitioner implements Partitioner {

	private final Random random = new Random();

	@Override
	public void configure(Map<String, ?> configs) {
		// No configuration needed for random partitioning.
	}

	@Override
	public int partition(String topic, Object key, byte[] keyBytes, Object value, byte[] valueBytes, Cluster cluster) {
		// Pick a random partition of the topic; fall back to partition 0 if none are known.
		List<PartitionInfo> partitions = cluster.partitionsForTopic(topic);
		int numPartitions = partitions.size();
		if (numPartitions > 0) {
			// nextInt(bound) avoids the overflow of the Math.abs(nextInt()) % n idiom when nextInt() returns Integer.MIN_VALUE.
			return random.nextInt(numPartitions);
		} else {
			return 0;
		}
	}

	@Override
	public void close() {
		// Nothing to release.
	}
}

 

import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ProducerUtil {
	private static Producer<String, String> producer = SpringContextHolder.getBean("testProducer");

	// Sends a message to the "test" topic; the partition is chosen by RandomPartitioner.
	public static void produce(String message) {
		producer.send(new ProducerRecord<String, String>("test", message));
	}
}
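send() is asynchronous, so ProducerUtil never finds out whether a message actually reached the broker. When delivery failures matter, a Callback can be passed to send(). The variant below is only a sketch of that option (CallbackProducerUtil is a hypothetical name; SpringContextHolder and the testProducer bean are the same as above):

import org.apache.kafka.clients.producer.Callback;
import org.apache.kafka.clients.producer.Producer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

public class CallbackProducerUtil {

	private static final Producer<String, String> producer = SpringContextHolder.getBean("testProducer");

	// Like ProducerUtil.produce(), but reports delivery failures through a callback.
	public static void produce(String message) {
		producer.send(new ProducerRecord<String, String>("test", message), new Callback() {
			@Override
			public void onCompletion(RecordMetadata metadata, Exception exception) {
				if (exception != null) {
					// Delivery failed after the configured retries; log or re-queue here.
					exception.printStackTrace();
				}
			}
		});
	}
}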

 

4. Using the consumer

import java.util.Arrays;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class KafkaServletContextListener implements ServletContextListener {

	@Override
	public void contextInitialized(ServletContextEvent sce) {
		// One polling thread per consumer group; both groups read the "test" topic.
		ExecutorService executor = Executors.newFixedThreadPool(2);
		executor.execute(new Runnable() {
			@Override
			public void run() {
				KafkaConsumer<String, String> consumer = SpringContextHolder.getBean("group1Consumer");
				consumer.subscribe(Arrays.asList("test"));
				while (true) {
					ConsumerRecords<String, String> records = consumer.poll(100);
					for (ConsumerRecord<String, String> record : records) {
						record.key();
						record.offset();
						record.partition();
						record.topic();
						record.value();
						//TODO process the record
					}
				}
			}
		});
		executor.execute(new Runnable() {
			@Override
			public void run() {
				KafkaConsumer<String, String> consumer = SpringContextHolder.getBean("group2Consumer");
				consumer.subscribe(Arrays.asList("test"));
				while (true) {
					ConsumerRecords<String, String> records = consumer.poll(100);
					for (ConsumerRecord<String, String> record : records) {
						record.key();
						record.offset();
						record.partition();
						record.topic();
						record.value();
						//TODO process the record
					}
				}
			}
		});
	}

	@Override
	public void contextDestroyed(ServletContextEvent sce) {
		// Nothing is released here; the polling threads keep running until the JVM exits.
	}
}
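With enable.auto.commit set to true, as in spring-kafka.xml, offsets are committed on a timer whether or not the records were processed successfully. If that matters, offsets can be committed manually after each batch instead. The class below is only a minimal sketch of that alternative, assuming a consumer configured with enable.auto.commit=false and a hypothetical group id group3:

import java.util.Arrays;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitConsumer {

	public static void main(String[] args) {
		Properties props = new Properties();
		props.put("bootstrap.servers", "kafka0:9092,kafka1:9092,kafka2:9092");
		props.put("group.id", "group3"); // hypothetical group for this example
		props.put("enable.auto.commit", "false");
		props.put("session.timeout.ms", "30000");
		props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
		props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

		KafkaConsumer<String, String> consumer = new KafkaConsumer<String, String>(props);
		consumer.subscribe(Arrays.asList("test"));
		try {
			while (true) {
				ConsumerRecords<String, String> records = consumer.poll(100);
				for (ConsumerRecord<String, String> record : records) {
					// process record.value() ...
				}
				// Commit only after the whole batch has been processed.
				consumer.commitSync();
			}
		} finally {
			consumer.close();
		}
	}
}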

 

Add the listener in web.xml:

<listener>
	<listener-class>com.***.web.listener.KafkaServletContextListener</listener-class>
</listener>

5. Problems encountered with Kafka

    5.1 While consuming messages, the consumer kept reporting the following error:

ILLEGAL_GENERATION occurred while committing offsets for group

    I found a thread online (http://comments.gmane.org/gmane.comp.apache.kafka.user/10708) and adjusted auto.commit.interval.ms and session.timeout.ms as it suggested, but that did not help.

    The real fix was to optimize the consumer's processing logic. The consumer code used jredis; exist and get are O(1), while smembers is O(N). So I optimized the code and, wherever possible, used the O(1) exist and get calls instead of smembers. After redeploying and observing for a while, the error no longer appeared.

    In short, the solution is to optimize the consumer code and cut its processing time, so that the consumer gets back to poll() quickly enough to send heartbeats and report its progress to the group coordinator before the session times out.
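One way to confirm that processing time is the culprit before a rebalance kicks in is to time each iteration of the poll loop and log batches that consume a large share of session.timeout.ms (30000 ms in the configuration above). This is only an illustrative sketch added here, not code from the original setup; TimedPollLoop is a hypothetical helper:

import java.util.Arrays;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class TimedPollLoop {

	private static final long SESSION_TIMEOUT_MS = 30000L; // must match session.timeout.ms of the group

	// Same poll loop as in section 4, but logs batches whose processing time eats
	// into the session timeout; with the 0.9 consumer, heartbeats are only sent
	// from poll(), so long gaps between polls are what trigger rebalances and
	// the ILLEGAL_GENERATION error on commit.
	public static void run(KafkaConsumer<String, String> consumer) {
		consumer.subscribe(Arrays.asList("test"));
		while (true) {
			ConsumerRecords<String, String> records = consumer.poll(100);
			long start = System.currentTimeMillis();
			for (ConsumerRecord<String, String> record : records) {
				// process record.value() ...
			}
			long elapsed = System.currentTimeMillis() - start;
			if (elapsed > SESSION_TIMEOUT_MS / 2) {
				System.err.println("Slow batch: " + elapsed + " ms for " + records.count() + " records");
			}
		}
	}
}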
