Kafka Cluster Installation and Deployment, with Java Producer and Consumer Test Code
kafka_2.11-0.10.0.0 (Kafka 0.10.0.0 built for Scala 2.11)
ubuntu 14.04.4 x64
hadoop 2.7.2
spark 2.0.0
scala 2.11.8
jdk 1.8.0_101
For installing the ZooKeeper 3.x cluster, refer to the corresponding section of the Spark installation and deployment post.
1. Download
Binary package:
https://www.apache.org/dyn/closer.cgi?path=/kafka/0.10.0.0/kafka_2.11-0.10.0.0.tgz
2. Extract
root@py-server:/server# tar xvzf kafka_2.11-0.10.0.0.tgz
root@py-server:/server# mv kafka_2.11-0.10.0.0/ kafka/
3. Environment variables
vi ~/.bashrc
export KAFKA_HOME=/server/kafka
export PATH=$PATH:$KAFKA_HOME/bin
source ~/.bashrc
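A quick sanity check that the variables are in effect:
root@py-server:/server# echo $KAFKA_HOME
/server/kafka
root@py-server:/server# which kafka-topics.sh
/server/kafka/bin/kafka-topics.sh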
4. Configuration
config/server.properties
root@py-server:/server/kafka/config# vi server.properties
Reference:
http://blog.csdn.net/z769184640/article/details/51585419
# Unique ID of this broker.
broker.id=0
# Address and port the service binds to; use the address reported by `hostname -i`, otherwise it may bind to 127.0.0.1 and producers may be unable to send messages
listeners=PLAINTEXT://10.1.1.6:9092
#advertised.listeners=PLAINTEXT://your.host.name:9092
advertised.listeners=PLAINTEXT://10.1.1.6:9092
[If this is not set, you may hit "failed to send after 3 tries" connection errors]
# Directory (or comma-separated list of directories) for logs and messages; /tmp is not recommended [I did not change this; mine is still under /tmp]
log.dirs=/usr/local/services/kafka/kafka-logs
# Default number of partitions per topic; a larger value gives consumers more parallelism.
num.partitions=2
# ZooKeeper connection string, comma separated; a form like 10.1.1.6:2181/kafka can be used to put Kafka's data under a chroot in ZooKeeper
zookeeper.connect=10.1.1.6:2181,10.1.1.11:2181,10.1.1.12:2181,10.1.1.13:2181,10.1.1.14:2181
5. Distribute
Copy to the other machines:
root@py-server:/server# scp -r kafka/ [email protected]:/server/
root@py-server:/server# scp -r kafka/ [email protected]:/server/
root@py-server:/server# scp -r kafka/ [email protected]:/server/
root@py-server:/server# scp -r kafka/ [email protected]:/server/
On hosts 11-14, set the environment variables as above and change the config file as follows:
broker.id=1~4 [one per host]
listeners=PLAINTEXT://10.1.1.11~14:9092 [each host's own address]
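These edits can be scripted rather than made by hand in vi; a minimal sketch for 10.1.1.11 (adjust the ID and address per host; the second sed rewrites both listeners and advertised.listeners):
root@py-11:/server# sed -i 's/^broker.id=0/broker.id=1/' $KAFKA_HOME/config/server.properties
root@py-11:/server# sed -i 's|//10.1.1.6:9092|//10.1.1.11:9092|g' $KAFKA_HOME/config/server.properties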
6. Start
First start the ZooKeeper cluster; see the Spark installation and deployment post for the procedure.
Then start Kafka on every node:
root@py-server:/server/kafka# ./bin/kafka-server-start.sh -daemon config/server.properties
root@py-11:/server/kafka# ./bin/kafka-server-start.sh -daemon config/server.properties
root@py-12:/server/kafka# ./bin/kafka-server-start.sh -daemon config/server.properties
root@py-13:/server/kafka# ./bin/kafka-server-start.sh -daemon config/server.properties
root@py-14:/server/kafka# ./bin/kafka-server-start.sh -daemon config/server.properties
-daemon runs the broker in the background.
[Note: do not invoke a bare kafka-server-start.sh -daemon config/server.properties from the PATH; run it from $KAFKA_HOME as above, presumably so the relative config path resolves]
jps now shows the Kafka process:
root@py-14:/server/kafka# jps
1203 Kafka
551 DataNode
2124 Jps
701 NodeManager
911 Worker
28063 SparkSubmit
383 QuorumPeerMain
root@py-14:/server/kafka#
Likewise on the other machines:
root@py-server:/server/kafka# jps
18592 NodeManager
7456 Main
18867 Worker
9780 Kafka
17894 DataNode
18073 SecondaryNameNode
18650 Master
17499 QuorumPeerMain
9837 Jps
18269 ResourceManager
17725 NameNode
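If Kafka is missing from jps on a node, check that broker's log, which the binary distribution writes under $KAFKA_HOME/logs:
root@py-server:/server/kafka# tail -n 50 logs/server.log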
7. Testing
7.1 Single-machine topic test
7.1.1 Create a topic named my-test
root@py-server:/server/kafka# bin/kafka-topics.sh --create --zookeeper 10.1.1.6:2181 --replication-factor 3 --partitions 1 --topic my-test
Created topic "my-test".
#replication-factor is the number of replicas
#partitions is the number of partitions
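To double-check that the topic exists, list the topics; the output should include my-test:
root@py-server:/server/kafka# bin/kafka-topics.sh --list --zookeeper 10.1.1.6:2181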
7.1.2 Send messages; Ctrl+C to stop
root@py-server:/server/kafka# bin/kafka-console-producer.sh --broker-list 10.1.1.6:9092 --topic my-test
今天是个好日子
hello
^Croot@py-server:/server/kafka#
7.1.3 Consume the messages on another machine
root@py-11:/server/kafka# bin/kafka-console-consumer.sh --zookeeper 10.1.1.11:2181 --from-beginning --topic my-test
今天是个好日子
hello
^CProcessed a total of 2 messages
root@py-12:/server/kafka# bin/kafka-console-consumer.sh --zookeeper 10.1.1.12:2181 --from-beginning --topic my-test
今天是个好日子
hello
^CProcessed a total of 2 messages
root@py-12:/server/kafka#
root@py-server:/server/kafka# bin/kafka-console-consumer.sh --zookeeper 10.1.1.6:2181 --from-beginning --topic my-test
今天是个好日子
hello
^CProcessed a total of 2 messages
root@py-server:/server/kafka#
The other machines behave the same.
If you keep sending messages, the new ones appear continuously in the consumer terminals.
7.2 Describe the topic
root@py-server:/server/kafka/bin# ./kafka-topics.sh --describe --zookeeper 10.1.1.6:2181 --topic my-test
Topic:my-test   PartitionCount:1   ReplicationFactor:3   Configs:
    Topic: my-test   Partition: 0   Leader: 3   Replicas: 3,1,2   Isr: 3,1,2
(Leader is the broker id serving reads and writes for the partition; Replicas lists the brokers holding a copy; Isr is the in-sync subset.)
root@py-14:/server/kafka# bin/kafka-topics.sh --describe --zookeeper 10.1.1.6:2181 --topic my-test
(same output from any node)
7.3 Query via ZooKeeper
root@py-14:/server/zookeeper/bin# $ZOOKEEPER_HOME/bin/zkCli.sh
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0]
Connected successfully.
[zk: localhost:2181(CONNECTED) 1] ls /
[controller, controller_epoch, brokers, zookeeper, admin, isr_change_notification, consumers, config]
[zk: localhost:2181(CONNECTED) 2] ls /brokers
[ids, topics, seqid]
[zk: localhost:2181(CONNECTED) 3] ls /brokers/ids
[0, 1, 2, 3, 4]
[zk: localhost:2181(CONNECTED) 4] ls /brokers/ids/0
[]
[zk: localhost:2181(CONNECTED) 5] ls /brokers/topics
[my-test]
[zk: localhost:2181(CONNECTED) 6] ls /brokers/topics/test/partitions
Node does not exist: /brokers/topics/test/partitions
[zk: localhost:2181(CONNECTED) 7] ls /brokers/topics/my-test/partitions
[0]
[zk: localhost:2181(CONNECTED) 8]
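The registration data of a broker can be read with get, which prints the JSON the broker registered (host, port, endpoints):
[zk: localhost:2181(CONNECTED) 8] get /brokers/ids/0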
8. Stopping Kafka
pkill -9 -f server.properties
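The distribution also ships a stop script, which shuts brokers down more gracefully than pkill -9:
root@py-server:/server/kafka# bin/kafka-server-stop.sh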
9. Application
References:
0.10.0.0
The API differs from 0.8.x; the 0.8.x examples will not work.
http://blog.csdn.net/louisliaoxh/article/details/51577117 (primary)
http://www.cnblogs.com/fxjwind/p/5646631.html (primary)
http://kafka.apache.org/0100/javadoc/index.html?org/apache/kafka/clients/producer/KafkaProducer.html
0.8.x references:
http://chengjianxiaoxue.iteye.com/blog/2190488
http://www.open-open.com/lib/view/open1407942131801.html
http://blog.csdn.net/z769184640/article/details/51585419
http://wenku.baidu.com/view/7478cab24431b90d6d85c703.html?from=search
Advanced:
http://blog.csdn.net/hxpjava1/article/details/19160665
http://orchome.com/11
[Note: code found online is generally missing pieces and does not run. The code in section 11 below tested successfully for me, but there is no guarantee it will run in other environments.]
9.1 Producer code [see appendix 11.1]
9.1.2 Building the jar
[Maven reference: http://www.cnblogs.com/xing901022/p/4170248.html]
project -> Run As -> Maven build [goal: compile; alternatively, add compile as the default goal in the pom.xml build section]
eclipse -> File -> Import -> Maven -> Existing Maven Projects -> choose your project folder
Select src/main/java/kafkaProducer.java in the Kafka project on the left.
eclipse -> File -> Export -> Java -> JAR file; check the three items on the right plus "Export Java source files and resources"; choose the output location and name; Next, Next; at "Select the class of the application entry point" pick the main class kafkaProducer.java; Finish, and wait while Maven downloads packages and builds the jar.
9.1.3 Jar including external dependencies
http://lvjun106.iteye.com/blog/1849803
9.2 Consumer code [see appendix 11.2]
9.2.2 Building the jar
Same steps as 9.1, with kafkaProducer.java replaced by kafkaConsumer.java.
10. Testing
10.1 Packaging the Producer class (with external dependencies)
Tested against 0.10.0.0.
[MANIFEST.MF reference: http://www.cnblogs.com/lanxuezaipiao/p/3291641.html]
A plain Eclipse Maven Run As -> build does not bundle external dependencies, and going through MANIFEST.MF means listing many dependency jars by hand, which is tedious. The following approach is simpler.
Add a build section like this to the pom.xml:
<build>
  <defaultGoal>compile</defaultGoal>
  <plugins>
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <archive>
          <manifest>
            <mainClass>ktest.kafka.Producer</mainClass>
          </manifest>
        </archive>
        <descriptorRefs>
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
      </configuration>
    </plugin>
  </plugins>
</build>
[Note: mainClass is your main class, i.e. the package path plus the class name]
10.1.2 Command-line build
In Run Configurations, set the program arguments to test-producer plus a message string, since the code reads args[0] and args[1]; without them it errors out.
From the project root, build the jar:
E:\fm-workspace\workspace_2\kafka>mvn assembly:assembly
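With the pom above, the assembly plugin writes the bundled jar to target\kafka-0.0.1-SNAPSHOT-jar-with-dependencies.jar (named after the pom's artifactId and version); the Producer-ok.jar used below is presumably that file, renamed.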
10.1.3 Create the topic
root@py-server:/server/kafka/bin# ./kafka-topics.sh --create --zookeeper "10.1.1.6:2181" --topic "test-producer" --partitions 10 --replication-factor 3
Created topic "test-producer".
10.1.4 Run the Producer jar
root@py-server:/projects/test/javatest# ll
total 11432
drwxr-xr-x 2 root root     4096 Aug 11 14:27 ./
drwxr-xr-x 6 root root     4096 Aug  8 14:58 ../
-rw-r--r-- 1 root root        0 Aug 11 14:17 logContentFile.log
-rw-r--r-- 1 root root 11696578 Aug 11 14:26 Producer-ok.jar
Usage: java -jar <jar> <topic> <message>
root@py-server:/projects/test/javatest# java -jar Producer-ok.jar test-producer I_am_a_test_message
kafka Logger--> INFO{AbstractConfig.java:178}-ProducerConfig values:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
[Note: a message string containing spaces fails, presumably because the unquoted shell argument is split and only args[1] is sent; quoting the message should avoid this]
10.2 Consumer jar
10.2.1 pom.xml
The same build section as in 10.1, with the main class changed:
<mainClass>ktest.kafka.Consumer</mainClass>
10.2.2 Code
See appendix 11.2.
group.id is defined in $KAFKA_HOME/config/consumer.properties:
#consumer group id
group.id=test-consumer-group
10.2.3 Build
In Run Configurations, set the program argument to test-producer (anything will do); the code reads args[0], so without an argument it errors out.
E:\fm-workspace\workspace_2\kafka>mvn assembly:assembly
[Run As -> build in Eclipse complains about the <> diamond operator, but the mvn build is OK; the generics are in fact correct]
10.2.4 Run the jars
Open one terminal and run the consumer:
root@py-server:/projects/test/javatest# java -jar Consumer-ok.jar test-producer
kafka Logger--> INFO{AbstractConfig.java:178}-ConsumerConfig values:
kafka Logger--> INFO{AppInfoParser.java:83}-Kafka version : 0.10.0.0
kafka Logger--> INFO{AppInfoParser.java:84}-Kafka commitId : b8642491e78c5a13
kafka Logger--> INFO{AbstractCoordinator.java:505}-Discovered coordinator 10.1.1.6:9092 (id: 2147483647 rack: null) for group test-consumer-group.
kafka Logger--> INFO{ConsumerCoordinator.java:280}-Revoking previously assigned partitions [] for group test-consumer-group
kafka Logger--> INFO{AbstractCoordinator.java:326}-(Re-)joining group test-consumer-group
kafka Logger--> INFO{AbstractCoordinator.java:434}-Successfully joined group test-consumer-group with generation 3
kafka Logger--> INFO{ConsumerCoordinator.java:219}-Setting newly assigned partitions [test-producer-2, test-producer-1, test-producer-0, test-producer-3] for group test-consumer-group
...
It then sits waiting.
Open another terminal and run the producer:
root@py-server:/projects/test/javatest# ll
total 22856
drwxr-xr-x 2 root root     4096 Aug 11 15:19 ./
drwxr-xr-x 6 root root     4096 Aug  8 14:58 ../
-rw-r--r-- 1 root root 11697980 Aug 11 15:15 Consumer-ok.jar
-rw-r--r-- 1 root root        0 Aug 11 14:17 logContentFile.log
-rw-r--r-- 1 root root 11696578 Aug 11 14:26 Producer-ok.jar
root@py-server:/projects/test/javatest# java -jar Producer-ok.jar test-producer I_am_a_test_messagejlkjlkjflkjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj
Then check the consumer terminal:
kafka Logger--> INFO{AppInfoParser.java:83}-Kafka version : 0.10.0.0
kafka Logger--> INFO{AppInfoParser.java:84}-Kafka commitId : b8642491e78c5a13
kafka Logger--> INFO{AbstractCoordinator.java:505}-Discovered coordinator 10.1.1.6:9092 (id: 2147483647 rack: null) for group test-consumer-group.
kafka Logger--> INFO{ConsumerCoordinator.java:280}-Revoking previously assigned partitions [] for group test-consumer-group
kafka Logger--> INFO{AbstractCoordinator.java:326}-(Re-)joining group test-consumer-group
kafka Logger--> INFO{AbstractCoordinator.java:434}-Successfully joined group test-consumer-group with generation 3
kafka Logger--> INFO{ConsumerCoordinator.java:219}-Setting newly assigned partitions [test-producer-2, test-producer-1, test-producer-0, test-producer-3] for group test-consumer-group
offset = 0, key = null, value = I_am_a_test_messagejlkjlkjflkjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjjj
[The message produced by the Producer shows up.] Test successful.
####################################
Problems:
slf4j has a version coupling with log4j: slf4j-log4j12 1.7.5 depends on slf4j-api 1.7.5 and log4j 1.2.17.
http://blog.csdn.net/anialy/article/details/8529188
http://my.oschina.net/zimingforever/blog/98048
An appender error means log4j.properties needs to be placed in the project root; the default configuration is enough.
"Main class not found" even though it clearly exists: rebuild the project.
If the <> diamond operator errors, fill in an explicit (sensible) type argument.
Under Ubuntu, Eclipse keeps reporting slf4j errors even though all the jars are present; under Windows there is no problem.
Also, when exporting the jar, the third option saves .class files into the jar; do not select it.
If running the program gives "Exception in thread "main" java.lang.NoClassDefFoundError: org/slf4j/LoggerFactory", the project is missing the slf4j-api.jar and slf4j-log4j12.jar packages.
http://www.cnblogs.com/xwdreamer/archive/2012/02/20/2359595.html
#################################
pom.xml [not exhaustive; use as a reference]
The pom.xml files found online trigger a ZooKeeper error because they lack the ZooKeeper dependency. Also, my Kafka is 2.11-0.10, so the pom.xml was updated accordingly, and a build section with a compile default goal was added.
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>bj.zm</groupId>
  <artifactId>kafka</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>kafka</name>
  <url>http://maven.apache.org</url>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>0.10.0.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.8</version>
    </dependency>
  </dependencies>
  <build>
    <defaultGoal>compile</defaultGoal>
  </build>
</project>
#############################
Reference:
############################# Server Basics #############################
# Unique ID of this broker.
broker.id=1
############################# Socket Server Settings #############################
# Address and port the service binds to; use the address reported by `hostname -i`, otherwise it may bind to 127.0.0.1 and producers may be unable to send messages
listeners=PLAINTEXT://172.23.8.144:9092
# Address and port the broker advertises to producers and consumers; falls back to the listeners value if unset. Not configured here.
#advertised.listeners=PLAINTEXT://your.host.name:9092
# Number of threads handling network requests
num.network.threads=3
# Number of threads handling disk I/O
num.io.threads=8
# Socket server send buffer size (SO_SNDBUF)
socket.send.buffer.bytes=102400
# Socket server receive buffer size (SO_RCVBUF)
socket.receive.buffer.bytes=102400
# Maximum size of a single request, as a guard against OOM
socket.request.max.bytes=104857600
############################# Log Basics #############################
# Directory (or comma-separated list of directories) for logs and messages; /tmp is not recommended
log.dirs=/usr/local/services/kafka/kafka-logs
# Default number of partitions per topic; a larger value gives consumers more parallelism.
num.partitions=2
# The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
# This value is recommended to be increased for installations with data dirs located in RAID array.
num.recovery.threads.per.data.dir=1
# Log retention time in hours; older segments are deleted
log.retention.hours=168
# Maximum size of a log segment file; a new file is rolled once this is exceeded
log.segment.bytes=1073741824
# Interval in milliseconds at which segments are checked against the retention policy
log.retention.check.interval.ms=300000
############################# Zookeeper #############################
# ZooKeeper connection string, comma separated; a form like 172.23.8.59:2181/kafka can be used to put Kafka's data under a chroot in ZooKeeper
zookeeper.connect=172.23.8.144:2181,172.23.8.179:2181,172.23.8.59:2181
# ZooKeeper connection timeout
zookeeper.connection.timeout.ms=6000
Auto-committing offsets:
Properties props = new Properties();
// Kafka broker addresses; there is no need to list every broker in the cluster, one or a few is enough
props.put("bootstrap.servers", "172.16.49.173:9092");
// consumer group name; must be set
props.put("group.id", a_groupId);
// auto-commit offsets; the commit frequency is controlled by auto.commit.interval.ms
props.put("enable.auto.commit", "true");
// offset commit frequency
props.put("auto.commit.interval.ms", "1000");
// start this group.id from the earliest offset. If unset, the default is latest, i.e. the offset of the newest message in the topic:
// with latest, the consumer only sees messages produced after it started
props.put("auto.offset.reset", "earliest");
// heartbeat/session timeout
props.put("session.timeout.ms", "30000");
// key and value deserializer classes
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// subscribe to the topic
consumer.subscribe(Arrays.asList("topic_test"));
while (true) {
    // poll with a 100 ms timeout (the argument is a timeout in milliseconds, not a record count)
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records)
        System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
}
Things to note:
group.id: must be set.
auto.offset.reset: set to earliest if you want the messages the producer sent before the consumer started; leave it unset if only messages produced after startup are needed.
enable.auto.commit (default true): set to false for manual offset commits and call consumer.commitSync() at appropriate points (a sketch follows); otherwise every consumer start re-consumes from the beginning (when auto.offset.reset=earliest).
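A minimal manual-commit sketch (same props as above but with enable.auto.commit set to false; commitSync() is part of the 0.10 consumer API):
props.put("enable.auto.commit", "false");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("topic_test"));
while (true) {
    ConsumerRecords<String, String> records = consumer.poll(100);
    for (ConsumerRecord<String, String> record : records) {
        // process the record here before committing its offset
        System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
    }
    // blocks until the offsets of the records returned by the last poll are committed, or the commit fails
    consumer.commitSync();
}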
11. Code
##################################
11.1 Producer
Producer.java source:
package ktest.kafka;
import org.apache.kafka.clients.producer.Callback;       // used by the asynchronous send variant shown below
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata; // used by the asynchronous send variant shown below
import java.util.Properties;
public class Producer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.1.1.6:9092,10.1.1.11:9092");
        //props.put("acks", "all"); // ack mode; "all" waits for every replica to commit, the slowest option
        props.put("retries", 3); // retry on failure; retries may produce duplicate messages
        //props.put("batch.size", 16384); // batch buffer size per partition
        //props.put("linger.ms", 1); // how long to wait before sending when the buffer is not full; 1 adds up to 1 ms of latency
        //props.put("buffer.memory", 33554432); // total memory the producer may use for buffering
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(props);
        String topic = args[0];
        //String partitionStr = args[1];
        String messageStr = args[1];
        //ProducerRecord<String, String> record = new ProducerRecord<>(topic, partitionStr, messageStr);
        ProducerRecord<String, String> record = new ProducerRecord<>(topic, messageStr);
        producer.send(record);
        //for (int i = 0; i < 100; i++)
        //    producer.send(new ProducerRecord<>("my-test", Integer.toString(i), Integer.toString(i)));
        producer.close();
    }
}
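The Callback and RecordMetadata imports above belong to the asynchronous send variant; a sketch of that form (the two-argument send() is part of the 0.10 producer API; the printed text is only illustrative):
producer.send(record, new Callback() {
    @Override
    public void onCompletion(RecordMetadata metadata, Exception e) {
        if (e != null) {
            e.printStackTrace(); // the send failed
        } else {
            System.out.printf("sent to %s-%d at offset %d%n",
                    metadata.topic(), metadata.partition(), metadata.offset());
        }
    }
});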
##############################
log4j.properties is as follows:
[Note: change the "kafka" after layout.ConversionPattern= to your own project name]
#config root logger
log4j.rootLogger = INFO,system.out
log4j.appender.system.out=org.apache.log4j.ConsoleAppender
log4j.appender.system.out.layout=org.apache.log4j.PatternLayout
log4j.appender.system.out.layout.ConversionPattern=kafka Logger-->%5p{%F:%L}-%m%n
#config this Project.file logger
log4j.logger.thisProject.file=INFO,thisProject.file.out
log4j.appender.thisProject.file.out=org.apache.log4j.DailyRollingFileAppender
log4j.appender.thisProject.file.out.File=logContentFile.log
log4j.appender.thisProject.file.out.layout=org.apache.log4j.PatternLayout
###################################
pom.xml for Producer.java (for the consumer, change Producer to Consumer):
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>ktest</groupId>
  <artifactId>kafka</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>
  <name>kafka</name>
  <url>http://maven.apache.org</url>
  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>log4j</groupId>
      <artifactId>log4j</artifactId>
      <version>1.2.17</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-api</artifactId>
      <version>1.7.2</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-log4j12</artifactId>
      <version>1.7.2</version>
    </dependency>
    <dependency>
      <groupId>org.slf4j</groupId>
      <artifactId>slf4j-ext</artifactId>
      <version>1.7.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.zookeeper</groupId>
      <artifactId>zookeeper</artifactId>
      <version>3.4.8</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-clients</artifactId>
      <version>0.10.0.0</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka-streams</artifactId>
      <version>0.10.0.0</version>
    </dependency>
  </dependencies>
  <build>
    <defaultGoal>compile</defaultGoal>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
          <archive>
            <manifest>
              <mainClass>ktest.kafka.Producer</mainClass>
            </manifest>
          </archive>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
##############################################
11.2 Consumer.java
package ktest.kafka;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import java.util.Arrays;
import java.util.Properties;
public class Consumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "10.1.1.6:9092,10.1.1.11:9092");
        props.put("group.id", "test-consumer-group"); // the default; see $KAFKA_HOME/config/consumer.properties
        props.put("enable.auto.commit", "true"); // auto-commit offsets
        props.put("auto.commit.interval.ms", "1000"); // commit interval
        props.put("session.timeout.ms", "30000"); // consumer liveness timeout
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        String topic1 = args[0];
        consumer.subscribe(Arrays.asList(topic1));
        // to subscribe to two topics, e.g. foo and bar, pass both in a single call instead:
        //String topic2 = args[1];
        //consumer.subscribe(Arrays.asList(topic1, topic2));
        while (true) {
            // poll with a 100 ms timeout
            ConsumerRecords<String, String> records = consumer.poll(100);
            for (ConsumerRecord<String, String> record : records)
                System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
        }
    }
}
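The loop above never closes the consumer (the test run is stopped with Ctrl+C). If a clean shutdown is wanted, the 0.10 client's wakeup() pattern can be used; a sketch, not part of the tested code:
final KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
Runtime.getRuntime().addShutdownHook(new Thread() {
    public void run() {
        consumer.wakeup(); // makes a blocked poll() throw WakeupException
    }
});
try {
    consumer.subscribe(Arrays.asList(topic1));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(100);
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, value = %s%n", record.offset(), record.value());
    }
} catch (org.apache.kafka.common.errors.WakeupException e) {
    // expected on shutdown
} finally {
    consumer.close();
}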
###################################
pom.xml for Consumer.java: identical to the Producer pom.xml in 11.1 above, except that the assembly plugin's main class becomes
<mainClass>ktest.kafka.Consumer</mainClass>