1. Building the ZooKeeper Cluster
We use three ZooKeeper instances: zk-0, zk-1, and zk-2. If you only need a test environment, a single instance is enough. (This example assumes a distributed deployment.)
1) zk-0
Adjust the configuration file (conf/zoo.cfg):
clientPort=2181
dataDir=/opt/zookeeper-3.4.6/data
server.0=10.10.73.53:2888:3888
server.1=10.10.73.54:2888:3888
server.2=10.10.73.58:2888:3888
## Only the settings above need to change; keep the defaults for everything else.
Start ZooKeeper:
./zkServer.sh start
2) zk-1
Adjust the configuration file (all other settings are identical to zk-0):
clientPort=2182
## Only the settings above need to change; keep the defaults for everything else.
Start ZooKeeper:
./zkServer.sh start
3) zk-2
Adjust the configuration file (all other settings are identical to zk-0):
clientPort=2183
## Only the settings above need to change; keep the defaults for everything else.
Start ZooKeeper:
./zkServer.sh start
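In a clustered deployment, each ZooKeeper node also needs a myid file under its dataDir whose content matches the N of that node's server.N line in zoo.cfg; without it the server will not join the ensemble. A minimal sketch for zk-0, using a hypothetical /tmp/zk-demo path in place of the real dataDir:

```shell
# Sketch: create the myid file for node zk-0.
# /tmp/zk-demo/data stands in for the real dataDir (/opt/zookeeper-3.4.6/data).
DATADIR=/tmp/zk-demo/data
mkdir -p "$DATADIR"
echo 0 > "$DATADIR/myid"   # zk-1 would write 1, zk-2 would write 2
cat "$DATADIR/myid"
```

After all three nodes are up, `./zkServer.sh status` on each node should report one leader and two followers.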
2. Building the Kafka Cluster
Because the broker configuration file references the ZooKeeper ensemble, we show the broker configuration first. We use two Kafka brokers, kafka-0 and kafka-1, to build the cluster.
1) kafka-0(10.10.73.58)
Edit the configuration file config/server.properties as follows:
broker.id=0
port=9092
num.network.threads=2
num.io.threads=2
socket.send.buffer.bytes=1048576
socket.receive.buffer.bytes=1048576
socket.request.max.bytes=104857600
log.dir=/opt/env/kafka_2.11-0.9.0.1/logs
num.partitions=2
log.flush.interval.messages=10000
log.flush.interval.ms=1000
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=536870912
## Replication: the default number of replicas kept for each topic's partitions
## across the cluster, used to improve fault tolerance (with two brokers, 2 is the maximum).
default.replication.factor=1
log.cleanup.interval.mins=10
zookeeper.connect=10.10.73.53:2181,10.10.73.54:2181,10.10.73.58:2181
zookeeper.connection.timeout.ms=1000000
Start the Kafka broker:
> JMX_PORT=9997 bin/kafka-server-start.sh config/server.properties &
Because the ZooKeeper ensemble is already running, we do not need Kafka to start an embedded ZooKeeper. If you deploy multiple Kafka brokers on one machine, each one needs its own JMX_PORT.
2) kafka-1(10.10.73.53)
broker.id=1
port=9092
##All other settings are the same as kafka-0
Then, as with kafka-0, run the packaging steps and start this broker:
> JMX_PORT=9998 bin/kafka-server-start.sh config/server.properties &
You can use Kafka's topic tool to check each topic's "partition"/"replica" distribution and liveness.
Note that broker.id must be unique across the Kafka cluster.
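The check above can be done with the kafka-topics.sh tool that ships with Kafka. A sketch against the ZooKeeper ensemble configured earlier, using the test-topic name that the client code below also uses:

```shell
# Create a topic with 2 partitions and 2 replicas (one replica per broker):
bin/kafka-topics.sh --create \
  --zookeeper 10.10.73.53:2181,10.10.73.54:2181,10.10.73.58:2181 \
  --topic test-topic --partitions 2 --replication-factor 2

# Show partition leaders, replica placement, and the in-sync replica (Isr) set:
bin/kafka-topics.sh --describe \
  --zookeeper 10.10.73.53:2181,10.10.73.54:2181,10.10.73.58:2181 \
  --topic test-topic
```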
3. The Java Producer
The kafka-producer.properties configuration file:
##metadata.broker.list=127.0.0.1:9092,127.0.0.1:9093
metadata.broker.list=10.10.73.58:9092,10.10.73.53:9092
producer.type=sync
compression.codec=0
serializer.class=kafka.serializer.StringEncoder
#batch.num.messages=100
The Java client program:
import java.util.ArrayList;
import java.util.Collection;
import java.util.List;
import java.util.Properties;
import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;
public class KafkaProducerClient {
private Producer<String, String> inner;
private String brokerList;//for metadata discovery, Spring setter
private String location = "kafka/kafka-producer.properties";//Spring setter
private String defaultTopic;//Spring setter
public void setBrokerList(String brokerList) {
this.brokerList = brokerList;
}
public void setLocation(String location) {
this.location = location;
}
public void setDefaultTopic(String defaultTopic) {
this.defaultTopic = defaultTopic;
}
public KafkaProducerClient(){}
public void init() throws Exception {
Properties properties = new Properties();
properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(location));
if(brokerList != null) {
properties.put("metadata.broker.list", brokerList);
}
ProducerConfig config = new ProducerConfig(properties);
inner = new Producer<String, String>(config);
}
public void send(String message){
send(defaultTopic,message);
}
public void send(Collection<String> messages) {
send(defaultTopic,messages);
}
public void send(String topicName, String message) {
if (topicName == null || message == null) {
return;
}
KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName, message);
inner.send(km);
}
public void send(String topicName, Collection<String> messages) {
if (topicName == null || messages == null) {
return;
}
if (messages.isEmpty()) {
return;
}
List<KeyedMessage<String, String>> kms = new ArrayList<KeyedMessage<String, String>>();
int i= 0;
for (String entry : messages) {
KeyedMessage<String, String> km = new KeyedMessage<String, String>(topicName, entry);
kms.add(km);
i++;
if(i % 20 == 0){
inner.send(kms);
kms.clear();
}
}
if(!kms.isEmpty()){
inner.send(kms);
}
}
public void close() {
inner.close();
}
/**
* @param args
*/
public static void main(String[] args) {
KafkaProducerClient producer = null;
try {
producer = new KafkaProducerClient();
producer.init();
//producer.setBrokerList("");
int i = 0;
while (true) {
producer.send("test-topic", "this is a sample" + i);
i++;
Thread.sleep(2000);
}
} catch (Exception e) {
e.printStackTrace();
} finally {
if (producer != null) {
producer.close();
}
}
}
}
4. The Java Consumer
The kafka-consumer.properties configuration file:
##zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
# timeout in ms for connecting to zookeeper
zookeeper.connect=10.10.73.53:2181,10.10.73.54:2181,10.10.73.58:2181
zookeeper.connection.timeout.ms=100000
#consumer group id
group.id=test-group
#consumer timeout
#consumer.timeout.ms=5000
The Java client program:
import java.nio.ByteBuffer;
import java.nio.CharBuffer;
import java.nio.charset.Charset;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import kafka.consumer.Consumer;
import kafka.consumer.ConsumerConfig;
import kafka.consumer.ConsumerIterator;
import kafka.consumer.KafkaStream;
import kafka.javaapi.consumer.ConsumerConnector;
import kafka.message.Message;
import kafka.message.MessageAndMetadata;
public class KafkaConsumerClient {
private String groupid; //can be set via Spring
private String zkConnect;//can be set via Spring
private String location = "kafka/kafka-consumer.properties";//location of the configuration file
private String topic;
private int partitionsNum;
private MessageExecutor executor; //message listener
private ExecutorService threadPool;
private ConsumerConnector connector;
private Charset charset = Charset.forName("utf8");
public void setGroupid(String groupid) {
this.groupid = groupid;
}
public void setZkConnect(String zkConnect) {
this.zkConnect = zkConnect;
}
public void setLocation(String location) {
this.location = location;
}
public void setTopic(String topic) {
this.topic = topic;
}
public void setPartitionsNum(int partitionsNum) {
this.partitionsNum = partitionsNum;
}
public void setExecutor(MessageExecutor executor) {
this.executor = executor;
}
public KafkaConsumerClient() {}
//init consumer,and start connection and listener
public void init() throws Exception {
if(executor == null){
throw new RuntimeException("KafkaConsumerClient: executor can't be null!");
}
Properties properties = new Properties();
properties.load(Thread.currentThread().getContextClassLoader().getResourceAsStream(location));
if(groupid != null){
properties.put("group.id", groupid);
}
if(zkConnect != null){
properties.put("zookeeper.connect", zkConnect);
}
ConsumerConfig config = new ConsumerConfig(properties);
connector = Consumer.createJavaConsumerConnector(config);
Map<String, Integer> topics = new HashMap<String, Integer>();
topics.put(topic, partitionsNum);
Map<String, List<KafkaStream<byte[], byte[]>>> streams = connector.createMessageStreams(topics);
List<KafkaStream<byte[], byte[]>> partitions = streams.get(topic);
threadPool = Executors.newFixedThreadPool(partitionsNum * 2);
//start
for (KafkaStream<byte[], byte[]> partition : partitions) {
threadPool.execute(new MessageRunner(partition));
}
}
public void close() {
try {
threadPool.shutdownNow();
} catch (Exception e) {
//
} finally {
connector.shutdown();
}
}
class MessageRunner implements Runnable {
private KafkaStream<byte[], byte[]> partition;
MessageRunner(KafkaStream<byte[], byte[]> partition) {
this.partition = partition;
}
public void run() {
ConsumerIterator<byte[], byte[]> it = partition.iterator();
while (it.hasNext()) {
// connector.commitOffsets(); commit offsets manually when auto.commit.enable=false
MessageAndMetadata<byte[], byte[]> item = it.next();
try{
executor.execute(new String(item.message(), charset));// UTF-8; beware of exceptions thrown by the executor
}catch(Exception e){
//
}
}
}
public String getContent(Message message){
ByteBuffer buffer = message.payload();
if (buffer.remaining() == 0) {
return null;
}
CharBuffer charBuffer = charset.decode(buffer);
return charBuffer.toString();
}
}
interface MessageExecutor {
public void execute(String message);
}
/**
* @param args
*/
public static void main(String[] args) {
KafkaConsumerClient consumer = null;
try {
MessageExecutor executor = new MessageExecutor() {
public void execute(String message) {
System.out.println(message);
}
};
consumer = new KafkaConsumerClient();
consumer.setTopic("test-topic");
consumer.setPartitionsNum(2);
consumer.setExecutor(executor);
consumer.init();
Thread.sleep(10000);
} catch (Exception e) {
e.printStackTrace();
} finally {
if(consumer != null){
consumer.close();
}
}
}
}