Setting up a Kafka development environment and running the sample code

1. Installing Maven on Windows

Eclipse IDE
JDK 1.7
Apache Maven
Download Maven: http://maven.apache.org/download.cgi
(1) Installing Maven
The installation is actually quite simple: unzip the archive and configure the environment variables.
-- Unzip the Maven archive, then set the MAVEN_HOME and Path environment variables.
-- Open a command window and run "mvn -v"; if the Maven version information is printed, the environment variables are configured correctly.
(2) Setting up the Maven development environment in Eclipse and creating a project
-- Install the Maven plugin in Eclipse:
Open Eclipse --> Eclipse Marketplace --> search for "maven" --> install


-- Point Eclipse's Maven integration at the external Maven installation:
Window -> Preferences -> Maven -> Installations -> Add
Now you can create a Maven project.
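
Since the sample code below uses Kafka's old Scala client APIs (kafka.javaapi.producer and kafka.consumer), the new Maven project needs the Kafka client jar as a dependency. A minimal pom.xml entry might look like the following; the version (a 0.8.x build for Scala 2.10) is an assumption and should be matched to the version of your Kafka cluster.

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.10</artifactId>
    <version>0.8.2.2</version>
</dependency>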


2. Writing the code

Producer code

package com.chh.test;

import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class KafkaProducer extends Thread
{
    private final Producer<String, String> producer;
    private final String topic;
    private final Properties props = new Properties();

    public KafkaProducer(String topic)
    {
        // Serialize message values as strings and point the producer at the broker list.
        props.put("serializer.class", "kafka.serializer.StringEncoder");
        props.put("metadata.broker.list", "master:9092,slave1:9092,slave2:9092");
        producer = new Producer<String, String>(new ProducerConfig(props));
        this.topic = topic;
    }

    @Override
    public void run() {
        int messageNo = 1;
        while (true)
        {
            // Send a numbered message every 3 seconds.
            String messageStr = "Message_" + messageNo;
            System.out.println("Send:" + messageStr);
            producer.send(new KeyedMessage<String, String>(topic, messageStr));
            messageNo++;
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Consumer code

package com.chh.test;

import java.util.HashMap;  
import java.util.List;  
import java.util.Map;  
import java.util.Properties;  
  
import kafka.consumer.ConsumerConfig;  
import kafka.consumer.ConsumerIterator;  
import kafka.consumer.KafkaStream;  
import kafka.javaapi.consumer.ConsumerConnector; 

public class KafkaConsumer extends Thread
{
    private final ConsumerConnector consumer;
    private final String topic;

    public KafkaConsumer(String topic)
    {
        consumer = kafka.consumer.Consumer.createJavaConsumerConnector(
                createConsumerConfig());
        this.topic = topic;
    }

    private static ConsumerConfig createConsumerConfig()
    {
        // The high-level consumer coordinates through ZooKeeper and commits offsets there.
        Properties props = new Properties();
        props.put("zookeeper.connect", KafkaProperties.zkConnect);
        props.put("group.id", KafkaProperties.groupId);
        props.put("zookeeper.session.timeout.ms", "40000");
        props.put("zookeeper.sync.time.ms", "200");
        props.put("auto.commit.interval.ms", "1000");
        return new ConsumerConfig(props);
    }

    @Override
    public void run() {
        // Ask for a single stream (one consumer thread) for the topic.
        Map<String, Integer> topicCountMap = new HashMap<String, Integer>();
        topicCountMap.put(topic, new Integer(1));
        Map<String, List<KafkaStream<byte[], byte[]>>> consumerMap =
                consumer.createMessageStreams(topicCountMap);
        KafkaStream<byte[], byte[]> stream = consumerMap.get(topic).get(0);
        ConsumerIterator<byte[], byte[]> it = stream.iterator();
        while (it.hasNext()) {
            System.out.println("receive:" + new String(it.next().message()));
            try {
                sleep(3000);
            } catch (InterruptedException e) {
                e.printStackTrace();
            }
        }
    }
}

Main program

package com.chh.test;

public class KafkaDemo {
    public static void main(String[] args)
    {
        // Start a producer thread and a consumer thread against the same topic.
        KafkaProducer producerThread = new KafkaProducer(KafkaProperties.topic);
        producerThread.start();

        KafkaConsumer consumerThread = new KafkaConsumer(KafkaProperties.topic);
        consumerThread.start();
    }
}

Configuration constants

package com.chh.test;

public interface KafkaProperties {
    final static String zkConnect = "master:2181,slave1:2181,slave2:2181";  
    final static String groupId = "group1";  
    final static String topic = "topic2";  
    final static int kafkaProducerBufferSize = 64 * 1024;  
    final static int connectionTimeOut = 20000;  
    final static int reconnectInterval = 10000;  
    final static String clientId = "SimpleConsumerDemoClient";  
}
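
Before running the demo, the topic named in KafkaProperties has to exist on the cluster (unless the brokers have automatic topic creation enabled). With Kafka 0.8.1 or later, a command along these lines creates it; the replication factor and partition count below are only example values:

bin/kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 \
    --replication-factor 2 --partitions 3 --topic topic2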

Pitfalls

(1) Constant errors: kafka Failed to send messages after 3 tries

Cause: the Linux servers are configured with hostnames (via /etc/hosts) rather than real IPs, while the client program used real IPs. The client first asks a server for the broker/ZooKeeper addresses and gets back hostnames such as master and slave2, then tries to connect using slave2 and master, and at that point it cannot reach them. See also:
http://www.tuicool.com/articles/eeIvyy3
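
A straightforward fix is to make those hostnames resolvable on the development machine as well, for example by adding entries to its hosts file (C:\Windows\System32\drivers\etc\hosts on Windows). The IP addresses below are placeholders for the actual addresses of the cluster nodes:

192.168.1.101   master
192.168.1.102   slave1
192.168.1.103   slave2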


(2) When connecting to the virtual machines, use Wi-Fi; a direct wired connection did not work!

How it works

1. Consumer models

(1) Partition consumption model

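In the partition consumption model, the application reads from specific partitions directly (via the low-level SimpleConsumer API): the application itself decides which partitions each consumer instance reads, and it is responsible for tracking and saving its own offsets.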

(2) Group consumption model

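In the group consumption model, consumers join a consumer group and the high-level consumer API distributes the topic's partitions among them, so each partition is consumed by exactly one consumer in the group. Offsets are committed automatically (to ZooKeeper in this API version), and partitions are rebalanced when consumers join or leave the group.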

Consumer partition assignment algorithm

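The old high-level consumer assigns partitions to the consumer threads in a group with a range-style algorithm: the partitions are split as evenly as possible, and the first (partitions mod consumers) threads each receive one extra partition. The sketch below only illustrates that arithmetic (for example, 10 partitions over 3 consumers gives 4, 3 and 3); it is not Kafka's actual implementation.

public class RangeAssignmentDemo {
    public static void main(String[] args) {
        int partitions = 10;                        // partitions of the topic
        int consumers = 3;                          // consumer threads in the group
        int perConsumer = partitions / consumers;   // minimum partitions per consumer
        int extra = partitions % consumers;         // first 'extra' consumers get one more

        for (int i = 0; i < consumers; i++) {
            int start = i * perConsumer + Math.min(i, extra);
            int count = perConsumer + (i < extra ? 1 : 0);
            System.out.println("consumer-" + i + " gets partitions "
                    + start + ".." + (start + count - 1));
        }
    }
}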

2. Producer models

(1) Synchronous production model

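In the synchronous model, the producer sends each message (or small batch) in the calling thread and waits for the request to complete (the level of acknowledgement is controlled by request.required.acks) before send() returns, so failures surface to the caller immediately.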

(2) Asynchronous production model

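In the asynchronous model, send() only appends the message to an in-memory queue; a background thread drains the queue and ships messages to the broker in batches (controlled by settings such as queue.buffering.max.ms and batch.num.messages). If the process dies or the queue overflows, the buffered messages are lost.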

(3) Comparing the two production models

Synchronous production model:
(1) low message-loss rate;
(2) high message-duplication rate (an acknowledgement lost to network problems triggers a resend);
(3) high latency.
Asynchronous production model:
(1) low latency;
(2) high send throughput;
(3) high message-loss rate (no acknowledgement mechanism; messages are dropped when the send queue fills up).
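
For reference, the producer in the example above can be switched to the asynchronous model by adding a few properties in the KafkaProducer constructor. This is only a sketch; the values are illustrative and should be tuned for the actual workload.

props.put("producer.type", "async");          // buffer sends and ship them from a background thread
props.put("queue.buffering.max.ms", "500");   // flush the queue at least every 500 ms
props.put("batch.num.messages", "200");       // ... or once 200 messages have accumulated
props.put("request.required.acks", "1");      // wait for the partition leader's acknowledgement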
