Kafka Installation and Usage

Step 1: Download the code

Download a Kafka release and unpack it:

> tar -xzf kafka_2.13-2.4.0.tgz

> cd kafka_2.13-2.4.0

 

 

Step 2: Start the services

Kafka requires ZooKeeper, so make sure a ZooKeeper service is already running on your local machine before you start Kafka.

If you do not have a standalone ZooKeeper installation, you can use the one bundled with Kafka; the Kafka distribution ships with an embedded ZooKeeper service.

Start the bundled ZooKeeper:

> bin/zookeeper-server-start.sh config/zookeeper.properties 

[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig) ...

Once ZooKeeper is up, you can start the Kafka server.

kafka-server-start.sh: the Kafka startup script

server.properties: the configuration file loaded at startup (required)

&: run the process in the background

> bin/kafka-server-start.sh config/server.properties & 

[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties) ...
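For reference, the core settings in config/server.properties for a single-node setup look roughly like the following (the values shown are the defaults shipped with the distribution; adjust them for your environment):

```properties
# Unique id of this broker within the cluster
broker.id=0

# Address the broker listens on for client connections
#listeners=PLAINTEXT://:9092

# Directory where Kafka stores its log segments
log.dirs=/tmp/kafka-logs

# ZooKeeper connection string (host:port)
zookeeper.connect=localhost:2181
```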

Step 3: Create a topic

Create a topic with a single partition and a single replica.

kafka-topics.sh: the topic management script

--create: create a topic

--zookeeper localhost:2181: the ZooKeeper instance to use (as noted above, Kafka requires ZooKeeper)

--replication-factor 1: one replica

--partitions 1: one partition

--topic test: name the new topic test

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

List the topics that already exist under this ZooKeeper instance:

kafka-topics.sh: the topic management script

--list: list all topics

--zookeeper localhost:2181: the ZooKeeper instance whose topics to list

> bin/kafka-topics.sh --list --zookeeper localhost:2181 
test

 

Step 4: Send messages

Run the producer and send a few messages; each line you type is sent as one message.

kafka-console-producer.sh: produce messages from the console

--broker-list localhost:9092: the broker(s) the producer sends messages to

--topic test: send messages to the topic test

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message

 

Step 5: Consume messages

Consume the messages stored in Kafka.

kafka-console-consumer.sh: consume messages from the console

--bootstrap-server localhost:9092: the Kafka broker to consume from

--topic test: consume messages from the topic test

--from-beginning: start consuming from the beginning of the log

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning 
This is a message
This is another message
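The console producer/consumer pair above demonstrates Kafka's basic decoupling: producers append messages to a topic, and consumers read them off in order. As a purely illustrative analogy in plain Java (not the Kafka API), the flow can be sketched with a BlockingQueue standing in for the topic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class QueueDemo {
    // Drains everything currently in the "topic", like --from-beginning.
    static List<String> consumeAll(BlockingQueue<String> topic) {
        List<String> received = new ArrayList<>();
        String msg;
        while ((msg = topic.poll()) != null) {
            received.add(msg);
        }
        return received;
    }

    public static void main(String[] args) {
        // The queue stands in for a topic: producers append, consumers read in order.
        BlockingQueue<String> topic = new ArrayBlockingQueue<>(16);
        topic.offer("This is a message");       // each offer is one message
        topic.offer("This is another message");
        System.out.println(consumeAll(topic));  // messages come out in send order
    }
}
```

Unlike a real topic, the queue loses each message once it is taken; Kafka instead retains the log on disk, which is why a consumer can re-read it with --from-beginning.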

 

Step 6: Send messages from Java

Add the Kafka client Maven dependency:

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>2.0.1</version>
</dependency>

The startup code runs a consumer thread and a producer thread:
public static void main(String[] args) {
    new Thread(new Runnable() {
        @Override
        public void run() {
            // consumer
            consume();
        }
    }).start();
    new Thread(new Runnable() {
        @Override
        public void run() {
            // producer
            produce();
        }
    }).start();
}

 

import java.util.Properties;
import org.apache.kafka.clients.producer.*;

private static void produce() {
    Properties props = new Properties();
    // Kafka broker address: 192.168.1.28:9092
    props.put("bootstrap.servers", "192.168.1.28:9092");
    props.put("acks", "all");
    props.put("retries", 0);
    props.put("batch.size", 16384);
    props.put("linger.ms", 1);
    props.put("buffer.memory", 33554432);
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

    Producer<String, String> producer = new KafkaProducer<>(props);
    for (int i = 0; i < 100; i++) {
        producer.send(new ProducerRecord<>("test", Integer.toString(i), Integer.toString(i)), new Callback() {
            @Override
            public void onCompletion(RecordMetadata metadata, Exception exception) {
                // exception is null on success, so guard before dereferencing it
                if (exception != null) {
                    System.out.println("send failed: " + exception.getMessage());
                } else {
                    System.out.println("metadata = " + metadata);
                }
            }
        });
    }
    producer.close();
}
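Each record sent above carries a key, and Kafka's default partitioner hashes the key to choose a partition (the real implementation uses murmur2 over the serialized key bytes). The sketch below is a simplified stand-in using String.hashCode, purely to illustrate the idea that the same key always lands on the same partition:

```java
public class PartitionSketch {
    // Simplified stand-in for Kafka's default partitioner:
    // same key -> same partition, so per-key ordering is preserved.
    // (Kafka actually uses murmur2 over the serialized key bytes.)
    static int partitionFor(String key, int numPartitions) {
        return (key.hashCode() & 0x7fffffff) % numPartitions;
    }

    public static void main(String[] args) {
        // The same key always maps to the same partition.
        System.out.println(partitionFor("42", 3) == partitionFor("42", 3)); // prints true
        // With a single partition (as in our topic), every key maps to partition 0.
        System.out.println(partitionFor("anything", 1)); // prints 0
    }
}
```

This is why choosing a good key matters: all records with the same key go to one partition and are consumed in order.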

 

Step 7: Consume messages from Java

import java.time.Duration;
import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.clients.consumer.*;

private static void consume() {
    Properties props = new Properties();
    // Kafka broker address: 192.168.1.28:9092
    props.put("bootstrap.servers", "192.168.1.28:9092");
    props.put("group.id", "test");
    props.put("enable.auto.commit", "true");
    props.put("auto.commit.interval.ms", "1000");
    props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
    KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
    consumer.subscribe(Arrays.asList("test", "my-topic"));
    while (true) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
        for (ConsumerRecord<String, String> record : records)
            System.out.printf("offset = %d, key = %s, value = %s%n", record.offset(), record.key(), record.value());
    }
}
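With enable.auto.commit=true, the consumer periodically commits how far it has read, and on restart it resumes from the last committed offset; anything after that point may be redelivered (at-least-once delivery). The offset mechanics can be sketched in plain Java with a List standing in for the partition log (illustrative only, not the Kafka API):

```java
import java.util.Arrays;
import java.util.List;

public class OffsetSketch {
    // A partition is an ordered log; an offset is an index into it.
    static List<String> log = Arrays.asList("m0", "m1", "m2", "m3");

    // Read everything from the committed offset onward and
    // return the next offset to commit.
    static int consumeFrom(int committedOffset) {
        for (int offset = committedOffset; offset < log.size(); offset++) {
            System.out.printf("offset = %d, value = %s%n", offset, log.get(offset));
        }
        return log.size();
    }

    public static void main(String[] args) {
        int committed = 0;                  // like --from-beginning
        committed = consumeFrom(committed); // reads m0..m3, "commits" offset 4
        consumeFrom(committed);             // nothing new to read
    }
}
```

Committing the returned offset here plays the role that auto.commit.interval.ms plays in the real consumer, which commits roughly once per second with the settings above.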

 

 
