Integrating Flume with Kafka


Official Flume download page: https://flume.apache.org/download.html

Version 1.6.0 (the latest at the time of writing) is recommended, because it ships with the Kafka sink built in, so the integration can be configured directly without installing extra plugins.

1. Download and extract apache-flume-1.6.0-bin.tar.gz

Extract the archive with tar -zxvf apache-flume-1.6.0-bin.tar.gz

Flume is installed as soon as it is extracted; its behavior is then driven by the conf/flume-conf.properties configuration file used when starting the agent.
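The binary distribution ships a configuration template, so a typical first step is to copy it (assuming the default layout of the 1.6.0 tarball):

cp conf/flume-conf.properties.template conf/flume-conf.properties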

2. Configure Flume to connect to Kafka

With the Kafka cluster already up and running, all that remains is to set up conf/flume-conf.properties as follows:

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = master
a1.sources.r1.port = 41414

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink  
a1.sinks.k1.topic = testflume  
a1.sinks.k1.brokerList = 192.168.230.129:9092,192.168.230.130:9092,192.168.230.131:9092  
a1.sinks.k1.requiredAcks = 1  
a1.sinks.k1.batchSize = 20  

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000000
a1.channels.c1.transactionCapacity = 10000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
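A note on the channel: the memory channel buffers events in RAM, holding up to 1,000,000 events (capacity) and handing them to the sink in transactions of up to 10,000 events (transactionCapacity). Buffered events are lost if the agent process dies, so a file channel is worth considering when durability matters more than throughput.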

[Figure 1]

Note: the source must be configured for your actual needs; here an avro source is set up to listen on port 41414 of the master host.

Note: this is the configuration used to start the agent; the Flume client shown later must target the same host and port.

In the sink configuration above:
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink is fixed and should not be changed;
a1.sinks.k1.topic = testflume is the Kafka topic the sink publishes to, so change it to whatever topic you need;
a1.sinks.k1.brokerList = 192.168.230.129:9092,192.168.230.130:9092,192.168.230.131:9092 must list the brokers of your actual Kafka cluster.
In addition, requiredAcks = 1 means the leader broker acknowledges each write, and batchSize = 20 is the number of events sent per batch.

Note: a1 is the agent name defined in the configuration file; it is not arbitrary and must match the name passed when starting the agent.

At this point the Flume-to-Kafka integration is essentially complete; what remains is testing.
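If topic auto-creation is disabled on the brokers, the testflume topic must exist before the sink writes to it. A minimal sketch, assuming the same ZooKeeper quorum as the cluster above (adjust partitions and replication factor to your setup):

[root@slave1 kafka_2.10-0.8.1.1]# ./bin/kafka-topics.sh --create --zookeeper master:2181,slave1:2181,slave2:2181 --replication-factor 1 --partitions 1 --topic testflume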

3. Start the Flume agent connected to Kafka

[root@master flume-1.6.0]# bin/flume-ng agent -c ./conf/ -f conf/flume-conf.properties -Dflume.root.logger=DEBUG,console -n a1
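Here -n a1 selects the agent name defined in the properties file (it must match the a1 used throughout the configuration), -c points at the configuration directory, -f names the agent configuration file, and -Dflume.root.logger=DEBUG,console prints debug-level logs to the console so delivery problems are visible immediately.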

4. Start a Kafka consumer to receive the data

[root@slave1 kafka_2.10-0.8.1.1]# ./bin/kafka-console-consumer.sh --zookeeper master:2181,slave1:2181,slave2:2181 --from-beginning --topic testflume
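The topic must match the one configured for the Kafka sink (testflume above). The --from-beginning flag makes the consumer start from the earliest offset, so events produced before the consumer was started are printed as well.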

5. Run the test program

Program structure:


package com.matrix.flume;

import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;
import java.nio.charset.Charset;

public class MyApp {
    public static void main(String[] args) {
        MyRpcClientFacade client = new MyRpcClientFacade();
        // Connect to the avro source configured in flume-conf.properties
        client.init("master", 41414);

        // Send ten sample events to the Flume agent
        String sampleData = "Hello Flume!";
        for (int i = 0; i < 10; i++) {
            client.sendDataToFlume(sampleData);
        }

        client.cleanUp();
    }
}

class MyRpcClientFacade {
    private RpcClient client;
    private String hostname;
    private int port;

    public void init(String hostname, int port) {
        this.hostname = hostname;
        this.port = port;
        // Build an RPC client for the agent's avro source
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Wrap the string payload in a Flume event
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));

        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            // Delivery failed: rebuild the connection; note that this event is dropped
            client.close();
            client = null;
            client = RpcClientFactory.getDefaultInstance(hostname, port);
        }
    }

    public void cleanUp() {
        client.close();
    }

}
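To compile and run this client, Flume's client SDK (which provides RpcClient, RpcClientFactory, and EventBuilder) must be on the classpath. A minimal dependency sketch, assuming the project is built with Maven and the version matches the Flume install:

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-sdk</artifactId>
    <version>1.6.0</version>
</dependency>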

[Figure 2]

6. Check the data received by the Kafka consumer

[Figure 3: Kafka consumer output]
