【六】Integrating Flume with Kafka for Real-Time Data Collection

Agent selection:

Machine A: exec source + memory channel + avro sink

Machine B: avro source + memory channel + Kafka sink


[Figure 1: two-agent pipeline: node1 (exec source → memory channel → avro sink) feeding node2 (avro source → memory channel → Kafka sink)]

avro source: listens on an Avro port and receives events from an external Avro client.

avro sink: generally used for cross-node transfer; it binds to the IP and port of the destination the data is moved to.

For this test, prepare two servers, both with Flume installed. I am using servers node1 and node2.

Note: node2 must be started first, since it listens on port 44444; only then can node1 be started to send data to node2's port 44444.

Create the agent configuration file on node2:

cd /app/flume/flume/conf

vi test-avro-memory-kafka.conf

In this configuration, avro-source.bind must be set to the hostname, not localhost; in my test, using localhost caused node1's connection to be refused.

#agent avro-memory-kafka
 
avro-memory-kafka.sources = avro-source
avro-memory-kafka.sinks = kafka-sink
avro-memory-kafka.channels = memory-channel
 
avro-memory-kafka.sources.avro-source.type = avro
avro-memory-kafka.sources.avro-source.bind = node2
avro-memory-kafka.sources.avro-source.port = 44444
 
avro-memory-kafka.sinks.kafka-sink.type = org.apache.flume.sink.kafka.KafkaSink
avro-memory-kafka.sinks.kafka-sink.brokerList = node1:9092
avro-memory-kafka.sinks.kafka-sink.topic = storm_topic
avro-memory-kafka.sinks.kafka-sink.batchSize = 5
avro-memory-kafka.sinks.kafka-sink.requiredAcks = 1
 
 
avro-memory-kafka.channels.memory-channel.type = memory
 
avro-memory-kafka.sources.avro-source.channels = memory-channel
avro-memory-kafka.sinks.kafka-sink.channel = memory-channel
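A note on the memory channel above: Flume's memory channel defaults to a capacity of 100 events and a transactionCapacity of 100, which is fine for a smoke test but small for real traffic. If you need more headroom, you can raise them explicitly; the numbers below are illustrative, not part of the original setup:

```
avro-memory-kafka.channels.memory-channel.capacity = 10000
avro-memory-kafka.channels.memory-channel.transactionCapacity = 1000
```

Keep in mind that transactionCapacity must not exceed capacity, and the sink's batchSize (5 above) must not exceed transactionCapacity.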

Start the agent on node2:

cd /app/flume/flume

bin/flume-ng agent --name avro-memory-kafka -c conf -f conf/test-avro-memory-kafka.conf -Dflume.root.logger=INFO,console


Be sure to wait until node2's avro source has started successfully and is listening on port 44444 before starting node1; otherwise node1 will have its connection refused.
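Rather than timing this by hand, you can poll node2's port before launching node1. Below is a small helper sketch (bash-specific, since it relies on the /dev/tcp pseudo-device; the 60-second timeout is an arbitrary choice):

```shell
#!/usr/bin/env bash
# wait_for_port HOST PORT [TIMEOUT_SECS]
# Succeeds as soon as a TCP connection to HOST:PORT works; fails after
# TIMEOUT_SECS of one-second retries. Uses bash's /dev/tcp redirection.
wait_for_port() {
  local host=$1 port=$2 timeout=${3:-30} waited=0
  until (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; do
    waited=$((waited + 1))
    if [ "$waited" -ge "$timeout" ]; then
      echo "timed out waiting for $host:$port" >&2
      return 1
    fi
    sleep 1
  done
  return 0
}

# Example: block until node2's avro source is up, then start node1's agent:
# wait_for_port node2 44444 60 && \
#   bin/flume-ng agent --name exec-memory-avro -c conf -f conf/test-exec-memory-avro.conf
```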


Create the agent configuration file on node1:

cd /app/flume/flume/conf

vi test-exec-memory-avro.conf

#agent exec-memory-avro
 
exec-memory-avro.sources = exec-source
exec-memory-avro.sinks = avro-sink
exec-memory-avro.channels = memory-channel
 
exec-memory-avro.sources.exec-source.type = exec
exec-memory-avro.sources.exec-source.command = tail -f /app/flume/testData/testData.log
exec-memory-avro.sources.exec-source.shell = /bin/sh -c
 
exec-memory-avro.sinks.avro-sink.type = avro
exec-memory-avro.sinks.avro-sink.hostname = node2
exec-memory-avro.sinks.avro-sink.port = 44444
 
exec-memory-avro.channels.memory-channel.type = memory
 
exec-memory-avro.sources.exec-source.channels = memory-channel
exec-memory-avro.sinks.avro-sink.channel = memory-channel
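One caveat about the exec source above: `tail -f` keeps following the original file descriptor, so events are silently lost if the log file is rotated. If rotation is possible, `tail -F`, which re-opens the file by name, is the safer variant:

```
exec-memory-avro.sources.exec-source.command = tail -F /app/flume/testData/testData.log
```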

Start the agent on node1:

cd /app/flume/flume

bin/flume-ng agent --name exec-memory-avro -c conf -f conf/test-exec-memory-avro.conf -Dflume.root.logger=INFO,console


Start a Kafka console consumer to verify that data is flowing through (the --zookeeper flag below is for older Kafka releases; newer ones take --bootstrap-server node1:9092 instead):

cd /app/kafka

bin/kafka-console-consumer.sh --zookeeper node1:2181 --topic storm_topic


Write data to the file that node1's exec source is tailing:

cd /app/flume/testData

echo flume kafka sink >> testData.log
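A single echo works, but a small loop makes it easier to watch events arrive in the consumer. This helper is hypothetical; the function name, line format, and one-second interval are my own choices, and the log path comes from the exec-source config above:

```shell
#!/usr/bin/env bash
# generate_events FILE COUNT
# Append COUNT timestamped test lines to FILE, one per second, so the
# exec source tailing FILE picks them up and ships them toward Kafka.
generate_events() {
  local file=$1 count=${2:-10} i
  for ((i = 1; i <= count; i++)); do
    echo "$(date '+%Y-%m-%d %H:%M:%S') test-event-$i" >> "$file"
    sleep 1
  done
}

# Example: feed the file node1's exec source is tailing
# generate_events /app/flume/testData/testData.log 20
```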


Check whether the Kafka consumer receives the data through Flume.

