A first look at a kafka+flume+hdfs real-time log streaming system

The goal of this experiment is to have flume receive messages from kafka and store them in hdfs. If you have set up hadoop, flume, and kafka before, this goes quickly, and the idea is straightforward: almost all of the configuration lives in flume, which acts as the bridge that ties kafka and hdfs together. That also dictates the startup order: bring up kafka and hdfs first, and start flume last so it can connect to both.
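
In other words, the data path being built here is:

kafka (topic flume-data) -> flume KafkaSource -> memoryChannel -> HDFS sink -> hdfs://master:9000/usr/feiy/flume-data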

For this experiment, my environment is as follows:

  java:1.8

  kafka:2.11

  flume:1.6

  hadoop:2.6

  hadoop and kafka both use their default configurations.

1. First, start hdfs and create a directory (/usr/feiy/flume-data) in it to hold the kafka messages collected by flume.

$ sbin/start-dfs.sh
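
If the target directory does not exist yet, it can be created from the hadoop installation directory (this is the same path used in the rest of the article):

$ bin/hadoop fs -mkdir -p /usr/feiy/flume-data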


2. Next, start the kafka service and create a topic (flume-data). Then start a console producer, ready to publish messages to the flume-data topic for flume to consume.

start zookeeper (from the kafka installation directory)

$ bin/zookeeper-server-start.sh config/zookeeper.properties


start kafka-server

$ bin/kafka-server-start.sh config/server.properties


create topic flume-data

$ bin/kafka-topics.sh --create --zookeeper 127.0.0.1:2181 --replication-factor 1 --partitions 1 --topic flume-data
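
Optionally, verify the topic right after creating it; this is only a sanity check and is not required for the rest of the setup:

$ bin/kafka-topics.sh --describe --zookeeper 127.0.0.1:2181 --topic flume-data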

setup kafka-console-producer

$ bin/kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic flume-data
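
Before involving flume at all, a console consumer in a second terminal can confirm that messages really arrive on the topic (on newer Kafka releases, --bootstrap-server 127.0.0.1:9092 replaces the --zookeeper flag):

$ bin/kafka-console-consumer.sh --zookeeper 127.0.0.1:2181 --topic flume-data --from-beginning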

3. Configure and start flume, then wait for the kafka producer to send messages.

config conf/flume.conf (from the flume installation directory)

# The configuration file needs to define the sources, 
# the channels and the sinks.
# Sources, channels and sinks are defined per agent, 
# in this case called 'agent'

agent.sources = kafkaSource
agent.channels = memoryChannel
agent.sinks = hdfsSink


# Kafka source: consumes the flume-data topic, connecting through ZooKeeper
agent.sources.kafkaSource.channels = memoryChannel
agent.sources.kafkaSource.type = org.apache.flume.source.kafka.KafkaSource
agent.sources.kafkaSource.zookeeperConnect = 127.0.0.1:2181
agent.sources.kafkaSource.topic = flume-data
#agent.sources.kafkaSource.groupId = flume
agent.sources.kafkaSource.kafka.consumer.timeout.ms = 100

# In-memory channel that buffers events between the Kafka source and the HDFS sink
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.transactionCapacity = 100


# HDFS sink: writes events to HDFS as plain-text data streams
agent.sinks.hdfsSink.type = hdfs
agent.sinks.hdfsSink.channel = memoryChannel
agent.sinks.hdfsSink.hdfs.path = hdfs://master:9000/usr/feiy/flume-data
agent.sinks.hdfsSink.hdfs.writeFormat = Text
agent.sinks.hdfsSink.hdfs.fileType = DataStream
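
With only the settings above, the HDFS sink uses its default rolling behaviour (roughly: a new file every 30 seconds, every 1024 bytes, or every 10 events, whichever comes first), which is why several small FlumeData files show up below. If fewer, larger files are preferred, the roll behaviour can be tuned with the sink's roll properties; the values below are illustrative only, not part of the original setup:

# roll a new file every 10 minutes or at 128 MB, whichever comes first;
# setting rollCount to 0 disables rolling by event count
agent.sinks.hdfsSink.hdfs.rollInterval = 600
agent.sinks.hdfsSink.hdfs.rollSize = 134217728
agent.sinks.hdfsSink.hdfs.rollCount = 0
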
start flume-ng

$ bin/flume-ng agent --conf conf --conf-file conf/flume.conf --name agent -Dflume.root.logger=INFO,console


send message
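
For example, type a few lines like the following into the console producer window from step 2 (they correspond to the records that appear in HDFS below):

> 11111111
> 22222222
> 44444444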

Finally, inspect the generated files from the hdfs command line.

Last login: Sat Nov 19 13:15:16 2016 from 192.168.61.1
[root@master ~]# hadoop fs -ls /usr/feiy/flume-data
Found 2 items
-rw-r--r--   1 root supergroup         27 2016-11-19 13:46 /usr/feiy/flume-data/FlumeData.1479534366317
-rw-r--r--   1 root supergroup         18 2016-11-19 13:47 /usr/feiy/flume-data/FlumeData.1479534415398
[root@master ~]# hadoop fs -cat /usr/feiy/flume-data/FlumeData.1479534415398
55555555
88888888
[root@master ~]# hadoop fs -cat /usr/feiy/flume-data/FlumeData.1479534366317
11111111
22222222
44444444
[root@master ~]# 
