Flume is a highly available, highly reliable, distributed system for collecting, aggregating, and transporting massive amounts of log data, originally provided by Cloudera. Flume lets you plug custom data senders into a logging pipeline to collect data; it can also perform simple processing on that data and write it to a variety of (customizable) data receivers.
Flume currently exists in two lines: the 0.9.x releases, collectively called Flume-og, and the 1.x releases, collectively called Flume-ng. Flume-ng went through a major refactoring and differs substantially from Flume-og, so take care to distinguish the two.
Log collection
Flume started out as Cloudera's log collection system and is now a top-level Apache project. Flume supports custom data senders in the logging pipeline for collecting data.
Data processing
Flume can perform simple processing on the data and write it to a variety of (customizable) data receivers. It can collect data from sources such as console, RPC (Thrift-RPC), text (flat files), tail (UNIX tail), syslog (the syslog logging system, supporting both TCP and UDP), and exec (command output).
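As one illustration of these source types, here is a minimal sketch of a syslog TCP source; the agent and component names a1/r1/c1 follow the convention used in the cases below, and the port is a placeholder:
a1.sources.r1.type = syslogtcp
a1.sources.r1.host = localhost
a1.sources.r1.port = 5140
a1.sources.r1.channels = c1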
How it works
Flume-og uses multiple masters. To keep configuration data consistent, Flume introduced ZooKeeper to store the configuration; ZooKeeper itself guarantees the consistency and high availability of that data and can notify the Flume master nodes when the configuration changes. The Flume masters synchronize data among themselves using a gossip protocol.
The most visible change in Flume-ng is that the centrally managed master and ZooKeeper are gone; the system becomes a pure transport tool. Another major difference is that reading and writing are now handled by separate worker threads (called runners). In Flume-og, the read thread also did the writing (apart from failure retries), so slow writes (even ones that did not fail outright) would block Flume's ability to take in data. The new asynchronous design lets the read thread work smoothly without worrying about anything downstream.
Architecture
An agent is made up of three components: source, channel, and sink.
Source:
Receives data from a data generator and delivers it, as Flume events, to one or more channels. Flume supports several ways of receiving data, such as Avro, Thrift, and the Twitter 1% firehose.
Channel:
A channel is a transient store: it buffers the events received from the source until they are consumed by sinks, acting as a bridge between source and sink. Channel operations are transactional, which guarantees consistency between sending and receiving, and a channel can be connected to any number of sources and sinks. Supported types include the JDBC channel, the file channel, and the memory channel.
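For durability across agent restarts, the memory channel used in the cases below can be swapped for a file channel; a minimal sketch (the directory paths are placeholders):
a1.channels.c1.type = file
a1.channels.c1.checkpointDir = /home/hadoop/flume/checkpoint
a1.channels.c1.dataDirs = /home/hadoop/flume/data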
Sink:
The sink delivers data to a centralized store such as HBase or HDFS: it consumes events from the channels and passes them on to the destination, which may be another Flume agent or a terminal store such as HDFS or HBase.
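When the destination is another agent, the hop is made by pairing an Avro sink with an Avro source; a minimal sketch (the hostname and port are placeholders):
# On the sending agent: an Avro sink pointing at the next hop
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = collector-host
a1.sinks.k1.port = 4545
a1.sinks.k1.channel = c1
# On the receiving agent: an Avro source listening on that port
a2.sources.r1.type = avro
a2.sources.r1.bind = 0.0.0.0
a2.sources.r1.port = 4545
a2.sources.r1.channels = c1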
Example tests on CentOS 7:
Unpack the archive
tar -zxvf apache-flume-1.7.0-bin.tar.gz
Edit the conf/flume-env.sh file, mainly to set the JAVA_HOME variable
# Environment variables can be set here.
export JAVA_HOME=/usr/java/jdk1.8.0_191
Verify the version
$ ./bin/flume-ng version
Flume 1.7.0
Source code repository: https://git-wip-us.apache.org/repos/asf/flume.git
Revision: 511d868555dd4d16e6ce4fedc72c2d1454546707
Compiled by bessbd on Wed Oct 12 20:51:10 CEST 2016
From source with checksum 0d21b3ffdc55a07e1d08875872c00523
Case 1: starter example (single-node configuration)
Create the agent configuration file
# File name: case1_example.conf
# Contents:
# case1_example.conf: A single-node Flume configuration
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = localhost
a1.sources.r1.port = 44444
# Describe the sink
a1.sinks.k1.type = logger
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Start command
./bin/flume-ng agent -c conf -f conf/case1_example.conf -n a1 -Dflume.root.logger=INFO,console
Startup parameter notes
-c conf: use conf as the configuration directory
-f conf/case1_example.conf: use conf/case1_example.conf as the configuration file
-n a1: name the agent a1; this must match the agent name used in case1_example.conf
-Dflume.root.logger=INFO,console: set the root logger to the INFO level and send log output to the console
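The same flags can also be written in their long forms, which may read more clearly in scripts:
./bin/flume-ng agent --conf conf --conf-file conf/case1_example.conf --name a1 -Dflume.root.logger=INFO,console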
Test from another terminal (install telnet first: yum install -y telnet)
# telnet 127.0.0.1 44444
Trying 127.0.0.1...
Connected to 127.0.0.1.
Escape character is '^]'.
hello world!
OK
111111111111111111111
OK
222222222222222222222
OK
333333333333333333333
OK
444444444444444444444
OK
555555555555555555555
OK
666666666666666666666
OK
777777777777777777777
OK
888888888888888888888
OK
999999999999999999999
OK
Watch the console output in the terminal where the agent was started
2018-12-18 09:39:50,341 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 31 1111111111111111 }
2018-12-18 09:39:54,343 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 32 2222222222222222 }
2018-12-18 09:39:56,944 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 33 33 33 33 33 33 33 33 33 33 33 33 33 33 33 33 3333333333333333 }
2018-12-18 09:40:00,945 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 34 34 34 34 34 34 34 34 34 34 34 34 34 34 34 34 4444444444444444 }
2018-12-18 09:40:03,673 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 35 35 35 35 35 35 35 35 35 35 35 35 35 35 35 35 5555555555555555 }
2018-12-18 09:40:06,378 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 36 6666666666666666 }
2018-12-18 09:40:09,311 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 37 7777777777777777 }
2018-12-18 09:40:13,314 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 38 8888888888888888 }
2018-12-18 09:40:16,192 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 39 39 39 39 39 39 39 39 39 39 39 39 39 39 39 39 9999999999999999 }
Case 2: Exec source
The exec source runs a given command and uses its output as the data source. If you use the tail command here, the file must contain enough content for any output to appear.
Create the agent configuration file
Test Exec Source
# File name: case3_exec.conf
# Contents:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/hadoop/apache-flume-1.7.0-bin/log.10
a1.sources.r1.channels = c1
# Describe the sink
a1.sinks.k1.type = logger
a1.sinks.k1.channel = c1
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
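If you would rather ingest the whole file from the beginning instead of only newly appended lines, GNU tail supports a variant worth trying in place of the command above:
a1.sources.r1.command = tail -n +1 -F /home/hadoop/apache-flume-1.7.0-bin/log.10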
# Start Flume agent a1
./bin/flume-ng agent -c conf -f conf/case3_exec.conf -n a1 -Dflume.root.logger=INFO,console
Generate enough content in the file
for i in {1..100};do echo "exec test$i" >> log.10;echo $i;done
Watch the console output in the terminal where the agent was started
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 38 38 exec test88 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 38 39 exec test89 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 30 exec test90 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 31 exec test91 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 32 exec test92 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 33 exec test93 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 34 exec test94 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 35 exec test95 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 36 exec test96 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 37 exec test97 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 38 exec test98 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 39 39 exec test99 }
18/12/18 10:26:28 INFO sink.LoggerSink: Event: { headers:{} body: 65 78 65 63 20 74 65 73 74 31 30 30 exec test100 }
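One caveat worth noting: the exec source makes no delivery guarantee; if the agent dies or the channel fills up, events from the command's output can be lost. The source does expose restart behavior for the wrapped command. A minimal sketch of those options (the values shown are illustrative):
a1.sources.r1.restart = true
a1.sources.r1.restartThrottle = 10000
a1.sources.r1.logStdErr = true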