Flume Cluster Setup

Data collection side (hadoop, hadoop2):
source: spooldir (scans a directory for newly arrived files)
channel: memory
sink: avro sink

Data receiving side (hadoop1):
source: avro source
channel: memory
sink: hdfs sink

Reference:
http://www.xuebuyuan.com/2142003.html

I recently used Flume 1.4 for log collection; here is the concrete cluster setup I ended up with.

Three machines are involved: hadoop (192.168.80.100), hadoop1 (192.168.80.101), and hadoop2 (192.168.80.102). Log files appearing in the directories monitored by Flume on hadoop and hadoop2 are forwarded by the agents on those machines to hadoop1, which finally writes them into HDFS.
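For clarity, a sketch of the intended data flow:

hadoop  (spooldir -> memory -> avro sink) \
                                           -> hadoop1 (avro source -> memory -> hdfs sink) -> HDFS
hadoop2 (spooldir -> memory -> avro sink) /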

Install Flume 1.4.0 on all three machines under /usr/local/. The installations are identical; the only thing that differs between the machines is the configuration file under /usr/local/flume/conf/.
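A minimal sketch of the installation steps, assuming the standard apache-flume-1.4.0-bin tarball has already been downloaded to each machine (the download location is not part of the original write-up):

tar -zxvf apache-flume-1.4.0-bin.tar.gz
mv apache-flume-1.4.0-bin /usr/local/flume
# optional: make the flume binaries available on the PATH
export PATH=$PATH:/usr/local/flume/bin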

[root@hadoop flume]# cd conf/
[root@hadoop conf]# ls
flume-env.sh.template  flume-master  log4j.properties

On machine hadoop, rename the Flume configuration file to flume-master and change its contents to:

agent.sources = source1
agent.channels = memoryChannel
agent.sinks = sink1

# Source: watch /root/hmbbs for new files
agent.sources.source1.type = spooldir
agent.sources.source1.spoolDir = /root/hmbbs
agent.sources.source1.channels = memoryChannel

# Channel: in-memory buffer between source and sink
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.keep-alive = 1000

# Sink: forward events to the collector on hadoop1 via Avro
agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = hadoop1
agent.sinks.sink1.port = 23004
agent.sinks.sink1.channel = memoryChannel
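One caveat with the spooldir source: files must be complete and immutable once they appear in /root/hmbbs. Flume throws an exception if a file changes after being picked up, and by default it renames consumed files with a .COMPLETED suffix. A safe pattern is to write the file elsewhere and move it in atomically; the filenames below are hypothetical:

# Write the log outside the spool directory first (hypothetical source file)
cp /var/log/myapp.log /root/hmbbs.tmp.$$
# mv within the same filesystem is atomic, so Flume never sees a half-written file
mv /root/hmbbs.tmp.$$ /root/hmbbs/myapp-$(date +%s).log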

On machine hadoop1, rename the Flume configuration file to flume-node and change its contents to:

agent.sources = source1
agent.channels = memoryChannel
agent.sinks = sink1

# Source: receive Avro events from the agents on hadoop and hadoop2
agent.sources.source1.type = avro
agent.sources.source1.bind = hadoop1
agent.sources.source1.port = 23004
agent.sources.source1.channels = memoryChannel

# Channel: in-memory buffer between source and sink
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 100
agent.channels.memoryChannel.keep-alive = 100

# Sink: write events to HDFS, bucketed by timestamp
agent.sinks.sink1.type = hdfs
agent.sinks.sink1.channel = memoryChannel
agent.sinks.sink1.hdfs.path = hdfs://hadoop:9000/hmbbs/%y-%m-%d/%H%M%S
agent.sinks.sink1.hdfs.filePrefix = events-
agent.sinks.sink1.hdfs.fileType = DataStream
agent.sinks.sink1.hdfs.writeFormat = Text
agent.sinks.sink1.hdfs.round = true
agent.sinks.sink1.hdfs.roundValue = 5
agent.sinks.sink1.hdfs.roundUnit = minute
#agent.sinks.sink1.hdfs.rollInterval = 1
agent.sinks.sink1.hdfs.useLocalTimeStamp = true
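Because round=true with roundValue=5 and roundUnit=minute, the %H%M%S part of the path is rounded down to the nearest 5-minute boundary, so an event arriving at 10:47 lands under a 104500 directory. Once data flows, the output can be inspected from any machine with the Hadoop client; the date and time below are illustrative:

hadoop fs -ls /hmbbs/
hadoop fs -ls /hmbbs/14-06-12/104500/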

On machine hadoop2, also rename the Flume configuration file to flume-master; its contents are the same as on hadoop:

agent.sources = source1
agent.channels = memoryChannel
agent.sinks = sink1

# Source: watch /root/hmbbs for new files
agent.sources.source1.type = spooldir
agent.sources.source1.spoolDir = /root/hmbbs
agent.sources.source1.channels = memoryChannel

# Channel: in-memory buffer between source and sink
agent.channels.memoryChannel.type = memory
agent.channels.memoryChannel.capacity = 1000
agent.channels.memoryChannel.keep-alive = 1000

# Sink: forward events to the collector on hadoop1 via Avro
agent.sinks.sink1.type = avro
agent.sinks.sink1.hostname = hadoop1
agent.sinks.sink1.port = 23004
agent.sinks.sink1.channel = memoryChannel

With the configuration in place, start the Flume agent on each machine in turn. Remember to cd into the Flume directory on each machine first; in my test this was cd /usr/local/flume.

The order matters: start the Flume agent on hadoop1 first, otherwise the avro sinks on the other two machines have nothing to connect to and will throw exceptions. Also make sure Hadoop is running, since hadoop1 writes to hdfs://hadoop:9000.

hadoop1:  bin/flume-ng agent -n agent -c conf -f conf/flume-node -Dflume.root.logger=DEBUG,console

hadoop:   bin/flume-ng agent -n agent -c conf -f conf/flume-master -Dflume.root.logger=DEBUG,console

hadoop2:  bin/flume-ng agent -n agent -c conf -f conf/flume-master -Dflume.root.logger=DEBUG,console
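The -Dflume.root.logger=DEBUG,console flag keeps each agent in the foreground with verbose output, which is handy while testing. Once the pipeline works, one common option (not from the original write-up) is to push the agent into the background and log to a file via the LOGFILE appender in Flume's default log4j.properties:

# e.g. on hadoop1; adjust -f for the other machines
nohup bin/flume-ng agent -n agent -c conf -f conf/flume-node -Dflume.root.logger=INFO,LOGFILE > flume.out 2>&1 &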

Once the services on all three machines are up, add new files to /root/hmbbs on hadoop or hadoop2; the Flume cluster collects them on hadoop1 and writes them into HDFS.
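A quick end-to-end check, assuming the paths above:

# On hadoop (or hadoop2): drop a test file into the spooled directory
echo "hello flume" > /root/hmbbs/test-$(date +%s).log
# After Flume consumes it, the file is renamed with a .COMPLETED suffix
ls /root/hmbbs
# From any machine with the Hadoop client: confirm the event reached HDFS
hadoop fs -ls /hmbbs/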
