【Flume-Demo】

Demo 1: Send data to Flume with telnet and print it to the console as log output

1. Configuration file

# Name the three components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Source configuration
a1.sources.r1.type = netcat
a1.sources.r1.bind = node1
a1.sources.r1.port = 44444    

# Sink configuration
a1.sinks.k1.type = logger

# Channel configuration: a memory channel holding up to 1000 events,
# handing them over at most 100 per transaction
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

[Tip] netcat is often called the "Swiss Army knife" of networking. See https://www.jianshu.com/p/cb26a0f6c622
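
Incidentally, once the agent is up you can also drive the netcat source with nc itself instead of telnet; a one-line sketch, assuming nc is installed and the agent is listening on node1:44444 as configured above:

echo "Hello world!" | nc node1 44444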

2. Start Flume

$FLUME_HOME/bin/flume-ng agent \
--name a1 \
--conf $FLUME_HOME/conf \
--conf-file $FLUME_HOME/conf/example.conf \
-Dflume.root.logger=INFO,console

# $FLUME_HOME/bin/flume-ng agent             start a Flume NG agent process
# --name a1                                  name of the agent to run (must match the prefix used in the config file)
# --conf $FLUME_HOME/conf                    Flume's own configuration directory (flume-env.sh, log4j settings)
# --conf-file $FLUME_HOME/conf/example.conf  the agent configuration file we wrote above
# -Dflume.root.logger=INFO,console           log at INFO level to the console

3. Install telnet

    See https://blog.csdn.net/liupeifeng3514/article/details/79686740
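
On CentOS/RHEL the telnet client can usually be installed directly with yum (a sketch; the package name may differ on other distributions):

yum install -y telnet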

 

4. Test with telnet

# Connect with telnet, then type a line and press Enter
> telnet localhost 44444
Trying 127.0.0.1...
Connected to localhost.localdomain (127.0.0.1).
Escape character is '^]'.
Hello world! 

You should then see output like the following on the flume-ng agent console:

    12/06/19 15:32:19 INFO source.NetcatSource: Source starting
    12/06/19 15:32:19 INFO source.NetcatSource: Created serverSocket:sun.nio.ch.ServerSocketChannelImpl[/127.0.0.1:44444]
    12/06/19 15:32:34 INFO sink.LoggerSink: Event: { headers:{} body: 48 65 6C 6C 6F 20 77 6F 72 6C 64 21 0D          Hello world!. }

 

5. Event

Event: { headers:{} body: 48 65 6C 6C 6F 20 77 6F 72 6C 64 21 0D Hello world!. }

An Event is the basic unit of data transfer in Flume:
Event = optional headers (a map of string key/value pairs) + body (a byte array)

In the log line above the body is printed as hex: 48 65 6C 6C 6F ... 21 is the ASCII for "Hello world!", and the trailing 0D is the carriage return sent by telnet (rendered as "." in the printable column).

Demo 2: Read the Hive log and store the result on HDFS

1. Configuration file

# Name the components on this agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# Describe/configure the source
a2.sources.r2.type = exec
# Tail the Hive log (see the note on tail -F after this config)
a2.sources.r2.command = tail -f /tmp/root/hive.log
a2.sources.r2.shell = /bin/bash -c

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Describe the sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://hadoop-001:9000/logs/%Y%m%d/%H
# Prefix for the uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-
# Whether to round the timestamp down when creating time-based directories
a2.sinks.k2.hdfs.round = true
# Round down to a multiple of this value...
a2.sinks.k2.hdfs.roundValue = 1
# ...in this time unit, i.e. a new directory every hour
a2.sinks.k2.hdfs.roundUnit = hour
# Use the local time rather than a timestamp header from the event
# (required here, since the exec source does not add a timestamp header)
a2.sinks.k2.hdfs.useLocalTimeStamp = true
# Number of events flushed to HDFS per batch
# (should not exceed the channel's transactionCapacity)
a2.sinks.k2.hdfs.batchSize = 100
# File type; DataStream writes plain text (CompressedStream enables compression)
a2.sinks.k2.hdfs.fileType = DataStream
# Roll to a new file every 600 seconds
a2.sinks.k2.hdfs.rollInterval = 600
# Roll to a new file once it reaches this size (just under one 128 MB HDFS block)
a2.sinks.k2.hdfs.rollSize = 134217700
# 0 = never roll based on the number of events
a2.sinks.k2.hdfs.rollCount = 0
# Minimum HDFS block replicas; 1 keeps replication activity from forcing early rolls
a2.sinks.k2.hdfs.minBlockReplicas = 1


# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
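
One caveat on the source above: tail -f stops following a file once it is rotated, and hive.log is typically rotated daily. tail -F, which re-opens the file by name, is usually the safer choice:

a2.sources.r2.command = tail -F /tmp/root/hive.log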

2. Copy the HDFS dependency jars into Flume's lib directory

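Which jars are needed depends on your Hadoop version; a sketch for Hadoop 2.7.x (the version numbers in the file names are assumptions, adjust them to your installation):

cd $HADOOP_HOME
cp share/hadoop/common/hadoop-common-2.7.2.jar \
   share/hadoop/common/lib/commons-configuration-1.6.jar \
   share/hadoop/common/lib/hadoop-auth-2.7.2.jar \
   share/hadoop/common/lib/commons-io-2.4.jar \
   share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar \
   share/hadoop/hdfs/hadoop-hdfs-2.7.2.jar \
   $FLUME_HOME/lib/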

3. Enter the Flume root directory and start the agent

./bin/flume-ng agent --conf ./conf/ --name a2 --conf-file ./conf/flume-file-hdfs.conf
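
With the agent running, open Hive in another session and run any statement so that /tmp/root/hive.log receives new lines, for example:

hive -e "show databases;"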

4. Check the result on HDFS

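You can list the collected files from the shell; a sketch, assuming the path configured above and today's logs:

hdfs dfs -ls /logs/$(date +%Y%m%d)/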

 

Demo 3: Integrate Flume with Kafka

   See https://blog.csdn.net/a_drjiaoda/article/details/85003929
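
In outline, the integration just swaps the sink for a Kafka sink; a minimal sketch of the sink block, assuming Flume 1.7+ and a Kafka broker at node1:9092 (the topic name flume-events is only an example):

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.kafka.bootstrap.servers = node1:9092
a1.sinks.k1.kafka.topic = flume-events
a1.sinks.k1.kafka.flumeBatchSize = 100
a1.sinks.k1.kafka.producer.acks = 1
a1.sinks.k1.channel = c1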

 
