Flume Installation and Testing

1. Installing Flume

Check JAVA_HOME:

[root@bigdata112 ~]# echo $JAVA_HOME
/opt/module/jdk1.8.0_181

Extract the Flume tarball to /opt/module:

[root@bigdata112 soft]# tar -zxvf apache-flume-1.8.0-bin.tar.gz -C /opt/module

Rename the directory and activate the environment file:

[root@bigdata112 module]# mv apache-flume-1.8.0-bin/ flume1.8.0
[root@bigdata112 conf]# mv flume-env.sh.template flume-env.sh

In flume-env.sh, set JAVA_HOME to match the path found above:

export JAVA_HOME=/opt/module/jdk1.8.0_181

Flume is now installed.
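To verify the installation, you can ask Flume for its version; it should report 1.8.0:

[root@bigdata112 flume1.8.0]# bin/flume-ng version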

2. Test Cases (port, file, directory)

Case 1: Monitoring port data

Goal: Flume listens on a console at one end while another console sends messages to it; the monitored end displays them in real time.
Step-by-step:

1) Install the telnet tool:
[root@bigdata112 ~]# yum -y install telnet

Create a directory under the Flume home to hold Flume job configurations:

[root@bigdata112 flume1.8.0]# mkdir jobconf

Create the Flume agent configuration file flume-telnet.conf:

# Define the agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Define the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = bigdata112
a1.sources.r1.port = 44445

# Define the sink
a1.sinks.k1.type = logger

# Define the memory channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Check whether port 44445 is already in use (no output means the port is free):

netstat -tunlp | grep 44445
2) Test: start the Flume agent with this configuration:
[root@bigdata112 flume1.8.0]# bin/flume-ng agent \
--conf /opt/module/flume1.8.0/conf/ \
--name a1 \
--conf-file /opt/module/flume1.8.0/jobconf/flume-telnet.conf \
-Dflume.root.logger=INFO,console

Use telnet to send data to port 44445 of the monitored host:

[root@bigdata112 ~]# telnet bigdata112 44445
Trying 192.168.226.112...
Connected to bigdata112.
Escape character is '^]'.
123
OK

Result (the event body is displayed in hex; 31 32 33 0D is the ASCII for "123" followed by a carriage return):

2019-06-21 11:23:04,777 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:95)] Event: { headers:{} body: 31 32 33 0D                                     123. }
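If telnet is not available, nc can be used the same way to send data to the port (this assumes a netcat implementation such as the nmap-ncat package is installed):

[root@bigdata112 ~]# nc bigdata112 44445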

Case 2: Streaming a local file to HDFS in real time

Create the flume-hdfs.conf file:

# 1 agent
a2.sources = r2
a2.sinks = k2
a2.channels = c2

# 2 source
a2.sources.r2.type = exec
a2.sources.r2.command = tail -F /opt/zouzou
a2.sources.r2.shell = /bin/bash -c

# 3 sink
a2.sinks.k2.type = hdfs
a2.sinks.k2.hdfs.path = hdfs://bigdata111:9000/flume/%Y%m%d/%H
#prefix for uploaded files
a2.sinks.k2.hdfs.filePrefix = logs-
#whether to roll directories based on time
a2.sinks.k2.hdfs.round = true
#how many time units before creating a new directory
a2.sinks.k2.hdfs.roundValue = 1
#the time unit used for rounding
a2.sinks.k2.hdfs.roundUnit = hour
#whether to use the local timestamp
a2.sinks.k2.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a2.sinks.k2.hdfs.batchSize = 1000
#file type; compression is supported
a2.sinks.k2.hdfs.fileType = DataStream
#how long (seconds) before rolling a new file
a2.sinks.k2.hdfs.rollInterval = 600
#roll the file when it reaches this size (bytes, just under 128 MB)
a2.sinks.k2.hdfs.rollSize = 134217700
#0 disables rolling based on event count
a2.sinks.k2.hdfs.rollCount = 0
#minimum block replication
a2.sinks.k2.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a2.channels.c2.type = memory
a2.channels.c2.capacity = 1000
a2.channels.c2.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r2.channels = c2
a2.sinks.k2.channel = c2
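Before starting the agent, it is worth confirming that HDFS is running and reachable at the NameNode address used in hdfs.path (bigdata111:9000); a quick check, assuming the Hadoop client is configured on this node:

[root@bigdata112 ~]# hdfs dfs -ls /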

Run the agent with this configuration:

/opt/module/flume1.8.0/bin/flume-ng agent \
--conf /opt/module/flume1.8.0/conf/ \
--name a2 \
--conf-file /opt/module/flume1.8.0/jobconf/flume-hdfs.conf

Then edit the zouzou file under /opt; whatever is appended to it ends up in HDFS.
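For example, append a line (the path and content are illustrative; tail -F keeps following the file even if it is recreated):

[root@bigdata112 ~]# echo "hello world" >> /opt/zouzou

Check the result in HDFS: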

[root@bigdata113 soft]# hdfs dfs -cat /flume/20190621/11/*

hello world

HDFS now contains directories partitioned by year, month, day, and hour, matching the %Y%m%d/%H escapes in hdfs.path.


Case 3: Streaming files from a directory to HDFS in real time

Goal: use Flume to watch an entire directory for new files.
Step-by-step:

1) Create the configuration file flume-dir.conf
#1 Agent
a3.sources = r3
a3.sinks = k3
a3.channels = c3

#2 source
a3.sources.r3.type = spooldir
a3.sources.r3.spoolDir = /opt/module/flume1.8.0/upload
a3.sources.r3.fileSuffix = .COMPLETED
a3.sources.r3.fileHeader = true
#ignore files ending in .tmp; they are not uploaded
a3.sources.r3.ignorePattern = ([^ ]*\.tmp)

# 3 sink
a3.sinks.k3.type = hdfs
a3.sinks.k3.hdfs.path = hdfs://bigdata111:9000/flume/%H
#prefix for uploaded files
a3.sinks.k3.hdfs.filePrefix = upload-
#whether to roll directories based on time
a3.sinks.k3.hdfs.round = true
#how many time units before creating a new directory
a3.sinks.k3.hdfs.roundValue = 1
#the time unit used for rounding
a3.sinks.k3.hdfs.roundUnit = hour
#whether to use the local timestamp
a3.sinks.k3.hdfs.useLocalTimeStamp = true
#number of events to accumulate before flushing to HDFS
a3.sinks.k3.hdfs.batchSize = 100
#file type; compression is supported
a3.sinks.k3.hdfs.fileType = DataStream
#how long (seconds) before rolling a new file
a3.sinks.k3.hdfs.rollInterval = 600
#roll the file when it reaches this size (bytes, roughly 128 MB)
a3.sinks.k3.hdfs.rollSize = 134217700
#0 disables rolling based on event count
a3.sinks.k3.hdfs.rollCount = 0
#minimum block replication
a3.sinks.k3.hdfs.minBlockReplicas = 1

# Use a channel which buffers events in memory
a3.channels.c3.type = memory
a3.channels.c3.capacity = 1000
a3.channels.c3.transactionCapacity = 100

# Bind the source and sink to the channel
a3.sources.r3.channels = c3
a3.sinks.k3.channel = c3
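Note that the spooling directory itself must exist before the agent starts, otherwise the source fails on startup; create it first if needed:

[root@bigdata112 flume1.8.0]# mkdir -p /opt/module/flume1.8.0/upload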
2) Test: run the following command, then add a file to the upload directory:
/opt/module/flume1.8.0/bin/flume-ng agent \
--conf /opt/module/flume1.8.0/conf/ \
--name a3 \
--conf-file /opt/module/flume1.8.0/jobconf/flume-dir.conf

Create a file in the monitored directory, then confirm it was renamed and uploaded:

[root@bigdata112 upload]# vi A
woshi A 
aa
[root@bigdata112 upload]# ll
total 4
-rw-r--r--. 1 root root 19 Jun 21 12:32 A.COMPLETED
[root@bigdata112 upload]# hdfs dfs -cat /flume/12/*
woshi A 
aa

Note: when using the Spooling Directory Source:

  1. Do not create and keep modifying files inside the monitored directory (see the safe pattern below)
  2. Files that finish uploading are renamed with a .COMPLETED suffix
  3. The monitored directory is scanned for changes every 500 milliseconds
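Because of point 1, a safe pattern is to write the file somewhere else first and then move it into the monitored directory; mv within the same filesystem is atomic, so Flume never sees a half-written file (paths here are illustrative):

[root@bigdata112 ~]# echo "new data" > /tmp/B
[root@bigdata112 ~]# mv /tmp/B /opt/module/flume1.8.0/upload/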
