Big Data Platform Operations: Flume

Flume

51. Install and start the Flume component on the master node, then open a Linux shell and run the flume-ng help command to view flume-ng usage information. The query result is shown below.

[root@master ~]# flume-ng help

Usage: /usr/hdp/2.4.3.0-227/flume/bin/flume-ng.distro <command> [options]...

commands:
  help                      display this help text
  agent                     run a Flume agent
  avro-client               run an avro Flume client
  password                  create a password file for use in flume config
  version                   show Flume version info

global options:
  --conf,-c <conf>          use configs in <conf> directory
  --classpath,-C <cp>       append to the classpath
  --dryrun,-d               do not actually start Flume, just print the command
  --plugins-path <dirs>     colon-separated list of plugins.d directories. See the
                            plugins.d section in the user guide for more details.
                            Default: $FLUME_HOME/plugins.d
  -Dproperty=value          sets a Java system property value
  -Xproperty=value          sets a Java -X option

agent options:
  --conf-file,-f <file>     specify a config file (required)
  --name,-n <name>          the name of this agent (required)
  --help,-h                 display help text

avro-client options:
  --rpcProps,-P <file>      RPC client properties file with server connection params
  --host,-H <host>          hostname to which events will be sent
  --port,-p <port>          port of the avro source
  --dirname <dir>           directory to stream to avro source
  --filename,-F <file>      text file to stream to avro source (default: std input)
  --headerFile,-R <file>    File containing event headers as key/value pairs on each new line
  --help,-h                 display help text

  Either --rpcProps or both --host and --port must be specified.

password options:
  --outfile <file>          The file in which encoded password is stored

Note that if <conf> directory is specified, then it is always included first
in the classpath.
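
As an illustration of the avro-client rule above (either --rpcProps, or both --host and --port), the command below uses placeholder values for the port and input file rather than output captured from the exam environment:

[root@master ~]# flume-ng avro-client --host master --port 4141 --filename /var/log/secure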

 

52. Based on the provided log-example.conf template, use the Flume NG tool to collect the master node's system log /var/log/secure, name the collected log files with the prefix "xiandian-sec", and store them in the /1daoyun/file/flume directory of the HDFS file system, with the timestamp rounding for the files generated in HDFS set to 10 minutes. After collecting, list the contents of /1daoyun/file/flume in HDFS. Submit the commands and results above, together with the modified log-example.conf file content, to the answer box.

[root@master ~]# hadoop fs -ls /1daoyun/file/flume

Found 1 items

-rw-r--r--   3 root hdfs       1142 2017-05-08 10:29 /1daoyun/file/flume/xiandian-sec.1494239316323

 

[root@master ~]# cat log-example.conf

# example.conf: A single-node Flume configuration

# Name the components on this agent

a1.sources = r1

a1.sinks = k1

a1.channels = c1

# Describe/configure the source

a1.sources.r1.type = exec

a1.sources.r1.command = tail -F /var/log/secure

a1.sources.r1.channels = c1

# Use a channel which buffers events in memory

a1.channels.c1.type = memory

a1.channels.c1.capacity = 1000

# Describe the sink

a1.sinks.k1.type = hdfs

a1.sinks.k1.channel = c1

a1.sinks.k1.hdfs.path = hdfs://master:8020/1daoyun/file/flume

a1.sinks.k1.hdfs.filePrefix = xiandian-sec
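
# Round event timestamps down in 10-minute steps (the task's 10-minute timestamp
# requirement; rounding applies to time-based %-escapes used in hdfs.path)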

a1.sinks.k1.hdfs.round = true

a1.sinks.k1.hdfs.roundValue = 10

a1.sinks.k1.hdfs.roundUnit = minute
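
The answer above omits the command that actually starts this agent; a typical invocation (assuming log-example.conf sits in the current directory; this command is not part of the captured output) is:

[root@master ~]# flume-ng agent --conf-file log-example.conf --name a1 -Dflume.root.logger=INFO,console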

 

53. Based on the provided hdfs-example.conf template, use the Flume NG tool to set the master node's system path /opt/xiandian/ as a directory whose files are uploaded in real time to the HDFS file system, with the HDFS storage path set to /data/flume/, the uploaded file names left unchanged, and the file type set to DataStream; then start the flume-ng agent. Submit the commands above, together with the modified hdfs-example.conf file content, to the answer box.

[root@master ~]# flume-ng agent --conf-file hdfs-example.conf --name master -Dflume.root.logger=INFO,console

Warning: No configuration directory set! Use --conf <dir> to override.

Info: Including Hadoop libraries found via (/bin/hadoop) for HDFS access
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/tez/lib/slf4j-api-1.7.5.jar from classpath
Info: Including HBASE libraries found via (/bin/hbase) for HBASE access
Info: Excluding /usr/hdp/2.4.3.0-227/hbase/lib/slf4j-api-1.7.7.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/tez/lib/slf4j-api-1.7.5.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-api-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/hadoop/lib/slf4j-log4j12-1.7.10.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/zookeeper/lib/slf4j-api-1.6.1.jar from classpath
Info: Excluding /usr/hdp/2.4.3.0-227/zookeeper/lib/slf4j-log4j12-1.6.1.jar from classpath
Info: Including Hive libraries found via () for Hive access
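
The warning line above appears because no --conf directory was supplied. Passing one (the path below assumes the HDP default /etc/flume/conf and is not from the original answer) silences it:

[root@master ~]# flume-ng agent --conf /etc/flume/conf --conf-file hdfs-example.conf --name master -Dflume.root.logger=INFO,console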

 

[root@master ~]# cat hdfs-example.conf

# example.conf: A single-node Flume configuration

# Name the components on this agent

master.sources = webmagic

master.sinks = k1

master.channels = c1

# Describe/configure the source

master.sources.webmagic.type = spooldir

master.sources.webmagic.fileHeader = true

master.sources.webmagic.fileHeaderKey = fileName
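
# fileHeader/fileHeaderKey above record the name of each ingested file in a
# 'fileName' event header; the HDFS sink below reads it back as %{fileName},
# so uploaded files keep their original names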

master.sources.webmagic.fileSuffix = .COMPLETED

master.sources.webmagic.deletePolicy = never

master.sources.webmagic.spoolDir = /opt/xiandian/

master.sources.webmagic.ignorePattern = ^$

master.sources.webmagic.consumeOrder = oldest

master.sources.webmagic.deserializer = org.apache.flume.sink.solr.morphline.BlobDeserializer$Builder

master.sources.webmagic.batchSize = 5

master.sources.webmagic.channels = c1

# Use a channel which buffers events in memory

master.channels.c1.type = memory

# Describe the sink

master.sinks.k1.type = hdfs

master.sinks.k1.channel = c1

master.sinks.k1.hdfs.path = hdfs://master:8020/data/flume/%{dicName}

master.sinks.k1.hdfs.filePrefix = %{fileName}

master.sinks.k1.hdfs.fileType = DataStream
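
To verify the pipeline end to end, one illustrative check (these two commands and the sample file are assumptions, not part of the original answer) is to drop a file into the spooling directory and then list the HDFS target path:

[root@master ~]# cp /etc/hosts /opt/xiandian/
[root@master ~]# hadoop fs -ls /data/flume/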
