Spark reading data from Flume

1. Download the following jars and put them into Flume's /home/bigdata/flume1.6/lib directory (Maven coordinates of the required artifacts):

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>

2. Create the tail_spark.conf file in Flume's /home/bigdata/flume1.6/conf directory:

a1.sources=r1
a1.channels=c1
a1.sinks=k1

a1.sources.r1.type=exec
a1.sources.r1.command=tail -F /home/data/log

a1.channels.c1.type=memory
a1.channels.c1.capacity=1000
a1.channels.c1.transactionCapacity=100

a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = hadoop1
a1.sinks.k1.port = 9999

a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
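Note: SparkSink works in pull mode — Flume only buffers the events in channel c1, and the Spark Streaming application polls them from hadoop1:9999. That is also why the jars from step 1 have to sit in Flume's lib directory: the Flume agent itself runs the sink class.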
3. Create the log file in the /home/data directory:
touch /home/data/log
4. Add the dependencies to the project's pom.xml in IDEA:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.2.0</version>
</dependency>
<dependency>
    <groupId>org.apache.commons</groupId>
    <artifactId>commons-lang3</artifactId>
    <version>3.5</version>
</dependency>
<dependency>
    <groupId>org.scala-lang</groupId>
    <artifactId>scala-library</artifactId>
    <version>${scala.version}</version>
</dependency>

5. Write the code that integrates Flume with Spark Streaming:

package com.bw.SparkStreaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}

object DataSourceWithFlume {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName(this.getClass.getSimpleName)
    val ssc = new StreamingContext(conf, Seconds(2))

    // Poll events from the SparkSink that the Flume agent exposes on hadoop1:9999
    val flumeStream = FlumeUtils.createPollingStream(ssc, "hadoop1", 9999)

    // Each element is a SparkFlumeEvent; the payload is in event.getBody
    flumeStream.map(line => new String(line.event.getBody.array()).trim)
      .flatMap(_.split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
      .print()

    ssc.start()
    ssc.awaitTermination()
  }
}
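If the events come from several Flume agents, createPollingStream also has an overload that takes a list of sink addresses and a storage level. The snippet below is a minimal sketch that reuses the ssc from step 5; the second host hadoop2 is only an assumption for illustration, not part of the setup above:

import java.net.InetSocketAddress

import org.apache.spark.storage.StorageLevel

// Assumed addresses of two SparkSink instances; replace with the real agents.
val addresses = Seq(
  new InetSocketAddress("hadoop1", 9999),
  new InetSocketAddress("hadoop2", 9999)
)

// One input DStream polls all of the listed sinks.
val multiStream = FlumeUtils.createPollingStream(ssc, addresses, StorageLevel.MEMORY_AND_DISK_SER_2)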
6. Start Flume:
flume-ng agent -n a1 -c /home/bigdata/flume1.6/conf -f /home/bigdata/flume1.6/conf/tail_spark.conf -Dflume.root.logger=INFO,console
7. In XShell, use tail -F to monitor the log file under /home/data:
tail -F /home/data/log
8. In XShell, use echo to append data to the log file under /home/data:
echo "hello world hi json" >> /home/data/log
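As a side note, spark-streaming-flume also provides a push-based receiver, FlumeUtils.createStream, where Flume pushes events to Spark through an avro sink instead of SparkSink. The following is a minimal sketch, assuming the Spark receiver runs on hadoop1 and listens on port 8888 (both host and port here are assumptions, not part of the setup above):

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object PushBasedFlume {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[*]").setAppName("PushBasedFlume")
    val ssc = new StreamingContext(conf, Seconds(2))

    // Spark starts an avro receiver on hadoop1:8888; the Flume agent's sink
    // would then be of type avro and point at this host/port, and the Spark
    // application has to be running before the Flume agent starts pushing.
    val pushedStream = FlumeUtils.createStream(ssc, "hadoop1", 8888)

    pushedStream.map(e => new String(e.event.getBody.array()).trim).print()

    ssc.start()
    ssc.awaitTermination()
  }
}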
