Spark Streaming + Flume Integration in Practice (Part 1)

This part: Spark Streaming pulls data from Flume (Poll mode).

Spark Streaming can be wired to Flume in two ways (both API entry points are sketched below):

  • Poll: Spark Streaming pulls data from Flume (a receiver polls Flume's custom SparkSink)
  • Push: Flume pushes events to Spark Streaming
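
For orientation, the two entry points in the spark-streaming-flume connector look roughly like this (a sketch only; scc is the StreamingContext built in step 6, and the host/port are the values configured below):

import org.apache.spark.streaming.flume.FlumeUtils

// Poll: Spark runs a receiver that pulls batches from Flume's custom SparkSink
val pollStream = FlumeUtils.createPollingStream(scc, "hdp-node-01", 8888)
// Push: Spark binds to host:port and Flume's Avro sink pushes events to it
val pushStream = FlumeUtils.createStream(scc, "hdp-node-01", 8888)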

1. Install Flume 1.6 or later

2. Download the sink dependency

Place spark-streaming-flume-sink_2.11-2.0.2.jar into Flume's lib directory.
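
For a Flume installed under /opt/bigdata/flume (the path used in step 7), that is simply:

cp spark-streaming-flume-sink_2.11-2.0.2.jar /opt/bigdata/flume/lib/

According to Spark's Flume integration guide, the custom sink also depends on the scala-library and commons-lang3 jars, so add those to lib/ as well if they are not already present.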

3. Prepare test data

Create a data file data.txt under /root/data on the server:

vi data.txt

hadoop spark hive spark
hadoop sqoop flume redis flume hadoop
solr kafka solr hadoop
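
The spooling-directory source used below ingests any file dropped into /root/data and by default renames processed files with a .COMPLETED suffix; files must not be modified after they are placed there. To feed more data while the job is running, drop in a new file (data2.txt is just an illustrative name):

echo "hadoop spark flume" > /root/data/data2.txt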

4. Configure the Flume collection plan

vi flume-poll.conf

a1.sources = r1
a1.sinks = k1
a1.channels = c1
#source
a1.sources.r1.channels = c1
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /root/data
a1.sources.r1.fileHeader = true
#channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 20000
a1.channels.c1.transactionCapacity = 5000
#sinks
a1.sinks.k1.channel = c1
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = hdp-node-01
a1.sinks.k1.port = 8888
a1.sinks.k1.batchSize = 2000
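
The hostname must resolve to the machine running this Flume agent; the Spark application connects to that host and port (step 6). Since createPollingStream accepts a list of addresses, throughput can be raised by running several SparkSinks and listing every address on the Spark side. A hypothetical second sink k2 on another port might look like:

a1.sinks = k1 k2
a1.sinks.k2.channel = c1
a1.sinks.k2.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k2.hostname = hdp-node-01
a1.sinks.k2.port = 9999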

5. Add the Maven dependency

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.10</artifactId>
    <version>2.0.2</version>
</dependency>

Note that the artifact's Scala suffix must match your project's Scala version; the sink jar placed into Flume's lib above is built for Scala 2.11, so a Scala 2.11 project should use spark-streaming-flume_2.11 here instead.
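
The code in step 6 also needs the Spark Streaming core artifact on the classpath; assuming a Scala 2.11 project, that is typically:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.11</artifactId>
    <version>2.0.2</version>
</dependency>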

6. Code implementation

package cn.cheng.spark
import java.net.InetSocketAddress
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.flume.{FlumeUtils, SparkFlumeEvent}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

/**
  * Spark Streaming + Flume integration, pull mode (Poll)
  */
object SparkStreaming_Flume_Poll {
  //newValues: all the 1s from the current batch's (word, 1) pairs that share the same word
  //runningCount: the accumulated total for that key over all previous batches
  def updateFunction(newValues: Seq[Int], runningCount: Option[Int]): Option[Int] = {
    val newCount = runningCount.getOrElse(0) + newValues.sum
    Some(newCount)
  }


  def main(args: Array[String]): Unit = {
    //configure the SparkConf parameters
    val sparkConf: SparkConf = new SparkConf().setAppName("SparkStreaming_Flume_Poll").setMaster("local[2]")
    //build the SparkContext
    val sc: SparkContext = new SparkContext(sparkConf)
    //build the StreamingContext; Seconds(5) is the batch interval
    val scc: StreamingContext = new StreamingContext(sc, Seconds(5))
    //set a checkpoint directory (required by updateStateByKey)
    scc.checkpoint("./")
    //the Flume sink address; several agents can be listed here
    val address = Seq(new InetSocketAddress("192.168.200.160", 8888))
    //pull data from Flume
    val flumeStream: ReceiverInputDStream[SparkFlumeEvent] = FlumeUtils.createPollingStream(scc, address, StorageLevel.MEMORY_AND_DISK)

    //the payload sits in the event body; convert it to a String
    val lineStream: DStream[String] = flumeStream.map(x => new String(x.event.getBody.array()))
    //word count, accumulated across batches
    val result: DStream[(String, Int)] = lineStream.flatMap(_.split(" ")).map((_, 1)).updateStateByKey(updateFunction)

    result.print()
    scc.start()
    scc.awaitTermination()
  }

}

7. Start Flume

flume-ng agent -n a1 -c /opt/bigdata/flume/conf -f /opt/bigdata/flume/conf/flume-poll.conf -Dflume.root.logger=INFO,console

8. Start the Spark Streaming application
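
Because the code sets setMaster("local[2]"), the object can be run directly from the IDE. To launch it with spark-submit instead, something like the following should work (the jar name is an assumption; --packages fetches the Flume connector):

spark-submit --master local[2] \
  --class cn.cheng.spark.SparkStreaming_Flume_Poll \
  --packages org.apache.spark:spark-streaming-flume_2.11:2.0.2 \
  sparkstreaming-demo.jar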

9. Check the results

[Screenshot: console output of the running application]
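
With data.txt from step 3 as the only input, the accumulated counts printed every 5 seconds should converge to the totals below (pair order and the timestamp are illustrative):

-------------------------------------------
Time: 1496400000000 ms
-------------------------------------------
(hive,1)
(kafka,1)
(sqoop,1)
(spark,2)
(hadoop,4)
(flume,2)
(solr,2)
(redis,1)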

Thanks for reading, and I hope this helps!
