Two ways to connect Flume directly to Spark Streaming

The usual pipeline is Flume -> Kafka -> Spark Streaming. If you really must feed data from Flume straight into Spark Streaming, there are two ways to do it:

  • First approach: Push (Flume pushes the data to Spark Streaming)

The program is as follows:

package cn.lijie

import org.apache.log4j.Level
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

/**
  * User: lijie
  * Date: 2017/8/3
  * Time: 15:19
  */
object Flume2SparkStreaming01 {

  // State-update function for updateStateByKey: for each word, add the counts
  // from the current batch (x._2.sum) to the previous running total (x._3).
  def myFunc = (it: Iterator[(String, Seq[Int], Option[Int])]) => {
    it.map(x => {
      (x._1, x._2.sum + x._3.getOrElse(0))
    })
  }

  def main(args: Array[String]): Unit = {
    MyLog.setLogLevel(Level.ERROR)
    val conf = new SparkConf().setAppName("fs01").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(10))
    // Push-based receiver: Spark listens on this host/port and Flume's avro
    // sink pushes events to it, so this must be an address reachable by Flume.
    val ds = FlumeUtils.createStream(ssc, "10.1.9.102", 6666)
    // updateStateByKey requires a checkpoint directory to persist state across batches
    sc.setCheckpointDir("C:\\Users\\Administrator\\Desktop\\checkpoint")
    val res = ds.flatMap(x => {
      new String(x.event.getBody.array()).split(" ")
    }).map((_, 1)).updateStateByKey(myFunc, new HashPartitioner(sc.defaultParallelism), true)
    res.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
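To see what myFunc computes, here is a tiny standalone sketch with made-up words and counts: for each key, the counts contributed by the current batch are summed and added to whatever total was accumulated in earlier batches.

// Hypothetical data: "spark" has a previous total of 3 and appears twice in
// the current batch; "flume" has no previous state and appears once.
val it = Iterator(("spark", Seq(1, 1), Some(3)), ("flume", Seq(1), None))
val updated = it.map { case (word, newCounts, prevTotal) =>
  (word, newCounts.sum + prevTotal.getOrElse(0))
}
println(updated.toList) // List((spark,5), (flume,1))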

The Flume configuration is as follows:

# Agent name and the names of its source, channel, and sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# Define the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/monitor
# Define the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100
# Define the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = 10.1.9.102
a1.sinks.k1.port = 6666
# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start Flume:

/usr/java/flume/bin/flume-ng agent -n a1 -c conf -f /usr/java/flume/mytest/push.properties
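Since the source is a spooling directory, you can drive the example by dropping a file into the monitored directory (the path comes from the config above; the file name here is made up):

echo "hello spark hello flume" > /home/hadoop/monitor/words.log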

Result:

(screenshot of the streaming output omitted)

  • Second approach: Poll (Spark Streaming pulls the data from Flume)

This approach requires an extra jar provided by Spark (the custom SparkSink used in the configuration below). See the official Spark Streaming + Flume integration guide, download the spark-streaming-flume-sink jar, and place it in the lib directory of the Flume installation.
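For reference, these are the Maven coordinates of the sink jar matching this setup (Scala 2.10, Spark 1.6.1, as in the POM at the end of this post); the integration guide also notes that the scala-library and commons-lang3 jars must be on Flume's classpath if they are not already there:

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume-sink_2.10</artifactId>
    <version>1.6.1</version>
</dependency>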

程序如下:

package cn.lijie

import java.net.InetSocketAddress
import org.apache.log4j.Level
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.flume.FlumeUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

/**
  * User: lijie
  * Date: 2017/8/3
  * Time: 15:19  
  */
object Flume2SparkStreaming02 {

  // Same state-update function as in the push example: per-key running word count.
  def myFunc = (it: Iterator[(String, Seq[Int], Option[Int])]) => {
    it.map(x => {
      (x._1, x._2.sum + x._3.getOrElse(0))
    })
  }

  def main(args: Array[String]): Unit = {
    MyLog.setLogLevel(Level.WARN)
    val conf = new SparkConf().setAppName("fs01").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(10))
    // Poll-based approach: the Spark receiver connects to the Flume agent's
    // SparkSink at this address and pulls events from it.
    val addrs = Seq(new InetSocketAddress("192.168.80.123", 10086))
    val ds = FlumeUtils.createPollingStream(ssc, addrs, StorageLevel.MEMORY_AND_DISK_2)
    // updateStateByKey requires a checkpoint directory to persist state across batches
    sc.setCheckpointDir("C:\\Users\\Administrator\\Desktop\\checkpoint")
    val res = ds.flatMap(x => {
      new String(x.event.getBody.array()).split(" ")
    }).map((_, 1)).updateStateByKey(myFunc, new HashPartitioner(sc.defaultParallelism), true)
    res.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
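FlumeUtils also provides a single host/port overload of createPollingStream; the Seq[InetSocketAddress] variant used above is what you would use to pull from several SparkSink agents at once:

// Equivalent call when polling a single Flume agent
// (uses the default storage level, MEMORY_AND_DISK_SER_2):
val ds = FlumeUtils.createPollingStream(ssc, "192.168.80.123", 10086)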

The Flume configuration is as follows:

# Agent name and the names of its source, channel, and sink
a1.sources = r1
a1.channels = c1
a1.sinks = k1
# Define the source
a1.sources.r1.type = spooldir
a1.sources.r1.spoolDir = /home/hadoop/monitor
# Define the channel
a1.channels.c1.type = memory
a1.channels.c1.capacity = 10000
a1.channels.c1.transactionCapacity = 100
# Define the sink (Spark's custom SparkSink, from the jar added to Flume's lib)
a1.sinks.k1.type = org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname = 192.168.80.123
a1.sinks.k1.port = 10086
# Wire the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Start Flume (assuming the configuration above was saved as poll.properties):

/usr/java/flume/bin/flume-ng agent -n a1 -c conf -f /usr/java/flume/mytest/poll.properties

Result:

(screenshot of the streaming output omitted)


Shared code:

The MyLog class:

package cn.lijie

import org.apache.log4j.{Level, Logger}
import org.apache.spark.Logging

/**
  * User: lijie
  * Date: 2017/8/3
  * Time: 15:36  
  */
object MyLog extends Logging {
  /**
    * Set the root log level (only if log4j has not been configured elsewhere)
    *
    * @param level the log4j Level to apply
    */
  def setLogLevel(level: Level): Unit = {
    val log4jInitialized = Logger.getRootLogger.getAllAppenders.hasMoreElements
    if (!log4jInitialized) {
      // Logging something first triggers Spark's default log4j initialization,
      // after which the root level can be overridden.
      logInfo("set log level -> " + level)
      Logger.getRootLogger.setLevel(level)
    }
  }
}

The POM file:



<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>flume-sparkstreaming</groupId>
    <artifactId>flume-sparkstreaming</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>1.7</maven.compiler.source>
        <maven.compiler.target>1.7</maven.compiler.target>
        <encoding>UTF-8</encoding>
        <scala.version>2.10.6</scala.version>
        <spark.version>1.6.1</spark.version>
        <hadoop.version>2.6.4</hadoop.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-streaming-flume_2.10</artifactId>
            <version>${spark.version}</version>
        </dependency>

        <dependency>
            <groupId>org.apache.hadoop</groupId>
            <artifactId>hadoop-client</artifactId>
            <version>${hadoop.version}</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <version>5.1.38</version>
        </dependency>
    </dependencies>

    <build>
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>cn.lijie.Flume2SparkStreaming01</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>
</project>

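With the shade plugin configured above (mainClass cn.lijie.Flume2SparkStreaming01), mvn package produces a runnable fat jar. For completeness, here is one way to build and submit the job; the jar name follows the artifactId and version in the POM, and the paths are illustrative:

mvn package
spark-submit --class cn.lijie.Flume2SparkStreaming01 \
  --master local[2] \
  target/flume-sparkstreaming-1.0-SNAPSHOT.jar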
