Official documentation:
http://spark.apache.org/docs/2.0.2/streaming-flume-integration.html
Apache Flume is a distributed, reliable service for collecting, aggregating, and moving large amounts of log data.
There are two approaches for Spark Streaming to receive data from Flume: push and poll.
Comparison of push mode (Flume pushes to Spark Streaming) and poll mode (Spark Streaming polls Flume):
Push mode: Flume acts as the buffer and holds the data; it listens on the configured port and, once the connection is available, pushes the data over. It is simple and loosely coupled, but if the Spark Streaming application is not running, Flume reports errors, and Spark Streaming may not be able to keep up with the incoming data.
Poll mode: a custom sink is configured in Flume, and Spark Streaming pulls data from the channel at its own pace, which gives better stability.
1. Flume configuration (push mode). The agent needs an avro sink pointing at the host and port where the Spark Streaming receiver will listen:
agent.sinks = avroSink
agent.sinks.avroSink.type = avro
agent.sinks.avroSink.channel = memoryChannel
agent.sinks.avroSink.hostname =
agent.sinks.avroSink.port =
The config file used here, flume_push_streaming.conf:
# define the agent
a1.sources = src1
a1.channels = ch1
a1.sinks = k1
# define sources
a1.sources.src1.type = netcat
a1.sources.src1.bind = 0.0.0.0
a1.sources.src1.port = 44444
a1.sources.src1.channels = ch1
# if the netcat source does not work, tailing a file does:
#a1.sources.src1.type = exec
#a1.sources.src1.command = tail -F /data/mc.txt
#a1.sources.src1.channels = ch1
# define sinks: the host (IP) and port where the Spark Streaming application runs
a1.sinks.k1.type= avro
a1.sinks.k1.hostname=192.168.22.119
a1.sinks.k1.port=1111
a1.sinks.k1.channel = ch1
# define channels
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 1000
2. Spark Streaming application
pom.xml dependency:
groupId = org.apache.spark
artifactId = spark-streaming-flume_2.11
version = 2.0.2
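In pom.xml this corresponds to the following dependency (assuming a Scala 2.11 build):
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-flume_2.11</artifactId>
    <version>2.0.2</version>
</dependency>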
Program: after the usual three lines of Spark Streaming setup (SparkConf, StreamingContext with a batch interval), create the Flume stream:
val flumeds = FlumeUtils.createStream(ssc, "192.168.22.119", 1111)
// each record x is a SparkFlumeEvent; the payload is in x.event.getBody
val res = flumeds.flatMap(x => new String(x.event.getBody.array()).split(" ")).map((_, 1)).reduceByKey(_ + _)
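Putting it together, a minimal runnable sketch of the push-mode program (the object name, master URL and 5-second batch interval are my own choices, not from the original):
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumePushWordCount {
  def main(args: Array[String]): Unit = {
    // the "three lines" of setup: SparkConf, StreamingContext, batch interval
    val conf = new SparkConf().setMaster("local[2]").setAppName("FlumePushWordCount")
    val ssc = new StreamingContext(conf, Seconds(5))

    // the receiver listens on 192.168.22.119:1111; Flume's avro sink pushes events here
    val flumeds = FlumeUtils.createStream(ssc, "192.168.22.119", 1111)

    // word count over the event bodies
    val res = flumeds.flatMap(x => new String(x.event.getBody.array()).split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    res.print()

    ssc.start()
    ssc.awaitTermination()
  }
}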
3. Start the Spark Streaming application first (in push mode it must already be listening on port 1111 before Flume starts pushing, otherwise Flume reports errors).
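One way to launch it is via spark-submit, pulling in the Flume integration with --packages (a sketch; the jar path and class name below are hypothetical):
bin/spark-submit --master local[2] --class FlumePushWordCount --packages org.apache.spark:spark-streaming-flume_2.11:2.0.2 /path/to/your-app.jar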
Then start Flume:
bin/flume-ng agent --conf conf --conf-file conf/flume_push_streaming.conf --name a1 -Dflume.root.logger=INFO,console
4. Send data to the Flume source.
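With the netcat source above, one simple way to send test data is to telnet to port 44444 on whichever machine runs the Flume agent and type a few words (the host below is a placeholder):
telnet <flume-agent-host> 44444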
Poll mode: Flume pushes data into a custom sink, where it stays buffered.
Spark Streaming uses a reliable Flume receiver and transactions to pull data from the sink; a transaction succeeds only after the data has been received and replicated by Spark Streaming.
Compared with push mode this guarantees reliability and fault tolerance, but it requires configuring Flume to run the custom sink.
1. Flume configuration with the custom sink (poll mode)
a1.sources = src1
a1.channels = ch1
a1.sinks = k1
# define sources
a1.sources.src1.type = exec
a1.sources.src1.command=tail -F /data/mc.txt
a1.sources.src1.channels=ch1
# define sinks
a1.sinks.k1.type= org.apache.spark.streaming.flume.sink.SparkSink
a1.sinks.k1.hostname=192.168.22.80
a1.sinks.k1.port=1111
a1.sinks.k1.channel = ch1
# define channels
a1.channels.ch1.type = memory
a1.channels.ch1.capacity = 1000
2. Copy these three jars into flume/lib:
spark-streaming-flume-sink_2.11-2.0.2.jar
scala-library-2.11.7.jar
commons-lang3-3.3.2.jar
Note: scala-library versions can conflict.
This experiment uses scala-library-2.11.7.jar; move any other scala-library versions out of flume/lib.
3. Write the flume poll program.
The pom dependency is the same as for push.
val ds = FlumeUtils.createPollingStream(ssc, "192.168.22.80", 1111, StorageLevel.MEMORY_ONLY)
// each record x is a SparkFlumeEvent, same as in push mode
val res = ds.flatMap(x => new String(x.event.getBody.array()).split(" ")).map((_, 1)).reduceByKey(_ + _)
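A minimal runnable sketch of the complete poll-mode program; the boilerplate mirrors the push example, and the object name, master URL and batch interval are again my own choices:
import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.flume.FlumeUtils

object FlumePollWordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setMaster("local[2]").setAppName("FlumePollWordCount")
    val ssc = new StreamingContext(conf, Seconds(5))

    // pull events from the SparkSink running inside the Flume agent on 192.168.22.80:1111
    val ds = FlumeUtils.createPollingStream(ssc, "192.168.22.80", 1111, StorageLevel.MEMORY_ONLY)

    val res = ds.flatMap(x => new String(x.event.getBody.array()).split(" "))
      .map((_, 1))
      .reduceByKey(_ + _)
    res.print()

    ssc.start()
    ssc.awaitTermination()
  }
}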
4. Start Flume. This time Flume can be started first, because data is buffered by the SparkSink on 192.168.22.80:1111.
bin/flume-ng agent --conf conf --conf-file conf/flume_push_streaming.conf --name a1 -Dflume.root.logger=INFO,console
(point --conf-file at the poll-mode configuration written in step 1)
5. Start the Spark Streaming application.
6. Feed data into the source:
echo aaa bbb kkk nick aaa >>/data/mc.txt