flume+kafka+spark-streaming test

Note: check your Flume version first; configuration parameter names differ between versions. The Flume version shipped with CDH 5.7.2, used here, is 1.6.0.

System environment

Virtual machine: VMware Workstation 10.0
Nodes: master (3 GB RAM), slave1 (1 GB RAM)
CDH version: 5.7.2
OS: CentOS 6.8
Flume version: 1.6.0
Kafka version: 0.10.0
Spark version: 1.6.0

Flume configuration on master (the server data flows into)

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.channels = c1
a1.sources.r1.bind = master
a1.sources.r1.port = 4545

# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = sparkstreaming
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
a1.sinks.k1.channel = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.keep-alive = 10
a1.channels.c1.capacity = 100000
a1.channels.c1.transactionCapacity = 100000
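
To smoke-test the avro source on master independently of the slave1 agent, you can send a single file to it with Flume's bundled avro-client; the file path below is just an illustration:

# Send one file to the avro source listening on master:4545
# (any readable text file works here)
bin/flume-ng avro-client --conf conf -H master -p 4545 -F /tmp/smoke-test.txt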

Flume configuration on slave1 (the server data flows out of)

a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the source
a1.sources.r1.type = spooldir
a1.sources.r1.channels = c1
a1.sources.r1.spoolDir = /opt/cloudera/parcels/CDH/lib/flume-ng/logs
#a1.sources.r1.fileHeader = true

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.channel = c1
a1.sinks.k1.hostname = master
a1.sinks.k1.port = 4545
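
One caveat with the spooldir source: Flume expects a file to be complete and immutable once it lands in the spool directory, and can error out if a file keeps changing while it is being consumed. A safer feeding pattern, sketched below with an assumed /tmp staging path, is to write the file elsewhere and move it into the spool directory atomically:

# Stage the file outside the spool directory, then move it in atomically
# so Flume only ever sees finished files
echo "hello world" > /tmp/batch-$(date +%s).log
mv /tmp/batch-*.log /opt/cloudera/parcels/CDH/lib/flume-ng/logs/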

Steps

Note: it is best to run each step in its own terminal.
Before starting these steps, start ZooKeeper first and then Kafka (a sketch follows below).
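
Under CDH these two services are normally managed from Cloudera Manager; on a plain Kafka tarball, the equivalent commands would be roughly the following (paths assume the stock Kafka layout):

# Start ZooKeeper (vanilla Kafka distribution; under CDH use Cloudera Manager instead)
bin/zookeeper-server-start.sh config/zookeeper.properties

# Then start the Kafka broker
bin/kafka-server-start.sh config/server.properties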

# Step 1: on master, create the Kafka topic
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1  --topic  sparkstreaming
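
To confirm the topic exists before going further, you can describe it against the same ZooKeeper:

# Should print the partition and replica assignment for sparkstreaming
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic sparkstreaming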

# Step 2: on master, start Flume
bin/flume-ng agent --conf /opt/cloudera/parcels/CDH/lib/flume-ng/conf/ -f /opt/cloudera/parcels/CDH/lib/flume-ng/conf/flume-flume-kafka.conf -Dflume.root.logger=INFO,console -n a1

# Step 3: on slave1, start Flume
bin/flume-ng agent --conf /opt/cloudera/parcels/CDH/lib/flume-ng/conf/ -f /opt/cloudera/parcels/CDH/lib/flume-ng/conf/flume-flume.conf -Dflume.root.logger=INFO,console -n a1


# Step 4: on slave1, use a script to append data to the spool directory
for ((i = 1; i <= 1000; i++)); do
  sleep 2
  echo "hello world hello world liujm  tljsdkjflsakd hello world hello" >> /opt/cloudera/parcels/CDH/lib/flume-ng/logs/test.log
done
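
From another terminal you can watch the file grow while the loop runs:

# Follow the file the loop above is appending to
tail -f /opt/cloudera/parcels/CDH/lib/flume-ng/logs/test.log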

# Step 5: on master, start a Kafka console consumer
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic sparkstreaming

# Step 6: on master, start Spark Streaming (using the example bundled with Spark; run from SPARK_HOME)
bin/run-example streaming.DirectKafkaWordCount localhost:9092 sparkstreaming
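
run-example is just a convenience wrapper around spark-submit; a roughly equivalent direct invocation would look like the following (the examples jar name is an assumption and varies by build):

# Hypothetical direct submit; adjust the jar path for your Spark/CDH layout
bin/spark-submit --class org.apache.spark.examples.streaming.DirectKafkaWordCount \
  lib/spark-examples-1.6.0-hadoop2.6.0.jar localhost:9092 sparkstreaming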

If the terminal from step 6 shows output like the following, the configuration test has succeeded.

[Figure 1: output shown in the Kafka consumer terminal]
[Figure 2: Spark Streaming word-count output]

Judging from the results, the latency is around 20 seconds.

Appendix: the official Spark Streaming example that uses Kafka as the data source

streaming.DirectKafkaWordCount.scala

/*
 * Licensed to the Apache Software Foundation (ASF) under one or more
 * contributor license agreements.  See the NOTICE file distributed with
 * this work for additional information regarding copyright ownership.
 * The ASF licenses this file to You under the Apache License, Version 2.0
 * (the "License"); you may not use this file except in compliance with
 * the License.  You may obtain a copy of the License at
 *
 *    http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

// scalastyle:off println
package org.apache.spark.examples.streaming

import kafka.serializer.StringDecoder

import org.apache.spark.streaming._
import org.apache.spark.streaming.kafka._
import org.apache.spark.SparkConf

/**
 * Consumes messages from one or more topics in Kafka and does wordcount.
 * Usage: DirectKafkaWordCount <brokers> <topics>
 *   <brokers> is a list of one or more Kafka brokers
 *   <topics> is a list of one or more kafka topics to consume from
 *
 * Example:
 *    $ bin/run-example streaming.DirectKafkaWordCount broker1-host:port,broker2-host:port \
 *    topic1,topic2
 */
object DirectKafkaWordCount {
  def main(args: Array[String]) {
    if (args.length < 2) {
      System.err.println(s"""
        |Usage: DirectKafkaWordCount <brokers> <topics>
        |  <brokers> is a list of one or more Kafka brokers
        |  <topics> is a list of one or more kafka topics to consume from
        |
        """.stripMargin)
      System.exit(1)
    }

    StreamingExamples.setStreamingLogLevels()

    val Array(brokers, topics) = args

    // Create context with 2 second batch interval
    val sparkConf = new SparkConf().setAppName("DirectKafkaWordCount")
    val ssc = new StreamingContext(sparkConf, Seconds(2))

    // Create direct kafka stream with brokers and topics
    val topicsSet = topics.split(",").toSet
    val kafkaParams = Map[String, String]("metadata.broker.list" -> brokers)
    val messages = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, topicsSet)

    // Get the lines, split them into words, count the words and print
    val lines = messages.map(_._2)
    val words = lines.flatMap(_.split(" "))
    val wordCounts = words.map(x => (x, 1L)).reduceByKey(_ + _)
    wordCounts.print()

    // Start the computation
    ssc.start()
    ssc.awaitTermination()
  }
}
