I've recently started playing with Spark Streaming, so here is a record of how I deployed an Apache log analysis demo.
The development environment is macOS + Scala 2.11 + Spark 2.0 + Kafka 0.10 + IntelliJ IDEA.
Install Scala (skip this step if it is already installed):
On macOS, install it with brew. To avoid version problems, first run
$ brew update
Then run:
$ brew install scala
Install IntelliJ IDEA:
IntelliJ IDEA is used as the IDE here. Eclipse should also work, but I have not tried it, so it is not covered in these notes.
IntelliJ IDEA can be downloaded from its official website. Both the Community edition and the paid Ultimate edition seem to work for Scala development, and if you have an edu email address you can get the paid edition free for a year, so don't let that go to waste.
Don't forget to install the Scala plugin during installation; it can also be added afterwards, which I won't go into here.
Next, create a new Scala project (an sbt project works too): File -> New -> Project -> Scala, and make sure the installed Scala version is 2.11.
Install Spark 2.0:
On macOS, again install with brew:
$ brew install apache-spark
Once the installation finishes, run:
$ spark-shell
If you see
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 2.0.0
      /_/
Using Scala version 2.11.8 (Java HotSpot(TM) 64-Bit Server VM, Java 1.8.0_60)
Type in expressions to have them evaluated.
Type :help for more information.
then Spark is installed and working. One more tweak: Spark's default logging is very chatty, so go to the config directory
/usr/local/Cellar/apache-spark/2.0.0/libexec/conf
and edit log4j.properties (if only log4j.properties.template is there, copy it to log4j.properties first). Find log4j.rootCategory and change INFO to ERROR; from now on only ERROR messages will be printed. Spark is now ready to run.
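At this point a quick sanity check inside spark-shell confirms the setup; any small computation against sc (the SparkContext the shell creates for you) will do, for example:
// paste into spark-shell; sc is provided by the shell
val nums = sc.parallelize(1 to 1000)
println(nums.filter(_ % 2 == 0).count())   // should print 500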
(To be continued tomorrow; off to work on Deep Learning for now~~~~~~)
Install Kafka:
Kafka needs ZooKeeper, so install that first (skip this step if it is already installed). On macOS, use brew:
$ brew install zookeeper
$ zkServer start
After a moment ZooKeeper will be up and running, and you can move on to installing Kafka.
The installation and configuration of Kafka follow the official quickstart guide: http://kafka.apache.org/documentation.html#quickstart
The process on macOS is slightly different, so here are the concrete steps:
First, install Kafka via brew:
$ brew install kafka
Once the installation completes, change into the Kafka directory:
$ cd /usr/local/Cellar/kafka/0.10.0.1/libexec
Then start the broker:
$ kafka-server-start config/server.properties
The broker is now running. Next, create a topic named test, list the topics to confirm it exists, and start a console producer. Still in the same directory, run:
$ kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
$ kafka-topics --list --zookeeper localhost:2181
$ kafka-console-producer --broker-list localhost:9092 --topic test
Then try typing a couple of messages:
hello world!
hello again
Now open a new terminal and go back to the same directory (or simply press Control + C to quit the producer), and use a console consumer to read the data back:
$ kafka-console-consumer --zookeeper localhost:2181 --topic test --from-beginning
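As an aside, the same test messages can also be sent from Scala through the kafka-clients producer API. Below is a minimal sketch, assuming a broker on localhost:9092 and the kafka-clients jar (it ships with Kafka under libexec/libs) on the classpath; the object name is just for illustration:
import java.util.Properties
import org.apache.kafka.clients.producer.{KafkaProducer, ProducerRecord}

object HelloProducer {
  def main(args: Array[String]): Unit = {
    // Minimal producer config: broker address plus string serializers for key and value
    val props = new Properties()
    props.put("bootstrap.servers", "localhost:9092")
    props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer")
    props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer")

    // Send the same two test messages to the "test" topic, then close (which also flushes)
    val producer = new KafkaProducer[String, String](props)
    producer.send(new ProducerRecord[String, String]("test", "hello world!"))
    producer.send(new ProducerRecord[String, String]("test", "hello again"))
    producer.close()
  }
}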
Testing the Spark Streaming development environment:
Now return to the Scala project created earlier in IntelliJ; the first task is to add the libraries we need. Start by importing the Spark jar files:
Click File -> Project Structure, select Libraries in the window that opens, click the '+' button, choose Java, and import all of the Spark jar files.
These jars live in /usr/local/Cellar/apache-spark/2.0.0/libexec/jars
Import the Kafka jar files the same way; they are located in /usr/local/Cellar/kafka/0.10.0.1/libexec/libs.
Now add the Spark Streaming Kafka integration. The steps are similar, except that instead of Java you choose "From Maven"; the package matching this environment is spark-streaming-kafka-0-10_2.11.
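Alternatively, if you created an sbt project earlier, the same dependencies can be declared in build.sbt instead of importing jars by hand. A minimal sketch matching this environment's versions (the project name is just a placeholder) might look like this:
name := "SparkStreamingLogDemo"

version := "1.0"

scalaVersion := "2.11.8"

libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-core" % "2.0.0",
  "org.apache.spark" %% "spark-streaming" % "2.0.0",
  "org.apache.spark" %% "spark-streaming-kafka-0-10" % "2.0.0"
)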
Once the libraries are in place, create two Scala source files under the project's src folder:
KafkaExample.scala:
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.storage.StorageLevel
import java.util.regex.Pattern
import java.util.regex.Matcher
import com.sundogsoftware.sparkstreaming.Utilities._
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

/** Working example of listening for log data from Kafka's test topic on port 9092. */
object KafkaExample {

  def main(args: Array[String]) {

    // Create the context with a 1 second batch size
    val ssc = new StreamingContext("local[*]", "KafkaExample", Seconds(1))

    setupLogging()

    // Construct a regular expression (regex) to extract fields from raw Apache log lines
    val pattern = apacheLogPattern()

    // hostname:port for Kafka brokers, not Zookeeper
    val kafkaParams = Map[String, Object](
      "bootstrap.servers" -> "localhost:9092,anotherhost:9092",
      "key.deserializer" -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id" -> "example",
      "auto.offset.reset" -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)
    )

    // List of topics you want to listen for from Kafka
    val topics = List("test").toSet

    // Create our Kafka stream with the 0-10 direct API; each record is a ConsumerRecord,
    // and map(_.value()) keeps only the message values, i.e. individual lines of log data.
    // (The old 0-8 style call, kept here for reference, was:
    //  KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
    //    ssc, kafkaParams, topics).map(_._2))
    val lines = KafkaUtils.createDirectStream[String, String](
      ssc,
      PreferConsistent,
      Subscribe[String, String](topics, kafkaParams)
    ).map(_.value())

    // Extract the request field from each log line
    // (lines that do not match the pattern yield (), which falls through to "[error]" below)
    val requests = lines.map(x => {val matcher: Matcher = pattern.matcher(x); if (matcher.matches()) matcher.group(5)})

    // Extract the URL from the request
    val urls = requests.map(x => {val arr = x.toString().split(" "); if (arr.size == 3) arr(1) else "[error]"})

    // Reduce by URL over a 5-minute window sliding every two seconds
    val urlCounts = urls.map(x => (x, 1)).reduceByKeyAndWindow(_ + _, _ - _, Seconds(300), Seconds(2))

    // Sort and print the results
    val sortedResults = urlCounts.transform(rdd => rdd.sortBy(x => x._2, false))
    sortedResults.print()

    // Kick it off (point the checkpoint directory at a path on your own machine)
    ssc.checkpoint("/Users/user/Desktop/spark_code/checkpoin")
    ssc.start()
    ssc.awaitTermination()
  }
}
Utilities.scala:
package com.sundogsoftware.sparkstreaming

import org.apache.log4j.Level
import java.util.regex.Pattern
import java.util.regex.Matcher

object Utilities {

  /** Makes sure only ERROR messages get logged to avoid log spam. */
  def setupLogging() = {
    import org.apache.log4j.{Level, Logger}
    val rootLogger = Logger.getRootLogger()
    rootLogger.setLevel(Level.ERROR)
  }

  /** Configures Twitter service credentials using twitter.txt in the main workspace directory
    * (not needed for this Kafka demo, but kept for completeness). */
  def setupTwitter() = {
    import scala.io.Source
    for (line <- Source.fromFile("../twitter.txt").getLines) {
      val fields = line.split(" ")
      if (fields.length == 2) {
        System.setProperty("twitter4j.oauth." + fields(0), fields(1))
      }
    }
  }

  /** Retrieves a regex Pattern for parsing Apache access logs. */
  def apacheLogPattern(): Pattern = {
    val ddd = "\\d{1,3}"
    val ip = s"($ddd\\.$ddd\\.$ddd\\.$ddd)?"
    val client = "(\\S+)"
    val user = "(\\S+)"
    val dateTime = "(\\[.+?\\])"
    val request = "\"(.*?)\""
    val status = "(\\d{3})"
    val bytes = "(\\S+)"
    val referer = "\"(.*?)\""
    val agent = "\"(.*?)\""
    val regex = s"$ip $client $user $dateTime $request $status $bytes $referer $agent"
    Pattern.compile(regex)
  }
}
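To convince yourself that apacheLogPattern() actually matches your log format, a throwaway check like the one below can be dropped into the project and run; the sample line is made up and the object is not part of the demo:
import com.sundogsoftware.sparkstreaming.Utilities

object PatternCheck {
  def main(args: Array[String]): Unit = {
    // A hypothetical Apache "combined" format line, only here to exercise the regex
    val sample = "127.0.0.1 - frank [10/Oct/2000:13:55:36 -0700] \"GET /index.html HTTP/1.0\" 200 2326 \"-\" \"Mozilla/5.0\""
    val matcher = Utilities.apacheLogPattern().matcher(sample)
    if (matcher.matches()) println(matcher.group(5))  // prints: GET /index.html HTTP/1.0
  }
}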
Once everything compiles, run KafkaExample from IntelliJ, and then in another terminal feed an Apache access log file into the test topic through the console producer:
$ kafka-console-producer --broker-list localhost:9092 --topic test < /Users/user/Desktop/spark_code/access_log.txt
Here /Users/user/Desktop/spark_code/access_log.txt is my file path; change it to the path of an access log on your own machine. You should then see the URL counts printed in the IntelliJ console.