Flume is used to collect, aggregate, and transport log data to Kafka. Multiple sources can be configured, one per task's log file, all feeding a single Kafka sink through one channel. The configuration file is as follows:
#define agent
agent_log.sources = s1 s2
agent_log.channels = c1
agent_log.sinks = k1
#define sources.s1
agent_log.sources.s1.type=exec
agent_log.sources.s1.command=tail -F /data/log1.log
#define sources.s2
agent_log.sources.s2.type=exec
agent_log.sources.s2.command=tail -F /data/log2.log
#define interceptors
agent_log.sources.s1.interceptors = i1
agent_log.sources.s1.interceptors.i1.type = static
agent_log.sources.s1.interceptors.i1.preserveExisting = false
agent_log.sources.s1.interceptors.i1.key = projectName
agent_log.sources.s1.interceptors.i1.value= project1
agent_log.sources.s2.interceptors = i2
agent_log.sources.s2.interceptors.i2.type = static
agent_log.sources.s2.interceptors.i2.preserveExisting = false
agent_log.sources.s2.interceptors.i2.key = projectName
agent_log.sources.s2.interceptors.i2.value= project2
#define channels
agent_log.channels.c1.type = memory
agent_log.channels.c1.capacity = 1000
agent_log.channels.c1.transactionCapacity = 1000
#define sinks
#set the Kafka sink
agent_log.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
#set the Kafka broker addresses and ports
agent_log.sinks.k1.brokerList=cdh1:9092,cdh2:9092,cdh3:9092
#set the Kafka topic
agent_log.sinks.k1.topic=result_log
#include the event headers (events are written in Flume Avro event format)
agent_log.sinks.k1.useFlumeEventFormat = true
#set the serializer
agent_log.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent_log.sinks.k1.partitioner.class=org.apache.flume.plugins.SinglePartition
agent_log.sinks.k1.partition.key=1
agent_log.sinks.k1.request.required.acks=0
agent_log.sinks.k1.max.message.size=1000000
agent_log.sinks.k1.producer.type=sync
agent_log.sinks.k1.custom.encoding=UTF-8
# bind the sources and sinks to the channels
agent_log.sources.s1.channels=c1
agent_log.sources.s2.channels=c1
agent_log.sinks.k1.channel=c1
Start Flume with the flume-ng command:
flume-ng agent -c /etc/flume-ng/conf -f result_log.conf -n agent_log
Kafka is a messaging system that buffers messages. The logs collected by Flume are written to a Kafka topic (Flume acts as the producer) and can then be consumed by Spark Streaming; Kafka's persistence also helps guard against data loss. For more background on Kafka, see: https://www.cnblogs.com/likehua/p/3999538.html
#create the result_log topic
kafka-topics --zookeeper cdh1:2181,cdh2:2181,cdh3:2181 --create --topic result_log --partitions 3 --replication-factor 1
#test: list the Kafka topics and verify that result_log was created
kafka-topics --list --zookeeper cdh1:2181,cdh2:2181,cdh3:2181
#test: start a console consumer to verify that the Flume-to-Kafka leg works
kafka-console-consumer --bootstrap-server cdh1:9092,cdh2:9092,cdh3:9092 --topic result_log
Create a new Maven project and add the required dependencies to pom.xml. // see the project code for details
We use ZooKeeper to manage the Spark Streaming consumer offsets. Calling
KafkaUtils.createDirectStream[String, String](ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams, newOffset))
establishes the connection to Kafka and returns an InputDStream. The resulting stream is then processed with
stream.foreachRDD(rdd => { /* processing logic */ }) // process the data stream
where ssc is the StreamingContext, created with a 60-second batch interval and started at the end of the program:
val ssc = new StreamingContext(sc, Durations.seconds(60))
ssc.start() // start the StreamingContext
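For reference, here is a minimal, self-contained sketch of the consumer described above (not the project's actual Handler class). The broker list and topic come from the Flume and Kafka setup earlier in this article; the group.id and the empty newOffset map are placeholders, and the ZooKeeper offset save/restore is only indicated in comments:

import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Durations, StreamingContext}
import org.apache.spark.streaming.kafka010.KafkaUtils
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

object LogMonitorSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("monitorLog")
    val ssc  = new StreamingContext(conf, Durations.seconds(60))  // 60-second batches

    val kafkaParams = Map[String, Object](
      "bootstrap.servers"  -> "cdh1:9092,cdh2:9092,cdh3:9092",
      "key.deserializer"   -> classOf[StringDeserializer],
      "value.deserializer" -> classOf[StringDeserializer],
      "group.id"           -> "result_log_group",          // assumed consumer group id
      "auto.offset.reset"  -> "latest",
      "enable.auto.commit" -> (false: java.lang.Boolean)   // offsets are managed manually
    )

    val topics = Array("result_log")
    // newOffset would be restored from ZooKeeper on startup;
    // an empty map means "fall back to auto.offset.reset"
    val newOffset = Map[TopicPartition, Long]()

    val stream = KafkaUtils.createDirectStream[String, String](
      ssc, PreferConsistent, Subscribe[String, String](topics, kafkaParams, newOffset))

    stream.foreachRDD { rdd =>
      // processing logic goes here; after it succeeds, write the batch's
      // offset ranges back to ZooKeeper so a restart resumes where it left off
      rdd.foreach(record => println(record.value()))
    }

    ssc.start()
    ssc.awaitTermination()
  }
}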
The e-mail alerting feature uses the HtmlEmail class from the org.apache.commons.mail package (Apache Commons Email); HtmlEmail.send is called to send the message.
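A minimal sketch of such an alert method, assuming commons-email-1.5 (already on the --jars list below); the SMTP host, credentials, and addresses are placeholders, not values from the project:

import org.apache.commons.mail.HtmlEmail

def sendAlert(subject: String, htmlBody: String): Unit = {
  val email = new HtmlEmail()
  email.setHostName("smtp.example.com")                     // placeholder SMTP server
  email.setAuthentication("monitor@example.com", "secret")  // placeholder credentials
  email.setCharset("UTF-8")
  email.setFrom("monitor@example.com", "Log Monitor")
  email.addTo("ops@example.com")
  email.setSubject(subject)
  email.setHtmlMsg(htmlBody)
  email.send()                                              // throws EmailException on failure
}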
Write a start.sh script to launch the Spark Streaming program, then run it with sh start.sh:
#!/bin/bash
export HADOOP_USER_NAME=hdfs
spark2-submit \
--master yarn \
--deploy-mode client \
--executor-cores 3 \
--num-executors 10 \
--driver-memory 2g \
--executor-memory 1G \
--conf spark.default.parallelism=30 \
--conf spark.storage.memoryFraction=0.5 \
--conf spark.shuffle.memoryFraction=0.3 \
--conf spark.reducer.maxSizeInFlight=128m \
--driver-class-path mysql-connector-java-5.1.38.jar \
--jars mysql-connector-java-5.1.38.jar,qqwry-java-0.7.0.jar,fastjson-1.2.47.jar,spark-streaming-kafka-10_2.11-2.2.0.jar,hive-hbase-handler-1.1.0-cdh5.13.0.jar,commons-email-1.5.jar,commons-email-1.5-sources.jar,mail-1.4.7.jar \
--class com.lin.monitorlog.mianer.Handler \
monitorLog.jar
Spark Streaming program source code: https://download.csdn.net/download/linge1995/10576773