Monitoring Spark Streaming Delay


This post comes out of a painful production incident. We had written a Spark Streaming job that cleans behavior data and writes it to Hive every ten minutes. Everyone assumed the job was running normally and nothing could go wrong, but a problem was quietly building up in the background. By the next day, every report built on that Hive behavior table was missing a large chunk of data. Why? The data volume had grown too large, Spark Streaming batches piled up in the scheduler, and combined with data skew, a single batch took more than twice its normal runtime, leaving the data three hours behind.

The quickest fix for this kind of incident is to rerun the report jobs, which restores the data, but that treats the symptom rather than the cause: you have to adjust resources and the Kafka topic partition count according to how far the data is lagging. So how do you know when a Spark Streaming job is delayed, and how bad the delay is? You can check the Spark Streaming web UI at http://<resourcemanager-address>/proxy/<application_id>/streaming/, but nobody can stare at it all day, and delays don't happen on a fixed schedule. So we borrowed the web-scraping approach: request the monitoring page, see what information it exposes, clean it up, extract the key metrics, and fire an alert whenever they exceed a threshold. That way we learn about a backlog quickly and can deal with it before the incident repeats.
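A side note before the script: newer Spark versions also expose these numbers through a documented monitoring REST API, which returns JSON and is less brittle than scraping HTML. A minimal sketch, assuming the driver UI is reachable through the same YARN proxy (the host, port, and application id below are placeholders, and the proxy path may differ on your cluster):

# Minimal sketch using Spark's monitoring REST API instead of HTML scraping.
# Host, port, and application id are placeholders; adjust to your cluster.
import requests

app_id = "application_1234567890123_0001"
url = "http://localhost:8088/proxy/%s/api/v1/applications/%s/streaming/statistics" % (app_id, app_id)
stats = requests.get(url).json()
# numActiveBatches and avgSchedulingDelay are fields documented for this endpoint.
print(stats["numActiveBatches"], stats.get("avgSchedulingDelay"))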

Parameters: --application_name is the application name set in the Spark Streaming code; --active_batches is the maximum tolerated number of backlogged batches, and an alert fires when the actual count exceeds this value.
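For example, assuming the script below is saved as spark_streaming_monitor.py (both the filename and the application name here are made-up examples), it can be run from cron every few minutes:

python spark_streaming_monitor.py --application_name user_behavior_etl --active_batches 5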

# coding:utf-8
import re
import sys

import requests
from lxml import etree
from optparse import OptionParser


def option_parser():
    usage = "usage: %prog [options] arg1"
    parser = OptionParser(usage=usage)
    parser.add_option("--application_name", dest="application_name", action="store",
                      type="string", help="application name set in the Spark Streaming code")
    parser.add_option("--active_batches", dest="active_batches", action="store",
                      type="string", help="alert threshold: max tolerated number of active (backlogged) batches")
    return parser


if __name__ == '__main__':
    opt_parser = option_parser()
    options, args = opt_parser.parse_args(sys.argv[1:])

    if options.application_name is None or options.active_batches is None:
        print("Please supply both parameters: --application_name --active_batches")
        sys.exit(1)

    active_batch_count = 0
    record = ""
    scheduling_delay = ""
    resourcemanager_url = "resourcemanager_url"  # e.g. http://localhost:8088/cluster/scheduler

    # Fetch the ResourceManager scheduler page and parse the embedded applications
    # table, which is rendered as a JavaScript array literal inside a <script> tag.
    resourcemanager_html = requests.get(resourcemanager_url).content.decode('utf-8')
    html = etree.HTML(resourcemanager_html)
    for content in html.xpath('//*[@id="apps"]/script'):
        application_text_list = content.text.split("=", 1)[1].split("],")
        for application_text in application_text_list:
            application_text = application_text.replace("[", "").replace("]", "").split(",")
            application_name = application_text[2].replace("\"", "")
            # The application id is wrapped in an <a> tag; pull it out of the anchor.
            application_id = re.findall(r">(.*)<", application_text[0])[0]
            if application_name == options.application_name:
                # Reach the driver's Streaming UI through the YARN proxy; this host
                # should match the ResourceManager host used above.
                streaming_url = "http://localhost:8088/proxy/%s/streaming/" % application_id
                streaming_html = etree.HTML(requests.get(streaming_url).content.decode('utf-8'))
                # Extract the batch count from the "Active Batches (N)" heading.
                for heading in streaming_html.xpath('//*[@id="active"]'):
                    active_batch_count = int(re.findall(r"\((.*)\)", heading.text)[0])
                # Extract Records (2nd column) and Scheduling Delay (3rd column) from
                # the active-batches table; each loop keeps the last row it sees.
                for records in streaming_html.xpath('//*[@id="active-batches-table"]/tbody/*/td[2]'):
                    record = records.text
                for scheduling in streaming_html.xpath('//*[@id="active-batches-table"]/tbody/*/td[3]'):
                    scheduling_delay = scheduling.text
                print(active_batch_count)
                if active_batch_count > int(options.active_batches):
                    alert = "Job %s is delayed. Backlogged batches: %d, Records: %s, Scheduling Delay: %s" % (
                        application_name, active_batch_count, record, scheduling_delay)
                    print(alert)
                    # TODO: push `alert` to your company's internal IM webhook so it is
                    # seen immediately; Feishu works well for this (see sketch below).
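To fill in that TODO: a Feishu custom bot accepts a plain JSON webhook, so the alert can be pushed with one extra request. A minimal sketch, assuming you have created a custom bot in the target group chat and copied its webhook URL (the token below is a placeholder):

# Minimal sketch: push the alert text to a Feishu custom-bot webhook.
# The token in the URL is a placeholder; create a custom bot in your group to get a real one.
import requests

webhook = "https://open.feishu.cn/open-apis/bot/v2/hook/your-token-here"
payload = {"msg_type": "text", "content": {"text": alert}}  # `alert` is the string built above
requests.post(webhook, json=payload, timeout=5)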



