Postmortem: Troubleshooting a 6000% CPU Usage Spike in Production

Background

We run an alert monitoring system that ingests alert data from the customer's various business systems and third-party platforms, then analyzes and presents it. When an alert is closed on the page, all of the raw events behind that alert have to be looked up and pushed back to the upstream systems so that event status stays in sync.

Symptom

Closing an alert in the UI does not immediately query the database and push data to the upstream systems. Instead, the id of the closed alert is first published to a Kafka topic as a buffer, and the application listens on that topic and consumes it at its own pace. That is exactly where the problem arose: on the day of the incident, the customer's on-call operator closed an alert that had more than 6 million raw events compressed into it, and the code ended up in an effectively endless loop. This post is a review of that incident.
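For context, the producing side looks roughly like the sketch below. This is an illustration only, not the actual production code: the topic name closeAlertBufferTopic is an assumption (the real topic names are redacted as "xxx" in the listener below), kafkaService.send(topic, json) mirrors the helper used later in this post, and the payload fields follow the message format shown in the next section.

// Sketch of the producer side: when an alert is closed in the UI, only the alert
// ids (plus the closing note and the operator) are buffered to Kafka; the heavy
// lookup of raw events is deferred to the listener.
public void publishAlertClose(List<Long> alertIdList, String resolution, String username) {
    Map<String, Object> payload = new HashMap<>();
    payload.put("alertIdList", alertIdList);   // ids of the alerts closed on the page
    payload.put("resolution", resolution);     // closing note entered by the operator
    payload.put("username", username);         // operator account
    kafkaService.send(closeAlertBufferTopic, JSONUtil.toJsonStr(payload)); // topic name is assumed
}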

Table structure

tb_alert: the alert table. One-to-many with tb_event: one alert aggregates n raw events. About 50k+ rows.

tb_event: raw events, 10M+ rows; alert_id is the id of the alert the event was compressed into.

tb_extend: the extension table for events, mainly holding extended fields. It has the same row count as tb_event and uses event_id as its primary key, which also links it back to tb_event.
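To make the joins in the code below easier to follow, here is a minimal entity sketch. It is an illustration only: the Java field names are assumptions inferred from the queries in this post (t.id = e.event_id, t.alert_id, and the extend columns selected through constants such as serialId), and the annotations are the standard MyBatis-Plus ones.

// Minimal entity sketch (assumed field names, inferred from the queries below).
@TableName("tb_event")
class Event {
    @TableId
    private Long id;         // joined as t.id = e.event_id
    private Long alertId;    // alert_id: the alert this raw event was compressed into
    // ... remaining columns omitted
}

@TableName("tb_extend")
class Extend {
    @TableId
    private Long eventId;    // event_id: primary key, and the link back to tb_event
    private String extend20; // per the optimized code below, holds the upstream event id that is pushed back
    // ... remaining extend columns omitted
}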

Code analysis

The Kafka listener; each record it receives has the format {"alertIdList":[1,2,3,4,5],...}

 @KafkaListener(id = "ntAlertCallBackListener", topics = "xxx", groupId = "xxx",
            containerFactory = "kafkaListenerContainerFactory")
    public void ntAlertCallBackListener(List<ConsumerRecord<String, String>> data, Acknowledgment acknowledgment) {

        StopWatch stopWatch = new StopWatch("ntAlertCallBackListener");
        stopWatch.start("ntAlertCallBack");
        try {

            List<JSONObject> alertInfoMap = data.stream().map(ConsumerRecord::value).map(JSONUtil::parseObj).collect(Collectors.toList());
            alertInfoMap.parallelStream().forEach(map -> {
                List<Long> alertIdList = JSONUtil.parseArray(map.get("alertIdList")).toList(Long.class);
                alertService.ntAlertCloseCallBack(alertIdList, String.valueOf(map.get("resolution")), String.valueOf(map.get("username")));
                });

        } catch (Exception e) {
            log.error("callBack alert error", e);
        } finally {
            stopWatch.stop();
            log.info("ntAlertCallBack kafkaListener duration:{}, dataSize:{}",stopWatch.getLastTaskTimeMillis(),data.size());
            acknowledgment.acknowledge();
        }
    }
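Note that acknowledgment.acknowledge() sits in the finally block and any exception is caught and logged, so a batch is acknowledged even when processing fails. It is only redelivered when the method never returns at all, for example when processing hangs until the application is restarted, which is exactly the failure mode described later in this post.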

The push-back routine to the upstream system

// alertIdList is the list of alert ids, resolution is the closing note, username is the operator
public void ntAlertCloseCallBack(List<Long> alertIdList, String resolution, String username) {
        if (CollUtil.isEmpty(alertIdList)) {
            return;
        }
        StopWatch stopWatch = new StopWatch("alertCloseCallBackNt");
        stopWatch.start("buildAlertCloseNtReq");
        // if there are too many raw events, split the list in two and send in two passes
        Integer eventCount = eventDaoService.lambdaQuery().in(Event::getAlertId, alertIdList).count();
        if (alertIdList.size() != 1 && eventCount > maxGetSize) {
            ntAlertCloseCallBack(CollUtil.sub(alertIdList,0,alertIdList.size()/2), resolution, username);
            ntAlertCloseCallBack(CollUtil.sub(alertIdList,alertIdList.size()/2,alertIdList.size()), resolution, username);
            return;
        }
        List<AlertCloseCallBackNtReq> alertCloseCallBackNtReqs = buildAlertCloseNtReq(alertIdList, resolution, username);
        stopWatch.stop();
        log.info("alertCloseCallBackNt getReq duration:{},size:{}", stopWatch.getLastTaskTimeMillis(), alertCloseCallBackNtReqs.size());

        stopWatch.start("alertCloseCallBackNt sendKafka");
        alertCloseCallBackNtReqs.forEach(req->kafkaService.send(ntAlertInfoCallBackTopic,JSONUtil.toJsonStr(req)));
        stopWatch.stop();
        log.info("alertCloseCallBackNt sendKafka duration:{}", stopWatch.getLastTaskTimeMillis());
    }

    private List<AlertCloseCallBackNtReq> buildAlertCloseNtReq(List<Long> alertIdList, String resolution, String username) {
        List<AlertCloseCallBackNtReq> alertCloseCallBackNtReqs = new ArrayList<>();

        MPJQueryWrapper<Event> mpjQueryWrapper = new MPJQueryWrapper<>();
        mpjQueryWrapper.select("distinct e." + serialId +" as serialId",
                "e."+ntSeverIpExtend+" as serverId", "e."+ntFullHostExtend+" as host","t.alert_id",
                "e."+ntItemTagExtend+" as itemTag");
        mpjQueryWrapper.in("t.alert_id", alertIdList);
        mpjQueryWrapper.leftJoin("tb_extend e on t.id = e.event_id");
        List<Map<String, Object>> maps = eventDaoService.listMaps(mpjQueryWrapper);

        maps.stream().map(map -> AlertCloseCallBackNtReq.builder()
                        .eventId(String.valueOf(map.getOrDefault("serialId","")))
                        .host(String.valueOf(map.getOrDefault("host","")))
                        .severip(String.valueOf(map.getOrDefault("serverId","")))
                        .itemTag(String.valueOf(map.getOrDefault("itemTag","")))
                        .msg(resolution)
                        .operator(username).build()
                )
                .forEach(alertCloseCallBackNtReqs::add);

        return alertCloseCallBackNtReqs;
    }
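The wrapper above generates SQL along the lines of select distinct e.<serial>, e.<server ip>, e.<host>, t.alert_id, e.<item tag> from tb_event t left join tb_extend e on t.id = e.event_id where t.alert_id in (...), with tb_event as the main table t. Note that there is no limit clause: every matching row is loaded into memory by a single listMaps call, which is what the optimized version below addresses.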

After optimization

 public void ntAlertCloseCallBack(List<Long> alertIdList, String resolution, String username) {
        if (CollUtil.isEmpty(alertIdList)) {
            return;
        }
        log.info("received {} closed alerts", alertIdList.size());
        StopWatch stopWatch = new StopWatch("alertCloseCallBackNt");
        stopWatch.start("buildAlertCloseNtReqAndSendToKafka");
        final int batchSize = 100; // number of alert ids handled per batch
        int startIndex = 0;

        while (startIndex < alertIdList.size()) {
            List<Long> subList = alertIdList.subList(startIndex, Math.min(startIndex + batchSize, alertIdList.size()));
            buildAlertCloseNtReqAndSendToKafka(subList, resolution, username);
            startIndex += batchSize;
        }
        stopWatch.stop();
        log.info("alertCloseCallBackNt total duration:{}, alertSize:{}", stopWatch.getLastTaskTimeMillis(), alertIdList.size());
    }

    /**
     * In batches, fetch the raw event ids that need to be pushed back (stored in the
     * extend20 column of the extension table) and send them to Kafka.
     * @param alertIdList
     * @param resolution
     * @param username
     */
    private void buildAlertCloseNtReqAndSendToKafka(List<Long> alertIdList, String resolution, String username) {
        // total number of rows to fetch
        Integer eventCount = getEventCount(alertIdList);
        // page size per query
        final int batchSize = maxGetSize;
        // total number of pages
        int totalBatches = (eventCount + batchSize - 1) / batchSize;

        for (int i = 0; i < totalBatches; i++) {
            // fetch one page of data
            List<Map<String, Object>> maps = getEventData(alertIdList, i * batchSize, batchSize);

            // map each page into callback requests
            List<AlertCloseCallBackNtReq> alertCloseCallBackNtReqs = maps.stream().map(map -> AlertCloseCallBackNtReq.builder()
                            .eventId(String.valueOf(map.getOrDefault("serialId", "")))
                            .host(String.valueOf(map.getOrDefault("host", "")))
                            .severip(String.valueOf(map.getOrDefault("serverId", "")))
                            .itemTag(String.valueOf(map.getOrDefault("itemTag", "")))
                            .msg(resolution)
                            .operator(username).build())
                    .collect(Collectors.toList());

            // send to Kafka
            alertCloseCallBackNtReqs.forEach(req -> kafkaService.send(ntAlertInfoCallBackTopic, JSONUtil.toJsonStr(req)));
        }
    }
    // count the total number of rows
    private int getEventCount(List<Long> alertIdList) {
        MPJQueryWrapper<Event> mpjQueryWrapper = new MPJQueryWrapper<>();
        mpjQueryWrapper.select("distinct e." + serialId);
        mpjQueryWrapper.in("t.alert_id", alertIdList);
        mpjQueryWrapper.leftJoin("tb_extend e on t.id = e.event_id");
        return eventDaoService.count(mpjQueryWrapper);
    }
    // fetch the rows that need to be pushed back
    private List<Map<String, Object>> getEventData(List<Long> alertIdList, int offset, int limit) {
        MPJQueryWrapper<Event> mpjQueryWrapper = new MPJQueryWrapper<>();
        mpjQueryWrapper.select("distinct e." + serialId + " as serialId",
                "e." + ntSeverIpExtend + " as serverId", "e." + ntFullHostExtend + " as host", "t.alert_id",
                "e." + ntItemTagExtend + " as itemTag");
        mpjQueryWrapper.in("t.alert_id", alertIdList);
        mpjQueryWrapper.leftJoin("tb_extend e on t.id = e.event_id");
        mpjQueryWrapper.last(String.format("limit %d, %d", offset, limit)); // append the pagination clause
        return eventDaoService.listMaps(mpjQueryWrapper);
    }

Looking at the pre-optimization code, you can see that when an alert is associated with only a modest number of events nothing goes wrong: a few levels of recursion are enough to work through the data and send it to Kafka.

On the day of the incident, however, the on-call operator closed an alert carrying 6 million raw events, and this code broke down. Less than a minute after the service was restarted, CPU usage shot up to 6000% (on a 16-core server). After a long round of troubleshooting, we traced it to this piece of code:

// count the rows in MySQL that match the condition
Integer eventCount = eventDaoService.lambdaQuery().in(Event::getAlertId, alertIdList).count();
// recurse to process in batches; maxGetSize is configured as 20000
if (alertIdList.size() != 1 && eventCount > maxGetSize) {
	ntAlertCloseCallBack(CollUtil.sub(alertIdList,0,alertIdList.size()/2), resolution, username);
	ntAlertCloseCallBack(CollUtil.sub(alertIdList,alertIdList.size()/2,alertIdList.size()), resolution, username);
	return;
}

With eventCount at 6 million, this branch kept recursing, which drove CPU usage through the roof and left the service unable to handle any other requests. Because the program could never finish, the Kafka offset was never committed, so every restart of the application consumed the same batch again and fell straight back into the loop. Once we had worked this out, we backed up the data in the topic and subscribed to a fresh topic, and the service did indeed recover, with CPU settling back down to a few dozen percent. We then shipped the optimized code, which handles large volumes without trouble.
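A further safeguard worth considering (a sketch only, not part of the fix that actually shipped): put a hard cap on how much work a single Kafka record may trigger and divert anything larger to a separate topic for offline handling, so that one oversized alert can never wedge the consumer again. The quarantine topic and the 200,000 threshold below are assumptions for illustration.

    // Illustrative poison-pill guard (not the shipped fix): if a record would fan out
    // into too many raw events, park it on a quarantine topic instead of processing it
    // inline, so the listener can still acknowledge the offset and move on.
    private static final int MAX_INLINE_EVENT_COUNT = 200_000; // assumed threshold

    private boolean divertIfTooLarge(List<Long> alertIdList, String rawRecordJson) {
        Integer eventCount = eventDaoService.lambdaQuery().in(Event::getAlertId, alertIdList).count();
        if (eventCount != null && eventCount > MAX_INLINE_EVENT_COUNT) {
            kafkaService.send(quarantineTopic, rawRecordJson); // quarantineTopic is assumed
            log.warn("alert close for {} alerts fans out into {} events, diverted to quarantine", alertIdList.size(), eventCount);
            return true;
        }
        return false;
    }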

Summary

Whenever you query the database or exchange data with another system, always account for how the data volume may grow. Prefer batched operations and write defensive code, so that problems like OOM or runaway CPU usage never get the chance to occur.
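As a small readability note on the batching itself, Hutool's CollUtil.split can replace the manual index arithmetic in the optimized method; the loop below is a sketch equivalent to the while loop shown earlier.

        // Equivalent batching with Hutool's CollUtil.split: partition the id list into
        // chunks of 100 and hand each chunk to the existing build-and-send method.
        for (List<Long> batch : CollUtil.split(alertIdList, 100)) {
            buildAlertCloseNtReqAndSendToKafka(batch, resolution, username);
        }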

May the world be free of bugs v_v
