Adding a custom interceptor to flume-ng

Business scenario: collect selected information from nginx logs into Kafka. To keep the pressure on Kafka down, two optimizations are made:
  • Filter out data that does not need analysis before it enters Kafka
  • Spread the messages as evenly as possible across the different brokers
Filtering the data
  • Filter out the unneeded data
  • Implement a custom Interceptor


 
Add the flume-ng-core dependency to the project's pom.xml:

    <dependency>
        <groupId>org.apache.flume</groupId>
        <artifactId>flume-ng-core</artifactId>
        <version>1.6.0</version>
    </dependency>
// Keep only the data for the two API endpoints of interest
package deng.yb.flume_ng_Interceptor;

import java.util.ArrayList;
import java.util.List;

import org.apache.commons.codec.Charsets;
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

public class MyInterceptor implements Interceptor {
    /**
     * epp endpoint - request line
     */
    private static final String EPP_REQUEST = "POST /api/sky_server_data_app/track/user_time HTTP/1.1";

    /**
     * app endpoint - request line
     */
    private static final String APP_REQUEST = "POST /api/sky_server_data_app/code/app_code HTTP/1.1";

    public void close() {
    }

    public void initialize() {
    }

    public Event intercept(Event event) {
        String body = new String(event.getBody(), Charsets.UTF_8);
        // Keep the event only if it hits one of the two endpoints;
        // returning null tells Flume to drop the event
        if (body.contains(EPP_REQUEST) || body.contains(APP_REQUEST)) {
            return event;
        }
        return null;
    }

    public List<Event> intercept(List<Event> events) {
        List<Event> intercepted = new ArrayList<>(events.size());
        for (Event event : events) {
            Event interceptedEvent = intercept(event);
            if (interceptedEvent != null) {
                intercepted.add(interceptedEvent);
            }
        }
        return intercepted;
    }

    public static class Builder implements Interceptor.Builder {

        public void configure(Context context) {
            // No configuration needed
        }

        public Interceptor build() {
            return new MyInterceptor();
        }
    }
}

  • Modify the Flume configuration in CDH; add the following to the agent:
epplog.sources.r1.interceptors=i1
epplog.sources.r1.interceptors.i1.type=deng.yb.flume_ng_Interceptor.MyInterceptor$Builder
  • Build the custom interceptor into a jar and put it in the $FLUME_HOME/lib directory
  • Restart Flume
  • Now the data going from Flume to Kafka has already been filtered, avoiding the IO problems caused by pushing large amounts of useless data into Kafka
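For context, here is a minimal sketch of the full agent definition this interceptor plugs into. Only the agent/source names (epplog, r1) and the interceptor lines come from the configuration above; the exec source command, memory channel, sink name k1, and broker port 9092 are assumptions for illustration:

# Assumed exec source tailing the nginx access log
epplog.sources=r1
epplog.channels=c1
epplog.sinks=k1

epplog.sources.r1.type=exec
epplog.sources.r1.command=tail -F /var/log/nginx/access.log
epplog.sources.r1.channels=c1
epplog.sources.r1.interceptors=i1
epplog.sources.r1.interceptors.i1.type=deng.yb.flume_ng_Interceptor.MyInterceptor$Builder

epplog.channels.c1.type=memory

# Flume 1.6 Kafka sink; brokerList and topic are the 1.6-era property names
epplog.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
epplog.sinks.k1.brokerList=bi-slave1:9092,bi-slave2:9092,bi-master:9092
epplog.sinks.k1.topic=epplog1
epplog.sinks.k1.channel=c1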
Kafka load balancing
  • Messages need to be spread evenly across the brokers so that no single broker comes under too much pressure
  • The official documentation says:
Kafka Sink uses the topic and key properties from the FlumeEvent headers to send events to Kafka. If topic exists in the headers, the event will be sent to that specific topic, overriding the topic configured for the Sink. If key exists in the headers, the key will be used by Kafka to partition the data between the topic partitions. Events with same key will be sent to the same partition. If the key is null, events will be sent to random partitions.
  • In short: the Kafka sink looks at the key header of each event to decide which partition the data goes to, and a null key is supposed to mean a random partition. In my tests, however, Flume ended up publishing all the data to a single partition
  • So we add another interceptor that writes a random, unique key into each event's header; only then does the data actually get spread across the Kafka partitions (a sketch of what such an interceptor does follows the configuration below)
  • Add and modify the following in the Flume agent configuration:
epplog.sources.r1.interceptors=i1 i2
epplog.sources.r1.interceptors.i1.type=deng.yb.flume_ng_Interceptor.MyInterceptor$Builder

epplog.sources.r1.interceptors.i2.type=org.apache.flume.sink.solr.morphline.UUIDInterceptor$Builder
epplog.sources.r1.interceptors.i2.headerName=key
epplog.sources.r1.interceptors.i2.preserveExisting=false
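To make the mechanism concrete, here is a minimal sketch of what such a key-assigning interceptor boils down to. This is not the morphline UUIDInterceptor source; RandomKeyInterceptor is a hypothetical class, shown only to illustrate how a random UUID ends up under the configured header name:

package deng.yb.flume_ng_Interceptor;

import java.util.List;
import java.util.UUID;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

// Hypothetical minimal equivalent of the UUID interceptor: put a random,
// unique value under the "key" header so the Kafka sink spreads events
// across partitions
public class RandomKeyInterceptor implements Interceptor {
    private final String headerName;

    private RandomKeyInterceptor(String headerName) {
        this.headerName = headerName;
    }

    public void initialize() {
    }

    public void close() {
    }

    public Event intercept(Event event) {
        // Each event gets its own UUID, so keys are effectively random
        event.getHeaders().put(headerName, UUID.randomUUID().toString());
        return event;
    }

    public List<Event> intercept(List<Event> events) {
        for (Event event : events) {
            intercept(event);
        }
        return events;
    }

    public static class Builder implements Interceptor.Builder {
        private String headerName = "key";

        public void configure(Context context) {
            // Reads the same headerName property used in the config above
            headerName = context.getString("headerName", "key");
        }

        public Interceptor build() {
            return new RandomKeyInterceptor(headerName);
        }
    }
}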
  • Create the topic
# The number of partitions should be based on the number of brokers, ideally an integer multiple of it
kafka-topics --create --zookeeper bi-slave1:2181,bi-slave2:2181,bi-master:2181 --replication-factor 1 --partitions 3 --topic epplog1
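You can confirm the partition layout and leader assignment with the standard describe command, against the same ZooKeeper quorum:

kafka-topics --describe --zookeeper bi-slave1:2181,bi-slave2:2181,bi-master:2181 --topic epplog1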
  • Point the Flume sink's topic at the new topic and restart Flume
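A quick way to watch the messages from the command line is the console consumer that ships with Kafka (the --zookeeper form shown here matches the 0.8/0.9-era Kafka assumed in this setup):

kafka-console-consumer --zookeeper bi-slave1:2181,bi-slave2:2181,bi-master:2181 --topic epplog1 --from-beginning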

  • Check the messages


    (Figure: 测试结果.png - test results)
  • As you can see, each message now carries an automatically added UUID and contains only the filtered information

  • Check this topic's partition directories on the different brokers
    Partition 1 (Figure: 1分区.png)
    Partition 2 (Figure: 2分区.png)
    Partition 3 (Figure: 3分区.png)
  • Partition directory names follow the format topic-partitionIndex, with the index starting at 0 (here: epplog1-0, epplog1-1, epplog1-2)

  • As you can see, the messages are now spread fairly evenly across the 3 partitions, and therefore across the three machines, which achieves the Kafka load balancing we wanted
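If you would rather verify the distribution numerically than from screenshots, the GetOffsetShell tool prints the latest offset (i.e. the message count) per partition; the broker port 9092 is an assumption here:

# Prints one line per partition, e.g. epplog1:0:<latest offset>
kafka-run-class kafka.tools.GetOffsetShell --broker-list bi-slave1:9092 --topic epplog1 --time -1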
