Custom Sources and Custom Sinks in Flume

Custom Source

A source is the component responsible for receiving data into a Flume agent. Sources can handle log data of many types and formats, including avro, thrift, exec, jms, spooling directory, netcat, sequence generator, syslog, http, and legacy. The built-in source types cover a lot of ground, but they do not always meet the needs of real-world development; in those cases we implement a custom source.
According to the official documentation, a custom source must extend the AbstractSource class and implement the Configurable and PollableSource interfaces.

Requirement: use Flume to receive data, add a prefix to each event,
    and print it to the console. The prefix is configurable in the Flume
    configuration file.
    Create a Maven module and add the dependency:
    <dependencies>
        <dependency>
            <groupId>org.apache.flume</groupId>
            <artifactId>flume-ng-core</artifactId>
            <version>1.9.0</version>
        </dependency>
    </dependencies>
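Since a Flume installation already ships flume-ng-core and its dependencies under flume/lib, it is usually safer to mark the dependency with provided scope so it is not bundled into your jar a second time (a suggested tweak, not part of the original setup):

```xml
<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
    <!-- provided: the Flume runtime supplies this jar from flume/lib -->
    <scope>provided</scope>
</dependency>
```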
import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.PollableSource;
import org.apache.flume.conf.Configurable;
import org.apache.flume.event.SimpleEvent;
import org.apache.flume.source.AbstractSource;

/**
 * Simulates data production with a for loop, then decorates each
 * event body with a prefix and suffix taken from the configuration file.
 */

public class MySource extends AbstractSource implements Configurable, PollableSource {

    private String prefix;
    private String suffix;

    // Core logic for receiving (here: generating) data
    @Override
    public Status process() throws EventDeliveryException {

        Status status = null;
        try {
            // Generate five events, one every two seconds
            for (int i = 0; i < 5; i++) {
                SimpleEvent event = new SimpleEvent();
                event.setBody((prefix + "===" + i + "====" + suffix).getBytes());
                // Hand the event to the channel processor, which forwards it to the channel
                getChannelProcessor().processEvent(event);
                Thread.sleep(2000);
            }
            status = Status.READY;
        } catch (Exception e) {
            e.printStackTrace();
            status = Status.BACKOFF;
        }
        return status;
    }

    // Backoff timing used by the framework when process() returns BACKOFF
    @Override
    public long getBackOffSleepIncrement() {
        return 0;
    }

    @Override
    public long getMaxBackOffSleepInterval() {
        return 0;
    }

    // Read the prefix/suffix from the agent configuration file;
    // suffix falls back to "biedata" when not configured
    @Override
    public void configure(Context context) {
        prefix = context.getString("prefix");
        suffix = context.getString("suffix", "biedata");
    }
}

Testing: package the code into a jar and place it under flume/lib/.
Create the configuration file: under flume/job, run vim Mysource-flume-logger.conf
and add the following:

# Name the components on this agent
a2.sources = r1 
a2.sinks = k1
a2.channels = c1 

# Describe/configure the source
a2.sources.r1.type = <fully-qualified class name of MySource>
a2.sources.r1.prefix = qianzhui
a2.sources.r1.suffix = houzhui

# Describe the sink
a2.sinks.k1.type = logger


# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1 
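With the jar in flume/lib and the configuration saved, the agent can be started from the Flume installation directory (the agent name passed to --name must match the a2 prefix used in the config):

```shell
bin/flume-ng agent \
  --conf conf \
  --conf-file job/Mysource-flume-logger.conf \
  --name a2 \
  -Dflume.root.logger=INFO,console
```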

Observe the result: every 2 s a line of data is printed to the console.

Custom Sink

A sink continuously polls the channel for events, removes them in batches, and writes them to a storage or indexing system, or forwards them to another Flume agent.
Sinks are fully transactional. Before removing a batch of events from the channel, each sink starts a transaction with the channel. Once the batch has been successfully written to the storage system or to the next Flume agent, the sink commits the transaction via the channel. When the transaction is committed, the channel removes the events from its internal buffer.
According to the official documentation, a custom sink must extend the AbstractSink class and implement the Configurable interface.

Requirement: use Flume to receive data, add a prefix and suffix to each event on the sink side,
and print it to the console. Both are configurable in the Flume job configuration file.
import org.apache.flume.*;
import org.apache.flume.conf.Configurable;
import org.apache.flume.sink.AbstractSink;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

/**
 * Prints data via an SLF4J logger, decorating each incoming event
 * with a prefix and suffix taken from the configuration file.
 */
public class MySink extends AbstractSink implements Configurable {

    // Configurable prefix and suffix
    private String prefix;
    private String suffix;
    // Class-level logger
    private Logger logger = LoggerFactory.getLogger(MySink.class);

    // Logic for writing data out of the channel
    @Override
    public Status process() throws EventDeliveryException {

        // 1. Declare the status to return
        Status status = null;
        // 2. Get the channel bound to this sink
        Channel channel = getChannel();
        // 3. Get a transaction from the channel
        Transaction transaction = channel.getTransaction();
        // 4. Begin the transaction
        transaction.begin();
        try {
            // 5. Take an event from the channel
            Event event = channel.take();
            // 6. The channel may be empty, so check for null
            if (event != null) {
                // 7. Write the event out by logging it with prefix and suffix
                logger.info(prefix + "-->" + new String(event.getBody()) + "<--" + suffix);
            }
            // 8. Commit the transaction
            transaction.commit();
            // 9. Report READY
            status = Status.READY;
        } catch (Exception e) {
            e.printStackTrace();
            // 10. Report BACKOFF
            status = Status.BACKOFF;
            // 11. Roll the transaction back
            transaction.rollback();
        } finally {
            // 12. Close the transaction
            transaction.close();
        }
        return status;
    }

    // Read prefix/suffix from the agent configuration file
    // (note: the prefix is read from the property key "aaa", matching the config below;
    // suffix falls back to "abc" when not configured)
    @Override
    public void configure(Context context) {
        prefix = context.getString("aaa");
        suffix = context.getString("suffix", "abc");
    }
}

Testing: package the code into a jar and place it under flume/lib/.
Then add the following configuration file:

# Name the components on this agent
a2.sources = r1 
a2.sinks = k1
a2.channels = c1 

# Describe/configure the source
a2.sources.r1.type = netcat
a2.sources.r1.bind = localhost
a2.sources.r1.port = 44444

# Describe the sink
a2.sinks.k1.type = <fully-qualified class name of MySink>
a2.sinks.k1.aaa = qianzhui


# Use a channel which buffers events in memory
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a2.sources.r1.channels = c1
a2.sinks.k1.channel = c1 
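As with the source test, start the agent and then feed data to the netcat source from a second terminal (run from the Flume installation directory; the configuration file name mysink-flume-logger.conf is assumed here):

```shell
bin/flume-ng agent \
  --conf conf \
  --conf-file job/mysink-flume-logger.conf \
  --name a2 \
  -Dflume.root.logger=INFO,console

# in another terminal, send test lines to the netcat source
nc localhost 44444
```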

Observe the result: each line sent to the netcat source is logged with the configured prefix and the default suffix.
