1) Case Requirements
Use Flume to collect a server's local logs; logs of different types must be sent to different analysis systems.
2) Requirements Analysis
In real-world development, a single server can produce many types of logs, and different log types may need to go to different analysis systems. This is where the Multiplexing structure in Flume's topology comes in: the multiplexing ChannelSelector routes each event to a Channel based on the value of a particular key in the event's Header. We therefore need a custom Interceptor that writes a different value into that Header key for each type of event.
In this case we simulate logs with data sent to a netcat port, using single digits and single letters to stand for the different log types. The custom interceptor must distinguish digits from letters so that each kind is delivered to its own analysis system (Channel).
3) Implementation Steps
(1) Create a Maven project and add the following dependency
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <parent>
        <artifactId>HdfsTest</artifactId>
        <groupId>com.caron.hdfs</groupId>
        <version>1.0-SNAPSHOT</version>
        <relativePath>../HdfsTest/pom.xml</relativePath>
    </parent>
    <modelVersion>4.0.0</modelVersion>
    <groupId>com.caron.flume</groupId>
    <artifactId>flume</artifactId>
    <dependencies>
        <dependency>
            <groupId>org.apache.flume</groupId>
            <artifactId>flume-ng-core</artifactId>
            <version>1.9.0</version>
        </dependency>
    </dependencies>
</project>
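An optional refinement, not required by the case: since the Flume runtime already ships flume-ng-core, the dependency can be marked provided so that its classes are not bundled into your jar. A minimal sketch of the changed dependency:

<dependency>
    <groupId>org.apache.flume</groupId>
    <artifactId>flume-ng-core</artifactId>
    <version>1.9.0</version>
    <!-- provided: the Flume installation supplies these classes at runtime -->
    <scope>provided</scope>
</dependency>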
(2) Define the MyInterceptor class, implementing the Interceptor interface
package com.caron.flume.interceptor;

import org.apache.flume.Context;
import org.apache.flume.Event;
import org.apache.flume.interceptor.Interceptor;

import java.nio.charset.StandardCharsets;
import java.util.List;
import java.util.Map;

/**
 * @author Caron
 * @create 2020-05-05-9:33
 * @Description Tags each event's header so the multiplexing selector can route it
 * @Version
 */
public class MyInterceptor implements Interceptor {

    @Override
    public void initialize() {
    }

    @Override
    public Event intercept(Event event) {
        // Get the headers of the incoming event
        Map<String, String> headers = event.getHeaders();
        // Get the body
        byte[] body = event.getBody();
        // Guard against empty bodies so charAt(0) cannot throw
        if (body.length == 0) {
            return event;
        }
        // Look at the first character of the body
        char c = new String(body, StandardCharsets.UTF_8).charAt(0);
        if ((c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')) {
            // Letters are tagged "aaa" (routed to channel c1 in the config below)
            headers.put("xxx", "aaa");
        } else {
            // Everything else, e.g. digits, is tagged "bbb" (routed to channel c2)
            headers.put("xxx", "bbb");
        }
        return event;
    }

    @Override
    public List<Event> intercept(List<Event> list) {
        for (Event event : list) {
            intercept(event);
        }
        return list;
    }

    @Override
    public void close() {
    }

    /**
     * The framework calls this Builder to create Interceptor instances.
     */
    public static class MyBuilder implements Interceptor.Builder {

        /**
         * Factory method.
         * @return a new Interceptor
         */
        @Override
        public Interceptor build() {
            return new MyInterceptor();
        }

        /**
         * Reads configuration for the interceptor (none needed here).
         * @param context the configuration context
         */
        @Override
        public void configure(Context context) {
        }
    }
}
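Before packaging, the interceptor can be sanity-checked locally. The following is a minimal sketch (MyInterceptorTest is a hypothetical class added here only for illustration; EventBuilder is Flume's stock event factory):

package com.caron.flume.interceptor;

import java.nio.charset.StandardCharsets;
import org.apache.flume.Event;
import org.apache.flume.event.EventBuilder;

public class MyInterceptorTest {
    public static void main(String[] args) {
        MyInterceptor interceptor = new MyInterceptor();
        interceptor.initialize();
        Event letter = EventBuilder.withBody("hello", StandardCharsets.UTF_8);
        Event digit  = EventBuilder.withBody("12345", StandardCharsets.UTF_8);
        // Expect xxx=aaa in the letter event's headers and xxx=bbb in the digit event's
        System.out.println(interceptor.intercept(letter).getHeaders());
        System.out.println(interceptor.intercept(digit).getHeaders());
        interceptor.close();
    }
}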
Once the class compiles, package the project and put the resulting jar into Flume's lib directory.
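A minimal sketch of that step, assuming the jar name follows from the pom above and Flume is installed under /opt/module/flume as elsewhere in this case:

mvn clean package
cp target/flume-1.0-SNAPSHOT.jar /opt/module/flume/lib/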
(3) Edit the Flume configuration files
On hadoop101, configure Flume1 with one netcat source and two avro sinks (one per channel), together with the corresponding ChannelSelector and interceptor.
On hadoop101:
sudo vim /opt/module/flume/job/group4/flume1.conf
#Name the components on this agent
a1.sources = r1
a1.sinks = k1 k2
a1.channels = c1 c2
# Describe/configure the source
a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444
# Interceptor chain
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = com.caron.flume.interceptor.MyInterceptor$MyBuilder
# Multiplexing channel selector: route by the value of header key "xxx"
a1.sources.r1.selector.type = multiplexing
a1.sources.r1.selector.header = xxx
a1.sources.r1.selector.mapping.aaa = c1
a1.sources.r1.selector.mapping.bbb = c2
# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = hadoop102
a1.sinks.k1.port = 4141
a1.sinks.k2.type = avro
a1.sinks.k2.hostname = hadoop103
a1.sinks.k2.port = 4242
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Use a channel which buffers events in memory
a1.channels.c2.type = memory
a1.channels.c2.capacity = 1000
a1.channels.c2.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1 c2
a1.sinks.k1.channel = c1
a1.sinks.k2.channel = c2
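One optional detail: the multiplexing selector also supports a default channel for events whose header value matches no mapping. It is not strictly needed here, since our interceptor tags every event as either aaa or bbb, but it would look like:

a1.sources.r1.selector.default = c2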
On hadoop102:
sudo vim /opt/module/flume/job/group4/flume2.conf
a2.sources = r1
a2.sinks = k1
a2.channels = c1
a2.sources.r1.type = avro
a2.sources.r1.bind = hadoop102
a2.sources.r1.port = 4141
a2.sinks.k1.type = logger
a2.channels.c1.type = memory
a2.channels.c1.capacity = 1000
a2.channels.c1.transactionCapacity = 100
a2.sinks.k1.channel = c1
a2.sources.r1.channels = c1
On hadoop103:
sudo vim /opt/module/flume/job/group4/flume3.conf
a3.sources = r1
a3.sinks = k1
a3.channels = c1
a3.sources.r1.type = avro
a3.sources.r1.bind = hadoop103
a3.sources.r1.port = 4242
a3.sinks.k1.type = logger
a3.channels.c1.type = memory
a3.channels.c1.capacity = 1000
a3.channels.c1.transactionCapacity = 100
a3.sinks.k1.channel = c1
a3.sources.r1.channels = c1
(4) Start the Flume agents on hadoop101, hadoop102, and hadoop103, in the order shown below: the downstream agents (Flume3 and Flume2) must be listening on their avro ports before Flume1 starts, otherwise Flume1's avro sinks cannot connect.
On hadoop103:
bin/flume-ng agent --conf conf/ --name a3 --conf-file job/group4/flume3.conf -Dflume.root.logger=INFO,console
On hadoop102:
bin/flume-ng agent --conf conf/ --name a2 --conf-file job/group4/flume2.conf -Dflume.root.logger=INFO,console
On hadoop101:
bin/flume-ng agent --conf conf/ --name a1 --conf-file job/group4/flume1.conf -Dflume.root.logger=INFO,console
(5) On hadoop101, use netcat to send letters and digits to localhost:44444.
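For example (each line typed becomes one event; the exact strings are arbitrary):

nc localhost 44444
hello
world
12345
678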
(6) Observe the logs printed on hadoop102 and hadoop103: events that start with a letter should appear in Flume2's logger output on hadoop102, and events that start with a digit in Flume3's output on hadoop103.