The previous lab covered the integration of Flume with Kafka in detail. This lab continues in the same environment and uses Java to test connectivity between Flume and Kafka.
Flume introduction and installation: https://www.shiyanlou.com/courses/237
This course is of intermediate difficulty and is aimed at users with a Hadoop big-data background; prior exposure to data collection will make it easier to follow.
This lab builds on the previous one: the required components are already installed and the corresponding services have been started.
If the services are not running correctly, refer to the previous lab.
As the hadoop user, go to the conf directory of the Flume installation and create flume-kafka.properties with vi:
hadoop@945f39ae074b:/opt/apache-flume-1.6.0-bin/conf$ sudo vi flume-kafka.properties
Add the following:
# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1
# Describe/configure the source
a1.sources.r1.type = avro
a1.sources.r1.bind = localhost
a1.sources.r1.port = 41414
# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = testflume
a1.sinks.k1.brokerList = localhost:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 20
# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100
# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1
Create the topic testflume referenced in flume-kafka.properties:
$ bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic testflume
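To confirm the topic was created, you can optionally describe it; the output lists the partition and replica assignment:
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic testflume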
In the /opt directory, as the hadoop user, download slf4j-1.7.22.zip with wget and extract it with unzip.
$ sudo wget http://labfile.oss.aliyuncs.com/courses/785/slf4j-1.7.22.zip
$ sudo unzip slf4j-1.7.22.zip
Use cp to copy slf4j-nop-1.7.22.jar into the lib directory of the Flume installation. This jar is the no-operation SLF4J binding; having it on the classpath prevents SLF4J "no binding" warnings when the client code runs.
$ sudo cp slf4j-1.7.22/slf4j-nop-1.7.22.jar apache-flume-1.6.0-bin/lib/
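As a quick optional check that the jar landed where Flume (and later the Eclipse project) will pick it up:
$ ls /opt/apache-flume-1.6.0-bin/lib/ | grep slf4j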
Double-click to open Eclipse, then choose File -> New -> Java Project.
Enter a Project name, select JavaSE-1.7, and click Finish.
Right-click src -> New -> Package.
Enter the package name test and click Finish.
Add the Flume jars:
Click Libraries -> Add External JARs... -> select the jars in the lib directory of the Flume installation -> confirm.
Click OK.
Right-click the test package -> New -> Class.
Enter the class name MyTest and confirm.
MyTest.java is as follows:
package test;
import java.nio.charset.Charset;
import org.apache.flume.Event;
import org.apache.flume.EventDeliveryException;
import org.apache.flume.api.RpcClient;
import org.apache.flume.api.RpcClientFactory;
import org.apache.flume.event.EventBuilder;
public class MyTest {
    public static void main(String[] args) throws Exception {
        MyRpcClientFacade client = new MyRpcClientFacade();
        // Initialize the client with the remote Flume agent's host and port
        client.init("localhost", 41414);

        // Send 1000 events to the remote Flume agent. That agent should be
        // configured to listen with an AvroSource.
        String sampleData = "=====Hello Flume=====";
        for (int i = 0; i < 1000; i++) {
            client.sendDataToFlume(sampleData);
        }
        client.cleanUp();
    }
}

class MyRpcClientFacade {
    private String hostname;
    private int port;
    private RpcClient client;

    public void init(String hostname, int port) {
        // Set up the RPC connection
        this.hostname = hostname;
        this.port = port;
        this.client = RpcClientFactory.getDefaultInstance(hostname, port);
        // Use the following method to create a thrift client (instead of the
        // above line):
        // this.client = RpcClientFactory.getThriftInstance(hostname, port);
    }

    public void sendDataToFlume(String data) {
        // Create a Flume Event object that encapsulates the sample data
        Event event = EventBuilder.withBody(data, Charset.forName("UTF-8"));
        // Send the event
        try {
            client.append(event);
        } catch (EventDeliveryException e) {
            // Clean up and recreate the client
            client.close();
            client = null;
            client = RpcClientFactory.getDefaultInstance(hostname, port);
            // Use the following method to create a thrift client (instead of
            // the above line):
            // this.client = RpcClientFactory.getThriftInstance(hostname, port);
        }
    }

    public void cleanUp() {
        // Close the RPC connection
        client.close();
    }
}
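If you prefer to work outside Eclipse, a minimal command-line sketch is shown below; it assumes the source file sits at test/MyTest.java under the current directory and that the Flume jars are in /opt/apache-flume-1.6.0-bin/lib. You can compile now, but run the second command only after the agent below has been started:
$ javac -cp "/opt/apache-flume-1.6.0-bin/lib/*" test/MyTest.java
$ java -cp ".:/opt/apache-flume-1.6.0-bin/lib/*" test.MyTest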
Start-up order:
1) Start the Flume agent with flume-ng:
hadoop@945f39ae074b:/opt/apache-flume-1.6.0-bin$ bin/flume-ng agent -c ./conf/ -f conf/flume-kafka.properties -Dflume.root.logger=DEBUG,console -n a1
2) Open another Xfce terminal for Kafka and start the console consumer:
hadoop@945f39ae074b:/opt/kafka_2.10-0.10.0.0$ bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic testflume
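Optionally, before running the Java program, you can sanity-check the Avro source with Flume's bundled avro-client, run from the Flume installation directory; /tmp/test.log here is just a small file created for the test, and its lines should show up in the consumer terminal:
$ echo "avro client test" > /tmp/test.log
$ bin/flume-ng avro-client -H localhost -p 41414 -F /tmp/test.log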
Back in Eclipse, right-click MyTest.java and run the main method.
Return to the Kafka Xfce terminal: the consumer should continuously print the messages sent by the program (the string =====Hello Flume=====, repeated by the loop).
By modifying Flume's configuration file and writing a program against the Flume client API, we have verified connectivity between Flume and Kafka. For more advanced usage, consult the Flume API documentation.