Connecting Flume to HDFS and Hive

1. Connecting Flume to HDFS

  1. Open the Flume configuration




  2. Configure flume.conf


 # Name the components on this agent
 a1.sources = r1
 a1.sinks = k1
 a1.channels = c1

 # sources
 a1.sources.r1.type = netcat
 a1.sources.r1.bind = 0.0.0.0
 a1.sources.r1.port = 41414

 # sinks
 a1.sinks.k1.type = hdfs
 a1.sinks.k1.hdfs.path = hdfs://slave1/flume/events/%y-%m-%d/%H%M/%S
 a1.sinks.k1.hdfs.filePrefix = events-
 a1.sinks.k1.hdfs.round = true
 a1.sinks.k1.hdfs.roundValue = 10
 a1.sinks.k1.hdfs.roundUnit = minute
 a1.sinks.k1.hdfs.useLocalTimeStamp = true
 a1.sinks.k1.hdfs.batchSize = 10
 a1.sinks.k1.hdfs.fileType = DataStream

 # channels
 a1.channels.c1.type = memory
 a1.channels.c1.capacity = 1000
 a1.channels.c1.transactionCapacity = 100

 # Bind the source and sink to the channel
 a1.sources.r1.channels = c1
 a1.sinks.k1.channel = c1
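
With useLocalTimeStamp = true and 10-minute rounding, the escape sequences in hdfs.path are filled in from the event's arrival time, so files end up in directories such as the following (illustrative names only; the actual values depend on when the events arrive and on the default roll settings):

 /flume/events/18-05-02/1050/00/events-.1525229456789.tmp    # file still being written
 /flume/events/18-05-02/1050/00/events-.1525229456789        # after the file has been rolled
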
  3. Test communication with telnet
telnet slave1 41414
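
Once connected, every line typed is delivered as one Flume event, and the netcat source acknowledges each accepted event with OK, roughly like this:

 hello flume
 OK
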
  4. Check the Flume logs to locate the file on HDFS


  5. View the file contents; the test succeeds
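
The same check can be done from the command line with hdfs dfs (the date/time directories depend on when the events arrived; the path reuses the illustrative example above):

 hdfs dfs -ls -R /flume/events
 hdfs dfs -cat /flume/events/18-05-02/1050/00/events-.1525229456789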



2. Connecting Flume to Hive from Windows

  1. Configure the Flume agent on the CDH side (Avro source, Hive sink)

 # Name the components on this agent
 a1.sources = r1
 a1.sinks = k1
 a1.channels = c1

 # source
 a1.sources.r1.type = avro
 a1.sources.r1.bind = 0.0.0.0
 a1.sources.r1.port = 43434

 # sink
 a1.sinks.k1.type = hive
 a1.sinks.k1.hive.metastore = thrift://192.168.18.33:9083
 a1.sinks.k1.hive.database = bd14
 a1.sinks.k1.hive.table = flume_log
 a1.sinks.k1.useLocalTimeStamp = true
 a1.sinks.k1.serializer = DELIMITED
 a1.sinks.k1.serializer.delimiter = "\t"
 a1.sinks.k1.serializer.serdeSeparator = '\t'
 a1.sinks.k1.serializer.fieldnames = id,time,context
 a1.sinks.k1.hive.txnsPerBatchAsk = 5

 # channel
 a1.channels.c1.type = memory
 a1.channels.c1.capacity = 1000
 a1.channels.c1.transactionCapacity = 100

 # Bind the source and sink to the channel
 a1.sources.r1.channels = c1
 a1.sinks.k1.channel = c1
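
The Hive sink writes through Hive's streaming ingest API, which normally also requires transactions to be enabled on the Hive side. The settings below are the usual ones from the Hive documentation, listed here as an assumption rather than as part of the original setup (under CDH they go into hive-site.xml via Cloudera Manager):

 hive.support.concurrency = true
 hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager
 hive.compactor.initiator.on = true
 hive.compactor.worker.threads = 1
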
  2. Configure Flume on Windows
 # Name the components on this agent
 a1.sources = r1
 a1.sinks = k1
 a1.channels = c1

 # source
 a1.sources.r1.type = spooldir
 a1.sources.r1.spoolDir = F:\\test
 a1.sources.r1.fileHeader = true

 # sink
 a1.sinks.k1.type = avro
 a1.sinks.k1.hostname = 192.168.18.34
 a1.sinks.k1.port = 43434

 # channel
 a1.channels.c1.type = memory
 a1.channels.c1.capacity = 1000
 a1.channels.c1.transactionCapacity = 100

 # Bind the source and sink to the channel
 a1.sources.r1.channels = c1
 a1.sinks.k1.channel = c1
  3. Create the log table in Hive



    The Flume documentation requires the Hive table to be bucketed and stored as ORC; in testing, if the ORC format is not declared, Hive receives no data.

create table flume_log(
id int
,time string
,context string
)
clustered by (id) into 3 buckets
stored as orc;
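
Depending on the Hive version, streaming ingest may additionally require the table to be declared transactional. If no data arrives even with bucketing and ORC in place, a variant like the following is worth trying (an assumption, not part of the original test):

create table flume_log(
id int
,time string
,context string
)
clustered by (id) into 3 buckets
stored as orc
tblproperties('transactional'='true');
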
  4. Create a log file in the monitored directory F:\test (a sample record layout is shown below)
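
Each record is a single line of tab-separated fields in the order given by serializer.fieldnames = id,time,context; the lines below are made up purely for illustration:

 1	2018-05-02 10:00:00	user login
 2	2018-05-02 10:00:01	open homepage
 3	2018-05-02 10:00:05	user logout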


  5. Start Flume from the Flume bin directory on Windows

flume-ng.cmd agent -conf-file ../conf/windows.conf -name a1 -property flume.root.logger=INFO,console
  6. Find a log file on Windows and drop it into F:\test (its contents follow the layout shown above)


  7. Once Flume has finished reading a file, the suffix .COMPLETED is appended to its name


  8. Query the Hive table
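
A quick check from the Hive shell (illustrative):

 select * from bd14.flume_log limit 10;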


  9. The test succeeds. The original plan was to query the Hive table through Impala, but Impala does not support ORC-format Hive tables, while the Flume Hive sink requires the target table to be stored as ORC, so Impala had to be set aside for now; this will be revisited once a workaround is found.

3. Problems Encountered

  1. Flume cannot connect to HDFS
    Fix: change a1.sinks.k1.hdfs.path = hdfs://slave1:9000/flume/events/%y-%m-%d/%H%M/%S
    to a1.sinks.k1.hdfs.path = hdfs://slave1/flume/events/%y-%m-%d/%H%M/%S

Cause: with Flume on CDH, the path only needs the host; when no port is given, the NameNode address is resolved from the cluster's Hadoop configuration (the CDH default RPC port is 8020, not 9000), so hard-coding port 9000 makes the connection fail.
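
To confirm which address the cluster actually uses, the default filesystem URI can be checked directly (output shown is illustrative):

 hdfs getconf -confKey fs.defaultFS
 # e.g. hdfs://slave1:8020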

  2. The files on HDFS contain garbled content



    Fix: add the following to the Flume configuration

a1.sinks.k1.hdfs.fileType = DataStream

Cause:
hdfs.fileType defaults to SequenceFile, so events are written in Hadoop's binary SequenceFile container rather than as plain text, which is why the file looks garbled
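
A quick way to confirm this is to look at the first bytes of the file, which for a SequenceFile start with the magic header SEQ (illustrative path reused from above):

 hdfs dfs -cat /flume/events/18-05-02/1050/00/events-.1525229456789 | head -c 3
 # SEQ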


  3. AvroRuntimeException: Excessively large list allocation request detected: 825373449 items!



    Fix: increase the Flume agent's Java heap size (a sketch follows)
    Cause: the Flume agent ran out of memory
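
Under CDH the heap is raised through the agent's Java heap setting in Cloudera Manager; on a plain Apache Flume installation the usual place is conf/flume-env.sh. The values below are illustrative assumptions, not the ones used in the original setup:

 # conf/flume-env.sh
 export JAVA_OPTS="-Xms512m -Xmx1024m"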

  4. NoClassDefFoundError: org/apache/hive/hcatalog/streaming/RecordWriter



    Fix:
    Locate the directory containing the Hive jars



    Locate the Flume jar directory, then copy the Hive jars into it:
cp /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/jars/hive-* /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/flume-ng/lib/

Cause: Flume is missing the Hive jars, which have to be copied over from the CDH parcel directory
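
After copying, it is worth checking that the streaming jar (which contains the missing RecordWriter class) actually landed in the Flume lib directory; the exact jar name depends on the CDH version:

 ls /opt/cloudera/parcels/CDH-5.11.1-1.cdh5.11.1.p0.4/lib/flume-ng/lib/ | grep hcatalog-streaming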

  5. EventDeliveryException: java.lang.NullPointerException: Expected timestamp in the Flume event headers, but it was null



    Cause: the timestamp option for the sink is not set
    Fix:
    Configure the sink in the Flume conf file as follows

a1.sinks.k1.useLocalTimeStamp = true

