Flume HTTP source & HDFS sink in Docker

Scenario:
Use Flume's HTTP source to receive data and the HDFS sink to write it into HDFS. The configuration and notes are below.
HDFS sink
See the user guide:
http://flume.apache.org/FlumeUserGuide.html#netcat-source
Modify the configuration file


    1. Example from the official documentation

    2. Modify the configuration
    root@master:/usr/local/flume/conf# cp flume-conf.properties hdfs-s.conf
    root@master:/usr/local/flume/conf# vim hdfs-s.conf 
    # Name the components on this agent
    a1.sources = r1
    a1.sinks = k1
    a1.channels = c1

    # Describe/configure the source
    a1.sources.r1.type = http
    a1.sources.r1.port = 8888

    # Describe the sink
    a1.sinks.k1.type = hdfs
    a1.sinks.k1.channel = c1
    a1.sinks.k1.hdfs.path = /home/flume-log/%y-%m-%d/%H%M/%S
    a1.sinks.k1.hdfs.filePrefix = events-
    a1.sinks.k1.hdfs.round = true
    a1.sinks.k1.hdfs.roundValue = 10
    a1.sinks.k1.hdfs.roundUnit = minute
    a1.sinks.k1.hdfs.useLocalTimeStamp = true
    # Use a channel which buffers events in memory
    a1.channels.c1.type = memory
    #a1.channels.c1.capacity = 1000
    #a1.channels.c1.transactionCapacity = 100

    # Bind the source and sink to the channel
    a1.sources.r1.channels = c1
    a1.sinks.k1.channel = c1
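
    By default the HDFS sink writes SequenceFiles and rolls a new file roughly every 30 seconds, 10 events, or 1024 bytes, so a test run produces many small files. The following optional settings are a sketch using standard HDFS sink parameters (not part of the configuration above) to write plain text and roll by time only:
    a1.sinks.k1.hdfs.fileType = DataStream
    a1.sinks.k1.hdfs.writeFormat = Text
    a1.sinks.k1.hdfs.rollInterval = 60
    a1.sinks.k1.hdfs.rollCount = 0
    a1.sinks.k1.hdfs.rollSize = 0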

    3. Note: a timestamp is required, otherwise an error occurs. Because hdfs.path uses time escape sequences (%y-%m-%d/%H%M/%S), the HDFS sink needs a timestamp on every event; setting hdfs.useLocalTimeStamp = true (as above) supplies it from the local clock. Without it the sink fails complaining that the timestamp header is null. An interceptor-based alternative is sketched below.
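    As an alternative (a sketch, not part of the original configuration), a timestamp interceptor on the source adds a timestamp header to every event, which likewise lets the sink resolve the escapes in hdfs.path:
    a1.sources.r1.interceptors = i1
    a1.sources.r1.interceptors.i1.type = timestamp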

    4. Run the agent
    root@master:/usr/local/flume/conf# flume-ng agent --conf . --conf-file hdfs-s.conf --name a1 -Dflume.root.logger=INFO,console
    5. Result of the run (the agent starts and waits for events on port 8888)

    6. Test by sending JSON data (a curl sketch follows)
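    A minimal test, assuming the agent is running on this host with port 8888 as configured above. The HTTP source's default JSONHandler expects a JSON array of events, each with optional headers and a body:
    root@master:~# curl -X POST -H 'Content-Type: application/json' \
      -d '[{"headers": {"host": "master"}, "body": "hello flume"}]' \
      http://localhost:8888
    The files written by the sink can then be listed on HDFS under the configured hdfs.path:
    root@master:~# hdfs dfs -ls -R /home/flume-log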
    7. Start the fully distributed Hadoop cluster
    root@master:/usr/local/hadoop/sbin# /usr/local/hadoop/sbin/start-all.sh 
    8. Start the Spark cluster
    root@master:/usr/local/hadoop/sbin# /usr/local/spark/sbin/start-all.sh 
    9. An error shows up here; Flume has only written the data into files, and for now this has nothing to do with the Spark cluster.
