Flume Monitoring

The goal is to monitor how many events the Flume source, channel, and sink have each transferred, and whether the gap between those counts (the backlog) grows too large.

Official documentation link

Configuration

# Name the components on this agent
a1.sources = r1
a1.sinks = k1
a1.channels = c1

# Describe/configure the custom exec source
a1.sources.r1.type = com.onlinelog.analysis.ExecSourceJSON
a1.sources.r1.hostname = hadoop001
a1.sources.r1.servicename = namenode


# Describe the sink
a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = onlinelog
a1.sinks.k1.brokerList = hadoop001:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 1
a1.sinks.k1.channel = c1

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.keep-alive = 90
a1.channels.c1.capacity = 2000000
a1.channels.c1.transactionCapacity = 6000

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

Startup script

#!/bin/bash
flume=/home/hadoop/app/apache-flume-1.6.0-cdh5.7.0-bin

$flume/bin/flume-ng agent \
  -c $flume/conf \
  -f /home/hadoop/flume/online/exec_memory_kafka.properties \
  -n a1 \
  -Dflume.root.logger=INFO,console \
  -Dflume.monitoring.type=http \
  -Dflume.monitoring.port=34545
# -Dflume.monitoring.type=http selects HTTP as the reporting type;
# -Dflume.monitoring.port sets the port of the built-in HTTP metrics service.

Access

http://hadoop001:34545/metrics
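
The same endpoint can be queried from the command line. A minimal sketch, assuming curl and python3 are available on the host (python3 -m json.tool is only used to pretty-print the response and can be dropped):

# Fetch the metrics JSON exposed by the agent's HTTP monitoring service
curl -s http://hadoop001:34545/metrics | python3 -m json.tool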

Result

{"SINK.k1":{"ConnectionCreatedCount":"0","BatchCompleteCount":"0","EventDrainAttemptCount":"0","BatchEmptyCount":"0","StartTime":"1562743746188","BatchUnderflowCount":"0","ConnectionFailedCount":"0","ConnectionClosedCount":"0","Type":"SINK","RollbackCount":"0","EventDrainSuccessCount":"67","KafkaEventSendTimer":"3610","StopTime":"0"},"CHANNEL.c1":{"ChannelCapacity":"2000000","ChannelFillPercentage":"0.0","Type":"CHANNEL","ChannelSize":"0","EventTakeSuccessCount":"67","EventTakeAttemptCount":"68","StartTime":"1562743741409","EventPutAttemptCount":"67","EventPutSuccessCount":"67","StopTime":"0"},"SOURCE.r1":{"EventReceivedCount":"68","AppendBatchAcceptedCount":"0","Type":"SOURCE","EventAcceptedCount":"67","AppendReceivedCount":"0","StartTime":"1562743741695","AppendAcceptedCount":"0","OpenConnectionCount":"0","AppendBatchReceivedCount":"0","StopTime":"0"}}

In SOURCE, "EventReceivedCount": "68" means the source has read 68 messages from the file;

In CHANNEL, "EventPutSuccessCount": "67" means 67 messages were successfully stored in the channel;

In SINK, "EventDrainSuccessCount": "67" means 67 messages were successfully sent to Kafka.
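
Watching the gap between these counters over time is how the deviation mentioned at the top is detected: if EventReceivedCount keeps growing while EventDrainSuccessCount lags far behind, or ChannelSize stays high, events are piling up in the channel. Below is a minimal polling sketch, assuming curl and python3 exist on the host and the agent started by the script above is running; the threshold, interval, and warning message are arbitrary example values, not anything built into Flume:

#!/bin/bash
# Poll the Flume HTTP metrics endpoint and warn when the memory channel backs up.
METRICS_URL=http://hadoop001:34545/metrics
THRESHOLD=100000          # example value: warn when >100k events sit in the channel
INTERVAL=60               # seconds between polls

while true; do
  # Extract CHANNEL.c1 -> ChannelSize from the metrics JSON
  size=$(curl -s "$METRICS_URL" | python3 -c \
    'import sys, json; print(json.load(sys.stdin)["CHANNEL.c1"]["ChannelSize"])')
  if [ "${size:-0}" -gt "$THRESHOLD" ]; then
    echo "$(date '+%F %T') WARN channel c1 is backing up: ChannelSize=$size"
  fi
  sleep "$INTERVAL"
done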
