Flume on HDP

1. Flume configuration: flume-test.conf

producer.sources = source1
producer.sinks = sink1
producer.channels = channel1


producer.sources.source1.channels = channel1
producer.sources.source1.type = syslogudp
producer.sources.source1.bind = 127.0.0.1
producer.sources.source1.port = 41414


producer.sinks.sink1.channel = channel1
producer.sinks.sink1.type = org.apache.flume.plugins.KafkaSink
producer.sinks.sink1.metadata.broker.list=172.16.1.171:6667
producer.sinks.sink1.custom.partition.key=kafkaPartition
producer.sinks.sink1.custom.topic.name=kafkaTopic
producer.sinks.sink1.serializer.class=kafka.serializer.StringEncoder


producer.channels.channel1.type = memory
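
The memory channel above relies on Flume's defaults. For anything beyond a quick test it is worth sizing it explicitly; the following values are illustrative, not tuned for any particular workload:

```
producer.channels.channel1.capacity = 10000
producer.channels.channel1.transactionCapacity = 100
```

`capacity` is the maximum number of events held in the channel; `transactionCapacity` is the maximum number of events taken per transaction by the source or sink. Events in a memory channel are lost if the agent process dies.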


2. Flume command

flume-ng agent --conf conf/ -f /etc/flume/2.4.0.0-169/0/flume-test.conf -n producer
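
Once the agent is running, the syslogudp source can be smoke-tested by sending a single UDP datagram to the bind address and port from flume-test.conf (127.0.0.1:41414). A minimal sketch in Python, assuming a simple RFC 3164-style message body; the hostname and tag are placeholders:

```python
import socket

# Minimal RFC 3164-style syslog message.
# Priority 13 = facility user (1) * 8 + severity notice (5).
msg = "<13>Oct 11 22:14:15 myhost test: hello flume"

# Send one UDP datagram to the syslogudp source.
# Host and port match the bind/port in flume-test.conf.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg.encode("utf-8"), ("127.0.0.1", 41414))
sock.close()
```

If the pipeline is wired up correctly, the event should appear on the configured Kafka topic shortly after the send.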
