1. For the Kafka setup, refer to:
https://my.oschina.net/u/1591525/blog/2251910
2. Flume configuration
Create a file named kafka.properties under Flume's conf directory:
agent.sources = s1
agent.channels = c1
agent.sinks = k1
agent.sources.s1.type=exec
agent.sources.s1.command=tail -F /tmp/logs/kafka.log
agent.sources.s1.channels=c1
agent.channels.c1.type=memory
agent.channels.c1.capacity=10000
agent.channels.c1.transactionCapacity=100
# Configure the Kafka sink
agent.sinks.k1.type= org.apache.flume.sink.kafka.KafkaSink
# Kafka broker address and port
agent.sinks.k1.brokerList=master:9092
# Kafka topic to write to
agent.sinks.k1.topic=kafkatest
# Serialization class
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.channel=c1
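Because the exec source tails /tmp/logs/kafka.log, that file should exist before the agent starts, otherwise there is nothing to tail. A minimal preparation step, using the path from the config above, might look like this:

mkdir -p /tmp/logs
touch /tmp/logs/kafka.log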
3. Connectivity test
Start ZooKeeper and Kafka:
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties &
bin/kafka-server-start.sh -daemon config/server.properties &
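Depending on whether the broker has automatic topic creation enabled, you may need to create the kafkatest topic yourself before sending data. With an older Kafka release whose tooling still talks to ZooKeeper, a sketch would be (assuming ZooKeeper listens on master:2181):

bin/kafka-topics.sh --create --zookeeper master:2181 --replication-factor 1 --partitions 1 --topic kafkatest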
Start Flume:
bin/flume-ng agent -n agent -c conf -f conf/kafka.properties
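To verify the pipeline end to end, append a line to the tailed log file and watch it arrive on the topic with the console consumer. The broker address and topic name below are taken from the sink configuration above; the echoed message is just an illustrative test string:

echo "hello flume-kafka test" >> /tmp/logs/kafka.log
bin/kafka-console-consumer.sh --bootstrap-server master:9092 --topic kafkatest --from-beginning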
For the Kafka producer and consumer code, refer to the Spring Boot + Kafka integration guide.