kafka->logstash

1. Install Kafka

See: Kafka installation

2. Install Logstash

See: Logstash installation

3. Sending Kafka data into Logstash

3.1 Notes

  • Note: the Kafka version must be kafka_2.10-0.10.0.1 (the broker version has to match the version supported by the Logstash kafka input plugin)
  • For the exact compatibility matrix between Kafka, the kafka client, and Logstash, see: https://www.elastic.co/guide/en/logstash/current/plugins-inputs-kafka.html

3.2 Add hosts entries on the machines involved

192.168.200.21 elk-node1
192.168.200.81 flume
192.168.200.91 kafka

3.3 Configure and start Kafka

mkdir -p /data/kafka/kafka-logs
grep '^[a-z]' /usr/local/kafka/config/server.properties
broker.id=0
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/data/kafka/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=kafka:2181
zookeeper.connection.timeout.ms=6000
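Note that `zookeeper.connect=kafka:2181` relies on the hosts entry configured in 3.2. If producers or consumers connect from other machines (e.g. the flume host), the broker should also advertise a resolvable address. A hedged sketch of the relevant `server.properties` lines (whether you need them depends on your network setup):

```
# Assumption: clients on other hosts must be able to resolve "kafka".
# These settings make the broker listen on and advertise that name.
listeners=PLAINTEXT://kafka:9092
advertised.listeners=PLAINTEXT://kafka:9092
```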
  • Start ZooKeeper (`-daemon` already runs the process in the background, so no trailing `&` is needed)
bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
  • Start Kafka
bin/kafka-server-start.sh -daemon config/server.properties
  • Create a topic
bin/kafka-topics.sh --create --zookeeper kafka:2181 --replication-factor 1 --partitions 1 --topic zsdaitest
  • Produce messages
bin/kafka-console-producer.sh --broker-list kafka:9092 --topic zsdaitest
  • Consume messages (the new consumer uses --bootstrap-server; do not pass --zookeeper at the same time)
bin/kafka-console-consumer.sh --bootstrap-server kafka:9092 --topic zsdaitest --from-beginning
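The start and topic-creation steps above can be combined into a small wrapper script. This is only a sketch: `KAFKA_HOME` and the sleep durations are assumptions, and the topic name is the one used above.

```shell
#!/bin/sh
# Sketch of the startup sequence above; adjust KAFKA_HOME to your install.
KAFKA_HOME=${KAFKA_HOME:-/usr/local/kafka}
if [ -d "$KAFKA_HOME" ]; then
    cd "$KAFKA_HOME" || exit 1
    # -daemon already backgrounds each process, so no trailing '&' is needed.
    bin/zookeeper-server-start.sh -daemon config/zookeeper.properties
    sleep 5   # give ZooKeeper time to come up (assumed delay)
    bin/kafka-server-start.sh -daemon config/server.properties
    sleep 5   # give the broker time to register with ZooKeeper
    bin/kafka-topics.sh --create --zookeeper kafka:2181 \
        --replication-factor 1 --partitions 1 --topic zsdaitest
else
    echo "Kafka not found at $KAFKA_HOME; nothing started"
fi
```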

3.4 Configure and start Logstash

  • vim 1-inputKafka.conf
input {
    kafka {
        topics => ["zsdaitest"]
        bootstrap_servers => "kafka:9092"
    }
}
output{
    stdout{
        codec => rubydebug
    }
}
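Once the stdout test works, the output section is typically pointed at Elasticsearch instead. A sketch, assuming Elasticsearch runs on elk-node1:9200 (the host from the hosts file in 3.2); the `group_id` and index name are made-up examples:

```
input {
    kafka {
        topics            => ["zsdaitest"]
        bootstrap_servers => "kafka:9092"
        group_id          => "logstash"   # assumed consumer group name
    }
}
output {
    elasticsearch {
        hosts => ["elk-node1:9200"]
        index => "zsdaitest-%{+YYYY.MM.dd}"
    }
}
```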
  • Start
./bin/logstash -f 1-inputKafka.conf
  • After starting Logstash, type any text at the Kafka producer console (e.g. kafka test1) and watch the Logstash console; if the same text shows up there, the pipeline works.
