Integrating Logstash with Kafka

1. Prerequisites

Logstash version: 6.1.1

Elasticsearch version: 6.1.1

Kibana version: 6.1.1

Kafka version: 0.8.2.1

2. Environment setup

Install and start Elasticsearch, Kibana, Kafka, and Logstash in the versions listed above; the following steps assume all of them are running and reachable.

3. Add the dependencies to pom.xml

```xml
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>4.11</version>
</dependency>

<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.1.0</version>
    <scope>runtime</scope>
</dependency>
```

4. Configure the Kafka appender in logback.xml

```xml
<!-- inside the <configuration> element of logback.xml -->
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="com.github.danielwegener.logback.kafka.encoding.LayoutKafkaMessageEncoder">
        <layout class="net.logstash.logback.layout.LogstashLayout">
            <includeContext>true</includeContext>
            <includeCallerData>true</includeCallerData>
            <customFields>{"system":"test"}</customFields>
            <fieldNames class="net.logstash.logback.fieldnames.ShortenedFieldNames"/>
        </layout>
        <charset>UTF-8</charset>
    </encoder>
    <!-- must match the topic used in the Logstash kafka input below -->
    <topic>abklog_topic</topic>
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.HostNameKeyingStrategy"/>
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>
    <producerConfig>bootstrap.servers=kafka_ip:kafka_port</producerConfig>
</appender>

<!-- route a specific logger to Kafka; replace the name with your base package -->
<logger name="com.example" additivity="false" level="INFO">
    <appender-ref ref="kafkaAppender"/>
</logger>

<root level="INFO">
    <appender-ref ref="kafkaAppender"/>
    <!-- "CONSOLE" is assumed to be a console appender defined elsewhere in logback.xml -->
    <appender-ref ref="CONSOLE"/>
</root>
```
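
With the appender in place, anything logged through SLF4J is encoded and published to the abklog_topic topic. A minimal sketch of application code (class name and messages are illustrative, not from the original article):

```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Any SLF4J call is routed through the kafkaAppender configured above
// and ends up as a message on the abklog_topic topic.
public class KafkaLogDemo {

    private static final Logger log = LoggerFactory.getLogger(KafkaLogDemo.class);

    public static void main(String[] args) {
        // Placeholder messages for demonstration only.
        log.info("application started, sending a test message to Kafka");
        log.error("simulated error for the abk_logs index", new IllegalStateException("demo"));
    }
}
```
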
5. Configure the Logstash pipeline

```conf
input {
  kafka {
    bootstrap_servers => "127.0.0.1:9099"
    # topic_id => "abklog_topic"
    # reset_beginning => false
    topics => ["abklog_topic"]
  }
}

filter {
  # pull timestamp, trace, span and msg out of each line
  # (a sample event is shown after this block)
  grok {
    match => {
      "message" => "(?m)%{TIMESTAMP_ISO8601:timestamp}\s+\[%{DATA:trace},%{DATA:span}\]\s+%{GREEDYDATA:msg}"
    }
  }

  # tag every event with the originating application
  mutate {
    add_field => { "appname" => "api" }
  }
}

output {
  elasticsearch {
    action => "index"
    hosts  => "127.0.0.1:9200"
    index  => "abk_logs"
  }
}
```
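
For reference, the grok pattern above expects plain-text lines of the form `timestamp [traceId,spanId] message` (the bracketed pair is the style printed by Spring Cloud Sleuth). A hypothetical input line, with made-up values, and the fields the filter would produce:

```text
2018-01-15 10:23:45.123 [5f2a9c1e8d3b4a7f,9c1e8d3b4a7f2a5f] GET /api/orders completed in 35 ms

timestamp => 2018-01-15 10:23:45.123
trace     => 5f2a9c1e8d3b4a7f
span      => 9c1e8d3b4a7f2a5f
msg       => GET /api/orders completed in 35 ms
appname   => api   (added by the mutate filter)
```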

6. Start the application and write some log output; you can see the log messages being consumed by Logstash.
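
To double-check that the events also reach Elasticsearch, the abk_logs index can be queried directly (with curl, Kibana, or any HTTP client). A minimal sketch using only the JDK; the host, port, and index name mirror the sample configuration above and may need adjusting:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

// Fetches the first page of documents from the abk_logs index created by
// the Logstash output above and prints the raw JSON search response.
public class AbkLogsCheck {

    public static void main(String[] args) throws Exception {
        URL url = new URL("http://127.0.0.1:9200/abk_logs/_search?pretty");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String line;
            while ((line = in.readLine()) != null) {
                System.out.println(line);
            }
        } finally {
            conn.disconnect();
        }
    }
}
```
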
