Integrating Spring Boot with ELK: shipping logs straight to Logstash, or routing them through Kafka

Environment: Spring Boot 2.1.3, Logback, Elasticsearch 6.8.2

When there are many service nodes, hunting for a log line server by server becomes impractical, so we centralize the logs in ELK where they can be searched and located quickly. We also integrate distributed tracing so trace IDs are printed with every log line. This post shows two ways to wire Spring Boot into ELK.

1. Approach one: Spring Boot writing directly to Logstash

1.1 Add the Maven dependency

        
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.3</version>
</dependency>

1.2 Add a Logstash appender to logback-spring.xml

    
    
<!-- make spring.application.name available to the pattern as springAppName -->
<springProperty scope="context" name="springAppName" source="spring.application.name"/>

<!-- ship log lines to Logstash over TCP -->
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>localhost:5000</destination>
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS}
            [service:${springAppName:-}]
            [traceId:%X{X-B3-TraceId:-},spanId:%X{X-B3-SpanId:-},parentSpanId:%X{X-B3-ParentSpanId:-},exportable:%X{X-Span-Export:-}]
            [%thread] %-5level %logger{50} - %msg%n</pattern>
        <charset>UTF-8</charset>
    </encoder>
</appender>
<!-- remember to reference the appender from <root> or a <logger>, e.g. <appender-ref ref="LOGSTASH"/> -->

1.3 Configure Logstash

input {
  tcp {
    port => 5000
  }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:service} %{GREEDYDATA:thread} %{LOGLEVEL:level} %{GREEDYDATA:loggerClass}-%{GREEDYDATA:logContent}"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "springboot-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
}
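With Logstash listening on port 5000, any ordinary SLF4J call in the application is shipped by the appender above. A minimal sketch of a controller that produces a test log line (the class name and endpoint are illustrative, not part of the original setup; spring-boot-starter-web is assumed):

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LogDemoController {

    private static final Logger log = LoggerFactory.getLogger(LogDemoController.class);

    // each request writes one line that the LOGSTASH appender forwards to localhost:5000
    @GetMapping("/log-demo")
    public String logDemo() {
        log.info("hello from log-demo");
        return "ok";
    }
}

If Spring Cloud Sleuth is on the classpath, it fills the trace-related MDC fields referenced in the pattern (X-B3-TraceId and friends); without it those fields simply stay empty.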

1.4 Start Elasticsearch, Kibana, Logstash, and the Spring Boot application, then verify

(Screenshot 1)

(Screenshot 2)
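Besides checking in Kibana, the index can also be queried from code. A minimal sketch using the Elasticsearch high-level REST client that ships with 6.8 (the springboot-* index pattern matches the Logstash output above; the host, port, and class name are illustrative):

import org.apache.http.HttpHost;
import org.elasticsearch.action.search.SearchRequest;
import org.elasticsearch.action.search.SearchResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.index.query.QueryBuilders;
import org.elasticsearch.search.SearchHit;
import org.elasticsearch.search.builder.SearchSourceBuilder;

public class LogIndexCheck {
    public static void main(String[] args) throws Exception {
        // connect to the local Elasticsearch started in step 1.4;
        // if X-Pack security is enabled, credentials must be added to the builder
        try (RestHighLevelClient client = new RestHighLevelClient(
                RestClient.builder(new HttpHost("localhost", 9200, "http")))) {

            // query the daily indices written by the Logstash output section
            SearchRequest request = new SearchRequest("springboot-*");
            request.source(new SearchSourceBuilder()
                    .query(QueryBuilders.matchAllQuery())
                    .size(5));

            SearchResponse response = client.search(request, RequestOptions.DEFAULT);
            for (SearchHit hit : response.getHits().getHits()) {
                System.out.println(hit.getSourceAsString());   // raw JSON of each indexed log event
            }
        }
    }
}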

2. Approach two: Spring Boot writing to Kafka, consumed by Logstash

2.1 Add the Maven dependency


<dependency>
    <groupId>com.github.danielwegener</groupId>
    <artifactId>logback-kafka-appender</artifactId>
    <version>0.2.0-RC2</version>
</dependency>
2.2 Update the Logback configuration


        
<appender name="kafkaAppender" class="com.github.danielwegener.logback.kafka.KafkaAppender">
    <encoder class="ch.qos.logback.classic.encoder.PatternLayoutEncoder">
        <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [service:${springAppName:-}]
            [traceId:%X{X-B3-TraceId:-},spanId:%X{X-B3-SpanId:-},parentSpanId:%X{X-B3-ParentSpanId:-},exportable:%X{X-Span-Export:-}]
            [%thread] %-5level %logger{50} - %msg%n</pattern>
    </encoder>
    <topic>authLog</topic>
    <!-- no message key, so messages are spread across partitions -->
    <keyingStrategy class="com.github.danielwegener.logback.kafka.keying.NoKeyKeyingStrategy"/>
    <!-- send asynchronously so logging never blocks application threads -->
    <deliveryStrategy class="com.github.danielwegener.logback.kafka.delivery.AsynchronousDeliveryStrategy"/>

    <!-- standard Kafka producer settings, one per producerConfig element -->
    <producerConfig>bootstrap.servers=localhost:9092</producerConfig>
    <!-- do not wait for broker acks: throughput over delivery guarantees -->
    <producerConfig>acks=0</producerConfig>
    <!-- batch messages for up to one second -->
    <producerConfig>linger.ms=1000</producerConfig>
    <!-- never block the application if the broker is unavailable -->
    <producerConfig>max.block.ms=0</producerConfig>
    <producerConfig>client.id=${HOSTNAME}-${CONTEXT_NAME}-logback-relaxed</producerConfig>
</appender>

2.3 Update the Logstash pipeline

input {
  kafka {
    id => "my_plugin_id"
    bootstrap_servers => "127.0.0.1:9092"
    topics => ["authLog"]
    auto_offset_reset => "latest"
  }
}
filter {
  grok {
    match => {
      "message" => "%{TIMESTAMP_ISO8601:logTime} %{GREEDYDATA:service} %{GREEDYDATA:thread} %{LOGLEVEL:level} %{GREEDYDATA:loggerClass}-%{GREEDYDATA:logContent}"
    }
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "springboot-%{+YYYY.MM.dd}"
    user => "elastic"
    password => "changeme"
  }
}
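If nothing shows up in Elasticsearch, a quick way to narrow the problem down is to confirm that log lines actually reach the authLog topic. A minimal sketch with the plain kafka-clients consumer API (the topic name and bootstrap server come from the appender config in 2.2; the class name and group id are arbitrary, and a kafka-clients 2.x dependency is assumed):

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AuthLogTopicCheck {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // same broker as the kafka appender
        props.put("group.id", "authlog-check");              // throwaway consumer group
        props.put("auto.offset.reset", "earliest");          // read the topic from the beginning
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("authLog"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());           // each value is one formatted log line
            }
        }
    }
}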

2.4 Start ZooKeeper, Kafka, and the ELK stack

(Screenshot 3)

 
