[Spring Cloud] Distributed microservice tracing and log analysis with Sleuth, Zipkin, RabbitMQ, Elasticsearch, Logstash, and Kibana

Spring Cloud microservice distributed tracing and log collection

  1. Dependency versions
    Spring Boot <2.1.3.RELEASE>
    Spring Cloud
    RabbitMQ <3.7.10>
    Elasticsearch <6.7.0> (download link)
    Logstash <6.7.0> (download link)
    Kibana <6.7.0> (download link)
    zipkin-server-2.9.4-exec.jar (download link)

  2. Implementing tracing (Sleuth, Zipkin, RabbitMQ)
    2.1 Add dependencies to the microservices (zuul, service_client)

    		
            <!-- Sleuth: adds trace/span ids to log lines and propagates them between services -->
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-sleuth</artifactId>
            </dependency>
            <!-- Zipkin client: reports the collected spans to the Zipkin server -->
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-starter-zipkin</artifactId>
            </dependency>
            <!-- RabbitMQ binder: lets spans be sent to Zipkin asynchronously over RabbitMQ -->
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-stream-binder-rabbit</artifactId>
            </dependency>
    
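With spring-cloud-starter-sleuth on the classpath, each log line is automatically tagged with a 64-bit trace id and span id, rendered as 16 lowercase hex characters in a prefix like [service_client,<traceId>,<spanId>,true]. A plain-Java sketch of the shape of those ids (an illustration only, not Sleuth's actual id generator):

```java
import java.util.concurrent.ThreadLocalRandom;

public class TraceIdDemo {
    // A Sleuth-style 64-bit id rendered as 16 lowercase hex characters.
    static String newId() {
        long id = ThreadLocalRandom.current().nextLong();
        return String.format("%016x", id); // zero-padded unsigned hex
    }

    public static void main(String[] args) {
        String traceId = newId();
        String spanId = newId();
        // Shape of the prefix Sleuth adds to each log line:
        System.out.println("[service_client," + traceId + "," + spanId + ",true]");
    }
}
```

The trace id stays the same across all services a request passes through, while each unit of work gets its own span id; that is what later allows the whole request to be stitched together.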

    2.2. Add configuration to the microservices (bootstrap.yml)

      spring:
        rabbitmq:
          host: localhost
          username: guest
          password: guest
          port: 5672
        # When RabbitMQ is used to collect trace data asynchronously,
        # base-url does not need to be configured
        zipkin:
          sender:
            type: rabbit
          # base-url: http://localhost:9411
        sleuth:
          sampler:
            probability: 1.0 # trace sampling rate (0.0~1.0)
    
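The probability setting decides what fraction of traces gets exported to Zipkin: 1.0 keeps every trace, lower values keep a random subset. A minimal plain-Java sketch of probability-based sampling (an illustration of the idea, not Sleuth's actual sampler class):

```java
import java.util.Random;

public class SamplerDemo {
    // Simplified probability sampler: keep a trace with the given probability.
    static boolean isSampled(double probability, Random rnd) {
        return rnd.nextDouble() < probability;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        int sampled = 0;
        for (int i = 0; i < 10_000; i++) {
            // probability 1.0 keeps every trace, as configured above
            if (isSampled(1.0, rnd)) sampled++;
        }
        System.out.println(sampled); // 10000
    }
}
```

In production a rate below 1.0 is common, since exporting every span of every request adds load on RabbitMQ and Elasticsearch.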

    2.3. Start the zuul and service_client microservices
    2.4. Run zipkin-server

    java -jar zipkin-server-2.9.4-exec.jar --zipkin.collector.rabbitmq.addresses=localhost
    

    2.5. Open the Zipkin UI (http://localhost:9411 by default)
    (Screenshot: Zipkin UI showing the collected traces)

  3. Writing trace data to Elasticsearch and viewing it in Kibana (Sleuth, Zipkin, RabbitMQ, Elasticsearch, Kibana)
    3.1 Stop the zipkin-server process started earlier
    3.2 Edit /config/elasticsearch.yml and uncomment the following lines, then start Elasticsearch (Windows)

    # Set the bind address to a specific IP (IPv4 or IPv6):
    #
    network.host: 127.0.0.1
    #
    # Set a custom port for HTTP:
    #
    http.port: 9200
    

    3.3 Edit /config/kibana.yml and uncomment the following lines, then start Kibana (Windows)

    # Kibana is served by a back end server. This setting specifies the port to use.
    server.port: 5601
    # To allow connections from remote users, set this parameter to a non-loopback address.
    server.host: "localhost"
    # The URLs of the Elasticsearch instances to use for all your queries.
    elasticsearch.hosts: ["http://localhost:9200"]
    

    3.4 Run zipkin-server with Elasticsearch storage

    java -jar zipkin-server-2.9.4-exec.jar --zipkin.collector.rabbitmq.addresses=localhost  --STORAGE_TYPE=elasticsearch --ES_HOSTS=http://127.0.0.1:9200
    

    3.5 Open the Kibana UI (http://localhost:5601)
    3.6 Under Management, add an index pattern for the zipkin indices; the trace records then become visible
    (Screenshot: Kibana showing the zipkin indices)

  4. Writing microservice logs to the ELK stack; trace_id and span_id make the complete log record of a request queryable (request flow + info/error log collection)
    4.1 Building on the setup above, integrate Logstash
    4.2 Add logback-spring.xml under the service_client resources directory

    <?xml version="1.0" encoding="UTF-8"?>
    <configuration>
        <!-- Pull the connection settings out of bootstrap.yml -->
        <springProperty scope="context" name="applicationName" source="spring.application.name"/>
        <springProperty scope="context" name="MQHost" source="spring.rabbitmq.host"/>
        <springProperty scope="context" name="MQPort" source="spring.rabbitmq.port"/>
        <springProperty scope="context" name="MQUserName" source="spring.rabbitmq.username"/>
        <springProperty scope="context" name="MQPassword" source="spring.rabbitmq.password"/>

        <!-- Publish every log event to RabbitMQ so Logstash can pick it up -->
        <appender name="AMQP" class="org.springframework.amqp.rabbit.logback.AmqpAppender">
            <layout>
                <!-- trace_id/span_id come from the MDC keys that Sleuth populates -->
                <pattern>{"app":"${applicationName}","time":"%d{yyyy-MM-dd HH:mm:ss.SSS}","level":"%level","trace_id":"%X{X-B3-TraceId:-}","span_id":"%X{X-B3-SpanId:-}","message":"[%thread] %logger{50} - %msg"}</pattern>
            </layout>
            <host>${MQHost}</host>
            <port>${MQPort}</port>
            <username>${MQUserName}</username>
            <password>${MQPassword}</password>
            <applicationId>service.${applicationName}</applicationId>
            <routingKeyPattern>service.${applicationName}</routingKeyPattern>
            <declareExchange>true</declareExchange>
            <exchangeType>topic</exchangeType>
            <exchangeName>log_logstash</exchangeName>
            <generateId>true</generateId>
            <charset>UTF-8</charset>
            <durable>true</durable>
            <deliveryMode>PERSISTENT</deliveryMode>
        </appender>

        <!-- Keep a plain console appender for local debugging -->
        <appender name="CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
            <encoder>
                <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
                <immediateFlush>true</immediateFlush>
            </encoder>
        </appender>

        <root level="INFO">
            <appender-ref ref="AMQP"/>
            <appender-ref ref="CONSOLE"/>
        </root>
    </configuration>
    
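The AmqpAppender publishes to the topic exchange log_logstash with routing key service.<application name>, and the Logstash input below binds its queue with the pattern "service.#", so logs from every service land in the same queue. A simplified plain-Java re-implementation of AMQP topic matching (an illustration, not the broker's actual algorithm; it ignores the zero-word edge case of "#"):

```java
public class TopicMatchDemo {
    // Simplified AMQP topic match: '*' = exactly one word, '#' = any word sequence.
    static boolean matches(String pattern, String routingKey) {
        String regex = pattern
                .replace(".", "\\.")   // escape the word separators
                .replace("*", "[^.]+") // '*' matches a single word
                .replace("#", ".*");   // '#' matches any remaining words
        return routingKey.matches(regex);
    }

    public static void main(String[] args) {
        System.out.println(matches("service.#", "service.service_client")); // true
        System.out.println(matches("service.#", "gateway.zuul"));           // false
    }
}
```

This is why the binding key "service.#" in logstash.conf picks up service.zuul, service.service_client, and any service added later without further configuration.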

    4.3 In the Logstash bin directory, create a logstash.conf file

    input {
      rabbitmq { 
    	type => "oct-mid-ribbon"
    	durable => true
    	exchange => "log_logstash"
    	exchange_type => "topic"
    	key => "service.#"
    	host => "127.0.0.1"
    	port => 5672
    	user => "guest"
    	password => "guest"
    	queue => "OCT_MID_Log"
    	auto_delete => false
    	tags => ["service"]
      }
    }
    
    output {
    	if [trace_id] != "" {
    		elasticsearch {
    			hosts => ["http://localhost:9200"]
    			index => "logstash-%{+YYYY.MM.dd}"
    	  }
    	}
    }
    

    4.4 Start Logstash

    logstash.bat -f logstash.conf
    
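The output writes to a daily rolling index via the sprintf reference %{+YYYY.MM.dd}, so each day's logs land in their own index (e.g. logstash-2019.04.01). A small sketch of the equivalent name computation in Java (note: Logstash uses Joda-style patterns, where YYYY behaves like java.time's yyyy used here):

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

public class IndexNameDemo {
    // Mirrors the Logstash output setting: index => "logstash-%{+YYYY.MM.dd}"
    static String indexFor(LocalDate date) {
        return "logstash-" + date.format(DateTimeFormatter.ofPattern("yyyy.MM.dd"));
    }

    public static void main(String[] args) {
        System.out.println(indexFor(LocalDate.of(2019, 4, 1))); // logstash-2019.04.01
    }
}
```

Daily indices keep each index small and make it easy to expire old logs by deleting whole indices.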

    4.5 In Kibana, add the logstash index pattern; using the trace_id and span_id fields in the message, you can reconstruct every log line printed during a single request
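Conceptually, filtering on a trace_id in Kibana pulls the log lines of one request out of the interleaved output of all services. A plain-Java sketch of that grouping, using hypothetical records in a made-up "traceId|service|message" shape:

```java
import java.util.List;
import java.util.stream.Collectors;

public class TraceGroupDemo {
    // Keep only the log records belonging to one trace (what a Kibana filter does).
    static List<String> forTrace(List<String> logs, String traceId) {
        return logs.stream()
                .filter(line -> line.startsWith(traceId + "|"))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        // Hypothetical records; real ones carry the ids Sleuth puts in the MDC.
        List<String> logs = List.of(
                "a1b2|zuul|request received",
                "ffee|service_client|unrelated request",
                "a1b2|service_client|handling request",
                "a1b2|service_client|response sent");
        forTrace(logs, "a1b2").forEach(System.out::println);
    }
}
```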
