Spring Cloud Sleuth and Zipkin

Zipkin, as the core component of a call-trace pipeline, needs to solve three problems.

1. Traffic: if the full traffic is collected, Zipkin's own traffic will be several times the business traffic.
2. Data: traffic on that scale naturally produces even more data to store.
3. Coverage: for example, Zipkin alone cannot trace an asynchronous hop such as service1 -> Kafka -> service2.

On the traffic side:

1. Reduce unnecessary request interception.
2. Consider a Zipkin cluster deployment.
3. Consider shipping spans through Kafka.
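Reducing interception in practice usually means sampling instead of tracing every request. A minimal application.yml sketch, assuming Spring Cloud Sleuth 2.x, where spring.sleuth.sampler.probability sets the fraction of requests that get traced; the base-url is a placeholder for your own collector address:

```yaml
spring:
  sleuth:
    sampler:
      # Trace roughly 10% of requests instead of all of them
      probability: 0.1
  zipkin:
    # Placeholder collector address; adjust to your environment
    base-url: http://localhost:9411
```

With probability sampling, trace traffic scales down linearly with the sample rate, which directly addresses the "several times the business traffic" problem above.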

On the data side:

1. Use Elasticsearch or Cassandra for storage: both are easy to scale out and fast to read.
2. Limit retention, e.g. keep only about 2 weeks of data.

Future development:

1. Kafka + Zipkin + Elasticsearch, with Kafka smoothing out traffic peaks.
2. Save abnormal trace data to a separate system so it is not cleaned up with the rest.
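The Kafka + Zipkin + Elasticsearch idea can be sketched in the same docker-compose style as the files below. This is only an illustration: STORAGE_TYPE, ES_HOSTS, and KAFKA_BOOTSTRAP_SERVERS are documented zipkin-server environment variables, but the Kafka image choice, hostnames, and port here are assumptions to be adapted:

```yaml
version: '2'

services:
  kafka:
    # Example Kafka image; any reachable Kafka broker works
    image: openzipkin/zipkin-kafka
    container_name: kafka

  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch

  zipkin:
    image: openzipkin/zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=elasticsearch
      # Pull spans from Kafka instead of receiving HTTP POSTs directly
      - KAFKA_BOOTSTRAP_SERVERS=kafka:9092
    depends_on:
      - kafka
      - storage
```

On the application side, Sleuth 2.x can report through Kafka by setting spring.zipkin.sender.type=kafka (with spring-kafka on the classpath), so span reporting is taken off the HTTP hot path and Kafka absorbs the traffic peaks.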

docker-compose.yml

# This file uses the version 2 docker-compose file format, described here:
# https://docs.docker.com/compose/compose-file/#version-2
#
# This runs the zipkin and zipkin-mysql containers, using docker-compose's
# default networking to wire the containers together.
#
# Note that this file is meant for learning Zipkin, not production deployments.

version: '2'

services:
  storage:
    image: openzipkin/zipkin-mysql
    container_name: mysql
    # Expose the storage port so it can be inspected during testing
    ports:
      - 3306:3306
    volumes:
      - dbfiles:/mysql/data

  # The zipkin process services the UI, and also exposes a POST endpoint that
  # instrumentation can send trace data to. Scribe is disabled by default.
  zipkin:
    image: openzipkin/zipkin
    container_name: zipkin
    # Environment settings are defined here https://github.com/openzipkin/zipkin/blob/master/zipkin-server/README.md#environment-variables
    environment:
      - STORAGE_TYPE=mysql
      # Point the zipkin at the storage backend
      - MYSQL_HOST=mysql
      # Uncomment to enable scribe
      # - SCRIBE_ENABLED=true
      # Uncomment to enable self-tracing
      # - SELF_TRACING_ENABLED=true
      # Uncomment to enable debug logging
      # - JAVA_OPTS=-Dlogging.level.zipkin2=DEBUG
    ports:
      # Port used for the Zipkin UI and HTTP Api
      - 9411:9411
      # Uncomment if you set SCRIBE_ENABLED=true
      # - 9410:9410
    depends_on:
      - storage

  # Adds a cron to process spans since midnight every hour, and all spans each day
  # This data is served by http://192.168.99.100:8080/dependency
  #
  # For more details, see https://github.com/openzipkin/docker-zipkin-dependencies
  dependencies:
    image: openzipkin/zipkin-dependencies
    container_name: dependencies
    entrypoint: crond -f
    environment:
      - STORAGE_TYPE=mysql
      - MYSQL_HOST=mysql
      # Add the baked-in username and password for the zipkin-mysql image
      - MYSQL_USER=zipkin
      - MYSQL_PASS=zipkin
      # Uncomment to see dependency processing logs
      # - ZIPKIN_LOG_LEVEL=DEBUG
      # Uncomment to adjust memory used by the dependencies job
      # - JAVA_OPTS=-verbose:gc -Xms1G -Xmx1G
    depends_on:
      - storage

volumes:
  dbfiles:
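With the file above saved as docker-compose.yml, the stack starts with the standard command (the UI address follows from the 9411 port mapping above):

```shell
docker-compose up -d
# Zipkin UI: http://localhost:9411
```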

docker-compose-es.yml

# This file uses the version 2 docker-compose file format, described here:
# https://docs.docker.com/compose/compose-file/#version-2
#
# It extends the default configuration from docker-compose.yml to run the
# zipkin-elasticsearch container instead of the zipkin-mysql container.

version: '2'

services:
  # Run Elasticsearch instead of MySQL
  storage:
    image: openzipkin/zipkin-elasticsearch7
    container_name: elasticsearch
    # Expose the storage port so it can be inspected during testing
    ports:
      - 9200:9200

  # Switch storage type to Elasticsearch
  zipkin:
    image: openzipkin/zipkin
    environment:
      - STORAGE_TYPE=elasticsearch
      # Point the zipkin at the storage backend
      - ES_HOSTS=elasticsearch
      # Uncomment to see requests to and from elasticsearch
      # - ES_HTTP_LOGGING=BODY

  dependencies:
    environment:
      - STORAGE_TYPE=elasticsearch
      - ES_HOSTS=elasticsearch
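To apply this Elasticsearch override on top of the base file, pass both files together; docker-compose merges later files over earlier ones, so the storage, zipkin, and dependencies services defined here replace or extend their counterparts (file names assumed to match the listings in this post):

```shell
docker-compose -f docker-compose.yml -f docker-compose-es.yml up -d
```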

deleteData.sh

#!/bin/bash

###################################
# Delete expired Zipkin indices from Elasticsearch
###################################
function delete_indices() {
    comp_date=$(date -d "14 day ago" +"%Y-%m-%d")
    date1="$1 00:00:00"
    date2="$comp_date 00:00:00"

    t1=$(date -d "$date1" +%s)
    t2=$(date -d "$date2" +%s)

    if [ "$t1" -le "$t2" ]; then
        echo "$1 is earlier than $comp_date, deleting the index"
        # Quote the URL so the shell does not glob-expand the *
        curl -XDELETE "http://localhost:9200/*$1"
    else
        echo "$1 is later than $comp_date, keeping the index"
    fi
}

# List all indices, keep only the zipkin span ones, and extract the date suffix
curl -s -XGET http://localhost:9200/_cat/indices | awk '{print $3}' \
    | grep "zipkin-span-" | awk -F"zipkin-span-" '{print $NF}' | while read LINE
do
    # Call the deletion function for each index date
    delete_indices "$LINE"
done
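The cutoff logic in the script can be checked in isolation: GNU date understands both relative dates ("14 day ago") and absolute ones, so comparing epoch seconds tells whether an index date falls outside the retention window. A small sketch of just that decision, with the same 14-day window as the script:

```shell
#!/bin/bash
# Retention window in days, same as the cleanup script above
RETENTION_DAYS=14

# Epoch seconds of the cutoff (midnight, RETENTION_DAYS days ago)
cutoff=$(date -d "$(date -d "$RETENTION_DAYS day ago" +%Y-%m-%d) 00:00:00" +%s)

# Prints "delete" if the given YYYY-MM-DD is at or before the cutoff, else "keep"
classify() {
    local t
    t=$(date -d "$1 00:00:00" +%s)
    if [ "$t" -le "$cutoff" ]; then
        echo "delete"
    else
        echo "keep"
    fi
}

classify "2000-01-01"          # well past the window -> delete
classify "$(date +%Y-%m-%d)"   # today -> keep
```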

Scheduling the data cleanup script

On CentOS 7, first install cron: yum install crontabs

Then edit the root crontab with crontab -e and add a line that runs the script daily at 06:00:

        0 6 * * * sh /root/deleteData.sh

Finally reload the daemon: systemctl reload crond.service

Related resources

 - https://cloud.spring.io/spring-cloud-static/spring-cloud-sleuth/2.0.4.RELEASE/single/spring-cloud-sleuth.html
 - https://segmentfault.com/a/1190000012342007
