Using SkyWalking in Microservices for Distributed Tracing and Log Viewing (with a Global traceID)

1. Download and Installation

1.1 Download SkyWalking

Download page: Downloads | Apache SkyWalking

Download both the APM backend and the Java agent.


The wget download links are as follows:

wget https://archive.apache.org/dist/skywalking/java-agent/8.8.0/apache-skywalking-java-agent-8.8.0.tgz

wget https://archive.apache.org/dist/skywalking/8.8.1/apache-skywalking-apm-8.8.1.tar.gz

1.2 Download Elasticsearch

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.17.0-linux-x86_64.tar.gz

1.3 Configure Elasticsearch

Open config/elasticsearch.yml in the installation directory and add the following settings:

#http.port: 9200
cluster.name: CollectorDBCluster
path.data: /opt/elasticsearch-7.17.0/data
path.logs: /opt/elasticsearch-7.17.0/logs
network.host: 0.0.0.0
http.port: 9200

node.name: node-1
cluster.initial_master_nodes: ["node-1"]

1.4 Start Elasticsearch

# /opt/elasticsearch-7.17.0/bin/elasticsearch

Note that Elasticsearch refuses to start as the root user; run it as a regular user.
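Before pointing SkyWalking at it, it helps to confirm Elasticsearch is actually listening. A minimal pure-JDK sketch (the host and port below assume the local defaults from the config above):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // After startup, Elasticsearch should be listening on 9200
        System.out.println("elasticsearch reachable: " + isOpen("127.0.0.1", 9200, 1000));
    }
}
```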

1.5 Configure SkyWalking (config/application.yml)

storage:
  selector: ${SW_STORAGE:elasticsearch}
  elasticsearch:
    namespace: ${SW_NAMESPACE:"CollectorDBCluster"}
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:<es-server-ip>:9200}
    protocol: ${SW_STORAGE_ES_HTTP_PROTOCOL:"http"}
    connectTimeout: ${SW_STORAGE_ES_CONNECT_TIMEOUT:500}
    socketTimeout: ${SW_STORAGE_ES_SOCKET_TIMEOUT:30000}
    numHttpClientThread: ${SW_STORAGE_ES_NUM_HTTP_CLIENT_THREAD:0}
#    user: ${SW_ES_USER:""}
#    password: ${SW_ES_PASSWORD:""}
#    trustStorePath: ${SW_STORAGE_ES_SSL_JKS_PATH:""}
#    trustStorePass: ${SW_STORAGE_ES_SSL_JKS_PASS:""}
    secretsManagementFile: ${SW_ES_SECRETS_MANAGEMENT_FILE:""} # Secrets management file in the properties format includes the username, password, which are managed by 3rd party tool.
    dayStep: ${SW_STORAGE_DAY_STEP:1} # Represent the number of days in the one minute/hour/day index.
    indexShardsNumber: ${SW_STORAGE_ES_INDEX_SHARDS_NUMBER:1} # Shard number of new indexes
    indexReplicasNumber: ${SW_STORAGE_ES_INDEX_REPLICAS_NUMBER:1} # Replicas number of new indexes
    # Super data set has been defined in the codes, such as trace segments.The following 3 config would be improve es performance when storage super size data in es.
    superDatasetDayStep: ${SW_SUPERDATASET_STORAGE_DAY_STEP:-1} # Represent the number of days in the super size dataset record index, the default value is the same as dayStep when the value is less than 0
    superDatasetIndexShardsFactor: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_SHARDS_FACTOR:5} #  This factor provides more shards for the super data set, shards number = indexShardsNumber * superDatasetIndexShardsFactor. Also, this factor effects Zipkin and Jaeger traces.
    superDatasetIndexReplicasNumber: ${SW_STORAGE_ES_SUPER_DATASET_INDEX_REPLICAS_NUMBER:0} # Represent the replicas number in the super size dataset record index, the default value is 0.
    indexTemplateOrder: ${SW_STORAGE_ES_INDEX_TEMPLATE_ORDER:0} # the order of index template
    bulkActions: ${SW_STORAGE_ES_BULK_ACTIONS:5000} # Execute the async bulk record data every ${SW_STORAGE_ES_BULK_ACTIONS} requests
    # flush the bulk every 10 seconds whatever the number of requests
    # INT(flushInterval * 2/3) would be used for index refresh period.
    flushInterval: ${SW_STORAGE_ES_FLUSH_INTERVAL:15}
    concurrentRequests: ${SW_STORAGE_ES_CONCURRENT_REQUESTS:2} # the number of concurrent requests
    resultWindowMaxSize: ${SW_STORAGE_ES_QUERY_MAX_WINDOW_SIZE:10000}
    metadataQueryMaxSize: ${SW_STORAGE_ES_QUERY_MAX_SIZE:5000}
    segmentQueryMaxSize: ${SW_STORAGE_ES_QUERY_SEGMENT_SIZE:200}
    profileTaskQueryMaxSize: ${SW_STORAGE_ES_QUERY_PROFILE_TASK_SIZE:200}
    oapAnalyzer: ${SW_STORAGE_ES_OAP_ANALYZER:"{\"analyzer\":{\"oap_analyzer\":{\"type\":\"stop\"}}}"} # the oap analyzer.
    oapLogAnalyzer: ${SW_STORAGE_ES_OAP_LOG_ANALYZER:"{\"analyzer\":{\"oap_log_analyzer\":{\"type\":\"standard\"}}}"} # the oap log analyzer. It could be customized by the ES analyzer configuration to support more language log formats, such as Chinese log, Japanese log and etc.
    advanced: ${SW_STORAGE_ES_ADVANCED:""}
  

The key setting to change is:

storage:
  selector: ${SW_STORAGE:elasticsearch}

My SkyWalking version is 8.8. If you are on an older SkyWalking release with Elasticsearch 7, configure instead:

storage:
  selector: ${SW_STORAGE:elasticsearch7}

Then update the Elasticsearch server IP and port, and you are done.
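For example, if the OAP should reach Elasticsearch at 192.168.1.10 (a made-up address), the relevant lines would read:

```yaml
storage:
  selector: ${SW_STORAGE:elasticsearch}
  elasticsearch:
    clusterNodes: ${SW_STORAGE_ES_CLUSTER_NODES:192.168.1.10:9200}
```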

1.6 Start SkyWalking

# /opt/apache-skywalking-apm-bin/bin/startup.sh 

1.7 Deploy the microservices with docker-compose, attaching the agent via -javaagent

Each jar lives in its own folder: orderservice, gateway, and userservice.


version: "3.2"

services:
#  nacos:
#    image: nacos/nacos-server
#    environment:
#      MODE: standalone
#    ports:
#      - "9010:8848"
  userservice:
    env_file: .env
    environment:
      - USER_NAME=${COMNAME}
    build: ./user-service
  
  orderservice:
    build: ./order-service
  gateway:
    build: ./gateway
    ports:
      - "9013:9013"

A .env file can supply runtime parameters:

## docker-compose environment variables

## test docker parameter binding
COMNAME=abcdefg129001
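Inside the container, the value travels from .env through the compose environment mapping into an ordinary environment variable. A small sketch of how the application side could read it (USER_NAME comes from the compose file above; the fallback value is made up):

```java
public class EnvDemo {
    public static void main(String[] args) {
        // docker-compose maps COMNAME from .env to the USER_NAME env var;
        // fall back to a default when running outside compose
        String userName = System.getenv().getOrDefault("USER_NAME", "local-default");
        System.out.println("USER_NAME=" + userName);
    }
}
```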

Each service folder contains app.jar, the agent directory, and a Dockerfile.

The Dockerfile is as follows:

# Put the following in a Dockerfile; copy it into each of the three service folders
FROM java:8
COPY ./app.jar /tmp/app.jar
COPY ./agent /tmp/agent
ENTRYPOINT java -javaagent:/tmp/agent/skywalking-agent.jar -Dskywalking.agent.service_name=gateway -Dskywalking.collector.backend_service=<skywalking-server-ip>:11800 -jar /tmp/app.jar

The other services use the same configuration, changing only service_name.
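For instance, the order-service Dockerfile would differ only in the service name (the collector address is still a placeholder):

```dockerfile
FROM java:8
COPY ./app.jar /tmp/app.jar
COPY ./agent /tmp/agent
# identical to the gateway Dockerfile except for service_name
ENTRYPOINT java -javaagent:/tmp/agent/skywalking-agent.jar -Dskywalking.agent.service_name=orderservice -Dskywalking.collector.backend_service=<skywalking-server-ip>:11800 -jar /tmp/app.jar
```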

1.8 SkyWalking logging

Add the following dependencies to each Spring Boot service:

<!-- print the SkyWalking TraceId into logs -->
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-logback-1.x</artifactId>
    <version>8.8.0</version>
</dependency>
<dependency>
    <groupId>org.apache.skywalking</groupId>
    <artifactId>apm-toolkit-trace</artifactId>
    <version>8.8.0</version>
</dependency>

The version number must match your SkyWalking version.

Logback is used for logging here (other logging frameworks have their own SkyWalking toolkits; consult their documentation).



<configuration>

    <!-- base log path; overridable via logging.file.path -->
    <springProperty scope="context" name="base.path" source="logging.file.path" defaultValue="${user.home}/kenlogs"/>

    <!-- application name -->
    <springProperty scope="context" name="app.name" source="spring.application.name" defaultValue="applog"/>

    <property name="log.path" value="${base.path}/${app.name}"/>

    <!-- log pattern; %tid is replaced with the SkyWalking trace id -->
    <property name="log.pattern" value="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - [%tid] - %msg%n"/>

    <!-- console output -->
    <appender name="stdout" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${log.pattern}</pattern>
            </layout>
        </encoder>
    </appender>

    <!-- report logs to the SkyWalking OAP over gRPC -->
    <appender name="SKYWALKING" class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.log.GRPCLogClientAppender">
        <encoder>
            <pattern>%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{50} - %msg%n</pattern>
            <charset>UTF-8</charset>
        </encoder>
    </appender>

    <!-- rolling file output -->
    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedRollingPolicy">
            <FileNamePattern>${log.path}-%d{yyyy-MM-dd}.%i.log</FileNamePattern>
            <MaxHistory>30</MaxHistory>
            <MaxFileSize>3KB</MaxFileSize>
        </rollingPolicy>

        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${log.pattern}</pattern>
            </layout>
        </encoder>
    </appender>

    <root level="INFO">
        <appender-ref ref="SKYWALKING"/>
        <appender-ref ref="file"/>
    </root>

</configuration>

The most important point is to specify the correct class on the appender named "SKYWALKING".

For log reporting to take effect, you must also edit config/agent.config in the agent directory referenced by -javaagent. I missed this at first, and no logs were ever reported:

plugin.toolkit.log.grpc.reporter.server_host=${SW_GRPC_LOG_SERVER_HOST:<skywalking-server-ip>}
plugin.toolkit.log.grpc.reporter.server_port=${SW_GRPC_LOG_SERVER_PORT:11800}
plugin.toolkit.log.grpc.reporter.max_message_size=${SW_GRPC_LOG_MAX_MESSAGE_SIZE:10485760}
plugin.toolkit.log.grpc.reporter.upstream_timeout=${SW_GRPC_LOG_GRPC_UPSTREAM_TIMEOUT:30}
plugin.toolkit.log.transmit_formatted=${SW_PLUGIN_TOOLKIT_LOG_TRANSMIT_FORMATTED:true}

Testing the logging

To keep things simple, log directly from a controller; in real business code the service layer is a more appropriate place.

private final Logger log = LoggerFactory.getLogger(YourController.class);

1.9 Final result


To obtain the traceId in code (TraceContext comes from the apm-toolkit-trace dependency):

String traceId = TraceContext.traceId();

1.10 Deleting historical log data

(screenshot: data TTL settings in config/application.yml)

The configuration above retains data for one day; for other retention requirements, consult the SkyWalking documentation.
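Record retention is controlled in config/application.yml under the core section. A sketch that keeps one day of record data (TTL units are days; metricsDataTTL is shown at its 8.x default):

```yaml
core:
  default:
    recordDataTTL: ${SW_CORE_RECORD_DATA_TTL:1}   # traces and logs: keep 1 day
    metricsDataTTL: ${SW_CORE_METRICS_DATA_TTL:7} # metrics: keep 7 days (default)
```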
