Spring Cloud has released the Finchley release train, which is only compatible with Spring Boot 2.x. While upgrading a project to the new version I ran into all kinds of pitfalls, recorded here. The build snippets use Gradle; if you use Maven, just map the coordinates over.
1 The Eureka client dependency has changed; the new artifact gains a netflix segment
Old: compile "org.springframework.cloud:spring-cloud-starter-eureka-client"
New: compile "org.springframework.cloud:spring-cloud-starter-netflix-eureka-client"
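For reference, a minimal Finchley-style Eureka client main class (class and service names are illustrative, not from the original post). With the netflix-eureka-client starter on the classpath the application registers automatically; @EnableDiscoveryClient is optional but harmless:
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.client.discovery.EnableDiscoveryClient;

// Registers with Eureka via spring-cloud-starter-netflix-eureka-client;
// eureka.client.service-url.defaultZone must point at the Eureka server.
@SpringBootApplication
@EnableDiscoveryClient
public class DemoClientApplication {
    public static void main(String[] args) {
        SpringApplication.run(DemoClientApplication.class, args);
    }
}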
2 The Feign dependency has changed; the starter is now openfeign
Old: compile('org.springframework.cloud:spring-cloud-starter-feign')
New: compile('org.springframework.cloud:spring-cloud-starter-openfeign')
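Besides the new artifact id, note that @FeignClient and @EnableFeignClients now live under the org.springframework.cloud.openfeign package. A minimal sketch of a hypothetical client against a service named user-service (the service name and endpoint are assumptions, not from the original post):
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PathVariable;

// Declarative HTTP client; "user-service" is the target's spring.application.name (illustrative).
// Requires @EnableFeignClients on a configuration class.
@FeignClient(name = "user-service")
public interface UserClient {

    // Maps to GET /users/{id} on user-service, resolved through Eureka/Ribbon.
    @GetMapping("/users/{id}")
    String getUser(@PathVariable("id") Long id);
}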
3 The Hystrix and Hystrix Dashboard dependencies have changed; both gain a netflix segment
Old:
compile('org.springframework.cloud:spring-cloud-starter-hystrix')
compile('org.springframework.cloud:spring-cloud-starter-hystrix-dashboard')
New:
compile('org.springframework.cloud:spring-cloud-starter-netflix-hystrix')
compile('org.springframework.cloud:spring-cloud-starter-netflix-hystrix-dashboard')
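The Hystrix annotations themselves are unchanged. A minimal, hypothetical service method with a fallback might look like this (assumes @EnableCircuitBreaker or @EnableHystrix on a configuration class; names are illustrative):
import com.netflix.hystrix.contrib.javanica.annotation.HystrixCommand;
import org.springframework.stereotype.Service;

@Service
public class GreetingService {

    // If the protected call fails or times out, Hystrix invokes the fallback below.
    @HystrixCommand(fallbackMethod = "greetFallback")
    public String greet(String name) {
        // Call a remote service here (e.g. via Feign or RestTemplate); illustrative only.
        return "Hello, " + name;
    }

    // The fallback must match the original method's signature.
    public String greetFallback(String name) {
        return "Hello (fallback), " + name;
    }
}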
4 The context-path property has changed; it now lives under servlet
Old: server.context-path: xxxx
New: server.servlet.context-path: xxxx
5 Zipkin configuration has changed (server side)
Reportedly, Spring now recommends running zipkin-server directly as a standalone jar.
Download the jar from the [Zipkin website](https://zipkin.io/) and run it. If you use RabbitMQ or Kafka for asynchronous span collection, you need to set environment variables; see http://www.mamicode.com/info-detail-2292005.html. In my own test under Cygwin on Windows this did not work, so I went through the Zipkin source on GitHub (https://github.com/openzipkin/zipkin) and found a workaround: copy the configuration file from zipkin-server's resources directory into the same directory as the jar and fill in the RabbitMQ settings.
The configuration file is shown below; change rabbitmq.addresses, rabbitmq.username and rabbitmq.password. If you use Kafka instead, change the corresponding Kafka entries.
zipkin:
  self-tracing:
    # Set to true to enable self-tracing.
    enabled: ${SELF_TRACING_ENABLED:false}
    # percentage to self-traces to retain
    sample-rate: ${SELF_TRACING_SAMPLE_RATE:1.0}
    # Timeout in seconds to flush self-tracing data to storage.
    message-timeout: ${SELF_TRACING_FLUSH_INTERVAL:1}
  collector:
    # percentage to traces to retain
    sample-rate: ${COLLECTOR_SAMPLE_RATE:1.0}
    http:
      # Set to false to disable creation of spans via HTTP collector API
      enabled: ${HTTP_COLLECTOR_ENABLED:true}
    kafka:
      # Kafka bootstrap broker list, comma-separated host:port values. Setting this activates the
      # Kafka 0.10+ collector.
      bootstrap-servers: ${KAFKA_BOOTSTRAP_SERVERS:}
      # Name of topic to poll for spans
      topic: ${KAFKA_TOPIC:zipkin}
      # Consumer group this process is consuming on behalf of.
      group-id: ${KAFKA_GROUP_ID:zipkin}
      # Count of consumer threads consuming the topic
      streams: ${KAFKA_STREAMS:1}
    rabbitmq:
      # RabbitMQ server address list (comma-separated list of host:port)
      #addresses: ${RABBIT_ADDRESSES:}
      addresses: localhost:5672
      concurrency: ${RABBIT_CONCURRENCY:1}
      # TCP connection timeout in milliseconds
      connection-timeout: ${RABBIT_CONNECTION_TIMEOUT:60000}
      #password: ${RABBIT_PASSWORD:guest}
      password: cloud
      queue: ${RABBIT_QUEUE:zipkin}
      #username: ${RABBIT_USER:guest}
      username: cloud
      virtual-host: ${RABBIT_VIRTUAL_HOST:/}
      useSsl: ${RABBIT_USE_SSL:false}
      uri: ${RABBIT_URI:}
  query:
    enabled: ${QUERY_ENABLED:true}
    # 1 day in millis
    lookback: ${QUERY_LOOKBACK:86400000}
    # The Cache-Control max-age (seconds) for /api/v1/services and /api/v1/spans
    names-max-age: 300
    # CORS allowed-origins.
    allowed-origins: "*"
  storage:
    strict-trace-id: ${STRICT_TRACE_ID:true}
    search-enabled: ${SEARCH_ENABLED:true}
    type: ${STORAGE_TYPE:mem}
    mem:
      # Maximum number of spans to keep in memory. When exceeded, oldest traces (and their spans) will be purged.
      # A safe estimate is 1K of memory per span (each span with 2 annotations + 1 binary annotation), plus
      # 100 MB for a safety buffer. You'll need to verify in your own environment.
      # Experimentally, it works with: max-spans of 500000 with JRE argument -Xmx600m.
      max-spans: 500000
    cassandra:
      # Comma separated list of host addresses part of Cassandra cluster. Ports default to 9042 but you can also specify a custom port with 'host:port'.
      contact-points: ${CASSANDRA_CONTACT_POINTS:localhost}
      # Name of the datacenter that will be considered "local" for latency load balancing. When unset, load-balancing is round-robin.
      local-dc: ${CASSANDRA_LOCAL_DC:}
      # Will throw an exception on startup if authentication fails.
      username: ${CASSANDRA_USERNAME:}
      password: ${CASSANDRA_PASSWORD:}
      keyspace: ${CASSANDRA_KEYSPACE:zipkin}
      # Max pooled connections per datacenter-local host.
      max-connections: ${CASSANDRA_MAX_CONNECTIONS:8}
      # Ensuring that schema exists, if enabled tries to execute script /zipkin-cassandra-core/resources/cassandra-schema-cql3.txt.
      ensure-schema: ${CASSANDRA_ENSURE_SCHEMA:true}
      # 7 days in seconds
      span-ttl: ${CASSANDRA_SPAN_TTL:604800}
      # 3 days in seconds
      index-ttl: ${CASSANDRA_INDEX_TTL:259200}
      # the maximum trace index metadata entries to cache
      index-cache-max: ${CASSANDRA_INDEX_CACHE_MAX:100000}
      # how long to cache index metadata about a trace. 1 minute in seconds
      index-cache-ttl: ${CASSANDRA_INDEX_CACHE_TTL:60}
      # how many more index rows to fetch than the user-supplied query limit
      index-fetch-multiplier: ${CASSANDRA_INDEX_FETCH_MULTIPLIER:3}
      # Using ssl for connection, rely on Keystore
      use-ssl: ${CASSANDRA_USE_SSL:false}
    cassandra3:
      # Comma separated list of host addresses part of Cassandra cluster. Ports default to 9042 but you can also specify a custom port with 'host:port'.
      contact-points: ${CASSANDRA_CONTACT_POINTS:localhost}
      # Name of the datacenter that will be considered "local" for latency load balancing. When unset, load-balancing is round-robin.
      local-dc: ${CASSANDRA_LOCAL_DC:}
      # Will throw an exception on startup if authentication fails.
      username: ${CASSANDRA_USERNAME:}
      password: ${CASSANDRA_PASSWORD:}
      keyspace: ${CASSANDRA_KEYSPACE:zipkin2}
      # Max pooled connections per datacenter-local host.
      max-connections: ${CASSANDRA_MAX_CONNECTIONS:8}
      # Ensuring that schema exists, if enabled tries to execute script /zipkin2-schema.cql
      ensure-schema: ${CASSANDRA_ENSURE_SCHEMA:true}
      # how many more index rows to fetch than the user-supplied query limit
      index-fetch-multiplier: ${CASSANDRA_INDEX_FETCH_MULTIPLIER:3}
      # Using ssl for connection, rely on Keystore
      use-ssl: ${CASSANDRA_USE_SSL:false}
    elasticsearch:
      # host is left unset intentionally, to defer the decision
      hosts: ${ES_HOSTS:}
      pipeline: ${ES_PIPELINE:}
      max-requests: ${ES_MAX_REQUESTS:64}
      timeout: ${ES_TIMEOUT:10000}
      index: ${ES_INDEX:zipkin}
      date-separator: ${ES_DATE_SEPARATOR:-}
      index-shards: ${ES_INDEX_SHARDS:5}
      index-replicas: ${ES_INDEX_REPLICAS:1}
      username: ${ES_USERNAME:}
      password: ${ES_PASSWORD:}
      http-logging: ${ES_HTTP_LOGGING:}
      legacy-reads-enabled: ${ES_LEGACY_READS_ENABLED:true}
    mysql:
      host: ${MYSQL_HOST:localhost}
      port: ${MYSQL_TCP_PORT:3306}
      username: ${MYSQL_USER:}
      password: ${MYSQL_PASS:}
      db: ${MYSQL_DB:zipkin}
      max-active: ${MYSQL_MAX_CONNECTIONS:10}
      use-ssl: ${MYSQL_USE_SSL:false}
  ui:
    enabled: ${QUERY_ENABLED:true}
    ## Values below here are mapped to ZipkinUiProperties, served as /config.json
    # Default limit for Find Traces
    query-limit: 10
    # The value here becomes a label in the top-right corner
    environment:
    # Default duration to look back when finding traces.
    # Affects the "Start time" element in the UI. 1 hour in millis
    default-lookback: 3600000
    # When false, disables the "find a trace" screen
    search-enabled: ${SEARCH_ENABLED:true}
    # Which sites this Zipkin UI covers. Regex syntax. (e.g. http:\/\/example.com\/.*)
    # Multiple sites can be specified, e.g.
    # - .*example1.com
    # - .*example2.com
    # Default is "match all websites"
    instrumented: .*
    # URL placed into the <base> tag in the HTML
    base-path: /zipkin
server:
  port: ${QUERY_PORT:9411}
  use-forward-headers: true
  compression:
    enabled: true
    # compresses any response over min-response-size (default is 2KiB)
    # Includes dynamic json content and large static assets from zipkin-ui
    mime-types: application/json,application/javascript,text/css,image/svg
spring:
  mvc:
    favicon:
      # zipkin has its own favicon
      enabled: false
  autoconfigure:
    exclude:
      # otherwise we might initialize even when not needed (ex when storage type is cassandra)
      - org.springframework.boot.autoconfigure.jdbc.DataSourceAutoConfiguration
info:
  zipkin:
    version: "@project.version@"
logging:
  pattern:
    level: "%clr(%5p) %clr([%X{traceId}/%X{spanId}]){yellow}"
  level:
    # Silence Invalid method name: '__can__finagle__trace__v3__'
    com.facebook.swift.service.ThriftServiceProcessor: 'OFF'
    # # investigate /api/v1/dependencies or /api/v2/dependencies
    # zipkin2.internal.DependencyLinker: 'DEBUG'
    # # log cassandra queries (DEBUG is without values)
    # com.datastax.driver.core.QueryLogger: 'TRACE'
    # # log cassandra trace propagation
    # com.datastax.driver.core.Message: 'TRACE'
    # # log reason behind http collector dropped messages
    # zipkin.server.ZipkinHttpCollector: 'DEBUG'
    # zipkin.collector.kafka.KafkaCollector: 'DEBUG'
    # zipkin.collector.kafka10.KafkaCollector: 'DEBUG'
    # zipkin.collector.rabbitmq.RabbitMQCollector: 'DEBUG'
    # zipkin.collector.scribe.ScribeCollector: 'DEBUG'
management:
  endpoints:
    web:
      exposure:
        include: '*'
  endpoint:
    health:
      show-details: always
  # Disabling auto time http requests since it is added in Undertow HttpHandler in Zipkin autoconfigure
  # Prometheus module. In Zipkin we use different naming for the http requests duration
  metrics:
    web:
      server:
        auto-time-requests: false
6 Zipkin configuration has changed (client side)
Adding compile "org.springframework.cloud:spring-cloud-starter-zipkin" is all that is needed.
The configuration file looks like this:
spring:
  sleuth:
    web:
      client:
        enabled: true
    sampler:
      probability: 1.0 # sampling probability; 1.0 samples everything (the default is 0.1)
  zipkin:
    base-url: http://localhost:9411/ # address of the Zipkin server
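If you prefer to configure sampling in code rather than through spring.sleuth.sampler.probability, Sleuth 2.x also accepts a Brave Sampler bean. A minimal sketch (the class name is illustrative; sampling everything is equivalent to probability 1.0):
import brave.sampler.Sampler;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SamplerConfig {

    // Report every span to Zipkin; use Sampler.create(0.1f) for probabilistic sampling.
    @Bean
    public Sampler defaultSampler() {
        return Sampler.ALWAYS_SAMPLE;
    }
}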
7 Spring Security's HTTP Basic authentication has changed, so the Eureka server's authentication setup needs adjusting. Modify the Eureka server as follows:
# not supported on Spring Boot 2 and above
#security:
#  basic:
#    enabled: true
#  user:
#    name: zqw
#    password: zqw
In the main application class, or in a custom class annotated with @Configuration, add the following: disable CSRF and define your own username and password for authentication. The Eureka client can then connect to the server with http://[user]:[password]@xxxx just like in the old version.
// Imports needed in the enclosing file:
import org.springframework.security.config.annotation.authentication.builders.AuthenticationManagerBuilder;
import org.springframework.security.config.annotation.web.builders.HttpSecurity;
import org.springframework.security.config.annotation.web.configuration.EnableWebSecurity;
import org.springframework.security.config.annotation.web.configuration.WebSecurityConfigurerAdapter;
import org.springframework.security.crypto.password.NoOpPasswordEncoder;

@EnableWebSecurity
static class WebSecurityConfig extends WebSecurityConfigurerAdapter {

    // In-memory user replacing the old security.user.* properties.
    @Override
    public void configure(AuthenticationManagerBuilder auth) throws Exception {
        auth.inMemoryAuthentication()
            .passwordEncoder(NoOpPasswordEncoder.getInstance())
            .withUser("zqw").password("zqw")
            .authorities("ADMIN");
    }

    // Disable CSRF and require HTTP Basic auth for every request,
    // so clients can register via http://user:password@host:port/.
    @Override
    protected void configure(HttpSecurity http) throws Exception {
        http.csrf()
            .disable()
            .authorizeRequests()
            .anyRequest().authenticated()
            .and()
            .httpBasic();
    }
}
8 The bus configuration has changed
The actuator endpoint now has to be exposed explicitly:
management.endpoints.web.exposure.include: bus-refresh
Writing management.endpoints.web.exposure.include=* does not work in a YAML file and will cause an error; wrap the asterisk in double quotes ("*") or use a properties file instead.
The manual refresh URL has also changed: it is now a POST to http://xxxxxx/actuator/bus-refresh
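For context, a hypothetical bean that actually picks up new values after the POST to /actuator/bus-refresh; only @RefreshScope (or @ConfigurationProperties) beans are re-bound. The property and class names are illustrative:
import org.springframework.beans.factory.annotation.Value;
import org.springframework.cloud.context.config.annotation.RefreshScope;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

// Re-created with fresh property values whenever a refresh event arrives over the bus.
@RefreshScope
@RestController
public class MessageController {

    // "demo.message" is an illustrative property served by the config server.
    @Value("${demo.message:default}")
    private String message;

    @GetMapping("/message")
    public String message() {
        return message;
    }
}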
9 Turbine configuration: because the actuator has changed, the configuration needs matching adjustments; see this article:
https://blog.csdn.net/ifrozen/article/details/80019143
The article above covers the basic Turbine setup. Pay particular attention to the Turbine server setting
instanceUrlSuffix:
  default: actuator/hystrix.stream
because Spring Boot 2.x actuator endpoints carry the /actuator prefix by default; defining this suffix makes the stream reachable. The full configuration is below, and a minimal Turbine server main class is sketched after it.
turbine:
  app-config: consumer-client
  cluster-name-expression: "'default'"
  # cluster-name-expression: metadata['cluster'] # did not work for me, not sure why
  combine-host-port: true
  # aggregator:
  #   cluster-config: MAIN
  instanceUrlSuffix:
    default: actuator/hystrix.stream
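As referenced above, a minimal Turbine server main class, assuming spring-cloud-starter-netflix-turbine is on the classpath (the class name is illustrative):
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.netflix.turbine.EnableTurbine;

// Aggregates the /actuator/hystrix.stream endpoints of the services listed in turbine.app-config.
@SpringBootApplication
@EnableTurbine
public class TurbineServerApplication {
    public static void main(String[] args) {
        SpringApplication.run(TurbineServerApplication.class, args);
    }
}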
The above solves the basic configuration. One problem remains: once a custom server.servlet.context-path is set, Turbine stops working again. This can be solved as shown below: give the management endpoints their own management.server.port and set management.server.servlet.context-path to /, after which Turbine works again.
management:
  endpoints:
    web:
      exposure:
        include: "*"
      cors:
        allowed-methods: "*"
        allowed-origins: "*"
  server:
    port: 5101
    servlet:
      context-path: /
10 DataSource auto-configuration fails with a circular-reference error
Description:

The dependencies of some of the beans in the application context form a cycle:

   servletEndpointRegistrar defined in class path resource [org/springframework/boot/actuate/autoconfigure/endpoint/web/ServletEndpointManagementContextConfiguration.class]
      ↓
   healthEndpoint defined in class path resource [org/springframework/boot/actuate/autoconfigure/health/HealthEndpointConfiguration.class]
      ↓
   dbHealthIndicator defined in class path resource [org/springframework/boot/actuate/autoconfigure/jdbc/DataSourceHealthIndicatorAutoConfiguration.class]
┌─────┐
|  scopedTarget.dataSource defined in class path resource [org/springframework/boot/autoconfigure/jdbc/DataSourceConfiguration$Hikari.class]
↑     ↓
|  org.springframework.boot.autoconfigure.jdbc.DataSourceInitializerInvoker
└─────┘
After a long search I finally found the fix: add spring.cloud.refresh.refreshable: none to the configuration file.
The issue describing this solution: https://github.com/spring-cloud/spring-cloud-commons/issues/355