Spring Boot lets you externalize configuration so that the same code can run in different environments: production and test can share one codebase while details such as the database connection live outside the application. Configuration can be externalized through properties files, YAML files, environment variables, and command-line arguments. Property values can be injected into beans with @Value, read through Spring's Environment abstraction, or bound to structured objects with @ConfigurationProperties.
@Value can only inject simple key-value configuration, such as the entries in a properties file. For the more complex configuration found in a YAML file, it is better to use @ConfigurationProperties to bind the values to the fields of a structured object.
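The reason can be sketched without Spring: internally, a YAML list is flattened into indexed keys (spring.kafka.bootstrap-servers[0], [1], ...), so there is no single flat key for a @Value-style lookup to resolve, whereas a structured binder reassembles the indexed entries into a collection. A minimal plain-Java illustration of the two lookups (the bindList helper is a hypothetical stand-in, not Spring API):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class FlatVsStructured {
    // Collect indexed entries "prefix[0]", "prefix[1]", ... back into a list,
    // the way a structured binder does.
    static List<String> bindList(Map<String, String> flat, String prefix) {
        List<String> values = new ArrayList<>();
        for (int i = 0; flat.containsKey(prefix + "[" + i + "]"); i++) {
            values.add(flat.get(prefix + "[" + i + "]"));
        }
        return values;
    }

    public static void main(String[] args) {
        // How a YAML list ends up in the flattened property map:
        Map<String, String> flat = new LinkedHashMap<>();
        flat.put("spring.kafka.bootstrap-servers[0]", "192.168.1.204:9092");
        flat.put("spring.kafka.bootstrap-servers[1]", "192.168.1.100:9092");

        // A @Value-style lookup needs one exact flat key, and there isn't one:
        System.out.println(flat.get("spring.kafka.bootstrap-servers")); // null

        // Structured binding reassembles the list from the indexed keys:
        System.out.println(bindList(flat, "spring.kafka.bootstrap-servers"));
    }
}
```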
For example, to bind the contents of the following YAML file to Java objects:
spring:
  kafka:
    # Binds to a list of strings (part of the "spring.kafka" group).
    # Note that keys such as bootstrap-servers cannot be written as bootstrap.servers;
    # with a dot separator the value will not be picked up.
    bootstrap-servers: ["192.168.1.204:9092","192.168.1.100:9092","192.168.1.200:9092"]
    # Binds to a list of strings (part of the "spring.kafka" group).
    schema-registry-url:
      - "http://192.168.1.204:18081"
      - "http://192.168.1.100:18081"
      - "http://192.168.1.200:18081"
    # The keys below belong to the "spring.kafka.producer" group.
    producer:
      retries: 100000
      buffer-memory: 33554432 # binds to the bufferMemory field of the class annotated with @ConfigurationProperties(prefix = "spring.kafka.producer")
      enableIdempotence: false # camelCase
      max-in-flight-requests-per-connection: 1 # kebab-case
      batch_size: 16384 # underscore-separated
      linger-ms: 10
      ACKS: "-1" # upper case
      topic: "role_operation_dev"
    # The keys below belong to the "spring.kafka.consumer" group.
    consumer:
      auto-offset-reset: "earliest"
      enable-auto-commit: false
      group.id: "record-service" # counter-example: a dot-separated key cannot be bound to an object field
      topics: ["role_operation_dev"]
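The naming rules annotated in the YAML above boil down to a canonicalization step: relaxed binding compares names after lower-casing them and stripping '-' and '_', which is why all four spellings reach the same camelCase field, while '.' acts as a path separator and is never part of a property name. A simplified, hypothetical sketch of that comparison (Spring's real Binder is considerably more involved):

```java
public class RelaxedNames {
    // Canonical form used for comparison: lower-case, with '-' and '_' removed.
    static String canonical(String key) {
        return key.toLowerCase().replace("-", "").replace("_", "");
    }

    // A key matches a Java field if their canonical forms are equal and the
    // key contains no '.', which relaxed binding treats as a path separator.
    static boolean bindsTo(String key, String fieldName) {
        return !key.contains(".") && canonical(key).equals(canonical(fieldName));
    }

    public static void main(String[] args) {
        System.out.println(bindsTo("max-in-flight-requests-per-connection",
                "maxInFlightRequestsPerConnection"));            // true
        System.out.println(bindsTo("batch_size", "batchSize"));  // true
        System.out.println(bindsTo("ACKS", "acks"));             // true
        System.out.println(bindsTo("group.id", "groupId"));      // false
    }
}
```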
A few points worth noting:
- Thanks to relaxed binding, kebab-case (max-in-flight-requests-per-connection), underscore-separated (batch_size), camelCase (enableIdempotence), and upper-case (ACKS) keys all bind to the same camelCase Java field.
- A dot inside a key name, as in group.id, is treated as a nested path rather than a spelling variant, so it does not bind to the groupId field.
The corresponding configuration classes are as follows:
package cn.*.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

import java.util.ArrayList;
import java.util.List;

/**
 * Configuration shared by the Kafka producer and consumer
 *
 * @author pilaf
 * @create: 2018-08-08 20:36
 */
@Configuration
@ConfigurationProperties(prefix = "spring.kafka")
public class KafkaCommonConfig {

    private List<String> bootstrapServers = new ArrayList<>();
    private List<String> schemaRegistryUrl;

    public List<String> getBootstrapServers() {
        return bootstrapServers;
    }

    public void setBootstrapServers(List<String> bootstrapServers) {
        this.bootstrapServers = bootstrapServers;
    }

    public List<String> getSchemaRegistryUrl() {
        return schemaRegistryUrl;
    }

    public void setSchemaRegistryUrl(List<String> schemaRegistryUrl) {
        this.schemaRegistryUrl = schemaRegistryUrl;
    }

    @Override
    public String toString() {
        return "KafkaCommonConfig{" +
                "bootstrapServers=" + bootstrapServers +
                ", schemaRegistryUrl=" + schemaRegistryUrl +
                '}';
    }
}
package cn.superid.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

/**
 * @author pilaf
 * @create: 2018-08-09 09:15
 */
@Configuration
@ConfigurationProperties(prefix = "spring.kafka.producer")
public class KafkaProducerConfig {

    private int retries = Integer.MAX_VALUE;
    private int bufferMemory = 33554432;
    private boolean enableIdempotence = true;
    private int maxInFlightRequestsPerConnection = 1;
    private int batchSize = 16384;
    private int lingerMs = 1;
    private String acks = "all";
    private String topic;

    public int getRetries() {
        return retries;
    }

    public void setRetries(int retries) {
        this.retries = retries;
    }

    public int getBufferMemory() {
        return bufferMemory;
    }

    public void setBufferMemory(int bufferMemory) {
        this.bufferMemory = bufferMemory;
    }

    public boolean isEnableIdempotence() {
        return enableIdempotence;
    }

    public void setEnableIdempotence(boolean enableIdempotence) {
        this.enableIdempotence = enableIdempotence;
    }

    public int getMaxInFlightRequestsPerConnection() {
        return maxInFlightRequestsPerConnection;
    }

    public void setMaxInFlightRequestsPerConnection(int maxInFlightRequestsPerConnection) {
        this.maxInFlightRequestsPerConnection = maxInFlightRequestsPerConnection;
    }

    public int getBatchSize() {
        return batchSize;
    }

    public void setBatchSize(int batchSize) {
        this.batchSize = batchSize;
    }

    public int getLingerMs() {
        return lingerMs;
    }

    public void setLingerMs(int lingerMs) {
        this.lingerMs = lingerMs;
    }

    public String getAcks() {
        return acks;
    }

    public void setAcks(String acks) {
        this.acks = acks;
    }

    public String getTopic() {
        return topic;
    }

    public void setTopic(String topic) {
        this.topic = topic;
    }

    @Override
    public String toString() {
        return "KafkaProducerConfig{" +
                "retries=" + retries +
                ", bufferMemory=" + bufferMemory +
                ", enableIdempotence=" + enableIdempotence +
                ", maxInFlightRequestsPerConnection=" + maxInFlightRequestsPerConnection +
                ", batchSize=" + batchSize +
                ", lingerMs=" + lingerMs +
                ", acks='" + acks + '\'' +
                ", topic='" + topic + '\'' +
                '}';
    }
}
package cn.superid.config;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Configuration;

import java.util.List;

/**
 * @author pilaf
 * @create: 2018-08-08 14:11
 */
@Configuration
@ConfigurationProperties(prefix = "spring.kafka.consumer")
public class KafkaConsumerConfig {

    private String autoOffsetReset = "earliest";
    private boolean enableAutoCommit = false;
    private String groupId;
    private List<String> topics;

    public String getAutoOffsetReset() {
        return autoOffsetReset;
    }

    public void setAutoOffsetReset(String autoOffsetReset) {
        this.autoOffsetReset = autoOffsetReset;
    }

    public boolean isEnableAutoCommit() {
        return enableAutoCommit;
    }

    public void setEnableAutoCommit(boolean enableAutoCommit) {
        this.enableAutoCommit = enableAutoCommit;
    }

    public String getGroupId() {
        return groupId;
    }

    public void setGroupId(String groupId) {
        this.groupId = groupId;
    }

    public List<String> getTopics() {
        return topics;
    }

    public void setTopics(List<String> topics) {
        this.topics = topics;
    }

    @Override
    public String toString() {
        return "KafkaConsumerConfig{" +
                "autoOffsetReset='" + autoOffsetReset + '\'' +
                ", enableAutoCommit=" + enableAutoCommit +
                ", groupId='" + groupId + '\'' +
                ", topics=" + topics +
                '}';
    }
}
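Once bound, these objects are typically used to assemble the Properties handed to the Kafka client. A minimal sketch, assuming a stand-in ProducerSettings POJO in place of the KafkaProducerConfig above; the string keys are the standard Kafka producer property names:

```java
import java.util.Properties;

public class ProducerProps {
    // Hypothetical stand-in for the Spring-bound KafkaProducerConfig above.
    static class ProducerSettings {
        int retries = 100000;
        String acks = "-1";
        boolean enableIdempotence = false;
        int lingerMs = 10;
    }

    // Translate the bound fields into the Kafka client's property names.
    static Properties toProperties(ProducerSettings cfg) {
        Properties props = new Properties();
        props.put("retries", Integer.toString(cfg.retries));
        props.put("acks", cfg.acks);
        props.put("enable.idempotence", Boolean.toString(cfg.enableIdempotence));
        props.put("linger.ms", Integer.toString(cfg.lingerMs));
        return props;
    }

    public static void main(String[] args) {
        System.out.println(toProperties(new ProducerSettings()));
    }
}
```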
The dependencies in the pom.xml are as follows:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter</artifactId>
    <version>2.0.3.RELEASE</version>
    <optional>true</optional>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-configuration-processor</artifactId>
    <optional>true</optional>
</dependency>
Printing the bound objects from the program gives:
commonConfig=KafkaCommonConfig{bootstrapServers=[192.168.1.204:9092, 192.168.1.100:9092, 192.168.1.200:9092], schemaRegistryUrl=[http://192.168.1.204:18081, http://192.168.1.100:18081, http://192.168.1.200:18081]}
producerConfig=KafkaProducerConfig{retries=100000, bufferMemory=33554432, enableIdempotence=false, maxInFlightRequestsPerConnection=1, batchSize=16384, lingerMs=10, acks='-1', topic='role_operation_dev'}
consumerConfig=KafkaConsumerConfig{autoOffsetReset='earliest', enableAutoCommit=false, groupId='null', topics=[role_operation_dev]}
Comparing this with the rules illustrated in the YAML file above, you can see that the non-conforming group.id key indeed failed to bind: groupId is null.