Introduction to ShardingSphere
ShardingSphere is an open-source distributed database solution under the Apache Software Foundation, made up of ShardingSphere-JDBC, ShardingSphere-Proxy, and ShardingSphere-Sidecar (still under development). Its core capabilities are data sharding, distributed transactions, and database governance. In this article we use ShardingSphere-JDBC to implement sharded data storage and queries.
With the traditional approach of keeping all data in a single database and a single table, query performance drops sharply once the data volume reaches a certain scale.
Data sharding: splitting the data held in a single database across multiple databases or tables along some dimension, in order to relieve performance bottlenecks and improve availability.
Data sharding comes in two flavors: vertical sharding and horizontal sharding.
- Vertical sharding, also called vertical partitioning, follows the idea of a dedicated database per concern: tables are grouped by business domain and distributed to different databases. For example, the user table and the order table can be split into separate databases, as shown in the figure below.
- Horizontal sharding, also called horizontal partitioning, spreads rows across multiple databases or tables according to a rule applied to one or more columns, so that each shard holds only part of the data. For example, sharding by primary key: rows with an even key go to database (or table) 0 and rows with an odd key go to database (or table) 1, as shown in the figure below; a small code sketch of this rule follows.
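To make the modulo rule concrete, the routing decision itself is nothing more than a remainder calculation. The snippet below is purely illustrative (class and method names are not part of the example project):

// Illustrative only: derive a database/table suffix from the primary key.
public class ModuloRoutingDemo {

    static long suffix(long id) {
        return id % 2;   // 0 for even ids, 1 for odd ids
    }

    public static void main(String[] args) {
        for (long id : new long[] {100L, 101L, 202L, 203L}) {
            System.out.println("id " + id + " -> database " + suffix(id) + ", table " + suffix(id));
        }
    }
}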
Horizontal sharding example
In this section we build a minimal horizontal sharding demo with Spring Boot + MyBatis + Druid + Sharding-JDBC.
1. Create two databases, test0 and test1, and run the following DDL in both of them to create the tables t_user0 and t_user1
DROP TABLE IF EXISTS `t_user0`;
CREATE TABLE `t_user0` (
  `id` bigint(20) NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'name',
  `city_id` int(12) NULL DEFAULT NULL COMMENT 'city',
  `sex` tinyint(1) NULL DEFAULT NULL COMMENT 'sex',
  `phone` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'phone',
  `email` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'email',
  `create_time` timestamp(0) NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP(0) COMMENT 'creation time',
  `password` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'password',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;

DROP TABLE IF EXISTS `t_user1`;
CREATE TABLE `t_user1` (
  `id` bigint(20) NOT NULL,
  `name` varchar(64) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'name',
  `city_id` int(12) NULL DEFAULT NULL COMMENT 'city',
  `sex` tinyint(1) NULL DEFAULT NULL COMMENT 'sex',
  `phone` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'phone',
  `email` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'email',
  `create_time` timestamp(0) NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP(0) COMMENT 'creation time',
  `password` varchar(32) CHARACTER SET utf8 COLLATE utf8_general_ci NULL DEFAULT NULL COMMENT 'password',
  PRIMARY KEY (`id`) USING BTREE
) ENGINE = InnoDB CHARACTER SET = utf8 COLLATE = utf8_general_ci ROW_FORMAT = Dynamic;
2. Create a new Maven project and add the dependencies
<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.3.0.RELEASE</version>
    </parent>
    <groupId>com.example.shardingjdbc</groupId>
    <artifactId>shardingjdbctest</artifactId>
    <version>1.0-SNAPSHOT</version>
    <properties>
        <java.version>1.8</java.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework</groupId>
            <artifactId>spring-tx</artifactId>
        </dependency>
        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.mybatis.spring.boot</groupId>
            <artifactId>mybatis-spring-boot-starter</artifactId>
            <version>2.1.3</version>
        </dependency>
        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.1.22</version>
        </dependency>
        <dependency>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>sharding-jdbc-core</artifactId>
            <version>4.1.1</version>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
        <dependency>
            <groupId>log4j</groupId>
            <artifactId>log4j</artifactId>
            <version>1.2.16</version>
        </dependency>
        <dependency>
            <groupId>org.slf4j</groupId>
            <artifactId>slf4j-log4j12</artifactId>
            <version>1.7.5</version>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                    <encoding>UTF-8</encoding>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-maven-plugin</artifactId>
            </plugin>
        </plugins>
    </build>
</project>
3. Create the project packages and classes
The overall project structure is shown in the figure below. To keep the demo simple, the code has no service package or service classes.
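For reference, the layout below is a rough sketch consistent with the classes used in this article. Only the mapper and domain package names are fixed by the configuration; the remaining package names are assumptions and may differ from the original project.

src/main/java/com/example/shardingjdbc
    ShardingJdbcApplication.java
    config/        DataSourceConfig.java, KeyIdConfig.java
    controller/    UserController.java
    domain/        User.java
    mapper/        UserMapper.java
    sharding/      UserShardingAlgorithm.java
src/main/resources
    application.properties
    com/example/shardingjdbc/mapper/UserMapper.xml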
4. Write the application.properties file
Replace the database connection URLs, usernames, and passwords with your own.
#################################### common config : ####################################
spring.application.name=shardingjdbc
# Web server port
server.port=8080
# MyBatis configuration
mybatis.mapper-locations=classpath:com/example/shardingjdbc/mapper/*.xml
mybatis.type-aliases-package=com.example.shardingjdbc.**.domain
datasource0.url=jdbc:mysql://xxx:3306/test0?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
datasource0.driver-class-name=com.mysql.cj.jdbc.Driver
datasource0.type=com.alibaba.druid.pool.DruidDataSource
datasource0.username=XXX
datasource0.password=XXX
datasource1.url=jdbc:mysql://XXX:3306/test1?useUnicode=true&characterEncoding=utf-8&useSSL=false&serverTimezone=Asia/Shanghai
datasource1.driver-class-name=com.mysql.cj.jdbc.Driver
datasource1.type=com.alibaba.druid.pool.DruidDataSource
datasource1.username=XXX
datasource1.password=XXX
##### Druid connection pool configuration #######
# Filters (the stat filter is important; without it SQL cannot be monitored)
spring.datasource.druid.filters=stat,wall,log4j2
##### WebStatFilter configuration #######
# Enable the StatFilter
spring.datasource.druid.web-stat-filter.enabled=true
# URL pattern to intercept
spring.datasource.druid.web-stat-filter.url-pattern=/*
# URLs that do not need to be monitored
spring.datasource.druid.web-stat-filter.exclusions=*.js,*.gif,*.jpg,*.png,*.css,*.ico,/druid/*
# Enable session statistics
spring.datasource.druid.web-stat-filter.session-stat-enable=true
# Default session-stat-max-count is 1000
spring.datasource.druid.web-stat-filter.session-stat-max-count=1000
#spring.datasource.druid.web-stat-filter.principal-session-name=
#spring.datasource.druid.web-stat-filter.principal-cookie-name=
#spring.datasource.druid.web-stat-filter.profile-enable=
##### StatViewServlet configuration #######
# Enable the built-in monitoring page
spring.datasource.druid.stat-view-servlet.enabled=true
# Path of the built-in monitoring page
spring.datasource.druid.stat-view-servlet.url-pattern=/druid/*
# Disable the "Reset All" function
spring.datasource.druid.stat-view-servlet.reset-enable=false
# Login username for the monitoring page
spring.datasource.druid.stat-view-servlet.login-username=admin
# Login password for the monitoring page
spring.datasource.druid.stat-view-servlet.login-password=123
# Allow list (if allow is empty or not configured, all access is allowed)
spring.datasource.druid.stat-view-servlet.allow=127.0.0.1
# Deny list (deny takes precedence over allow; an address on the deny list is rejected even if it also appears in allow)
spring.datasource.druid.stat-view-servlet.deny=
5. Write the sharding algorithms
The UserShardingAlgorithm class routes each record to a database and a table depending on whether its id is odd or even.
import java.util.Collection;

import org.apache.shardingsphere.api.sharding.standard.PreciseShardingAlgorithm;
import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

public class UserShardingAlgorithm {

    public static final DatabaseShardingAlgorithm databaseShardingAlgorithm = new DatabaseShardingAlgorithm();
    public static final TableShardingAlgorithm tableShardingAlgorithm = new TableShardingAlgorithm();

    // Database sharding: an even id routes to ds0, an odd id routes to ds1
    static class DatabaseShardingAlgorithm implements PreciseShardingAlgorithm<Long> {
        @Override
        public String doSharding(Collection<String> databaseNames, PreciseShardingValue<Long> shardingValue) {
            for (String database : databaseNames) {
                if (database.endsWith(String.valueOf(shardingValue.getValue() % 2))) {
                    return database;
                }
            }
            // No matching data source; in a real project consider throwing an exception here instead
            return "";
        }
    }

    // Table sharding: an even id routes to t_user0, an odd id routes to t_user1
    static class TableShardingAlgorithm implements PreciseShardingAlgorithm<Long> {
        @Override
        public String doSharding(Collection<String> tableNames, PreciseShardingValue<Long> shardingValue) {
            for (String table : tableNames) {
                if (table.endsWith(String.valueOf(shardingValue.getValue() % 2))) {
                    return table;
                }
            }
            // No matching table; in a real project consider throwing an exception here instead
            return "";
        }
    }
}
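A quick way to sanity-check the routing logic without starting the whole application is to call the algorithms directly. This is only a sketch: it assumes the ShardingSphere 4.x PreciseShardingValue constructor (logical table name, column name, value) and that the class sits in the same package as UserShardingAlgorithm.

import java.util.Arrays;

import org.apache.shardingsphere.api.sharding.standard.PreciseShardingValue;

// Assumes this class is in the same package as UserShardingAlgorithm
public class UserShardingAlgorithmDemo {

    public static void main(String[] args) {
        PreciseShardingValue<Long> oddId = new PreciseShardingValue<>("t_user", "id", 101L);
        PreciseShardingValue<Long> evenId = new PreciseShardingValue<>("t_user", "id", 100L);

        // Expected output: ds1 (odd id) and t_user0 (even id)
        System.out.println(UserShardingAlgorithm.databaseShardingAlgorithm.doSharding(Arrays.asList("ds0", "ds1"), oddId));
        System.out.println(UserShardingAlgorithm.tableShardingAlgorithm.doSharding(Arrays.asList("t_user0", "t_user1"), evenId));
    }
}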
6. Write the id generator
Once data is sharded across databases and tables, the database's auto-increment primary key can no longer be used, so ids have to be generated separately. ShardingSphere-JDBC ships with two key generation algorithms, Snowflake and UUID; this article uses Snowflake.
@Configuration
public class KeyIdConfig {

    @Bean("userKeyGenerator")
    public SnowflakeShardingKeyGenerator userKeyGenerator() {
        return new SnowflakeShardingKeyGenerator();
    }

    @Bean("orderKeyGenerator")
    public SnowflakeShardingKeyGenerator orderKeyGenerator() {
        return new SnowflakeShardingKeyGenerator();
    }
}
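If several instances of the application run at the same time, each Snowflake generator should be given its own worker id, otherwise the generated keys can collide. The sketch below shows how the bean above could be adjusted; the "worker.id" property name is an assumption based on the ShardingSphere 4.x key generator properties.

@Bean("userKeyGenerator")
public SnowflakeShardingKeyGenerator userKeyGenerator() {
    SnowflakeShardingKeyGenerator generator = new SnowflakeShardingKeyGenerator();
    Properties properties = new Properties();
    // "worker.id" should be unique per running instance (assumed 4.x property name)
    properties.setProperty("worker.id", "1");
    generator.setProperties(properties);
    return generator;
}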
7. Configure the data sources
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;

import javax.sql.DataSource;

import org.apache.shardingsphere.api.config.sharding.ShardingRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.TableRuleConfiguration;
import org.apache.shardingsphere.api.config.sharding.strategy.StandardShardingStrategyConfiguration;
import org.apache.shardingsphere.shardingjdbc.api.ShardingDataSourceFactory;
import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.alibaba.druid.pool.DruidDataSource;

@Configuration
public class DataSourceConfig {

    @Value("${datasource0.url}")
    private String url0;
    @Value("${datasource0.username}")
    private String username0;
    @Value("${datasource0.password}")
    private String password0;
    @Value("${datasource0.driver-class-name}")
    private String driverClassName0;

    @Value("${datasource1.url}")
    private String url1;
    @Value("${datasource1.username}")
    private String username1;
    @Value("${datasource1.password}")
    private String password1;
    @Value("${datasource1.driver-class-name}")
    private String driverClassName1;

    @Value("${spring.datasource.druid.filters}")
    private String filters;

    @Bean("dataSource")
    public DataSource dataSource() {
        try {
            // Physical data source 0 (database test0)
            DruidDataSource dataSource0 = new DruidDataSource();
            dataSource0.setDriverClassName(this.driverClassName0);
            dataSource0.setUrl(this.url0);
            dataSource0.setUsername(this.username0);
            dataSource0.setPassword(this.password0);
            dataSource0.setFilters(this.filters);

            // Physical data source 1 (database test1)
            DruidDataSource dataSource1 = new DruidDataSource();
            dataSource1.setDriverClassName(this.driverClassName1);
            dataSource1.setUrl(this.url1);
            dataSource1.setUsername(this.username1);
            dataSource1.setPassword(this.password1);
            dataSource1.setFilters(this.filters);

            // Register the two data sources as ds0 and ds1
            Map<String, DataSource> dataSourceMap = new HashMap<>(2);
            dataSourceMap.put("ds0", dataSource0);
            dataSourceMap.put("ds1", dataSource1);

            // Rule for the logical table t_user; its actual data nodes are ds0.t_user0, ds0.t_user1, ds1.t_user0, ds1.t_user1
            TableRuleConfiguration userRuleConfiguration = new TableRuleConfiguration("t_user", "ds${0..1}.t_user${0..1}");
            // Table sharding strategy: route by the id column
            userRuleConfiguration.setTableShardingStrategyConfig(new StandardShardingStrategyConfiguration("id", UserShardingAlgorithm.tableShardingAlgorithm));
            // Database sharding strategy: route by the id column
            userRuleConfiguration.setDatabaseShardingStrategyConfig(new StandardShardingStrategyConfiguration("id", UserShardingAlgorithm.databaseShardingAlgorithm));

            // Global sharding configuration
            ShardingRuleConfiguration shardingRuleConfiguration = new ShardingRuleConfiguration();
            shardingRuleConfiguration.getTableRuleConfigs().add(userRuleConfiguration);

            // Create the sharding data source that wraps ds0 and ds1
            return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfiguration, new Properties());
        } catch (Exception ex) {
            throw new IllegalStateException("Failed to create the sharding data source", ex);
        }
    }
}
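While debugging, it helps to see the actual SQL that ShardingSphere routes to ds0 and ds1. In the 4.x line this is switched on with the sql.show property passed to ShardingDataSourceFactory; as a sketch, the last statement of dataSource() above could be replaced with the following (enable it only for debugging):

Properties properties = new Properties();
// Print the logical SQL and the rewritten SQL sent to each actual data source
properties.setProperty("sql.show", "true");
return ShardingDataSourceFactory.createDataSource(dataSourceMap, shardingRuleConfiguration, properties);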
8. Write the User entity
public class User implements Serializable {

    private Long id;
    private String name;
    private String phone;
    private String email;
    private String password;
    private Integer cityId;
    private Date createTime;
    private Integer sex;

    // getters and setters omitted
}
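Since Lombok is already declared in the pom, the omitted accessors can also be generated instead of written by hand, for example:

import java.io.Serializable;

import lombok.Data;

@Data   // generates the getters/setters (plus equals/hashCode/toString) omitted above
public class User implements Serializable {
    // same fields as above: id, name, phone, email, password, cityId, createTime, sex
}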
9. Write the mapper interface and mapper XML
public interface UserMapper {

    /**
     * Insert a user
     * @param user the user to save
     */
    void save(User user);

    /**
     * Query a user by id
     * @param id primary key
     * @return the matching user
     */
    User get(Long id);
}
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE mapper PUBLIC "-//mybatis.org//DTD Mapper 3.0//EN" "http://mybatis.org/dtd/mybatis-3-mapper.dtd">
<mapper namespace="com.example.shardingjdbc.mapper.UserMapper">
    <insert id="save">
        insert into t_user (id, name, phone, email, password, city_id, create_time, sex)
        values (#{id}, #{name}, #{phone}, #{email}, #{password}, #{cityId}, #{createTime}, #{sex})
    </insert>
    <!-- query by primary key; column aliases map snake_case columns to the entity's camelCase fields -->
    <select id="get" resultType="User">
        select id, name, phone, email, password,
               city_id as cityId, create_time as createTime, sex
        from t_user
        where id = #{id}
    </select>
</mapper>
10. Write the controller and the application class
@Controller
public class UserController {

    @Autowired
    private UserMapper userMapper;

    @Resource
    private SnowflakeShardingKeyGenerator userKeyGenerator;

    @RequestMapping("/user/save")
    @ResponseBody
    public String save() {
        // Insert 50 users; the snowflake generator produces a mix of even and odd ids,
        // so the rows are spread across both databases and both tables.
        for (int i = 0; i < 50; i++) {
            Long id = (Long) userKeyGenerator.generateKey();
            User user = new User();
            user.setId(id);
            user.setName("test" + i);
            user.setCityId(i);
            user.setCreateTime(new Date());
            user.setSex(i % 2 == 0 ? 1 : 2);
            user.setPhone("1111111" + i);
            user.setEmail("xxxxxx");
            user.setPassword("eeeeeeeee");
            userMapper.save(user);
        }
        return "success";
    }

    @RequestMapping("/user/get/{id}")
    @ResponseBody
    public User get(@PathVariable Long id) {
        return userMapper.get(id);
    }
}

// Application entry point
@MapperScan("com.example.shardingjdbc.mapper") // change to your own mapper package if it differs
@SpringBootApplication
public class ShardingJdbcApplication {

    public static void main(String[] args) {
        SpringApplication.run(ShardingJdbcApplication.class, args);
    }
}
Start the service and visit
http://localhost:8080/user/save. Because both the database and the table strategies shard by id % 2, rows with odd ids end up in test1.t_user1 and rows with even ids in test0.t_user0 (the other two physical tables stay empty). You can then fetch a single record through http://localhost:8080/user/get/{id}, and ShardingSphere routes the query to the matching database and table.