Preface
I have written before about LCN, an open-source distributed transaction framework, with articles based on versions 4.1.0 and 5.0.2. The LCN site now suggests that ongoing maintenance has become a problem, while Seata keeps getting hotter, so I recently dug into the Seata source code and integrated the latest release, 1.1.0 (as of today, 2020-03-16; note that different versions can produce all sorts of odd problems, so when you hit something you cannot solve, consider whether it is a version issue). The stack used here is: Spring Cloud Greenwich.SR2 + Spring Boot 2.1.6 + Nacos 1.2.0 + Spring Cloud Gateway. (PS: this article assumes some grounding in transactions and distributed systems; if that is unfamiliar, please brush up first.) Distributed transactions are handled by Seata's AT mode; the gateway integrates rate limiting, circuit breaking, and degradation; the currently popular xxl-job handles distributed jobs; and security is OAuth2 + Spring Security.
Seata basics
Seata's usage is actually laid out quite clearly on the official site, so when you run into a problem, read the official docs first: roughly 95% of the issues you will hit have been hit by someone else already, with detailed solutions. In this article I will outline Seata's basic principles and then, on the stack described above, build the simplest possible AT-mode demo for reference.
First, we need to understand Seata's three roles:
TC (Transaction Coordinator): maintains the state of global and branch transactions and drives the global commit or rollback.
TM (Transaction Manager): defines the scope of the global transaction — begins, commits, or rolls back the global transaction.
RM (Resource Manager): manages the resources a branch transaction works on, talks to the TC to register branch transactions and report their status, and drives branch commit or rollback.
A diagram I find describes the relationship between the three rather well:
(Image source: https://www.sohu.com/a/326431...)
A global transaction executes in roughly four steps (a sketch follows the list):
1. The TM asks the TC to open a global transaction; the TC returns a global XID, which is propagated along the entire distributed call chain.
2. Each RM registers with the TC a branch transaction belonging to that XID, and commits/rolls back its local transaction.
3. The TM asks the TC for a global commit/rollback.
4. The TC drives every branch under the XID to commit/rollback. (On commit, only the branches' undo_log records need to be deleted; on rollback, the undo_log is parsed in reverse to generate and execute compensating SQL, after which the undo_log is deleted.)
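To make these steps concrete, here is a minimal sketch using Seata's programmatic TM API (io.seata.tm.api). The service names and calls are hypothetical placeholders, not part of the demo; in the demo below, the @GlobalTransactional annotation performs the same begin/commit/rollback for you.
```java
import io.seata.tm.api.GlobalTransaction;
import io.seata.tm.api.GlobalTransactionContext;

// Hypothetical RM-side services; in a real demo these would be Feign clients
// or local DAOs working through Seata's proxied DataSource.
interface StorageService { void deduct(String commodityCode, int count); }
interface OrderService  { void create(String userId, String commodityCode, int count); }

public class PurchaseFlow {

    private final StorageService storageService;
    private final OrderService orderService;

    public PurchaseFlow(StorageService storageService, OrderService orderService) {
        this.storageService = storageService;
        this.orderService = orderService;
    }

    public void purchase() throws Exception {
        // Step 1: the TM asks the TC to open a global transaction; the TC issues an XID.
        GlobalTransaction tx = GlobalTransactionContext.getCurrentOrCreate();
        tx.begin(60000, "purchase");
        try {
            // Step 2: each call below runs as a branch transaction that the RM
            // registers under the XID (the Spring Cloud integration propagates
            // the XID across service calls).
            storageService.deduct("C001", 2);
            orderService.create("U001", "C001", 2);
            // Steps 3-4: the TM reports success and the TC commits every branch,
            // which only has to delete the branches' undo_log records.
            tx.commit();
        } catch (Exception e) {
            // Steps 3-4 on failure: the TC tells each branch to build compensating
            // SQL from undo_log, execute it, then delete the log records.
            tx.rollback();
            throw e;
        }
    }
}
```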
A Seata + Spring Cloud demo
A Seata + Spring Cloud demo is actually quite simple, but it is tied to the versions you use: different versions need different configuration. Again, the versions I use are listed above.
1. Download Seata and start it locally
The latest Seata release (currently 1.1.0) can be downloaded from the official site for your platform; then edit file.conf and registry.conf to suit your setup. Download: https://seata.io/zh-cn/blog/download.html. The startup procedure is covered in the official deployment guide: https://seata.io/zh-cn/docs/ops/deploy-guide-beginner.html.
My file.conf:
```
## transaction log store, only used in seata-server
store {
## store mode: file、db
mode = "db"
## file store property
file {
## store location dir
dir = "sessionStore"
# branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
maxBranchSessionSize = 16384
# globe session size , if exceeded throws exceptions
maxGlobalSessionSize = 512
# file buffer size , if exceeded allocate new buffer
fileWriteBufferCacheSize = 16384
# when recover batch read size
sessionReloadReadSize = 100
# async, sync
flushDiskMode = async
}
## database store property
db {
## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
datasource = "dbcp"
## mysql/oracle/h2/oceanbase etc.
dbType = "mysql"
driverClassName = "com.mysql.jdbc.Driver"
url = "jdbc:mysql://www.iamcrawler.cn:3306/seata"
user = "**"        // put your database username here
password = "***"   // put your database password here
minConn = 1
maxConn = 10
globalTable = "global_table"
branchTable = "branch_table"
lockTable = "lock_table"
queryLimit = 100
}
}
```
(Note: db mode requires the global_table, branch_table, and lock_table tables in the seata database; the DDL is provided in the Seata source repository.)
My registry.conf:
```
registry {
# file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
type = "nacos"
nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk、consul、etcd3
type = "nacos"
nacos {
serverAddr = "localhost"
namespace = ""
group = "SEATA_GROUP"
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
app.id = "seata-server"
apollo.meta = "http://192.168.1.204:8801"
namespace = "application"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}
```
PS: one thing worth noting. I originally installed Seata 0.8 with Nacos as the config center; in 0.8 the initial configuration shipped as a nacos-config.txt file under /conf, but after upgrading to 1.1.0 that file is gone, even though those configuration entries are still required. So where are they now? The official site explains it (the original text linked two pages here — read both carefully): in recent releases the config template and push script moved to the script/config-center directory of the Seata source repository. Just push the configuration as the docs describe and you are done.
With Seata started, open Nacos and you will see a service registered: that is the Seata server — the TC described above.
2. Integration in each participating module
Maven dependencies:
```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-alibaba-seata</artifactId>
    <version>2.1.0.RELEASE</version>
    <exclusions>
        <!-- exclude the bundled seata-all so we can pin 1.1.0 below -->
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-all</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-all</artifactId>
    <version>1.1.0</version>
</dependency>
```
Because I use MyBatis-Plus, I wired up Seata's proxy data source by hand; you can also rely on configuration and skip the hand-written bean — the official docs cover both approaches:
```java
package com.iamcrawler.microuser.config;

import com.zaxxer.hikari.HikariDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.boot.autoconfigure.jdbc.DataSourceProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

import javax.sql.DataSource;

/**
 * @author liuliang
 * @date 2020/3/3 11:26 AM
 */
@Configuration
public class DataSourceProxyAutoConfiguration {

    /**
     * Data source properties
     * {@link DataSourceProperties}
     */
    private final DataSourceProperties dataSourceProperties;

    public DataSourceProxyAutoConfiguration(DataSourceProperties dataSourceProperties) {
        this.dataSourceProperties = dataSourceProperties;
    }

    /**
     * Wrap the real data source in Seata's proxy, which records undo_log
     * entries so branch transactions can be rolled back.
     *
     * @return the default data source
     * @see DataSourceProxy
     */
    @Primary
    @Bean("dataSource")
    public DataSource dataSource() {
        HikariDataSource dataSource = new HikariDataSource();
        dataSource.setJdbcUrl(dataSourceProperties.getUrl());
        dataSource.setUsername(dataSourceProperties.getUsername());
        dataSource.setPassword(dataSourceProperties.getPassword());
        dataSource.setDriverClassName(dataSourceProperties.getDriverClassName());
        return new DataSourceProxy(dataSource);
    }
}
```
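Because the proxy bean above is marked @Primary, MyBatis-Plus's auto-configuration should pick it up on its own. If you define the SqlSessionFactory yourself (for example to customize plugins), make sure it receives the proxied data source. A minimal sketch, assuming MyBatis-Plus 3.x, to be added inside the configuration class above (the bean itself is illustrative, not from the demo):
```java
import com.baomidou.mybatisplus.extension.spring.MybatisSqlSessionFactoryBean;
import org.apache.ibatis.session.SqlSessionFactory;
import org.springframework.context.annotation.Bean;

import javax.sql.DataSource;

// Note: MyBatis-Plus requires MybatisSqlSessionFactoryBean instead of
// MyBatis' plain SqlSessionFactoryBean.
@Bean
public SqlSessionFactory sqlSessionFactory(DataSource dataSource) throws Exception {
    MybatisSqlSessionFactoryBean factory = new MybatisSqlSessionFactoryBean();
    // dataSource here is the DataSourceProxy defined above, so every mapper
    // statement runs through Seata and gets undo_log bookkeeping.
    factory.setDataSource(dataSource);
    return factory.getObject();
}
```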
Application configuration (application.yml):
```yaml
spring:
  cloud:
    alibaba:
      seata:
        tx-service-group: micro-user-group
```
The tx-service-group value must match a vgroup_mapping entry in the client-side file.conf below.
Finally, we also need to add file.conf and registry.conf under resources. (And make sure each business database contains Seata's undo_log table, which AT mode writes its rollback records to; the DDL is provided in the Seata source repository.)
file.conf:
```
transport {
# tcp udt unix-domain-socket
type = "TCP"
#NIO NATIVE
server = "NIO"
#enable heartbeat
heartbeat = true
#thread factory for netty
thread-factory {
boss-thread-prefix = "NettyBoss"
worker-thread-prefix = "NettyServerNIOWorker"
server-executor-thread-prefix = "NettyServerBizHandler"
share-boss-worker = false
client-selector-thread-prefix = "NettyClientSelector"
client-selector-thread-size = 1
client-worker-thread-prefix = "NettyClientWorkerThread"
# netty boss thread size,will not be used for UDT
boss-thread-size = 1
#auto default pin or 8
worker-thread-size = 8
}
shutdown {
# when destroy server, wait seconds
wait = 3
}
serialization = "seata"
compressor = "none"
}
service {
#vgroup->rgroup
vgroup_mapping.micro-user-group= "default"
#only support single node
default.grouplist = "127.0.0.1:8091"
#degrade current not support
enableDegrade = false
#disable
disable = false
disableGlobalTransaction = false
#unit ms,s,m,h,d represents milliseconds, seconds, minutes, hours, days, default permanent
max.commit.retry.timeout = "-1"
max.rollback.retry.timeout = "-1"
}
client {
async.commit.buffer.limit = 10000
lock {
retry.internal = 10
retry.times = 30
}
report.retry.count = 5
}
## transaction log store
store {
## store mode: file、db
mode = "db"
## file store
file {
dir = "sessionStore"
# branch session size , if exceeded first try compress lockkey, still exceeded throws exceptions
max-branch-session-size = 16384
# globe session size , if exceeded throws exceptions
max-global-session-size = 512
# file buffer size , if exceeded allocate new buffer
file-write-buffer-cache-size = 16384
# when recover batch read size
session.reload.read_size = 100
# async, sync
flush-disk-mode = async
}
## database store
db {
## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp) etc.
datasource = "dbcp"
## mysql/oracle/h2/oceanbase etc.
db-type = "mysql"
url = "jdbc:mysql://www.iamcrawler.cn:3306/seata"
user = "root"
password = "mysql"
min-conn = 1
max-conn = 300
global.table = "global_table"
branch.table = "branch_table"
lock-table = "lock_table"
query-limit = 100
}
}
lock {
## the lock store mode: local、remote
mode = "remote"
local {
## store locks in user's database
}
remote {
## store locks in the seata's server
}
}
recovery {
committing-retry-delay = 30
asyn-committing-retry-delay = 30
rollbacking-retry-delay = 30
timeout-retry-delay = 30
}
transaction {
undo.data.validation = true
undo.log.serialization = "jackson"
}
## metrics settings
metrics {
enabled = false
registry-type = "compact"
# multi exporters use comma divided
exporter-list = "prometheus"
exporter-prometheus-port = 9898
}
config {
nacos.group = "default"
}
```
registry.conf:
```
registry {
# file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
type = "nacos"
nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
eureka {
serviceUrl = "http://localhost:8761/eureka"
application = "default"
weight = "1"
}
redis {
serverAddr = "localhost:6379"
db = "0"
password = ""
cluster = "default"
timeout = "0"
}
zk {
cluster = "default"
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
username = ""
password = ""
}
consul {
cluster = "default"
serverAddr = "127.0.0.1:8500"
}
etcd3 {
cluster = "default"
serverAddr = "http://localhost:2379"
}
sofa {
serverAddr = "127.0.0.1:9603"
application = "default"
region = "DEFAULT_ZONE"
datacenter = "DefaultDataCenter"
cluster = "default"
group = "SEATA_GROUP"
addressWaitTime = "3000"
}
file {
name = "file.conf"
}
}
config {
# file、nacos 、apollo、zk、consul、etcd3、springCloudConfig
type = "nacos"
nacos {
serverAddr = "localhost"
namespace = ""
cluster = "default"
}
consul {
serverAddr = "127.0.0.1:8500"
}
apollo {
app.id = "seata-server"
apollo.meta = "http://192.168.1.204:8801"
namespace = "application"
}
zk {
serverAddr = "127.0.0.1:2181"
session.timeout = 6000
connect.timeout = 2000
username = ""
password = ""
}
etcd3 {
serverAddr = "http://localhost:2379"
}
file {
name = "file.conf"
}
}
```
Likewise, add the same configuration to the other participating module; only its tx-service-group differs. Then annotate the initiating method with @GlobalTransactional — the participants need no annotation at all — and you will find the whole transaction is under Seata's global control (a hedged sketch of the initiator/participant split follows the link below).

Of course, the first integration may run into problems big and small. Search the Seata official site, check the issues and solutions on Seata's GitHub, or join the official discussion group — Seata's authors are in it and answer all kinds of questions enthusiastically. In the next post I plan to write up the AT-mode source code I have been reading.

The code for this article is on Gitee; feel free to download it and use it as a reference. Comments and corrections welcome:
https://gitee.com/iamcrawler/...
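For reference, here is a minimal sketch of the initiator/participant split described above. The service name, Feign client, and endpoint are hypothetical illustrations, not taken from the demo repository:
```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.cloud.openfeign.FeignClient;
import org.springframework.stereotype.Service;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;

// Hypothetical Feign client pointing at the participant module.
@FeignClient(name = "micro-order")
interface OrderClient {
    @PostMapping("/order/create")
    void create(@RequestParam("userId") String userId,
                @RequestParam("amount") int amount);
}

// Initiator (TM side): the only place that needs a Seata annotation.
@Service
public class BusinessService {

    private final OrderClient orderClient;

    public BusinessService(OrderClient orderClient) {
        this.orderClient = orderClient;
    }

    @GlobalTransactional(name = "place-order", timeoutMills = 60000)
    public void placeOrder(String userId) {
        // Local DML through the proxied DataSource becomes branch 1; the
        // remote call becomes branch 2 in the participant module, which
        // needs no Seata annotation at all. To verify XID propagation, the
        // participant can log io.seata.core.context.RootContext.getXID().
        orderClient.create(userId, 1);
    }
}
```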