Seata is an open-source distributed transaction solution dedicated to providing high-performance, easy-to-use distributed transaction services. It offers the AT, TCC, SAGA, and XA transaction modes, giving users a one-stop distributed transaction solution. See the official site for details.
Seata's overall mechanism evolved from two-phase commit.
With the default setting seata.enable-auto-data-source-proxy: true, Seata proxies the application's data source. Whenever the application performs a database write, Seata analyzes the SQL to be executed, captures the detailed state of the affected rows, and saves it as a backup in the undo_log table of the database the current service is connected to. If a downstream service fails, the seata-server can roll back the upstream services using the records in their undo_log tables, keeping the distributed transaction consistent.
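The undo-log mechanism above can be sketched in plain Java. This is a simplified illustration of the idea (capture a before image, let the local transaction commit, restore the image on compensation), not Seata's actual implementation; all class and method names here are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified illustration of how AT mode logs a "before image" so a branch
// can be compensated later. Not Seata's real code; names are invented.
public class UndoLogSketch {
    // Simulated business table: primary key -> column values
    static Map<Integer, Map<String, Object>> table = new HashMap<>();
    // Simulated undo_log: xid -> (primary key -> before image)
    static Map<String, Map<Integer, Map<String, Object>>> undoLog = new HashMap<>();

    static void update(String xid, int id, String column, Object value) {
        // 1. Capture the before image of the row about to change
        Map<String, Object> beforeImage = new HashMap<>(table.get(id));
        undoLog.computeIfAbsent(xid, k -> new HashMap<>()).put(id, beforeImage);
        // 2. Apply the business update (the local transaction commits immediately)
        table.get(id).put(column, value);
    }

    static void rollback(String xid) {
        // Restore every row from its before image, then discard the log entry
        undoLog.getOrDefault(xid, Map.of())
               .forEach((id, image) -> table.put(id, new HashMap<>(image)));
        undoLog.remove(xid);
    }

    public static void main(String[] args) {
        table.put(1, new HashMap<>(Map.of("balance", 100)));
        update("xid-1", 1, "balance", 40); // branch commits locally
        rollback("xid-1");                 // downstream failed: compensate
        System.out.println(table.get(1).get("balance")); // prints 100
    }
}
```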
On top of local database transactions at Read Committed isolation or higher, Seata's AT mode defaults to a global isolation level of Read Uncommitted.
If a particular scenario requires global Read Committed, the way Seata currently provides it is by proxying SELECT ... FOR UPDATE statements.
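The proxied statement behaves like a retry loop: it keeps re-acquiring the local lock until the transaction coordinator reports no conflicting global lock on the rows. A simplified, framework-free sketch of that loop (all names are invented for illustration, this is not Seata's API):

```java
// Sketch of how a proxied "SELECT ... FOR UPDATE" achieves global
// read-committed: retry until no other global transaction holds the
// row's global lock. Names are invented, not Seata's API.
public class GlobalLockRetrySketch {
    interface LockQuery { boolean isGloballyLocked(String rowKey); }

    static boolean selectForUpdate(LockQuery tc, String rowKey, int maxRetries) {
        for (int i = 0; i < maxRetries; i++) {
            // The SELECT ... FOR UPDATE itself takes the local lock; if the
            // coordinator reports a global lock, release and retry rather
            // than read a value that may still be rolled back.
            if (!tc.isGloballyLocked(rowKey)) {
                return true; // safe to read: data is globally committed
            }
            // release the local lock, back off, and retry
        }
        return false; // gave up: global lock still held
    }

    public static void main(String[] args) {
        // Simulated coordinator: row locked for the first two checks only
        int[] calls = {0};
        LockQuery tc = key -> calls[0]++ < 2;
        System.out.println(selectForUpdate(tc, "account:1", 10)); // prints true
    }
}
```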
Prerequisites
How it works
Use cases
Suited to relational databases that support local transactions, such as MySQL with the InnoDB engine.
The TCC mode allows custom branch transactions to be enlisted in the management of the global transaction.
This mode does not rely on transaction support in the underlying database.
How it works
Use cases
Suited to relational databases without transaction support, such as MySQL with the MyISAM engine.
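The Try/Confirm/Cancel contract behind TCC can be sketched in plain Java. The real integration registers these phases with Seata's branch-transaction machinery; this sketch only illustrates the three-phase split, with invented names:

```java
// Simplified TCC illustration: reserve funds in Try, consume the reservation
// in Confirm, release it in Cancel. Plain Java, independent of Seata's API.
public class TccSketch {
    int balance = 100;
    int frozen = 0;

    // Try: check the business constraint and reserve the resource
    boolean tryDebit(int amount) {
        if (balance - frozen < amount) return false;
        frozen += amount;
        return true;
    }

    // Confirm: consume the reservation (must succeed if Try succeeded)
    void confirmDebit(int amount) {
        balance -= amount;
        frozen -= amount;
    }

    // Cancel: release the reservation (compensation for Try)
    void cancelDebit(int amount) {
        frozen -= amount;
    }

    public static void main(String[] args) {
        TccSketch account = new TccSketch();
        if (account.tryDebit(30)) {
            account.confirmDebit(30);        // global commit path
        }
        System.out.println(account.balance); // prints 70
    }
}
```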
The Saga mode is Seata's solution for long-running transactions.
In Saga, every participant in the business flow commits its own local transaction; when any participant fails, the participants that have already succeeded are compensated.
Both the first-phase forward services and the second-phase compensation services are implemented by the business developer.
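The forward/compensation flow can be sketched as follows: run the forward services in order, and on the first failure invoke the compensations of the already-committed steps in reverse. A minimal, framework-free illustration (not Seata's Saga state-machine engine):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Simplified Saga illustration: forward services run in order; on the first
// failure, the already-committed steps are compensated in reverse order.
public class SagaSketch {
    public interface Step {
        boolean forward();    // first-phase forward service
        void compensate();    // second-phase compensation service
    }

    static boolean run(List<Step> steps) {
        Deque<Step> done = new ArrayDeque<>();
        for (Step step : steps) {
            if (!step.forward()) {
                // Compensate committed participants, most recent first
                while (!done.isEmpty()) done.pop().compensate();
                return false;
            }
            done.push(step);
        }
        return true;
    }

    public static void main(String[] args) {
        StringBuilder log = new StringBuilder();
        Step ok = new Step() {
            public boolean forward() { log.append("A+"); return true; }
            public void compensate() { log.append("A-"); }
        };
        Step fail = new Step() {
            public boolean forward() { log.append("B!"); return false; }
            public void compensate() { log.append("B-"); }
        };
        run(List.of(ok, fail));
        System.out.println(log); // prints A+B!A-
    }
}
```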
Use cases
Pros
Cons
In summary, this project uses the AT mode.
This project uses Nacos as Seata's configuration center. In Nacos, add a configuration file named seata-server.properties with the following content (see the official configuration reference for the full list of options):
#Transport configuration, for client and server
transport.type=TCP
transport.server=NIO
transport.heartbeat=true
transport.enableTmClientBatchSendRequest=false
transport.enableRmClientBatchSendRequest=true
transport.enableTcServerBatchSendResponse=false
transport.rpcRmRequestTimeout=30000
transport.rpcTmRequestTimeout=30000
transport.rpcTcRequestTimeout=30000
transport.threadFactory.bossThreadPrefix=NettyBoss
transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
transport.threadFactory.shareBossWorker=false
transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
transport.threadFactory.clientSelectorThreadSize=1
transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
transport.threadFactory.bossThreadSize=1
transport.threadFactory.workerThreadSize=default
transport.shutdown.wait=3
transport.serialization=seata
transport.compressor=none
#Transaction routing rules configuration, only for the client
# Note: this is a map entry; the key is my_test_tx_group and the value is seata-cluster
# It must match exactly on both the seata-server and the seata-client
service.vgroupMapping.my_test_tx_group=seata-cluster
#If you use a registry, you can ignore it
#service.default.grouplist=127.0.0.1:8091
service.enableDegrade=false
service.disableGlobalTransaction=false
#Transaction rule configuration, only for the client
client.rm.asyncCommitBufferLimit=10000
client.rm.lock.retryInterval=10
client.rm.lock.retryTimes=30
client.rm.lock.retryPolicyBranchRollbackOnConflict=true
client.rm.reportRetryCount=5
client.rm.tableMetaCheckEnable=true
client.rm.tableMetaCheckerInterval=60000
client.rm.sqlParserType=druid
client.rm.reportSuccessEnable=false
client.rm.sagaBranchRegisterEnable=false
client.rm.sagaJsonParser=fastjson
client.rm.tccActionInterceptorOrder=-2147482648
client.tm.commitRetryCount=5
client.tm.rollbackRetryCount=5
client.tm.defaultGlobalTransactionTimeout=60000
client.tm.degradeCheck=false
client.tm.degradeCheckAllowTimes=10
client.tm.degradeCheckPeriod=2000
client.tm.interceptorOrder=-2147482648
client.undo.dataValidation=true
client.undo.logSerialization=jackson
client.undo.onlyCareUpdateColumns=true
server.undo.logSaveDays=7
server.undo.logDeletePeriod=86400000
client.undo.logTable=seata_undo_log
client.undo.compress.enable=true
client.undo.compress.type=zip
client.undo.compress.threshold=64k
#Log rule configuration, for client and server
log.exceptionRate=100
#Transaction storage configuration, only for the server. The file, DB, and redis configuration values are optional.
store.mode=db
store.lock.mode=db
store.session.mode=db
store.db.datasource=druid
store.db.dbType=mysql
store.db.driverClassName=com.mysql.cj.jdbc.Driver
store.db.url=jdbc:mysql://ip:port/seata?rewriteBatchedStatements=true&serverTimezone=Asia/Shanghai&useUnicode=true&characterEncoding=utf8&useSSL=false
store.db.user=username
store.db.password=password
store.db.minConn=5
store.db.maxConn=30
store.db.globalTable=seata_global_table
store.db.branchTable=seata_branch_table
store.db.distributedLockTable=seata_distributed_lock
store.db.queryLimit=100
store.db.lockTable=seata_lock_table
store.db.maxWait=5000
#Transaction rule configuration, only for the server
server.recovery.committingRetryPeriod=1000
server.recovery.asynCommittingRetryPeriod=1000
server.recovery.rollbackingRetryPeriod=1000
server.recovery.timeoutRetryPeriod=1000
server.recovery.handleAllSessionPeriod=1000
server.maxCommitRetryTimeout=-1
server.maxRollbackRetryTimeout=-1
server.rollbackRetryTimeoutUnlockEnable=false
server.distributedLockExpireTime=10000
server.xaerNotaRetryTimeout=60000
server.session.branchAsyncQueueSize=5000
server.session.enableBranchAsyncRemove=false
server.enableParallelRequestHandle=false
#Metrics configuration, only for the server
metrics.enabled=false
metrics.registryType=compact
metrics.exporterList=prometheus
metrics.exporterPrometheusPort=9898
#security configuration, only for the server
security.secretKey=SeataSecretKey0c382ef121d778043159209298fd40bf3850a017
security.tokenValidityInMilliseconds=1800000
security.ignore.urls=/,/**/*.css,/**/*.js,/**/*.html,/**/*.map,/**/*.svg,/**/*.png,/**/*.ico,/console-fe/public/**,/api/v1/auth/login
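Besides pasting the properties into the Nacos console, they can be pushed through Nacos's v1 Open API (POST /nacos/v1/cs/configs with dataId, group, and content form fields). A small sketch that builds the request body; the server address and credentials are placeholders:

```java
import java.net.URLEncoder;
import java.nio.charset.StandardCharsets;

// Sketch: build the form body for publishing seata-server.properties through
// the Nacos v1 Open API (POST http://ip:port/nacos/v1/cs/configs).
// "ip:port" is a placeholder for your Nacos address.
public class NacosPublishSketch {

    // URL-encode the three form fields the publish endpoint expects
    static String buildForm(String dataId, String group, String content) {
        return "dataId=" + URLEncoder.encode(dataId, StandardCharsets.UTF_8)
             + "&group=" + URLEncoder.encode(group, StandardCharsets.UTF_8)
             + "&content=" + URLEncoder.encode(content, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        String content = String.join("\n",
                "store.mode=db",
                "service.vgroupMapping.my_test_tx_group=seata-cluster");
        String form = buildForm("seata-server.properties", "SEATA_GROUP", content);
        System.out.println(form);
        // Send with java.net.http.HttpClient, e.g.:
        //   HttpRequest.newBuilder(URI.create("http://ip:port/nacos/v1/cs/configs"))
        //       .header("Content-Type", "application/x-www-form-urlencoded")
        //       .POST(HttpRequest.BodyPublishers.ofString(form)).build();
    }
}
```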
This project also uses Nacos as Seata's registry; the service discovery that Nacos provides lets Seata propagate the global transaction id (xid) along the service call chain.
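Conceptually, the xid travels in a request header (Seata uses the header name TX_XID), and the callee binds it before executing its branch. A simplified, framework-free sketch of that hand-off; apart from the header name, all names here are invented:

```java
import java.util.HashMap;
import java.util.Map;

// Illustration of xid propagation across a service call: the caller puts the
// global transaction id into a request header, the callee binds it before
// executing its branch. Only the header name follows Seata's convention.
public class XidPropagationSketch {
    static final String HEADER = "TX_XID";
    static final ThreadLocal<String> ROOT_CONTEXT = new ThreadLocal<>();

    // Caller side: attach the current xid to the outgoing request
    static Map<String, String> buildHeaders() {
        Map<String, String> headers = new HashMap<>();
        String xid = ROOT_CONTEXT.get();
        if (xid != null) headers.put(HEADER, xid);
        return headers;
    }

    // Callee side: bind the received xid so its branch joins the global tx
    static void bindFromHeaders(Map<String, String> headers) {
        String xid = headers.get(HEADER);
        if (xid != null) ROOT_CONTEXT.set(xid);
    }

    public static void main(String[] args) {
        ROOT_CONTEXT.set("192.168.0.1:8091:123456"); // begun by the TM
        Map<String, String> headers = buildHeaders(); // e.g. in a client interceptor
        ROOT_CONTEXT.remove();                        // simulate the callee JVM
        bindFromHeaders(headers);
        System.out.println(ROOT_CONTEXT.get()); // prints 192.168.0.1:8091:123456
    }
}
```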
docker pull seataio/seata-server:1.5.2
docker run \
--name seata-server \
-d \
-p 8091:8091 \
-p 7091:7091 \
--restart=always \
seataio/seata-server:1.5.2
The configuration center must set store.mode=db, with the data-source settings adjusted accordingly.
Global transaction session information consists of three parts (global transaction -> branch transaction -> global lock), stored in the tables global_table, branch_table, and lock_table.
The DDL for creating these tables is available in the official scripts.
This is done by mounting files: application.yml on the host is mounted into the corresponding directory inside the container. First copy the default configuration out of the container:
docker cp seata-server:/seata-server/resources /data/seata/config
After copying the files out, you can either:
- edit `application.yml` and `cp` it back into the container, or
- `rm` the temporary container and recreate it with the volume mapping:
docker run \
--name seata-server \
-d \
-p 8091:8091 \
-p 7091:7091 \
-v /data/seata/config/resources:/seata-server/resources \
--restart=always \
seataio/seata-server:1.5.2
Optional: the IP that seata-server uses when registering with the registry (e.g. Eureka).
Optional: the port seata-server listens on; defaults to `8091`.
Optional: the seata-server node ID, e.g. `1`, `2`, `3`, ...; defaults to a value generated from the IP.
Optional: the runtime environment of seata-server, e.g. `dev` or `test`; on startup the server then loads a configuration file such as registry-dev.conf.
All configurable seata-server properties are listed in application.example.yml:
server:
  port: 7091

spring:
  application:
    name: seata-server
  profiles:
    active: test

logging:
  config: classpath:logback-spring.xml
  file:
    path: ${user.home}/logs/seata
  extend:
    logstash-appender:
      destination: 127.0.0.1:4560
    kafka-appender:
      bootstrap-servers: 127.0.0.1:9092
      topic: logback_to_logstash

# Seata console
console:
  user:
    username: seata
    password: seata

seata:
  config:
    # support: nacos, consul, apollo, zk, etcd3
    # Nacos as the configuration center
    type: nacos
    nacos:
      server-addr: ip:port
      namespace: ${spring.profiles.active}
      group: SEATA_GROUP
      username: nacos
      password: nacos
      ##if use MSE Nacos with auth, mutex with username/password attribute
      #access-key: ""
      #secret-key: ""
      data-id: seata-server-dev.properties
  registry:
    # support: nacos, eureka, redis, zk, consul, etcd3, sofa
    # Nacos as the service registry
    type: nacos
    nacos:
      application: seata-server
      server-addr: ip:port
      group: SEATA_GROUP
      namespace: ${spring.profiles.active}
      username: nacos
      password: nacos
      # Defaults to "default" if omitted
      # Note: must be consistent with the seata-client configuration
      # See the configuration-center entry: service.vgroupMapping.my_test_tx_group=seata-cluster
      cluster: seata-cluster
  server:
    service-port: 8091 #If not configured, the default is '${server.port} + 1000'
    enable-check-auth: true
    retry-dead-threshold: 130000
    undo:
      log-save-days: 7
      log-delete-period: 86400000
<dependency>
    <groupId>io.seata</groupId>
    <artifactId>seata-spring-boot-starter</artifactId>
    <version>latest version</version>
</dependency>
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>latest version</version>
    <exclusions>
        <exclusion>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
        </exclusion>
    </exclusions>
</dependency>
seata:
  application-id: ${spring.application.name}
  # See the configuration-center entry: service.vgroupMapping.my_test_tx_group=seata-cluster
  tx-service-group: my_test_tx_group
  config:
    type: nacos
    nacos:
      namespace: ${spring.profiles.active}
      data-id: seata-server-${spring.profiles.active}.properties
      server-addr: ${spring.cloud.nacos.discovery.server-addr}
      group: SEATA_GROUP
      username: ${spring.cloud.nacos.discovery.username}
      password: ${spring.cloud.nacos.discovery.password}
  registry:
    type: nacos
    nacos:
      application: seata-server
      server-addr: ${spring.cloud.nacos.discovery.server-addr}
      group: SEATA_GROUP
      namespace: ${spring.profiles.active}
      username: ${spring.cloud.nacos.discovery.username}
      password: ${spring.cloud.nacos.discovery.password}
      # See the configuration-center entry: service.vgroupMapping.my_test_tx_group=seata-cluster
      # See the seata-server configuration: seata.registry.nacos.cluster = seata-cluster
      cluster: seata-cluster
Seata's AT mode requires an UNDO_LOG table in each business database:
-- Note: 0.3.0+ adds the unique index ux_undo_log
-- The table name must match client.undo.logTable (seata_undo_log in this project's configuration)
CREATE TABLE `seata_undo_log` (
`id` bigint(20) NOT NULL AUTO_INCREMENT,
`branch_id` bigint(20) NOT NULL,
`xid` varchar(100) NOT NULL,
`context` varchar(128) NOT NULL,
`rollback_info` longblob NOT NULL,
`log_status` int(11) NOT NULL,
`log_created` datetime NOT NULL,
`log_modified` datetime NOT NULL,
`ext` varchar(100) DEFAULT NULL,
PRIMARY KEY (`id`),
UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8;
Q: Startup keeps failing with:
can not get cluster name in registry config 'service.vgroupMapping.default_tx_group', please make sure registry config correct
A: Check the following:
- the configuration center defines a `service.vgroupMapping` entry; if it is missing, the key defaults to `default_tx_group`
- the client's `seata.tx-service-group` matches the key of `service.vgroupMapping` in the configuration center
- the client's `seata.registry.nacos.cluster` matches the value of `service.vgroupMapping` in the configuration center
- the server's `seata.registry.nacos.cluster` matches the value of `service.vgroupMapping` in the configuration center