Distributed Transaction Solutions: Using Seata

Distributed transactions have always been an important problem in microservice development. In the past I usually solved them with MQ-based eventual consistency, in particular RocketMQ's transactional messages; if you are interested, see my earlier post on integrating RocketMQ transactional messages with Spring Boot. Today's topic is another distributed transaction middleware from Alibaba: Seata. Before joining my current company I honestly did not know much about distributed transaction solutions; Seata first came up when a future colleague mentioned it in my interview, and some teams here already run it in production. My own team is refactoring a legacy system from a single clustered service into a microservice architecture, so we chose Seata for distributed transactions as well. While working from home during the pandemic I verified and tested the chosen approach, and it held up well; our project now also uses Seata in production.

1. About Seata

I will not introduce Seata at length here; I recommend reading the official documentation on the Seata site instead. My habit is to skip long write-ups and focus on usage and the pitfalls I hit, so this post walks through integrating Seata with Spring Boot and using a distributed transaction in practice. Let's get straight into the preparation.

2. Starting the Seata Server

Seata is a distributed transaction middleware with its own server process, so using it means starting that server first. Each microservice then acts as a Seata client and registers with the server, sending along some information about itself at registration time; we will see exactly what later.
First, grab a release from the official download page and pick whichever version you need. My team currently runs 1.1.0, but the latest release is 1.3.0, so that is what I will download this time.
After downloading, unzip the archive:

unzip seata-server-1.3.0.zip

A quick note on the server's storage mode: it supports database (db) and file storage, plus redis as of 1.3.0 (see the comment in file.conf below). The default is file, but personally I think db is the better choice. Seata's global transaction session state consists of three parts: global transactions, branch transactions, and global locks, stored in the tables global_table, branch_table, and lock_table respectively.
Next, create the seata database that the server will use, together with those tables; the DDL is as follows:

CREATE TABLE IF NOT EXISTS `global_table`
(
    `xid`                       VARCHAR(128) NOT NULL,
    `transaction_id`            BIGINT,
    `status`                    TINYINT      NOT NULL,
    `application_id`            VARCHAR(32),
    `transaction_service_group` VARCHAR(32),
    `transaction_name`          VARCHAR(128),
    `timeout`                   INT,
    `begin_time`                BIGINT,
    `application_data`          VARCHAR(2000),
    `gmt_create`                DATETIME,
    `gmt_modified`              DATETIME,
    PRIMARY KEY (`xid`),
    KEY `idx_gmt_modified_status` (`gmt_modified`, `status`),
    KEY `idx_transaction_id` (`transaction_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8;

CREATE TABLE IF NOT EXISTS `branch_table`
(
    `branch_id`         BIGINT       NOT NULL,
    `xid`               VARCHAR(128) NOT NULL,
    `transaction_id`    BIGINT,
    `resource_group_id` VARCHAR(32),
    `resource_id`       VARCHAR(256),
    `branch_type`       VARCHAR(8),
    `status`            TINYINT,
    `client_id`         VARCHAR(64),
    `application_data`  VARCHAR(2000),
    `gmt_create`        DATETIME(6),
    `gmt_modified`      DATETIME(6),
    PRIMARY KEY (`branch_id`),
    KEY `idx_xid` (`xid`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8;

CREATE TABLE IF NOT EXISTS `lock_table`
(
    `row_key`        VARCHAR(128) NOT NULL,
    `xid`            VARCHAR(96),
    `transaction_id` BIGINT,
    `branch_id`      BIGINT       NOT NULL,
    `resource_id`    VARCHAR(256),
    `table_name`     VARCHAR(32),
    `pk`             VARCHAR(36),
    `gmt_create`     DATETIME,
    `gmt_modified`   DATETIME,
    PRIMARY KEY (`row_key`),
    KEY `idx_branch_id` (`branch_id`)
) ENGINE = InnoDB
  DEFAULT CHARSET = utf8;

After that, edit Seata's configuration. In the extracted directory, go into conf/, which holds the server's two main configuration files: registry.conf and file.conf. The default registry type is file (the alternatives include nacos, eureka, redis, and others), which means the server reads its configuration from file.conf. So the next step is to edit file.conf, switch the store mode to db, and fill in the database connection details:

store {
  ## store mode: file、db、redis
  mode = "db"
  ...
  db {
    ## the implement of javax.sql.DataSource, such as DruidDataSource(druid)/BasicDataSource(dbcp)/HikariDataSource(hikari) etc.
    datasource = "druid"
    ## mysql/oracle/postgresql/h2/oceanbase etc.
    dbType = "mysql"
    driverClassName = "com.mysql.cj.jdbc.Driver"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "root"
    password = "123456"
    minConn = 5
    maxConn = 30
    globalTable = "global_table"
    branchTable = "branch_table"
    lockTable = "lock_table"
    queryLimit = 100
    maxWait = 5000
  }
  ....
}  
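
For reference, registry.conf with the default file type simply points back at file.conf; a minimal sketch (based on the stock 1.3.0 layout) looks like this:

```
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "file"
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"
  file {
    name = "file.conf"
  }
}
```

If you later move to a registry such as nacos, this is the file to change; for this post the defaults are enough.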

With the configuration done, start the server in the background (the -m db flag matches the store mode we just configured) and let nohup capture the log output in nohup.out:

nohup ./seata-server.sh -h 127.0.0.1 -p 8091 -m db &

3. Project Setup

To test distributed transactions I prepared two simple projects, an order service (order-service) and a warehouse service (warehouse-service), each backed by a different database so that one business operation spans a distributed transaction. Both projects add the Seata dependency to their pom.xml. Calls between the services go directly through Feign, without a service registry.

...
    <parent>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-parent</artifactId>
        <version>2.1.13.RELEASE</version>
    </parent>
...
    <properties>
        <java.version>1.8</java.version>
        <spring-cloud.version>Greenwich.SR5</spring-cloud.version>
        <seata.version>1.3.0</seata.version>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-data-jpa</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-web</artifactId>
        </dependency>
        <dependency>
            <groupId>org.springframework.cloud</groupId>
            <artifactId>spring-cloud-starter-openfeign</artifactId>
        </dependency>
        <!-- Seata -->
        <dependency>
            <groupId>io.seata</groupId>
            <artifactId>seata-spring-boot-starter</artifactId>
            <version>${seata.version}</version>
        </dependency>
        <!-- PostgreSQL driver (warehouse-service; order-service uses the MySQL driver instead) -->
        <dependency>
            <groupId>org.postgresql</groupId>
            <artifactId>postgresql</artifactId>
            <scope>runtime</scope>
        </dependency>
        <dependency>
            <groupId>org.projectlombok</groupId>
            <artifactId>lombok</artifactId>
            <optional>true</optional>
        </dependency>
        <dependency>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-test</artifactId>
            <scope>test</scope>
            <exclusions>
                <exclusion>
                    <groupId>org.junit.vintage</groupId>
                    <artifactId>junit-vintage-engine</artifactId>
                </exclusion>
            </exclusions>
        </dependency>
    </dependencies>

    <dependencyManagement>
        <dependencies>
            <dependency>
                <groupId>org.springframework.cloud</groupId>
                <artifactId>spring-cloud-dependencies</artifactId>
                <version>${spring-cloud.version}</version>
                <type>pom</type>
                <scope>import</scope>
            </dependency>
        </dependencies>
    </dependencyManagement>
...

To exercise the distributed transaction more thoroughly, the two services use different database platforms: the order service uses MySQL and the warehouse service uses PostgreSQL, so keep the two apart. The projects' configuration files are as follows:

Order service configuration:
spring.application.name=order-service

spring.datasource.driver-class-name=com.mysql.cj.jdbc.Driver
spring.datasource.url=jdbc:mysql://localhost:3306/order_db?useSSL=false&characterEncoding=utf8&serverTimezone=Asia/Shanghai&allowPublicKeyRetrieval=true
spring.datasource.username=root
spring.datasource.password=123456

spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.database=mysql
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.jpa.generate-ddl=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.MySQL8Dialect

## seata config
seata.enabled=true
seata.application-id=${spring.application.name}
seata.tx-service-group=${spring.application.name}-seata-service-group
seata.service.vgroup-mapping.order-service-seata-service-group=default
seata.service.enable-degrade=false
seata.service.disable-global-transaction=false
seata.service.grouplist.default=127.0.0.1:8091
Warehouse service configuration:
server.port=18080
spring.application.name=warehouse-service
## db
spring.datasource.driver-class-name=org.postgresql.Driver
spring.datasource.url=jdbc:postgresql://localhost:5432/warehouse_db?useSSL=false&characterEncoding=utf8&serverTimezone=Asia/Shanghai
spring.datasource.username=postgres
spring.datasource.password=123456
## jpa
spring.jpa.show-sql=true
spring.jpa.hibernate.ddl-auto=update
spring.jpa.database=postgresql
spring.jpa.hibernate.naming.physical-strategy=org.hibernate.boot.model.naming.PhysicalNamingStrategyStandardImpl
spring.jpa.generate-ddl=true
spring.jpa.properties.hibernate.dialect=org.hibernate.dialect.PostgreSQL10Dialect

## seata config
seata.enabled=true
seata.application-id=${spring.application.name}
seata.tx-service-group=${spring.application.name}-seata-service-group
seata.service.vgroup-mapping.warehouse-service-seata-service-group=default
seata.service.enable-degrade=false
seata.service.disable-global-transaction=false
seata.service.grouplist.default=127.0.0.1:8091

Each of the two projects exposes just one endpoint for testing. The order service's Controller:

@Slf4j
@RestController
@RequestMapping("/order")
public class OrderController {

    private OrderService orderService;

    public OrderController(OrderService orderService) {
        this.orderService = orderService;
    }

    @PostMapping("/purchase")
    public ResponseEntity<Map<String, Object>> purchase(@RequestBody OrderDTO orderDTO) {
        Map<String, Object> resultMap = orderService.purchase(orderDTO);
        return ResponseEntity.ok(resultMap);
    }
}

The Service implementation:

@Slf4j
@Service
public class OrderServiceImpl implements OrderService {

    @Autowired
    private WarehouseClient warehouseClient;

    @Autowired
    private OrderRepository orderRepository;

    @Override
    @GlobalTransactional
    public Map<String, Object> purchase(OrderDTO orderDTO) {
        Map<String, Object> resultMap = new HashMap<>();
        OrderEntity orderEntity = new OrderEntity();
        orderEntity.setAddress(orderDTO.getAddress());
        orderEntity.setCreateTime(new Date());
        orderEntity.setTotalPrice(orderDTO.getTotalPrice());
        orderEntity.setOrderNum(orderDTO.getOrderNum());
        // local transaction: save the order
        OrderEntity result = orderRepository.save(orderEntity);
        log.info(">>>> insert result = {} <<<<",result);

        // remote call to the warehouse service; it becomes a branch transaction
        Map<String, Object> response = warehouseClient.reduce(orderDTO.getWarehouseCode(), orderDTO.getNums());
        log.info(">>>> warehouse response={}",response);

        resultMap.put("success",true);
        return resultMap;
    }
}

In the code above, the order service calls the warehouse service through a Feign client; hystrix is not enabled, so any exception propagates straight back. The part to focus on is the @GlobalTransactional annotation provided by Seata, which declares this method as the scope of a global transaction; it also accepts optional parameters that I will not cover here. Whether the order service or the warehouse service throws an exception, every branch transaction involved gets rolled back.
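
The WarehouseClient interface itself is not shown above. Since no registry is in play, a direct-URL Feign client would look roughly like the sketch below; the /warehouse/reduce path and the parameter names are my assumptions (the reduce(String,Long) signature and port 18080 do appear in the configuration and logs):

```java
// Hypothetical sketch of the Feign client; only the method signature is confirmed by the logs.
@FeignClient(name = "warehouse-service", url = "http://localhost:18080")
public interface WarehouseClient {

    // calls the warehouse service's stock-reduction endpoint (path assumed)
    @PostMapping("/warehouse/reduce")
    Map<String, Object> reduce(@RequestParam("code") String code,
                               @RequestParam("nums") Long nums);
}
```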
The warehouse service's Service code:

    @Override
    @Transactional(rollbackFor = Exception.class)
    public Map<String, Object> reduceStock(String code, Long num) {
        Map<String, Object> resultMap = new HashMap<>();
        resultMap.put("success",false);
        WarehouseEntity warehouseEntity = warehouseRepository.findByCode(code);
        // reduce the warehouse's stock
        Long remain = warehouseEntity.getStock() - num;
        warehouseEntity.setStock(remain);
        warehouseEntity.setUpdateTime(new Date());
        WarehouseEntity result = warehouseRepository.save(warehouseEntity);
        // simulate a failure: throw when the remaining stock is even
        if (result.getStock() % 2 == 0) {
            log.error(">>>> 仓库分支事务抛出异常 <<<<");
            throw new RuntimeException("仓库分支事务抛出异常");
        }

        resultMap.put("success",true);
        resultMap.put("message","更新库存成功");
        return resultMap;
    }

This code is no different from an ordinary local transaction; whether the resulting stock count is odd or even is simply used to simulate a failure.
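
For completeness, the warehouse-side Controller is not shown either; judging from the request-params line that appears in the warehouse logs further down, it would be something like this sketch (the path and mapping are assumptions):

```java
// Hypothetical warehouse-service controller; the log line mirrors the one seen in the output.
@Slf4j
@RestController
@RequestMapping("/warehouse")
public class WarehouseController {

    private final WarehouseService warehouseService;

    public WarehouseController(WarehouseService warehouseService) {
        this.warehouseService = warehouseService;
    }

    @PostMapping("/reduce")
    public Map<String, Object> reduce(@RequestParam("code") String code,
                                      @RequestParam("nums") Long nums) {
        log.info(">>>> request params: code={}, nums={} <<<<", code, nums);
        return warehouseService.reduceStock(code, nums);
    }
}
```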

Note: Seata implements branch transactions by proxying the data source; if the data source is not proxied, the transaction cannot be rolled back.

The data source proxy can be enabled with the @EnableAutoDataSourceProxy annotation, or configured by hand, for example:

@Configuration
public class DataSourceConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DruidDataSource druidDataSource() {
        // or any other DataSource implementation
        return new DruidDataSource();
    }

    /**
     * Register the Seata proxy as the primary data source.
     */
    @Primary
    @Bean
    public DataSource dataSource(DruidDataSource druidDataSource) {
        return new DataSourceProxy(druidDataSource);
    }
}
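
If you prefer the annotation route mentioned above, placing @EnableAutoDataSourceProxy on a configuration class replaces the manual proxy bean entirely; a sketch (the application class name here is made up):

```java
// Hypothetical application class: with seata-spring-boot-starter,
// this annotation wraps the DataSource in Seata's proxy automatically,
// so the manual DataSourceProxy bean above is no longer needed.
@SpringBootApplication
@EnableAutoDataSourceProxy
public class OrderServiceApplication {

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }
}
```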

At this point the project setup is almost done. One thing remains: every service needs an undo_log table, and it must live in the same database as that service's business tables. The MySQL script is below; adapt it for other databases as needed:

CREATE TABLE `undo_log` (
  `id` bigint(20) NOT NULL AUTO_INCREMENT,
  `branch_id` bigint(20) NOT NULL,
  `xid` varchar(100) NOT NULL,
  `context` varchar(128) NOT NULL,
  `rollback_info` longblob NOT NULL,
  `log_status` int(11) NOT NULL,
  `log_created` datetime NOT NULL,
  `log_modified` datetime NOT NULL,
  `ext` varchar(100) DEFAULT NULL,
  PRIMARY KEY (`id`),
  UNIQUE KEY `ux_undo_log` (`xid`,`branch_id`)
) ENGINE=InnoDB AUTO_INCREMENT=1 DEFAULT CHARSET=utf8mb4;
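
Since the warehouse service runs on PostgreSQL, its undo_log needs a translated script; the sketch below is my untested adaptation of the MySQL DDL above:

```sql
-- Hypothetical PostgreSQL adaptation of the undo_log table (untested sketch)
CREATE TABLE undo_log (
  id            BIGSERIAL    PRIMARY KEY,
  branch_id     BIGINT       NOT NULL,
  xid           VARCHAR(100) NOT NULL,
  context       VARCHAR(128) NOT NULL,
  rollback_info BYTEA        NOT NULL,
  log_status    INT          NOT NULL,
  log_created   TIMESTAMP    NOT NULL,
  log_modified  TIMESTAMP    NOT NULL,
  ext           VARCHAR(100),
  CONSTRAINT ux_undo_log UNIQUE (xid, branch_id)
);
```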

Finally, insert a couple of rows of sample data into the warehouse service's database. With that, all the preparation is done; next comes the actual test.

4. Testing the Distributed Transaction

The warehouse service's initial data:

(Figure 1: initial warehouse service data)

Next, start the order service and the warehouse service, and build the HTTP request with IntelliJ IDEA's HTTP Client:

POST http://localhost:8080/order/purchase
Accept: */*
Content-Type: application/json
Cache-Control: no-cache

## example
{"totalPrice": 222.00,"orderNum": "222222","address": "CN-SC-CD-22","warehouseCode":"abc","nums": 3}

The order service's log output:

2020-09-20 19:58:32.141  INFO 28244 --- [nio-8080-exec-1] io.seata.tm.TransactionManagerHolder     : TransactionManager Singleton io.seata.tm.DefaultTransactionManager@6597606d
2020-09-20 19:58:32.170  INFO 28244 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [172.17.0.1:8091:51035922649583616]
Hibernate: insert into t_order (address, createTime, orderNum, totalPrice, updateTime, warehouseCode) values (?, ?, ?, ?, ?, ?)
2020-09-20 19:58:32.401  INFO 28244 --- [nio-8080-exec-1] c.y.s.o.service.impl.OrderServiceImpl    : >>>> insert result = OrderEntity(id=14, orderNum=222222, address=CN-SC-CD-22, totalPrice=222.0, warehouseCode=abc, createTime=Sun Sep 20 19:58:32 CST 2020, updateTime=null) <<<<
2020-09-20 19:58:32.677  INFO 28244 --- [nio-8080-exec-1] c.y.s.o.service.impl.OrderServiceImpl    : >>>> warehouse response={success=true, message=更新库存成功}
2020-09-20 19:58:32.692  INFO 28244 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : [172.17.0.1:8091:51035922649583616] commit status: Committed
2020-09-20 19:58:33.246  INFO 28244 --- [ch_RMROLE_1_1_8] i.s.c.r.p.c.RmBranchCommitProcessor      : rm client handle branch commit process:xid=172.17.0.1:8091:51035922649583616,branchId=51035923391975425,branchType=AT,resourceId=jdbc:mysql://localhost:3306/order_db,applicationData=null
2020-09-20 19:58:33.251  INFO 28244 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch committing: 172.17.0.1:8091:51035922649583616 51035923391975425 jdbc:mysql://localhost:3306/order_db null
2020-09-20 19:58:33.254  INFO 28244 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch commit result: PhaseTwo_Committed

The logs show that a global transaction is created first, with global xid 172.17.0.1:8091:51035922649583616, and that the order service registers a branch transaction with branchId=51035923391975425. The global transaction commits, then the order branch commits, ending in PhaseTwo_Committed, which I read as the second phase of a two-phase commit succeeding (that is my interpretation of the log, not something I checked against the docs, so no guarantees). Note also that the branch transaction carries the global transaction's xid, and that its branch type is AT.
Then the warehouse service's log:

2020-09-20 19:58:32.488  INFO 28493 --- [io-18080-exec-1] c.y.s.w.controller.WarehouseController   : >>>> request params: code=abc, nums=3 <<<<
2020-09-20 19:58:32.512  INFO 28493 --- [io-18080-exec-1] o.h.h.i.QueryTranslatorFactoryInitiator  : HHH000397: Using ASTQueryTranslatorFactory
Hibernate: select warehousee0_.id as id1_0_, warehousee0_.code as code2_0_, warehousee0_.createTime as createTi3_0_, warehousee0_.stock as stock4_0_, warehousee0_.unit as unit5_0_, warehousee0_.updateTime as updateTi6_0_ from t_warehouse warehousee0_ where warehousee0_.code=?
Hibernate: update t_warehouse set code=?, createTime=?, stock=?, unit=?, updateTime=? where id=?
2020-09-20 19:58:56.095  INFO 28493 --- [eoutChecker_1_1] i.s.c.r.netty.NettyClientChannelManager  : will connect to 127.0.0.1:8091
2020-09-20 19:58:56.096  INFO 28493 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : NettyPool create channel to transactionRole:TMROLE,address:127.0.0.1:8091,msg:< RegisterTMRequest{applicationId='warehouse-service', transactionServiceGroup='warehouse-service-seata-service-group'} >
2020-09-20 19:58:56.108  INFO 28493 --- [eoutChecker_1_1] i.s.c.rpc.netty.TmNettyRemotingClient    : register TM success. client version:1.3.0, server version:1.3.0,channel:[id: 0x23f4f5db, L:/127.0.0.1:45528 - R:/127.0.0.1:8091]
2020-09-20 19:58:56.108  INFO 28493 --- [eoutChecker_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 7 ms, version:1.3.0,role:TMROLE,channel:[id: 0x23f4f5db, L:/127.0.0.1:45528 - R:/127.0.0.1:8091]

Unfortunately there is no global or branch transaction output on the warehouse side.
This leaves me wondering whether a chain of three services (service A -> service B -> service C) would make a better test; something to try when I have time.
Let's look at the data in both databases:

(Figure: warehouse service data after the successful request)

(Figure: order service data after the successful request)

The data matches what we expected, so all is well. Next, the failure case.

The modified HTTP request:

POST http://localhost:8080/order/purchase
Accept: */*
Content-Type: application/json
Cache-Control: no-cache

## example
{"totalPrice": 333.00,"orderNum": "333333","address": "CN-SC-CD-33","warehouseCode":"abc","nums": 7}

The order service's log:

2020-09-20 20:16:40.815  INFO 28997 --- [Send_TMROLE_1_1] i.s.core.rpc.netty.NettyPoolableFactory  : register success, cost 3 ms, version:1.3.0,role:TMROLE,channel:[id: 0xb4f2b43b, L:/127.0.0.1:45924 - R:/127.0.0.1:8091]
2020-09-20 20:16:40.825  INFO 28997 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : Begin new global transaction [172.17.0.1:8091:51040488803799040]
Hibernate: insert into t_order (address, createTime, orderNum, totalPrice, updateTime, warehouseCode) values (?, ?, ?, ?, ?, ?)
2020-09-20 20:16:41.033  INFO 28997 --- [nio-8080-exec-1] c.y.s.o.service.impl.OrderServiceImpl    : >>>> insert result = OrderEntity(id=15, orderNum=333333, address=CN-SC-CD-33, totalPrice=333.0, warehouseCode=abc, createTime=Sun Sep 20 20:16:40 CST 2020, updateTime=null) <<<<
2020-09-20 20:16:41.085  INFO 28997 --- [ch_RMROLE_1_1_8] i.s.c.r.p.c.RmBranchRollbackProcessor    : rm handle branch rollback process:xid=172.17.0.1:8091:51040488803799040,branchId=51040489474887681,branchType=AT,resourceId=jdbc:mysql://localhost:3306/order_db,applicationData=null
2020-09-20 20:16:41.086  INFO 28997 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacking: 172.17.0.1:8091:51040488803799040 51040489474887681 jdbc:mysql://localhost:3306/order_db
2020-09-20 20:16:41.141  INFO 28997 --- [ch_RMROLE_1_1_8] i.s.r.d.undo.AbstractUndoLogManager      : xid 172.17.0.1:8091:51040488803799040 branch 51040489474887681, undo_log deleted with GlobalFinished
2020-09-20 20:16:41.142  INFO 28997 --- [ch_RMROLE_1_1_8] io.seata.rm.AbstractRMHandler            : Branch Rollbacked result: PhaseTwo_Rollbacked
2020-09-20 20:16:41.165  INFO 28997 --- [nio-8080-exec-1] i.seata.tm.api.DefaultGlobalTransaction  : [172.17.0.1:8091:51040488803799040] rollback status: Rollbacked
2020-09-20 20:16:41.196 ERROR 28997 --- [nio-8080-exec-1] o.a.c.c.C.[.[.[/].[dispatcherServlet]    : Servlet.service() for servlet [dispatcherServlet] in context with path [] threw exception [Request processing failed; nested exception is feign.FeignException$InternalServerError: status 500 reading WarehouseClient#reduce(String,Long)] with root cause

Once again a global transaction is created: 172.17.0.1:8091:51040488803799040. This time the order service's branch transaction branchId=51040489474887681 is rolled back, and its undo_log rows are deleted once the global transaction finishes (undo_log deleted with GlobalFinished). Branch Rollbacked result: PhaseTwo_Rollbacked shows the branch rollback succeeded, and the global transaction's final status, Rollbacked, confirms the global rollback.
The warehouse service's log:

2020-09-20 20:16:41.043  INFO 28493 --- [io-18080-exec-3] c.y.s.w.controller.WarehouseController   : >>>> request params: code=abc, nums=7 <<<<
Hibernate: select warehousee0_.id as id1_0_, warehousee0_.code as code2_0_, warehousee0_.createTime as createTi3_0_, warehousee0_.stock as stock4_0_, warehousee0_.unit as unit5_0_, warehousee0_.updateTime as updateTi6_0_ from t_warehouse warehousee0_ where warehousee0_.code=?
2020-09-20 20:16:41.046 ERROR 28493 --- [io-18080-exec-3] c.y.s.w.s.impl.WarehouseServiceImpl      : >>>> 仓库分支事务抛出异常 <<<<

Again, sadly, no global or branch transaction output on the warehouse side.
Let's double-check the databases:

(Figure: warehouse service data after the failed request, unchanged)

(Figure: order service data after the failed request, unchanged)

Neither table's data changed at all, which shows the test succeeded: when the warehouse service threw an exception, both services' data was rolled back. The test went smoothly, but everything so far is client-side logging; let's look at the Seata server's log next.

Pay attention to the timestamps in the logs below:

2020-09-20 19:58:32.161  INFO --- [LoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage timeout=60000,transactionName=purchase(com.ypc.seata.orderservice.entity.dto.OrderDTO)
,clientIp:127.0.0.1,vgroup:order-service-seata-service-group
2020-09-20 19:58:32.166  INFO --- [Thread_1_11_500] i.s.s.coordinator.DefaultCoordinator     : Begin new global transaction applicationId: order-service,transactionServiceGroup: order-service-seata-service-group, transactionName: purchase(com.ypc.seata.orderservice.entity.dto.OrderDTO),timeout:60000,xid:172.17.0.1:8091:51035922649583616
2020-09-20 19:58:32.338  INFO --- [LoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=172.17.0.1:8091:51035922649583616,branchType=AT,resourceId=jdbc:mysql://localhost:3306/order_db,lockKey=t_order:14
,clientIp:127.0.0.1,vgroup:order-service-seata-service-group
2020-09-20 19:58:32.347  INFO --- [Thread_1_12_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 172.17.0.1:8091:51035922649583616, branchId = 51035923391975425, resourceId = jdbc:mysql://localhost:3306/order_db ,lockKeys = t_order:14
2020-09-20 19:58:32.679  INFO --- [LoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=172.17.0.1:8091:51035922649583616,extraData=null
,clientIp:127.0.0.1,vgroup:order-service-seata-service-group
2020-09-20 19:58:33.275  INFO --- [cCommitting_1_1] io.seata.server.coordinator.DefaultCore  : Committing global transaction is successfully done, xid = 172.17.0.1:8091:51035922649583616.
2020-09-20 19:58:37.619  INFO --- [NIOWorker_1_2_8] i.s.c.r.n.AbstractNettyRemotingServer    : 127.0.0.1:45308 to server channel inactive.
2020-09-20 19:58:37.619  INFO --- [NIOWorker_1_2_8] i.s.c.r.n.AbstractNettyRemotingServer    : remove channel:[id: 0xdc2f0baa, L:/127.0.0.1:8091 ! R:/127.0.0.1:45308]context:RpcContext{applicationId='order-service', transactionServiceGroup='order-service-seata-service-group', clientId='order-service:127.0.0.1:45308', channel=[id: 0xdc2f0baa, L:/127.0.0.1:8091 ! R:/127.0.0.1:45308], resourceSets=null}
2020-09-20 19:58:38.002  INFO --- [NIOWorker_1_1_8] i.s.c.r.n.AbstractNettyRemotingServer    : 127.0.0.1:45252 to server channel inactive.
2020-09-20 19:58:38.003  INFO --- [NIOWorker_1_1_8] i.s.c.r.n.AbstractNettyRemotingServer    : remove channel:[id: 0x584d000b, L:/127.0.0.1:8091 ! R:/127.0.0.1:45252]context:RpcContext{applicationId='order-service', transactionServiceGroup='order-service-seata-service-group', clientId='order-service:127.0.0.1:45252', channel=[id: 0x584d000b, L:/127.0.0.1:8091 ! R:/127.0.0.1:45252], resourceSets=[]}
2020-09-20 19:58:56.105  INFO --- [NIOWorker_1_4_8] i.s.c.r.processor.server.RegTmProcessor  : TM register success,message:RegisterTMRequest{applicationId='warehouse-service', transactionServiceGroup='warehouse-service-seata-service-group'},channel:[id: 0x5d296a48, L:/127.0.0.1:8091 - R:/127.0.0.1:45528],client version:1.3.0

The above is the server-side output for the first, successful request when the global transaction was opened; the timestamps line up exactly, and the global transaction id matches too.
Below is the output for the second, failing request:

2020-09-20 20:16:40.817  INFO --- [LoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage timeout=60000,transactionName=purchase(com.ypc.seata.orderservice.entity.dto.OrderDTO)
,clientIp:127.0.0.1,vgroup:order-service-seata-service-group
2020-09-20 20:16:40.822  INFO --- [Thread_1_16_500] i.s.s.coordinator.DefaultCoordinator     : Begin new global transaction applicationId: order-service,transactionServiceGroup: order-service-seata-service-group, transactionName: purchase(com.ypc.seata.orderservice.entity.dto.OrderDTO),timeout:60000,xid:172.17.0.1:8091:51040488803799040
2020-09-20 20:16:40.976  INFO --- [LoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=172.17.0.1:8091:51040488803799040,branchType=AT,resourceId=jdbc:mysql://localhost:3306/order_db,lockKey=t_order:15
,clientIp:127.0.0.1,vgroup:order-service-seata-service-group
2020-09-20 20:16:40.984  INFO --- [Thread_1_17_500] i.seata.server.coordinator.AbstractCore  : Register branch successfully, xid = 172.17.0.1:8091:51040488803799040, branchId = 51040489474887681, resourceId = jdbc:mysql://localhost:3306/order_db ,lockKeys = t_order:15
2020-09-20 20:16:41.077  INFO --- [LoggerPrint_1_1] i.s.c.r.p.server.BatchLogHandler         : SeataMergeMessage xid=172.17.0.1:8091:51040488803799040,extraData=null
,clientIp:127.0.0.1,vgroup:order-service-seata-service-group
2020-09-20 20:16:41.150  INFO --- [Thread_1_19_500] io.seata.server.coordinator.DefaultCore  : Rollback branch transaction successfully, xid = 172.17.0.1:8091:51040488803799040 branchId = 51040489474887681
2020-09-20 20:16:41.158  INFO --- [Thread_1_19_500] io.seata.server.coordinator.DefaultCore  : Rollback global transaction successfully, xid = 172.17.0.1:8091:51040488803799040.
2020-09-20 20:17:11.856  INFO --- [NIOWorker_1_6_8] i.s.c.r.n.AbstractNettyRemotingServer    : 127.0.0.1:45924 to server channel inactive.
2020-09-20 20:17:11.856  INFO --- [NIOWorker_1_6_8] i.s.c.r.n.AbstractNettyRemotingServer    : remove channel:[id: 0xa0be40ed, L:/127.0.0.1:8091 ! R:/127.0.0.1:45924]context:RpcContext{applicationId='order-service', transactionServiceGroup='order-service-seata-service-group', clientId='order-service:127.0.0.1:45924', channel=[id: 0xa0be40ed, L:/127.0.0.1:8091 ! R:/127.0.0.1:45924], resourceSets=null}
2020-09-20 20:17:12.206  INFO --- [NIOWorker_1_5_8] i.s.c.r.n.AbstractNettyRemotingServer    : 127.0.0.1:45872 to server channel inactive.
2020-09-20 20:17:12.207  INFO --- [NIOWorker_1_5_8] i.s.c.r.n.AbstractNettyRemotingServer    : remove channel:[id: 0xca7c6e7d, L:/127.0.0.1:8091 ! R:/127.0.0.1:45872]context:RpcContext{applicationId='order-service', transactionServiceGroup='order-service-seata-service-group', clientId='order-service:127.0.0.1:45872', channel=[id: 0xca7c6e7d, L:/127.0.0.1:8091 ! R:/127.0.0.1:45872], resourceSets=[]}

The server log likewise shows nothing from the warehouse service: registration clearly succeeds at startup, yet both the commit and the rollback only mention the order service. I do not quite understand this yet; perhaps adding a third service would clarify it.

5. Summary

That concludes this session of integrating Seata with Spring Boot and testing distributed transactions; I have pushed the code to my GitHub. I pulled in the starter dependency here, but it is worth trying the seata-all dependency directly, which my previous project team used; it takes a bit more work than the starter, but it also gives you a better understanding of the underlying configuration. Personally I am neither fond of nor good at starting from theory, so I usually begin with a working project, learn how to use the thing, and only later dig into the principles or even the source code. Finally, feel free to follow my personal account, 超超学堂, and I welcome any discussion.
