Spring Boot + Spring Cloud Alibaba: Integrating Seata 1.4.2

Preface

Before integrating Seata you need to understand how it works. This article uses the AT mode, so the discussion mostly revolves around AT. Get familiar with Seata before using it; here I reuse the official introduction and examples directly (see the official site).

Integrating Seata

Overview

Dependency versions:

  1. Spring Boot 2.2.4
  2. JDK 8 & JDK 14
  3. MyBatis-Plus 3.1.2
  4. Spring Cloud 2.1.3
  5. Spring Cloud Alibaba 0.9.0

Installing Seata

  1. Go to the official Seata download page. The source package is the project source code and the binary package is the runnable server. Choose version 1.4.2 and download both packages; both will be used later.

  2. After downloading, extract the binary package and go into its conf directory.

  3. Open the registry.conf file, change the registry type to nacos, and change the config type to load configuration from Nacos as well.

    registry {
      # file 、nacos 、eureka、redis、zk、consul、etcd3、sofa
      type = "nacos"
    
      nacos {
        application = "seata-server"
        serverAddr = "127.0.0.1:8848"
        group = "SEATA_GROUP"
        namespace = "public"
        cluster = "default"
        username = "nacos"
        password = "nacos"
      }
      
      
    }
    
    config {
      # file、nacos 、apollo、zk、consul、etcd3
      type = "nacos"
    
      nacos {
        serverAddr = "127.0.0.1:8848"
        namespace = "public"
        group = "SEATA_GROUP"
        username = "nacos"
        password = "nacos"
        dataId = "seataServer.properties"
      }
    }
    
    
  4. Extract the source package, go to script\server\db, and run the SQL for your database to create the tables used by the Seata server. I am using PostgreSQL here.

    -- -------------------------------- The script used when storeMode is 'db' --------------------------------
    -- the table to store GlobalSession data
    CREATE TABLE IF NOT EXISTS public.global_table
    (
        xid                       VARCHAR(128) NOT NULL,
        transaction_id            BIGINT,
        status                    SMALLINT     NOT NULL,
        application_id            VARCHAR(32),
        transaction_service_group VARCHAR(32),
        transaction_name          VARCHAR(128),
        timeout                   INT,
        begin_time                BIGINT,
        application_data          VARCHAR(2000),
        gmt_create                TIMESTAMP(0),
        gmt_modified              TIMESTAMP(0),
        CONSTRAINT pk_global_table PRIMARY KEY (xid)
    );
    
    CREATE INDEX idx_gmt_modified_status ON public.global_table (gmt_modified, status);
    CREATE INDEX idx_transaction_id ON public.global_table (transaction_id);
    
    -- the table to store BranchSession data
    CREATE TABLE IF NOT EXISTS public.branch_table
    (
        branch_id         BIGINT       NOT NULL,
        xid               VARCHAR(128) NOT NULL,
        transaction_id    BIGINT,
        resource_group_id VARCHAR(32),
        resource_id       VARCHAR(256),
        branch_type       VARCHAR(8),
        status            SMALLINT,
        client_id         VARCHAR(64),
        application_data  VARCHAR(2000),
        gmt_create        TIMESTAMP(6),
        gmt_modified      TIMESTAMP(6),
        CONSTRAINT pk_branch_table PRIMARY KEY (branch_id)
    );
    
    CREATE INDEX idx_xid ON public.branch_table (xid);
    
    -- the table to store lock data
    CREATE TABLE IF NOT EXISTS public.lock_table
    (
        row_key        VARCHAR(128) NOT NULL,
        xid            VARCHAR(128),
        transaction_id BIGINT,
        branch_id      BIGINT       NOT NULL,
        resource_id    VARCHAR(256),
        table_name     VARCHAR(32),
        pk             VARCHAR(36),
        gmt_create     TIMESTAMP(0),
        gmt_modified   TIMESTAMP(0),
        CONSTRAINT pk_lock_table PRIMARY KEY (row_key)
    );
    
    CREATE INDEX idx_branch_id ON public.lock_table (branch_id);
    
    
  5. Go to the script\config-center directory, open config.txt, and configure it as follows:

    transport.type=TCP
    transport.server=NIO
    transport.heartbeat=true
    transport.enableClientBatchSendRequest=false
    transport.threadFactory.bossThreadPrefix=NettyBoss
    transport.threadFactory.workerThreadPrefix=NettyServerNIOWorker
    transport.threadFactory.serverExecutorThreadPrefix=NettyServerBizHandler
    transport.threadFactory.shareBossWorker=false
    transport.threadFactory.clientSelectorThreadPrefix=NettyClientSelector
    transport.threadFactory.clientSelectorThreadSize=1
    transport.threadFactory.clientWorkerThreadPrefix=NettyClientWorkerThread
    transport.threadFactory.bossThreadSize=1
    transport.threadFactory.workerThreadSize=default
    transport.shutdown.wait=3
    service.vgroupMapping.my_test_tx_group=default
    service.default.grouplist=127.0.0.1:8091
    service.enableDegrade=false
    service.disableGlobalTransaction=false
    client.rm.asyncCommitBufferLimit=10000
    client.rm.lock.retryInterval=10
    client.rm.lock.retryTimes=30
    client.rm.lock.retryPolicyBranchRollbackOnConflict=false
    client.rm.reportRetryCount=5
    client.rm.tableMetaCheckEnable=false
    client.rm.tableMetaCheckerInterval=60000
    client.rm.sqlParserType=druid 
    client.rm.reportSuccessEnable=false
    client.rm.sagaBranchRegisterEnable=false
    client.tm.commitRetryCount=5
    client.tm.rollbackRetryCount=5
    client.tm.defaultGlobalTransactionTimeout=60000
    client.tm.degradeCheck=false
    client.tm.degradeCheckAllowTimes=10
    client.tm.degradeCheckPeriod=2000
    store.mode=db
    store.publicKey=
    store.file.dir=file_store/data
    store.file.maxBranchSessionSize=16384
    store.file.maxGlobalSessionSize=512
    store.file.fileWriteBufferCacheSize=16384
    store.file.flushDiskMode=async
    store.file.sessionReloadReadSize=100
    store.db.datasource=hikari
    store.db.dbType=postgresql
    store.db.driverClassName=org.postgresql.Driver
    store.db.url=jdbc:postgresql://localhost:5432/seata
    store.db.user=root
    store.db.password=123456
    store.db.minConn=5
    store.db.maxConn=30
    store.db.globalTable=global_table
    store.db.branchTable=branch_table
    store.db.queryLimit=100
    store.db.lockTable=lock_table
    store.db.maxWait=5000
    store.redis.mode=single
    store.redis.single.host=127.0.0.1
    store.redis.single.port=6379
    store.redis.sentinel.masterName=
    store.redis.sentinel.sentinelHosts=
    store.redis.maxConn=10
    store.redis.minConn=1
    store.redis.maxTotal=100
    store.redis.database=0
    store.redis.password=
    store.redis.queryLimit=100
    server.recovery.committingRetryPeriod=1000
    server.recovery.asynCommittingRetryPeriod=1000
    server.recovery.rollbackingRetryPeriod=1000
    server.recovery.timeoutRetryPeriod=1000
    server.maxCommitRetryTimeout=-1
    server.maxRollbackRetryTimeout=-1
    server.rollbackRetryTimeoutUnlockEnable=false
    client.undo.dataValidation=true
    client.undo.logSerialization=jackson
    client.undo.onlyCareUpdateColumns=true
    server.undo.logSaveDays=7
    server.undo.logDeletePeriod=86400000
    client.undo.logTable=undo_log
    client.undo.compress.enable=true
    client.undo.compress.type=zip
    client.undo.compress.threshold=64k
    log.exceptionRate=100
    transport.serialization=seata
    transport.compressor=none
    metrics.enabled=false
    metrics.registryType=compact
    metrics.exporterList=prometheus
    metrics.exporterPrometheusPort=9898
    
  6. Go to the script\config-center\nacos directory and run the nacos-config.sh script; I am using Git Bash here.

    After running sh nacos-config.sh, the imported configuration entries can be seen in the Nacos configuration management console.
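
    The Nacos connection parameters can also be passed to the script explicitly. The command below is only an example; host, port, group, and credentials match the registry.conf above, and you would add -t <namespaceId> when using a namespace other than the default:

    sh nacos-config.sh -h 127.0.0.1 -p 8848 -g SEATA_GROUP -u nacos -w nacos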

  7. Start the seata-server.bat script in the bin directory of the binary package.

    If seata-server appears in the Nacos service list after the script runs, the server has registered successfully.

Client Configuration

  1. Add the Maven dependencies

    <dependency>
        <groupId>com.alibaba.cloud</groupId>
        <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
        <version>2.2.0.RELEASE</version>
        <exclusions>
            <exclusion>
                <groupId>io.seata</groupId>
                <artifactId>seata-spring-boot-starter</artifactId>
            </exclusion>
        </exclusions>
    </dependency>
    <dependency>
        <groupId>io.seata</groupId>
        <artifactId>seata-spring-boot-starter</artifactId>
        <version>1.4.2</version>
    </dependency>
  2. Add the Seata configuration to application.yml

    seata:
      enabled: true
      application-id: ${spring.application.name}
      tx-service-group: my_test_tx_group
      service:
        vgroupMapping:
          my_test_tx_group: default
      config:
        type: nacos
        nacos:
          namespace:
          serverAddr: localhost:8848
          group: SEATA_GROUP
          userName: "nacos"
          password: "nacos"
    
  3. Create the undo_log table

    Go to the script\client\at\db directory, find the SQL file for your database, and run it to create the table (it must be created in every client's business database).

    -- For AT mode you must init this SQL in your business database; the Seata server does not need it.
    CREATE TABLE IF NOT EXISTS public.undo_log
    (
        id            SERIAL       NOT NULL,
        branch_id     BIGINT       NOT NULL,
        xid           VARCHAR(128) NOT NULL,
        context       VARCHAR(128) NOT NULL,
        rollback_info BYTEA        NOT NULL,
        log_status    INT          NOT NULL,
        log_created   TIMESTAMP(0) NOT NULL,
        log_modified  TIMESTAMP(0) NOT NULL,
        CONSTRAINT pk_undo_log PRIMARY KEY (id),
        CONSTRAINT ux_undo_log UNIQUE (xid, branch_id)
    );
    
    CREATE SEQUENCE IF NOT EXISTS undo_log_id_seq INCREMENT BY 1 MINVALUE 1 ;
    
  4. Start the project

    If the console prints output like the following, the project has successfully registered with the TC:

    i.s.c.rpc.netty.RmNettyRemotingClient : register RM success. client version:1.4.2, server version:1.4.2,channel:[id: 0x055376de, L:/127.0.0.1:55130 - R:/127.0.0.1:8091]
    i.s.core.rpc.netty.NettyPoolableFactory : register success, cost 135 ms, version:1.4.2,role:RMROLE,channel:[id: 0x055376de, L:/127.0.0.1:55130 - R:/127.0.0.1:8091]

Quick Start

Let's start with a microservices example.

Use Case

The business scenario is a user purchasing a commodity. The whole flow is backed by three microservices:

  • Storage service: deducts the storage quantity of a given commodity.
  • Order service: creates an order according to the purchase request.
  • Account service: debits the balance from the user's account.

Architecture Diagram

(architecture diagram image)

Storage Service

public interface StorageService {

    /**
     * Deduct the storage count.
     */
    void deduct(String commodityCode, int count);
}

Order Service

public interface OrderService {

    /**
     * Create an order.
     */
    Order create(String userId, String commodityCode, int orderCount);
}

Account Service

public interface AccountService {

    /**
     * Debit from the user's account.
     */
    void debit(String userId, int money);
}

Main Business Logic

public class BusinessServiceImpl implements BusinessService {

    private StorageService storageService;

    private OrderService orderService;

    /**
     * Purchase.
     */
    public void purchase(String userId, String commodityCode, int orderCount) {

        storageService.deduct(commodityCode, orderCount);

        orderService.create(userId, commodityCode, orderCount);
    }
}

public class OrderServiceImpl implements OrderService {

    private OrderDAO orderDAO;

    private AccountService accountService;

    public Order create(String userId, String commodityCode, int orderCount) {

        int orderMoney = calculate(commodityCode, orderCount);

        accountService.debit(userId, orderMoney);

        Order order = new Order();
        order.userId = userId;
        order.commodityCode = commodityCode;
        order.count = orderCount;
        order.money = orderMoney;

        // INSERT INTO orders ...
        return orderDAO.insert(order);
    }
}

Seata's Distributed Transaction Solution

We only need to add a single @GlobalTransactional annotation on the business method:

    @GlobalTransactional
    public void purchase(String userId, String commodityCode, int orderCount) {
        ......
    }
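
A quick way to confirm that the rollback really works is to fail on purpose after both branches have executed and then check that the storage and order data are unchanged. The class below is my own sketch of such a check, not part of the official example; it reuses the StorageService and OrderService interfaces shown above, and the class and method names are made up.

import io.seata.spring.annotation.GlobalTransactional;

public class BusinessServiceRollbackCheck {

    private StorageService storageService;

    private OrderService orderService;

    /**
     * Same flow as purchase(), but throws at the end on purpose.
     * After calling this, both the storage deduction and the inserted
     * order should have been rolled back by Seata.
     */
    @GlobalTransactional(name = "purchase-rollback-check", rollbackFor = Exception.class)
    public void purchaseThenFail(String userId, String commodityCode, int orderCount) {
        storageService.deduct(commodityCode, orderCount);
        orderService.create(userId, commodityCode, orderCount);
        // Simulated business failure: Seata rolls back both branch transactions.
        throw new RuntimeException("force rollback for testing");
    }
}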

Caveats

  1. How do I handle a rollback that fails because of dirty data?

    With Seata distributed transactions, dirty writes can occur; the main cause is that write isolation was not handled well enough to guarantee no dirty data is produced. If dirty data appears, the rollback fails and the data has to be fixed manually according to the log hints.

  2. How do I avoid dirty data?

    Use @GlobalTransactional, or @GlobalLock + @Transactional.

    • Data modified by branch transactions that join a global transaction through @GlobalTransactional is recorded under the global lock. If a local transactional method modifies such data and you do not want it to start a global transaction, but still want to avoid dirty writes, use @GlobalLock + @Transactional; @GlobalLock must be combined with a local transaction for the method to take part in global lock conflict checking (see the sketch below).
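
    A minimal sketch of the @GlobalLock + @Transactional combination (my own illustrative example: StorageAdminService, StorageDAO, correctStock, and updateCount are made-up names, not from this article):

    import io.seata.spring.annotation.GlobalLock;
    import org.springframework.transaction.annotation.Transactional;

    public class StorageAdminService {

        private StorageDAO storageDAO; // hypothetical DAO for the storage table

        /**
         * Does not start a global transaction, but @GlobalLock makes this local
         * transaction check Seata's global lock before writing, so it will not
         * dirty-write rows still held by an in-flight global transaction.
         */
        @GlobalLock
        @Transactional(rollbackFor = Exception.class)
        public void correctStock(String commodityCode, int newCount) {
            storageDAO.updateCount(commodityCode, newCount);
        }
    }
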
  3. How do I retry after a global lock conflict?

    Modify the following in config.txt:

    # set to false to enable the retry policy on lock conflicts
    client.rm.lock.retryPolicyBranchRollbackOnConflict=false

    # retry interval, in milliseconds
    client.rm.lock.retryInterval=10

    # number of retries
    client.rm.lock.retryTimes=30

  4. Getting the error "not found service provider for : io.seata.sqlparser.util.DbTypeParser"?

    Modify the following in config.txt:

    # this value must be druid
    client.rm.sqlParserType=druid

    • Why must this be druid?

      Because the logic in Seata's data-source proxy that determines the database type is only implemented on top of Druid; if this value is not druid, the RM fails with this error at startup.
