Introduction
A demo of distributed transactions with Seata: a transfer between the Bank A and Bank B services.
Source code: https://gitee.com/xrzi2016/spring-cloud-seata-demo
Instructions for use
- Start seata-server: sh ./bin/seata-server.sh
- Install seata-common into the local Maven repository: mvn install
- Start the registry: mvn spring-boot:run
- Start bank-a: mvn spring-boot:run and check the seata-server output
- Start bank-b: mvn spring-boot:run and check the seata-server output
Test instructions
- Normal test: http://127.0.0.1:8081/bank-a/updAmount
  Zhang San's account in the bank-a database is credited 100 yuan, and Li Si's account in the bank-b database is debited 100 yuan
- Abnormal test: http://127.0.0.1:8081/bank-a/updAmount?mode=OUT
  The amounts in both the bank-a and bank-b databases remain unchanged; check the seata-server output for the rollback information
Quickly integrating Seata with Spring Cloud
Official site: http://seata.io/zh-cn/
Official GitHub: https://github.com/seata/seata-samples/blob/master/doc/quick-integration-with-spring-cloud.md#registryconf
1. Add dependencies
Maven:

```xml
<dependency>
    <groupId>com.alibaba.cloud</groupId>
    <artifactId>spring-cloud-starter-alibaba-seata</artifactId>
    <version>2.1.0.RELEASE</version>
</dependency>
```
Note that the groupId of the graduated versions of Spring Cloud Alibaba is com.alibaba.cloud. Also, spring-cloud-starter-alibaba-seata only depends on spring-cloud-alibaba-seata, so adding either spring-cloud-starter-alibaba-seata or spring-cloud-alibaba-seata to the project has the same effect.
2. Add the Seata configuration files
registry.conf
This file specifies the TC's registry and configuration source. The default is file; if another registry is used, Seata-Server must also be registered with it.
```conf
registry {
  # file, nacos, eureka, redis, zk, consul, etcd3, sofa
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = "public"
    cluster = "default"
  }
  eureka {
    serviceUrl = "http://localhost:8761/eureka"
    application = "default"
    weight = "1"
  }
  redis {
    serverAddr = "localhost:6379"
    db = "0"
  }
  zk {
    cluster = "default"
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  consul {
    cluster = "default"
    serverAddr = "127.0.0.1:8500"
  }
  etcd3 {
    cluster = "default"
    serverAddr = "http://localhost:2379"
  }
  sofa {
    serverAddr = "127.0.0.1:9603"
    application = "default"
    region = "DEFAULT_ZONE"
    datacenter = "DefaultDataCenter"
    cluster = "default"
    group = "SEATA_GROUP"
    addressWaitTime = "3000"
  }
  file {
    name = "file.conf"
  }
}

config {
  # file, nacos, apollo, zk, consul, etcd3
  type = "file"

  nacos {
    serverAddr = "localhost"
    namespace = "public"
    cluster = "default"
  }
  consul {
    serverAddr = "127.0.0.1:8500"
  }
  apollo {
    app.id = "seata-server"
    apollo.meta = "http://192.168.1.204:8801"
  }
  zk {
    serverAddr = "127.0.0.1:2181"
    session.timeout = 6000
    connect.timeout = 2000
  }
  etcd3 {
    serverAddr = "http://localhost:2379"
  }
  file {
    name = "file.conf"
  }
}
```
file.conf
This file specifies the TC's runtime properties; if a registry is used, this configuration can also be placed in the configuration center.
```conf
transport {
  # tcp, udt, unix-domain-socket
  type = "TCP"
  # NIO, NATIVE
  server = "NIO"
  # enable heartbeat
  heartbeat = true
  # thread factory for netty
  thread-factory {
    boss-thread-prefix = "NettyBoss"
    worker-thread-prefix = "NettyServerNIOWorker"
    server-executor-thread-prefix = "NettyServerBizHandler"
    share-boss-worker = false
    client-selector-thread-prefix = "NettyClientSelector"
    client-selector-thread-size = 1
    client-worker-thread-prefix = "NettyClientWorkerThread"
    # netty boss thread size, will not be used for UDT
    boss-thread-size = 1
    # auto, default pin or 8
    worker-thread-size = 8
  }
  shutdown {
    # seconds to wait when destroying the server
    wait = 3
  }
  serialization = "seata"
  compressor = "none"
}

service {
  # vgroup -> rgroup
  vgroup_mapping.my_test_tx_group = "default"
  # only supports a single node
  default.grouplist = "127.0.0.1:8091"
  # degrade is currently not supported
  enableDegrade = false
  # disable
  disable = false
  # unit: ms, s, m, h, d; default -1 means permanent
  max.commit.retry.timeout = "-1"
  max.rollback.retry.timeout = "-1"
}

client {
  async.commit.buffer.limit = 10000
  lock {
    retry.internal = 10
    retry.times = 30
  }
  report.retry.count = 5
}

## transaction log store
store {
  ## store mode: file, db
  mode = "file"

  ## file store
  file {
    dir = "sessionStore"
    # branch session size; if exceeded, first try to compress the lock key, if still exceeded throw an exception
    max-branch-session-size = 16384
    # global session size; if exceeded throw an exception
    max-global-session-size = 512
    # file buffer size; if exceeded, allocate a new buffer
    file-write-buffer-cache-size = 16384
    # batch read size during recovery
    session.reload.read_size = 100
    # async, sync
    flush-disk-mode = async
  }

  ## database store
  db {
    ## the implementation of javax.sql.DataSource, such as DruidDataSource (druid) / BasicDataSource (dbcp) etc.
    datasource = "dbcp"
    ## mysql / oracle / h2 / oceanbase etc.
    db-type = "mysql"
    url = "jdbc:mysql://127.0.0.1:3306/seata"
    user = "mysql"
    password = "mysql"
    min-conn = 1
    max-conn = 3
    global.table = "global_table"
    branch.table = "branch_table"
    lock-table = "lock_table"
    query-limit = 100
  }
}

lock {
  ## lock store mode: local, remote
  mode = "remote"
  local {
    ## store locks in the user's database
  }
  remote {
    ## store locks in seata's server
  }
}

recovery {
  committing-retry-delay = 30
  asyn-committing-retry-delay = 30
  rollbacking-retry-delay = 30
  timeout-retry-delay = 30
}

transaction {
  undo.data.validation = true
  undo.log.serialization = "jackson"
}

## metrics settings
metrics {
  enabled = false
  registry-type = "compact"
  # multiple exporters are comma separated
  exporter-list = "prometheus"
  exporter-prometheus-port = 9898
}
```
Note that in Spring Cloud, service.vgroup_mapping defaults to ${spring.application.name}-fescar-service-group. It can be overridden with the spring.cloud.alibaba.seata.tx-service-group property in application.properties, but the value must match the key in file.conf, otherwise the client reports "no available server to connect".
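For example, assuming a service named bank-a and a transaction group named my_test_tx_group (both names are illustrative, not taken from the demo), the override and the matching file.conf entry would look like this — the group name must be identical in both places:

```properties
# application.properties
spring.application.name=bank-a
spring.cloud.alibaba.seata.tx-service-group=my_test_tx_group
```

```conf
# file.conf
service {
  vgroup_mapping.my_test_tx_group = "default"
  default.grouplist = "127.0.0.1:8091"
}
```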
3. Inject the data source proxy
Seata implements branch transactions by proxying the data source. Both MyBatis and JPA need an io.seata.rm.datasource.DataSourceProxy injected; the difference is that MyBatis additionally needs an org.apache.ibatis.session.SqlSessionFactory built on the proxy.
MyBatis:

```java
import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceProxyConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource dataSource() {
        return new DruidDataSource();
    }

    @Bean
    public DataSourceProxy dataSourceProxy(DataSource dataSource) {
        return new DataSourceProxy(dataSource);
    }

    @Bean
    public SqlSessionFactory sqlSessionFactoryBean(DataSourceProxy dataSourceProxy) throws Exception {
        SqlSessionFactoryBean sqlSessionFactoryBean = new SqlSessionFactoryBean();
        // Build the SqlSessionFactory on the proxy so MyBatis statements run as branch transactions
        sqlSessionFactoryBean.setDataSource(dataSourceProxy);
        return sqlSessionFactoryBean.getObject();
    }
}
```
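For comparison, a JPA setup only needs the proxied DataSource. A minimal sketch, assuming Druid as the pool and that marking the proxy @Primary is enough for Spring's JPA auto-configuration to pick it up (verify this against your own wiring):

```java
import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSource;
import io.seata.rm.datasource.DataSourceProxy;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Primary;

@Configuration
public class JpaDataSourceProxyConfig {

    @Bean
    @ConfigurationProperties(prefix = "spring.datasource")
    public DataSource druidDataSource() {
        return new DruidDataSource();
    }

    // @Primary (an assumption here) makes the EntityManagerFactory use the
    // proxy instead of the raw pool; no SqlSessionFactory bean is needed
    @Bean
    @Primary
    public DataSourceProxy dataSourceProxy(DataSource druidDataSource) {
        return new DataSourceProxy(druidDataSource);
    }
}
```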
4. Add the undo_log table
Create the undo_log table in each business database; it stores the data needed for rollback.
```sql
CREATE TABLE `undo_log`
(
    `id`            BIGINT(20)   NOT NULL AUTO_INCREMENT,
    `branch_id`     BIGINT(20)   NOT NULL,
    `xid`           VARCHAR(100) NOT NULL,
    `context`       VARCHAR(128) NOT NULL,
    `rollback_info` LONGBLOB     NOT NULL,
    `log_status`    INT(11)      NOT NULL,
    `log_created`   DATETIME     NOT NULL,
    `log_modified`  DATETIME     NOT NULL,
    `ext`           VARCHAR(100) DEFAULT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `ux_undo_log` (`xid`, `branch_id`)
) ENGINE = InnoDB
  AUTO_INCREMENT = 1
  DEFAULT CHARSET = utf8;
```
5. Start Seata-Server
Download the corresponding version of Seata-Server from https://github.com/seata/seata/releases, modify registry.conf to match your registry (no change is needed when using file), then unzip and start it with:

```shell
sh ./bin/seata-server.sh
```
6. Use @GlobalTransactional to start the transaction
Annotate the method of the business initiator with @GlobalTransactional to open a global transaction. Through an interceptor, Seata propagates the transaction's xid with the requests to the other services, which implements the distributed transaction.
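A sketch of what the initiator in bank-a might look like. The BankBClient and AccountMapper interfaces and their method names are hypothetical illustrations, not taken from the demo; only @GlobalTransactional itself comes from Seata:

```java
import io.seata.spring.annotation.GlobalTransactional;
import org.springframework.stereotype.Service;

// Hypothetical collaborators, shown inline to keep the sketch self-contained
interface BankBClient { void deductAmount(String account, int amount); }  // e.g. a Feign client for bank-b
interface AccountMapper { void addAmount(String account, int amount); }   // e.g. a local MyBatis mapper

@Service
public class TransferService {

    private final BankBClient bankBClient;
    private final AccountMapper accountMapper;

    public TransferService(BankBClient bankBClient, AccountMapper accountMapper) {
        this.bankBClient = bankBClient;
        this.accountMapper = accountMapper;
    }

    @GlobalTransactional(name = "transfer-tx", rollbackFor = Exception.class)
    public void transfer(String from, String to, int amount) {
        // Branch 1: local update in bank-a through the proxied data source
        accountMapper.addAmount(from, amount);
        // Branch 2: remote call; Seata's interceptor attaches the xid to the
        // request, so bank-b's branch joins the same global transaction
        bankBClient.deductAmount(to, amount);
        // An exception thrown anywhere here rolls back both branches via undo_log
    }
}
```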