Why data inserted quickly through BaseMapper's insert method is not committed

I. Introduction

Today I was inserting a batch of log data into the database and found that when a large amount of data was inserted in a loop through BaseMapper's int insert(T entity); method, the rows ended up uncommitted: the program ran each insert successfully, yet no data appeared in the database. Inserting records one at a time worked, but fast, repeated inserts in a loop did not take effect, and at first I did not know why.

II. The BaseMapper insert method

public interface BaseMapper<T> extends Mapper<T> {

    /**
     * Insert a single record
     *
     * @param entity the entity object
     */
    int insert(T entity);
}

Calling the insert method:

@Service
@Slf4j
@Transactional(rollbackFor = Exception.class)
public class BlockchainLogService {

	@Autowired
	private BlockchainLogMapper blockchainLogMapper;

	public void blockchainLog(BlockchainLog blockchainLog) {
		try {
			blockchainLogMapper.insert(blockchainLog);
		} catch (Exception e) {
			log.error("Exception while saving log: " + e.getMessage(), e);
		}
	}

}

With this setup I found that when logs were inserted in a loop, the transaction was never committed.

My solution: manually issue a COMMIT statement from the mapper.

@Repository
public interface BlockchainLogMapper extends BaseMapper<BlockchainLog> {

	/**
	 * Manually commit the current transaction
	 */
	@Select("<script>"
			+ "commit "
			+ "</script>")
	void commit();

}

After each insert, call the commit() method; this solved the problem:

@Service
@Slf4j
@Transactional(rollbackFor = Exception.class)
public class BlockchainLogService {

	@Autowired
	private BlockchainLogMapper blockchainLogMapper;

	public void blockchainLog(BlockchainLog blockchainLog) {
		try {
			blockchainLogMapper.insert(blockchainLog);
			blockchainLogMapper.commit();
		} catch (Exception e) {
			log.error("Exception while saving log: " + e.getMessage(), e);
		}
	}

}
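As an aside, a common alternative to issuing COMMIT by hand is to let Spring commit each log insert in its own transaction using Propagation.REQUIRES_NEW. The sketch below is an assumption about how that would look here, not code from the original fix, and it only takes effect when the method is invoked through the Spring proxy (i.e. from another bean, not by self-invocation within the same class):

```java
@Service
@Slf4j
public class BlockchainLogService {

	@Autowired
	private BlockchainLogMapper blockchainLogMapper;

	// REQUIRES_NEW suspends any surrounding transaction and commits this
	// insert on its own as soon as the method returns, so log rows become
	// visible immediately even when the method is called in a loop.
	@Transactional(propagation = Propagation.REQUIRES_NEW, rollbackFor = Exception.class)
	public void blockchainLog(BlockchainLog blockchainLog) {
		blockchainLogMapper.insert(blockchainLog);
	}
}
```

This keeps transaction control declarative instead of embedding a COMMIT statement in a mapper, at the cost of one transaction per log row.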

III. MySQL transaction isolation levels

Transaction isolation levels exist to keep concurrent transactions from interfering with each other. MySQL supports four of them:

  1. READ UNCOMMITTED
  2. READ COMMITTED
  3. REPEATABLE READ
  4. SERIALIZABLE
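The active level can be inspected and changed per session. The statements below assume MySQL 8.0 (older versions expose the variable as @@tx_isolation instead):

```sql
-- View the isolation level of the current session
SELECT @@transaction_isolation;

-- Change it for the current session only
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;

-- Change it server-wide for new connections (requires privileges)
SET GLOBAL TRANSACTION ISOLATION LEVEL REPEATABLE READ;
```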

1. Four transaction isolation levels

 

1.1 READ UNCOMMITTED

Read uncommitted, also known as uncommitted read: a transaction at this isolation level can see data that other transactions have not yet committed. Because uncommitted data may still be rolled back, the data read at this level is called dirty data, and the problem is called a dirty read.
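A dirty read can be reproduced with two MySQL sessions; the account table and its columns here are illustrative, not from the original article:

```sql
-- Session B
SET SESSION TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;
START TRANSACTION;

-- Session A
START TRANSACTION;
UPDATE account SET balance = 100 WHERE id = 1;   -- not yet committed

-- Session B
SELECT balance FROM account WHERE id = 1;        -- returns 100: a dirty read

-- Session A
ROLLBACK;   -- the value 100 that Session B saw never officially existed
```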

1.2 READ COMMITTED

Read committed, also known as committed read: a transaction at this isolation level can only read data that other transactions have already committed, so dirty reads cannot occur. However, because results committed by other transactions become visible while the transaction is still executing, the same SQL query may return different results at different times. This phenomenon is called a non-repeatable read.
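A non-repeatable read under READ COMMITTED, again with an illustrative account table:

```sql
-- Session B
SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED;
START TRANSACTION;
SELECT balance FROM account WHERE id = 1;   -- e.g. returns 50

-- Session A
UPDATE account SET balance = 100 WHERE id = 1;
COMMIT;

-- Session B: same transaction, same query, different result
SELECT balance FROM account WHERE id = 1;   -- now returns 100: non-repeatable read
COMMIT;
```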

1.3 REPEATABLE READ

Repeatable read is MySQL's default transaction isolation level. It solves the non-repeatable read problem, but the phantom read problem remains. A phantom read occurs when the same query, run at different times within the same transaction, produces different sets of rows. For example, a SELECT executed twice may return a row the second time that was not returned the first time; that row is a "phantom" row.
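Note that InnoDB's MVCC snapshot actually hides phantoms from plain SELECTs at REPEATABLE READ; they surface with locking (current) reads such as SELECT ... FOR UPDATE. A sketch with the same illustrative table:

```sql
-- Session B (REPEATABLE READ, the MySQL default)
START TRANSACTION;
SELECT COUNT(*) FROM account WHERE balance > 0;   -- e.g. 3 rows

-- Session A
INSERT INTO account (id, balance) VALUES (4, 10);
COMMIT;

-- Session B: the consistent snapshot still shows the old row count...
SELECT COUNT(*) FROM account WHERE balance > 0;              -- still 3 (MVCC snapshot)
-- ...but a locking read sees the committed phantom row
SELECT COUNT(*) FROM account WHERE balance > 0 FOR UPDATE;   -- 4 rows
COMMIT;
```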

Note: phantom reads and non-repeatable reads have different focuses. A non-repeatable read concerns modification: the same row reads differently across two reads. A phantom read concerns insertion or deletion: the two queries return a different number of rows.

1.4 SERIALIZABLE

Serialization is the highest transaction isolation level. It forces transactions to execute in order so that they cannot conflict, which eliminates dirty reads, non-repeatable reads, and phantom reads. Because it performs poorly, however, it is rarely used in practice.
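In InnoDB, SERIALIZABLE works by implicitly turning plain SELECTs into shared-lock reads, so a concurrent writer simply blocks instead of interfering. An illustrative two-session sketch:

```sql
-- Session B
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT * FROM account WHERE id = 1;   -- implicitly read with a shared lock

-- Session A: this UPDATE blocks until Session B finishes
-- (or the lock wait times out), so the transactions cannot conflict
UPDATE account SET balance = 100 WHERE id = 1;

-- Session B
COMMIT;   -- Session A's UPDATE now proceeds
```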

 


Origin blog.csdn.net/dongjing991/article/details/132298979