[java] Common tricky problems encountered in Java development

As a developer, you will constantly run into problems. This article lists examples the blogger has encountered or thought about. Readers are welcome to share their own examples in the comments so we can improve together~

How to create a unique index alongside logical deletion

Scenario description:

For example, we have a table named project,
whose field project_name must be unique,
plus a logical-deletion field is_delete: 0 means not deleted, 1 means deleted.

Obviously, project_name alone cannot be made a unique index.
Say user A creates a project named "java project" and then (logically) deletes it; user B must still be allowed to create a project with the same name.

Is a composite unique index on (is_delete, project_name) feasible? Also no: as soon as user B deletes a project with that name too, two rows with is_delete = 1 collide.

Solution:
Instead of representing is_delete with just 0 and 1, store a unique value on deletion: an incrementing number or a timestamp (as fine-grained as possible, e.g. nanosecond level). A composite unique index on (is_delete, project_name) then solves the problem.
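A sketch of the scheme in DDL, here using the row's own id as the per-row deletion stamp (table layout and column sizes are illustrative):

```sql
-- is_delete stays 0 while the row is live and is stamped with a unique
-- value on logical delete, so deleted rows never collide with each other
-- or with the one live row.
CREATE TABLE project (
    id           BIGINT PRIMARY KEY AUTO_INCREMENT,
    project_name VARCHAR(64) NOT NULL,
    is_delete    BIGINT NOT NULL DEFAULT 0,
    UNIQUE KEY uk_del_name (is_delete, project_name)
);

-- Logical delete: stamp is_delete with the row's id instead of 1.
UPDATE project SET is_delete = id WHERE id = ?;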

Unique index invalidation problem

Scenario description:
A person's name and phone number together form a unique index.

The problem: two children have the same name and neither has a mobile phone number. With the phone column NULL, the two rows do not count as duplicates, and the unique index stops deduplicating. Change the scenario: in a high-concurrency e-commerce campaign, the user name and VIP ID together form the unique index, two users are not VIPs, and their VIP IDs are NULL; the resulting problems can be far more serious.


Solution: declare the unique-index columns NOT NULL, because NULL values are allowed to repeat.
(This holds whether a column is a unique index by itself or several columns form a composite unique index.)
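A sketch of the fix: both indexed columns declared NOT NULL, with an empty-string default standing in for a missing phone number, so two phone-less rows with the same name now collide as intended (names here are illustrative):

```sql
-- With phone allowed to be NULL, two rows ('child', NULL) both pass the
-- unique check, because NULL never equals NULL. NOT NULL with an
-- empty-string default restores the constraint.
CREATE TABLE person (
    id    BIGINT PRIMARY KEY AUTO_INCREMENT,
    name  VARCHAR(64) NOT NULL,
    phone VARCHAR(20) NOT NULL DEFAULT '',
    UNIQUE KEY uk_name_phone (name, phone)
);
```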

Encrypted field fuzzy query problem

Scenario description: sensitive user information such as mobile phone numbers, ID-card numbers, and household-registration addresses is usually stored encrypted, yet fuzzy queries over it are still required.

Solution:

  1. When the data volume is small, for example just the staff table of a company-internal system, the whole table can be queried and decrypted, then filtered in Java code (if pagination is involved, think carefully about how to page the filtered result).

  2. Communicate with the business/product side to see whether the search terms are relatively fixed. For example, if a user's household registration is Guangzhou City, Guangdong Province, we can split the address and encrypt each segment separately.
    Suppose Guangdong Province encrypts to pwd_gds and Guangzhou City to pwd_gzs.
    When the front end sends Guangzhou City, the back end encrypts it, and the fuzzy-query SQL becomes LIKE '%pwd_gzs%'.

  3. Of course, the first two methods are just workarounds and usually do not hold up in medium-sized projects. Since splitting came up, word segmentation is the natural next step: tokenize the plaintext, encrypt every token, and store the result in ES. (Off topic: whether it is ES or any other store, always set a password.)
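A minimal sketch of approach 2. Here `encrypt` is a hypothetical stand-in for the project's real deterministic cipher (it must map the same plaintext to the same ciphertext, or LIKE matching cannot work), and the method names are illustrative:

```java
import java.util.List;

public class EncryptedSearch {
    // Hypothetical stand-in for a real deterministic cipher (e.g. an AES
    // utility); the real project would call its own crypto helper here.
    static String encrypt(String plain) {
        return "enc_" + Integer.toHexString(plain.hashCode());
    }

    // Split an address into fixed segments agreed with the business side,
    // encrypt each segment, and join them into one stored column value.
    static String encryptSegments(List<String> segments) {
        StringBuilder sb = new StringBuilder();
        for (String s : segments) sb.append(encrypt(s));
        return sb.toString();
    }

    // Build the LIKE pattern for one search word: encrypt it the same way
    // the stored segments were encrypted, then wrap it with wildcards.
    static String likePattern(String keyword) {
        return "%" + encrypt(keyword) + "%";
    }
}
```

The stored column for "Guangzhou City, Guangdong Province" would then contain both segment ciphertexts, and a search for either city or province matches its segment exactly.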

Maven dependency conflict problem (jar package version conflict problem)

Scenario description: ClassNotFoundException, the most frequent symptom of importing the wrong version. Follow the error to the failing class, find the corresponding import at the top of the file, and from the highlighted package path hold Ctrl and left-click in the IDE to locate the jar it actually resolves to.

Solution: upgrade (or downgrade) the jar.
In many cases, though, the jar is not a direct Maven dependency but is referenced internally by another component. Run the mvn dependency:tree command, copy the console output into a text editor, and search for the jar there to find which parent dependency pulled it in.

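Once the culprit parent is found, the usual fix in the pom is an exclusion plus an explicit pin of the version you actually want. The coordinates below are hypothetical placeholders:

```xml
<dependency>
    <groupId>com.example</groupId>
    <artifactId>some-component</artifactId>
    <version>1.2.3</version>
    <exclusions>
        <!-- drop the transitive copy of the conflicting jar -->
        <exclusion>
            <groupId>com.example</groupId>
            <artifactId>conflicting-jar</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<!-- then declare conflicting-jar directly at the version you need -->
```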

Sorting SQL IN-query results by the order the ids were passed in

Scenario description: for example, we call an external interface to obtain ids and then query the database with them. Fetching one id and querying once per id guarantees that the result order matches the input order; to optimize, we would rather collect a batch of ids and use an IN query:

select xx,xxx,xxxx from t where id in(5,1,4,2,3) 

How, then, do we guarantee that the returned rows match the order of the ids? In the pseudo-SQL above, id = 5 should come back as the first record.

Solution:

  1. Handle it at the SQL level

Oracle: order by decode

select xx,xxx,xxxx from t where id in(5,1,4,2,3) order by decode(id, 5,1, 1,2, 4,3, 2,4, 3,5)

MySQL: order by field

select xx,xxx,xxxx from t where id in(5,1,4,2,3) order by field(id, 5,1,4,2,3)

  2. If circumstances allow, rather than doing it directly in SQL, it is better to post-process the data in Java code: loop over idList and reassemble the results by matching ids.
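Option 2 can be sketched as follows; `sortByIds` and the id extractor are illustrative names, not from the original post. One pass builds an id-to-row map and a second pass walks the id list, so the cost is O(n) rather than a nested loop:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Function;

public class InOrderQuery {
    // Re-sort rows fetched with "WHERE id IN (...)" back into the order
    // of the original id list.
    static <T> List<T> sortByIds(List<Long> ids, List<T> rows, Function<T, Long> idOf) {
        Map<Long, T> byId = new HashMap<>();
        for (T row : rows) byId.put(idOf.apply(row), row);
        List<T> ordered = new ArrayList<>(ids.size());
        for (Long id : ids) {
            T row = byId.get(id);
            if (row != null) ordered.add(row); // skip ids with no matching row
        }
        return ordered;
    }
}
```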

Database master-slave replication out-of-sync problem

Scenario description: master and slave data drift out of sync for all sorts of reasons: network latency, load, auto-increment primary key conflicts. The data then has to be repaired by hand, and nobody can afford for that to go wrong...

Back to the point, the repair procedure:

  1. Lock the master so it is read-only
  2. Export the data
  3. Stop the slave
  4. Import the data
  5. Restart replication

However, if writes are still arriving while the master is locked and the slave is stopped, the situation is very hard to handle. The best option at that point is to announce a maintenance window for the business.
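For classic MySQL versions, the steps above roughly correspond to the following statements (a sketch; newer versions rename SLAVE to REPLICA, and the dump/import commands depend on the tooling used):

```sql
-- On the master: make it read-only while exporting
FLUSH TABLES WITH READ LOCK;
-- ... export the data, e.g. with mysqldump ...

-- On the slave: stop replication, import the dump, then restart it
STOP SLAVE;
-- ... import the dump ...
START SLAVE;

-- On the master: release the read lock
UNLOCK TABLES;
```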

Read-write inconsistency under database read-write splitting

Scenario description: with read-write splitting, data read from the slave can be inconsistent with the master.
Solution: it is again a replication-lag problem; first see whether the business can tolerate the stale reads.
A stopgap is forced routing (forcing the read onto the master). Still, the blogger believes that as long as the problem is not large-scale, manual data repair is the relatively safe option.

Double-write inconsistency problem: database and cache diverging under concurrency

Scenario description: concrete cases were covered in the blogger's earlier post, "From Oversold Problems in High Concurrency Scenarios to Redis Distributed Locks".

Solution :

  1. Delayed double deletion
    Advantages: in the blogger's personal view, not obvious.
    Disadvantages: pointless in write-heavy, read-light scenarios. There, the cache is deleted on write and repopulated on read, so the delayed second delete solves nothing and only costs performance.

  2. Serialize through a queue
    Advantages: avoids inconsistency.
    Disadvantages: low throughput.

  3. Serialize with a distributed lock; Redisson, for example, provides read-write locks.
    Advantages and disadvantages as in point 2.

  4. Use the canal middleware.
    The blogger has not used it personally, only knows that it can solve the problem.
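For reference, a minimal sketch of what delayed double deletion looks like, with plain maps standing in for Redis and the database (class, field, and delay values are all illustrative):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class DelayedDoubleDelete {
    // Stub stores standing in for Redis and the database.
    final Map<String, String> cache = new ConcurrentHashMap<>();
    final Map<String, String> db = new ConcurrentHashMap<>();
    final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Delayed double delete: evict the cache entry, write the database,
    // then evict again after a short delay so that a stale value
    // re-cached by a concurrent read in between is also removed.
    void update(String key, String value, long delayMillis) {
        cache.remove(key);
        db.put(key, value);
        scheduler.schedule(() -> cache.remove(key), delayMillis, TimeUnit.MILLISECONDS);
    }
}
```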

How does a java service act as a websocket client

Scenario description: when integrating with a supplier's or client's interface, the other side sometimes exposes a WebSocket endpoint. To avoid data being lost between front-end and back-end hops, we may want the back end itself to act as the WebSocket client. Note: the client; searching the web for a Java WebSocket client mostly turns up tutorials for the server side.
Solution: it can be implemented with Netty. The blogger ran into problems writing automatic reconnection and heartbeat sending, found well-written code from an expert, and has tested it; the code will be posted in a separate blog tutorial.

Spring transaction failure problem

Scenario description: an exception occurs but the transaction does not roll back. First, @Transactional(rollbackFor = Exception.class) must be present; the blogger has an earlier article explaining why the Alibaba coding guidelines require rollbackFor.

Solution: the blogger privately believes that every such failure comes down to not understanding Spring's proxy-object mechanism; the annotation is simply useless on a call that never passes through the proxy. Related articles can be found on the blogger's blog.
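The proxy mechanism can be demonstrated without Spring at all. This JDK dynamic-proxy sketch is an analogy (not Spring's actual code): the handler plays the role of the transaction interceptor, a call entering through the proxy is intercepted, while `outer()` calling `inner()` directly on `this` bypasses it — the same reason a self-invoked @Transactional method gets no transaction:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.concurrent.atomic.AtomicInteger;

public class ProxyDemo {
    public interface Service {
        void outer();
        void inner();
    }

    // Counts calls that actually pass through the proxy, the way a Spring
    // transaction interceptor only runs for calls entering via the proxy.
    public static final AtomicInteger intercepted = new AtomicInteger();

    public static Service proxyOf() {
        Service target = new Service() {
            @Override public void outer() { inner(); } // plain this.inner(): bypasses the proxy
            @Override public void inner() { }
        };
        InvocationHandler handler = (proxy, method, args) -> {
            intercepted.incrementAndGet(); // "begin transaction" would happen here
            return method.invoke(target, args);
        };
        return (Service) Proxy.newProxyInstance(
                Service.class.getClassLoader(), new Class<?>[]{Service.class}, handler);
    }
}
```

Calling `outer()` on the proxy increments the counter once; the internal `inner()` call is never intercepted.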

Database deadlock problem

Scenario description: a database deadlock brings the system down.

Solution: the blogger experienced this personally in a legacy project built on Oracle stored procedures. A large body of SQL, pessimistic FOR UPDATE locks scattered everywhere, and untimely commits caused the deadlock. When a deadlock occurs, find the deadlocked session in v$session and kill it, then promptly simplify or split the offending SQL.

In MySQL, the REPLACE INTO statement can also cause deadlocks; using SELECT plus INSERT instead is recommended (MySQL 8.0 is said to have fixed this, but the blogger has not verified it personally).

Cross-database pagination problem

Scenario description:
The data sources come from different databases, even different kinds of databases (for example, some from MySQL, some from a time-series database).

Most of the time it is enough to query each database separately, with each serving its own pages; but one page needs to show data from both databases, with pagination.

Solution:
First, see whether cross-database pagination can be avoided: can the business compromise, and can the data sources be merged?
If not, the only option is a pagination scheme. The approach the blogger came up with:

Synchronize the data of both databases into one large table, and record the timestamp of the newest row after each sync; the next sync then only pulls rows after that timestamp. The large table alone serves the paged queries.

The large table holds more data, but with pagination the efficiency will not be too low.
(If the volume is too large, consider syncing into ES, ClickHouse, etc. as the situation warrants.)

The blogger has seen people suggest that canal can synchronize MySQL data into ES, but a reminder: a production environment is not a place to play with demos. Before adopting this kind of middleware, be familiar with how it works, or losing important data will cost far more than the feature is worth!

Distributed transaction problem

Scenario description: transactions need to be rolled back across distributed services.

Solution: the Seata middleware can be introduced; Seata is essentially a transaction coordinator based on MySQL's undo log.
Without Seata, you can also roll back manually, but that strictly requires the compensation code to be called in time; it is unsuitable for high-concurrency scenarios and only fits small and medium projects. Pseudocode:

// service A
public GoodsDO delete(Long id) {
	GoodsDO gs = database.getOne(id);
	database.deleteById(id);
	return gs;
}

public void insert(GoodsDO gs) {
	database.insert(gs);
}

// service B
@Autowired
private ServiceA serviceA;

public void handle(Long id) {
	GoodsDO gs = null;
	try {
		gs = serviceA.delete(id);
		// do other things: serviceB.xx();
	} catch (Exception e) {
		// compensate by re-inserting the deleted row;
		// this could also be done via AOP, or asynchronously via MQ
		if (gs != null) {
			serviceA.insert(gs);
		}
	}
}

How to avoid multiple people modifying the same record at the same time

Scenario description: for example, in a management system, both managers and the employee can modify the employee's basic information. If, while the employee is editing, the administrator has already modified and submitted, and the employee then submits, the employee's save overwrites the administrator's changes.

Solution: add an optimistic-lock version number to the detail interface. When the edit button is clicked, the detail interface is called once to obtain the current version number; suppose both the administrator and the employee fetch version 1 (the employee has not saved yet). The administrator saves first: the front end sends version 1 back, the save interface checks that it matches the current database version (it does), applies the update, and bumps the version to 2. When the employee then clicks save, the submitted version is still 1, but the database version is now 2, so the save is rejected and the front end prompts that the record was modified by someone else: refresh the page and enter again.
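The version check in the save interface boils down to one conditional UPDATE (table and column names here are illustrative); zero affected rows means someone else saved first:

```sql
-- Optimistic-lock save: only succeeds when the version the client read
-- is still current.
UPDATE employee
SET    address = ?, version = version + 1
WHERE  id = ? AND version = ?;
```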

How to match replies to commands when sending multiple commands in Netty

Scenario description: in Netty, multiple commands are sent to the server; when replies arrive, how do we determine which reply corresponds to which command?

Solution: add a request-ID field in the header of the outgoing data, or append an ACK mechanism at the tail; both require the server's cooperation.
Reference code:

// Client code: prefix every request with a 4-byte request ID and use it
// to match responses back to requests.
public class ClientHandler extends ChannelInboundHandlerAdapter {

    private Channel channel;
    // request ID -> request payload
    private final Map<Integer, String> requestMap = new ConcurrentHashMap<>();
    // request payload -> response payload
    private final Map<String, String> responseMap = new ConcurrentHashMap<>();
    // request ID generator
    private final AtomicInteger requestIdGenerator = new AtomicInteger(0);

    public void sendRequest(byte[] data) {
        int requestId = requestIdGenerator.incrementAndGet();
        ByteBuf buf = Unpooled.buffer(data.length + 4);
        buf.writeInt(requestId);
        buf.writeBytes(data);
        channel.writeAndFlush(buf);
        // remember the request so the reply can be matched later
        requestMap.put(requestId, Arrays.toString(data));
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof ByteBuf) {
            ByteBuf buf = (ByteBuf) msg;
            int requestId = buf.readInt();
            byte[] data = new byte[buf.readableBytes()];
            buf.getBytes(buf.readerIndex(), data);
            String request = requestMap.get(requestId);
            if (request != null) {
                // store the response against the original request
                String response = Arrays.toString(data);
                responseMap.put(request, response);
                // the request is answered; drop it from the pending map
                requestMap.remove(requestId);
            }
        }
    }
}

ACK variant:

public class MyClientHandler extends ChannelInboundHandlerAdapter {

    private Channel channel;
    // value of the ACK field from the previous request
    private int lastAck = 1;

    public void sendRequest(byte[] data) {
        // append a reserved ACK byte at the end of the request data
        byte[] requestData = Arrays.copyOf(data, data.length + 1);
        requestData[data.length] = (byte) lastAck;
        channel.writeAndFlush(Unpooled.wrappedBuffer(requestData));
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof ByteBuf) {
            ByteBuf buf = (ByteBuf) msg;
            byte[] data = new byte[buf.readableBytes()];
            buf.readBytes(data);
            int ack = data[data.length - 1];
            // reset the ACK field to 1
            data[data.length - 1] = 1;
            lastAck = 1;
            // handle the server's response
            handleResponse(data);
        }
    }

    public void handleResponse(byte[] data) {
        // handle the server's response
        // ...
    }
}

What if the server refuses to cooperate? Then the only option is to send the next command only after the previous reply has been received. The idea is as follows
(note that this breaks under concurrency; if there are concurrent senders, the server must cooperate with a response mechanism):

  1. Define a command index (take 10 commands to send as an example):

       public static AtomicInteger index = new AtomicInteger(0);
    
  2. Provide a method that advances the index

       public static void setOtherIndex() {
           // when the index reaches 10, reset it to 0 for the next round
           if (index.get() == 10) {
               index.set(0);
           } else {
               index.getAndAdd(1);
           }
       }
    
  3. Send the command for the current index

       if (index.get() == 0) {
           channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[]{0x01}));
       } else if (index.get() == 1) {
           channel.writeAndFlush(Unpooled.wrappedBuffer(new byte[]{0x02}));
       }
       // .....
    
  4. Process the reply in the channelRead method

       // handle the response...
       // once done, advance the index so the next command goes out
       setOtherIndex();


Origin blog.csdn.net/qq_36268103/article/details/129437570