Three caching problems prone to occur under high concurrency

1. Cache penetration

Cache penetration refers to querying data that does not exist at all. Because the cache misses, the request falls through to the database, but the database has no such record either. Since we never write the empty result of this query into the cache, every request for this non-existent data has to hit the storage layer, defeating the purpose of the cache.

Risk: an attacker can flood the system with queries for non-existent keys, putting sudden pressure on the database and eventually bringing it down.

Solution: cache the empty result as well, and give it a short expiration time.
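A minimal sketch of this idea, assuming Redis accessed through the Jedis client; the key layout, TTL values, and the loadUserFromDb helper are illustrative, not from the original post:

```java
import redis.clients.jedis.Jedis;

public class NullCachingExample {
    // Sentinel stored in the cache to mean "we already checked; no such record".
    private static final String NULL_MARKER = "__NULL__";

    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getUser(String id) {
        String key = "user:" + id;
        String cached = jedis.get(key);
        if (cached != null) {
            // Either real data or the cached "not found" marker.
            return NULL_MARKER.equals(cached) ? null : cached;
        }
        String value = loadUserFromDb(id); // hypothetical DB lookup
        if (value == null) {
            // Cache the miss with a short TTL so repeated queries for a
            // non-existent id stop hammering the database.
            jedis.setex(key, 60, NULL_MARKER);
        } else {
            jedis.setex(key, 3600, value);
        }
        return value;
    }

    private String loadUserFromDb(String id) {
        return null; // stand-in: a real implementation would query the DB
    }
}
```

The short TTL on the null marker matters: if the record is created later, the stale "not found" entry expires quickly on its own.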

2. Cache avalanche

Cache avalanche occurs when we set up the cache with the same expiration time for many keys, so they all expire at the same moment and every request is forwarded to the DB, which is hit with a sudden spike of load.

Solution: add a random value on top of the base expiration time, for example a random 1-5 minutes, so that expiration times are spread out and a collective expiry is far less likely.
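A sketch of the jitter idea under the same Jedis assumption; the base TTL is illustrative, and the jitter range is the 1-5 minutes from the example above:

```java
import java.util.concurrent.ThreadLocalRandom;
import redis.clients.jedis.Jedis;

public class JitteredTtlExample {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public void cacheWithJitter(String key, String value) {
        int baseTtlSeconds = 3600; // base expiration: 1 hour
        // Random extra 60-300 seconds (1-5 minutes) so keys written
        // together do not all expire at the same moment.
        int jitterSeconds = ThreadLocalRandom.current().nextInt(60, 301);
        jedis.setex(key, baseTtlSeconds + jitterSeconds, value);
    }
}
```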

3. Cache breakdown

• Some keys with an expiration time set are very "hot" data: they may be accessed with extremely high concurrency at certain points in time.

• If such a key happens to expire just before a large number of simultaneous requests arrives, all queries for that key fall through to the DB. This is called cache breakdown.

Solution: use a lock. Under heavy concurrency, only one request is allowed to query the DB while the others wait; once the query completes and the cache is repopulated, the lock is released. The waiting requests then acquire the lock in turn, check the cache first, find the data already there, and never need to go to the DB.
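A sketch of this lock-then-recheck pattern, again assuming Jedis, using a simple Redis SET NX lock; the lock key, timeouts, and loadFromDb helper are illustrative. A production lock would also store a unique token and verify it before deleting, so one client cannot release another's lock.

```java
import redis.clients.jedis.Jedis;
import redis.clients.jedis.params.SetParams;

public class CacheBreakdownExample {
    private final Jedis jedis = new Jedis("localhost", 6379);

    public String getHotKey(String key) throws InterruptedException {
        while (true) {
            String value = jedis.get(key);
            if (value != null) {
                return value; // cache hit: the common, fast path
            }
            String lockKey = "lock:" + key;
            // Try to become the single request allowed to rebuild the cache.
            // NX = set only if absent; EX = auto-expire the lock if we crash.
            String ok = jedis.set(lockKey, "1", SetParams.setParams().nx().ex(10));
            if ("OK".equals(ok)) {
                try {
                    value = loadFromDb(key);       // hypothetical DB lookup
                    jedis.setex(key, 3600, value); // repopulate the cache
                    return value;
                } finally {
                    jedis.del(lockKey);            // release the lock
                }
            }
            // Another request holds the lock and is rebuilding the cache:
            // wait briefly, then loop back and re-check the cache.
            Thread.sleep(50);
        }
    }

    private String loadFromDb(String key) {
        return "value-for-" + key; // stand-in for a real DB query
    }
}
```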

Source: blog.csdn.net/cyb_123/article/details/107736348