Redis series (2): What are Redis cache penetration and cache avalanche?

I. Redis cache penetration

Cache penetration: a user queries for a piece of data that is not in the Redis in-memory database (a cache miss), so the query falls through to the persistence-layer database, which also finds nothing, and the query fails. When many users miss the cache this way, all of their requests hit the persistence-layer database, putting enormous pressure on it. This is cache penetration.

Introduce two common solutions:

1. Solution 1: Bloom filter

For details, refer to the following article, which explains the topic thoroughly:

Explain the principle, usage scenarios and precautions of bloom filters in detail
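To make the idea concrete, here is a minimal, self-contained Bloom filter sketch in Python. The class and parameter names are illustrative; a production setup would typically use Redis bitmap commands or the RedisBloom module instead of an in-process structure. The key property: a Bloom filter can say "definitely not present" (so requests for nonexistent keys never reach the database) but only "possibly present" for keys it has seen.

```python
import hashlib

class BloomFilter:
    """Minimal Bloom filter: k hash positions over an m-bit array.
    False positives are possible; false negatives are not."""

    def __init__(self, size_bits=1024, num_hashes=3):
        self.size = size_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(size_bits // 8 + 1)

    def _positions(self, key):
        # Derive k bit positions from salted SHA-256 digests.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.size

    def add(self, key):
        for pos in self._positions(key):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def might_contain(self, key):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(key))

# Load every valid key into the filter at startup.
bf = BloomFilter()
for user_id in ("u1001", "u1002", "u1003"):
    bf.add(user_id)

print(bf.might_contain("u1001"))   # True: added keys are always found
print(bf.might_contain("u9999"))   # very likely False, so the DB is never queried
```

A request whose key fails the `might_contain` check can be rejected before touching either the cache or the database, which is exactly what blocks penetration by nonexistent keys.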

2. Solution 2: Cache empty objects

Cache penetration as an attack: a hacker sends a large number of requests for data that does not exist in the database. None of these requests can be served from the cache, so they all go straight to the database, which may bring it down.

Solution: whenever a database lookup finds nothing, write a null value for that key into the cache. The next time the same request arrives, it can be answered from the cache instead of the database.

Disadvantages:

  1. If null values are cached, the cache needs more space to store more keys, because there may be many keys whose value is null.
  2. Even if an expiration time is set on the null value, there is still a window during which the cache layer and the storage layer are inconsistent, which affects businesses that need to maintain consistency.
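The null-object pattern above can be sketched as follows. A plain dict stands in for Redis here (a real setup would use redis-py's `SETEX` with a short TTL for the null marker); the key names and the `__NULL__` sentinel are illustrative assumptions.

```python
import time

cache = {}                    # stands in for Redis: key -> (value, expires_at)
NULL_MARKER = "__NULL__"      # sentinel for "the DB has no such row"
NULL_TTL = 60                 # keep cached misses short-lived
VALUE_TTL = 3600

db = {"item:1": "apple"}      # toy persistence layer

def db_query(key):
    return db.get(key)        # None when the row does not exist

def get(key):
    entry = cache.get(key)
    if entry and entry[1] > time.time():
        value = entry[0]
        return None if value == NULL_MARKER else value
    value = db_query(key)
    if value is None:
        # Cache the miss so repeated lookups stop hitting the database.
        cache[key] = (NULL_MARKER, time.time() + NULL_TTL)
    else:
        cache[key] = (value, time.time() + VALUE_TTL)
    return value

print(get("item:1"))          # "apple", loaded from the DB then cached
print(get("item:999"))        # None; the miss itself is now cached
print(cache["item:999"][0])   # "__NULL__"
```

The short `NULL_TTL` is what bounds the inconsistency window described in disadvantage 2: if the row is later inserted into the database, the stale null marker lives at most 60 seconds.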

II. Redis cache breakdown

XXX

III. Redis cache avalanche

Cache avalanche: a large number of keys expire at the same time while a large number of requests arrive, so the traffic hits the DB directly and makes the DB unavailable.

Then consider the following questions:

  • Why do a large number of Redis keys fail at the same time?
  • What is the maximum number of concurrent connections a database can support?
  • What does the DBA do when the database goes down?

Let's take these questions in turn.

1. Why do a large number of Redis keys fail at the same time?

Usually because the Redis server itself goes down, and the Redis server generally goes down because the concurrency exceeds the maximum it can support. For a concrete case, see the post "A service avalanche accident caused by Redis going down" (a very good avalanche case analysis).

2. What is the maximum number of concurrent connections a database can support?

MySQL: 16384 (theoretical maximum)

Restricted by server configuration and the network environment, the number actually supported is smaller. The main determining factors are:

  1. The server's CPU and memory configuration.
  2. Network bandwidth; the effect of upstream bandwidth on the Internet link is especially noticeable.

Oracle: 40000

  1. To view Oracle's maximum concurrency limit, query the v$license view.
  2. Manual estimate: the memory consumed by each session depends on parameter settings. In DEDICATED (dedicated server) mode, PGA memory consumption is related to sort_area_size; if that is 512 KB, each session consumes roughly 3 MB, so the maximum number of concurrent connections the machine allows ≈ (machine memory − memory used by the OS and other software − Oracle SGA memory) / 3 MB.

PostgreSQL: larger than Oracle

  1. The maximum number of connections can be configured in postgresql.conf:

     max_connections = 500

  2. To view the number of connections reserved for superusers:

     show superuser_reserved_connections;

3. Redis avalanche solutions

(1) Make Redis highly available

The idea: since a single Redis instance may go down, add more Redis instances so that when one fails, the others keep working. In practice this means building a Redis cluster.

(2) Rate limiting and degradation

The idea: after a cache entry expires, control the number of threads that read the database and rebuild the cache, by locking or queuing. For example, allow only one thread to query the data and write the cache for a given key, while the other threads wait.
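The one-thread-rebuilds, others-wait pattern can be sketched with a mutex. This is a single-process illustration (a dict again stands in for Redis; across multiple servers a distributed lock such as one built on Redis `SET key value NX EX ttl` would be needed instead):

```python
import threading
import time

cache = {}
lock = threading.Lock()
db_hits = 0

def slow_db_query(key):
    """Simulates an expensive database read."""
    global db_hits
    db_hits += 1
    time.sleep(0.05)
    return f"value-for-{key}"

def get(key):
    if key in cache:                 # fast path: cache hit
        return cache[key]
    with lock:
        if key in cache:             # double-check after acquiring the lock
            return cache[key]
        # Only the first thread reaches here; the rest wait at the lock
        # and then find the freshly written cache entry above.
        cache[key] = slow_db_query(key)
        return cache[key]

threads = [threading.Thread(target=get, args=("hot-key",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(db_hits)   # 1 -- ten concurrent requests, but only one DB query
```

The double-check inside the lock is essential: without it, every waiting thread would re-query the database as soon as the lock was released, defeating the purpose.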

(3) Cache warm-up

Cache warm-up means that before the official deployment, the data likely to be accessed is queried in advance, so that hot data is already loaded into the cache. Before a burst of concurrent traffic is expected, manually trigger the loading of the various keys into the cache, and set different expiration times for them so that the cache-expiry moments are spread out as evenly as possible.
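A minimal warm-up sketch, assuming a dict as a stand-in for Redis and an illustrative `load` callback for fetching each key from the database. The important detail is the randomized jitter added to each TTL, which spreads out expiry times so the keys do not all fail at once:

```python
import random
import time

cache = {}  # stands in for Redis: key -> (value, expires_at)

def warm_up(keys, load, base_ttl=3600, jitter=600):
    """Preload hot keys before peak traffic, with jittered expiry
    so the keys do not all expire at the same moment."""
    now = time.time()
    for key in keys:
        ttl = base_ttl + random.randint(0, jitter)   # randomized expiry
        cache[key] = (load(key), now + ttl)

hot_keys = ["product:1", "product:2", "product:3"]
warm_up(hot_keys, load=lambda k: f"data-for-{k}")

print(len(cache))              # 3 keys preloaded before traffic arrives
print(cache["product:1"][0])   # "data-for-product:1"
```

With real Redis, the same effect is achieved by calling `SETEX` (or `SET ... EX`) with a per-key randomized TTL during the warm-up pass.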

For concrete cases, see the reference articles below.

Reference article:

Help you interpret what is Redis cache penetration and cache avalanche (including solutions)

Explain the principle, usage scenarios and precautions of bloom filters in detail


Origin blog.csdn.net/sanmi8276/article/details/107077061