Redis Series, Part 7: Cache Avalanche, Cache Penetration, Cache Warming, Cache Update, and Cache Downgrade

1. Cache Avalanche

1. Concept

A cache avalanche occurs when a large number of cache entries expire at the same moment (because they were all set with the same TTL), so that all requests hit the database simultaneously. The sudden load puts enormous CPU and memory pressure on the database, which can crash it and trigger a chain reaction that brings down the whole system.

2. Solutions

A. Use a lock or queue to serialize access to the database (only suitable for low-concurrency scenarios; otherwise it causes severe blocking).
B. Store a logical expiration flag alongside each cache entry and set the actual cache TTL to twice the logical lifetime. When the flag indicates the entry is logically expired, return the stale data to the caller and asynchronously reload fresh data into the cache.
C. Spread out the expiration times of different cache keys.
D. Use a "second-level cache" (to be investigated further).

2. Cache Penetration

1. Concept

Cache penetration happens when a user queries data that does not exist in the database, and so naturally is not in the cache either. Every such query misses the cache, then queries the database, then returns empty (effectively two useless lookups). These requests bypass the cache and hit the database directly, which is what people mean when they talk about the cache hit rate dropping.

2. Solutions

A. Use a Bloom filter: hash all possibly existing keys into a sufficiently large bitmap (using multiple hash functions). A key that definitely does not exist is rejected by the bitmap, shielding the underlying storage system from the query.
B. If a query returns empty data (whether because the data does not exist or because of a system failure), cache the empty result anyway, but with a very short TTL, no longer than five minutes. Store a default value in the cache directly, so the next lookup finds a value in the cache and does not hit the database again. This approach is simple and crude.
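A minimal pure-Python sketch of strategy A. The bitmap size and hash count are illustrative; a real deployment would typically share the filter across instances (e.g., via Redis bitmaps or the RedisBloom module):

```python
import hashlib

class BloomFilter:
    """Illustrative Bloom filter: k hash functions over one bitmap.
    A 'no' answer is definite; a 'yes' answer may be a false positive."""

    def __init__(self, size_bits: int = 1 << 20, num_hashes: int = 3):
        self.size = size_bits
        self.k = num_hashes
        self.bits = 0  # a Python int serves as an arbitrary-size bitmap

    def _positions(self, key: str):
        # Derive k independent positions by salting one hash function.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{key}".encode()).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key: str) -> None:
        for p in self._positions(key):
            self.bits |= 1 << p

    def might_contain(self, key: str) -> bool:
        return all(self.bits >> p & 1 for p in self._positions(key))
```

On each request, check `might_contain(key)` first; only if it returns `True` is the cache (and, on a miss, the database) consulted, so queries for nonexistent keys never reach storage.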

3. Cache Warming

1. Concept

Cache warming means loading relevant data into the cache ahead of time, right after the system goes live, so that the first user requests do not have to query the database and then write the results back to the cache. Users directly query data that has already been pre-warmed into the cache.

2. Solutions

A. Build a cache-refresh admin page and trigger warming manually at deployment time;
B. If the data volume is small, load it automatically when the application starts;
C. Refresh the cache on a schedule.
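Strategy B can be sketched like this. A plain dict stands in for the cache here, and `fake_hot_rows` is a made-up loader representing whatever query selects the hot data:

```python
from typing import Callable, Dict, Iterable, Tuple

def warm_cache(cache: Dict[str, dict],
               load_hot_rows: Callable[[], Iterable[Tuple[str, dict]]]) -> int:
    """Run once at application startup: copy hot rows into the cache
    before any user traffic arrives. Returns the number of keys loaded."""
    count = 0
    for key, value in load_hot_rows():
        cache[key] = value
        count += 1
    return count

def fake_hot_rows():
    # Stand-in for a database query selecting frequently read rows.
    yield ("product:1", {"name": "widget"})
    yield ("product:2", {"name": "gadget"})

cache: Dict[str, dict] = {}
loaded = warm_cache(cache, fake_hot_rows)
```

With a real client the body of the loop would be a `SET` (ideally pipelined) instead of a dict assignment, but the shape of the startup hook is the same.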
 

4. Cache Update

Besides the cache server's built-in invalidation policies (by default Redis offers six eviction policies to choose from), we can also implement custom cache invalidation for specific business scenarios. Two common strategies:
A. Periodically clean up expired cache entries;
B. When a user request comes in, check whether the requested cache entry has expired; if it has, go to the underlying system for fresh data and update the cache.
 
Pros and cons of the two strategies: the drawback of the first is that maintaining a large number of cache keys is troublesome; the drawback of the second is that every user request performs an expiration check, which makes the logic relatively complicated.
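The second strategy (check-on-read) might look like the following sketch, where `loader` stands in for the underlying system and a dict stands in for the cache:

```python
import time
from typing import Any, Callable, Dict, Tuple

class LazyCache:
    """Strategy B: each read checks a stored expiry timestamp; on
    expiry, the value is reloaded from the backing store and the
    timestamp is refreshed."""

    def __init__(self, loader: Callable[[str], Any], ttl: float = 60.0):
        self.loader = loader
        self.ttl = ttl
        self.store: Dict[str, Tuple[Any, float]] = {}  # key -> (value, expires_at)

    def get(self, key: str) -> Any:
        entry = self.store.get(key)
        now = time.time()
        if entry is None or now >= entry[1]:
            value = self.loader(key)               # hit the underlying system
            self.store[key] = (value, now + self.ttl)
            return value
        return entry[0]                            # still fresh: serve from cache
```

This illustrates the trade-off from the text: no background sweeper is needed, but every `get` pays for the expiration check.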
 

5. Cache Downgrade

When traffic surges, when a service has problems (such as slow or absent responses), or when a non-core service starts to hurt the performance of the core flow, we still need to keep the service available, even in a degraded, lossy form. The system can downgrade automatically based on key metrics, or degradation can be triggered manually via configured switches.
 
The ultimate goal of downgrading is to keep core services available, even if lossy. Some services can never be downgraded (for example, shopping-cart checkout).
Before downgrading, comb through the system to see whether it can "sacrifice a pawn to save the king": sort out what must be defended at all costs and what can be downgraded. For example, the plan can be modeled on log levels:
(1) General: for example, a service occasionally times out because of network jitter or because it has just come online; it can be downgraded automatically;
(2) Warning: a service's success rate fluctuates over some period (e.g., between 95% and 100%); it can be downgraded automatically or manually, and an alarm should be raised;
(3) Error: for example, availability drops below 90%, the database connection pool is exhausted, or traffic suddenly surges past the maximum threshold the system can bear; the service can be downgraded automatically or manually;
(4) Critical error: for example, data is corrupted for some special reason; an emergency manual downgrade is required.
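The four levels could be wired into a simple health classifier like the sketch below. The thresholds mirror the examples above, and the parameter names (`pool_exhausted`, `data_corrupt`) are invented for illustration; a real system would feed this from its monitoring pipeline:

```python
def degrade_level(success_rate: float,
                  pool_exhausted: bool = False,
                  data_corrupt: bool = False) -> str:
    """Map observed service health to the four degradation levels.
    Thresholds follow the examples in the text and should be tuned
    per service."""
    if data_corrupt:
        return "critical"   # (4) emergency manual downgrade required
    if success_rate < 0.90 or pool_exhausted:
        return "error"      # (3) automatic or manual downgrade
    if success_rate < 1.0:
        return "warning"    # (2) downgrade allowed; raise an alarm
    return "general"        # (1) only transient jitter, if anything
```

The returned level would then drive the actual switches: "critical" pages an operator, "error" and "warning" can flip an automatic downgrade flag, and "general" is left alone.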

Origin www.cnblogs.com/dudu2mama/p/11366302.html