Cache Penetration, Cache Breakdown, and Cache Avalanche

Reference: https://blog.csdn.net/kongtiao5/article/details/82771694

First, the caching process flow

      When a request comes in, first try to fetch the data from the cache. If it is there, return it directly. If not, query the database: if the database has the data, update the cache and return the result; if the database does not have it either, return an empty result directly.
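      A minimal sketch of this read path, assuming a Redis cache accessed through the Jedis client and a hypothetical userDao with a queryById method (class, method, and key names are illustrative):

    import redis.clients.jedis.Jedis;

    public class UserCache {
        private final Jedis jedis = new Jedis("localhost", 6379);
        private final UserDao userDao = new UserDao();   // hypothetical DAO

        public String getUser(long id) {
            String key = "user:" + id;
            // 1. Try the cache first; a hit is returned directly.
            String cached = jedis.get(key);
            if (cached != null) {
                return cached;
            }
            // 2. Cache miss: fall back to the database.
            String value = userDao.queryById(id);        // hypothetical query
            if (value != null) {
                // 3. The database has the data: refresh the cache, then return it.
                jedis.setex(key, 3600, value);
                return value;
            }
            // 4. Not in the database either: return an empty result.
            return null;
        }
    }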

      

 

Second, cache penetration

       Description:

       Cache penetration refers to requests for data that exists in neither the cache nor the database, while the user keeps issuing requests, for example for an id of "-1" or an id that is absurdly large and does not exist. Such a user is likely an attacker, and these requests put excessive pressure on the database.

      Solution:

1) Add validation at the interface layer, such as user authentication and basic id checks; intercept requests with id <= 0 directly.
2) If the data is found in neither the cache nor the database, still write the key into the cache with a null value (key-null) and give it a short expiration time such as 30 seconds (setting it too long would keep the key unusable under normal conditions). This protects the database from repeated brute-force attacks with the same id; a minimal sketch of this null-value caching follows below.
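A sketch of the null-value caching idea, again assuming Jedis and a hypothetical userDao; the "null" marker string and the 30-second TTL are illustrative choices:

    import redis.clients.jedis.Jedis;

    public class PenetrationSafeCache {
        public String getUser(Jedis jedis, UserDao userDao, long id) {
            // Interface-layer style check: obviously bogus ids are intercepted here.
            if (id <= 0) {
                return null;
            }
            String key = "user:" + id;
            String cached = jedis.get(key);
            if (cached != null) {
                // A previously stored null marker is also a cache hit and is
                // returned as an empty result without touching the database.
                return "null".equals(cached) ? null : cached;
            }
            String value = userDao.queryById(id);        // hypothetical query
            if (value == null) {
                // In neither the cache nor the database: cache a null marker with a
                // short TTL so repeated requests for the same bogus id stop at the cache.
                jedis.setex(key, 30, "null");
                return null;
            }
            jedis.setex(key, 3600, value);
            return value;
        }
    }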
 

Third, cache breakdown

      Description:

      Cache breakdown refers to data that is not in the cache but does exist in the database (typically because its cache entry has just expired). Because the number of concurrent users is especially large, many of them miss the cache at the same moment and all go to the database for the same data, so the database load spikes instantly and the pressure becomes excessive.

      Solution:

1) Set hot data to never expire.
2) Use a mutex; a reference sketch of the mutex approach follows:
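A minimal sketch of the mutex approach, assuming a Redis cache accessed through Jedis, a single shared lock key, and a hypothetical userDao (names and TTL values are illustrative):

    import redis.clients.jedis.Jedis;

    public class BreakdownSafeCache {
        public String getWithMutex(Jedis jedis, UserDao userDao, String key) throws InterruptedException {
            String value = jedis.get(key);
            if (value != null) {
                return value;                        // cache hit: return directly
            }
            // Cache miss: only the thread that wins the lock rebuilds the cache.
            // A single shared lock key, so every rebuild competes for the same mutex.
            String lockKey = "cache_mutex";
            Long locked = jedis.setnx(lockKey, "1");
            if (locked != null && locked == 1L) {
                try {
                    jedis.expire(lockKey, 10);       // safety TTL so a crashed thread cannot hold the lock forever
                    value = userDao.queryByKey(key); // hypothetical database query
                    if (value != null) {
                        jedis.setex(key, 3600, value);
                    }
                    return value;
                } finally {
                    jedis.del(lockKey);              // release the mutex
                }
            } else {
                // Another thread is already rebuilding the cache:
                // wait 100ms and then try the cache again.
                Thread.sleep(100);
                return getWithMutex(jedis, userDao, key);
            }
        }
    }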
         

 

          Notes on the code:

          1) If the data is already in the cache, the first cache read at the top of the code above returns the result directly.

         2) If there is no data in the cache, the first thread to arrive acquires the lock and fetches the data from the database. Other concurrent threads that arrive before the lock is released wait 100ms and then retry reading from the cache. This prevents many threads from going to the database for the same data and repeatedly updating the cache.

          3) Of course, this is a simplified flow. Ideally the lock would be per key, so that thread A fetching key1 from the database does not interfere with thread B fetching key2; the code above obviously cannot do that.

 

Fourth, cache avalanche

      Description:

      Cache avalanche refers to a large batch of cached data reaching its expiration time at the same moment while the query volume is huge, which puts excessive pressure on the database and can even bring it down. The difference from cache breakdown is that cache breakdown means concurrent queries for the same piece of data, whereas in a cache avalanche many different keys have expired, so large numbers of lookups miss the cache and fall through to the database.

     Solution:

 1) Randomize the expiration time of cached data so that a large amount of data does not expire at the same moment (see the sketch after this list).
 2) If the cache is deployed as a distributed cluster, spread hot data evenly across the different cache nodes.
 3) Set hot data to never expire.
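A minimal sketch of the randomized expiration in point 1), assuming Jedis; the one-hour base TTL and the 10-minute jitter window are illustrative values:

    import java.util.concurrent.ThreadLocalRandom;

    import redis.clients.jedis.Jedis;

    public class JitteredCacheWriter {
        public void put(Jedis jedis, String key, String value) {
            // Base TTL plus a random jitter so that keys written at the same time
            // do not all expire at the same moment.
            int baseSeconds = 3600;
            int jitterSeconds = ThreadLocalRandom.current().nextInt(600);
            jedis.setex(key, baseSeconds + jitterSeconds, value);
        }
    }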


Origin: www.cnblogs.com/xhlwjy/p/11723903.html