Redis: Detailed explanation of three cluster strategies

Redis offers three clustering strategies:

  • master-slave replication
  • sentinel
  • cluster

master-slave replication

In master-slave replication, databases are divided into two roles: the master database (master) and the slave database (slave). Master-slave replication has the following characteristics:

  • The master database can perform both read and write operations; when a write changes data, the change is automatically synchronized to the slave databases
  • The slave database is generally read-only and accepts the data synchronized from the master database
  • A master can have multiple slaves, but a slave can only correspond to one master

Master-slave replication mechanism

When a slave starts, it sends the SYNC command to the master. On receiving SYNC, the master saves a snapshot in the background (RDB persistence) and buffers the write commands received while the snapshot is being taken, then sends both the snapshot file and the buffered commands to the slave. The slave loads the snapshot file and executes the buffered commands.

After replication is initialized, every write command received by the master will be sent to the slave synchronously to ensure master-slave data consistency.
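
As a quick illustration (reusing the master address from the configuration below and assuming a slave at 192.168.0.108:6379), a key written on the master becomes readable on the slave once replication catches up:

# write on the master
redis-cli -h 192.168.0.107 -p 6379 set greeting hello

# read the replicated key on the slave
redis-cli -h 192.168.0.108 -p 6379 get greeting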

master-slave configuration

A Redis instance acts as a master by default, so the master needs no extra configuration; we only need to modify the configuration of the slave.

Set the ip and port of the master to connect to:

slaveof 192.168.0.107 6379

If the master has a password set, the slave must also be configured with it:

masterauth <master-password>

After connecting, you can view the replication state of an instance with:

info replication
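
Putting the slave configuration together, a minimal sketch of the relevant redis.conf lines on the slave (the password is a placeholder):

slaveof 192.168.0.107 6379
masterauth <master-password>

Replication can also be switched on at runtime without editing the file, using the SLAVEOF command:

redis-cli -h 192.168.0.108 -p 6379 slaveof 192.168.0.107 6379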

sentinel

The role of the sentinel is to monitor the running state of the Redis system. Its functions are as follows:

  • Monitor whether the master and slave databases are running normally
  • When the master fails, automatically promote a slave to master
  • When multiple sentinels are configured, the sentinels also monitor each other automatically
  • Multiple sentinels can monitor the same Redis system

Sentinel working mechanism

When the sentinel process starts, it reads the configuration file and finds the master's ip and port from the line sentinel monitor <master-name> <ip> <port> <quorum>. One sentinel can monitor multiple master databases; simply provide one such configuration line per master.

The configuration file also defines monitoring-related parameters, such as how long the master may fail to respond before it is judged to be offline.
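
For example, one sentinel monitoring two masters could use a configuration like the following (the second master's name and address are invented for illustration; the trailing number on each monitor line is the quorum):

sentinel monitor mymaster 192.168.0.107 6379 2
sentinel down-after-milliseconds mymaster 30000
sentinel monitor othermaster 192.168.0.111 6379 2
sentinel down-after-milliseconds othermaster 30000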

After the sentinel is started, it will establish two connections with the master to be monitored:

  1. A connection is used to subscribe to the master's __sentinel__:hello channel and to obtain information about the other sentinel nodes monitoring the same master
  2. Another connection periodically sends INFO and other commands to the master to obtain information about the master itself

After establishing a connection with the master, the sentinel will perform three operations, and the sending frequency of these three operations can be configured in the configuration file:

  1. Send INFO commands to master and slave periodically
  2. Periodically publish its own information to the __sentinel__:hello channel of the master and slaves
  3. Periodically send PING commands to master, slave and other sentinels

These three operations matter. Sending the INFO command obtains the current state of a database and enables automatic discovery of new nodes: the sentinel only needs to be configured with the master's information to automatically discover the master's slaves. After obtaining slave information, the sentinel also establishes the same two connections to each slave in order to monitor it. Through the INFO command, the sentinel keeps up-to-date information about the master and slaves and reacts accordingly, for example to role changes.
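
You can watch this discovery traffic yourself by subscribing to the hello channel on the monitored master; every sentinel watching it periodically announces itself there:

redis-cli -h 192.168.0.107 -p 6379 subscribe __sentinel__:hello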

Next, the sentinel publishes to the __sentinel__:hello channel of the master and slaves, sharing its own information with the other sentinels that monitor the same databases. The published content is the sentinel's ip, port, run id, and configuration version, plus the master's name, ip, port, and configuration version. This information is used as follows:

  • Other sentinels can use it to determine whether the sender is a newly discovered sentinel, and if so, establish a connection to it for sending PING commands
  • Other sentinels can use it to judge the master's configuration version; if it is higher than the locally recorded version, the local record is updated

Once slaves and other sentinel nodes have been discovered automatically, the sentinel monitors whether these databases and nodes are still serving by sending PING commands periodically. The sending frequency can be configured, with a maximum interval of 1s; it is set with, for example, sentinel down-after-milliseconds mymaster 600.

If a pinged database or node fails to reply within the timeout, the sentinel considers it subjectively down. If the node that is subjectively down is the master, the sentinel sends commands to the other sentinel nodes asking whether they also consider the master subjectively down. Once a certain number of votes is reached (the quorum in the configuration file), the sentinel considers the master objectively down and elects a leading sentinel node to initiate failure recovery of the master-slave system.

As mentioned above, once the master is considered objectively down, failure recovery is carried out by an elected leading sentinel. The election uses the Raft algorithm:
1. The sentinel node that discovered the master was down (call it A) sends a command to every other sentinel, asking to be elected as the leading sentinel
2. If the target sentinel node has not yet voted for anyone else, it agrees to elect A as the leading sentinel
3. If more than half of the sentinels agree to elect A as the leading sentinel, A is elected
4. If multiple sentinel nodes run for election at the same time, a round of voting may end without a winner; in that case each candidate waits a random time and then initiates a new election request for the next round of voting, until a leading sentinel is elected

After the leading sentinel is elected, it starts failure recovery by selecting one of the failed master's slaves to become the new master. The selection rules are as follows:

  1. Among all online slaves, select the one with the highest priority; the priority is configured with slave-priority, where a lower value means higher priority (see the snippet after this list)
  2. If multiple slaves share the highest priority, select the one with the largest replication offset (that is, the most complete copy of the data)
  3. If the above are still equal, select the slave with the smallest run id
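
For instance, to steer the choice in rule 1, set the priority in each slave's redis.conf; a value of 0 excludes the slave from promotion entirely:

# prefer this slave during failover (lower value = higher priority)
slave-priority 100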

Once the successor slave has been selected, the leading sentinel sends it a command to promote it to master, then sends commands to the other slaves to replicate from the new master, and finally updates its records: the stopped old master is recorded as a slave of the new master, so that when its service is restored it continues running as a slave.

Sentinel configuration

Sentinel is configured in sentinel.conf, which sets the monitored master's name, address, and port, plus the quorum, i.e. the minimum number of sentinel nodes that must agree before recovery is performed:

sentinel monitor mymaster 192.168.0.107 6379 1

Only the master to be monitored needs to be configured; the sentinel will discover and monitor the slaves connected to that master.
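
A slightly fuller sentinel.conf sketch (the timeout values are illustrative defaults, not tuned recommendations):

port 26379
sentinel monitor mymaster 192.168.0.107 6379 1
sentinel down-after-milliseconds mymaster 30000
sentinel failover-timeout mymaster 180000
sentinel parallel-syncs mymaster 1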

Start the sentinel node:

redis-server sentinel.conf --sentinel &

Output like the following indicates a successful start:

[root@buke110 redis]# bin/redis-server etc/sentinel.conf --sentinel &
[1] 3072
[root@buke110 redis]# 3072:X 12 Apr 22:40:02.503 * Increased maximum number of open files to 10032 (it was originally set to 1024).
                _._                                                  
           _.-``__ ''-._                                             
      _.-``    `.  `_.  ''-._           Redis 2.9.102 (00000000/0) 64 bit
  .-`` .-```.  ```\/    _.,_ ''-._                                   
 (    '      ,       .-`  | `,    )     Running in sentinel mode
 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 26379
 |    `-._   `._    /     _.-'    |     PID: 3072
  `-._    `-._  `-./  _.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |           http://redis.io        
  `-._    `-._`-.__.-'_.-'    _.-'                                   
 |`-._`-._    `-.__.-'    _.-'_.-'|                                  
 |    `-._`-._        _.-'_.-'    |                                  
  `-._    `-._`-.__.-'_.-'    _.-'                                   
      `-._    `-.__.-'    _.-'                                       
          `-._        _.-'                                           
              `-.__.-'                                               

3072:X 12 Apr 22:40:02.554 # Sentinel runid is e510bd95d4deba3261de72272130322b2ba650e7
3072:X 12 Apr 22:40:02.554 # +monitor master mymaster 192.168.0.107 6379 quorum 1
3072:X 12 Apr 22:40:03.516 * +slave slave 192.168.0.108:6379 192.168.0.108 6379 @ mymaster 192.168.0.107 6379
3072:X 12 Apr 22:40:03.516 * +slave slave 192.168.0.109:6379 192.168.0.109 6379 @ mymaster 192.168.0.107 6379

You can query a specific sentinel node's information from any server:

bin/redis-cli -h 192.168.0.110 -p 26379 info Sentinel

The console outputs the sentinel information:

[root@buke107 redis]# bin/redis-cli -h 192.168.0.110  -p 26379  info Sentinel
# Sentinel
sentinel_masters:1
sentinel_tilt:0
sentinel_running_scripts:0
sentinel_scripts_queue_length:0
master0:name=mymaster,status=ok,address=192.168.0.107:6379,slaves=2,sentinels=1
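
A sentinel can also be asked directly for the current master's address; this is how sentinel-aware clients locate the master again after a failover:

bin/redis-cli -h 192.168.0.110 -p 26379 sentinel get-master-addr-by-name mymaster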

cluster

To use a cluster, you only need to turn on the cluster-enabled option on each database node. A cluster needs at least three master databases to function properly.

Cluster configuration

Install the Ruby dependencies; note that the Ruby version must be higher than 2.2:

yum install ruby
yum install rubygems
gem install redis

Modify the configuration file:

bind 192.168.0.107

Configure the port:

port 6380

Configure the snapshot save path:

dir /usr/local/redis-cluster/6380/

Enable cluster mode:

cluster-enabled yes

Give each node its own cluster state file:

cluster-config-file nodes-6380.conf

Set the cluster node timeout, in milliseconds:

cluster-node-timeout 15000
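
Taken together, a minimal sketch of the configuration for the 6380 node; the other nodes (6381-6385) repeat this with the port, directory, and cluster-config-file adjusted:

bind 192.168.0.107
port 6380
dir /usr/local/redis-cluster/6380/
cluster-enabled yes
cluster-config-file nodes-6380.conf
cluster-node-timeout 15000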

Start the nodes in the cluster:

redis-server ../6380/redis.conf

Create the cluster from the six nodes:

redis-trib.rb create --replicas 1 192.168.0.107:6380 192.168.0.107:6381 192.168.0.107:6382 192.168.0.107:6383 192.168.0.107:6384 192.168.0.107:6385

During creation, you need to type yes to confirm building the cluster:

[root@buke107 src]# redis-trib.rb create --replicas 1 192.168.0.107:6380 192.168.0.107:6381 192.168.0.107:6382 192.168.0.107:6383 192.168.0.107:6384 192.168.0.107:6385 
>>> Creating cluster
Connecting to node 192.168.0.107:6380: OK
Connecting to node 192.168.0.107:6381: OK
Connecting to node 192.168.0.107:6382: OK
Connecting to node 192.168.0.107:6383: OK
Connecting to node 192.168.0.107:6384: OK
Connecting to node 192.168.0.107:6385: OK
>>> Performing hash slots allocation on 6 nodes...
Using 3 masters:
192.168.0.107:6380
192.168.0.107:6381
192.168.0.107:6382
Adding replica 192.168.0.107:6383 to 192.168.0.107:6380
Adding replica 192.168.0.107:6384 to 192.168.0.107:6381
Adding replica 192.168.0.107:6385 to 192.168.0.107:6382
M: 5cd3ed3a84ead41a765abd3781b98950d452c958 192.168.0.107:6380
   slots:0-5460 (5461 slots) master
M: 90b4b326d579f9b5e181e3df95578bceba29b204 192.168.0.107:6381
   slots:5461-10922 (5462 slots) master
M: 868456121fa4e6c8e7abe235a88b51d354a944b5 192.168.0.107:6382
   slots:10923-16383 (5461 slots) master
S: b8e047aeacb9398c3f58f96d0602efbbea2078e2 192.168.0.107:6383
   replicates 5cd3ed3a84ead41a765abd3781b98950d452c958
S: 68cf66359318b26df16ebf95ba0c00d9f6b2c63e 192.168.0.107:6384
   replicates 90b4b326d579f9b5e181e3df95578bceba29b204
S: d6d01fd8f1e5b9f8fc0c748e08248a358da3638d 192.168.0.107:6385
   replicates 868456121fa4e6c8e7abe235a88b51d354a944b5
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join....
>>> Performing Cluster Check (using node 192.168.0.107:6380)
M: 5cd3ed3a84ead41a765abd3781b98950d452c958 192.168.0.107:6380
   slots:0-5460 (5461 slots) master
M: 90b4b326d579f9b5e181e3df95578bceba29b204 192.168.0.107:6381
   slots:5461-10922 (5462 slots) master
M: 868456121fa4e6c8e7abe235a88b51d354a944b5 192.168.0.107:6382
   slots:10923-16383 (5461 slots) master
M: b8e047aeacb9398c3f58f96d0602efbbea2078e2 192.168.0.107:6383
   slots: (0 slots) master
   replicates 5cd3ed3a84ead41a765abd3781b98950d452c958
M: 68cf66359318b26df16ebf95ba0c00d9f6b2c63e 192.168.0.107:6384
   slots: (0 slots) master
   replicates 90b4b326d579f9b5e181e3df95578bceba29b204
M: d6d01fd8f1e5b9f8fc0c748e08248a358da3638d 192.168.0.107:6385
   slots: (0 slots) master
   replicates 868456121fa4e6c8e7abe235a88b51d354a944b5
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

Connect to any node in the cluster:

redis-cli -c -h 192.168.0.107 -p 6381

View the nodes in the cluster:

[root@buke107 src]# redis-cli -c -h 192.168.0.107 -p 6381
192.168.0.107:6381> cluster nodes
868456121fa4e6c8e7abe235a88b51d354a944b5 192.168.0.107:6382 master - 0 1523609792598 3 connected 10923-16383
d6d01fd8f1e5b9f8fc0c748e08248a358da3638d 192.168.0.107:6385 slave 868456121fa4e6c8e7abe235a88b51d354a944b5 0 1523609795616 6 connected
5cd3ed3a84ead41a765abd3781b98950d452c958 192.168.0.107:6380 master - 0 1523609794610 1 connected 0-5460
b8e047aeacb9398c3f58f96d0602efbbea2078e2 192.168.0.107:6383 slave 5cd3ed3a84ead41a765abd3781b98950d452c958 0 1523609797629 1 connected
68cf66359318b26df16ebf95ba0c00d9f6b2c63e 192.168.0.107:6384 slave 90b4b326d579f9b5e181e3df95578bceba29b204 0 1523609796622 5 connected
90b4b326d579f9b5e181e3df95578bceba29b204 192.168.0.107:6381 myself,master - 0 0 2 connected 5461-10922

As shown above, a cluster of three masters and three slaves has been established.
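
Because redis-cli was started with -c, it follows cluster redirections automatically. For example, setting a key whose hash slot is owned by another node triggers a redirect (the key foo hashes to slot 12182, which the output above assigns to the node on port 6382):

192.168.0.107:6381> set foo bar
-> Redirected to slot [12182] located at 192.168.0.107:6382
OK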

Add cluster nodes

To add a new node to a running cluster, issue the CLUSTER MEET command from a node that is already part of the cluster:

cluster meet <ip> <port>
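
For example, to introduce a hypothetical new node listening on 192.168.0.107:6386, run the command from any node already in the cluster:

redis-cli -c -h 192.168.0.107 -p 6380 cluster meet 192.168.0.107 6386

Note that a newly met node joins as a master that owns no hash slots; slots must be migrated to it (or it must be made a replica) before it serves data.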
