Redis Series: Sentinel

I had been studying Redis Sentinel clusters for a while, but my first few experiments never succeeded: either the Master had problems, or after the Master went down the Slaves were not automatically switched over to a new Master. Today it finally worked in one go. The cluster layout is as follows:
Master:127.0.0.1:10001
Slave:127.0.0.1:10002,127.0.0.1:10003
Sentinel:127.0.0.1:26379,127.0.0.1:26479

First, create the folders 10001, 10002 and 10003, and copy redis.conf and redis-server into each of them.

longwentaodeMacBook-Pro:redis-cluster longwentao$ ls
10001       10002       10003       sentinel
longwentaodeMacBook-Pro:redis-cluster longwentao$ ls 10001
redis-server    redis.conf

In each folder, change the port in redis.conf to match the folder name:

port 10001
port 10002
port 10003
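
The remaining settings can stay at their defaults. If you prefer each instance to keep its own log and working directory, a minimal per-instance sketch (the daemonize/logfile/dir values here are my own illustrative choices, not part of the original setup) for the 10001 instance would be:

# 10001/redis.conf (illustrative extras)
port 10001
daemonize yes
logfile "redis_10001.log"
dir ./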

In the redis.conf files under 10002 and 10003, point them at the master 127.0.0.1:10001 and keep the default values for everything else:

# slaveof <masterip> <masterport>
slaveof 127.0.0.1 10001
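
Note that from Redis 5.0 onward this directive is spelled replicaof (slaveof is kept as an alias), so on newer versions the equivalent line is:

replicaof 127.0.0.1 10001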

Next, create a sentinel folder and copy redis-sentinel and sentinel.conf into it. Rename sentinel.conf to sentinel_26379.conf, then set the port and the master to be monitored:

port 26379
sentinel monitor mymaster 127.0.0.1 10001 1

The trailing 1 is the quorum: at least one Sentinel must agree that the master is down before a failover to a new master is started.
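
The other failover-related directives are left at their defaults here. For reference, a fuller sentinel_26379.conf could also tune the following standard settings (the values below are only illustrative, not what this setup used):

port 26379
sentinel monitor mymaster 127.0.0.1 10001 1
# how long the master must be unreachable before this Sentinel flags it as down
sentinel down-after-milliseconds mymaster 30000
# how many slaves may resynchronize with the new master at the same time after a failover
sentinel parallel-syncs mymaster 1
# overall timeout for one failover attempt
sentinel failover-timeout mymaster 180000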

Copy sentinel_26379.conf to sentinel_26479.conf and change only the port to 26479. The sentinel folder then contains:

longwentaodeMacBook-Pro:sentinel longwentao$ ls
redis-sentinel      sentinel_26379.conf sentinel_26479.conf

Each conf file corresponds to one Sentinel instance.

At this point the whole cluster is configured. First start the master and the two Sentinels:

longwentaodeMacBook-Pro:10001 longwentao$ ./redis-server redis.conf 
longwentaodeMacBook-Pro:sentinel longwentao$ ./redis-sentinel sentinel_26379.conf 
longwentaodeMacBook-Pro:sentinel longwentao$ ./redis-sentinel sentinel_26479.conf 
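
If the redis-sentinel binary is not at hand, the same thing can be achieved by starting redis-server in Sentinel mode:

./redis-server sentinel_26379.conf --sentinel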

Then start 10002 and 10003:

longwentaodeMacBook-Pro:10002 longwentao$ ./redis-server redis.conf 
longwentaodeMacBook-Pro:10003 longwentao$ ./redis-server redis.conf 

In the Sentinel logs you can see that the two slaves have been discovered:

588:X 31 Dec 12:54:52.469 # Sentinel ID is 75e84135f9dddd13910b3e835f6836b0d861dbdb
588:X 31 Dec 12:54:52.469 # +monitor master mymaster 127.0.0.1 10001 quorum 1
588:X 31 Dec 12:54:52.470 * +slave slave 127.0.0.1:10002 127.0.0.1 10002 @ mymaster 127.0.0.1 10001
588:X 31 Dec 12:54:52.470 * +slave slave 127.0.0.1:10003 127.0.0.1 10003 @ mymaster 127.0.0.1 10001
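
You can also query a Sentinel directly with redis-cli to confirm what it is monitoring, for example (output omitted):

./redis-cli -p 26379 sentinel master mymaster
./redis-cli -p 26379 sentinel slaves mymaster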

Check the replication info on 10001: it is the master and has two slaves, 10002 and 10003:

longwentaodeMacBook-Pro:src longwentao$ ./redis-cli -p 10001
127.0.0.1:10001> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=10002,state=online,offset=417020,lag=1
slave1:ip=127.0.0.1,port=10003,state=online,offset=417020,lag=1
master_repl_offset:417020
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:417019

Check the replication info on 10002:

longwentaodeMacBook-Pro:src longwentao$ ./redis-cli -p 10002
127.0.0.1:10002> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:10001
master_link_status:up
master_last_io_seconds_ago:2
master_sync_in_progress:0
slave_repl_offset:424608
slave_priority:100
slave_read_only:1
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:10002> 

Now verify the failover behavior by stopping the master (one way of doing this is shown below).
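
Any method of killing the 10001 process will do; for example, a clean shutdown without saving (this particular command is my choice for illustration, not necessarily how the original test stopped it):

./redis-cli -p 10001 shutdown nosave

Once the Sentinel has decided the master is down, its log shows 10003 automatically promoted to the new master: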

588:X 31 Dec 13:57:38.194 # +switch-master mymaster 127.0.0.1 10001 127.0.0.1 10003
588:X 31 Dec 13:57:38.195 * +slave slave 127.0.0.1:10002 127.0.0.1 10002 @ mymaster 127.0.0.1 10003
588:X 31 Dec 13:57:38.195 * +slave slave 127.0.0.1:10001 127.0.0.1 10001 @ mymaster 127.0.0.1 10003
588:X 31 Dec 13:58:08.268 # +sdown slave 127.0.0.1:10001 127.0.0.1 10001 @ mymaster 127.0.0.1 10003

Check the replication info again: 10003 is now the master, with a single slave node, 10002:

longwentaodeMacBook-Pro:src longwentao$ ./redis-cli -p 10003
127.0.0.1:10003> info replication
# Replication
role:master
connected_slaves:1
slave0:ip=127.0.0.1,port=10002,state=online,offset=28740,lag=1
master_repl_offset:28874
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:28873

Now restart 10001. The Sentinel log shows a 10001 slave being added back, with 10003 remaining the master:

588:X 31 Dec 14:03:23.277 # -sdown slave 127.0.0.1:10001 127.0.0.1 10001 @ mymaster 127.0.0.1 10003
588:X 31 Dec 14:03:33.256 * +convert-to-slave slave 127.0.0.1:10001 127.0.0.1 10001 @ mymaster 127.0.0.1 10003

Check the replication info: the master is still 10003, and 10001 has been converted into a slave:

127.0.0.1:10003> info replication
# Replication
role:master
connected_slaves:2
slave0:ip=127.0.0.1,port=10002,state=online,offset=60039,lag=1
slave1:ip=127.0.0.1,port=10001,state=online,offset=60039,lag=1
master_repl_offset:60039
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:2
repl_backlog_histlen:60038
127.0.0.1:10003> 
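
Because the master can move to a different port after a failover, clients should not hard-code 127.0.0.1:10001. The current master can always be looked up from any Sentinel (assuming one listening on 26379, as configured above), for example:

./redis-cli -p 26379 sentinel get-master-addr-by-name mymaster

which returns the IP and port of the current master, i.e. 127.0.0.1 and 10003 at this point in the experiment.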

Reposted from blog.csdn.net/kity9420/article/details/53955413