Redis Cluster Scaling [Repost]

1: Experiment Overview

Without interrupting the service the cluster provides to clients, you can add nodes to scale the cluster out, or take some nodes offline to scale it in.

The underlying principle can be summarized as slots, together with the data they hold, moving flexibly between nodes.

When a node is added to scale the cluster out, the relevant commands must be used to migrate a portion of the slots and their data to the new node.

Cluster scaling = moving slots and data between nodes.
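
Concretely, each slot move is a short sequence of native cluster commands that redis-trib issues on our behalf. A rough sketch for a single slot (slot 0, which later moves from 6379 to 6385 in this walkthrough; the key names in the migrate step are hypothetical):

# Roughly what redis-trib does for each slot it moves (sketch for slot 0: source 6379, target 6385)
redis-cli -p 6385 cluster setslot 0 importing 8636e84389a6ce864c5e99afc08e61d5095fc4f4   # target marks slot 0 as importing
redis-cli -p 6379 cluster setslot 0 migrating 33dc6f56d2992442af40827b8f73b2b487e9dc62   # source marks slot 0 as migrating
redis-cli -p 6379 cluster getkeysinslot 0 100                                            # list up to 100 keys currently in slot 0
redis-cli -p 6379 migrate 127.0.0.1 6385 "" 0 5000 keys user:1 user:2                    # move a batch of keys (hypothetical names)
redis-cli -p 6379 cluster setslot 0 node 33dc6f56d2992442af40827b8f73b2b487e9dc62        # assign slot 0 to the target (sent to all masters)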
2: Experiment Environment

The existing cluster consists of three masters (127.0.0.1:6379, 6380, 6381), each with one slave (6382, 6383, 6384). We now want to add one master node, 127.0.0.1:6385, and one slave node, 127.0.0.1:6386.
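
Before touching anything, it helps to confirm the starting topology from any existing node (a quick check, assuming redis-cli is available on this host):

# Sanity-check the starting six-node topology
redis-cli -p 6379 cluster nodes   # should show three masters and three slaves
redis-cli -p 6379 cluster info    # cluster_state should be ok and cluster_known_nodes should be 6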
3: Experiment Steps
3.1 Scaling Out the Cluster
3.1.1 Prepare the New Nodes

cp redis-6384.conf redis-6385.conf
cp redis-6384.conf redis-6386.conf


Modify the following parameters in both files (values shown for 6385; use 6386 in redis-6386.conf):

port 6385
pidfile /var/run/redis_6385.pid
cluster-enabled yes
cluster-node-timeout 15000
cluster-config-file"nodes-6385.conf"

# Start the nodes

redis-server /usr/local/redis/conf/redis-6385.conf&
redis-server /usr/local/redis/conf/redis-6386.conf&
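
At this point the two new instances are running but do not yet belong to the cluster; each only knows about itself. A quick way to verify that before joining them:

# The new nodes are not yet part of the cluster
redis-cli -p 6385 cluster info   # cluster_known_nodes should still be 1 here
redis-cli -p 6386 cluster info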

3.1.2 Join the Cluster

Command:

redis-trib.rb add-node --slave --master-id <arg> new_host:new_port existing_host:existing_port

The --slave option specifies that the new node joins the cluster as a slave of the master given by --master-id.

For example:

redis-trib.rb add-node 127.0.0.1:6385 127.0.0.1:6379

redis-trib.rb add-node --slave --master-id 33dc6f56d2992442af40827b8f73b2b487e9dc62 127.0.0.1:6386 127.0.0.1:6379
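
Under the hood, add-node is essentially a wrapper around the native cluster commands. A rough manual equivalent of the two calls above (same addresses and master ID as in this cluster):

# Rough manual equivalent of the two add-node calls above (sketch)
redis-cli -p 6385 cluster meet 127.0.0.1 6379                                  # introduce the new master to the cluster
redis-cli -p 6386 cluster meet 127.0.0.1 6379                                  # introduce the new node that will become the slave
redis-cli -p 6386 cluster replicate 33dc6f56d2992442af40827b8f73b2b487e9dc62   # make 6386 a replica of 6385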
 
127.0.0.1:6379> cluster nodes
f823b940266c33b6b9e11817887c17a45c6c9183 127.0.0.1:6383 slave c5424585ae85cfcc8074ee2017463a4de08de8ff 0 1510742172520 5 connected
fc47848325254ee3ea3253ca5947afc553212f54 127.0.0.1:6381 master - 0 1510742173822 3 connected 10923-16383
b29819d09405c5f111c15808a83fdb1c93eb6142 127.0.0.1:6386 slave 33dc6f56d2992442af40827b8f73b2b487e9dc62 0 1510742168486 8 connected
c5424585ae85cfcc8074ee2017463a4de08de8ff 127.0.0.1:6380 master - 0 1510742170325 2 connected 5461-10922
4ef4f547ae119bd7758670153e54417b45668591 127.0.0.1:6384 slave fc47848325254ee3ea3253ca5947afc553212f54 0 1510742171327 6 connected
8636e84389a6ce864c5e99afc08e61d5095fc4f4 127.0.0.1:6379 myself,master - 0 0 1 connected 0-5460
33dc6f56d2992442af40827b8f73b2b487e9dc62 127.0.0.1:6385 master - 0 1510742174522 8 connected
c7a0e63f207885ac2e7261de635bf53703f24f66 127.0.0.1:6382 slave 8636e84389a6ce864c5e99afc08e61d5095fc4f4 0 1510742173521 4 connected


3.1.3 Migrate Slots and Data

# Determine the slot migration plan

There are 16384 slots in total and now four master nodes. To keep the slots balanced, the plan is 4096 slots per node, so 4096 slots need to be migrated to the new master.

redis-trib provides a slot resharding feature; the command is as follows:

redis-trib.rb reshard host:port --from <arg> --to <arg> --slots <arg> --yes --timeout <arg> --pipeline <arg>

host:port is a required argument; it can be the address of any node in the cluster and is used to obtain information about the whole cluster.
--from: ID of the source node(s); if there are multiple source nodes, separate them with commas.
--to: ID of the target node; only one target node may be specified.
--slots: total number of slots to migrate.
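
With these options the whole migration can also be run non-interactively; a sketch equivalent in effect to the interactive session below, using the three existing masters as sources and 6385 as the target:

# Non-interactive sketch of the same 4096-slot migration to 6385
redis-trib.rb reshard 127.0.0.1:6379 --from 8636e84389a6ce864c5e99afc08e61d5095fc4f4,c5424585ae85cfcc8074ee2017463a4de08de8ff,fc47848325254ee3ea3253ca5947afc553212f54 --to 33dc6f56d2992442af40827b8f73b2b487e9dc62 --slots 4096 --yes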

 

Example:
[root@ocp ~]# redis-trib.rb reshard 127.0.0.1:6379
……
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

How many slots do you want to move (from 1 to 16384)? 4096

What is the receiving node ID? 33dc6f56d2992442af40827b8f73b2b487e9dc62
Please enter all the source node IDs.
 Type 'all' to use all the nodes as source nodes for the hash slots.
 Type 'done' once you entered all the source nodes IDs.
Source node #1:8636e84389a6ce864c5e99afc08e61d5095fc4f4
Source node #2:c5424585ae85cfcc8074ee2017463a4de08de8ff
Source node #3:fc47848325254ee3ea3253ca5947afc553212f54
Source node #4:done

Type done here to indicate that you have finished entering source nodes.

 

……

  Moving slot 12285 from fc47848325254ee3ea3253ca5947afc553212f54
   Moving slot 12286 from fc47848325254ee3ea3253ca5947afc553212f54
   Moving slot 12287 from fc47848325254ee3ea3253ca5947afc553212f54
Do you want to proceed with the proposed reshard plan (yes/no)? yes

……

# Check whether the migration succeeded

127.0.0.1:6379> cluster nodes
f823b940266c33b6b9e11817887c17a45c6c9183 127.0.0.1:6383 slave c5424585ae85cfcc8074ee2017463a4de08de8ff 0 1510808413079 5 connected
fc47848325254ee3ea3253ca5947afc553212f54 127.0.0.1:6381 master - 0 1510808417086 3 connected 12288-16383
b29819d09405c5f111c15808a83fdb1c93eb6142 127.0.0.1:6386 slave 33dc6f56d2992442af40827b8f73b2b487e9dc62 0 1510808415082 8 connected
c5424585ae85cfcc8074ee2017463a4de08de8ff 127.0.0.1:6380 master - 0 1510808416083 2 connected 6827-10922
4ef4f547ae119bd7758670153e54417b45668591 127.0.0.1:6384 slave fc47848325254ee3ea3253ca5947afc553212f54 0 1510808412074 6 connected
8636e84389a6ce864c5e99afc08e61d5095fc4f4 127.0.0.1:6379 myself,master - 0 0 1 connected 1365-5460
33dc6f56d2992442af40827b8f73b2b487e9dc62 127.0.0.1:6385 master - 0 1510808411072 8 connected 0-1364 5461-6826 10923-12287
c7a0e63f207885ac2e7261de635bf53703f24f66 127.0.0.1:6382 slave 8636e84389a6ce864c5e99afc08e61d5095fc4f4 0 1510808414080 4 connected
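
A quick functional check after the move (a sketch with an arbitrary test key; -c makes redis-cli follow MOVED redirects between nodes):

redis-cli -c -p 6379 set testkey:scaling hello
redis-cli -c -p 6379 get testkey:scaling
redis-cli -c -p 6379 cluster keyslot testkey:scaling   # shows which slot (and therefore which node) serves this key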

# Check that the slots are balanced across the nodes
[root@ocp ~]# redis-trib.rb rebalance 127.0.0.1:6379
>>> Performing Cluster Check (using node 127.0.0.1:6379)
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.
*** No rebalancing needed! All nodes are within the 2.0% threshold.
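
Two more quick health checks worth running after a reshard (a sketch using the same entry node):

redis-trib.rb check 127.0.0.1:6379   # verifies full slot coverage and looks for open slots
redis-trib.rb info 127.0.0.1:6379    # prints per-master slot and key counts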



 
3.2 Scaling In the Cluster

Here we remove the two nodes that were just added to the cluster, on ports 6385 and 6386.
3.2.1 Migrate the Slots Away

Because each run of the reshard command accepts only one target node, the reshard command has to be executed three times.

The three runs migrate 1365, 1365, and 1366 slots respectively (4096 in total).

Only one of the slot migrations is recorded here; see the scripted sketch after it for the other two.

[root@ocp ~]# redis-trib.rb reshard 127.0.0.1:6379
……
[OK] All nodes agree about slots configuration.
>>> Check for open slots...
>>> Check slots coverage...
[OK] All 16384 slots covered.

How many slots do you want to move (from 1 to 16384)? 1365

What is the receiving node ID? 8636e84389a6ce864c5e99afc08e61d5095fc4f4

Please enter all the source node IDs.
 Type 'all' to use all the nodes as source nodes for the hash slots.
 Type 'done' once you entered all the source nodes IDs.
Source node #1:33dc6f56d2992442af40827b8f73b2b487e9dc62
Source node #2:done
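
The other two migrations follow exactly the same pattern, with 6380 and 6381 as the receiving nodes. A scripted sketch, with slot counts chosen to match the final layout shown below:

redis-trib.rb reshard 127.0.0.1:6379 --from 33dc6f56d2992442af40827b8f73b2b487e9dc62 --to c5424585ae85cfcc8074ee2017463a4de08de8ff --slots 1365 --yes
redis-trib.rb reshard 127.0.0.1:6379 --from 33dc6f56d2992442af40827b8f73b2b487e9dc62 --to fc47848325254ee3ea3253ca5947afc553212f54 --slots 1366 --yes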

# View the node information

127.0.0.1:6379> cluster nodes
f823b940266c33b6b9e11817887c17a45c6c9183 127.0.0.1:6383 slave c5424585ae85cfcc8074ee2017463a4de08de8ff 0 1510809435416 10 connected
fc47848325254ee3ea3253ca5947afc553212f54 127.0.0.1:6381 master - 0 1510809434414 11 connected 6826 10923-16383
b29819d09405c5f111c15808a83fdb1c93eb6142 127.0.0.1:6386 slave fc47848325254ee3ea3253ca5947afc553212f54 0 1510809433412 11 connected
c5424585ae85cfcc8074ee2017463a4de08de8ff 127.0.0.1:6380 master - 0 1510809436419 10 connected 5461-6825 6827-10922
4ef4f547ae119bd7758670153e54417b45668591 127.0.0.1:6384 slave fc47848325254ee3ea3253ca5947afc553212f54 0 1510809430405 11 connected
8636e84389a6ce864c5e99afc08e61d5095fc4f4 127.0.0.1:6379 myself,master - 0 0 9 connected 0-5460
33dc6f56d2992442af40827b8f73b2b487e9dc62 127.0.0.1:6385 master - 0 1510809434915 8 connected
c7a0e63f207885ac2e7261de635bf53703f24f66 127.0.0.1:6382 slave 8636e84389a6ce864c5e99afc08e61d5095fc4f4 0 1510809431909 9 connected


 
3.2.2 Take the Nodes Offline

Command:

redis-trib.rb del-node {host:port} {downNodeId}

First take the slave node offline (as a general rule, remove slave nodes before masters so the removal does not trigger a failover).

[root@ocp ~]# redis-trib.rb del-node 127.0.0.1:6379 b29819d09405c5f111c15808a83fdb1c93eb6142
>>> Removing node b29819d09405c5f111c15808a83fdb1c93eb6142 from cluster 127.0.0.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.

Then take the master node offline

[root@ocp ~]# redis-trib.rb del-node 127.0.0.1:6379 33dc6f56d2992442af40827b8f73b2b487e9dc62
>>> Removing node 33dc6f56d2992442af40827b8f73b2b487e9dc62 from cluster 127.0.0.1:6379
>>> Sending CLUSTER FORGET messages to the cluster...
>>> SHUTDOWN the node.
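
del-node is roughly a wrapper around CLUSTER FORGET plus a shutdown: every remaining node is told to forget the removed ID, and the removed instance is then stopped. A manual sketch for the slave 6386 (ports and ID taken from this cluster):

# Rough manual equivalent of del-node for 6386 (sketch)
for port in 6379 6380 6381 6382 6383 6384; do
  redis-cli -p ${port} cluster forget b29819d09405c5f111c15808a83fdb1c93eb6142   # every remaining node forgets 6386
done
redis-cli -p 6386 shutdown nosave   # stop the removed instance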

# Confirm the node status

127.0.0.1:6379> cluster nodes
f823b940266c33b6b9e11817887c17a45c6c9183 127.0.0.1:6383 slave c5424585ae85cfcc8074ee2017463a4de08de8ff 0 1510809731031 10 connected
fc47848325254ee3ea3253ca5947afc553212f54 127.0.0.1:6381 master - 0 1510809727523 11 connected 6826 10923-16383
c5424585ae85cfcc8074ee2017463a4de08de8ff 127.0.0.1:6380 master - 0 1510809729027 10 connected 5461-6825 6827-10922
4ef4f547ae119bd7758670153e54417b45668591 127.0.0.1:6384 slave fc47848325254ee3ea3253ca5947afc553212f54 0 1510809730028 11 connected
8636e84389a6ce864c5e99afc08e61d5095fc4f4 127.0.0.1:6379 myself,master - 0 0 9 connected 0-5460
c7a0e63f207885ac2e7261de635bf53703f24f66 127.0.0.1:6382 slave 8636e84389a6ce864c5e99afc08e61d5095fc4f4 0 1510809728024 9 connected


---------------------

Repost source:

Author: 雅冰石
Source: CSDN
Original: https://blog.csdn.net/yabingshi_tech/article/details/78550185
Copyright notice: This is the author's original article; please include a link to the original post when reposting!

--For Redis cluster setup, see:

http://blog.csdn.net/yabingshi_tech/article/details/78539871


Reposted from www.cnblogs.com/paul8339/p/10649033.html