Replica Set Election Principles

How Replication Works

Replication is based on the operation log (oplog), which is similar to the binary log in MySQL in that it records only operations that change data. Replication is the process of synchronizing the primary node's oplog to the other (secondary) nodes and applying it there.
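Once the replica set built below is running, the oplog's size and the time window it covers can be checked from the mongo shell; a minimal sketch (run on the lab set's primary, helper names as in MongoDB 3.6):

abc:PRIMARY> rs.printReplicationInfo()          # Show the configured oplog size and the time range it currently covers
abc:PRIMARY> rs.printSlaveReplicationInfo()     # Show how far each secondary lags behind the primary (3.6 helper name)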

How Elections Work

Node types fall into three categories: standard (host) nodes, passive nodes, and arbiter nodes.

(1) Only a standard node can be elected as the active (primary) node, and it has voting rights. A passive node holds a complete copy of the data and has voting rights, but can never become the active node. An arbiter node does not replicate data and can never become the active node; it only votes.

(2) The difference between a standard node and a passive node is the priority value: a member with a higher priority is a standard node, while a member with a low priority (priority 0 in this lab) is a passive node.

(3) The election rule is that the candidate with the most votes wins. priority is a value between 0 and 1000 and effectively adds that many extra votes to a member. Election result: the member with the most votes wins; if the votes are tied, the member with the newest data wins. (A sketch of inspecting and adjusting priority follows below.)
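For reference, once the replica set is configured, each member's priority can be inspected and changed from the mongo shell; a minimal sketch (the member index 2 and the new value 2 are only examples):

abc:PRIMARY> rs.conf().members.forEach(function (m) { print(m.host + "  priority=" + m.priority) })   # List every member's priority
abc:PRIMARY> cfg = rs.conf()                        # Fetch the current replica set configuration
abc:PRIMARY> cfg.members[2].priority = 2            # Example only: change one member's priority
abc:PRIMARY> rs.reconfig(cfg)                       # Apply the new configuration (run on the primary)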


Lab Deployment

Set up and configure four MongoDB instances: mongodb, mongodb2, mongodb3, and mongodb4, all on the host with IP address 192.168.213.184.

Since mongodb is already the default first instance after MongoDB is installed, only instances 2, 3, and 4 need to be created.

1. Configure the replica set

(1) Create the storage paths for the data files and log files

[root@localhost ~]# mkdir -p /data/mongodb/mongodb{2,3,4}                # Create the data directories for instances 2, 3, and 4
[root@localhost ~]# cd /data/mongodb/
[root@localhost mongodb]# ls
mongodb2  mongodb3  mongodb4
[root@localhost mongodb]# mkdir logs
[root@localhost mongodb]# ls
logs  mongodb2  mongodb3  mongodb4
[root@localhost mongodb]# cd logs/
[root@localhost logs]# touch mongodb{2,3,4}.log                            # Create the log files under /data/mongodb/logs/
[root@localhost logs]# chmod -R 777 /data/mongodb/logs/*.log               # Adjust the log file permissions
[root@localhost logs]# ls -l
total 0
-rwxrwxrwx. 1 root root 0 Sep 13 18:43 mongodb2.log
-rwxrwxrwx. 1 root root 0 Sep 13 18:43 mongodb3.log
-rwxrwxrwx. 1 root root 0 Sep 13 18:43 mongodb4.log

(2) Edit the configuration file of instance 1 and enable a replica set name

[root@localhost logs]# vim /etc/mongod.conf
replication:
      replSetName: abc                        # Uncomment this section and specify the replica set name (any name works)

Create the configuration files for the additional instances

[root@localhost logs]# cp -p /etc/mongod.conf /etc/mongod2.conf        # Copy the default file to create the config file for instance 2
[root@localhost logs]# vim /etc/mongod2.conf

systemLog:
   destination: file
   logAppend: true
   path: /data/mongodb/logs/mongodb2.log             # Log file location

storage:
   dbPath: /data/mongodb/mongodb2                    # Data file location
   journal:
     enabled: true

net:
   port: 27018             # Port number. The default is 27017, so instance 2 must use a different port; instances 3 and 4 follow in sequence (27019, 27020)
   bindIp: 0.0.0.0         # Listen on all interfaces so members can reach each other via 192.168.213.184 (matches the netstat output below)


Configure the other instances in the same way as instance 2; besides the port number (27019 for instance 3, 27020 for instance 4), remember to also change the log file path and dbPath to mongodb3/mongodb4. One way to make these edits is shown after the copy commands below.

[root@localhost logs]# cp -p /etc/mongod2.conf /etc/mongod3.conf
[root@localhost logs]# cp -p /etc/mongod2.conf /etc/mongod4.conf
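After copying, the port and both paths in the new files still point at instance 2. One possible way to adjust them is a sketch with sed, assuming the paths and ports used above:

[root@localhost logs]# sed -i 's/mongodb2/mongodb3/g; s/27018/27019/' /etc/mongod3.conf      # Instance 3: paths -> mongodb3, port -> 27019
[root@localhost logs]# sed -i 's/mongodb2/mongodb4/g; s/27018/27020/' /etc/mongod4.conf      # Instance 4: paths -> mongodb4, port -> 27020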

(3) Start the services and check whether the ports are listening

[root@localhost logs]# mongod -f /etc/mongod.conf --shutdown                      # Stop the running instance 1 first, then start it again
killing process with pid: 3997                                   # Process ID of the stopped instance
[root@localhost logs]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 4867
child process started successfully, parent exiting


[root@localhost logs]# mongod -f /etc/mongod2.conf                                            # Start instance 2
about to fork child process, waiting until server is ready for connections.
forked process: 4899
child process started successfully, parent exiting


[root@localhost logs]# mongod -f /etc/mongod3.conf                                             # Start instance 3
about to fork child process, waiting until server is ready for connections.
forked process: 4927
child process started successfully, parent exiting


[root@localhost logs]# mongod -f /etc/mongod4.conf                                              # Start instance 4
about to fork child process, waiting until server is ready for connections.
forked process: 4955
child process started successfully, parent exiting

Check that each node's port is listening

[root@localhost logs]# netstat -ntap
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name   
tcp        0      0 0.0.0.0:27019           0.0.0.0:*               LISTEN      4927/mongod        
tcp        0      0 0.0.0.0:27020           0.0.0.0:*               LISTEN      4955/mongod        
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      4867/mongod        
tcp        0      0 0.0.0.0:27018           0.0.0.0:*               LISTEN      4899/mongod
      

(4) Check that the mongo shell can connect to each instance

[root@localhost logs]# mongo
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 3.6.7

[root@localhost logs]# mongo --port 27018
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27018/
MongoDB server version: 3.6.7
Server has startup warnings:

[root@localhost logs]# mongo --port 27019
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27019/
MongoDB server version: 3.6.7
Server has startup warnings:

[root@localhost logs]# mongo --port 27020
MongoDB shell version v3.6.7
connecting to: mongodb://127.0.0.1:27020/
MongoDB server version: 3.6.7
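A quicker way to confirm that all four instances respond is a small loop with mongo's --eval option; a minimal sketch:

[root@localhost logs]# for p in 27017 27018 27019 27020; do mongo --port $p --quiet --eval 'printjson(db.runCommand({ping: 1}))'; done      # Each instance should answer { "ok" : 1 }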

2. Configure the replica set priorities

(1) Log in to the default instance with mongo and configure the four-node MongoDB replica set: two standard nodes, one passive node, and one arbiter node. The commands are as follows:

Node roles are determined by priority: the members with priority 100 (ports 27017 and 27018) are the standard nodes, the member with priority 0 (port 27019) is the passive node, and port 27020 is the arbiter.

> cfg={"_id":"abc","members":[{"_id":0,"host":"192.168.213.184:27017","priority":100},{"_id":1,"host":"192.168.213.184:27018","priority":100},{"_id":2,"host":"192.168.213.184:27019","priority":0},{"_id":3,"host":"192.168.213.184:27020","arbiterOnly":true}]}
{
     "_id" : "abc",
     "members" : [
         {
             "_id" : 0,
             "host" : "192.168.213.184:27017",                   #标准节点,优先级为100
             "priority" : 100
         },
         {
             "_id" : 1,
             "host" : "192.168.213.184:27018",                  #标准节点
             "priority" : 100
         },
         {
             "_id" : 2,
             "host" : "192.168.213.184:27019",                   #被动节点,优先级为0
             "priority" : 0
         },
         {
             "_id" : 3,
             "host" : "192.168.213.184:27020",                   #仲裁节点
             "arbiterOnly" : true
         }
     ]
}


> rs.initiate(cfg)                    # Initialize the replica set
{
     "ok" : 1,
     "operationTime" : Timestamp(1536819122, 1),
     "$clusterTime" : {
         "clusterTime" : Timestamp(1536819122, 1),
         "signature" : {
             "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
             "keyId" : NumberLong(0)
         }
     }
}
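For reference, members can also be added one at a time instead of passing the whole cfg document to rs.initiate(); a minimal sketch with the same hosts as above (run on the primary):

abc:PRIMARY> rs.add({host: "192.168.213.184:27019", priority: 0})      # Add a priority-0 (passive) member
abc:PRIMARY> rs.addArb("192.168.213.184:27020")                        # Add an arbiter member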

(2) Use the rs.isMaster() command to check each node's role

abc:PRIMARY> rs.isMaster()
{
     "hosts" : [                                         #标准节点 
         "192.168.213.184:27017",
         "192.168.213.184:27018"
     ],
     "passives" : [                                    #被动节点
         "192.168.213.184:27019"
     ],
     "arbiters" : [                                     #仲裁节点
         "192.168.213.184:27020"
     ],
     "setName" : "abc",
     "setVersion" : 1,
     "ismaster" : true,
     "secondary" : false,
     "primary" : "192.168.213.184:27017",

(3) The default instance has now been elected as the active (primary) node, and data can be inserted into the database.

abc:PRIMARY> use school                                                 # On the active (primary) node
switched to db school
abc:PRIMARY> db.info.insert({"id":1,"name":"lili"})                #插入两条信息
WriteResult({ "nInserted" : 1 })
abc:PRIMARY> db.info.insert({"id":2,"name":"tom"})
WriteResult({ "nInserted" : 1 })
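To confirm that the two writes were replicated, they can be read back on one of the secondaries (for reference, e.g. from a second terminal); a minimal sketch — in this MongoDB 3.6 shell a secondary rejects reads unless slaveOk is set:

[root@localhost logs]# mongo --port 27019                     # Connect to the passive node
abc:SECONDARY> rs.slaveOk()                                   # Allow reads on this secondary
abc:SECONDARY> use school
abc:SECONDARY> db.info.find()                                 # The documents inserted on the primary should appear here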

(4) View the operation log, which records all operations, in the oplog.rs collection of the default local database

abc:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
school  0.000GB
abc:PRIMARY> use local                  # Switch to the local database
switched to db local
abc:PRIMARY> show collections    # List its collections
me
oplog.rs
replset.election
replset.minvalid
startup_log
system.replset
system.rollback.id
abc:PRIMARY> db.oplog.rs.find()     # View all operations recorded in the oplog

In the oplog records, the two documents just inserted can be found:

"school.info", "ui" : UUID("2b9c93b6-a58a-4021-a2b4-33f9b19925d8"), "wall" : ISODate("2018-09-13T06:24:44.537Z"), "o" : { "_id" : ObjectId("5b9a02aca106c3eab9c639e5"), "id" : 1, "name" : "lili" } }
{ "ts" : Timestamp(1536819899, 1), "t" : NumberLong(1), "h" : NumberLong("-1447313909384631008"), "v" : 2, "op" : "i", "ns" : "school.info", "ui" : UUID("2b9c93b6-a58a-4021-a2b4-33f9b19925d8"), "wall" : ISODate("2018-09-13T06:24:59.186Z"), "o" : { "_id" : ObjectId("5b9a02bba106c3eab9c639e6"), "id" : 2, "name" : "tom" } }

3. Simulate a node failure: if the primary node fails, the other standard node will be elected as the new primary.

(1) First check the current state with rs.status()

abc:PRIMARY> rs.status()

"members" : [
         {
             "_id" : 0,                  
             "name" : "192.168.213.184:27017",               #标准节点,端口27017,作为主节点
             "health" : 1,
             "state" : 1,
             "stateStr" : "PRIMARY",
             "uptime" : 2481,
             "optime" : {
                 "ts" : Timestamp(1536820334, 1),
                 "t" : NumberLong(1)
             },

{
             "_id" : 1,
             "name" : "192.168.213.184:27018",                 #标准节点 ,端口27018 为从节点
             "health" : 1,
             "state" : 2,
             "stateStr" : "SECONDARY",
             "uptime" : 1213,
             "optime" : {
                 "ts" : Timestamp(1536820324, 1),
                 "t" : NumberLong(1)
             },

{
             "_id" : 2,
             "name" : "192.168.213.184:27019",               #被动节点, 端口27019位从节点
             "health" : 1,
             "state" : 2,
             "stateStr" : "SECONDARY",
             "uptime" : 1213,
             "optime" : {
                 "ts" : Timestamp(1536820324, 1),
                 "t" : NumberLong(1)
             },

{
             "_id" : 3,
             "name" : "192.168.213.184:27020",                     #仲裁节点
             "health" : 1,
             "state" : 7,
             "stateStr" : "ARBITER",
             "uptime" : 1213,
             "lastHeartbeat" : ISODate("2018-09-13T06:32:14.152Z"),

(2) Simulate a primary failure: stop the primary's service, then log in to the other standard node on port 27018 and check whether that standard node has been elected as the new primary.

abc:PRIMARY> exit
bye
[root@localhost logs]# mongod -f /etc/mongod.conf --shutdown                       # Stop the primary node's service
killing process with pid: 4821
[root@localhost logs]# mongo --port 27018                                                           # Connect to the shell on port 27018

abc:PRIMARY> rs.status()                                                              # Check the replica set status

"members" : [
         {
             "_id" : 0,
             "name" : "192.168.213.184:27017",
            "health" : 0,                                                 #健康之为 0 ,说明端口27017 已经宕机了
             "state" : 8,
             "stateStr" : "(not reachable/healthy)",
             "uptime" : 0,
             "optime" : {
                 "ts" : Timestamp(0, 0),
                 "t" : NumberLong(-1)
             },

{
             "_id" : 1,
             "name" : "192.168.213.184:27018",               #此时另一台标准节点被选举为主节点,端口为 27018
             "health" : 1,
             "state" : 1,
             "stateStr" : "PRIMARY",


             "uptime" : 2812,
             "optime" : {
                 "ts" : Timestamp(1536820668, 1),
                 "t" : NumberLong(2)
             },

{
             "_id" : 2,
             "name" : "192.168.213.184:27019",
             "health" : 1,
             "state" : 2,
             "stateStr" : "SECONDARY",
             "uptime" : 1552,
             "optime" : {
                 "ts" : Timestamp(1536820668, 1),
                 "t" : NumberLong(2)
             },

{
         "_id" : 3,
         "name" : "192.168.213.184:27020",
         "health" : 1,
         "state" : 7,
         "stateStr" : "ARBITER",
         "uptime" : 1552,

(3) Stop both standard nodes' services and check whether the passive node gets elected as the primary

abc:PRIMARY> exit
bye
[root@localhost logs]# mongod -f /etc/mongod2.conf --shutdown                     # Stop the current primary (instance 2, the other standard node)
killing process with pid: 4853
[root@localhost logs]# mongo --port 27019                                                                # Connect to the passive node on port 27019

abc:SECONDARY> rs.status()

"members" : [
         {
             "_id" : 0,
             "name" : "192.168.213.184:27017",                                   #此时两个标准节点都处于宕机状态,健康值为 0
             "health" : 0,
             "state" : 8,
             "stateStr" : "(not reachable/healthy)",
             "uptime" : 0,
             "optime" : {
                 "ts" : Timestamp(0, 0),
                 "t" : NumberLong(-1)
             },

{
             "_id" : 1,
             "name" : "192.168.213.184:27018",
             "health" : 0,
             "state" : 8,
             "stateStr" : "(not reachable/healthy)",
             "uptime" : 0,
             "optime" : {
                 "ts" : Timestamp(0, 0),
                 "t" : NumberLong(-1)
             },

{
             "_id" : 2,
             "name" : "192.168.213.184:27019",                            #被动节点并没有被选举为主节点,说明被动节点不可能成为活跃节点
             "health" : 1,
             "state" : 2,
             "stateStr" : "SECONDARY",
             "uptime" : 3102,
             "optime" : {
                 "ts" : Timestamp(1536820928, 1),
                 "t" : NumberLong(2)
             },

{
             "_id" : 3,
             "name" : "192.168.213.184:27020",
             "health" : 1,
             "state" : 7,
             "stateStr" : "ARBITER",
             "uptime" : 1849,

(4) Restart the standard nodes' services and check whether a primary is restored.

abc:SECONDARY> exit
bye
[root@localhost logs]# mongod -f /etc/mongod.conf                                     # Start instance 1 again
about to fork child process, waiting until server is ready for connections.
forked process: 39839
child process started successfully, parent exiting
[root@localhost logs]# mongod -f /etc/mongod2.conf                                   # Start instance 2 again
about to fork child process, waiting until server is ready for connections.
forked process: 39929
child process started successfully, parent exiting
[root@localhost logs]# mongo

{
             "_id" : 0,
             "name" : "192.168.213.184:27017",                           #端口27017 恢复为主节点,(与标准节点启动顺序有关,先启动的为主节点)
             "health" : 1,
             "state" : 1,
             "stateStr" : "PRIMARY",
             "uptime" : 25,
             "optime" : {
                 "ts" : Timestamp(1536821324, 1),
                 "t" : NumberLong(3)
             },

{
         "_id" : 1,
         "name" : "192.168.213.184:27018",
         "health" : 1,
         "state" : 2,
         "stateStr" : "SECONDARY",
         "uptime" : 14,
         "optime" : {
             "ts" : Timestamp(1536821324, 1),
             "t" : NumberLong(3)
         },

{
         "_id" : 2,
         "name" : "192.168.213.184:27019",
         "health" : 1,
         "state" : 2,
         "stateStr" : "SECONDARY",
         "uptime" : 23,
         "optime" : {

{
         "_id" : 3,
         "name" : "192.168.213.184:27020",
         "health" : 1,
         "state" : 7,
         "stateStr" : "ARBITER",
         "uptime" : 23,

This shows that only standard nodes can be elected as the active (primary) node; passive nodes can never become the active node, although they keep their voting rights; and arbiter nodes can never become the active node either.
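Finally, a compact way to summarize each member's role at any point during these tests (a sketch run in the mongo shell of any reachable member):

abc:PRIMARY> rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr) })      # e.g. "192.168.213.184:27017  PRIMARY"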


Reposted from blog.51cto.com/13706703/2175076