MongoDB replica sets provide not only high availability but also a way to balance load. Adding and removing replica set nodes is very common in practice: for example, when an application's read pressure surges and a three-node deployment can no longer keep up, you add nodes so that the load is spread more evenly.
Adding a Node
There are two ways to do this: add the node via the oplog alone, or via a database snapshot combined with the oplog.
Adding a node via the oplog
1. Configure and start the new node, giving it port 28013
[root@localhost ~]# mkdir -p /data/data/r3
[root@localhost ~]# echo "this is rs1 super secret key" > /data/key/r3
[root@localhost ~]# chmod 600 /data/key/r3
[root@localhost ~]# /Apps/mongo/bin/mongod --replSet rs1 --keyFile /data/key/r3 --fork --port 28013 --dbpath /data/data/r3 --logpath=/data/log/r3.log --logappend
all output going to: /data/log/r3.log
forked process: 10553
[root@localhost ~]#

2. Add the new node to the existing replica set
rs1:PRIMARY> rs.add("localhost:28013")
{ "ok" : 1 }

3. Inspect the replica set; we can clearly see how the new node on port 28013 is brought in
(1) Initialization
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:17:44Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 0,
            "state" : 6,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : { "t" : 0, "i" : 0 },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:17:43Z"),
            "errmsg" : "still initializing"
        }
    ],
    "ok" : 1
}

(2) Data synchronization
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:18:07Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 16,
            "optime" : { "t" : 0, "i" : 0 },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:18:05Z"),
            "errmsg" : "initial sync need a member to be primary or secondary to do our initial sync"
        }
    ],
    "ok" : 1
}

(3) Initial sync complete
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:18:08Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 1,
            "state" : 3,
            "stateStr" : "RECOVERING",
            "uptime" : 17,
            "optime" : { "t" : 1338466661000, "i" : 1 },
            "optimeDate" : ISODate("2012-05-31T12:17:41Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:18:07Z"),
            "errmsg" : "initial sync done"
        }
    ],
    "ok" : 1
}

(4) The node has been added and its state is normal
rs1:PRIMARY> rs.status()
{
    "set" : "rs1",
    "date" : ISODate("2012-05-31T12:18:10Z"),
    "myState" : 1,
    "members" : [
        ……
        {
            "_id" : 3,
            "name" : "localhost:28013",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : { "t" : 1338466661000, "i" : 1 },
            "optimeDate" : ISODate("2012-05-31T12:17:41Z"),
            "lastHeartbeat" : ISODate("2012-05-31T12:18:09Z")
        }
    ],
    "ok" : 1
}

4. Verify that the data has been synchronized
[root@localhost data]# /Apps/mongo/bin/mongo -port 28013
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:28013/test
rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
rs1:SECONDARY>

Adding a node via a database snapshot and the oplog
Adding a node via the oplog alone is simple and requires little manual intervention, but the oplog is a capped collection that recycles its space in a circular fashion. Adding a node this way can therefore leave its data inconsistent, because the log entries it needs may already have been overwritten. This is not a problem, though: we can combine a database snapshot (--fastsync) with the oplog. The procedure is to take the physical files of one replica set member as the initial data and then let the oplog replay the remainder, until the data is consistent.
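The risk described above can be illustrated with a toy model (ours, not MongoDB's actual code): treat the oplog as a fixed-size circular buffer, here a Python `deque` with `maxlen`. The size and timestamps are made up for illustration.

```python
from collections import deque

# Toy oplog: a capped, circular buffer that silently drops the oldest
# entries once full -- analogous to MongoDB's capped oplog collection.
OPLOG_SIZE = 5
oplog = deque(maxlen=OPLOG_SIZE)

# The primary keeps writing while the new node is still copying data.
for ts in range(1, 11):                  # operations with timestamps 1..10
    oplog.append({"ts": ts, "op": "i"})  # insert ops, schematically

# A node whose copy of the data ends at ts=3 now needs ops 4..10,
# but the oldest entry still present in the oplog is:
print(oplog[0]["ts"])  # 6 -- ops 4 and 5 are gone, so oplog-only sync would be inconsistent
```

This is exactly why the snapshot must be recent enough (and the oplog large enough) that the entries after the snapshot's optime are still available.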
(1) Take the physical files of an existing replica set member as the initial data
[root@localhost ~]# scp -r /data/data/r3 /data/data/r4
[root@localhost ~]# echo "this is rs1 super secret key" > /data/key/r4
[root@localhost ~]# chmod 600 /data/key/r4

(2) After taking the physical files, insert a new document into the c1 collection, so we can verify at the end that this update is synchronized too
rs1:PRIMARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
rs1:PRIMARY> db.c1.insert({age:20})
rs1:PRIMARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
{ "_id" : ObjectId("4fc7748f479e007bde6644ef"), "age" : 20 }
rs1:PRIMARY>

(3) Start the new node on port 28014
/Apps/mongo/bin/mongod --replSet rs1 --keyFile /data/key/r4 --fork --port 28014 --dbpath /data/data/r4 --logpath=/data/log/r4.log --logappend --fastsync

(4) Add the node on port 28014
rs1:PRIMARY> rs.add("localhost:28014")
{ "ok" : 1 }

(5) Verify that the data has been synchronized
[root@localhost data]# /Apps/mongo/bin/mongo -port 28014
MongoDB shell version: 1.8.1
connecting to: 127.0.0.1:28014/test
rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.c1.find()
{ "_id" : ObjectId("4fc760d2383ede1dce14ef86"), "age" : 10 }
{ "_id" : ObjectId("4fc7748f479e007bde6644ef"), "age" : 20 }
rs1:SECONDARY>
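The snapshot-plus-catch-up idea used above can be sketched as a pure function: start from the optime captured in the snapshot and replay only the oplog entries that come after it. This is a simplification of ours, not MongoDB's sync code; the `ts`/`op` field names loosely imitate real oplog entries, and real initial sync is far more involved.

```python
def catch_up(snapshot_docs, snapshot_optime, oplog):
    """Replay oplog entries newer than the snapshot's optime.

    snapshot_docs   -- dict of _id -> doc copied from the donor's physical files
    snapshot_optime -- timestamp of the last operation reflected in the snapshot
    oplog           -- list of {"ts", "op", "doc"} entries, oldest first
    """
    docs = dict(snapshot_docs)
    for entry in oplog:
        if entry["ts"] <= snapshot_optime:
            continue                       # already contained in the snapshot
        if entry["op"] == "i":             # insert
            docs[entry["doc"]["_id"]] = entry["doc"]
        elif entry["op"] == "d":           # delete
            docs.pop(entry["doc"]["_id"], None)
    return docs

# Snapshot taken after {age: 10} was inserted; {age: 20} arrives afterwards,
# mirroring the walkthrough above.
snapshot = {"id1": {"_id": "id1", "age": 10}}
oplog = [
    {"ts": 1, "op": "i", "doc": {"_id": "id1", "age": 10}},
    {"ts": 2, "op": "i", "doc": {"_id": "id2", "age": 20}},
]
print(sorted(d["age"] for d in catch_up(snapshot, 1, oplog).values()))  # [10, 20]
```

The new node ends up with both documents, which is what the final db.c1.find() on port 28014 confirms.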