Testing a MongoDB Replica Set on a Single Machine

I tested a MongoDB replica set in my own virtual machine environment; the whole process is documented below.


Introduction to MongoDB Replica Sets


MongoDB supports asynchronous replication across multiple machines to provide failover and redundancy. Only one member accepts writes at any given moment, which is how MongoDB guarantees data consistency; the member acting as primary can offload read operations to the slaves.
MongoDB offers two high-availability modes:
Master-Slave replication: start one mongod with the --master option and the other with --slave and --source, and the two will synchronize. This works much like MySQL's master-slave replication, but newer MongoDB releases no longer recommend it (a minimal sketch of the startup commands follows this list).
Replica Sets: introduced in MongoDB 1.6, replica sets are considerably more capable, adding automatic failover and automatic recovery of member nodes while keeping the data identical across members, which greatly reduces maintenance cost. The auto-sharding documentation explicitly states that the old replica pairs are not supported and recommends replica sets, whose failover is fully automatic.
A replica set is structured much like a cluster and serves the same purpose: if one node fails, another node takes over the workload immediately, with no downtime.
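For reference, a minimal sketch of the deprecated master-slave mode mentioned above; the ports and paths here are illustrative assumptions, and the mode was removed entirely in MongoDB 4.0:

# legacy master-slave replication (deprecated)
mongod --master --port 27017 --dbpath /data/master --fork --logpath /data/master.log
mongod --slave --source localhost:27017 --port 27018 --dbpath /data/slave --fork --logpath /data/slave.log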

The replica set architecture is shown in the figure below (figure omitted from this text version).

Replica Set Planning

The replica set deployment also needs a plan:
Replica set name (replSet): rs1
MongoDB installation path: /usr/local/mongodb/
Replica set member IPs and ports:
Node 1: localhost:28010   (the intended primary)
Node 2: localhost:28011
Node 3: localhost:28012
Data file, log file, and key file paths for each member:
Node 1: /data02/mongors/data/r0 , /data02/mongors/log/r0.log , /data02/mongors/key/r0
Node 2: /data02/mongors/data/r1 , /data02/mongors/log/r1.log , /data02/mongors/key/r1
Node 3: /data02/mongors/data/r2 , /data02/mongors/log/r2.log , /data02/mongors/key/r2



Scenario 1: Deploying the Replica Set
Configuring a MongoDB replica set is fairly straightforward. In outline: prepare each node's data, log, and key file paths, start the instances with the appropriate parameters, then define the replica set configuration and initialize it.

1) Create the data file directories
mkdir -p /data02/mongors/data/r0
mkdir -p /data02/mongors/data/r1
mkdir -p /data02/mongors/data/r2


2) Create the log file directory
mkdir -p /data02/mongors/log


3) Create the replica set key files
Each mongod is pointed at the full path of a shared key that identifies members of the cluster; if the key file contents differ between instances, the members cannot authenticate to each other.
mkdir -p /data02/mongors/key
echo "this is rs1 super secret key" > /data02/mongors/key/r0
echo "this is rs1 super secret key" > /data02/mongors/key/r1
echo "this is rs1 super secret key" > /data02/mongors/key/r2
chmod 600 /data02/mongors/key/r*
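Because the members authenticate to each other with this shared secret, it is worth confirming that the three files really carry the same content (a quick check; MongoDB strips surrounding whitespace from key files, so minor spacing differences are tolerated):

# all three checksums should be identical
md5sum /data02/mongors/key/r*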


4) Start the three instances
Start the three mongod instances in turn, each with its startup parameters:
/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
Note on the startup parameters: mongod is the server binary; --replSet names the replica set (rs1); --keyFile points to the shared key file; --fork runs the process as a background daemon; --port sets the listening port; --dbpath sets the data directory; --logpath sets the log file; and --logappend makes logging append to the log rather than overwrite it.
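Since the three commands differ only in their index, they can also be written as a loop (a sketch assuming exactly the paths and ports above):

for i in 0 1 2; do
    /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r$i \
        --fork --port 2801$i --dbpath /data02/mongors/data/r$i \
        --logpath=/data02/mongors/log/r$i.log --logappend
done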
The startup output:
#
#/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 10984
child process started successfully, parent exiting
#
#/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 11030
child process started successfully, parent exiting
#
#/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 11076
child process started successfully, parent exiting
#



Verify the processes and listening ports:
ps -ef|grep mongo
netstat -tunlp | grep 28010
netstat -tunlp | grep 28011
netstat -tunlp | grep 28012
The results:
#ps -ef |grep mongo
root      5715     1  0 Apr09 ?        00:11:50 mongod --dbpath=/data02/mongodb/db/ --logpath=/data02/mongodb/logs/mongodb.log --fork
root     10984     1  0 14:23 ?        00:00:00 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
root     11030     1  0 14:23 ?        00:00:00 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
root     11076     1  0 14:23 ?        00:00:00 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
root     11121 10769  0 14:23 pts/1    00:00:00 grep mongo
#
#
#netstat -tunlp |grep 28010
tcp        0      0 0.0.0.0:28010               0.0.0.0:*                   LISTEN      10984/mongod        
#
#netstat -tunlp | grep 28011
tcp        0      0 0.0.0.0:28011               0.0.0.0:*                   LISTEN      11030/mongod        
#
#netstat -tunlp | grep 28012
tcp        0      0 0.0.0.0:28012               0.0.0.0:*                   LISTEN      11076/mongod        
#
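The same check can be scripted (a sketch): db.isMaster() requires no authentication, so each member can be polled directly:

for p in 28010 28011 28012; do
    echo -n "port $p ismaster: "
    /usr/local/mongodb/bin/mongo --port $p --quiet --eval "print(db.isMaster().ismaster)"
done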




5) Configure and initialize the replica set
Log in to the intended primary:
# /usr/local/mongodb/bin/mongo -port 28010
Define the replica set configuration:
> config = {_id: 'rs1', members: [
                           {_id: 0, host: 'localhost:28010',priority:1},
                           {_id: 1, host: 'localhost:28011'},
                           {_id: 2, host: 'localhost:28012'}]
            }
{
    "_id" : "rs1",
    "members" : [
        {
            "_id" : 0,
            "host" : "localhost:28010",
            "priority" : 1
        },
        {
            "_id" : 1,
            "host" : "localhost:28011"
        },
        {
            "_id" : 2,
            "host" : "localhost:28012"
        }
    ]
}





Initialize the set to make the configuration take effect:
>  rs.initiate(config);
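An alternative (a sketch): initiate with only the local member and grow the set one node at a time; the end state is the same as the config document above:

> rs.initiate()
> rs.add("localhost:28011")
> rs.add("localhost:28012")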

The actual session:
#/usr/local/mongodb/bin/mongo -port 28010
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28010/test
> 
> show databases;
admin  (empty)
local  0.078GB
> 
> show collections;
2015-04-13T14:26:59.980+0800 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
> 
> config = {_id: 'rs1', members: [
...                            {_id: 0, host: 'localhost:28010',priority:1},
...                            {_id: 1, host: 'localhost:28011'},
...                            {_id: 2, host: 'localhost:28012'}]
...             }
{
        "_id" : "rs1",
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010",
                        "priority" : 1
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                }
        ]
}
> 
> rs.initiate(config);
{
        "info" : "Config now saved locally.  Should come online in about a minute.",
        "ok" : 1
}
> 
rs1:PRIMARY> 
rs1:PRIMARY> 
Once the replica set was configured and initialized, the shell prompt on port 28010 automatically changed to rs1:PRIMARY.



6) Check the replica set status
> rs.status()

On the primary, confirm which node is master:
> rs.isMaster()
The actual output:
rs1:PRIMARY> 
rs1:PRIMARY> rs.status()
{
        "set" : "rs1",
        "date" : ISODate("2015-04-13T06:40:12Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:28010",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 1014,
                        "optime" : Timestamp(1428907084, 1),
                        "optimeDate" : ISODate("2015-04-13T06:38:04Z"),
                        "electionTime" : Timestamp(1428907094, 1),
                        "electionDate" : ISODate("2015-04-13T06:38:14Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "localhost:28011",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 126,
                        "optime" : Timestamp(1428907084, 1),
                        "optimeDate" : ISODate("2015-04-13T06:38:04Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:40:10Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T06:40:11Z"),
                        "pingMs" : 0,
                        "syncingTo" : "localhost:28010"
                },
                {
                        "_id" : 2,
                        "name" : "localhost:28012",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 126,
                        "optime" : Timestamp(1428907084, 1),
                        "optimeDate" : ISODate("2015-04-13T06:38:04Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:40:10Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T06:40:11Z"),
                        "pingMs" : 0,
                        "syncingTo" : "localhost:28010"
                }
        ],
        "ok" : 1
}
rs1:PRIMARY> 
rs1:PRIMARY> 
rs1:PRIMARY> rs.isMaster()
{
        "setName" : "rs1",
        "setVersion" : 1,
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "localhost:28010",
                "localhost:28012",
                "localhost:28011"
        ],
        "primary" : "localhost:28010",
        "me" : "localhost:28010",
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "maxWriteBatchSize" : 1000,
        "localTime" : ISODate("2015-04-13T06:40:52.024Z"),
        "maxWireVersion" : 2,
        "minWireVersion" : 0,
        "ok" : 1
}
rs1:PRIMARY> 
rs1:PRIMARY> 
These two commands confirm the state of each member and which node is primary. Under normal operation, all inserts, deletes, updates, and queries should be issued on the primary; the non-primary members accept neither data operations nor replica set reconfiguration commands. Attempting one on a secondary produces an error like the following:
rs1:SECONDARY> show collections;
2015-04-13T14:42:58.431+0800 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
rs1:SECONDARY> 






Scenario 2: Examining the Replication Oplog

MongoDB's replica set architecture records write operations in a log called the oplog. oplog.rs is a fixed-size capped collection in the local database that holds the replica set's operation log. By default, on 64-bit MongoDB the oplog is fairly large, up to about 5% of free disk space; its size can be changed with the mongod option --oplogSize.
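The size takes effect only when the oplog is first created, so the option has to be supplied before a member's first start; changing it later requires resyncing the member or a dedicated resize procedure. A sketch reusing node 1's paths from above:

/usr/local/mongodb/bin/mongod --replSet rs1 --oplogSize 2048 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend

This would allocate a 2048 MB oplog instead of the default.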
Sample oplog contents:
> use local
> show collections
> db.oplog.rs.find()

View the oplog metadata on the master:
> db.printReplicationInfo()

View the slaves' synchronization status:
> db.printSlaveReplicationInfo()

These commands produced the following output:
#/usr/local/mongodb/bin/mongo -port 28010
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28010/test
rs1:PRIMARY> 
rs1:PRIMARY> show databases;
admin  (empty)
local  1.078GB
test   (empty)
rs1:PRIMARY> 
rs1:PRIMARY> use local
switched to db local
rs1:PRIMARY> 
rs1:PRIMARY> show databases;
admin  (empty)
local  1.078GB
test   (empty)
rs1:PRIMARY> 
rs1:PRIMARY> show collections;
me
oplog.rs
slaves
startup_log
system.indexes
system.replset
rs1:PRIMARY> 
rs1:PRIMARY> db.oplog.rs.find();
{ "ts" : Timestamp(1428907084, 1), "h" : NumberLong(0), "v" : 2, "op" : "n", "ns" : "", "o" : { "msg" : "initiating set" } }
rs1:PRIMARY> 
rs1:PRIMARY> db.printReplicationInfo();
configured oplog size:   990MB
log length start to end: 0secs (0hrs)
oplog first event time:  Mon Apr 13 2015 14:38:04 GMT+0800 (CST)
oplog last event time:   Mon Apr 13 2015 14:38:04 GMT+0800 (CST)
now:                     Mon Apr 13 2015 14:46:44 GMT+0800 (CST)
rs1:PRIMARY> 
rs1:PRIMARY> db.printSlaveReplicationInfo();
source: localhost:28011
        syncedTo: Mon Apr 13 2015 14:38:04 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
source: localhost:28012
        syncedTo: Mon Apr 13 2015 14:38:04 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
rs1:PRIMARY> 
rs1:PRIMARY> 
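The single document above is just the "initiating set" marker. Once real writes occur, the most recent operations can be inspected by scanning the oplog in reverse natural order (a sketch):

> use local
> db.oplog.rs.find().sort({$natural: -1}).limit(3)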




Scenario 3: Viewing and Confirming the Replica Set Configuration

Besides the oplog collection, the local database contains another collection that records the replica set configuration, system.replset:
> use local
> show collections
> db.system.replset.find()

This collection shows the replica set configuration; the same information can also be viewed by running rs.conf() on any existing member.
rs1:PRIMARY> db
local
rs1:PRIMARY> 
rs1:PRIMARY> show collections;
me
oplog.rs
slaves
startup_log
system.indexes
system.replset
rs1:PRIMARY> 
rs1:PRIMARY> db.system.replset.find();
{ "_id" : "rs1", "version" : 1, "members" : [ { "_id" : 0, "host" : "localhost:28010" }, { "_id" : 1, "host" : "localhost:28011" }, { "_id" : 2, "host" : "localhost:28012" } ] }
rs1:PRIMARY> 
rs1:PRIMARY> rs.conf();
{
        "_id" : "rs1",
        "version" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010"
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                }
        ]
}
rs1:PRIMARY> 
#/usr/local/mongodb/bin/mongo -port 28011
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28011/test
rs1:SECONDARY> 
rs1:SECONDARY> 
rs1:SECONDARY> db
test
rs1:SECONDARY> show collections;
2015-04-13T14:51:32.097+0800 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
rs1:SECONDARY> 
rs1:SECONDARY> rs.conf()
{
        "_id" : "rs1",
        "version" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010"
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                }
        ]
}
rs1:SECONDARY> 
rs1:SECONDARY> db.printSlaveReplicationInfo()
source: localhost:28011
        syncedTo: Mon Apr 13 2015 14:50:54 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
source: localhost:28012
        syncedTo: Mon Apr 13 2015 14:50:54 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
rs1:SECONDARY> 



Scenario 4: Managing the Replica Set for Read/Write Separation and Failover

Steps to implement read/write separation:
By default only the primary serves queries. Letting the secondaries serve reads as well offloads a large share of the query traffic from the primary; various third-party tools build read/write separation on top of this capability.
1) First, insert a test document on the primary
# ./mongo --port 28010
> db.c1.insert({age:30});
> db.c1.find()


2) Attempt queries on a secondary
# ./mongo --port 28011
> show collections
The query fails with an error, which confirms this is a secondary and cannot yet serve queries.


3) Enable reads on the secondary to offload the primary
> db.getMongo().setSlaveOk()
> show collections
> db.c1.find()
Running db.getMongo().setSlaveOk() enables queries on this secondary for the current connection (an equivalent shorthand is sketched below).
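The shell helper rs.slaveOk() is an equivalent shorthand, and the setting lasts only for the current shell session, so it must be re-issued after reconnecting:

rs1:SECONDARY> rs.slaveOk()
rs1:SECONDARY> db.c1.find()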
The full test session:
#/usr/local/mongodb/bin/mongo -port 28010
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28010/test
rs1:PRIMARY> 
rs1:PRIMARY> db
test
rs1:PRIMARY> show collections;
rs1:PRIMARY> 
rs1:PRIMARY> db.c1.insert({age:30});
WriteResult({ "nInserted" : 1 })
rs1:PRIMARY> 
rs1:PRIMARY> show collections;
c1
system.indexes
rs1:PRIMARY> 
rs1:PRIMARY> db.c1.find();
{ "_id" : ObjectId("552b674ee7b1c3fc735c8316"), "age" : 30 }
rs1:PRIMARY> 
rs1:PRIMARY> quit();
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28011
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28011/test
rs1:SECONDARY> 
rs1:SECONDARY> 
rs1:SECONDARY> db
test
rs1:SECONDARY> show collections;
2015-04-13T14:51:32.097+0800 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
rs1:SECONDARY> 
rs1:SECONDARY> rs.conf()
{
        "_id" : "rs1",
        "version" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010"
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                }
        ]
}
rs1:SECONDARY> 
rs1:SECONDARY> db.printSlaveReplicationInfo()
source: localhost:28011
        syncedTo: Mon Apr 13 2015 14:50:54 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
source: localhost:28012
        syncedTo: Mon Apr 13 2015 14:50:54 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
rs1:SECONDARY> 
rs1:SECONDARY> db.getMongo().setSlaveOk();
rs1:SECONDARY> 
rs1:SECONDARY> show collections;
c1
system.indexes
rs1:SECONDARY> 
rs1:SECONDARY> db.c1.find();
{ "_id" : ObjectId("552b674ee7b1c3fc735c8316"), "age" : 30 }
rs1:SECONDARY> 
rs1:SECONDARY> quit();
#



Failover demonstration:

Where replica sets improve on traditional master-slave replication is automatic failover: if one member of the set is stopped, the remaining members automatically elect a new primary, as shown below.
We stop the primary on port 28010 and then look at the replica set state again.
1) Kill the mongod listening on port 28010
# ps aux | grep mongod
# kill -9 10984

2) Check the replica set state
# ./mongo --port 28011
> rs.status();

The mongod on port 28010 is shown as unreachable, and the set automatically elected the member on port 28011 as the new primary; this automatic failure handling greatly improves the system's availability. (Note that by this point a fourth member on port 28013 had already been added to the set; that step is covered in Scenario 5 below.)
The test session:
#echo "kill primary to Fail over"
kill primary to Fail over
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#ps -ef|grep mongo
root      5715     1  0 Apr09 ?        00:11:55 mongod --dbpath=/data02/mongodb/db/ --logpath=/data02/mongodb/logs/mongodb.log --fork
root     10984     1  0 14:23 ?        00:00:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
root     11030     1  0 14:23 ?        00:00:05 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
root     11076     1  0 14:23 ?        00:00:05 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
root     11731     1  0 14:58 ?        00:00:01 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
root     12186 10769  0 15:05 pts/1    00:00:00 grep mongo
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28010
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28010/test
rs1:PRIMARY> 
rs1:PRIMARY> rs.isMaster();
{
        "setName" : "rs1",
        "setVersion" : 2,
        "ismaster" : true,
        "secondary" : false,
        "hosts" : [
                "localhost:28010",
                "localhost:28013",
                "localhost:28012",
                "localhost:28011"
        ],
        "primary" : "localhost:28010",
        "me" : "localhost:28010",
        "maxBsonObjectSize" : 16777216,
        "maxMessageSizeBytes" : 48000000,
        "maxWriteBatchSize" : 1000,
        "localTime" : ISODate("2015-04-13T07:06:06.185Z"),
        "maxWireVersion" : 2,
        "minWireVersion" : 0,
        "ok" : 1
}
rs1:PRIMARY> 
rs1:PRIMARY> quit();
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#ps -ef|grep mongod
root      5715     1  0 Apr09 ?        00:11:55 mongod --dbpath=/data02/mongodb/db/ --logpath=/data02/mongodb/logs/mongodb.log --fork
root     10984     1  0 14:23 ?        00:00:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
root     11030     1  0 14:23 ?        00:00:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
root     11076     1  0 14:23 ?        00:00:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
root     11731     1  0 14:58 ?        00:00:01 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
root     12237 10769  0 15:06 pts/1    00:00:00 grep mongod
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#ps aux |grep mongod
root      5715  0.2  2.1 1660908 41588 ?       Sl   Apr09  11:55 mongod --dbpath=/data02/mongodb/db/ --logpath=/data02/mongodb/logs/mongodb.log --fork
root     10984  0.2  2.1 2703968 41512 ?       Sl   14:23   0:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
root     11030  0.2  1.9 2683292 36708 ?       Sl   14:23   0:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
root     11076  0.2  1.9 2683288 36668 ?       Sl   14:23   0:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
root     11731  0.2  1.9 2684316 37188 ?       Sl   14:58   0:01 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
root     12245  0.0  0.0 103252   832 pts/1    S+   15:06   0:00 grep mongod
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#kill -9 10984
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#ps -ef|grep mongod 
root      5715     1  0 Apr09 ?        00:11:55 mongod --dbpath=/data02/mongodb/db/ --logpath=/data02/mongodb/logs/mongodb.log --fork
root     11030     1  0 14:23 ?        00:00:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
root     11076     1  0 14:23 ?        00:00:06 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
root     11731     1  0 14:58 ?        00:00:01 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
root     12289 10769  0 15:07 pts/1    00:00:00 grep mongod
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28013
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28013/test
rs1:SECONDARY> 
rs1:SECONDARY> rs.status();
{
        "set" : "rs1",
        "date" : ISODate("2015-04-13T07:07:41Z"),
        "myState" : 2,
        "syncingTo" : "localhost:28011",
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:28010",
                        "health" : 0,
                        "state" : 8,
                        "stateStr" : "(not reachable/healthy)",
                        "uptime" : 0,
                        "optime" : Timestamp(1428908577, 1),
                        "optimeDate" : ISODate("2015-04-13T07:02:57Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T07:07:40Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T07:07:12Z"),
                        "pingMs" : 0
                },
                {
                        "_id" : 1,
                        "name" : "localhost:28011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 462,
                        "optime" : Timestamp(1428908577, 1),
                        "optimeDate" : ISODate("2015-04-13T07:02:57Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T07:07:39Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T07:07:39Z"),
                        "pingMs" : 0,
                        "electionTime" : Timestamp(1428908838, 1),
                        "electionDate" : ISODate("2015-04-13T07:07:18Z")
                },
                {
                        "_id" : 2,
                        "name" : "localhost:28012",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 462,
                        "optime" : Timestamp(1428908577, 1),
                        "optimeDate" : ISODate("2015-04-13T07:02:57Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T07:07:39Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T07:07:39Z"),
                        "pingMs" : 0,
                        "lastHeartbeatMessage" : "syncing to: localhost:28011",
                        "syncingTo" : "localhost:28011"
                },
                {
                        "_id" : 3,
                        "name" : "localhost:28013",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 578,
                        "optime" : Timestamp(1428908577, 1),
                        "optimeDate" : ISODate("2015-04-13T07:02:57Z"),
                        "infoMessage" : "syncing to: localhost:28011",
                        "self" : true
                }
        ],
        "ok" : 1
}
rs1:SECONDARY> 
rs1:SECONDARY> 
rs1:SECONDARY> quit();
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28011
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28011/test
rs1:PRIMARY> 
rs1:PRIMARY> db
test
rs1:PRIMARY> 
rs1:PRIMARY> show collections;
c1
system.indexes
rs1:PRIMARY> 
rs1:PRIMARY> db.c1.find();
{ "_id" : ObjectId("552b674ee7b1c3fc735c8316"), "age" : 30 }
{ "_id" : ObjectId("552b6a21572fc5a02732f62c"), "age" : 40 }
rs1:PRIMARY> 
rs1:PRIMARY> db.c1.insert({age:50});
WriteResult({ "nInserted" : 1 })
rs1:PRIMARY> 
rs1:PRIMARY> db.c1.find();
{ "_id" : ObjectId("552b674ee7b1c3fc735c8316"), "age" : 30 }
{ "_id" : ObjectId("552b6a21572fc5a02732f62c"), "age" : 40 }
{ "_id" : ObjectId("552b6ba3cb6fc10cc6fc9085"), "age" : 50 }
rs1:PRIMARY> 
rs1:PRIMARY> rs.conf();
{
        "_id" : "rs1",
        "version" : 2,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010"
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                },
                {
                        "_id" : 3,
                        "host" : "localhost:28013"
                }
        ]
}
rs1:PRIMARY> 
rs1:PRIMARY> db.printSlaveReplicationInfo()
source: localhost:28010
        syncedTo: Mon Apr 13 2015 15:02:57 GMT+0800 (CST)
        386 secs (0.11 hrs) behind the primary 
source: localhost:28012
        syncedTo: Mon Apr 13 2015 15:09:23 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
source: localhost:28013
        syncedTo: Mon Apr 13 2015 15:09:23 GMT+0800 (CST)
        0 secs (0 hrs) behind the primary 
rs1:PRIMARY> 
rs1:PRIMARY> quit()
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28013
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28013/test
rs1:SECONDARY> 
rs1:SECONDARY> 
rs1:SECONDARY> db
test
rs1:SECONDARY> 
rs1:SECONDARY> show collections;
2015-04-13T15:11:12.092+0800 error: { "$err" : "not master and slaveOk=false", "code" : 13435 } at src/mongo/shell/query.js:131
rs1:SECONDARY> 
rs1:SECONDARY> db.getMongo().setSlaveOk();
rs1:SECONDARY> 
rs1:SECONDARY> show collections;
c1
system.indexes
rs1:SECONDARY> 
rs1:SECONDARY> db.c1.find();
{ "_id" : ObjectId("552b674ee7b1c3fc735c8316"), "age" : 30 }
{ "_id" : ObjectId("552b6a21572fc5a02732f62c"), "age" : 40 }
{ "_id" : ObjectId("552b6ba3cb6fc10cc6fc9085"), "age" : 50 }
rs1:SECONDARY> 
rs1:SECONDARY> 
rs1:SECONDARY> quit()
[root@MySQL193 /data02]#
Note: after a failover, the primary role does not automatically move back, even once the failed node is repaired and rejoins the set (a sketch for moving it back deliberately follows).
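To move the primary back deliberately, two options (sketches): raise the preferred node's election priority, or step down the current primary and let a new election run:

rs1:PRIMARY> cfg = rs.conf()
rs1:PRIMARY> cfg.members[0].priority = 2
rs1:PRIMARY> rs.reconfig(cfg)

or, on the current primary:

rs1:PRIMARY> rs.stepDown(60)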



Scenario 5: Adding and Removing Replica Set Members

MongoDB replica sets provide not only high availability but also a way to spread load, and growing or shrinking a replica set is very common in practice. For example, when read pressure spikes and three nodes no longer suffice, extra nodes can be added to spread the load; when pressure is low, nodes can be removed to save hardware cost. In short, this is long-term, ongoing work.

Steps to add a node

1) Configure and start the new node, using port 28013
mkdir -p /data02/mongors/data/r3
echo " this is rs1 super secret key " > /data02/mongors/key/r3
chmod 600 /data02/mongors/key/r3
/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend


2) On the primary, add the new node to the existing replica set
> rs.add("localhost:28013");


3) Watch the replica set status; the output clearly shows the new node on port 28013 moving through several phases:
> rs.status()
Phase 1: initialization
Phase 2: data synchronization
Phase 3: initial sync complete
Phase 4: node added, state normal
(A one-liner for watching just the new member's state is sketched after this list.)
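To watch just the new member's state rather than the whole rs.status() document, a small filter expression works (a sketch using the member fields visible in the output below):

rs1:PRIMARY> rs.status().members.filter(function(m){ return m.name == "localhost:28013"; })[0].stateStr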


4) Verify that the data has been synchronized to the new node
# /usr/local/mongodb/bin/mongo -port 28013
> rs.slaveOk()
> db.c1.find()

The session follows; it includes a couple of mistyped commands (a wrong binary path, an unknown --dpath option) and an rs.add attempted on the new node itself before switching to the primary, kept here for authenticity:
#echo " add a new host into rs1"
 add a new host into rs1
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#pwd
/data02
[root@MySQL193 /data02]#ll
total 12
drwxr-xr-x 4 root  root  4096 Apr  9 14:09 mongodb
drwxr-xr-x 5 root  root  4096 Apr 13 14:20 mongors
drwxr-xr-x 3 mysql mysql 4096 Apr  9 17:45 mysqldata
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#mkdir -p /data02/mongors/data/r3
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#echo " this is rs1 super secret key " > /data02/mongors/key/r3
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#chmod 600 /data02/mongors/key/r3
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
-bash: /usr/local/bin/mongod: No such file or directory
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
Error parsing command line: unknown option dpath
try '/usr/local/mongodb/bin/mongod --help' for more information
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
about to fork child process, waiting until server is ready for connections.
forked process: 11731
child process started successfully, parent exiting
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#ps -ef|grep mongo
root      5715     1  0 Apr09 ?        00:11:54 mongod --dbpath=/data02/mongodb/db/ --logpath=/data02/mongodb/logs/mongodb.log --fork
root     10984     1  0 14:23 ?        00:00:05 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r0 --fork --port 28010 --dbpath /data02/mongors/data/r0 --logpath=/data02/mongors/log/r0.log --logappend
root     11030     1  0 14:23 ?        00:00:04 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r1 --fork --port 28011 --dbpath /data02/mongors/data/r1 --logpath=/data02/mongors/log/r1.log --logappend
root     11076     1  0 14:23 ?        00:00:04 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r2 --fork --port 28012 --dbpath /data02/mongors/data/r2 --logpath=/data02/mongors/log/r2.log --logappend
root     11731     1  0 14:58 ?        00:00:00 /usr/local/mongodb/bin/mongod --replSet rs1 --keyFile /data02/mongors/key/r3 --fork --port 28013 --dbpath /data02/mongors/data/r3 --logpath=/data02/mongors/log/r3.log --logappend
root     11782 10769  0 14:58 pts/1    00:00:00 grep mongo
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28013
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28013/test
> 
> 
> rs.status();
{
        "startupStatus" : 3,
        "info" : "run rs.initiate(...) if not yet done for the set",
        "ok" : 0,
        "errmsg" : "can't get local.system.replset config from self or any seed (EMPTYCONFIG)"
}
> 
> rs.add("localhost:28013");
assert failed : no config object retrievable from local.system.replset
Error: assert failed : no config object retrievable from local.system.replset
    at Error (<anonymous>)
    at doassert (src/mongo/shell/assert.js:11:14)
    at assert (src/mongo/shell/assert.js:20:5)
    at Function.rs.add (src/mongo/shell/utils.js:979:5)
    at (shell):1:4
2015-04-13T14:59:13.271+0800 Error: assert failed : no config object retrievable from local.system.replset at src/mongo/shell/assert.js:13
> 
> quit();
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#
[root@MySQL193 /data02]#/usr/local/mongodb/bin/mongo -port 28010
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28010/test
rs1:PRIMARY> 
rs1:PRIMARY> 
rs1:PRIMARY> rs.status();
{
        "set" : "rs1",
        "date" : ISODate("2015-04-13T06:59:34Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:28010",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2176,
                        "optime" : Timestamp(1428907854, 1),
                        "optimeDate" : ISODate("2015-04-13T06:50:54Z"),
                        "electionTime" : Timestamp(1428907094, 1),
                        "electionDate" : ISODate("2015-04-13T06:38:14Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "localhost:28011",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1288,
                        "optime" : Timestamp(1428907854, 1),
                        "optimeDate" : ISODate("2015-04-13T06:50:54Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:59:33Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T06:59:33Z"),
                        "pingMs" : 0,
                        "syncingTo" : "localhost:28010"
                },
                {
                        "_id" : 2,
                        "name" : "localhost:28012",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1288,
                        "optime" : Timestamp(1428907854, 1),
                        "optimeDate" : ISODate("2015-04-13T06:50:54Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:59:33Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T06:59:33Z"),
                        "pingMs" : 0,
                        "syncingTo" : "localhost:28010"
                }
        ],
        "ok" : 1
}
rs1:PRIMARY> 
rs1:PRIMARY> rs.add("localhost:28013");
{ "ok" : 1 }
rs1:PRIMARY> 
rs1:PRIMARY> rs.status();
{
        "set" : "rs1",
        "date" : ISODate("2015-04-13T06:59:59Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:28010",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 2201,
                        "optime" : Timestamp(1428908396, 1),
                        "optimeDate" : ISODate("2015-04-13T06:59:56Z"),
                        "electionTime" : Timestamp(1428907094, 1),
                        "electionDate" : ISODate("2015-04-13T06:38:14Z"),
                        "self" : true
                },
                {
                        "_id" : 1,
                        "name" : "localhost:28011",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1313,
                        "optime" : Timestamp(1428908396, 1),
                        "optimeDate" : ISODate("2015-04-13T06:59:56Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:59:59Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T06:59:57Z"),
                        "pingMs" : 0,
                        "syncingTo" : "localhost:28010"
                },
                {
                        "_id" : 2,
                        "name" : "localhost:28012",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 1313,
                        "optime" : Timestamp(1428908396, 1),
                        "optimeDate" : ISODate("2015-04-13T06:59:56Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:59:59Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T06:59:59Z"),
                        "pingMs" : 0,
                        "syncingTo" : "localhost:28010"
                },
                {
                        "_id" : 3,
                        "name" : "localhost:28013",
                        "health" : 1,
                        "state" : 5,
                        "stateStr" : "STARTUP2",
                        "uptime" : 3,
                        "optime" : Timestamp(0, 0),
                        "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T06:59:58Z"),
                        "lastHeartbeatRecv" : ISODate("1970-01-01T00:00:00Z"),
                        "pingMs" : 0,
                        "lastHeartbeatMessage" : "initial sync need a member to be primary or secondary to do our initial sync"
                }
        ],
        "ok" : 1
}
rs1:PRIMARY> 





Steps to remove a node

Now remove the newly added node on port 28013 from the replica set; a single rs.remove command is all it takes.
Log in to the primary and remove 28013 from the set:
> rs.remove("localhost:28013");

(The shell may report a transient connection error right after this command: reconfiguration closes existing connections, and the shell reconnects automatically, as the session below shows.)

Check the replica set status: the set is back to three members and the removed node is gone:
> rs.status()
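rs.remove is shorthand for editing the configuration document and reapplying it with rs.reconfig; the explicit equivalent (a sketch) is:

rs1:PRIMARY> cfg = rs.conf()
rs1:PRIMARY> cfg.members = cfg.members.filter(function(m){ return m.host != "localhost:28013"; })
rs1:PRIMARY> rs.reconfig(cfg)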
The session:
#/usr/local/mongodb/bin/mongo -port 28011
MongoDB shell version: 2.6.5
connecting to: 127.0.0.1:28011/test
rs1:PRIMARY> 
rs1:PRIMARY> rs.conf()
{
        "_id" : "rs1",
        "version" : 2,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010"
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                },
                {
                        "_id" : 3,
                        "host" : "localhost:28013"
                }
        ]
}
rs1:PRIMARY> 
rs1:PRIMARY> rs.remove("localhost:28013");
2015-04-13T15:24:52.181+0800 DBClientCursor::init call() failed
2015-04-13T15:24:52.182+0800 Error: error doing query: failed at src/mongo/shell/query.js:81
2015-04-13T15:24:52.183+0800 trying reconnect to 127.0.0.1:28011 (127.0.0.1) failed
2015-04-13T15:24:52.184+0800 reconnect 127.0.0.1:28011 (127.0.0.1) ok
rs1:PRIMARY> 
rs1:PRIMARY> rs.conf()
{
        "_id" : "rs1",
        "version" : 3,
        "members" : [
                {
                        "_id" : 0,
                        "host" : "localhost:28010"
                },
                {
                        "_id" : 1,
                        "host" : "localhost:28011"
                },
                {
                        "_id" : 2,
                        "host" : "localhost:28012"
                }
        ]
}
rs1:PRIMARY> 
rs1:PRIMARY> rs.status()
{
        "set" : "rs1",
        "date" : ISODate("2015-04-13T07:25:15Z"),
        "myState" : 1,
        "members" : [
                {
                        "_id" : 0,
                        "name" : "localhost:28010",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 23,
                        "optime" : Timestamp(1428909892, 1),
                        "optimeDate" : ISODate("2015-04-13T07:24:52Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T07:25:14Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T07:25:14Z"),
                        "pingMs" : 0,
                        "lastHeartbeatMessage" : "syncing to: localhost:28011",
                        "syncingTo" : "localhost:28011"
                },
                {
                        "_id" : 1,
                        "name" : "localhost:28011",
                        "health" : 1,
                        "state" : 1,
                        "stateStr" : "PRIMARY",
                        "uptime" : 3699,
                        "optime" : Timestamp(1428909892, 1),
                        "optimeDate" : ISODate("2015-04-13T07:24:52Z"),
                        "electionTime" : Timestamp(1428908838, 1),
                        "electionDate" : ISODate("2015-04-13T07:07:18Z"),
                        "self" : true
                },
                {
                        "_id" : 2,
                        "name" : "localhost:28012",
                        "health" : 1,
                        "state" : 2,
                        "stateStr" : "SECONDARY",
                        "uptime" : 23,
                        "optime" : Timestamp(1428909892, 1),
                        "optimeDate" : ISODate("2015-04-13T07:24:52Z"),
                        "lastHeartbeat" : ISODate("2015-04-13T07:25:14Z"),
                        "lastHeartbeatRecv" : ISODate("2015-04-13T07:25:15Z"),
                        "pingMs" : 0,
                        "lastHeartbeatMessage" : "syncing to: localhost:28011",
                        "syncingTo" : "localhost:28011"
                }
        ],
        "ok" : 1
}
rs1:PRIMARY> 
rs1:PRIMARY> 
rs1:PRIMARY> 
rs1:PRIMARY> quit()







