Replication options:
--oplogSize arg    size of the operation log, in MB
Master/slave options (old; use replica sets instead):
--master           run in master mode
--slave            run in slave mode
--source arg       slave only: the master to replicate from, as <server:port>
--only arg         slave only: replicate a single database (default: replicate all databases)
--slavedelay arg   delay, in seconds, applied when replaying the master's oplog
--autoresync       automatically resync when the slave's data is stale
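The slave-side flags above can also be set in the config file. A hedged sketch (the `only`/`slavedelay`/`autoresync` values here are illustrative, not taken from the setup below; `suq` is the database seen in the resync log further down):

```
# hypothetical slave config combining the options above:
# replicate only the "suq" database from the master,
# replay its oplog with a 10-minute delay, and
# resync automatically if this slave falls too far behind
port = 27017
dbpath = /data/db
logpath = /data/log/mongod.log
fork = true
slave = true
source = 192.168.56.80:27017
only = suq
slavedelay = 600
autoresync = true
```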
[root@mongodb1 log]# cat /etc/mongod.conf
port=27017
dbpath=/data/db
logpath=/data/log/mongod.log
fork = true
master=true
oplogSize=2048
[root@mongodb2 ~]# cat /etc/mongod.conf
port=27017
dbpath=/data/db
logpath=/data/log/mongod.log
fork = true
slave = true
source = 192.168.56.80:27017
[root@mongodb2 data]# mongod -f /etc/mongod.conf
about to fork child process, waiting until server is ready for connections.
forked process: 2297
child process started successfully, parent exiting
2016-06-06T23:24:56.225+0800 I REPL [replslave] resync: dropping database suq
2016-06-06T23:24:56.225+0800 I REPL [replslave] resync: cloning database suq to get an initial copy
2016-06-06T23:24:56.251+0800 I INDEX [replslave] build index on: suq.test2 properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "suq.test2" }
2016-06-06T23:24:56.251+0800 I INDEX [replslave] building index using bulk method
2016-06-06T23:24:56.256+0800 I INDEX [replslave] build index done. scanned 4 total records. 0 secs
2016-06-06T23:24:56.277+0800 I INDEX [replslave] build index on: suq.test3 properties: { v: 1, key: { _id: 1 }, name: "_id_", ns: "suq.test3" }
2016-06-06T23:24:56.277+0800 I INDEX [replslave] building index using bulk method
...
...
...
2016-06-06T23:24:59.343+0800 I STORAGE [replslave] copying indexes for: { name: "fs.files", options: {} }
2016-06-06T23:24:59.344+0800 I REPL [replslave] resync: done with initial clone for db: test
2016-06-06T23:25:00.346+0800 I REPL [replslave] syncing from host:192.168.56.80:27017
2016-06-06T23:25:01.348+0800 I REPL [replslave] syncing from host:192.168.56.80:27017
2016-06-06T23:25:02.349+0800 I REPL [replslave] syncing from host:192.168.56.80:27017
2016-06-06T23:25:03.683+0800 I REPL [replslave] syncing from host:192.168.56.80:27017
> rs.printReplicationInfo()
configured oplog size: 990MB
log length start to end: 887037secs (246.4hrs)
oplog first event time: Fri May 27 2016 17:08:25 GMT+0800 (CST)
oplog last event time: Mon Jun 06 2016 23:32:22 GMT+0800 (CST)
now: Mon Jun 06 2016 23:32:26 GMT+0800 (CST)
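The "log length" above is simply the span between the first and last oplog events; the seconds-to-hours conversion can be checked directly (a quick Python sketch using the values printed above):

```python
# Oplog window reported by rs.printReplicationInfo():
# the span between the first and last oplog event times.
log_length_secs = 887037          # "log length start to end" from the output
hours = log_length_secs / 3600    # seconds -> hours

print(round(hours, 1))            # -> 246.4, matching "(246.4hrs)"
```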
> db.printReplicationInfo()
this is a slave, printing slave replication info.
source: 192.168.56.80:27017
syncedTo: Mon Jun 06 2016 23:34:12 GMT+0800 (CST)
5 secs (0 hrs) behind the freshest member (no primary available at the moment)
> db.serverStatus( { repl: 1 } )
{
"host" : "mongodb1",
"advisoryHostFQDNs" : [ ],
"version" : "3.2.6",
"process" : "mongod",
"pid" : NumberLong(3780),
"uptime" : 1029,
"uptimeMillis" : NumberLong(1029698),
"uptimeEstimate" : 962,
"localTime" : ISODate("2016-06-06T15:41:44.864Z"),
"asserts" : {
"regular" : 0,
"warning" : 0,
"msg" : 0,
"user" : 0,
"rollovers" : 0
},
"connections" : {
"current" : 3,
"available" : 816,
"totalCreated" : NumberLong(8)
},
...
...
...
> rs.slaveOk()
>
WARNING: the resync command obtains a global write lock and will block other operations until it has completed.
> use admin
switched to db admin
> db.runCommand({"resync":1})
{ "ok" : 0, "errmsg" : "not dead, no need to resync" }
> use local
switched to db local
> show collections
me
oplog.$main
startup_log
> db.oplog.$main.findOne()
{
"ts" : Timestamp(1464340105, 1),
"h" : NumberLong(0),
"v" : 2,
"op" : "n",
"ns" : "",
"o" : {
}
}
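The `ts` field is a BSON Timestamp: the first component is seconds since the Unix epoch, the second an ordinal distinguishing operations that fall within the same second. Decoding the value above (a quick Python sketch, no MongoDB required) recovers the "oplog first event time" printed earlier:

```python
from datetime import datetime, timezone, timedelta

# Timestamp(1464340105, 1): seconds since the Unix epoch plus an
# ordinal for operations within the same second.
secs, ordinal = 1464340105, 1

# Convert to the server's local time zone (GMT+0800 in the output above)
cst = timezone(timedelta(hours=8))
when = datetime.fromtimestamp(secs, tz=cst)

print(when)  # -> 2016-05-27 17:08:25+08:00, the "oplog first event time"
```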