Replication is the process of synchronizing data across multiple servers.
Master-slave replication can be used for backups, failure recovery, read scaling, and so on. The basic setup is one master node plus one or more slave nodes. Within the cluster it must be unambiguous which server is the master, and there is only ever one; each slave must know the source of its data, i.e. which master it replicates from.
Start the master server with:
mongod --master
Start a slave server, pointing it at the master, with:
mongod --slave --source master_address
The configuration mainly specifies dbPath and the log path; master.conf is as follows:
$ cat master.conf
# mongod.conf
# for documentation of all options, see:
# http://docs.mongodb.org/manual/reference/configuration-options/
# Where and how to store data.
storage:
  dbPath: /home/work/mongo/master
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:
# where to write logging data.
systemLog:
  destination: file
  logAppend: true
  path: /home/work/mongo/master/mongod.log
# network interfaces
net:
  port: 27017
  bindIp: 127.0.0.1
mongod -f master.conf --logappend --master --fork
mongod -f slave-1.conf --logappend --slave --fork --source localhost:27017
mongod -f slave-2.conf --logappend --slave --fork --source localhost:27017
All slaves replicate from the master. Slave-to-slave replication is not possible here, because a slave has no oplog of its own.
There is no hard limit on the number of slaves in a cluster, but a single master cannot cope with the replication traffic from too many of them; in practice a cluster of no more than 12 nodes runs well.
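As a rough sketch of why the oplog matters: each slave tails the master's ordered operation log and replays it against its own copy of the data, which is also why a slave (having no oplog) cannot serve as a replication source. A minimal, purely illustrative Python simulation (not MongoDB internals; all names here are made up):

```python
class Primary:
    def __init__(self):
        self.data = {}
        self.oplog = []          # ordered list of (op, key, value) entries

    def insert(self, key, value):
        self.data[key] = value
        self.oplog.append(("i", key, value))

class Secondary:
    def __init__(self, source):
        self.source = source     # replicates directly from the master
        self.data = {}
        self.applied = 0         # position reached in the source oplog

    def sync(self):
        # replay any oplog entries not yet applied locally
        for op, key, value in self.source.oplog[self.applied:]:
            if op == "i":
                self.data[key] = value
        self.applied = len(self.source.oplog)

primary = Primary()
secondary = Secondary(primary)
primary.insert("u", "test")
secondary.sync()
print(secondary.data)  # {'u': 'test'}
```

A `Secondary` here has no oplog of its own, so another secondary could not sync from it, mirroring the limitation described above.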
Insert a document into the user collection on the master.
> use test
switched to db test
> db.user.insert({'u':'test', 'p':'pwd'})
WriteResult({ "nInserted" : 1 })
> db.user.find()
{ "_id" : ObjectId("59c8d10c836ac6ec4e425fc5"), "u" : "test", "p" : "pwd" }
Log in to the slave; the document inserted into the user collection has already been replicated.
> rs.slaveOk()
> show dbs
admin 0.000GB
local 0.000GB
test 0.000GB
> use test
switched to db test
> show collections;
user
> db.user.find()
{ "_id" : ObjectId("59c8d10c836ac6ec4e425fc5"), "u" : "test", "p" : "pwd" }
A replica set is a master-slave cluster with automatic failover, made up of one primary node and one or more secondary nodes. The clearest difference between a master-slave cluster and a replica set is that a replica set has no fixed master: the cluster elects a primary, and when it stops working another node takes over.
A replica set always has one active node and one or more backup nodes.
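Before setting one up, the election rule is worth stating: a new primary can only be chosen while a strict majority of voting members is reachable, which is why a three-member set tolerates one failure. A sketch of just the majority rule (illustrative only, not MongoDB's actual election protocol):

```python
def can_elect_primary(total_members, reachable_members):
    # a strict majority of all voting members must be reachable
    return reachable_members > total_members // 2

print(can_elect_primary(3, 2))  # True: one node down, majority remains
print(can_elect_primary(3, 1))  # False: no majority, no primary can be elected
```

This is also why even-sized sets gain nothing over the next smaller odd size: four members still tolerate only one failure.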
rep1, rep2, and rep3 are started on ports 27017, 27018, and 27019 respectively.
work@jdu4e00u53f7:~/mongo$ mongod -f rep1.conf --replSet rs0 --fork
work@jdu4e00u53f7:~/mongo$ mongod -f rep2.conf --replSet rs0 --fork
work@jdu4e00u53f7:~/mongo$ mongod -f rep3.conf --replSet rs0 --fork
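The rep*.conf files themselves are not shown; a plausible rep1.conf, mirroring the master.conf layout above (the paths are illustrative, and rep2/rep3 would change the port and directories accordingly), might look like:

```yaml
storage:
  dbPath: /home/work/mongo/rep1
  journal:
    enabled: true
systemLog:
  destination: file
  logAppend: true
  path: /home/work/mongo/rep1/mongod.log
net:
  port: 27017
  bindIp: 127.0.0.1
```

Here the replica-set name is passed on the command line via --replSet rs0; it could equally be set in the file under replication.replSetName.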
work@jdu4e00u53f7:~/mongo$ mongo --port 27017
> rsconf = {'_id':'rs0', 'members':[{'_id': 0, 'host':'127.0.0.1:27017'}, {'_id': 1, 'host': '127.0.0.1:27018'}, {'_id':2, 'host': '127.0.0.1:27019'}]}
{
    "_id" : "rs0",
    "members" : [
        {
            "_id" : 0,
            "host" : "127.0.0.1:27017"
        },
        {
            "_id" : 1,
            "host" : "127.0.0.1:27018"
        },
        {
            "_id" : 2,
            "host" : "127.0.0.1:27019"
        }
    ]
}
> rs.initiate(rsconf)
{ "ok" : 1 }
rs0:OTHER> rs.conf()
{
    "_id" : "rs0",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "members" : [
        {
            "_id" : 0,
            "host" : "127.0.0.1:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "127.0.0.1:27018",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "127.0.0.1:27019",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {
            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : 60000,
        "getLastErrorModes" : {
        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("59c8e527002aa17b30c3408c")
    }
}
MongoDB automatically adds the nodes from the configuration to the replica set rs0.
rs0:SECONDARY> rs.isMaster()
{
    "hosts" : [
        "127.0.0.1:27017",
        "127.0.0.1:27018",
        "127.0.0.1:27019"
    ],
    "setName" : "rs0",
    "setVersion" : 1,
    "ismaster" : false,
    "secondary" : true,
    "primary" : "127.0.0.1:27017",
    "me" : "127.0.0.1:27018",
    "lastWrite" : {
        "opTime" : {
            "ts" : Timestamp(1506338789, 1),
            "t" : NumberLong(1)
        },
        "lastWriteDate" : ISODate("2017-09-25T11:26:29Z")
    },
    "maxBsonObjectSize" : 16777216,
    "maxMessageSizeBytes" : 48000000,
    "maxWriteBatchSize" : 1000,
    "localTime" : ISODate("2017-09-25T11:26:37.835Z"),
    "maxWireVersion" : 5,
    "minWireVersion" : 0,
    "readOnly" : false,
    "ok" : 1
}
rs0:SECONDARY> db.printReplicationInfo()
configured oplog size: 1214.6957025527954MB
log length start to end: 862secs (0.24hrs)
oplog first event time: Mon Sep 25 2017 19:14:47 GMT+0800 (CST)
oplog last event time: Mon Sep 25 2017 19:29:09 GMT+0800 (CST)
now: Mon Sep 25 2017 19:29:15 GMT+0800 (CST)
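The "log length" reported above is simply the last oplog event time minus the first. Checking the 862-second figure from the timestamps shown:

```python
from datetime import datetime

# first and last oplog event times from db.printReplicationInfo() above
first = datetime(2017, 9, 25, 19, 14, 47)
last = datetime(2017, 9, 25, 19, 29, 9)
print((last - first).total_seconds())  # 862.0
```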
Insert a document on the primary.
rs0:PRIMARY> use testdb
switched to db testdb
rs0:PRIMARY> db.user.insert({'u':'test', 'p':'123'})
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.user.find()
{ "_id" : ObjectId("59c8e698c28239e0dfd6f578"), "u" : "test", "p" : "123" }
Log in to a secondary; the data has already been replicated.
$ mongo --port 27018
rs0:SECONDARY> rs.slaveOk()
rs0:SECONDARY> show dbs;
admin 0.000GB
local 0.000GB
testdb 0.000GB
rs0:SECONDARY> use testdb
switched to db testdb
rs0:SECONDARY> db.user.find()
{ "_id" : ObjectId("59c8e698c28239e0dfd6f578"), "u" : "test", "p" : "123" }
Shut down the primary node.
# connect to the primary
$ mongo --port 27017
# switch to the admin database (the shutdown command can only be run from admin)
rs0:PRIMARY> use admin
switched to db admin
rs0:PRIMARY> db.shutdownServer()
server should be down...
Check the node status.
rs0:PRIMARY> rs.status()
{
    "set" : "rs0",
    "date" : ISODate("2017-09-25T11:31:22.492Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1506339074, 1),
            "t" : NumberLong(2)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1506339074, 1),
            "t" : NumberLong(2)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1506339074, 1),
            "t" : NumberLong(2)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "127.0.0.1:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2017-09-25T11:31:21.532Z"),
            "lastHeartbeatRecv" : ISODate("2017-09-25T11:30:42.934Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Connection refused",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "127.0.0.1:27018",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1057,
            "optime" : {
                "ts" : Timestamp(1506339074, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2017-09-25T11:31:14Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1506339053, 1),
            "electionDate" : ISODate("2017-09-25T11:30:53Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "127.0.0.1:27019",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 992,
            "optime" : {
                "ts" : Timestamp(1506339074, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1506339074, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2017-09-25T11:31:14Z"),
            "optimeDurableDate" : ISODate("2017-09-25T11:31:14Z"),
            "lastHeartbeat" : ISODate("2017-09-25T11:31:21.530Z"),
            "lastHeartbeatRecv" : ISODate("2017-09-25T11:31:21.725Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "127.0.0.1:27018",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
As you can see, 127.0.0.1:27017 is now unreachable and 127.0.0.1:27018 has been elected the new primary, so the automatic failover test succeeded. If the instance on 27017 is restarted, 127.0.0.1:27017 rejoins the set as a secondary.
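The failover just observed can be sketched abstractly: members that miss heartbeats beyond heartbeatTimeoutSecs are marked unhealthy, and the surviving majority elects a new primary. A simplified, purely illustrative Python model (the real election also weighs optimes, priorities, and terms):

```python
HEARTBEAT_TIMEOUT = 10  # seconds, mirroring heartbeatTimeoutSecs above

def detect_failover(members, last_heartbeat, now, current_primary):
    # a member is healthy if its last heartbeat is recent enough
    healthy = [m for m in members
               if now - last_heartbeat[m] <= HEARTBEAT_TIMEOUT]
    if current_primary in healthy:
        return current_primary
    # a majority of the full membership must survive to elect anew
    if len(healthy) > len(members) // 2:
        return healthy[0]        # simplest stand-in for an election
    return None                  # no primary: the set accepts no writes

members = ["27017", "27018", "27019"]
hb = {"27017": 0, "27018": 95, "27019": 95}  # 27017 stopped heartbeating
print(detect_failover(members, hb, 100, "27017"))  # 27018
```

With 27017 silent past the timeout, the two survivors form a majority and 27018 takes over, matching the rs.status() output above.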
Environment: entry-level JD Cloud server, Ubuntu 16.04, MongoDB 3.4.