A MongoDB sharded cluster consists of mongos router processes (lightweight, stateless processes), shards built on replica sets (each shard is normally a replica set, which provides failover and redundant copies of the data), and a set of config servers that hold the cluster metadata (usually three for redundancy).
Shards: shard1, shard2, shard3
Config servers: configsvr1, configsvr2, configsvr3
Routers: mongos1, mongos2, mongos3
The services are laid out across the machines as follows:
192.168.111.203: 20001 (shard1 primary) / 20002 (shard2 arbiter) / 20003 (shard3 secondary) / 20004 (configsvr1) / 30000 (mongos1)
192.168.111.204: 20001 (shard1 secondary) / 20002 (shard2 primary) / 20003 (shard3 arbiter) / 20004 (configsvr2) / 30000 (mongos2)
192.168.111.205: 20001 (shard1 arbiter) / 20002 (shard2 secondary) / 20003 (shard3 primary) / 20004 (configsvr3) / 30000 (mongos3)
I. Configure the shard replica sets
1. Create the following directories on each of the three machines:
Directories for shard1 config, data, and logs:
mkdir -p /data/mongodb/shard1/config
mkdir -p /data/mongodb/shard1/data
mkdir -p /data/mongodb/shard1/logs
Directories for shard2 config, data, and logs:
mkdir -p /data/mongodb/shard2/config
mkdir -p /data/mongodb/shard2/data
mkdir -p /data/mongodb/shard2/logs
Directories for shard3 config, data, and logs:
mkdir -p /data/mongodb/shard3/config
mkdir -p /data/mongodb/shard3/data
mkdir -p /data/mongodb/shard3/logs
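If you prefer, all nine directories can be created in one pass; a minimal bash sketch using the same paths (brace expansion requires bash):
# create the config, data and logs directories for shard1-3 in one loop
for s in shard1 shard2 shard3; do
    mkdir -p /data/mongodb/$s/{config,data,logs}
done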
2. Configure shard 1
1) The shard1 configuration file mongoshard.conf (placed in /data/mongodb/shard1/config/) is as follows:
#mongodb.conf
#where to store the data.
dbpath = /data/mongodb/shard1/data
pidfilepath = /data/mongodb/shard1/mongod.pid
#where to log
logpath = /data/mongodb/shard1/logs/mongodb.log
#record logs with appending
logappend = true
bind_ip = 127.0.0.1,192.168.111.203
port = 20001
fork = true
maxConns = 500
#enable journaling
journal = true
smallfiles = true
#enables periodic logging of CPU utilization and I/O wait
#cpu = true
#turn on/off security. off is currently the default
#noauth = true
#auth = true
#verbose logging output.
#verbose = true
#inspect all client data for validity on receipt (useful for developing drivers)
#objcheck = true
#enable db quota management
#quota = true
#set oplogging level where n is
# 0=off(default)
# 1=W
# 2=R
# 3=both
# 7=W+some reads
#oplog = 0
#diagnostic/debugging option
#nocursors = true
#ignore query hints
#nohints = true
#turns off server-side scripting. this will result in greatly limited functionality
#noscripting = true
#turns off table scans. any query that would do a table scan fails.
#notablescan = true
#disable data file preallocation.
#noprealloc = true
#specify .ns file size for new databases.
#nssize =
#account token for mongo monitoring server.
#mms-token =
#server name for mongo monitoring server.
#mms-name =
#ping interval for mongo monitoring server.
#mms-interval =
# replication options
shardsvr = true
replSet = yowifishard1
#in replicated mongo databases, specify here whether this is a slave or master
#slave = true
#source = master.example.com
#slave only: specify a single database to replicate
#only = master.example.com
#or
#master = true
#source = slave.example.com
#address of a server to pair with.
#pairwith =
#address of arbiter server.
#arbiter =
#automatically resync if slave data is stale
#autoresync
#custom size for replication operation log.
oplogSize = 4096
#size limit for in_memory storage of op ids.
#opIdMem =
#SSL options
#enable SSL on normal ports
#sslOnNormalPorts = true
#SSL key file and password
#sslPEMKeyFile = /etc/ssl/mongodb.pem
#sslPEMKeyPassword = pass
Place this configuration file in the shard1 config directory on each of the three machines, changing bind_ip to the machine's actual IP.
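One convenient way to push the file to the other two machines and adjust bind_ip at the same time (a sketch; it assumes password-less scp/ssh access from 192.168.111.203):
for ip in 192.168.111.204 192.168.111.205; do
    scp /data/mongodb/shard1/config/mongoshard.conf $ip:/data/mongodb/shard1/config/
    # rewrite bind_ip so each node listens on its own address
    ssh $ip "sed -i 's/192.168.111.203/$ip/' /data/mongodb/shard1/config/mongoshard.conf"
done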
2) Start mongod on each of the three machines and initialize the replica set
mongod --config /data/mongodb/shard1/config/mongoshard.conf
Connect to the mongod instance on 192.168.111.203:
mongo --host=192.168.111.203 --port=20001
MongoDB Enterprise > use admin
Initialize the replica set with the primary member:
MongoDB Enterprise > rs.initiate({_id:'yowifishard1',members:[{_id:1,host:'192.168.111.203:20001'}]});
Add the secondary member:
MongoDB Enterprise > rs.add('192.168.111.204:20001')
Add the arbiter:
MongoDB Enterprise > rs.addArb('192.168.111.205:20001')
The yowifishard1 replica set is now set up; rs.status() can be used to check its state.
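A quick one-liner to confirm the member roles from the command line (a sketch using the legacy mongo shell's --eval; it should print PRIMARY, SECONDARY and ARBITER for the three members):
mongo --host=192.168.111.203 --port=20001 --quiet --eval 'rs.status().members.forEach(function(m){ print(m.name + " : " + m.stateStr); })'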
3. Configure the yowifishard2 and yowifishard3 replica sets in the same way; only a few lines of the configuration file change, as sketched below.
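For example, on 192.168.111.204 (the shard2 primary) the shard2 configuration file would differ from the shard1 file only in these lines (a sketch derived from the port/role layout at the top of this document):
dbpath = /data/mongodb/shard2/data
pidfilepath = /data/mongodb/shard2/mongod.pid
logpath = /data/mongodb/shard2/logs/mongodb.log
bind_ip = 127.0.0.1,192.168.111.204
port = 20002
replSet = yowifishard2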
II. Configure the config servers
1. Create the following directories on each of the three machines:
mkdir -p /data/mongodb/config/config
mkdir -p /data/mongodb/config/data
mkdir -p /data/mongodb/config/logs
Create the configuration file cfgserver.conf under /data/mongodb/config/config/:
#mongodb.conf
#where to store the data.
dbpath = /data/mongodb/config/data
pidfilepath = /data/mongodb/config/cfgserver.pid
#where to log
logpath = /data/mongodb/config/logs/mongodb.log
#record logs with appending
logappend = true
bind_ip = 127.0.0.1,192.168.111.203
port = 20004
fork = true
maxConns = 500
#enable journaling
journal = true
smallfiles = true
#enables periodic logging of CPU utilization and I/O wait
#cpu = true
#turn on/off security. off is currently the default
#noauth = true
#auth = true
#verbose logging output.
#verbose = true
#inspect all client data for validity on receipt (useful for developing drivers)
#objcheck = true
#enable db quota management
#quota = true
#set oplogging level where n is
# 0=off(default)
# 1=W
# 2=R
# 3=both
# 7=W+some reads
#oplog = 0
#diagnostic/debugging option
#nocursors = true
#ignore query hints
#nohints = true
#turns off server-side scripting. this will result in greatly limited functionality
#noscripting = true
#turns off table scans. any query that would do a table scan fails.
#notablescan = true
#disable data file preallocation.
#noprealloc = true
#specify .ns file size for new databases.
#nssize =
#account token for mongo monitoring server.
#mms-token =
#server name for mongo monitoring server.
#mms-name =
#ping interval for mongo monitoring server.
#mms-interval =
# replication options
replSet = cfgset
configsvr = true
oplogSize = 4096
2. Place this configuration file in the config-server directory on each of the three machines, changing bind_ip to the machine's actual IP.
3. Start and initialize the config servers
Start the config server on each of the three machines:
mongod --config /data/mongodb/config/config/cfgserver.conf
Connect to one of them and initialize the config server replica set:
mongo --host=192.168.111.203 --port=20004
MongoDB Enterprise > rs.initiate({_id:"cfgset",members:[{_id:1,host:"192.168.111.203:20004"},{_id:2,host:"192.168.111.204:20004"},{_id:3,host:"192.168.111.205:20004"}]})
III. Configure the mongos routers
1. Create the following directories on each mongos server:
mkdir -p /data/mongodb/mongos/logs
mkdir -p /data/mongodb/mongos/config
Create the router configuration file mongos.conf under /data/mongodb/mongos/config/:
#mongodb.conf
pidfilepath = /data/mongodb/mongos/yowifi_route.pid
#where to log
logpath = /data/mongodb/mongos/logs/mongodb.log
#record logs with appending
logappend = true
bind_ip = 127.0.0.1,192.168.111.203
port = 30000
fork = true
maxConns = 500
configdb = cfgset/192.168.111.203:20004,192.168.111.204:20004,192.168.111.205:20004
2. Sync this configuration to all three mongos servers, changing bind_ip to each machine's actual IP.
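The original steps do not show the start command for the routers; assuming the config path above, each mongos would be started with:
mongos --config /data/mongodb/mongos/config/mongos.conf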
3. Connect to a mongos and add the shards (sh.addShard() only needs to be run once; the shard registrations are stored on the config servers and are shared by every mongos):
mongo --host=192.168.111.203 --port=30000
mongos>sh.addShard("yowifishard1/192.168.111.203:20001,192.168.111.240:20001,192.168.111.205:20001")
mongos>sh.addShard("yowifishard2/192.168.111.203:20002,192.168.111.240:20002,192.168.111.205:20002")
mongos>sh.addShard("yowifishard3/192.168.111.203:20003,192.168.111.240:20003,192.168.111.205:20003")
4. Check that the configuration is correct with sh.status():
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
      "_id" : 1,
      "minCompatibleVersion" : 5,
      "currentVersion" : 6,
      "clusterId" : ObjectId("5a9d0f053f5b23cc7007399b")
  }
  shards:
      { "_id" : "yowifishard1", "host" : "yowifishard1/192.168.111.203:20001,192.168.111.204:20001", "state" : 1 }
      { "_id" : "yowifishard2", "host" : "yowifishard2/192.168.111.204:20002,192.168.111.205:20002", "state" : 1 }
      { "_id" : "yowifishard3", "host" : "yowifishard3/192.168.111.203:20003,192.168.111.205:20003", "state" : 1 }
  active mongoses:
      "3.6.3" : 3
  autosplit:
      Currently enabled: yes
  balancer:
      Currently enabled: yes
      Currently running: no
      Failed balancer rounds in last 5 attempts: 5
      Last reported error: Could not find host matching read preference { mode: "primary" } for set yowifishard1
      Time of Reported error: Tue Mar 06 2018 15:18:15 GMT+0800 (CST)
      Migration Results for the last 24 hours:
          No recent migrations
  databases:
      { "_id" : "chavin", "primary" : "yowifishard3", "partitioned" : true }
          chavin.users
              shard key: { "city" : 1 }
              unique: false
              balancing: true
              chunks:
                  yowifishard3  1
              { "city" : { "$minKey" : 1 } } -->> { "city" : { "$maxKey" : 1 } } on : yowifishard3 Timestamp(1, 0)
      { "_id" : "config", "primary" : "config", "partitioned" : true }
          config.system.sessions
              shard key: { "_id" : 1 }
              unique: false
              balancing: true
              chunks:
                  yowifishard1  1
              { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : yowifishard1 Timestamp(1, 0)
At this point, the MongoDB sharded cluster deployment is complete.
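As a final smoke test, you could insert some throwaway documents through a mongos and check how they are distributed (a sketch with made-up test data; with only one initial chunk, everything may sit on yowifishard3 until the balancer splits and migrates chunks):
mongos> use chavin
mongos> for (var i = 0; i < 10000; i++) { db.users.insert({ city: "city" + (i % 50), seq: i }); }
mongos> db.users.getShardDistribution()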