[MongoDB Study Notes 13] Creating a MongoDB Sharded Cluster

1. Introduction to MongoDB Sharding

MongoDB sharding stores very large data sets across multiple machines, as illustrated below:


[Figure 1]
 

A typical sharded cluster architecture looks like this:


[Figure 2]
 

This article quickly builds a MongoDB sharded cluster on a single machine, using one router (mongos), one config server, and three shards (each shard is a single MongoDB server rather than a replica set).

 

2. Steps to Set Up the Sharded Cluster

2.1 Start the config server

 

mongod --dbpath config --port 27000

As you can see, a config server starts just like an ordinary MongoDB server; here its db directory is simply set to config. A fuller startup sketch follows.
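
In a real deployment the config server is normally started with the dedicated --configsvr flag, a log file, and run in the background; a minimal sketch, where the paths and log file name are illustrative:

mkdir -p config
# --configsvr marks this mongod as a config server; --fork runs it in the background
mongod --configsvr --dbpath config --port 27000 --logpath config.log --fork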

 

2.2 Start the router

 

mongos --configdb hostname:27000 --port 28000

 

Starting the router requires the address of the config server, given in host:port form (port 27000, matching the config server started above).
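
For redundancy, production clusters of this generation usually ran three config servers and passed all of them to mongos as a comma-separated list; a sketch with illustrative host names:

mongos --configdb cfg1.example.com:27000,cfg2.example.com:27000,cfg3.example.com:27000 --port 28000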

 

2.3 Start three shard servers

 

mongod --dbpath data1 --port 27017
mongod --dbpath data2 --port 27018 
mongod --dbpath data3 --port 27019
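
Each mongod needs its own data directory to exist first; a sketch that creates the directories and launches the three shards in the background (the log file names are illustrative):

mkdir -p data1 data2 data3
mongod --dbpath data1 --port 27017 --logpath shard1.log --fork
mongod --dbpath data2 --port 27018 --logpath shard2.log --fork
mongod --dbpath data3 --port 27019 --logpath shard3.log --fork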

 

2.4 Add the shard servers to the cluster

mongo --port 28000 // connect to the router with the mongo shell
mongos> use admin // addshard must be run against the admin database, otherwise you get error: "$err" : "error creating initial database config information :: caused by :: can't find a shard to put new db on"
mongos> db.runCommand({addshard:"hostname:27017",allowLocal:true })
mongos> db.runCommand({addshard:"hostname:27018",allowLocal:true })
mongos> db.runCommand({addshard:"hostname:27019",allowLocal:true })

Running the commands above produces:
{ "shardAdded" : "shard0000", "ok" : 1 }
{ "shardAdded" : "shard0001", "ok" : 1 }
{ "shardAdded" : "shard0002", "ok" : 1 }
With that, the three shards have joined the cluster. (The helper sketched below does the same thing.)
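
The shell also provides sh.addShard() as a wrapper around the addshard command; a sketch:

mongos> sh.addShard("hostname:27017") // equivalent to the runCommand form above
mongos> sh.addShard("hostname:27018")
mongos> sh.addShard("hostname:27019")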

2.5 Inspecting the cluster

2.5.1 Shards

mongo --port 28000 // connect to the router; the config database seen here holds the cluster metadata kept on the config server
mongos> use config
mongos> db.shards.find();

The result:

{ "_id" : "shard0000", "host" : "10.1.241.203:27017" }
{ "_id" : "shard0001", "host" : "10.1.241.203:27018" }
{ "_id" : "shard0002", "host" : "10.1.241.203:27019" }

 

2.5.2 Databases in the cluster

 

mongo --port 28000 // connect to the router
mongos> use config
mongos> db.databases.find();

The result:

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }

Here:

  • _id is the database name.
  • partitioned indicates whether sharding has been enabled for the database; it defaults to false and must be enabled explicitly. Sharding is really applied per collection: enabling it on a database allows the collections inside it to be sharded.
  • primary: MongoDB tracks sharding per database. Within a cluster, some databases have sharding enabled and others do not, and an unsharded database lives entirely on a single shard. When a client reads or writes such a database, mongos needs some way to know which shard holds it; MongoDB records this in the primary attribute, as shown below:

For example, after creating a users database and inserting one document, running use config; db.databases.find() again gives the following, showing that the users database was placed on shard0000:

{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "users", "partitioned" : false, "primary" : "shard0000" }


[Figure 3]


2.6 Enable sharding on a database and a collection

 

mongo --port 28000 // connect to the router
mongos> use admin
mongos> db.runCommand({"enablesharding":"foo"}) // enable sharding on the database
{ "ok" : 1 }
mongos> db.runCommand({"shardcollection":"foo.bar","key":{"uid":1}}); // shard the bar collection, with uid as the shard key
{ "collectionsharded" : "foo.bar", "ok" : 1 }

These commands make the bar collection of the foo database a sharded collection. Run the earlier query again to check:

 

mongos> db.databases.find();
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "users", "partitioned" : false, "primary" : "shard0000" }
{ "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }

 

As expected, the foo database's partitioned value has changed to true. The shell helpers sketched below are shorthand for the same two commands.
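
A sketch using the sh.* helpers, which wrap enablesharding and shardcollection:

mongos> sh.enableSharding("foo") // same as {"enablesharding":"foo"}
mongos> sh.shardCollection("foo.bar", { uid: 1 }) // same as the shardcollection command above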

2.7 Setting the chunk size

First, run the following (against the config database) to look at the chunk metadata. Sharding the collection defined a range over the shard key uid: at this point there is a single chunk on shard0001 covering uid from MinKey (negative infinity) to MaxKey (positive infinity).

 

mongos> db.chunks.find();
{ "_id" : "foo.bar-uid_MinKey", "lastmod" : Timestamp(1, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : { "$minKey" : 1 } }, "max" : { "uid" : { "$ma
y" : 1 } }, "shard" : "shard0001" }

 

Next, check MongoDB's default chunk size, which is 64 MB:

 

mongos> use config
mongos> db.settings.find();
{ "_id" : "chunksize", "value" : 64 }

 

Change the chunk size to 1 MB with the following (a tiny chunk size like this is only useful for demos, since it forces frequent splitting):

 

mongos> use config
switched to db config
mongos> db.settings.save( { _id:"chunksize", value: 1 } ) // _id and value need no quotes: the shell is JavaScript, and these are ordinary object-literal keys
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
mongos> db.settings.find();
{ "_id" : "chunksize", "value" : 1 }

2.8 Insert enough data to trigger splitting

Run the following to insert 500,000 documents:

 

mongos> use foo
switched to db foo
mongos> for(i=0;i<500000;i++){ db.bar.insert({"uid":i,"description":"this is a very long description for " + i,"Date":new Date()}); }
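
A loop of single inserts makes 500,000 round trips; on a MongoDB 2.6+ shell the Bulk API batches them. A sketch of the same load, assuming such a shell:

mongos> var bulk = db.bar.initializeUnorderedBulkOp();
mongos> for (var i = 0; i < 500000; i++) { bulk.insert({ uid: i, description: "this is a very long description for " + i, Date: new Date() }); }
mongos> bulk.execute(); // sends the queued inserts in large batches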

 

After the inserts finish, db.chunks.find() produces the following:

 

mongos> use config
switched to db config
mongos> db.chunks.find();
{ "_id" : "foo.bar-uid_MinKey", "lastmod" : Timestamp(2, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : { "$minKey" : 1 } }, "max" : { "uid" : 0 }, "sha
rd" : "shard0000" }
{ "_id" : "foo.bar-uid_0.0", "lastmod" : Timestamp(3, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 0 }, "max" : { "uid" : 5333 }, "shard" : "shard0002
" }
{ "_id" : "foo.bar-uid_5333.0", "lastmod" : Timestamp(4, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 5333 }, "max" : { "uid" : 16357 }, "shard" : "sh
ard0000" }
{ "_id" : "foo.bar-uid_16357.0", "lastmod" : Timestamp(5, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 16357 }, "max" : { "uid" : 26187 }, "shard" : "
shard0002" }
{ "_id" : "foo.bar-uid_26187.0", "lastmod" : Timestamp(6, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 26187 }, "max" : { "uid" : 36823 }, "shard" : "
shard0000" }
{ "_id" : "foo.bar-uid_36823.0", "lastmod" : Timestamp(7, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 36823 }, "max" : { "uid" : 46546 }, "shard" : "
shard0002" }
{ "_id" : "foo.bar-uid_46546.0", "lastmod" : Timestamp(8, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 46546 }, "max" : { "uid" : 57035 }, "shard" : "
shard0000" }
{ "_id" : "foo.bar-uid_57035.0", "lastmod" : Timestamp(9, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 57035 }, "max" : { "uid" : 66699 }, "shard" : "
shard0002" }
{ "_id" : "foo.bar-uid_66699.0", "lastmod" : Timestamp(10, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 66699 }, "max" : { "uid" : 77783 }, "shard" :
"shard0000" }
{ "_id" : "foo.bar-uid_77783.0", "lastmod" : Timestamp(11, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 77783 }, "max" : { "uid" : 87230 }, "shard" :
"shard0002" }
{ "_id" : "foo.bar-uid_87230.0", "lastmod" : Timestamp(12, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 87230 }, "max" : { "uid" : 96803 }, "shard" :
"shard0000" }
{ "_id" : "foo.bar-uid_96803.0", "lastmod" : Timestamp(13, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 96803 }, "max" : { "uid" : 106346 }, "shard" :
 "shard0002" }
{ "_id" : "foo.bar-uid_106346.0", "lastmod" : Timestamp(14, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 106346 }, "max" : { "uid" : 117120 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_117120.0", "lastmod" : Timestamp(15, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 117120 }, "max" : { "uid" : 127899 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_127899.0", "lastmod" : Timestamp(16, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 127899 }, "max" : { "uid" : 137956 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_137956.0", "lastmod" : Timestamp(17, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 137956 }, "max" : { "uid" : 147773 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_147773.0", "lastmod" : Timestamp(18, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 147773 }, "max" : { "uid" : 158075 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_158075.0", "lastmod" : Timestamp(19, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 158075 }, "max" : { "uid" : 168681 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_168681.0", "lastmod" : Timestamp(20, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 168681 }, "max" : { "uid" : 178122 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_178122.0", "lastmod" : Timestamp(21, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 178122 }, "max" : { "uid" : 188515 }, "shard"
 : "shard0002" }
Type "it" for more
mongos> it
{ "_id" : "foo.bar-uid_188515.0", "lastmod" : Timestamp(22, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 188515 }, "max" : { "uid" : 198908 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_198908.0", "lastmod" : Timestamp(23, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 198908 }, "max" : { "uid" : 210107 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_210107.0", "lastmod" : Timestamp(24, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 210107 }, "max" : { "uid" : 219471 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_219471.0", "lastmod" : Timestamp(25, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 219471 }, "max" : { "uid" : 229496 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_229496.0", "lastmod" : Timestamp(26, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 229496 }, "max" : { "uid" : 240423 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_240423.0", "lastmod" : Timestamp(27, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 240423 }, "max" : { "uid" : 250932 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_250932.0", "lastmod" : Timestamp(28, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 250932 }, "max" : { "uid" : 261576 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_261576.0", "lastmod" : Timestamp(29, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 261576 }, "max" : { "uid" : 272106 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_272106.0", "lastmod" : Timestamp(30, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 272106 }, "max" : { "uid" : 281512 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_281512.0", "lastmod" : Timestamp(31, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 281512 }, "max" : { "uid" : 291917 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_291917.0", "lastmod" : Timestamp(32, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 291917 }, "max" : { "uid" : 301451 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_301451.0", "lastmod" : Timestamp(33, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 301451 }, "max" : { "uid" : 310834 }, "shard"
 : "shard0002" }
{ "_id" : "foo.bar-uid_310834.0", "lastmod" : Timestamp(34, 0), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 310834 }, "max" : { "uid" : 321227 }, "shard"
 : "shard0000" }
{ "_id" : "foo.bar-uid_321227.0", "lastmod" : Timestamp(34, 1), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 321227 }, "max" : { "uid" : 331654 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_331654.0", "lastmod" : Timestamp(23, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 331654 }, "max" : { "uid" : 341811 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_341811.0", "lastmod" : Timestamp(23, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 341811 }, "max" : { "uid" : 351837 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_351837.0", "lastmod" : Timestamp(25, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 351837 }, "max" : { "uid" : 362649 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_362649.0", "lastmod" : Timestamp(26, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 362649 }, "max" : { "uid" : 373362 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_373362.0", "lastmod" : Timestamp(26, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 373362 }, "max" : { "uid" : 384005 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_384005.0", "lastmod" : Timestamp(26, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 384005 }, "max" : { "uid" : 394517 }, "shard"
 : "shard0001" }
Type "it" for more
mongos> it
{ "_id" : "foo.bar-uid_394517.0", "lastmod" : Timestamp(28, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 394517 }, "max" : { "uid" : 404599 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_404599.0", "lastmod" : Timestamp(28, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 404599 }, "max" : { "uid" : 414320 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_414320.0", "lastmod" : Timestamp(28, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 414320 }, "max" : { "uid" : 423890 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_423890.0", "lastmod" : Timestamp(30, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 423890 }, "max" : { "uid" : 436104 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_436104.0", "lastmod" : Timestamp(31, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 436104 }, "max" : { "uid" : 446250 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_446250.0", "lastmod" : Timestamp(31, 4), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 446250 }, "max" : { "uid" : 456575 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_456575.0", "lastmod" : Timestamp(31, 6), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 456575 }, "max" : { "uid" : 466768 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_466768.0", "lastmod" : Timestamp(31, 8), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 466768 }, "max" : { "uid" : 476831 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_476831.0", "lastmod" : Timestamp(32, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 476831 }, "max" : { "uid" : 487520 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_487520.0", "lastmod" : Timestamp(34, 2), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 487520 }, "max" : { "uid" : 497583 }, "shard"
 : "shard0001" }
{ "_id" : "foo.bar-uid_497583.0", "lastmod" : Timestamp(34, 3), "lastmodEpoch" : ObjectId("546dea6b57bdea6a9874c9c9"), "ns" : "foo.bar", "min" : { "uid" : 497583 }, "max" : { "uid" : { "$maxKey" : 1 }
 }, "shard" : "shard0001" }
mongos>
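
Rather than paging through the whole list, you can count chunks per shard with an aggregation over config.chunks; a quick sketch:

mongos> db.chunks.aggregate([{ $match: { ns: "foo.bar" } }, { $group: { _id: "$shard", chunks: { $sum: 1 } } }]) // still in the config database from above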

Run the following to view the overall sharding status:

 

mongos> printShardingStatus(db.getSisterDB("config"),1);
--- Sharding Status ---
  sharding version: {
        "_id" : 1,
        "version" : 4,
        "minCompatibleVersion" : 4,
        "currentVersion" : 5,
        "clusterId" : ObjectId("546de44857bdea6a9874c8bc")
}
  shards:
        {  "_id" : "shard0000",  "host" : "10.1.241.203:27017" }
        {  "_id" : "shard0001",  "host" : "10.1.241.203:27018" }
        {  "_id" : "shard0002",  "host" : "10.1.241.203:27019" }
  databases:
        {  "_id" : "admin",  "partitioned" : false,  "primary" : "config" }
        {  "_id" : "test1",  "partitioned" : false,  "primary" : "shard0000" }
        {  "_id" : "foo",  "partitioned" : true,  "primary" : "shard0001" }
                foo.bar
                        shard key: { "uid" : 1 }
                        chunks:
                                shard0000       17
                                shard0002       16
                                shard0001       18
                        { "uid" : { "$minKey" : 1 } } -->> { "uid" : 0 } on : shard0000 Timestamp(2, 0)
                        { "uid" : 0 } -->> { "uid" : 5333 } on : shard0002 Timestamp(3, 0)
                        { "uid" : 5333 } -->> { "uid" : 16357 } on : shard0000 Timestamp(4, 0)
                        { "uid" : 16357 } -->> { "uid" : 26187 } on : shard0002 Timestamp(5, 0)
                        { "uid" : 26187 } -->> { "uid" : 36823 } on : shard0000 Timestamp(6, 0)
                        { "uid" : 36823 } -->> { "uid" : 46546 } on : shard0002 Timestamp(7, 0)
                        { "uid" : 46546 } -->> { "uid" : 57035 } on : shard0000 Timestamp(8, 0)
                        { "uid" : 57035 } -->> { "uid" : 66699 } on : shard0002 Timestamp(9, 0)
                        { "uid" : 66699 } -->> { "uid" : 77783 } on : shard0000 Timestamp(10, 0)
                        { "uid" : 77783 } -->> { "uid" : 87230 } on : shard0002 Timestamp(11, 0)
                        { "uid" : 87230 } -->> { "uid" : 96803 } on : shard0000 Timestamp(12, 0)
                        { "uid" : 96803 } -->> { "uid" : 106346 } on : shard0002 Timestamp(13, 0)
                        { "uid" : 106346 } -->> { "uid" : 117120 } on : shard0000 Timestamp(14, 0)
                        { "uid" : 117120 } -->> { "uid" : 127899 } on : shard0002 Timestamp(15, 0)
                        { "uid" : 127899 } -->> { "uid" : 137956 } on : shard0000 Timestamp(16, 0)
                        { "uid" : 137956 } -->> { "uid" : 147773 } on : shard0002 Timestamp(17, 0)
                        { "uid" : 147773 } -->> { "uid" : 158075 } on : shard0000 Timestamp(18, 0)
                        { "uid" : 158075 } -->> { "uid" : 168681 } on : shard0002 Timestamp(19, 0)
                        { "uid" : 168681 } -->> { "uid" : 178122 } on : shard0000 Timestamp(20, 0)
                        { "uid" : 178122 } -->> { "uid" : 188515 } on : shard0002 Timestamp(21, 0)
                        { "uid" : 188515 } -->> { "uid" : 198908 } on : shard0000 Timestamp(22, 0)
                        { "uid" : 198908 } -->> { "uid" : 210107 } on : shard0002 Timestamp(23, 0)
                        { "uid" : 210107 } -->> { "uid" : 219471 } on : shard0000 Timestamp(24, 0)
                        { "uid" : 219471 } -->> { "uid" : 229496 } on : shard0002 Timestamp(25, 0)
                        { "uid" : 229496 } -->> { "uid" : 240423 } on : shard0000 Timestamp(26, 0)
                        { "uid" : 240423 } -->> { "uid" : 250932 } on : shard0002 Timestamp(27, 0)
                        { "uid" : 250932 } -->> { "uid" : 261576 } on : shard0000 Timestamp(28, 0)
                        { "uid" : 261576 } -->> { "uid" : 272106 } on : shard0002 Timestamp(29, 0)
                        { "uid" : 272106 } -->> { "uid" : 281512 } on : shard0000 Timestamp(30, 0)
                        { "uid" : 281512 } -->> { "uid" : 291917 } on : shard0002 Timestamp(31, 0)
                        { "uid" : 291917 } -->> { "uid" : 301451 } on : shard0000 Timestamp(32, 0)
                        { "uid" : 301451 } -->> { "uid" : 310834 } on : shard0002 Timestamp(33, 0)
                        { "uid" : 310834 } -->> { "uid" : 321227 } on : shard0000 Timestamp(34, 0)
                        { "uid" : 321227 } -->> { "uid" : 331654 } on : shard0001 Timestamp(34, 1)
                        { "uid" : 331654 } -->> { "uid" : 341811 } on : shard0001 Timestamp(23, 4)
                        { "uid" : 341811 } -->> { "uid" : 351837 } on : shard0001 Timestamp(23, 6)
                        { "uid" : 351837 } -->> { "uid" : 362649 } on : shard0001 Timestamp(25, 2)
                        { "uid" : 362649 } -->> { "uid" : 373362 } on : shard0001 Timestamp(26, 2)
                        { "uid" : 373362 } -->> { "uid" : 384005 } on : shard0001 Timestamp(26, 4)
                        { "uid" : 384005 } -->> { "uid" : 394517 } on : shard0001 Timestamp(26, 6)
                        { "uid" : 394517 } -->> { "uid" : 404599 } on : shard0001 Timestamp(28, 2)
                        { "uid" : 404599 } -->> { "uid" : 414320 } on : shard0001 Timestamp(28, 4)
                        { "uid" : 414320 } -->> { "uid" : 423890 } on : shard0001 Timestamp(28, 6)
                        { "uid" : 423890 } -->> { "uid" : 436104 } on : shard0001 Timestamp(30, 2)
                        { "uid" : 436104 } -->> { "uid" : 446250 } on : shard0001 Timestamp(31, 2)
                        { "uid" : 446250 } -->> { "uid" : 456575 } on : shard0001 Timestamp(31, 4)
                        { "uid" : 456575 } -->> { "uid" : 466768 } on : shard0001 Timestamp(31, 6)
                        { "uid" : 466768 } -->> { "uid" : 476831 } on : shard0001 Timestamp(31, 8)
                        { "uid" : 476831 } -->> { "uid" : 487520 } on : shard0001 Timestamp(32, 2)
                        { "uid" : 487520 } -->> { "uid" : 497583 } on : shard0001 Timestamp(34, 2)
                        { "uid" : 497583 } -->> { "uid" : { "$maxKey" : 1 } } on : shard0001 Timestamp(34, 3)
        {  "_id" : "test",  "partitioned" : false,  "primary" : "shard0002" }


Because uid is monotonically increasing, every new insert lands in the chunk holding the current maximum of the key range, so data increasingly piles up on a single shard (here shard0001). This illustrates why a monotonically increasing attribute should not be chosen as the shard key; one common remedy is sketched below.
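
A hashed shard key (available from MongoDB 2.4) spreads monotonically increasing values evenly across shards. A sketch, using a hypothetical collection foo.baz, since foo.bar is already sharded on { uid: 1 }:

mongos> sh.shardCollection("foo.baz", { uid: "hashed" }) // hashes uid so consecutive inserts scatter across shards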

3. Adding a Shard

When the existing cluster comes under storage pressure, you can add a shard. The procedure is simple:

Start a new MongoDB instance. It must not already contain any of the databases or collections being sharded; if it does, delete them first. Then:

 

mongo --port 28000 // connect to the router
mongos> use admin // addshard must be run against the admin database
mongos> db.runCommand({addshard:"hostname:27020",allowLocal:true })

 

The balancer then starts migrating chunks onto the new shard. With the 500,000 documents that were spread across three shards above, once balancing completes, run:

mongos> use config
mongos> printShardingStatus(db.getSisterDB("config"),1);
 

 

The resulting chunk distribution is:

 

 shard key: { "uid" : 1 }
 chunks:
         shard0003       12
         shard0002       13
         shard0000       13
         shard0001       13
 
If the instance being added already contains one of the cluster's sharded databases, adding it fails:
mongos> use admin
switched to db admin
mongos> db.runCommand({addshard:"10.1.241.203:27020",allowLocal:true })
{
        "ok" : 0,
        "errmsg" : "can't add shard 10.1.241.203:27020 because a local database 'foo' exists in another shard0001:10.1.241.203:27018"
}
The error means that a given database may exist on only one shard in the cluster; that shard is called its "primary", as shown below:
mongos> use config
switched to db config
mongos> db.databases.find();
{ "_id" : "admin", "partitioned" : false, "primary" : "config" }
{ "_id" : "test1", "partitioned" : false, "primary" : "shard0000" }
{ "_id" : "foo", "partitioned" : true, "primary" : "shard0001" }
{ "_id" : "test", "partitioned" : false, "primary" : "shard0002" }
 
Here shard0001 is the shard at port 27018. A related detail, previewing the next section: when a shard (here 27020, i.e. shard0003) is removed, it is not deleted from the shards collection of the config database right away; instead it is marked with the attribute draining: true while its chunks migrate off:
use config
db.shards.find();
{ "_id" : "shard0000", "host" : "10.1.241.203:27017" }
{ "_id" : "shard0001", "host" : "10.1.241.203:27018" }
{ "_id" : "shard0002", "host" : "10.1.241.203:27019" }
{ "_id" : "shard0003", "host" : "10.1.241.203:27020", "draining" : true }

4. Removing a Shard

Run the following to remove a shard from the cluster:

 

mongo --port 28000 // connect to the router
mongos> use admin // like addshard, removeshard must be run against the admin database
mongos> db.runCommand({removeshard:"shard0003"}) // removeshard takes the shard name; shard0003 is the shard at hostname:27020

Once the removal starts, the cluster migrates the removed shard's data onto the remaining shards and keeps them balanced. After the migration completes, run:

 

mongos> use config
mongos> printShardingStatus(db.getSisterDB("config"),1);

 

The result below shows the three remaining shards are balanced again:

 

  foo.bar
          shard key: { "uid" : 1 }
          chunks:
                  shard0000       17
                  shard0001       17
                  shard0002       17
 
After the removal, the removed shard's data has been migrated to the other shards; nothing is left on it.