[NoSQL] MongoDB High-Availability Architecture: Replica Set Cluster in Practice


A Replica Set uses n mongod nodes to build a high-availability setup with automatic failover (auto-failover) and automatic recovery (auto-recovery).
A Replica Set also enables read/write splitting: by flagging connections with slaveOk (specified on the connection, or by running rs.slaveOk() in a shell session on the secondary), the secondaries absorb read traffic while the primary handles only writes.
Secondary nodes in a Replica Set are not readable by default.
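For example, reading from a secondary in the mongo shell fails until the connection is flagged with slaveOk (a minimal sketch; db.foo is a placeholder collection):

  [root@mongo02 ~]# mongo 192.168.8.31:27021
  SECONDARY> db.foo.find()   // error: { "$err" : "not master and slaveok=false" }
  SECONDARY> rs.slaveOk()    // allow reads on this secondary connection
  SECONDARY> db.foo.find()   // queries now succeed on this secondary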
Architecture:


Each server runs two mongod instances:

  shard11 + shard12 + shard13 ----> one replica set --|
                                                      |-----> sharding_cluster
  shard21 + shard22 + shard23 ----> one replica set --|

Shard Server: stores the actual data chunks. In production each shard role is carried by a replica set of several machines, so a single host failure cannot take the shard down.
Config Server: stores the metadata of the entire cluster, including the chunk information.
Route Server (mongos): the front-end router that clients connect to; it makes the whole cluster look like a single database, so front-end applications can use it transparently.
I. Install and configure the MongoDB environment
1. Installation

  wget http://fastdl.mongodb.org/linux/mongodb-linux-x86_64-2.0.4.tgz
  tar zxf mongodb-linux-x86_64-2.0.4.tgz
  mv mongodb-linux-x86_64-2.0.4 /opt/mongodb
  echo 'export PATH=$PATH:/opt/mongodb/bin' >> /etc/profile   # single quotes, so $PATH expands at login rather than now
  source /etc/profile
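A quick sanity check that the binaries are now on the PATH (the version output should match the tarball you downloaded):

  mongod --version    # expect: db version v2.0.4
  mongo --version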

2. Create a user and group

  useradd -u 600 -s /bin/false mongodb

3. Create the data directories
Create the following directories on each server:

  # server 192.168.8.30:
  mkdir -p /data0/mongodb/{db,logs}
  mkdir -p /data0/mongodb/db/{shard11,shard21,config}
  # server 192.168.8.31:
  mkdir -p /data0/mongodb/{db,logs}
  mkdir -p /data0/mongodb/db/{shard12,shard22,config}
  # server 192.168.8.32:
  mkdir -p /data0/mongodb/{db,logs}
  mkdir -p /data0/mongodb/db/{shard13,shard23,config}

4. Set up hosts resolution on every node

  true > /etc/hosts
  echo -ne "
  192.168.8.30     mong01
  192.168.8.31     mong02
  192.168.8.32     mong03
  " >> /etc/hosts

or, equivalently, with a heredoc:

  cat >> /etc/hosts << EOF
  192.168.8.30     mong01
  192.168.8.31     mong02
  192.168.8.32     mong03
  EOF

5. Synchronize the clocks
ntpdate ntp.api.bz
Put this into a crontab entry so it runs periodically.
Clock synchronization here is mandatory; without it the shards will not stay in sync.
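For example, a root crontab entry that resyncs every 30 minutes might look like this (a sketch; use any NTP server reachable from your network):

  (crontab -l 2>/dev/null; echo "*/30 * * * * /usr/sbin/ntpdate ntp.api.bz > /dev/null 2>&1") | crontab -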
Perform all of the configuration above on every node.
II. Configure the replica sets
1. Start the mongod instances for the two shards
 
 
  # server 192.168.8.30:
  /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard11 -oplogSize 1000 -logpath /data0/mongodb/logs/shard11.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard21 -oplogSize 1000 -logpath /data0/mongodb/logs/shard21.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  echo "all mongo started."

  # server 192.168.8.31:
  /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard12 -oplogSize 1000 -logpath /data0/mongodb/logs/shard12.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  numactl --interleave=all /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard22 -oplogSize 1000 -logpath /data0/mongodb/logs/shard22.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  echo "all mongo started."

  # server 192.168.8.32:
  numactl --interleave=all /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard13 -oplogSize 1000 -logpath /data0/mongodb/logs/shard13.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  numactl --interleave=all /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard23 -oplogSize 1000 -logpath /data0/mongodb/logs/shard23.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  echo "all mongo started."

You can drop the commands above into a script for convenient startup, or move the options into a configuration file and start each instance with the -f flag. Rewritten as a configuration file (shown for the shard11 instance):
 
 
  shardsvr = true
  replSet = shard1
  port = 27021
  dbpath = /data0/mongodb/db/shard11
  oplogSize = 1000
  logpath = /data0/mongodb/logs/shard11.log
  logappend = true
  maxConns = 10000
  quiet = true
  profile = 1
  slowms = 5
  rest = true
  fork = true
  directoryperdb = true

The instance can then be started with mongod -f mongodb.conf.
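For example, with one file per instance (hypothetical paths under /opt/mongodb/conf/, each file mirroring the flags shown above for that instance):

  /opt/mongodb/bin/mongod -f /opt/mongodb/conf/shard11.conf
  /opt/mongodb/bin/mongod -f /opt/mongodb/conf/shard21.conf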
Here I put the start commands into a script on each node (this only starts the mongod instances; the config server and mongos get their own scripts later):
 
 
  [root@mon1 sh]# cat start.sh
  /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard11 -oplogSize 1000 -logpath /data0/mongodb/logs/shard11.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard21 -oplogSize 1000 -logpath /data0/mongodb/logs/shard21.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  ps aux |grep mongo
  echo "all mongo started."

  [root@mon2 sh]# cat start.sh
  /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard12 -oplogSize 1000 -logpath /data0/mongodb/logs/shard12.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard22 -oplogSize 1000 -logpath /data0/mongodb/logs/shard22.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  ps aux |grep mongo
  echo "all mongo started."

  [root@mongo03 sh]# cat start.sh
  /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard13 -oplogSize 1000 -logpath /data0/mongodb/logs/shard13.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard23 -oplogSize 1000 -logpath /data0/mongodb/logs/shard23.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  sleep 2
  echo "all mongo started."


PS: To expose an HTTP REST interface, add the --rest option to the mongod startup parameters.
The replica set status page is then available at http://IP:28021/_replSet (the HTTP port is the mongod port plus 1000).
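A quick check from any machine that can reach the node:

  curl -s http://192.168.8.30:28021/_replSet | head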
For production, starting everything from configuration files driven by scripts is recommended.
III. Initialize the replica sets
1. Configure the replica set used by shard1
 
 
  [root@mongo01 ~]# mongo 192.168.8.30:27021
  > config = {_id: 'shard1', members: [
  ...            {_id: 0, host: '192.168.8.30:27021'},
  ...            {_id: 1, host: '192.168.8.31:27021'},
  ...            {_id: 2, host: '192.168.8.32:27021'}]
  ... }
  {
      "_id" : "shard1",
      "members" : [
          { "_id" : 0, "host" : "192.168.8.30:27021" },
          { "_id" : 1, "host" : "192.168.8.31:27021" },
          { "_id" : 2, "host" : "192.168.8.32:27021" }
      ]
  }

The following output indicates success:
 
 
  > rs.initiate(config)
  {
      "info" : "Config now saved locally.  Should come online in about a minute.",
      "ok" : 1
  }
  > rs.status()
  {
      "set" : "shard1",
      "date" : ISODate("2012-06-07T11:35:22Z"),
      "myState" : 1,
      "members" : [
          {
              "_id" : 0,
              "name" : "192.168.8.30:27021",
              "health" : 1,            # 1 = healthy
              "state" : 1,             # 1 = primary
              "stateStr" : "PRIMARY",  # this server is the primary
              "optime" : {
                  "t" : 1339068873000,
                  "i" : 1
              },
              "optimeDate" : ISODate("2012-06-07T11:34:33Z"),
              "self" : true
          },
          {
              "_id" : 1,
              "name" : "192.168.8.31:27021",
              "health" : 1,              # 1 = healthy
              "state" : 2,               # 2 = secondary
              "stateStr" : "SECONDARY",  # this server is a secondary
              "uptime" : 41,
              "optime" : {
                  "t" : 1339068873000,
                  "i" : 1
              },
              "optimeDate" : ISODate("2012-06-07T11:34:33Z"),
              "lastHeartbeat" : ISODate("2012-06-07T11:35:21Z"),
              "pingMs" : 7
          },
          {
              "_id" : 2,
              "name" : "192.168.8.32:27021",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 36,
              "optime" : {
                  "t" : 1341373105000,
                  "i" : 1
              },
              "optimeDate" : ISODate("2012-06-07T11:34:00Z"),
              "lastHeartbeat" : ISODate("2012-06-07T11:35:21Z"),
              "pingMs" : 3
          }
      ],
      "ok" : 1
  }
  PRIMARY>

The prompt switches to PRIMARY almost immediately, i.e. this node is now the primary.
Now check the other nodes:
 
 
  [root@mongo02 sh]# mongo 192.168.8.31:27021
  MongoDB shell version: 2.0.5
  connecting to: 192.168.8.31:27021/test
  SECONDARY>
  [root@mongo03 sh]# mongo 192.168.8.32:27021
  MongoDB shell version: 2.0.5
  connecting to: 192.168.8.32:27021/test
  SECONDARY>

The Replica Set configuration can be inspected on any node:
 
 
  PRIMARY> rs.conf()
  {
      "_id" : "shard1",
      "version" : 1,
      "members" : [
          { "_id" : 0, "host" : "192.168.8.30:27021" },
          { "_id" : 1, "host" : "192.168.8.31:27021" },
          { "_id" : 2, "host" : "192.168.8.32:27021", "shardOnly" : true }
      ]
  }

2. Configure the replica set used by shard2
 
 
  [root@mongo02 sh]# mongo 192.168.8.30:27022
  MongoDB shell version: 2.0.5
  connecting to: 192.168.8.30:27022/test
  > config = {_id: 'shard2', members: [
  ...    {_id: 0, host: '192.168.8.30:27022'},
  ...    {_id: 1, host: '192.168.8.31:27022'},
  ...    {_id: 2, host: '192.168.8.32:27022'}]
  ... }
  {
      "_id" : "shard2",
      "members" : [
          { "_id" : 0, "host" : "192.168.8.30:27022" },
          { "_id" : 1, "host" : "192.168.8.31:27022" },
          { "_id" : 2, "host" : "192.168.8.32:27022" }
      ]
  }
  > rs.initiate(config)
  {
      "info" : "Config now saved locally.  Should come online in about a minute.",
      "ok" : 1
  }

Verify the members:
 
 
  > rs.status()
  {
      "set" : "shard2",
      "date" : ISODate("2012-06-07T11:43:47Z"),
      "myState" : 2,
      "members" : [
          {
              "_id" : 0,
              "name" : "192.168.8.30:27022",
              "health" : 1,
              "state" : 1,
              "stateStr" : "PRIMARY",
              "optime" : {
                  "t" : 1341367921000,
                  "i" : 1
              },
              "optimeDate" : ISODate("2012-06-07T11:43:40Z"),
              "self" : true
          },
          {
              "_id" : 1,
              "name" : "192.168.8.31:27022",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 50,
              "optime" : {
                  "t" : 1341367921000,
                  "i" : 1
              },
              "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
              "lastHeartbeat" : ISODate("2012-06-07T11:43:46Z"),
              "pingMs" : 0
          },
          {
              "_id" : 2,
              "name" : "192.168.8.32:27022",
              "health" : 1,
              "state" : 2,
              "stateStr" : "SECONDARY",
              "uptime" : 81,
              "optime" : {
                  "t" : 1341373254000,
                  "i" : 1
              },
              "optimeDate" : ISODate("2012-06-07T11:41:00Z"),
              "lastHeartbeat" : ISODate("2012-06-07T11:43:46Z"),
              "pingMs" : 0
          }
      ],
      "ok" : 1
  }
  PRIMARY>

That completes the configuration of both replica sets.
PS: If no priority is specified at initiation, the member with _id 0 becomes the primary by default.
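If you want a specific member to win the election, you can set per-member priorities in the config object before calling rs.initiate() (a sketch; a higher priority makes the member preferred as primary):

  > config = {_id: 'shard1', members: [
  ...    {_id: 0, host: '192.168.8.30:27021', priority: 2},
  ...    {_id: 1, host: '192.168.8.31:27021', priority: 1},
  ...    {_id: 2, host: '192.168.8.32:27021', priority: 1}]
  ... }
  > rs.initiate(config)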
Key fields in the rs.status() output:
state: 1 means the host currently takes reads and writes (it is the primary); 2 means it is a secondary and takes no writes.
health: 1 means the host is currently healthy; 0 means it is down or unreachable.
Note: a replica set can also be initialized directly with the replSetInitiate command, which saves the separate rs.initiate(config) call:
db.runCommand({"replSetInitiate":{"_id":"shard1","members":[{"_id":0,"host":"192.168.8.30:27021"},{"_id":1,"host":"192.168.8.31:27021"},{"_id":2,"host":"192.168.8.32:27021","shardOnly":true}]}})
IV. Configure the three config servers
Run the same command on each server:
/opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
As a script:
 
 
  [root@mongo01 sh]# cat config.sh
  /opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
  [root@mongo01 sh]# pwd
  /opt/mongodb/sh
  [root@mongo01 sh]# ./config.sh

Then check on each node that everything started:
 
 
  [root@mongo01 sh]# ps aux |grep mong
  root     25343  0.9  6.8 737596 20036 ?        Sl   19:32   0:12 /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard1 -oplogSize 100 -logpath /data0/mongodb/logs/shard1.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  root     25351  0.9  7.0 737624 20760 ?        Sl   19:32   0:11 /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard2 -oplogSize 100 -logpath /data0/mongodb/logs/shard2.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  root     25669 13.0  4.7 118768 13852 ?        Sl   19:52   0:07 /opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
  root     25695  0.0  0.2  61220   744 pts/3    R+   19:53   0:00 grep mong

V. Configure mongos (start the routers)
Run on servers .30 and .31 (the router can also be started on all nodes):
/opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork
As a script:
 
 
  [root@mongo01 sh]# cat mongos.sh
  /opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork
  [root@mongo01 sh]# pwd
  /opt/mongodb/sh
  [root@mongo01 sh]# ./mongos.sh
  [root@mongo01 sh]# ps aux |grep mong
  root     25343  0.8  6.8 737596 20040 ?        Sl   19:32   0:13 /opt/mongodb/bin/mongod -shardsvr -replSet shard1 -port 27021 -dbpath /data0/mongodb/db/shard1 -oplogSize 100 -logpath /data0/mongodb/logs/shard1.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  root     25351  0.9  7.0 737624 20768 ?        Sl   19:32   0:16 /opt/mongodb/bin/mongod -shardsvr -replSet shard2 -port 27022 -dbpath /data0/mongodb/db/shard2 -oplogSize 100 -logpath /data0/mongodb/logs/shard2.log -logappend --maxConns 10000 --quiet -fork --directoryperdb
  root     25669  2.0  8.0 321852 23744 ?        Sl   19:52   0:09 /opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
  root     25863  0.5  0.8  90760  2388 ?        Sl   20:00   0:00 /opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork
  root     25896  0.0  0.2  61220   732 pts/3    D+   20:00   0:00 grep mong

Note:
1) The IP:port list passed to mongos is the list of config servers: 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000.
2) The config servers must be started first, and be running normally (their processes present), before mongos is started.
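One defensive way to script that ordering is to block until the config server process is up before launching mongos (a sketch using pgrep; adjust the pattern and paths to your layout):

  /opt/mongodb/bin/mongod --configsvr --dbpath /data0/mongodb/db/config --port 20000 --logpath /data0/mongodb/logs/config.log --logappend --fork --directoryperdb
  until pgrep -f "mongod --configsvr" > /dev/null; do sleep 1; done   # wait for the config server
  /opt/mongodb/bin/mongos -configdb 192.168.8.30:20000,192.168.8.31:20000,192.168.8.32:20000 -port 30000 -chunkSize 50 -logpath /data0/mongodb/logs/mongos.log -logappend -fork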
VI. Configure the shard cluster
1. Connect to one of the routers
 
 
  [root@mongo01 sh]# mongo 192.168.8.30:30000/admin
  MongoDB shell version: 2.0.5
  connecting to: 192.168.8.30:30000/admin
  mongos>

2. Add the shards
 
 
  mongos> db.runCommand({ addshard : "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021",name:"shard1",maxSize:20480})
  { "shardAdded" : "shard1", "ok" : 1 }
  mongos> db.runCommand({ addshard : "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022",name:"shard2",maxSize:20480})
  { "shardAdded" : "shard2", "ok" : 1 }

PS:
Shard administration must be done in the admin database.
If you run routers only on servers .30 and .31, server .32's router need not be brought into play here.
Optional parameters:
name: gives each shard an explicit name; if omitted, the system assigns one automatically.
maxSize: caps the disk space each shard may use, in megabytes.
3. List the shards that have been added
 
 
  mongos> db.runCommand( { listshards : 1 } );
  {
      "shards" : [
          {
              "_id" : "shard1",
              "host" : "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021",
              "maxSize" : NumberLong(20480)
          },
          {
              "_id" : "shard2",
              "host" : "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022",
              "maxSize" : NumberLong(20480)
          }
      ],
      "ok" : 1
  }


PS: Both shards added above (shard1 and shard2) are listed, so the shards are configured successfully.
If the machine currently acting as primary goes down, one of the other two replica set members is elected primary, and mongos reconnects to it automatically:
 
 
  mongos> db.runCommand({ismaster:1});
  {
      "ismaster" : true,
      "msg" : "isdbgrid",
      "maxBsonObjectSize" : 16777216,
      "ok" : 1
  }
  mongos> db.runCommand( { listshards : 1 } );
  { "ok" : 0, "errmsg" : "access denied - use admin db" }
  mongos> use admin
  switched to db admin
  mongos> db.runCommand( { listshards : 1 } );
  {
      "shards" : [
          {
              "_id" : "s1",
              "host" : "shard1/192.168.8.30:27021,192.168.8.31:27021"
          },
          {
              "_id" : "s2",
              "host" : "shard2/192.168.8.30:27022,192.168.8.31:27022"
          }
      ],
      "ok" : 1
  }
  mongos>

VII. Enable sharding
1. Enable sharding on a database
db.runCommand( { enablesharding : "<dbname>" } );
e.g.: db.runCommand( { enablesharding : "nosql" } );
First insert some test data:
 
 
  mongos> use nosql
  switched to db nosql
  mongos> for(var i=0;i<100;i++)db.fans.insert({uid:i,uname:'nosqlfans'+i});

Enable sharding on the database:
 
 
  mongos> use admin
  switched to db admin
  mongos> db.runCommand( { enablesharding : "nosql" } );
  { "ok" : 1 }


This command allows the database to span shards; without it, the whole database stays on a single shard. Once sharding is enabled on a database, its different collections can be placed on different shards, but each individual collection still lives on one shard. To spread a single collection across shards, the collection itself must be sharded in a separate step.
2. Add an index
The shard key must be indexed, otherwise the collection cannot be sharded.
 
 
  mongos> use nosql
  switched to db nosql
  mongos> db.fans.find()
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad081"), "uid" : 0, "uname" : "nosqlfans0" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad082"), "uid" : 1, "uname" : "nosqlfans1" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad083"), "uid" : 2, "uname" : "nosqlfans2" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad084"), "uid" : 3, "uname" : "nosqlfans3" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad085"), "uid" : 4, "uname" : "nosqlfans4" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad086"), "uid" : 5, "uname" : "nosqlfans5" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad087"), "uid" : 6, "uname" : "nosqlfans6" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad088"), "uid" : 7, "uname" : "nosqlfans7" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad089"), "uid" : 8, "uname" : "nosqlfans8" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad08a"), "uid" : 9, "uname" : "nosqlfans9" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad08b"), "uid" : 10, "uname" : "nosqlfans10" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad08c"), "uid" : 11, "uname" : "nosqlfans11" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad08d"), "uid" : 12, "uname" : "nosqlfans12" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad08e"), "uid" : 13, "uname" : "nosqlfans13" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad08f"), "uid" : 14, "uname" : "nosqlfans14" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad090"), "uid" : 15, "uname" : "nosqlfans15" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad091"), "uid" : 16, "uname" : "nosqlfans16" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad092"), "uid" : 17, "uname" : "nosqlfans17" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad093"), "uid" : 18, "uname" : "nosqlfans18" }
  { "_id" : ObjectId("4ff2ae6816df1d1b33bad094"), "uid" : 19, "uname" : "nosqlfans19" }
  has more
  mongos> db.fans.ensureIndex({"uid":1})
  mongos> db.fans.find({uid:10}).explain()
  {
      "cursor" : "BtreeCursor uid_1",
      "nscanned" : 1,
      "nscannedObjects" : 1,
      "n" : 1,
      "millis" : 25,
      "nYields" : 0,
      "nChunkSkips" : 0,
      "isMultiKey" : false,
      "indexOnly" : false,
      "indexBounds" : {
          "uid" : [
              [
                  10,
                  10
              ]
          ]
      }
  }

3. Shard the collection
To make a single collection spread its data across shards, assign it a shard key with:
db.runCommand( { shardcollection : "<namespace>", key : <shardKeyPattern> });
 
 
  mongos> use admin
  switched to db admin
  mongos> db.runCommand({shardcollection : "nosql.fans",key : {uid:1}})
  { "collectionsharded" : "nosql.fans", "ok" : 1 }

PS:
1) This must be run in the admin database.
2) The collection must already have an index on the shard key.
3) A sharded collection may have only one unique index, and it must be on the shard key; other unique indexes are not allowed.
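For example, attempting a unique index off the shard key on the sharded collection is refused (a sketch):

  mongos> use nosql
  mongos> db.fans.ensureIndex({uname: 1}, {unique: true})
  // fails: a unique index on a sharded collection must include the shard key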
4. Check the sharding state
 
 
  mongos> use nosql
  mongos> db.fans.stats()
  {
      "sharded" : true,
      "flags" : 1,
      "ns" : "nosql.fans",
      "count" : 100,
      "numExtents" : 2,
      "size" : 5968,
      "storageSize" : 20480,
      "totalIndexSize" : 24528,
      "indexSizes" : {
          "_id_" : 8176,
          "uid0_1" : 8176,
          "uid_1" : 8176
      },
      "avgObjSize" : 59.68,
      "nindexes" : 3,
      "nchunks" : 1,
      "shards" : {
          "shard1" : {
              "ns" : "nosql.test",
              "count" : 100,
              "size" : 5968,
              "avgObjSize" : 59.68,
              "storageSize" : 20480,
              "numExtents" : 2,
              "nindexes" : 3,
              "lastExtentSize" : 16384,
              "paddingFactor" : 1,
              "flags" : 1,
              "totalIndexSize" : 24528,
              "indexSizes" : {
                  "_id_" : 8176,
                  "uid0_1" : 8176,
                  "uid_1" : 8176
              },
              "ok" : 1
          }
      },
      "ok" : 1
  }
  mongos>

At this point the data has not actually been partitioned yet (nchunks is still 1 and everything sits on shard1). Insert a much larger batch of data:
 
 
  mongos> use nosql
  switched to db nosql
  mongos> for(var i=200;i<200003;i++)db.fans.save({uid:i,uname:'nosqlfans'+i});
  mongos> db.fans.stats()
  {
      "sharded" : true,
      "flags" : 1,
      "ns" : "nosql.fans",
      "count" : 200002,
      "numExtents" : 12,
      "size" : 12760184,
      "storageSize" : 22646784,
      "totalIndexSize" : 12116832,
      "indexSizes" : {
          "_id_" : 6508096,
          "uid_1" : 5608736
      },
      "avgObjSize" : 63.80028199718003,
      "nindexes" : 2,
      "nchunks" : 10,
      "shards" : {
          "shard1" : {
              "ns" : "nosql.fans",
              "count" : 9554,
              "size" : 573260,
              "avgObjSize" : 60.00209336403601,
              "storageSize" : 1396736,
              "numExtents" : 5,
              "nindexes" : 2,
              "lastExtentSize" : 1048576,
              "paddingFactor" : 1,
              "flags" : 1,
              "totalIndexSize" : 596848,
              "indexSizes" : {
                  "_id_" : 318864,
                  "uid_1" : 277984
              },
              "ok" : 1
          },
          "shard2" : {
              "ns" : "nosql.fans",
              "count" : 190448,
              "size" : 12186924,
              "avgObjSize" : 63.990821641602956,
              "storageSize" : 21250048,
              "numExtents" : 7,
              "nindexes" : 2,
              "lastExtentSize" : 10067968,
              "paddingFactor" : 1,
              "flags" : 1,
              "totalIndexSize" : 11519984,
              "indexSizes" : {
                  "_id_" : 6189232,
                  "uid_1" : 5330752
              },
              "ok" : 1
          }
      },
      "ok" : 1
  }
  mongos>

Once a large volume of data is inserted, the collection is automatically split into chunks and balanced across the shards, exactly as intended.
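To see where the chunks actually went, the config metadata can be queried through mongos (printShardingStatus is also listed under shard management below):

  mongos> use config
  mongos> db.chunks.find({ns: "nosql.fans"}).count()   // should match nchunks above (10)
  mongos> printShardingStatus()                        // full chunk-to-shard breakdown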
VIII. A script to stop all services
 
 
  [root@mon1 sh]# cat /opt/mongodb/sh/stop.sh
  #!/bin/sh
  # Stop the mongos routers first (their command line contains "configdb")
  check=`ps aux|grep mongo|grep configdb|awk '{print $2;}'|wc -l`
  echo $check
  while [ $check -gt 0 ]
  do
         no=`ps aux|grep mongo|grep configdb|awk '{print $2;}'|sed -n '1p'`
         kill -3 $no
         echo "kill $no mongo daemon is ok."
         sleep 2
         check=`ps aux|grep mongo|grep configdb|awk '{print $2;}'|wc -l`
         echo "stopping mongo,pls waiting..."
  done
  # Then the config servers (command line contains "configsvr")
  check=`ps aux|grep mongo|grep configsvr|awk '{print $2;}'|wc -l`
  echo $check
  while [ $check -gt 0 ]
  do
         no=`ps aux|grep mongo|grep configsvr|awk '{print $2;}'|sed -n '1p'`
         kill -3 $no
         echo "kill $no mongo daemon is ok."
         sleep 2
         check=`ps aux|grep mongo|grep configsvr|awk '{print $2;}'|wc -l`
         echo "stopping mongo,pls waiting..."
  done
  # Finally the shard mongods (command line contains "shardsvr")
  check=`ps aux|grep mongo|grep shardsvr|awk '{print $2;}'|wc -l`
  echo $check
  while [ $check -gt 0 ]
  do
         no=`ps aux|grep mongo|grep shardsvr|awk '{print $2;}'|sed -n '1p'`
         kill -3 $no
         echo "kill $no mongo daemon is ok."
         sleep 2
         check=`ps aux|grep mongo|grep shardsvr|awk '{print $2;}'|wc -l`
         echo "stopping mongo,pls waiting..."
  done
  echo "all mongodb stopped!"
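An alternative to signalling the processes is asking each instance to shut down cleanly from the shell (a sketch; on a replica set primary you may need rs.stepDown() first):

  mongo 127.0.0.1:27021/admin --eval "db.shutdownServer()"
  mongo 127.0.0.1:27022/admin --eval "db.shutdownServer()"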

IX. Shard management
1. listshards: list all shards
 
 
  >use admin
  >db.runCommand({listshards:1})

2. Remove a shard
 
 
  >use admin
  >db.runCommand( { removeshard : "shard1/192.168.8.30:27021,192.168.8.31:27021" } )
  >db.runCommand( { removeshard : "shard2/192.168.8.30:27022,192.168.8.31:27022" } )

After a shard has been removed, adding the same shard back will fail. To re-add it, proceed as follows:
 
 
  mongos> use config
  switched to db config
  mongos> show collections
  changelog
  chunks
  collections
  databases
  lockpings
  locks
  mongos
  settings
  shards
  system.indexes
  version
  mongos> db.shards.find()
  { "_id" : "shard1", "host" : "shard1/192.168.8.30:27021,192.168.8.31:27021,192.168.8.32:27021", "maxSize" : NumberLong(20480) }
  { "_id" : "shard2", "host" : "shard2/192.168.8.30:27022,192.168.8.31:27022,192.168.8.32:27022", "maxSize" : NumberLong(20480) }
All you need to do is delete the removed shard's entry from the shards collection, then add the shard again.
E.g.: db.shards.remove({"_id":"shard2"})
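After hand-editing config.shards like this, it is prudent to tell each mongos to reload its cached metadata (a sketch):

  mongos> use admin
  mongos> db.runCommand({ flushRouterConfig : 1 })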
3. View sharding information
> printShardingStatus()
PRIMARY> db.system.replset.find()
PRIMARY> rs.isMaster()

How to make the mongos access layer itself highly available will be covered in the next installment.

