MongoDB offers three cluster architectures: master-slave replication (Master-Slave), replica sets (Replica Set), and sharding (Sharding).
Master-Slave is the original replication mode and is no longer recommended.
The Replica Set mode replaced Master-Slave; its members can take over the primary role from one another. A replica set keeps multiple copies of the same data on different servers and fails over automatically when a node goes down.
MongoDB replica sets exist primarily for high availability, similar to Redis Sentinel: their core purpose is data redundancy and failover.
Sharding mode targets large data volumes. Data is partitioned so that different servers hold different subsets, and the union of all servers' data is the complete data set.
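The idea behind sharding can be sketched in a few lines. This is an illustrative toy partitioner, not MongoDB's implementation: each shard holds a disjoint subset, and the union of the shards is the full collection.

```python
# Toy sketch of sharding: split one logical collection across servers so
# that each shard stores a different, disjoint subset of the documents.
documents = [{"_id": i, "name": "user%d" % i} for i in range(100)]

num_shards = 2
shards = {s: [] for s in range(num_shards)}
for doc in documents:
    # trivial partition function; MongoDB routes by ranges or hashes of a shard key
    shards[doc["_id"] % num_shards].append(doc)

total = sum(len(docs) for docs in shards.values())
print(total)  # 100: nothing lost, nothing duplicated
```

No document is stored twice, and no query result is lost: the router (mongos, introduced below) hides this split from the client.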
This document focuses on building and operating a sharded cluster.
Building a MongoDB sharded cluster requires three components: shard servers (Shard Server), config servers (Config Server), and routers (Router Server).
Shard Server
Each shard server is a mongod instance that stores a portion of the actual data; the collection as a whole is split into chunks distributed across the shard servers. In production, each shard is normally a replica set spanning several machines, so that a single primary failure cannot take down the whole system.
Config Server
A config server is a separate mongod process that stores all of the cluster metadata (routing and sharding configuration). mongos does not persist shard and routing information itself; it only caches it in memory, while the config servers hold the authoritative copy. When a mongos starts (or is restarted) it loads its configuration from the config servers, and later metadata changes are propagated to every mongos so that routing stays accurate. Production deployments run multiple config servers, because this metadata is what makes the sharded data reachable and must not be lost to a single point of failure.
Router Server
The router is a separate mongos process (not a mongod). Clients connect through it, so the whole cluster looks like a single database: it is the interface between client applications and the sharded cluster. mongos stores no data itself. At startup it loads the cluster metadata from the config servers into its cache, routes each client request to the appropriate shard server(s), aggregates the results the shards return, and sends them back to the client.
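Because the routers are the single entry point, an application connects to them exactly as it would to a standalone server. With the mongos ports used later in this document and the admin user created during setup, a connection string might look like the following (host, port, and credentials are illustrative):

```
mongodb://admin:123456@127.0.0.1:30001,127.0.0.1:30002/admin
```

Listing both mongos instances lets the driver fail over between routers.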
In real production environments, replica sets and sharding are combined, covering both the high-availability and the horizontal-scalability requirements of actual workloads.
Replica set overview
replica set
In this architecture a replica set is effectively the redundant copy of a shard, protecting against data loss when a shard member dies. Replication stores copies of the data on multiple servers, which improves availability and safeguards the data.
An arbiter (Arbiter) is a MongoDB instance in a replica set that holds no data. It needs minimal resources and no dedicated hardware. Do not deploy the arbiter on a machine that already hosts a data-bearing member of the same set; put it on an application server, a monitoring host, or a separate VM. Its job is to keep the number of voting members (including the primary) odd; without that, the set cannot automatically elect a new primary when the current primary fails. A secondary is the primary's standby: when the primary fails, the arbiter's vote lets a secondary become the new primary, and when the old primary recovers it rejoins as a secondary.
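The arbiter's value follows from the election-majority rule: a primary can only be elected (or remain primary) while a strict majority of voting members is reachable. A minimal sketch of the arithmetic (my own illustration, not MongoDB code):

```python
# A replica set keeps a primary only while a strict majority of voting
# members is reachable: majority = floor(n / 2) + 1.
def majority(voting_members: int) -> int:
    return voting_members // 2 + 1

# Two data nodes, no arbiter: majority is 2, so losing either node
# leaves only 1 reachable voter and no primary can be elected.
print(majority(2))  # 2

# Two data nodes plus an arbiter: majority is still 2, so the set
# survives the loss of any single member.
print(majority(3))  # 2
```

This is why adding a data-less arbiter to a two-node set buys fault tolerance at almost no hardware cost.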
Disable transparent huge pages:

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag

To make this persist across reboots, append it to /etc/rc.d/rc.local:

[root@localhost ~]# vim /etc/rc.d/rc.local

Add the following:

if test -f /sys/kernel/mm/transparent_hugepage/enabled; then
  echo never > /sys/kernel/mm/transparent_hugepage/enabled
fi
if test -f /sys/kernel/mm/transparent_hugepage/defrag; then
  echo never > /sys/kernel/mm/transparent_hugepage/defrag
fi

Save and exit, then make rc.local executable:

[root@localhost ~]# chmod +x /etc/rc.d/rc.local

Finally, reboot.
Start every mongod/mongos as the unprivileged mongo user with interleaved NUMA memory allocation, i.e. prefix each start command with:

sudo -u mongo numactl --interleave=all
Adjust kernel parameters (disable NUMA zone reclaim):

echo 0 > /proc/sys/vm/zone_reclaim_mode
echo vm.zone_reclaim_mode = 0 >> /etc/sysctl.conf
Lower swappiness. Temporary change:

sysctl -w vm.swappiness=0

Permanent change (vim /etc/sysctl.conf):

vm.swappiness = 0
Raise resource limits for the mongo user (vim /etc/security/limits.conf):

mongo soft nofile 65535
mongo hard nofile 65535
mongo soft nproc 65535
mongo hard nproc 65535
mongo soft stack 10240
mongo hard stack 10240
| Node     | IP        | Port  | Role                             |
|----------|-----------|-------|----------------------------------|
| Mongos1  | 127.0.0.1 | 30001 | mongos (router)                  |
| Mongos2  | 127.0.0.1 | 30002 | mongos (router)                  |
| Config1  | 127.0.0.1 | 20001 | config server (primary)          |
| Config2  | 127.0.0.1 | 20002 | config server (secondary)        |
| Config3  | 127.0.0.1 | 20003 | config server (secondary)        |
| Shard1_1 | 127.0.0.1 | 27017 | shard 1, data node 1 (primary)   |
| Shard1_2 | 127.0.0.1 | 27018 | shard 1, data node 2 (secondary) |
| Shard1_3 | 127.0.0.1 | 27019 | shard 1, data node 3 (secondary) |
| Shard2_1 | 127.0.0.1 | 28017 | shard 2, data node 1 (primary)   |
| Shard2_2 | 127.0.0.1 | 28018 | shard 2, data node 2 (secondary) |
| Shard2_3 | 127.0.0.1 | 28019 | shard 2, data node 3 (secondary) |
cd /u01
tar -zxvf mongodb-linux-x86_64-rhel70-4.4.26.tgz
Rename the extracted directory:

mv mongodb-linux-x86_64-rhel70-4.4.26 mongodb
Create the directories:

cd /data/mongodb
mkdir key config1
cd /data/mongodb/config1
mkdir data log conf
cd /data/mongodb
cp -r config1 config2
cp -r config1 config3
cp -r config1 shard1_1
cp -r config1 shard1_2
cp -r config1 shard1_3
cp -r config1 shard2_1
cp -r config1 shard2_2
cp -r config1 shard2_3
cp -r config1 mongos1
cp -r config1 mongos2
Create the service account:

groupadd mongo
useradd -M -g mongo mongo
Generate the keyfile (if the cluster spans multiple hosts, copy it to every other host):

openssl rand -base64 259 > /data/mongodb/key/mongo_cluster.key
chmod 600 /data/mongodb/key/mongo_cluster.key
Change directory ownership:

chown -R mongo:mongo /data/mongodb
systemLog:
  destination: file
  path: /data/mongodb/config1/log/mongod.log   # log path
  logAppend: true
  logRotate: reopen
  timeStampFormat: "iso8601-local"
storage:
  dbPath: /data/mongodb/config1/data   # data directory
  journal:            # enable journaling
    enabled: true
  directoryPerDB: true
  syncPeriodSecs: 60
  engine: wiredTiger  # storage engine
  wiredTiger:
    engineConfig:
      cacheSizeGB: 10
      journalCompressor: "snappy"
      directoryForIndexes: false
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
net:
  port: 20001   # port
  bindIpAll: true
  maxIncomingConnections: 50000
  wireObjectCheck: true
  ipv6: false
  unixDomainSocket:
    enabled: true
    pathPrefix: "/data/mongodb/config1/tmp"
    filePermissions: 0700
processManagement:
  fork: true
  pidFilePath: /data/mongodb/config1/mongodb.pid
security:
  keyFile: "/data/mongodb/key/mongo_cluster.key"
  clusterAuthMode: "keyFile"
  authorization: "enabled"
  javascriptEnabled: true
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
replication:
  oplogSizeMB: 20480
  replSetName: "configset"   # replica set name
sharding:
  clusterRole: configsvr     # cluster role: config server
Adapt the paths, port, and pid file in this configuration for the other two config nodes.
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/config1/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/config2/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/config3/conf/mongodb.conf
use admin
cfg = {
  _id: "configset",
  members: [
    {_id: 0, host: "localhost:20001"},
    {_id: 1, host: "localhost:20002"},
    {_id: 2, host: "localhost:20003"}
  ]
}
rs.initiate(cfg);
Result:
> cfg={_id:"configset",
... members:[
... {_id:0,host:"localhost:20001"},
... {_id:1,host:"localhost:20002"},
... {_id:2,host:"localhost:20003"}]
... }
{
    "_id" : "configset",
    "members" : [
        { "_id" : 0, "host" : "localhost:20001" },
        { "_id" : 1, "host" : "localhost:20002" },
        { "_id" : 2, "host" : "localhost:20003" }
    ]
}
> rs.initiate(cfg);
{
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1704942959, 1),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "lastCommittedOpTime" : Timestamp(0, 0)
}
rs.status()
{
    "set" : "configset", "date" : ISODate("2024-01-11T03:17:44.330Z"),
    "myState" : 1, "term" : NumberLong(1),
    "syncSourceHost" : "", "syncSourceId" : -1, "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 2, "writeMajorityCount" : 2,
    "votingMembersCount" : 3, "writableVotingMembersCount" : 3,
    "members" : [
        {
            "_id" : 0, "name" : "localhost:20001", "health" : 1,
            "state" : 1, "stateStr" : "PRIMARY", "uptime" : 1395,
            "optime" : { "ts" : Timestamp(1704943063, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T03:17:43Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T03:17:43.985Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T03:17:43.985Z"),
            "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "Could not find member to sync from",
            "electionTime" : Timestamp(1704942969, 1),
            "electionDate" : ISODate("2024-01-11T03:16:09Z"),
            "configVersion" : 1, "configTerm" : 1, "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "localhost:20002", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 104,
            "optime" : { "ts" : Timestamp(1704943062, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1704943062, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T03:17:42Z"),
            "optimeDurableDate" : ISODate("2024-01-11T03:17:42Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T03:17:43.985Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T03:17:43.985Z"),
            "lastHeartbeat" : ISODate("2024-01-11T03:17:43.872Z"),
            "lastHeartbeatRecv" : ISODate("2024-01-11T03:17:42.973Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncSourceHost" : "localhost:20001", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1, "configTerm" : 1
        },
        {
            "_id" : 2, "name" : "localhost:20003", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 104,
            "optime" : { "ts" : Timestamp(1704943062, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1704943062, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T03:17:42Z"),
            "optimeDurableDate" : ISODate("2024-01-11T03:17:42Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T03:17:43.985Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T03:17:43.985Z"),
            "lastHeartbeat" : ISODate("2024-01-11T03:17:43.872Z"),
            "lastHeartbeatRecv" : ISODate("2024-01-11T03:17:42.973Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncSourceHost" : "localhost:20001", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1, "configTerm" : 1
        }
    ],
    "ok" : 1,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1704942959, 1),
        "electionId" : ObjectId("7fffffff0000000000000001")
    },
    "lastCommittedOpTime" : Timestamp(1704943063, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1704943063, 1),
        "signature" : {
            "hash" : BinData(0,"oTJEVY6xZIgckyqV7XQ8rNE6e4Y="),
            "keyId" : NumberLong("7322674293400141847")
        }
    },
    "operationTime" : Timestamp(1704943063, 1)
}
use admin
db.createUser({user: "admin", pwd: "123456", roles: [{role: "root", db: "admin"}]})
db.auth("admin", "123456");
systemLog:
  destination: file
  path: /data/mongodb/shard1_1/log/mongod.log   # log path
  logAppend: true
  logRotate: reopen
  timeStampFormat: "iso8601-local"
storage:
  dbPath: /data/mongodb/shard1_1/data   # data directory
  journal:            # enable journaling
    enabled: true
  directoryPerDB: true
  syncPeriodSecs: 60
  engine: wiredTiger  # storage engine
  wiredTiger:
    engineConfig:
      cacheSizeGB: 10
      journalCompressor: "snappy"
      directoryForIndexes: false
    collectionConfig:
      blockCompressor: "snappy"
    indexConfig:
      prefixCompression: true
net:
  port: 27017   # port
  bindIpAll: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/shard1_1/mongodb.pid
security:
  keyFile: "/data/mongodb/key/mongo_cluster.key"
  clusterAuthMode: "keyFile"
  authorization: "enabled"
  javascriptEnabled: true
operationProfiling:
  slowOpThresholdMs: 100
  mode: slowOp
replication:
  oplogSizeMB: 20480
  replSetName: "shard1"
sharding:
  clusterRole: shardsvr   # cluster role: shard node
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/shard1_1/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/shard1_2/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/shard1_3/conf/mongodb.conf
use admin
cfg = {
  _id: "shard1",
  members: [
    {_id: 0, host: "localhost:27017"},
    {_id: 1, host: "localhost:27018"},
    {_id: 2, host: "localhost:27019"}
  ]
}
rs.initiate(cfg);
Result:
> use admin;
switched to db admin
> cfg={_id:"shard1",
... members:[
... {_id:0,host:"localhost:27017"},
... {_id:1,host:"localhost:27018"},
... {_id:2,host:"localhost:27019"}]
... }
{
    "_id" : "shard1",
    "members" : [
        { "_id" : 0, "host" : "localhost:27017" },
        { "_id" : 1, "host" : "localhost:27018" },
        { "_id" : 2, "host" : "localhost:27019" }
    ]
}
> rs.initiate(cfg);
{ "ok" : 1 }
> rs.status()
{
    "set" : "shard1", "date" : ISODate("2024-01-11T06:23:06.964Z"),
    "myState" : 1, "term" : NumberLong(1),
    "syncSourceHost" : "", "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 2, "writeMajorityCount" : 2,
    "votingMembersCount" : 3, "writableVotingMembersCount" : 3,
    "members" : [
        {
            "_id" : 0, "name" : "localhost:27017", "health" : 1,
            "state" : 1, "stateStr" : "PRIMARY", "uptime" : 8442,
            "optime" : { "ts" : Timestamp(1704954182, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T06:23:02Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T06:23:02.177Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T06:23:02.177Z"),
            "syncSourceHost" : "", "syncSourceId" : -1,
            "infoMessage" : "Could not find member to sync from",
            "electionTime" : Timestamp(1704954092, 1),
            "electionDate" : ISODate("2024-01-11T06:21:32Z"),
            "configVersion" : 1, "configTerm" : -1, "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "localhost:27018", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 105,
            "optime" : { "ts" : Timestamp(1704954182, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1704954182, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T06:23:02Z"),
            "optimeDurableDate" : ISODate("2024-01-11T06:23:02Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T06:23:02.177Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T06:23:02.177Z"),
            "lastHeartbeat" : ISODate("2024-01-11T06:23:06.162Z"),
            "lastHeartbeatRecv" : ISODate("2024-01-11T06:23:05.584Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncSourceHost" : "localhost:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1, "configTerm" : -1
        },
        {
            "_id" : 2, "name" : "localhost:27019", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 105,
            "optime" : { "ts" : Timestamp(1704954182, 1), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1704954182, 1), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T06:23:02Z"),
            "optimeDurableDate" : ISODate("2024-01-11T06:23:02Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T06:23:02.177Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T06:23:02.177Z"),
            "lastHeartbeat" : ISODate("2024-01-11T06:23:06.162Z"),
            "lastHeartbeatRecv" : ISODate("2024-01-11T06:23:05.580Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncSourceHost" : "localhost:27017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1, "configTerm" : -1
        }
    ],
    "ok" : 1
}
use admin
db.createUser({user: "admin", pwd: "123456", roles: [{role: "root", db: "admin"}]})
db.auth("admin", "123456");
Repeat the same steps for shard2.
Start and initialize shard2
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/shard2_1/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/shard2_2/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongod -f /data/mongodb/shard2_3/conf/mongodb.conf
use admin
cfg = {
  _id: "shard2",
  members: [
    {_id: 0, host: "localhost:28017"},
    {_id: 1, host: "localhost:28018"},
    {_id: 2, host: "localhost:28019"}
  ]
}
rs.initiate(cfg);
Check the replica set status
shard2:PRIMARY> rs.status()
{
    "set" : "shard2", "date" : ISODate("2024-01-11T06:39:37.061Z"),
    "myState" : 1, "term" : NumberLong(1),
    "syncSourceHost" : "", "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "majorityVoteCount" : 2, "writeMajorityCount" : 2,
    "votingMembersCount" : 3, "writableVotingMembersCount" : 3,
    "members" : [
        {
            "_id" : 0, "name" : "localhost:28017", "health" : 1,
            "state" : 1, "stateStr" : "PRIMARY", "uptime" : 251,
            "optime" : { "ts" : Timestamp(1704955172, 5), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T06:39:32Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T06:39:32.976Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T06:39:32.976Z"),
            "syncSourceHost" : "", "syncSourceId" : -1, "infoMessage" : "",
            "electionTime" : Timestamp(1704955172, 1),
            "electionDate" : ISODate("2024-01-11T06:39:32Z"),
            "configVersion" : 1, "configTerm" : -1, "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1, "name" : "localhost:28018", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 14,
            "optime" : { "ts" : Timestamp(1704955172, 5), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1704955172, 5), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T06:39:32Z"),
            "optimeDurableDate" : ISODate("2024-01-11T06:39:32Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T06:39:32.976Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T06:39:32.976Z"),
            "lastHeartbeat" : ISODate("2024-01-11T06:39:36.962Z"),
            "lastHeartbeatRecv" : ISODate("2024-01-11T06:39:36.706Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncSourceHost" : "localhost:28019", "syncSourceId" : 2,
            "infoMessage" : "", "configVersion" : 1, "configTerm" : -1
        },
        {
            "_id" : 2, "name" : "localhost:28019", "health" : 1,
            "state" : 2, "stateStr" : "SECONDARY", "uptime" : 14,
            "optime" : { "ts" : Timestamp(1704955172, 5), "t" : NumberLong(1) },
            "optimeDurable" : { "ts" : Timestamp(1704955172, 5), "t" : NumberLong(1) },
            "optimeDate" : ISODate("2024-01-11T06:39:32Z"),
            "optimeDurableDate" : ISODate("2024-01-11T06:39:32Z"),
            "lastAppliedWallTime" : ISODate("2024-01-11T06:39:32.976Z"),
            "lastDurableWallTime" : ISODate("2024-01-11T06:39:32.976Z"),
            "lastHeartbeat" : ISODate("2024-01-11T06:39:36.963Z"),
            "lastHeartbeatRecv" : ISODate("2024-01-11T06:39:35.704Z"),
            "pingMs" : NumberLong(0), "lastHeartbeatMessage" : "",
            "syncSourceHost" : "localhost:28017", "syncSourceId" : 0,
            "infoMessage" : "", "configVersion" : 1, "configTerm" : -1
        }
    ],
    "ok" : 1
}
Create the admin user
use admin
db.createUser({user: "admin", pwd: "123456", roles: [{role: "root", db: "admin"}]})
db.auth("admin", "123456");
mongos routes all queries and writes and is the single entry point for client access. It is a stateless node; every mongos fetches its configuration from the config servers.
Notes:
mongos stores no data, so no storage section is needed.
mongos is not a replica set member, so no replication section is needed.
mongos must be configured with the configDB information.
systemLog:
  destination: file
  path: /data/mongodb/mongos2/log/mongod.log   # log path
  logAppend: true
  logRotate: reopen
  timeStampFormat: "iso8601-local"
#storage:      # mongos stores no data
net:
  port: 30002   # port
  bindIpAll: true
processManagement:
  fork: true
  pidFilePath: /data/mongodb/mongos2/mongodb.pid
security:
  keyFile: "/data/mongodb/key/mongo_cluster.key"
  clusterAuthMode: "keyFile"
#replication:  # mongos is not a replica set member
sharding:
  configDB: configset/127.0.0.1:20001,127.0.0.1:20002,127.0.0.1:20003   # config server replica set and members
sudo -u mongo numactl --interleave=all mongos -f /data/mongodb/mongos1/conf/mongodb.conf
sudo -u mongo numactl --interleave=all mongos -f /data/mongodb/mongos2/conf/mongodb.conf
Check the router status
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("659f5d79606c4a0b3f65bd66") }
  shards:
  active mongoses:
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
mongos itself keeps no state: shard registration issued through any one mongos is stored on the config servers and becomes visible to every router.

Add the shard1 and shard2 replica sets as shards:
sh.addShard("shard1/127.0.0.1:27017,127.0.0.1:27018,127.0.0.1:27019")
sh.addShard("shard2/127.0.0.1:28017,127.0.0.1:28018,127.0.0.1:28019")
mongos> sh.addShard("shard1/localhost:27017,localhost:27018,localhost:27019")
{
    "shardAdded" : "shard1",
    "ok" : 1,
    "operationTime" : Timestamp(1704958302, 6),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1704958302, 6),
        "signature" : {
            "hash" : BinData(0,"EGVAx43UOwaSIDFvR/vqvbATiQs="),
            "keyId" : NumberLong("7322674293400141847")
        }
    }
}
mongos> sh.addShard("shard2/localhost:28017,localhost:28018,localhost:28019")
{
    "shardAdded" : "shard2",
    "ok" : 1,
    "operationTime" : Timestamp(1704958469, 4),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1704958469, 4),
        "signature" : {
            "hash" : BinData(0,"PNzHtyzXRmDaVE0Y+rnDLKg/L8E="),
            "keyId" : NumberLong("7322674293400141847")
        }
    }
}
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("659f5d79606c4a0b3f65bd66") }
  shards:
        { "_id" : "shard1", "host" : "shard1/localhost:27017,localhost:27018,localhost:27019", "state" : 1 }
        { "_id" : "shard2", "host" : "shard2/localhost:28017,localhost:28018,localhost:28019", "state" : 1 }
  active mongoses:
        "4.4.26" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                405 : Success
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  619
                                shard2  405
                        too many chunks to print, use verbose if you want to force print
By default, newly inserted data is not sharded. All of the following operations are issued through the mongos router.
Enable sharding for a database: sh.enableSharding("test_employ")
Enable sharding for a collection: sh.shardCollection("test_employ.employ_datas", {"name": "hashed"})
## hash-shards the employ_datas collection in the test_employ database on its name field
mongos> sh.enableSharding("test_employ")
{
    "ok" : 1,
    "operationTime" : Timestamp(1704960417, 9),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1704960417, 9),
        "signature" : {
            "hash" : BinData(0,"79mTp8DzV5l7rNeGwerhUvXsID8="),
            "keyId" : NumberLong("7322674293400141847")
        }
    }
}
mongos> sh.shardCollection("test_employ.employ_datas",{"name":"hashed"})
{
    "collectionsharded" : "test_employ.employ_datas",
    "collectionUUID" : UUID("4a5ccf70-259e-4910-b613-21fa29cc9f41"),
    "ok" : 1,
    "operationTime" : Timestamp(1704961475, 25),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1704961475, 25),
        "signature" : {
            "hash" : BinData(0,"IcfmJfwNDiKEVIydLb4pnDIC7xU="),
            "keyId" : NumberLong("7322674293400141847")
        }
    }
}
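With a hashed shard key, mongos hashes the key value to a signed 64-bit integer and routes the document to the chunk whose range contains that hash; with the initial four chunks, everything hashing below 0 lands on one shard and the rest on the other. The sketch below imitates that routing with a stand-in hash (MongoDB's real hashed index uses an MD5-based 64-bit hash of the BSON value, which this does not reproduce exactly; the shard names match this document's cluster):

```python
import hashlib

# Stand-in for MongoDB's hashed shard key: map the key string to a signed
# 64-bit integer, then route by sign — a simplification of the two-shard
# chunk ranges that sh.status() reports for this cluster.
def hash64(value: str) -> int:
    digest = hashlib.md5(value.encode()).digest()
    return int.from_bytes(digest[:8], "little", signed=True)

def route(name: str) -> str:
    return "shard1" if hash64(name) < 0 else "shard2"

# Route the same 500 synthetic names the insert test below uses.
counts = {"shard1": 0, "shard2": 0}
for i in range(1, 501):
    counts[route("test01" + str(i))] += 1
print(counts)  # roughly balanced between the two shards
```

The even spread is the point of choosing a hashed key: monotonically increasing names would otherwise pile up in a single chunk.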
Check the sharded cluster status
mongos> sh.status()
--- Sharding Status ---
  sharding version: { "_id" : 1, "minCompatibleVersion" : 5, "currentVersion" : 6, "clusterId" : ObjectId("659f5d79606c4a0b3f65bd66") }
  shards:
        { "_id" : "shard1", "host" : "shard1/localhost:27017,localhost:27018,localhost:27019", "state" : 1 }
        { "_id" : "shard2", "host" : "shard2/localhost:28017,localhost:28018,localhost:28019", "state" : 1 }
  active mongoses:
        "4.4.26" : 2
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled: yes
        Currently running: no
        Failed balancer rounds in last 5 attempts: 0
        Migration Results for the last 24 hours:
                512 : Success
  databases:
        { "_id" : "config", "primary" : "config", "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  512
                                shard2  512
                        too many chunks to print, use verbose if you want to force print
        { "_id" : "test_employ", "primary" : "shard2", "partitioned" : true, "version" : { "uuid" : UUID("7fff4bee-c4e4-47c4-a23c-1b7daed90a1b"), "lastMod" : 1 } }
                test_employ.employ_datas
                        shard key: { "name" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  2
                                shard2  2
                        { "name" : { "$minKey" : 1 } } -->> { "name" : NumberLong("-4611686018427387902") } on : shard1 Timestamp(1, 0)
                        { "name" : NumberLong("-4611686018427387902") } -->> { "name" : NumberLong(0) } on : shard1 Timestamp(1, 1)
                        { "name" : NumberLong(0) } -->> { "name" : NumberLong("4611686018427387902") } on : shard2 Timestamp(1, 2)
                        { "name" : NumberLong("4611686018427387902") } -->> { "name" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 3)
The output confirms that sharded storage is enabled for the test_employ database and its collection.
Check that both shards picked up the sharded database
shard1:PRIMARY> show dbs;
admin        0.000GB
config       0.001GB
local        0.001GB
test_employ  0.000GB
shard1:PRIMARY>
shard2:PRIMARY> show dbs;
admin        0.000GB
config       0.001GB
local        0.001GB
test_employ  0.000GB
shard2:PRIMARY>
Test whether inserted documents are spread across the two shard replica sets
mongos> use test_employ
switched to db test_employ
mongos> for (i=1; i<=500; i++){ db.employ_datas.insert({name: 'test01' + i, age: i}) }
WriteResult({ "nInserted" : 1 })
mongos> db.employ_datas.count()
500
mongos>
Check how the documents are distributed across the shards
shard1:PRIMARY> show tables;
employ_datas
shard1:PRIMARY> db.employ_datas.count()
272
shard1:PRIMARY>
shard2:PRIMARY> show tables;
employ_datas
shard2:PRIMARY> db.employ_datas.count()
228
shard2:PRIMARY>
Note: the cluster tolerates losing one config server or one mongos without affecting the data. If you bring up a brand-new router, point it at the same config server replica set (configDB) so that it picks up the existing shard and collection metadata; a router without that metadata would not spread inserts across the cluster, and writes would land on a single database instance.