MongoDB Sharded Cluster Setup and Deployment

Table of Contents

  • 1. Environment Planning
  • 2. Installation and Deployment
    • 2.1 Package installation
    • 2.2 config server deployment
    • 2.3 shard replica set deployment
    • 2.4 mongos router deployment
  • 3. Sharded Cluster Functional Test

A MongoDB sharded cluster is made up of the following components:

  • shard : stores the actual sharded data; each shard can be deployed as a replica set
  • mongos : acts as the query router, coordinating and routing client requests to the appropriate shards
  • config servers : store the cluster's metadata and configuration

[Figure 1: MongoDB sharded cluster architecture]
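
Once the cluster described below is running, applications connect only through mongos and never to the shards directly. A minimal connection sketch (not part of the original steps), using the host/port plan from section 1:

# mongo --host 172.16.104.12 --port 20000        # any of the three mongos nodes works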

1. Environment Planning

1. Server roles

Server   IP              Roles
sdw1     172.16.104.12   shard1 (primary), shard2 (secondary), shard3 (secondary), mongos, config server
sdw2     172.16.104.13   shard1 (secondary), shard2 (primary), shard3 (secondary), mongos, config server
sdw3     172.16.104.14   shard1 (secondary), shard2 (secondary), shard3 (primary), mongos, config server

2. Port plan

Role            Port
shard1          27001
shard2          27002
shard3          27003
mongos          20000
config server   21000

3. Directory plan

Role            Directory
config server   /data/mongodb40/config/{data,logs,conf}
shard           /data/mongodb40/shard{N}/{data,logs,conf}
mongos          /data/mongodb40/mongos/{logs,conf}

Here, data holds the data files, logs holds the log files, and conf stores the node's config file and keyfile.

2. Installation and Deployment

2.1 Package installation

1. Download and unpack

-- MongoDB download page
https://www.mongodb.com/try/download/community

-- Download 4.0.22 (the version used throughout this article)
# wget -c https://fastdl.mongodb.org/linux/mongodb-linux-x86_64-rhel70-4.0.22.tgz

-- Unpack
# tar xf mongodb-linux-x86_64-rhel70-4.0.22.tgz -C /usr/local/
# mv /usr/local/mongodb-linux-x86_64-rhel70-4.0.22/ /usr/local/mongodb40

2. Environment configuration

# vim /root/.bash_profile
PATH=$PATH:$HOME/bin:/usr/local/mongodb40/bin

# source /root/.bash_profile
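
To confirm the binaries are now on the PATH (an optional check, not part of the original steps):

# mongod --version
# mongos --version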

3. Create the planned directories

# mkdir -p /data/mongodb40/{shard1,shard2,shard3,config}/{data,logs,conf}
# mkdir -p /data/mongodb40/mongos/{conf,logs}
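
To verify the result matches the directory plan above (an optional check; the tree package may need to be installed separately):

# tree -d -L 2 /data/mongodb40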

2.2 config server deployment

1. Configuration file

[root@sdw1 conf]# cat /data/mongodb40/config/conf/config.conf
systemLog:
   verbosity: 0
   quiet: false
   traceAllExceptions: false
   path: "/data/mongodb40/config/logs/config.log"               # 日志目录
   logAppend: true                                              # 日志追加写入
   logRotate: reopen
   destination: file
processManagement:
   fork: true
   pidFilePath: "/data/mongodb40/config/conf/config.pid"        # pid文件
net:
   port: 21000                                                  # config server listening port (key setting)
   bindIp: 0.0.0.0                                              # bind address (key setting)
   maxIncomingConnections: 2000
   wireObjectCheck: true
security:
   keyFile: "/data/mongodb40/config/conf/KeyFile.file"          # Keyfile文件
   clusterAuthMode: keyFile
   authorization: enabled                                       # enable user/password authentication
storage:
   dbPath: /data/mongodb40/config/data                          # data directory
   journal:
      enabled: true
      commitIntervalMs: 100
   directoryPerDB: true
   syncPeriodSecs: 60
   engine: wiredTiger
   wiredTiger:
      engineConfig:
         cacheSizeGB: 256
         #journalCompressor: snappy
         directoryForIndexes: false
      collectionConfig:
         blockCompressor: snappy
      indexConfig:
         prefixCompression: false
operationProfiling:
   slowOpThresholdMs: 100
   mode: slowOp
replication:
   #oplogSizeMB: 1024
   replSetName: configs
   secondaryIndexPrefetch: all
   enableMajorityReadConcern: false
sharding:
   clusterRole: configsvr                                       # marks this node as a config server (key setting)

2. Generate the keyfile

# openssl rand -base64 90 -out  /data/mongodb40/config/conf/KeyFile.file
# chmod 600  /data/mongodb40/config/conf/KeyFile.file
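
The same keyfile must be byte-for-byte identical, with 600 permissions, on every node of the cluster. A quick way to verify after copying it around (optional check):

# md5sum /data/mongodb40/config/conf/KeyFile.file        # run on each host and compare the hashes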

3. Copy the keyfile and config file from the first node to the other config server nodes

# scp -r ./config.conf ./KeyFile.file sdw2:/data/mongodb40/config/conf/
# scp -r ./config.conf ./KeyFile.file sdw3:/data/mongodb40/config/conf/

4. Start each member of the config server replica set

# mongod -f /data/mongodb40/config/conf/config.conf  &

5. Initialize the replica set

Log in to any config server node and initialize the config server replica set.

> cnf = {"_id":"configs","members":[{"_id":1,"host":"172.16.104.12:21000","priority":1 },{"_id":2, "host":"172.16.104.13:21000","priority":1},{"_id":3, "host":"172.16.104.14:21000","priority":1}]}
> rs.initiate(cnf)
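
Once rs.initiate() returns { "ok" : 1 }, the member states can be confirmed before moving on (an optional check run in the same shell):

> rs.status().members.forEach(function (m) { print(m.name + "  " + m.stateStr) })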

6. Create an administrative user

configs:PRIMARY> db.createUser({"user":"root","pwd":"123","roles":[{role:"root",db:"admin"}]})
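
To verify the new account, reconnect with authentication (a sketch; the password is the demo value used above):

# mongo --port 21000 -u root -p 123 --authenticationDatabase admin --eval 'db.runCommand({connectionStatus: 1})'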

2.3 shard replica set deployment

1. Configuration file

[root@sdw1 conf]# cat /data/mongodb40/shard1/conf/config.conf
systemLog:
   verbosity: 0
   quiet: false
   traceAllExceptions: false
   path: "/data/mongodb40/shard1/logs/config.log"
   logAppend: true
   logRotate: reopen
   destination: file
processManagement:
   fork: true
   pidFilePath: "/data/mongodb40/shard1/conf/config.pid"
net:
   port: 27001                                              # shard1 replica set member port
   bindIp: 0.0.0.0                                          # bind address
   maxIncomingConnections: 2000
   wireObjectCheck: true
security:
   keyFile: "/data/mongodb40/shard1/conf/KeyFile.file"
   clusterAuthMode: keyFile
   authorization: enabled
   #authorization: disabled
storage:
   dbPath: /data/mongodb40/shard1/data
   #indexBuildRetry: true
   journal:
      enabled: true
      commitIntervalMs: 100
   directoryPerDB: true
   syncPeriodSecs: 60
   engine: wiredTiger
   wiredTiger:
      engineConfig:
         cacheSizeGB: 256
         #journalCompressor: snappy
         directoryForIndexes: false
      collectionConfig:
         blockCompressor: snappy
      indexConfig:
         prefixCompression: false
operationProfiling:
   slowOpThresholdMs: 100
   mode: slowOp
replication:
   #oplogSizeMB: 1024
   replSetName: shard1
   secondaryIndexPrefetch: all
   enableMajorityReadConcern: false
sharding:
   clusterRole: shardsvr                                    # marks this node as a shard member

2. Copy the keyfile and config file to the corresponding shard directories

-- Reuse the keyfile generated earlier; note the destination is the shard1 conf directory on every node
# cp /data/mongodb40/config/conf/KeyFile.file /data/mongodb40/shard1/conf/
# scp -r ./config.conf ./KeyFile.file sdw2:/data/mongodb40/shard1/conf/
# scp -r ./config.conf ./KeyFile.file sdw3:/data/mongodb40/shard1/conf/

3. Start each shard1 member

# mongod -f /data/mongodb40/shard1/conf/config.conf &

4. Initialize the replica set

Log in to any shard1 member and initialize the replica set.

> cnf = {"_id":"shard1","members":[{"_id":1,"host":"172.16.104.12:27001","priority":10 },{"_id":2, "host":"172.16.104.13:27001","hidden":true,"priority":0},{"_id":3, "host":"172.16.104.14:27001","priority":1}]}
> rs.initiate(cnf)

5. Repeat the steps above for shard2 and shard3. The initialization documents for each shard replica set are listed below; a worked example for shard2 follows the list.

-- shard1
cnf = {"_id":"shard1","members":[{"_id":1,"host":"172.16.104.12:27001","priority":10 },{"_id":2, "host":"172.16.104.13:27001","priority":1},{"_id":3, "host":"172.16.104.14:27001","priority":1}]}

-- shard2
cnf = {"_id":"shard2","members":[{"_id":1,"host":"172.16.104.12:27002","priority":1 },{"_id":2, "host":"172.16.104.13:27002","priority":10},{"_id":3, "host":"172.16.104.14:27002","priority":1}]}

-- shard3
cnf = {"_id":"shard3","members":[{"_id":1,"host":"172.16.104.12:27003","priority":1},{"_id":2, "host":"172.16.104.13:27003","priority":1},{"_id":3, "host":"172.16.104.14:27003","priority":10}]}

6. Create an administrative user on the primary of each shard (shard1, shard2, and shard3)

use admin
db.createUser({"user":"root","pwd":"123","roles":[{role:"root",db:"admin"}]})
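
For reference, under the priority values above the initial primaries should be the following members; a sketch for reaching each one (check rs.status() first if an election may have changed things):

# mongo --host 172.16.104.12 --port 27001        # shard1 primary
# mongo --host 172.16.104.13 --port 27002        # shard2 primary
# mongo --host 172.16.104.14 --port 27003        # shard3 primary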

2.4 mongos router deployment

1. Configuration file

[root@sdw1 conf]# cat /data/mongodb40/mongos/conf/config.conf
systemLog:
   verbosity: 0
   quiet: false
   traceAllExceptions: false
   path: "/data/mongodb40/mongos/logs/config.log"
   logAppend: true
   logRotate: reopen
   destination: file
processManagement:
   fork: true
   pidFilePath: "/data/mongodb40/mongos/conf/config.pid"
net:
   port: 20000
   bindIp: 0.0.0.0
   maxIncomingConnections: 2000
   wireObjectCheck: true
security:
   keyFile: "/data/mongodb40/mongos/conf/KeyFile.file"
   clusterAuthMode: keyFile
sharding:
   configDB: configs/172.16.104.12:21000,172.16.104.13:21000,172.16.104.14:21000            # configDB: <config replica set name>/<config server host:port list>

2. Keyfile

-- Reuse the keyfile generated earlier; note the destination is the mongos conf directory on every node
# cp /data/mongodb40/config/conf/KeyFile.file /data/mongodb40/mongos/conf/
# scp -r ./config.conf ./KeyFile.file sdw2:/data/mongodb40/mongos/conf/
# scp -r ./config.conf ./KeyFile.file sdw3:/data/mongodb40/mongos/conf/

3. Start each mongos node; note that the mongos binary is used here, not mongod

# mongos -f /data/mongodb40/mongos/conf/config.conf &
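
To confirm the router is up and listening on port 20000 (an optional check):

# ss -lntp | grep ':20000'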

4. Add the shards to the cluster

Log in to any mongos node and add the corresponding shards.

-- Log in to a mongos node and authenticate
# mongo --port 20000
mongos> use admin
switched to db admin
mongos> db.auth("root","123")
1

-- Add the shards to the cluster
mongos> sh.addShard("shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001")               # add shard1
mongos> sh.addShard("shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002")               # add shard2
mongos> sh.addShard("shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003")               # add shard3

-- Check the sharded cluster status
mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003",  "state" : 1 }
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

3. Sharded Cluster Functional Test

1. Shard the collection db1.test1

mongos> sh.enableSharding("db1");
{
	"ok" : 1,
	"operationTime" : Timestamp(1617541590, 5),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1617541590, 5),
		"signature" : {
			"hash" : BinData(0,"8DW8X4Bx5zkMHiCJ2LQzoTkqi4g="),
			"keyId" : NumberLong("6947282400699219974")
		}
	}
}
mongos> sh.shardCollection("db1.test1", {_id: 'hashed'});
{
	"collectionsharded" : "db1.test1",
	"collectionUUID" : UUID("33adf1e2-128e-4ee3-8624-2adaf5311468"),
	"ok" : 1,
	"operationTime" : Timestamp(1617541590, 76),
	"$clusterTime" : {
		"clusterTime" : Timestamp(1617541590, 76),
		"signature" : {
			"hash" : BinData(0,"8DW8X4Bx5zkMHiCJ2LQzoTkqi4g="),
			"keyId" : NumberLong("6947282400699219974")
		}
	}
}

2. Check the cluster status after sharding the collection

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003",  "state" : 1 }
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }

3. Write test data through mongos

mongos> use db1
switched to db db1
mongos> for (var i = 0; i < 10000; i++) {db.test1.insert({i: i});}
WriteResult({ "nInserted" : 1 })
mongos> db.test1.find().count()
18378
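
The per-shard breakdown for the collection can also be checked directly from mongos while still in the db1 database (an optional check):

mongos> db.test1.getShardDistribution()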

4. Check the data distribution across the shards after the writes; after some time, the data is spread fairly evenly across the three shards

mongos> sh.status()
--- Sharding Status ---
  sharding version: {
  	"_id" : 1,
  	"minCompatibleVersion" : 5,
  	"currentVersion" : 6,
  	"clusterId" : ObjectId("6069b489d65227986c7fbe7a")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/172.16.104.12:27001,172.16.104.13:27001,172.16.104.14:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/172.16.104.12:27002,172.16.104.13:27002,172.16.104.14:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/172.16.104.12:27003,172.16.104.13:27003,172.16.104.14:27003",  "state" : 1 }
  active mongoses:
        "4.0.22" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours:
                682 : Success
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	342
                                shard2	341
                                shard3	341
                        too many chunks to print, use verbose if you want to force print
        {  "_id" : "db1",  "primary" : "shard2",  "partitioned" : true,  "version" : {  "uuid" : UUID("bd30b8c8-a9f5-4bdd-917e-2734e4a64ae1"),  "lastMod" : 1 } }
                db1.test1
                        shard key: { "_id" : "hashed" }
                        unique: false
                        balancing: true
                        chunks:
                                shard1	2
                                shard2	2
                                shard3	2
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : NumberLong("-6148914691236517204") } on : shard1 Timestamp(1, 0)
                        { "_id" : NumberLong("-6148914691236517204") } -->> { "_id" : NumberLong("-3074457345618258602") } on : shard1 Timestamp(1, 1)
                        { "_id" : NumberLong("-3074457345618258602") } -->> { "_id" : NumberLong(0) } on : shard2 Timestamp(1, 2)
                        { "_id" : NumberLong(0) } -->> { "_id" : NumberLong("3074457345618258602") } on : shard2 Timestamp(1, 3)
                        { "_id" : NumberLong("3074457345618258602") } -->> { "_id" : NumberLong("6148914691236517204") } on : shard3 Timestamp(1, 4)
                        { "_id" : NumberLong("6148914691236517204") } -->> { "_id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 5)

mongos>
