Deploying a MongoDB 6+ sharded cluster with docker-compose, with SSL/TLS authentication

1. Prerequisites

Before deploying a MongoDB sharded cluster, it is worth reading the replica set deployment article first, as it explains the principles behind how MongoDB clusters are assembled. This article builds directly on that setup and continues the previous post: "Deploying a MongoDB 6+ replica set with docker-compose, with SSL/TLS".

2. Environment preparation
2.1 x509 authentication

Generating the self-signed certificates is covered in the replica set deployment article and is not repeated here; a rough sketch follows for orientation.
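For orientation only, a minimal sketch of the kind of openssl commands involved, assuming a plain self-signed CA and placeholder subject names; the real procedure (client and cluster certificates, per-host subject alternative names) is in the replica set article:
# Self-signed CA (placeholder CN)
openssl genrsa -out ca.key 4096
openssl req -x509 -new -key ca.key -sha256 -days 3650 -out ca.pem -subj "/CN=mongo-test-ca"
# Server key + CSR, signed by the CA (placeholder CN; real certs need SANs for each hostname)
openssl genrsa -out server.key 4096
openssl req -new -key server.key -out server.csr -subj "/CN=mongo-server"
openssl x509 -req -in server.csr -CA ca.pem -CAkey ca.key -CAcreateserial -sha256 -days 3650 -out server.crt
cat server.key server.crt > server.pem   # mongod expects key and certificate in one PEM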

2.2 Sharding layout

This article deploys one config server replica set (cfg, three members), two shard replica sets (rs0 and rs1, three members each), and one mongos router.

3. Deploying the MongoDB sharded cluster
3.1 Create the cluster directory tree
# Create the config-server, shard, mongos, and certificate directories
mkdir -p /mongodb-shard-cluster/{config,shard,mongos,certs}
mkdir -p /mongodb-shard-cluster/config/{config0,config1,config2,conf}
mkdir -p /mongodb-shard-cluster/shard/{shard0,shard1}
mkdir -p /mongodb-shard-cluster/shard/shard0/{shard00,shard01,shard02,conf}
mkdir -p /mongodb-shard-cluster/shard/shard1/{shard10,shard11,shard12,conf}

# Create the per-member directories for each shard and make the log directories writable
mkdir -p /mongodb-shard-cluster/shard/shard0/shard00/{data,db,logs}
mkdir -p /mongodb-shard-cluster/shard/shard0/shard01/{data,db,logs}
mkdir -p /mongodb-shard-cluster/shard/shard0/shard02/{data,db,logs}
mkdir -p /mongodb-shard-cluster/shard/shard1/shard10/{data,db,logs}
mkdir -p /mongodb-shard-cluster/shard/shard1/shard11/{data,db,logs}
mkdir -p /mongodb-shard-cluster/shard/shard1/shard12/{data,db,logs}
chmod 777 /mongodb-shard-cluster/shard/shard0/shard00/logs
chmod 777 /mongodb-shard-cluster/shard/shard0/shard01/logs
chmod 777 /mongodb-shard-cluster/shard/shard0/shard02/logs
chmod 777 /mongodb-shard-cluster/shard/shard1/shard10/logs
chmod 777 /mongodb-shard-cluster/shard/shard1/shard11/logs
chmod 777 /mongodb-shard-cluster/shard/shard1/shard12/logs

# Create the mongos directories and make the log directory writable
mkdir -p /mongodb-shard-cluster/mongos/{conf,data,logs}
chmod 777 /mongodb-shard-cluster/mongos/logs

# Create the config-server directories and make the log directories writable
mkdir -p /mongodb-shard-cluster/config/config0/{db,data,logs}
mkdir -p /mongodb-shard-cluster/config/config1/{db,data,logs}
mkdir -p /mongodb-shard-cluster/config/config2/{db,data,logs}
chmod 777 /mongodb-shard-cluster/config/config0/logs
chmod 777 /mongodb-shard-cluster/config/config1/logs
chmod 777 /mongodb-shard-cluster/config/config2/logs
3.2 Create the config server, shard, mongos, and compose files
Create the config server's mongod.conf in the /mongodb-shard-cluster/config/conf directory, with the following content:
net:
  bindIp: 0.0.0.0
  port: 27019
  tls:
    CAFile: /data/certs/ca.pem
    certificateKeyFile: /data/certs/server.pem
    clusterFile: /data/certs/cluster.pem
    allowInvalidCertificates: true
    allowInvalidHostnames: true
    allowConnectionsWithoutCertificates: true
    mode: requireTLS
processManagement:
  fork: false
  timeZoneInfo: /usr/share/zoneinfo
replication:
  replSetName: cfg
  oplogSizeMB: 256
security:
  clusterAuthMode: x509
  authorization: enabled
storage:
  dbPath: /data/db
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
      engineConfig:
         cacheSizeGB: 0.5
         directoryForIndexes: true
         journalCompressor: zstd
         zstdCompressionLevel: 6 
      collectionConfig:
         blockCompressor: zstd
      indexConfig:
         prefixCompression: true
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongod.log
sharding:
  clusterRole: configsvr
Create shard0's mongod.conf in the /mongodb-shard-cluster/shard/shard0/conf directory, with the following content:
net:
  bindIp: 0.0.0.0
  port: 27018
  tls:
    CAFile: /data/certs/ca.pem
    certificateKeyFile: /data/certs/server.pem
    clusterFile: /data/certs/cluster.pem
    allowInvalidCertificates: true
    allowInvalidHostnames: true
    allowConnectionsWithoutCertificates: true
    mode: requireTLS
processManagement:
  fork: false
  timeZoneInfo: /usr/share/zoneinfo
replication:
  replSetName: rs0
  oplogSizeMB: 256
security:
  clusterAuthMode: x509
  authorization: enabled
storage:
  dbPath: /data/db
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
      engineConfig:
         cacheSizeGB: 0.5
         directoryForIndexes: true
         journalCompressor: zstd
         zstdCompressionLevel: 6 
      collectionConfig:
         blockCompressor: zstd
      indexConfig:
         prefixCompression: true
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongod.log
sharding:
  clusterRole: shardsvr
Create shard1's mongod.conf in the /mongodb-shard-cluster/shard/shard1/conf directory, with the following content:
net:
  bindIp: 0.0.0.0
  port: 27018
  tls:
    CAFile: /data/certs/ca.pem
    certificateKeyFile: /data/certs/server.pem
    clusterFile: /data/certs/cluster.pem
    allowInvalidCertificates: true
    allowInvalidHostnames: true
    allowConnectionsWithoutCertificates: true
    mode: requireTLS
processManagement:
  fork: false
  timeZoneInfo: /usr/share/zoneinfo
replication:
  replSetName: rs1
  oplogSizeMB: 256
security:
  clusterAuthMode: x509
  authorization: enabled
storage:
  dbPath: /data/db
  directoryPerDB: true
  engine: wiredTiger
  wiredTiger:
      engineConfig:
         cacheSizeGB: 0.5
         directoryForIndexes: true
         journalCompressor: zstd
         zstdCompressionLevel: 6 
      collectionConfig:
         blockCompressor: zstd
      indexConfig:
         prefixCompression: true
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongod.log
sharding:
  clusterRole: shardsvr
Create the mongos router's mongod.conf in the /mongodb-shard-cluster/mongos/conf directory, with the following content:
net:
  bindIp: 0.0.0.0
  port: 27017
  tls:
    CAFile: /data/certs/ca.pem
    certificateKeyFile: /data/certs/server.pem
    clusterFile: /data/certs/cluster.pem
    allowInvalidCertificates: true
    allowInvalidHostnames: true
    allowConnectionsWithoutCertificates: true
    mode: requireTLS
processManagement:
  fork: false
  timeZoneInfo: /usr/share/zoneinfo
security:
  clusterAuthMode: x509
systemLog:
  destination: file
  logAppend: true
  path: /data/logs/mongod.log
sharding:
  configDB: cfg/mongo-config0:27019,mongo-config1:27019,mongo-config2:27019 # config server replica set, format: replSetName/host1:port,host2:port,...

Place all of the generated x509 certificates in the /mongodb-shard-cluster/certs directory.

The set comprises: ca.pem client.pem cluster.pem server.pem

Note: the exact x509 certificate generation procedure was described earlier; see the replica set deployment article. A staging sketch follows.
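A hypothetical staging step, assuming the certificates were generated elsewhere on this host (adjust the source path to wherever yours live); the permissive 644 mode is lab-grade, in line with the chmod 777 used for the log directories:
cp /path/to/generated/{ca.pem,client.pem,cluster.pem,server.pem} /mongodb-shard-cluster/certs/
chmod 644 /mongodb-shard-cluster/certs/*.pem   # mongod inside the containers must be able to read them
openssl x509 -in /mongodb-shard-cluster/certs/ca.pem -noout -subject -dates   # sanity-check the CA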

Create the docker-compose.yaml for the sharded cluster deployment, with the following content:
version: "3.9"
services:
  mongo-config0:
    container_name: mongo-config0
    image: mongo
    hostname: mongo-config0
    privileged: true
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./config/config0/db:/data/db
      - ./config/conf:/data/configdb
      - ./certs:/data/certs
      - ./config/config0/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.20
  mongo-config1:
    container_name: mongo-config1
    image: mongo
    hostname: mongo-config1
    privileged: true
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./config/config1/db:/data/db
      - ./config/conf:/data/configdb
      - ./certs:/data/certs
      - ./config/config1/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.21 
  mongo-config2:
    container_name: mongo-config2
    image: mongo
    hostname: mongo-config2
    privileged: true
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./config/config2/db:/data/db
      - ./config/conf:/data/configdb
      - ./certs:/data/certs
      - ./config/config2/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.22
  mongo-shard0-0:
    container_name: mongo-shard0-0
    image: mongo
    privileged: true
    hostname: mongo-shard0-0
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./shard/shard0/shard00/db:/data/db
      - ./shard/shard0/conf:/data/configdb
      - ./certs:/data/certs
      - ./shard/shard0/shard00/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.23
  mongo-shard0-1:
    container_name: mongo-shard0-1
    image: mongo
    privileged: true
    hostname: mongo-shard0-1
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./shard/shard0/shard01/db:/data/db
      - ./shard/shard0/conf:/data/configdb
      - ./certs:/data/certs
      - ./shard/shard0/shard01/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.24
  mongo-shard0-2:
    container_name: mongo-shard0-2
    image: mongo
    privileged: true
    hostname: mongo-shard0-2
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./shard/shard0/shard02/db:/data/db
      - ./shard/shard0/conf:/data/configdb
      - ./certs:/data/certs
      - ./shard/shard0/shard02/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.25
  mongo-shard1-0:
    container_name: mongo-shard1-0
    image: mongo
    privileged: true
    hostname: mongo-shard1-0
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./shard/shard1/shard10/db:/data/db
      - ./shard/shard1/conf:/data/configdb
      - ./certs:/data/certs
      - ./shard/shard1/shard10/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.26
  mongo-shard1-1:
    container_name: mongo-shard1-1
    image: mongo
    privileged: true
    hostname: mongo-shard1-1
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./shard/shard1/shard11/db:/data/db
      - ./shard/shard1/conf:/data/configdb
      - ./certs:/data/certs
      - ./shard/shard1/shard11/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.27
  mongo-shard1-2:
    container_name: mongo-shard1-2
    image: mongo
    privileged: true
    hostname: mongo-shard1-2
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./shard/shard1/shard12/db:/data/db
      - ./shard/shard1/conf:/data/configdb
      - ./certs:/data/certs
      - ./shard/shard1/shard12/logs:/data/logs
    restart: always
    command: -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.28
  mongos:
    container_name: mongos
    hostname: mongos
    privileged: true
    image: mongo
    ports:
      - 27017:27017
    environment:
      TZ: Asia/Shanghai
    volumes:
      - /etc/localtime:/etc/localtime
      - ./mongos/conf:/data/configdb
      - ./certs:/data/certs
      - ./mongos/logs:/data/logs
    restart: always
    command: mongos -f /data/configdb/mongod.conf
    networks:
      mongo_shard_network:
        ipv4_address: 172.26.1.29
networks:
  mongo_shard_network:
    name: mongo_shard_network
    ipam: 
      config:
        - subnet: 172.26.1.0/24
Start the sharded cluster with the following command:
docker-compose up -d

Check that every container instance is running normally; continue with the next steps only once none show problems.
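A quick way to confirm this from the host:
# All ten containers should report "Up"
docker-compose ps

# If a container keeps restarting, its mongod log usually names the cause (bad certificate paths, unreadable PEM files, ...)
docker logs mongo-config0
tail -n 50 /mongodb-shard-cluster/config/config0/logs/mongod.log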

3.3 Initialize the config server and shards
Enter any one of the config server containers and initialize the config server replica set:
# Enter the container
docker exec -it mongo-config0 /bin/bash

# Open a mongosh session (x509 / TLS)
mongosh --tls --tlsCertificateKeyFile /data/certs/client.pem --tlsCAFile /data/certs/ca.pem --authenticationDatabase='$external' --authenticationMechanism MONGODB-X509 --host localhost --port 27019

# Initialize the config server replica set
rs.initiate(
  {
    _id: "cfg",
    configsvr: true,
    members: [
      { _id : 0, host : "mongo-config0:27019" },
      { _id : 1, host : "mongo-config1:27019" },
      { _id : 2, host : "mongo-config2:27019" }
    ]
  }
)

# Check the config server replica set status
rs.status()

# Check the config server replica set configuration
rs.conf()
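The member states can also be checked from the host without an interactive session; the same pattern works for the shard replica sets (swap the container name and port). Once the election settles, this should report one PRIMARY and two SECONDARY members:
docker exec mongo-config0 mongosh --tls \
  --tlsCertificateKeyFile /data/certs/client.pem --tlsCAFile /data/certs/ca.pem \
  --authenticationDatabase '$external' --authenticationMechanism MONGODB-X509 \
  --port 27019 --quiet \
  --eval 'rs.status().members.map(m => m.name + " => " + m.stateStr)'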
Enter one member of the shard0 replica set and initialize it:
# Enter the container
docker exec -it mongo-shard0-0 /bin/bash

# Open a mongosh session (x509 / TLS)
mongosh --tls --tlsCertificateKeyFile /data/certs/client.pem --tlsCAFile /data/certs/ca.pem --authenticationDatabase='$external' --authenticationMechanism MONGODB-X509 --host localhost --port 27018

# Initialize the shard replica set
rs.initiate(
  {
    _id : "rs0",
    members: [
      { _id : 0, host : "mongo-shard0-0:27018" },
      { _id : 1, host : "mongo-shard0-1:27018" },
      { _id : 2, host : "mongo-shard0-2:27018" }
    ]
  }
)

# Check the shard replica set status
rs.status()

# Check the shard replica set configuration
rs.conf()
Enter one member of the shard1 replica set and initialize it:
# Enter the container
docker exec -it mongo-shard1-0 /bin/bash

# Open a mongosh session (x509 / TLS)
mongosh --tls --tlsCertificateKeyFile /data/certs/client.pem --tlsCAFile /data/certs/ca.pem --authenticationDatabase='$external' --authenticationMechanism MONGODB-X509 --host localhost --port 27018

# Initialize the shard replica set
rs.initiate(
  {
    _id : "rs1",
    members: [
      { _id : 0, host : "mongo-shard1-0:27018" },
      { _id : 1, host : "mongo-shard1-1:27018" },
      { _id : 2, host : "mongo-shard1-2:27018" }
    ]
  }
)

# Check the shard replica set status
rs.status()

# Check the shard replica set configuration
rs.conf()
3.4 Register the shards and create a user on the router
Enter the mongos router and run:
# Enter the container
docker exec -it mongos /bin/bash

# Open a mongosh session against mongos
mongosh --tls --tlsCertificateKeyFile /data/certs/client.pem --tlsCAFile /data/certs/ca.pem --host localhost --port 27017

# Create a user (allowed here via the localhost exception, since no users exist yet)
use admin
db.createUser(
	{
		user:"root",
		pwd:"mbql.",
		roles:[{role:"root",db:"admin"}]
	}
)

# Authenticate as the new user
db.auth("root","mbql.")

# Register each shard replica set with the router
sh.addShard("rs0/mongo-shard0-0:27018,mongo-shard0-1:27018,mongo-shard0-2:27018")
sh.addShard("rs1/mongo-shard1-0:27018,mongo-shard1-1:27018,mongo-shard1-2:27018")

# Check the sharding status
sh.status()
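Because the compose file publishes port 27017, the router is also reachable from the host. A hypothetical host-side connection (certificate paths as staged earlier; --tlsAllowInvalidHostnames compensates for the certificate CN not matching localhost, mirroring the relaxed settings in mongod.conf):
mongosh "mongodb://root:mbql.@localhost:27017/admin" \
  --tls --tlsCAFile /mongodb-shard-cluster/certs/ca.pem \
  --tlsCertificateKeyFile /mongodb-shard-cluster/certs/client.pem \
  --tlsAllowInvalidHostnames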
4. Testing the sharded cluster
Enter the mongos container and run:
# Enter the container
docker exec -it mongos /bin/bash

# Open a mongosh session against mongos
mongosh --tls --tlsCertificateKeyFile /data/certs/client.pem --tlsCAFile /data/certs/ca.pem --host localhost --port 27017

# Authenticate (the root user lives in the admin database, so switch there first)
use admin
db.auth("root","mbql.")

# Switch to the test database for the sharding test
use test

# Shard the test.user collection on _id using a hashed key (ranged and compound keys are also supported); since MongoDB 6.0, sh.enableSharding() no longer needs to be run first
sh.shardCollection("test.user", { _id: "hashed" } )

# Insert 1000 documents into test.user
for(var i=1;i<=1000;i++){
    db.user.insertOne({
        _id:i,
        username:"mbql" + i,
        password:"123456",
        age:5+i
    });
}

# Database-level statistics for test, including how its data is distributed
db.stats()

# Collection-level statistics for test.user, including its distribution across the shards
db.user.stats()

# Browse test.user; type "it" to fetch the next batch of results
db.user.find()

# Count the documents in test.user (cursor.count() is deprecated; countDocuments() is the modern form)
db.user.countDocuments()
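For a per-shard breakdown of the 1000 hashed documents, mongosh also offers getShardDistribution(); with a hashed _id key the two shards should each end up with roughly half of the documents:
# Per-shard document and chunk counts for test.user
db.user.getShardDistribution()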
5. Miscellaneous
################################### NOTES #################################################
# A user can be created and authenticated on a shard's primary with the same steps as above, after which other commands work there too.
# That user can then also authenticate on the shard's secondaries, but show dbs fails there with: uncaught exception: Error: listDatabases failed:
## A secondary is only a replicated copy: by default it accepts neither reads nor writes. Read access can be toggled per connection with rs.secondaryOk() / rs.secondaryOk(false); recent mongosh deprecates this in favor of setReadPref (see the sketch after this list).
# An arbiter stores no business data; you can log in to one, but there is no data to inspect.

# View the replica set configuration: returns a document describing the current configuration. rs.conf(); rs.config() is an alias.
# View the replica set status: returns a status document assembled from the heartbeats sent by the other members, reflecting the current state of the set. rs.status()
# Add a secondary: run on the primary to add a member to the set. rs.add(host, arbiterOnly)
# List all accounts cluster-wide: db.system.users.find().pretty()
# List accounts in the current database: use test; show users;
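A small mongosh sketch of the secondary-read behaviour described above, run in a session on a SECONDARY member (setReadPref is the non-deprecated way to allow secondary reads on a connection):
// Allow reads on this connection instead of the deprecated rs.secondaryOk()
db.getMongo().setReadPref("secondary")
// show dbs and reads such as db.user.find() now succeed rather than
// failing with "not primary and secondaryOk=false"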

# General MongoDB commands
db.serverStatus().connections; # current connection counts
show collections  # list the collections in the current database
db.test_shard.find().count() # count the documents in test_shard
db.test_shard.remove({}) # delete all documents in test_shard (the collection itself remains; deleteMany({}) is the modern equivalent)
db.stats()   # statistics for the current database, including per-shard data distribution
db.adminCommand( { listShards: 1 } ) # list the shards
db.test_shard.find({ age: 36 }).explain()   # explain an exact-match query
db.test_shard.find({ age: { $gte : 36 ,$lt : 73 } }).explain() # explain a range query

# Sharding commands
sh.enableSharding('testdb')                # enable sharding on database testdb
sh.shardCollection('testdb.users',{uid:1})    # shard testdb.users on the uid field
sh.shardCollection("testdb.test_shard",{"age": 1})     # ranged sharding
sh.shardCollection("testdb.test_shard2",{"age": "hashed"}) # hashed sharding
sh.status()   # show sharding status
sh.addShard() # add a shard to the cluster (argument: "replSetName/host:port,...")
sh.getBalancerState()   # check the balancer state
sh.disableBalancing("testdb.users")   # disable balancing for a collection (sh.stopBalancer() stops the balancer globally)
sh.enableBalancing("testdb.users")    # re-enable balancing for a collection
db.runCommand( { removeShard: "mongodb0" } ) # remove shard mongodb0; re-run to monitor the data migration
db.runCommand( { movePrimary: "test", to: "mongodb1" })   # move database test's unsharded data to shard mongodb1 (change its primary shard)
db.adminCommand("flushRouterConfig") # refresh the router's cached metadata after sharding changes
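As the comment above notes, removeShard drains chunks asynchronously; re-issuing the same command reports progress until the drain completes, roughly:
# Re-run until "state" reaches "completed"; "remaining" shows what is left to migrate
db.runCommand( { removeShard: "mongodb0" } )
# first run  => { state: "started", ... }
# later runs => { state: "ongoing", remaining: { chunks: ..., dbs: ... }, ... }
# final run  => { state: "completed", ... }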

use config
db.databases.find()  # list databases and whether each is sharded
db.settings.updateOne({_id:"chunksize"},{$set:{value:1}},{upsert:true}) # set the chunk size to 1MB (mongosh no longer provides db.collection.save())
db.serverStatus().sharding  # sharding-related server status

# Replica set commands
rs.status()   # member states, health, and other runtime information
rs.config()    # view the configuration
rs.secondaryOk()  # allow queries on a SECONDARY for this connection (secondaries reject reads by default; successor to rs.slaveOk(), and itself deprecated in recent mongosh in favor of setReadPref)
rs.isMaster()  # is this node the primary? (deprecated; db.hello() is the modern replacement)
rs.add({})   # add a new member to the replica set
rs.remove()   # remove a member from the replica set
rs.stepDown()  # step the primary down
db.printSecondaryReplicationInfo()  # per-secondary replication lag (replaces the deprecated db.printSlaveReplicationInfo())
rs.addArb("172.20.0.16:27038") # add an arbiter

# Force an arbiter into the set via reconfig (arbiterOnly must be a boolean; arbiters always have priority 0)
config=rs.conf()
config.members=[config.members[0],config.members[1],{_id:5,host:"127.0.0.1:27021",arbiterOnly:true}]
rs.reconfig(config,{force:true})

# Force a preferred primary by adjusting member priorities
cfg = rs.conf()
cfg.members[0].priority = 0.5
cfg.members[1].priority = 0.5
cfg.members[2].priority = 1
rs.reconfig(cfg)

# Backup / restore
mongodump -h 127.0.0.1:27017 -d test -o /data/backup/
mongorestore -h 127.0.0.1:27017 -d test --dir /data/backup/test
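Against the TLS-enabled cluster built above, those plain commands would be refused. A sketch with the database tools' TLS options, using the certificate paths from this article (--sslAllowInvalidHostnames mirrors the relaxed hostname checking configured on the servers):
mongodump -h 127.0.0.1:27017 -u root -p mbql. --authenticationDatabase admin \
  --ssl --sslCAFile /mongodb-shard-cluster/certs/ca.pem \
  --sslPEMKeyFile /mongodb-shard-cluster/certs/client.pem \
  --sslAllowInvalidHostnames -d test -o /data/backup/
mongorestore -h 127.0.0.1:27017 -u root -p mbql. --authenticationDatabase admin \
  --ssl --sslCAFile /mongodb-shard-cluster/certs/ca.pem \
  --sslPEMKeyFile /mongodb-shard-cluster/certs/client.pem \
  --sslAllowInvalidHostnames -d test --dir /data/backup/test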
