Setting Up a MongoRocks Sharded Replica Cluster with Docker (Docker & MongoDB & RocksDB & Replication & Sharding)

  • Preparation
    • Dependencies
    • Installation
    • Pulling the image
  • Basic single instance
  • Single instance with configuration
    • Permissions
    • Docker option reference
    • Startup command
    • RocksDB option reference
    • Checking the startup log
    • Connection test
  • Sharded cluster with containers on an overlay network
    • Preparing the base environment
    • Creating the swarm overlay network
    • Testing overlay network connectivity
    • Creating data directories
    • Starting the config servers
    • Initiating the config server replica set
    • Starting the shard servers
    • Initiating the shard replica sets
    • Starting mongos
    • Configuring sharding
    • Testing
  • Deploying a service cluster with docker stack
    • Preparing the base environment
    • Creating data directories
    • Writing the docker stack deploy file
    • Starting the cluster
    • Checking startup
    • Initiating the replica sets
    • Adding shards
    • Testing

Preparation

Dependencies

  • OS: CentOS 7.6

Installation

Refer to the installation guide (link).

Pulling the image

After 3.4, MongoDB changed parts of its internal API, and Percona could not keep integrating the Rocks engine into later releases. The newest MongoDB version with RocksDB support is therefore 3.4, for which Percona provides an official Docker image.

docker pull percona/percona-server-mongodb:3.4
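
If you want to double-check what you pulled: the image's entrypoint forwards its arguments to mongod (that is how the startup commands in this post pass --storageEngine), so passing --version should print the bundled server version. A quick sanity check, nothing more:

docker run --rm percona/percona-server-mongodb:3.4 --version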

Basic single instance

The basic command is simple:

docker run -p 27017:27017 -d percona/percona-server-mongodb:3.4 --storageEngine=rocksdb

We won't actually start this container; let's move straight to a configured instance.

Single instance with configuration

Permissions

Create the data directory and give the in-container user full access to it:

mkdir -p /root/volumns/mongorocks/db
chmod -R 777 /root/volumns/mongorocks/db

Docker option reference

  • -v /root/volumns/mongorocks/db:/data/db maps a host directory into the container
  • --name=mongorocks sets the container name, used for access over a Docker network
  • --hostname=mongorocks sets the hostname, used for access via --link
  • --restart=always makes Docker restart the container automatically
  • --dbpath=/data/db sets the MongoDB data directory

Startup command

The full startup command:

docker run -d \
	--name=mongorocks \
	--hostname=mongorocks \
	--restart=always \
	-p 27017:27017 \
	-v /root/volumns/mongorocks/db:/data/db \
	-v /etc/localtime:/etc/localtime \
	percona/percona-server-mongodb:3.4 \
	--storageEngine=rocksdb \
	--dbpath=/data/db \
	--rocksdbCacheSizeGB=1 \
	--rocksdbCompression=snappy \
	--rocksdbMaxWriteMBPerSec=1024 \
	--rocksdbCrashSafeCounters=false \
	--rocksdbCounters=true \
	--rocksdbSingleDeleteIndex=false

RocksDB option reference

The options are documented in the official description (link).
A brief note on the settings worth changing:

  • rocksdbCacheSizeGB sets the size of the block cache: the default is 30% of available physical memory. RocksDB effectively caches in two places: uncompressed data lives in the block cache, while compressed data is cached by the kernel page cache.
  • rocksdbMaxWriteMBPerSec caps the write rate: the default is 1 GiB/s. Today only local NVMe SSDs can exceed that figure; a cloud ESSD manages only about 480 MiB/s, and NVMe throughput is itself bounded by the NIC (on Alibaba Cloud the maximum usable bandwidth is 4 Gib/s, i.e. roughly 500 MiB/s). Set the cap to match your deployment, because it governs how CPU time is divided between reads and writes. While the cap is below the disk's physical maximum, a smaller value means faster reads and slower writes, and a larger value the reverse; once the cap exceeds what the disk can deliver, writes are already at their ceiling and the excess only slows reads without speeding up writes. You can measure disk throughput with standard Linux commands (link; see the sketch after this list).
  • rocksdbCompression selects the compression format: the default is snappy; the available options are none, snappy, zlib, lz4, and lz4hc.
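
As referenced above, a minimal sketch for gauging raw disk throughput, assuming dd and hdparm are installed and /dev/sda is the disk under test (adjust the device and paths for your machine):

# Sequential write: 1 GiB, bypassing the page cache (oflag=direct)
dd if=/dev/zero of=/root/ddtest.bin bs=1M count=1024 oflag=direct
# Sequential buffered read test (run as root)
hdparm -t /dev/sda
# Remove the test file
rm -f /root/ddtest.bin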

Checking the startup log

docker logs mongorocks

The log shows a successful start:

2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Block Cache Size GB: 1
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Compression: snappy
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] MaxWriteMBPerSec: 1024
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Engine custom option: 
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Crash safe counters: 0
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Counters: 1
2019-08-04T16:18:17.917+0000 I STORAGE  [main] [RocksDB] Use SingleDelete in index: 0
2019-08-04T16:18:17.920+0000 I ACCESS   [main] Initialized External Auth Session
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongorocks
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] db version v3.4.21-2.19
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] git version: 2e0631f5e0d868dd51b71e1e55eb8a57300d00df
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.0.1t  3 May 2016
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] allocator: tcmalloc
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] modules: none
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] build environment:
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten]     distarch: x86_64
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten]     target_arch: x86_64
2019-08-04T16:18:17.922+0000 I CONTROL  [initandlisten] options: { storage: { dbPath: "/data/db", engine: "rocksdb", rocksdb: { cacheSizeGB: 1, compression: "snappy", counters: true, crashSafeCounters: false, maxWriteMBPerSec: 1024, singleDeleteIndex: false } } }
2019-08-04T16:18:17.935+0000 I STORAGE  [initandlisten] 0 dropped prefixes need compaction
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] 
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **          You can use percona-server-mongodb-enable-auth.sh to fix it.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] 
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] 
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] 
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-04T16:18:17.936+0000 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-04T16:18:17.937+0000 I CONTROL  [initandlisten] 
2019-08-04T16:18:17.937+0000 I FTDC     [initandlisten] Initializing full-time diagnostic data capture with directory '/data/db/diagnostic.data'
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] build index on: admin.system.version properties: { v: 2, key: { version: 1 }, name: "incompatible_with_version_32", ns: "admin.system.version" }
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] 	 building index using bulk method; build may temporarily use up to 500 megabytes of RAM
2019-08-04T16:18:17.938+0000 I INDEX    [initandlisten] build index done.  scanned 0 total records. 0 secs
2019-08-04T16:18:17.938+0000 I COMMAND  [initandlisten] setting featureCompatibilityVersion to 3.4
2019-08-04T16:18:17.939+0000 I NETWORK  [thread1] waiting for connections on port 27017

Connection test

docker run -it --link mongorocks --rm percona/percona-server-mongodb:3.4 \
	mongo --host mongorocks

The connection succeeds.
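
To confirm the rocksdb engine is really active, the same throwaway container can run a one-liner (a sketch using the serverStatus command; the name field of the result should read rocksdb):

docker run -it --link mongorocks --rm percona/percona-server-mongodb:3.4 \
	mongo --host mongorocks --eval "db.serverStatus().storageEngine"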

Sharded cluster with containers on an overlay network

Preparing the base environment

Deploy three machines: vm1, vm2, and vm3. The base environment is the same as for the single instance.

Creating the swarm overlay network

Create the swarm overlay network (link).
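
The linked steps are not reproduced here; as a rough sketch, assuming vm1 acts as the swarm manager and the placeholders are filled in with your own address and token:

# On vm1 (the manager); advertise vm1's reachable address
docker swarm init --advertise-addr <vm1-ip>
# On vm2 and vm3, join using the token printed by the command above
docker swarm join --token <token> <vm1-ip>:2377
# Back on vm1: the network must be created --attachable so that
# standalone docker run containers can join it
docker network create --driver overlay --attachable mongo-overlay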

Testing overlay network connectivity

On vm1, start a mongorocks instance:

docker run -d --name=mongorocks-1 \
 --hostname=mongorocks-1 \
 --network=mongo-overlay \
 --restart=always \
 -p 27017:27017 \
 -v /root/volumns/mongorocks/db:/data/db \
 -v /etc/localtime:/etc/localtime \
 percona/percona-server-mongodb:3.4 \
 --storageEngine=rocksdb --dbpath=/data/db \
 --rocksdbCacheSizeGB=1 \
 --rocksdbCompression=snappy \
 --rocksdbMaxWriteMBPerSec=1024 \
 --rocksdbCrashSafeCounters=false \
 --rocksdbCounters=true \
 --rocksdbSingleDeleteIndex=false

Option of note:

  • --network=mongo-overlay attaches the container to the overlay network

On vm3, test connectivity over the newly created overlay network:

docker pull debian:latest
docker run --network=mongo-overlay -it debian:latest ping mongorocks-1

The output:

64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=1 ttl=64 time=0.494 ms
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=2 ttl=64 time=1.05 ms
64 bytes from mongorocks-1.mongo-overlay (10.0.1.2): icmp_seq=3 ttl=64 time=1.19 ms

Creating data directories

We will set up three config servers, three mongos routers, and three shard replica sets: each machine hosts one config server, one mongos, and three shard members (the primary of one shard, a secondary of another, and the arbiter of a third).
Create the data directories on vm1, vm2, and vm3:

#vm1
mkdir -p /root/volumns/mongo-config-1/db /root/volumns/mongo-mongos-1/db /root/volumns/mongo-shard-1-master/db /root/volumns/mongo-shard-3-slave/db /root/volumns/mongo-shard-2-arbiter/db
chmod -R 777 /root/volumns/mongo-config-1/db /root/volumns/mongo-mongos-1/db /root/volumns/mongo-shard-1-master/db /root/volumns/mongo-shard-3-slave/db /root/volumns/mongo-shard-2-arbiter/db
#vm2
mkdir -p /root/volumns/mongo-config-2/db /root/volumns/mongo-mongos-2/db /root/volumns/mongo-shard-2-master/db /root/volumns/mongo-shard-1-slave/db /root/volumns/mongo-shard-3-arbiter/db
chmod -R 777 /root/volumns/mongo-config-2/db /root/volumns/mongo-mongos-2/db /root/volumns/mongo-shard-2-master/db /root/volumns/mongo-shard-1-slave/db /root/volumns/mongo-shard-3-arbiter/db
#vm3
mkdir -p /root/volumns/mongo-config-3/db /root/volumns/mongo-mongos-3/db /root/volumns/mongo-shard-3-master/db /root/volumns/mongo-shard-2-slave/db /root/volumns/mongo-shard-1-arbiter/db
chmod -R 777 /root/volumns/mongo-config-3/db /root/volumns/mongo-mongos-3/db /root/volumns/mongo-shard-3-master/db /root/volumns/mongo-shard-2-slave/db /root/volumns/mongo-shard-1-arbiter/db

The directory names say master and slave, but apart from the arbiter, which only votes in elections, the members of a replica set are peers: if the primary fails, another member takes over automatically.

Starting the config servers

#vm1
docker run -d --name=mongo-config-1 --hostname=mongo-config-1 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-1/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
#vm2
docker run -d --name=mongo-config-2 --hostname=mongo-config-2 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-2/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config
#vm3
docker run -d --name=mongo-config-3 --hostname=mongo-config-3 --network=mongo-overlay --restart=always -p 27019:27019 -v /root/volumns/mongo-config-3/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet=config

Use docker logs to check each config container's startup log and confirm it came up cleanly.
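
For example, on vm1 (repeat with the matching container name on each machine):

docker logs --tail 20 mongo-config-1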

Initiating the config server replica set

Since version 3.4, the config servers must run as a replica set, and the replica set configuration must include configsvr: true.

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 \
	mongo mongo-config-1:27019 --eval \
	"rs.initiate({_id:'config',configsvr:true,members:[{_id:0,host:'mongo-config-1:27019'},{_id:1,host:'mongo-config-2:27019'},{_id:2,host:'mongo-config-3:27019'}]})"

Starting the shard servers

#vm1
docker run -d --name=mongo-shard-1-master --hostname=mongo-shard-1-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-1-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
docker run -d --name=mongo-shard-3-slave --hostname=mongo-shard-3-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-3-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
docker run -d --name=mongo-shard-2-arbiter --hostname=mongo-shard-2-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-2-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
#vm2
docker run -d --name=mongo-shard-2-master --hostname=mongo-shard-2-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-2-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
docker run -d --name=mongo-shard-1-slave --hostname=mongo-shard-1-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-1-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1
docker run -d --name=mongo-shard-3-arbiter --hostname=mongo-shard-3-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-3-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
#vm3
docker run -d --name=mongo-shard-3-master --hostname=mongo-shard-3-master --network=mongo-overlay --restart=always -p 27018:27018 -v /root/volumns/mongo-shard-3-master/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-3
docker run -d --name=mongo-shard-2-slave --hostname=mongo-shard-2-slave --network=mongo-overlay --restart=always -p 27020:27018 -v /root/volumns/mongo-shard-2-slave/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-2
docker run -d --name=mongo-shard-1-arbiter --hostname=mongo-shard-1-arbiter --network=mongo-overlay --restart=always -p 27021:27018 -v /root/volumns/mongo-shard-1-arbiter/db:/data/db -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet=shard-1

Use docker logs to check each shard container's startup log and confirm it came up cleanly.

Initiating the shard replica sets

arbiterOnly marks the arbiter member. Because all containers are on the mongo-overlay network, the following commands can be run from any of the machines.

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-1-master:27018 --eval "rs.initiate({_id:'shard-1',members:[{_id:0,host:'mongo-shard-1-master:27018'},{_id:1,host:'mongo-shard-1-slave:27018'},{_id:2,host:'mongo-shard-1-arbiter:27018', arbiterOnly: true}]})"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-2-master:27018 --eval "rs.initiate({_id:'shard-2',members:[{_id:0,host:'mongo-shard-2-master:27018'},{_id:1,host:'mongo-shard-2-slave:27018'},{_id:2,host:'mongo-shard-2-arbiter:27018', arbiterOnly: true}]})"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-3-master:27018 --eval "rs.initiate({_id:'shard-3',members:[{_id:0,host:'mongo-shard-3-master:27018'},{_id:1,host:'mongo-shard-3-slave:27018'},{_id:2,host:'mongo-shard-3-arbiter:27018', arbiterOnly: true}]})"
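
As with the config servers, each shard's member states can be checked; a sketch for shard-1 (repeat for shard-2 and shard-3):

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-shard-1-master:27018 --eval "rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })"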

Starting mongos

mongos is not a replica set and needs no replica set configuration. For a single entry point you can put a TCP proxy such as haproxy in front of the routers, as sketched at the end of this subsection.

#vm1
docker run -d --name=mongo-mongos-1 --hostname=mongo-mongos-1 --network=mongo-overlay --restart=always -p 27017:27017  -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
#vm2
docker run -d --name=mongo-mongos-2 --hostname=mongo-mongos-2 --network=mongo-overlay --restart=always -p 27017:27017  -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019
#vm3
docker run -d --name=mongo-mongos-3 --hostname=mongo-mongos-3 --network=mongo-overlay --restart=always -p 27017:27017 -v /etc/localtime:/etc/localtime percona/percona-server-mongodb:3.4 mongos --configdb=config/mongo-config-1:27019,mongo-config-2:27019,mongo-config-3:27019

Use docker logs to check each mongos container's startup log and confirm it came up cleanly.
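
A minimal haproxy front end for the three routers might look like the following sketch, assuming haproxy runs on a host that can resolve vm1, vm2, and vm3:

# /etc/haproxy/haproxy.cfg (relevant section only)
listen mongos
    bind *:27017
    mode tcp
    balance roundrobin
    server mongos-1 vm1:27017 check
    server mongos-2 vm2:27017 check
    server mongos-3 vm3:27017 check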

Configuring sharding

Shard metadata lives on the config servers, so adding all shards through a single mongos is enough. Run the commands one at a time, so that if one fails you know which shard is at fault.

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-1/mongo-shard-1-master:27018,mongo-shard-1-slave:27018,mongo-shard-1-arbiter:27018');"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-2/mongo-shard-2-master:27018,mongo-shard-2-slave:27018,mongo-shard-2-arbiter:27018');"
docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.addShard('shard-3/mongo-shard-3-master:27018,mongo-shard-3-slave:27018,mongo-shard-3-arbiter:27018');"

Check the shard status:

docker run -it --network mongo-overlay --rm percona/percona-server-mongodb:3.4 mongo mongo-mongos-1:27017 --eval "sh.status()"

The output:

Percona Server for MongoDB shell version v3.4.21-2.19
connecting to: mongodb://mongo-mongos-1:27017/test
Percona Server for MongoDB server version: v3.4.21-2.19
--- Sharding Status --- 
  sharding version: {
   "_id" : 1,
   "minCompatibleVersion" : 5,
   "currentVersion" : 6,
   "clusterId" : ObjectId("5d498c6d23c7391c25ba330a")
  }
  shards:
        {  "_id" : "shard-1",  "host" : "shard-1/mongo-shard-1-master:27018,mongo-shard-1-slave:27018",  "state" : 1 }
        {  "_id" : "shard-2",  "host" : "shard-2/mongo-shard-2-master:27018,mongo-shard-2-slave:27018",  "state" : 1 }
        {  "_id" : "shard-3",  "host" : "shard-3/mongo-shard-3-master:27018,mongo-shard-3-slave:27018",  "state" : 1 }
  active mongoses:
        "3.4.21-2.19" : 3
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
NaN
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:

Testing

Use NoSQLBooster for MongoDB to generate the test data.
The generated script below fills every document with random faker data, so no two documents are alike.

faker.locale = "en"
const STEPCOUNT = 1000; //total 10 * 1000 = 10000
function isRandomBlank(blankWeight) {
    return Math.random() * 100 <= blankWeight;
};
for (let i = 0; i < 10; i++) {
    db.getCollection("testCollection").insertMany(
        _.times(STEPCOUNT, () => {
            return {
                "name": faker.name.findName(),
                "username": faker.internet.userName(),
                "email": faker.internet.email(),
                "address": {
                    "street": faker.address.streetName(),
                    "suite": faker.address.secondaryAddress(),
                    "city": faker.address.city(),
                    "zipcode": faker.address.zipCode()
                },
                "phone": faker.phone.phoneNumber(),
                "website": faker.internet.domainName(),
                "company": faker.company.companyName()
            }
        })
    )
    console.log("test:testCollection", `${(i + 1) * STEPCOUNT} docs inserted`);
}

Now shard the collection to match the test script. Using 1 as the shard key value gives range-based sharding, while 'hashed' shards on a hash of the key; to make the distribution across shards easier to see, we use hashed:

sh.enableSharding('test')
sh.shardCollection(`test.testCollection`, { _id: 'hashed'})

Run the test script in NoSQLBooster for MongoDB; once it finishes, check how the collection was distributed:

use test
db.getCollection('testCollection').stats()

The output:

{
  "sharded": true,
  "capped": false,
  "ns": "test.testCollection",
  "count": 10000,
  "size": 2998835,
  "storageSize": 2998528,
  "totalIndexSize": 346888,
  "indexSizes": {
    "_id_": 180000,
    "_id_hashed": 166888
  },
  "avgObjSize": 299,
  "nindexes": 2,
  "nchunks": 6,
  "shards": {
    "shard-1": {
      "ns": "test.testCollection",
      "size": 994917,
      "count": 3317,
      "avgObjSize": 299,
      "storageSize": 994816,
      "capped": false,
      "nindexes": 2,
      "totalIndexSize": 115072,
      "indexSizes": {
        "_id_": 59706,
        "_id_hashed": 55366
      },
      "ok": 1
    },
    "shard-2": {
      "ns": "test.testCollection",
      "size": 1005509,
      "count": 3354,
      "avgObjSize": 299,
      "storageSize": 1005312,
      "capped": false,
      "nindexes": 2,
      "totalIndexSize": 116324,
      "indexSizes": {
        "_id_": 60372,
        "_id_hashed": 55952
      },
      "ok": 1
    },
    "shard-3": {
      "ns": "test.testCollection",
      "size": 998409,
      "count": 3329,
      "avgObjSize": 299,
      "storageSize": 998400,
      "capped": false,
      "nindexes": 2,
      "totalIndexSize": 115492,
      "indexSizes": {
        "_id_": 59922,
        "_id_hashed": 55570
      },
      "ok": 1
    }
  },
  "ok": 1
}

Deploying a service cluster with docker stack

Preparing the base environment

The setup mirrors the previous section: build the swarm across vm1, vm2, and vm3, but do not create the overlay network by hand; the stack file below declares its own.

Creating data directories

Create the directories on vm1, vm2, and vm3 and grant permissions (or use the loop sketched after these commands):

mkdir -p /data/mongo/config/db
chmod -R 777 /data/mongo/config/db
mkdir -p /data/mongo/shard-1/db 
chmod -R 777 /data/mongo/shard-1/db
mkdir -p /data/mongo/shard-2/db 
chmod -R 777 /data/mongo/shard-2/db
mkdir -p /data/mongo/shard-3/db 
chmod -R 777 /data/mongo/shard-3/db
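
Equivalently, as a loop (same directories, same permissions):

for d in config shard-1 shard-2 shard-3; do
    mkdir -p /data/mongo/$d/db
    chmod -R 777 /data/mongo/$d/db
done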

Writing the docker stack deploy file

version: '3.4'
services:
  shard-1-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  shard-1-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-1-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-1
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-1/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-2-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-2-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  shard-2-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-2
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-2/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-3-server-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  shard-3-server-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  shard-3-server-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --shardsvr --replSet shard-3
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/shard-3/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  config-1:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  config-2:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  config-3:
    image: percona/percona-server-mongodb:3.4
    command: mongod --storageEngine=rocksdb --dbpath=/data/db --configsvr --replSet config
    networks:
      - overlay
    volumes:
      - /etc/localtime:/etc/localtime
      - /data/mongo/config/db:/data/db
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
  mongos-1:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27017:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm1
  mongos-2:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27018:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm2
  mongos-3:
    image: percona/percona-server-mongodb:3.4
    command: mongos --configdb=config/config-1:27019,config-2:27019,config-3:27019
    networks:
      - overlay
    ports:
      - 27019:27017
    volumes:
      - /etc/localtime:/etc/localtime
    depends_on:
      - config-1
      - config-2
      - config-3
    deploy:
      restart_policy:
        condition: on-failure
      replicas: 1
      placement:
        constraints:
          - node.hostname==vm3
networks:
  overlay:
    driver: overlay

Starting the cluster

Save the stack file as mongo.yaml and deploy it. Note that docker stack deploy ignores depends_on; the on-failure restart policy covers any start-order issues.

docker stack deploy -c mongo.yaml mongo

The output:

Creating network mongo_overlay
Creating service mongo_shard-3-server-3
Creating service mongo_shard-1-server-2
Creating service mongo_mongos-3
Creating service mongo_shard-3-server-1
Creating service mongo_mongos-1
Creating service mongo_config-2
Creating service mongo_shard-1-server-1
Creating service mongo_mongos-2
Creating service mongo_shard-2-server-2
Creating service mongo_shard-1-server-3
Creating service mongo_shard-2-server-1
Creating service mongo_config-3
Creating service mongo_shard-2-server-3
Creating service mongo_config-1
Creating service mongo_shard-3-server-2

Checking startup

docker service ls

When the REPLICAS column shows 1/1 for every service, the cluster has started successfully.

ID                  NAME                     MODE                REPLICAS            IMAGE                                PORTS
9urzybegrbmz        mongo_config-1           replicated          1/1                 percona/percona-server-mongodb:3.4   
jldecj0s6238        mongo_config-2           replicated          1/1                 percona/percona-server-mongodb:3.4   
n9r4ld6komnq        mongo_config-3           replicated          1/1                 percona/percona-server-mongodb:3.4   
ni94pd5odl89        mongo_mongos-1           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27017->27017/tcp
sh4ykadpmoka        mongo_mongos-2           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27018->27017/tcp
12m4nbyn77va        mongo_mongos-3           replicated          1/1                 percona/percona-server-mongodb:3.4   *:27019->27017/tcp
psolde1gltn9        mongo_shard-1-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4   
4t2xwavpgg26        mongo_shard-1-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4   
qwjfpg93qkho        mongo_shard-1-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4   
ztbxk12npvwo        mongo_shard-2-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4   
tz3n5oj55osx        mongo_shard-2-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4   
pcprsbo9xxin        mongo_shard-2-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4   
nn7mrm0iy26v        mongo_shard-3-server-1   replicated          1/1                 percona/percona-server-mongodb:3.4   
ps4zqmiqzw1k        mongo_shard-3-server-2   replicated          1/1                 percona/percona-server-mongodb:3.4   
iv1gvzzm3ai0        mongo_shard-3-server-3   replicated          1/1                 percona/percona-server-mongodb:3.4   
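
If any service is stuck at 0/1, you can inspect its tasks and the scheduler's error messages, e.g.:

docker service ps --no-trunc mongo_config-1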

Initiating the replica sets

docker exec -it $(docker ps | grep "config" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id: \"config\",configsvr: true, members: [{ _id : 0, host : \"config-1:27019\" },{ _id : 1, host : \"config-2:27019\" }, { _id : 2, host : \"config-3:27019\" }]})' | mongo --port 27019";
docker exec -it $(docker ps | grep "shard-1" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-1\", members: [{ _id : 0, host : \"shard-1-server-1:27018\" },{ _id : 1, host : \"shard-1-server-2:27018\" },{ _id : 2, host : \"shard-1-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
docker exec -it $(docker ps | grep "shard-2" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-2\", members: [{ _id : 0, host : \"shard-2-server-1:27018\" },{ _id : 1, host : \"shard-2-server-2:27018\" },{ _id : 2, host : \"shard-2-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";
docker exec -it $(docker ps | grep "shard-3" | awk '{ print $1 }') bash -c "echo 'rs.initiate({_id : \"shard-3\", members: [{ _id : 0, host : \"shard-3-server-1:27018\" },{ _id : 1, host : \"shard-3-server-2:27018\" },{ _id : 2, host : \"shard-3-server-3:27018\", arbiterOnly: true }]})' | mongo --port 27018";

Note: a shard cannot be initiated from the machine that hosts that shard's arbiter (the docker ps | grep there matches the arbiter container); run the command from a different machine.
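
To confirm the members after initiation, a sketch for shard-1 (run it on a machine that hosts a data-bearing shard-1 member):

docker exec -it $(docker ps | grep "shard-1" | awk '{ print $1 }') bash -c "echo 'rs.status().members.forEach(function(m){ print(m.name, m.stateStr) })' | mongo --port 27018"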

Adding shards

docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-1/shard-1-server-1:27018,shard-1-server-2:27018,shard-1-server-3:27018\")' | mongo ";
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-2/shard-2-server-1:27018,shard-2-server-2:27018,shard-2-server-3:27018\")' | mongo ";
docker exec -it $(docker ps | grep "mongos-1" | awk '{ print $1 }') bash -c "echo 'sh.addShard(\"shard-3/shard-3-server-1:27018,shard-3-server-2:27018,shard-3-server-3:27018\")' | mongo ";

Testing

Same as the testing part of the overlay section.
