For work, I needed to set up a MongoDB cluster for testing (sharding + replica sets).
Still the same three machines (why do I keep saying "still"…).
Building the cluster on three machines gives the following architecture:
- Three shard groups (shard + replica set), the shard groups separated by port
- One config server group (config server + replica set), on its own port separate from the other components
- Three mongos instances providing routing
1. Shard:
A replica set can be understood as a deployment pattern: within each shard, one primary and one secondary together form a replica set. (So what does the remaining instance do?)
Each shard also gets an arbiter node, which stores no data and only takes part in election voting.
2. ConfigSvr (config server):
It stores the cluster metadata, which includes:
- configuration of the shard nodes
- the shard key range of each chunk
- the distribution of chunks across the shards
- the sharding configuration of the cluster's databases and collections, etc.
Layout at a glance:
Role | 10.0.0.192 | 10.0.0.193 | 10.0.0.194 |
---|---|---|---|
shard1 | shard11:10004 (primary) | shard12:10004 (secondary) | shard13:10004 (arbiter) |
shard2 | shard21:10005 (secondary) | shard22:10005 (arbiter) | shard23:10005 (primary) |
shard3 | shard31:10006 (arbiter) | shard32:10006 (primary) | shard33:10006 (secondary) |
config | c1:10007 (primary) | c2:10007 (secondary) | c3:10007 (secondary) |
mongos | mongos:10008 | mongos:10008 | mongos:10008 |
# Install the internally provided mongo-3.6.10 package on all three nodes
$ wget http://10.0.0.156/rpm/7/mongo-3.6.10.el7-1.x86_64.rpm
# yum also pulls in the required dependency packages
$ yum -y install mongo-3.6.10.el7-1.x86_64.rpm
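A quick check that the package put everything where the rest of this walkthrough expects (the install prefix is taken from the paths used later):
$ /usr/local/mongodb/bin/mongod --version
$ rpm -qa | grep -i mongo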
/usr/local/mongodb/
/usr/local/mongodb/etc/
config.conf
: ConfigSvr service configuration
shard.conf
: shard service configuration (the sharding: block should stay commented out)
mongos.conf
: mongos service configuration
/usr/local/mongodb/script/
start_mongod.sh
: starts a single shard instance; must match the shard's config file name
start_config.sh
: starts the configSvr instance; must match its config file name
start_mongos.sh
: starts the mongos instance; must match its config file name
/data/logs/mongodb/
config.log -> ConfigSvr
shard/access.log -> Shard
route.log -> mongos
/data/mongodb
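The per-shard directories are created further down; the config server and mongos paths listed above must also exist and be writable by the mongodb user (an assumption based on the sudo -u mongodb startup commands below; the RPM may already create them):
$ mkdir -p /data/mongodb/config /data/logs/mongodb
$ chown -R mongodb:mongodb /data/mongodb /data/logs/mongodb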
# Persist the per-user/per-process limits on open file handles and processes
$ vim /etc/security/limits.conf # add the following
* soft nofile 655360
* hard nofile 655360
* soft nproc 131072
* hard nproc 131072
# This file caps the system-wide nproc limit
$ vim /etc/security/limits.d/90-nproc.conf # add the following
* soft nproc 655360
* hard nproc 655360
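The limits only apply to new sessions; after logging in again they can be verified like this:
$ ulimit -n # open files, should match the nofile value above
$ ulimit -u # max user processes, should match the nproc value above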
# The following system tuning is applied by the startup scripts; no manual action is needed.
# 0 disables zone_reclaim mode: when a memory zone runs out, memory may be reclaimed from other zones / NUMA nodes.
$ echo 'vm.zone_reclaim_mode = 0' >> /etc/sysctl.conf
# Disable transparent huge pages (MongoDB officially recommends turning them off)
$ echo 'never' > /sys/kernel/mm/transparent_hugepage/enabled
$ echo 'never' > /sys/kernel/mm/transparent_hugepage/defrag
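To confirm the THP setting took effect, the value in brackets is the active one:
$ cat /sys/kernel/mm/transparent_hugepage/enabled
always madvise [never]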
1. 10.0.0.192 - shard1
# copy shard.conf to shard1.conf
$ cp /usr/local/mongodb/etc/shard.conf /usr/local/mongodb/etc/shard1.conf
$ vim /usr/local/mongodb/etc/shard1.conf
storage:
  ## change the shard name here to match your own setup
  dbPath: "/data/mongodb/shard1"
  indexBuildRetry: true
  journal:
    enabled: true
    commitIntervalMs: 30
  directoryPerDB: true
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      journalCompressor: zlib
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
systemLog:
  destination: file
  ## keep consistent with the shard name above
  path: "/data/logs/mongodb/shard1/access.log"
  logAppend: true
  timeStampFormat: iso8601-local
processManagement:
  fork: true
  ## keep consistent with the shard name above
  pidFilePath: "/tmp/mongodb-shard1.pid"
net:
  bindIp: 10.0.0.192,127.0.0.1 # local IP + loopback
  port: 10004 # shard1 port
replication:
  ## keep consistent with the shard1 name above
  replSetName: shard1
## Enable the block below only after the cluster has been set up and the users/passwords created;
## uncomment it when restarting the instances
# security:
#   keyFile: '/usr/local/mongodb/etc/keyfile'
#   clusterAuthMode: "keyFile"
#   authorization: enabled
2. 10.0.0.192 - shard2
# copy shard.conf to shard2.conf
$ cp /usr/local/mongodb/etc/shard.conf /usr/local/mongodb/etc/shard2.conf
$ vim /usr/local/mongodb/etc/shard2.conf
storage:
  ## change the shard name here to match your own setup
  dbPath: "/data/mongodb/shard2"
  indexBuildRetry: true
  journal:
    enabled: true
    commitIntervalMs: 30
  directoryPerDB: true
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      journalCompressor: zlib
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
systemLog:
  destination: file
  ## keep consistent with the shard name above
  path: "/data/logs/mongodb/shard2/access.log"
  logAppend: true
  timeStampFormat: iso8601-local
processManagement:
  fork: true
  ## keep consistent with the shard name above
  pidFilePath: "/tmp/mongodb-shard2.pid"
net:
  bindIp: 10.0.0.192,127.0.0.1
  port: 10005 # shard2 port
replication:
  ## keep consistent with the shard2 name above
  replSetName: shard2
## Enable the block below only after the cluster has been set up and the users/passwords created;
## uncomment it when restarting the instances
# security:
#   keyFile: '/usr/local/mongodb/etc/keyfile'
#   clusterAuthMode: "keyFile"
#   authorization: enabled
3. 10.0.0.192 - shard3
# copy shard.conf to shard3.conf
$ cp /usr/local/mongodb/etc/shard.conf /usr/local/mongodb/etc/shard3.conf
$ vim /usr/local/mongodb/etc/shard3.conf
storage:
  ## change the shard name here to match your own setup
  dbPath: "/data/mongodb/shard3"
  indexBuildRetry: true
  journal:
    enabled: true
    commitIntervalMs: 30
  directoryPerDB: true
  syncPeriodSecs: 60
  engine: wiredTiger
  wiredTiger:
    engineConfig:
      cacheSizeGB: 16
      journalCompressor: zlib
      directoryForIndexes: true
    collectionConfig:
      blockCompressor: zlib
    indexConfig:
      prefixCompression: true
systemLog:
  destination: file
  ## keep consistent with the shard name above
  path: "/data/logs/mongodb/shard3/access.log"
  logAppend: true
  timeStampFormat: iso8601-local
processManagement:
  fork: true
  ## keep consistent with the shard name above
  pidFilePath: "/tmp/mongodb-shard3.pid"
net:
  bindIp: 10.0.0.192,127.0.0.1
  port: 10006 # shard3 port
replication:
  ## keep consistent with the shard3 name above
  replSetName: shard3
## Enable the block below only after the cluster has been set up and the users/passwords created;
## uncomment it when restarting the instances
# security:
#   keyFile: '/usr/local/mongodb/etc/keyfile'
#   clusterAuthMode: "keyFile"
#   authorization: enabled
The shard1/2/3 configurations on 10.0.0.193 and 10.0.0.194 are the same as above; just change bindIp to the local address (see the sketch below).
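Since the three shard configs differ only in shard name, port, and bindIp, they can also be generated instead of hand-edited. A minimal sketch, assuming shard1.conf already exists on each host and using the port mapping from the table above (verify the results before starting anything):
$ cd /usr/local/mongodb/etc
$ sed -e 's/shard1/shard2/g' -e 's/10004/10005/g' shard1.conf > shard2.conf
$ sed -e 's/shard1/shard3/g' -e 's/10004/10006/g' shard1.conf > shard3.conf
# On 10.0.0.193 / 10.0.0.194, additionally swap in the local address
$ sed -i 's/10.0.0.192/10.0.0.193/' shard{1,2,3}.conf # run this one on 10.0.0.193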
1. 10.0.0.192 - config
$ vim /usr/local/mongodb/etc/config.conf
bind_ip=10.0.0.192,127.0.0.1
dbpath=/data/mongodb/config
logpath=/data/logs/mongodb/config.log
port=10007
pidfilepath=/data/logs/mongodb/config.pid
directoryperdb=true
logappend=true
fork=true
configsvr=true
journal=true
## replica set name of the config servers; change it to suit your setup and update mongos.conf accordingly
replSet = config
## Enable the lines below only after the cluster has been set up and the users/passwords created; uncomment them when restarting
# keyFile=/usr/local/mongodb/etc/keyfile
# clusterAuthMode=keyFile
The config server configuration on 10.0.0.193 and 10.0.0.194 is the same; just change bind_ip to the local address.
1. 10.0.0.192 - mongos
$ vim /usr/local/mongodb/etc/mongos.conf
## IP:port list of the config servers; if the config replica set name changes, update it here as well
configdb=config/10.0.0.192:10007,10.0.0.193:10007,10.0.0.194:10007
logpath=/data/logs/mongodb/route.log
port=10008
pidfilepath=/data/logs/mongodb/route.pid
logappend=true
fork=true
keyFile=/usr/local/mongodb/etc/keyfile
clusterAuthMode=keyFile
The mongos configuration on 10.0.0.193 and 10.0.0.194 is identical; nothing needs to be changed.
Replica set configurations planned for each shard and for the config server (these are reused in the initialization steps below):
shard_conf = {_id: 'shard1', members: [
{_id: 0, host: '10.0.0.192:10004', priority:2},
{_id: 1, host: '10.0.0.193:10004'},
{_id: 2, host: '10.0.0.194:10004', arbiterOnly:true},
]};
shard_conf = {_id: 'shard2', members: [
{_id: 0, host: '10.0.0.192:10005'},
{_id: 1, host: '10.0.0.193:10005', arbiterOnly:true},
{_id: 2, host: '10.0.0.194:10005', priority:2},
]};
shard_conf = {_id: 'shard3', members: [
{_id: 0, host: '10.0.0.192:10006', arbiterOnly:true},
{_id: 1, host: '10.0.0.193:10006', priority:2},
{_id: 2, host: '10.0.0.194:10006'},
]};
configSvr = {_id: 'config', members: [
{_id: 0, host: '10.0.0.192:10007', priority:2},
{_id: 1, host: '10.0.0.193:10007'},
{_id: 2, host: '10.0.0.194:10007'},
]};
# Create separate directories for each shard on every node
$ mkdir -p /data/logs/mongodb/shard{1,2,3}
$ chown -R mongodb:mongodb /data/logs/mongodb/shard{1,2,3}
$ mkdir -p /data/mongodb/shard{1,2,3}
$ chown -R mongodb:mongodb /data/mongodb/shard{1,2,3}
# Make one copy of the start script per shard and edit each copy's REPLSET variable to match the shard name
$ seq 1 3|xargs -i cp start_mongod.sh start_mongod{}.sh
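If the script assigns REPLSET on a line of its own (an assumption about the internal script; check the copies afterwards), the edit can be scripted too:
$ seq 1 3 | xargs -i sed -i 's/^REPLSET=.*/REPLSET=shard{}/' start_mongod{}.sh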
# Start it with the startup script, on all three nodes
$ cd /usr/local/mongodb/script
$ sh start_config.sh
# The core of the script is the following command
sudo -u mongodb numactl --interleave=all /usr/local/mongodb/bin/mongod -f /usr/local/mongodb/etc/config.conf
# Log in to the config server instance on 10.0.0.192
$ mongo 127.0.0.1:10007/admin
# Define the replica set configuration
> configSvr = {_id: 'config', members: [
{_id: 0, host: '10.0.0.192:10007', priority:2},
{_id: 1, host: '10.0.0.193:10007'},
{_id: 2, host: '10.0.0.194:10007'},
]};
# Initiate the replica set
> rs.initiate(configSvr)
# Added successfully; check the status
> rs.status()
# Create an account
> user = {user:'root', pwd: 'xxxxxx', roles:[{role: 'root', db: 'admin'}]}
> db.createUser(user)
# Start them with the startup scripts: 9 instances in total across the three nodes
$ cd /usr/local/mongodb/script
$ seq 1 3 | xargs -i sh start_mongod{}.sh
# Core command of the script
$ sudo -u mongodb numactl --interleave=all /usr/local/mongodb/bin/mongod --shardsvr -f /usr/local/mongodb/etc/shard1.conf
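The internal start_mongod.sh is not reproduced here; a minimal sketch of what each start_mongodN.sh copy presumably boils down to, assuming REPLSET only selects the config file (the actual script may differ):
#!/bin/bash
# start_mongod1.sh -- hypothetical sketch; REPLSET picks the shard config to start
REPLSET=shard1
sudo -u mongodb numactl --interleave=all /usr/local/mongodb/bin/mongod \
    --shardsvr -f /usr/local/mongodb/etc/${REPLSET}.conf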
# For each shard, log in to the member with the highest priority (the intended primary)
$ mongo 127.0.0.1:10004/admin
> shard_conf = {_id: 'shard1', members: [
{_id: 0, host: '10.0.0.192:10004', priority:2},
{_id: 1, host: '10.0.0.193:10004'},
{_id: 2, host: '10.0.0.194:10004', arbiterOnly:true},
]};
> rs.initiate(shard_conf)
> rs.status()
# Create an account
> user = {user:'root', pwd: 'xxxxxx', roles:[{role: 'root', db: 'admin'}]}
> db.createUser(user)
The remaining shards are set up the same way; mind the port and the matching shard_conf.
# All three nodes need this file
$ cd /usr/local/mongodb/etc/
$ openssl rand -base64 753 > keyfile
$ chmod 600 keyfile
# Copy it to the other machines
$ seq 193 194 |xargs -i scp keyfile 10.0.0.{}:/usr/local/mongodb/etc/
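Because the startup scripts run mongod as the mongodb user (the sudo -u mongodb above), the keyfile must be readable by that user on all three machines; this ownership step is not in the original commands but is assumed to be required:
$ chown mongodb:mongodb /usr/local/mongodb/etc/keyfile
$ chmod 600 /usr/local/mongodb/etc/keyfile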
# Uncomment the keyfile blocks in each service's configuration
# Kill the mongod processes and restart every instance with the scripts to turn authentication on.
$ -_-
# Start it with the startup script, on all three nodes
$ cd /usr/local/mongodb/script
$ sh start_mongos.sh
# Core command of the script
sudo -u mongodb numactl --interleave=all /usr/local/mongodb/bin/mongos -f /usr/local/mongodb/etc/mongos.conf
# Log in to each of the three mongos instances
$ mongo 127.0.0.1:10008/admin
# Add the shards
mongos> sh.addShard("shard1/10.0.0.192:10004,10.0.0.193:10004,10.0.0.194:10004")
mongos> sh.addShard("shard2/10.0.0.192:10005,10.0.0.193:10005,10.0.0.194:10005")
mongos> sh.addShard("shard3/10.0.0.192:10006,10.0.0.193:10006,10.0.0.194:10006")
# Check the cluster status
mongos> sh.status()
# Create the index
mongos> use shard_test
mongos> db.c1.ensureIndex({id: "hashed"})
# Enable sharding on the database
mongos> db.runCommand({enablesharding : "shard_test"})
# Shard the collection, using the hashed index on id as the shard key
mongos> db.runCommand({shardcollection : "shard_test.c1", key : {id : "hashed"}})
# Insert test data to see how it is distributed across the shards
mongos> use shard_test
mongos> for (var i = 1; i <= 1000; i++){db.c1.insert({id:i, text: "hello world"})}
# Check the shard distribution
mongos> db.c1.stats()
...
"shard1" : {
"ns" : "shard_test.c1",
"size" : 17360,
"count" : 310,
...
"shard2" : {
"ns" : "shard_test.c1",
"size" : 19376,
"count" : 346,
...
"shard3" : {
"ns" : "shard_test.c1",
"size" : 19264,
"count" : 344,
The data has been sharded successfully.
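db.c1.getShardDistribution() gives the same per-shard breakdown in a more readable form; run from a shell it might look like this, assuming the root user created earlier is used for authentication:
$ mongo 127.0.0.1:10008/shard_test -u root -p 'xxxxxx' --authenticationDatabase admin \
    --eval 'db.c1.getShardDistribution()'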
# Log in to each shard's PRIMARY and run the following (make sure the member index points at the secondary you want to delay, not the arbiter or the primary)
PRIMARY> cfg = rs.conf()
PRIMARY> cfg.members[2].priority = 0
PRIMARY> cfg.members[2].hidden = true
PRIMARY> cfg.members[2].slaveDelay = 3600
PRIMARY> rs.reconfig(cfg)
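A quick check that the reconfig took effect, again assuming the root account created earlier (run against the shard's primary; members[2] is the member modified above):
$ mongo 127.0.0.1:10004/admin -u root -p 'xxxxxx' --eval 'printjson(rs.conf().members[2])'
# expect priority: 0, hidden: true, slaveDelay: NumberLong(3600)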