2018-08-24 (MongoDB)

21.26 MongoDB Introduction

MongoDB is a document-oriented database built on distributed file storage. Although it is also a NoSQL database, it differs from databases such as redis and memcached. MongoDB is written in C++ and aims to provide a scalable, high-performance data storage solution for web applications.

MongoDB sits between relational and non-relational databases: among non-relational databases it has the richest feature set and is the most similar to a relational database. The data structures it supports are very loose, using a JSON-like format called BSON, so it can store fairly complex data types. Mongo's biggest strength is its powerful query language, whose syntax somewhat resembles an object-oriented query language; it can express almost everything a single-table query in a relational database can, and it also supports indexes on the data.

MongoDB was developed by the 10gen team starting in October 2007 and was first released in February 2009.

MongoDB stores data as documents whose structure consists of key/value (key => value) pairs. MongoDB documents are similar to JSON objects, and field values may contain other documents, arrays, and arrays of documents.
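For example, a single document can embed sub-documents and arrays. A minimal mongo-shell sketch (the collection name people and the field values are purely illustrative):

> db.people.insert({ name: "alice", tags: ["dev", "ops"], address: { city: "Beijing", zip: "100000" } })
WriteResult({ "nInserted" : 1 })
> db.people.find({ "address.city": "Beijing" })   // dot notation queries into the embedded document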

MongoDB official site (the latest version at the time of writing is 4.0):
https://www.mongodb.com/

For an introduction to JSON, see:
http://www.w3school.com.cn/json/index.asp

Features:

Collection-oriented storage, easy to store object-type data
Schema-free
Supports dynamic queries
Full index support, including on embedded objects
Rich query support
Replication and failover support
Efficient binary data storage, including large objects (such as video)
Automatic shard handling, supporting cloud-scale scalability
Driver support for Ruby, Python, Java, C++, PHP, C# and many other languages
Files are stored in BSON (an extension of JSON)
Accessible over the network
The basic concepts in MongoDB are the document, the collection and the database. The figure below compares the terminology and concepts of MongoDB and a relational database:
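(The figure itself is not reproduced in these notes; the mapping it usually shows is roughly:)

database     -> database
table        -> collection
row          -> document
column       -> field
table join   -> embedded documents (joins are not supported)
primary key  -> _id (created automatically)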

The following two figures (not reproduced here) compare the data storage structures of a relational database and of MongoDB:
Relational database data structure:

MongoDB data structure:

21.27 Installing MongoDB

Note: installing MongoDB simply means adding the official yum repository and installing from it.

Create the yum repository

Note: go to the /etc/yum.repos.d/ directory and create mongodb.repo

[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# vim mongodb.repo     # add the following content
[mongodb-org-4.0]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/redhat/$releasever/mongodb-org/4.0/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.0.asc

After saving the file, run yum list to check that the mongodb rpm packages are available:

[root@localhost yum.repos.d]# yum list |grep mongodb
mongodb-org.x86_64                          4.0.1-1.el7                mongodb-org-4.0
mongodb-org-mongos.x86_64                   4.0.1-1.el7                mongodb-org-4.0
mongodb-org-server.x86_64                   4.0.1-1.el7                mongodb-org-4.0
mongodb-org-shell.x86_64                    4.0.1-1.el7                mongodb-org-4.0
mongodb-org-tools.x86_64                    4.0.1-1.el7                mongodb-org-4.0

Install directly with yum:

[root@localhost yum.repos.d]# yum install -y mongodb-org

21.28 Connecting to MongoDB

1. The MongoDB configuration file:

[root@localhost ~]# vim /etc/mongod.conf
# mongod.conf

# for documentation of all options, see:
#   http://docs.mongodb.org/manual/reference/configuration-options/

# where to write logging data.
systemLog:  # log file settings
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

# Where and how to store data.
storage:   # data storage directory settings
  dbPath: /var/lib/mongo
  journal:
    enabled: true
#  engine:
#  mmapv1:
#  wiredTiger:

# how the process runs
processManagement:  # process/pid file settings
  fork: true  # fork and run in background
  pidFilePath: /var/run/mongodb/mongod.pid  # location of pidfile
  timeZoneInfo: /usr/share/zoneinfo

# network interfaces
net:  # listening port and bind IPs; multiple IPs are separated by commas, e.g. bindIp: 127.0.0.1,192.168.66.130
  port: 27017
  bindIp: 127.0.0.1  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting.

#security:

#operationProfiling:

#replication:

#sharding:

## Enterprise-Only Options

#auditLog:

#snmp:

2. Start the mongod service, then check the process and listening ports:

[root@localhost yum.repos.d]# systemctl start mongod
[root@localhost yum.repos.d]# ps aux |grep mongod
mongod    12194  9.7  6.0 1067976 60192 ?       Sl   10:33   0:00 /usr/bin/mongod -f /etc/mongod.conf
root      12221  0.0  0.0 112720   972 pts/1    S+   10:33   0:00 grep --color=auto mongod
[root@localhost yum.repos.d]# netstat -lnp|grep mongod
tcp        0      0 192.168.66.130:27017    0.0.0.0:*               LISTEN      12194/mongod        
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      12194/mongod        
unix  2      [ ACC ]     STREAM     LISTENING     90072    12194/mongod         /tmp/mongodb-27017.sock

3. Connect to MongoDB

Note: use the mongo command to connect to MongoDB.

Example:

[root@localhost yum.repos.d]# mongo
MongoDB shell version v4.0.1
connecting to: mongodb://127.0.0.1:27017
MongoDB server version: 4.0.1
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    http://docs.mongodb.org/
Questions? Try the support group
    http://groups.google.com/group/mongodb-user
Server has startup warnings: 
2018-08-26T10:33:15.470+0800 I CONTROL  [initandlisten] 
2018-08-26T10:33:15.470+0800 I CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2018-08-26T10:33:15.470+0800 I CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2018-08-26T10:33:15.470+0800 I CONTROL  [initandlisten] 
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] 
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] 
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] **        We suggest setting it to 'never'
2018-08-26T10:33:15.471+0800 I CONTROL  [initandlisten] 
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

> 

• If MongoDB is not listening on the default port 27017, add the --port option when connecting, for example:

mongo --port 27018

• To connect to a remote MongoDB, add the --host option: mongo --host 192.168.66.130

[root@localhost yum.repos.d]# mongo --host 192.168.66.130 --port 27017

• If MongoDB has authentication enabled, supply a user name and password when connecting.

Format:

mongo -u username -p passwd --authenticationDatabase db

21.29 MongoDB User Management

1. Enter MongoDB and switch to the admin database, since user-related operations must be done from the admin database:

[root@localhost ~]# mongo
#switch to the admin database
> use admin
switched to db admin
> 

2. Create a user:

> db.createUser( { user: "admin", customData: {description: "superuser"}, pwd: "admin122", roles: [ { role: "root", db: "admin" } ] } )
Successfully added user: {
    "user" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ]
}

The command above created an admin user. Some notes on the syntax: the parentheses are the outermost level, then the square brackets, then the curly braces. user specifies the user name; customData is a descriptive field and may be omitted; pwd is the password; roles assigns the user's roles, where role effectively specifies the permission and db the database it applies to. A user must always be created against a specific database.
3. db.system.users.find() ---> lists all users; you must switch to the admin database first

> db.system.users.find()
{ "_id" : "admin.admin", "user" : "admin", "db" : "admin", "credentials" : { "SCRAM-SHA-1" : { "iterationCount" : 10000, "salt" : "9sOjvYZsJhrO177B4xNJxg==", "storedKey" : "d3xRf6ZmyyKeIRRffwsxVfPV/V4=", "serverKey" : "IK5KBhAzowClzCBa4NNHmuWDvIM=" }, "SCRAM-SHA-256" : { "iterationCount" : 15000, "salt" : "F7/rtZ0WJiqByvtiZpZ/sKareDh0rI2kDtc0gw==", "storedKey" : "5ZDQ8uZhiTuqI8g1zeSTfj3mNOFlc8d19cL+dlQYUWI=", "serverKey" : "BTidtpburV7jXTRyL5a3VknUsXXVoVtMpkC19p5bgQg=" } }, "customData" : { "description" : "superuser" }, "roles" : [ { "role" : "root", "db" : "admin" } ] }
4. show users ---> lists all the users of the current database
    > show users
    {
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
    }

5. Delete a user:

    # first create a user named luo, password 123456, with the read role on the testdb database
    > db.createUser({user:"luo",pwd:"123456",roles:[{role:"read",db:"testdb"}]})
    Successfully added user: {
    "user" : "luo",
    "roles" : [
        {
            "role" : "read",
            "db" : "testdb"
        }
    ]
    }
    #list all the users; you can see there are now two
    > show users
    {
    "_id" : "admin.admin",
    "user" : "admin",
    "db" : "admin",
    "customData" : {
        "description" : "superuser"
    },
    "roles" : [
        {
            "role" : "root",
            "db" : "admin"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
    }
    {
    "_id" : "admin.luo",
    "user" : "luo",
    "db" : "admin",
    "roles" : [
        {
            "role" : "read",
            "db" : "testdb"
        }
    ],
    "mechanisms" : [
        "SCRAM-SHA-1",
        "SCRAM-SHA-256"
    ]
    }

db.dropUser() ---> deletes a user; let's delete the user luo we just created:

    > db.dropUser("luo")
    true
    # a return value of true means the deletion succeeded

6. Log in to MongoDB with the new user name and password
We created an admin user earlier, but for passwords to take effect we also need to edit the startup script with vim /usr/lib/systemd/system/mongod.service and add --auth after OPTIONS=:

    [root@localhost ~]#  vim /usr/lib/systemd/system/mongod.service 
    ................
    Environment="OPTIONS=-f /etc/mongod.conf"  
    #change it to:
    Environment="OPTIONS=--auth -f /etc/mongod.conf"
    ...............

Restart the service:

    
    [root@localhost ~]#  systemctl restart mongod
    Warning: mongod.service changed on disk. Run 'systemctl daemon-reload' to reload units.
    #note: because we just modified the unit file, systemd asks us to do a daemon-reload

[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart mongod

Check the process (note that --auth has been added):

[root@localhost ~]# ps aux|grep mongod
mongod 12294 8.7 5.2 1067972 52052 ? Sl 10:55 0:01 /usr/bin/mongod --auth -f /etc/mongod.conf
root 12325 0.0 0.0 112720 972 pts/1 S+ 10:56 0:00 grep --color=auto mongod

7. Log in to MongoDB as the admin user:

[root@localhost ~]# mongo -u "admin" -p "admin122" --authenticationDatabase "admin" --host 192.168.66.130 --port 27017

8. Create a database and a user:

When you use a database that does not exist, it is created automatically:

use db1
switched to db db1
#create the test1 user and grant it roles:
db.createUser( { user: "test1", pwd: "123aaa", roles: [ { role: "readWrite", db: "db1" }, {role: "read", db: "db2" } ] } )
Successfully added user: {
"user" : "test1",
"roles" : [
{
"role" : "readWrite",
"db" : "db1"
},
{
"role" : "read",
"db" : "db2"
}
]
}

• Note: the test1 user has read/write access to db1 and read-only access to db2. Because the user was created in db1, authentication must be done against db1; a user's credentials live in the database where the user was created. So even though test1 has read access to db2, it must first authenticate in db1; accessing db2 directly would fail authentication.
Authenticate in db1:

db.auth('test1','123aaa')
1

Only after authenticating can you perform operations in the db2 database:

use db2
switched to db db2

**MongoDB user roles:**
![](https://s1.51cto.com/images/blog/201808/26/87de8f163b8394c871fd32fe511487a1.png?x-oss-process=image/watermark,size_16,text_QDUxQ1RP5Y2a5a6i,color_FFFFFF,t_100,g_se,x_10,y_10,shadow_90,type_ZmFuZ3poZW5naGVpdGk=)
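A few other user-management helpers work the same way (a sketch only; it assumes a user such as luo still exists, and must be run from the database the user was created in; these are standard mongo-shell methods):

> db.changeUserPassword("luo", "654321")                               // change a user's password
> db.grantRolesToUser("luo", [ { role: "readWrite", db: "testdb" } ])  // grant extra roles
> db.revokeRolesFromUser("luo", [ { role: "read", db: "testdb" } ])    // take roles away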
21.30 MongoDB Collections and Data Management
Syntax for creating a collection:

db.createCollection(name,options)


name is the collection's name; options is optional and configures the collection's parameters.

For example, to create a collection named mycol:

> db.createCollection("mycol", { capped : true, size : 6142800, max : 10000 } )
{ "ok" : 1 }
> 

The command above creates a collection named mycol, enables the capped option, sets the collection's maximum size to 6142800 bytes, and limits it to at most 10000 documents.

The configurable collection parameters are:

capped true/false (optional): if true, creates a capped collection, a fixed-size collection that automatically overwrites its oldest entries when it reaches its maximum size. If true, the size parameter must also be specified.
autoIndexId true/false (optional, deprecated): whether to automatically create an index on the _id field.
size (optional): the maximum size of the capped collection, in bytes. Required when capped is true.
max (optional): the maximum number of documents allowed in the capped collection.
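The capped behaviour is easy to see with a tiny sketch (the collection name demo_capped and the numbers are only illustrative): create a capped collection that holds at most 3 documents, insert 4, and the oldest one disappears:

> db.createCollection("demo_capped", { capped : true, size : 4096, max : 3 })
{ "ok" : 1 }
> for (var i = 1; i <= 4; i++) { db.demo_capped.insert({ n: i }) }
> db.demo_capped.find()
// only the documents with n = 2, 3 and 4 remain; n = 1 was overwritten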

Some other commonly used MongoDB commands:
1. show collections lists the collections; show tables works as well:

> show collections
mycol
> show tables
mycol

2. Insert data
Note: if the collection does not exist, inserting data into it creates it automatically.

> db.Account.insert({AccountID:1,UserName:"test",password:"123456"})
WriteResult({ "nInserted" : 1 })
> show tables
Account
mycol
> db.mycol.insert({AccountID:1,UserName:"test",password:"123456"})
WriteResult({ "nInserted" : 1 })

3. Update data

// $set is an update operator; this statement adds a key named Age with the value 20 to the matched document
> db.Account.update({AccountID:1},{"$set":{"Age":20}})
WriteResult({ "nMatched" : 1, "nUpserted" : 0, "nModified" : 1 })
4. View all documents
    # insert one more document first
    > db.Account.insert({AccountID:2,UserName:"test2",password:"123456"})
    WriteResult({ "nInserted" : 1 })
    > db.Account.find()  // list all documents in the collection
    { "_id" : ObjectId("5a5377cb503451a127782146"), "AccountID" : 1, "UserName" : "test", "password" : "123456", "Age" : 20 }
    { "_id" : ObjectId("5a537949503451a127782149"), "AccountID" : 2, "UserName" : "test2", "password" : "123456" }

You can also query by condition, for example by AccountID:

    > db.Account.find({AccountID:1})
    { "_id" : ObjectId("5a5377cb503451a127782146"), "AccountID" : 1, "UserName" : "test", "password" : "123456", "Age" : 20 }
    > db.Account.find({AccountID:2})
    { "_id" : ObjectId("5a537949503451a127782149"), "AccountID" : 2, "UserName" : "test2", "password" : "123456" }

Delete documents matching a condition:

    > db.Account.remove({AccountID:1})
    WriteResult({ "nRemoved" : 1 })
    > db.Account.find()
    { "_id" : ObjectId("5a537949503451a127782149"), "AccountID" : 2, "UserName" : "test2", "password" : "123456" }

5. Drop a collection

    >  db.Account.drop()
    true
    > show tables
    mycol

6. View collection status

    > db.printCollectionStats()
    mycol
    {
    "ns" : "db1.mycol",
    "size" : 0,
    "count" : 0,
    "storageSize" : 4096,
    "capped" : true,
    "max" : 10000,
    "maxSize" : 6142976,
    "sleepCount" : 0,
    "sleepMS" : 0,
    "wiredTiger" : {
        "metadata" : {
            "formatVersion" : 1
        }
    ..............................
    ..............................
    ... output truncated

21.31 The PHP mongodb Extension

PHP officially provides two MongoDB extensions: mongodb.so and mongo.so. mongodb.so targets newer versions of PHP, while mongo.so targets older versions.
The official reference documentation for both extensions:

https://docs.mongodb.com/ecosystem/drivers/php/

The official descriptions, roughly translated:

mongodb.so:
The currently maintained driver is the mongodb extension available from PECL. It can be used on its own, although it is fairly bare-bones; you should consider pairing it with the free PHP library, which implements a more full-featured API on top of the low-level driver. More information about this architecture can be found in the PHP.net documentation.
mongo.so:
The mongo extension is the legacy driver for PHP 5.x. It is no longer maintained, and new projects are advised to use the mongodb extension plus the PHP library. The community-developed Mongo PHP Adapter project implements the old mongo extension's API on top of the new mongodb extension and PHP library, which may be useful for users migrating existing applications.

Since both old and new versions of PHP are still in use, we need to know how to install both extensions. First, mongodb.so, which can be installed in two ways. The first is via git:

[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# git clone https://github.com/mongodb/mongo-php-driver
[root@localhost /usr/local/src/mongo-php-driver]# git submodule update --init
[root@localhost /usr/local/src/mongo-php-driver]# /usr/local/php/bin/phpize
[root@localhost /usr/local/src/mongo-php-driver]# ./configure --with-php-config=/usr/local/php/bin/php-config
[root@localhost /usr/local/src/mongo-php-driver]# make && make install
[root@localhost /usr/local/src/mongo-php-driver]# vim /usr/local/php/etc/php.ini
extension = mongodb.so   // add this line
[root@localhost /usr/local/src/mongo-php-driver]# /usr/local/php/bin/php -m |grep mongodb
mongodb
[root@localhost /usr/local/src/mongo-php-driver]#

Because connections to GitHub can be slow from within China, this installation method may take a while.

The second way is to install from the source tarball:

[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# wget https://pecl.php.net/get/mongodb-1.3.0.tgz
[root@localhost /usr/local/src]# tar zxvf mongodb-1.3.0.tgz
[root@localhost /usr/local/src]# cd mongodb-1.3.0
[root@localhost /usr/local/src/mongodb-1.3.0]# /usr/local/php/bin/phpize
[root@localhost /usr/local/src/mongodb-1.3.0]# ./configure --with-php-config=/usr/local/php/bin/php-config
[root@localhost /usr/local/src/mongodb-1.3.0]# make && make install
[root@localhost /usr/local/src/mongodb-1.3.0]# vim /usr/local/php/etc/php.ini
extension = mongodb.so  // add this line
[root@localhost /usr/local/src/mongodb-1.3.0]# /usr/local/php/bin/php -m |grep mongodb
mongodb
[root@localhost /usr/local/src/mongodb-1.3.0]#

21.32 The PHP mongo Extension

The installation procedure is as follows:

[root@localhost ~]# cd /usr/local/src/
[root@localhost /usr/local/src]# wget https://pecl.php.net/get/mongo-1.6.16.tgz
[root@localhost /usr/local/src]# tar -zxvf mongo-1.6.16.tgz
[root@localhost /usr/local/src]# cd mongo-1.6.16/
[root@localhost /usr/local/src/mongo-1.6.16]# /usr/local/php/bin/phpize
[root@localhost /usr/local/src/mongo-1.6.16]# ./configure --with-php-config=/usr/local/php/bin/php-config
[root@localhost /usr/local/src/mongo-1.6.16]# make && make install
[root@localhost /usr/local/src/mongo-1.6.16]# vim /usr/local/php/etc/php.ini
extension = mongo.so  // add this line
[root@localhost /usr/local/src/mongo-1.6.16]# /usr/local/php/bin/php -m |grep mongo
mongo
mongodb

Test the mongo extension:

1. First disable MongoDB authentication, then create a test page:

[root@localhost ~]# vim /usr/lib/systemd/system/mongod.service  # remove --auth
[root@localhost ~]# systemctl daemon-reload
[root@localhost ~]# systemctl restart mongod.service
[root@localhost ~]# vim /data/wwwroot/abc.com/index.php  # edit the test page
<?php
$m = new MongoClient();          # connect to MongoDB via the mongo extension
$db = $m->test;                  # get the database named "test"
$collection = $db->createCollection("runoob");
echo "Collection created successfully";
?>

2. Access the test page:

[root@localhost ~]# curl localhost/index.php
Collection created successfully

3. Check in MongoDB that the collection exists:

[root@localhost ~]# mongo --host 192.168.66.130 --port 27017
> use test
switched to db test
> show tables
runoob  # the collection was created, so everything works
>

For more on connecting to MongoDB from PHP, see:

http://www.runoob.com/mongodb/mongodb-php.html
Further reading

MongoDB security settings
http://www.mongoing.com/archives/631

Running JS scripts in MongoDB
http://www.jianshu.com/p/6bd8934bd1ca

21.33 MongoDB Replica Set Introduction

A replica set is a cluster of MongoDB instances consisting of one primary server and several secondary servers. Through replication, data updates are pushed from the primary to the other instances, so that after a short delay every MongoDB instance holds the same copy of the data set. Maintaining redundant copies of the database makes off-site backups, read/write splitting and automatic failover possible.

In other words, if the primary server crashes, the replica set automatically promotes one of its members to be the new primary. With replication, if one server goes down the data can still be read from the other members of the replica set, and if the data on a server is damaged or unreachable, a fresh copy can be created from another member.

Early MongoDB versions used master-slave replication, one master and one slave, similar to MySQL, but the slave was read-only and could not be promoted automatically when the master went down. Master-slave has since been deprecated in favour of replica sets, which have one primary and multiple read-only secondaries. Members can be given weights (priorities), and when the primary fails the secondary with the highest weight is promoted. The architecture can also include an arbiter, which only votes in elections and stores no data. Reads and writes both go to the primary; to spread read load you must explicitly direct reads to other members.

In short, a MongoDB replica set is a primary/secondary cluster with automatic failover, made up of one Primary node and one or more Secondary nodes, similar in spirit to MySQL's MMM architecture. For more about replica sets, see the official documentation:
https://docs.mongodb.com/manual/replication/
Replica set architecture diagram (not reproduced here):
The principle is simple: there is one primary and at least one secondary (there may be several). Besides the secondaries you can add an Arbiter, a voting-only member. When the Primary goes down, the Arbiter can reliably confirm that it is down, whereas the Primary itself might believe it is still alive, which would lead to split-brain. The Arbiter role was added precisely to prevent split-brain, something a database absolutely must avoid: split-brain corrupts the data, and once the data is corrupted, recovery is very painful.


Failover diagram (not reproduced here)
Note: when the Primary goes down, one secondary becomes the new Primary while the other remains a secondary. With MySQL master-slave replication, even in a one-master/multi-slave setup, promoting a slave to master after a failure requires manual changes. In a MongoDB replica set the whole process is automatic: after the Primary fails, one secondary becomes the new Primary and the other secondaries automatically recognise it.
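Failover can also be triggered by hand for testing (a sketch; run it on the current primary, using standard mongo-shell helpers):

luo:PRIMARY> rs.stepDown(60)           // the primary steps down and will not stand for election again for 60 seconds
luo:SECONDARY> rs.isMaster().primary   // shows which member is the primary now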

21.34 Building a MongoDB Replica Set

We build the replica set with three machines:

192.168.66.130 (primary)
192.168.66.131 (secondary)
192.168.66.132 (secondary)

MongoDB is already installed on all three machines.
Start the setup:
1. Edit the configuration file on all three machines and change or add the following:

Primary machine:
[root@primary ~]# vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.66.130  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting..
#Note: for the replica set, bindIp must listen on both localhost and the machine's LAN IP
#replication:    //remove the # and add the following two lines
replication:
  oplogSizeMB: 20        # defines the oplog size; note the two leading spaces
  replSetName: luo      # the replica set name; again note the two leading spaces
#restart the mongod service
[root@primary ~]# systemctl restart mongod
[root@primary ~]# ps aux |grep mongod
mongod    13085  9.6  5.7 1100792 56956 ?       Sl   13:30   0:00 /usr/bin/mongod -f /etc/mongod.conf
root      13120  0.0  0.0 112720   972 pts/1    S+   13:30   0:00 grep --color=auto mongod

Secondary1 machine:
[root@secondary1 ~]# vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.66.131  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting..
#Note: for the replica set, bindIp must listen on both localhost and the machine's LAN IP
#replication:    //remove the # and add the following two lines
replication:
  oplogSizeMB: 20        # defines the oplog size; note the two leading spaces
  replSetName: luo      # the replica set name; again note the two leading spaces
#restart the mongod service
[root@secondary1 ~]# ps aux |grep mongod
mongod     4012  0.8  5.6 1104640 55916 ?       Sl   13:33   0:03 /usr/bin/mongod -f /etc/mongod.conf

Secondary2 machine:
[root@secondary2 ~]# vim /etc/mongod.conf
net:
  port: 27017
  bindIp: 127.0.0.1,192.168.66.132  # Enter 0.0.0.0,:: to bind to all IPv4 and IPv6 addresses or, alternatively, use the net.bindIpAll setting..
#Note: for the replica set, bindIp must listen on both localhost and the machine's LAN IP
#replication:    //remove the # and add the following two lines
replication:
  oplogSizeMB: 20        # defines the oplog size; note the two leading spaces
  replSetName: luo      # the replica set name; again note the two leading spaces
#restart the mongod service
[root@secondary2 ~]# systemctl restart mongod
[root@secondary2 ~]# ps aux |grep mongod
mongod     1407 17.0  5.3 1101172 53064 ?       Sl   13:41   0:01 /usr/bin/mongod -f /etc/mongod.conf
root       1442  0.0  0.0 112720   964 pts/0    R+   13:41   0:00 grep --color=auto mongod

2. Disable the firewall on all three machines, or clear the iptables rules.
3. Connect to MongoDB on the primary machine (run the mongo command there), then configure the replica set:

[root@primary ~]# mongo
# list the IPs of all three machines
> config={_id:"luo",members:[{_id:0,host:"192.168.66.130:27017"},{_id:1,host:"192.168.66.131:27017"},{_id:2,host:"192.168.66.132:27017"}]}
{
    "_id" : "luo",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.66.130:27017"
        },
        {
            "_id" : 1,
            "host" : "192.168.66.131:27017"
        },
        {
            "_id" : 2,
            "host" : "192.168.66.132:27017"
        }
    ]
}
> rs.initiate(config)    # initialize the replica set
{
    "ok" : 1,
    "operationTime" : Timestamp(1535266738, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535266738, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
luo:SECONDARY> rs.status()  # check the status
{
    "set" : "luo",
    "date" : ISODate("2018-08-26T06:59:18.812Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535266751, 5),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1535266751, 5),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535266751, 5),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535266751, 5),
            "t" : NumberLong(1)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535266751, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.66.130:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 168,
            "optime" : {
                "ts" : Timestamp(1535266751, 5),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T06:59:11Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535266749, 1),
            "electionDate" : ISODate("2018-08-26T06:59:09Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 1,
            "name" : "192.168.66.131:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : {
                "ts" : Timestamp(1535266751, 5),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535266751, 5),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T06:59:11Z"),
            "optimeDurableDate" : ISODate("2018-08-26T06:59:11Z"),
            "lastHeartbeat" : ISODate("2018-08-26T06:59:17.408Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T06:59:17.957Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.66.130:27017",
            "syncSourceHost" : "192.168.66.130:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.66.132:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : {
                "ts" : Timestamp(1535266751, 5),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535266751, 5),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T06:59:11Z"),
            "optimeDurableDate" : ISODate("2018-08-26T06:59:11Z"),
            "lastHeartbeat" : ISODate("2018-08-26T06:59:17.408Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T06:59:17.959Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.66.130:27017",
            "syncSourceHost" : "192.168.66.130:27017",
            "syncSourceId" : 0,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535266751, 5),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535266751, 5),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

In the output above, pay attention to the stateStr of the three machines: the primary's stateStr must be PRIMARY and the two secondaries' must be SECONDARY for everything to be normal.
If the two secondaries show "stateStr" : "STARTUP", run the following:

> config={_id:"luo",members:[{_id:0,host:"192.168.66.130:27017"},{_id:1,host:"192.168.66.131:27017"},{_id:2,host:"192.168.66.132:27017"}]}
> rs.reconfig(config)

Then check the status again with rs.status() and make sure the secondaries have moved to SECONDARY.

21.35 Testing the MongoDB Replica Set

1. On the primary, create a database and a collection:

luo:PRIMARY> use testdb
switched to db testdb
luo:PRIMARY> db.test.insert({AccountID:1,UserName:"luo",password:"123456"})
WriteResult({ "nInserted" : 1 })
luo:PRIMARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB
luo:PRIMARY> show tables
test

2. Then check on a secondary whether the data from the primary has been replicated:

luo:SECONDARY> show dbs
2018-08-26T15:04:48.981+0800 E QUERY    [js] Error: listDatabases failed:{
    "operationTime" : Timestamp(1535267081, 1),
    "ok" : 0,
    "errmsg" : "not master and slaveOk=false",
    "code" : 13435,
    "codeName" : "NotMasterNoSlaveOk",
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535267081, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
} :
_getErrorWithCode@src/mongo/shell/utils.js:25:13
Mongo.prototype.getDBs@src/mongo/shell/mongo.js:67:1
shellHelper.show@src/mongo/shell/utils.js:876:19
shellHelper@src/mongo/shell/utils.js:766:15
@(shellhelp2):1:1
luo:SECONDARY> rs.slaveOk()                          #run this command when you see the error above
luo:SECONDARY> show dbs
admin   0.000GB
config  0.000GB
local   0.000GB
testdb  0.000GB
luo:SECONDARY> use testdb
switched to db testdb
luo:SECONDARY> show tables
test

As shown above, the data has been successfully replicated to the secondary.
Changing replica set weights and simulating a primary failure

The rs.config() command shows each member's weight:

luo:PRIMARY> rs.config()
{
    "_id" : "luo",
    "version" : 1,
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.66.130:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.66.131:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.66.132:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5b824fb2deea499ddf787ebe")
    }
}

The priority value is the member's weight; the default is 1 for every member.

Add a firewall rule to block traffic and simulate the primary machine going down:

# note: run this on the primary machine
[root@primary mongo]# iptables -I INPUT -p tcp --dport 27017 -j DROP

Then check the status from a secondary:

luo:SECONDARY> rs.status()
{
    "set" : "luo",
    "date" : ISODate("2018-08-26T07:10:19.959Z"),
    "myState" : 1,
    "term" : NumberLong(2),
    "syncingTo" : "",
    "syncSourceHost" : "",
    "syncSourceId" : -1,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1535267410, 1),
            "t" : NumberLong(2)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1535267410, 1),
            "t" : NumberLong(2)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1535267410, 1),
            "t" : NumberLong(2)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1535267410, 1),
            "t" : NumberLong(2)
        }
    },
    "lastStableCheckpointTimestamp" : Timestamp(1535267410, 1),
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.66.130:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(0, 0),
                "t" : NumberLong(-1)
            },
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "optimeDurableDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2018-08-26T07:10:12.109Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T07:10:19.645Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "Couldn't get a connection within the time limit",
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "192.168.66.131:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 842,
            "optime" : {
                "ts" : Timestamp(1535267410, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-08-26T07:10:10Z"),
            "syncingTo" : "",
            "syncSourceHost" : "",
            "syncSourceId" : -1,
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1535267391, 1),
            "electionDate" : ISODate("2018-08-26T07:09:51Z"),
            "configVersion" : 1,
            "self" : true,
            "lastHeartbeatMessage" : ""
        },
        {
            "_id" : 2,
            "name" : "192.168.66.132:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 679,
            "optime" : {
                "ts" : Timestamp(1535267410, 1),
                "t" : NumberLong(2)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1535267410, 1),
                "t" : NumberLong(2)
            },
            "optimeDate" : ISODate("2018-08-26T07:10:10Z"),
            "optimeDurableDate" : ISODate("2018-08-26T07:10:10Z"),
            "lastHeartbeat" : ISODate("2018-08-26T07:10:19.555Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T07:10:19.302Z"),
            "pingMs" : NumberLong(0),
            "lastHeartbeatMessage" : "",
            "syncingTo" : "192.168.66.131:27017",
            "syncSourceHost" : "192.168.66.131:27017",
            "syncSourceId" : 1,
            "infoMessage" : "",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1535267410, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535267410, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

As shown above, the stateStr of 192.168.66.130 has become not reachable/healthy, while 192.168.66.131 has automatically been promoted to primary (its stateStr is now PRIMARY). Because the weights are all equal, which secondary takes over is somewhat random.
Next we assign each machine a weight so that the machine with the highest weight is the one promoted.
1. First delete the firewall rule on 192.168.66.130:

[root@primary mongo]# iptables -D INPUT -p tcp --dport 27017 -j DROP

2. Since 192.168.66.131 is currently the primary, set each member's weight on 131:

luo:PRIMARY> cfg = rs.conf()
luo:PRIMARY> cfg.members[0].priority = 3    #set the weight of the member with id 0 (the 130 machine) to 3
3
luo:PRIMARY> cfg.members[1].priority = 2
2
luo:PRIMARY> cfg.members[2].priority = 1
1
luo:PRIMARY> rs.reconfig(cfg)
{
    "ok" : 1,
    "operationTime" : Timestamp(1535267733, 1),
    "$clusterTime" : {
        "clusterTime" : Timestamp(1535267733, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}

3. At this point 192.168.66.130 should have become the primary again; run rs.config() on 192.168.66.130 to check:

luo:PRIMARY> rs.config()
{
    "_id" : "luo",
    "version" : 2,
    "protocolVersion" : NumberLong(1),
    "writeConcernMajorityJournalDefault" : true,
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.66.130:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 3,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "192.168.66.131:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 2,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "192.168.66.132:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : NumberLong(0),
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatIntervalMillis" : 2000,
        "heartbeatTimeoutSecs" : 10,
        "electionTimeoutMillis" : 10000,
        "catchUpTimeoutMillis" : -1,
        "catchUpTakeoverDelayMillis" : 30000,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        },
        "replicaSetId" : ObjectId("5b824fb2deea499ddf787ebe")
    }
}

The three machines now have weights 3, 2 and 1; if the primary goes down, the candidate primary is chosen according to these weights.

21.36 MongoDB Sharding Introduction

Sharding is the process of splitting a database and spreading it across different machines. By distributing the data you can store more data and handle more load without needing an extremely powerful server. The basic idea is to cut collections into small chunks and scatter the chunks over several shards so that each shard is responsible for only part of the data, while a balancer keeps the shards balanced (by migrating data). Operations go through a routing process called mongos, which knows (via the config servers) which data lives on which shard. Most use cases are about solving disk-space problems; writes may get somewhat slower, and queries should avoid crossing shards where possible. When to use sharding:

1. The machine's disk is running out of space: sharding solves the disk-space problem.
2. A single mongod can no longer handle the write load: sharding spreads the write pressure across the shards, using each shard server's own resources.
3. You want to keep a large amount of data in memory for performance: as above, sharding lets you use the shard servers' own resources.
In short, sharding splits the database and distributes large collections across different servers, and the unit that makes up a shard is a replica set. For example, 100 GB of data can be split into 10 parts stored on 10 servers, so each machine holds only 10 GB. Sharding is generally used only by large companies or ones with very large data sets.

MongoDB uses a mongos process (the router) to store and access data after sharding. mongos is the core of the sharding architecture and the single entry point; clients do not know whether sharding is in use, they simply send their reads and writes to mongos.

Although sharding spreads the data across many servers, every node still needs a standby role so that the data remains highly available.

When the system needs more space or resources, sharding lets us scale out on demand: simply add more machines running MongoDB to the sharded cluster.
MongoDB sharding architecture diagram (not reproduced here)

MongoDB sharding concepts

• mongos: the entry point for all requests to the cluster. Every request is coordinated through mongos, so the application does not need its own routing layer; mongos is itself a request dispatcher that forwards each data request to the appropriate shard server. In production there are usually several mongos instances, so that losing one does not make the whole cluster unreachable.
• config server: stores all of the cluster's metadata (routing and sharding configuration). mongos itself does not persist the shard and routing information, it only caches it in memory; the config servers store it. When mongos starts (or restarts) it loads the configuration from the config servers, and when the configuration changes all mongos instances are notified so they keep routing correctly. In production there are multiple config servers, because they hold the sharding metadata and losing it would mean data loss.
• shard: a MongoDB instance that stores part of a collection's data. Each shard is a standalone mongod or a replica set; in production every shard should be a replica set.
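From a client's point of view only mongos matters. A quick sketch of what that looks like once the cluster below is running (these are standard mongo-shell helpers; testdb.table1 is the collection sharded later in these notes):

mongos> sh.status()                              // overall cluster, shard and balancer status
mongos> use testdb
mongos> db.table1.getShardDistribution()         // after the collection is sharded, shows how its data is spread across the shards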

21.37/21.38/21.39 Building a MongoDB Sharded Cluster

Sharding setup - server plan:

Resources are limited, so three machines A, B and C are used for this demonstration:

A runs: mongos, config server, replica set 1 primary, replica set 2 arbiter, replica set 3 secondary
B runs: mongos, config server, replica set 1 secondary, replica set 2 primary, replica set 3 arbiter
C runs: mongos, config server, replica set 1 arbiter, replica set 2 secondary, replica set 3 primary
Port allocation: mongos 20000, config server 21000, replica set 1 27001, replica set 2 27002, replica set 3 27003
Disable firewalld and SELinux on all three machines, or add rules for the corresponding ports.
The IPs of the three machines are:
Machine A: 192.168.77.128
Machine B: 192.168.77.130
Machine C: 192.168.77.134

Sharding setup - create directories:

On each of the three machines, create the directories needed by each role:

mkdir -p /data/mongodb/mongos/log
mkdir -p /data/mongodb/config/{data,log}
mkdir -p /data/mongodb/shard1/{data,log}
mkdir -p /data/mongodb/shard2/{data,log}
mkdir -p /data/mongodb/shard3/{data,log}

Sharding setup - config server configuration:

Since MongoDB 3.4 the config servers must themselves form a replica set.
Add the configuration file (on all three machines):

[root@localhost ~]# mkdir /etc/mongod/
[root@localhost ~]# vim /etc/mongod/config.conf  # add the following content
pidfilepath = /var/run/mongodb/configsrv.pid
dbpath = /data/mongodb/config/data
logpath = /data/mongodb/config/log/congigsrv.log
logappend = true
bind_ip = 0.0.0.0  # bind your listening IP
port = 21000
fork = true
configsvr = true #declare this is a config db of a cluster;
replSet=configs #replica set name
maxConns=20000 #maximum number of connections

Start the config server on all three machines:

[root@localhost ~]# mongod -f /etc/mongod/config.conf  # run on all three machines
about to fork child process, waiting until server is ready for connections.
forked process: 4183
child process started successfully, parent exiting
[root@localhost ~]# ps aux |grep mongo
mongod     2518  1.1  2.3 1544488 89064 ?       Sl   09:57   0:42 /usr/bin/mongod -f /etc/mongod.conf
root       4183  1.1  1.3 1072404 50992 ?       Sl   10:56   0:00 mongod -f /etc/mongod/config.conf
root       4240  0.0  0.0 112660   964 pts/0    S+   10:57   0:00 grep --color=auto mongo
[root@localhost ~]# netstat -lntp |grep mongod
tcp        0      0 192.168.77.128:21000    0.0.0.0:*               LISTEN      4183/mongod         
tcp        0      0 192.168.77.128:27017    0.0.0.0:*               LISTEN      2518/mongod         
tcp        0      0 127.0.0.1:27017         0.0.0.0:*               LISTEN      2518/mongod         
[root@localhost ~]#

Log in to port 21000 on any of the machines and initialize the replica set:

[root@localhost ~]# mongo --host 192.168.77.128 --port 21000
> config = { _id: "configs", members: [ {_id : 0, host : "192.168.77.128:21000"},{_id : 1, host : "192.168.77.130:21000"},{_id : 2, host : "192.168.77.134:21000"}] }
{
    "_id" : "configs",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:21000"
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:21000"
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:21000"
        }
    ]
}
> rs.initiate(config)  # initialize the replica set
{
    "ok" : 1,
    "operationTime" : Timestamp(1515553318, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(1515553318, 1),
        "electionId" : ObjectId("000000000000000000000000")
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515553318, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
configs:SECONDARY> rs.status()  # make sure every machine is healthy
{
    "set" : "configs",
    "date" : ISODate("2018-08-25T03:03:40.244Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "configsvr" : true,
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1515553411, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1515553411, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1515553411, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1515553411, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.77.128:21000",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 415,
            "optime" : {
                "ts" : Timestamp(1515553411, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T03:03:31Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1515553329, 1),
            "electionDate" : ISODate("2018-08-25T03:02:09Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.77.130:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 101,
            "optime" : {
                "ts" : Timestamp(1515553411, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515553411, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T03:03:31Z"),
            "optimeDurableDate" : ISODate("2018-08-25T03:03:31Z"),
            "lastHeartbeat" : ISODate("2018-08-25T03:03:39.973Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-25T03:03:38.804Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.134:21000",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.77.134:21000",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 101,
            "optime" : {
                "ts" : Timestamp(1515553411, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515553411, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T03:03:31Z"),
            "optimeDurableDate" : ISODate("2018-08-25T03:03:31Z"),
            "lastHeartbeat" : ISODate("2018-08-25T03:03:39.945Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-25T03:03:38.726Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.128:21000",
            "configVersion" : 1
        }
    ],
    "ok" : 1,
    "operationTime" : Timestamp(1515553411, 1),
    "$gleStats" : {
        "lastOpTime" : Timestamp(1515553318, 1),
        "electionId" : ObjectId("7fffffff0000000000000001")
    },
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515553411, 1),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    }
}
configs:PRIMARY>

Sharding setup - shard configuration:

Add the configuration files (on all three machines):

[root@localhost ~]# vim /etc/mongod/shard1.conf
pidfilepath = /var/run/mongodb/shard1.pid
dbpath = /data/mongodb/shard1/data
logpath = /data/mongodb/shard1/log/shard1.log
logappend = true
logRotate=rename
bind_ip = 0.0.0.0  # bind your listening IP
port = 27001
fork = true
replSet=shard1 #replica set name
shardsvr = true #declare this is a shard db of a cluster;
maxConns=20000 #maximum number of connections

[root@localhost ~]# vim /etc/mongod/shard2.conf //add the following content
pidfilepath = /var/run/mongodb/shard2.pid
dbpath = /data/mongodb/shard2/data
logpath = /data/mongodb/shard2/log/shard2.log
logappend = true
logRotate=rename
bind_ip = 0.0.0.0  # bind your listening IP
port = 27002
fork = true
replSet=shard2 #replica set name
shardsvr = true #declare this is a shard db of a cluster;
maxConns=20000 #maximum number of connections

[root@localhost ~]# vim /etc/mongod/shard3.conf //add the following content
pidfilepath = /var/run/mongodb/shard3.pid
dbpath = /data/mongodb/shard3/data
logpath = /data/mongodb/shard3/log/shard3.log
logappend = true
logRotate=rename
bind_ip = 0.0.0.0  # bind your listening IP
port = 27003
fork = true
replSet=shard3 #replica set name
shardsvr = true #declare this is a shard db of a cluster;
maxConns=20000 #maximum number of connections

Once everything is configured, start the shards one by one; they must be started on all three machines:

1. Start shard1 first:

[root@localhost ~]# mongod -f /etc/mongod/shard1.conf  # run on all three machines
about to fork child process, waiting until server is ready for connections.
forked process: 13615
child process started successfully, parent exiting
[root@localhost ~]# ps aux |grep shard1
root      13615  0.7  1.3 1023224 52660 ?       Sl   17:16   0:00 mongod -f /etc/mongod/shard1.conf
root      13670  0.0  0.0 112660   964 pts/0    R+   17:17   0:00 grep --color=auto shard1
[root@localhost ~]#

Then log in to port 27001 on machine 128 or 130 to initialize the replica set. Machine 134 cannot be used here, because for shard1 we made port 27001 on 134 the arbiter node:

[root@localhost ~]# mongo --host 192.168.77.128 --port 27001
> use admin
switched to db admin
> config = { _id: "shard1", members: [ {_id : 0, host : "192.168.77.128:27001"}, {_id: 1,host : "192.168.77.130:27001"},{_id : 2, host : "192.168.77.134:27001",arbiterOnly:true}] }
{
    "_id" : "shard1",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:27001"
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:27001"
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:27001",
            "arbiterOnly" : true
        }
    ]
}
> rs.initiate(config)  # initialize the replica set
{ "ok" : 1 }
shard1:SECONDARY> rs.status()  # check the status
{
    "set" : "shard1",
    "date" : ISODate("2018-08-25T09:21:37.682Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1515576097, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1515576097, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1515576097, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1515576097, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.77.128:27001",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 317,
            "optime" : {
                "ts" : Timestamp(1515576097, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T09:21:37Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1515576075, 1),
            "electionDate" : ISODate("2018-08-25T09:21:15Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.77.130:27001",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 33,
            "optime" : {
                "ts" : Timestamp(1515576097, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515576097, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T09:21:37Z"),
            "optimeDurableDate" : ISODate("2018-08-25T09:21:37Z"),
            "lastHeartbeat" : ISODate("2018-08-25T09:21:37.262Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-25T09:21:36.213Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.128:27001",
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.77.134:27001",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",  # 可以看到134是仲裁节点
            "uptime" : 33,
            "lastHeartbeat" : ISODate("2018-08-25T09:21:37.256Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-25T09:21:36.024Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
shard1:PRIMARY>

2. After shard1 is configured, start shard2:

[root@localhost ~]# mongod -f /etc/mongod/shard2.conf   # start on all three machines
about to fork child process, waiting until server is ready for connections.
forked process: 13910
child process started successfully, parent exiting
[root@localhost ~]# ps aux |grep shard2
root      13910  1.9  1.2 1023224 50096 ?       Sl   17:25   0:00 mongod -f /etc/mongod/shard2.conf
root      13943  0.0  0.0 112660   964 pts/0    S+   17:25   0:00 grep --color=auto shard2
[root@localhost ~]#

Log in to port 27002 on machine 130 or 134 to initialize the replica set. Machine 128 cannot be used here, because for shard2 we made port 27002 on 128 the arbiter node:

[root@localhost ~]# mongo --host 192.168.77.130 --port 27002
> use admin
switched to db admin
> config = { _id: "shard2", members: [ {_id : 0, host : "192.168.77.128:27002" ,arbiterOnly:true},{_id : 1, host : "192.168.77.130:27002"},{_id : 2, host : "192.168.77.134:27002"}] }
{
    "_id" : "shard2",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:27002",
            "arbiterOnly" : true
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:27002"
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:27002"
        }
    ]
}
> rs.initiate(config)
{ "ok" : 1 }
shard2:SECONDARY> rs.status()
{
    "set" : "shard2",
    "date" : ISODate("22018-08-25T17:26:12.250Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1515605171, 1),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1515605171, 1),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1515605171, 1),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1515605171, 1),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.77.128:27002",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",  # 仲裁节点
            "uptime" : 42,
            "lastHeartbeat" : ISODate("22018-08-25T17:26:10.792Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-25T17:26:11.607Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        },
        {
            "_id" : 1,
            "name" : "192.168.77.130:27002",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",  # 主节点
            "uptime" : 546,
            "optime" : {
                "ts" : Timestamp(1515605171, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T17:26:11Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1515605140, 1),
            "electionDate" : ISODate("2018-08-25T17:25:40Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "192.168.77.134:27002",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",  # 从节点
            "uptime" : 42,
            "optime" : {
                "ts" : Timestamp(1515605161, 1),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515605161, 1),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-25T17:26:01Z"),
            "optimeDurableDate" : ISODate("2018-08-25T17:26:01Z"),
            "lastHeartbeat" : ISODate("2018-08-25T17:26:10.776Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-25T17:26:10.823Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.130:27002",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
shard2:PRIMARY>

3. Next, start shard3:

[root@localhost ~]# mongod -f /etc/mongod/shard3.conf   # run on all three machines
about to fork child process, waiting until server is ready for connections.
forked process: 14204
child process started successfully, parent exiting
[root@localhost ~]# ps aux |grep shard3
root      14204  2.2  1.2 1023228 50096 ?       Sl   17:36   0:00 mongod -f /etc/mongod/shard3.conf
root      14237  0.0  0.0 112660   960 pts/0    S+   17:36   0:00 grep --color=auto shard3
[root@localhost ~]#

Then log in to port 27003 on machine 128 or 134 to initialize the replica set. Machine 130 cannot be used here, because for shard3 we made port 27003 on 130 the arbiter node:

[root@localhost ~]# mongo --host 192.168.77.128 --port 27003
> use admin
switched to db admin
> config = { _id: "shard3", members: [ {_id : 0, host : "192.168.77.128:27003"},  {_id : 1, host : "192.168.77.130:27003", arbiterOnly:true}, {_id : 2, host : "192.168.77.134:27003"}] }
{
    "_id" : "shard3",
    "members" : [
        {
            "_id" : 0,
            "host" : "192.168.77.128:27003"
        },
        {
            "_id" : 1,
            "host" : "192.168.77.130:27003",
            "arbiterOnly" : true
        },
        {
            "_id" : 2,
            "host" : "192.168.77.134:27003"
        }
    ]
}
> rs.initiate(config)
{ "ok" : 1 }
shard3:SECONDARY> rs.status()
{
    "set" : "shard3",
    "date" : ISODate("2018-08-26T09:39:47.530Z"),
    "myState" : 1,
    "term" : NumberLong(1),
    "heartbeatIntervalMillis" : NumberLong(2000),
    "optimes" : {
        "lastCommittedOpTime" : {
            "ts" : Timestamp(1515577180, 2),
            "t" : NumberLong(1)
        },
        "readConcernMajorityOpTime" : {
            "ts" : Timestamp(1515577180, 2),
            "t" : NumberLong(1)
        },
        "appliedOpTime" : {
            "ts" : Timestamp(1515577180, 2),
            "t" : NumberLong(1)
        },
        "durableOpTime" : {
            "ts" : Timestamp(1515577180, 2),
            "t" : NumberLong(1)
        }
    },
    "members" : [
        {
            "_id" : 0,
            "name" : "192.168.77.128:27003",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",  # 主节点
            "uptime" : 221,
            "optime" : {
                "ts" : Timestamp(1515577180, 2),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T09:39:40Z"),
            "infoMessage" : "could not find member to sync from",
            "electionTime" : Timestamp(1515577179, 1),
            "electionDate" : ISODate("2018-08-26T09:39:39Z"),
            "configVersion" : 1,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "192.168.77.130:27003",
            "health" : 1,
            "state" : 7,
            "stateStr" : "ARBITER",  # 仲裁节点
            "uptime" : 18,
            "lastHeartbeat" : ISODate("2018-08-26T09:39:47.477Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T09:39:45.715Z"),
            "pingMs" : NumberLong(0),
            "configVersion" : 1
        },
        {
            "_id" : 2,
            "name" : "192.168.77.134:27003",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",  # 从节点
            "uptime" : 18,
            "optime" : {
                "ts" : Timestamp(1515577180, 2),
                "t" : NumberLong(1)
            },
            "optimeDurable" : {
                "ts" : Timestamp(1515577180, 2),
                "t" : NumberLong(1)
            },
            "optimeDate" : ISODate("2018-08-26T09:39:40Z"),
            "optimeDurableDate" : ISODate("2018-08-26T09:39:40Z"),
            "lastHeartbeat" : ISODate("2018-08-26T09:39:47.477Z"),
            "lastHeartbeatRecv" : ISODate("2018-08-26T09:39:45.779Z"),
            "pingMs" : NumberLong(0),
            "syncingTo" : "192.168.77.128:27003",
            "configVersion" : 1
        }
    ],
    "ok" : 1
}
shard3:PRIMARY>

Sharding setup - configure the routing server (mongos)

mongos is configured last because it needs to know which machines are the config servers and which machines make up the shard replica sets.

1. Add the configuration file (on all three machines):

[root@localhost ~]# vim /etc/mongod/mongos.conf  # add the following content
pidfilepath = /var/run/mongodb/mongos.pid
logpath = /data/mongodb/mongos/log/mongos.log
logappend = true
bind_ip = 0.0.0.0  # bind your listening IP
port = 20000
fork = true
#config servers to connect to; there can only be 1 or 3 of them, and configs is the config servers' replica set name (do not leave spaces after the commas)
configdb = configs/192.168.77.128:21000,192.168.77.130:21000,192.168.77.134:21000
maxConns=20000 #maximum number of connections

2. Then start the mongos service on all three machines. Note the command: earlier we used mongod, here it is mongos. (The getaddrinfo warnings in the output below come from stray spaces left after the commas in the configdb line; with the spaces removed they do not appear.)

[root@localhost ~]# mongos -f /etc/mongod/mongos.conf   # run on all three machines
2018-08-26T18:26:02.566+0800 I NETWORK  [main] getaddrinfo(" 192.168.77.130") failed: Name or service not known
2018-08-26T18:26:22.583+0800 I NETWORK  [main] getaddrinfo(" 192.168.77.134") failed: Name or service not known
about to fork child process, waiting until server is ready for connections.
forked process: 15552
child process started successfully, parent exiting
[root@localhost ~]# ps aux |grep mongos  # check on all three machines that the process has started
root      15552  0.2  0.3 279940 15380 ?        Sl   18:26   0:00 mongos -f /etc/mongod/mongos.conf
root      15597  0.0  0.0 112660   964 pts/0    S+   18:27   0:00 grep --color=auto mongos
[root@localhost ~]# netstat -lntp |grep mongos  # check on all three machines that the port is listening
tcp        0      0 0.0.0.0:20000           0.0.0.0:*               LISTEN      15552/mongos        
[root@localhost ~]#

Sharding setup - enable sharding

1. Log in to port 20000 on any machine and register all the shards with the router:

[root@localhost ~]# mongo --host 192.168.77.128 --port 20000
# add shard1
mongos> sh.addShard("shard1/192.168.77.128:27001,192.168.77.130:27001,192.168.77.134:27001")
{
    "shardAdded" : "shard1",  # 这里得对应的是shard1才行
    "ok" : 1,  # 注意,这里得是1才是成功
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515580345, 6),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515580345, 6)
}

# add shard2 to the cluster
mongos> sh.addShard("shard2/192.168.77.128:27002,192.168.77.130:27002,192.168.77.134:27002")
{
    "shardAdded" : "shard2",   # 这里得对应的是shard2才行
    "ok" : 1,   # 注意,这里得是1才是成功
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515608789, 6),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515608789, 6)
}

# add shard3 to the cluster
mongos> sh.addShard("shard3/192.168.77.128:27003,192.168.77.130:27003,192.168.77.134:27003")
{
    "shardAdded" : "shard3",  # 这里得对应的是shard3才行
    "ok" : 1,   # 注意,这里得是1才是成功
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515608789, 14),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515608789, 14)
}
mongos>

Check the sharding status with the sh.status() command and confirm that everything looks normal:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5a55823348aee75ba3928fea")
  }
  shards:  # on success this lists each shard and its status; the state value should be 1
        {  "_id" : "shard1",  "host" : "shard1/192.168.77.128:27001,192.168.77.130:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.77.130:27002,192.168.77.134:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.77.128:27003,192.168.77.134:27003",  "state" : 1 }
  active mongoses:
        "3.6.1" : 1
  autosplit:
        Currently enabled: yes  # should be yes on success
  balancer:
        Currently enabled:  yes  # should be yes on success
        Currently running:  no   # no when no databases or collections have been created yet; otherwise it should be yes
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 

mongos>
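
Besides sh.status(), the registered shards can also be listed with the listShards admin command from the same mongos session. A minimal check (db.adminCommand() runs against the admin database regardless of the current database; the result should list shard1, shard2 and shard3 with "ok" : 1):

mongos> db.adminCommand({ listShards: 1 })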

21.40 mongodb sharding test

1. Log in to port 20000 on any of the machines:

[root@localhost ~]# mongo --host 192.168.77.128 --port 20000

2. Switch to the admin database and run either of the following commands to enable sharding on the target database:

db.runCommand({ enablesharding : "testdb"})
sh.enableSharding("testdb")

Example:

mongos> use admin
switched to db admin
mongos> sh.enableSharding("testdb")
{
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515609562, 6),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515609562, 6)
}
mongos>

3. Run either of the following commands to specify the collection to be sharded and its shard key:

db.runCommand( { shardcollection : "testdb.table1",key : {id: 1} } )
sh.shardCollection("testdb.table1",{"id":1} )

Example:

mongos> sh.shardCollection("testdb.table1",{"id":1} )
{
    "collectionsharded" : "testdb.table1",
    "collectionUUID" : UUID("f98762a6-8b2b-4ae5-9142-3d8acc589255"),
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515609671, 12),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515609671, 12)
}
mongos>

4. Switch to the newly created testdb database and insert some test data:

mongos> use testdb
switched to db testdb
mongos> for (var i = 1; i <= 10000; i++) db.table1.save({id:i,"test1":"testval1"})
WriteResult({ "nInserted" : 1 })
mongos>
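
To see how these documents are laid out across the shards, the getShardDistribution() shell helper can be run on the collection from the same testdb session; it prints, per shard, the data size and document count (output omitted here since it varies):

mongos> db.table1.getShardDistribution()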

5. Then create a few more databases and collections:

mongos> sh.enableSharding("db1")
mongos> sh.shardCollection("db1.table1",{"id":1} )
mongos> sh.enableSharding("db2")
mongos> sh.shardCollection("db2.table1",{"id":1} )
mongos> sh.enableSharding("db3")
mongos> sh.shardCollection("db3.table1",{"id":1} )

6. Check the status:

mongos> sh.status()
--- Sharding Status --- 
  sharding version: {
    "_id" : 1,
    "minCompatibleVersion" : 5,
    "currentVersion" : 6,
    "clusterId" : ObjectId("5a55823348aee75ba3928fea")
  }
  shards:
        {  "_id" : "shard1",  "host" : "shard1/192.168.77.128:27001,192.168.77.130:27001",  "state" : 1 }
        {  "_id" : "shard2",  "host" : "shard2/192.168.77.130:27002,192.168.77.134:27002",  "state" : 1 }
        {  "_id" : "shard3",  "host" : "shard3/192.168.77.128:27003,192.168.77.134:27003",  "state" : 1 }
  active mongoses:
        "3.6.1" : 1
  autosplit:
        Currently enabled: yes
  balancer:
        Currently enabled:  yes
        Currently running:  no
        Failed balancer rounds in last 5 attempts:  0
        Migration Results for the last 24 hours: 
                No recent migrations
  databases:
        {  "_id" : "config",  "primary" : "config",  "partitioned" : true }
                config.system.sessions
                        shard key: { "_id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1  
                        { "_id" : { "$minKey" : 1 } } -->> { "_id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "db1",  "primary" : "shard3",  "partitioned" : true }
                db1.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard3  1  # db1 is stored on shard3
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0) 
        {  "_id" : "db2",  "primary" : "shard1",  "partitioned" : true }
                db2.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard1  1  # db2 is stored on shard1
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard1 Timestamp(1, 0) 
        {  "_id" : "db3",  "primary" : "shard3",  "partitioned" : true }
                db3.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard3  1  # db3 is stored on shard3
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard3 Timestamp(1, 0) 
        {  "_id" : "testdb",  "primary" : "shard2",  "partitioned" : true }
                testdb.table1
                        shard key: { "id" : 1 }
                        unique: false
                        balancing: true
                        chunks:
                                shard2  1  # testdb is stored on shard2
                        { "id" : { "$minKey" : 1 } } -->> { "id" : { "$maxKey" : 1 } } on : shard2 Timestamp(1, 0) 

mongos> 

As shown above, the newly created databases have been placed on different shards, which confirms that the sharded cluster is working.

The following command can be used to check the status of a specific collection:

db.<collection_name>.stats()
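
For example, for the sharded collection created earlier (run via mongos; for a sharded collection the output should include "sharded" : true along with per-shard statistics):

mongos> use testdb
switched to db testdb
mongos> db.table1.stats()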

21.41 mongodb backup and restore

1. First, an example of backing up a specific database:

[root@localhost ~]# mkdir /tmp/mongobak  # first create a directory to hold the backup files
[root@localhost ~]# mongodump --host 192.168.77.128 --port 20000  -d testdb -o /tmp/mongobak
2018-08-26T20:47:51.893+0800    writing testdb.table1 to 
2018-08-26T20:47:51.968+0800    done dumping testdb.table1 (10000 documents)
[root@localhost ~]# ls /tmp/mongobak/  # a directory is created after a successful backup
testdb
[root@localhost ~]# ls /tmp/mongobak/testdb/  # it contains the corresponding data files
table1.bson  table1.metadata.json
[root@localhost /tmp/mongobak/testdb]# du -sh *  # the data itself lives in the .bson file
528K    table1.bson
4.0K    table1.metadata.json
[root@localhost /tmp/mongobak/testdb]#

In the mongodump command, -d specifies the database to back up and -o specifies the output path.
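
If dump size is a concern, mongodump can also compress its output with the --gzip option (available since MongoDB 3.2, so it should work here); a minimal sketch writing to a hypothetical /tmp/mongobak_gz directory:

[root@localhost ~]# mongodump --host 192.168.77.128 --port 20000 -d testdb --gzip -o /tmp/mongobak_gz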

2. An example of backing up all databases:

[root@localhost ~]# mongodump --host 192.168.77.128 --port 20000 -o /tmp/mongobak
2018-08-26T20:52:28.231+0800    writing admin.system.version to 
2018-08-26T20:52:28.233+0800    done dumping admin.system.version (1 document)
2018-08-26T20:52:28.233+0800    writing testdb.table1 to 
2018-08-26T20:52:28.234+0800    writing config.locks to 
2018-08-26T20:52:28.234+0800    writing config.changelog to 
2018-08-26T20:52:28.234+0800    writing config.lockpings to 
2018-08-26T20:52:28.235+0800    done dumping config.locks (15 documents)
2018-08-26T20:52:28.236+0800    writing config.chunks to 
2018-08-26T20:52:28.236+0800    done dumping config.lockpings (10 documents)
2018-08-26T20:52:28.236+0800    writing config.collections to 
2018-08-26T20:52:28.236+0800    done dumping config.changelog (13 documents)
2018-08-26T20:52:28.236+0800    writing config.databases to 
2018-08-26T20:52:28.237+0800    done dumping config.collections (5 documents)
2018-08-26T20:52:28.237+0800    writing config.shards to 
2018-08-26T20:52:28.237+0800    done dumping config.chunks (5 documents)
2018-08-26T20:52:28.237+0800    writing config.version to 
2018-08-26T20:52:28.238+0800    done dumping config.databases (4 documents)
2018-08-26T20:52:28.238+0800    writing config.mongos to 
2018-08-26T20:52:28.238+0800    done dumping config.version (1 document)
2018-08-26T20:52:28.238+0800    writing config.migrations to 
2018-08-26T20:52:28.239+0800    done dumping config.mongos (1 document)
2018-08-26T20:52:28.239+0800    writing db1.table1 to 
2018-08-26T20:52:28.239+0800    done dumping config.shards (3 documents)
2018-08-26T20:52:28.239+0800    writing db2.table1 to 
2018-08-26T20:52:28.239+0800    done dumping config.migrations (0 documents)
2018-08-26T20:52:28.239+0800    writing db3.table1 to 
2018-08-26T20:52:28.241+0800    done dumping db2.table1 (0 documents)
2018-08-26T20:52:28.241+0800    writing config.tags to 
2018-08-26T20:52:28.241+0800    done dumping db1.table1 (0 documents)
2018-08-26T20:52:28.242+0800    done dumping db3.table1 (0 documents)
2018-08-26T20:52:28.243+0800    done dumping config.tags (0 documents)
2018-08-26T20:52:28.272+0800    done dumping testdb.table1 (10000 documents)
[root@localhost ~]# ls /tmp/mongobak/
admin  config  db1  db2  db3  testdb
[root@localhost ~]#

If the -d option is omitted, all databases are backed up.

3. Besides backing up databases, a specific collection can also be backed up:

[root@localhost ~]# mongodump --host 192.168.77.128 --port 20000 -d testdb -c table1 -o /tmp/collectionbak
2018-08-26T20:56:55.300+0800    writing testdb.table1 to 
2018-08-26T20:56:55.335+0800    done dumping testdb.table1 (10000 documents)
[root@localhost ~]# ls !$
ls /tmp/collectionbak
testdb
[root@localhost ~]# ls /tmp/collectionbak/testdb/
table1.bson  table1.metadata.json
[root@localhost ~]#

The -c option specifies the collection to back up; if it is omitted, all collections in the database are backed up.

4. The mongoexport command can export a collection to a JSON file:

[root@localhost ~]# mongoexport --host 192.168.77.128 --port 20000 -d testdb -c table1 -o /tmp/table1.json  # the output is a json file
2018-08-26T21:00:48.098+0800    connected to: 192.168.77.128:20000
2018-08-26T21:00:48.236+0800    exported 10000 records
[root@localhost ~]# ls !$
ls /tmp/table1.json
/tmp/table1.json
[root@localhost ~]# tail -n5 !$  # the file contains JSON-formatted records
tail -n5 /tmp/table1.json
{"_id":{"$oid":"5a55f036f6179723bfb03611"},"id":9996.0,"test1":"testval1"}
{"_id":{"$oid":"5a55f036f6179723bfb03612"},"id":9997.0,"test1":"testval1"}
{"_id":{"$oid":"5a55f036f6179723bfb03613"},"id":9998.0,"test1":"testval1"}
{"_id":{"$oid":"5a55f036f6179723bfb03614"},"id":9999.0,"test1":"testval1"}
{"_id":{"$oid":"5a55f036f6179723bfb03615"},"id":10000.0,"test1":"testval1"}
[root@localhost ~]#
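
mongoexport can also write CSV instead of JSON by passing --type=csv together with an explicit field list (required for CSV output). A minimal sketch, assuming the id and test1 fields used above and a hypothetical /tmp/table1.csv path:

[root@localhost ~]# mongoexport --host 192.168.77.128 --port 20000 -d testdb -c table1 --type=csv --fields id,test1 -o /tmp/table1.csv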

Restoring mongodb data

1. With the data backed up above, first delete all of the data from MongoDB:

[root@localhost ~]# mongo --host 192.168.77.128 --port 20000
mongos> use testdb
switched to db testdb
mongos> db.dropDatabase()
{
    "dropped" : "testdb",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515617938, 13),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515617938, 13)
}
mongos> use db1
switched to db db1
mongos> db.dropDatabase()
{
    "dropped" : "db1",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515617993, 19),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515617993, 19)
}
mongos> use db2
switched to db db2
mongos> db.dropDatabase()
{
    "dropped" : "db2",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515618003, 13),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515618003, 13)
}
mongos> use db3
switched to db db3
mongos> db.dropDatabase()
{
    "dropped" : "db3",
    "ok" : 1,
    "$clusterTime" : {
        "clusterTime" : Timestamp(1515618003, 34),
        "signature" : {
            "hash" : BinData(0,"AAAAAAAAAAAAAAAAAAAAAAAAAAA="),
            "keyId" : NumberLong(0)
        }
    },
    "operationTime" : Timestamp(1515618003, 34)
}
mongos> show databases
admin   0.000GB
config  0.001GB
mongos>

2. Restore all of the databases:

[root@localhost ~]# rm -rf /tmp/mongobak/config/  # the config and admin databases do not need to be restored, so delete their backup files first
[root@localhost ~]# rm -rf /tmp/mongobak/admin/
[root@localhost ~]# mongorestore --host 192.168.77.128 --port 20000 --drop /tmp/mongobak/
2018-08-26T21:11:40.031+0800    preparing collections to restore from
2018-08-26T21:11:40.033+0800    reading metadata for testdb.table1 from /tmp/mongobak/testdb/table1.metadata.json
2018-08-26T21:11:40.035+0800    reading metadata for db2.table1 from /tmp/mongobak/db2/table1.metadata.json
2018-08-26T21:11:40.040+0800    reading metadata for db3.table1 from /tmp/mongobak/db3/table1.metadata.json
2018-08-26T21:11:40.050+0800    reading metadata for db1.table1 from /tmp/mongobak/db1/table1.metadata.json
2018-08-26T21:11:40.086+0800    restoring testdb.table1 from /tmp/mongobak/testdb/table1.bson
2018-08-26T21:11:40.100+0800    restoring db2.table1 from /tmp/mongobak/db2/table1.bson
2018-08-26T21:11:40.102+0800    restoring indexes for collection db2.table1 from metadata
2018-08-26T21:11:40.118+0800    finished restoring db2.table1 (0 documents)
2018-08-26T21:11:40.123+0800    restoring db3.table1 from /tmp/mongobak/db3/table1.bson
2018-08-26T21:11:40.124+0800    restoring indexes for collection db3.table1 from metadata
2018-08-26T21:11:40.126+0800    restoring db1.table1 from /tmp/mongobak/db1/table1.bson
2018-08-26T21:11:40.172+0800    finished restoring db3.table1 (0 documents)
2018-08-26T21:11:40.173+0800    restoring indexes for collection db1.table1 from metadata
2018-08-26T21:11:40.185+0800    finished restoring db1.table1 (0 documents)
2018-08-26T21:11:40.417+0800    restoring indexes for collection testdb.table1 from metadata
2018-08-26T21:11:40.437+0800    finished restoring testdb.table1 (10000 documents)
2018-08-26T21:11:40.437+0800    done
[root@localhost ~]# mongo --host 192.168.77.128 --port 20000
mongos> show databases;  # all of the databases have been restored
admin   0.000GB
config  0.001GB
db1     0.000GB
db2     0.000GB
db3     0.000GB
testdb  0.000GB
mongos>

The --drop option of mongorestore is optional; it drops the existing data before restoring. It is not recommended in production environments.

3. Restore a specific database:

[root@localhost ~]# mongorestore --host 192.168.77.128 --port 20000 -d testdb --drop /tmp/mongobak/testdb/
2018-08-26T21:15:40.185+0800    the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead
2018-08-26T21:15:40.185+0800    building a list of collections to restore from /tmp/mongobak/testdb dir
2018-08-26T21:15:40.232+0800    reading metadata for testdb.table1 from /tmp/mongobak/testdb/table1.metadata.json
2018-08-26T21:15:40.241+0800    restoring testdb.table1 from /tmp/mongobak/testdb/table1.bson
2018-08-26T21:15:40.507+0800    restoring indexes for collection testdb.table1 from metadata
2018-08-26T21:15:40.529+0800    finished restoring testdb.table1 (10000 documents)
2018-08-26T21:15:40.529+0800    done
[root@localhost ~]#

When restoring a specific database, the command must point at the directory that contains that database's backup.
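
As the deprecation warning in the output above suggests, newer mongorestore versions prefer --nsInclude over -d when restoring from a dump directory; a hedged equivalent of the command above would be:

[root@localhost ~]# mongorestore --host 192.168.77.128 --port 20000 --nsInclude 'testdb.*' --drop /tmp/mongobak/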

4. Restore a specific collection:

[root@localhost ~]# mongorestore --host 192.168.77.128 --port 20000 -d testdb -c table1 --drop /tmp/mongobak/testdb/table1.bson 
2018-08-26T21:18:14.097+0800    checking for collection data in /tmp/mongobak/testdb/table1.bson
2018-08-26T21:18:14.139+0800    reading metadata for testdb.table1 from /tmp/mongobak/testdb/table1.metadata.json
2018-08-26T21:18:14.149+0800    restoring testdb.table1 from /tmp/mongobak/testdb/table1.bson
2018-08-26T21:18:14.331+0800    restoring indexes for collection testdb.table1 from metadata
2018-08-26T21:18:14.353+0800    finished restoring testdb.table1 (10000 documents)
2018-08-26T21:18:14.353+0800    done
[root@localhost ~]#

Likewise, when restoring a specific collection, the command must point at the .bson file that backs up that collection.

5. Restore collection data from a JSON file:

[root@localhost ~]# mongoimport --host 192.168.77.128 --port 20000 -d testdb -c table1 --file /tmp/table1.json

Restoring collection data from a JSON file is done with the mongoimport command; --file specifies the path of the JSON file.
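
If the data was exported as CSV (see the mongoexport sketch above), it can be imported back with --type=csv plus --headerline, which treats the first line of the file as the field names; this is a hedged sketch assuming the /tmp/table1.csv file from that example:

[root@localhost ~]# mongoimport --host 192.168.77.128 --port 20000 -d testdb -c table1 --type=csv --headerline --drop --file /tmp/table1.csv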

Reposted from: https://blog.51cto.com/13736286/2164516
