Scheduled / full backups
Back up with: mongodump
Restore with: mongorestore
Backup format: bson (optionally gzip-compressed)
Data analysis / data migration
Back up with: mongoexport
Restore with: mongoimport
Backup format: json / csv
mongoexport backup
Export one collection as json: mongoexport --port <port> -d <db> -c <collection> -o <output path>.json
Export one collection as csv: mongoexport --port <port> -d <db> -c <collection> --type=csv -f <fields to export> -o <output path>.csv
mongoimport restore
Restore a collection from json: mongoimport --port 26017 -d <target db> -c <collection> <backup path>.json
Restore a collection from csv: mongoimport --port 26017 -d <db> -c <collection> --type=csv --headerline <backup path>.csv
mongodump backup
Dump a database: mongodump --port 26017 -d <db> -o <backup dir>
mongorestore restore
Restore a database: mongorestore --port 26017 -d <db> <backup dir> --drop
[root@mongodb-1 ~]# mkdir /data/backup
[root@mongodb-1 ~]# chown -R mongo.mongo /data/backup
[mongo@mongodb-1 ~]$ mongoexport --port 26017 -d test -c user_info -o /data/backup/user_info.json
2021-02-17T17:17:30.810+0800 connected to: localhost:26017
2021-02-17T17:17:30.903+0800 exported 5 records
Export only the name, age, ad, and sex fields of the user_info collection
[mongo@mongodb-1 ~]$ mongoexport --port 26017 -d test -c user_info --type=csv -f name,age,ad,sex -o /data/backup/user_info.csv
2021-02-17T17:30:18.998+0800 connected to: localhost:26017
2021-02-17T17:30:18.999+0800 exported 5 records
[mongo@mongodb-1 ~]$ mongoimport --port 26017 -d user_db -c user_json /data/backup/user_info.json
2021-02-17T17:39:14.395+0800 connected to: localhost:26017
2021-02-17T17:39:14.418+0800 imported 5 documents
Restore succeeded.
When restoring a csv collection, mongoimport needs the --headerline flag; without it the header row is inserted as an ordinary document.
--headerline and -f cannot be used together.
[mongo@mongodb-1 ~]$ mongoimport --port 26017 -d user_db -c user_csv --type=csv --headerline /data/backup/user_info.csv
2021-02-17T17:46:45.187+0800 connected to: localhost:26017
2021-02-17T17:46:45.209+0800 imported 5 documents
Restore succeeded.
Restore only the specified fields (note: with -f and no --headerline, the header row itself is imported as a document unless it is stripped first):
[mongo@mongodb-1 ~]$ mongoimport --port 26017 -d user_db -c user_csv2 --type=csv -f name,age /data/backup/user_info.csv
1. Create a collection with some test data
> db.book_date.insertMany([
{ "name":"nginx", "price":25, "num":100, "status":"N" },
{ "name":"ansible", "price":50, "num":200 , "status":"A" },
{ "name":"tomcat", "price":100, "num":150, "status":"T" },
{ "name":"redis", "price":75, "num":320 , "status":"R" },
{ "name":"docker", "price":45, "num":270, "status":"D" }
]);
2. The database currently has two collections
mongo-rs:PRIMARY> show tables
book_date
user_info
3. Back up every collection in the database
[mongo@mongodb-1 ~]$ mongodump --port 26017 -d test -o /data/backup/test_db
2021-02-17T17:24:48.732+0800 writing test.book_date to
2021-02-17T17:24:48.732+0800 writing test.user_info to
2021-02-17T17:24:48.734+0800 done dumping test.book_date (5 documents)
2021-02-17T17:24:48.734+0800 done dumping test.user_info (5 documents)
4. Inspect the backup files
[mongo@mongodb-1 ~]$ cd /data/backup/test_db/
[mongo@mongodb-1 /data/backup/test_db]$ tree .
.
└── test
├── book_date.bson
├── book_date.metadata.json
├── user_info.bson
└── user_info.metadata.json
1 directory, 4 files
Back up all databases:
[mongo@mongodb-1 ~]$ mongodump --port 26017 -o /data/backup/all_db
2021-02-17T18:35:38.770+0800 writing admin.system.version to
2021-02-17T18:35:38.773+0800 done dumping admin.system.version (1 document)
2021-02-17T18:35:38.773+0800 writing user_db.user_csv2 to
2021-02-17T18:35:38.773+0800 writing zabbix.users to
2021-02-17T18:35:38.774+0800 writing test.book_date to
2021-02-17T18:35:38.774+0800 writing test.user_info to
2021-02-17T18:35:38.779+0800 done dumping zabbix.users (6 documents)
2021-02-17T18:35:38.779+0800 writing user_db.user_json to
2021-02-17T18:35:38.779+0800 done dumping test.book_date (5 documents)
2021-02-17T18:35:38.779+0800 writing user_db.user_csv to
2021-02-17T18:35:38.780+0800 done dumping test.user_info (5 documents)
2021-02-17T18:35:38.780+0800 done dumping user_db.user_csv2 (6 documents)
2021-02-17T18:35:38.781+0800 done dumping user_db.user_json (5 documents)
2021-02-17T18:35:38.781+0800 done dumping user_db.user_csv (5 documents)
[mongo@mongodb-1 ~]$ tree /data/backup/all_db
Back up one database with gzip compression:
[mongo@mongodb-1 ~]$ mongodump --port 26017 -d zabbix -o /data/backup/zabbix_db --gzip
2021-02-17T18:36:48.908+0800 writing zabbix.users to
2021-02-17T18:36:48.909+0800 done dumping zabbix.users (6 documents)
[mongo@mongodb-1 ~]$ tree /data/backup/zabbix_db/
1. Drop the zabbix database
mongo-rs:PRIMARY> use zabbix
mongo-rs:PRIMARY> db.dropDatabase()
2. Restore the database (point mongorestore at the dump's zabbix subdirectory, not at all_db/ itself)
[mongo@mongodb-1 ~]$ mongorestore --port 26017 -d zabbix /data/backup/all_db/zabbix --drop
Plain backup of the replica set:
mongodump --host=mongo-rs/192.168.81.210:26017,192.168.81.210:28017,192.168.81.210:29017 -o /data/backup/mongo_rs
Compressed backup of the replica set:
mongodump --host=mongo-rs/192.168.81.210:26017,192.168.81.210:28017,192.168.81.210:29017 -o /data/backup/mongo_rs --gzip
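Since this section opened with scheduled full backups, the replica-set mongodump above is the natural thing to wrap for cron. The sketch below is a hypothetical wrapper, not part of the original walkthrough: the host list and /data/backup root follow this page, while the dated directory layout and retention hint are assumptions.

```shell
#!/bin/sh
# Hypothetical cron wrapper around the replica-set mongodump shown above.
# Host list and backup root are the ones used in this section.
RS_URI="mongo-rs/192.168.81.210:26017,192.168.81.210:28017,192.168.81.210:29017"
BACKUP_ROOT="/data/backup/mongo_rs"
DEST="$BACKUP_ROOT/$(date +%F)"        # one dated directory per run

CMD="mongodump --host=$RS_URI --gzip -o $DEST"
echo "$CMD"                            # dry run: print the command only
# To execute for real, replace the echo above with:
#   mkdir -p "$DEST" && $CMD
# Optional retention, delete dated directories older than 7 days:
#   find "$BACKUP_ROOT" -mindepth 1 -maxdepth 1 -mtime +7 -exec rm -rf {} +
```

A crontab entry such as `0 2 * * * /usr/local/bin/mongo_backup.sh` (path hypothetical) would then take a compressed full backup every night at 02:00.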
Migrate a MySQL/MariaDB table into MongoDB: first export it as CSV.
MariaDB [(none)]> select * from zabbix.users into outfile '/var/lib/mysql/users.csv' fields terminated by ',';
[root@mongodb-1 ~]# cp /var/lib/mysql/users.csv /tmp/
A CSV exported this way contains no header row with the column names.
Navicat can export a CSV that does include the column names; without a header row, a --headerline import would consume the first data row as field names.
In Navicat's export wizard, select all fields and enable "include column titles"; the exported CSV then carries a header row.
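If Navicat is not at hand, a header row can also be prepended on the Linux side before importing. This is a minimal sketch: the column names (userid, alias, name) and the sample rows are hypothetical, so substitute the real columns of zabbix.users.

```shell
# Simulate the headerless CSV that SELECT ... INTO OUTFILE produced (rows made up):
printf '1,Admin,Zabbix\n2,guest,Default\n' > /tmp/users.csv

# Prepend a header row so that mongoimport --headerline has field names to use.
# Column names here are hypothetical; check the real schema of zabbix.users.
printf 'userid,alias,name\n' | cat - /tmp/users.csv > /tmp/users_with_header.csv

head -1 /tmp/users_with_header.csv   # userid,alias,name
```

The resulting file can then be fed to the same `mongoimport --type=csv --headerline` command shown below.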
1. Copy the Navicat-exported CSV (with column titles) to the Linux host
2. Import the MySQL data into MongoDB
[mongo@mongodb-1 ~]$ mongoimport --port 26017 -d zabbix -c users --type=csv --headerline /tmp/users.csv
2021-02-17T18:23:35.717+0800 connected to: localhost:26017
2021-02-17T18:23:35.748+0800 imported 6 documents
[mongo@mongodb-1 ~]$ mongo --port 26017
mongo-rs:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
test 0.000GB
user_db 0.000GB
zabbix 0.000GB
mongo-rs:PRIMARY> use zabbix
switched to db zabbix
mongo-rs:PRIMARY> show tables
users
mongo-rs:PRIMARY> db.users.find()
oplog: in a replica set, the oplog is a capped collection; by default it takes 5% of free disk space, adjustable with the --oplogSizeMB parameter.
The oplog plays the role of MySQL's binlog: data that was deleted before it was ever backed up can be recovered from it.
Recovery plan:
1. Insert test data
2. Delete all previous backups so the backup directory is clean
3. Take a full backup of the databases
4. Insert new data, so the writes are recorded in the oplog
5. Delete the newly inserted data
6. Dump the current oplog.rs collection (the collection that records every operation); data that was never backed up can be recovered from it
7. Locate the delete operation inside oplog.rs and note its timestamp
8. Replay the oplog to restore the deleted data
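The command-line half of the steps above can be sketched as one script. The commands are only composed and printed here (a dry run), using the port and paths of this section; the --oplogLimit timestamp is the one this walkthrough finds in step 7 and must come from your own oplog.rs.

```shell
#!/bin/sh
# Dry-run sketch of the recovery steps above: commands are composed into
# variables and printed, not executed. Port and paths follow this section.
FULL_BACKUP='mongodump --port 26017 --oplog -o /data/backup/'              # step 3
OPLOG_DUMP='mongodump --port 26017 -d local -c oplog.rs -o /data/backup/'  # step 6
# Between steps 6 and 8: cp local/oplog.rs.bson to oplog.bson, then rm -rf local/
RESTORE='mongorestore --port 26017 --oplogReplay --oplogLimit "1613562226:1" --drop /data/backup/'  # step 8

# Steps 4-5 (insert new data, then drop it) happen inside the mongo shell.
printf '%s\n' "$FULL_BACKUP" "$OPLOG_DUMP" "$RESTORE"
```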
[mongo@mongodb-1 ~]$ mongo --port 26017
mongo-rs:PRIMARY> use my_testdb
mongo-rs:PRIMARY> for (var i = 1; i < 20; i++) {
db.ci.insert({a: i});
}
mongo-rs:PRIMARY> show tables
ci
[mongo@mongodb-1 ~]$ rm -rf /data/backup/*
--oplog: while the dump runs, also back up the oplog entries it generates; an oplog.bson file holding the most recent operations is written under /data/backup.
[mongo@mongodb-1 ~]$ mongodump --port 26017 --oplog -o /data/backup/
2021-02-17T19:39:09.272+0800 writing admin.system.version to
2021-02-17T19:39:09.273+0800 done dumping admin.system.version (1 document)
2021-02-17T19:39:09.282+0800 writing my_testdb.ci to
2021-02-17T19:39:09.282+0800 writing test_db.ci to
2021-02-17T19:39:09.282+0800 writing user_db.user_csv2 to
2021-02-17T19:39:09.282+0800 writing test.book_date to
2021-02-17T19:39:09.287+0800 done dumping my_testdb.ci (19 documents)
2021-02-17T19:39:09.287+0800 writing test.user_info to
2021-02-17T19:39:09.287+0800 done dumping test_db.ci (19 documents)
2021-02-17T19:39:09.287+0800 writing user_db.user_json to
2021-02-17T19:39:09.287+0800 done dumping user_db.user_csv2 (6 documents)
2021-02-17T19:39:09.287+0800 writing user_db.user_csv to
2021-02-17T19:39:09.287+0800 done dumping test.book_date (5 documents)
2021-02-17T19:39:09.287+0800 writing test_db.test1 to
2021-02-17T19:39:09.295+0800 done dumping test.user_info (5 documents)
2021-02-17T19:39:09.296+0800 done dumping user_db.user_csv (5 documents)
2021-02-17T19:39:09.296+0800 done dumping test_db.test1 (3 documents)
2021-02-17T19:39:09.296+0800 done dumping user_db.user_json (5 documents)
2021-02-17T19:39:09.297+0800 writing captured oplog to
2021-02-17T19:39:09.577+0800 dumped 1 oplog entry
mongo-rs:PRIMARY> use my_testdb
mongo-rs:PRIMARY> db.ci_new1.insertMany( [
{ "id": 1},
{ "id": 2},
{ "id": 3},
]);
mongo-rs:PRIMARY> db.ci_new2.insertMany( [
{ "id": 1},
{ "id": 2},
{ "id": 3},
]);
mongo-rs:PRIMARY> db.ci.drop()
true
mongo-rs:PRIMARY> show tables
ci_new1
ci_new2
[mongo@mongodb-1 ~]$ mongodump --port 26017 -d local -c oplog.rs -o /data/backup/
2021-02-17T19:46:50.590+0800 writing local.oplog.rs to
2021-02-17T19:46:50.598+0800 done dumping local.oplog.rs (3406 documents)
mongo-rs:PRIMARY> db.oplog.rs.find({ns:"my_testdb.$cmd"}).pretty()
{
"ts" : Timestamp(1613561789, 1),
"t" : NumberLong(5),
"h" : NumberLong("1313438378401925148"),
"v" : 2,
"op" : "c",
"ns" : "my_testdb.$cmd",
"ui" : UUID("24f4c7aa-7ae0-4767-adea-0a3cb0b7709e"),
"wall" : ISODate("2021-02-17T11:36:29.834Z"),
"o" : {
"create" : "ci",
"idIndex" : {
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "my_testdb.ci"
}
}
}
{
"ts" : Timestamp(1613562107, 1),
"t" : NumberLong(5),
"h" : NumberLong("1204714198446601217"),
"v" : 2,
"op" : "c",
"ns" : "my_testdb.$cmd",
"ui" : UUID("8e38f17b-a2ad-436d-989c-9cf12da216fc"),
"wall" : ISODate("2021-02-17T11:41:47.233Z"),
"o" : {
"create" : "ci_new1",
"idIndex" : {
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "my_testdb.ci_new1"
}
}
}
{
"ts" : Timestamp(1613562113, 1),
"t" : NumberLong(5),
"h" : NumberLong("-8601733323753055448"),
"v" : 2,
"op" : "c",
"ns" : "my_testdb.$cmd",
"ui" : UUID("f0988675-e4aa-43aa-84ca-35cb3c9795fb"),
"wall" : ISODate("2021-02-17T11:41:53.652Z"),
"o" : {
"create" : "ci_new2",
"idIndex" : {
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "my_testdb.ci_new2"
}
}
}
{
"ts" : Timestamp(1613562226, 1),
"t" : NumberLong(5),
"h" : NumberLong("503106244595267657"),
"v" : 2,
"op" : "c",
"ns" : "my_testdb.$cmd",
"ui" : UUID("24f4c7aa-7ae0-4767-adea-0a3cb0b7709e"),
"wall" : ISODate("2021-02-17T11:43:46.345Z"),
"o" : {
"drop" : "ci" #删除ci的位置,记录ts的时间戳
}
}
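The ts of that entry can also be pulled out on the command line. bsondump (shipped with the MongoDB database tools) prints each dumped oplog document as extended JSON, where a timestamp appears as {"$timestamp":{"t":...,"i":...}}. The extraction below runs against a simulated line; against a real dump, pipe bsondump output through the same grep/sed.

```shell
# Simulated bsondump output for the drop entry (real lines carry more fields):
LINE='{"ts":{"$timestamp":{"t":1613562226,"i":1}},"op":"c","ns":"my_testdb.$cmd","o":{"drop":"ci"}}'

# On the real dump:  bsondump /data/backup/local/oplog.rs.bson | grep '"drop":"ci"'
# Extract t and i and join them as the "t:i" form that --oplogLimit expects:
TS=$(printf '%s\n' "$LINE" | sed -n 's/.*"t":\([0-9]*\),"i":\([0-9]*\).*/\1:\2/p')
echo "$TS"    # 1613562226:1
```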
1. Copy oplog.rs.bson from the backup's local directory into the full-backup directory as oplog.bson, making it the newest oplog
[mongo@mongodb-1 ~]$ cd /data/backup/local/
[mongo@mongodb-1 /data/backup/local]$ cp oplog.rs.bson ../oplog.bson
2. After oplog.rs.bson has been promoted to oplog.bson, the local directory must be deleted, otherwise the restore fails
[mongo@mongodb-1 /data/backup]$ rm -rf local/
3. Replay the oplog to restore the accidentally dropped data
[mongo@mongodb-1 /data/backup]$ mongorestore --port 26017 --oplogReplay --oplogLimit "1613562226:1" --drop /data/backup/
[mongo@mongodb-1 /data/backup]$ mongo --port 26017
mongo-rs:PRIMARY> show dbs
admin 0.000GB
config 0.000GB
local 0.000GB
my_testdb 0.000GB
test 0.000GB
test_db 0.000GB
user_db 0.000GB
mongo-rs:PRIMARY> use my_testdb
switched to db my_testdb
mongo-rs:PRIMARY> show tables
ci
ci_new1
ci_new2
mongo-rs:PRIMARY> db.ci.find()