MySQL Shell provides util.dumpInstance() and util.dumpSchemas() for exporting data: you can dump all schemas in an instance, or a selected subset of schemas, to local files, and later restore them with util.loadDump(). Both utilities support multi-threaded parallel dumping and file compression, neither of which mysqldump offers. You can specify which schemas to back up; by default the information_schema, mysql, ndbinfo, performance_schema, and sys schemas are excluded.

Core requirement for use: object names must use the latin1 or utf8 character set.

# Dump an entire instance
util.dumpInstance(outputUrl[, options])
# Dump selected schemas
util.dumpSchemas(schemas, outputUrl[, options])
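The restore side mentioned above, util.loadDump(), follows the same shape; a minimal sketch (the path and thread count below are illustrative, and the target server must have local_infile enabled for the load to succeed):

# Restore a dump produced by util.dumpInstance() or util.dumpSchemas()
util.loadDump(url[, options])
# e.g. load the instance dump from the example below with 4 parallel threads:
# util.loadDump('/tmp/instanceDump', {"threads": 4})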
# Connect to the instance with mysqlsh
# mysqlsh root@127.0.0.1:3306
# Dump the entire instance
MySQL 127.0.0.1:3306 JS > util.dumpInstance('/tmp/instanceDump',{"threads":4})
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Writing users DDL
Preparing data dump for table `monitor`.`digests_prev`
Writing DDL for schema `monitor`
Writing DDL for view `monitor`.`my_statements`
Writing DDL for table `monitor`.`digests_prev`
Writing DDL for schema `sbtest`
WARNING: Could not select a column to be used as an index for table `monitor`.`digests_prev`. Chunking has been disabled for this table, data will be dumped to a single file.
Preparing data dump for table `sbtest`.`sbtest1`
Data dump for table `sbtest`.`sbtest1` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest10`
Data dump for table `sbtest`.`sbtest10` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest2`
Data dump for table `sbtest`.`sbtest2` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest3`
Writing DDL for table `sbtest`.`sbtest1`
Writing DDL for table `sbtest`.`sbtest10`
Data dump for table `sbtest`.`sbtest3` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest4`
Data dump for table `sbtest`.`sbtest4` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest5`
Data dump for table `sbtest`.`sbtest5` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest6`
Data dump for table `sbtest`.`sbtest6` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest7`
Data dump for table `sbtest`.`sbtest7` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest8`
Data dump for table `sbtest`.`sbtest8` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest9`
Writing DDL for table `sbtest`.`sbtest2`
Data dump for table `sbtest`.`sbtest9` will be chunked using column `id`
Running data dump using 4 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing DDL for table `sbtest`.`sbtest3`
Writing DDL for table `sbtest`.`sbtest4`
Writing DDL for table `sbtest`.`sbtest5`
Writing DDL for table `sbtest`.`sbtest6`
Writing DDL for table `sbtest`.`sbtest7`
Writing DDL for table `sbtest`.`sbtest8`
Writing DDL for table `sbtest`.`sbtest9`
Data dump for table `monitor`.`digests_prev` will be written to 1 file
Data dump for table `sbtest`.`sbtest1` will be written to 1 file
Data dump for table `sbtest`.`sbtest4` will be written to 3 files
Data dump for table `sbtest`.`sbtest2` will be written to 3 files
Data dump for table `sbtest`.`sbtest10` will be written to 3 files
Data dump for table `sbtest`.`sbtest3` will be written to 3 files
Data dump for table `sbtest`.`sbtest5` will be written to 3 files
Data dump for table `sbtest`.`sbtest7` will be written to 3 files
Data dump for table `sbtest`.`sbtest8` will be written to 3 files
Data dump for table `sbtest`.`sbtest6` will be written to 3 files
Data dump for table `sbtest`.`sbtest9` will be written to 3 files
1 thds dumping - 113% (4.58M rows / ~4.05M rows), 70.94K rows/s, 13.69 MB/s uncompressed, 6.20 MB/s compressed
Duration: 00:01:53s
Schemas dumped: 2
Tables dumped: 11
Uncompressed data size: 883.48 MB
Compressed data size: 400.76 MB
Compression ratio: 2.2
Rows written: 4582747
Bytes written: 400.76 MB
Average uncompressed throughput: 7.78 MB/s
Average compressed throughput: 3.53 MB/s
# Connect to the instance with mysqlsh
# mysqlsh root@127.0.0.1:3306
# Dump only the sbtest schema
MySQL 127.0.0.1:3306 sbtest JS > util.dumpSchemas(['sbtest'], '/tmp/sbtest',{"threads":8})
Acquiring global read lock
All transactions have been started
Locking instance for backup
Global read lock has been released
Writing global DDL files
Preparing data dump for table `sbtest`.`sbtest1`
Writing DDL for schema `sbtest`
Writing DDL for table `sbtest`.`sbtest1`
Writing DDL for table `sbtest`.`sbtest10`
Writing DDL for table `sbtest`.`sbtest2`
Writing DDL for table `sbtest`.`sbtest3`
Writing DDL for table `sbtest`.`sbtest4`
Writing DDL for table `sbtest`.`sbtest5`
Writing DDL for table `sbtest`.`sbtest6`
Data dump for table `sbtest`.`sbtest1` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest10`
Data dump for table `sbtest`.`sbtest10` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest2`
Data dump for table `sbtest`.`sbtest2` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest3`
Data dump for table `sbtest`.`sbtest3` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest4`
Data dump for table `sbtest`.`sbtest4` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest5`
Data dump for table `sbtest`.`sbtest5` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest6`
Data dump for table `sbtest`.`sbtest6` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest7`
Data dump for table `sbtest`.`sbtest7` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest8`
Data dump for table `sbtest`.`sbtest8` will be chunked using column `id`
Preparing data dump for table `sbtest`.`sbtest9`
Data dump for table `sbtest`.`sbtest9` will be chunked using column `id`
Running data dump using 8 threads.
NOTE: Progress information uses estimated values and may not be accurate.
Writing DDL for table `sbtest`.`sbtest7`
Writing DDL for table `sbtest`.`sbtest8`
Writing DDL for table `sbtest`.`sbtest9`
Data dump for table `sbtest`.`sbtest1` will be written to 1 file
Data dump for table `sbtest`.`sbtest3` will be written to 3 files
Data dump for table `sbtest`.`sbtest10` will be written to 3 files
Data dump for table `sbtest`.`sbtest2` will be written to 3 files
Data dump for table `sbtest`.`sbtest5` will be written to 3 files
Data dump for table `sbtest`.`sbtest6` will be written to 3 files
Data dump for table `sbtest`.`sbtest4` will be written to 3 files
Data dump for table `sbtest`.`sbtest9` will be written to 3 files
Data dump for table `sbtest`.`sbtest7` will be written to 3 files
Data dump for table `sbtest`.`sbtest8` will be written to 3 files
1 thds dumping - 113% (4.58M rows / ~4.05M rows), 99.32K rows/s, 19.15 MB/s uncompressed, 8.69 MB/s compressed
Duration: 00:00:32s
Schemas dumped: 1
Tables dumped: 10
Uncompressed data size: 883.46 MB
Compressed data size: 400.76 MB
Compression ratio: 2.2
Rows written: 4582718
Bytes written: 400.76 MB
Average uncompressed throughput: 27.58 MB/s
Average compressed throughput: 12.51 MB/s
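Before running a large dump like the ones above, the dump utilities also accept a dryRun option, which reports what would be dumped (schemas, tables, chunking decisions) without writing any files; a sketch reusing the sbtest example (values are illustrative):

# Preview the dump without writing any data files
util.dumpSchemas(['sbtest'], '/tmp/sbtest', {"dryRun": true, "threads": 8})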