docker pull elasticdump/elasticsearch-dump
First, create a directory to hold the data files, e.g. /tmp/data
docker run --rm -ti -v /tmp/data:/tmp elasticdump/elasticsearch-dump --input=http://es_address:9200/my_index --output=/tmp/index_data.json --type=data
When the command finishes, index_data.json will be created under /tmp/data/
docker run --rm -ti -v /tmp/data:/tmp elasticdump/elasticsearch-dump --input=http://es_address:9200/my_index --output=/tmp/index_mapping.json --type=mapping
When the command finishes, index_mapping.json will be created under /tmp/data/
docker run --rm -ti -v /tmp/data:/tmp elasticdump/elasticsearch-dump --output=http://es_address:9200/my_index --input=/tmp/index_data.json --type=data
docker run --rm -ti -v /tmp/data:/tmp elasticdump/elasticsearch-dump --output=http://es_address:9200/my_index --input=/tmp/index_mapping.json --type=mapping
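If the cluster requires Basic authentication, elasticdump accepts credentials embedded in the URL. A minimal sketch (the user, password, host, and index below are placeholders):
docker run --rm -ti -v /tmp/data:/tmp elasticdump/elasticsearch-dump --input=http://user:password@es_address:9200/my_index --output=/tmp/index_data.json --type=data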
For more usage examples, see https://github.com/elasticsearch-dump/elasticsearch-dump
elasticdump: Import and export tools for elasticsearch
version: %%version%%
Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]
--input
Source location (required)
--input-index
Source index and type
(default: all, example: index/type)
--output
Destination location (required)
--output-index
Destination index and type
(default: all, example: index/type)
--overwrite
Overwrite output file if it exists
(default: false)
--limit
How many objects to move in batch per operation
limit is approximate for file streams
(default: 100)
--size
How many objects to retrieve
(default: -1 -> no limit)
--concurrency
The maximum number of requests that can be made concurrently to a specified transport.
(default: 1)
--concurrencyInterval
The length of time in milliseconds in which up to `intervalCap` requests can be made
before the interval request count resets. Must be finite.
(default: 5000)
--intervalCap
The maximum number of transport requests that can be made within a given `concurrencyInterval`.
(default: 5)
--carryoverConcurrencyCount
If true, any incomplete requests from a `concurrencyInterval` will be carried over to
the next interval, effectively reducing the number of new requests that can be created
in that next interval. If false, up to `intervalCap` requests can be created in the
next interval regardless of the number of incomplete requests from the previous interval.
(default: true)
--throttleInterval
Delay in milliseconds between getting data from an inputTransport and sending it to an
outputTransport.
(default: 1)
--debug
Display the elasticsearch commands being used
(default: false)
--quiet
Suppress all messages except for errors
(default: false)
--type
What are we exporting?
(default: data, options: [settings, analyzer, data, mapping, alias, template])
--delete
Delete documents one-by-one from the input as they are
moved. Will not delete the source index
(default: false)
--searchBody
Perform a partial extract based on search results
when ES is the input, default values are
if ES > 5
`'{"query": { "match_all": {} }, "stored_fields": ["*"], "_source": true }'`
else
`'{"query": { "match_all": {} }, "fields": ["*"], "_source": true }'`
--headers
Add custom headers to Elasticsearch requests (helpful when
your Elasticsearch instance sits behind a proxy)
(default: '{"User-Agent": "elasticdump"}')
--params
Add custom parameters to the Elasticsearch request URI. Helpful when, for
example, you want to use elasticsearch preference
(default: null)
--sourceOnly
Output only the json contained within the document _source
Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}
sourceOnly: {SOURCE}
(default: false)
--ignore-errors
Will continue the read/write loop on write error
(default: false)
--scrollTime
How long the nodes will keep the requested search context open.
(default: 10m)
--maxSockets
How many simultaneous HTTP requests can we make?
(default:
5 [node <= v0.10.x] /
Infinity [node >= v0.11.x] )
--timeout
Integer containing the number of milliseconds to wait for
a request to respond before aborting the request. Passed
directly to the request library. Mostly used when you don't
care too much about losing some data when importing
but would rather have speed.
--offset
Integer containing the number of rows you wish to skip
ahead from the input transport. When importing a large
index, things can go wrong, be it connectivity, crashes,
someone forgetting to `screen`, etc. This allows you
to start the dump again from the last known line written
(as logged by the `offset` in the output). Please be
advised that since no sorting is specified when the
dump is initially created, there's no real way to
guarantee that the skipped rows have already been
written/parsed. This is more of an option for when
you want to get as much data as possible into the index
without concern for losing some rows in the process,
similar to the `timeout` option.
(default: 0)
--noRefresh
Disable input index refresh.
Positive:
1. Greatly increases indexing speed
2. Much lower hardware requirements
Negative:
1. Recently added data may not be indexed
Recommended for big data indexing, where speed and
system health are a higher priority than recently
added data.
--inputTransport
Provide a custom js file to use as the input transport
--outputTransport
Provide a custom js file to use as the output transport
--toLog
When using a custom outputTransport, should log lines
be appended to the output stream?
(default: true, except for `$`)
--transform
A javascript snippet that will be called to modify documents
before writing them to the destination. The global variable 'doc'
is available.
Example script for computing a new field 'f2' as doubled
value of field 'f1':
doc._source["f2"] = doc._source.f1 * 2;
May be used multiple times.
Additionally, transform may be performed by a module. See [Module Transform](#module-transform) below.
--awsChain
Use [standard](https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/) location and ordering for resolving credentials including environment variables, config files, EC2 and ECS metadata locations
_Recommended option for use with AWS_
--awsAccessKeyId
--awsSecretAccessKey
When using Amazon Elasticsearch Service protected by
AWS Identity and Access Management (IAM), provide
your Access Key ID and Secret Access Key.
--sessionToken can also be optionally provided if using temporary credentials
--awsIniFileProfile
Alternative to --awsAccessKeyId and --awsSecretAccessKey,
loads credentials from a specified profile in aws ini file.
For greater flexibility, consider using --awsChain
and setting AWS_PROFILE and AWS_CONFIG_FILE
environment variables to override defaults if needed
--awsIniFileName
Override the default aws ini file name when using --awsIniFileProfile
Filename is relative to ~/.aws/
(default: config)
--support-big-int
Support big integer numbers
--retryAttempts
Integer indicating the number of times a request should be automatically re-attempted before failing
when a connection fails with one of the following errors `ECONNRESET`, `ENOTFOUND`, `ESOCKETTIMEDOUT`,
`ETIMEDOUT`, `ECONNREFUSED`, `EHOSTUNREACH`, `EPIPE`, `EAI_AGAIN`
(default: 0)
--retryDelay
Integer indicating the back-off/break period between retry attempts (milliseconds)
(default : 5000)
--parseExtraFields
Comma-separated list of meta-fields to be parsed
--fileSize
supports file splitting. This value must be a string supported by the **bytes** module.
The following abbreviations must be used to signify size in terms of units
b for bytes
kb for kilobytes
mb for megabytes
gb for gigabytes
tb for terabytes
e.g. 10mb / 1gb / 1tb
Partitioning helps to alleviate overflow/out of memory exceptions by efficiently segmenting files
into smaller chunks that can then be merged if need be.
--fsCompress
gzip data before outputting to file
--s3AccessKeyId
AWS access key ID
--s3SecretAccessKey
AWS secret access key
--s3Region
AWS region
--s3Endpoint
AWS endpoint can be used for AWS compatible backends such as
OpenStack Swift and OpenStack Ceph
--s3SSLEnabled
Use SSL to connect to AWS [default true]
--s3ForcePathStyle Force path style URLs for S3 objects [default false]
--s3Compress
gzip data before sending to s3
--retryDelayBase
The base number of milliseconds to use in the exponential backoff for operation retries. (s3)
--customBackoff
Activate custom customBackoff function. (s3)
--tlsAuth
Enable TLS X509 client authentication
--cert, --input-cert, --output-cert
Client certificate file. Use --cert if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--key, --input-key, --output-key
Private key file. Use --key if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--pass, --input-pass, --output-pass
Pass phrase for the private key. Use --pass if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--ca, --input-ca, --output-ca
CA certificate. Use --ca if source and destination are identical.
Otherwise, use the one prefixed with --input or --output as needed.
--inputSocksProxy, --outputSocksProxy
Socks5 host address
--inputSocksPort, --outputSocksPort
Socks5 host port
--help
This page
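As a rough example of how several of the options above combine (the host, index, output path, and query field are placeholders), a throttled partial export with smaller batches and capped concurrency might look like:
elasticdump \
--input=http://es_address:9200/my_index \
--output=/data/my_index_subset.json \
--type=data \
--limit=500 \
--concurrency=2 \
--searchBody='{"query":{"term":{"status":"active"}}}'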
Installation without Docker
Note: Node.js v10.0.0 or later is required.
wget https://nodejs.org/dist/v14.17.3/node-v14.17.3-linux-x64.tar.xz -O /opt/node-v14.17.3-linux-x64.tar.xz
tar -xvf /opt/node-v14.17.3-linux-x64.tar.xz -C /opt
vim ~/.bashrc
# append the following
#node
export NODE_HOME=/opt/node-v14.17.3-linux-x64
export PATH=$NODE_HOME/bin:$PATH
# reload
source ~/.bashrc
Check that the versions are reported:
[root@localhost ~]# node -v
v14.17.3
[root@localhost ~]# npm -v
6.14.13
npm install elasticdump
A successful install ends with output like:
+ [email protected]
added 112 packages from 198 contributors and audited 112 packages in 19.171s
After installation, a node_modules directory is created in the current
directory, containing the elasticdump package. Its bin directory holds two
executables: elasticdump (single-index operations) and multielasticdump
(multi-index operations). For convenience, add them to your PATH:
vim ~/.bashrc
# append the following
#node
export DUMP_HOME=/root/node_modules/elasticdump
export PATH=$DUMP_HOME/bin:$PATH
# reload
source ~/.bashrc
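To verify that the PATH change took effect:
which elasticdump multielasticdump
elasticdump --help
The elasticdump module directory is laid out as follows: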
.
├── bin
│ ├── elasticdump
│ └── multielasticdump
├── elasticdump.js
├── lib
│ ├── add-auth.js
│ ├── argv.js
│ ├── aws4signer.js
│ ├── help.txt
│ ├── ioHelper.js
│ ├── is-url.js
│ ├── jsonparser.js
│ ├── parse-base-url.js
│ ├── parse-meta-data.js
│ ├── processor.js
│ ├── splitters
│ ├── transports
│ └── version-check.js
├── LICENSE.txt
├── package.json
├── README.md
└── transforms
└── anonymize.js
# Copy an index from production to staging with analyzer and mapping:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=analyzer
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=http://staging.es.com:9200/my_index \
--type=data
# Backup index data to a file:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index_mapping.json \
--type=mapping
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--type=data
# Backup an index to gzip using stdout:
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=$ \
| gzip > /data/my_index.json.gz
# Backup the results of a query to a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody="{\"query\":{\"term\":{\"username\": \"admin\"}}}"
# Specify searchBody from a file
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=query.json \
--searchBody=@/data/searchbody.json
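The file referenced with @ simply contains the JSON query body; for the example above, /data/searchbody.json would hold:
{"query": {"term": {"username": "admin"}}}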
# Copy a single shard's data:
elasticdump \
--input=http://es.com:9200/api \
--output=http://es.com:9200/api2 \
--input-params="{\"preference\":\"_shards:0\"}"
# Backup aliases to a file
elasticdump \
--input=http://es.com:9200/index-name/alias-filter \
--output=alias.json \
--type=alias
# Import aliases into ES
elasticdump \
--input=./alias.json \
--output=http://es.com:9200 \
--type=alias
# Backup templates to a file
elasticdump \
--input=http://es.com:9200/template-filter \
--output=templates.json \
--type=template
# Import templates into ES
elasticdump \
--input=./templates.json \
--output=http://es.com:9200 \
--type=template
# Split files into multiple parts
elasticdump \
--input=http://production.es.com:9200/my_index \
--output=/data/my_index.json \
--fileSize=10mb
# Import data from S3 into ES (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input "s3://${bucket_name}/${file_name}.json" \
--output=http://production.es.com:9200/my_index
# Export ES data to S3 (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input=http://production.es.com:9200/my_index \
--output "s3://${bucket_name}/${file_name}.json"
# Import data from MINIO (s3 compatible) into ES (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input "s3://${bucket_name}/${file_name}.json" \
--output=http://production.es.com:9200/my_index \
--s3ForcePathStyle true \
--s3Endpoint https://production.minio.co
# Export ES data to MINIO (s3 compatible) (using s3urls)
elasticdump \
--s3AccessKeyId "${access_key_id}" \
--s3SecretAccessKey "${access_key_secret}" \
--input=http://production.es.com:9200/my_index \
--output "s3://${bucket_name}/${file_name}.json"
--s3ForcePathStyle true
--s3Endpoint https://production.minio.co
# Import data from CSV file into ES (using csvurls)
# The csv:// prefix must be included to allow parsing of csv files,
# e.g. --input "csv://${file_path}.csv"
elasticdump \
--input "csv:///data/cars.csv" \
--output=http://production.es.com:9200/my_index \
--csvSkipRows 1 \
--csvDelimiter ";"
# --csvSkipRows skips already-parsed rows (this does not include the headers row)
# --csvDelimiter defaults to ','
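For reference, the hypothetical /data/cars.csv above is just a delimited text file whose first line supplies the field names; with --csvDelimiter ";" it might look like:
make;model;year
Toyota;Corolla;2018
Honda;Civic;2020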
# Backup ES indices & all their types to the es_backup folder
multielasticdump \
--direction=dump \
--match='^.*$' \
--input=http://production.es.com:9200 \
--output=/tmp/es_backup
# Only backup ES indices ending with a prefix of `-index` (match regex).
# Only the indices data will be backed up. All other types are ignored.
# NB: analyzer & alias types are ignored by default
multielasticdump \
--direction=dump \
--match='^.*-index$'\
--input=http://production.es.com:9200 \
--ignoreType='mapping,settings,template' \
--output=/tmp/es_backup
Common parameters:
--direction dump/load  export/import
--ignoreType  types to ignore: data, mapping, analyzer, alias, settings, template
--includeType  types to include: data, mapping, analyzer, alias, settings, template
--suffix  append a suffix to the index name, e.g. ${index}-backup-2018-03-13
--prefix  prepend a prefix to the index name, e.g. es6-${index}
Sync data directly between two ES clusters:
elasticdump \
--input=http://192.168.1.140:9200/source_index \
--output=http://192.168.1.141:9200/target_index \
--type=mapping
elasticdump \
--input=http://192.168.1.140:9200/source_index \
--output=http://192.168.1.141:9200/target_index \
--type=data \
--limit=2000 # objects per batch (default 100); increase it for large datasets to speed up migration
# Export
elasticdump \
--input=http://192.168.1.140:9200/source_index \
--output=/data/source_index_mapping.json \
--type=mapping
elasticdump \
--input=http://192.168.1.140:9200/source_index \
--output=/data/source_index.json \
--type=data \
--limit=2000
# Import
elasticdump \
--input=/data/source_index_mapping.json \
--output=http://192.168.1.141:9200/source_index \
--type=mapping
elasticdump \
--input=/data/source_index.json \
--output=http://192.168.1.141:9200/source_index \
--type=data \
--limit=2000
# Export
multielasticdump \
--direction=dump \
--match='^.*$' \
--input=http://192.168.1.140:9200 \
--output=/tmp/es_backup \
--includeType='data,mapping' \
--limit=2000
# Import
multielasticdump \
--direction=load \
--match='^.*$' \
--input=/tmp/es_backup \
--output=http://192.168.1.141:9200 \
--includeType='data,mapping' \
--limit=2000
Back up an ES index as a gzip file to reduce storage pressure:
elasticdump \
--input=http://192.168.1.140:9200/source_index \
--output=$ \
--limit=2000 \
| gzip > /data/source_index.json.gz
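To restore from such a gzip backup, one simple approach is to decompress it and import the resulting JSON file (a sketch; the target address and index are placeholders):
gunzip -k /data/source_index.json.gz
elasticdump \
--input=/data/source_index.json \
--output=http://192.168.1.141:9200/target_index \
--type=data \
--limit=2000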
#!/bin/bash
echo -n "源ES地址: "
read source_es
echo -n "目标ES地址: "
read target_es
echo -n "源索引名: "
read source_index
echo -n "目标索引名: "
read target_index
DUMP_HOME=/root/node_modules/elasticdump/bin
${DUMP_HOME}/elasticdump --input=${source_es}/${source_index} --output=${target_es}/${target_index} --type=mapping
${DUMP_HOME}/elasticdump --input=${source_es}/${source_index} --output=${target_es}/${target_index} --type=data --limit=2000
#!/bin/bash
source_es=http://192.168.1.140:9200
target_index=tspa-template-question-answer
data_dir=/opt/es_backup
DUMP_HOME=/root/node_modules/elasticdump/bin
if [ ! -d "${data_dir}" ]; then
mkdir ${data_dir}
fi
${DUMP_HOME}/elasticdump --input=${source_es}/${target_index} --output=${data_dir}/${target_index}_mapping.json --type=mapping
${DUMP_HOME}/elasticdump --input=${source_es}/${target_index} --output=${data_dir}/${target_index}.json --type=data --limit=2000
zip -jqrm ${data_dir}/$(date '+%Y%m%d-%H%M').zip ${data_dir}/*.json
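A backup script like this is typically scheduled with cron; for example, assuming it is saved as /opt/scripts/es_backup.sh (a hypothetical path), a daily 2 a.m. run would be:
0 2 * * * /bin/bash /opt/scripts/es_backup.sh >> /var/log/es_backup.log 2>&1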
#!/bin/bash
echo -n "目标ES地址:"
read target_es
echo -n "源索引名:"
read source_index
echo -n "map文件名:"
read map_file
echo -n "data文件名:"
read data_file
DUMP_HOME=/root/node_modules/elasticdump/bin
${DUMP_HOME}/elasticdump --input=${map_file} --output=${target_es}/${source_index} --type=mapping
${DUMP_HOME}/elasticdump --input=${data_file} --output=${target_es}/${source_index} --type=data --limit=2000
#!/bin/bash
source_es=http://192.168.1.140:9200
data_dir=/opt/es_backup
DUMP_HOME=/root/node_modules/elasticdump/bin
if [ ! -d "${data_dir}" ]; then
mkdir ${data_dir}
fi
${DUMP_HOME}/multielasticdump --direction=dump --match='^.*$' --input=${source_es} --output=${data_dir} --includeType='data,mapping' --limit=2000
zip -jqrm ${data_dir}/$(date '+%Y%m%d-%H%M').zip ${data_dir}/*.json
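The matching restore unzips the archive back into the directory and loads it with multielasticdump (a sketch; the zip file name and target address are placeholders):
unzip /opt/es_backup/20220424-0200.zip -d /opt/es_backup
/root/node_modules/elasticdump/bin/multielasticdump --direction=load --match='^.*$' --input=/opt/es_backup --output=http://192.168.1.141:9200 --includeType='data,mapping' --limit=2000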
Creating the index structure with curl:
curl -X PUT --header 'Content-Type: application/json' --header 'Accept: application/json' -d '{"aliases":{"dcd":{}},"settings":{"index":{"number_of_shards":2,"number_of_replicas":1}}}' 'http://10.128.3.87:9200/dcd-2022-04-24'
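To confirm the index was created with the expected alias and settings:
curl -X GET 'http://10.128.3.87:9200/dcd-2022-04-24?pretty'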