Installing elasticdump: Backing Up and Restoring ES Data

1. elasticdump Backup and Restore

1.1 Introduction to elasticdump

elasticdump is a tool for transferring and saving ES index data. Before running a dump, the following curl commands are useful for confirming the cluster is reachable:

# List the nodes in the cluster:

curl 'localhost:9200/_cat/nodes?v'

# List all indices:

curl 'localhost:9200/_cat/indices?v'

Create an index named "customer", then list all indices again:

curl -X PUT 'localhost:9200/customer?pretty'

curl 'localhost:9200/_cat/indices?v'

If the cluster requires a username and password for access, specify the credentials as follows:

# List the nodes in the cluster:

curl --user username:password 'localhost:9200/_cat/nodes?v'
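elasticdump itself accepts basic-auth credentials embedded directly in its --input/--output URLs. A minimal sketch, where username, password, and my_index are placeholders:

# Dump an index from a basic-auth-protected cluster:

elasticdump \

--input=http://username:password@localhost:9200/my_index \

--output=/data/my_index.json \

--type=data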

1.2 Prerequisites

elasticdump can be installed in two ways: via npm or via Docker (if the corresponding environment is already in place, skip straight to the elasticdump installation step). This article covers only the npm method; for the Docker method, consult the official documentation.

1. Download the Node.js package

wget https://nodejs.org/dist/v10.13.0/node-v10.13.0-linux-x64.tar.gz

2. Extract the Node.js package

tar xf node-v10.13.0-linux-x64.tar.gz

3. Create symlinks

ln -s ~/node-v10.13.0-linux-x64/bin/node /usr/bin/node

ln -s ~/node-v10.13.0-linux-x64/bin/npm /usr/bin/npm
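To confirm the links resolve, check the versions (a quick sanity check; the output should match the package downloaded above):

node -v

npm -v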

2. Install elasticdump

Install elasticdump with npm by running the following command.

npm install elasticdump -g
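To verify the global install succeeded, you can list the package (a quick check; the reported version will vary with the release installed):

npm list -g elasticdump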

2.1 Using elasticdump

This article mainly covers two common operations: exporting index data to a file and importing data from a file. elasticdump is not limited to these two uses; for more detailed usage, consult the official documentation.

[root@cndh1323-2-11 bin]# ./elasticdump --help

elasticdump: Import and export tools for elasticsearch

version: 6.27.5

Usage: elasticdump --input SOURCE --output DESTINATION [OPTIONS]

--input

Source location (required)

--input-index

Source index and type

(default: all, example: index/type)

--output

Destination location (required)

--output-index

Destination index and type

(default: all, example: index/type)

--overwrite

Overwrite output file if it exists

(default: false)

--limit

How many objects to move in batch per operation

limit is approximate for file streams

(default: 100)

--size

How many objects to retrieve

(default: -1 -> no limit)

--concurrency

How many concurrent requests are sent to a specified transport

(default: 1)

--concurrencyInterval

The length of time in milliseconds before the interval count resets. Must be finite.

(default: 5000)

--intervalCap

The max number of transport requests in the given interval of time.

(default: 5)

--carryoverConcurrencyCount

Whether the task must finish in the given concurrencyInterval

(intervalCap will reset to the default whether the request is completed or not)

or will be carried over into the next interval count,

which will effectively reduce the number of new requests created in the next interval

i.e. intervalCap -= <number of carried-over requests>

(default: true)

--throttleInterval

The length of time in milliseconds to delay between getting data from an inputTransport and sending it to an outputTransport

(default: 1)

--debug

Display the elasticsearch commands being used

(default: false)

--quiet

Suppress all messages except for errors

(default: false)

--type

What are we exporting?

(default: data, options: [data, settings, analyzer, mapping, alias])

--delete

Delete documents one-by-one from the input as they are

moved. Will not delete the source index

(default: false)

--headers

Add custom headers to Elasticsearch requests (helpful when

your Elasticsearch instance sits behind a proxy)

(default: '{"User-Agent": "elasticdump"}')

--params

Add custom parameters to the Elasticsearch request URI. Helpful when you, for example,

want to use elasticsearch preference

(default: null)

--searchBody

Perform a partial extract based on search results

(when ES is the input, default values are

if ES > 5

`'{"query": { "match_all": {} }, "stored_fields": ["*"], "_source": true }'`

else

`'{"query": { "match_all": {} }, "fields": ["*"], "_source": true }'`

--searchWithTemplate

Enable to use Search Template when using --searchBody

If using Search Template then searchBody has to consist of "id" field and "params" objects

If "size" field is defined within Search Template, it will be overridden by --size parameter

See https://www.elastic.co/guide/en/elasticsearch/reference/current/search-template.html for

further information

(default: false)

--sourceOnly

Output only the json contained within the document _source

Normal: {"_index":"","_type":"","_id":"", "_source":{SOURCE}}

sourceOnly: {SOURCE}

(default: false)

--ignore-errors

Will continue the read/write loop on write error

(default: false)

--scrollId

The last scroll Id returned from elasticsearch.

This allows dumps to be resumed using the last scroll Id,

provided `scrollTime` has not expired.

--scrollTime

Time the nodes will hold the requested search in order.

(default: 10m)

--maxSockets

How many simultaneous HTTP requests can we make?

(default:

5 [node <= v0.10.x] /

Infinity [node >= v0.11.x] )

--timeout

Integer containing the number of milliseconds to wait for

a request to respond before aborting the request. Passed

directly to the request library. Mostly used when you don't

care too much if you lose some data when importing

but would rather have speed.

--offset

Integer containing the number of rows you wish to skip

ahead from the input transport. When importing a large

index, things can go wrong, be it connectivity, crashes,

someone forgetting to `screen`, etc. This allows you

to start the dump again from the last known line written

(as logged by the `offset` in the output). Please be

advised that since no sorting is specified when the

dump is initially created, there's no real way to

guarantee that the skipped rows have already been

written/parsed. This is more of an option for when

you want to get as much data as possible in the index

without concern for losing some rows in the process,

similar to the `timeout` option.

(default: 0)

--noRefresh

Disable input index refresh.

Positive:

1. Greatly increased indexing speed

2. Much lower hardware requirements

Negative:

1. Recently added data may not be indexed

Recommended for use with big-data indexing,

where speed and system health are a higher priority

than recently added data.

--inputTransport

Provide a custom js file to use as the input transport

--outputTransport

Provide a custom js file to use as the output transport

--toLog

When using a custom outputTransport, should log lines

be appended to the output stream?

(default: true, except for `$`)

--awsChain

Use [standard](https://aws.amazon.com/blogs/security/a-new-and-standardized-way-to-manage-credentials-in-the-aws-sdks/) location and ordering for resolving credentials including environment variables, config files, EC2 and ECS metadata locations

_Recommended option for use with AWS_

--awsAccessKeyId

--awsSecretAccessKey

When using Amazon Elasticsearch Service protected by

AWS Identity and Access Management (IAM), provide

your Access Key ID and Secret Access Key

--awsIniFileProfile

Alternative to --awsAccessKeyId and --awsSecretAccessKey,

loads credentials from a specified profile in aws ini file.

For greater flexibility, consider using --awsChain

and setting AWS_PROFILE and AWS_CONFIG_FILE

environment variables to override defaults if needed

--awsService

Sets the AWS service that the signature will be generated for

(default: calculated from hostname or host)

--awsRegion

Sets the AWS region that the signature will be generated for

(default: calculated from hostname or host)

--transform

A javascript file, which will be called to modify documents

before writing them to the destination. The global variable 'doc'

is available.

Example script for computing a new field 'f2' as doubled

value of field 'f1':

doc._source["f2"] = doc._source.f1 * 2;

--httpAuthFile

When using http auth provide credentials in ini file in form

`user=<username>

password=<password>`

--support-big-int

Support big integer numbers

--retryAttempts

Integer indicating the number of times a request should be automatically re-attempted before failing

when a connection fails with one of the following errors `ECONNRESET`, `ENOTFOUND`, `ESOCKETTIMEDOUT`,

`ETIMEDOUT`, `ECONNREFUSED`, `EHOSTUNREACH`, `EPIPE`, `EAI_AGAIN`

(default: 0)

--retryDelay

Integer indicating the back-off/break period between retry attempts (milliseconds)

(default : 5000)

--parseExtraFields

Comma-separated list of meta-fields to be parsed

--fileSize

supports file splitting. This value must be a string supported by the **bytes** module.

The following abbreviations must be used to signify size in terms of units

b for bytes

kb for kilobytes

mb for megabytes

gb for gigabytes

tb for terabytes

e.g. 10mb / 1gb / 1tb

Partitioning helps to alleviate overflow/out of memory exceptions by efficiently segmenting files

into smaller chunks that can then be merged if need be.

--fsCompress

gzip data before outputting to file

--s3AccessKeyId

AWS access key ID

--s3SecretAccessKey

AWS secret access key

--s3Region

AWS region

--s3Endpoint

AWS endpoint can be used for AWS compatible backends such as

OpenStack Swift and OpenStack Ceph

--s3SSLEnabled

Use SSL to connect to AWS [default true]

--s3ForcePathStyle

Force path style URLs for S3 objects [default false]

--s3Compress

gzip data before sending to s3

--retryDelayBase

The base number of milliseconds to use in the exponential backoff for operation retries. (s3)

--customBackoff

Activate custom customBackoff function. (s3)

--tlsAuth

Enable TLS X509 client authentication

--cert, --input-cert, --output-cert

Client certificate file. Use --cert if source and destination are identical.

Otherwise, use the one prefixed with --input or --output as needed.

--key, --input-key, --output-key

Private key file. Use --key if source and destination are identical.

Otherwise, use the one prefixed with --input or --output as needed.

--pass, --input-pass, --output-pass

Pass phrase for the private key. Use --pass if source and destination are identical.

Otherwise, use the one prefixed with --input or --output as needed.

--ca, --input-ca, --output-ca

CA certificate. Use --ca if source and destination are identical.

Otherwise, use the one prefixed with --input or --output as needed.

--inputSocksProxy, --outputSocksProxy

Socks5 host address

--inputSocksPort, --outputSocksPort

Socks5 host port

--handleVersion

Tells the elasticsearch transport to handle the `_version` field if present in the dataset

(default : false)

--versionType

Elasticsearch versioning types. Should be `internal`, `external`, `external_gte`, `force`.

NB: Type validation is handled by the bulk endpoint, not by elasticsearch-dump

--help

This page

Examples:

# Copy an index from production to staging with mappings:

elasticdump \

--input=http://production.es.com:9200/my_index \

--output=http://staging.es.com:9200/my_index \

--type=mapping

elasticdump \

--input=http://production.es.com:9200/my_index \

--output=http://staging.es.com:9200/my_index \

--type=data

# Back up index data to a file:

elasticdump \

--input=http://production.es.com:9200/my_index \

--output=/data/my_index_mapping.json \

--type=mapping

elasticdump \

--input=http://production.es.com:9200/my_index \

--output=/data/my_index.json \

--type=data

# Back up an index to a gzip file using stdout:

elasticdump \

--input=http://production.es.com:9200/my_index \

--output=$ \

| gzip > /data/my_index.json.gz

# Backup the results of a query to a file

elasticdump \

--input=http://production.es.com:9200/my_index \

--output=query.json \

--searchBody '{"query":{"term":{"username": "admin"}}}'

------------------------------------------------------------------------------

Learn more @ https://github.com/taskrabbit/elasticsearch-dump

Change into the elasticdump script directory and run the elasticdump script in local mode:

cd /root/node-v10.13.0-linux-x64/lib/node_modules/elasticdump/bin

Method 1: Export index data to files

Export template:

# Export the index mapping data:

./elasticdump \

--input=http://172.20.2.11:9200/lingxi-product-2020-04 \

--output=/data/my_index_mapping.json \

--type=mapping

# Export the index data:

./elasticdump \

--input=http://172.20.2.11:9200/lingxi-product-2020-04 \

--output=/data/my_index.json \

--type=data
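Before restoring elsewhere, it helps to sanity-check the exported files; elasticdump writes one JSON document per line, so the line count roughly tracks the document count. A minimal check, assuming the paths above:

# Peek at the first exported document and count the exported lines:

head -n 1 /data/my_index.json

wc -l /data/my_index.json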

Method 2: Import data files into an index

Import template:

# Import the mapping data into the index (--input points to the exported mapping file):

./elasticdump \

--output=http://172.20.2.11:9200/lingxi-product-2020-04 \

--input=/data/my_index_mapping.json \

--type=mapping

# Import the ES document data into the index:

./elasticdump \

--output=http://172.20.2.11:9200/lingxi-product-2020-04 \

--input=/data/my_index.json \

--type=data
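Import the mapping before the data, so the index is created with the intended schema before documents arrive. Once the import finishes, a document count against the target index is a simple confirmation that the data landed (assuming the host and index above):

curl 'http://172.20.2.11:9200/lingxi-product-2020-04/_count?pretty'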

Command path:

/usr/local/node-v10.13.0-linux-x64/lib/node_modules/elasticdump/bin
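To avoid changing into this directory each time, the bin directory can be appended to PATH (a convenience sketch; adjust the prefix to wherever Node.js was extracted):

export PATH=$PATH:/usr/local/node-v10.13.0-linux-x64/lib/node_modules/elasticdump/bin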
