2019-04-27 Elasticsearch 7.0 Low Level REST Client documentation notes

Elasticsearch Low Level REST Client

Initialization

A RestClient instance can be built through the corresponding RestClientBuilder class, created via RestClient#builder(HttpHost...) static method. The only required argument is one or more hosts that the client will communicate with, provided as instances of HttpHost as follows:

RestClient restClient = RestClient.builder(
    new HttpHost("localhost", 9200, "http"),
    new HttpHost("localhost", 9201, "http")).build();

The RestClient class is thread-safe and ideally has the same lifecycle as the application that uses it. It is important that it gets closed when no longer needed so that all the resources used by it get properly released, as well as the underlying http client instance and its threads:

restClient.close();
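
Since RestClient implements Closeable, a try-with-resources block is an alternative way to make sure the client gets closed; a minimal sketch, reusing the local host and port from the example above:

try (RestClient client = RestClient.builder(
        new HttpHost("localhost", 9200, "http")).build()) {
    // Use the client here; it is closed automatically when the block exits,
    // releasing the underlying http client and its threads.
}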

RestClientBuilder also allows to optionally set the following configuration parameters while building the RestClient instance:

Set the default headers that need to be sent with each request, to prevent having to specify them with each single request:

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "http"));
Header[] defaultHeaders = new Header[]{new BasicHeader("header", "value")};
builder.setDefaultHeaders(defaultHeaders);

Set a listener that gets notified every time a node fails, in case actions need to be taken. Used internally when sniffing on failure is enabled:

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "http"));
builder.setFailureListener(new RestClient.FailureListener() {
    @Override
    public void onFailure(Node node) {
        // Take whatever action is needed when the given node fails
    }
});

Set the node selector to be used to filter the nodes the client will send requests to among the ones that are set to the client itself. This is useful for instance to prevent sending requests to dedicated master nodes when sniffing is enabled. By default the client sends requests to every configured node:

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "http"));
builder.setNodeSelector(NodeSelector.SKIP_DEDICATED_MASTERS);

Set a callback that allows to modify the default request configuration (e.g. request timeouts, authentication, or anything that the org.apache.http.client.config.RequestConfig.Builder allows to set)

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "http"));
builder.setRequestConfigCallback(
    new RestClientBuilder.RequestConfigCallback() {
        @Override
        public RequestConfig.Builder customizeRequestConfig(
            RequestConfig.Builder requestConfigBuilder) {
            return requestConfigBuilder.setSocketTimeout(10000); 
        }
    });

Set a callback that allows to modify the http client configuration (e.g. encrypted communication over ssl, or anything that the org.apache.http.impl.nio.client.HttpAsyncClientBuilder allows to set)

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "http"));
builder.setHttpClientConfigCallback(new HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(
                HttpAsyncClientBuilder httpClientBuilder) {
            return httpClientBuilder.setProxy(
                new HttpHost("proxy", 9000, "http"));  
        }
    });

Performing requests

Once the RestClient has been created, requests can be sent by calling either performRequest or performRequestAsync. performRequest is synchronous: it blocks the calling thread and returns the Response when the request succeeds, or throws an exception if it fails. performRequestAsync is asynchronous and accepts a ResponseListener argument, which is called with the Response when the request succeeds, or with an Exception when it fails.

// This is synchronous:
Request request = new Request("GET", /);
Response response = restClient.performRequest(request);

// Explanation of new Request("GET", "/"):
// first argument:  the HTTP method (GET, POST, HEAD, etc.)
// second argument: the endpoint (path) on the server that the request targets
// And this is asynchronous:
Request request = new Request("GET", "/");
restClient.performRequestAsync(request, new ResponseListener() {
    @Override
    public void onSuccess(Response response) {
        // Handle the response
    }

    @Override
    public void onFailure(Exception exception) {
        // Handle the failure
    }
});
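
Both paths hand back a Response object. A minimal sketch of inspecting it, assuming the org.apache.http.util.EntityUtils helper shipped with the Apache HTTP client that the REST client is built on:

Response response = restClient.performRequest(new Request("GET", "/"));
// HTTP status line and status code returned by Elasticsearch
int statusCode = response.getStatusLine().getStatusCode();
// All response headers
Header[] headers = response.getHeaders();
// Response body read into a String
String responseBody = EntityUtils.toString(response.getEntity());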


You can add request parameters to the request object:

request.addParameter("pretty", "true");

You can set the body of the request to any HttpEntity:

request.setEntity(new NStringEntity("{\"json\":\"text\"}", ContentType.APPLICATION_JSON));

Warning: The ContentType specified for the HttpEntity is important because it will be used to set the Content-Type header so that Elasticsearch can properly parse the content.


You can also set it to a String which will default to a ContentType of application/json

request.setJsonEntity("{\"json\":\"text\"}");

Request Options

The RequestOptions class holds parts of the request that should be shared between many requests in the same application. You can make a singleton instance and share it between all requests

private static final RequestOptions COMMON_OPTIONS;
static {
    RequestOptions.Builder builder = RequestOptions.DEFAULT.toBuilder();
    // Add any headers needed by all requests.
    builder.addHeader("Authorization", "Bearer " + TOKEN);
    // Customize the response consumer (buffer up to 30MB on the heap).
    builder.setHttpAsyncResponseConsumerFactory(
        new HttpAsyncResponseConsumerFactory
            .HeapBufferedResponseConsumerFactory(30 * 1024 * 1024));
    COMMON_OPTIONS = builder.build();
}

addHeader is for headers that are required for authorization or to work with a proxy in front of Elasticsearch. There is no need to set the Content-Type header because the client will automatically set that from the HttpEntity attached to the request.

You can set the NodeSelector which controls which nodes will receive requests. NodeSelector.SKIP_DEDICATED_MASTERS is a good choice.

You can also customize the response consumer used to buffer the asynchronous responses. The default consumer will buffer up to 100MB of response on the JVM heap. If the response is larger then the request will fail. You could, for example, lower the maximum size, which might be useful if you are running in a heap-constrained environment like the example above.

Once you've created the singleton you can use it when making requests:

request.setOptions(COMMON_OPTIONS);

You can also customize these options on a per request basis. For example, this adds an extra header:
RequestOptions.Builder options = COMMON_OPTIONS.toBuilder();
options.addHeader("cats", "knock things off of other things");
request.setOptions(options);

Multiple parallel asynchronous actions

The client is quite happy to execute many actions in parallel. The following example indexes many documents in parallel. In a real world scenario you'd probably want to use the _bulk API instead, but the example is illustrative.

final CountDownLatch latch = new CountDownLatch(documents.length);
for (int i = 0; i < documents.length; i++) {
    Request request = new Request("PUT", "/posts/doc/" + i);
    //let's assume that the documents are stored in an HttpEntity array
    request.setEntity(documents[i]);
    restClient.performRequestAsync(
            request,
            new ResponseListener() {
                @Override
                public void onSuccess(Response response) {
                    // Process the returned response
                    latch.countDown();
                }

                @Override
                public void onFailure(Exception exception) {
                    // Handle the returned exception, due to communication error or a response with status code that indicates an error
                    latch.countDown();
                }
            }
    );
}
latch.await();
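
The example above assumes the documents array already exists; a hedged sketch of one way it might be prepared, reusing NStringEntity and ContentType.APPLICATION_JSON from the earlier body example (the field name and document count are arbitrary):

HttpEntity[] documents = new HttpEntity[10];
for (int i = 0; i < documents.length; i++) {
    // Each entry is a small JSON document indexed by the loop above
    documents[i] = new NStringEntity(
        "{\"field\":\"value" + i + "\"}", ContentType.APPLICATION_JSON);
}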

Logging

The Java REST client uses the same logging library that the Apache Async Http Client uses: Apache Commons Logging, which comes with support for a number of popular logging implementations. The java packages to enable logging for are org.elasticsearch.client for the client itself and org.elasticsearch.client.sniffer for the sniffer.

The request tracer logging can also be enabled to log every request and corresponding response in curl format. That comes handy when debugging, for instance in case a request needs to be manually executed to check whether it still yields the same response as it did. Enable trace logging for the tracer package to have such log lines printed out. Do note that this type of logging is expensive and should not be enabled at all times in production environments, but rather temporarily used only when needed.
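
For example, if Log4j 2 happens to be the implementation behind Apache Commons Logging in your application, the tracer logger could be enabled with something along these lines in log4j2.properties (a sketch only; the appender name and pattern are illustrative, not prescribed by the docs):

appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %m%n

logger.tracer.name = tracer
logger.tracer.level = trace
logger.tracer.additivity = false
logger.tracer.appenderRef.console.ref = console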


Common configuration

As explained in Initialization, the RestClientBuilder supports providing both a RequestConfigCallback and an HttpClientConfigCallback which allow for any customization that the Apache Async Http Client exposes. Those callbacks make it possible to modify some specific behaviour of the client without overriding every other default configuration that the RestClient is initialized with. This section describes some common scenarios that require additional configuration for the low-level Java REST Client.


Timeout

Configuring requests timeouts can be done by providing an instance of RequestConfigCallback while building the RestClient through its builder. The interface has one method that receives an instance of org.apache.http.client.config.RequestConfig.Builder as an argument and has the same return type. The request config builder can be modified and then returned. In the following example we increase the connect timeout (defaults to 1 second) and the socket timeout (defaults to 30 seconds).

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200))
    .setRequestConfigCallback(
        new RestClientBuilder.RequestConfigCallback() {
            @Override
            public RequestConfig.Builder customizeRequestConfig(
                    RequestConfig.Builder requestConfigBuilder) {
                return requestConfigBuilder
                    .setConnectTimeout(5000)
                    .setSocketTimeout(60000);
            }
        });

Number of Threads

The Apache Http Async Client starts by default one dispatcher thread, and a number of worker threads used by the connection manager, as many as the number of locally detected processors (depending on what Runtime.getRuntime().availableProcessors() returns). The number of threads can be modified as follows:

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200))
    .setHttpClientConfigCallback(new HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(
            HttpAsyncClientBuilder httpClientBuilder) {
            return httpClientBuilder.setDefaultIOReactorConfig(
                IOReactorConfig.custom()
                .setIoThreadCount(1)
                .build());
        }
    });

Basic authentication

Configuring basic authentication can be done by providing an HttpClientConfigCallback while building the RestClient through its builder. The interface has one method that receives an instance of org.apache.http.impl.nio.client.HttpAsyncClientBuilder as an argument and has the same return type. The http client builder can be modified and then returned. In the following example we set a default credentials provider that requires basic authentication.

final CredentialsProvider credentialsProvider =
    new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
                                   new UsernamePasswordCredentials("user", "password"));

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200))
    .setHttpClientConfigCallback(new HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(
            HttpAsyncClientBuilder httpClientBuilder) {
            return httpClientBuilder
                .setDefaultCredentialsProvider(credentialsProvider);
        }
    });

Preemptive Authentication can be disabled, which means that every request will be sent without authorization headers to see if it is accepted and, upon receiving an HTTP 401 response, it will resend the exact same request with the basic authentication header. If you wish to do this, then you can do so by disabling it via the HttpAsyncClientBuilder:

final CredentialsProvider credentialsProvider =
    new BasicCredentialsProvider();
credentialsProvider.setCredentials(AuthScope.ANY,
    new UsernamePasswordCredentials("user", "password"));

RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200))
    .setHttpClientConfigCallback(new HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(
                HttpAsyncClientBuilder httpClientBuilder) {
            httpClientBuilder.disableAuthCaching(); 
            return httpClientBuilder
                .setDefaultCredentialsProvider(credentialsProvider);
        }
    });

Encrypted communication

Encrypted communication can also be configured through the HttpClientConfigCallback. The org.apache.http.impl.nio.client.HttpAsyncClientBuilder received as an argument exposes multiple methods to configure encrypted communication: setSSLContext, setSSLSessionStrategy and setConnectionManager, in order of precedence from the least important. The following is an example:

KeyStore truststore = KeyStore.getInstance("jks");
try (InputStream is = Files.newInputStream(keyStorePath)) {
    truststore.load(is, keyStorePass.toCharArray());
}
SSLContextBuilder sslBuilder = SSLContexts.custom()
    .loadTrustMaterial(truststore, null);
final SSLContext sslContext = sslBuilder.build();
RestClientBuilder builder = RestClient.builder(
    new HttpHost("localhost", 9200, "https"))
    .setHttpClientConfigCallback(new HttpClientConfigCallback() {
        @Override
        public HttpAsyncClientBuilder customizeHttpClient(
                HttpAsyncClientBuilder httpClientBuilder) {
            return httpClientBuilder.setSSLContext(sslContext);
        }
    });

If no explicit configuration is provided, the system default configuration will be used.


Other

For any other required configuration needed, the Apache HttpAsyncClient docs should be consulted: https://hc.apache.org/httpcomponents-asyncclient-4.1.x/ .


If your application runs under the security manager you might be subject to the JVM default policies of caching positive hostname resolutions indefinitely and negative hostname resolutions for ten seconds. If the resolved addresses of the hosts to which you are connecting the client vary with time, then you might want to modify the default JVM behavior. These can be modified by adding networkaddress.cache.ttl and networkaddress.cache.negative.ttl to your Java security policy.
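
A hedged sketch of the two properties involved; they live in the JVM's java.security file and can also be set programmatically through java.security.Security before any hostname lookups happen (the values below, in seconds, are arbitrary examples):

// Cache successful hostname lookups for 60 seconds instead of indefinitely
Security.setProperty("networkaddress.cache.ttl", "60");
// Cache failed hostname lookups for 10 seconds
Security.setProperty("networkaddress.cache.negative.ttl", "10");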


Node selector

The client sends each request to one of the configured nodes in round-robin fashion. Nodes can optionally be filtered through a node selector that needs to be provided when initializing the client. This is useful when sniffing is enabled, for instance to prevent dedicated master nodes from being hit by HTTP requests. For each request the client will run the eventually configured node selector to filter the node candidates, then select the next one in the list out of the remaining ones.

RestClientBuilder builder = RestClient.builder(
        new HttpHost("localhost", 9200, "http"));
// Set an allocation aware node selector that allows to pick a node in the local rack if any available, otherwise go to any other node in any rack. It acts as a preference rather than a strict requirement, given that it goes to another rack if none of the local nodes are available, rather than returning no nodes in such case which would make the client forcibly revive a local node whenever none of the nodes from the preferred rack is available.
builder.setNodeSelector(new NodeSelector() { 
    @Override
    public void select(Iterable<Node> nodes) {
        /*
         * Prefer any node that belongs to rack_one. If none is around
         * we will go to another rack till it's time to try and revive
         * some of the nodes that belong to rack_one.
         */
        boolean foundOne = false;
        for (Node node : nodes) {
            String rackId = node.getAttributes().get("rack_id").get(0);
            if ("rack_one".equals(rackId)) {
                foundOne = true;
                break;
            }
        }
        if (foundOne) {
            Iterator<Node> nodesIt = nodes.iterator();
            while (nodesIt.hasNext()) {
                Node node = nodesIt.next();
                String rackId = node.getAttributes().get("rack_id").get(0);
                if ("rack_one".equals(rackId) == false) {
                    nodesIt.remove();
                }
            }
        }
    }
});

Warning

Node selectors that do not consistently select the same set of nodes will make round-robin behaviour unpredictable and possibly unfair. The preference example above is fine as it reasons about availability of nodes which already affects the predictability of round-robin. Node selection should not depend on other external factors or round-robin will not work properly.


Sniffer

Minimal library that allows to automatically discover nodes from a running Elasticsearch cluster and set them to an existing RestClient instance. It retrieves by default the nodes that belong to the cluster using the Nodes Info api and uses jackson to parse the obtained json response.

Compatible with Elasticsearch 2.x and onwards.


JavaDoc

The javadoc for the REST client sniffer can be found at https://artifacts.elastic.co/javadoc/org/elasticsearch/client/elasticsearch-rest-client-sniffer/7.0.0/index.html.


Maven Repository

The REST client sniffer is subject to the same release cycle as Elasticsearch. Replace the version with the desired sniffer version, first released with 5.0.0-alpha4. There is no relation between the sniffer version and the Elasticsearch version that the client can communicate with. Sniffer supports fetching the nodes list from Elasticsearch 2.x and onwards.

If you are looking for a SNAPSHOT version, the Elastic Maven Snapshot repository is available at https://snapshots.elastic.co/maven/.


<dependency>
    <groupId>org.elasticsearch.client</groupId>
    <artifactId>elasticsearch-rest-client-sniffer</artifactId>
    <version>7.0.0</version>
</dependency>

Gradle configuration

dependencies {
    compile 'org.elasticsearch.client:elasticsearch-rest-client-sniffer:7.0.0'
}

Usage

Once a RestClient instance has been created as shown in Initialization, a Sniffer can be associated to it. The Sniffer will make use of the provided RestClient to periodically (every 5 minutes by default) fetch the list of current nodes from the cluster and update them by calling RestClient#setNodes.

RestClient restClient = RestClient.builder(
    new HttpHost("localhost", 9200, "http"))
    .build();
Sniffer sniffer = Sniffer.builder(restClient).build();

It is important to close the Sniffer so that its background thread gets properly shutdown and all of its resources are released. The Sniffer object should have the same lifecycle as the RestClient and get closed right before the client:

sniffer.close();
restClient.close();

The Sniffer updates the nodes by default every 5 minutes. This interval can be customized by providing it (in milliseconds) as follows:

RestClient restClient = RestClient.builder(
    new HttpHost("localhost", 9200, "http"))
    .build();
Sniffer sniffer = Sniffer.builder(restClient)
    .setSniffIntervalMillis(60000).build();

It is also possible to enable sniffing on failure, meaning that after each failure the nodes list gets updated straightaway rather than at the following ordinary sniffing round. In this case a SniffOnFailureListener needs to be created at first and provided at RestClient creation. Also, once the Sniffer is later created, it needs to be associated with that same SniffOnFailureListener instance, which will be notified at each failure and use the Sniffer to perform the additional sniffing round as described.

SniffOnFailureListener sniffOnFailureListener =
    new SniffOnFailureListener();
RestClient restClient = RestClient.builder(
    // Set the failure listener on the RestClient instance
    new HttpHost("localhost", 9200))
    .setFailureListener(sniffOnFailureListener) 
    .build();
// When sniffing on failure, not only do the nodes get updated after each failure, but an additional
// sniffing round is also scheduled sooner than usual (one minute after the failure by default),
// assuming that things will go back to normal and we want to detect that as soon as possible.
// That interval can be customized at Sniffer creation time through the setSniffAfterFailureDelayMillis
// method. Note that this last configuration parameter has no effect when sniffing on failure is not enabled.
Sniffer sniffer = Sniffer.builder(restClient)
    .setSniffAfterFailureDelayMillis(30000) 
    .build();
// Set the Sniffer instance on the failure listener
sniffOnFailureListener.setSniffer(sniffer); 

The Elasticsearch Nodes Info api doesn’t return the protocol to use when connecting to the nodes but only their host:port key-pair, hence http is used by default. In case https should be used instead, the ElasticsearchNodesSniffer instance has to be manually created and provided as follows:

RestClient restClient = RestClient.builder(
        new HttpHost("localhost", 9200, "http"))
        .build();
NodesSniffer nodesSniffer = new ElasticsearchNodesSniffer(
        restClient,
        ElasticsearchNodesSniffer.DEFAULT_SNIFF_REQUEST_TIMEOUT,
        ElasticsearchNodesSniffer.Scheme.HTTPS);
Sniffer sniffer = Sniffer.builder(restClient)
        .setNodesSniffer(nodesSniffer).build();

In the same way it is also possible to customize the sniffRequestTimeout, which defaults to one second. That is the timeout parameter provided as a querystring parameter when calling the Nodes Info api, so that when the timeout expires on the server side, a valid response is still returned although it may contain only a subset of the nodes that are part of the cluster, the ones that have responded until then.

RestClient restClient = RestClient.builder(
    new HttpHost("localhost", 9200, "http"))
    .build();
NodesSniffer nodesSniffer = new ElasticsearchNodesSniffer(
    restClient,
    TimeUnit.SECONDS.toMillis(5),
    ElasticsearchNodesSniffer.Scheme.HTTP);
Sniffer sniffer = Sniffer.builder(restClient)
    .setNodesSniffer(nodesSniffer).build();

Also, a custom NodesSniffer implementation can be provided for advanced use-cases that may require fetching the Nodes from external sources rather than from Elasticsearch:

RestClient restClient = RestClient.builder(
    new HttpHost("localhost", 9200, "http"))
    .build();
NodesSniffer nodesSniffer = new NodesSniffer() {
        @Override
        public List<Node> sniff() throws IOException {
            // Fetch the hosts from the external source
            return null; 
        }
    };
Sniffer sniffer = Sniffer.builder(restClient)
    .setNodesSniffer(nodesSniffer).build();

Source: Elasticsearch official documentation 7.0
