ELK Study Notes

Installation

Single-node ES installation
docker run -d --name elasticsearch --net bridge --restart=always -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" elasticsearch:6.8.10
Setting the ES password

In 6.8+ security is part of the basic license: set xpack.security.enabled: true in elasticsearch.yml, then run bin/elasticsearch-setup-passwords interactive inside the container.

Reference: https://blog.csdn.net/qq_43188744/article/details/108096394

Kibana installation
docker run -d --name kibana --net bridge --restart=always -p 5601:5601 \
  -v /usr/local/aicp/kibana_entry/config/kibana.yml:/usr/share/kibana/config/kibana.yml kibana:6.8.10

In the kibana.yml configuration file, elasticsearch.hosts must be set to the HTTP address of ES:

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0.0.0.0"
elasticsearch.hosts: [ "http://peer151:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
Installing Filebeat (reference: https://blog.csdn.net/UbuntuTouch/article/details/104790367)

The filebeat.docker.yml configuration file:

filebeat.config:
  modules:
    path: ${path.config}/modules.d/*.yml
    reload.enabled: false
 
filebeat.autodiscover:
  providers:
    - type: docker
      hints.enabled: true
 
processors:
- add_cloud_metadata: ~
 
setup.kibana.host: "10.106.11.151:5601"
 
output.elasticsearch:
  hosts: '${ELASTICSEARCH_HOSTS:10.106.11.151:9200}'

Filebeat installation (it can collect the logs of every Docker container on the host):

docker pull docker.elastic.co/beats/filebeat:6.8.10
docker run -d --name=filebeat --user=root --net bridge --restart=always \
  -v /usr/local/aicp/filebeat_entry/config/filebeat.docker.yml:/usr/share/filebeat/filebeat.yml:ro \
  -v /var/lib/docker/containers:/var/lib/docker/containers:ro \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  docker.elastic.co/beats/filebeat:6.8.10 filebeat -e -strict.perms=false \
  -E output.elasticsearch.hosts=["10.106.11.151:9200"]
Writing logs from a Spring Boot project directly to ES via Logback

Reference: https://www.cnblogs.com/linkanyway/p/elastic-logger-appender.html
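The referenced article plugs in a ready-made appender; purely as a sketch of the underlying idea (not the article's actual code), a minimal hand-rolled Logback appender could index each event through the RestHighLevelClient covered later in these notes. The class name, the app-log index name, and the hard-coded host are assumptions:

import ch.qos.logback.classic.spi.ILoggingEvent;
import ch.qos.logback.core.AppenderBase;
import org.apache.http.HttpHost;
import org.elasticsearch.action.index.IndexRequest;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestClient;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.xcontent.XContentType;

import java.io.IOException;

public class EsLogAppender extends AppenderBase<ILoggingEvent> {
    // Hypothetical: in a real setup host/port would come from logback.xml properties
    private RestHighLevelClient client;

    @Override
    public void start() {
        client = new RestHighLevelClient(RestClient.builder(new HttpHost("10.106.11.151", 9200, "http")));
        super.start();
    }

    @Override
    protected void append(ILoggingEvent event) {
        try {
            // Index one JSON document per log event; "app-log" is an assumed index name
            String json = String.format("{\"ts\":%d,\"level\":\"%s\",\"logger\":\"%s\",\"message\":\"%s\"}",
                    event.getTimeStamp(), event.getLevel(), event.getLoggerName(),
                    event.getFormattedMessage().replace("\"", "\\\""));
            client.index(new IndexRequest("app-log").source(json, XContentType.JSON), RequestOptions.DEFAULT);
        } catch (IOException e) {
            addError("Failed to ship log event to ES", e);
        }
    }
}

The appender is then registered in logback.xml like any other. Note that indexing synchronously inside append() blocks the logging thread; a production-grade appender (including the article's) buffers events and flushes them asynchronously in batches.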

Basic Kibana operations

Viewing indices

In the Kibana management page, choose Management->Elasticsearch->Index management from the left-hand menu.

Viewing and creating index patterns

Management->Kibana->Index Patterns->Create index pattern (the wildcard * is supported).

Searching with Kibana

From the left-hand menu choose Discover->select an index pattern from the drop-down->Add a filter->Refresh->Save.

Building visualizations with Kibana

From the left-hand menu choose Visualize->Create a visualization->pick a chart type->select an index pattern from the drop-down->Add a filter->Add metrics->Save.

Building dashboards with Kibana

From the left-hand menu choose Dashboard->Create new Dashboard->add saved visualizations or saved searches->Save.

Building Timelion visualizations with Kibana

Reference: https://blog.csdn.net/UbuntuTouch/article/details/109254566

A time series visualization analyzes data in chronological order. Timelion draws two-dimensional charts with time on the x-axis, and it lets you combine independent data sources in a single visualization. Using a relatively simple syntax, you can perform advanced calculations such as dividing and subtracting metrics, computing derivatives and moving averages, and of course visualize the results of those calculations.
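As a taste of the syntax, a hedged sketch (the logstash-* index, @timestamp timefield, and bytes field are hypothetical placeholders): the first expression plots the average of a numeric field, and the comma-separated second series overlays its rate of change via .derivative().

.es(index=logstash-*, timefield=@timestamp, metric=avg:bytes),
  .es(index=logstash-*, timefield=@timestamp, metric=avg:bytes).derivative()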

Drawing free-form displays with Kibana Canvas

Reference: https://www.elastic.co/guide/en/kibana/7.2/canvas.html

In most cases a Dashboard already looks very good, but for large-screen displays we often want a fully customized layout. How do we achieve that? The answer is Canvas. As the name suggests, Canvas lets us drag and arrange widgets anywhere on a canvas and customize fonts, backgrounds, and so on.

Operating ES with curl

  1. Create an index
    curl -X PUT http://wl158:8180/food?pretty
  2. List index information
    curl -X GET http://127.0.0.1:8180/_cat/indices?v
  3. Insert a document
    curl -X PUT -d '{"computer":"secisland","msg":"secisland is a company"}' -H 'Content-Type: application/json' \
      http://wl158:8180/food/secilog/1
  4. Update a document
    curl -X POST -d '{"doc":{"computer":"secisland","msg":"secisland is a company, provide log products"}}' -H 'Content-Type: application/json' \
      http://127.0.0.1:8180/food/secilog/1/_update
  5. Get a document
    curl -X GET http://127.0.0.1:8180/food/secilog/1
  6. Delete a document
    curl -X DELETE http://127.0.0.1:8180/food/secilog/1
  7. Delete the index (note: DELETE targets the index itself, not index/type)
    curl -X DELETE http://127.0.0.1:8180/food
  8. Show cluster information
    curl -X GET http://127.0.0.1:8180

Operating ES with the Java client

Add the Maven dependency:

        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>transport</artifactId>
            <version>6.8.10</version>
        </dependency>
import org.elasticsearch.action.delete.DeleteResponse;
import org.elasticsearch.action.get.GetResponse;
import org.elasticsearch.action.get.MultiGetItemResponse;
import org.elasticsearch.action.get.MultiGetResponse;
import org.elasticsearch.action.index.IndexResponse;
import org.elasticsearch.action.update.UpdateRequest;
import org.elasticsearch.client.AdminClient;
import org.elasticsearch.client.IndicesAdminClient;
import org.elasticsearch.client.transport.TransportClient;
import org.elasticsearch.cluster.node.DiscoveryNode;
import org.elasticsearch.common.document.DocumentField;
import org.elasticsearch.common.settings.Settings;
import org.elasticsearch.common.transport.TransportAddress;
import org.elasticsearch.rest.RestStatus;
import org.elasticsearch.transport.client.PreBuiltTransportClient;

import java.io.IOException;
import java.net.InetAddress;
import java.net.UnknownHostException;
import java.util.*;
import java.util.concurrent.ExecutionException;

import static org.elasticsearch.common.xcontent.XContentFactory.jsonBuilder;

/**
 * Enabling sniffing via client.transport.sniff means only one node of the cluster
 * (not necessarily the master) needs to be specified; the client then discovers the
 * other nodes, so as long as the program keeps running it can still reach them even
 * if the configured node goes down.
 */
public class Main {
    public static void main(String[] args) throws UnknownHostException {
        Settings settings = Settings.builder()
                .put("cluster.name", "docker-cluster")
                //.put("client.transport.sniff", true)
                .build();
        @SuppressWarnings("unchecked")
        TransportClient client = new PreBuiltTransportClient(settings);
        client.addTransportAddress(new TransportAddress(InetAddress.getByName("peer151"), 9300));

        List<DiscoveryNode> nodes = client.connectedNodes();
        for(DiscoveryNode node : nodes){
            System.out.println(node.toString());
        }

//         createIndexWithSettings(client, "clp");
//         addDoc(client, "clp", 100);
        getDoc(client, "clp", String.valueOf(new Random().nextInt(100)));
//        deleteDoc(client,"kaka","2");
//        updateDoc(client,"kaka","3");
//        multiGetDoc(client,"kaka", new String[]{"1"});
        client.close();
    }

    public static void createIndexWithSettings(TransportClient client, String index) {
        // Obtain the admin client
        AdminClient admin = client.admin();
        // Use the indices admin API to manage indices
        IndicesAdminClient indices = admin.indices();
        // Prepare the create-index request
        indices.prepareCreate(index)
                // Configure index settings
                .setSettings(
                        Settings.builder()
                                // Number of primary shards
                                .put("index.number_of_shards", 5)
                                // Number of replicas (excluding the primary: with 1 replica the data is stored twice)
                                .put("index.number_of_replicas", 1))
                // Execute
                .get();
    }

    public static void addDoc(TransportClient client, String index, int times) {
        IndexResponse response = null;
        try {
            for (int i = 0; i < times; i++) {
                response = client.prepareIndex(index, "_doc", String.valueOf(i))
                        .setSource(jsonBuilder()
                                        .startObject()
                                        .field("user", "kaka" + i)
                                        .field("postDate", new Date())
                                        .field("message", "kaka is good football player, trying out Elasticsearch")
                                        .endObject()
                        )
                        .get();
            }
        } catch (IOException e) {
            e.printStackTrace();
        }
        System.out.println(response);
    }

    public static void getDoc(TransportClient client, String index, String id) {
        GetResponse response = client.prepareGet(index, "_doc", id).get();
        Map<String, DocumentField> fieldMap = response.getFields();
        Map<String, Object> sourceMap = response.getSource();
        System.out.println(response);
    }

    public static void deleteDoc(TransportClient client, String index, String id) {
        DeleteResponse response = client.prepareDelete(index, "_doc", id).get();
        RestStatus status = response.status();
        System.out.println(response);
    }

    public static void updateDoc(TransportClient client, String index, String id) {
        UpdateRequest updateRequest = new UpdateRequest();
        updateRequest.index(index);
        updateRequest.type("_doc");
        updateRequest.id(id);
        try {
            updateRequest.doc(jsonBuilder()
                    .startObject()
                    .field("gender", "male")
                    .endObject());
        } catch (IOException e) {
            e.printStackTrace();
        }
        try {
            client.update(updateRequest).get();
        } catch (InterruptedException e) {
            e.printStackTrace();
        } catch (ExecutionException e) {
            e.printStackTrace();
        }
    }

    public static void multiGetDoc(TransportClient client, String index, String[] ids) {
        MultiGetResponse multiGetItemResponses = client.prepareMultiGet()
                .add(index, "_doc", ids)
                .get();

        for (MultiGetItemResponse itemResponse : multiGetItemResponses) {
            GetResponse response = itemResponse.getResponse();
            if (response.isExists()) {
                String json = response.getSourceAsString();
                System.out.println(json);
            }
        }
    }
}
Paginated conditional search in ES

We usually build the search criteria with the QueryBuilders API and then execute TransportClient's prepareSearch. If many documents match, you will notice that SearchResponse.getHits().getTotalHits() differs from the length of the SearchResponse.getHits() array: the API paginates by default, and the page size is set with setSize. You then walk through the remaining pages with TransportClient.prepareSearchScroll; this mode requires calling .setScroll.
Reference: https://blog.csdn.net/fanrenxiang/article/details/86509688

 public static void getConditionDoc(TransportClient client, String index) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchQuery("message", "kaka is good");
        SearchRequestBuilder requestBuilder = client.prepareSearch(index)
                .setTypes("_doc")
                .setScroll(new Scroll(TimeValue.timeValueSeconds(5000)))
                .setQuery(queryBuilder)
                .setSize(10) // page size (10 is the default)
                .addSort("postDate", SortOrder.ASC);
        SearchResponse response = requestBuilder.get();
        System.out.println("Status:" + response.status());


        long k = 0, total = response.getHits().getTotalHits();
        do {
            for (SearchHit hit : response.getHits().getHits()) {
                String src = hit.getSourceAsString();
                System.out.println(src);
            }

            k += response.getHits().getHits().length;
            response = client.prepareSearchScroll(response.getScrollId())
                    .setScroll(TimeValue.timeValueSeconds(5000))
                    .get();
        } while (response.getHits().getHits().length != 0);
        System.out.println("Hit count:" + total + " Response Count:" + k);
    }

The query above is a scroll query, the most efficient approach. Below is from/size pagination: to return page n, ES internally collects everything from the start up to page n and discards the first n-1 pages before responding, which is less efficient (and from + size is capped by index.max_result_window, 10,000 by default). Here we only need to set .setFrom and .setSize; .setScroll must not be specified.

 public static void getConditionDoc(TransportClient client, String index) {
        MatchQueryBuilder queryBuilder = QueryBuilders.matchQuery("message", "kaka is good");
        SearchRequestBuilder requestBuilder = client.prepareSearch(index)
                .setTypes("_doc")
                //.setScroll(new Scroll(TimeValue.timeValueSeconds(5000)))
                .setQuery(queryBuilder)
                .setFrom(10) // offset: skip the first 10 hits
                .setSize(10) // page size (10 is the default)
                .addSort("postDate", SortOrder.ASC);
        SearchResponse response = requestBuilder.get();
        System.out.println("Status:" + response.status());

        for (SearchHit hit : response.getHits().getHits()) {
            String src = hit.getSourceAsString();
            System.out.println(src);
        }
    }
Bulk operations in ES

Bulk requests improve throughput; the following demonstrates bulk deletion in ES.

    public static void bulkDeleteDoc(TransportClient client, String index, String[] ids) {
        BulkRequestBuilder requestBuilder = client.prepareBulk();
        for (String id : ids) {
            requestBuilder.add(client.prepareDelete(index, "_doc", id));
        }
        try {
            BulkResponse responses = requestBuilder.execute().get();
            System.out.println(responses.status());
            for (BulkItemResponse response : responses.getItems()) {
                System.out.println(response);
            }
        } catch (InterruptedException | ExecutionException e) {
            e.printStackTrace();
        }
    }
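Bulk indexing follows the same pattern; a minimal sketch that batches the same documents as addDoc above (the method name bulkAddDoc is mine, and it assumes the same static jsonBuilder import as the Main class; the builder calls are the standard TransportClient bulk API):

    public static void bulkAddDoc(TransportClient client, String index, int times) throws IOException {
        BulkRequestBuilder bulk = client.prepareBulk();
        for (int i = 0; i < times; i++) {
            // Queue the index operations; nothing is sent until get()
            bulk.add(client.prepareIndex(index, "_doc", String.valueOf(i))
                    .setSource(jsonBuilder()
                            .startObject()
                            .field("user", "kaka" + i)
                            .field("postDate", new Date())
                            .field("message", "kaka is good football player, trying out Elasticsearch")
                            .endObject()));
        }
        // One round trip for the whole batch; inspect per-item results if needed
        BulkResponse responses = bulk.get();
        System.out.println("bulk failures: " + responses.hasFailures());
    }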
Aggregation queries in ES

Query the average duration of Zipkin requests whose duration lies between 0 and 10 ms (Zipkin stores durations in microseconds, hence the 0-10000 range below):

   public static void aggregationDoc(TransportClient client, String index) {
        BoolQueryBuilder boolQueryBuilder = QueryBuilders.boolQuery();
        boolQueryBuilder.must(QueryBuilders.rangeQuery("duration").gte(0).lte(10000));
        AvgAggregationBuilder aggregation = AggregationBuilders.avg("duration").field("duration");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder();
        searchSourceBuilder.query(boolQueryBuilder).aggregation(aggregation).explain(true);
        SearchRequestBuilder requestBuilder = client.prepareSearch(index).setTypes("span").setSource(searchSourceBuilder);
        SearchResponse response = requestBuilder.get();
        InternalAvg avg = response.getAggregations().get("duration");
        System.out.println(index + " avg value:" + avg.getValue());
    }
Grouped (bucketed) aggregations in ES

Group Zipkin spans by traceId and compute the maximum and minimum request duration per trace:

// Grouped aggregation
    public static void aggregationGroupDoc(TransportClient client, String index){
        MinAggregationBuilder minAggregationBuilder = AggregationBuilders.min("minDuration").field("duration");
        MaxAggregationBuilder maxAggregationBuilder = AggregationBuilders.max("maxDuration").field("duration");
        SearchSourceBuilder searchSourceBuilder = new SearchSourceBuilder()
                .aggregation(AggregationBuilders.terms("traceId").field("traceId")
                        .subAggregation(minAggregationBuilder)
                        .subAggregation(maxAggregationBuilder));
        SearchRequestBuilder requestBuilder = client.prepareSearch(index).setTypes("span").setSource(searchSourceBuilder);
        SearchResponse response = requestBuilder.get();
        Terms terms= response.getAggregations().get("traceId");
        for (Terms.Bucket bucket : terms.getBuckets()) {
            InternalMin min = bucket.getAggregations().get("minDuration");
            InternalMax max = bucket.getAggregations().get("maxDuration");
            System.out.println(bucket.getKeyAsString() + " min value:" + min.getValue() + " max value:" + max.getValue());
        }
    }
Date histogram aggregations in ES

Reference: https://blog.csdn.net/dm_vincent/article/details/42594043

/**
     * Get car milage summed per interval, limiting device ids to the 'deviceIds' collection.
     *
     * @param startTimestamp start of the time range (epoch millis)
     * @param endTimestamp   end of the time range (epoch millis)
     * @param interval       bucket size, e.g. DateHistogramInterval.MONTH
     * @param deviceIds      device ids to include; null or empty means all devices
     * @return one summed milage value per bucket
     */
    @Override
    public Double[] getCarMilagesByInterval(long startTimestamp, long endTimestamp, DateHistogramInterval interval, List<String> deviceIds) {
        List<Double> totals = new ArrayList<>();
        // Create Search Source
        SearchSourceBuilder sourceBuilder = new SearchSourceBuilder();
        BoolQueryBuilder boolQuery = QueryBuilders.boolQuery();
        // Filter timestamp
        RangeQueryBuilder rangeQuery = QueryBuilders.rangeQuery("timestamp").gte(startTimestamp).lte(endTimestamp);
        boolQuery.must(rangeQuery);
        if (Objects.nonNull(deviceIds) && !deviceIds.isEmpty()) {
            deviceIds.forEach(deviceId -> boolQuery.should(QueryBuilders.matchQuery("device_id", deviceId)));
            // With a must clause present, should clauses become optional for matching;
            // require at least one of them so the device filter actually applies.
            boolQuery.minimumShouldMatch(1);
        }
        // Group by month of timestamp
        sourceBuilder.query(boolQuery);
        DateHistogramAggregationBuilder dateHistogramBuilder = AggregationBuilders.dateHistogram("histogram")
                .field("timestamp")
                .calendarInterval(interval)
                .minDocCount(0)
                .timeZone(ZoneId.systemDefault()) // If need use zone, the type of 'timestamp' must be date not long(timestamp)
                .extendedBounds(new ExtendedBounds(startTimestamp, endTimestamp));
        // Sum by group of month
        SumAggregationBuilder sumBuilder = AggregationBuilders.sum("total").field("milage");
        sourceBuilder.aggregation(dateHistogramBuilder.subAggregation(sumBuilder)).size(0).explain(true);
        // Create search request
        SearchRequest request = new SearchRequest();
        request.indices(ESIndexName.LOG_CAR_MILAGE);
        request.source(sourceBuilder);
        // Get response
        try {
            SearchResponse response = client.search(request, RequestOptions.DEFAULT);
            ParsedDateHistogram terms = response.getAggregations().get("histogram");
            for (Histogram.Bucket bucket : terms.getBuckets()) {
                ParsedSum parsedSum = bucket.getAggregations().get("total");
                // log.info(bucket.getKeyAsString() + " sum value:" + parsedSum.getValue());
                totals.add(parsedSum.getValue());
            }
        } catch (IOException e) {
            log.error("Es search from index:{}, error:{}", ESIndexName.LOG_CAR_MILAGE, e.getMessage());
        }
        return totals.toArray(new Double[0]);
    }
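A possible call site (milageService is a hypothetical Spring-managed bean exposing the method above): monthly milage totals for one device over the last year.

    long end = System.currentTimeMillis();
    long start = end - 365L * 24 * 60 * 60 * 1000; // one year back, in epoch millis
    Double[] monthlyTotals = milageService.getCarMilagesByInterval(
            start, end, DateHistogramInterval.MONTH, Collections.singletonList("device-001"));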

Integrating the high-level API: RestHighLevelClient

  • Add the Maven dependencies. Remember to also include the low-level API dependencies, otherwise you may hit NoClassDefFoundError-style exceptions (I ran into exactly that while setting this up). See: https://www.cnblogs.com/xbq8080/p/12814002.html

        <dependency>
            <groupId>org.elasticsearch</groupId>
            <artifactId>elasticsearch</artifactId>
            <version>7.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-client</artifactId>
            <version>7.8.0</version>
        </dependency>
        <dependency>
            <groupId>org.elasticsearch.client</groupId>
            <artifactId>elasticsearch-rest-high-level-client</artifactId>
            <version>7.8.0</version>
        </dependency>
  • Register a RestHighLevelClient bean
    @Bean
    @ConditionalOnMissingBean(name = {"restHighLevelClient"})
    public RestHighLevelClient restHighLevelClient() {
        RestClientBuilder builder = RestClient.builder(new HttpHost(this.host, this.port, "http"));
        // Set request config
        builder.setRequestConfigCallback(requestConfigBuilder -> {
            requestConfigBuilder.setConnectTimeout(3000);
            requestConfigBuilder.setConnectionRequestTimeout(3000);
            requestConfigBuilder.setSocketTimeout(5000);
            return requestConfigBuilder;
        });
        // Set httpClient config
        builder.setHttpClientConfigCallback(httpClientBuilder -> {
            httpClientBuilder.setMaxConnPerRoute(10);
            httpClientBuilder.setMaxConnTotal(30);
            return httpClientBuilder;
        });
        return new RestHighLevelClient(builder);
    }
  • Index a document
    // Archive the api log to ES
    BulkRequest bulkRequest = new BulkRequest();
    IndexRequest indexRequest = new IndexRequest(EsConfig.API_LOG_INDEX);
    indexRequest.source(new ObjectMapper().writeValueAsString(logApi), XContentType.JSON);
    bulkRequest.add(indexRequest);
    BulkResponse response = this.restHighLevelClient.bulk(bulkRequest, RequestOptions.DEFAULT);
    log.debug("Save api log to es success:{}", response.status());
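
Reading documents back goes through the same client; a minimal hedged sketch that runs a match-all query against the same index and prints the first 10 hits:

    SearchRequest searchRequest = new SearchRequest(EsConfig.API_LOG_INDEX);
    searchRequest.source(new SearchSourceBuilder()
            .query(QueryBuilders.matchAllQuery())
            .size(10)); // first page only; use from/size or scroll for more
    SearchResponse searchResponse = this.restHighLevelClient.search(searchRequest, RequestOptions.DEFAULT);
    for (SearchHit hit : searchResponse.getHits().getHits()) {
        log.debug("api log doc:{}", hit.getSourceAsString());
    }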
