Deploying an ELK Logging Stack on WSL

Environment

OS: Windows 10

Virtualization: WSL 2

Distribution: Ubuntu 18.04

Concepts

ELK has become the mainstream logging stack on servers: a unified viewing interface and simple filtering let developers locate problems quickly.
ELK is an acronym for three open-source products: Elasticsearch, Logstash, and Kibana.

Elasticsearch (ES for short) is a real-time distributed search and analytics engine; it supports full-text search, structured search, and analytics.
Logstash is a data-collection engine: it gathers and parses data, then ships it to ES. Supported sources include local files, Elasticsearch, MySQL, Kafka, and more.
Kibana provides an analytics and web visualization layer on top of Elasticsearch, generating tables and charts across many dimensions.

Elasticsearch and Logstash are written in Java; Kibana is written in JavaScript.

The deployment pipeline for this guide is:

application
Logstash
Elasticsearch
Kibana

Downloads

jdk-8u321-linux-x64.tar.gz // JDK 1.8 or later is required

elasticsearch-8.5.1-linux-x86_64.tar.gz

logstash-8.5.1-linux-x86_64.tar.gz

kibana-8.5.2-linux-x86_64.tar.gz

Install the JDK

Download, extract, configure the environment variables, then verify with java -version.
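A minimal sketch of those steps, assuming the archive was extracted under /home/tom/tools (the jdk1.8.0_321 path is a hypothetical example):

```shell
# append to ~/.bashrc (or /etc/profile); the path is a hypothetical example
export JAVA_HOME=/home/tom/tools/jdk1.8.0_321
export PATH="$JAVA_HOME/bin:$PATH"
```

Then run `source ~/.bashrc` and confirm with `java -version`.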

Install Elasticsearch

Put the archive in a directory such as /home/tom/tools/elk

cd /home/tom/tools/elk
tar -zxf elasticsearch-8.5.1-linux-x86_64.tar.gz
cd elasticsearch-8.5.1
# edit the configuration
vim config/elasticsearch.yml
cluster.name: test
node.name: node-1
path.data: /home/tom/tools/elk/elasticsearch-8.5.1/data
path.logs: /home/tom/tools/elk/elasticsearch-8.5.1/logs
#network.host: 0.0.0.0
http.port: 9200
cluster.initial_master_nodes: ["node-1"]
action.destructive_requires_name: false

#----------------------- BEGIN SECURITY AUTO CONFIGURATION -----------------------
#
# The following settings, TLS certificates, and keys have been automatically
# generated to configure Elasticsearch security features on 23-11-2022 03:47:04
#
# --------------------------------------------------------------------------------

# Enable security features
xpack.security.enabled: false # disabled to simplify HTTP access

xpack.security.enrollment.enabled: true

# Enable encryption for HTTP API client connections, such as Kibana, Logstash, and Agents
xpack.security.http.ssl:
  enabled: false # disabled to simplify HTTP access
  keystore.path: certs/http.p12

# Enable encryption and mutual authentication between cluster nodes
xpack.security.transport.ssl:
  enabled: true
  verification_mode: certificate
  keystore.path: certs/transport.p12
  truststore.path: certs/transport.p12
#----------------------- END SECURITY AUTO CONFIGURATION -------------------------

After saving, start ES. It must be started as a non-root user (Elasticsearch refuses to run as root); here we simply use our everyday user.

# start ES; add -d to run it in the background (omitted here so the output stays visible for debugging)
./bin/elasticsearch

If you see the error:

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

increase the kernel's vm.max_map_count setting:

# edit the file
sudo vi /etc/sysctl.conf
# append at the end
vm.max_map_count=655300
# save, then apply
sudo sysctl -p
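To confirm the new value took effect (or to check the current one before editing), read it back; a small sketch:

```shell
# read the current limit; fall back to /proc if sysctl is unavailable
current=$(sysctl -n vm.max_map_count 2>/dev/null || cat /proc/sys/vm/max_map_count 2>/dev/null || echo unknown)
echo "vm.max_map_count=$current"
```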

If you see the error:

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

raise the maximum number of file descriptors allowed for the current user:

# edit the file
sudo vi /etc/security/limits.conf
# append at the end
tom soft nofile 65535
tom hard nofile 65537
root soft nofile 65535
root hard nofile 65537
# save
# log out and back in; WSL logs in directly, so after reopening the terminal run su tom
# check with ulimit -Hn / ulimit -Sn that the limits took effect
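After reopening the terminal, a quick sanity check of both limits:

```shell
# soft and hard nofile limits for the current shell
soft=$(ulimit -Sn)
hard=$(ulimit -Hn)
echo "soft=$soft hard=$hard"
```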

Test Elasticsearch

curl http://127.0.0.1:9200
# a JSON response like the one below (sample output) means ES is up
{
  "name": "node-1",
  "cluster_name": "test",
  "cluster_uuid": "Lo18yh07T-mfMxwQ6QJM8A",
  "version": {
    "number": "8.2.2",
    "build_flavor": "default",
    "build_type": "tar",
    "build_hash": "9876968ef3c745186b94fdabd4483e01499224ef",
    "build_date": "2022-05-25T15:47:06.259735307Z",
    "build_snapshot": false,
    "lucene_version": "9.1.0",
    "minimum_wire_compatibility_version": "7.17.0",
    "minimum_index_compatibility_version": "7.0.0"
  },
  "tagline": "You Know, for Search"
}
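The one-off curl above can be wrapped in a small reusable check (check_es is a hypothetical helper, not part of the ELK distribution; it assumes ES listens on port 9200):

```shell
# print "up" if Elasticsearch answers on the given host, else "down"
check_es() {
  curl -s "http://${1:-127.0.0.1}:9200" 2>/dev/null | grep -q '"tagline"' \
    && echo up || echo down
}
check_es 127.0.0.1
```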

Install Logstash

Put the archive in the same directory, e.g. /home/tom/tools/elk

tar -zxf logstash-8.5.1-linux-x86_64.tar.gz
cd logstash-8.5.1
# edit the pipeline config (see the official documentation)
vi config/logstash-sample.conf

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  # read from stdin; a TCP port or Kafka would also work
  stdin { 
    # incoming events are parsed as JSON
    codec => "json"
  }
}

filter {
  date {
    # the format of the Time field in incoming events
    match => [ "Time", "yyyy-MM-dd HH:mm:ss.SSS" ]
  }
  # build a target_index metadata field: qq-<service>-<date>
  mutate { add_field => { "[@metadata][target_index]" => "qq-%{Service}-%{+YYYY-MM-dd}" } } 
  mutate { lowercase => [ "[@metadata][target_index]" ] }
}

output {
  # ship events to Elasticsearch
  elasticsearch {
    hosts => ["http://localhost:9200"]
    # use target_index as the index name
    index => "%{[@metadata][target_index]}"
  }
  # also echo each event to stdout for debugging
  stdout { codec => rubydebug }
}

# run
./bin/logstash -f config/logstash-sample.conf
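The two mutate filters above build a daily per-service index name and then lowercase it (Elasticsearch index names must be lowercase). The same computation sketched in shell, with a hypothetical Service value:

```shell
# mirrors "qq-%{Service}-%{+YYYY-MM-dd}" followed by the lowercase mutate
service="OrderAPI"                  # hypothetical Service field value
day=$(date +%Y-%m-%d)               # analogous to logstash's %{+YYYY-MM-dd}
index=$(printf 'qq-%s-%s' "$service" "$day" | tr '[:upper:]' '[:lower:]')
echo "$index"
```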

Test Logstash

Type hello and check that the terminal prints something like:

The stdin plugin is now waiting for input:
[2022-11-24T12:07:04,679][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
hello
{
       "message" => "hello",
    "@timestamp" => 2022-11-24T04:07:19.085177Z,
         "event" => {
        "original" => "hello"
    },
      "@version" => "1",
          "host" => {
        "hostname" => "PC-20221010YU"
    }
}
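The bare hello above exercises only the stdin path; the filter keys off the Time and Service fields, so a fully-formed JSON event (hypothetical field values) is needed to exercise the date match and the index naming:

```shell
# a one-line JSON event carrying the fields the sample filter expects
event='{"Time":"2022-11-24 12:07:19.085","Service":"demo","message":"hello"}'
echo "$event"
# to feed it into the pipeline, pipe it to the logstash command, e.g.:
#   echo "$event" | ./bin/logstash -f config/logstash-sample.conf
```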

Install Kibana

Put the archive in the same directory, e.g. /home/tom/tools/elk

tar -zxf kibana-8.5.2-linux-x86_64.tar.gz
cd kibana-8.5.2
# edit config/kibana.yml
server.port: 5601
server.name: "kibana"
elasticsearch.hosts: ["http://localhost:9200"]
elasticsearch.requestTimeout: 30000

# run
./bin/kibana

Open http://127.0.0.1:5601 in a browser.
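Before opening the browser you can confirm Kibana is answering on its status endpoint (assumes the default port 5601):

```shell
# hypothetical pre-flight check: ask Kibana's /api/status for an HTTP code
code=$(curl -s -o /dev/null -w '%{http_code}' http://127.0.0.1:5601/api/status 2>/dev/null)
[ -n "$code" ] || code=000   # curl unavailable or produced no output
echo "kibana http status: $code"
```

A 200 means Kibana is up; 000 means nothing is listening yet.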

First create an index pattern using the wildcard qq-*. Once that is done, go to Home -> Kibana -> Discover (top left) to query the logs.
Official documentation: https://www.elastic.co/guide/index.html

That completes the deployment.

References

https://www.jianshu.com/p/3233874097e7
