ELK Log Analysis System

Contents

Getting started

Install the elasticsearch-head plugin on node1

Install Logstash on node1

Configure system log collection on node1

Install Kibana on node1

Third machine (Apache)


Overview:

Log analysis is the primary way operations engineers troubleshoot system failures and spot problems. Logs mainly fall into system logs, application logs, and security logs. By reading them, operations and development staff can learn about a server's hardware and software, find configuration errors and their causes; analyzing logs regularly also shows the server's load, performance, and security, so problems can be corrected in time.
Logs are a huge volume of data and are usually scattered across many devices, which makes hunting through them during troubleshooting tedious and slow.
That is why a dedicated log-processing system is so useful. This article walks through one of them: the ELK log analysis stack (Elasticsearch + Logstash + Kibana).

Environment:

Three hosts

Disable the firewall and SELinux

[root@bogon ~]# iptables -F

[root@bogon ~]# setenforce 0

[root@bogon ~]# systemctl stop firewalld

Set the hostnames

First machine:   elk-node1

Second machine:   elk-node2

Third machine:   nothing to do on it for now; we will come back to it later

Host name mapping

/etc/hosts

192.168.1.117 elk-node1
192.168.1.120 elk-node2
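A quick way to confirm the mapping works is to ping each node by name from the other one; the names should resolve to the addresses above:

[root@elk-node1 ~]# ping -c 1 elk-node2
[root@elk-node2 ~]# ping -c 1 elk-node1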

A word of warning about this service before we start: during setup the Elasticsearch port tends to drop out (the service stops on its own), so it is worth tuning the kernel parameters and memory limits first, otherwise you will waste a lot of effort.
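As a rough sketch, the tuning that usually keeps Elasticsearch alive is the standard set of kernel and limit settings below; these exact values are a suggestion rather than something from the walkthrough itself, so adjust them to your hardware:

[root@elk-node1 ~]# echo "vm.max_map_count = 262144" >> /etc/sysctl.conf
[root@elk-node1 ~]# sysctl -p
[root@elk-node1 ~]# cat >> /etc/security/limits.conf <<'EOF'
elasticsearch soft nofile 65536
elasticsearch hard nofile 65536
EOF

If the virtual machines have little RAM, also consider lowering the JVM heap (-Xms/-Xmx, default 2g) in /etc/elasticsearch/jvm.options.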

Getting started

Upload the packages

[root@elk-node1 elk软件包]# rpm -ivh elasticsearch-5.5.0.rpm

警告:elasticsearch-5.5.0.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID d88e42b4: NOKEY

准备中...                          ################################# [100%]

Creating elasticsearch group... OK

Creating elasticsearch user... OK

正在升级/安装...

   1:elasticsearch-0:5.5.0-1          ################################# [100%]

### NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd

 sudo systemctl daemon-reload

 sudo systemctl enable elasticsearch.service

### You can start elasticsearch service by executing

 sudo systemctl start elasticsearch.service

The installer has already told us the next steps:

[root@elk-node1 elk软件包]# systemctl daemon-reload

[root@elk-node1 elk软件包]#  sudo systemctl enable elasticsearch.service

Created symlink from /etc/systemd/system/multi-user.target.wants/elasticsearch.service to /usr/lib/systemd/system/elasticsearch.service.

Edit the configuration file on node1 and node2

vim /etc/elasticsearch/elasticsearch.yml

 17 cluster.name: my-elk-cluster

 23 node.name: elk-node1                     (use elk-node2 on the second node)

 33 path.data: /data/elk_data

 37 path.logs: /var/log/elasticsearch

 55 network.host: 0.0.0.0

 59 http.port: 9200

 68 discovery.zen.ping.unicast.hosts: ["elk-node1", "elk-node2"]

 90 http.cors.enabled: true                  (only node1 needs these two lines, for the head plugin)

 91 http.cors.allow-origin: "*"

Create the data directory and change its ownership to elasticsearch (on both nodes)

[root@elk-node2 ~]# mkdir -p /data/elk_data

[root@elk-node2 ~]#  chown elasticsearch:elasticsearch /data/elk_data/

Start the service and check the port

[root@elk-node1 ~]# systemctl start elasticsearch.service

[root@elk-node1 ~]#  netstat -anpt | grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      56622/java          

[root@elk-node2 ~]# systemctl restart elasticsearch.service

[root@elk-node2 ~]# netstat -anpt | grep 9200

tcp6       0      0 :::9200                 :::*                    LISTEN      55553/java          

Browse to each node's IP on port 9200

[Screenshot 1]

[Screenshot 2]
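Besides the browser, the cluster state can also be checked from the shell; a quick sanity check against node1 (with both nodes healthy, number_of_nodes should be 2 and the status green or yellow):

[root@elk-node1 ~]# curl 'http://192.168.1.117:9200/_cluster/health?pretty'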

Install the elasticsearch-head plugin on node1

Unpack the Node.js package:    tar xf node-v8.2.1-linux-x64.tar.gz -C /usr/local/

Create symlinks for node and npm

[root@elk-node1 elk软件包]# ln -s /usr/local/node-v8.2.1-linux-x64/bin/node /usr/bin/node

[root@elk-node1 elk软件包]# ln -s /usr/local/node-v8.2.1-linux-x64/bin/npm /usr/local/bin/
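A quick way to confirm the links work before going further (the versions printed should match the node-v8.2.1 package just unpacked):

[root@elk-node1 elk软件包]# node -v
[root@elk-node1 elk软件包]# npm -v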

Unpack the head package

[root@elk-node1 elk软件包]# tar xf elasticsearch-head.tar.gz -C /data/elk_data/

cd into /data/elk_data

[root@elk-node1 elk软件包]# cd /data/elk_data/

Change the owner and group

[root@elk-node1 elk_data]# chown -R elasticsearch:elasticsearch elasticsearch-head/

cd into elasticsearch-head/

[root@elk-node1 elk_data]# cd elasticsearch-head/

Install the dependencies with npm

[root@elk-node1 elasticsearch-head]# npm install

npm WARN deprecated [email protected]: The v1 package contains DANGEROUS / INSECURE binaries. Upgrade to safe fsevents v2

npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules/karma/node_modules/chokidar/node_modules/fsevents):

npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for [email protected]: wanted {"os":"darwin","arch":"any"} (current: {"os":"linux","arch":"x64"})

npm WARN [email protected] license should be a valid SPDX license expression

up to date in 8.357s

cd into _site, back up app.js, and edit it

[root@elk-node1 elasticsearch-head]# cd _site/

[root@elk-node1 _site]#  cp app.js{,.bak}

[root@elk-node1 _site]#  vim app.js

Once inside vim, type 4329 followed by a capital G to jump to line 4329

Line 4329:                        this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") ||      "http://192.168.1.117:9200";
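If you would rather not edit line 4329 by hand, a non-interactive alternative is a sed replacement; this assumes the stock app.js still points at http://localhost:9200, which is the default shipped with elasticsearch-head:

[root@elk-node1 _site]# sed -i 's#http://localhost:9200#http://192.168.1.117:9200#g' app.js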

Start the head service with npm

[root@elk-node1 _site]#  npm run start &

[1] 4423

[root@elk-node1 _site]#

> [email protected] start /data/elk_data/elasticsearch-head

> grunt server

Running "connect:server" (connect) task

Waiting forever...

Started connect web server on http://localhost:9100

Start elasticsearch

[root@elk-node1 _site]# systemctl start elasticsearch

Check the port

[root@elk-node1 _site]# netstat -lnpt | grep 9100

tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      4433/grunt      

Browse to the node IP on port 9100

[Screenshot 3]

Insert a test document (with type test)

[root@elk-node1 _site]# curl -XPUT 'localhost:9200/index-demo/test/1?pretty&pretty' -H 'Content-Type: application/json' -d '{ "user": "zhangsan","mesg":"hello word" }'

{

  "_index" : "index-demo",

  "_type" : "test",

  "_id" : "1",

  "_version" : 1,

  "result" : "created",

  "_shards" : {

    "total" : 2,

    "successful" : 2,

    "failed" : 0

  },

  "created" : true

}
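The document can also be read back through the REST API to confirm it was indexed:

[root@elk-node1 _site]# curl -XGET 'localhost:9200/index-demo/test/1?pretty'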

Refresh the head page to see the change

[Screenshot 4]

Install Logstash on node1

rpm -ivh logstash-5.5.1.rpm  

Start the service and create a symlink

systemctl start logstash

ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

Run logstash -e with standard input and output

[root@elk-node1 elk软件包]#  logstash -e 'input { stdin{} } output { stdout{} }'

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console

20:53:11.284 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}

20:53:11.294 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}

20:53:11.378 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"a184b92b-62d6-416a-86bc-d496e1d07fbc", :path=>"/usr/share/logstash/data/uuid"}

20:53:11.590 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}

20:53:11.681 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

The stdin plugin is now waiting for input:

20:53:11.825 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

www.baidu.com      (a URL typed by hand)

2023-07-06T12:53:26.772Z elk-node1 www.baidu.com

www.slan.com.cn               (typed by hand)

2023-07-06T12:53:48.435Z elk-node1 www.slan.com.cn

Show detailed output with the rubydebug codec

[root@elk-node1 elk软件包]# logstash -e 'input { stdin{} } output { stdout{ codec =>rubydebug} }'

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console

20:54:54.115 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>2, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>250}

20:54:54.241 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

The stdin plugin is now waiting for input:

20:54:54.387 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9600}

www.baidu.com             (typed by hand)

{

    "@timestamp" => 2023-07-06T12:55:04.406Z,

      "@version" => "1",

          "host" => "elk-node1",

       "message" => "www.baidu.com"

}
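The same -e syntax can also write straight into Elasticsearch instead of stdout; a quick sketch of that variant, pointing at node1 (whatever you type then shows up under a logstash-YYYY.MM.dd index in the head plugin):

[root@elk-node1 elk软件包]# logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.1.117:9200"] } }'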

Configure system log collection on node1

[root@elk-node1 conf.d]# vim systemc.conf

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"

   }
}
output  {
    elasticsearch {
        hosts => ["192.168.1.117:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

Restart logstash

[root@elk-node1 _site]#  systemctl restart logstash

Load the file and check whether the data reaches Elasticsearch

[root@elk-node1 conf.d]# logstash -f systemc.conf

[Screenshot 5]
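If you prefer the command line over the head page, the same check can be done by listing the indices; the output should include a line for system-<today's date>:

[root@elk-node1 conf.d]# curl 'http://192.168.1.117:9200/_cat/indices?v'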

Install Kibana on node1

[root@elk-node1 elk软件包]#  rpm -ivh kibana-5.5.1-x86_64.rpm
警告:kibana-5.5.1-x86_64.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID d88e42b4: NOKEY
准备中...                          ################################# [100%]
正在升级/安装...
   1:kibana-5.5.1-1                   ################################# [100%]

Enable it at boot

[root@elk-node1 elk软件包]#  systemctl enable kibana.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kibana.service to /etc/systemd/system/kibana.service.

Edit the configuration file and start the service

[root@elk-node1 elk软件包]#  vim /etc/kibana/kibana.yml

 2 server.port: 5601

 7 server.host: "0.0.0.0"

21 elasticsearch.url: "http://192.168.1.117:9200"

30 kibana.index: ".kibana"
[root@localhost elk软件包]#  systemctl restart kibana.service

Check the port

[root@elk-node1 elk软件包]#  netstat -lnpt | grep 5601
tcp        0      0 0.0.0.0:5601            0.0.0.0:*               LISTEN      4370/node    
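A quick extra check that Kibana is actually answering (any HTTP response, typically a redirect to /app/kibana, means the service is up):

[root@elk-node1 elk软件包]# curl -I http://192.168.1.117:5601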

Run the system log pipeline again so the index is available for Kibana

[root@elk-node1 elk软件包]# logstash -f /etc/logstash/conf.d/systemc.conf

Browse to Kibana on port 5601

Create the index pattern; note the index name: system-2023.07.07

[Screenshot 6]

Type the index name into the index pattern field

[Screenshot 7]

Check that it was created successfully

[Screenshot 8]

[Screenshot 9]

Third machine (Apache)

Set the hostname

[root@localhost ~]#  hostname apache

Reload the shell so the new name takes effect

[root@localhost ~]# bash

Disable the firewall and SELinux

[root@apache ~]#  iptables -F

[root@apache ~]#  systemctl stop firewalld

[root@apache ~]#  setenforce 0

Install httpd

[root@apache ~]# yum -y install httpd

Start it

[root@apache ~]# systemctl start httpd

Upload and install the logstash package

[root@apache ~]#  rpm -ivh logstash-5.5.1.rpm

警告:logstash-5.5.1.rpm: 头V4 RSA/SHA512 Signature, 密钥 ID d88e42b4: NOKEY

准备中...                          ################################# [100%]

正在升级/安装...

   1:logstash-1:5.5.1-1               ################################# [100%]

Using provided startup.options file: /etc/logstash/startup.options

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

Successfully created system startup script for Logstash

Enable it at boot

[root@apache ~]#  systemctl enable logstash.service

Created symlink from /etc/systemd/system/multi-user.target.wants/logstash.service to /etc/systemd/system/logstash.service.

cd into the logstash conf.d directory

[root@apache ~]#  cd /etc/logstash/conf.d/

Edit the configuration file

[root@apache conf.d]#  vim apache_log.conf

input {

   file {

        path => "/var/log/httpd/access_log"

        type => "access"

        start_position => "beginning"

   }

  file {

     path => "/var/log/httpd/error_log"

     type => "error"

     start_position => "beginning"

  }

}

output  {

    if [type] == "access" {

       elasticsearch {

        hosts => ["192.168.1.117:9200"]

        index => "apache_access-%{+YYYY.MM.dd}"

     }

 }

  if [type] == "error" {

    elasticsearch {

        hosts => ["192.168.1.117:9200"]

        index => "apache_error-%{+YYYY.MM.dd}"

    }

  }

}

Create the symlink

[root@apache bin]# ln -s /usr/share/logstash/bin/logstash /usr/local/bin/

[root@apache bin]# ll

总用量 0

lrwxrwxrwx. 1 root root 32 7月   7 11:36 logstash -> /usr/share/logstash/bin/logstash

[root@apache bin]#  cd /etc/logstash/conf.d/
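Before loading the file it is worth letting Logstash validate the syntax; --config.test_and_exit only parses the config, and --path.settings points at the installed settings directory so the logstash.yml warning seen in the output below goes away (both flags exist in Logstash 5.x):

[root@apache conf.d]# logstash --path.settings /etc/logstash -f apache_log.conf --config.test_and_exit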

Load the file

[root@apache conf.d]# logstash -f apache_log.conf

OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N

ERROR StatusLogger No log4j2 configuration file found. Using default configuration: logging only errors to the console.

WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults

Could not find log4j2 configuration at path //usr/share/logstash/config/log4j2.properties. Using default config which logs to console

11:37:25.689 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.queue", :path=>"/usr/share/logstash/data/queue"}

11:37:25.693 [main] INFO  logstash.setting.writabledirectory - Creating directory {:setting=>"path.dead_letter_queue", :path=>"/usr/share/logstash/data/dead_letter_queue"}

11:37:25.909 [LogStash::Runner] INFO  logstash.agent - No persistent UUID file found. Generating new UUID {:uuid=>"e79973b7-8439-4aaf-9f4a-248acb91eb3f", :path=>"/usr/share/logstash/data/uuid"}

11:37:26.125 [LogStash::Runner] ERROR logstash.agent - Cannot create pipeline {:reason=>"Expected one of #, input, filter, output at line 1, column 1 (byte 1) after "}

(This parse error appears when the config file does not start with a valid section keyword, for example if "input" was mistyped; after correcting the file and rerunning, the pipeline starts as shown below.)

12:04:31.849 [[main]-pipeline-manager] INFO  logstash.pipeline - Starting pipeline {"id"=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>125}

12:04:32.199 [[main]-pipeline-manager] INFO  logstash.pipeline - Pipeline main started

12:04:32.380 [Api Webserver] INFO  logstash.agent - Successfully started Logstash API endpoint {:port=>9601}      (seeing this line means it started successfully)

Type a few random characters
bvnc vch df hdf dfh hfd

aefssdfgsfdg

Visit the httpd page in a browser to generate some access-log entries
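The same traffic can be generated with curl from any host; the IP below is a placeholder for the apache machine's address, which is not shown in this walkthrough:

[root@elk-node1 ~]# for i in $(seq 1 5); do curl -s -o /dev/null "http://<apache-host-ip>/"; done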

[Screenshot 10]

In the browser, check 192.168.1.117:9100 (the head plugin should now show the apache_access and apache_error indices)

[Screenshot 11]

Create the index patterns in Kibana

Check whether they were created successfully

[Screenshot 12]

Open it

View the data

[Screenshot 13]

Bye!
