I. ELK Introduction
1. Background: as the business grows, there are more and more servers, and with them more and more logs of every kind (access logs, application logs, error logs, and so on). Developers have to log in to individual servers to read logs when troubleshooting, which is inconvenient; operations staff likewise need data from those logs and have to go to the servers to analyze them, which is tedious.

2. The ELK Stack

Official site: https://www.elastic.co/cn/

Starting with version 5.0, the ELK Stack was renamed the Elastic Stack (ELK Stack + Beats).

The ELK Stack consists of Elasticsearch, Logstash, and Kibana.

Elasticsearch: a search engine used to search, analyze, and store logs. It is distributed, which means it scales horizontally, discovers nodes automatically, and shards indices automatically.

Logstash: collects logs and parses them into JSON before handing them to Elasticsearch.

Kibana: a data-visualization component that presents the processed results through a web interface.

Beats: lightweight log shippers; the Beats family actually has five members.

Early ELK architectures used Logstash for both collecting and parsing logs, but Logstash is relatively heavy on memory, CPU, and I/O. By comparison, the CPU and memory footprint of Beats is almost negligible.

X-Pack is an extension pack that adds security, alerting, monitoring, reporting, and graph capabilities to the Elastic Stack, but it is a paid product.

II. Preparation for Installing ELK
1. Machine plan

Prepare 3 machines. (Note: two address schemes appear in this walkthrough: 192.168.93.128/129/131 in the JDK section, and 192.168.10.101/102/103 from the Elasticsearch section onward; evidently 129 corresponds to 101, 128 to 102, and 131 to 103.)

Role assignment:

(1) Install elasticsearch (es for short) on all 3 machines

(2) Install a JDK on all 3 machines (either openjdk via yum, or a JDK downloaded from Oracle's site)

(3) 1 master node: 129; 2 data nodes: 128 and 131

(4) Install Kibana on the master node (101)

(5) Install logstash + beats on one of the data nodes, e.g. on 102

2. Install the JDK

Install JDK 1.8 on machine 129.

The download itself is omitted here; download page: https://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html.

The installation steps:

[root@hongwei-02 ~]# tar xf jdk-8u181-linux-x64.tar.gz -C /usr/local/
[root@hongwei-02 ~]# mv /usr/local/jdk1.8.0_181/ /usr/local/jdk1.8
[root@hongwei-02 ~]# echo -e "export JAVA_HOME=/usr/local/jdk1.8\nexport PATH=\$PATH:\$JAVA_HOME/bin\nexport CLASSPATH=\$JAVA_HOME/lib\n"> /etc/profile.d/jdk.sh
[root@hongwei-02 ~]# chmod +x /etc/profile.d/jdk.sh
[root@hongwei-02 ~]# source /etc/profile.d/jdk.sh
[root@hongwei-02 ~]#

Run the java command to verify:

[root@hongwei-02 ~]# java -version
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

OK, the JDK is installed.

The JDK is now installed and configured on 192.168.93.129. Next, from 129, sync the JDK files to 128 and 131 using expect and rsync:

[root@hongwei-02 ~]# vim ip.list
192.168.93.128
192.168.93.131
[root@hongwei-02 ~]# vim rsync.sh
#!/bin/bash
# Write an expect wrapper that runs rsync for one host.
cat > rsync.expect <<EOF
#!/usr/bin/expect
set host [lindex \$argv 0]
set file [lindex \$argv 1]
spawn rsync -avr \$file root@\$host:/usr/local/
expect eof
EOF
chmod +x rsync.expect
file=$2

for host in $(cat $1)
do
    ./rsync.expect $host $file
    scp /etc/profile.d/jdk.sh root@$host:/etc/profile.d/
done
rm -f rsync.expect
[root@hongwei-02 ~]# chmod +x rsync.sh
[root@hongwei-02 ~]# ./rsync.sh ip.list /usr/local/jdk1.8

Source the profile script (here run on all machines via ansible):

[root@lb01 ~]# ansible all -m shell -a "source /etc/profile.d/jdk.sh"

Run the java command on each machine:

[root@lb01 ~]# ansible all -m shell -a "java -version"
192.168.10.103 | SUCCESS | rc=0 >>
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

192.168.10.102 | SUCCESS | rc=0 >>
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

192.168.10.101 | SUCCESS | rc=0 >>
java version "1.8.0_181"
Java(TM) SE Runtime Environment (build 1.8.0_181-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.181-b13, mixed mode)

[root@lb01 ~]#

OK, the JDK is installed and configured on all 3 machines.

III. Install Elasticsearch
Elasticsearch must be installed on all 3 machines.

RPM download: https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm

[root@lb01 ~]# curl -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.4.0.rpm
[root@lb01 ~]# rpm -ivh elasticsearch-6.4.0.rpm
warning: elasticsearch-6.4.0.rpm: Header V4 RSA/SHA512 Signature, key ID d88e42b4: NOKEY
Preparing... ################################# [100%]
Creating elasticsearch group... OK
Creating elasticsearch user... OK
Updating / installing...
1:elasticsearch-0:6.4.0-1 ################################# [100%]

NOT starting on installation, please execute the following statements to configure elasticsearch service to start automatically using systemd

sudo systemctl daemon-reload
sudo systemctl enable elasticsearch.service

You can start elasticsearch service by executing

sudo systemctl start elasticsearch.service
Created elasticsearch keystore in /etc/elasticsearch
[root@lb01 ~]#

Alternatively, configure a yum repository and install with yum:

[root@lb01 ~]# vim /etc/yum.repos.d/elk.repo
[elasticsearch]
name=Elasticsearch Repository for 6.x Package
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
enabled=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch

Then push the repo file to the other two nodes:

[root@lb01 ~]# ansible 192.168.10.102,192.168.10.103 -m copy -a "src=/etc/yum.repos.d/elk.repo dest=/etc/yum.repos.d/"
192.168.10.102 | SUCCESS => {
"changed": true,
"checksum": "3a03d411370d2c7e15911fe4d62175158faad4a5",
"dest": "/etc/yum.repos.d/elk.repo",
"gid": 0,
"group": "root",
"md5sum": "53c8bd404275373a7529bd48b57823c4",
"mode": "0644",
"owner": "root",
"size": 195,
"src": "/root/.ansible/tmp/ansible-tmp-1536495999.48-252668322650802/source",
"state": "file",
"uid": 0
}
192.168.10.103 | SUCCESS => {
"changed": true,
"checksum": "3a03d411370d2c7e15911fe4d62175158faad4a5",
"dest": "/etc/yum.repos.d/elk.repo",
"gid": 0,
"group": "root",
"md5sum": "53c8bd404275373a7529bd48b57823c4",
"mode": "0644",
"owner": "root",
"size": 195,
"src": "/root/.ansible/tmp/ansible-tmp-1536495999.48-36568269955377/source",
"state": "file",
"uid": 0
}
[root@lb01 ~]#

IV. Configure Elasticsearch
1. Configure es

Elasticsearch has two configuration files: /etc/elasticsearch/elasticsearch.yml and /etc/sysconfig/elasticsearch.

/etc/elasticsearch/elasticsearch.yml: cluster-related configuration.

/etc/sysconfig/elasticsearch: configuration of the es service itself.

Configuration on machine 101:

Modify or add the following in elasticsearch.yml:

[root@lb01 ~]# cp /etc/elasticsearch/elasticsearch.yml{,.bak}
[root@lb01 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: my-elk
node.name: my-test01
node.master: true
node.data: false
network.host: 192.168.10.101
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]

Explanation:

cluster.name: my-elk  # name of the cluster
node.name: my-test01  # name of this node
node.master: true  # whether this node is master-eligible (true: yes, false: no)
node.data: false  # whether this is a data node (true: yes, false: no)
network.host: 192.168.10.101  # IP address to listen on; 0.0.0.0 means all interfaces
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]  # hosts used for unicast discovery

elasticsearch.yml on machine 102:

cluster.name: my-elk
node.name: my-test02
node.master: false
node.data: true
network.host: 192.168.10.102
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]

elasticsearch.yml on machine 103:

cluster.name: my-elk
node.name: my-test03
node.master: false
node.data: true
network.host: 192.168.10.103
discovery.zen.ping.unicast.hosts: ["192.168.10.101","192.168.10.102","192.168.10.103"]

On 102 and 103, note the values that differ from 101: node.name, node.master, node.data, and network.host.

X-Pack is a paid product, so it is not installed here.

2. Add the Java environment to /etc/sysconfig/elasticsearch on all machines

Because the JDK is installed in /usr/local/jdk1.8, that path has to be added to /etc/sysconfig/elasticsearch. Modify this file on all 3 machines.

[root@lb01 ~]# vim /etc/sysconfig/elasticsearch
JAVA_HOME=/usr/local/jdk1.8

3. Stop the firewall

The machines run RHEL 7.5, whose default firewall is firewalld:

[root@lb01 ~]# ansible all -m shell -a "systemctl stop firewalld"

4. Start es

Start es on the master node first, then on the other nodes. Start command: systemctl start elasticsearch.service

Once started, check the processes:

[root@lb01 ~]# ansible ELK -m shell -a "ps aux | grep elas"
192.168.10.101 | SUCCESS | rc=0 >>
elastic+ 4167 48.4 63.5 3218372 1279184 ? Ssl 19:53 0:55 /usr/local/jdk1.8/bin/java -Xms1g
-Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.iKVNygVa -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 4217 0.0 0.2 72136 5116 ? Sl 19:53 0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 4249 66.7 1.8 403704 37832 pts/6 Sl+ 19:55 0:29 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4258 2.3 1.7 406852 35444 pts/6 S+ 19:55 0:00 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4260 6.0 1.8 408832 37688 pts/6 S+ 19:55 0:00 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4261 2.7 1.7 406852 35412 pts/6 S+ 19:55 0:00 /usr/bin/python2 /usr/bin/ansible ELK -m shell -a ps aux | grep elas
root 4324 0.0 0.0 113172 1208 pts/4 S+ 19:55 0:00 /bin/sh -c ps aux | grep elas
root 4326 0.0 0.0 112704 940 pts/4 R+ 19:55 0:00 grep elas

192.168.10.103 | SUCCESS | rc=0 >>
elastic+ 18631 60.6 73.4 3178368 732804 ? Ssl 19:53 1:17 /usr/local/jdk1.8/bin/java -Xms1g -Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.NtMzlEoG -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 18683 4.4 0.1 63940 1072 ? Sl 19:55 0:02 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 18731 0.0 0.1 113172 1208 pts/0 S+ 19:56 0:00 /bin/sh -c ps aux | grep elas
root 18733 0.0 0.0 112704 940 pts/0 S+ 19:56 0:00 grep elas

192.168.10.102 | SUCCESS | rc=0 >>
elastic+ 51207 45.9 60.2 3131536 290424 ? Ssl 19:53 1:00 /usr/local/jdk1.8/bin/java -Xms1g
-Xmx1g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch.tF106vFb -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/var/lib/elasticsearch -XX:ErrorFile=/var/log/elasticsearch/hs_err_pid%p.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+PrintTenuringDistribution -XX:+PrintGCApplicationStoppedTime -Xloggc:/var/log/elasticsearch/gc.log -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=32 -XX:GCLogFileSize=64m -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/etc/elasticsearch -Des.distribution.flavor=default -Des.distribution.type=rpm -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -p /var/run/elasticsearch/elasticsearch.pid --quiet
elastic+ 51258 4.6 0.2 63940 1148 ? Sl 19:55 0:00 /usr/share/elasticsearch/modules/x-pack-ml/platform/linux-x86_64/bin/controller
root 51309 1.0 0.2 113172 1344 pts/0 S+ 19:56 0:00 /bin/sh -c ps aux | grep elas
root 51311 0.0 0.1 112704 936 pts/0 S+ 19:56 0:00 grep elas

[root@lb01 ~]#

OK, es is up on all 3 machines.
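With the cluster up, the node roles can also be cross-checked via the `_cat/nodes` API (`curl '192.168.10.101:9200/_cat/nodes?v'`). A minimal sketch of reading its output, using a hypothetical sample line (real values will differ):

```shell
# One data line from _cat/nodes (hypothetical values); the columns are:
# ip heap.percent ram.percent cpu load_1m load_5m load_15m node.role master name
sample='192.168.10.101 12 63 2 0.01 0.03 0.05 mi * my-test01'
# node.role letters: m = master-eligible, d = data, i = ingest;
# the master column shows '*' on the elected master node.
role=$(echo "$sample" | awk '{print $8}')
master=$(echo "$sample" | awk '{print $9}')
echo "role=$role master=$master"   # role=mi master=*
```

On this cluster the master (my-test01) should show role `mi` and no `d`, while my-test02 and my-test03 should carry `d`.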

V. Inspecting es with curl
1. Check the cluster health

curl '192.168.10.101:9200/_cluster/health?pretty'

[root@lb01 ~]# curl '192.168.10.101:9200/_cluster/health?pretty'
{
"cluster_name" : "my-elk",
"status" : "green",
"timed_out" : false,
"number_of_nodes" : 3,
"number_of_data_nodes" : 2,
"active_primary_shards" : 0,
"active_shards" : 0,
"relocating_shards" : 0,
"initializing_shards" : 0,
"unassigned_shards" : 0,
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0,
"task_max_waiting_in_queue_millis" : 0,
"active_shards_percent_as_number" : 100.0
}
[root@lb01 ~]#

Explanation:

status: green means the cluster is fully healthy; yellow means all primary shards are allocated but some replicas are not; red means at least one primary shard is unallocated.
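For scripting health checks, the status field can be pulled out of the JSON; a minimal sketch using grep/cut on a sample response (in practice the JSON would come from `curl -s '192.168.10.101:9200/_cluster/health'`):

```shell
# Sample (abridged) health response; extract the "status" value.
health='{"cluster_name":"my-elk","status":"green","number_of_nodes":3}'
status=$(echo "$health" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
echo "$status"   # green
```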

2. Check the detailed cluster state

curl '192.168.10.101:9200/_cluster/state?pretty'

[root@lb01 ~]# curl '192.168.10.101:9200/_cluster/state?pretty' | more
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0{
"cluster_name" : "my-elk",
"compressed_size_in_bytes" : 9585,
"cluster_uuid" : "SPnC040UT6eo_o_a6hw9Hw",
"version" : 5,
"state_uuid" : "hH1LRd3RSuCQzyykx7ZIuQ",
"master_node" : "MT6Hvwv9Sziu1xBmNcl89g",
"blocks" : { },
"nodes" : {
"S1ArtroOTZuswzayKr9wmA" : {
"name" : "my-test02",
"ephemeral_id" : "S1EqQurVQLe9fDhaB4lf2Q",
"transport_address" : "192.168.10.102:9300",
"attributes" : {
"ml.machine_memory" : "493441024",
"ml.max_open_jobs" : "20",
"xpack.installed" : "true",
"ml.enabled" : "true"
}
},
"MT6Hvwv9Sziu1xBmNcl89g" : {
"name" : "my-test01",
"ephemeral_id" : "JdMZ9Up4QwGribVXcav9cA",
"transport_address" : "192.168.10.101:9300",
--More--

VI. Install Kibana
1. Install Kibana on the master node

Kibana is installed on the master node, i.e. on 192.168.10.101.

Download: https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm

[root@lb01 ~]# curl -O https://artifacts.elastic.co/downloads/kibana/kibana-6.4.0-x86_64.rpm
[root@lb01 ~]# rpm -ivh kibana-6.4.0-x86_64.rpm

Since the yum repository was configured earlier, yum install also works, though it is quite slow.

2. Configuration file

Kibana's configuration file is /etc/kibana/kibana.yml:

[root@lb01 ~]# vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "192.168.10.101"
elasticsearch.url: "http://192.168.10.101:9200"
logging.dest: /var/log/kibana.log

Create the log file:

[root@lb01 ~]# touch /var/log/kibana.log && chmod 777 /var/log/kibana.log

3. Start Kibana

[root@lb01 ~]# systemctl start kibana
[root@lb01 ~]# netstat -tnlp | grep node
tcp 0 0 192.168.10.101:5601 0.0.0.0:* LISTEN 5383/node
[root@lb01 ~]#

Open 192.168.10.101:5601 in a browser.

VII. Install Logstash
1. Download and install Logstash

Install Logstash on any one of the data nodes; here it goes on machine 102.

Note: Logstash does not support JDK 9.

Logstash download: https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm

[root@lb01 ~]# curl -O https://artifacts.elastic.co/downloads/logstash/logstash-6.4.0.rpm
[root@lb01 ~]# rpm -ivh logstash-6.4.0.rpm

2. Configure Logstash to collect syslog

[root@lb02 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
    syslog {
        type => "system-syslog"
        port => 10514
    }
}
output {
    stdout {
        codec => rubydebug
    }
}

Set the JAVA_HOME variable in /usr/share/logstash/bin/logstash.lib.sh:

[root@lb02 ~]# vim /usr/share/logstash/bin/logstash.lib.sh
JAVA_HOME=/usr/local/jdk1.8

Check the configuration file for errors:

[root@lb02 ~]# cd /usr/share/logstash/bin
[root@lb02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf --config.test_and_exit

Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-09T21:07:49,379][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.queue", :path=>"/var/lib/logstash/queue"}
[2018-09-09T21:07:49,506][INFO ][logstash.setting.writabledirectory] Creating directory {:setting=>"path.dead_letter_queue", :path=>"/var/lib/logstash/dead_letter_queue"}
[2018-09-09T21:07:52,013][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-09-09T21:08:01,228][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@lb02 bin]#

(The system log configuration itself is modified and rsyslog restarted in section VIII below.)

3. Start Logstash in the foreground

[root@lb02 bin]# ./logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/syslog.conf

...
[2018-09-09T21:13:59,328][INFO ][logstash.agent ] Pipelines running {:count=> 1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2018-09-09T21:14:02,092][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Once running it stays in the foreground and does not exit; press Ctrl-C to stop it.

VIII. Configure Logstash
Modify /etc/logstash/conf.d/syslog.conf:

[root@lb02 ~]# vim /etc/logstash/conf.d/syslog.conf
input {
    syslog {
        type => "system-syslog"
        port => 10514
    }
}
output {
    elasticsearch {
        hosts => ["192.168.10.101:9200"]
        index => "system-syslog-%{+YYYY.MM}"
    }
}
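The `%{+YYYY.MM}` part of the index name is a date pattern that Logstash expands from each event's timestamp (in UTC), so a fresh index is created every month. The name used for the current month can be previewed in shell:

```shell
# Render this month's index name the way Logstash expands %{+YYYY.MM}
# (date -u to mirror Logstash's use of UTC timestamps).
index="system-syslog-$(date -u +%Y.%m)"
echo "$index"
```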

Change the host IP that the Logstash HTTP API listens on:

[root@lb02 ~]# vim /etc/logstash/logstash.yml
http.host: "192.168.10.102"

Modify rsyslog: add the line `*.* @@192.168.10.102:10514` (a double `@@` forwards over TCP; a single `@` would use UDP).

[root@lb02 ~]# vim /etc/rsyslog.conf
#### RULES ####
*.* @@192.168.10.102:10514

Restart rsyslog: systemctl restart rsyslog

Start Logstash:

[root@lb02 ~]# systemctl start logstash

Check the ports (Logstash starts slowly; it takes a while before they come up):

[root@lb02 ~]# netstat -tnl
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address    State
tcp        0      0 0.0.0.0:22              0.0.0.0:*          LISTEN
tcp        0      0 127.0.0.1:25            0.0.0.0:*          LISTEN
tcp6       0      0 127.0.0.1:9600          :::*               LISTEN
tcp6       0      0 192.168.10.102:9200     :::*               LISTEN
tcp6       0      0 :::10514                :::*               LISTEN
tcp6       0      0 192.168.10.102:9300     :::*               LISTEN
tcp6       0      0 :::22                   :::*               LISTEN
tcp6       0      0 ::1:25                  :::*               LISTEN
[root@lb02 ~]#

Ports 10514 and 9600 are now listening.

IX. Viewing Logs in Kibana
As set up earlier, Kibana runs on 192.168.10.101 and listens on port 5601.

List the indices: curl '192.168.10.101:9200/_cat/indices?v'

[root@lb01 ~]# curl '192.168.10.101:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2018.09 7kIi3UOtQR6DJaPmBZF1JQ 5 1 15 0 130.3kb 74.3kb
green open .kibana SAqQ2nPHQfWkHrx3uBb0zA 1 1 1 0 8kb 4kb
[root@lb01 ~]#

View an index's details: curl -XGET '192.168.10.101:9200/<index-name>?pretty'

[root@lb01 ~]# curl -XGET '192.168.10.101:9200/system-syslog-2018.09?pretty'
{
"system-syslog-2018.09" : {
"aliases" : { },
"mappings" : {
"doc" : {
"properties" : {
"@timestamp" : {
"type" : "date"
},
"@version" : {
"type" : "text",
"fields" : {
"keyword" : {
"type" : "keyword",
"ignore_above" : 256
}
...

To delete an index: curl -XDELETE '192.168.10.101:9200/<index-name>'

Reference: https://zhaoyanblog.com/archives/732.html

Viewing the logs in Kibana:

Open 192.168.10.101:5601 in a browser.

In the left sidebar, go to: Management → Index Patterns → Create Index Pattern.

Enter an index pattern that matches the system-syslog-2018.09 index created earlier (e.g. system-syslog-*).

Click Next, choose @timestamp as the time filter field, and create the pattern.

Once the pattern is created, click Discover in the left sidebar to see the log entries.

X. Collecting nginx Logs
1. On the Logstash host (192.168.10.102), create a Logstash config for collecting nginx logs.

nginx is installed on 192.168.10.102 (installation steps omitted).

[root@lb02 ~]# vim /etc/logstash/conf.d/nginx.conf
input {
    file {
        path => "/tmp/elk_access.log"
        start_position => "beginning"
        type => "nginx"
    }
}
filter {
    grok {
        match => {
            "message" => "%{IPORHOST:http_host} %{IPORHOST:clientip} - %{USERNAME:remote_user} \[%{HTTPDATE:timestamp}\] \"(?:%{WORD:http_verb} %{NOTSPACE:http_request}(?: HTTP/%{NUMBER:http_version})?|%{DATA:raw_http_request})\" %{NUMBER:response} (?:%{NUMBER:bytes_read}|-) %{QS:referrer} %{QS:agent} %{QS:xforwardedfor} %{NUMBER:request_time:float}"
        }
    }
    geoip {
        source => "clientip"
    }
}
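Note that the file above only defines `input` and `filter`; without an `output` section the parsed events never reach Elasticsearch. A sketch of the missing block (the index name here is an assumption, inferred from the `nginx-test-2018.09.13` index that shows up later):

```
output {
    elasticsearch {
        hosts => ["192.168.10.101:9200"]
        index => "nginx-test-%{+YYYY.MM.dd}"
    }
}
```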

Check that the configuration file is valid:

[root@lb02 ~]# /usr/share/logstash/bin/logstash --path.settings /etc/logstash/ -f /etc/logstash/conf.d/nginx.conf --config.test_and_exit
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
[2018-09-13T21:34:10,596][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
Configuration OK
[2018-09-13T21:34:21,798][INFO ][logstash.runner ] Using config.test_and_exit mode. Config Validation Result: OK. Exiting Logstash
[root@lb02 ~]#

Restart Logstash:

[root@lb02 ~]# systemctl restart logstash

2. Set up the nginx virtual host

nginx install directory: /usr/local/nginx

[root@lb02 ~]# vim /usr/local/nginx/conf.d/elk.conf
server {
    listen 80;
    server_name elk.localhost;

    location / {
        proxy_pass http://192.168.10.101:5601;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    access_log /tmp/elk_access.log main2;
}

3. Customize the nginx log format

Add the following main2 log format in the http block of /usr/local/nginx/conf/nginx.conf:

[root@lb02 ~]# vim /usr/local/nginx/conf/nginx.conf
log_format main2 '$http_host $remote_addr - $remote_user [$time_local] "$request" '
                 '$status $body_bytes_sent "$http_referer" '
                 '"$http_user_agent" "$upstream_addr" $request_time';
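For reference, a line written by this `main2` format looks like the following (hypothetical request values); it is exactly this layout that the grok pattern in nginx.conf is written against:

```
elk.localhost 192.168.10.1 - - [13/Sep/2018:22:00:00 +0800] "GET / HTTP/1.1" 200 612 "-" "Mozilla/5.0" "192.168.10.101:5601" 0.005
```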

Start nginx:

[root@lb02 ~]# /usr/local/nginx/sbin/nginx
[root@lb02 ~]#
[root@lb02 ~]# netstat -tnlp | grep 80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 3378/nginx: master
[root@lb02 ~]#

Check the indices:

[root@lb01 ~]# curl '192.168.10.101:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open system-syslog-2018.09 7kIi3UOtQR6DJaPmBZF1JQ 5 1 188 0 894.8kb 465.4kb
green open .kibana SAqQ2nPHQfWkHrx3uBb0zA 1 1 2 0 22kb 11kb
green open nginx-test-2018.09.13 wGtUOYe0QJqpBE5iJbQVTQ 5 1 99 0 298.3kb 149.1kb
[root@lb01 ~]#

OK, the nginx-test index is being picked up. The configuration works.

On the physical (host) machine, add a hosts entry:

192.168.10.102 elk.localhost

Then open elk.localhost in a browser, create an index pattern for the nginx index the same way as before, and view the nginx logs in Discover.

XI. Collecting Logs with Beats
1. Download and install Filebeat

Beats are lightweight log shippers. Site: https://www.elastic.co/cn/products/beats

Install Filebeat on machine 192.168.10.103. Download:

https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-x86_64.rpm

[root@rs01 ~]# rpm -ivh https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.0-x86_64.rpm

2. Edit the configuration file

[root@rs01 ~]# vim /etc/filebeat/filebeat.yml

Comment out the default Elasticsearch output and its hosts line:

#output.elasticsearch:
#  hosts: ["localhost:9200"]

Add a console output instead:

output.console:
  enabled: true

And point the input paths at the messages log:

paths:
  - /var/log/messages

Run it in the foreground to check:

[root@rs01 ~]# /usr/share/filebeat/bin/filebeat -c /etc/filebeat/filebeat.yml
{"@timestamp":"2018-09-13T14:46:50.390Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.0"},"source":"/var/log/messages","offset":709995,"message":"Sep 13 22:46:48 rs01 systemd-logind: Removed session 10.","prospector":{"type":"log"},"input":{"type":"log"},"host":{"name":"rs01"},"beat":{"name":"rs01","hostname":"rs01","version":"6.4.0"}}
{"@timestamp":"2018-09-13T14:46:57.446Z","@metadata":{"beat":"filebeat","type":"doc","version":"6.4.0"},"host":{"name":"rs01"},"message":"Sep 13 22:46:56 rs01 systemd: Started Session 11 of user root.","source":"/var/log/messages","offset":710052,"prospector":{"type":"log"},"input":{"type":"log"},"beat":{"name":"rs01","hostname":"rs01","version":"6.4.0"}}
{"@timestamp":"2018-09-13T14:46:57.523Z","@metadata":

The messages log is being printed to the screen.

Above, the /var/log/messages log was shipped; now change the path to /var/log/elasticsearch/my-elk.log (any log file under /var/log/ would work). Comment out the earlier

#output.console:
#  enabled: true

and configure the Elasticsearch output instead:

output.elasticsearch:
  hosts: ["192.168.10.101:9200"]

[root@rs01 ~]# vim /etc/filebeat/filebeat.yml

#=========================== Filebeat inputs =============================

filebeat.inputs:

# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/elasticsearch/my-elk.log

......

#output.console:
#  enabled: true

#================================ Outputs =====================================

# Configure what output to use when sending the data collected by the beat.

#-------------------------- Elasticsearch output ------------------------------
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["192.168.10.101:9200"]

When the changes are done, start Filebeat:

[root@rs01 ~]# systemctl start filebeat
[root@rs01 ~]#

Check the es indices:

[root@lb01 ~]# curl '192.168.10.101:9200/_cat/indices?v'
health status index uuid pri rep docs.count docs.deleted store.size pri.store.size
green open filebeat-6.4.0-2018.09.13 hJ92dpWGShq2J8uI8hN7pQ 3 1 980 0 484.8kb 223.8kb
green open nginx-test-2018.09.13 wGtUOYe0QJqpBE5iJbQVTQ 5 1 25779 0 12.1mb 6.1mb
green open system-syslog-2018.09 7kIi3UOtQR6DJaPmBZF1JQ 5 1 25868 0 12.7mb 6.3mb
green open .kibana SAqQ2nPHQfWkHrx3uBb0zA 1 1 3 0 36.1kb 18kb
[root@lb01 ~]#

OK, es is receiving the data collected by Filebeat.

An index pattern can now be created for it in Kibana.

Compared with Logstash, Filebeat is far simpler to set up.