Recently the number of projects at our company has grown and the environments have become more complex, so developers come to me to look up error logs more and more often. ELK came to mind right away, so let's pick up a new skill.
1. System architecture
- Laravel logs: the log source; Filebeat ships the log entries into the Redis middleware
- Logstash: pulls the data out of Redis through its input stage, extracts the fields we need with its filter stage, and then outputs the result to Elasticsearch
- Elasticsearch: receives the data sent by Logstash and provides a distributed, multi-tenant full-text search engine
- Kibana: an excellent front-end log display framework that can turn logs into all kinds of charts and gives users powerful data visualization
2. IP addresses of each service
laravel: 172.18.109.227
redis: 172.18.215.207
elasticsearch: 172.18.215.207
kibana: 172.18.215.207
3. Laravel log server configuration:
- Configure the Filebeat yum repository
[elastic-5.x]
name=Elastic repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Install Filebeat
yum install filebeat
- Edit the Filebeat configuration file (/etc/filebeat/filebeat.yml)
...
# Log input source
- input_type: log
  paths:
    - /var/www/html/*/storage/logs/laravel-2018-12-29.log
...
# Output to Redis
output.redis:
  # Array of hosts to connect to.
  hosts: ["172.18.215.207:6379"]
  password: "***********"
  db: 0
  timeout: 5
  key: "php-01"
  #protocol: "https"
  #username: "elastic"
  #password: "changeme"
4. Redis server
- Install Redis
yum install redis
- Configure Redis
...
# bind 192.168.1.100 10.0.0.1
bind 172.18.215.207
# bind 127.0.0.1 ::1
#
...
# are explicitly listed using the "bind" directive.
protected-mode yes
# Accept connections on the specified port, default is 6379 (IANA #815344).
# If port 0 is specified Redis will not listen on a TCP socket.
port 6379
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
...
# use a very strong password otherwise it will be very easy to break.
#
requirepass ***********
# Command renaming.
#
# It is possible to change the name of dangerous commands in a shared
# environment. For instance the CONFIG command may be renamed into something
# hard to guess so that it
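Restart Redis so the new settings take effect, then confirm that Filebeat is actually pushing entries into the list (the key php-01 matches the Filebeat output section above):
systemctl restart redis
# the length of the list should grow as new log lines are shipped
redis-cli -h 172.18.215.207 -a '***********' llen php-01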
5. Configure the Logstash server
- Configure the Logstash yum repository
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
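- Install Logstash (from the repository above)
yum install -y logstash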
- Edit the Logstash settings file (/etc/logstash/logstash.yml)
#
# Where to fetch the pipeline configuration for the main pipeline
#
path.config: /etc/logstash/conf.d
#
# Pipeline configuration string for the main pipeline
#
# config.string:
#
...
http.host: "127.0.0.1"
#
# Bind port for the metrics REST endpoint, this option also accept a range
# (9600-9700) and logstash will pick up the first available ports.
#
# http.port: 9600-9700
#
# ------------ Debugging Settings --------------
#
# Options for log.level:
# * fatal
# * error
# * warn
# * info (default)
# * debug
# * trace
#
# log.level: info
path.logs: /var/log/logstash
#
...
vim /etc/logstash/conf.d/nginx.conf
# Pull the data from Redis
input {
    redis {
        type => "php-01"
        host => "172.18.215.207"
        port => "6379"
        db => "0"
        password => "*************"
        data_type => "list"
        key => "php-01"
    }
}
# Parse the Laravel log format
filter {
    grok {
        match => [ "message", "\[%{TIMESTAMP_ISO8601:logtime}\] %{WORD:env}\.(?<level>[A-Z]{4,5})\: %{GREEDYDATA:msg}" ]
    }
}
output {
    # Only ship entries whose level is ERROR
    if [level] == "ERROR" {
        elasticsearch {
            hosts => ["127.0.0.1:9200"]
            index => "laravellog"
            user => "elastic"
            password => "changeme"
        }
    }
}
A sample of the Laravel log:
[2019-01-02 09:58:00] produce.INFO: {"code":200,"message":"成功","data":{"code":"1000","message":"成功"}}
[2019-01-02 10:00:03] produce.INFO: 不能充值的原因账户金额不够
[2019-01-02 10:00:03] produce.INFO: 不能充值的原因账户金额不够
[2019-01-02 10:00:03] produce.INFO: 不能充值的原因账户金额不够
[2019-01-02 10:00:34] produce.ERROR: cannot find user by this audience {"exception":"[object] (JPush\\Exceptions\\APIRequestException(code: 1011): cannot find user by this audience at /var/www/html/enjoyCarTask/vendor/jpush/jpush/src/JPush/Http.php:123)
[stacktrace]
#0 /var/www/html/enjoyCarTask/vendor/jpush/jpush/src/JPush/Http.php(16): JPush\\Http::proce***esp(Array)
#1 /var/www/html/enjoyCarTask/vendor/jpush/jpush/src/JPush/PushPayload.php(537): JPush\\Http::post(Object(JPush\\Client), 'https://api.jpu...', '{\"platform\":\"al...')
#2 /var/www/html/enjoyCarTask/vendor/ucar/push/Push/Jobs/JPush.php(89): JPush\\PushPayload->send()
#3 [internal function]: Ucar\\Push\\Jobs\\JPush->handle(Object(JPush\\Client))
#4 /var/www/html/enjoyCarTask/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(29): call_user_func_array(Array, Array)
#5 /var/www/html/enjoyCarTask/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(87): Illuminate\\Container\\BoundMethod::Illuminate\\Container\\{closure}()
#6 /var/www/html/enjoyCarTask/vendor/laravel/framework/src/Illuminate/Container/BoundMethod.php(31): Illuminate\\Container\\BoundMethod::callBoundMethod(Object(Illuminate\\Foundation\\Application), Array, Object(Closure))
#7 /var/www/html/enjoyCarTask/vendor/laravel/framework/src/Illuminate/Container/Container.php(564): Illuminate\\Container\\BoundMethod::call(Object(Illuminate\\Foundation\\Application), Array, Array, NULL)
Note: we only want to extract the ERROR entries, so the Logstash pipeline above filters on entries whose level field is ERROR.
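You can check the pipeline syntax before starting the service. A minimal sketch, assuming the standard paths of the Logstash RPM:
# test the pipeline configuration, then start Logstash
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -t
systemctl start logstash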
6. Configure the Elasticsearch server
- Configure the Elasticsearch yum repository
[elasticsearch-5.x]
name=Elasticsearch repository for 5.x packages
baseurl=https://artifacts.elastic.co/packages/5.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Install Elasticsearch
yum install -y elasticsearch
- Configure Elasticsearch
mkdir -p /data/es-data /var/log/elstic
chown -R elasticsearch:elasticsearch /data/es-data
chown -R elasticsearch:elasticsearch /var/log/elstic
vim /etc/elasticsearch/elasticsearch.yml
#
# Use a descriptive name for the node:
#
node.name: Elstic
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
# Data path
path.data: /data/es-data
#
# Path to log files:
# Log path
path.logs: /var/log/elstic
...
# Bind address; set to 127.0.0.1 so it is only reachable from this host
network.host: 127.0.0.1
#
# Set a custom port for HTTP:
#
# Listening port
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
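Start Elasticsearch and check that it answers locally; once Logstash has shipped some ERROR entries, the laravellog index should show up:
systemctl start elasticsearch
curl http://127.0.0.1:9200/
curl http://127.0.0.1:9200/_cat/indices?v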
7. Configure the Kibana service
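Install Kibana, assuming the same Elastic yum repository as above is configured on this host, then edit its configuration file:
yum install -y kibana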
vim /etc/kibana/kibana.yml
# Kibana is served by a back end server. This setting specifies the port to use.
# Listening port
server.port: 5601
# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
# Bind address; keep it on an internal/loopback address and put nginx in front as a reverse proxy
server.host: "127.0.0.1"
# Enables you to specify a path to mount Kibana at if you are running behind a proxy. This only affects
# the URLs generated by Kibana, your proxy is expected to remove the basePath value before forwarding requests
# to Kibana. This setting cannot end in a slash.
#server.basePath: ""
# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576
# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"
# The URL of the Elasticsearch instance to use for all your queries.
elasticsearch.url: "http://localhost:9200"
# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true
# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"
# The default application to load.
....
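Start Kibana and check that it responds on the loopback address before putting nginx in front of it (the /api/status endpoint returns Kibana's health):
systemctl start kibana
curl http://127.0.0.1:5601/api/status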
8. Reverse-proxy Kibana with nginx
$ cat /etc/nginx/conf.d/elk.conf
server {
listen 443 http2 ssl;
listen [::]:443 http2 ssl;
server_name *********;
ssl on;
ssl_certificate "**************";
ssl_certificate_key "/usr/local/certificate/************";
ssl_session_cache shared:SSL:1m;
ssl_session_timeout 10m;
ssl_ciphers ECDHE-RSA-AES128-GCM-SHA256:ECDHE:ECDH:AES:HIGH:!NULL:!aNULL:!MD5:!ADH:!RC4;
ssl_prefer_server_ciphers on;
#ssl_dhparam /etc/ssl/certs/dhparam.pem;
########################################################################
# from https://cipherli.st/ #
# and https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html #
########################################################################
# Disable preloading HSTS for now. You can use the commented out header line that includes
# the "preload" directive if you understand the implications.
#add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
add_header X-Frame-Options DENY;
add_header X-Content-Type-Options nosniff;
##################################
# Reverse proxy to the Kibana service over plain HTTP
location / {
proxy_pass http://127.0.0.1:5601;
}
}
# Redirect port 80 to 443
server {
listen 80;
server_name *********;
rewrite ^/(.*)$ https://*********/$1;
}
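Check the nginx configuration and reload it so the new vhost takes effect:
nginx -t && systemctl reload nginx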
9. Access Kibana and configure the index
After starting all the services, visit Kibana again, add an index pattern that matches the laravellog index, and check the result.
Done! From now on the developers can check the error logs themselves.
Summary:
- Because the number of servers was limited (I don't dare to mess around on the production machines), only two servers were used: one hosts the PHP/Laravel logs and the Filebeat service as the log source, and the other hosts Redis, Elasticsearch, Logstash and Kibana as the log processing server (with more machines, the listen addresses would need to change). After it runs for a while and proves stable, it can go to production.
- The listen addresses should all use internal addresses, to avoid being ***.
If you like what I write, you can follow my WeChat official account: Devops部落