Logstash is an open-source log management tool.
Project site: http://logstash.net/
The Logstash installation uses the following components:
Server side:
The author prefers installing software from RPM packages. Pay attention to version numbers: don't chase the newest and greatest — the Elasticsearch version should match the Logstash version.
```
$ vi /etc/yum.repos.d/logstash.repo
[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

$ vi /etc/yum.repos.d/elasticsearch.repo
[elasticsearch-1.0]
name=Elasticsearch repository for 1.0.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.0/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1

$ vi /etc/yum.repos.d/nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

$ rpm -Uvh http://mirror.1000mbps.com/fedora-epel/6/i386/epel-release-6-8.noarch.rpm
$ yum -y install elasticsearch redis nginx logstash
```
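If you prefer not to open vi for each repo file, the same files can be written non-interactively with here-docs. A sketch for the logstash repo (written to the current directory here; on a real box the target is /etc/yum.repos.d/):

```shell
# Create the logstash yum repo file without an editor (same content as above).
cat > ./logstash.repo <<'EOF'
[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

# Quick sanity check that the file landed as expected.
grep baseurl ./logstash.repo
```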
```
$ wget https://download.elasticsearch.org/kibana/kibana/kibana-3.0.0.tar.gz
$ tar -xvzf kibana-3.0.0.tar.gz
$ mv kibana-3.0.0 /usr/share/kibana3
```
We need to tell Kibana where to find Elasticsearch. Open the configuration file and edit the elasticsearch parameter:
```
$ vi /usr/share/kibana3/config.js
```
Search for the "elasticsearch" parameter and modify it to fit your environment:
```
elasticsearch: "http://dev.kanbier.lan:9200",
```
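The same edit can be scripted with sed instead of vi. A sketch, shown here against a throwaway stand-in file (on a real install the target is /usr/share/kibana3/config.js, and the pattern assumes the elasticsearch line looks like the one above):

```shell
# Demonstrate the sed edit on a temporary stand-in for config.js.
cfg=$(mktemp)
printf 'elasticsearch: "http://localhost:9200",\n' > "$cfg"

# Point Kibana at the Elasticsearch node (GNU sed in-place edit).
sed -i 's|elasticsearch:.*|elasticsearch: "http://dev.kanbier.lan:9200",|' "$cfg"

grep 'elasticsearch:' "$cfg"
```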
You can also change the default_route parameter so that the logstash dashboard opens by default instead of the Kibana welcome page:
```
default_route: '/dashboard/file/logstash.json',
```
To access Kibana through the web interface:
```
$ wget https://raw.github.com/elasticsearch/kibana/master/sample/nginx.conf
$ mv nginx.conf /etc/nginx/conf.d/
$ vi /etc/nginx/conf.d/nginx.conf
server_name dev.kanbier.lan;
```
The nginx configuration is as follows:
```
#
# Nginx proxy for Elasticsearch + Kibana
#
# In this setup, we are password protecting the saving of dashboards. You may
# wish to extend the password protection to all paths.
#
# Even though these paths are being called as the result of an ajax request, the
# browser will prompt for a username/password on the first request
#
# If you use this, you'll want to point config.js at http://FQDN:80/ instead of
# http://FQDN:9200
#
server {
  listen *:80;

  server_name kibana.myhost.org;
  access_log /var/log/nginx/kibana.myhost.org.access.log;

  location / {
    root /usr/share/kibana3;
    index index.html index.htm;
  }

  location ~ ^/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_aliases$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/_nodes$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_search$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }
  location ~ ^/.*/_mapping {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
  }

  # Password protected end points
  location ~ ^/kibana-int/dashboard/.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
    }
  }
  location ~ ^/kibana-int/temp.*$ {
    proxy_pass http://127.0.0.1:9200;
    proxy_read_timeout 90;
    limit_except GET {
      proxy_pass http://127.0.0.1:9200;
      auth_basic "Restricted";
      auth_basic_user_file /etc/nginx/conf.d/kibana.myhost.org.htpasswd;
    }
  }
}
```
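The two password-protected locations reference an htpasswd file at /etc/nginx/conf.d/kibana.myhost.org.htpasswd, which the sample config does not create for you. One way to generate an entry without installing httpd-tools is openssl; the user name "admin" and the password below are placeholders:

```shell
# Create a basic-auth entry using openssl's apr1 (Apache MD5) password hash.
# On a real install, write to /etc/nginx/conf.d/kibana.myhost.org.htpasswd.
printf 'admin:%s\n' "$(openssl passwd -apr1 secret)" > ./kibana.htpasswd

cat ./kibana.htpasswd
```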
redis only needs one change: have it listen on the server's address:
```
$ vi /etc/redis.conf
bind 10.37.129.8
```
You can use the logstash-complex.conf file from the Logstash documentation; it is not very complex and contains:
```
$ vi /etc/logstash/conf.d/logstash-complex.conf
input {
  file {
    type => "syslog"
    # Wildcards work, here :)
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog" ]
    sincedb_path => "/opt/logstash/sincedb-access"
  }
  redis {
    host => "10.37.129.8"
    type => "redis-input"
    data_type => "list"
    key => "logstash"
  }
  syslog {
    type => "syslog"
    port => "5544"
  }
}

filter {
  grok {
    type => "syslog"
    match => [ "message", "%{SYSLOGBASE2}" ]
    add_tag => [ "syslog", "grokked" ]
  }
}

output {
  elasticsearch {
    host => "dev.kanbier.lan"
  }
}
```
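As a rough illustration of what the %{SYSLOGBASE2} pattern in the grok filter above pulls out of a syslog line (timestamp, host, program), here is an awk emulation — illustration only, grok's real pattern is far more thorough:

```shell
# Emulate, very roughly, the fields grok's %{SYSLOGBASE2} extracts
# from a classic syslog line: timestamp ($1-$3), host ($4), program ($5).
echo 'Dec 23 14:30:01 myhost sshd[1234]: Accepted password for user' |
awk '{ prog=$5; sub(/:$/, "", prog);
       printf "timestamp=%s %s %s host=%s program=%s\n", $1, $2, $3, $4, prog }'
```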
```
$ service redis start; chkconfig redis on
$ service elasticsearch start; chkconfig --add elasticsearch; chkconfig elasticsearch on
$ service logstash start; chkconfig logstash on
$ service nginx start; chkconfig nginx on
```
For rsyslog, you can now add these lines to /etc/rsyslog.conf:
```
# ### begin forwarding rule ###
# The statement between the begin ... end define a SINGLE forwarding
# rule. They belong together, do NOT split them. If you create multiple
# forwarding rules, duplicate the whole block!
# Remote Logging (we use TCP for reliable delivery)
#
# An on-disk queue is created for this action. If the remote host is
# down, messages are spooled to disk and sent when it is up again.
$WorkDirectory /var/lib/rsyslog # where to place spool files
$ActionQueueFileName fwdRule1   # unique name prefix for spool files
$ActionQueueMaxDiskSpace 1g     # 1gb space limit (use as much as possible)
$ActionQueueSaveOnShutdown on   # save messages to disk on shutdown
$ActionQueueType LinkedList     # run asynchronously
$ActionResumeRetryCount -1      # infinite retries if host is down
# remote host is: name/ip:port, e.g. 192.168.0.1:514, port optional
*.* @@10.37.129.8:5544
# ### end of the forwarding rule ###
```
If a firewall is in place, the ports this setup uses need to be opened (80 for nginx, 5544 for the logstash syslog input, and 9200 for Elasticsearch; 6379 as well if redis is reached remotely).
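A sketch of matching iptables rules, assuming the ports from this walkthrough (80 nginx, 5544 logstash syslog input, 6379 redis's default, 9200 Elasticsearch). The rules are written to a file here for review rather than applied; on a live CentOS 6 box you would run them and then `service iptables save`:

```shell
# Generate ACCEPT rules for each port this stack listens on.
rules=./open-ports.rules
for p in 80 5544 6379 9200; do
  echo "iptables -A INPUT -p tcp --dport $p -j ACCEPT"
done > "$rules"

cat "$rules"
```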
Translated from: http://www.denniskanbier.nl/blog/logging/installing-logstash-on-rhel-and-centos-6/