Master side: elasticsearch + logstash + redis + kibana

Slave side: logstash + nginx or logstash + rsyslog


1. The slaves collect nginx and syslog logs and ship them through logstash into redis on the master

2. Logstash on the master reads the logs out of redis and outputs them to elasticsearch; kibana then queries elasticsearch and displays the matching data


I. Environment setup

 1. Install the JDK

  wget http://download.oracle.com/otn-pub/java/jdk/8u102-b14/jdk-8u102-linux-x64.tar.gz?AuthParam=1473218169_4d538ded6eda268bfa110cc3f1af771b

  tar zxf jdk-8u102-linux-x64.tar.gz

  mv jdk1.8.0_102 /usr/local/java

  cat /etc/profile

JAVA_HOME=/usr/local/java
JRE_HOME=/usr/local/java/jre
PATH=/usr/local/java/jre/bin:/usr/local/java/bin:$PATH
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_HOME JRE_HOME PATH CLASSPATH

  source /etc/profile
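
  A quick way to confirm the new profile is in effect (the version string will match whichever JDK build was downloaded above):

  java -version        # should report java version "1.8.0_102"
  echo $JAVA_HOME      # should print /usr/local/java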

 2. Install redis

  wget http://download.redis.io/releases/redis-3.2.3.tar.gz

  tar zxf redis-3.2.3.tar.gz

  cd redis-3.2.3

  make 

  make install

  ./utils/install_server.sh 
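
  install_server.sh registers redis as a service on the default port 6379; a quick sanity check that it is accepting connections (assuming the defaults were kept at the prompts):

  redis-cli ping                                # expect: PONG
  redis-cli info server | grep redis_version   # expect: redis_version:3.2.3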


II. elasticsearch configuration

 1. Install elasticsearch

 wget https://download.elastic.co/elasticsearch/release/org/elasticsearch/distribution/tar/elasticsearch/2.4.0/elasticsearch-2.4.0.tar.gz

 tar zxf elasticsearch-2.4.0.tar.gz

 cd elasticsearch-2.4.0

 cat config/elasticsearch.yml    # config file

cluster.name: elk-test                           # cluster name (nodes must use the same name to join the same cluster)
node.name: server-102                            # node name
node.master: true                                # whether the node is master-eligible (default: true)
node.data: true                                  # whether the node stores data (default: true)
path.data: /data/ela/data                        # data path
path.logs: /data/ela/logs                        # log path
bootstrap.mlockall: true                         # lock process memory so it cannot be swapped out (renamed to bootstrap.memory_lock in 5.x)
network.host: 172.16.0.102                       # bind IP address (default: 0.0.0.0)
http.port: 9200                                  # HTTP port
node.max_local_storage_nodes: 1                  # allow at most 1 node to use this data path
index.number_of_shards: 5                        # number of shards per index (default: 5)
discovery.zen.minimum_master_nodes: 1            # minimum number of master-eligible nodes (set to 2 or more once the cluster has three or more nodes)
discovery.zen.ping.timeout: 5s                   # timeout when pinging other nodes
discovery.zen.ping.multicast.enabled: false      # disable multicast discovery and rely on unicast only
discovery.zen.ping.unicast.hosts: ["host1", "host2:port", "host3:port"]  # list of master hosts a newly started node contacts to discover the cluster

 mkdir -p /data/ela/data /data/ela/logs

 useradd elk

 chown -R elk.elk /data/ela/data /data/ela/logs /data/elasticsearch-2.4.0

 /data/elasticsearch-2.4.0/bin/elasticsearch    # (elasticsearch refuses to run as root by default; either start it as the elk user or pass -Des.insecure.allow.root=true to force a root start)
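
 With memory locking enabled, the elk user also needs permission to lock memory, otherwise the node logs a warning at startup. One common way to allow it, then start the node as elk in the background and confirm it answers on port 9200 (IP and port as configured above):

  echo "elk - memlock unlimited" >> /etc/security/limits.conf
  su - elk -c "/data/elasticsearch-2.4.0/bin/elasticsearch -d"
  curl http://172.16.0.102:9200        # should return a JSON banner containing cluster_name "elk-test"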

 2. Install plugins

  Head plugin (browse and manage node/index data)

  ./elasticsearch/bin/plugin install mobz/elasticsearch-head

  Kopf plugin (cluster management)

  ./elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf

  Bigdesk plugin (monitors CPU, memory, index and search activity, and HTTP connections)

  ./elasticsearch/bin/plugin install hlstudio/bigdesk

  Marvel plugin (management and monitoring, accessed through kibana)

  ./elasticsearch/bin/plugin install license

  ./elasticsearch/bin/plugin install marvel-agent 

  ./kibana/bin/kibana plugin --install elasticsearch/marvel/latest
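
  head, kopf and bigdesk are site plugins served directly by the node, so after (re)starting elasticsearch they should be reachable under the _plugin path (IP and port as configured above):

  http://172.16.0.102:9200/_plugin/head/
  http://172.16.0.102:9200/_plugin/kopf/
  http://172.16.0.102:9200/_plugin/bigdesk/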

  

III. logstash configuration

  1. Install logstash

  wget https://download.elastic.co/logstash/logstash/logstash-2.4.0.tar.gz

  tar zxf logstash-2.4.0.tar.gz

  cd logstash-2.4.0

  vim config/logstash.conf    # create the master-side config file

# read logs from redis
input {
    redis {
        host => "100.100.100.102"
        data_type => "list"
        key => "logstash:redis"
        type => "redis-input"
    }
}
# drop every log whose message does not contain 5.3.3 or down
filter {
    if [message] !~ /5.3.3|down/ {
        ruby {
            code => "event.cancel"
        }
    }
}
# use the built-in grok pattern to break the message into more fields
filter {
    grok {
        match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
}
# merge lines that do not start with [ into the previous event
filter {
    multiline {
        pattern => "^\["
        negate => true
        what => "previous"
    }
}
# output to elasticsearch and build an index per log type
output {
    if [type] == "syslog" {
        elasticsearch {
            hosts => "172.16.0.102:9200"
            index => "syslog-%{+YYYY.MM.dd}"
        }
    }

    if [type] == "nginx" {
        elasticsearch {
            hosts => "172.16.0.102:9200"
            index => "nglog-%{+YYYY.MM.dd}"
        }
    }

    # mail out any log whose message contains paramiko or simplejson
    if [message] =~ /paramiko|simplejson/ {
        email {
            to => "[email protected]"
            from => "[email protected]"
            contenttype => "text/plain; charset=UTF-8"
            address => "smtp.163.com"
            username => "[email protected]"
            password => "12344"
            subject => "Abnormal log on server %{host}"
            body => "%{@timestamp} %{type}: %{message}"
        }
    }
}

 ./bin/logstash -f config/logstash.conf    # start logstash on the master
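
 The same file can also be syntax-checked without starting a pipeline; logstash 2.x accepts a --configtest flag (or -t) and should print "Configuration OK" when the file parses:

  ./bin/logstash -f config/logstash.conf --configtest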


 vim config/logstash.conf    # create the slave-side config file (installation is the same as on the master)

# collect nginx and system logs
input {
    file {
        type => "nginx"
        path => "/usr/local/nginx/logs/access.log"
        add_field => { "ip" => "100.100.100.100" }
        start_position => "beginning"    # read the file from the beginning
    }

    syslog {
        type => "syslog"
        host => "100.100.100.100"
        port => "514"
    }

    file {
        type => "syslog"
        path => "/var/log/messages"
    }
}
# push the logs into redis on the master
output {
    redis {
        host => "100.100.100.102"
        port => "6379"
        data_type => "list"
        key => "logstash:redis"
    }
}

 ./bin/logstash -f config/logstash.conf    # start logstash on the slave
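
 Once the slave is shipping, the list in redis on the master should start filling (it may stay near 0 if the master-side logstash is already consuming); this can be checked from any host with redis-cli, using the key name and redis address configured above:

  redis-cli -h 100.100.100.102 llen logstash:redis          # number of events waiting to be consumed
  redis-cli -h 100.100.100.102 lrange logstash:redis 0 0    # peek at the oldest queued event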


IV. kibana configuration

 1. Install kibana

  wget https://download.elastic.co/kibana/kibana/kibana-4.6.1-linux-x86_64.tar.gz

  tar zxf kibana-4.6.1-linux-x86_64.tar.gz

  cd kibana-4.6.1-linux-x86_64

  cat config/kibana.yml

server.port: 5601                                  # listening port
server.host: "172.16.0.102"                        # listening address
elasticsearch.url: "http://172.16.0.102:9200"      # elasticsearch instance to connect to
kibana.index: ".kibana"                            # index kibana creates in elasticsearch for its own data

  ./bin/kibana
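
  kibana runs in the foreground by default; a simple way to keep it running in the background (the log path here is only an example) and then use the UI with the host and port set in kibana.yml above:

  nohup ./bin/kibana > /tmp/kibana.log 2>&1 &

  Then open http://172.16.0.102:5601 in a browser and add index patterns for syslog-* and nglog-* so the indices created by the master logstash show up.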