Log Analysis with ELK on RedHat/CentOS 7

How it works

The following diagram shows how this ecosystem works:

[Figure 1: ELK ecosystem diagram]

  1. The logstash-forwarder reads as many local log files as you configure it for and sends them, encrypted, to the logstash server (port 5000), using the logstash certificate file.

  2. The logstash server receives the logs and processes the different queues to store the data in local folders.

  3. Elasticsearch performs different operations (optimizing the data structures, creating search indexes, grouping information…) to provide a better experience when accessing the information.

  4. Kibana reads the logstash data structures and presents them to the users using custom layouts, dashboards and filters.

This is pretty much how these different open-source projects work together. Of course, this is a very high-level diagram and there is a huge world behind each project. In this post we are going to see how to install and configure a single RHEL 7 ELK server to centralize and exploit our logs, and how to prepare the log clients to send their logs to the ELK server using logstash-forwarder.

Index

I have divided the topic into six steps:

  1. Preparing the environment

  2. Installing and configuring Elasticsearch

  3. Installing and configuring Kibana

  4. Installing and configuring Logstash

  5. Install and configure logstash-forwarder on your hosts/clients

  6. Mastering ELK

 

Preparing the environment


I’ve chosen Red Hat Enterprise Linux 7 because it will probably be the standard enterprise Linux distribution for the next 5-7 years. Of course, you can perform the same actions using CentOS or Fedora 20.

In my case, I have a minimal installation of RHEL7, subscribed to the following repositories (I use Red Hat Satellite 6 as deployment, software and configuration management tool):

$ yum repolist
Loaded plugins: package_upload, product-id, subscription-manager
repo id                                               repo name                                                              status
rhel-7-server-rh-common-rpms/x86_64                   Red Hat Enterprise Linux 7 Server - RH Common (RPMs)                      68
rhel-7-server-rpms/x86_64                             Red Hat Enterprise Linux 7 Server (RPMs)                               4.817
repolist: 4.885

And firewalld and SELinux enabled by default:

$ firewall-cmd --list-all
public (default, active)
  interfaces: eth0
  sources: 
  services: dhcpv6-client ssh
  ports:
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules:

$ sestatus
SELinux status:                 enabled
SELinuxfs mount:                /sys/fs/selinux
SELinux root directory:         /etc/selinux
Loaded policy name:             targeted
Current mode:                   enforcing
Mode from config file:          enforcing
Policy MLS status:              enabled
Policy deny_unknown status:     allowed
Max kernel policy version:      28

You must install java-1.7.0-openjdk, the Apache web server and the SELinux policy core utilities (the latter only if you want to keep SELinux enabled, because you will need to restore contexts in later steps):

$ yum install -y java-1.7.0-openjdk httpd policycoreutils-python
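A quick way to confirm that the Java runtime is in place (the exact update number will vary):

$ java -version  # should report an OpenJDK 1.7.0 runtime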

Also, since firewalld is enabled, you need to open the required ports while keeping the rest of the server locked down:

$ firewall-cmd --add-service=http --permanent
$ firewall-cmd --permanent --add-port=9200/tcp
$ firewall-cmd --permanent --add-port=5000/tcp
$ firewall-cmd --reload

$ firewall-cmd --list-all
public (default, active)
  interfaces: eth0
  sources: 
  services: dhcpv6-client http ssh
  ports: 9200/tcp 5000/tcp
  masquerade: no
  forward-ports: 
  icmp-blocks: 
  rich rules:

Installing and configuring Elasticsearch


The first task is to create the Elasticsearch repo file on your server, pointing to the official Elasticsearch distribution:

$ rpm --import http://packages.elasticsearch.org/GPG-KEY-elasticsearch
$ cat > /etc/yum.repos.d/elasticsearch.repo << EOF
[elasticsearch-1.3]
name=Elasticsearch repository for 1.3.x packages
baseurl=http://packages.elasticsearch.org/elasticsearch/1.3/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

Then, install the software:

$ yum install -y elasticsearch

And perform a minimal configuration:

  • Disabling dynamic scripts:

$ cat >> /etc/elasticsearch/elasticsearch.yml << EOF
# Custom config parameters
script.disable_dynamic: true
EOF

  • Restricting external access: to do it, look for the property ‘network.host‘, uncomment it and set it to ‘localhost‘.

  • Disable multicast: find ‘discovery.zen.ping.multicast.enabled: false‘ and uncomment it. The resulting excerpt is shown below.
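
After these edits, the relevant part of ‘/etc/elasticsearch/elasticsearch.yml‘ should look roughly like this:

# Custom config parameters
script.disable_dynamic: true
network.host: localhost
discovery.zen.ping.multicast.enabled: false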

Finally, configure systemd to start the daemon at boot time and start the service:

$ systemctl daemon-reload
$ systemctl enable elasticsearch.service
$ systemctl start elasticsearch.service
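
To verify that Elasticsearch is up and answering requests (it may take a few seconds to start listening), query it locally; it should return a small JSON document with the node name, version and a "status" of 200:

$ curl -X GET 'http://localhost:9200/'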

Note: This is a default installation. To review the default directory layout, please refer to the official Elasticsearch documentation.

Installing and configuring Kibana


Kibana doesn’t have a repository or RPM available, but we can download and unpack it with the following commands:

$ wget -P /tmp/ https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz
$ tar xvf /tmp/kibana-3.1.0.tar.gz; rm -f /tmp/kibana-3.1.0.tar.gz

Next, edit the ‘kibana-3.1.0/config.js‘ config file, find the line that specifies the Elasticsearch server URL, and replace the port number 9200 with 80:

elasticsearch: "http://"+window.location.hostname+":80",

Move the entire directory to /var/www/html/ and fix the SELinux context:

$ mv kibana-3.1.0/ /var/www/html/kibana3
$ restorecon -R /var/www/html/

Create the Apache VirtualHost configuration file for the kibana3 service:

<VirtualHost elk-rhel7.example.com:80>
  ServerName elk-rhel7.example.com

  DocumentRoot /var/www/html/kibana3
  <Directory /var/www/html/kibana3>
    Allow from all
    Options -Multiviews
  </Directory>

  LogLevel debug
  ErrorLog /var/log/httpd/error_log
  CustomLog /var/log/httpd/access_log combined

  # Set global proxy timeouts
  <Proxy http://127.0.0.1:9200>
    ProxySet connectiontimeout=5 timeout=90
  </Proxy>

  # Proxy for _aliases and .*/_search
  <LocationMatch "^/(_nodes|_aliases|.*/_aliases|_search|.*/_search|_mapping|.*/_mapping)$">
    ProxyPassMatch http://127.0.0.1:9200/$1
    ProxyPassReverse http://127.0.0.1:9200/$1
  </LocationMatch>

  # Proxy for kibana-int/{dashboard,temp} stuff (if you don't want auth on /, then you will want these to be protected)
  <LocationMatch "^/(kibana-int/dashboard/|kibana-int/temp)(.*)$">
    ProxyPassMatch http://127.0.0.1:9200/$1$2
    ProxyPassReverse http://127.0.0.1:9200/$1$2
  </LocationMatch>

  <Location />
    AuthType Basic
    AuthBasicProvider file
    AuthName "Restricted"
    AuthUserFile /etc/httpd/conf.d/kibana-htpasswd
    Require valid-user
  </Location>
</VirtualHost>

Note that my ELK server is ‘elk-rhel7.example.com‘ and the document root is ‘/var/www/html/kibana3‘; adjust these values to your own environment.

Move the VirtualHost configuration file to the Apache configuration folder (and fix the SELinux labels):

$ mv kibana3.conf /etc/httpd/conf.d/
$ restorecon -R /etc/httpd/conf.d/
$ semanage port -a -t http_port_t -p tcp 9200
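
At this point it is worth validating the Apache configuration before going any further; ‘httpd -t‘ performs a syntax check and should print ‘Syntax OK‘:

$ httpd -t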

If you want to protect Kibana from unauthorized access, add an htpasswd entry for your user (for example ‘admin’):

$ htpasswd -c /etc/httpd/conf.d/kibana-htpasswd admin
New password:
Re-type new password:
Adding password for user admin
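
Note that the -c flag creates (and overwrites) the password file, so omit it when adding further users; the user name ‘operator’ below is just an example:

$ htpasswd /etc/httpd/conf.d/kibana-htpasswd operator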

And finally, enable the service at boot time and start it:

$ systemctl enable httpd
$ systemctl start httpd; systemctl status httpd

Installing and configuring Logstash


The last step is to install and configure Logstash, the component responsible for receiving and processing log events.
First, create the repo file to get access to the latest logstash version:

$ cat > /etc/yum.repos.d/logstash.repo << EOF
[logstash-1.4]
name=logstash repository for 1.4.x packages
baseurl=http://packages.elasticsearch.org/logstash/1.4/centos
gpgcheck=1
gpgkey=http://packages.elasticsearch.org/GPG-KEY-elasticsearch
enabled=1
EOF

and install it:

$ yum install -y logstash

Since we are going to use Logstash Forwarder to ship logs from our servers to our Logstash server, we need to create an SSL certificate and key pair. The certificate is used by the Logstash Forwarder to verify the identity of the Logstash server.
Generate the SSL certificate and private key in the appropriate locations (/etc/pki/tls/…) with the following commands:

$ cd /etc/pki/tls
$ sudo openssl req -x509 -batch -nodes -days 3650 -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

NOTE: The logstash-forwarder.crt file will be copied to all of the servers that will send logs to Logstash.
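
If you want to double-check what was generated, openssl can print the certificate subject and validity period:

$ openssl x509 -in /etc/pki/tls/certs/logstash-forwarder.crt -noout -subject -dates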

Make the certificate available via HTTP (so it can be downloaded by any host that is going to install logstash-forwarder):

$ mkdir /var/www/html/kibana3/pub
$ cp /etc/pki/tls/certs/logstash-forwarder.crt /var/www/html/kibana3/pub/
$ restorecon -R /var/www/html/kibana3

NOTE: From now on, the certificate will be available for every client at: http://elk-rhel7.example.com/pub/logstash-forwarder.crt

Create the configuration file for the lumberjack protocol (which is used by Logstash forwarders):

$ cat > /etc/logstash/conf.d/01-lumberjack-input.conf << EOF
input {
  lumberjack {
    port => 5000
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
EOF

This configuration file specifies a lumberjack input that will listen on tcp port 5000, and it will use the SSL certificate and private key that we created earlier.

Now let’s create a configuration file called 10-syslog.conf, where we will add a filter for syslog messages:

$ cat > /etc/logstash/conf.d/10-syslog.conf << EOF
filter {
  if [type] == "syslog" {
    grok {
      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }
      add_field => [ "received_at", "%{@timestamp}" ]
      add_field => [ "received_from", "%{host}" ]
    }
    syslog_pri { }
    date {
      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]
    }
  }
}
EOF

This filter looks for logs that are labeled as “syslog” type (by a logstash-forwarder) and uses “grok” to parse incoming syslog lines into a structured, queryable form.
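
As a rough illustration (the values are made up), a syslog line like the following would be split into these structured fields by the grok pattern above:

# Incoming syslog line:
Sep 10 12:34:56 webserver01 sshd[2341]: Accepted password for admin from 10.0.0.5

# Fields extracted by the grok pattern (approximately):
syslog_timestamp => "Sep 10 12:34:56"
syslog_hostname  => "webserver01"
syslog_program   => "sshd"
syslog_pid       => "2341"
syslog_message   => "Accepted password for admin from 10.0.0.5"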

Lastly, we will create a configuration file called 30-lumberjack-output.conf, that basically configures Logstash to store the logs in Elasticsearch:

$ cat > /etc/logstash/conf.d/30-lumberjack-output.conf << EOF
output {
  elasticsearch { host => localhost }
  stdout { codec => rubydebug }
}
EOF

One of the last steps is to restart the logstash service to apply the changes (please be aware that logstash is not systemd compliant, so the classic SysV tools are used):

$ chkconfig logstash on; service logstash restart
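
If the service does not come up cleanly, logstash 1.4 can validate the pipeline files without starting the agent (assuming the package installed logstash under /opt/logstash):

$ /opt/logstash/bin/logstash agent --configtest -f /etc/logstash/conf.d/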

And finally, share the following files from your elk-server to any potential host-client in your network:

$ wget -P /var/www/html/kibana3/pub/ http://packages.elasticsearch.org/logstashforwarder/centos/logstash-forwarder-0.3.1-1.x86_64.rpm
$ wget -P /var/www/html/kibana3/pub/ http://logstashbook.com/code/4/logstash_forwarder_redhat_init
$ wget -P /var/www/html/kibana3/pub/ http://logstashbook.com/code/4/logstash_forwarder_redhat_sysconfig

$ restorecon -R /var/www/html/kibana3/

Where:

  • logstash-forwarder-0.3.1-1.x86_64.rpm: Package for logstash-forwarder agent.

  • logstash_forwarder_redhat_init: Example of init.d script for logstash-forwarder.

  • logstash_forwarder_redhat_sysconfig: Config file for logstash-forwarder.

Install and configure logstash-forwarder on your hosts/clients


The main reason to share the files described above is to make the following steps easy to perform.
First, install the Logstash Forwarder package:

$ wget -P /tmp/ --user=<user> --password=<pass> http://elk-rhel7.example.com/pub/logstash-forwarder-0.3.1-1.x86_64.rpm
$ yum localinstall /tmp/logstash-forwarder-0.3.1-1.x86_64.rpm
$ rm -f /tmp/logstash-forwarder-0.3.1-1.x86_64.rpm

Then, download the logstash-forwarder init script and config file:

$ wget -O /etc/init.d/logstash-forwarder --user=<user> --password=<pass> http://elk-rhel7.example.com/pub/logstash_forwarder_redhat_init
$ chmod +x /etc/init.d/logstash-forwarder
$ wget -O /etc/sysconfig/logstash-forwarder --user=<user> --password=<pass> http://elk-rhel7.example.com/pub/logstash_forwarder_redhat_sysconfig

Copy the SSL certificate into the appropriate location (/etc/pki/tls/certs):

$ wget -P /etc/pki/tls/certs/ --user=<user> --password=<pass> http://elk-rhel7.example.com/pub/logstash-forwarder.crt

And create (or download from logstash-server at http://elk-rhel7.example.com/pub/logstash-forwarder) the logstash-forwarder config file:

$ mkdir -p /etc/logstash-forwarder

$ cat > /etc/logstash-forwarder/logstash-forwarder.conf << EOF
{
  "network": {
    "servers": [ "192.168.100.10:5000" ],
    "timeout": 15,
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt"
  },
  "files": [
    {
      "paths": [
        "/var/log/messages",
        "/var/log/secure"
       ],
      "fields": { "type": "syslog" }
    }
   ]
}
EOF

NOTE: The internal IP-Address of my logstash-server is 192.168.100.10.

This configures Logstash Forwarder to connect to your Logstash server on port 5000 (the port we specified an input for earlier), using the SSL certificate that we created earlier.
The paths section specifies which log files to send (here we specify messages and secure), and the type field marks these logs as type “syslog” (which is the type our filter is looking for).

NOTE: This is where you would add more files/types to configure Logstash Forwarder to ship other log files to Logstash on port 5000; see the example entry below.
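
For example, to also ship Apache access logs you could append another entry to the "files" array; the ‘apache-access’ type below is just an example name and would need a matching filter on the logstash server:

    {
      "paths": [ "/var/log/httpd/access_log" ],
      "fields": { "type": "apache-access" }
    }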

Finally, let’s enable Logstash Forwarder at boot time and start the service:

$ chkconfig --add logstash-forwarder
$ service logstash-forwarder start

You should now see log events being forwarded to your ELK server by accessing the Kibana dashboard (in my case):

http://elk-rhel7.example.com/index.html#/dashboard/file/logstash.json

[Figure 2: Kibana logstash dashboard showing the forwarded log events]
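
If the dashboard stays empty, a quick sanity check on the ELK server is to list the Elasticsearch indices; once events arrive you should see daily logstash-YYYY.MM.DD indices being created:

$ curl 'http://localhost:9200/_cat/indices?v'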

Mastering ELK


If you want to go further (remember this is an introduction to ELK, not a deep dive), I suggest you visit the Elasticsearch Resources page and register for one of the scheduled trainings in your hometown or close to it:

  • Elasticsearch Core Outline (2 days).

  • Getting started workshop (1 day).

Also, there is an O’Reilly book about to be released: Elasticsearch: The Definitive Guide.

