ELK official downloads: https://www.elastic.co/downloads
Environment: CentOS 7.1, ELK version 6.2
es-node-1:
IP: 10.57.22.128
Installed components:
elasticsearch
kibana: front-end visualization
es head: browse ES index contents
logstash: receives logs
es-node-2:
IP: 10.57.22.126
Installed components:
elasticsearch
cerebro: view cluster status
# Install Java
yum install java -y
# Add both nodes to /etc/hosts on each machine
[root@10-57-22-128 elasticsearch]# cat /etc/hosts
10.57.22.126 es-node-2
10.57.22.128 es-node-1
# Disable iptables/firewalld and SELinux
systemctl stop firewalld
systemctl disable firewalld
iptables -F
vi /etc/selinux/config
SELINUX=disabled
setenforce 0
Install Elasticsearch
Download:
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.2.2.rpm
Install: rpm -ivh elasticsearch-6.2.2.rpm
Configuration file:
[root@10-57-22-128 elasticsearch]# cat /etc/elasticsearch/elasticsearch.yml | grep -v ^'#'
cluster.name: my-es
node.name: es-node-1
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
http.port: 9200
discovery.zen.ping.unicast.hosts: ["es-node-2", "es-node-1"]
Restart:
systemctl restart elasticsearch
Test access:
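A quick check from the shell (assuming ES is listening on port 9200 as configured above); the response should be a JSON document containing cluster_name "my-es", and _cat/nodes lists the nodes that have joined the cluster:
curl http://10.57.22.128:9200
curl http://10.57.22.128:9200/_cat/nodes?v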
Optional: install the elasticsearch-head plugin
1. Node.js
Download page: https://nodejs.org/en/download/
wget https://nodejs.org/dist/v8.10.0/node-v8.10.0-linux-x64.tar.xz
Extract: xz -d node-v8.10.0-linux-x64.tar.xz
tar -xvf node-v8.10.0-linux-x64.tar
Install: mv node-v8.10.0-linux-x64 /usr/local/node
Set environment variables: vi /etc/profile and append the following
#set for nodejs
export NODE_HOME=/usr/local/node
export PATH=$NODE_HOME/bin:$PATH
source /etc/profile
Verify the installation:
node -v
npm -v
2. elasticsearch-head
Download elasticsearch-head:
git clone git://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
npm install -g grunt-cli
grunt --version
npm install
npm install reported that phantomjs-prebuilt could not be installed, so install it manually:
npm install phantomjs-prebuilt --ignore-scripts
Re-running npm install then succeeded.
Edit the ES config file vi /etc/elasticsearch/elasticsearch.yml and add two lines (this enables CORS so head can talk to ES):
http.cors.enabled: true
http.cors.allow-origin: "*"
Restart the service:
systemctl restart elasticsearch.service
Pitfall: while downloading node earlier, the tarball had been placed in /usr/share/elasticsearch/plugins, which stopped ES from starting. Deleting the file and restarting fixed it; the log showed:
org.elasticsearch.bootstrap.StartupException: org.elasticsearch.bootstrap.BootstrapException: java.nio.file.FileSystemException: /usr/share/elasticsearch/plugins/node-v8.10.0-linux-x64.tar/plugin-descriptor.properties: Not a directory
Caused by: org.elasticsearch.bootstrap.BootstrapException: java.nio.file.FileSystemException: /usr/share/elasticsearch/plugins/node-v8.10.0-linux-x64.tar/plugin-descriptor.properties: Not a directory
Caused by: java.nio.file.FileSystemException: /usr/share/elasticsearch/plugins/node-v8.10.0-linux-x64.tar/plugin-descriptor.properties: Not a directory
Start head:
grunt server &
Open the web UI and confirm it looks normal.
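grunt server listens on port 9100 by default, so the head UI should be reachable at http://10.57.22.128:9100; point its connection box at http://10.57.22.128:9200 to browse the cluster.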
Install Kibana
Download:
wget https://artifacts.elastic.co/downloads/kibana/kibana-6.2.2-x86_64.rpm
Install: rpm -ivh kibana-6.2.2-x86_64.rpm
Configuration file:
[root@10-57-22-128 kibana]# cat kibana.yml | grep -v ^'#'
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://localhost:9200"
Start:
/etc/init.d/kibana start
systemctl enable kibana
Test access:
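With server.port 5601 and server.host 0.0.0.0 as configured above, Kibana should be reachable at:
http://10.57.22.128:5601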
Optional: Cerebro, for managing the ES cluster
Download:
wget https://github.com/lmenezes/cerebro/releases/download/v0.7.2/cerebro-0.7.2.zip
Extract: unzip cerebro-0.7.2.zip
Edit the configuration file:
cd cerebro-0.7.2/
vi conf/application.conf
secret = "ki:s:[[@=Ag?QI`W2jMwkY:eqvrJ]JqoJyi2axj3ZvOv^/KavOT4ViJSv?6YY4[N"
basePath = "/"
pidfile.path=/dev/null
rest.history.size = 50 // defaults to 50 if not specified
data.path = "./cerebro.db"
es = {
gzip = true
}
auth = {
type: basic
settings: {
username = "admin"
password = "1234"
}
}
hosts = [
{
host = "http://10.57.22.126:9200"
name = "my-es"
},
]
Run: nohup ./bin/cerebro -Dhttp.port=1234 -Dhttp.address=10.57.22.126 &
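With these options, cerebro should be reachable at http://10.57.22.126:1234; log in with the admin/1234 credentials set in the auth block, and it will connect to the my-es cluster defined under hosts.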
Install Logstash
Download and install:
wget https://artifacts.elastic.co/downloads/logstash/logstash-6.2.2.rpm
rpm -ivh logstash-6.2.2.rpm
Configuration file:
[root@10-57-22-128 conf.d]# cat /etc/logstash/logstash.conf
input {
tcp {
port => 514
type => syslog
}
udp {
port => 514
type => syslog
}
}
output {
elasticsearch { hosts => ["localhost:9200"] }
}
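This pipeline listens for syslog on TCP and UDP port 514 and forwards every event to the local Elasticsearch; with no index option set, the elasticsearch output in Logstash 6.x writes to daily logstash-YYYY.MM.dd indices by default.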
Install it as a service:
cd /usr/share/logstash/bin
[root@10-57-22-128 bin]# ./system-install
Successfully created system startup script for Logstash
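If the generated service works in your environment (on CentOS 7, system-install creates a systemd unit), it can then be managed with:
systemctl start logstash
systemctl enable logstash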
Optional: starting Logstash with the bundled service ran into problems, so supervisor was installed instead (the service issue above was later resolved). A likely cause is that the packaged service runs Logstash as the logstash user, which cannot bind to privileged ports below 1024 such as 514.
pip install supervisor
Generate the default configuration file:
echo_supervisord_conf > /etc/supervisord.conf
Edit the configuration file:
[root@10-57-22-128 conf.d]# cat /etc/supervisord.conf | grep -v ^'\;'
[unix_http_server]
file=/tmp/supervisor.sock ; the path to the socket file
[supervisord]
logfile=/tmp/supervisord.log ; main log file; default $CWD/supervisord.log
logfile_maxbytes=50MB ; max main logfile bytes b4 rotation; default 50MB
logfile_backups=10 ; # of main logfile backups; 0 means none, default 10
loglevel=info ; log level; default info; others: debug,warn,trace
pidfile=/tmp/supervisord.pid ; supervisord pidfile; default supervisord.pid
nodaemon=false ; start in foreground if true; default false
minfds=1024 ; min. avail startup file descriptors; default 1024
minprocs=200 ; min. avail process descriptors;default 200
[rpcinterface:supervisor]
supervisor.rpcinterface_factory = supervisor.rpcinterface:make_main_rpcinterface
[supervisorctl]
serverurl=unix:///tmp/supervisor.sock ; use a unix:// URL for a unix socket
;changed the section below
[include]
files = /etc/supervisord.d/*.ini
Create the logstash program config for supervisor:
[root@10-57-22-128 conf.d]# cat /etc/supervisord.d/logstash.ini
[program:logstash]
directory = /usr/share/logstash
command = /usr/share/logstash/bin/logstash -f /etc/logstash/logstash.conf -l /var/log/logstash/logstash.log
Start the service:
supervisord -c /etc/supervisord.conf
Push the new configuration to supervisord:
# supervisorctl update
Restart all programs defined in the configuration:
# supervisorctl reload
Check whether the service is listening:
ss -lnp | grep 514
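Before configuring the switch, a test message can be pushed in by hand (the message text below is just a placeholder); a logstash-* index should then appear in ES:
echo "<14>test: hello from es-node-1" | nc -u -w1 10.57.22.128 514
curl http://10.57.22.128:9200/_cat/indices?v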
Switch configuration
[~testare4-GE1/0/4]dis current-configuration | in info
info-center loghost source Vlanif99
info-center loghost 10.57.22.128 source-ip 172.16.200.6
On the switch, shut/undo shutdown an interface to generate log messages, then check Kibana.
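To browse the logs in Kibana, first create an index pattern matching logstash-* under Management > Index Patterns; the interface up/down messages should then show up in Discover.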