docker-compose installation
Download the binary into /usr/local/bin/ and make it executable. /usr/local/bin/ is the conventional location for locally installed third-party software, as opposed to /usr/bin/, which holds packages installed by the system:
curl -L https://github.com/docker/compose/releases/download/1.28.2/docker-compose-`uname -s`-`uname -m` -o /usr/local/bin/docker-compose
chmod +x /usr/local/bin/docker-compose
git installation
yum install git -y
Command completion installation
Auto-completion depends on the bash-completion package; if it is missing, install it manually:
yum -y install bash-completion
A successful install creates /usr/share/bash-completion/bash_completion; if that file is absent, the package is not installed on this system.
Then download the docker-compose completion script:
curl -L https://raw.githubusercontent.com/docker/compose/$(docker-compose version --short)/contrib/completion/bash/docker-compose -o /etc/bash_completion.d/docker-compose
If the download fails with curl: (7) Failed connect to raw.githubusercontent.com:443; Connection refused, the domain is being DNS-polluted in your region. The workaround is to add the real address of raw.githubusercontent.com to /etc/hosts so that it resolves locally:
1. Look up the real address at https://www.ipaddress.com/
2. Edit the local hosts file:
vim /etc/hosts
Add the entry (the address below is an example and may change over time):
199.232.28.133 raw.githubusercontent.com
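To confirm that the hosts entry is picked up, a quick check with Python (this only verifies local name resolution; the address printed should match the one added above):
import socket

# gethostbyname goes through the system resolver, which consults /etc/hosts,
# so this should print the address added above.
print(socket.gethostbyname("raw.githubusercontent.com"))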
git clone https://github.com/deviantony/docker-elk.git
Modify the following configuration files:
elasticsearch/config/elasticsearch.yml
## Default Elasticsearch configuration from Elasticsearch base image.
## https://github.com/elastic/elasticsearch/blob/master/distribution/docker/src/docker/config/elasticsearch.yml
#
cluster.name: "docker-cluster"
network.host: 0.0.0.0
node.name: node-1
node.master: true
http.cors.enabled: true
http.cors.allow-origin: "*"
## X-Pack settings
## see https://www.elastic.co/guide/en/elasticsearch/reference/current/setup-xpack.html
#
xpack.license.self_generated.type: trial
xpack.security.enabled: true
xpack.monitoring.collection.enabled: true
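Once the Elasticsearch container is up, a quick way to confirm that X-Pack security is active is to query the cluster health with the built-in credentials. This is a minimal sketch assuming the default elastic/changeme account and that Elasticsearch is reachable on localhost:9200; substitute your own host and password:
import requests

# With xpack.security.enabled, an unauthenticated request returns 401;
# with valid credentials the health endpoint reports green/yellow/red.
resp = requests.get(
    "http://localhost:9200/_cluster/health",
    auth=("elastic", "changeme"),
    timeout=10,
)
resp.raise_for_status()
print(resp.json()["status"])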
kibana/config/kibana.yml
## Default Kibana configuration from Kibana base image.
## https://github.com/elastic/kibana/blob/master/src/dev/build/tasks/os_packages/docker_generator/templates/kibana_yml.template.ts
#
server.name: kibana
server.host: 0.0.0.0
elasticsearch.hosts: [ "http://elasticsearch_IP:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
## X-Pack security credentials
#
elasticsearch.username: elastic
elasticsearch.password: changeme
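A similar check against Kibana's status API shows whether it has finished connecting to Elasticsearch. A sketch assuming Kibana is published on the default 5601 port and accepts the same elastic/changeme credentials; the exact JSON layout varies between Kibana versions:
import requests

# While Kibana still shows "Kibana server is not ready yet", the status
# endpoint returns a non-200 response; once it is ready it returns JSON.
resp = requests.get(
    "http://localhost:5601/api/status",
    auth=("elastic", "changeme"),
    timeout=10,
)
print(resp.status_code)
if resp.ok:
    print(resp.json().get("status", {}).get("overall"))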
logstash/config/logstash.yml
## Default Logstash configuration from Logstash base image.
## https://github.com/elastic/logstash/blob/master/docker/data/logstash/config/logstash-full.yml
#
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: [ "http://elasticsearch_IP:9200" ]
## X-Pack security credentials
#
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
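Logstash itself can be checked through its monitoring API on the 9600 port published in docker-compose.yml. A sketch assuming the stack runs on localhost; a 200 response listing the loaded pipelines means Logstash started and read its configuration:
import requests

# The node API requires no authentication; it lists the loaded pipelines.
resp = requests.get("http://localhost:9600/_node/pipelines", timeout=10)
resp.raise_for_status()
print(list(resp.json()["pipelines"].keys()))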
logstash/pipeline/logstash.conf
input {
  beats {
    port => 5044
  }
  tcp {
    port => 5000
    type => "tcp"
  }
  udp {
    port => 5140
    type => "udp"
  }
}
## Add your filters / logstash plugins configuration here
output {
  if [type] == "tcp" {
    elasticsearch {
      hosts => "IP:9200"
      user => "xxx"
      password => "xxx"
      ecs_compatibility => disabled
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "udp" {
    elasticsearch {
      hosts => "IP:9200"
      user => "xxx"
      password => "xxx"
      ecs_compatibility => disabled
      index => "udp_syslog-%{+YYYY.MM.dd}"
    }
  }
}
With the configuration above, when TCP data arrives on port 5000 only the syslog index receives it and udp_syslog does not, but when UDP data arrives on port 5140 both the syslog and udp_syslog indices receive it.
Workaround:
Create two separate conf files in the logstash/pipeline/ directory, one for the TCP port and one for the UDP port (see logstash_tcp.conf and logstash_udp.conf below).
[root@localhost docker-elk]# docker-compose up -d
Building with native build. Learn about native build in Compose here: https://docs.docker.com/go/compose-native-build/
Creating docker-elk_elasticsearch_1 ... done
Creating docker-elk_logstash_1 ... done
Creating docker-elk_kibana_1 ... done
[root@localhost docker-elk]# docker-compose down
Stopping docker-elk_kibana_1 ... done
Stopping docker-elk_logstash_1 ... done
Stopping docker-elk_elasticsearch_1 ... done
Removing docker-elk_kibana_1 ... done
Removing docker-elk_logstash_1 ... done
Removing docker-elk_elasticsearch_1 ... done
Removing network docker-elk_elk
[root@localhost docker-elk]# docker-compose stop
Stopping docker-elk_logstash_1 ... done
Stopping docker-elk_kibana_1 ... done
Stopping docker-elk_elasticsearch_1 ... done
[root@localhost docker-elk]# docker-compose start
Starting elasticsearch ... done
Starting logstash ... done
Starting kibana ... done
up, down, stop, and start can each be followed by a service name to operate on that service alone.
The "Kibana server is not ready yet" error
This "not ready" message from Kibana is usually caused by a problem with the Kibana indices in Elasticsearch.
Workaround:
curl -u elastic:changeme 'localhost:9200/_cat/indices?v' // note: with X-Pack security enabled, requests must include the username and password
curl -u elastic:changeme -XDELETE 'localhost:9200/.kibana*'
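The same cleanup can be scripted; a sketch assuming the default credentials and a local Elasticsearch. Deleting the .kibana* indices removes saved Kibana objects (they are recreated empty on the next Kibana start), so only use this when recovering from the error:
import requests

AUTH = ("elastic", "changeme")
BASE = "http://localhost:9200"

# List the current indices, then drop the Kibana system indices so that
# Kibana can recreate them when it restarts.
print(requests.get(f"{BASE}/_cat/indices?v", auth=AUTH, timeout=10).text)
resp = requests.delete(f"{BASE}/.kibana*", auth=AUTH, timeout=10)
print(resp.status_code, resp.text)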
Logstash conf
logstash/pipeline/logstash_tcp.conf // TCP port 5000
input {
  tcp {
    port => 5000
    type => "tcp"
  }
}
## Add your filters / logstash plugins configuration here
output {
  if [type] == "tcp" {
    elasticsearch {
      hosts => "192.168.6.151:9200"
      user => "elastic"
      password => "changeme"
      ecs_compatibility => disabled
      index => "syslog-%{+YYYY.MM.dd}"
    }
  }
}
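To exercise the TCP pipeline, a minimal sketch that sends one newline-terminated line to port 5000 (the host 192.168.6.151 is the one used in the example configuration; adjust it to your own):
import socket

# The tcp input reads line-delimited messages by default, so one
# newline-terminated message should appear in the syslog-YYYY.MM.dd index.
with socket.create_connection(("192.168.6.151", 5000), timeout=5) as sock:
    sock.sendall(b"test message over tcp\n")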
logstash/pipeline/logstash_udp.conf // UDP port 5140
input {
  udp {
    port => 5140
    type => "udp"
  }
}
## Add your filters / logstash plugins configuration here
output {
  if [type] == "udp" {
    elasticsearch {
      hosts => "192.168.6.151:9200"
      user => "elastic"
      password => "changeme"
      ecs_compatibility => disabled
      index => "udp_syslog-%{+YYYY.MM.dd}"
    }
  }
}
docker-compose.yml // note: add the 5140 UDP port mapping to the logstash service
logstash:
  build:
    context: logstash/
    args:
      ELK_VERSION: $ELK_VERSION
  volumes:
    - type: bind
      source: ./logstash/config/logstash.yml
      target: /usr/share/logstash/config/logstash.yml
      read_only: true
    - type: bind
      source: ./logstash/pipeline
      target: /usr/share/logstash/pipeline
      read_only: true
  ports:
    - "5044:5044"
    - "5000:5000/tcp"
    - "5000:5000/udp"
    - "5140:5140/udp"
    - "9600:9600"
  environment:
    LS_JAVA_OPTS: "-Xmx256m -Xms256m"
  networks:
    - elk
  # network_mode: bridge  # use the default docker0 bridge instead
  depends_on:
    - elasticsearch
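After docker-compose up -d, a quick sketch to confirm the published TCP ports are reachable; the host 192.168.6.151 is the one used in the examples. The UDP port 5140 cannot be probed this way, so use the test script below for it:
import socket

# connect_ex returns 0 when a TCP connection can be established.
HOST = "192.168.6.151"
for port in (5044, 5000, 9600):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(3)
        print(port, "open" if s.connect_ex((HOST, port)) == 0 else "closed")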
Once the configuration above is in place and the containers are running, the following Python script can be used to test the UDP syslog input:
import logging
import logging.handlers  # logging.handlers must be imported separately
logger = logging.getLogger()
# SysLogHandler with a (host, port) tuple sends syslog messages over UDP,
# matching the 5140/udp input defined in logstash_udp.conf.
fh = logging.handlers.SysLogHandler(('192.168.6.151', 5140), logging.handlers.SysLogHandler.LOG_AUTH)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
fh.setFormatter(formatter)
logger.addHandler(fh)
logger.warning("msg4")
logger.error("msg4")
logstash.conf filters: grok parsing and dissect splitting (collecting and analyzing logs from Huawei switches)