OS: CentOS Linux release 7.9.2009 (Core)
Host IP: 10.28.19.110
ELK version: 7.17.7
Docker version: 1.13.1
mkdir -p /opt/elk/{elasticsearch/{data,plugins},logstash/config}
The resulting directory layout:
/opt/elk
├── elasticsearch
│ ├── data
│ └── plugins
└── logstash
└── config
Since Elasticsearch 5.0 the security requirements have been tightened. The plugin directory and the data directory will be mounted from the host into the container, so grant permissions on /opt/elk/elasticsearch
ahead of time:
chmod -R 777 /opt/elk/elasticsearch/
tee /opt/elk/logstash/config/logstash.conf << \EOF
input {
  tcp {
    mode => "server"
    host => "0.0.0.0"
    port => 5041
    type => "amaxlog"
    codec => json_lines
  }
}
filter {
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss.SSS" ]
    target => "@timestamp"
  }
}
output {
  elasticsearch {
    hosts => ["10.28.19.110:9200"]
    index => "amaxlog-%{+YYYY.MM.dd}"
    codec => json
    action => "index"
  }
}
EOF
In the output above, hosts points at the Elasticsearch node and index sets the index name. The index is created per day so that logs can be split and cleaned up by date.
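Once the stack below is running and at least one log event has been shipped, the per-day indices can be listed through the _cat API; a minimal check, assuming the Elasticsearch address above:
# list the daily indices created by this pipeline, e.g. amaxlog-2023.07.07
curl "http://10.28.19.110:9200/_cat/indices/amaxlog-*?v"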
Create the docker-compose file:
tee /opt/elk/elk-docker-compose.yml << \EOF
version: '3'
services:
  elasticsearch:
    image: elasticsearch:7.17.7            # image
    container_name: elk_elasticsearch      # container name
    restart: always                        # start on boot and keep restarting on failure
    environment:
      - "cluster.name=elasticsearch"       # cluster name
      - "discovery.type=single-node"       # run as a single node
      - "ES_JAVA_OPTS=-Xms512m -Xmx1024m"  # JVM heap size
    volumes:
      - /opt/elk/elasticsearch/plugins:/usr/share/elasticsearch/plugins  # plugin directory mount
      - /opt/elk/elasticsearch/data:/usr/share/elasticsearch/data        # data directory mount
    ports:
      - 9200:9200
  kibana:
    image: kibana:7.17.7
    container_name: elk_kibana
    restart: always
    depends_on:
      - elasticsearch                      # start kibana after elasticsearch
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200  # address used to reach elasticsearch
      - I18N_LOCALE=zh-CN                  # Chinese UI for kibana
    ports:
      - 5601:5601
  logstash:
    image: logstash:7.17.7
    container_name: elk_logstash
    restart: always
    volumes:
      # mount the logstash pipeline configuration
      - /opt/elk/logstash/config/logstash.conf:/usr/share/logstash/pipeline/logstash.conf
    depends_on:
      - elasticsearch                      # start logstash after elasticsearch
    links:
      - elasticsearch:es                   # elasticsearch is reachable inside the container as "es"
    ports:
      - 5041:5041
EOF
Note: the port exposed by logstash here must match the port the logstash configuration listens on, otherwise no logs will be collected.
Start the stack:
docker-compose -f /opt/elk/elk-docker-compose.yml up -d
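Before opening the web UIs it is worth confirming that all three containers came up; a quick sketch:
# show container status for this stack
docker-compose -f /opt/elk/elk-docker-compose.yml ps
# follow the Elasticsearch logs if a container keeps restarting
docker logs -f elk_elasticsearch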
Verify Elasticsearch:
http://10.28.19.110:9200/
Verify Kibana:
http://10.28.19.110:5601/
If all went well, both Elasticsearch and Kibana should open normally.
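Logstash has no UI of its own, so a simple way to exercise the whole pipeline is to push one newline-delimited JSON event into the TCP input and then search for it. A minimal sketch, assuming bash's /dev/tcp support on the host; the field values are only illustrative:
# send a single JSON line to the Logstash TCP input on port 5041
echo '{"message":"pipeline smoke test","appname":"manual-test"}' > /dev/tcp/10.28.19.110/5041
# a few seconds later the event should be searchable in today's index
curl "http://10.28.19.110:9200/amaxlog-$(date +%Y.%m.%d)/_search?q=message:smoke&pretty"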
The following uses logback as an example of integrating an application with Logstash.
Add the dependency:
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>5.3</version>
</dependency>
Complete logback.xml example:
<configuration scan="true" scanPeriod="60 seconds" debug="false">
    <property name="logStashIP" value="10.28.19.110"/>
    <property name="logStashPort" value="5041"/>
    <property name="appName" value="profile-dev"/>
    <property name="pattern_script"
              value="${appName} %date{yyyy-MM-dd HH:mm:ss.SSS} [%thread] [%-5level] [%c{50}.%M\\(\\) : %line] - %msg%n"/>

    <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="ch.qos.logback.core.encoder.LayoutWrappingEncoder">
            <layout class="org.apache.skywalking.apm.toolkit.log.logback.v1.x.TraceIdPatternLogbackLayout">
                <pattern>${pattern_script}</pattern>
            </layout>
            <charset class="java.nio.charset.Charset">UTF-8</charset>
        </encoder>
    </appender>

    <appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
        <destination>${logStashIP:- }:${logStashPort:- }</destination>
        <encoder charset="UTF-8" class="net.logstash.logback.encoder.LogstashEncoder">
            <customFields>{"appname":"${appName}"}</customFields>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>

    <logger name="com.centaline" level="DEBUG"/>

    <appender name="ASYNC_STDOUT" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="STDOUT"/>
        <neverBlock>true</neverBlock>
        <includeCallerData>true</includeCallerData>
    </appender>

    <appender name="ASYNC_LOGSTASH" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="LOGSTASH"/>
        <neverBlock>true</neverBlock>
        <includeCallerData>true</includeCallerData>
    </appender>

    <root level="INFO">
        <appender-ref ref="ASYNC_STDOUT"/>
        <appender-ref ref="ASYNC_LOGSTASH"/>
    </root>
</configuration>
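After the application starts and writes a few log lines, you can confirm they are reaching Elasticsearch by filtering on the custom appname field; a minimal check, assuming the appName value used above:
# count today's documents tagged with the application's custom field
curl "http://10.28.19.110:9200/amaxlog-$(date +%Y.%m.%d)/_count?q=appname:profile-dev&pretty"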
Note: the log index must be suffixed with a date so it can be cleaned up by time, e.g. amaxlog-2023.07.07.
Write a shell script to clean up old log indices:
tee /opt/elk/clean_elk.sh << \EOF
#!/bin/bash
# @Author: 胡桃夹子
# @Date: 2023-07-10
# Keep the most recent N days of ELK log data
KEEP_DAYS=30
# Elasticsearch address
ES_SERVER=10.28.19.110:9200
# Log index name prefix, e.g. amaxlog-2023.07.07
LOG_INDEX_PREFIX=amaxlog

# Build the list of dates from N+1 to N+10 days ago ==> run once per day
function get_to_days()
{
    for i in $(seq 1 10);
    do
        THIS_DAY=$(date -d "$(($KEEP_DAYS+$i)) day ago" +%Y.%m.%d)
        DAY_ARR=( "${DAY_ARR[@]}" $THIS_DAY )
    done
    echo ${DAY_ARR[*]}
}

# capture the function output as an array
TO_DELETE_DAYS=(`get_to_days`)
for DAY in "${TO_DELETE_DAYS[@]}"
do
    echo "${LOG_INDEX_PREFIX}-${DAY} index will be deleted"
    URL=http://${ES_SERVER}/${LOG_INDEX_PREFIX}-${DAY}
    curl -XDELETE ${URL}
done
EOF
Variables in the script above (a dry-run sketch follows below):
KEEP_DAYS: number of days of log data to keep;
ES_SERVER: the Elasticsearch service address;
LOG_INDEX_PREFIX: the log index name prefix, e.g. amaxlog-2023.07.07;
for i in $(seq 1 10): the number 10 means each run only deletes the 10 days of indices just past the retention window; if there is an older backlog (say a year of data), increase it to 365.
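Before scheduling the script, it can help to preview which index names a run would target. This dry-run sketch only prints the names and deletes nothing, assuming KEEP_DAYS=30 as above:
# print the 10 index names that a run would try to delete today
KEEP_DAYS=30
for i in $(seq 1 10); do
  echo "amaxlog-$(date -d "$(($KEEP_DAYS+$i)) day ago" +%Y.%m.%d)"
done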
Make the script executable:
chmod +x /opt/elk/clean_elk.sh
Configure a scheduled task.
Edit the crontab:
crontab -e
Add an entry to run the cleanup daily at 1:00 AM:
0 1 * * * /opt/elk/clean_elk.sh >/dev/null 2>&1
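It can also be worth running the script once by hand and confirming the cron entry registered; a quick check:
# run the cleanup once manually, then verify the crontab entry
/opt/elk/clean_elk.sh
crontab -l | grep clean_elk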
At this point, ELK is installed, integrated with the application, and set up for scheduled index cleanup.