ELK deployment notes on CentOS (quick walkthrough)

All components are version 8.1.3.

Elasticsearch installation:

1. Download and extract

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.1.3-linux-x86_64.tar.gz
tar -zxf elasticsearch-8.1.3-linux-x86_64.tar.gz
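Optionally, verify the download before unpacking: Elastic publishes a .sha512 checksum file next to every artifact. A quick local demo of the mechanism (the /tmp files are throwaway; the real check is shown in the comments):

```shell
# Demo of sha512sum -c on a throwaway file
echo hello > /tmp/demo.txt
sha512sum /tmp/demo.txt > /tmp/demo.txt.sha512
sha512sum -c /tmp/demo.txt.sha512

# For the real archive:
# wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-8.1.3-linux-x86_64.tar.gz.sha512
# sha512sum -c elasticsearch-8.1.3-linux-x86_64.tar.gz.sha512
```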

2. Configuration

2.1 JVM heap settings (elasticsearch-8.1.3/config/jvm.options); keep -Xms and -Xmx equal:

-Xms1g
-Xmx1g

2.2 Elasticsearch settings

elasticsearch-8.1.3/config/elasticsearch.yml

#Data storage path
path.data: /usr/local/elk8.1/data/1
#Log storage path
path.logs: /usr/local/elk8.1/logs/1
#Bind address; 0.0.0.0 makes the node reachable from anywhere
network.host: 0.0.0.0

2.3 Elasticsearch must run as a non-root user

#Create the group and user (note: useradd -p expects a pre-encrypted hash,
#so set the password separately with passwd)
groupadd elsearch
useradd elsearch -g elsearch
passwd elsearch
chown -R elsearch:elsearch elasticsearch-8.1.3

2.4 Kernel and limit settings (needed on some machines)

#Edit the system settings as root and append the lines below at the end of the file
vim /etc/sysctl.conf
#fs.file-max: system-wide maximum number of open file descriptors; 655360 or higher is recommended
fs.file-max=655360
#vm.max_map_count limits how many VMAs (virtual memory areas) a process may own; affects Java thread/mmap usage
vm.max_map_count = 262144

vim /etc/security/limits.conf
#Maximum number of open file descriptors (soft limit)
* soft nofile 65535
#Maximum number of open file descriptors (hard limit)
* hard nofile 65535
#Maximum number of processes per user (soft limit)
* soft nproc 65535
#Maximum number of processes per user (hard limit)
* hard nproc 65535



#CentOS 7 specific: override the soft limits [must be done]
vim /etc/security/limits.d/20-nproc.conf
*          soft    nproc     40960
elsearch   soft    nproc     65536
elsearch   soft    nofile    131072

#Apply the sysctl settings (the limits.conf changes take effect at the next login):
sysctl -p
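After logging back in as the elsearch user, the settings above can be sanity-checked; the expected numbers are the values set above:

```shell
# Kernel settings applied by sysctl -p, read back from /proc
cat /proc/sys/vm/max_map_count    # expect 262144 after the change
cat /proc/sys/fs/file-max         # expect 655360 after the change

# Per-session limits from limits.conf (only take effect on a fresh login)
ulimit -n                         # max open file descriptors
ulimit -u                         # max user processes
```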

2.5 Cluster configuration

Every instance's configuration file looks roughly like the one below; only the node name, discovery settings, data and log directories, and ports need to change per instance.

vi config/elasticsearch.yml
#Cluster name
cluster.name: elasticsearch
#Node name
node.name: node-1
#Node roles (node.master was removed in 8.x; node.roles replaces it)
node.roles: [ master, data ]
#(node.max_local_storage_nodes was also removed in 8.0; give each node its own data path)
#Data directory
path.data: /usr/local/elk8.1/data/2
#Log directory
path.logs: /usr/local/elk8.1/logs/2
#Bind address (0.0.0.0 = no restriction)
network.host: 192.168.86.173
#HTTP port
http.port: 9202
#Transport (node-to-node) port
transport.port: 9302
#Node discovery
discovery.seed_hosts: ["192.168.86.173:9300","192.168.86.173:9301","192.168.86.173:9302"]
#Nodes eligible to become the initial master (use node names here in 8.x, not host:port)
cluster.initial_master_nodes: ["node-1"]
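The other instances need only the per-node keys changed. An illustrative fragment for a second node on the same host (the port and path numbering is an assumption following the pattern above):

```yaml
# node-2 on the same host: everything else is identical to node-1's file
node.name: node-2
path.data: /usr/local/elk8.1/data/3
path.logs: /usr/local/elk8.1/logs/3
http.port: 9201
transport.port: 9301
```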

2.6 If X-Pack security is enabled

Commands to generate the certificates:

# Elasticsearch X-Pack authentication / TLS on the transport layer
# 1. Generate a CA certificate with the bundled tool
bin/elasticsearch-certutil ca
# 2. Generate a certificate and private key for each node in the cluster
bin/elasticsearch-certutil cert --ca elastic-stack-ca.p12
# File permissions (777 is convenient for a test setup; something like 640 owned by elsearch is safer)
chmod 777 ../config/elastic-certificates.p12
chmod 777 ../config/elastic-stack-ca.p12

Add the following to elasticsearch.yml:

#Enable security (authenticated access)
xpack.security.enabled: true

##Enable TLS on the transport layer
xpack.security.transport.ssl.enabled: true    
xpack.security.transport.ssl.verification_mode: certificate    
xpack.security.transport.ssl.keystore.path: ./elastic-certificates.p12     
xpack.security.transport.ssl.truststore.path: ./elastic-certificates.p12

#Enable TLS for HTTP (optional)
xpack.security.http.ssl.enabled: true
xpack.security.http.ssl.keystore.path: ./elastic-certificates.p12
xpack.security.http.ssl.truststore.path: ./elastic-certificates.p12

Store the certificate passwords in the Elasticsearch keystore:

./elasticsearch-keystore add xpack.security.transport.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.transport.ssl.truststore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.keystore.secure_password
./elasticsearch-keystore add xpack.security.http.ssl.truststore.secure_password

3. Start

#Start in the foreground (must be run as the elsearch user)
./elasticsearch

#Start as a daemon
./elasticsearch -d

Password and enrollment information printed on the first startup:

✅ Elasticsearch security features have been automatically configured!
✅ Authentication is enabled and cluster connections are encrypted.

ℹ️  Password for the elastic user (reset with `bin/elasticsearch-reset-password -u elastic`):
  8ZGJ*R6xLbyQvtJ5IIaQ

ℹ️  HTTP CA certificate SHA-256 fingerprint:
  acf3d9f26a7db408b2650318b114574e616c01feeae24fc4a8c531008ee08c0f

ℹ️  Configure Kibana to use this cluster:
• Run Kibana and click the configuration link in the terminal when Kibana starts.
• Copy the following enrollment token and paste it into Kibana in your browser (valid for the next 30 minutes):
  eyJ2ZXIiOiI4LjEuMyIsImFkciI6WyIxMC4xMC4xMC4yNTM6OTIwMCJdLCJmZ3IiOiJhY2YzZDlmMjZhN2RiNDA4YjI2NTAzMThiMTE0NTc0ZTYxNmMwMWZlZWFlMjRmYzRhOGM1MzEwMDhlZTA4YzBmIiwia2V5IjoiREpjWmRZUUJCN2FxSmVKZHdfXzg6YjdSQzVOSDZSLUtFOXRrWElMdUszQSJ9

ℹ️  Configure other nodes to join this cluster:
• On this node:
  ⁃ Create an enrollment token with `bin/elasticsearch-create-enrollment-token -s node`.
  ⁃ Uncomment the transport.host setting at the end of config/elasticsearch.yml.
  ⁃ Restart Elasticsearch.
• On other nodes:
  ⁃ Start Elasticsearch with `bin/elasticsearch --enrollment-token `, using the enrollment token that you generated.

Useful commands

# Generate random passwords for all built-in users
# (deprecated in 8.x in favour of elasticsearch-reset-password)
./elasticsearch-setup-passwords auto

# Reset the password of a single built-in user
./elasticsearch-reset-password -u kibana --auto

./elasticsearch-reset-password -u logstash_system --auto

4. Check the cluster status

#-k skips certificate verification; enter the elastic password when prompted
curl -k -u elastic https://127.0.0.1:9201/_cat/nodes?v

Kibana installation

1. Download and extract

wget -c https://artifacts.elastic.co/downloads/kibana/kibana-8.1.3-linux-x86_64.tar.gz
tar -zxvf ./kibana-8.1.3-linux-x86_64.tar.gz

2. Connect to Elasticsearch

Edit config/kibana.yml and set at least:

elasticsearch.hosts
elasticsearch.username
elasticsearch.password

#Chinese UI locale
i18n.locale: "zh-CN"

#Listen on all interfaces
server.host: "0.0.0.0"

3. Start

#Run as a non-root user (Kibana refuses to start as root by default)
nohup /usr/local/kibana/bin/kibana &

4. Example configuration

server.port: 5602

server.host: "0.0.0.0"

#server.publicBaseUrl: ""

# The maximum payload size in bytes for incoming server requests.
#server.maxPayload: 1048576

# The Kibana server's name. This is used for display purposes.
#server.name: "your-hostname"

server.ssl.enabled: false

elasticsearch.hosts: ["http://10.10.10.253:9201"]

elasticsearch.username: "kibana"
elasticsearch.password: "xxx"

i18n.locale: "zh-CN"

Logstash installation

1. Download and extract

wget https://artifacts.elastic.co/downloads/logstash/logstash-8.1.3-linux-x86_64.tar.gz
tar -zxvf logstash-8.1.3-linux-x86_64.tar.gz

2. Start

#Optionally check the pipeline syntax first by adding --config.test_and_exit
bin/logstash -f config/logstash-sample.conf --log.level=debug

3. Sample configuration

# Sample Logstash configuration for creating a simple
# Beats -> Logstash -> Elasticsearch pipeline.

input {
  file {
    path => [ "C:/work/file/file-output-test.txt" ]
    add_field => {
      "appname" => "demo"
    }
    start_position => "beginning"
    discover_interval => 1
    close_older => 3600
    type => "demo_platform"
    #Where the read position of each watched file is recorded
    sincedb_path => "C:/work/file/1.txt"
    #How often (seconds) the read position is written to the sincedb file
    sincedb_write_interval => 15
    #How often (seconds) watched files are checked for modification
    stat_interval => 1
  }
}

filter {
  if [type] == "demo_platform" {
    ruby {
      code => '
        # Parse the raw message string into a JSON object
        logInfoJson = JSON.parse(event.get("message"))
        event.set("ip", logInfoJson["host"])
        event.set("timestamp", logInfoJson["@timestamp"])
      '
    }

    date {
      match => ["timestamp", "yyyy-MM-dd'T'HH:mm:ss.SSSZ"]
    }

    mutate {
      remove_field => [ "agent", "ecs", "input" ]
    }
  }
}

output {
  if [type] == "demo_platform" {
    elasticsearch {
      hosts => ["http://10.10.10.253:9201"]
      index => "demo_platform"
      template_overwrite => true
      template_name => "ecs-logstash"
      user => "xxx"
      password => "xxx"
    }
  }
}

Filebeat installation

1. Download and extract

wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-8.1.3-linux-x86_64.tar.gz
tar zxvf filebeat-8.1.3-linux-x86_64.tar.gz

2. Start

Filebeat sometimes shuts itself down when the watched log files stop updating; running it as a systemd service avoids this.

nohup ./filebeat -e -c filebeat.yml -d "publish" &

Installing it as a service:

cd /lib/systemd/system
vim filebeat.service

Contents of filebeat.service:

[Unit]
Description=filebeat
Wants=network-online.target
After=network-online.target

[Service]
User=root
ExecStart=/usr/local/filebeat/filebeat -e -c /usr/local/filebeat/filebeat.yml
Restart=always

[Install]
WantedBy=multi-user.target

Starting the service:

systemctl daemon-reload                    #reload unit files
systemctl start filebeat                   #start the filebeat service
systemctl enable filebeat                  #start on boot
systemctl list-units --type=service        #list active services
filebeat.service  loaded active running filebeat     #this line confirms the service is running

3. Configuration notes

filebeat.yml

max_procs: 1                            # limit Filebeat to one CPU core so it cannot starve the host
queue.mem.events: 2048                  # events buffered in the memory queue before sending (default 4096)
queue.mem.flush.min_events: 1536        # keep below queue.mem.events; raising it can improve throughput (default 2048)
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - C:\work\file\8.txt
  fields:
    type: demo_platform2
  multiline.pattern: '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
  multiline.negate: true
  multiline.match: after

output.logstash:
  hosts: ["10.10.10.253:5044"]
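The multiline settings above merge any line that does not start with a yyyy-MM-dd date into the previous event, which is the typical shape of Java stack traces. The regex can be sanity-checked locally:

```shell
# Only lines matching the pattern start a new event; the indented stack-trace
# line below would be appended to the preceding log line by Filebeat.
printf '2024-01-15 10:00:01 ERROR boom\n\tat com.example.Foo(Foo.java:1)\n' \
  | grep -cE '^[0-9]{4}-[0-9]{2}-[0-9]{2}'
# prints 1: one event start among the two physical lines
```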

elastalert2 installation

1. Prepare Python 3.10 or newer

#Build and install Python from source
./configure --prefix=/usr/local/soft/python3.11 --with-openssl=/usr/local/openssl --enable-optimizations
make
make install

#Symlinks (paths match the --prefix used above)
ln -s /usr/local/soft/python3.11/bin/python3 /usr/bin/python3
ln -s /usr/local/soft/python3.11/bin/pip3 /usr/bin/pip3

#A virtual environment is recommended for the following steps
python3 -m venv myvenv
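A quick sketch of the virtual-environment route (the /tmp path is illustrative; activate the environment before running the pip and setup.py steps below):

```shell
# Create and activate an isolated environment for elastalert2
python3 -m venv /tmp/elastalert-venv
. /tmp/elastalert-venv/bin/activate
python -V            # the venv's interpreter
command -v pip       # now resolves inside /tmp/elastalert-venv/bin
```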

2. Download

git clone https://github.com/jertel/elastalert2.git

3. Install

pip install elasticsearch==8.1.3
pip install -r requirements.txt
python setup.py install

4. Optionally adjust the symlinks

ln -s /usr/local/soft/python3.11/bin/elastalert* /usr/bin

5. Create the writeback index before the first start

elastalert-create-index --config config.yaml

6. Start

#Debug mode: matching is logged but alert notifications are not sent
elastalert --debug --rule test.yaml --config config.yaml
#Normal start
elastalert --rule test.yaml --config config.yaml
#Dry-run a single rule against recent data
elastalert-test-rule --config config.yaml test.yaml

7. Example configuration

See the official documentation for the full list of options: https://elastalert2.readthedocs.io/en/latest/

config.yaml

# This is the folder that contains the rule yaml files
# Any .yaml file will be loaded as a rule
rules_folder: example

run_every:
  minutes: 1

buffer_time:
  minutes: 15

# The Elasticsearch host used for metadata writeback
es_host: 10.10.10.253

es_port: 9201

es_username: xxx
es_password: xxx

writeback_index: elastalert_status
writeback_alias: elastalert_alerts

alert_time_limit:
  days: 2

test.yaml

# Rule name, must be unique
name: test_frequency

# Alert on x events in y seconds
type: frequency

# Alert when this many documents matching the query occur within a timeframe
num_events: 1

# num_events must occur within this amount of time to trigger an alert
timeframe:
  minutes: 1

#Restrict alerting to a time window (optional)
#start_time: "4:00"
#end_time: "20:00"


# A list of Elasticsearch filters used to find events
# These filters are joined with AND and nested in a filtered query
# For more info: http://www.elasticsearch.org/guide/en/elasticsearch/reference/current/query-dsl.html
filter:
- query:
    query_string:
      default_field: "message"
      query: "test"

index: demo_platform2

# Do not re-alert on the same rule within this period (0 = alert on every match)
realert:
  minutes: 0

# The alert type used when a match is found
alert:
  - "post"

http_post_url: "http://p74q5c.natappfree.cc/general/project/uniplat_base/service/anonymous/elk_notify/test"
http_post_static_payload:
    # Extra field added to the POST payload: rule name
    rule_name: test_frequency
    # Extra field added to the POST payload: alert level
    rule_level: medium

# Alert body only contains a title and text
#alert_text_type: alert_text_only

Scheduled deletion of old index data

The script below can be scheduled with Linux crontab.

1. Example script

#!/bin/sh
savedays=7

if [ -z "$savedays" ]; then
  echo "savedays is not set, exiting..."
  exit 1
fi

#Cutoff with time-of-day precision:
#sevendayago=`date -d "-${savedays} day " +'%Y-%m-%dT%H:%M:%S.000+0800'`
#Cutoff x months ago:
#sevendayago=`date -d "-${savedays} month" +'%Y-%m-%dT00:00:00.000+0800'`
#Cutoff x days ago, at midnight:
sevendayago=`date -d "-${savedays} day " +'%Y-%m-%dT00:00:00.000+0800'`

echo $sevendayago
for line in `cat /home/elk/elk_download/indexname.conf`
do
  echo $line
  echo "10.10.10.253:9201/${line}"
  curl -H "Content-Type: application/json" -XPOST "10.10.10.253:9201/${line}/_delete_by_query?refresh&slices=10&scroll_size=10000" -uelastic:123456 -d '
{"query": {
    "bool": {
        "must": [
        {"range": {
            "@timestamp": {
              "lt": "'${sevendayago}'"
            }
        }}
        ]
    }
 }
}'
done
echo "ok"

Example indexname.conf:

demo
demo2
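The date arithmetic the script relies on can be checked in isolation, and the script can then be scheduled with crontab (the script and log paths below are assumptions; adjust them to wherever you saved the script):

```shell
# 7 days ago at midnight, in the timestamp format the range query above expects
savedays=7
cutoff=$(date -d "-${savedays} day" +'%Y-%m-%dT00:00:00.000+0800')
echo "$cutoff"    # e.g. 2024-01-08T00:00:00.000+0800

# Example crontab entry: run the cleanup daily at 01:30 (paths are illustrative)
# 30 1 * * * /home/elk/elk_download/clean_index.sh >> /home/elk/elk_download/clean_index.log 2>&1
```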

Quick usage notes:

Filebeat reads the log files and hands each line to its configured output, which in this setup is Logstash. Logstash receives the events through its input block, processes them in the filter block, and finally writes them to Elasticsearch through the output block.

Kibana provides a UI on top of Elasticsearch: Index Management lists the indices that have been created, and Data Views let you build views over those indices for use in dashboards.

elastalert2 is the community continuation used since the original elastalert stopped being maintained. It acts as the monitoring and alerting component: the alert rules live in rule files such as test.yaml above, and the Elasticsearch connection settings live in config.yaml.
