Building a Redis + ELK (7.10.2) Cluster on CentOS 7.9

  • I. Preparation
    • 1. Log collection flow
    • 2. Server inventory
    • 3. Firewall configuration
    • 4. Configure the hosts file
    • 5. Set up passwordless SSH between the three hosts
    • 6. Configure system parameters (on every server)
      • (1) Raise the open-file and process limits
      • (2) Set vm.max_map_count
      • (3) Set the time zone and auto-sync time from Aliyun
    • 7. Disable SELinux
  • II. Installing ELK
    • 1. Install Elasticsearch
    • 2. Generate certificates for communication between ES cluster nodes (used for security authentication)
      • (1) Create the certificate (on both 192.168.4.2 and 192.168.4.3)
    • 3. Edit the configuration files
    • 4. Edit the systemd unit file
    • 5. Start Elasticsearch
    • 6. Initialize the Elasticsearch built-in user passwords
  • III. Installing Kibana (192.168.4.2)
  • IV. Installing Redis (192.168.4.4)
    • 1. Edit the configuration file: change the bind address and set a password
    • 2. Configure system parameters to prevent Redis warnings
    • 3. Start Redis
  • V. Installing Logstash (192.168.4.4)
    • 1. Edit the Logstash configuration files
    • 2. Log filtering configuration example (192.168.4.4, /etc/logstash/conf.d)
    • 3. Start Logstash
  • VI. Client-side log collection with Filebeat
    • 1. Install Filebeat
    • 2. Edit the configuration file
    • 3. Start Filebeat
  • VII. Viewing the logs

I. Preparation

1. Log collection flow

[Diagram: log collection flow]
Log files → Filebeat (on each client) → Redis → Logstash → Elasticsearch → Kibana
(Each client's Filebeat reads the local log files and outputs them to Redis; Logstash uses Redis as its input and outputs to Elasticsearch, which Kibana then queries.)

2. Server inventory

| OS | IP address | Hostname | CPU | RAM | Disk | Components |
|----|------------|----------|-----|-----|------|------------|
| CentOS 7.9 | 192.168.4.2 | elk-node1-Kibana | 4 cores | 8 GB | 50 GB / 2 TB (NFS mount) | ES cluster node1, Kibana |
| CentOS 7.9 | 192.168.4.3 | elk-node2-Filebeat | 4 cores | 8 GB | 50 GB / 2 TB (NFS mount) | ES cluster node2, Filebeat |
| CentOS 7.9 | 192.168.4.4 | Redis-Logstash | 4 cores | 8 GB | 50 GB / 2 TB (NFS mount) | Logstash, Redis |

3. Firewall configuration

# On every server, add the subnet containing all component nodes to the trusted zone, so that data sync and access between nodes are not blocked
firewall-cmd --permanent --zone=trusted --add-source=192.168.4.0/24
firewall-cmd --reload
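To confirm the rules took effect, a quick check (a minimal sketch based on the commands above):
# the 192.168.4.0/24 source should show up in the trusted zone
firewall-cmd --zone=trusted --list-all
# confirm the permanent rule survived the reload
firewall-cmd --permanent --zone=trusted --list-sources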

4. Configure the hosts file

# Run on every server
vim /etc/hosts
    # Full file contents below; the last three lines are the additions
	127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
	::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
	192.168.4.2 elk-node1-Kibana
	192.168.4.3 elk-node2-Filebeat
	192.168.4.4 Redis-Logstash
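To confirm that the names resolve on each host, a quick check (a minimal sketch using the entries above):
# each hostname should resolve to its 192.168.4.x address
getent hosts elk-node1-Kibana elk-node2-Filebeat Redis-Logstash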

5. Set up passwordless SSH between the three hosts

ssh-keygen # run on every server (use a different approach if you prefer)
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):   # just press Enter
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):  # just press Enter
Enter same passphrase again:  # just press Enter
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:**************************** root@redis-logstash
The key's randomart image is:
+---[RSA 2048]----+
|     o . .   .   |
|.   o o . . . .  |
|o  . +     = . . |
|..+ . .   * . .  |
| oo.    SB o     |
| o E    =+O.     |
|  .     oO=oo .  |
|       =+++o.+   |
|      .o=.+=+.   |
+----[SHA256]-----+
# Performed on [root@redis-logstash]
# Append the contents of id_rsa.pub from all three servers to /root/.ssh/authorized_keys
cat /root/.ssh/id_rsa.pub # view the key; collect all three keys on one machine first, then copy the file to the other two servers
# Copy the file from redis-logstash to the other nodes
[root@redis-logstash opm]# scp /root/.ssh/authorized_keys root@elk-node2-filebeat:/root/.ssh/
The authenticity of host 'elk-node2-filebeat (192.168.4.3)' can't be established.
ECDSA key fingerprint is SHA256:2xlcRYVVf+AkmhA66j++glMgV0NP3kCaFcWbCc6ljeE.
ECDSA key fingerprint is MD5:39:7f:7c:8e:64:6c:8e:8a:ba:38:6c:84:60:ed:32:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'elk-node2-filebeat,192.168.4.3' (ECDSA) to the list of known hosts.
root@elk-node2-filebeat's password:
authorized_keys                                         100% 1209   524.4KB/s   00:00

[root@redis-logstash opm]# scp /root/.ssh/authorized_keys root@elk-node1-kibana:/root/.ssh/
The authenticity of host 'elk-node1-kibana (192.168.4.2)' can't be established.
ECDSA key fingerprint is SHA256:2xlcRYVVf+AkmhA66j++glMgV0NP3kCaFcWbCc6ljeE.
ECDSA key fingerprint is MD5:39:7f:7c:8e:64:6c:8e:8a:ba:38:6c:84:60:ed:32:c2.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'elk-node1-kibana,192.168.4.2' (ECDSA) to the list of known hosts.
root@elk-node1-kibana's password:
authorized_keys                                         100% 1209   487.1KB/s   00:00
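With the shared authorized_keys distributed, it is worth confirming that each host can reach the other two without a password prompt (a minimal sketch using the hostnames from the hosts file):
# run from any of the three hosts; each command should print the remote hostname without asking for a password
ssh root@elk-node1-Kibana hostname
ssh root@elk-node2-Filebeat hostname
ssh root@Redis-Logstash hostname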

6. Configure system parameters (on every server)

(1) Raise the open-file and process limits

vim /etc/security/limits.conf
# append the following at the end of the file
* soft nproc 65535
* hard nproc 65535
* soft nofile 65535
* hard nofile 65535
* soft memlock unlimited
* hard memlock unlimited
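These limits only apply to new login sessions, so after reconnecting they can be verified with ulimit (a minimal sketch):
ulimit -n   # open files, expect 65535
ulimit -u   # max user processes, expect 65535
ulimit -l   # max locked memory, expect unlimited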

(2) Set vm.max_map_count

sysctl -w vm.max_map_count=655360
echo 'vm.max_map_count=655360' >> /etc/sysctl.conf 
sysctl -p

(3) Set the time zone and auto-sync time from Aliyun

tzselect
Please identify a location so that time zone rules can be set correctly.
Please select a continent or ocean.
 1) Africa
 2) Americas
 3) Antarctica
 4) Arctic Ocean
 5) Asia
 6) Atlantic Ocean
 7) Australia
 8) Europe
 9) Indian Ocean
10) Pacific Ocean
11) none - I want to specify the time zone using the Posix TZ format.
#? 5

Please select a country.
 1) Afghanistan		  18) Israel		    35) Palestine
 2) Armenia		      19) Japan		        36) Philippines
 3) Azerbaijan		  20) Jordan		    37) Qatar
 4) Bahrain		      21) Kazakhstan	    38) Russia
 5) Bangladesh		  22) Korea (North)	    39) Saudi Arabia
 6) Bhutan		      23) Korea (South)	    40) Singapore
 7) Brunei		      24) Kuwait		    41) Sri Lanka
 8) Cambodia		  25) Kyrgyzstan	    42) Syria
 9) China		      26) Laos		        43) Taiwan
10) Cyprus		      27) Lebanon		    44) Tajikistan
11) East Timor		  28) Macau		        45) Thailand
12) Georgia		      29) Malaysia		    46) Turkmenistan
13) Hong Kong		  30) Mongolia		    47) United Arab Emirates
14) India		      31) Myanmar (Burma)	48) Uzbekistan
15) Indonesia		  32) Nepal		    	49) Vietnam
16) Iran		      33) Oman		    	50) Yemen
17) Iraq		      34) Pakistan
#? 9

Please select one of the following time zone regions.
1) Beijing Time
2) Xinjiang Time
#? 1

The following information has been given:

	China
	Beijing Time

Therefore TZ='Asia/Shanghai' will be used.
Local time is now:	Fri Feb 18 13:32:35 CST 2022.
Universal Time is now:	Fri Feb 18 05:32:35 UTC 2022.
Is the above information OK?
1) Yes
2) No
#? 1

# Remove the existing localtime file
rm -f /etc/localtime
# Symlink the Asia/Shanghai zone file to /etc/localtime
ln -s /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
# Set up a periodic time sync via cron
yum install ntpdate -y
crontab -e
# add the following line
*/5 * * * * /sbin/ntpdate ntp.aliyun.com
# save and exit, then restart the crond service
systemctl restart crond
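A quick way to confirm the time zone and that the Aliyun NTP server is reachable (a minimal sketch; ntpdate -q only queries and does not set the clock):
date                        # should now report CST (UTC+8)
ntpdate -q ntp.aliyun.com   # query the offset without adjusting the clock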

7. Disable SELinux

Disable temporarily: setenforce 0
Disable permanently: sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
Reboot the system for the permanent change to take effect.
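The current SELinux state can be checked at any time (a minimal sketch):
getenforce   # Permissive after setenforce 0, Disabled after the config change plus a reboot
sestatus     # full SELinux status report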

II. Installing ELK

1. Install Elasticsearch

cd /opt
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.2-x86_64.rpm
rpm -ivh elasticsearch-7.10.2-x86_64.rpm

2. Generate certificates for communication between ES cluster nodes (used for security authentication)

(1) Create the certificate (on both 192.168.4.2 and 192.168.4.3)

Create a directory to hold the certificate
mkdir /etc/elasticsearch/cert.d

Generate the certificate file
cd /usr/share/elasticsearch/bin/
./elasticsearch-certutil cert -out /etc/elasticsearch/cert.d/elastic-certificates.p12 -pass ""

Copy the certificate to the other cluster node
scp /etc/elasticsearch/cert.d/elastic-certificates.p12 root@elk-node2-Filebeat:/etc/elasticsearch/cert.d

Set ownership on the certificate files. Note: every cluster node must run this after the certificate has been copied.
chown -Rf elasticsearch:elasticsearch /etc/elasticsearch/cert.d/
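A quick check that the keystore is in place with the right ownership on both nodes (a minimal sketch):
ls -l /etc/elasticsearch/cert.d/   # elastic-certificates.p12 should be owned by elasticsearch:elasticsearch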

3. Edit the configuration files

View the effective (non-comment) settings: grep -Ev "#|^$" /etc/elasticsearch/jvm.options

# ----- only these two lines were changed -----
-Xms4g
-Xmx4g
# ---------------------------------------------

View the effective (non-comment) settings: grep -Ev "#|^$" /etc/elasticsearch/elasticsearch.yml

# edit to match the following
bootstrap.memory_lock: true
cluster.name: my-application
# per node: use elk-node1-Kibana on 192.168.4.2 and elk-node2-Filebeat on 192.168.4.3
node.name: elk-node1-Kibana
# ------- these paths are on an NFS mount in my setup; adjust to your environment -------
# note the per-node directory names (elk_node1 vs elk_node2)
path.data: /data/elk_node1/elasticsearch/data
path.logs: /data/elk_node1/elasticsearch/log
#------------------------------------------------------------
# per node: 192.168.4.2 or 192.168.4.3
network.host: 192.168.4.2
http.port: 9200
node.master: true
node.data: true
discovery.seed_hosts: ["192.168.4.2", "192.168.4.3"]
cluster.initial_master_nodes: ["192.168.4.2:9300", "192.168.4.3:9300"]
#http.cors.enabled: true
#http.cors.allow-origin: "*"
# the settings below were added afterwards; copy and paste as-is
# ----------------------------------- xpack ------------------------------------
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
# make sure the certificate path matches the one created in section II.2
xpack.security.transport.ssl.keystore.path: /etc/elasticsearch/cert.d/elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: /etc/elasticsearch/cert.d/elastic-certificates.p12

4. Edit the systemd unit file

Run systemctl edit elasticsearch.
An editor opens; add the following two lines:
[Service]
LimitMEMLOCK=infinity
Save and exit, then run systemctl daemon-reload.

5. Start Elasticsearch

Start the service on every node; proceed to step 6 only after all nodes have started successfully.
systemctl start elasticsearch
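Before moving on, it helps to confirm the service is up and listening on both nodes (a minimal sketch; with security enabled but passwords not yet set, an unauthenticated request should return HTTP 401 rather than a connection error):
systemctl status elasticsearch --no-pager
ss -ntlp | grep -E '9200|9300'   # REST (9200) and transport (9300) ports should be listening
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.4.2:9200   # expect 401 at this point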

6. Initialize the Elasticsearch built-in user passwords

cd /usr/share/elasticsearch/bin/
./elasticsearch-setup-passwords auto

... 
# The output below is sensitive; store it somewhere safe
Changed password for user apm_system
PASSWORD apm_system = xxxxxxxxxxxxxxxxxxxx

Changed password for user kibana_system
PASSWORD kibana_system = xxxxxxxxxxxxxxxxxxxx

Changed password for user kibana
PASSWORD kibana = xxxxxxxxxxxxxxxxxxxx

Changed password for user logstash_system
PASSWORD logstash_system = xxxxxxxxxxxxxxxxxxxx

Changed password for user beats_system
PASSWORD beats_system = xxxxxxxxxxxxxxxxxxxx

Changed password for user remote_monitoring_user
PASSWORD remote_monitoring_user = xxxxxxxxxxxxxxxxxxxx

Changed password for user elastic
PASSWORD elastic = xxxxxxxxxxxxxxxxxxxx
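With the elastic password recorded, cluster health can be verified over the REST API (a minimal sketch; substitute the generated password):
# expect "status" : "green" and "number_of_nodes" : 2
curl -u elastic:'<elastic password>' 'http://192.168.4.2:9200/_cluster/health?pretty'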

III. Installing Kibana (192.168.4.2)

cd /opt
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.2-x86_64.rpm
rpm -ivh kibana-7.10.2-x86_64.rpm

Edit the Kibana configuration file
vim /etc/kibana/kibana.yml

# bind address; 0.0.0.0 allows access from any IP
server.host: "0.0.0.0"
# Elasticsearch cluster addresses; adjust to your environment
elasticsearch.hosts: ["http://192.168.4.2:9200", "http://192.168.4.3:9200"]
# use the Chinese locale for the Kibana UI
i18n.locale: "zh-CN"
# the kibana_system credentials generated by elasticsearch-setup-passwords
elasticsearch.username: "kibana_system"
elasticsearch.password: ""

Start Kibana
systemctl start kibana
Browse to http://192.168.4.2:5601 and check that it loads.
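Kibana can take a minute or two to come up; a quick check from the shell (a minimal sketch; with security enabled the root URL typically redirects to the login page):
systemctl status kibana --no-pager
curl -s -o /dev/null -w '%{http_code}\n' http://192.168.4.2:5601   # expect 200 or 302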

IV. Installing Redis (192.168.4.4)

yum install -y redis

1. Edit the configuration file: change the bind address and set a password

vim /etc/redis.conf  # verify the settings yourself after editing

# adjust to your environment
bind 192.168.4.4
...
logfile /data/redis-logstash/redis.log
...
requirepass <your chosen password>
...

2. Configure system parameters to prevent Redis warnings

echo "echo never > /sys/kernel/mm/transparent_hugepage/enabled" >> /etc/rc.local
chmod +x /etc/rc.d/rc.local
# a fresh Redis install usually warns about these three settings; test it yourself if in doubt
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo "vm.overcommit_memory = 1" >> /etc/sysctl.conf
echo "net.core.somaxconn= 1024" >> /etc/sysctl.conf
sysctl -p

3. Start Redis

Start the service: systemctl start redis
Enable at boot: systemctl enable redis
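A quick connectivity check with redis-cli, using the bind address and requirepass value set above (a minimal sketch):
# PONG confirms the server is reachable and the password is accepted
redis-cli -h 192.168.4.4 -p 6379 -a '<your chosen password>' ping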

V. Installing Logstash (192.168.4.4)

cd /opt
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.10.2-x86_64.rpm
rpm -ivh logstash-7.10.2-x86_64.rpm

1. Edit the Logstash configuration files

View the effective (non-comment) settings: grep -Ev "#|^$" /etc/logstash/jvm.options

# only these two lines were changed
-Xms4g
-Xmx4g
# everything else unchanged

View the effective (non-comment) settings: grep -Ev "#|^$" /etc/logstash/logstash.yml

path.data: /data/redis-logstash/logstash/data
pipeline.ordered: auto
log.level: debug
path.logs: /data/redis-logstash/logstash/log
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: logstash_system
xpack.monitoring.elasticsearch.password: <password for logstash_system>
# Elasticsearch cluster addresses
xpack.monitoring.elasticsearch.hosts: ["http://192.168.4.2:9200", "http://192.168.4.3:9200"]

2. Log filtering configuration example (192.168.4.4, /etc/logstash/conf.d)

# input: pull log entries from Redis
input {
  redis {
    host => "192.168.4.4"    # must match the address Redis is bound to (bind 192.168.4.4 in section IV)
    port => "6379"
    batch_count => 1
    data_type => "list"
    db => 0
    password => "*******"
    key => "rename-fs-log"
  }
}
# filter: drop the host and agent fields
filter {
    mutate {
        remove_field => ["host", "agent"]
    }
}
# output: ship events to Elasticsearch
output {
  elasticsearch {
    hosts => ["192.168.4.2:9200", "192.168.4.3:9200"]
    user => "elastic"
    password => "*******"
    index => "%{[@metadata][beat]}-%{[@metadata][version]}-fs-log"
  }
}
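Assuming the pipeline above is saved as something like /etc/logstash/conf.d/fs-log.conf (the file name here is just an example), the syntax can be checked before starting the service (a minimal sketch):
# parse the pipeline configuration and exit; look for "Configuration OK"
/usr/share/logstash/bin/logstash --path.settings /etc/logstash -f /etc/logstash/conf.d/fs-log.conf --config.test_and_exit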

3. Start Logstash

Start the service: systemctl start logstash
Enable at boot: systemctl enable logstash

VI. Client-side log collection with Filebeat

Filebeat ships the log files into the Redis queue, and Logstash then pulls them from Redis into Elasticsearch.

1. Install Filebeat

cd /opt
# ---- download the package that matches your distribution ----
# Red Hat family
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.2-x86_64.rpm
# Debian/Ubuntu family
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.2-amd64.deb
# -----------------------------------
# install on CentOS 7 (test other distributions yourself)
rpm -ivh filebeat-7.10.2-x86_64.rpm
# install on Ubuntu 16 (test other distributions yourself)
dpkg -i filebeat-7.10.2-amd64.deb

2. Edit the configuration file (/etc/filebeat/filebeat.yml)

filebeat.inputs:
- type: log
  enabled: true
  # log paths to collect; wildcards such as *.log are allowed
  paths:
    - /usr/local/freeswitch/log/freeswitch.log
# Redis output settings
output.redis:
  hosts: ["192.168.4.4:6379"]   # the Redis instance set up in section IV (adjust if yours differs)
  password: ""                  # set this to the requirepass value from section IV
  key: "rename-fs-log"
  db: 0
  timeout: 5
  
# the settings below are defaults and were not changed
processors:
  - add_host_metadata:
      when.not.contains.tags: forwarded
  - add_cloud_metadata: ~
  - add_docker_metadata: ~
  - add_kubernetes_metadata: ~
logging.level: debug
logging.to_files: true
logging.files:
  path: /var/log/filebeat
  name: filebeat
  keepfiles: 7
  permissions: 0644
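Before starting the service, Filebeat's built-in test subcommands can validate both the configuration file and the connection to the Redis output (a minimal sketch):
filebeat test config -c /etc/filebeat/filebeat.yml   # syntax check, expect "Config OK"
filebeat test output -c /etc/filebeat/filebeat.yml   # connectivity check against the configured Redis output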

3. Start Filebeat

Start the service: systemctl start filebeat
Enable at boot: systemctl enable filebeat
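Once Filebeat is running and the monitored log file receives new lines, entries should accumulate under the configured key in Redis and drain again as Logstash consumes them (a minimal sketch, run on 192.168.4.4):
# a non-zero (or briefly non-zero) length means events are flowing through the queue
redis-cli -h 192.168.4.4 -a '<your chosen password>' LLEN rename-fs-log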

VII. Viewing the logs

Browse to: http://kibana.woziji.xyz:5601
If data is flowing in, the first visit will prompt you to create an index pattern; I'll leave the details to you, since this article only covers the installation. If you run into problems during installation, leave a comment below or contact me on QQ: 4-9-5-8-4-3-2-3-6 (I'm not online very often).
