Log Monitoring Platform: Deploying Flume-Kafka-ELK

Log monitoring platform
flume → kafka → logstash → elasticsearch → kibana

1. Deployment Environment
CentOS 7.x

JDK 1.8

2. Installation Guide
2.1 Installing Flume
Download URL:
http://archive.apache.org/dist/flume/1.8.0/


After downloading, upload the tarball to /usr/local on the Linux host.

Once uploaded, go to /usr/local and extract the tarball:

cd /usr/local



tar -zxvf apache-flume-1.8.0-bin.tar.gz



Rename it: mv apache-flume-1.8.0-bin flume



Flume requires a JDK to run.

Download JDK 1.8:
https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html


Upload the downloaded JDK to /usr/local.



Extract the tarball: tar -zxvf jdk-8u251-linux-x64.tar.gz



Rename it: mv jdk1.8.0_251 jdk

Edit /etc/profile to configure the JDK environment variables:

vi /etc/profile

Append at the end of the file:

export JAVA_HOME=/usr/local/jdk
export PATH=$PATH:$JAVA_HOME/bin



Verify that it works:

source /etc/profile
java -version



Once that succeeds, go to /usr/local/flume/conf, rename the flume-env.sh.template file to flume-env.sh, and edit it:

cd /usr/local/flume/conf
mv flume-env.sh.template flume-env.sh
vi flume-env.sh
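
At minimum, flume-env.sh should point at the JDK installed above; a minimal sketch (the JAVA_OPTS heap line is optional and only illustrative):

export JAVA_HOME=/usr/local/jdk
export JAVA_OPTS="-Xms512m -Xmx1024m"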



Verify the installation:
/usr/local/flume/bin/flume-ng version



2.2 Installing the ELK Stack
systemctl stop firewalld (stop the firewall)

systemctl disable firewalld (keep it from starting at boot)

Adjust the default kernel parameters on each node:
echo "vm.swappiness=0" >> /etc/sysctl.conf
echo "vm.max_map_count=655350" >> /etc/sysctl.conf
sysctl -p
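
To confirm the kernel parameters took effect, read them back:

sysctl vm.swappiness vm.max_map_count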



Raise the Linux resource limits on each node:
vi /etc/security/limits.conf
esuser soft nofile 65536
esuser hard nofile 65536
esuser soft nproc 2048
esuser hard nproc 2048
esuser soft memlock unlimited
esuser hard memlock unlimited



Run the following so the new limits take effect in the current session:
ulimit -SHn 65536

Download each of the following under /usr/local:

wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.2-linux-x86_64.tar.gz


tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz



wget https://artifacts.elastic.co/downloads/kibana/kibana-7.3.2-linux-x86_64.tar.gz

tar -zxvf kibana-7.3.2-linux-x86_64.tar.gz



wget https://artifacts.elastic.co/downloads/logstash/logstash-7.3.2.tar.gz

tar -zxvf logstash-7.3.2.tar.gz



2.2.1 Installing Elasticsearch
(The paths below assume the extracted folders were renamed, e.g. mv elasticsearch-7.3.2 elasticsearch, and likewise for logstash and kibana.)
cd into the ES directory and create a data directory:
cd /usr/local/elasticsearch
mkdir data



Add a dedicated ES user (Elasticsearch will not run as root):
adduser esuser

passwd esuser (set the password, e.g. esuser)

Change the owner of the ES directory:

chown -R esuser /usr/local/elasticsearch/

Edit the Elasticsearch configuration:
cd /usr/local/elasticsearch/config
vi elasticsearch.yml

cluster.name: es-cluster                 # cluster name
node.name: es-1                          # node name
path.data: /var/lib/elasticsearch        # data directory
path.logs: /var/log/elasticsearch        # log directory
bootstrap.memory_lock: true              # lock memory so it is never swapped out
network.host: 0.0.0.0                    # address to listen on
http.port: 9200                          # HTTP listen port
discovery.seed_hosts: ["192.168.2.13"]
cluster.initial_master_nodes: ["es-1"]
http.cors.enabled: true                  # allow the head plugin to access ES
http.cors.allow-origin: "*"
bootstrap.system_call_filter: false

Start the Elasticsearch service as esuser (it refuses to start as root):

su esuser
cd /usr/local/elasticsearch/bin

./elasticsearch
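
Once ES is up, a quick check from another terminal should return the node info as JSON, plus the cluster health (host and port as configured above):

curl http://192.168.2.13:9200
curl http://192.168.2.13:9200/_cat/health?v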



2.2.2 Installing Logstash
Go to the Logstash install directory:
cd /usr/local/logstash

Create a folder for job configurations:
mkdir job

Create the config file:
touch kafka-logstash-es.conf
vi kafka-logstash-es.conf

Note: if the Kafka cluster and the ELK node are not on the same host, map the broker hostnames to IP addresses in /etc/hosts first; otherwise Logstash cannot resolve them when consuming from Kafka.

input {
  kafka {
    bootstrap_servers => "n78.aa-data.cn:9092,n79.aa-data.cn:9092,n80.aa-data.cn:9092"
    topics => ["kettle-kafka"]
    group_id => "logstash"
    auto_offset_reset => "earliest"
    consumer_threads => 6
    decorate_events => true
  }
}
output {
  elasticsearch {
    index => "kettle_run-%{+YYYY.MM.dd}"
    hosts => ["192.168.2.13:9200"]
  }
}
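
While debugging, it can help to also print each event to the console; a variant of the output block using Logstash's stdout plugin:

output {
  stdout { codec => rubydebug }   # dump each event to the console
  elasticsearch {
    index => "kettle_run-%{+YYYY.MM.dd}"
    hosts => ["192.168.2.13:9200"]
  }
}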

Run: bin/logstash -f job/kafka-logstash-es.conf



2.2.3 Installing Kibana
cd into Kibana's config directory:

vi kibana.yml

server.port: 5601

server.host: "192.168.2.13"
# ES address
elasticsearch.hosts: ["http://192.168.2.13:9200"]

Kibana must not be run as root either, so hand the directory to esuser:
chown -R esuser /usr/local/kibana/

cd /usr/local/kibana/bin
Run Kibana (as esuser):
./kibana
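
To keep Kibana running after the shell exits, one common approach is nohup (the log path is just an example):

nohup ./kibana > /tmp/kibana.log 2>&1 &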



3. Log Monitoring Workflow
3.1 Use Flume to Watch a Log File and Ship It to Kafka
Create a job folder under the Flume directory:
cd /usr/local/flume
mkdir job

Create the Flume config file:
touch flume-file-kafka.conf
vi flume-file-kafka.conf

agent.sources=r1
agent.sinks=k1
agent.channels=c1

agent.sources.r1.type = exec
agent.sources.r1.shell = /bin/bash -c
agent.sources.r1.command = tail -F /var/log/kettle/run.log
agent.sources.r1.channels = c1
agent.sources.r1.threads = 5

agent.channels.c1.type=memory
agent.channels.c1.capacity=102400
agent.channels.c1.transactionCapacity=1000
agent.channels.c1.byteCapacity=134217728
agent.channels.c1.byteCapacityBufferPercentage=80

agent.sinks.k1.channel=c1
agent.sinks.k1.type=org.apache.flume.sink.kafka.KafkaSink
agent.sinks.k1.kafka.topic=kettle-kafka
agent.sinks.k1.kafka.bootstrap.servers= n78.aa-data.cn:9092,n79.aa-data.cn:9092,n80.aa-data.cn:9092
agent.sinks.k1.serializer.class=kafka.serializer.StringEncoder
agent.sinks.k1.flumeBatchSize=1000
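
This assumes the Kafka cluster on n78–n80 is already running and the kettle-kafka topic exists. If it does not, a sketch for creating it with Kafka's bundled CLI (--bootstrap-server requires Kafka 2.2+, older brokers use --zookeeper; six partitions line up with consumer_threads => 6 in the Logstash input):

bin/kafka-topics.sh --create --bootstrap-server n78.aa-data.cn:9092 --replication-factor 3 --partitions 6 --topic kettle-kafka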

Start Flume:
bin/flume-ng agent --conf conf/ --name agent --conf-file job/flume-file-kafka.conf
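
To smoke-test the pipeline, append a line to the watched file and confirm it shows up on the topic (run the consumer on a Kafka broker):

echo "flume smoke test $(date)" >> /var/log/kettle/run.log
bin/kafka-console-consumer.sh --bootstrap-server n78.aa-data.cn:9092 --topic kettle-kafka --from-beginning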



3.2 Use Logstash to Consume the Kafka Data and Send It to ES
See section 2.2.2.

3.3 Use Kibana to View the Logs in ES
Log in at:
http://192.168.2.13:5601/
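
In Kibana, open Management → Index Patterns and create a pattern such as kettle_run-* to match the index written by Logstash above; the log events can then be browsed in Discover.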



Reposted from: https://blog.csdn.net/weixin_45682234/article/details/105954968
