| Application | Version |
|---|---|
| java | 1.8 |
| kafka | 2.12-2.2.1 |
| filebeat | 7.1.1 |
| logstash | 7.1.1 |
| elasticsearch | 7.1.1 |
| kibana | 7.1.1 |
yum install vim -y
yum -y install wget
cd /usr/local
wget https://download.java.net/openjdk/jdk8u41/ri/openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
tar -zxvf openjdk-8u41-b04-linux-x64-14_jan_2020.tar.gz
mv java-se-8u41-ri/ /usr/local/java8
sudo vim /etc/profile
# append the following two lines
export JAVA_HOME=/usr/local/java8
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
Check the version:
java -version
vim /etc/sysctl.conf
# add the following kernel parameters
fs.file-max = 65536
vm.max_map_count = 262144
# apply the changes
sysctl -p
vim /etc/security/limits.conf
* soft nofile 65535
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096
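To verify the limits took effect, log in to a fresh shell and check (expected values come from the settings above):
sysctl fs.file-max vm.max_map_count
ulimit -Sn   # soft nofile, expect 65535
ulimit -Hn   # hard nofile, expect 131072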
tar zxvf <package-name> -C /data/sgmw/apps/
vim /data/sgmw/apps/kafka/config/zookeeper.properties
Create the following directories and point the properties at them:
dataDir=/data/sgmw/apps/kafka/zookeeper/data
dataLogDir=/data/sgmw/apps/kafka/zookeeper/log
server.1=your.server.ip:12888:13888
Under dataDir, create a myid file (touch myid) and write 1 into it; the 1 corresponds to server.1 in the properties file. The same steps are sketched as commands below.
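A minimal sketch for this single-node setup, using the paths from the properties above:
mkdir -p /data/sgmw/apps/kafka/zookeeper/data /data/sgmw/apps/kafka/zookeeper/log
echo 1 > /data/sgmw/apps/kafka/zookeeper/data/myid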
nohup sh zookeeper-server-start.sh ../config/zookeeper.properties &
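Verify that ZooKeeper is listening on its client port (2181 by default):
netstat -tunlp | grep 2181
echo ruok | nc localhost 2181   # expect "imok"; requires nc to be installed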
vim server.properties
Create the matching directory for the log path below:
log.dirs=/data/sgmw/apps/kafka/kafka-logs
bin/kafka-server-start.sh -daemon config/server.properties
# -daemon runs the broker in the background; drop it while debugging to see errors on the console
bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
bin/kafka-topics.sh --list --zookeeper localhost:2181
Open a second terminal window so you can watch the producer and consumer consoles side by side.
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
Type messages into this console to send them to the broker.
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
The consumer console prints the messages it receives.
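You can also inspect the topic's partition and replica assignment (same ZooKeeper address as above):
bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic test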
tar zxvf elasticsearch-7.1.1-linux-x86_64.tar.gz -C /data/sgmw/apps/
vim elasticsearch/config/elasticsearch.yml
node.name: node-1
cluster.initial_master_nodes: ["node-1"]
network.host: 0.0.0.0
http.cors.allow-origin: "*"
http.cors.enabled: true
http.cors.allow-headers: X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization
http.cors.allow-credentials: true
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
Warning: Elasticsearch will not start as the root user.
Change the ownership of the Elasticsearch files to a regular user:
chown -R sgmw: <elasticsearch-dir>
Then start it as that non-root user.
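Once Elasticsearch is running, a quick sanity check against the HTTP port configured above:
curl http://localhost:9200
curl 'http://localhost:9200/_cluster/health?pretty'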
vim kibana.yml
#-------------
server.port: 5601
server.host: "your.server.ip"
elasticsearch.hosts: ["http://localhost:9200"]
i18n.locale: "zh-CN"
nohup sh kibana &
Check that the process is listening:
netstat -tunlp|grep 5601
Download the IK analyzer matching your Elasticsearch version (7.1.1 here).
sudo yum install -y unzip zip
Create an ik folder under the plugins directory in the Elasticsearch root, and unzip the archive into it:
unzip elasticsearch-analysis-ik-7.1.1.zip -d /data/sgmw/apps/elasticsearch/plugins/ik
Restart Elasticsearch:
sh ./elasticsearch -d
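After the restart, confirm the plugin was loaded (run from the same bin directory; either command works):
./elasticsearch-plugin list
curl 'http://localhost:9200/_cat/plugins?v'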
Test it in Kibana Dev Tools (the wrench icon in the left-hand sidebar), at:
http://your.server.ip:5601/app/kibana#/dev_tools/console?_g=()
GET _analyze
{
  "text": "靓仔通明"
}
{
  "tokens" : [
    {
      "token" : "靓",
      "start_offset" : 0,
      "end_offset" : 1,
      "type" : "<IDEOGRAPHIC>",
      "position" : 0
    },
    {
      "token" : "仔",
      "start_offset" : 1,
      "end_offset" : 2,
      "type" : "<IDEOGRAPHIC>",
      "position" : 1
    },
    {
      "token" : "通",
      "start_offset" : 2,
      "end_offset" : 3,
      "type" : "<IDEOGRAPHIC>",
      "position" : 2
    },
    {
      "token" : "明",
      "start_offset" : 3,
      "end_offset" : 4,
      "type" : "<IDEOGRAPHIC>",
      "position" : 3
    }
  ]
}
From this result we can see that Elasticsearch's default analyzer cannot recognize Chinese words such as 靓仔 or 通明: it simply splits the text into single characters, one token each, which clearly does not meet our needs.
GET _analyze
{
  "analyzer": "ik_max_word",
  "text": "靓仔通明"
}
{
  "tokens" : [
    {
      "token" : "靓仔",
      "start_offset" : 0,
      "end_offset" : 2,
      "type" : "CN_WORD",
      "position" : 0
    },
    {
      "token" : "通明",
      "start_offset" : 2,
      "end_offset" : 4,
      "type" : "CN_WORD",
      "position" : 1
    }
  ]
}
The text is now segmented into proper words: the IK analyzer is installed and working.
vim filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  # paths of the log files to monitor
  paths:
    - /data/sgmw/apps/filebeat/logTest/*.log
  # merge continuation lines (e.g. stack traces) into the preceding event
  multiline.type: pattern
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  # add a custom field to tell projects apart
  fields:
    app: logTest
  fields_under_root: true

# output settings
output.kafka:
  enabled: true
  hosts: ["your.server.ip:9092"]
  topic: test1
  compression: gzip
  max_message_bytes: 100000
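Before starting Filebeat, you can validate the config file and the Kafka connection with its built-in test subcommands:
./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml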
Run in the foreground (logs go to the console):
./filebeat -e -c filebeat.yml
Run in the background:
nohup ./filebeat -e -c filebeat.yml >/dev/null 2>&1 &
To test, create a file in the monitored directory:
touch test.log
Then open a Kafka console consumer on the corresponding topic to watch the messages arrive:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test1 --from-beginning
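Appending a line that matches the multiline pattern configured above should show up in the consumer within a few seconds (the line content is just an example):
echo '[2021-03-10 12:00:00,000] INFO sample log line' >> /data/sgmw/apps/filebeat/logTest/test.log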
A possible startup error:
Exiting: error loading config file: config file ("filebeat.yml") can only be writable by the owner but the permissions are "-rw-rw-r--" (to fix the permissions use: 'chmod go-w /usr/local/filebeat/filebeat.yml')
Fix it as the message suggests (run the chmod command it prints).
# after unpacking Logstash, copy the sample config and edit it
cp logstash-sample.conf logstash-final.conf
vim logstash-final.conf
#-----------------------
input {
  kafka {
    bootstrap_servers => "your.server.ip:9092"
    topics => ["test1"]
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "test-%{+YYYY.MM.dd}"
    #user => "elastic"
    #password => "changeme"
  }
}
@timestamp is currently the time the log was collected; how do we overwrite it with the timestamp inside the log line itself?
Add a filter block to the config file:
filter {
  grok {
    match => ["message", "%{TIMESTAMP_ISO8601:logdate}"]
  }
  date {
    match => ["logdate", "yyyy-MM-dd HH:mm:ss,SSS"]
    target => "@timestamp"
  }
}
If the config block was copied from a web page, restarting may fail with:
Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::ConfigurationError", :message=>"Expected one of #, input, filter, output at line 3, column 1 (byte 76) after "}
Text copied from the web can carry invisible characters (such as non-breaking spaces) that the config parser rejects. Open the file in Notepad and convert its encoding from UTF-8 to ANSI, or delete and retype the offending spaces and line breaks, then restart.
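You can also hunt for the offending bytes from the shell (C2 A0 is the UTF-8 non-breaking space often pasted from web pages):
grep -n $'\xC2\xA0' logstash-final.conf
cat -A logstash-final.conf   # marks non-printing characters with ^ and M- notation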
Note: the file name passed with -f at startup must ***match the actual config file name***.
nohup sh logstash -f ../config/logstash-final.conf &
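Logstash can also check the file for syntax errors without starting the pipeline:
sh logstash -f ../config/logstash-final.conf --config.test_and_exit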
Even after the IK analyzer is installed successfully, indices created afterwards do not use it automatically. Fix this with a default index template. Note: this deployment is single-node; number_of_replicas is the replica count, so set it to 0 when there is only one machine. Run the following in Kibana Dev Tools:
POST _template/template_default
{
  "index_patterns": ["*"],
  "order": 0,
  "version": 1,
  "settings": {
    "number_of_shards": 1,
    "number_of_replicas": 0
  },
  "mappings": {
    "date_detection": true,
    "numeric_detection": true,
    "dynamic_templates": [
      {
        "string_fields": {
          "match": "message",
          "match_mapping_type": "string",
          "mapping": {
            "type": "text",
            "norms": false,
            "analyzer": "ik_max_word",
            "fields": {
              "keyword": {
                "type": "keyword"
              }
            }
          }
        }
      }
    ]
  }
}
The response confirms the template was created:
{
  "acknowledged" : true
}