Table of Contents
Elasticsearch:
1. Install elasticsearch
2. Elasticsearch directories and files
3. Edit the configuration file
4. Create the data directory and fix its ownership
5. Allocate and lock memory
6. If the service will not restart after locking memory, fix it as follows
7. Check a single host
8. Download the es-head plugin
9. Create index: vipinfo, type: users, id: 1, document body: ...
Elasticsearch cluster
Common cluster management and monitoring commands
Building the filebeat+redis+logstash+es+kibana architecture
Elasticsearch is an open-source distributed search engine. Its features include: distributed operation, zero configuration, automatic discovery, automatic index sharding,
index replicas, a RESTful API, multiple data sources, and automatic search load balancing.
Official website:
https://www.elastic.co
Chinese community:
https://elasticsearch.cn
Official reference documentation:
https://www.elastic.co/guide/en/elasticsearch/reference/6.6/setup-configuration-memory.html
Downloads:
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/6.x/yum/6.6.0/
elasticsearch-6.6.0.rpm
filebeat-6.6.0-x86_64.rpm
kibana-6.6.0-x86_64.rpm
logstash-6.6.0.rpm
############################################################################
Prerequisite: jdk-1.8.0 (yum -y install java-1.8.0-openjdk)
Copy elasticsearch-6.6.0.rpm to the VM
rpm -ivh elasticsearch-6.6.0.rpm
/etc/elasticsearch/elasticsearch.yml #configuration file
/etc/elasticsearch/jvm.options #java virtual machine settings
/etc/init.d/elasticsearch #service startup script
/etc/sysconfig/elasticsearch #elasticsearch service variables
/usr/lib/sysctl.d/elasticsearch.conf #kernel memory settings for the elasticsearch user
/usr/lib/systemd/system/elasticsearch.service #systemd service unit
/var/log/elasticsearch/elasticsearch.log #log file path
vim /etc/elasticsearch/elasticsearch.yml
node.name: node-1 #this node's name within the cluster
path.data: /data/elasticsearch #data directory
path.logs: /var/log/elasticsearch #log directory
bootstrap.memory_lock: true #lock memory; works together with the heap settings in /etc/elasticsearch/jvm.options
network.host: 192.168.8.10,127.0.0.1 #IP addresses to listen on
http.port: 9200 #port number
mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/
vim /etc/elasticsearch/jvm.options
-Xms1g #minimum heap size
-Xmx1g #maximum heap size; the official recommendation is half of physical RAM, up to 32G at most
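The half-of-RAM rule above can be computed rather than guessed. A minimal sketch (the 32g cap follows the note above; `half_ram_gb` is a made-up helper name, and reading /proc/meminfo assumes Linux):

```shell
# half_ram_gb prints half of the machine's physical RAM in whole
# gigabytes, capped at 32g per the recommendation above and floored at 1g.
half_ram_gb() {
  total_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
  gb=$(( total_kb / 1024 / 1024 / 2 ))
  if [ "$gb" -gt 32 ]; then gb=32; fi
  if [ "$gb" -lt 1 ]; then gb=1; fi
  echo "${gb}g"
}

# Paste the result into /etc/elasticsearch/jvm.options as both -Xms and -Xmx.
```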
systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Press F2 to save and exit
systemctl daemon-reload
systemctl restart elasticsearch
http://192.168.8.10:9200/
Check cluster health
http://192.168.8.10:9200/_cluster/health?pretty
View the full cluster state
http://192.168.8.10:9200/_cluster/state?pretty
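For scripting, the same health endpoint can be queried with curl and the status field extracted. A minimal sketch (assumes the compact, non-pretty JSON output; `parse_status` is a made-up helper name):

```shell
# parse_status reads a _cluster/health JSON response on stdin and prints
# the value of its "status" field (green / yellow / red).
parse_status() {
  grep -o '"status":"[a-z]*"' | head -n1 | cut -d'"' -f4
}

# Usage against the node configured above:
#   curl -s 'http://192.168.8.10:9200/_cluster/health' | parse_status
```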
https://github.com/mobz/elasticsearch-head
After downloading, unpack it and copy es-head.crx from the crx directory to the desktop.
Rename es-head.crx to es-head.crx.zip.
Unzip es-head.crx.zip into an es-head.crx directory, then load that directory into Chrome under Developer tools -- Extensions.
curl -XPUT '192.168.8.10:9200/vipinfo/users/1?pretty' -H 'Content-Type: application/json' -d '{"name": "guofucheng","age": "45","job": "mingxing"}'
Option notes:
-XPUT create
-XDELETE delete
###############################################################################################
Status colors:
Gray: not connected
Green: data fully intact
Yellow: replicas incomplete
Red: primary data shards incomplete
Purple: data shards are being replicated
Cluster host roles:
Master node: handles management and scheduling
Worker (data) nodes: handle the data
By default all nodes are worker nodes, i.e. the master also handles data
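If dedicated roles are wanted instead of the default, elasticsearch 6.x lets them be pinned per node in elasticsearch.yml. A sketch (the two settings are standard in this version; the split shown is just an illustration):

```yaml
# A master-only node: manages the cluster but stores no data.
node.master: true
node.data: false

# A data-only node would invert both values:
# node.master: false
# node.data: true
```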
################################################################################
Adding a second host to the cluster: 192.168.8.20
1. Install es following the same steps as the first host, but note the configuration changes:
vim /etc/elasticsearch/elasticsearch.yml
node.name: node-2
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.8.20,127.0.0.1
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.8.10", "192.168.8.20"]
discovery.zen.minimum_master_nodes: 2 #set this to (number of nodes / 2) + 1
2. Create the data directory and fix its ownership
mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/
3. Allocate and lock memory:
vim /etc/elasticsearch/jvm.options
-Xms1g #minimum heap size
-Xmx1g #maximum heap size; the official recommendation is half of physical RAM, up to 32G at most
4. If the service will not restart after locking memory, fix it as follows:
systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Press F2 to save and exit
systemctl daemon-reload
systemctl restart elasticsearch
################################################################################
Adding a third host to the cluster: 192.168.8.30
1. Install es following the same steps as the first host, but note the configuration changes:
vim /etc/elasticsearch/elasticsearch.yml
node.name: node-3
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.8.30,127.0.0.1
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.8.10", "192.168.8.30"]
discovery.zen.minimum_master_nodes: 2
2. Create the data directory and fix its ownership
mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/
3. Allocate and lock memory:
vim /etc/elasticsearch/jvm.options
-Xms1g #minimum heap size
-Xmx1g #maximum heap size; the official recommendation is half of physical RAM, up to 32G at most
4. If the service will not restart after locking memory, fix it as follows:
systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Press F2 to save and exit
systemctl daemon-reload
systemctl restart elasticsearch
##############################################################################
(1) View index information
curl -XGET '192.168.8.10:9200/_cat/indices?pretty'
(2) Check cluster health
curl -XGET '192.168.8.10:9200/_cluster/health?pretty'
(3) Count the cluster nodes
curl -XGET '192.168.8.10:9200/_cat/nodes?human&pretty'
(4) View detailed information for all cluster nodes
curl -XGET '192.168.8.10:9200/_nodes/_all/info/jvm.process?human&pretty'
Note: in production, use a script to monitor the cluster and send an email alert whenever the health status is not green or the node count does not match
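That monitoring note can be sketched as a small script suitable for cron. The JSON field names come from the _cluster/health output shown above; the function name, threshold handling, and mail address are made up for illustration:

```shell
# check_health reads a _cluster/health JSON response on stdin and fails
# (printing an alert line an operator could mail out) when the status is
# not green or the node count differs from the expected count in $1.
check_health() {
  expected=$1
  json=$(cat)
  status=$(printf '%s' "$json" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)
  nodes=$(printf '%s' "$json" | grep -o '"number_of_nodes":[0-9]*' | cut -d: -f2)
  if [ "$status" != "green" ] || [ "$nodes" != "$expected" ]; then
    echo "ALERT: status=$status nodes=$nodes expected=$expected"
    return 1
  fi
  echo "OK"
}

# Usage, e.g. from cron on one of the nodes (mail address is hypothetical):
#   curl -s '192.168.8.10:9200/_cluster/health' | check_health 3 || mail -s "es alert" ops@example.com
```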
(5) Create index index1 with the shard count set to 3 and the replica count set to 2
curl -X PUT 192.168.8.10:9200/index1 -H 'Content-Type: application/json' -d '{
"settings" : {
"index" : {
"number_of_shards" : 3,
"number_of_replicas" : 2
}
}
}'
(6) For an existing index the replica count can be changed, but the shard count cannot. The statement below changes index1's replica count from 2 to 1
curl -X PUT '192.168.8.10:9200/index1/_settings?pretty' -H 'Content-Type: application/json' -d '{
"settings": {
"number_of_replicas": "1"
}
}'
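Whether the change took effect can be verified by reading the settings back. A sketch of the readback (assumes the compact, non-pretty output, where ES returns the value as a quoted string; `replicas_of` is a made-up name):

```shell
# replicas_of reads an index _settings JSON response on stdin and prints
# the number_of_replicas value.
replicas_of() {
  grep -o '"number_of_replicas":"[0-9]*"' | head -n1 | cut -d'"' -f4
}

# Usage:
#   curl -s '192.168.8.10:9200/index1/_settings' | replicas_of
```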
============================================
1. On a separate CentOS host, install nginx
Copy the nginx rpm bundle to /root on the VM
cd /root/nginx-rpm
yum -y localinstall *.rpm
systemctl start nginx
2. Install filebeat to collect the nginx logs and ship them to elasticsearch
Copy the filebeat package to the VM
rpm -ivh filebeat-6.6.0-x86_64.rpm
vim /etc/filebeat/filebeat.yml
Delete the existing content and add:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
output.elasticsearch:
hosts: ["192.168.8.10:9200"]
Save and exit
systemctl start filebeat
3. Access nginx to generate log entries, then check elasticsearch
EFK log collection
Elasticsearch: the database; stores the data (java)
logstash: collects logs and filters the data (java)
kibana: analysis, filtering, and display (java)
filebeat: collects logs and ships them to ES or logstash (go)
Official filebeat documentation:
https://www.elastic.co/guide/en/beats/filebeat/current/index.html
Environment:
es host: 192.168.8.10 (RAM: 4G)
elasticsearch
kibana
filebeat
nginx
##################################################################
Set up the es host: 192.168.8.10
1. Install elasticsearch:
Prerequisite: jdk-1.8.0
Copy elasticsearch-6.6.0.rpm to the VM
rpm -ivh elasticsearch-6.6.0.rpm
2. Edit the configuration file:
vim /etc/elasticsearch/elasticsearch.yml
node.name: node-1
path.data: /data/elasticsearch
path.logs: /var/log/elasticsearch
bootstrap.memory_lock: true
network.host: 192.168.8.10,127.0.0.1
http.port: 9200
3. Create the data directory and fix its ownership
mkdir -p /data/elasticsearch
chown -R elasticsearch:elasticsearch /data/elasticsearch/
4. Allocate and lock memory:
vim /etc/elasticsearch/jvm.options
-Xms1g #minimum heap size
-Xmx1g #maximum heap size; the official recommendation is half of physical RAM, up to 32G at most
5. If the service will not restart after locking memory, fix it as follows:
systemctl edit elasticsearch
Add:
[Service]
LimitMEMLOCK=infinity
Press F2 to save and exit
systemctl daemon-reload
systemctl restart elasticsearch
##################################################################
Install kibana on the es host
(1) Install kibana
rpm -ivh kibana-6.6.0-x86_64.rpm
(2) Edit the configuration file
vim /etc/kibana/kibana.yml
Edit:
server.port: 5601
server.host: "192.168.8.10"
server.name: "db01" #this host's hostname
elasticsearch.hosts: ["http://192.168.8.10:9200"] #the es server's IP, where the log data is received and stored
Save and exit
(3) Start kibana
systemctl start kibana
###################################################################
Install filebeat on the nginx host (192.168.8.20)
1. Install filebeat
rpm -ivh filebeat-6.6.0-x86_64.rpm
2. Edit the configuration file
vim /etc/filebeat/filebeat.yml
Edit:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
output.elasticsearch:
hosts: ["192.168.8.10:9200"]
Save and exit
3. Start filebeat
systemctl start filebeat
######################################################################
Install httpd-tools on the es host
1. Configure a yum repository and install httpd-tools
yum -y install httpd-tools
2. Run an access test with the ab load-testing tool
ab -c 1000 -n 20000 http://192.168.8.20/
3. Check the filebeat index and its data in the browser (es-head)
4. Add the index in kibana
management -- create index
discover -- top right -- select Today
5. Change the nginx log format to json
vim /etc/nginx/nginx.conf
Add inside the http {} block:
log_format log_json '{ "@timestamp": "$time_local", '
'"remote_addr": "$remote_addr", '
'"referer": "$http_referer", '
'"request": "$request", '
'"status": $status, '
'"bytes": $body_bytes_sent, '
'"agent": "$http_user_agent", '
'"x_forwarded": "$http_x_forwarded_for", '
'"up_addr": "$upstream_addr",'
'"up_host": "$upstream_http_host",'
'"up_resp_time": "$upstream_response_time",'
'"request_time": "$request_time"'
' }';
access_log /var/log/nginx/access.log log_json;
Save and exit
systemctl restart nginx
Clear the old log entries (truncate the file): > /var/log/nginx/access.log
6. Rerun the ab access test to generate json-format log entries
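Before pointing filebeat at the new format, it is worth confirming the access log really is one JSON document per line. A quick sketch (uses python3's stdlib json tool; the function name is made up):

```shell
# check_json_log reads log lines on stdin and fails on the first line
# that does not parse as JSON.
check_json_log() {
  while IFS= read -r line; do
    printf '%s' "$line" | python3 -m json.tool >/dev/null 2>&1 || {
      echo "not json: $line"
      return 1
    }
  done
}

# Usage:
#   check_json_log < /var/log/nginx/access.log && echo "log is json"
```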
7. Edit the filebeat configuration file
vim /etc/filebeat/filebeat.yml
Change it to:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
output.elasticsearch:
hosts: ["192.168.8.10:9200"]
index: "nginx-%{+yyyy.MM.dd}"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Save and exit
Restart the service: systemctl restart filebeat
8. Configure access.log and error.log to be shipped separately
vim /etc/filebeat/filebeat.yml
Change it to:
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
output.elasticsearch:
hosts: ["192.168.8.10:9200"]
indices:
- index: "nginx-access-%{+yyyy.MM.dd}"
when.contains:
tags: "access"
- index: "nginx-error-%{+yyyy.MM.dd}"
when.contains:
tags: "error"
setup.template.name: "nginx"
setup.template.pattern: "nginx-*"
setup.template.enabled: false
setup.template.overwrite: true
Save and exit
Restart the service: systemctl restart filebeat
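After the restart, the two new indices should show up in a _cat/indices listing. The awk filter below is a sketch (it relies on the index name being the third column of the default _cat/indices output; `nginx_indices` is a made-up name):

```shell
# nginx_indices filters a _cat/indices listing (read on stdin) down to
# the names of the nginx-* indices created by the config above.
nginx_indices() {
  awk '$3 ~ /^nginx-/ {print $3}'
}

# Usage:
#   curl -s '192.168.8.10:9200/_cat/indices' | nginx_indices
```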
===============================================================
kibana charts:
Log in -- choose Visualize in the left panel -- click the "+" button -- choose a chart type -- choose the index -- Buckets -- X-Axis -- Aggregation (choose Terms) --
Field (remote_addr.keyword) -- Size (5) -- click the play triangle at the top
kibana monitoring (x-pack):
Log in -- left panel -- Monitoring -- enable monitoring
===============================================================
1. Install redis and start it
(1) Prepare the installation and data directories
mkdir -p /data/soft
mkdir -p /opt/redis_cluster/redis_6379/{conf,logs,pid}
(2) Download the redis source package
cd /data/soft
wget http://download.redis.io/releases/redis-5.0.7.tar.gz
(3) Unpack redis into /opt/redis_cluster/
tar xf redis-5.0.7.tar.gz -C /opt/redis_cluster/
ln -s /opt/redis_cluster/redis-5.0.7 /opt/redis_cluster/redis
(4) Change into the source directory and build redis
cd /opt/redis_cluster/redis
make && make install
(5) Write the configuration file
vim /opt/redis_cluster/redis_6379/conf/6379.conf
Add:
bind 127.0.0.1 192.168.8.10
port 6379
daemonize yes
pidfile /opt/redis_cluster/redis_6379/pid/redis_6379.pid
logfile /opt/redis_cluster/redis_6379/logs/redis_6379.log
databases 16
dbfilename redis.rdb
dir /opt/redis_cluster/redis_6379
Save and exit
(6) Start this redis instance
redis-server /opt/redis_cluster/redis_6379/conf/6379.conf
2. Edit the filebeat configuration so its output goes to redis
(reference documentation: https://www.elastic.co/guide/en/beats/filebeat/6.6/index.html)
(1) Point the filebeat output at redis, then restart
vim /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
enabled: true
paths:
- /var/log/nginx/access.log
json.keys_under_root: true
json.overwrite_keys: true
tags: ["access"]
- type: log
enabled: true
paths:
- /var/log/nginx/error.log
tags: ["error"]
setup.template.settings:
index.number_of_shards: 3
setup.kibana:
output.redis:
hosts: ["192.168.8.10"]
key: "filebeat"
db: 0
timeout: 5
Save and exit
Restart the service: systemctl restart filebeat
(2) Access the website to generate traffic, then log in to redis and inspect the keys
redis-cli #log in
keys * #list all keys
type filebeat #filebeat is the key name
LLEN filebeat #show the list length
LRANGE filebeat 0 -1 #show the entire list contents
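The LLEN check above lends itself to a backlog alarm: if logstash falls behind, the filebeat list keeps growing. A sketch (the threshold and function name are arbitrary choices, not from the original notes):

```shell
# check_backlog compares a list length in $1 against a threshold in $2
# (default 10000) and alerts when the queue is too deep.
check_backlog() {
  depth=$1
  limit=${2:-10000}
  if [ "$depth" -gt "$limit" ]; then
    echo "ALERT: redis backlog $depth exceeds $limit"
    return 1
  fi
  echo "OK: backlog $depth"
}

# Usage against the redis instance from this document:
#   check_backlog "$(redis-cli -h 192.168.8.10 LLEN filebeat)"
```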
3. Install logstash to collect the logs from redis and submit them to es
(1) Install logstash (the package was placed in /data/soft beforehand)
cd /data/soft/
rpm -ivh logstash-6.6.0.rpm
(2) Edit the logstash configuration to keep the access and error logs separate
vim /etc/logstash/conf.d/redis.conf
Add:
input {
redis {
host => "192.168.8.10"
port => "6379"
db => "0"
key => "filebeat"
data_type => "list"
}
}
filter {
mutate {
convert => ["up_resp_time","float"]
convert => ["request_time","float"]
}
}
output {
stdout {}
if "access" in [tags] {
elasticsearch {
hosts => ["http://192.168.8.10:9200"]
index => "nginx_access-%{+YYYY.MM.dd}"
manage_template => false
}
}
if "error" in [tags] {
elasticsearch {
hosts => ["http://192.168.8.10:9200"]
index => "nginx_error-%{+YYYY.MM.dd}"
manage_template => false
}
}
}
Save and exit
Start logstash (runs in the foreground):
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/redis.conf