Official download: https://www.elastic.co/cn/downloads/elasticsearch
Here I also provide installation packages for ES 7.6.1 and ES 7.8.0:
Link: https://pan.baidu.com/s/145lhCdtd7Qbbp6b5fzkCyA
Extraction code: 5qgt
To start ES, Windows needs JDK version >= 8. If you do not have it installed, here is an article on installing the JDK on Windows:
https://blog.csdn.net/qq_42449963/article/details/111408140
The package works straight after unzipping: double-click elasticsearch.bat under the bin directory to start it, as shown below:
Then visit http://127.0.0.1:9200 in your browser.
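If you prefer the command line, the same check works with curl (assuming the default port 9200 and security not yet enabled):
# should return a small JSON document with the node name and version
curl http://127.0.0.1:9200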
# Enable password protection
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
In a command prompt (DOS window), run elasticsearch-setup-passwords interactive and press Enter.
Then type y and press Enter.
You will then be asked to set passwords for several built-in users; press Enter after each one. For example, I used 123456 for all of them.
Once the passwords of all users (the most commonly used one is elastic) have been set, it looks like this:
Enter http://127.0.0.1:9200 in the browser address bar, press Enter, and then enter the username and password to access ES.
Note: the commonly used username is elastic, and the password is whatever you set above, e.g. 123456 in my case.
The login page shown when accessing ES looks like this:
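The credentials can also be verified from the command line; this assumes the elastic user and the 123456 password set above:
curl -u elastic:123456 http://127.0.0.1:9200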
Official download: https://www.elastic.co/cn/downloads/kibana
Here I also provide installation packages for Kibana 7.6.1 and Kibana 7.8.0:
Link: https://pan.baidu.com/s/1iQF8v-y2JoCVRHESg7PBsA
Extraction code: 1r69
Unzip Kibana, go into the config directory under the Kibana directory, and open kibana.yml. You will see # i18n.locale: "en", which means English is used by default, so change it to i18n.locale: "zh-CN". The zh-CN value comes from the zh-CN.json file under "kibana-7.6.1-windows-x86_64\x-pack\plugins\translations\translations". Also find elasticsearch.requestTimeout, uncomment it, and change its value to 1000000, i.e. elasticsearch.requestTimeout: 1000000. This prevents timeouts; my machine is fairly slow, so it avoids timeout exceptions.
If the above feels cumbersome, simply append the following lines to the end of kibana.yml:
i18n.locale: "zh-CN"
elasticsearch.requestTimeout: 1000000
One more point: make sure the file stays encoded as UTF-8, otherwise errors will occur.
Double-click kibana.bat in the bin directory of the Kibana installation directory to start it. The first start is a bit slow, so just wait. If you run into the problem of kibana.bat flashing and closing immediately on Windows 10, see https://blog.csdn.net/qq_42449963/article/details/109017331. The startup result looks like this:
Then visit http://localhost:5601 in your browser; the page looks like this:
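As a quick sanity check from the command line (port 5601 and the /api/status endpoint are Kibana defaults; adjust if you changed them):
# returns status information about the Kibana instance
curl http://localhost:5601/api/status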
Official download: https://www.elastic.co/cn/downloads/elasticsearch
Here I provide some ES installation packages:
Link: https://pan.baidu.com/s/10GfDP0pehqnKLH12ZuxazQ?pwd=6g3j
Extraction code: 6g3j
To start ES, Windows needs JDK version >= 8. If you do not have it installed, here is an article on installing the JDK on Windows:
https://blog.csdn.net/qq_42449963/article/details/111408140
Reason: to avoid dirty data and dirty logs appearing in the cluster.
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-1
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9201
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9301
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication
discovery.seed_hosts: ["127.0.0.1:9301", "127.0.0.1:9302", "127.0.0.1:9303"]
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-2
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9202
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9302
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication
discovery.seed_hosts: ["127.0.0.1:9301", "127.0.0.1:9302", "127.0.0.1:9303"]
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-3
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9203
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9303
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication
discovery.seed_hosts: ["127.0.0.1:9301", "127.0.0.1:9302", "127.0.0.1:9303"]
Simply double-click the executable file elasticsearch.bat for each node; the files are located at:
elasticsearch-cluster\node-1\bin\elasticsearch.bat
elasticsearch-cluster\node-2\bin\elasticsearch.bat
elasticsearch-cluster\node-3\bin\elasticsearch.bat
You can check the cluster status through the _cluster/health endpoint of any node; for example, accessing the node-1 node returns a result like the following:
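For reference, with the ports configured above the health-check URLs would be (adjust the host if you are not on the local machine):
http://127.0.0.1:9201/_cluster/health
http://127.0.0.1:9202/_cluster/health
http://127.0.0.1:9203/_cluster/health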
To set passwords, follow the same approach as for the ES cluster on Linux (described later); since ES clusters on Windows are rarely used, I will not repeat it here.
Official download: https://www.elastic.co/cn/downloads/elasticsearch
Here I provide the ES 7.8.0 installation package:
Link: https://pan.baidu.com/s/1HCzfWVvGfgTcIvyC3uAlsQ?pwd=1xs8
Extraction code: 1xs8
To start ES, Linux needs JDK version >= 8. If you do not have it installed, here is an article on installing the JDK on Linux:
https://blog.csdn.net/qq_42449963/article/details/122648768
(1) Open the /etc/security/limits.conf file and append the following to the end of the file:
# Limit on the number of open files per process
* soft nofile 65536
* hard nofile 65536
(2) Open the /etc/security/limits.d/20-nproc.conf file and append the following to the end of the file:
# Limit on the number of open files per process
* soft nofile 65536
* hard nofile 65536
# OS-level limit on the number of processes each user may create; * means all Linux users
* hard nproc 4096
Also comment out the existing line * soft nproc 4096; the final result looks like this:
(3) Open the /etc/sysctl.conf file and append the following to the end of the file:
# Maximum number of VMAs (virtual memory areas) a process may own; the default is 65536
vm.max_map_count=655360
(4) Run the following command to reload the configuration (all of the settings above are required by ES at startup):
sysctl -p
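To confirm the kernel setting took effect you can read it back; note that the limits.conf changes only apply to new login sessions of non-root users, so re-login as a normal user (e.g. the es user created later) before checking ulimit:
# should print vm.max_map_count = 655360
sysctl vm.max_map_count
# run in a fresh session of a normal user; should print 65536
ulimit -n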
Run the following command:
tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz -C /usr/local/src/
Open the config directory under the ES installation directory, then open the elasticsearch.yml configuration file and add the following to it (these settings are required for ES to start):
# Cluster name
cluster.name: elasticsearch
# Node name
node.name: node-1
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
## Allow cross-origin access
http.cors.enabled: true
http.cors.allow-origin: "*"
## Cluster node list
cluster.initial_master_nodes: ["node-1"]
On Linux, Elasticsearch cannot be started directly as root, so we need to create a new user to run ES.
# Create the user
useradd es
# Set a password for the user; press Enter, then type the password, e.g. 123456
passwd es
# Give ownership of the extracted ES directory to the es user
chown -R es:es /usr/local/src/elasticsearch-7.8.0/
First switch to the es user:
# Run the following command, press Enter, then type the password, e.g. 123456
su es
Then start ES with the following commands:
# Go to the bin directory under the ES directory
cd /usr/local/src/elasticsearch-7.8.0/bin
# Start in the foreground
./elasticsearch
# Start in the background
./elasticsearch -d
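Once ES is running, you can verify it from another terminal (assuming the default HTTP port 9200):
# should return a JSON document describing the node
curl http://127.0.0.1:9200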
# For a foreground start
Press Ctrl+C to stop it.
# For a background start
Log in as root first, then run the following command to find the ES-related processes:
ps -ef | grep elasticsearch | grep -v grep | awk '{print $2}'
Then kill those processes:
kill -9 <PID>
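If you prefer a one-liner, the two steps above can be combined; be careful, this kills every process whose command line contains "elasticsearch":
kill -9 $(ps -ef | grep elasticsearch | grep -v grep | awk '{print $2}')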
# Enable password protection
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
xpack.license.self_generated.type: basic
Run elasticsearch-setup-passwords interactive and press Enter.
Then type y and press Enter.
You will then be asked to set passwords for several built-in users; press Enter after each one. For example, I used 123456.
Once the passwords of all users (the most commonly used one is elastic) have been set, it looks like this:
Enter http://192.168.56.10:9200 in the browser address bar, press Enter, and then enter the username and password to access ES.
Note: the commonly used username is elastic, and the password is whatever you set above, e.g. 123456 in my case.
The login page shown when accessing ES looks like this:
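The credentials can also be tested from the shell; this assumes the elastic user, the 123456 password set above, and 192.168.56.10 as the server's IP:
curl -u elastic:123456 http://192.168.56.10:9200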
Official download: https://www.elastic.co/cn/downloads/elasticsearch
Here I provide the ES 7.8.0 installation package:
Link: https://pan.baidu.com/s/1HCzfWVvGfgTcIvyC3uAlsQ?pwd=1xs8
Extraction code: 1xs8
To start ES, Linux needs JDK version >= 8. If you do not have it installed, here is an article on installing the JDK on Linux:
https://blog.csdn.net/qq_42449963/article/details/122648768
(1) Open the /etc/security/limits.conf file and append the following to the end of the file:
# Limit on the number of open files per process
* soft nofile 65536
* hard nofile 65536
(2) Open the /etc/security/limits.d/20-nproc.conf file and append the following to the end of the file:
# Limit on the number of open files per process
* soft nofile 65536
* hard nofile 65536
# OS-level limit on the number of processes each user may create; * means all Linux users
* hard nproc 4096
Also comment out the existing line * soft nproc 4096; the final result looks like this:
(3) Open the /etc/sysctl.conf file and append the following to the end of the file:
# Maximum number of VMAs (virtual memory areas) a process may own; the default is 65536
vm.max_map_count=655360
(4) Run the following command to reload the configuration (all of the settings above are required by ES at startup):
sysctl -p
mkdir -p /usr/local/src/esCluster
Run the following command:
tar -zxvf elasticsearch-7.8.0-linux-x86_64.tar.gz -C /usr/local/src/esCluster
# Go into the directory where ES was extracted
cd /usr/local/src/esCluster
# Rename the extracted directory
mv elasticsearch-7.8.0 node-1
# Copy it to create node-2
cp -rf node-1 node-2
# Copy it to create node-3
cp -rf node-1 node-3
The final contents of the /usr/local/src/esCluster directory look like this:
First run cd /usr/local/src/esCluster/node-1/config to go into node-1's config directory.
Then add the following to the elasticsearch.yml file:
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-1
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9201
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9301
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication; 192.168.56.10 is my VM's IP
discovery.seed_hosts: ["192.168.56.10:9301", "192.168.56.10:9302", "192.168.56.10:9303"]
First run cd /usr/local/src/esCluster/node-2/config to go into node-2's config directory.
Then add the following to the elasticsearch.yml file:
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-2
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9202
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9302
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication; 192.168.56.10 is my VM's IP
discovery.seed_hosts: ["192.168.56.10:9301", "192.168.56.10:9302", "192.168.56.10:9303"]
First run cd /usr/local/src/esCluster/node-3/config to go into node-3's config directory.
Then add the following to the elasticsearch.yml file:
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-3
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9203
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9303
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication; 192.168.56.10 is my VM's IP
discovery.seed_hosts: ["192.168.56.10:9301", "192.168.56.10:9302", "192.168.56.10:9303"]
By default ES uses 1g for both the initial and the maximum heap size. Since I am running three nodes on a single virtual machine, if the VM has less than 3g of total memory, starting one node can cause an already running node to be killed; this actually happened to me.
Below, node-1 is used as an example to show how to adjust the heap size ES starts with.
First run the following command to go into node-1's config directory:
cd /usr/local/src/esCluster/node-1/config
Then change the settings in the jvm.options file under the config directory to the following:
-Xms512m
-Xmx512m
node-2 and node-3 are modified in the same way; adjust them as needed (see the command-line sketch below).
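As a command-line sketch (it assumes jvm.options still contains the default -Xms1g and -Xmx1g lines), the same change can be applied to node-2 and node-3 with sed:
sed -i 's/^-Xms1g/-Xms512m/;s/^-Xmx1g/-Xmx512m/' /usr/local/src/esCluster/node-2/config/jvm.options
sed -i 's/^-Xms1g/-Xms512m/;s/^-Xmx1g/-Xmx512m/' /usr/local/src/esCluster/node-3/config/jvm.options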
On Linux, Elasticsearch cannot be started directly as root, so we need to create a new user to run ES.
# Create the user
useradd es
# Set a password for the user; press Enter, then type the password, e.g. 123456
passwd es
# Give ownership of the extracted ES directories to the es user
chown -R es:es /usr/local/src/esCluster/
Note: to be able to watch the ES logs, the foreground start method is used here; if you prefer to start in the background, append -d to the command.
(1) To start the node-1 node, open a new terminal window, switch to the es user with su es, and then run the following commands:
# Go to node-1's bin directory
cd /usr/local/src/esCluster/node-1/bin
# Start ES in the foreground; for background mode append -d to the command
./elasticsearch
(2) To start the node-2 node, open a new terminal window, switch to the es user with su es, and then run the following commands:
# Go to node-2's bin directory
cd /usr/local/src/esCluster/node-2/bin
# Start ES in the foreground; for background mode append -d to the command
./elasticsearch
(3) To start the node-3 node, open a new terminal window, switch to the es user with su es, and then run the following commands:
# Go to node-3's bin directory
cd /usr/local/src/esCluster/node-3/bin
# Start ES in the foreground; for background mode append -d to the command
./elasticsearch
You can access any ES node to check whether the cluster has started. For example, to check via the node-1 node, visit the following in a browser:
http://192.168.56.10:9201/_cluster/health
If the number of nodes shown is 3, the cluster has started successfully, like this:
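The same check can be done from the command line; the exact values in the response will differ on your machine:
curl "http://192.168.56.10:9201/_cluster/health?pretty"
# look for "number_of_nodes" : 3 and a "status" of "green" or "yellow"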
Visit any ES node in a browser, e.g.:
http://192.168.56.10:9201/_cat/nodes
The node marked with * is the master node, as shown below:
Note: the following must be done as the es user.
# Go to the bin directory of the cluster's master node (from the output above we know node-2 is the master)
cd /usr/local/src/esCluster/node-2/bin
# Generate the elastic-certificates.p12 key file into the config directory of the master node node-2
./elasticsearch-certutil cert -out /usr/local/src/esCluster/node-2/config/elastic-certificates.p12 -pass ""
Note: the following must also be done as the es user.
# Go to the config directory of node-2 (the master node, as determined above)
cd /usr/local/src/esCluster/node-2/config
# Copy the elastic-certificates.p12 key file into node-1's config directory
cp -f elastic-certificates.p12 /usr/local/src/esCluster/node-1/config/
# Copy the elastic-certificates.p12 key file into node-3's config directory
cp -f elastic-certificates.p12 /usr/local/src/esCluster/node-3/config/
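A quick way to confirm the key file is now present in every node's config directory (run as the es user):
ls -l /usr/local/src/esCluster/node-*/config/elastic-certificates.p12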
# The following settings enable password-protected access to the ES cluster; omit them if you do not need this
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
To sum up: after appending the settings, each node's elasticsearch.yml file looks like this:
node-1's elasticsearch.yml file:
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-1
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9201
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9301
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication
discovery.seed_hosts: ["192.168.56.10:9301", "192.168.56.10:9302", "192.168.56.10:9303"]
# The following settings enable password-protected access to the ES cluster; omit them if you do not need this
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
node-2's elasticsearch.yml file:
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-2
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9202
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9302
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication
discovery.seed_hosts: ["192.168.56.10:9301", "192.168.56.10:9302", "192.168.56.10:9303"]
# The following settings enable password-protected access to the ES cluster; omit them if you do not need this
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
node-3's elasticsearch.yml file:
# Cluster name; every node in the same cluster must be configured with the same cluster name, but the name itself is not restricted; default: elasticsearch
cluster.name: my-application
# Name of the current node; no two nodes in the same cluster may share the same name, but the name itself is not restricted
node.name: node-3
# IPs allowed to connect to ES; the current setting allows all IPs. Note: in a real environment this should be restricted to a safe IP
network.host: 0.0.0.0
# HTTP port for external access to ES; every node in this cluster must use a different one; default: 9200
http.port: 9203
# Port for inter-node transport within the cluster; every node in this cluster must use a different one; default: 9300
transport.tcp.port: 9303
# Cross-origin (CORS) settings
http.cors.enabled: true
http.cors.allow-origin: "*"
# Initial master node information
cluster.initial_master_nodes: ["node-1"]
# Node information of the cluster, used to form the cluster and for inter-node communication
discovery.seed_hosts: ["192.168.56.10:9301", "192.168.56.10:9302", "192.168.56.10:9303"]
# The following settings enable password-protected access to the ES cluster; omit them if you do not need this
xpack.security.enabled: true
xpack.license.self_generated.type: basic
xpack.security.transport.ssl.enabled: true
xpack.security.transport.ssl.verification_mode: certificate
xpack.security.transport.ssl.keystore.path: elastic-certificates.p12
xpack.security.transport.ssl.truststore.path: elastic-certificates.p12
Note: the following must be done as the es user.
Pick any node in the ES cluster; it does not have to be the master node (we are only now setting the passwords, so the master cannot be looked up yet anyway, and any node will do). Using the node-1 node as an example, proceed as follows:
# Go to node-1's bin directory
cd /usr/local/src/esCluster/node-1/bin
# Run the command that sets the cluster passwords
./elasticsearch-setup-passwords interactive
Then type y and press Enter.
You will then be asked to set passwords for several built-in users; press Enter after each one. For example, I used 123456.
Once the passwords of all users (the most commonly used one is elastic) have been set, it looks like this:
Pick any ES node and visit it in a browser, e.g. node-1; the URL is:
http://192.168.56.10:9201/_cat/nodes
Then enter the username (commonly elastic) and the password to log in; my password, for example, is 123456.
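The same check from the command line (assuming the 123456 password set above):
curl -u elastic:123456 "http://192.168.56.10:9201/_cat/nodes?v"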
Plugins:
Link: https://pan.baidu.com/s/10Z4Gp-ilK0zB-XQ7uZIxww?pwd=ves1
Extraction code: ves1
Deployable YAML file:
---
apiVersion: v1
data:
elasticsearch.yml: >-
cluster.name: my-es
node.name: "node-1"
path.data: /usr/share/elasticsearch/data
#path.logs: /var/log/elasticsearch
path.logs: /usr/share/elasticsearch/logs
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]
#Extra settings so that the elasticsearch-head plugin can access ES
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers:
Authorization,X-Requested-With,Content-Length,Content-Type
xpack.security.enabled: false
kind: ConfigMap
metadata:
name: es-config
namespace: elasticsearch
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: elasticsearch
spec:
selector:
matchLabels:
app: elasticsearch
serviceName: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
initContainers:
- name: init-sysctl
image: busybox
imagePullPolicy: IfNotPresent
command:
- sysctl
- -w
- vm.max_map_count=262144
securityContext:
privileged: true
containers:
- env:
- name: TZ
value: Asia/Shanghai
image: 'elasticsearch:7.6.0'
imagePullPolicy: IfNotPresent
name: elasticsearch
ports:
- containerPort: 9200
- containerPort: 9300
volumeMounts:
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: es-config
subPath: elasticsearch.yml
- mountPath: /usr/share/elasticsearch/data
name: es-persistent-storage
- mountPath: /usr/share/elasticsearch/plugins
name: es-plugins
volumes:
- configMap:
name: es-config
name: es-config
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-persistent-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Mi
storageClassName: "managed-nfs-storage" # name of the default NFS storage class
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-plugins
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
storageClassName: "managed-nfs-storage" # name of the default NFS storage class
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: elasticsearch
spec:
ports:
- name: es9200
nodePort: 19201
port: 9200
protocol: TCP
targetPort: 9200
- name: es9300
nodePort: 19301
port: 9300
protocol: TCP
targetPort: 9300
selector:
app: elasticsearch
type: NodePort
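A minimal sketch of how this manifest might be applied (the file name es.yaml is my own choice; the managed-nfs-storage storage class is assumed to exist):
# create the namespace referenced by the manifest
kubectl create namespace elasticsearch
# apply the ConfigMap, StatefulSet and Service
kubectl apply -f es.yaml
# wait for the pod to become Ready, then test the fixed NodePort 19201 (replace <node-ip> with the IP of any Kubernetes node)
kubectl get pods -n elasticsearch -w
curl http://<node-ip>:19201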
Plugins:
Link: https://pan.baidu.com/s/1M1QpDKSJd92PybPdR2W4Jg?pwd=n24z
Extraction code: n24z
Deployable YAML file:
---
apiVersion: v1
data:
elasticsearch.yml: >
cluster.name: my-es
node.name: "node-1"
path.data: /usr/share/elasticsearch/data
#path.logs: /var/log/elasticsearch
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]
#Extra settings so that the elasticsearch-head plugin can access ES
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers:
Authorization,X-Requested-With,Content-Length,Content-Type
xpack.security.enabled: false
kind: ConfigMap
metadata:
name: es-config
namespace: es8-simple
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: es8-simple
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
serviceName: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- env:
- name: TZ
value: Asia/Shanghai
image: 'elasticsearch:8.2.3'
imagePullPolicy: IfNotPresent
name: elasticsearch
resources:
limits:
cpu: '1'
memory: 2Gi
requests:
cpu: '1'
memory: 1Gi
volumeMounts:
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: es-config
subPath: elasticsearch.yml
- mountPath: /usr/share/elasticsearch/data
name: es-persistent-storage
- mountPath: /usr/share/elasticsearch/plugins
name: es-plugins
imagePullSecrets:
- name: docker-login
volumes:
- configMap:
name: es-config
name: es-config
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-persistent-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Mi
storageClassName: "managed-nfs-storage" # name of the default NFS storage class
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-plugins
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
storageClassName: "managed-nfs-storage" # name of the default NFS storage class
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: es8-simple
spec:
ports:
- name: es9200
port: 9200
protocol: TCP
targetPort: 9200
- name: es9300
port: 9300
protocol: TCP
targetPort: 9300
selector:
app: elasticsearch
type: NodePort
Plugins:
Link: https://pan.baidu.com/s/1M1QpDKSJd92PybPdR2W4Jg?pwd=n24z
Extraction code: n24z
Deployable YAML file:
---
apiVersion: v1
data:
default.policy: "//\n\n// Permissions required by modules stored in a run-time image and loaded\n\n// by the platform class loader.\n\n//\n\n// NOTE that this file is not intended to be modified. If additional\n\n// permissions need to be granted to the modules in this file, it is\n\n// recommended that they be configured in a separate policy file or\n\n// ${java.home}/conf/security/java.policy.\n\n//\n\n\n\n\n\ngrant codeBase \"jrt:/java.compiler\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\n\n\ngrant codeBase \"jrt:/java.net.http\" {\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.net\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.net.util\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.net.www\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.jdk.internal.misc\";\n\n permission java.lang.RuntimePermission \"modifyThread\";\n\n permission java.net.SocketPermission \"*\",\"connect,resolve\";\n\n permission java.net.URLPermission \"http:*\",\"*:*\";\n\n permission java.net.URLPermission \"https:*\",\"*:*\";\n\n permission java.net.URLPermission \"ws:*\",\"*:*\";\n\n permission java.net.URLPermission \"wss:*\",\"*:*\";\n\n permission java.net.URLPermission \"socket:*\",\"CONNECT\"; // proxy\n\n // For request/response body processors, fromFile, asFile\n\n permission java.io.FilePermission \"<>\",\"read,write,delete\";\n\n permission java.util.PropertyPermission \"*\",\"read\";\n\n permission java.net.NetPermission \"getProxySelector\";\n\n};\n\n\n\ngrant codeBase \"jrt:/java.scripting\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/java.security.jgss\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/java.smartcardio\" {\n\n permission javax.smartcardio.CardPermission \"*\", \"*\";\n\n permission java.lang.RuntimePermission \"loadLibrary.j2pcsc\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.sun.security.jca\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.sun.security.util\";\n\n permission java.util.PropertyPermission\n\n \"javax.smartcardio.TerminalFactory.DefaultType\", \"read\";\n\n permission java.util.PropertyPermission \"os.name\", \"read\";\n\n permission java.util.PropertyPermission \"os.arch\", \"read\";\n\n permission java.util.PropertyPermission \"sun.arch.data.model\", \"read\";\n\n permission java.util.PropertyPermission\n\n \"sun.security.smartcardio.library\", \"read\";\n\n permission java.util.PropertyPermission\n\n \"sun.security.smartcardio.t0GetResponse\", \"read\";\n\n permission java.util.PropertyPermission\n\n \"sun.security.smartcardio.t1GetResponse\", \"read\";\n\n permission java.util.PropertyPermission\n\n \"sun.security.smartcardio.t1StripLe\", \"read\";\n\n // needed for looking up native PC/SC library\n\n permission java.io.FilePermission \"<>\",\"read\";\n\n permission java.security.SecurityPermission \"putProviderProperty.SunPCSC\";\n\n permission java.security.SecurityPermission\n\n \"clearProviderProperties.SunPCSC\";\n\n permission java.security.SecurityPermission\n\n \"removeProviderProperty.SunPCSC\";\n\n};\n\n\n\ngrant codeBase \"jrt:/java.sql\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/java.sql.rowset\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\n\n\ngrant codeBase \"jrt:/java.xml.crypto\" {\n\n permission java.lang.RuntimePermission\n\n \"getStackWalkerWithClassReference\";\n\n 
permission java.lang.RuntimePermission\n\n \"accessClassInPackage.sun.security.util\";\n\n permission java.util.PropertyPermission \"*\", \"read\";\n\n permission java.security.SecurityPermission \"putProviderProperty.XMLDSig\";\n\n permission java.security.SecurityPermission\n\n \"clearProviderProperties.XMLDSig\";\n\n permission java.security.SecurityPermission\n\n \"removeProviderProperty.XMLDSig\";\n\n permission java.security.SecurityPermission\n\n \"com.sun.org.apache.xml.internal.security.register\";\n\n permission java.security.SecurityPermission\n\n \"getProperty.jdk.xml.dsig.secureValidationPolicy\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.com.sun.org.apache.xml.internal.*\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.com.sun.org.apache.xpath.internal\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.com.sun.org.apache.xpath.internal.*\";\n\n permission java.io.FilePermission \"<>\",\"read\";\n\n permission java.net.SocketPermission \"*\", \"connect,resolve\";\n\n};\n\n\n\n\n\ngrant codeBase \"jrt:/jdk.accessibility\" {\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.awt\";\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.charsets\" {\n\n permission java.util.PropertyPermission \"os.name\", \"read\";\n\n permission java.lang.RuntimePermission \"charsetProvider\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.jdk.internal.access\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.jdk.internal.misc\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.nio.cs\";\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.crypto.ec\" {\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.sun.security.*\";\n\n permission java.lang.RuntimePermission \"loadLibrary.sunec\";\n\n permission java.security.SecurityPermission \"putProviderProperty.SunEC\";\n\n permission java.security.SecurityPermission \"clearProviderProperties.SunEC\";\n\n permission java.security.SecurityPermission \"removeProviderProperty.SunEC\";\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.crypto.cryptoki\" {\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.com.sun.crypto.provider\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.jdk.internal.misc\";\n\n permission java.lang.RuntimePermission\n\n \"accessClassInPackage.sun.security.*\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.nio.ch\";\n\n permission java.lang.RuntimePermission \"loadLibrary.j2pkcs11\";\n\n permission java.util.PropertyPermission \"sun.security.pkcs11.allowSingleThreadedModules\", \"read\";\n\n permission java.util.PropertyPermission \"sun.security.pkcs11.disableKeyExtraction\", \"read\";\n\n permission java.util.PropertyPermission \"os.name\", \"read\";\n\n permission java.util.PropertyPermission \"os.arch\", \"read\";\n\n permission java.util.PropertyPermission \"jdk.crypto.KeyAgreement.legacyKDF\", \"read\";\n\n permission java.security.SecurityPermission \"putProviderProperty.*\";\n\n permission java.security.SecurityPermission \"clearProviderProperties.*\";\n\n permission java.security.SecurityPermission \"removeProviderProperty.*\";\n\n permission java.security.SecurityPermission\n\n \"getProperty.auth.login.defaultCallbackHandler\";\n\n permission java.security.SecurityPermission \"authProvider.*\";\n\n // Needed for reading PKCS11 config file and NSS library check\n\n permission java.io.FilePermission \"<>\", 
\"read\";\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.dynalink\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.httpserver\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.internal.le\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.internal.vm.compiler\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.internal.vm.compiler.management\" {\n\n permission java.lang.RuntimePermission \"accessClassInPackage.jdk.internal.vm.compiler.collections\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.jdk.vm.ci.runtime\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.jdk.vm.ci.services\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.org.graalvm.compiler.core.common\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.org.graalvm.compiler.debug\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.org.graalvm.compiler.hotspot\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.org.graalvm.compiler.options\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.org.graalvm.compiler.phases.common.jmx\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.org.graalvm.compiler.serviceprovider\";\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.jsobject\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.localedata\" {\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.text.*\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.sun.util.*\";\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.naming.dns\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.scripting.nashorn\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.scripting.nashorn.shell\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.security.auth\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.security.jgss\" {\n\n permission java.security.AllPermission;\n\n};\n\n\n\ngrant codeBase \"jrt:/jdk.zipfs\" {\n\n permission java.io.FilePermission \"<>\", \"read,write,delete\";\n\n permission java.lang.RuntimePermission \"fileSystemProvider\";\n\n permission java.lang.RuntimePermission \"accessUserInformation\";\n\n permission java.util.PropertyPermission \"os.name\", \"read\";\n\n permission java.util.PropertyPermission \"user.dir\", \"read\";\n\n permission java.util.PropertyPermission \"user.name\", \"read\";\n\n};\n\n\n\n// permissions needed by applications using java.desktop module\n\ngrant {\n\tpermission java.net.SocketPermission \"*\", \"accept,connect,listen,resolve\";\n permission java.lang.RuntimePermission \"accessClassInPackage.com.sun.beans\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.com.sun.beans.*\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.com.sun.java.swing.plaf.*\";\n\n permission java.lang.RuntimePermission \"accessClassInPackage.com.apple.*\";\n\n};\n\n"
elasticsearch.yml: >-
cluster.name: my-es
node.name: "node-1"
path.data: /usr/share/elasticsearch/data
#path.logs: /var/log/elasticsearch
path.logs: /usr/share/elasticsearch/logs
bootstrap.memory_lock: false
network.host: 0.0.0.0
http.port: 9200
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
cluster.initial_master_nodes: ["node-1"]
#Extra settings so that the elasticsearch-head plugin can access ES
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers:
Authorization,X-Requested-With,Content-Length,Content-Type
xpack.security.enabled: false
log4j2.properties: >
status = error
appender.console.type = Console
appender.console.name = console
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}]
[%node_name]%marker %m%n
######## Server JSON ############################
appender.rolling.type = RollingFile
appender.rolling.name = rolling
appender.rolling.fileName =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_server.json
appender.rolling.layout.type = ECSJsonLayout
appender.rolling.layout.dataset = elasticsearch.server
appender.rolling.filePattern =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.json.gz
appender.rolling.policies.type = Policies
appender.rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling.policies.time.interval = 1
appender.rolling.policies.time.modulate = true
appender.rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling.policies.size.size = 128MB
appender.rolling.strategy.type = DefaultRolloverStrategy
appender.rolling.strategy.fileIndex = nomax
appender.rolling.strategy.action.type = Delete
appender.rolling.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling.strategy.action.condition.type = IfFileName
appender.rolling.strategy.action.condition.glob =
${sys:es.logs.cluster_name}-*
appender.rolling.strategy.action.condition.nested_condition.type =
IfAccumulatedFileSize
appender.rolling.strategy.action.condition.nested_condition.exceeds = 2GB
################################################
######## Server - old style pattern ###########
appender.rolling_old.type = RollingFile
appender.rolling_old.name = rolling_old
appender.rolling_old.fileName =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}.log
appender.rolling_old.layout.type = PatternLayout
appender.rolling_old.layout.pattern = [%d{ISO8601}][%-5p][%-25c{1.}]
[%node_name]%marker %m%n
appender.rolling_old.filePattern =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}-%d{yyyy-MM-dd}-%i.log.gz
appender.rolling_old.policies.type = Policies
appender.rolling_old.policies.time.type = TimeBasedTriggeringPolicy
appender.rolling_old.policies.time.interval = 1
appender.rolling_old.policies.time.modulate = true
appender.rolling_old.policies.size.type = SizeBasedTriggeringPolicy
appender.rolling_old.policies.size.size = 128MB
appender.rolling_old.strategy.type = DefaultRolloverStrategy
appender.rolling_old.strategy.fileIndex = nomax
appender.rolling_old.strategy.action.type = Delete
appender.rolling_old.strategy.action.basepath = ${sys:es.logs.base_path}
appender.rolling_old.strategy.action.condition.type = IfFileName
appender.rolling_old.strategy.action.condition.glob =
${sys:es.logs.cluster_name}-*
appender.rolling_old.strategy.action.condition.nested_condition.type =
IfAccumulatedFileSize
appender.rolling_old.strategy.action.condition.nested_condition.exceeds =
2GB
################################################
rootLogger.level = info
rootLogger.appenderRef.console.ref = console
rootLogger.appenderRef.rolling.ref = rolling
rootLogger.appenderRef.rolling_old.ref = rolling_old
######## Deprecation JSON #######################
appender.deprecation_rolling.type = RollingFile
appender.deprecation_rolling.name = deprecation_rolling
appender.deprecation_rolling.fileName =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation.json
appender.deprecation_rolling.layout.type = ECSJsonLayout
# Intentionally follows a different pattern to above
appender.deprecation_rolling.layout.dataset = deprecation.elasticsearch
appender.deprecation_rolling.filter.rate_limit.type = RateLimitingFilter
appender.deprecation_rolling.filePattern =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_deprecation-%i.json.gz
appender.deprecation_rolling.policies.type = Policies
appender.deprecation_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.deprecation_rolling.policies.size.size = 1GB
appender.deprecation_rolling.strategy.type = DefaultRolloverStrategy
appender.deprecation_rolling.strategy.max = 4
appender.header_warning.type = HeaderWarningAppender
appender.header_warning.name = header_warning
#################################################
logger.deprecation.name = org.elasticsearch.deprecation
logger.deprecation.level = WARN
logger.deprecation.appenderRef.deprecation_rolling.ref = deprecation_rolling
logger.deprecation.appenderRef.header_warning.ref = header_warning
logger.deprecation.additivity = false
######## Search slowlog JSON ####################
appender.index_search_slowlog_rolling.type = RollingFile
appender.index_search_slowlog_rolling.name = index_search_slowlog_rolling
appender.index_search_slowlog_rolling.fileName =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog.json
appender.index_search_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_search_slowlog_rolling.layout.dataset =
elasticsearch.index_search_slowlog
appender.index_search_slowlog_rolling.filePattern =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs\
.cluster_name}_index_search_slowlog-%i.json.gz
appender.index_search_slowlog_rolling.policies.type = Policies
appender.index_search_slowlog_rolling.policies.size.type =
SizeBasedTriggeringPolicy
appender.index_search_slowlog_rolling.policies.size.size = 1GB
appender.index_search_slowlog_rolling.strategy.type =
DefaultRolloverStrategy
appender.index_search_slowlog_rolling.strategy.max = 4
#################################################
#################################################
logger.index_search_slowlog_rolling.name = index.search.slowlog
logger.index_search_slowlog_rolling.level = trace
logger.index_search_slowlog_rolling.appenderRef.index_search_slowlog_rolling.ref
= index_search_slowlog_rolling
logger.index_search_slowlog_rolling.additivity = false
######## Indexing slowlog JSON ##################
appender.index_indexing_slowlog_rolling.type = RollingFile
appender.index_indexing_slowlog_rolling.name =
index_indexing_slowlog_rolling
appender.index_indexing_slowlog_rolling.fileName =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog.json
appender.index_indexing_slowlog_rolling.layout.type = ECSJsonLayout
appender.index_indexing_slowlog_rolling.layout.dataset =
elasticsearch.index_indexing_slowlog
appender.index_indexing_slowlog_rolling.filePattern =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}\
_index_indexing_slowlog-%i.json.gz
appender.index_indexing_slowlog_rolling.policies.type = Policies
appender.index_indexing_slowlog_rolling.policies.size.type =
SizeBasedTriggeringPolicy
appender.index_indexing_slowlog_rolling.policies.size.size = 1GB
appender.index_indexing_slowlog_rolling.strategy.type =
DefaultRolloverStrategy
appender.index_indexing_slowlog_rolling.strategy.max = 4
#################################################
logger.index_indexing_slowlog.name = index.indexing.slowlog.index
logger.index_indexing_slowlog.level = trace
logger.index_indexing_slowlog.appenderRef.index_indexing_slowlog_rolling.ref
= index_indexing_slowlog_rolling
logger.index_indexing_slowlog.additivity = false
logger.com_amazonaws.name = com.amazonaws
logger.com_amazonaws.level = warn
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.name =
com.amazonaws.jmx.SdkMBeanRegistrySupport
logger.com_amazonaws_jmx_SdkMBeanRegistrySupport.level = error
logger.com_amazonaws_metrics_AwsSdkMetrics.name =
com.amazonaws.metrics.AwsSdkMetrics
logger.com_amazonaws_metrics_AwsSdkMetrics.level = error
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.name
= com.amazonaws.auth.profile.internal.BasicProfileConfigFileLoader
logger.com_amazonaws_auth_profile_internal_BasicProfileConfigFileLoader.level
= error
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.name =
com.amazonaws.services.s3.internal.UseArnRegionResolver
logger.com_amazonaws_services_s3_internal_UseArnRegionResolver.level = error
appender.audit_rolling.type = RollingFile
appender.audit_rolling.name = audit_rolling
appender.audit_rolling.fileName =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit.json
appender.audit_rolling.layout.type = PatternLayout
appender.audit_rolling.layout.pattern = {\
"type":"audit", \
"timestamp":"%d{yyyy-MM-dd'T'HH:mm:ss,SSSZ}"\
%varsNotEmpty{, "cluster.name":"%enc{%map{cluster.name}}{JSON}"}\
%varsNotEmpty{, "cluster.uuid":"%enc{%map{cluster.uuid}}{JSON}"}\
%varsNotEmpty{, "node.name":"%enc{%map{node.name}}{JSON}"}\
%varsNotEmpty{, "node.id":"%enc{%map{node.id}}{JSON}"}\
%varsNotEmpty{, "host.name":"%enc{%map{host.name}}{JSON}"}\
%varsNotEmpty{, "host.ip":"%enc{%map{host.ip}}{JSON}"}\
%varsNotEmpty{, "event.type":"%enc{%map{event.type}}{JSON}"}\
%varsNotEmpty{, "event.action":"%enc{%map{event.action}}{JSON}"}\
%varsNotEmpty{, "authentication.type":"%enc{%map{authentication.type}}{JSON}"}\
%varsNotEmpty{, "user.name":"%enc{%map{user.name}}{JSON}"}\
%varsNotEmpty{, "user.run_by.name":"%enc{%map{user.run_by.name}}{JSON}"}\
%varsNotEmpty{, "user.run_as.name":"%enc{%map{user.run_as.name}}{JSON}"}\
%varsNotEmpty{, "user.realm":"%enc{%map{user.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_by.realm":"%enc{%map{user.run_by.realm}}{JSON}"}\
%varsNotEmpty{, "user.run_as.realm":"%enc{%map{user.run_as.realm}}{JSON}"}\
%varsNotEmpty{, "user.roles":%map{user.roles}}\
%varsNotEmpty{, "apikey.id":"%enc{%map{apikey.id}}{JSON}"}\
%varsNotEmpty{, "apikey.name":"%enc{%map{apikey.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.name":"%enc{%map{authentication.token.name}}{JSON}"}\
%varsNotEmpty{, "authentication.token.type":"%enc{%map{authentication.token.type}}{JSON}"}\
%varsNotEmpty{, "origin.type":"%enc{%map{origin.type}}{JSON}"}\
%varsNotEmpty{, "origin.address":"%enc{%map{origin.address}}{JSON}"}\
%varsNotEmpty{, "realm":"%enc{%map{realm}}{JSON}"}\
%varsNotEmpty{, "url.path":"%enc{%map{url.path}}{JSON}"}\
%varsNotEmpty{, "url.query":"%enc{%map{url.query}}{JSON}"}\
%varsNotEmpty{, "request.method":"%enc{%map{request.method}}{JSON}"}\
%varsNotEmpty{, "request.body":"%enc{%map{request.body}}{JSON}"}\
%varsNotEmpty{, "request.id":"%enc{%map{request.id}}{JSON}"}\
%varsNotEmpty{, "action":"%enc{%map{action}}{JSON}"}\
%varsNotEmpty{, "request.name":"%enc{%map{request.name}}{JSON}"}\
%varsNotEmpty{, "indices":%map{indices}}\
%varsNotEmpty{, "opaque_id":"%enc{%map{opaque_id}}{JSON}"}\
%varsNotEmpty{, "trace.id":"%enc{%map{trace.id}}{JSON}"}\
%varsNotEmpty{, "x_forwarded_for":"%enc{%map{x_forwarded_for}}{JSON}"}\
%varsNotEmpty{, "transport.profile":"%enc{%map{transport.profile}}{JSON}"}\
%varsNotEmpty{, "rule":"%enc{%map{rule}}{JSON}"}\
%varsNotEmpty{, "put":%map{put}}\
%varsNotEmpty{, "delete":%map{delete}}\
%varsNotEmpty{, "change":%map{change}}\
%varsNotEmpty{, "create":%map{create}}\
%varsNotEmpty{, "invalidate":%map{invalidate}}\
}%n
# "node.name" node name from the `elasticsearch.yml` settings
# "node.id" node id which should not change between cluster restarts
# "host.name" unresolved hostname of the local node
# "host.ip" the local bound ip (i.e. the ip listening for connections)
# "origin.type" a received REST request is translated into one or more
transport requests. This indicates which processing layer generated the
event "rest" or "transport" (internal)
# "event.action" the name of the audited event, eg. "authentication_failed",
"access_granted", "run_as_granted", etc.
# "authentication.type" one of "realm", "api_key", "token", "anonymous" or
"internal"
# "user.name" the subject name as authenticated by a realm
# "user.run_by.name" the original authenticated subject name that is
impersonating another one.
# "user.run_as.name" if this "event.action" is of a run_as type, this is the
subject name to be impersonated as.
# "user.realm" the name of the realm that authenticated "user.name"
# "user.run_by.realm" the realm name of the impersonating subject
("user.run_by.name")
# "user.run_as.realm" if this "event.action" is of a run_as type, this is
the realm name the impersonated user is looked up from
# "user.roles" the roles array of the user; these are the roles that are
granting privileges
# "apikey.id" this field is present if and only if the "authentication.type"
is "api_key"
# "apikey.name" this field is present if and only if the
"authentication.type" is "api_key"
# "authentication.token.name" this field is present if and only if the
authenticating credential is a service account token
# "authentication.token.type" this field is present if and only if the
authenticating credential is a service account token
# "event.type" informs about what internal system generated the event;
possible values are "rest", "transport", "ip_filter" and
"security_config_change"
# "origin.address" the remote address and port of the first network hop,
i.e. a REST proxy or another cluster node
# "realm" name of a realm that has generated an "authentication_failed" or
an "authentication_successful"; the subject is not yet authenticated
# "url.path" the URI component between the port and the query string; it is
percent (URL) encoded
# "url.query" the URI component after the path and before the fragment; it
is percent (URL) encoded
# "request.method" the method of the HTTP request, i.e. one of GET, POST,
PUT, DELETE, OPTIONS, HEAD, PATCH, TRACE, CONNECT
# "request.body" the content of the request body entity, JSON escaped
# "request.id" a synthetic identifier for the incoming request, this is
unique per incoming request, and consistent across all audit events
generated by that request
# "action" an action is the most granular operation that is authorized and
this identifies it in a namespaced way (internal)
# "request.name" if the event is in connection to a transport message this
is the name of the request class, similar to how rest requests are
identified by the url path (internal)
# "indices" the array of indices that the "action" is acting upon
# "opaque_id" opaque value conveyed by the "X-Opaque-Id" request header
# "trace_id" an identifier conveyed by the part of "traceparent" request
header
# "x_forwarded_for" the addresses from the "X-Forwarded-For" request header,
as a verbatim string value (not an array)
# "transport.profile" name of the transport profile in case this is a
"connection_granted" or "connection_denied" event
# "rule" name of the applied rule if the "origin.type" is "ip_filter"
# the "put", "delete", "change", "create", "invalidate" fields are only
present
# when the "event.type" is "security_config_change" and contain the security
config change (as an object) taking effect
appender.audit_rolling.filePattern =
${sys:es.logs.base_path}${sys:file.separator}${sys:es.logs.cluster_name}_audit-%d{yyyy-MM-dd}-%i.json.gz
appender.audit_rolling.policies.type = Policies
appender.audit_rolling.policies.time.type = TimeBasedTriggeringPolicy
appender.audit_rolling.policies.time.interval = 1
appender.audit_rolling.policies.time.modulate = true
appender.audit_rolling.policies.size.type = SizeBasedTriggeringPolicy
appender.audit_rolling.policies.size.size = 1GB
appender.audit_rolling.strategy.type = DefaultRolloverStrategy
appender.audit_rolling.strategy.fileIndex = nomax
logger.xpack_security_audit_logfile.name =
org.elasticsearch.xpack.security.audit.logfile.LoggingAuditTrail
logger.xpack_security_audit_logfile.level = info
logger.xpack_security_audit_logfile.appenderRef.audit_rolling.ref =
audit_rolling
logger.xpack_security_audit_logfile.additivity = false
logger.xmlsig.name = org.apache.xml.security.signature.XMLSignature
logger.xmlsig.level = error
logger.samlxml_decrypt.name =
org.opensaml.xmlsec.encryption.support.Decrypter
logger.samlxml_decrypt.level = fatal
logger.saml2_decrypt.name = org.opensaml.saml.saml2.encryption.Decrypter
logger.saml2_decrypt.level = fatal
kind: ConfigMap
metadata:
name: es-config
namespace: es8-all
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
labels:
app: elasticsearch
name: elasticsearch
namespace: es8-all
spec:
replicas: 1
selector:
matchLabels:
app: elasticsearch
serviceName: elasticsearch
template:
metadata:
labels:
app: elasticsearch
spec:
containers:
- env:
- name: TZ
value: Asia/Shanghai
image: 'elasticsearch:8.2.3'
imagePullPolicy: IfNotPresent
name: elasticsearch
resources:
limits:
cpu: '1'
memory: 2Gi
requests:
cpu: '1'
memory: 1Gi
volumeMounts:
- mountPath: /usr/share/elasticsearch/config/elasticsearch.yml
name: es-config
subPath: elasticsearch.yml
- mountPath: /usr/share/elasticsearch/data
name: es-persistent-storage
- mountPath: /usr/share/elasticsearch/plugins
name: es-plugins
subPath: testp
- mountPath: /usr/share/elasticsearch/config/log4j2.properties
name: es-config
subPath: log4j2.properties
- mountPath: /usr/share/elasticsearch/jdk/lib/security/default.policy
name: es-config
subPath: default.policy
imagePullSecrets:
- name: docker-login
volumes:
- configMap:
name: es-config
name: es-config
volumeClaimTemplates:
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-persistent-storage
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 500Mi
storageClassName: "managed-nfs-storage" # name of the default NFS storage class
- apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: es-plugins
spec:
accessModes:
- ReadWriteMany
resources:
requests:
storage: 100Mi
storageClassName: "managed-nfs-storage" # name of the default NFS storage class
---
apiVersion: v1
kind: Service
metadata:
name: elasticsearch
namespace: es8-all
spec:
ports:
- name: es9200
port: 9200
protocol: TCP
targetPort: 9200
- name: es9300
port: 9300
protocol: TCP
targetPort: 9300
- name: es9400
port: 9400
protocol: TCP
targetPort: 9400
- name: es9500
port: 9500
protocol: TCP
targetPort: 9500
selector:
app: elasticsearch
type: NodePort
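This Service does not pin any nodePort values, so Kubernetes assigns them automatically; after applying the manifest you can look up the assigned ports like this (a sketch, using the es8-all namespace from the manifest):
kubectl get svc elasticsearch -n es8-all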
Here are Baidu Netdisk download links for several versions of the IK analyzer and the pinyin analyzer:
IK analyzer:
Link: https://pan.baidu.com/s/11U16A8WvBw7Oo-px2oljDA?pwd=8xcg
Extraction code: 8xcg
Pinyin analyzer:
Link: https://pan.baidu.com/s/1NSgB-FtKeag2uWg58WEgxw?pwd=lgc0
Extraction code: lgc0
You can also download other versions of the IK analyzer yourself, as follows:
Download location:
How to download:
Click tags, as shown below:
Find a suitable version and click the Downloads button, as shown below:
Click the zip file to download it, as shown below:
Windows environment: unzip the zip file and put it under the plugins directory of the Elasticsearch installation directory, like this:
k8s container environment: unzip the zip file and put it under the /usr/share/elasticsearch/plugins directory, like this:
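After restarting Elasticsearch (or the ES pod), you can verify that the IK analyzer was loaded with the _analyze API; a sketch, adjust the host, port and credentials to your deployment:
curl -X POST "http://127.0.0.1:9200/_analyze" -H 'Content-Type: application/json' -d '{"analyzer": "ik_smart", "text": "中华人民共和国"}'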