Table of Contents
1. Overview
2. Approach
3. Deployment
  3.1 Set up the NFS server
  3.2 Create the persistent volumes
  3.3 Deploy Elasticsearch
4. Appendix
  pv.yaml
  elasticsearch.yaml
This article is largely based on https://www.cnblogs.com/javashop-docs/p/12410845.html. That reference, however, uses Elasticsearch 6.x, and 7.x differs from 6.x in several configuration details, so the 7.x setup is worth documenting on top of it.
A persistent deployment on k8s inevitably involves persistent volumes; we use NFS-backed persistent volumes to store the ES data. For a detailed introduction to persistent volumes, see:
https://kubernetes.io/docs/concepts/storage/persistent-volumes/
By default the cluster starts five nodes: three masters and two data nodes. Following the official Elasticsearch recommendation that node roles be separated, the master nodes store no data and only coordinate the cluster.
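Why three master-eligible nodes rather than two: electing a master requires a majority of the master-eligible nodes, so three masters tolerate the loss of one while two do not. ES 7.x computes this automatically, but the arithmetic can be sketched as:

```shell
# Quorum among master-eligible nodes = floor(n/2) + 1.
# With 3 masters the cluster survives one master failure; with 2 it cannot.
for MASTERS in 2 3; do
  echo "masters=$MASTERS quorum=$((MASTERS / 2 + 1))"
done
```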
By default an ES data directory may only be used by a single node, but with a persistent volume on k8s every node stores its data on the same volume, which runs into ES's access restriction. The error looks like this:
java.io.IOException: failed to obtain lock on /usr/share/elasticsearch/data/nodes/0
You could allow multiple nodes to share one data directory by changing the ES setting max_local_storage_nodes, but the ES team advises against it.
Our solution is instead to give each node its own data directory, via the ES config option path.data. For example:
Node name | Data directory
es-data-1 | /usr/share/elasticsearch/data/es-data-1
es-data-2 | /usr/share/elasticsearch/data/es-data-2
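The per-node layout above is produced by interpolating each pod's name into path.data; a minimal shell sketch of the resulting paths, mirroring the path.data setting used later in elasticsearch.yaml:

```shell
# Each pod's data path = shared mount point + its own pod name,
# as in path.data=/usr/share/elasticsearch/data/$(MY_POD_NAME)
BASE=/usr/share/elasticsearch/data
for POD in es-data-1 es-data-2; do
  echo "$BASE/$POD"
done
```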
The persistent volume layout is planned as follows:
Directory | Contents
/nfs/data/esmaster | data of the es master nodes
/nfs/data/esdata | data of the es data nodes
Regarding index disk usage: size the persistent volume hardware according to your business data volume. In our measurements, 1,000 products take roughly 1 MB per node.
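Using the ~1 MB per 1,000 products per node figure above, a rough sizing calculation (the product count below is a made-up example):

```shell
# Rough disk estimate: ~1 MB per 1,000 products per node (figure from the text).
PRODUCTS=1000000          # hypothetical catalogue size
DATA_NODES=2
MB_PER_NODE=$((PRODUCTS / 1000))
TOTAL_MB=$((MB_PER_NODE * DATA_NODES))
echo "per node: ${MB_PER_NODE} MB, total: ${TOTAL_MB} MB"
```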
In the default plan we use the k8s master node as the NFS server and reserve 10 GB for the volumes above, so make sure the k8s master node has no less than 10 GB of free disk. Choose an NFS server that fits your situation; a dedicated NFS server is best if conditions allow. Set up the NFS service according to the plan above:
# install nfs on the master node
yum -y install nfs-utils
# create the nfs directories
mkdir -p /nfs/data/{esmaster,esdata}
# open up the permissions
chmod -R 777 /nfs/data/
# edit the exports file
vim /etc/exports
Paste the following content:
/nfs/data/esmaster *(rw,no_root_squash,sync)
/nfs/data/esdata *(rw,no_root_squash,sync)
# apply the export configuration
exportfs -r
# verify the exports took effect
exportfs
# start and enable the rpcbind and nfs services
systemctl restart rpcbind && systemctl enable rpcbind
systemctl restart nfs && systemctl enable nfs
# check the RPC service registrations
rpcinfo -p localhost
# test with showmount, passing the master node's LAN IP
showmount -e
You should see the mountable directories:
# showmount -e 172.17.14.73
Export list for 172.17.14.73:
/nfs/data/esmaster *
/nfs/data/esdata *
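A quick way to confirm both exports are present is to grep the showmount output for the expected paths. The snippet below simulates that output with a heredoc so it can run standalone; in practice, pipe the real `showmount -e <nfs-server-ip>` output through the same loop:

```shell
# Check that every expected export path appears in the showmount output.
# SHOWMOUNT_OUTPUT is simulated here; replace the heredoc with the real command.
SHOWMOUNT_OUTPUT="$(cat <<'EOF'
Export list for 172.17.14.73:
/nfs/data/esmaster *
/nfs/data/esdata *
EOF
)"
for path in /nfs/data/esmaster /nfs/data/esdata; do
  if echo "$SHOWMOUNT_OUTPUT" | grep -q "^$path "; then
    echo "OK: $path"
  else
    echo "MISSING: $path"
  fi
done
```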
Next, install the NFS client on every node so that k8s can mount the NFS directories.
# install the client on all worker nodes
yum -y install nfs-utils
systemctl start nfs && systemctl enable nfs
With that, the persistent volumes for k8s are ready.
Copy the pv.yaml content from the appendix and change its server field to your NFS server's IP address:
nfs:
  server: 192.168.0.186 # replace with your NFS server's IP
Run the following command on the k8s master node to create the namespace:
kubectl create namespace ns-elasticsearch
Create the persistent volumes with:
kubectl create -f pv.yaml
Check that the persistent volumes were created successfully:
kubectl get pv
Because elasticsearch.yaml pins pods to nodes via nodeSelector (es: enable), every node that should run ES must carry the label es=enable:
kubectl label nodes <node-name> es=enable   # repeat for each node that should run ES
Check the node labels:
kubectl get nodes --show-labels
Copy the elasticsearch.yaml content from the appendix and run the following command to create the ES cluster:
kubectl create -f elasticsearch.yaml
The deployment above creates an ns-elasticsearch namespace containing the corresponding PVCs, the service account and role binding, the stateful sets, and the services. Check the pods with:
kubectl get pods --namespace ns-elasticsearch -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
elasticsearch-data-0 1/1 Running 0 46h 10.244.2.45 vm188
elasticsearch-data-1 1/1 Running 2 45h 10.244.1.62 vm187
elasticsearch-master-0 1/1 Running 0 46h 10.244.2.44 vm188
elasticsearch-master-1 1/1 Running 0 46h 10.244.1.61 vm187
elasticsearch-master-2 1/1 Running 0 46h 10.244.0.14 vm186
Services
NodePort access is enabled by default, with the following port mappings:
32000->9200
32100->9300
Inside k8s the cluster can be reached through the following service names:
elasticsearch-api-service.ns-elasticsearch:9300
elasticsearch-service.ns-elasticsearch:9200
Verify once all containers have started successfully.
Note:
The ES minimum and maximum heap sizes must be identical. The default 256m is too small and can be raised as needed; we use 1024m.
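To change the heap size, adjust the ES_JAVA_OPTS env entry in both stateful sets of the appendix elasticsearch.yaml (shown here with the 1024m values used above; keep -Xms and -Xmx equal):

```yaml
# fragment of elasticsearch.yaml; -Xms and -Xmx must match
- name: "ES_JAVA_OPTS"
  value: "-Xms1024m -Xmx1024m"
```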
Elasticsearch 7.x references:
https://blog.csdn.net/chengyuqiang/article/details/89841544
https://www.sohu.com/a/301517999_683048
pv.yaml
---
# Persistent volume for the es master nodes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-es-master
  labels:
    pv: pv-es-master
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    server: 192.168.0.186 # replace with your NFS server's IP
    path: /nfs/data/esmaster
---
# Persistent volume for the es data nodes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-es-data
  labels:
    pv: pv-es-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: nfs
  nfs:
    server: 192.168.0.186 # replace with your NFS server's IP
    path: /nfs/data/esdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-es-master
  namespace: ns-elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: pv-es-master
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-es-data
  namespace: ns-elasticsearch
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs
  selector:
    matchLabels:
      pv: pv-es-data
elasticsearch.yaml
---
apiVersion: v1
kind: ServiceAccount
metadata:
  labels:
    elastic-app: elasticsearch
  name: elasticsearch-admin
  namespace: ns-elasticsearch
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: elasticsearch-admin
  labels:
    elastic-app: elasticsearch
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: elasticsearch-admin
    namespace: ns-elasticsearch
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    elastic-app: elasticsearch
    role: master
  name: elasticsearch-master
  namespace: ns-elasticsearch
spec:
  serviceName: es-master
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      elastic-app: elasticsearch
      role: master
  template:
    metadata:
      labels:
        elastic-app: elasticsearch
        role: master
    spec:
      # declare the persistent volume claim
      volumes:
        - name: pv-storage-elastic-master
          persistentVolumeClaim:
            claimName: pvc-es-master
      nodeSelector:
        es: enable
      containers:
        - name: elasticsearch-master
          image: elasticsearch:7.6.2
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sysctl -w vm.max_map_count=262144; ulimit -l unlimited; chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data;"]
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            # override the default ES data directory; otherwise multiple
            # nodes write to the same directory, which ES does not permit
            - name: "path.data"
              value: "/usr/share/elasticsearch/data/$(MY_POD_NAME)"
            - name: "cluster.name"
              value: "elasticsearch-cluster"
            - name: "bootstrap.memory_lock"
              value: "true"
            - name: "discovery.seed_hosts" # 7.x-style discovery setting
              value: "elasticsearch-discovery"
            - name: "node.master"
              value: "true"
            - name: "node.data"
              value: "false"
            - name: "node.ingest"
              value: "false"
            - name: "ES_JAVA_OPTS"
              value: "-Xms1024m -Xmx1024m"
            - name: "cluster.initial_master_nodes"
              value: "elasticsearch-master-0,elasticsearch-master-1,elasticsearch-master-2"
          securityContext:
            privileged: true
          # mount the persistent volume as the parent of the data directories
          volumeMounts:
            - name: pv-storage-elastic-master
              mountPath: /usr/share/elasticsearch/data/
      imagePullSecrets:
        - name: aliyun-secret
      serviceAccountName: elasticsearch-admin
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    elastic-app: elasticsearch
  name: elasticsearch-discovery
  namespace: ns-elasticsearch
spec:
  ports:
    - port: 9300
      targetPort: 9300
  selector:
    elastic-app: elasticsearch
    role: master
---
kind: StatefulSet
apiVersion: apps/v1
metadata:
  labels:
    elastic-app: elasticsearch
    role: data
  name: elasticsearch-data
  namespace: ns-elasticsearch
spec:
  serviceName: es-data
  replicas: 2
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      elastic-app: elasticsearch
      role: data
  template:
    metadata:
      labels:
        elastic-app: elasticsearch
        role: data
    spec:
      # declare the es-data persistent volume claim
      volumes:
        - name: pv-storage-elastic-data
          persistentVolumeClaim:
            claimName: pvc-es-data
      nodeSelector:
        es: enable
      containers:
        - name: elasticsearch-data
          image: elasticsearch:7.6.2
          lifecycle:
            postStart:
              exec:
                command: ["/bin/bash", "-c", "sysctl -w vm.max_map_count=262144; ulimit -l unlimited; chown -R elasticsearch:elasticsearch /usr/share/elasticsearch/data;"]
          ports:
            - containerPort: 9200
              protocol: TCP
            - containerPort: 9300
              protocol: TCP
          env:
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.name
            # override the default ES data directory; otherwise multiple
            # nodes write to the same directory, which ES does not permit
            - name: "path.data"
              value: "/usr/share/elasticsearch/data/$(MY_POD_NAME)"
            - name: "cluster.name"
              value: "elasticsearch-cluster"
            - name: "bootstrap.memory_lock"
              value: "true"
            - name: "discovery.seed_hosts"
              value: "elasticsearch-discovery"
            - name: "node.master"
              value: "false"
            - name: "node.data"
              value: "true"
            - name: "ES_JAVA_OPTS"
              value: "-Xms1024m -Xmx1024m"
          securityContext:
            privileged: true
          # mount the persistent volume as the parent of the data directories
          volumeMounts:
            - name: pv-storage-elastic-data
              mountPath: /usr/share/elasticsearch/data/
      # imagePullSecrets:
      #   - name: aliyun-secret
      serviceAccountName: elasticsearch-admin
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
---
kind: Service
apiVersion: v1
metadata:
  labels:
    elastic-app: elasticsearch-service
  name: elasticsearch-service
  namespace: ns-elasticsearch
spec:
  ports:
    - port: 9200
      targetPort: 9200
      nodePort: 32000
  selector:
    elastic-app: elasticsearch
  type: NodePort
---
kind: Service
apiVersion: v1
metadata:
  labels:
    elastic-app: elasticsearch-service
  name: elasticsearch-api-service
  namespace: ns-elasticsearch
spec:
  ports:
    - port: 9300
      targetPort: 9300
      nodePort: 32100
  selector:
    elastic-app: elasticsearch
  type: NodePort