Deploying an Elasticsearch and Kibana Cluster on Kubernetes with ECK


  • Original link: Kubernetes使用ECK部署Elasticsearch8.0和Kibana集群(k8s)_k8s elasticsearch 8-CSDN博客

1. Install ECK

  • Install directly from the Elastic download site:
  • kubectl create -f https://download.elastic.co/downloads/eck/2.0.0/crds.yaml
  • kubectl apply -f https://download.elastic.co/downloads/eck/2.0.0/operator.yaml
  • Or download the manifests first and apply them locally (note this pair pins 2.10.0; use the same version for both files):
  • wget https://download.elastic.co/downloads/eck/2.10.0/crds.yaml
  • wget https://download.elastic.co/downloads/eck/2.10.0/operator.yaml
  • kubectl create -f crds.yaml
  • kubectl apply -f operator.yaml
  • If something went wrong, delete everything and start over:
  • kubectl delete -f crds.yaml
  • kubectl delete -f operator.yaml
  • Once applied, tail the operator log; when the container is running, ECK is installed:
  • kubectl -n elastic-system logs -f statefulset.apps/elastic-operator
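The commands above mix two ECK versions (2.0.0 and 2.10.0), and crds.yaml must always match operator.yaml. A small sketch that derives both URLs from a single variable so they cannot drift apart (the version value is just an example):

```shell
# Pin the ECK version once so crds.yaml and operator.yaml always match.
ECK_VERSION=2.10.0   # example value; pick the release you actually want
BASE_URL="https://download.elastic.co/downloads/eck/${ECK_VERSION}"

echo "${BASE_URL}/crds.yaml"
echo "${BASE_URL}/operator.yaml"
# Then: wget both URLs, kubectl create -f crds.yaml, kubectl apply -f operator.yaml
```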

2. Deploy the Elasticsearch cluster

  • The first attempt failed to schedule: memory was insufficient, and no PVC could be bound.
  • Fix: add memory and CPU to the worker nodes, and provide a PVC that can bind.
    • Warning FailedScheduling 14s default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
    • Warning FailedScheduling 12s default-scheduler 0/3 nodes are available: 1 node(s) had untolerated taint {node-role.kubernetes.io/control-plane: }, 2 Insufficient cpu, 2 Insufficient memory. preemption: 0/3 nodes are available: 1 Preemption is not helpful for scheduling, 2 No preemption victims found for incoming pod..
    • Warning FailedScheduling 48s default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 Preemption is not helpful for scheduling..
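The two fixes map onto the Elasticsearch spec itself; a sketch of a nodeSet that requests explicit CPU/memory and a storage class that can actually bind (field values are illustrative, not the author's exact settings):

```yaml
# Sketch: per-nodeSet resources and storage inside an ECK Elasticsearch spec.
# Values are illustrative; nfs-sc matches the StorageClass used below.
nodeSets:
- name: default
  count: 3
  podTemplate:
    spec:
      containers:
      - name: elasticsearch
        resources:
          requests:
            cpu: 1
            memory: 2Gi
          limits:
            memory: 2Gi
  volumeClaimTemplates:
  - metadata:
      name: elasticsearch-data   # ECK expects this claim name
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 10Gi
      storageClassName: nfs-sc
```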
$ vim nfs-pvc.yaml
-------------------------------------
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: elasticsearch-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: nfs-sc
-------------------------------------
$ kubectl apply -f nfs-pvc.yaml

Create the Elasticsearch resource (the heredoc body was cut off here; reconstructed below in the standard ECK quickstart shape, matching the cluster name, version, and pod count shown later):

cat <<EOF | kubectl apply -f -
apiVersion: elasticsearch.k8s.elastic.co/v1
kind: Elasticsearch
metadata:
  name: quickstart
spec:
  version: 7.10.0
  nodeSets:
  - name: default
    count: 3
    config:
      node.store.allow_mmap: false
EOF
  • After applying, wait for the pods to become ready.

[root@k8s-master01 ~]# kubectl get pods -owide
NAME                            READY   STATUS    RESTARTS      AGE     IP             NODE         NOMINATED NODE   READINESS GATES
quickstart-es-default-0         1/1     Running   0             28m     10.244.1.121   k8s-node02   <none>           <none>
quickstart-es-default-1         1/1     Running   0             28m     10.244.2.115   k8s-node03   <none>           <none>
quickstart-es-default-2         1/1     Running   0             28m     10.244.1.122   k8s-node02   <none>           <none>
quickstart-kb-5bd78dcb9-8rlfq   1/1     Running   0             14m     10.244.2.116   k8s-node03   <none>           <none>




[root@k8s-master01 ~]# kubectl get secret
NAME                                       TYPE     DATA   AGE
default-quickstart-kibana-user             Opaque   3      22m
quickstart-es-default-es-config            Opaque   1      36m
quickstart-es-default-es-transport-certs   Opaque   7      36m
quickstart-es-elastic-user                 Opaque   1      36m
quickstart-es-http-ca-internal             Opaque   2      36m
quickstart-es-http-certs-internal          Opaque   3      36m
quickstart-es-http-certs-public            Opaque   2      36m
quickstart-es-internal-users               Opaque   4      36m
quickstart-es-remote-ca                    Opaque   1      36m
quickstart-es-transport-ca-internal        Opaque   2      36m
quickstart-es-transport-certs-public       Opaque   1      36m
quickstart-es-xpack-file-realm             Opaque   4      36m
quickstart-kb-config                       Opaque   1      22m
quickstart-kb-es-ca                        Opaque   2      22m
quickstart-kibana-user                     Opaque   1      22m


[root@k8s-master01 ~]# kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
yG197EPv635nf60IcvsIh35
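The go-template above does the base64 decoding for you; the equivalent manual pipeline looks like this (shown with a stand-in literal rather than a live secret, so it runs anywhere):

```shell
# Secret data is stored base64-encoded; decoding by hand is equivalent to
# the base64decode template function. The value below is a stand-in literal.
ENCODED=$(printf '%s' 'yG197EPv635nf60IcvsIh35' | base64)
DECODED=$(printf '%s' "$ENCODED" | base64 -d)
printf '%s\n' "$DECODED"
```

With a live cluster, the encoded value would instead come from `kubectl get secret quickstart-es-elastic-user -o jsonpath='{.data.elastic}'`.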

[root@k8s-master01 ~]# curl -u "elastic:yG197EPv635nf60IcvsIh35X" -k "http://10.244.1.121:9200"
{
  "name" : "quickstart-es-default-0",
  "cluster_name" : "quickstart",
  "cluster_uuid" : "mMEuTMd0QQCeQKsFvsTU2w",
  "version" : {
    "number" : "7.10.0",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "51e9d6f22758d0374a0f3f5c6e8f3a7997850f96",
    "build_date" : "2020-11-09T21:30:33.964949Z",
    "build_snapshot" : false,
    "lucene_version" : "8.7.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
[root@k8s-master01 ~]#
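Pod IPs like 10.244.1.121 change whenever a pod is rescheduled; inside the cluster the stable endpoint is the quickstart-es-http Service that ECK creates. For quick checks without jq, the version number can be pulled out of the cluster-info JSON with sed (the response is inlined below as a trimmed copy so the parsing can be demonstrated offline):

```shell
# Sketch: extract the ES version from a cluster-info response with sed.
# RESPONSE is a trimmed stand-in for the JSON returned by curl above.
RESPONSE='{"name":"quickstart-es-default-0","cluster_name":"quickstart","version":{"number":"7.10.0"}}'
VERSION=$(printf '%s' "$RESPONSE" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
echo "$VERSION"
```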

3. Deploy the Kibana cluster

Create the Kibana resource (this heredoc body was also cut off; reconstructed in the standard ECK quickstart shape, matching the single quickstart-kb pod shown above):

cat <<EOF | kubectl apply -f -
apiVersion: kibana.k8s.elastic.co/v1
kind: Kibana
metadata:
  name: quickstart
spec:
  version: 7.10.0
  count: 1
  elasticsearchRef:
    name: quickstart
EOF
  • After applying, wait for the pod to become ready.

4. Access test

  • Edit the quickstart-kb-http service to allow external access; here I edited it directly in KubeSphere.
  • Browse to a node IP plus the exposed port; in my case http://192.168.221.131:3xxxx/
  • Log in as elastic with the password.
  • Kibana starts successfully.
  • How to get the password:
  • kubectl get secret quickstart-es-elastic-user -o go-template='{{.data.elastic | base64decode}}'
  • The output string is the password; the default username is elastic.
  • Deployment complete. ECK really does make this convenient.
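Instead of editing the service by hand in KubeSphere, the external-access step can be sketched as a separate NodePort Service. The service name and nodePort value here are illustrative choices; the selector label is the one ECK puts on Kibana pods:

```shell
# Sketch: expose Kibana on a fixed NodePort rather than editing quickstart-kb-http.
# The name and nodePort below are illustrative.
cat <<'EOF' > kibana-nodeport.yaml
apiVersion: v1
kind: Service
metadata:
  name: quickstart-kb-nodeport
spec:
  type: NodePort
  selector:
    kibana.k8s.elastic.co/name: quickstart
  ports:
  - port: 5601
    targetPort: 5601
    nodePort: 30601
EOF
grep -c 'nodePort: 30601' kibana-nodeport.yaml
```

Apply it with `kubectl apply -f kibana-nodeport.yaml`, then browse to a node IP on port 30601.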
