Installing an Elasticsearch Cluster with Docker on macOS

Pulling the Elasticsearch image

In a macOS terminal, run:
docker pull elasticsearch:5.6.8

Creating the local directories

  • Create the configuration directory (config)
    e.g. under /Users/xxx/developTools/docker/opt/es:
    mkdir config
    Inside config, create three files: es1.yml, es2.yml and es3.yml.
    es1.yml is shown below; pay particular attention to node.name, http.port and transport.tcp.port.
cluster.name: elasticsearch-cluster
node.name: es-node1
network.bind_host: 0.0.0.0
network.publish_host: 127.0.0.1
http.port: 9200
transport.tcp.port: 9300
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true 
node.data: true  
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]
discovery.zen.minimum_master_nodes: 2

Example es2.yml:

cluster.name: elasticsearch-cluster
node.name: es-node2
network.bind_host: 0.0.0.0
network.publish_host: 127.0.0.1
http.port: 9201
transport.tcp.port: 9301
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true 
node.data: true  
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]
discovery.zen.minimum_master_nodes: 2

Example es3.yml:

cluster.name: elasticsearch-cluster
node.name: es-node3
network.bind_host: 0.0.0.0
network.publish_host: 127.0.0.1
http.port: 9202
transport.tcp.port: 9302
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: true 
node.data: true  
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300","127.0.0.1:9301","127.0.0.1:9302"]
discovery.zen.minimum_master_nodes: 2
  • Create the data directories for persistence (data)
    mkdir -p data1 data2 data3
    The final layout is shown below.


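A rough sketch of the resulting directory tree (paths as created above; /Users/xxx is a placeholder for your own home directory):

es
├── config
│   ├── es1.yml
│   ├── es2.yml
│   └── es3.yml
├── data1
├── data2
└── data3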

Running the containers

The default Elasticsearch heap is 1 GB per instance, which is fairly heavy on a laptop, so we shrink the JVM heap via ES_JAVA_OPTS when starting the containers:

docker run --name es01 -p 9200:9200 -p 9300:9300 \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-v /Users/xxx/developTools/docker/opt/es/config/es1.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /Users/xxx/developTools/docker/opt/es/data1:/usr/share/elasticsearch/data \
-d elasticsearch:5.6.8

You can check whether the container started successfully with docker ps -a.
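As an extra sanity check (optional; the exact fields in the response will vary), you can hit the first node's HTTP endpoint and confirm that the JSON it returns reports cluster_name elasticsearch-cluster and the node name es-node1:

curl http://127.0.0.1:9200/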
Next, start the second and third nodes:

docker run --name es02 -p 9201:9201 -p 9301:9301 \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-v /Users/alsc/developTools/docker/opt/es/config/es2.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /Users/alsc/developTools/docker/opt/es/data2:/usr/share/elasticsearch/data \
-d elasticsearch:5.6.8
docker run --name es03 -p 9202:9202 -p 9302:9302 \
-e ES_JAVA_OPTS="-Xms256m -Xmx256m" \
-v /Users/alsc/developTools/docker/opt/es/config/es3.yml:/usr/share/elasticsearch/config/elasticsearch.yml \
-v /Users/alsc/developTools/docker/opt/es/data3:/usr/share/elasticsearch/data \
-d elasticsearch:5.6.8
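At this point all three containers should be running. If one of them is not, its log usually explains why (container names as created above):

docker ps --filter "name=es0"
docker logs es02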

With that, a simple Elasticsearch cluster is in place. Next, let's verify that the cluster nodes are running properly:

http://127.0.0.1:9202/

http://127.0.0.1:9200/_cat/nodes?pretty
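The same checks can be made from the terminal. Once all three nodes have joined, _cat/nodes should print three rows (es-node1 through es-node3), with a * marking the elected master; the exact stats columns will differ on your machine:

curl "http://127.0.0.1:9200/_cat/nodes?v"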
[Single-node access works]


[Cluster nodes report an error], analyzed below...

Seeing the final result is gratifying, but quite a few problems came up along the way; they are covered below. Also, with the Elasticsearch server side done, we still need a client UI to browse the cluster.

Pulling the Elasticsearch head (front-end) image

docker pull mobz/elasticsearch-head:5
docker run -d -p 9100:9100 --name es-manager  mobz/elasticsearch-head:5
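To confirm the head container is serving on port 9100 (a quick optional check; it should print 200):

curl -s -o /dev/null -w "%{http_code}\n" http://localhost:9100/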

Once the head container is up, open http://localhost:9100 in the browser and point the connection box at http://localhost:9200; the http.cors settings in the yml files are what allow the head UI to query the nodes from the browser.



That completes the basic Docker-based Elasticsearch installation. Now let's go over the problems encountered along the way:

QA

1. Wrong startup parameters: after installation, localhost:9200 cannot be reached and the connection is refused

C02GC1JMML7H:es alsc$ curl localhost:9200/_cat/nodes
curl: (7) Failed to connect to localhost port 9200: Connection refused

The cause: the container was started without mapping host ports to container ports, so the process inside the container runs fine but is unreachable from the host. Make sure the run command includes the port mappings, e.g.:

docker run -p 9202:9202 -p 9302:9302
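You can confirm which host ports a running container actually exposes with docker port (shown here for the es03 container created above):

docker port es03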

2. A wrong host path (or a typo in it) in the volume mount makes the container fail to start

docker: Error response from daemon: failed to create shim: OCI runtime create failed: container_linux.go:380: starting container process caused: process_linux.go:545: container init caused: rootfs_linux.go:75: mounting "/Users/alsc/developTools/docker/opt/es/config/es2yml" to rootfs at "/usr/share/elasticsearch/config/elasticsearch.yml" caused: mount through procfd: not a directory: unknown: Are you trying to mount a directory onto a file (or vice-versa)? Check if the specified host path exists and is the expected type.

Fixing the file path resolves it. Note the missing dot in es2yml in the log above: because the host path did not exist, Docker created a directory with that name and then refused to mount a directory onto a file. You will also find other explanations for this error online (Docker version issues, OS issues, ...); read the message carefully and find the cause that applies to your setup.
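Before starting the container, it is worth verifying that the host path really exists and is a regular file (path taken from the example above):

ls -l /Users/alsc/developTools/docker/opt/es/config/es2.yml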

3. After the cluster is set up, http://127.0.0.1:9200/_cat/nodes?pretty returns an error

Check the container's startup log with docker logs containerID:

not enough master nodes discovered during pinging(found [[Candidate{node={es-node3}{GzxKyYwXRUi3iodQP10T4w}{SKaN6gKRRemwYQ3rcNV27w}{127.0.0.1}{127.0.0.1:9302}, clusterStateVersion=-1}]], but needed [2]), pinging again
This one took a long time to resolve. It turns out the host IP cannot be used as the transport address in the yml files: the containers talk to each other over their internal container IPs, so with 127.0.0.1 the nodes never find one another. The fix is to point discovery.zen.ping.unicast.hosts in every yml file at the containers' own IP addresses:

discovery.zen.ping.unicast.hosts: ["172.17.0.3:9300","172.17.0.4:9301","172.17.0.5:9302"]
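Each container's internal IP can be read with docker inspect (container names as created above; your addresses will likely differ from 172.17.0.x). After editing the yml files, restart the three containers so the new discovery hosts take effect:

docker inspect -f '{{.NetworkSettings.IPAddress}}' es01 es02 es03
docker restart es01 es02 es03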
