Installing Elasticsearch and Kibana with Docker via docker-compose

The docker-compose.yml configuration file:

version: "3.8"  
services:
  elasticsearch:
    image: elasticsearch:7.8.0   ## official Docker Hub image; you can substitute a domestic (China) mirror
    restart: always
    container_name: elasticsearch
    ports:
      - 9200:9200
    ulimits: 
      memlock:
        soft: -1
        hard: -1
    environment: 
      - "ES_JAVA_OPTS=-Xms600m -Xmx600m"  ##如果无法启动,可以降低Xms和Xmx值
    volumes: 
      - /home/docker/elasticsearch/data/:/usr/share/elasticsearch/data
      - /home/docker/elasticsearch/config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml   ## maps elasticsearch.yml into the container; the path before the colon is where your elasticsearch.yml lives on the server
  kibana:
    image: kibana:7.8.0
    restart: always
    container_name: kibana
    ports:
      - 5601:5601
    environment:
      - ELASTICSEARCH_HOSTS=http://192.168.253.77:9200  ## change to your own Elasticsearch address; Kibana 7.x reads ELASTICSEARCH_HOSTS (not elasticsearch_url)
    depends_on:
      - elasticsearch
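
Before starting the stack, make sure the bind-mounted host directories exist and are writable by the container's elasticsearch user (uid/gid 1000 in the official image). A minimal preparation sketch, assuming the /home/docker/elasticsearch paths used above:

# create the host directories referenced by the volume mappings
mkdir -p /home/docker/elasticsearch/data /home/docker/elasticsearch/config
# the official image runs as uid/gid 1000, so make the data directory writable for it
chown -R 1000:1000 /home/docker/elasticsearch/data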

Create an elasticsearch.yml file and place it in the config folder mapped above:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: es-cluster
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: "es-master"
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#${path.data}
#
# Path to log files:
#
#${path.logs}
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0
#
# Set a custom port for HTTP:
#
#http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
discovery.seed_hosts: ["127.0.0.1", "[::1]"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["es-master"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: /.*/ 
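
The last two lines enable CORS so that browser-based clients can reach the cluster directly. Once the containers are running, the CORS headers can be spot-checked with curl; a sketch assuming the default port from the compose file:

# any Origin is accepted because allow-origin is the regex /.*/
curl -i -H "Origin: http://example.com" http://localhost:9200/
# the response headers should include: Access-Control-Allow-Origin: http://example.com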

Edit /etc/sysctl.conf on the Docker host and add the following line:

vm.max_map_count=262144  

Then run sysctl -p to make the change take effect.
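
The same change can be made and applied from the shell as follows (a sketch; skip the echo if you already edited the file by hand):

# append the kernel setting required by Elasticsearch and apply it without a reboot
echo "vm.max_map_count=262144" >> /etc/sysctl.conf
sysctl -p
# verify the running value
sysctl vm.max_map_count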
Finally, change into the directory containing docker-compose.yml and run docker-compose up -d to start Elasticsearch and Kibana.
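
After the containers come up, a quick sanity check (a sketch using the ports published above; Kibana may take a minute or two to become ready):

docker-compose ps                 # both containers should show State "Up"
curl http://localhost:9200        # should return the JSON banner with "cluster_name" : "es-cluster"
# then open http://<server-ip>:5601 in a browser to reach Kibana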
