6. Installing ELK 7.6.1 with Docker on Ubuntu 18

1. Install Elasticsearch

## 1. Search for the image (specify the exact tag at pull time, e.g. 7.6.1; without a tag Docker defaults to latest)
docker search elasticsearch
## 2. Pull the image
docker pull elasticsearch:7.6.1
## 3. Create a user-defined bridge network named elk
docker network create --driver bridge elk
## 4. List networks
docker network ls
## 5. Inspect the network's metadata
docker network inspect elk
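
The user-defined bridge network gives the containers built-in DNS, which is why Logstash and Kibana can later reach Elasticsearch simply as http://elasticsearch:9200. Once the containers below are running, one quick way to list which of them have joined the network (a standard docker CLI format string, shown here only as an optional check):
docker network inspect elk --format '{{range .Containers}}{{.Name}} {{end}}'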

## 6. First create a temporary es container on the user-defined bridge network
docker run --name elasticsearch \
--network=elk -p 9200:9200 -p 9300:9300 \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms128m -Xmx128m" \
-d elasticsearch:7.6.1
## 7. Enter the container
docker exec -it elasticsearch /bin/bash
## 8. Copy the container's config directory (or just elasticsearch.yml) to a directory on the host
docker cp elasticsearch:/usr/share/elasticsearch/config /my/elasticsearch/

## 9. Edit elasticsearch.yml and jvm.options under the copied config directory
## 10. Download, install and configure the ik analyzer; details below (either docker cp it into the container or bind-mount it, pick one)
unzip elasticsearch-analysis-ik-7.6.1.zip -d /my/elasticsearch/plugins/ik
## 11. Add product.dic (a custom dictionary file) to the ik analyzer's config directory
cd /my/elasticsearch/plugins/ik/config
## 12. Register the custom product.dic file in the ik analyzer's IKAnalyzer.cfg.xml (sketched below)
vim IKAnalyzer.cfg.xml
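
For reference, a minimal sketch of IKAnalyzer.cfg.xml after registering the custom dictionary (product.dic sits in the same config directory as this file; the stock comments and entries may differ slightly between plugin versions):
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- local extension dictionaries, paths relative to this config directory -->
    <entry key="ext_dict">product.dic</entry>
    <!-- local extension stopword dictionaries -->
    <entry key="ext_stopwords"></entry>
</properties>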

## 13. Stop and remove the es container started above, then start a new one with the host directories mounted
docker run --name elasticsearch \
--network=elk -p 9200:9200 -p 9300:9300 \
--privileged=true --restart=always \
-e "discovery.type=single-node" \
-e ES_JAVA_OPTS="-Xms1024m -Xmx1024m" \
-v /my/elasticsearch/config:/usr/share/elasticsearch/config \
-v /my/elasticsearch/data:/usr/share/elasticsearch/data \
-v /my/elasticsearch/plugins:/usr/share/elasticsearch/plugins \
-v /my/elasticsearch/logs:/usr/share/elasticsearch/logs \
-d elasticsearch:7.6.1 
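
If this new container exits right away with permission errors on the mounted data or logs directories, it is usually because the official image runs Elasticsearch as uid/gid 1000; giving that user ownership of the host directories normally resolves it:
chown -R 1000:1000 /my/elasticsearch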

1.1 Parameter notes

--name elasticsearch: name the container elasticsearch
-p 9200:9200: map container port 9200 (the HTTP API) to host port 9200
-p 9300:9300: map container port 9300 to host port 9300, used for node-to-node communication in a cluster
-e "discovery.type=single-node": run in single-node mode
-e ES_JAVA_OPTS="-Xms1024m -Xmx1024m": set the JVM heap size (keep -Xms and -Xmx equal)
-v /my/elasticsearch/config:/usr/share/elasticsearch/config: mount the config directory from the host
-v /my/elasticsearch/data:/usr/share/elasticsearch/data: mount the data directory from the host
-v /my/elasticsearch/plugins:/usr/share/elasticsearch/plugins: mount the plugins directory from the host (restart the container to pick up newly added plugins)
-d elasticsearch:7.6.1: run the container in the background and print the container ID

1.2 Test

curl http://localhost:9200

{
  "name" : "6a122f6a7607",
  "cluster_name" : "docker-cluster",
  "cluster_uuid" : "8HINmx36T3qPzpCS6b_Yyg",
  "version" : {
    "number" : "7.6.1",
    "build_flavor" : "default",
    "build_type" : "docker",
    "build_hash" : "2f90bbf7b93631e52bafb59b3b049cb44ec25e96",
    "build_date" : "2019-10-28T20:40:44.881551Z",
    "build_snapshot" : false,
    "lucene_version" : "8.2.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
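
Beyond the banner above, the _cat/health endpoint gives a quick view of cluster state (a fresh single-node cluster normally reports green or yellow):
curl http://localhost:9200/_cat/health?v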

1.3 Manually install the ik analyzer (the ik version must match the Elasticsearch version)

## Go to the download directory
cd /my/tools.bak
## Download the ik analyzer and upload it to /my/tools.bak
## https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.6.1
## Unzip it
unzip /my/tools.bak/elasticsearch-analysis-ik-7.6.1.zip -d /my/elasticsearch/plugins/ik7.6.1
## Copy the unzipped ik analyzer from the host into the container
docker cp /my/elasticsearch/plugins/ik7.6.1 elasticsearch:/usr/share/elasticsearch/plugins
## Enter the container
docker exec -it elasticsearch /bin/bash
cd /usr/share/elasticsearch/plugins
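
Plugins are only loaded at startup, so after copying the ik directory in, restart the container (from the host) and confirm Elasticsearch picked the plugin up:
docker restart elasticsearch
curl http://localhost:9200/_cat/plugins?v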

1.4 Test with Postman

POST http://localhost:9200/_analyze?pretty
{
    "analyzer":"ik_smart",
    "text":"中华"
}
## Result
{
  "tokens": [
    {
      "token": "中华",
      "start_offset": 0,
      "end_offset": 2,
      "type": "CN_WORD",
      "position": 0
    }
  ]
}
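
For comparison, the ik_max_word analyzer returns every sub-word it recognizes, while ik_smart keeps only the coarsest split; the request shape is the same (the text below is just an example):
POST http://localhost:9200/_analyze?pretty
{
    "analyzer":"ik_max_word",
    "text":"中华人民共和国"
}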

1.5 elasticsearch.yml configuration

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: my-application
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /home/es/elasticsearch/data
#
# Path to log files:
#
path.logs: /home/es/elasticsearch/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 0.0.0.0 
network.bind_host: 0.0.0.0
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"

2. Install Logstash

## 1. Pull the image
docker pull logstash:7.6.1
## 2. Create a temporary container
docker run --network=elk --name=logstash --rm -di logstash:7.6.1
## 3. Enter the container to look around
docker exec -it logstash /bin/bash
## 4. Copy the configuration from the container to the host
docker cp logstash:/usr/share/logstash/config /my/logstash/
docker cp logstash:/usr/share/logstash/pipeline /my/logstash/
docker cp logstash:/usr/share/logstash/logstash-core/lib/jars/ /my/logstash/
## 5. Edit the configuration and put the MySQL 8 connector jar into the matching host directory
##    put the SQL script product.sql under /my/logstash/config/mysql/
##    point /my/logstash/pipeline/logstash.conf at that jar and .sql script (see the sketch after this block)
## 6. Stop and remove the old container, then create a new one with the directories mounted
docker run --network=elk --name=logstash \
--privileged=true --restart=always \
-v /my/logstash/config:/usr/share/logstash/config \
-v /my/logstash/pipeline:/usr/share/logstash/pipeline \
-v /my/logstash/jars:/usr/share/logstash/logstash-core/lib/jars \
-d logstash:7.6.1 
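
For reference, a minimal sketch of the two pieces mentioned in step 5 (the layout follows the container paths used in logstash.conf below; the table and column names are placeholders, not taken from the original guide):
## expected layout on the host
##   /my/logstash/jars/mysql-connector-java-8.0.20.jar    <- MySQL 8 connector jar
##   /my/logstash/config/mysql/product.sql                <- query executed by the jdbc input
## product.sql (placeholder columns)
SELECT id, name, price, update_time FROM tb_product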

2.1 logstash.conf

## jdbc_driver_library and statement_filepath must point to paths inside the container
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://localhost:3306/yourdatabase?useUnicode=true&characterEncoding=utf8&serverTimezone=Asia/Shanghai&useSSL=false&allowMultiQueries=true"
    jdbc_user => "root"
    jdbc_password => "yourpassword"
    jdbc_driver_library => "/usr/share/logstash/logstash-core/lib/jars/mysql-connector-java-8.0.20.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_paging_enabled => true
    jdbc_page_size => "5000"
    clean_run => true
    statement_filepath => "/usr/share/logstash/config/mysql/product.sql"
    lowercase_column_names => false
  }
}

output {
  elasticsearch {
    hosts => ["http://elasticsearch:9200"]
    index => "tb_product"
    document_id => "%{id}"
  }
  stdout {
    codec => json_lines
  }
}
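
Once Logstash has run the pipeline, the imported documents can be checked directly against Elasticsearch (the index name matches the output block above):
curl http://localhost:9200/_cat/indices?v
curl http://localhost:9200/tb_product/_search?pretty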

3. Install Kibana

## 1. Pull the image
docker pull kibana:7.6.1
## 2. Create a temporary container
docker run --name kibana \
--privileged=true --network=elk \
-p 5601:5601 -d kibana:7.6.1
## 3. Enter the container to look around
docker exec -it kibana /bin/bash
## 4. Copy the configuration file to the host
docker cp kibana:/usr/share/kibana/config/kibana.yml /my/kibana/conf/
vim /my/kibana/conf/kibana.yml
## 5. Set the property to the actual es address: elasticsearch.hosts: [ "http://elasticsearch:9200" ]
## 6. Stop and remove the previous kibana container, then start it again with the config mounted
docker run --network=elk --name kibana \
--privileged=true --restart=always \
-v /my/kibana/conf/kibana.yml:/usr/share/kibana/config/kibana.yml \
-p 5601:5601 -d kibana:7.6.1
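
Kibana takes a little while to connect to Elasticsearch after the container starts; following the logs and then opening http://<host-ip>:5601 in a browser confirms it is up:
docker logs -f kibana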

3.1 kibana.yml configuration

# Default Kibana configuration for docker target
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
i18n.locale: zh-CN
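
With this configuration in place, Kibana's own status API is a quick programmatic check that it can reach Elasticsearch:
curl http://localhost:5601/api/status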
