Steps to Install an ELK Stack with Docker

Versions used in this installation

  • elasticsearch-6.2.4

  • kibana-6.2.4

  • logstash-6.3.0

  • kafka_2.10-0.10.2.1

  • Download link for all required packages: https://pan.baidu.com/s/1LNJuF0kEXkG2FyzBBZGI3g (extraction code: izhg)

Part 1: Build ES

1. Create a directory

to hold the ElasticSearch files

mkdir -p /usr/local/docker/elk/ES # create the directory (and any missing parents)
cd /usr/local/docker/elk/ES # enter the ES directory

Files to prepare

Dockerfile # file used to build the image
elasticsearch.yml # es config file
elasticsearch-6.2.4.tar.gz # the es distribution (tar.gz)
elasticsearch-analysis-ik-6.2.4.zip # the ik analyzer plugin (zip)

2. Create the Dockerfile

vim Dockerfile  # paste in the content below

Dockerfile contents

# Base image with Java 8
FROM jdk8
# ADD the es tarball to /usr/local (ADD auto-extracts tar archives)
ADD elasticsearch-6.2.4.tar.gz /usr/local
# COPY the ik analyzer zip to /usr/local (COPY does not extract)
COPY elasticsearch-analysis-ik-6.2.4.zip /usr/local/

# Create the elsearch group
RUN groupadd elsearch
# Create the elsearch user
RUN useradd elsearch -g elsearch -p elasticsearch

# Switch the working directory
WORKDIR /usr/local
# Give the elsearch user ownership of the es directory
RUN chown -R elsearch:elsearch  elasticsearch-6.2.4
# Unzip the ik analyzer (this zip extracts into an elasticsearch/ directory)
RUN unzip /usr/local/elasticsearch-analysis-ik-6.2.4.zip
# Create the analysis-ik directory under es/plugins/
RUN mkdir /usr/local/elasticsearch-6.2.4/plugins/analysis-ik/
# Copy the ik analyzer files into es's plugins/analysis-ik/ directory
RUN cp -r /usr/local/elasticsearch/* /usr/local/elasticsearch-6.2.4/plugins/analysis-ik/
# Copy in the es config file
COPY elasticsearch.yml /usr/local/elasticsearch-6.2.4/config

# Switch to the elsearch user (es refuses to start as root)
USER elsearch
# Expose the HTTP and transport ports
EXPOSE 9200
EXPOSE 9300

# Start es when the container starts
CMD ["/usr/local/elasticsearch-6.2.4/bin/elasticsearch"]

3. Create the elasticsearch.yml config file

vim elasticsearch.yml # paste in the content below

elasticsearch.yml contents

cluster.name: elasticsearch-application # cluster name
http.port: 9200 # HTTP port
network.host: 0.0.0.0 # bind all interfaces so other hosts can reach es
http.cors.enabled: true  # enable CORS
http.cors.allow-origin: "*"  # allow requests from any origin
bootstrap.memory_lock: false # skip the memory-lock bootstrap check to avoid startup errors
bootstrap.system_call_filter: false # skip the syscall-filter bootstrap check

4. Build the Docker image

docker build -t my_es .
# output like this means the build succeeded
Successfully built 8da8d8fd39e9
Successfully tagged my_es:latest

Error caused by the kernel's vm.max_map_count limit being too low

# run on the host, not in the container
sysctl -w vm.max_map_count=262144 # raise the limit immediately (lost on reboot)
# echo "vm.max_map_count=262144" >> /etc/sysctl.conf # uncomment to persist the setting across reboots
systemctl  restart docker # restart docker
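The persisted line in /etc/sysctl.conf follows `key = value` syntax, where later assignments override earlier ones and `#` starts a comment. A minimal Python sketch of how such a file is read (the `sysctl_value` helper and the sample text are illustrative, not part of the setup):

```python
def sysctl_value(conf_text: str, key: str):
    # Return the last value assigned to `key` in sysctl.conf-style text.
    value = None
    for line in conf_text.splitlines():
        line = line.split("#", 1)[0].strip()  # strip comments and whitespace
        if "=" in line:
            k, v = (part.strip() for part in line.split("=", 1))
            if k == key:
                value = v  # later assignments win
    return value

sample = "vm.swappiness = 10\nvm.max_map_count=262144\n"
print(sysctl_value(sample, "vm.max_map_count"))  # 262144
```

This only mirrors the file format; the authoritative value at runtime is what `sysctl vm.max_map_count` reports on the host.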

Part 2: Build Kibana

1. Create a directory for Kibana

mkdir -p /usr/local/docker/elk/Kibana # create the directory (and any missing parents)
cd /usr/local/docker/elk/Kibana # enter the Kibana directory

Files to prepare

Dockerfile # file used to build the image
kibana.yml # kibana config file
kibana-6.2.4-linux-x86_64.tar.gz # the kibana distribution (tar.gz)

2. Create the Dockerfile

vim Dockerfile  # paste in the content below

Dockerfile contents

# Base image with Java 8
FROM jdk8

# ADD auto-extracts the tarball into /usr/local
ADD kibana-6.2.4-linux-x86_64.tar.gz /usr/local
ADD kibana.yml /usr/local/kibana-6.2.4-linux-x86_64/config/

# Create a dedicated kibana user and group
RUN groupadd kibana
RUN useradd kibana -g kibana -p kibana

WORKDIR /usr/local/
# Give the kibana user ownership of the kibana directory
RUN chown -R kibana:kibana /usr/local/kibana-6.2.4-linux-x86_64

USER kibana

EXPOSE 5601

# Start kibana when the container starts
CMD ["/usr/local/kibana-6.2.4-linux-x86_64/bin/kibana"]

3. Create the kibana.yml config file

vim kibana.yml # paste in the content below

kibana.yml contents

server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: "http://192.168.244.110:9200" # change to the address of your ES
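Kibana only needs the scheme, host, and port out of `elasticsearch.url`, so a malformed value (missing scheme, bad port) is worth catching before baking it into the image. A quick standard-library sketch (the `check_es_url` helper is hypothetical, for illustration only):

```python
from urllib.parse import urlsplit

def check_es_url(url: str):
    # Split the configured elasticsearch.url into its parts so a typo
    # surfaces here instead of at container startup.
    parts = urlsplit(url)
    assert parts.scheme == "http" and parts.hostname and parts.port, "malformed URL"
    return parts.hostname, parts.port

print(check_es_url("http://192.168.244.110:9200"))  # ('192.168.244.110', 9200)
```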

4. Build the Docker image

docker build -t my_kibana .
# output like this means the build succeeded
Successfully built 8da8d8fd39e9
Successfully tagged my_kibana:latest

Part 3: Build Logstash

1. Create a directory for Logstash

mkdir -p /usr/local/docker/elk/Logstash # create the directory (and any missing parents)
cd /usr/local/docker/elk/Logstash # enter the Logstash directory

Files to prepare

Dockerfile # file used to build the image
logstash.conf # logstash pipeline config
logstash.yml # logstash settings file
logstash-6.3.0.tar.gz # the logstash distribution (tar.gz)

2. Create the Dockerfile

vim Dockerfile  # paste in the content below

Dockerfile contents

# Base image with Java 8
FROM jdk8

# ADD auto-extracts the tarball into /usr/local
ADD logstash-6.3.0.tar.gz /usr/local/
WORKDIR /usr/local/

# Copy in the pipeline and settings files
COPY logstash.conf  /usr/local/logstash-6.3.0/bin/logstash.conf
COPY logstash.yml  /usr/local/logstash-6.3.0/config/logstash.yml
RUN mkdir /usr/local/logs

# Start logstash with the pipeline config above
ENTRYPOINT  /usr/local/logstash-6.3.0/bin/logstash -f /usr/local/logstash-6.3.0/bin/logstash.conf

3. Create the logstash.yml config file

vim logstash.yml # paste in the content below

logstash.yml contents

http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://192.168.244.110:9200 # change to the address of your ES
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false # x-pack monitoring is turned off

4. Create logstash.conf

vim logstash.conf # paste in the content below

logstash.conf contents
Change the IPs inside to your Kafka address

input {
  kafka {
    bootstrap_servers => ["192.168.244.110:9092"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["user-info"]
    type => "user-info"
  }
  kafka {
    bootstrap_servers => ["192.168.244.110:9092"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["user-error"]
    type => "user-error"
  }
  kafka {
    bootstrap_servers => ["192.168.244.110:9092"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["goods-info"]
    type => "goods-info"
  }
  kafka {
    bootstrap_servers => ["192.168.244.110:9092"]
    auto_offset_reset => "latest"
    consumer_threads => 5
    decorate_events => true
    topics => ["goods-error"]
    type => "goods-error"
  }
}

output {
  elasticsearch {
    hosts => ["192.168.244.110:9200"]
    index => "%{[type]}log-%{+YYYY-MM-dd}"
  }
}
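With this pipeline, each event lands in a daily index whose name combines the event's `type` field, the literal `log-`, and the event date. A small Python sketch of the name that the `%{[type]}log-%{+YYYY-MM-dd}` pattern produces (the `index_name` helper is illustrative, not part of Logstash):

```python
from datetime import date

def index_name(event_type: str, day: date) -> str:
    # Mirrors "%{[type]}log-%{+YYYY-MM-dd}": type field, literal "log-",
    # then the event date formatted year-month-day.
    return f"{event_type}log-{day.strftime('%Y-%m-%d')}"

print(index_name("user-info", date(2020, 5, 1)))  # user-infolog-2020-05-01
```

Knowing the resulting names (e.g. `user-errorlog-2020-05-01`) is what you will need later when creating index patterns in Kibana.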

5. Build the Docker image

docker build -t my_logstash .
# output like this means the build succeeded
Successfully built e6cc9b9b3bea
Successfully tagged my_logstash:latest

Part 4: Build Kafka

1. Create a directory for Kafka

mkdir -p /usr/local/docker/elk/Kafka # create the directory (and any missing parents)
cd /usr/local/docker/elk/Kafka # enter the Kafka directory

Files to prepare

Dockerfile # file used to build the image
server.properties # kafka broker config
supervisord.conf # supervisord config used to start zookeeper and kafka
kafka_2.10-0.10.2.1.tgz # the kafka distribution (tgz)

2. Create the Dockerfile

vim Dockerfile  # paste in the content below

Dockerfile contents

# Base image with Java 8 (it must already include supervisord, which launches kafka below)
FROM jdk8
MAINTAINER yuanfire

# ADD auto-extracts the tarball into /usr/local
ADD kafka_2.10-0.10.2.1.tgz /usr/local/
COPY server.properties  /usr/local/kafka_2.10-0.10.2.1/config/server.properties
COPY supervisord.conf /etc/supervisord.conf
EXPOSE 9092
# Start supervisord, which in turn starts zookeeper and kafka
CMD ["/usr/bin/supervisord"]

3. Create the server.properties config file

vim server.properties # paste in the content below

server.properties contents

broker.id=1
# Bind on all interfaces
listeners=PLAINTEXT://0.0.0.0:9092
# Change to your host machine's IP; this is the address announced to clients.
# Note: in a .properties file a trailing "# ..." would become part of the value,
# so comments must sit on their own lines.
advertised.listeners=PLAINTEXT://192.168.255.130:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
# Your zookeeper address
zookeeper.connect=192.168.255.130:2181
# Connection timeout in ms
zookeeper.connection.timeout.ms=6000
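Clients never use `listeners` directly; they connect to whatever `advertised.listeners` announces, which is why that value must be an address reachable from outside the container. A minimal sketch of how such a listener string breaks down (the `parse_listener` helper is hypothetical, for illustration only):

```python
def parse_listener(listener: str):
    # "PLAINTEXT://192.168.255.130:9092" -> ("PLAINTEXT", "192.168.255.130", 9092)
    protocol, rest = listener.split("://", 1)
    host, port = rest.rsplit(":", 1)  # rsplit keeps any ':' inside the host intact
    return protocol, host, int(port)

print(parse_listener("PLAINTEXT://192.168.255.130:9092"))
```

If the host part were left as 0.0.0.0 here, consumers like the Logstash pipeline above would be told to connect to 0.0.0.0 and fail.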

4. Create supervisord.conf

vim supervisord.conf # paste in the content below

[supervisord]
nodaemon=true
; the -daemon flag is omitted from both commands below:
; supervisord expects the processes it manages to stay in the foreground
[program:zookeeper]
command=/usr/local/kafka_2.10-0.10.2.1/bin/zookeeper-server-start.sh /usr/local/kafka_2.10-0.10.2.1/config/zookeeper.properties
[program:kafka]
command=/usr/local/kafka_2.10-0.10.2.1/bin/kafka-server-start.sh /usr/local/kafka_2.10-0.10.2.1/config/server.properties
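supervisord reads INI syntax, which Python's `configparser` also understands, so the file can be sanity-checked before building the image. A sketch with the conf inlined as a sample string (illustrative only; the commands are shown without `-daemon`, since supervisord manages foreground processes):

```python
import configparser

SUPERVISORD_CONF = """\
[supervisord]
nodaemon=true
[program:zookeeper]
command=/usr/local/kafka_2.10-0.10.2.1/bin/zookeeper-server-start.sh /usr/local/kafka_2.10-0.10.2.1/config/zookeeper.properties
[program:kafka]
command=/usr/local/kafka_2.10-0.10.2.1/bin/kafka-server-start.sh /usr/local/kafka_2.10-0.10.2.1/config/server.properties
"""

parser = configparser.ConfigParser()
parser.read_string(SUPERVISORD_CONF)  # raises on malformed INI
# Collect the program names supervisord would manage
programs = [s.split(":", 1)[1] for s in parser.sections() if s.startswith("program:")]
print(programs)  # ['zookeeper', 'kafka']
```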

5. Build the Docker image

docker build -t my_kafka .
# output like this means the build succeeded
Successfully built 6a06d2cdd08b
Successfully tagged my_kafka:latest

Part 5: Check the built images

docker images
REPOSITORY            TAG                 IMAGE ID            CREATED          SIZE
my_kafka              latest              6a06d2cdd08b        54 seconds ago   353MB
my_logstash           latest              e6cc9b9b3bea        13 minutes ago   561MB
my_es                 latest              8da8d8fd39e9        44 minutes ago   401MB
my_kibana             latest              fa78f0961627        55 minutes ago   927MB
... your other images

Part 6: Start the containers

  • Start them in the order the images were built
docker run -d -p 9200:9200 -p 9300:9300 --name elasticsearch my_es # start ES
docker run -d -p 5601:5601 --name kibana my_kibana # start Kibana (it listens on 5601, so map 5601:5601)

Access

Open Kibana in a browser on port 5601.


Done.
