ELK on Docker

docker pull daocloud.io/library/elasticsearch:7.3.2
docker pull kibana:7.3.2
docker pull logstash:7.3.2

Create the elasticsearch container

docker run -d -p 9200:9200 -p 9300:9300 -e "ES_JAVA_OPTS=-Xms256m -Xmx256m" -e "discovery.type=single-node" --name elasticsearch daocloud.io/library/elasticsearch:7.3.2

ES_JAVA_OPTS caps the JVM heap so the container can run on a low-memory machine; discovery.type=single-node starts a single-node cluster.
Test: open host:9200 — a JSON response means it is working.
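A healthy node answers with a small JSON document roughly like the sketch below (field values such as the node name and cluster name will differ on your machine; this is an illustration, not exact output):

```json
{
  "name" : "0a1b2c3d4e5f",
  "cluster_name" : "docker-cluster",
  "version" : {
    "number" : "7.3.2"
  },
  "tagline" : "You Know, for Search"
}
```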

Configure cross-origin (CORS) access for ES

docker exec -it elasticsearch /bin/bash

cd /usr/share/elasticsearch/config
Edit the configuration file:
vi elasticsearch.yml
Add the CORS settings:
http.cors.enabled: true
http.cors.allow-origin: "*"
Then restart the container.
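The vi step above can also be scripted. A minimal sketch that appends the two CORS lines and verifies them, run here against a throwaway local copy of the file (inside the container you would run the same commands on /usr/share/elasticsearch/config/elasticsearch.yml):

```shell
# work on a throwaway copy of the config for illustration
cfg=$(mktemp -d)/elasticsearch.yml
touch "$cfg"

# append the two CORS settings, same effect as editing with vi
cat >> "$cfg" <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF

# confirm both settings are present
grep -c '^http.cors' "$cfg"
```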

Create the kibana container

Note: the kibana version must match the elasticsearch version.
docker run -d -it --name kibana -p 5601:5601 --link elasticsearch:elasticsearch kibana:7.3.2

Test: open host:5601 — if the page loads, it is working.
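The two containers above can also be declared in one file. A docker-compose sketch under the same assumptions (image tags and the 256 MB heap are taken from the commands above; this is an alternative to the two docker run commands, not an extra step):

```yaml
version: "3"
services:
  elasticsearch:
    image: daocloud.io/library/elasticsearch:7.3.2
    environment:
      - ES_JAVA_OPTS=-Xms256m -Xmx256m
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
  kibana:
    image: kibana:7.3.2
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```

With compose there is no need for --link: kibana reaches elasticsearch by its service name over the default network.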

Create the kafka and zookeeper containers

https://editor.csdn.net/md/?articleId=105464996

Create the logstash container

In your working directory, create a docker directory, and inside it a logstash directory to hold all the configuration.

logstash.yml (file contents)

path.config: /usr/share/logstash/conf.d/*.conf
path.logs: /var/log/logstash
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.hosts: http://elasticsearch:9200
xpack.monitoring.elasticsearch.username: elastic
xpack.monitoring.elasticsearch.password: changeme
xpack.monitoring.enabled: false
conf.d/test.conf (file contents)

input {
    kafka {
        bootstrap_servers => ["kafka:9092"]
        auto_offset_reset => "latest"
        consumer_threads => 5
        decorate_events => true
        topics => ["user-info"]
        type => "user-info"
    }

    kafka {
        bootstrap_servers => ["kafka:9092"]
        auto_offset_reset => "latest"
        consumer_threads => 5
        decorate_events => true
        topics => ["user-error"]
        type => "user-error"
    }
}

output {
    elasticsearch {
        hosts => ["elasticsearch:9200"]
        index => "%{[type]}log-%{+YYYY-MM-dd}"
    }
}
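The index name in the output block interpolates each event's type field plus the event date, so the two topics land in separate daily indices. A small shell sketch of what the resulting names look like (Logstash does the date math itself; date here only mimics the %{+YYYY-MM-dd} pattern):

```shell
# mimic the %{[type]}log-%{+YYYY-MM-dd} index pattern for the two topics
for type in user-info user-error; do
  index="${type}log-$(date +%Y-%m-%d)"
  echo "$index"
done
```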

docker run -it -d -p 5044:5044 --name logstash -v /home/cyh/docker/logstash/logstash.yml:/usr/share/logstash/config/logstash.yml -v /home/cyh/docker/logstash/conf.d/:/usr/share/logstash/conf.d/ --link elasticsearch:elasticsearch --link kafka:kafka logstash:7.3.2
