Deploying elasticsearch + logstash + kibana with docker

1 elasticsearch

1.1 Running the elasticsearch container

docker pull elasticsearch:6.5.4
docker run -d --name elasticsearch -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" elasticsearch:6.5.4
docker exec -it elasticsearch /bin/bash
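Once the container is running, elasticsearch answers on port 9200 with a JSON banner (curl http://sandbox:9200/). A small sketch of pulling the version number out of that banner with sed alone; the response below is a canned sample of what 6.x returns, since the live call depends on your host:

```shell
#!/bin/sh
# Canned sample of the banner returned by GET http://sandbox:9200/
# (assumption: field names as in elasticsearch 6.x; values vary per install).
response='{"name":"node-1","cluster_name":"docker-cluster","version":{"number":"6.5.4"}}'
# In a live setup you would instead do: response=$(curl -s http://sandbox:9200/)

# Pull out version.number without jq, using sed only.
version=$(printf '%s' "$response" | sed -n 's/.*"number":"\([^"]*\)".*/\1/p')
echo "elasticsearch version: $version"
```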

1.2 elasticsearch head

docker pull mobz/elasticsearch-head:5
docker create --name elasticsearch-head -p 9100:9100 mobz/elasticsearch-head:5
docker start elasticsearch-head


Enter the elasticsearch container and enable CORS so that elasticsearch-head can reach it:

docker exec -it elasticsearch /bin/bash
vi config/elasticsearch.yml
http.cors.enabled: true
http.cors.allow-origin: "*"

Restart the container so the change takes effect:

docker restart elasticsearch
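The same edit can be made non-interactively instead of with vi. A sketch; CONFIG defaults to a temp file for a dry run, inside the container it would be config/elasticsearch.yml:

```shell
#!/bin/sh
# Append the two CORS lines to the elasticsearch config.
# CONFIG defaults to a temp file so this can be dry-run anywhere.
CONFIG=${CONFIG:-$(mktemp)}

cat >> "$CONFIG" <<'EOF'
http.cors.enabled: true
http.cors.allow-origin: "*"
EOF

# Show what was appended.
grep '^http\.cors' "$CONFIG"
```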

elasticsearch-head UI: http://sandbox:9100/
elasticsearch:         http://sandbox:9200/

2 logstash

docker pull logstash:6.5.4
mkdir -p /usr/local/src/docker_logstash/logs
vi logstash.yml 
http.host: "0.0.0.0"
xpack.monitoring.elasticsearch.url: http://sandbox:9200

Add log4j2.properties, pipelines.yml, *.conf, and logstash.yml under /usr/local/src/docker_logstash.
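The layout above can be bootstrapped in one go. A sketch; BASE defaults to a temp dir for a dry run, set BASE=/usr/local/src/docker_logstash for the real layout:

```shell
#!/bin/sh
# Create the host-side config directory that is bind-mounted into the
# logstash container. Defaults to a temp dir so this can be dry-run anywhere.
BASE=${BASE:-$(mktemp -d)}

mkdir -p "$BASE/logs"
# Empty placeholders; their contents are filled in below.
touch "$BASE/log4j2.properties" "$BASE/pipelines.yml" "$BASE/logstash.yml"

ls "$BASE"
```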

cd /usr/local/src/docker_logstash

vi log4j2.properties
logger.elasticsearchoutput.name = logstash.outputs.elasticsearch
logger.elasticsearchoutput.level = debug
 

vi pipelines.yml
- pipeline.id: my-logstash
  path.config: "/usr/share/logstash/config/*.conf"
  pipeline.workers: 3


vi *.conf
# console input
input { stdin { } }
output {
  # print to the console
  stdout { codec => rubydebug }
  # write to elasticsearch
  elasticsearch {
    hosts => "sandbox:9200"
    codec => json
  }
  # write to a file
  file {
    path => "/usr/share/logstash/config/logs/all.log"   # target file path
    flush_interval => 0                                 # 0 means flush on every event
    codec => json
  }
}

docker run -d -it -p 5044:5044 -p 9600:9600 --name logstash -v /usr/local/src/docker_logstash:/usr/share/logstash/config logstash:6.5.4
docker exec -it logstash /bin/bash

If Logstash reports the following error:

Logstash could not be started because there is already another instance using
the configured data directory. If you wish to run multiple instances, you must
change the "path.data" setting.

a previous instance left a stale lock file in the data directory. Remove it and retry:

cd data
rm -rf .lock
docker exec -it logstash /bin/bash
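The fix boils down to deleting that one stale lock file. A self-contained illustration, with a temp dir standing in for the container's /usr/share/logstash/data:

```shell
#!/bin/sh
# Simulate a stale Logstash data directory holding a leftover .lock file.
DATA=$(mktemp -d)        # stands in for /usr/share/logstash/data
touch "$DATA/.lock"

# Deleting the lock lets the next Logstash instance claim the directory.
rm -f "$DATA/.lock"

[ -e "$DATA/.lock" ] && echo "lock still present" || echo "lock removed"
```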

2.1 Testing logstash

 bin/logstash -e 'input { stdin { } } output { stdout {} }'
vi test.conf
input { stdin { } } output { stdout {} }

After saving the file, start Logstash with it:
bin/logstash -f config/test.conf

Writing to elasticsearch

vi es.conf

# console input
input { stdin { } }
output {
  # print to the console
  stdout { codec => rubydebug }
  # write to elasticsearch
  elasticsearch {
    hosts => "sandbox:9200"
    codec => json
  }
  # write to a file
  file {
    path => "/usr/share/logstash/config/logs/es.log"   # target file path
    flush_interval => 0                                # 0 means flush on every event
    codec => json
  }
}

vi mysql-es.conf   # mysql -> elasticsearch

input {
  stdin {
  }
  jdbc {
    jdbc_connection_string => "jdbc:mysql://sandbox:3306/erp_test4"
    jdbc_user => "root"
    jdbc_password => "123456"
    jdbc_driver_library => "/usr/share/logstash/config/mysql-connector-java-5.1.27.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    statement => "SELECT * FROM nrd2_project"
    type => "project"
  }
}

filter {
  json {
    source => "message"
    remove_field => ["message"]
  }
}

output {
  elasticsearch {
    hosts => "sandbox:9200"
    index => "project"
    document_id => "%{id}"
  }
  stdout {
    codec => json_lines
  }
}
bin/logstash -f config/mysql-es.conf
# run in the background
nohup  bin/logstash -f config/mysql3.conf  >mysql3-es.txt 2>&1 &
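As written, the jdbc input runs its SELECT once and the pipeline then idles. If the sync should repeat, the jdbc input also supports a cron-style schedule and a tracking column for incremental pulls. A sketch under the assumption that id is the table's auto-increment primary key; adjust to your schema:

```
input {
  jdbc {
    jdbc_connection_string => "jdbc:mysql://sandbox:3306/erp_test4"
    jdbc_user => "root"
    jdbc_password => "123456"
    jdbc_driver_library => "/usr/share/logstash/config/mysql-connector-java-5.1.27.jar"
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    schedule => "* * * * *"            # cron syntax: run once a minute
    statement => "SELECT * FROM nrd2_project WHERE id > :sql_last_value"
    use_column_value => true           # track a column value instead of the run timestamp
    tracking_column => "id"            # :sql_last_value is taken from this column
    type => "project"
  }
}
```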

3 kibana

docker pull kibana:6.5.4
docker images
docker run --name kibana6.5.4 -e ELASTICSEARCH_URL=http://sandbox:9200 -p 5601:5601 -d kibana:6.5.4

http://sandbox:5601
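The three docker run commands in this post can also be collected into one Compose file. A sketch assuming the same images, ports, and bind mount; inside the Compose network the other services reach elasticsearch by its service name instead of sandbox:

```
version: "3"
services:
  elasticsearch:
    image: elasticsearch:6.5.4
    environment:
      - discovery.type=single-node
    ports:
      - "9200:9200"
      - "9300:9300"
  logstash:
    image: logstash:6.5.4
    volumes:
      - /usr/local/src/docker_logstash:/usr/share/logstash/config
    ports:
      - "5044:5044"
      - "9600:9600"
    depends_on:
      - elasticsearch
  kibana:
    image: kibana:6.5.4
    environment:
      - ELASTICSEARCH_URL=http://elasticsearch:9200
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
```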
