A first look at log management with filebeat + elasticsearch + kibana

1. Deploy elasticsearch in a container

docker run -d -p 9200:9200 -p 9300:9300 -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.4.2

Check:

curl http://127.0.0.1:9200/_cat/health
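
The `_cat/health` endpoint returns a single line whose fourth field is the cluster status. A minimal sketch of picking that field out with awk — the sample line below is assumed output, not captured from a real cluster:

```shell
# Sample _cat/health line (fields: epoch, timestamp, cluster, status, ...).
# The 4th field is the cluster status; a single-node setup is normally
# green or yellow, while red means primary shards are unassigned.
health_line="1540000000 12:00:00 docker-cluster green 1 1 0 0 0 0 0 0 - 100.0%"
status=$(echo "$health_line" | awk '{print $4}')
echo "$status"
```

In practice you would pipe the curl output into the same awk expression instead of using a hard-coded sample line.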

2. Deploy kibana in a container

docker run -d --name kibana -e ELASTICSEARCH_URL="http://your_ip:9200"  -p 5601:5601 docker.elastic.co/kibana/kibana:6.4.2

Check:
Open http://localhost:5601 in a browser.

3. Run filebeat from the binary package

Download and unpack the binary package:

curl -L -O https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.4.2-linux-x86_64.tar.gz
tar xzvf filebeat-6.4.2-linux-x86_64.tar.gz

Edit filebeat.yml

A few things to watch: `enabled` must be `true`, and the user running filebeat must have read access to the directories listed under `paths`.

- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/lib/docker/containers/*/*.log
    #- c:\programdata\elasticsearch\logs\*

# Elasticsearch output configuration
output.elasticsearch:
  # Array of hosts to connect to.
  hosts: ["localhost:9200"]

Filebeat checks the ownership of its config file at startup, so make it owned by root before launching with sudo, then start filebeat in the background:

sudo chown root filebeat.yml 
sudo nohup ./filebeat -e -c filebeat.yml -d "publish" &
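
If nothing shows up in Elasticsearch, one quick local check is filebeat's registry file, which records the read offset of every harvested file (in the 6.x tarball layout it lives at data/registry next to the binary). A small hedged helper:

```shell
# Report whether filebeat's registry file exists and is non-empty.
# "$1": path to the registry file; data/registry is the default
# location when filebeat is run from the unpacked tarball.
check_registry() {
  if [ -s "$1" ]; then
    echo "harvesting"          # registry file has content: harvesters started
  else
    echo "empty or missing"    # no offsets recorded: check paths/permissions
  fi
}

check_registry ./data/registry
```

A fresh registry may still contain only an empty JSON list, so treat this as a coarse signal, not proof that events reached Elasticsearch.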

During testing, Kibana kept failing to create an index pattern.
I eventually found this explanation:

but no index will be created in ES until you load data from a source (like Logstash or Beats) or until you create it using the API yourself.
You can check what indices you have in your ES by running a “GET _cat/indices” on localhost:9200 (or your ES host and port).
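
Following that advice, you can confirm whether filebeat has created an index by grepping the `_cat/indices` output for a `filebeat-*` name. A sketch against sample output (the index lines below are assumed, not real):

```shell
# Sample _cat/indices output; filebeat 6.x creates daily indices named
# filebeat-<version>-<date> once at least one event has been shipped.
indices="yellow open filebeat-6.4.2-2018.10.23 uuid1 3 1 1200 0 1mb 1mb
green  open .kibana uuid2 1 0 2 0 9kb 9kb"
if echo "$indices" | grep -q 'filebeat-'; then
  echo "filebeat index present"
else
  echo "no filebeat index yet"   # Kibana cannot create an index pattern yet
fi
```

Replace the hard-coded sample with `curl -s http://localhost:9200/_cat/indices` against your own cluster.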

So the cause is that filebeat never shipped any data: with the containerized deployment, it did not have permission to read the container log directory. Something to dig into later.
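
The permission theory is easy to verify: check whether the user running filebeat can actually read the log directory (/var/lib/docker/containers is typically owned by root with mode 0700, so a non-root filebeat cannot enter it). A minimal sketch:

```shell
# Print whether the current user can read and enter a log directory.
# "$1": directory to test; /var/lib/docker/containers is usually
# root-only, which is why filebeat must run as root to harvest it.
check_readable() {
  if [ -r "$1" ] && [ -x "$1" ]; then
    echo "readable"
  else
    echo "not readable"   # run filebeat as root, or adjust permissions
  fi
}

check_readable /var/lib/docker/containers
```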
