FileBeat + Logstash + Elasticsearch + Kibana Cluster Environment Setup

  1. The ELKB architecture diagram:
  2. [Figure: FileBeat + Logstash + Elasticsearch + Kibana cluster architecture]
  3. Log files are collected by the Filebeat component (Filebeat installation steps: Filebeat6.5.4安装及收集日志文件) and shipped to the Logstash component (Logstash installation steps: Logstash6.5.4安装配置) for data cleansing, then forwarded to the Elasticsearch cluster (cluster setup: ElasticSearch6.5.4集群搭建) for storage, and finally visualized and analyzed in Kibana (Kibana installation steps: Elasticsearch、Kibana6.5.4服务搭建及错误处理).
  4. The role of the coordinating node
  5. Load Balancing Across Multiple Elasticsearch Nodes
  6. How you deploy Kibana largely depends on your use case. If you are the only user, you can run Kibana on your local machine and configure it to point to whatever Elasticsearch instance you want to interact with. Conversely, if you have a large number of heavy Kibana users, you might need to load balance across multiple Kibana instances that are all connected to the same Elasticsearch instance.
  7. While Kibana isn’t terribly resource intensive, we still recommend running Kibana separate from your Elasticsearch data or master nodes. To distribute Kibana traffic across the nodes in your Elasticsearch cluster, you can run Kibana and an Elasticsearch client node on the same machine. For more information, see Load Balancing Across Multiple Elasticsearch Nodes.
  8. The passage quoted above explains this perfectly clearly!
  9. Since our environment did not use a coordinating node, this part was not tested hands-on. The Elasticsearch 6.5 coordinating-node configuration is listed below, based on the official documentation.
  10. Using Kibana in a production environment
  11. Step 1: Install Elasticsearch on the same machine as Kibana; the coordinating node is typically installed on the host where Kibana runs.
    
    # You want this node to be neither a master, data, nor ingest node, but
    # to act as a "search load balancer" (fetching data from nodes,
    # aggregating results, etc.)
    node.master: false
    node.data: false
    node.ingest: false
    
    # Configure the client (coordinating) node to join your Elasticsearch cluster:
    # in elasticsearch.yml, set cluster.name to the name of your cluster.
    cluster.name: "clusterName"
    
    # The coordinating node's bind address and port
    network.host: localhost
    http.port: 9200
    
    # Only needs to differ from the other node names in the cluster
    node.name: node-3
    
    discovery.zen.ping.unicast.hosts: ["192.168.184.133","192.168.184.135","192.168.184.136"]
    # Prevent split-brain: (number of master-eligible nodes / 2) + 1
    discovery.zen.minimum_master_nodes: 2
    
    # Finally, point Kibana at the coordinating node in kibana.yml:
    elasticsearch.url: "http://<coordinating-node-ip>:9200"
  12. This completes the ELKB cluster architecture setup.
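The Filebeat side of the pipeline described in step 3 can be sketched as a filebeat.yml fragment. The log paths and the Logstash host below are assumptions for illustration, not values from the linked installation posts:

```
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/*.log                  # assumed log file location

# Ship events to Logstash rather than directly to Elasticsearch,
# so Logstash can do the data cleansing described above
output.logstash:
  hosts: ["192.168.184.133:5044"]     # assumed Logstash host and port
```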
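Likewise, the Logstash stage (receive from Filebeat, cleanse, write to the Elasticsearch cluster) can be sketched as a minimal pipeline config. The port, hosts, and index name are assumptions:

```
input {
  beats {
    port => 5044                          # Filebeat ships events to this port
  }
}

filter {
  # Data cleansing goes here, e.g. grok / mutate / date filters
}

output {
  elasticsearch {
    hosts => ["192.168.184.133:9200"]     # any node of the cluster (or the coordinating node)
    index => "filebeat-%{+YYYY.MM.dd}"    # assumed daily index naming scheme
  }
}
```

With the coordinating node from step 11 running, `GET _cat/nodes?v` should show it with a `node.role` of `-`, confirming it holds no master, data, or ingest role.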

References:

Using Kibana in a production environment

Elasticsearch 5.X集群多节点角色配置深入详解

Elasticsearch5.2.1集群搭建,动态加入节点,并添加监控诊断插件

 
