This document describes how to deploy the ELK stack (Elasticsearch, Logstash, Kibana), Filebeat, Zookeeper, and Kafka across seven machines, for log collection, processing, and visualization.
| # | Hostname | IP Address  | Components                      |
|---|----------|-------------|---------------------------------|
| 1 | node1    | 192.168.1.1 | Elasticsearch, Zookeeper, Kafka |
| 2 | node2    | 192.168.1.2 | Elasticsearch, Zookeeper, Kafka |
| 3 | node3    | 192.168.1.3 | Elasticsearch, Zookeeper, Kafka |
| 4 | node4    | 192.168.1.4 | Logstash, Kibana                |
| 5 | node5    | 192.168.1.5 | Logstash, Kibana                |
| 6 | node6    | 192.168.1.6 | Filebeat                        |
| 7 | node7    | 192.168.1.7 | Filebeat                        |
Install JDK 11 on all machines:
```bash
sudo yum install java-11-openjdk-devel # CentOS
sudo apt-get install openjdk-11-jdk # Ubuntu
```
Verify the installation:
```bash
java -version
```
1. Download and extract Zookeeper (on node1, node2, node3):
```bash
wget https://downloads.apache.org/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
tar -xzf apache-zookeeper-3.6.2-bin.tar.gz
mv apache-zookeeper-3.6.2-bin /opt/zookeeper
```
2. Configure Zookeeper by editing `/opt/zookeeper/conf/zoo.cfg`:
```ini
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888
```
In the `dataDir` directory, create a `myid` file containing 1, 2, and 3 on node1, node2, and node3 respectively.
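The `myid` files can be created with a short snippet on each node, assuming (as in the table above) that the hostnames are exactly `node1`, `node2`, `node3`, so the numeric suffix can double as the server id:

```shell
# Derive the Zookeeper server id from the hostname's numeric suffix
# (node1 -> 1, node2 -> 2, node3 -> 3); it must match the server.N
# entries in zoo.cfg.
sudo mkdir -p /var/lib/zookeeper
id=$(hostname | grep -o '[0-9]*$')
echo "$id" | sudo tee /var/lib/zookeeper/myid
```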
3. Start Zookeeper (on all three nodes):
```bash
/opt/zookeeper/bin/zkServer.sh start
```
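Once all three servers are started, the ensemble can be checked with `zkServer.sh status`. A sketch that polls every node from one host, assuming passwordless SSH to node1–node3:

```shell
# Exactly one server should report "leader"; the other two "follower".
leaders=0
for h in node1 node2 node3; do
  mode=$(ssh "$h" /opt/zookeeper/bin/zkServer.sh status 2>&1 | grep -o 'leader\|follower' | head -1)
  echo "$h: $mode"
  if [ "$mode" = "leader" ]; then leaders=$((leaders + 1)); fi
done
echo "leaders: $leaders"   # a healthy ensemble has exactly 1
```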
1. Download and extract Kafka (on node1, node2, node3):
```bash
wget https://downloads.apache.org/kafka/2.7.0/kafka_2.13-2.7.0.tgz
tar -xzf kafka_2.13-2.7.0.tgz
mv kafka_2.13-2.7.0 /opt/kafka
```
2. Configure Kafka by editing `/opt/kafka/config/server.properties`:
```properties
broker.id=1  # set to 2 and 3 on node2 and node3
listeners=PLAINTEXT://node1:9092  # use node2/node3 as the hostname on those machines
zookeeper.connect=node1:2181,node2:2181,node3:2181
```
3. Start Kafka (the `-daemon` flag runs the broker in the background):
```bash
/opt/kafka/bin/kafka-server-start.sh -daemon /opt/kafka/config/server.properties
```
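Both Logstash and Filebeat later in this guide read and write a topic named `logs`. Kafka can auto-create it on first use, but creating it explicitly controls the partition and replica counts (the counts below are choices for this 3-broker layout, not requirements):

```shell
# Create the "logs" topic, spread over 3 partitions and replicated
# to all 3 brokers, then list topics to confirm it exists.
TOPIC=logs
/opt/kafka/bin/kafka-topics.sh --create --bootstrap-server node1:9092 \
  --topic "$TOPIC" --partitions 3 --replication-factor 3
/opt/kafka/bin/kafka-topics.sh --list --bootstrap-server node1:9092
```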
1. Download and extract Elasticsearch (on node1, node2, node3):
```bash
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.10.0-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.10.0-linux-x86_64.tar.gz
mv elasticsearch-7.10.0 /opt/elasticsearch
```
2. Configure Elasticsearch by editing `/opt/elasticsearch/config/elasticsearch.yml`:
```yaml
cluster.name: my-cluster
node.name: node1 # set to node2 and node3 on those hosts
network.host: 0.0.0.0
discovery.seed_hosts: ["node1", "node2", "node3"]
cluster.initial_master_nodes: ["node1", "node2", "node3"]
```
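Before the first start, note that Elasticsearch's bootstrap checks require `vm.max_map_count` to be at least 262144 on each node; the default on most distributions is lower:

```shell
# Raise the mmap count limit required by Elasticsearch's bootstrap checks.
sudo sysctl -w vm.max_map_count=262144
# Persist the setting across reboots:
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
```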
3. Start Elasticsearch. It refuses to run as root, so start it as a dedicated unprivileged user that owns `/opt/elasticsearch` (the `-d` flag daemonizes it):
```bash
/opt/elasticsearch/bin/elasticsearch -d
```
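After all three nodes are started, the cluster health API should report three nodes and, once replicas are assigned, a green status:

```shell
# Query cluster health from any node and pull out the node count;
# expect "number_of_nodes":3 and "status":"green".
health=$(curl -s http://node1:9200/_cluster/health)
echo "$health"
echo "$health" | grep -o '"number_of_nodes":[0-9]*'
```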
1. Download and extract Logstash (on node4, node5):
```bash
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.10.0-linux-x86_64.tar.gz
tar -xzf logstash-7.10.0-linux-x86_64.tar.gz
mv logstash-7.10.0 /opt/logstash
```
2. Configure Logstash by creating `/opt/logstash/config/logstash.conf` (this is Logstash's own pipeline syntax, not YAML):
```conf
input {
  kafka {
    bootstrap_servers => "node1:9092,node2:9092,node3:9092"
    topics => ["logs"]
  }
}
output {
  elasticsearch {
    hosts => ["node1:9200", "node2:9200", "node3:9200"]
    index => "logs-%{+YYYY.MM.dd}"
  }
}
```
3. Start Logstash:
```bash
/opt/logstash/bin/logstash -f /opt/logstash/config/logstash.conf &
```
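The Kafka → Logstash → Elasticsearch path can be smoke-tested by publishing a message straight to the `logs` topic with Kafka's console producer, then searching the daily index (index name pattern taken from the Logstash output above):

```shell
# Publish one test event to the logs topic.
echo 'pipeline-smoke-test' | /opt/kafka/bin/kafka-console-producer.sh \
  --bootstrap-server node1:9092 --topic logs

# Give Logstash a few seconds, then search today's index for the event.
today=$(date +%Y.%m.%d)
curl -s "http://node1:9200/logs-${today}/_search?q=pipeline-smoke-test&pretty"
```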
1. Download and extract Kibana (on node4, node5):
```bash
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.10.0-linux-x86_64.tar.gz
tar -xzf kibana-7.10.0-linux-x86_64.tar.gz
mv kibana-7.10.0-linux-x86_64 /opt/kibana
```
2. Configure Kibana by editing `/opt/kibana/config/kibana.yml`:
```yaml
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://node1:9200", "http://node2:9200", "http://node3:9200"]
```
3. Start Kibana:
```bash
/opt/kibana/bin/kibana &
```
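Kibana takes a minute or so to start; a simple readiness loop can poll its status API until it answers:

```shell
# Poll Kibana's status endpoint until it returns HTTP 200.
until [ "$(curl -s -o /dev/null -w '%{http_code}' http://node4:5601/api/status)" = "200" ]; do
  sleep 5
done
echo "Kibana is up"
```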
1. Download and extract Filebeat (on node6, node7):
```bash
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.10.0-linux-x86_64.tar.gz
tar -xzf filebeat-7.10.0-linux-x86_64.tar.gz
mv filebeat-7.10.0-linux-x86_64 /opt/filebeat
```
2. Configure Filebeat by editing `/opt/filebeat/filebeat.yml` (the `paths` glob below is an example; point it at your application's logs):
```yaml
filebeat.inputs:
  - type: log
    paths:
      - /var/log/*.log  # example path; adjust to your logs
output.kafka:
  hosts: ["node1:9092", "node2:9092", "node3:9092"]
  topic: "logs"
```
3. Start Filebeat:
```bash
/opt/filebeat/filebeat -e -c /opt/filebeat/filebeat.yml &
```
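Filebeat ships `test` subcommands that validate the configuration file and the connection to the configured Kafka output without shipping any data:

```shell
# Validate the YAML configuration:
/opt/filebeat/filebeat test config -c /opt/filebeat/filebeat.yml
# Verify Filebeat can reach the Kafka brokers listed in output.kafka:
/opt/filebeat/filebeat test output -c /opt/filebeat/filebeat.yml
```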
## 4. Verify the Deployment
1. Access Kibana at `http://node4:5601` or `http://node5:5601`, then create an index pattern for `logs-*` to browse the collected logs.
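To confirm events are flowing end to end, it also helps to check from the Elasticsearch side that the daily index exists and its document count is growing (index name pattern from the Logstash output configuration):

```shell
# List today's logs index with health, document count, and size.
today=$(date +%Y.%m.%d)
curl -s "http://node1:9200/_cat/indices/logs-${today}?v"
```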