ELK Stack 7.3.0: Building a Multi-System, Multi-User Secure-Authentication Log Platform (Part 2)

Setting up an Elasticsearch cluster on Linux (CentOS 7), using three machines:
192.168.137.55
192.168.137.56
192.168.137.57

1. Install Elasticsearch on 192.168.137.55. Change into the installation directory /usr/local/elkstack and download the release archive:

cd /usr/local/elkstack
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-linux-x86_64.tar.gz

2. Extract the downloaded archive:

tar -zxvf elasticsearch-7.3.0-linux-x86_64.tar.gz
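To guard against a corrupted download, you can optionally verify the archive against the SHA-512 checksum that Elastic publishes alongside each release (a sketch; run it in the same directory as the downloaded archive):

```shell
# Download the published checksum file and verify the archive against it;
# on success sha512sum reports "elasticsearch-7.3.0-linux-x86_64.tar.gz: OK"
curl -L -O https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.3.0-linux-x86_64.tar.gz.sha512
sha512sum -c elasticsearch-7.3.0-linux-x86_64.tar.gz.sha512
```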

3. Because Elasticsearch can accept and execute user-supplied scripts, it should not be started as the root user for security reasons. Create a dedicated user, elkstack.

Create the user:
adduser elkstack
Set a password for the new user (enter it when prompted):
passwd elkstack

4. Once the elkstack user exists, give it ownership of the installation:

chown -R elkstack /usr/local/elkstack/elasticsearch-7.3.0
Then switch to the new user:
su elkstack

5. elasticsearch-7.3.0/config/elasticsearch.yml on 192.168.137.55:

cluster.name: my-elkcluster #cluster name
node.name: esmaster #name of this node
path.data: /usr/local/elkstack/elasticsearch-7.3.0/data #Elasticsearch data directory
path.logs: /usr/local/elkstack/elasticsearch-7.3.0/logs #Elasticsearch log directory
network.host: 192.168.137.55
http.port: 9200
discovery.seed_hosts: ["192.168.137.55", "192.168.137.56", "192.168.137.57"] #hosts of the cluster nodes
cluster.initial_master_nodes: ["esmaster","esnode1","esnode2"] #nodes eligible for master election when the cluster first forms

6. Copy the entire elasticsearch-7.3.0 directory from 192.168.137.55 to the other two machines, 192.168.137.56 and 192.168.137.57. Both of those machines also need the elkstack user, created with the same commands as above, and the copied directory should be chowned to elkstack on each of them as in step 4.

scp -r elasticsearch-7.3.0 [email protected]:/usr/local/elkstack/
scp -r elasticsearch-7.3.0 [email protected]:/usr/local/elkstack/

7. elasticsearch-7.3.0/config/elasticsearch.yml on 192.168.137.56:

cluster.name: my-elkcluster #cluster name
node.name: esnode1 #name of this node
path.data: /usr/local/elkstack/elasticsearch-7.3.0/data #Elasticsearch data directory
path.logs: /usr/local/elkstack/elasticsearch-7.3.0/logs #Elasticsearch log directory
network.host: 192.168.137.56
http.port: 9200
discovery.seed_hosts: ["192.168.137.55", "192.168.137.56", "192.168.137.57"] #hosts of the cluster nodes
cluster.initial_master_nodes: ["esmaster","esnode1","esnode2"] #nodes eligible for master election when the cluster first forms

8. elasticsearch-7.3.0/config/elasticsearch.yml on 192.168.137.57:

cluster.name: my-elkcluster #cluster name
node.name: esnode2 #name of this node
path.data: /usr/local/elkstack/elasticsearch-7.3.0/data #Elasticsearch data directory
path.logs: /usr/local/elkstack/elasticsearch-7.3.0/logs #Elasticsearch log directory
network.host: 192.168.137.57
http.port: 9200
discovery.seed_hosts: ["192.168.137.55", "192.168.137.56", "192.168.137.57"] #hosts of the cluster nodes
cluster.initial_master_nodes: ["esmaster","esnode1","esnode2"] #nodes eligible for master election when the cluster first forms

9. With all configuration files in place, start Elasticsearch as the elkstack user, taking 192.168.137.55 as the example:

cd /usr/local/elkstack/elasticsearch-7.3.0
bin/elasticsearch
Startup fails with:
ERROR: [3] bootstrap checks failed
[1]: max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]
[2]: max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
To fix errors [1] and [2] above:
1. Switch to the root user and edit /etc/security/limits.conf:
vi /etc/security/limits.conf
Add the following lines:
* soft nofile 65536
* hard nofile 65536
2. Edit /etc/sysctl.conf and add:
vm.max_map_count=327680
Run the following command to apply the change:
sysctl -p
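Note that limits.conf only applies to new login sessions, so log out and back in as elkstack before retrying. A quick sanity check from the fresh session:

```shell
# Run as the elkstack user in a new login session
ulimit -n                        # should now print 65536
cat /proc/sys/vm/max_map_count   # should now print 327680
```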

10. Apply the same two fixes on the other two machines, then start them as the elkstack user as well. Before the other two nodes come up, the first node logs that a master cannot be elected until at least one more node joins:

[2019-08-24T13:39:52,876][WARN ][o.e.c.c.ClusterFormationFailureHelper] [esmaster] master not discovered or elected yet, an election requires at least 2 nodes with ids from [OiSZkFloS8mMgRP6DtO8aA, czgBLnXURtWshyiQX42ZIg, Grk_jw0BTFas4MuZ2hZzXg], have discovered [{esmaster}{Grk_jw0BTFas4MuZ2hZzXg}{k2Tr8AHKQfmmbG_0tn3C4w}{192.168.137.55}{192.168.137.55:9300}{dim}{ml.machine_memory=1929015296, xpack.installed=true, ml.max_open_jobs=20}] which is not a quorum; discovery will continue using [192.168.137.56:9300, 192.168.137.57:9300] from hosts providers and [{esmaster}{Grk_jw0BTFas4MuZ2hZzXg}{k2Tr8AHKQfmmbG_0tn3C4w}{192.168.137.55}{192.168.137.55:9300}{dim}{ml.machine_memory=1929015296, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 43, last-accepted version 267 in term 43

After the second node, 192.168.137.56, is started, nodes 55 and 56 elect 56 (esnode1) as master:

[2019-08-24T13:49:15,691][INFO ][o.e.c.s.ClusterApplierService] [esmaster] master node changed {previous [], current [{esnode1}{czgBLnXURtWshyiQX42ZIg}{NeaIegxNRxKfmBLmt8vY5g}{192.168.137.56}{192.168.137.56:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true}]}, removed {{esnode1}{czgBLnXURtWshyiQX42ZIg}{5AJn867GR02ZCErmScPXBg}{192.168.137.56}{192.168.137.56:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true},}, added {{esnode1}{czgBLnXURtWshyiQX42ZIg}{NeaIegxNRxKfmBLmt8vY5g}{192.168.137.56}{192.168.137.56:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true},}, term: 48, version: 275, reason: ApplyCommitRequest{term=48, version=275, sourceNode={esnode1}{czgBLnXURtWshyiQX42ZIg}{NeaIegxNRxKfmBLmt8vY5g}{192.168.137.56}{192.168.137.56:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true}}

Start the third node, 57. Since 56 has already been elected master, 57 joins the cluster as an ordinary node.
Log on node 56:

[2019-08-24T13:58:44,138][INFO ][o.e.c.s.MasterService ] [esnode1] node-join[{esnode2}{OiSZkFloS8mMgRP6DtO8aA}{zxdczh91Q62QvlY9N5UPSg}{192.168.137.57}{192.168.137.57:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true} join existing leader], term: 48, version: 288, reason: added {{esnode2}{OiSZkFloS8mMgRP6DtO8aA}{zxdczh91Q62QvlY9N5UPSg}{192.168.137.57}{192.168.137.57:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true},}

Log on node 55:

[2019-08-24T13:59:55,047][INFO ][o.e.c.s.ClusterApplierService] [esmaster] added {{esnode2}{OiSZkFloS8mMgRP6DtO8aA}{zxdczh91Q62QvlY9N5UPSg}{192.168.137.57}{192.168.137.57:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true},}, term: 48, version: 288, reason: ApplyCommitRequest{term=48, version=288, sourceNode={esnode1}{czgBLnXURtWshyiQX42ZIg}{NeaIegxNRxKfmBLmt8vY5g}{192.168.137.56}{192.168.137.56:9300}{dim}{ml.machine_memory=1929015296, ml.max_open_jobs=20, xpack.installed=true}}

11. Check that Elasticsearch started correctly on each machine by opening the node URL in a browser (or fetching it with curl):
http://192.168.137.55:9200

{
  "name" : "esmaster",
  "cluster_name" : "my-elkcluster",
  "cluster_uuid" : "1gvR7wetT4yMyAfo_gKL4A",
  "version" : {
    "number" : "7.3.0",
    "build_flavor" : "default",
    "build_type" : "tar",
    "build_hash" : "de777fa",
    "build_date" : "2019-07-24T18:30:11.767338Z",
    "build_snapshot" : false,
    "lucene_version" : "8.1.0",
    "minimum_wire_compatibility_version" : "6.8.0",
    "minimum_index_compatibility_version" : "6.0.0-beta1"
  },
  "tagline" : "You Know, for Search"
}
A response like the one above means Elasticsearch started successfully.

12. Check the cluster health: http://192.168.137.55:9200/_cluster/health?pretty

{
  "cluster_name" : "my-elkcluster",
  "status" : "green",
  "timed_out" : false,
  "number_of_nodes" : 3,
  "number_of_data_nodes" : 3,
  "active_primary_shards" : 8,
  "active_shards" : 16,
  "relocating_shards" : 0,
  "initializing_shards" : 0,
  "unassigned_shards" : 0,
  "delayed_unassigned_shards" : 0,
  "number_of_pending_tasks" : 0,
  "number_of_in_flight_fetch" : 0,
  "task_max_waiting_in_queue_millis" : 0,
  "active_shards_percent_as_number" : 100.0
}
The cluster status is green (healthy), with 3 nodes in total, all 3 of them data nodes.

The status field indicates whether the cluster as a whole is working properly. Its three colors mean:

green
  All primary and replica shards are allocated and running.
yellow
  All primary shards are running, but not all replica shards are.
red
  At least one primary shard is not running.
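The color check above is easy to script. The sketch below pulls the status field out of a _cluster/health response and maps it to a message; status_from_health is a hypothetical helper, and in practice you would feed it the output of curl -s http://192.168.137.55:9200/_cluster/health rather than the hard-coded sample:

```shell
#!/bin/sh
# Extract the "status" field from a _cluster/health JSON body (no jq needed)
status_from_health() {
  printf '%s' "$1" | sed -n 's/.*"status"[ :]*"\([a-z]*\)".*/\1/p'
}

# Sample response like the one in step 12; in practice:
#   health_json=$(curl -s http://192.168.137.55:9200/_cluster/health)
health_json='{"cluster_name":"my-elkcluster","status":"green","number_of_nodes":3}'

case "$(status_from_health "$health_json")" in
  green)  echo "cluster healthy" ;;
  yellow) echo "WARNING: some replica shards are not allocated" ;;
  red)    echo "CRITICAL: at least one primary shard is not allocated" ;;
  *)      echo "CRITICAL: could not read cluster status" ;;
esac
```

With the sample JSON above this prints "cluster healthy". You can also run curl 'http://192.168.137.55:9200/_cat/nodes?v' to list all nodes; the elected master is marked with an asterisk in the master column.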

If you run into problems while setting up the cluster, feel free to leave a comment so we can work through them together.
