Since version 7.0, Elasticsearch ships with a bundled Java runtime, so you no longer need to install Java separately.
Download the Elastic Stack: https://www.elastic.co/cn/products/elastic-stack
Directory | File / setting | Description
---|---|---
bin | | Script files for starting Elasticsearch, installing plugins, running statistics, etc.
config | elasticsearch.yml | Cluster configuration, including user and role-based security settings
jdk | | Bundled Java runtime
data | path.data | Data files
lib | | Java class libraries
logs | path.logs | Log files
modules | | All Elasticsearch modules
plugins | | All installed plugins
Go into the bin directory and run elasticsearch.
Open a browser and visit http://localhost:9200/ ; if a JSON response comes back, Elasticsearch is running.
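The same check from the command line (assuming curl is available):
```
# A running node answers with its name, cluster name, and version info
curl http://localhost:9200/
```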
Plugin management: list all installed plugins, or install a plugin by name (see the commands below).
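The corresponding elasticsearch-plugin commands, using analysis-icu as an example plugin name:
```
# List installed plugins
bin/elasticsearch-plugin list
# Install a plugin by name (analysis-icu as an example)
bin/elasticsearch-plugin install analysis-icu
```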
Start a multi-node cluster on a single machine. On Linux/macOS:
```
bin/elasticsearch -E node.name=node1 -E cluster.name=geektime -E path.data=node1_data -d
bin/elasticsearch -E node.name=node2 -E cluster.name=geektime -E path.data=node2_data -d
bin/elasticsearch -E node.name=node3 -E cluster.name=geektime -E path.data=node3_data -d
bin/elasticsearch -E node.name=node4 -E cluster.name=geektime -E path.data=node4_data -d
```
On Windows (from the bin directory):
```
.\elasticsearch -E node.name=node1 -E cluster.name=geektime -E path.data=node1_data -d
.\elasticsearch -E node.name=node2 -E cluster.name=geektime -E path.data=node2_data -d
.\elasticsearch -E node.name=node3 -E cluster.name=geektime -E path.data=node3_data -d
.\elasticsearch -E node.name=node4 -E cluster.name=geektime -E path.data=node4_data -d
```
Parameter notes
-E: sets a configuration option on the command line
node.name: the node's name
cluster.name: the cluster name; nodes with the same cluster.name join the same cluster
path.data: a separate data directory for each node
-d: run as a daemon in the background; when several nodes run on one machine, each automatically binds the next free HTTP port starting from 9200
To stop the nodes, find the Elasticsearch process IDs and kill them:
```
ps aux | grep elasticsearch
kill <pid>
```
On Windows, set the console code page with chcp 936 (see https://blog.csdn.net/u014078154/article/details/79199215).
Check the nodes in the cluster: http://localhost:9200/_cat/nodes
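The same check with curl; the v parameter is a standard _cat option that adds column headers:
```
curl "http://localhost:9200/_cat/nodes?v"
```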
Download Kibana (same Elastic Stack page): https://www.elastic.co/cn/products/elastic-stack
```
# Start Kibana
bin/kibana
# List installed plugins
bin/kibana-plugin list
# Install a plugin
bin/kibana-plugin install <plugin-name>
# Remove a plugin
bin/kibana-plugin remove <plugin-name>
```
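Once Kibana is up, its status endpoint offers a quick health check (a standard Kibana API):
```
# Reports Kibana's overall state and the state of each plugin
curl http://localhost:5601/api/status
```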
Installing Docker: https://www.xttblog.com/?p=4402
Installing docker-compose: https://www.xttblog.com/?p=4404
Installing Elasticsearch in Docker: https://www.xttblog.com/?p=4408
Place the docker-compose.yml file in your current working directory (that is, running ls should show docker-compose.yml), then run docker-compose up.
docker-compose.yml
```yaml
version: '2.2'
services:
  # cerebro: a web admin UI for Elasticsearch
  cerebro:
    image: lmenezes/cerebro:0.8.3
    container_name: cerebro
    ports:
      - "9000:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es72net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.2.0
    container_name: kibana72
    environment:
      #- I18N_LOCALE=zh-CN
      - XPACK_GRAPH_ENABLED=true
      - TIMELION_ENABLED=true
      - XPACK_MONITORING_COLLECTION_ENABLED=true
    ports:
      - "5601:5601"
    networks:
      - es72net
  # A two-node Elasticsearch cluster: es72_01 and es72_02
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    container_name: es72_01
    environment:
      - cluster.name=geektime
      - node.name=es72_01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es72_01,es72_02
      - network.publish_host=elasticsearch
      - cluster.initial_master_nodes=es72_01,es72_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es72data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - es72net
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.2.0
    container_name: es72_02
    environment:
      - cluster.name=geektime
      - node.name=es72_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es72_01,es72_02
      - network.publish_host=elasticsearch
      - cluster.initial_master_nodes=es72_01,es72_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es72data2:/usr/share/elasticsearch/data
    networks:
      - es72net
volumes:
  es72data1:
    driver: local
  es72data2:
    driver: local
networks:
  es72net:
    driver: bridge
```
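After docker-compose up, the two Elasticsearch containers should form a single cluster; a quick way to verify:
```
# Show the compose services and their state
docker-compose ps
# The cluster should list two nodes, es72_01 and es72_02
curl "http://localhost:9200/_cat/nodes?v"
```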
Access Kibana: http://localhost:5601/
Access cerebro: http://localhost:9000/
Note: users of Docker Toolbox on Windows need to run docker-machine ip default first to find the docker-machine's IP (192.168.99.100 by default), then use these addresses instead:
Kibana: http://192.168.99.100:5601/
cerebro: http://192.168.99.100:9000/
```
# Start the containers
docker-compose up
# Stop the containers
docker-compose down
# Stop the containers and remove the data volumes
docker-compose down -v

# Some common docker commands
docker ps
docker stop <Name/ContainerId>
docker start <Name/ContainerId>

# Remove a single container
docker rm <Name/ID>
#   -f, --force    remove a running container
#   -l, --link     remove the specified link, not the underlying container
#   -v, --volumes  remove the volumes associated with the container

# Remove all containers
docker rm `docker ps -a -q`

# Stop, start, kill, or restart a container
docker stop <Name/ID>
docker start <Name/ID>
docker kill <Name/ID>
docker restart <Name/ID>
```
docker-compose manages deployments made up of multiple containers and images.
Logstash is an open-source, server-side data processing pipeline for fast ingest and transformation of data.
Download Logstash: https://www.elastic.co/cn/downloads/logstash
Sample dataset (MovieLens): https://grouplens.org/datasets/movielens/
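For reference, movies.csv in the MovieLens "small" dataset starts like this (header plus first data row; the pipeline below renames the columns to id, content, and genre):
```
movieId,title,genres
1,Toy Story (1995),Adventure|Animation|Children|Comedy|Fantasy
```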
logstash.conf:
```
input {
  file {
    path => ["D:/Elasticsearch/ml-latest-small/movies.csv"]
    # Read the file from the start rather than tailing it
    start_position => "beginning"
    # "nul" is the Windows null device; use "/dev/null" on Linux/macOS
    sincedb_path => "nul"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }
  mutate {
    # "Adventure|Animation|..." becomes an array of genres
    split => { "genre" => "|" }
    remove_field => ["path", "host", "@timestamp", "message"]
  }
  mutate {
    # "Toy Story (1995)" splits into a title part and a year part
    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}" }
    add_field => { "year" => "%{[content][1]}" }
  }
  mutate {
    convert => {
      "year" => "integer"
    }
    strip => ["title"]
    remove_field => ["path", "host", "@timestamp", "message", "content"]
  }
}
output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "movies"
    document_id => "%{id}"
  }
  stdout {}
}
```
Key parameter
The sincedb_path setting points at the file where Logstash records how far it has read into each input file. Setting it to the operating system's null device ("/dev/null" on Linux, "nul" on Windows, as in the config above) means that whenever Logstash restarts and tries to read the sincedb contents, it only ever sees empty content, concludes there is no previous run record, and starts reading the input from the beginning again.
Run the pipeline:
```
logstash -f logstash.conf
```
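Once the import finishes, the movies index can be verified with the standard _cat and _count APIs:
```
# Index health, document count, and size on disk
curl "http://localhost:9200/_cat/indices/movies?v"
# Number of indexed movie documents
curl "http://localhost:9200/movies/_count"
```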