Presto Deployment and Installation

1. Extract the installation package to the target directory

I usually deploy software under /opt/module:

tar -zxvf presto-server-0.196.tar.gz -C /opt/module/

2. Create the data directory

mkdir data
Path:
/opt/module/presto/data

3. Create the etc directory for configuration files

mkdir etc
Path:
/opt/module/presto/etc

4. Add a jvm.config file under the etc directory

-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError

5. Presto supports multiple data sources, which it calls catalogs. Here we configure a catalog for Hive.

mkdir catalog
Path:
/opt/module/presto/etc/catalog
Create the Hive configuration file:
vim hive.properties
Add the following:
connector.name=hive-hadoop2
# remote Hive Metastore address
hive.metastore.uri=thrift://hadoop11:9083

Distribute the configuration to the other nodes (xsync is a custom cluster distribution script):

xsync presto
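If you don't have an xsync script, a minimal stand-in built on rsync might look like the sketch below. The worker hostnames hadoop12/hadoop13 come from this guide; the RSYNC variable is overridable so the loop can be dry-run without a cluster.

```shell
# Hypothetical stand-in for the custom xsync script: push the presto
# directory to each worker with rsync. RSYNC is overridable for dry runs.
RSYNC=${RSYNC:-"rsync -av"}

sync_presto() {
    for host in hadoop12 hadoop13; do
        # Trailing slash: copy the directory contents, not the directory itself
        $RSYNC /opt/module/presto/ "$host:/opt/module/presto/"
    done
}
```

In a real cluster this relies on passwordless SSH between the nodes.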

6. Configure node properties; node.id must be unique on every node

In the etc directory on each node, configure the node information:
hadoop11: vim node.properties
	node.environment=production
	node.id=ffffffff-ffff-ffff-ffff-ffffffffffff
	node.data-dir=/opt/module/presto/data
hadoop12: vim node.properties
	node.environment=production
	node.id=ffffffff-ffff-ffff-ffff-fffffffffffe
	node.data-dir=/opt/module/presto/data
hadoop13: vim node.properties
	node.environment=production
	node.id=ffffffff-ffff-ffff-ffff-fffffffffffd
	node.data-dir=/opt/module/presto/data
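Since node.id must differ on every host, one way to avoid editing each file by hand is to generate it from a random UUID. A sketch, assuming a Linux host (it reads /proc/sys/kernel/random/uuid); PRESTO_ETC defaults to a local directory here so you can try it anywhere, but would be /opt/module/presto/etc on the cluster hosts.

```shell
# Sketch: write node.properties with a freshly generated, unique node.id.
# PRESTO_ETC would be /opt/module/presto/etc on the cluster hosts.
PRESTO_ETC=${PRESTO_ETC:-./etc}
mkdir -p "$PRESTO_ETC"

# /proc/sys/kernel/random/uuid yields a random UUID on Linux
NODE_ID=$(cat /proc/sys/kernel/random/uuid)

cat > "$PRESTO_ETC/node.properties" <<EOF
node.environment=production
node.id=$NODE_ID
node.data-dir=/opt/module/presto/data
EOF
```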

7. A Presto cluster consists of one coordinator node and multiple worker nodes. Configure hadoop11 as the coordinator and hadoop12/hadoop13 as workers. Note that discovery.uri on every node, workers included, points at the coordinator (hadoop11).

In the etc directory on hadoop11, configure the coordinator: vim config.properties
	coordinator=true
	node-scheduler.include-coordinator=false
	http-server.http.port=8881
	query.max-memory=50GB
	discovery-server.enabled=true
	discovery.uri=http://hadoop11:8881
In the etc directory on hadoop12, configure a worker: vim config.properties
	coordinator=false
	http-server.http.port=8881
	query.max-memory=50GB
	discovery.uri=http://hadoop11:8881
In the etc directory on hadoop13, configure a worker: vim config.properties
	coordinator=false
	http-server.http.port=8881
	query.max-memory=50GB
	discovery.uri=http://hadoop11:8881
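The two layouts differ only in the coordinator-specific keys, so a small helper can write either variant. A sketch under this guide's assumptions (port 8881, coordinator hadoop11); the function name is illustrative.

```shell
# Sketch: emit config.properties for a given role. The coordinator URI
# (hadoop11:8881) is shared by every node's discovery.uri.
write_presto_config() {   # usage: write_presto_config coordinator|worker <outfile>
    local role=$1 out=$2
    if [ "$role" = coordinator ]; then
        cat > "$out" <<EOF
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8881
query.max-memory=50GB
discovery-server.enabled=true
discovery.uri=http://hadoop11:8881
EOF
    else
        cat > "$out" <<EOF
coordinator=false
http-server.http.port=8881
query.max-memory=50GB
discovery.uri=http://hadoop11:8881
EOF
    fi
}
```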

8. Start the Hive Metastore

nohup bin/hive --service metastore >/dev/null 2>&1 &

9. Start Presto on every node (in the background)

bin/launcher start
Logs are written to:
/opt/module/presto/data/var/log
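Once the launchers are up, one way to confirm that the workers have registered is to query the coordinator's REST API; /v1/node lists the nodes known to the coordinator. A sketch with CURL overridable so the call can be stubbed out offline.

```shell
# Sketch: list the nodes registered with the coordinator.
# /v1/node is served on the coordinator's HTTP port (8881 in this guide).
CURL=${CURL:-"curl -s"}

check_nodes() {
    $CURL http://hadoop11:8881/v1/node
}
```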

----------------------------------------------------------------------------------------------------

10. Install and configure the command-line client

1. Download the executable jar:
	https://repo1.maven.org/maven2/com/facebook/presto/presto-cli/0.196/presto-cli-0.196-executable.jar
2. Upload presto-cli-0.196-executable.jar to /opt/module/presto on hadoop11
3. Rename the file:
	mv presto-cli-0.196-executable.jar prestocli
4. Make it executable:
	chmod +x prestocli
5. Start prestocli:
   ./prestocli --server hadoop11:8881 --catalog hive --schema default
   
Presto's command-line operation is much like Hive's, except that every table reference must be qualified with its schema.
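For scripted use, presto-cli also accepts a one-shot statement via --execute. A wrapper sketch reusing the connection options above; PRESTO_CLI is overridable so the wrapper can be exercised without a running cluster.

```shell
# Sketch: run a single statement through the Presto CLI and exit.
PRESTO_CLI=${PRESTO_CLI:-./prestocli}

run_query() {
    "$PRESTO_CLI" --server hadoop11:8881 --catalog hive --schema default \
        --execute "$1"
}

# Example: run_query "show tables;"
```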

11. Install the Presto visual client (yanagishima)

Upload yanagishima-18.0.zip to the /opt/module directory on hadoop11:
	cp yanagishima-18.0.zip /opt/module/
Unzip the package:
	unzip yanagishima-18.0.zip
	cd yanagishima-18.0
In the /opt/module/yanagishima-18.0/conf directory, edit yanagishima.properties:
	vim yanagishima.properties
Add the following:
	jetty.port=7080
	presto.datasources=root-presto
	presto.coordinator.server.root-presto=http://hadoop11:8881
	catalog.root-presto=hive
	schema.root-presto=default
	sql.query.engines=presto

Full configuration file:

# yanagishima web port
jetty.port=7080
# 30 minutes. If presto query exceeds this time, yanagishima cancel the query.
presto.query.max-run-time-seconds=1800
# 1GB. If presto query result file size exceeds this value, yanagishima cancel the query.
presto.max-result-file-byte-size=1073741824
# you can specify freely. But you need to specify same name to presto.coordinator.server.[...] and presto.redirect.server.[...] and catalog.[...] and schema.[...]
presto.datasources=root-presto
auth.root-presto=false
# presto coordinator url
presto.coordinator.server.root-presto=http://hadoop11:8881
# almost same as presto coordinator url. If you use reverse proxy, specify it
presto.redirect.server.root-presto=http://hadoop11:8881
# presto catalog name
catalog.root-presto=hive
# presto schema name
schema.root-presto=default
# if query result exceeds this limit, to show rest of result is skipped
select.limit=500
# http header name for audit log
audit.http.header.name=some.auth.header
use.audit.http.header.name=false
# limit to convert from tsv to values query
to.values.query.limit=500
# authorization feature
check.datasource=false
hive.jdbc.url.your-hive=jdbc:hive2://localhost:10000/default;auth=noSasl
hive.jdbc.user.your-hive=root
hive.jdbc.password.your-hive=000000
hive.query.max-run-time-seconds=3600
hive.query.max-run-time-seconds.your-hive=3600
resource.manager.url.your-hive=http://localhost:8088
sql.query.engines=presto
hive.datasources=your-hive
hive.disallowed.keywords.your-hive=insert,drop
# 1GB. If hive query result file size exceeds this value, yanagishima cancel the query.
hive.max-result-file-byte-size=1073741824
hive.setup.query.path.your-hive=/usr/local/yanagishima/conf/hive_setup_query_your-hive
cors.enabled=false

Start yanagishima from the /opt/module/yanagishima-18.0 directory:
nohup bin/yanagishima-start.sh >y.log 2>&1 &
