I usually deploy software under /opt/module.
tar -zxvf presto-server-0.196.tar.gz -C /opt/module/
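The tarball unpacks as presto-server-0.196, but every path below refers to /opt/module/presto. The original notes do not show this step, so the rename is an assumption on my part:
mv /opt/module/presto-server-0.196 /opt/module/presto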
Inside /opt/module/presto, create a data directory:
mkdir data
Path:
/opt/module/presto/data
mkdir etc
Path:
/opt/module/presto/etc
Create the JVM configuration in etc:
vim jvm.config
Add the following (size -Xmx to the node's available memory):
-server
-Xmx16G
-XX:+UseG1GC
-XX:G1HeapRegionSize=32M
-XX:+UseGCOverheadLimit
-XX:+ExplicitGCInvokesConcurrent
-XX:+HeapDumpOnOutOfMemoryError
-XX:+ExitOnOutOfMemoryError
mkdir catalog
Path:
/opt/module/presto/etc/catalog
Create the Hive connector configuration file:
vim hive.properties
Add:
connector.name=hive-hadoop2
# thrift address of the remote Hive metastore
hive.metastore.uri=thrift://hadoop11:9083
Distribute the Presto directory to the other nodes (xsync here is a custom cluster sync script):
xsync presto
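If an xsync script is not available, a minimal alternative (assuming passwordless SSH and that /opt/module already exists on hadoop12 and hadoop13) is a plain rsync loop:
# run on hadoop11 from /opt/module
for host in hadoop12 hadoop13; do
  rsync -av presto/ ${host}:/opt/module/presto/
done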
Configure the node properties in the etc directory on each node; node.id must be unique per node.
On hadoop11:
vim node.properties
node.environment=production
node.id=ffffffff-ffff-ffff-ffff-ffffffffffff
node.data-dir=/opt/module/presto/data
On hadoop12:
vim node.properties
node.environment=production
node.id=ffffffff-ffff-ffff-ffff-fffffffffffe
node.data-dir=/opt/module/presto/data
On hadoop13:
vim node.properties
node.environment=production
node.id=ffffffff-ffff-ffff-ffff-fffffffffffd
node.data-dir=/opt/module/presto/data
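Instead of hand-editing three different hex strings, node.id can be generated. A small sketch, assuming uuidgen is installed and the install path is the same on every host, run once per node:
# overwrite node.properties with a freshly generated unique node.id
cat > /opt/module/presto/etc/node.properties <<EOF
node.environment=production
node.id=$(uuidgen)
node.data-dir=/opt/module/presto/data
EOF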
On hadoop11, configure the coordinator in the etc directory (vim config.properties):
coordinator=true
node-scheduler.include-coordinator=false
http-server.http.port=8881
query.max-memory=50GB
discovery-server.enabled=true
discovery.uri=http://hadoop11:8881
On hadoop12, configure a worker in the etc directory (vim config.properties); discovery.uri must point to the coordinator:
coordinator=false
http-server.http.port=8881
query.max-memory=50GB
discovery.uri=http://hadoop11:8881
On hadoop13, configure a worker in the etc directory (vim config.properties):
coordinator=false
http-server.http.port=8881
query.max-memory=50GB
discovery.uri=http://hadoop11:8881
Start the Hive metastore service (from the Hive home directory on hadoop11):
nohup bin/hive --service metastore >/dev/null 2>&1 &
Start the Presto server on every node (from /opt/module/presto):
bin/launcher start
Logs can be viewed under:
/opt/module/presto/data/var/log
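A quick sanity check after startup (the PrestoServer process name, the log location, and the 8881 port all follow from the configuration above):
# on each node: is the server process up?
jps | grep PrestoServer
# last 50 lines of the server log
tail -n 50 /opt/module/presto/data/var/log/server.log
# the coordinator should answer on its HTTP port
curl -s http://hadoop11:8881/v1/info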
----------------------------------------------------Presto CLI-----------------------------------------
1. Download the CLI jar:
https://repo1.maven.org/maven2/com/facebook/presto/presto-cli/0.196/presto-cli-0.196-executable.jar
2. Upload presto-cli-0.196-executable.jar to /opt/module/presto on hadoop11
3. Rename the file
mv /opt/module/presto/presto-cli-0.196-executable.jar /opt/module/presto/prestocli
4. Make it executable
chmod +x prestocli
5. Start prestocli
./prestocli --server hadoop11:8881 --catalog hive --schema default
The Presto CLI works much like the Hive CLI, except that every table must be qualified with its schema.
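For a quick smoke test, queries can also be passed non-interactively with --execute; the table name below is only a placeholder for whatever exists in your Hive default database:
./prestocli --server hadoop11:8881 --catalog hive --schema default --execute "show tables"
./prestocli --server hadoop11:8881 --catalog hive --schema default --execute "select * from default.some_table limit 10"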
----------------------------------------------------yanagishima-----------------------------------------
Upload yanagishima-18.0.zip to hadoop11 and copy it into the /opt/module directory:
cp yanagishima-18.0.zip /opt/module/
Unzip the package
unzip yanagishima-18.0.zip
cd yanagishima-18.0
Go to /opt/module/yanagishima-18.0/conf and edit the yanagishima.properties configuration:
vim yanagishima.properties
Add the following:
jetty.port=7080
presto.datasources=root-presto
presto.coordinator.server.root-presto=http://hadoop11:8881
catalog.root-presto=hive
schema.root-presto=default
sql.query.engines=presto
Complete configuration file for reference:
# yanagishima web port
jetty.port=7080
# 30 minutes. If presto query exceeds this time, yanagishima cancel the query.
presto.query.max-run-time-seconds=1800
# 1GB. If presto query result file size exceeds this value, yanagishima cancel the query.
presto.max-result-file-byte-size=1073741824
# you can specify freely. But you need to specify same name to presto.coordinator.server.[...] and presto.redirect.server.[...] and catalog.[...] and schema.[...]
presto.datasources=root-presto
auth.root-presto=false
# presto coordinator url
presto.coordinator.server.root-presto=http://hadoop11:8881
# almost same as presto coordinator url. If you use reverse proxy, specify it
presto.redirect.server.root-presto=http://hadoop11:8881
# presto catalog name
catalog.root-presto=hive
# presto schema name
schema.root-presto=default
# if query result exceeds this limit, to show rest of result is skipped
select.limit=500
# http header name for audit log
audit.http.header.name=some.auth.header
use.audit.http.header.name=false
# limit to convert from tsv to values query
to.values.query.limit=500
# authorization feature
check.datasource=false
hive.jdbc.url.your-hive=jdbc:hive2://localhost:10000/default;auth=noSasl
hive.jdbc.user.your-hive=root
hive.jdbc.password.your-hive=000000
hive.query.max-run-time-seconds=3600
hive.query.max-run-time-seconds.your-hive=3600
resource.manager.url.your-hive=http://localhost:8088
sql.query.engines=presto
hive.datasources=your-hive
hive.disallowed.keywords.your-hive=insert,drop
# 1GB. If hive query result file size exceeds this value, yanagishima cancel the query.
hive.max-result-file-byte-size=1073741824
hive.setup.query.path.your-hive=/usr/local/yanagishima/conf/hive_setup_query_your-hive
cors.enabled=false
Start yanagishima from /opt/module/yanagishima-18.0:
nohup bin/yanagishima-start.sh >y.log 2>&1 &
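Once started, the web UI should be reachable at http://hadoop11:7080 (the jetty.port set above). A quick check that it is listening:
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop11:7080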