See the migration guide from Hive. Presto basically follows ANSI SQL syntax and semantics, while Hive uses its own SQL-like syntax.
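One common pitfall, for example, is string quoting: Hive accepts double quotes for string literals, whereas in Presto double quotes denote identifiers and string literals must use single quotes. A minimal sketch (the table and column names are hypothetical):

-- Works in Hive, but in Presto "apple" is parsed as a column identifier
SELECT * FROM my_table WHERE name = "apple"
-- Portable ANSI SQL: use single quotes for string literals
SELECT * FROM my_table WHERE name = 'apple'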
See the list of functions of Presto. For Treasure Data specific UDFs, see Presto UDFs.
Presto can process millions of rows in a second. If you see query errors or unexpected results, first try to minimize your input data set, since inspecting many rows by hand is hard. Here is a guideline for fixing your query:
Extract sub-queries and check their results one by one (see the sketch after the WHERE-clause example below).
Check the conditions in the WHERE clause carefully. For example:
SELECT * FROM table1 WHERE col1 < 100 OR col2 is TRUE AND TD_TIME_RANGE(time, '2015-11-01')
This query looks as if it limits the time range, but it actually scans the whole table. This is because AND has stronger precedence than OR, so this condition is equivalent to:
(col1 < 100) OR (col2 is TRUE AND TD_TIME_RANGE(time, '2015-11-01'))
The first condition does not specify any TD_TIME_RANGE, so it results in scanning the whole table. To fix this, use parentheses appropriately:
(col1 < 100 OR col2 is TRUE) AND TD_TIME_RANGE(time, '2015-11-01')
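To illustrate the first guideline, here is a minimal sketch of extracting a sub-query and checking its result in isolation (the table and column names are hypothetical):

-- Original query whose result looks wrong
SELECT u.name FROM (SELECT id, name FROM users WHERE age > 20) u WHERE u.id < 100
-- Step 1: run the sub-query alone and inspect a sample of its output
SELECT id, name FROM users WHERE age > 20 LIMIT 10
-- Step 2: once the sub-query result looks correct, re-attach the outer conditions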
To test Presto queries, you don't need to import any data set. Just use the VALUES clause to prepare a sample data set:
SELECT * FROM (VALUES (1, 'apple'), (2, 'banana')) AS fruit(id, name);
-- This query gives the following table:
--  id | name
-- ----+--------
--   1 | apple
--   2 | banana
(For Treasure Data customers) If you still see a problem, ask our support. Our support engineers are experts in SQL and various types of data analysis. Send an e-mail to [email protected] with the job ID(s) of your queries. Information about the expected results and the meaning of your data set helps us give you a better answer.
See the answer below.
Presto transfers data through the network. Due to connection timeouts or problems in worker nodes, this network data transfer may occasionally fail (PAGE_TRANSPORT_TIMEOUT, WORKER_RESTARTED, etc.). Presto is designed to process queries faster than Hive, so it sacrifices fault-tolerance to some extent. In reality, however, more than 99.5% of Presto queries finish without any error on the first run. In addition, Treasure Data provides a query retry mechanism on query failures, so nearly 100% of queries finish successfully after retries.
Note, however, that because of the nature of network-based distributed query processing, if your query tries to process billions of records, the chance of hitting network failures increases. If you see the PAGE_TRANSPORT_TIMEOUT error frequently, try to reduce the input data size by narrowing down the TD_TIME_RANGE or reducing the number of columns in the SELECT statement.
Presto is a distributed query engine, but some operators need to be processed in a single process. For example, distinct and ORDER BY (sorting) run on a single worker node, as described in the memory-related answers below.
Treasure Data is a time-series database and partitions your data into 1-hour buckets, so focusing on a specific time range of the data gives the best query performance. TD_TIME_RANGE is your friend:
SELECT ... FROM ... WHERE TD_TIME_RANGE(time, '2015-10-01 PST', '2015-11-01 PST')
See Q: Query that produces a huge result is slow
Presto's query optimizer is unable to improve queries that use many LIKE clauses, so query execution can be slower than expected. To improve performance, you can substitute a series of LIKE clauses chained with OR with a single regexp_like clause. For example:
SELECT ... FROM access WHERE method LIKE '%GET%' OR method LIKE '%POST%' OR method LIKE '%PUT%' OR method LIKE '%DELETE%'
can be optimized by replacing the 4 LIKE clauses with a single regexp_like function:
SELECT ... FROM access WHERE regexp_like(method, 'GET|POST|PUT|DELETE')
If you have a huge number of rows in a 1-hour partition, processing that partition can become the performance bottleneck. To check the number of rows contained in each partition, run the following query:
SELECT TD_TIME_FORMAT(time, 'yyyy-MM-dd HH') hour, count(*) cnt
FROM table1
WHERE TD_TIME_RANGE(time, '2015-10-01 UTC', '2015-11-01 UTC')
GROUP BY 1
ORDER BY cnt DESC
LIMIT 100
This query shows the 100 partitions that contain the highest number of records between 2015-10-01 and 2015-11-01.
An equi-join combines tables by comparing join keys with the equality (=) operator. If this comparison involves complex expressions, the join processing slows down. For example, suppose you want to join two tables on a date string such as '2015-10-01', but one of the tables only has separate columns for year, month, and day values. You might write the following query to generate date strings:
SELECT a.date, b.name
FROM left_table a
JOIN right_table b
  ON a.date = CAST((b.year * 10000 + b.month * 100 + b.day) AS VARCHAR)
This query delays the join processing since the join condition involves several expressions. You can make it faster by pushing the condition down into a sub query to prepare the join key beforehand:
SELECT a.date, b.name
FROM left_table a
JOIN (
  SELECT
    CAST((year * 10000 + month * 100 + day) AS VARCHAR) date, -- generate the join key
    name
  FROM right_table
) b
ON a.date = b.date -- simple equi-join
In this example, the join keys are the a.date and b.date columns. Comparing two VARCHAR columns is much faster than comparing a VARCHAR column with an expression result. In the future, Presto may be able to optimize this type of query, but for now you need to rewrite the query by hand.
If your query becomes complex or deeply nested, try to extract sub queries using a WITH clause. For example, the following query has a nested sub query:
SELECT a, b, c
FROM (
  SELECT a, MAX(b) AS b, MIN(c) AS c
  FROM tbl
  GROUP BY a
) tbl_alias
can be rewritten as follows:
WITH tbl_alias AS
  (SELECT a, MAX(b) AS b, MIN(c) AS c FROM tbl GROUP BY a)
SELECT a, b, c FROM tbl_alias
You can also enumerate multiple sub-queries in a WITH clause by separating them with commas:
WITH
  tbl1 AS (SELECT a, MAX(b) AS b, MIN(c) AS c FROM tbl GROUP BY a),
  tbl2 AS (SELECT a, AVG(d) AS d FROM another_tbl GROUP BY a)
SELECT tbl1.*, tbl2.*
FROM tbl1 JOIN tbl2 ON tbl1.a = tbl2.a
If your CREATE TABLE query becomes complex or deeply nested, you can also extract sub queries using a WITH clause. For example, one sub query can be rewritten as follows:
CREATE TABLE tbl_new AS
WITH tbl_alias AS
  (SELECT a, MAX(b) AS b, MIN(c) AS c FROM tbl1)
SELECT a, b, c FROM tbl_alias
You can also enumerate multiple sub-queries in the WITH clause as follows:
CREATE TABLE tbl_new AS
WITH
  tbl_alias1 AS (SELECT a, MAX(b) AS b, MIN(c) AS c FROM tbl1),
  tbl_alias2 AS (SELECT a, AVG(d) AS d FROM tbl2)
SELECT tbl_alias1.*, tbl_alias2.*
FROM tbl_alias1 JOIN tbl_alias2 ON tbl_alias1.a = tbl_alias2.a
The GROUP BY clause requires repeating the same expression that appears in the SELECT statement:
SELECT TD_TIME_FORMAT(time, 'yyyy-MM-dd HH', 'PDT') hour, count(*) cnt
FROM my_table
GROUP BY TD_TIME_FORMAT(time, 'yyyy-MM-dd HH', 'PDT') -- redundant expression
You can simplify this query by using GROUP BY 1, 2, …:
SELECT TD_TIME_FORMAT(time, 'yyyy-MM-dd HH', 'PDT') hour, count(*) cnt
FROM my_table
GROUP BY 1
These numbers correspond to the column indexes (1-origin) of the SELECT statement.
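For instance, here is a sketch of grouping by two expressions with ordinal numbers (the method column is hypothetical):

SELECT TD_TIME_FORMAT(time, 'yyyy-MM-dd', 'PDT') day, method, count(*) cnt
FROM my_table
GROUP BY 1, 2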
Answer: Rewrite your SQL to use less memory.
Presto tracks the memory usage of each query. While the available memory varies according to your price plan, in most cases you can rewrite your query to resolve this issue. Here is a list of memory-intensive operations:
distinct eliminates duplicate rows. For example, the following query returns the distinct set of (c1, c2, c3) tuples in your table:
SELECT distinct c1, c2, c3 FROM my_table
This stores the entire set of columns c1, c2, and c3 in the memory of a single worker node to check the uniqueness of the tuples. The amount of required memory increases with the number of columns and their size. Remove distinct from your query, or apply it only after reducing the number of input rows with a sub query.
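For example, a minimal sketch of narrowing the input rows in a sub query before applying distinct (the time range is hypothetical):

SELECT distinct c1, c2, c3
FROM (
  SELECT c1, c2, c3 FROM my_table
  WHERE TD_TIME_RANGE(time, '2015-10-01 UTC', '2015-10-02 UTC')
)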
If you are counting the number of distinct users, events, etc., you usually use the count(distinct id) operation, but this can cause memory issues:
SELECT count(distinct id) FROM my_table
To reduce the memory usage, use an approximate version of count(distinct x):
SELECT approx_distinct(id) FROM my_table
approx_distinct(x) returns an approximate result; the probability that it returns a value far from the true count is quite low. If you simply need to glance at the characteristics of your data set, use this approximate version.
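Depending on your Presto version, approx_distinct may also accept a maximum standard error as a second argument, trading memory for accuracy; a hedged sketch:

-- Request a tighter error bound than the default (uses more memory)
SELECT approx_distinct(id, 0.01) FROM my_table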
For the same reason as distinct, UNION of SQL queries performs duplicate elimination and requires a substantial amount of memory:
SELECT c1, c2, c3 FROM my_table1 UNION SELECT c1, c2, c3 FROM my_table2
If you use UNION ALL, you can avoid the duplicate elimination:
SELECT c1, c2, c3 FROM my_table1 UNION ALL SELECT c1, c2, c3 FROM my_table2
This requires less memory and is faster. If you need to concatenate two or more SQL query results, use UNION ALL.
SELECT c1, c2 FROM my_table ORDER BY c1
Presto performs sorting using a single worker node, so the entire data set must fit within the memory limit of a worker (usually less than 5GB).
If you are sorting a small number of rows (e.g., ~10,000), using ORDER BY is fine, but if you are going to sort GBs of data, you might need an alternative strategy: if sorting the entire data set is necessary, you can combine Hive and Presto.
First store the results of your Presto query by using a CREATE TABLE AS or INSERT INTO query, then use Hive to sort the data set.
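A minimal sketch of this two-step workflow (the table and column names are hypothetical; the last statement runs as a Hive job, not a Presto job):

-- Step 1 (Presto): materialize the unsorted result
DROP TABLE IF EXISTS my_result;
CREATE TABLE my_result AS SELECT c1, c2 FROM my_table;
-- Step 2 (Hive): sort the materialized data set
SELECT c1, c2 FROM my_result ORDER BY c1;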
SELECT avg(c1), min_by(c2, time), max(c3), count(c4), ... FROM my_table GROUP BY c1, c2, c3, c4, ...
If you enumerate many target columns in a GROUP BY clause, storing the set of keys (c1, c2, c3, c4, …) requires a lot of memory. Reducing the number of columns in the GROUP BY clause reduces the memory usage.
For Treasure Data customers, we provide the smart_digest(key) UDF, which creates smaller hash values to reduce the size of the keys.
SELECT smart_digest(path), count(*) FROM www_access GROUP BY smart_digest(path)
The following type of query, which starts with a small table in the join clause, usually causes Presto to hit its memory limits:
SELECT * FROM small_table, large_table WHERE small_table.id = large_table.id
Presto performs a broadcast join by default, which partitions the left-hand side table across several worker nodes, then sends an entire copy of the right-hand side table to each worker node that has a partition. If the right-hand side table is large and doesn't fit in memory on a worker node, it causes an error.
Reordering the join so that the largest table comes first solves the issue:
SELECT * FROM large_table, small_table WHERE large_table.id = small_table.id
This query distributes the left table (large_table), greatly reducing the chance of hitting the memory limit.
If your query still doesn't work, try a distributed join by adding a magic comment that sets a session property:
-- set session distributed_join = 'true'
SELECT * FROM large_table, small_table WHERE small_table.id = large_table.id
The distributed join algorithm partitions both the left and right-hand side tables by using hash values of the join key(s) as the partitioning key, so it works even if the right-hand side table is large. The downside is that it increases the number of network data transfers and is usually slower than a broadcast join.
Answer: Consider using CREATE TABLE AS or INSERT INTO.
SELECT * FROM my_table
Presto uses JSON text to materialize query results. If the above table contains 100GB of data, the coordinator transfers more than 100GB of JSON text to save the query result. So even if the query computation is almost finished, outputting the JSON results takes a long time.
DROP TABLE IF EXISTS my_result;
CREATE TABLE my_result AS SELECT * FROM my_table;
You can parallelize the query result output process by using a CREATE TABLE AS SELECT statement. To properly clean up the result table beforehand, add a DROP TABLE statement at the top of your query. The result output performance can be 5x or more faster than running SELECT *. Our Presto skips the JSON output process and directly produces a 1-hour partitioned table.
You can also use INSERT INTO (table) SELECT … to append data to an existing table. It also improves the query result output performance:
CREATE TABLE IF NOT EXISTS my_result(time bigint);
INSERT INTO my_result SELECT * FROM my_table;
Note that if the subsequent SELECT statement does not produce a time column, INSERT INTO attaches the query execution time as the time column values, so you can find the inserted rows by using the query execution time.
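For example, here is a hedged sketch of locating the rows written by a particular INSERT INTO job (the unix timestamp 1479093060 is hypothetical):

-- Assuming the INSERT INTO job ran at unix time 1479093060,
-- the inserted rows carry that value in the time column
SELECT * FROM my_result WHERE time = 1479093060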
SELECT * FROM my_table
Treasure Data uses a column-oriented storage format, so accessing a small set of columns is really fast; however, as the number of columns in your query increases, query performance deteriorates. Be selective in choosing your columns:
SELECT id, name, address FROM my_table
(This is an experimental feature that is subject to change in the future.) By setting result_output_redirect='true' within a magic comment, you can make the query output faster:
-- set session result_output_redirect='true'
SELECT * FROM my_table
With this query hint, our Presto produces the query results in parallel and skips the JSON conversion process at the coordinator.
Sometimes, you may need to report access counts for different date ranges as columns in a single row. For example:
SELECT d1.id, d1.total AS day1_total, d2.total AS day2_total
FROM (
  SELECT id, count(*) total FROM my_table
  WHERE TD_TIME_RANGE(time, TD_TIME_ADD(TD_SCHEDULED_TIME(), '-1d'), TD_SCHEDULED_TIME(), 'UTC')
  GROUP BY id
) d1
LEFT JOIN (
  SELECT id, count(*) total FROM my_table
  WHERE TD_TIME_RANGE(time, TD_TIME_ADD(TD_SCHEDULED_TIME(), '-2d'), TD_TIME_ADD(TD_SCHEDULED_TIME(), '-1d'), 'UTC')
  GROUP BY id
) d2
ON d1.id = d2.id
The result of this query would be:
 id | day1_total | day2_total
----+------------+------------
  1 |         10 |         13
  2 |         14 |          3
However, if your input table is huge, this query becomes inefficient since it involves joins and scans the same table multiple times.
A more efficient approach is, instead of using joins, to create a sparse table in a single table scan as follows:
SELECT id,
  CASE diff WHEN 0 THEN 1 ELSE 0 END AS day1,
  CASE diff WHEN 1 THEN 1 ELSE 0 END AS day2
FROM (
  SELECT id,
    date_diff('day', date_trunc('day', from_unixtime(time)),
              date_trunc('day', from_unixtime(TD_SCHEDULED_TIME()))) AS diff
  FROM my_table
  WHERE TD_TIME_RANGE(time, TD_TIME_ADD(TD_SCHEDULED_TIME(), '-2d'), TD_SCHEDULED_TIME(), 'UTC')
)
 id | day1 | day2
----+------+------
  1 |    1 |    0
  2 |    1 |    0
  1 |    1 |    0
  1 |    0 |    1
  2 |    0 |    1
  2 |    0 |    1
  … |    … |    …
Then, aggregate the result:
SELECT id, sum(day1) AS day1_total, sum(day2) AS day2_total FROM sparse_table GROUP BY id
For readability, you can write these steps as a single job by using a WITH statement:
WITH
-- Compute the date difference of each event from TD_SCHEDULED_TIME
date_diff_table AS (
  SELECT id,
    date_diff('day', date_trunc('day', from_unixtime(time)),
              date_trunc('day', from_unixtime(TD_SCHEDULED_TIME()))) AS diff
  FROM my_table
  WHERE TD_TIME_RANGE(time, TD_TIME_ADD(TD_SCHEDULED_TIME(), '-2d'), TD_SCHEDULED_TIME(), 'UTC')
),
-- Create a sparse table, which maps daily event counts into day1, day2, ...
sparse_table AS (
  SELECT id,
    CASE diff WHEN 0 THEN 1 ELSE 0 END AS day1,
    CASE diff WHEN 1 THEN 1 ELSE 0 END AS day2
  FROM date_diff_table
)
-- Aggregate the sparse table
SELECT id, sum(day1) AS day1_total, sum(day2) AS day2_total
FROM sparse_table
GROUP BY id
Presto does not provide a median function, which returns the middle value of a sorted list. Instead, you can use the approx_percentile function:
SELECT approx_percentile(price, 0.5) FROM nasdaq
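Depending on your Presto version, approx_percentile may also accept an array of percentages, computing several quantiles in a single scan; a hedged sketch:

-- Approximate quartiles of price (returns an array of three values)
SELECT approx_percentile(price, ARRAY[0.25, 0.5, 0.75]) FROM nasdaq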
Presto supports multiple statement execution, separated by semicolons (;):
DROP TABLE IF EXISTS my_table;
CREATE TABLE my_table AS SELECT ...;
Note, however, that multiple statement execution does not support transactions. See also the limitations.
SELECT 'hello ' || 'presto'
This returns the string 'hello presto'.
COALESCE(v1, v2, ...) returns the first non-null value from v1, v2, …:
-- This returns 'N/A' if the name value is null
SELECT COALESCE(name, 'N/A') FROM table1