After a long period of intense engineering effort and user feedback, we are very pleased and proud to announce the Cloudera Impala project. This technology is revolutionary for Hadoop users, and we do not take that claim lightly.
When Google published its Dremel paper in 2010, we were as inspired as the rest of the community by the technical vision to bring real-time, ad hoc query capability to Apache Hadoop, complementing traditional MapReduce batch processing. Today, we are announcing a fully functional, open source codebase that delivers on that vision – and, we believe, a bit more – which we call Cloudera Impala. An Impala binary is now available in public beta form, but if you would prefer to test-drive Impala via a pre-baked VM, we have one of those for you, too. (Links to all downloads and documentation are here.) You can also review the source code and testing harness on GitHub right now.
Impala raises the bar for query performance while retaining a familiar user experience. With Impala, you can query data, whether stored in HDFS or Apache HBase – including SELECT, JOIN, and aggregate functions – in real time. Furthermore, it uses the same metadata, SQL syntax (Hive SQL), ODBC driver, and user interface (Hue Beeswax) as Apache Hive, providing a familiar and unified platform for batch-oriented or real-time queries. (For that reason, Hive users can utilize Impala with little setup overhead.) The first beta drop includes support for text files and SequenceFiles; SequenceFiles can be compressed with Snappy, GZIP, or BZIP2 (Snappy is recommended for maximum performance). Support for additional formats – including Avro, RCFile, LZO text files, and the Parquet columnar format – is planned for the production drop.
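As a quick illustration (the table and column names below are hypothetical – any tables already registered in the Hive metastore will do), a statement like the following runs unchanged in both Hive and Impala:

```sql
-- Hypothetical tables registered in the shared Hive metastore.
-- The same statement runs as MapReduce jobs in Hive and natively in Impala.
SELECT c.region,
       COUNT(*)     AS num_orders,
       SUM(o.total) AS revenue
FROM orders o
JOIN customers c ON o.customer_id = c.id
WHERE o.order_date >= '2012-01-01'
GROUP BY c.region;
```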
To avoid latency, Impala circumvents MapReduce to directly access the data through a specialized distributed query engine that is very similar to those found in commercial parallel RDBMSs. The result is order-of-magnitude faster performance than Hive, depending on the type of query and configuration. (See FAQ below for more details.) Note that this performance improvement has been confirmed by several large companies that have tested Impala on real-world workloads for several months now.
A high-level architectural view is below:
There are many advantages to this approach over alternatives for querying Hadoop data, including:
- Thanks to local processing on data nodes, network bottlenecks are avoided.
- A single, open, and unified metadata store can be utilized.
- Costly data format conversion is unnecessary and thus no overhead is incurred.
- All data is immediately query-able, with no delays for ETL.
- All hardware is utilized for Impala queries as well as for MapReduce.
- Only a single machine pool is needed to scale.
We encourage you to read the documentation for further technical details.
Finally, we’d like to answer some questions that we anticipate will be popular:
Is Impala open source?
Yes, Impala is 100% open source (Apache License). You can review the code for yourself at Github today.
How is Impala different from Dremel?
The first and principal difference is that Impala is open source and available for everyone to use, whereas Dremel is proprietary to Google.
Technically, Dremel achieves interactive response times over very large data sets through the use of two techniques:
- A novel columnar storage format for nested relational data (i.e., data with nested structures)
- Distributed scalable aggregation algorithms, which allow the results of a query to be computed on thousands of machines in parallel.
The latter is borrowed from techniques developed for parallel DBMSs, which also inspired the creation of Impala. Unlike Dremel as described in the 2010 paper, which could only handle single-table queries, Impala already supports the full set of join operators, one of the factors that make SQL so popular.
To realize the full performance benefits demonstrated by Dremel, Hadoop will shortly have an efficient columnar binary storage format called Parquet. Unlike Dremel, however, Impala also supports a range of popular file formats. This lets users run Impala on their existing data without having to “load” or transform it (a sketch follows below). It also lets users decide whether they want to optimize for flexibility or for pure performance.
To sum it up, Impala plus Parquet will achieve the query performance described in the Dremel paper, but surpass what is described there in SQL functionality.
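To make the “no load” point concrete, here is a rough sketch (the schema and HDFS path are hypothetical) of exposing existing text files to both Hive and Impala simply by declaring an external table over them – in the current beta, DDL like this is issued through Hive, and Impala then scans the same files in place:

```sql
-- Hypothetical schema over tab-delimited text files that already live in HDFS.
-- No load step or format conversion is required; the files are queried in place.
CREATE EXTERNAL TABLE web_logs (
  ts      STRING,
  user_id BIGINT,
  url     STRING,
  bytes   INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
STORED AS TEXTFILE
LOCATION '/data/web_logs';
```

Queries against such a table need not change if the underlying data is later rewritten into a more performance-oriented format such as Parquet.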
How much faster are Impala queries than Hive ones, really?
The precise amount of performance improvement is highly dependent on a number of factors:
- Hardware configuration: Impala is generally able to take full advantage of hardware resources and specifically generates less CPU load than Hive, which often translates into higher observed aggregate I/O bandwidth than with Hive. Impala of course cannot go faster than the hardware permits, so any hardware bottlenecks will limit the observed speedup. For purely I/O bound queries, we typically see performance gains in the range of 3-4x.
- Complexity of the query: Queries that require multiple MapReduce phases in Hive or require reduce-side joins will see a higher speedup than, say, simple single-table aggregation queries. For queries with at least one join, we have seen performance gains of 7-45x.
- Availability of main memory as a cache for table data: If the data accessed through the query comes out of the cache, the speedup will be more dramatic thanks to Impala’s superior efficiency. In those scenarios, we have seen speedups of 20x-90x over Hive even on simple aggregation queries.
Is Impala a replacement for MapReduce or Hive – or for traditional data warehouse infrastructure, for that matter?
No. There will continue to be many viable use cases for MapReduce and Hive (for example, for long-running data transformation workloads) as well as traditional data warehouse frameworks (for example, for complex analytics on limited, structured data sets). Impala is a complement to those approaches, supporting use cases where users need to interact with very large data sets, across all data silos, to get focused result sets quickly.
Does the Impala Beta Release have any technical limitations?
As mentioned previously, supported file formats in the first beta drop include text files and SequenceFiles, with many other formats to be supported in the upcoming production release. Furthermore, currently all joins are done in a memory space no larger than that of the smallest node in the cluster; in production, joins will be done in aggregate memory. Lastly, user-defined functions (UDFs) are not supported at this time.
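One practical consequence of the in-memory join limitation – assuming the beta’s broadcast-join strategy, in which the right-hand table of each join is materialized in memory – is that join order matters: list the largest table first so that only the smaller tables need to fit within a single node’s memory. A hypothetical example:

```sql
-- Hypothetical tables: 'events' is large, 'users' is small.
-- Keeping the smaller table on the right-hand side of the join keeps the
-- in-memory (build) side within a single node's RAM during the beta.
SELECT u.country, COUNT(*) AS event_count
FROM events e
JOIN users u ON e.user_id = u.id
GROUP BY u.country;
```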
What are the technical requirements for the Impala Beta Release?
You will need to have CDH4.1 installed on RHEL/CentOS 6.2. We highly recommend the use of Cloudera Manager (Free or Enterprise Edition) to deploy and manage Impala because it takes care of distributed deployment and monitoring details automatically.
What is the support policy for the Impala Beta Release?
If you are an existing Cloudera customer with a bug, you may raise a Customer Support ticket and we will attempt to resolve it on a best-effort basis. If you are not an existing Cloudera customer, you may use our public JIRA instance or the impala-user mailing list, which will be monitored by Cloudera employees.
When will Impala be generally available for production use?
A production drop is planned for the first quarter of 2013. Customers may obtain commercial support in the form of a Cloudera Enterprise RTQ subscription at that time.
We hope that you take the opportunity to review the Impala source code, explore the beta release, download and install the VM, or any combination of the above. Your feedback in all cases is appreciated; we need your help to make Impala even better.
We will bring you further updates about Impala as we get closer to production availability. (Update: Read about Impala 1.0.)
Impala resources:
– Impala source code
– Impala downloads (Beta Release and VM)
– Impala documentation
– Public JIRA
– Impala mailing list
– Free Impala training (screencast)
(Added 10/30/2012) Third-party articles about Impala:
- GigaOm: Real-time query for Hadoop democratizes access to big data analytics (Oct. 22, 2012)
- Wired: Man Busts Out of Google, Rebuilds Top-Secret Query Machine (Oct. 24, 2012)
- InformationWeek: Cloudera Debuts Real-Time Hadoop Query (Oct. 24, 2012)
- GigaOm: Cloudera Makes SQL a First-Class Citizen on Hadoop (Oct. 24, 2012)
- ZDNet: Cloudera’s Impala Brings Hadoop to SQL and BI (Oct. 25, 2012)
- Wired: Marcel Kornacker Profile (Oct. 29, 2012)
- Dr. Dobbs: Cloudera Impala – Processing Petabytes at The Speed Of Thought (Oct. 29, 2012)