Cloudera Developer Training for Spark and Hadoop

Course Time: June 27-30, 2016
Course Location: Berkeley Engineering Innovation Center, Zhangjiang Hi-Tech Park, Pudong New Area, Shanghai
Contact us: 400-679-6113
QQ: 1438118790

Certification: CCA-175
Learn how to import data into your Apache Hadoop cluster and process it with Spark, Hive, Flume, Sqoop, Impala, and other Hadoop ecosystem tools.
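
As a rough illustration of that workflow, the sketch below assumes data has already been landed in HDFS (for example by Sqoop or Flume) and processes it with Spark's RDD API from Python. The HDFS path and file contents are hypothetical placeholders, not part of the course materials.

    # Minimal PySpark sketch (illustrative; the HDFS path and data are hypothetical)
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("ingest-and-process").getOrCreate()

    # Read text data previously landed in HDFS, e.g. by Sqoop or Flume
    lines = spark.sparkContext.textFile("hdfs:///user/training/weblogs/")

    # Classic word-count style processing with the RDD API
    counts = (lines.flatMap(lambda line: line.split())
                   .map(lambda word: (word, 1))
                   .reduceByKey(lambda a, b: a + b))

    # Inspect a small sample of the results
    for word, n in counts.take(10):
        print(word, n)

    spark.stop()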

Audience and Prerequisites
This course is designed for developers and engineers who have programming experience. Apache Spark examples and hands-on exercises are presented in Scala and Python, so the ability to program in one of those languages is required. Basic familiarity with the Linux command line is assumed. Basic knowledge of SQL is helpful. Prior knowledge of Hadoop is not required.

Course Outline: Developer Training for Spark and Hadoop
Introduction to Hadoop and the Hadoop ecosystem
Hadoop architecture and HDFS
Importing relational data with Apache Sqoop
Introduction to Impala and Hive
Modeling and managing data with Impala and Hive
Data formats
Data partitioning
Capturing data with Apache Flume
Spark basics
Working with RDDs in Spark
Writing and deploying Spark applications
Parallel programming with Spark
Spark caching and persistence
Common patterns in Spark data processing
Preview: Spark SQL (see the sketch after this outline)
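
To give a brief taste of the Spark SQL preview listed above, here is a minimal sketch; the view name, columns, and values are invented for illustration and are not taken from the course materials.

    # Minimal Spark SQL sketch (illustrative; the table name and columns are made up)
    from pyspark.sql import Row, SparkSession

    spark = SparkSession.builder.appName("spark-sql-preview").getOrCreate()

    # A tiny in-memory DataFrame stands in for a real Hive/Impala table
    people = spark.createDataFrame([
        Row(name="alice", age=34),
        Row(name="bob", age=29),
    ])
    people.createOrReplaceTempView("people")

    # Query the registered view with plain SQL through Spark SQL
    spark.sql("SELECT name FROM people WHERE age > 30").show()

    spark.stop()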
