
Original article: https://dzone.com/articles/r-and-hadoop-data-analytics



Introduction

R is a programming language and software suite for data analysis, statistical computing, and data visualization. It is highly extensible and has object-oriented features and strong graphical capabilities. At its heart R is an interpreted language that comes with a command-line interpreter, available for Linux, Windows, and Mac machines, and there are also IDEs to support development, such as RStudio or JGR. R and Hadoop complement each other very well: they are a natural match in big data analytics and visualization. One of the best-known R packages supporting Hadoop functionality is RHadoop, developed by Revolution Analytics.

Installing RHadoop

RHadoop is a collection of three R packages: rmr, rhdfs, and rhbase. The rmr package provides Hadoop MapReduce functionality in R, rhdfs provides HDFS file management in R, and rhbase provides HBase database management from within R. To install these R packages, we first need to install the R base package. On Ubuntu 12.04 LTS we can do this by running:
$ sudo apt-get install r-base
Then we need to install the RHadoop packages with their dependencies. rmr requires Rcpp, RJSONIO, digest, functional, stringr, and plyr, while rhdfs requires rJava. As part of the installation, we need to reconfigure Java for the rJava package, and we also need to set the HADOOP_CMD variable for the rhdfs package. The installation requires the corresponding tar.gz archives to be downloaded; then we can run the R CMD INSTALL command with sudo privileges:
sudo R CMD INSTALL Rcpp_0.10.2.tar.gz
sudo R CMD INSTALL RJSONIO_1.0-1.tar.gz
sudo R CMD INSTALL digest_0.6.2.tar.gz
sudo R CMD INSTALL functional_0.1.tar.gz
sudo R CMD INSTALL stringr_0.6.2.tar.gz
sudo R CMD INSTALL plyr_1.8.tar.gz
sudo R CMD INSTALL rmr2_2.0.2.tar.gz

sudo JAVA_HOME=/home/istvan/jdk1.6.0_38/jre R CMD javareconf
sudo R CMD INSTALL rJava_0.9-3.tar.gz
sudo HADOOP_CMD=/home/istvan/hadoop/bin/hadoop R CMD INSTALL rhdfs_1.0.5.tar.gz
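
Alternatively, the CRAN dependencies (though not rmr2 and rhdfs themselves, which come from the downloaded archives) can be installed from within an R session. A minimal sketch, assuming access to a CRAN mirror:
# install the CRAN dependencies of rmr2 and rhdfs in one step
install.packages(c("Rcpp", "RJSONIO", "digest", "functional", "stringr", "plyr", "rJava"))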

Getting started with RHadoop

In principle, a RHadoop MapReduce job is similar to R's lapply and sapply functions, which apply a function over a list or vector. Without the mapreduce function we could write simple R code to double all the numbers from 1 to 100:
> ints = 1:100
> doubleInts = sapply(ints, function(x) 2*x)
> head(doubleInts)
[1]  2  4  6  8 10 12
With the RHadoop rmr package we can use the mapreduce function to implement the same calculation - see the doubleInts.R script:
Sys.setenv(HADOOP_HOME="/home/istvan/hadoop")
Sys.setenv(HADOOP_CMD="/home/istvan/hadoop/bin/hadoop")

library(rmr2)
library(rhdfs)

# write the input vector to HDFS
ints = to.dfs(1:100)
# the map function emits each input value next to its double; no reduce phase is needed
calc = mapreduce(input = ints,
                 map = function(k, v) cbind(v, 2*v))

from.dfs(calc)

$val
         v    
  [1,]   1   2
  [2,]   2   4
  [3,]   3   6
  [4,]   4   8
  [5,]   5  10
.....
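
from.dfs returns a key-value structure; to keep working with the result as an ordinary R object, the values can be extracted with the values helper from rmr2. A minimal sketch, reusing the calc object from above:
result <- from.dfs(calc)
doubled <- values(result)   # a matrix whose columns are v and 2*v
head(doubled)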
If we want to run HDFS filesystem commands from R, we first need to initialize rhdfs with the hdfs.init() function; then we can run the well-known ls, rm, mkdir, stat, etc. commands:
> hdfs.init()
> hdfs.ls("/tmp")
  permission  owner      group size          modtime               file
1 drwxr-xr-x istvan supergroup    0 2013-02-25 21:59    /tmp/RtmpC94L4R
2 drwxr-xr-x istvan supergroup    0 2013-02-25 21:49 /tmp/hadoop-istvan
> hdfs.stat("/tmp")
      perms isDir block replication  owner      group size              modtime path
1 rwxr-xr-x  TRUE     0           0 istvan supergroup    0 45124-08-29 23:58:48 /tmp
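
Beyond ls and stat, rhdfs wraps the other common filesystem operations as well. A short sketch of a few of them (the paths here are illustrative):
hdfs.mkdir("/tmp/rhadoop-demo")                       # create a directory
hdfs.put("GDP_converted.csv", "/tmp/rhadoop-demo/")   # copy a local file to HDFS
hdfs.ls("/tmp/rhadoop-demo")                          # verify the upload
hdfs.rm("/tmp/rhadoop-demo")                          # remove it again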

Data analysis with RHadoop

The following example demonstrates how to use RHadoop for data analysis. Let us assume that we need to determine how many countries have a greater GDP than Apple Inc.'s revenue in 2012 (it was 156,508 million USD; see http://www.google.com/finance?q=NASDAQ%3AAAPL&fstype=ii&ei=5eErUcCpB8KOwAOkQQ for more details). GDP data can be downloaded from the World Bank data catalog site. The data needs to be adjusted to be suitable for the MapReduce algorithm. The final format that we used for the data analysis is as follows (where the last column is the GDP of the given country in millions USD):
Country Code,Number,Country Name,GDP
USA,1,United States,14991300
CHN,2,China,7318499
JPN,3,Japan,5867154
DEU,4,Germany,3600833
FRA,5,France,2773032
....
The gdp.R script looks like this:
Sys.setenv(HADOOP_HOME="/home/istvan/hadoop")
Sys.setenv(HADOOP_CMD="/home/istvan/hadoop/bin/hadoop")

library(rmr2)
library(rhdfs)

setwd("/home/istvan/rhadoop/blogs/")
gdp <- read.csv("GDP_converted.csv")
head(gdp)

hdfs.init()
gdp.values <- to.dfs(gdp)

# AAPL revenue in 2012 in millions USD
aaplRevenue = 156508

# map: compare each country's GDP (the fourth column) to Apple's revenue
gdp.map.fn <- function(k, v) {
    key <- ifelse(v[4] < aaplRevenue, "less", "greater")
    keyval(key, 1)
}

# reduce: count the countries in each category
count.reduce.fn <- function(k, v) {
    keyval(k, length(v))
}

count <- mapreduce(input=gdp.values,
                   map = gdp.map.fn,
                   reduce = count.reduce.fn)

from.dfs(count)
R will initiate a Hadoop streaming job to process the data using the MapReduce algorithm:
packageJobJar: [/tmp/Rtmp4llUjl/rmr-local-env1e025ac0444f, /tmp/Rtmp4llUjl/rmr-global-env1e027a86f559, /tmp/Rtmp4llUjl/rmr-streaming-map1e0214a61fa5, /tmp/Rtmp4llUjl/rmr-streaming-reduce1e026da4f6c9, /tmp/hadoop-istvan/hadoop-unjar1158187086349476064/] [] /tmp/streamjob430878637581358129.jar tmpDir=null
13/02/25 22:28:12 INFO mapred.FileInputFormat: Total input paths to process : 1
13/02/25 22:28:12 INFO streaming.StreamJob: getLocalDirs(): [/tmp/hadoop-istvan/mapred/local]
13/02/25 22:28:12 INFO streaming.StreamJob: Running job: job_201302252148_0006
13/02/25 22:28:12 INFO streaming.StreamJob: To kill this job, run:
13/02/25 22:28:12 INFO streaming.StreamJob: /home/istvan/hadoop-1.0.4/libexec/../bin/hadoop job  -Dmapred.job.tracker=localhost:9001 -kill job_201302252148_0006
13/02/25 22:28:12 INFO streaming.StreamJob: Tracking URL: http://localhost:50030/jobdetails.jsp?jobid=job_201302252148_0006
13/02/25 22:28:13 INFO streaming.StreamJob:  map 0%  reduce 0%
13/02/25 22:28:25 INFO streaming.StreamJob:  map 100%  reduce 0%
13/02/25 22:28:37 INFO streaming.StreamJob:  map 100%  reduce 100%
13/02/25 22:28:43 INFO streaming.StreamJob: Job complete: job_201302252148_0006
13/02/25 22:28:43 INFO streaming.StreamJob: Output: /tmp/Rtmp4llUjl/file1e025f146f8f
Then we get the result showing how many countries have a greater and how many countries have a smaller GDP than Apple Inc.'s revenue in 2012: 55 countries had a greater GDP than Apple and 138 countries had a smaller one.
$key
   GDP      
1  "greater"
56 "less"   

$val
[1]  55 138
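
On a data set this small the same count can be reproduced locally without Hadoop, which makes a handy sanity check. A sketch, assuming the gdp data frame loaded by gdp.R, with the GDP values in its fourth column:
# count countries above and below Apple's 2012 revenue, entirely in local R
table(ifelse(gdp[[4]] < aaplRevenue, "less", "greater"))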
A histogram of the GDPs, plotted in RStudio, shows that 15 countries have a GDP above 1,000 billion USD; one country is in the range of 14,000-15,000 billion USD, one in the range of 7,000-8,000 billion USD, and one in the range of 5,000-6,000 billion USD.
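A minimal sketch of how such a histogram can be produced with base R, assuming the gdp data frame from gdp.R (dividing by 1,000 converts millions USD to billions USD):
hist(gdp[[4]] / 1000,
     breaks = 15,
     main = "GDP of countries",
     xlab = "GDP (billion USD)")
abline(v = aaplRevenue / 1000, col = "red")   # Apple's 2012 revenue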

Conclusion

If you need to combine strong data analytics and visualization features with the big data capabilities of Hadoop, it is certainly worth taking a closer look at RHadoop. It has packages to integrate R with MapReduce, HDFS, and HBase, the key components of the Hadoop ecosystem. For more details, please read the R and Hadoop Big Data Analytics whitepaper.

Spark and Scala Resources

A resource list for Spark and Scala newbies looking to get started with both.

GitHub Resources - Scala Workshop

Setup Resources

  • Download Scala 2.10.x

  • Download Apache Spark 1.6 (prebuilt for Hadoop)

  • Scala Install

Setup

Make sure you have the Java 7 or Java 8 SDK installed for your platform (Mac, Linux, or Windows). Then you’ll need to download Scala 2.10.x, SBT, and then Spark. I also recommend downloading Maven.

Solving Local Spark Issues

# force Spark to bind to localhost, which avoids common hostname-resolution issues
export SPARK_MASTER_IP=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.1
export SCALA_HOME=~/Downloads/scala-2.10.6
export PATH=$PATH:$SCALA_HOME/bin

For Windows, use SET instead of export, and use ; rather than : as the PATH separator.

Books

  • Scala by Example by Odersky – most material is from or similar to material covered in both of his classes on Coursera.

  • Scala Overview by Odersky et al.

  • Programming in Scala, First Edition by Odersky.

  • Structure and Interpretation of Computer Programs - a classic computer science text that teaches advanced programming concepts using Lisp and is the basis of Martin Odersky's Coursera class. Formerly the MIT standard text on programming.

  • Scala for the Impatient

  • Programming Scala

  • A big list of Scala books linked at Scala-Lang

  • Learning Spark

  • Advanced Analytics with Spark

  • Spark Cookbook

Tutorials

  • Scala for Java Programmers

  • Scala Tutorial

  • Effective Scala (Twitter)

  • Scala Tour

E-Books

  • Books at Lightbend (Typesafe)

  • AtomicScala (sample)

Scala Koans/Exercises

  • Scala Exercises

  • Scala Koans

Resources

  • Scala Roundup for Java Engineers

  • Scala Info at StackOverflow

  • Scala Cheatsheets

  • Scala Notes

  • Cake Solutions Blog

  • Scala School (Twitter)

  • Functional Programming in Scala

  • How to Learn Scala

  • Scala Lang Overviews

  • Learning Scala in Small Bites

Online Free Courses – Scala

  • Functional Programming with Scala

  • Reactive Programming with Scala

Online Free Courses – Spark

  • Big Data Analysis with Spark

  • Distributed Machine Learning with Spark

  • Introduction to Spark

  • Spark Fundamentals

  • Data Science / Engineering Spark

  • CS100

  • CS190

