Standalone mode: a Spark cluster running in standalone mode schedules applications in first-in, first-out (FIFO) order. By default, each application takes exclusive use of all available resources on the cluster's nodes.
The current version of SparkR can only run in standalone mode.
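Because of this FIFO behaviour, one long-running application can hold the whole cluster and starve anything queued behind it. A minimal sketch of capping an application's footprint through sparkEnvir when initializing SparkR follows; the master URL, core count, and memory size are placeholders, not values from the original text:
# Hedged sketch: limit how many cores and how much executor memory this
# SparkR application may claim on a standalone cluster.
library(SparkR)
sc <- sparkR.init(master = "spark://192.168.133.11:7077",
                  appName = "capped-app",
                  sparkEnvir = list(spark.cores.max = "4",
                                    spark.executor.memory = "1g"))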
Problem 1: installation
Because R relies on Fortran, the gcc-gfortran package has to be installed.
Installation steps: 1) extract R-3.2.3.tar.gz 2) ./configure 3) make 4) make install (this step can be skipped) 5) set the environment variables: vi .bash_profile
./configure may fail with the following errors:
--with-readline=yes (default) and headers/libs are not available: the readline-devel package is missing; yum install readline-devel fixes it.
configure: error: cannot compile a simple Fortran program: the gcc-gfortran package is missing; yum install gcc-gfortran fixes it.
configure: error: --with-x=yes (default) and X11 headers/libs are not available: the libXt-devel package is missing; yum install libXt-devel fixes it.
All of these steps depend on a fair number of packages: ①gcc ②gcc-c++ ③readline-devel ④gcc-gfortran ⑤libXt-devel
yum install gcc
yum install gcc-c++
yum install gcc-gfortran
yum install readline-devel
yum install libXt-devel
tar -zxvf R-3.2.3.tar.gz
cd R-3.2.3
./configure
make
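A quick way to confirm that the build actually picked up these libraries is to start the freshly built R and inspect its compiled-in capabilities (a sanity check added here, not part of the original steps):
# Run inside the newly built R: capabilities() reports whether optional
# features such as X11 support were compiled in; R.version.string confirms the version.
capabilities()
R.version.string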
Problem 2:
unsupported URL scheme Warning: unable to access index for repository https://rweb.crmda.ku.edu/cran/src/contrib
This is a mirror problem. There are two fixes: 1) pick a different mirror when prompted, or 2) specify one explicitly: install.packages("RODBC", dependencies = TRUE, repos = "http://cran.rstudio.com/")
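The second option can also be made to stick for the whole session by setting the repos option once; a small sketch (the mirror URL is the one already used above):
# Point install.packages() at a reachable CRAN mirror for the rest of the session.
options(repos = c(CRAN = "http://cran.rstudio.com/"))
install.packages("RODBC", dependencies = TRUE)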
Problem 3: an error while installing an R package
configure: error: "ODBC headers sql.h and sqlext.h not found"
This happens because the ODBC packages are not installed on Linux. RODBC needs the unixODBC and unixODBC development packages; installing them with yum resolves the error.
yum install unixODBC
yum install unixODBC-devel
After that, install.packages("RODBC", dependencies = TRUE, repos = "http://cran.rstudio.com/") goes through.
If you still cannot connect to the remote database, check whether the network is reachable by pinging the remote host.
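Once the packages are installed and the host answers the ping, a minimal RODBC smoke test looks roughly like this; the DSN name, user, and password below are placeholders for your own ODBC data source:
# Hypothetical connectivity check: open the channel, run a trivial query, close it.
library(RODBC)
ch <- odbcConnect("my_dsn", uid = "user", pwd = "password")
sqlQuery(ch, "SELECT 1")   # returns a one-cell result if the connection works
odbcClose(ch)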
SparkR programming example:
# If you launch the sparkR shell directly, there is no need to set Sys.setenv or .libPaths; library(SparkR) is enough
#Sys.setenv(SPARK_HOME = "D:/StudySoftWare/Spark/spark-1.5.2-bin-hadoop2.6")
#.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
sc <- sparkR.init(master = "local")
#sc <- sparkR.init(master = "spark://192.168.133.11:7077")  # run against the cluster instead
sqlContext <- sparkRSQL.init(sc)
DF <- createDataFrame(sqlContext, faithful)
head(DF)
localDF <- data.frame(name = c("John", "Smith", "Sarah"), age = c(19, 23, 18))
df <- createDataFrame(sqlContext, localDF)
# Print its schema
printSchema(df)
# root
# |-- name: string (nullable = true)
# |-- age: double (nullable = true)
# Create a DataFrame from a JSON file
path <- file.path(Sys.getenv("SPARK_HOME"), "examples/src/main/resources/people.json")
peopleDF <- jsonFile(sqlContext, path)
printSchema(peopleDF)
# Register this DataFrame as a table.
registerTempTable(peopleDF, "people")
# SQL statements can be run by using the sql methods provided by sqlContext
teenagers <- sql(sqlContext, "SELECT name FROM people WHERE age >= 13 AND age <= 19")
# Call collect to get a local data.frame
teenagersLocalDF <- collect(teenagers)
# Print the teenagers in our dataset
print(teenagersLocalDF)
# Stop the SparkContext now
sparkR.stop()
java.io.IOException: Cannot run program "Rscript": error=2, No such file or directory. This error occurs because:
looks like the issue was that code was looking for Rscript under "/usr/bin". Our default installation was /usr/revolutionr.
Just created a link Rscript in /usr/bin that points to /usr/revolution/bin/Revoscript
Alternatively, just copy Rscript into /usr/bin. See: https://github.com/RevolutionAnalytics/RHadoop/issues/87
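Before creating the link, it can help to confirm where (or whether) Rscript resolves on the machine; a one-line check from R, run on each worker node if needed:
# Sys.which() returns the full path of Rscript on the PATH, or "" if it cannot be found.
Sys.which("Rscript")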
Example 2: WordCount
library(SparkR)
sparkR.stop()
# Calling sparkR starts a SparkContext automatically, in local mode by default
sc <- sparkR.init(master = "spark://192.168.133.11:7077", "WordCount")
# Full signature: sparkR.init(master = "", appName = "SparkR", sparkHome = Sys.getenv("SPARK_HOME"),
#                 sparkEnvir = list(), sparkExecutorEnv = list(), sparkJars = "", sparkPackages = "")
lines <- SparkR:::textFile(sc, "hdfs://<namenode-hostname>/user/root/test/word.txt")
words <- SparkR:::flatMap(lines, function(line) { strsplit(line, " ")[[1]] })
wordCount <- SparkR:::lapply(words, function(word) { list(word, 1L) })
counts <- SparkR:::reduceByKey(wordCount, "+", 2L)
# To save the result to HDFS, give the full path, e.g. "hdfs://<namenode-hostname>/user/root/test/sparkR.txt"
SparkR:::saveAsTextFile(counts, "hdfs://<namenode-hostname>/user/root/test/sparkR.txt")
output <- SparkR:::collect(counts)
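The collected result is a list of (word, count) pairs; one way to view it as a table (illustrative only, not part of the original example):
# Flatten the list of pairs returned by collect() into a data.frame for printing.
counts_df <- do.call(rbind,
                     lapply(output, function(x) data.frame(word = x[[1]], count = x[[2]])))
print(counts_df)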
API documentation: http://spark.apache.org/docs/1.5.2/api/R/index.html. The APIs listed there can be called directly.