Original post: http://dongkelun.com/2018/03/25/sparkHive/
First, copy hive-site.xml and the MySQL connector jar into Spark's conf and jars directories:
cp /opt/apache-hive-2.3.2-bin/conf/hive-site.xml /opt/spark-2.2.1-bin-hadoop2.7/conf/
cp /opt/apache-hive-2.3.2-bin/bin/mysql-connector-java-5.1.46-bin.jar /opt/spark-2.2.1-bin-hadoop2.7/jars/
Then start spark-shell and check that Hive tables can be queried:
spark-shell
import org.apache.spark.sql.hive.HiveContext
val hiveContext = new HiveContext(sc)
hiveContext.sql("select * from test").show()
Executing hiveContext.sql("select * from test").show() threw the following exception:
The root scratch dir: /tmp/hive on HDFS should be writable. Current permissions are: rwxr-xr-x;
Solutions (either open up the permissions, or delete the directory on HDFS and locally):
hadoop fs -chmod 777 /tmp/hive
hadoop fs -rm -r /tmp/hive
rm -rf /tmp/hive
For this error I went with the second approach (deleting the directory); in some cases the first one is what's needed, for example I once hit this error when starting hive itself.
Reference: http://www.cnblogs.com/czm1032851561/p/5751722.html
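Incidentally, the Spark 2.x spark-shell already provides a ready-made SparkSession called spark, which picks up hive-site.xml from the conf directory, so presumably the same query can also be run without creating a HiveContext:
spark.sql("select * from test").show()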
"mysql" % "mysql-connector-java" % "5.1.46"
package com.dkl.leanring.spark.sql

import org.apache.spark.SparkConf
import org.apache.spark.SparkContext
import org.apache.spark.sql.SQLContext
import org.apache.spark.sql.hive.HiveContext

/**
 * Old-API (HiveContext) spark-hive test
 */
object OldSparkHiveDemo {

  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("OldSparkHiveDemo").setMaster("local")
    val sc = new SparkContext(conf)
    val sqlContext = new SQLContext(sc)
    import sqlContext.implicits._

    val hiveCtx = new HiveContext(sc)
    // Read the existing Hive table
    hiveCtx.sql("select * from test").show()

    // Build a small DataFrame, register it as a temporary table and insert it into Hive
    val data = Array((3, "name3"), (4, "name4"), (5, "name5"))
    val df = sc.parallelize(data).toDF("id", "name")
    df.createOrReplaceTempView("user")
    hiveCtx.sql("insert into test select id,name from user")
    hiveCtx.sql("select * from test").show()
  }
}
(Note: df.createOrReplaceTempView("user") here should really be df.registerTempTable("user"), because createOrReplaceTempView only exists since 2.0.0, while registerTempTable is the older method, available since 1.6.0; I haven't bothered to change the code above and redo the screenshot.)
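For reference, a minimal sketch of the pre-2.0 variant (assuming Spark 1.6.x, reusing sc, sqlContext.implicits._ and hiveCtx from the demo above; the sample rows are placeholders of my own):
// Same insert as above, but registering the DataFrame with the pre-2.0 API
val oldData = Array((6, "name6"), (7, "name7"))
val oldDf = sc.parallelize(oldData).toDF("id", "name")
oldDf.registerTempTable("user") // available since 1.6.0, superseded by createOrReplaceTempView in 2.0.0
hiveCtx.sql("insert into test select id,name from user")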
package com.dkl.leanring.spark.sql

import org.apache.spark.sql.SparkSession

/**
 * New-API (SparkSession) spark-hive test
 */
object NewSparkHiveDemo {

  def main(args: Array[String]): Unit = {
    val spark = SparkSession
      .builder()
      .appName("Spark Hive Example")
      .master("local")
      .config("spark.sql.warehouse.dir", "/user/hive/warehouse/")
      .enableHiveSupport()
      .getOrCreate()
    import spark.implicits._
    import spark.sql

    // Create the target Hive table if it does not exist yet
    sql("CREATE TABLE IF NOT EXISTS src (key INT, value STRING)")

    // Build a small DataFrame, register it as a temporary view and insert it into Hive
    val data = Array((1, "val1"), (2, "val2"), (3, "val3"))
    val df = spark.createDataFrame(data).toDF("key", "value")
    df.createOrReplaceTempView("temp_src")
    sql("insert into src select key,value from temp_src")
    sql("SELECT * FROM src").show()
  }
}
When executing the insert statement, the following exception appeared:
org.apache.hadoop.security.AccessControlException: Permission denied: user=dongkelun, access=EXECUTE, inode="/user/hive/warehouse":root...
Cause: the Windows user that launched the Spark application has no write permission on spark.sql.warehouse.dir.
Solution:
hadoop fs -chmod 777 /user/hive/warehouse/
Alternatively, a single line is enough to save all of the data in df into a Hive table:
df.write.mode(SaveMode.Append).saveAsTable(tableName)
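Note that this needs the SaveMode import; a minimal sketch, reusing spark and df from NewSparkHiveDemo above (the target table name src is just an example):
import org.apache.spark.sql.SaveMode

// Append every row of df to the Hive table "src"
df.write.mode(SaveMode.Append).saveAsTable("src")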
The hive-site.xml used above is the one from my post on installing and configuring Hive in standalone mode on CentOS 7:
<configuration>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://192.168.44.128:3306/hive_metadata?&amp;createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>Root-123456</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>datanucleus.schema.autoCreateAll</name>
        <value>true</value>
    </property>
    <property>
        <name>hive.metastore.schema.verification</name>
        <value>false</value>
    </property>
</configuration>
Later, at work, I found that you don't have to copy the whole hive-site.xml over; a metastore service alone is enough.
Start the metastore with the following command:
nohup hive --service metastore &
Check the startup log in nohup.out; if it started successfully, hive-site.xml can be reduced to:
<configuration>
    <property>
        <name>hive.metastore.uris</name>
        <value>thrift://192.168.44.128:9083</value>
    </property>
</configuration>
With this in place, the code shown above can connect to Hive, and it's how I usually connect. I haven't studied the pros and cons in depth; on the surface there seem to be a couple of points in its favor.
How to stop the metastore (basic Linux):
-bash-4.2# ps aux | grep metastore
root 8814 2.6 6.2 2280480 240040 pts/0 Sl 03:10 0:18 /opt/jdk1.8.0_45/bin/java -Xmx256m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/opt/hadoop-2.7.5/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/opt/hadoop-2.7.5 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/opt/hadoop-2.7.5/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Xmx512m -Dhadoop.security.logger=INFO,NullAppender org.apache.hadoop.util.RunJar /opt/apache-hive-2.3.2-bin/lib/hive-metastor-2.3.2.jar org.apache.hadoop.hive.metastore.HiveMetaStore
root 9073 0.0 0.0 114724 984 pts/0 S+ 03:22 0:00 grep --color=auto metastore
-bash-4.2# kill 8814
The whole Hive table can then be read with this one line:
spark.table("test")
And the following configuration can be dropped:
.config("spark.sql.warehouse.dir", "/user/hive/warehouse/")