Connecting Spark to a Hive database

When Hive executes a query, it fails with java.lang.IllegalArgumentException: Wrong FS: hdfs://node1:9000/user/hive/warehouse/test1.db/t1, expected: hdfs://cluster1

The cause is that after Hadoop was converted from an ordinary cluster to a high-availability (HA) cluster, the HDFS path where Hive stores its warehouse was never updated. Fix it by changing the value of hive.metastore.warehouse.dir in hive-site.xml:

change the previous hdfs://k200:9000/user/hive/warehouse to hdfs://k131/user/hive/warehouse
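A minimal sketch of that property after the change (the value is the path named above; the surrounding tags follow the standard hive-site.xml property format):

        <property>
            <name>hive.metastore.warehouse.dir</name>
            <value>hdfs://k131/user/hive/warehouse</value>
        </property>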

(The hdfs://cluster1 here is the value of fs.defaultFS specified in Hadoop's core-site.xml.)
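For reference, a sketch of the core-site.xml entry that produces the hdfs://cluster1 nameservice seen in the error (your nameservice name will differ):

        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://cluster1</value>
        </property>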

 1 "1.0"?>
 2 "text/xsl" href="configuration.xsl"?>
 3 
 4         
 5             javax.jdo.option.ConnectionURL
 6             jdbc:mysql://k131:3306/metastore? 
 7               createDatabaseIfNotExist=true
 8             JDBC connect string for a JDBC 
 9             metastore
10         
11 
12         
13            javax.jdo.option.ConnectionDriverName
14            com.mysql.jdbc.Driver
15            Driver class name for a JDBC 
16             metastore
17          
18 
19          
20               javax.jdo.option.ConnectionUserName
21               root
22               username to use against metastore 
23               database
24          
25 
26          
27                javax.jdo.option.ConnectionPassword
28                root
29                password to use against metastore 
30                database
31         
32 
33         
34                hive.cli.print.header
35                true
36         
37 
38         
39                hive.cli.print.current.db
40                true
41         
42         
43                hive.exec.mode.local.auto
44                true
45         
46 
47         
48                 hive.zookeeper.quorum
49                 k131
50                 The list of ZooKeeper servers to talk to. This is only needed for read/write locks.
51                 
52 
53         
54               hive.zookeeper.client.port
55               2181
56               The port of ZooKeeper servers to talk to. This is only needed for read/write locks.
57         
58 
59 
hive-site.xml

Even after the change, Spark cannot read the original contents of the Hive tables and can only create new ones: the location of each existing table is stored in the metastore with the old hdfs://k200:9000 prefix, as the error below shows.

hive (default)> select * from emp;
FAILED: SemanticException Unable to determine if hdfs://k200:9000/user/hive/warehouse/emp is encrypted: java.lang.IllegalArgumentException: Wrong FS: hdfs://k200:9000/user/hive/warehouse/emp, expected: hdfs://k131:9000
hive (default)>
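A common way to repair those stale table locations is Hive's metatool service, which rewrites the HDFS root recorded in the metastore. This is a sketch using the hostnames from this article; inspect the current roots with -listFSRoot (or add -dryRun) before changing anything:

hive --service metatool -listFSRoot
hive --service metatool -updateLocation hdfs://k131 hdfs://k200:9000

After the update, existing tables such as emp resolve against the new filesystem and the Wrong FS error goes away.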


Reposted from: https://www.cnblogs.com/Vowzhou/p/10882160.html
