A problem encountered with Hive: "The ownership on the staging directory /tmp/hadoop-yarn/staging/root/.staging is not as expected. The directory must be owned by the submitter root or by root"

After setting up Hive, I could not enter the Hive CLI as the hadoop user (insufficient permissions), so I switched to root and ran:

hive history file=/tmp/hadoop/hive_job_log_hadoop_201407010908_503942368.txt
hive>
hive>select count(*) from test;
which then failed with:

java.io.IOException: The ownership on the staging directory /tmp/hadoop-yarn/staging/root/.staging is not as expected. It is owned by hive. The directory must be owned by the submitter root or by root
	at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:120)
	at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:146)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1341)
	at org.apache.hadoop.mapreduce.Job$11.run(Job.java:1338)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:421)
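The exception is thrown by JobSubmissionFiles.getStagingDir, which verifies that the staging directory is owned by the user submitting the job. A minimal local sketch of that check, using a temp directory as a stand-in for the real HDFS staging path (the variable names here are illustrative, not from Hadoop's source):

```shell
# Stand-in for /tmp/hadoop-yarn/staging/<user>/.staging
staging=$(mktemp -d)
# Directory owner: GNU stat first, BSD stat as a fallback
owner=$(stat -c %U "$staging" 2>/dev/null || stat -f %Su "$staging")
submitter=$(whoami)
# The check getStagingDir performs: owner must equal the submitter
if [ "$owner" = "$submitter" ]; then
  result="staging dir ok"
else
  result="ownership on staging directory $staging is not as expected (owned by $owner)"
fi
echo "$result"
rmdir "$staging"
```

In the error above, the check fails because the directory is owned by `hive` while the job is being submitted as `root`.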
Part of the stack trace is omitted above. The message says the directory must be owned by root, so I exited Hive and ran:

hadoop fs -chown -R root:root /tmp
However, the change did not succeed:

chmod: changing permissions of '/tmp/hadoop-yarn': Permission denied: user=root, access=EXECUTE, inode="/tmp":hive:hive:drwxrwx---

I switched to the hadoop user, ran the same ownership-change command, and it succeeded! Why? The likely explanation: HDFS permissions are independent of the OS. The HDFS superuser is whichever account started the NameNode (here, hadoop), so root has no special privileges inside HDFS and cannot change ownership of a directory it does not own.
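Rather than chown-ing all of /tmp recursively, a narrower fix is to change only the offending staging directory. A dry-run sketch (the echo lines only print the commands; run them for real as the hadoop user, assuming, as above, that hadoop is the HDFS superuser):

```shell
# Targeted fix: chown only the submitter's staging dir, not all of /tmp.
# Dry run: the commands are printed, not executed; paste them into a
# shell on the cluster (as the hadoop user) to actually apply the fix.
fix_cmd="hadoop fs -chown -R root:root /tmp/hadoop-yarn/staging/root"
verify_cmd="hadoop fs -ls /tmp/hadoop-yarn/staging"
echo "$fix_cmd"
echo "$verify_cmd"
```

Scoping the chown this way avoids accidentally changing ownership of other services' directories under /tmp, which is what triggered the Permission denied error above.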

After fixing the ownership with the command above, the query succeeds:

hive> SELECT COUNT(*) FROM test;
Query ID = root_20180128162626_f875d94a-9ada-47c0-8af8-b9b37271c64a
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1517036563281_0005, Tracking URL = http://master:8088/proxy/application_1517036563281_0005/
Kill Command = /usr/local/hadoop/bin/hadoop job  -kill job_1517036563281_0005
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
2018-01-28 16:26:58,698 Stage-1 map = 0%,  reduce = 0%
2018-01-28 16:27:06,402 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.32 sec
2018-01-28 16:27:13,780 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 2.05 sec
MapReduce Total cumulative CPU time: 2 seconds 50 msec
Ended Job = job_1517036563281_0005
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 1   Cumulative CPU: 2.05 sec   HDFS Read: 6436 HDFS Write: 2 SUCCESS
Total MapReduce CPU Time Spent: 2 seconds 50 msec
OK
0
Time taken: 33.905 seconds, Fetched: 1 row(s)





