Problem 3: Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
This error appears when running the WordCount.java code:
- log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
- log4j:WARN Please initialize the log4j system properly.
- log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
- Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
- at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
- at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
- at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
- at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
- at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
- at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
- at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
- at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
- at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
- at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
- at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
- at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)
Analysis:
hadoop.dll is missing from C:\Windows\System32; copying that file into C:\Windows\System32 should fix it.
Solution:
Copy hadoop.dll from the bin directory of hadoop-common-2.2.0-bin-master into C:\Windows\System32, then restart the machine. It may not be that simple, though: the same error can still appear.
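To check whether the JVM actually picked up the native library after the restart, a quick sketch (assuming hadoop-common is on the project classpath; the class name NativeCheck is mine):

```java
import org.apache.hadoop.util.NativeCodeLoader;

public class NativeCheck {
  public static void main(String[] args) {
    // true only if System.loadLibrary("hadoop") succeeded, i.e. hadoop.dll
    // was found on java.library.path (which is why copying it into
    // C:\Windows\System32 helps on Windows).
    System.out.println("native hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
  }
}
```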
Continuing the analysis:
The stack trace points at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557), so let's look at line 557 of the NativeIO class, as shown below:
![](http://img.e-com-net.com/image/info8/f60fb31fb6f04673b231ae13906c64fc.png)
This is the Windows-only method used to check whether the current process has the requested access rights on a given path. To get past the check, we can modify the source ourselves so that it always grants access by returning true. Download the matching Hadoop source archive hadoop-2.6.0-src.tar.gz and extract it, copy NativeIO.java from hadoop-2.6.0-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\io\nativeio into the Eclipse project, and change line 557 to return true, as shown below:
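A minimal sketch of that edit, assuming the Hadoop 2.6.0 signature of the method around line 557 (only the commented-out line is replaced):

```java
// In the copied NativeIO.java (package org.apache.hadoop.io.nativeio),
// inside the Windows inner class. Line 557 originally delegates to the
// native access0 method that triggers the UnsatisfiedLinkError.
public static boolean access(String path, AccessRight desiredAccess)
    throws IOException {
  // return access0(path, desiredAccess.accessRight());  // original line 557
  return true;  // workaround: skip the native check and always grant access
}
```

Because the copied class lives in the same org.apache.hadoop.io.nativeio package inside the project, it shadows the class shipped in the Hadoop jar on the classpath, so the modified method is the one that actually runs.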
Problem 4: org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x
This error appears when running the WordCount.java code:
- 2014-12-18 16:03:24,092 WARN (org.apache.hadoop.mapred.LocalJobRunner:560) - job_local374172562_0001
- org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
- at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
- at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
- at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)
Analysis:
The user running the job has no write permission on the output directory.
Solution:
HDFS storage settings live in hdfs-site.xml; I covered this in the post on pseudo-distributed Hadoop deployment, so here is a quick recap:
Add the following property to hdfs-site.xml under etc/hadoop:

    <property>
      <name>dfs.permissions</name>
      <value>false</value>
    </property>
This turns off permission checking. Do not use this setting on a production server.
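An alternative that avoids disabling permission checks altogether (my suggestion, not part of the original fix): run the client as the owner of the directory by setting HADOOP_USER_NAME before the first HDFS call; with simple authentication, Hadoop 2.x resolves the login user from this environment variable or system property. A sketch, where the fs.defaultFS address is an assumption:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class SubmitAsRoot {
  public static void main(String[] args) throws Exception {
    // Must be set before the first FileSystem/Job call: the login user is
    // resolved once by UserGroupInformation and then cached.
    System.setProperty("HADOOP_USER_NAME", "root");

    Configuration conf = new Configuration();
    conf.set("fs.defaultFS", "hdfs://localhost:9000");  // assumed address

    try (FileSystem fs = FileSystem.get(conf)) {
      // This now runs as "root" instead of the local Windows user zhengcy,
      // so writing under /user/root no longer fails the permission check.
      fs.mkdirs(new Path("/user/root/output"));
    }
  }
}
```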
Problem 5: File /usr/root/input/file01._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation
As shown below:
![](http://img.e-com-net.com/image/info8/49e1ecc78cff4b90b4280f50897682df.png)
Analysis:
The first time, we ran #hadoop namenode -format and then #sbin/start-all.sh; running #jps showed the Datanode process. After running #hadoop namenode -format a second time, #jps no longer shows the Datanode, as shown below:
![](http://img.e-com-net.com/image/info8/63dc1cdd5bf54a94abb6fc5d51bb3deb.png)
Then, when putting the text files into the input directory with bin/hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/test/* /user/root/input (uploading everything under /test/ to /user/root/input on HDFS), the error above appeared.
Solution:
We ran hadoop namenode -format too many times. Each format generates a new clusterID on the NameNode, while the DataNode keeps the clusterID from the previous format, and the mismatch stops the DataNode from starting. Delete the NameNode and DataNode storage directories configured in hdfs-site.xml, then format and start the cluster again.
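To confirm the mismatch before deleting anything, you can compare the clusterID recorded in the two VERSION files that HDFS writes under its storage directories. A minimal sketch; both paths are assumptions, substitute the dfs.namenode.name.dir and dfs.datanode.data.dir values from your hdfs-site.xml:

```java
import java.io.FileInputStream;
import java.util.Properties;

public class ClusterIdCheck {
  public static void main(String[] args) throws Exception {
    // Assumed locations; use the directories configured in hdfs-site.xml.
    String nameDir = "/usr/local/hadoop/dfs/name/current/VERSION";
    String dataDir = "/usr/local/hadoop/dfs/data/current/VERSION";
    System.out.println("namenode clusterID = " + clusterId(nameDir));
    System.out.println("datanode clusterID = " + clusterId(dataDir));
    // If the two IDs differ, the DataNode will refuse to start after a re-format.
  }

  static String clusterId(String versionFile) throws Exception {
    Properties p = new Properties();
    try (FileInputStream in = new FileInputStream(versionFile)) {
      p.load(in);  // VERSION is a simple key=value file
    }
    return p.getProperty("clusterID");
  }
}
```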