Article source:
http://blog.csdn.net/lanwenbing/article/details/40783335
Below is the concrete solution; when I hit this problem myself, the steps below resolved it.
With this setup you can do Hadoop 2.4.1 and Hadoop 2.5.2 MapReduce development on Windows 8 or Windows 7 64-bit with 4 GB of RAM and an i5 CPU.
[If this article contains errors, please check the detail link above.]
Article source: http://www.aboutyun.com/thread-8030-1-1.html
Problem guide:
1. You create a MapReduce Project and at runtime hit: Could not locate executable null ... How do you fix it?
2. Could not locate executable ....\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe in the Hadoop binaries. How do you fix it?
1.
Create a MapReduce Project and run it; the following problem appears.
java.io.IOException: Could not locate executable null\bin\winutils.exe in the Hadoop binaries.
Stepping into the code shows this is a HADOOP_HOME problem: if HADOOP_HOME is empty, fullExeName is inevitably null\bin\winutils.exe. The fix is simple: configure the environment variable properly. If you do not want to restart the machine, you can tide yourself over by adding System.setProperty("hadoop.home.dir", "..."); to the MapReduce program (a minimal driver sketch follows the excerpt below). The relevant code is in org.apache.hadoop.util.Shell.java:
public static final String getQualifiedBinPath(String executable)
    throws IOException {
  // construct hadoop bin path to the specified executable
  String fullExeName = HADOOP_HOME_DIR + File.separator + "bin"
      + File.separator + executable;

  File exeFile = new File(fullExeName);
  if (!exeFile.exists()) {
    throw new IOException("Could not locate executable " + fullExeName
        + " in the Hadoop binaries.");
  }

  return exeFile.getCanonicalPath();
}

private static String HADOOP_HOME_DIR = checkHadoopHome();

private static String checkHadoopHome() {

  // first check the Dflag hadoop.home.dir with JVM scope
  String home = System.getProperty("hadoop.home.dir");

  // fall back to the system/user-global env variable
  if (home == null) {
    home = System.getenv("HADOOP_HOME");
  }
  ...
}
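For reference, a minimal driver sketch using that stop-gap (not the article's code; the class name Driver, the commented-out mapper/reducer lines, and the input/output arguments are placeholders, while the hadoop.home.dir path is the one from the article's own example):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class Driver {
  public static void main(String[] args) throws Exception {
    // Temporary workaround: point Hadoop at the local unpacked distribution
    // (the directory that contains bin\winutils.exe) before anything in
    // org.apache.hadoop.util.Shell is touched. Replace the path with yours.
    System.setProperty("hadoop.home.dir", "D:\\Hadoop\\tar\\hadoop-2.2.0\\hadoop-2.2.0");

    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(Driver.class);
    // Mapper/Reducer/output classes are placeholders for your own job setup:
    // job.setMapperClass(MyMapper.class);
    // job.setReducerClass(MyReducer.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}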
2.
Now fullExeName resolves to a complete path; on my machine it is D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe. Running the code again produces another error:
Could not locate executable D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe in the Hadoop binaries.
Looking in that directory, there is no winutils.exe there at all. Download one from https://github.com/srccodes/hadoop-common-2.2.0-bin and drop it in.
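To confirm the binary landed in the right place before re-running the job, a small check like the one below works (my own sketch, not from the article; it just resolves the same locations Shell.checkHadoopHome() consults):

import java.io.File;

public class WinutilsCheck {
  public static void main(String[] args) {
    // Same lookup order as Shell.checkHadoopHome(): JVM property first, then env var.
    String home = System.getProperty("hadoop.home.dir");
    if (home == null) {
      home = System.getenv("HADOOP_HOME");
    }
    if (home == null) {
      System.out.println("Neither hadoop.home.dir nor HADOOP_HOME is set.");
      return;
    }
    File winutils = new File(home, "bin" + File.separator + "winutils.exe");
    System.out.println(winutils + " exists: " + winutils.exists());
  }
}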
3.
The next problem appears:
at org.apache.hadoop.util.Shell.execCommand(Shell.java:661)
at org.apache.hadoop.fs.RawLocalFileSystem.setPermission(RawLocalFileSystem.java:639)
at org.apache.hadoop.fs.RawLocalFileSystem.mkdirs(RawLocalFileSystem.java:435)
Continue tracing the code into org.apache.hadoop.util.Shell.java:
public static String[] getSetPermissionCommand(String perm, boolean recursive,
                                               String file) {
  String[] baseCmd = getSetPermissionCommand(perm, recursive);
  String[] cmdWithFile = Arrays.copyOf(baseCmd, baseCmd.length + 1);
  cmdWithFile[cmdWithFile.length - 1] = file;
  return cmdWithFile;
}

/** Return a command to set permission */
public static String[] getSetPermissionCommand(String perm, boolean recursive) {
  if (recursive) {
    return (WINDOWS) ? new String[] { WINUTILS, "chmod", "-R", perm }
                     : new String[] { "chmod", "-R", perm };
  } else {
    return (WINDOWS) ? new String[] { WINUTILS, "chmod", perm }
                     : new String[] { "chmod", perm };
  }
}
The cmdWithFile array comes out as {"D:\Hadoop\tar\hadoop-2.2.0\hadoop-2.2.0\bin\winutils.exe", "chmod", "755", "xxxfile"}. I ran this command on its own in cmd and got:
The program can't start because MSVCR100.dll is missing from your computer.
So download it from http://files.cnblogs.com/sirkevin/msvcr100.rar and drop it into C:\Windows\System32. Running it in cmd again raised yet another problem, so download
DirectX_Repair from http://blog.csdn.net/vbcom/article/details/7245186 to fix that one. Remember to reboot after the repair. Once that is done, the cmd test runs fine.
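If you prefer to reproduce that check from Java instead of cmd, a ProcessBuilder sketch along these lines works (an illustration only, not from the article; "somefile.txt" is a placeholder for any existing local file):

import java.io.File;
import java.io.IOException;

public class WinutilsChmodTest {
  public static void main(String[] args) throws IOException, InterruptedException {
    String winutils = System.getenv("HADOOP_HOME") + File.separator + "bin"
        + File.separator + "winutils.exe";
    // Same shape as the cmdWithFile array built by Shell.getSetPermissionCommand.
    ProcessBuilder pb = new ProcessBuilder(winutils, "chmod", "755", "somefile.txt");
    pb.inheritIO();                        // show winutils output/errors in the console
    int exit = pb.start().waitFor();
    System.out.println("winutils exit code: " + exit);
  }
}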
4.
At this point the light at the end of the tunnel is in sight, but another problem comes up:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
Stepping into the code:
/** Windows only method used to check if the current process has requested
 *  access rights on the given path. */
private static native boolean access0(String path, int requestedAccess);
Clearly a DLL is missing. Remember the download from https://github.com/srccodes/hadoop-common-2.2.0-bin?
It contains hadoop.dll. The cleanest fix is to replace your local Hadoop's bin directory with the hadoop-common-2.2.0-bin-master/bin directory, configure PATH=HADOOP_HOME/bin in the environment variables, and reboot (a verification sketch follows the notes below).
Download:
A roundup of jar packages and installers for the Hadoop family, Storm, Spark, Linux, Flume, etc. (continuously updated)
Things to watch out for:
The environment variables must be configured correctly, otherwise it still will not run.
PATH=HADOOP_HOME/bin; if that does not work, switch to the absolute path.
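Once the bin directory and PATH are in place, one way to verify that hadoop.dll is actually picked up is to poke the native loader directly (a minimal sketch of mine, not from the article; isNativeCodeLoaded() reads the flag set by the static block shown in section 6 below):

import org.apache.hadoop.util.NativeCodeLoader;

public class NativeLoadCheck {
  public static void main(String[] args) {
    // Touching NativeCodeLoader runs its static initializer, which tries
    // System.loadLibrary("hadoop") against java.library.path / PATH.
    System.out.println("java.library.path = " + System.getProperty("java.library.path"));
    System.out.println("native hadoop loaded: " + NativeCodeLoader.isNativeCodeLoaded());
  }
}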
5.
Finally, the MapReduce job produces the correct output, output99.
Summary
- The Hadoop Eclipse plugin is not required; as I see it, its value amounts to the three points below (this turned out to be a mistaken impression, see http://zy19982004.iteye.com/blog/2031172 for details). study-hadoop is an ordinary project: run it directly (without going through the Run on Hadoop elephant) and you can still debug into MapReduce.
- It visualizes the files in Hadoop.
- It pulls in the dependency jars for you when you create a MapReduce Project.
- Configuration conf = new Configuration(); already contains all of the configuration information (a brief sketch follows this list).
- It is still best to download the Hadoop 2.2 source and build it yourself; that should avoid any of these problems (not personally verified).
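A brief sketch of that Configuration point (mine, not the article's): a plain new Configuration() already carries the built-in defaults plus any *-site.xml files on the classpath, so you can inspect it directly.

import org.apache.hadoop.conf.Configuration;

public class ConfPeek {
  public static void main(String[] args) {
    Configuration conf = new Configuration();   // loads core-default.xml + core-site.xml
    System.out.println("fs.defaultFS   = " + conf.get("fs.defaultFS"));
    System.out.println("hadoop.tmp.dir = " + conf.get("hadoop.tmp.dir"));
  }
}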
6.
Other problems
Still getting:
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
Tracing the code into org.apache.hadoop.util.NativeCodeLoader.java:
static {
  // Try to load native hadoop library and set fallback flag appropriately
  if (LOG.isDebugEnabled()) {
    LOG.debug("Trying to load the custom-built native-hadoop library...");
  }
  try {
    System.loadLibrary("hadoop");
    LOG.debug("Loaded the native-hadoop library");
    nativeCodeLoaded = true;
  } catch (Throwable t) {
    // Ignore failure to load
    if (LOG.isDebugEnabled()) {
      LOG.debug("Failed to load native-hadoop with error: " + t);
      LOG.debug("java.library.path=" +
          System.getProperty("java.library.path"));
    }
  }

  if (!nativeCodeLoaded) {
    LOG.warn("Unable to load native-hadoop library for your platform... " +
        "using builtin-java classes where applicable");
  }
}
The error reported here is:
DEBUG org.apache.hadoop.util.NativeCodeLoader - Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: HADOOP_HOME\bin\hadoop.dll: Can't load AMD 64-bit .dll on a IA 32-bit platform
I suspected the 32-bit JDK was the problem; after switching to a 64-bit JDK the issue went away:
2014-03-11 19:43:08,805 DEBUG org.apache.hadoop.util.NativeCodeLoader - Trying to load the custom-built native-hadoop library...
2014-03-11 19:43:08,812 DEBUG org.apache.hadoop.util.NativeCodeLoader - Loaded the native-hadoop library
This also clears up a common warning:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
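As a final sanity check, the JVM word size can be printed directly to confirm it matches the hadoop.dll build (a small sketch of mine; sun.arch.data.model is specific to HotSpot-style JVMs, so os.arch is included as a rougher fallback):

public class JvmBitnessCheck {
  public static void main(String[] args) {
    // "sun.arch.data.model" reports 32 or 64 on HotSpot-style JVMs;
    // "os.arch" (e.g. x86 vs amd64) is a coarser indicator.
    System.out.println("sun.arch.data.model = " + System.getProperty("sun.arch.data.model"));
    System.out.println("os.arch             = " + System.getProperty("os.arch"));
    System.out.println("java.version        = " + System.getProperty("java.version"));
  }
}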