Setting up a Hadoop 2.6 test environment on Windows

Solving Exception: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z and a series of related problems


  

1. Introduction

   To debug Hadoop 2 code in Eclipse on Windows, we configure the hadoop-eclipse-plugin-2.6.0.jar plugin in Eclipse. A series of problems came up when running the Hadoop code, and it took several days before the code finally ran. Below are the problems and how to solve them, as a reference for anyone who hits the same issues.

 

  The Hadoop 2 WordCount.java word-count code is as follows:

     

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  public static class TokenizerMapper
       extends Mapper<Object, Text, Text, IntWritable> {

    private final static IntWritable one = new IntWritable(1);
    private Text word = new Text();

    public void map(Object key, Text value, Context context
                    ) throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, one);
      }
    }
  }

  public static class IntSumReducer
       extends Reducer<Text, IntWritable, Text, IntWritable> {
    private IntWritable result = new IntWritable();

    public void reduce(Text key, Iterable<IntWritable> values,
                       Context context
                       ) throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      result.set(sum);
      context.write(key, result);
    }
  }

  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Job job = Job.getInstance(conf, "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
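
To run this class in Eclipse, the job needs the input and output paths as program arguments (args[0] and args[1] above). A minimal setup, assuming the pseudo-distributed HDFS used throughout this article (hdfs://localhost:9000) and the directories created under Problem 1 below:

   Run As -> Run Configurations... -> Arguments -> Program arguments:
   hdfs://localhost:9000/user/root/input hdfs://localhost:9000/user/root/output

If the job complains that the output path already exists, remove it first with bin/hdfs dfs -rm -r /user/root/output; MapReduce refuses to overwrite an existing output directory.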

 

 

Problem 1. An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException

 

 

   We put hadoop-eclipse-plugin-2.6.0.jar into Eclipse's plugins directory (our Eclipse directory is F:\tool\eclipse-jee-juno-SR2\eclipse-jee-juno-SR2\plugins) and restart Eclipse. Then, opening Window --> Preferences, we can see the Hadoop Map/Reduce option; clicking it produces 'An internal error occurred during: "Map/Reduce location status updater". java.lang.NullPointerException', as shown:

   

 

  Solution:

   The freshly deployed Hadoop 2 does not yet have input and output directories, so first create them on HDFS:

   #bin/hdfs dfs -mkdir -p /user/root/input

   #bin/hdfs dfs -mkdir -p /user/root/output

 We can now see these two directories under Eclipse's DFS Locations, as shown:
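
 To double-check from the command line as well (a quick sanity check, using the same paths as above):

   #bin/hdfs dfs -ls /user/root

 Both input and output should be listed.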

  

Problem 2. Exception in thread "main" java.lang.NullPointerException at java.lang.ProcessBuilder.start(Unknown Source)
  

This error appeared when running the Hadoop 2 WordCount.java code:

     

log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.NullPointerException
       at java.lang.ProcessBuilder.start(Unknown Source)
       at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
       at org.apache.hadoop.util.Shell.run(Shell.java:455)
       at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
       at org.apache.hadoop.util.Shell.execCommand(Shell.java:808)
       at org.apache.hadoop.util.Shell.execCommand(Shell.java:791)
       at ...

 

 

 

Analysis:

  Hadoop 2 and later releases do not ship winutils.exe in the Hadoop 2 bin directory.

Solution:

  1. Download hadoop-common-2.2.0-bin-master.zip from https://codeload.github.com/srccodes/hadoop-common-2.2.0-bin/zip/master, unpack it, and copy everything under hadoop-common-2.2.0-bin-master\bin into the bin directory of the downloaded Hadoop 2 (Hadoop2/bin). As shown:

     

  2. Under Eclipse -> Window -> Preferences -> Hadoop Map/Reduce, point the plugin at the Hadoop directory we unpacked on disk. As shown:

    

 

  3. Configure the HADOOP_HOME and PATH environment variables for Hadoop 2, as shown in the screenshot and in the sketch below:
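
  A minimal cmd sketch covering steps 1 and 3; the F:\tool paths are assumptions, so adjust them to wherever you unpacked Hadoop and the zip (the variables can equally be set through Control Panel -> System -> Advanced system settings -> Environment Variables, which also avoids setx's length limit on PATH):

   :: step 1: copy the winutils binaries into Hadoop's bin directory
   xcopy /E /Y F:\tool\hadoop-common-2.2.0-bin-master\bin F:\tool\hadoop-2.6.0\bin

   :: step 3: set HADOOP_HOME and extend PATH for newly opened shells
   setx HADOOP_HOME "F:\tool\hadoop-2.6.0"
   setx PATH "%PATH%;%HADOOP_HOME%\bin"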

 

 Problem 3. Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z

 

  After solving Problem 2, this problem appeared when running the WordCount.java code:

    

log4j:WARN No appenders could be found for logger (org.apache.hadoop.metrics2.lib.MutableMetricsFactory).
log4j:WARN Please initialize the log4j system properly.
log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.
Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
       at org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Native Method)
       at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557)
       at org.apache.hadoop.fs.FileUtil.canRead(FileUtil.java:977)
       at org.apache.hadoop.util.DiskChecker.checkAccessByFileMethods(DiskChecker.java:187)
       at org.apache.hadoop.util.DiskChecker.checkDirAccess(DiskChecker.java:174)
       at org.apache.hadoop.util.DiskChecker.checkDir(DiskChecker.java:108)
       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:285)
       at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:344)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:150)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:131)
       at org.apache.hadoop.fs.LocalDirAllocator.getLocalPathForWrite(LocalDirAllocator.java:115)
       at org.apache.hadoop.mapred.LocalDistributedCacheManager.setup(LocalDistributedCacheManager.java:131)

 

 Analysis:

    C:\Windows\System32 is missing hadoop.dll; copying that file into C:\Windows\System32 should fix it.

 Solution:

    Copy hadoop.dll from hadoop-common-2.2.0-bin-master\bin into C:\Windows\System32 and reboot. It may not be that simple, though; the same error can still appear.
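
  For reference, the copy itself, from an administrator cmd prompt (the unpack location is again an assumption):

   copy F:\tool\hadoop-common-2.2.0-bin-master\bin\hadoop.dll C:\Windows\System32\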

 

  Continuing the analysis:

    The stack trace points at org.apache.hadoop.io.nativeio.NativeIO$Windows.access(NativeIO.java:557), so we look at line 557 of the NativeIO class, as shown:

       

 

   This Windows-only method checks the current process's access rights to a given path. To let that check pass, we modify the source ourselves so that it grants access: download the matching Hadoop source, hadoop-2.6.0-src.tar.gz, unpack it, copy NativeIO.java from hadoop-2.6.0-src\hadoop-common-project\hadoop-common\src\main\java\org\apache\hadoop\io\nativeio into the corresponding Eclipse project, and change line 557 to return true, as shown:
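
   A minimal sketch of that edit, assuming line 557 is the body of Windows.access() as in the stock 2.6.0 source; this is a local debugging workaround only, never something to ship:

   /** In the copied NativeIO.java, inside the NativeIO.Windows inner class. */
   public static boolean access(String path, AccessRight desiredAccess)
       throws IOException {
     // Line 557 originally delegated to the native check, which fails
     // without a matching hadoop.dll:
     //   return access0(path, desiredAccess.accessRight());
     return true; // skip the native access check while debugging locally
   }

   Because the copied source file is compiled into the project's output directory, it takes precedence over the same class inside hadoop-common's jar, so this version is the one that runs.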

  

 

   

  Problem 4: org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x

 

  This problem appeared when running the WordCount.java code:

    

2014-12-18 16:03:24,092  WARN (org.apache.hadoop.mapred.LocalJobRunner:560) - job_local374172562_0001
org.apache.hadoop.security.AccessControlException: Permission denied: user=zhengcy, access=WRITE, inode="/user/root/output":root:supergroup:drwxr-xr-x
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:271)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:257)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:238)
	at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:179)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6512)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:6494)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:6446)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:4248)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:4218)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:4191)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:813)

   

 

 Analysis:

  We don't have permission to write to the output directory.

Solution:

    Where HDFS stores its files is configured in hdfs-site.xml; I covered this when describing the Hadoop pseudo-distributed deployment, so we just review it here, as shown:

Add the following to hdfs-site.xml under etc/hadoop:

   <property>
     <name>dfs.permissions</name>
     <value>false</value>
   </property>

This turns permission checking off. Do not configure a production server this way, though.
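
A lighter-weight alternative (my own suggestion, not part of the original troubleshooting, so treat it as a hint): leave permission checking on and make the Windows client identify itself as root, the owner of /user/root. Setting the environment variable HADOOP_USER_NAME=root before launching (for example under Run Configurations -> Environment in Eclipse) makes the client connect as root instead of the local Windows user.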

 

  Problem 5: File /usr/root/input/file01._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation

     As shown:

      

  Analysis:

  The first time, we ran #hadoop namenode -format followed by #sbin/start-all.sh, and #jps showed the Datanode. After running #hadoop namenode -format again, #jps no longer shows the Datanode, as shown:

      

   We then tried to load text into the input directory with bin/hdfs dfs -put /usr/local/hadoop/hadoop-2.6.0/test/* /user/root/input (uploading the files under /test/* into /user/root/input on HDFS), and this problem appeared.

 Solution:

  We ran hadoop namenode -format too many times, creating multiple namespace IDs. Delete the namenode and datanode directories configured in hdfs-site.xml from the local disk, then format again.
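
  A sketch of the recovery, assuming the dfs.namenode.name.dir and dfs.datanode.data.dir values from the hdfs-site.xml shown below; double-check your own configured paths before deleting anything (on Windows cmd, use rmdir /S /Q instead of rm -rf):

   #sbin/stop-all.sh
   #rm -rf /hadoop/data/dfs/namenode /hadoop/data/dfs/datanode
   #bin/hdfs namenode -format
   #sbin/start-all.sh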

 

 

Modify the configuration files. All four files below live in the etc/hadoop directory under the Hadoop directory.

Modify core-site.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>

Modify hdfs-site.xml as follows:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/hadoop/data/dfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/hadoop/data/dfs/datanode</value>
    </property>
</configuration>

Modify yarn-site.xml as follows:

<?xml version="1.0"?>
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

Modify mapred-site.xml as follows:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

Then open cmd and run the hadoop namenode -format command. The output should look roughly like this:

Microsoft Windows [Version 6.1.7601]
Copyright (c) 2009 Microsoft Corporation.  All rights reserved.

C:\Users\abhijitg>cd c:\hadoop\bin

c:\hadoop\bin>hdfs namenode -format
13/11/03 18:07:47 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = ABHIJITG/x.x.x.x
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.2.0
STARTUP_MSG:   classpath = 
STARTUP_MSG:   build = Unknown -r Unknown; compiled by ABHIJITG on 2013-11-01T13:42Z
STARTUP_MSG:   java = 1.7.0_03
************************************************************/
Formatting using clusterid: CID-1af0bd9f-efee-4d4e-9f03-a0032c22e5eb
13/11/03 18:07:48 INFO namenode.HostFileManager: read includes:
HostSet(
)
13/11/03 18:07:48 INFO namenode.HostFileManager: read excludes:
HostSet(
)
13/11/03 18:07:48 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
13/11/03 18:07:48 INFO util.GSet: Computing capacity for map BlocksMap
13/11/03 18:07:48 INFO util.GSet: VM type       = 64-bit
13/11/03 18:07:48 INFO util.GSet: 2.0% max memory = 888.9 MB
13/11/03 18:07:48 INFO util.GSet: capacity      = 2^21 = 2097152 entries
13/11/03 18:07:48 INFO blockmanagement.BlockManager: dfs.block.access.token.enable = false
13/11/03 18:07:48 INFO blockmanagement.BlockManager: defaultReplication         = 1
13/11/03 18:07:48 INFO blockmanagement.BlockManager: maxReplication             = 512
13/11/03 18:07:48 INFO blockmanagement.BlockManager: minReplication             = 1
13/11/03 18:07:48 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
13/11/03 18:07:48 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
13/11/03 18:07:48 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
13/11/03 18:07:48 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
13/11/03 18:07:48 INFO namenode.FSNamesystem: fsOwner             = ABHIJITG (auth:SIMPLE)
13/11/03 18:07:48 INFO namenode.FSNamesystem: supergroup          = supergroup
13/11/03 18:07:48 INFO namenode.FSNamesystem: isPermissionEnabled = true
13/11/03 18:07:48 INFO namenode.FSNamesystem: HA Enabled: false
13/11/03 18:07:48 INFO namenode.FSNamesystem: Append Enabled: true
13/11/03 18:07:49 INFO util.GSet: Computing capacity for map INodeMap
13/11/03 18:07:49 INFO util.GSet: VM type       = 64-bit
13/11/03 18:07:49 INFO util.GSet: 1.0% max memory = 888.9 MB
13/11/03 18:07:49 INFO util.GSet: capacity      = 2^20 = 1048576 entries
13/11/03 18:07:49 INFO namenode.NameNode: Caching file names occuring more than 10 times
13/11/03 18:07:49 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
13/11/03 18:07:49 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
13/11/03 18:07:49 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
13/11/03 18:07:49 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
13/11/03 18:07:49 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
13/11/03 18:07:49 INFO util.GSet: Computing capacity for map Namenode Retry Cache
13/11/03 18:07:49 INFO util.GSet: VM type       = 64-bit
13/11/03 18:07:49 INFO util.GSet: 0.029999999329447746% max memory = 888.9 MB
13/11/03 18:07:49 INFO util.GSet: capacity      = 2^15 = 32768 entries
13/11/03 18:07:49 INFO common.Storage: Storage directory \hadoop\data\dfs\namenode has been successfully formatted.
13/11/03 18:07:49 INFO namenode.FSImage: Saving image file \hadoop\data\dfs\namenode\current\fsimage.ckpt_0000000000000000000 using no compression
13/11/03 18:07:49 INFO namenode.FSImage: Image file \hadoop\data\dfs\namenode\current\fsimage.ckpt_0000000000000000000 of size 200 bytes saved in 0 seconds.
13/11/03 18:07:49 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
13/11/03 18:07:49 INFO util.ExitUtil: Exiting with status 0
13/11/03 18:07:49 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at ABHIJITG/x.x.x.x
************************************************************/

Then, in cmd, switch to the sbin directory under the Hadoop directory and run start-all; this opens four cmd windows. Open a browser to http://localhost:8042 and http://localhost:50070 to check whether the setup succeeded.
