Steps for Handling Exceptions When Uploading Files to HDFS with Hadoop

 

The Hadoop environment setup mainly followed these two blog posts:

http://blog.csdn.net/hitwengqi/article/details/8008203

http://www.cnblogs.com/tippoint/archive/2012/10/23/2735532.html

My environment:

       VM 9.0
       Ubuntu 12.04
       Hadoop 0.20.203.0
       Eclipse helios-sr2-linux-gtk.xxx
       Eclipse workspace: /home/hadoop/workspace
       Eclipse install directory: /opt/eclipse
       Hadoop install directory: /usr/local/hadoop

 

Following those two articles, the setup went fairly smoothly; the errors and exceptions I did hit along the way can all be resolved with the fixes described in the two posts linked above.

However, when I tried the classic wordcount example, none of the articles I consulted (and I consulted many, so I cannot give specific links) described a method that resolved the exceptions and errors I ran into.

Unfortunately, I did not record the exact text of most of those errors and exceptions, so I will summarize them loosely as "exceptions when uploading files to HDFS".

In the steps below I try to show the operations in full and as close to the original session as possible; most of it is pasted directly from the terminal, so it may look messy, but it is what actually happened.

 

Here are the steps, explained in as much detail as I can.

 

hadoop@ubuntu:~$ ls

Desktop   Downloads         Music     Public    Videos

Documents examples.desktop  Pictures  Templates workspace

hadoop@ubuntu:~$ cd /usr

hadoop@ubuntu:/usr$ cd local

hadoop@ubuntu:/usr/local$ ls

bin games  hadoop-0.20.203.0rc1.tar.gz lib  sbin   src

etc hadoop  include                      man  share

hadoop@ubuntu:/usr/local$ cd hadoop/

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                            LICENSE.txt

build.xml   hadoop-ant-0.20.203.0.jar      logs

c++         hadoop-core-0.20.203.0.jar     NOTICE.txt

CHANGES.txt hadoop-examples-0.20.203.0.jar README.txt

conf        hadoop-test-0.20.203.0.jar     src

contrib     hadoop-tools-0.20.203.0.jar    webapps

data1       ivy                            word.txt

data2       ivy.xml                        word.txt~

datalog1    lib

datalog2    librecordio

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                            LICENSE.txt

build.xml   hadoop-ant-0.20.203.0.jar      logs

c++         hadoop-core-0.20.203.0.jar     NOTICE.txt

CHANGES.txt hadoop-examples-0.20.203.0.jar README.txt

conf        hadoop-test-0.20.203.0.jar     src

contrib     hadoop-tools-0.20.203.0.jar     webapps

data1       ivy                            word.txt

data2       ivy.xml                        word.txt~

datalog1    lib

datalog2    librecordio

hadoop@ubuntu:/usr/local/hadoop$ cd data1

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

current detach  storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 current

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf current

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

detach storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf tmp

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

detach storage

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 detach

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf detach

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

storage

hadoop@ubuntu:/usr/local/hadoop/data1$ chmod 777 storage

hadoop@ubuntu:/usr/local/hadoop/data1$ rm -rf storage

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

hadoop@ubuntu:/usr/local/hadoop/data1$ cd..

cd..: command not found

hadoop@ubuntu:/usr/local/hadoop/data1$ cd ..

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                           LICENSE.txt

build.xml   hadoop-ant-0.20.203.0.jar      logs

c++         hadoop-core-0.20.203.0.jar     NOTICE.txt

CHANGES.txt hadoop-examples-0.20.203.0.jar README.txt

conf        hadoop-test-0.20.203.0.jar     src

contrib     hadoop-tools-0.20.203.0.jar    webapps

data1       ivy                            word.txt

data2       ivy.xml                        word.txt~

datalog1    lib

datalog2    librecordio

hadoop@ubuntu:/usr/local/hadoop$ cd data2

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

current detach  storage  tmp

hadoop@ubuntu:/usr/local/hadoop/data2$ chmod 777 current detach

hadoop@ubuntu:/usr/local/hadoop/data2$ rm -rf current detach

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

storage tmp

hadoop@ubuntu:/usr/local/hadoop/data2$ chmod 777 storage tmp/

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

storage tmp

hadoop@ubuntu:/usr/local/hadoop/data2$ rm -rf storage tmp/

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

hadoop@ubuntu:/usr/local/hadoop/data2$ ls

hadoop@ubuntu:/usr/local/hadoop/data2$ cd ..

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                           LICENSE.txt

build.xml   hadoop-ant-0.20.203.0.jar      logs

c++         hadoop-core-0.20.203.0.jar     NOTICE.txt

CHANGES.txt hadoop-examples-0.20.203.0.jar README.txt

conf        hadoop-test-0.20.203.0.jar     src

contrib     hadoop-tools-0.20.203.0.jar    webapps

data1       ivy                             word.txt

data2       ivy.xml                        word.txt~

datalog1    lib

datalog2    librecordio

hadoop@ubuntu:/usr/local/hadoop$ chmod 777 datalog1

hadoop@ubuntu:/usr/local/hadoop$ rm -rf datalog1

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                           librecordio

build.xml   hadoop-ant-0.20.203.0.jar      LICENSE.txt

c++         hadoop-core-0.20.203.0.jar     logs

CHANGES.txt hadoop-examples-0.20.203.0.jar NOTICE.txt

conf        hadoop-test-0.20.203.0.jar     README.txt

contrib     hadoop-tools-0.20.203.0.jar    src

data1       ivy                            webapps

data2       ivy.xml                        word.txt

datalog2    lib                            word.txt~

hadoop@ubuntu:/usr/local/hadoop$ chmod 777 datalog2

hadoop@ubuntu:/usr/local/hadoop$ rm -rf datalog2/

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         data2                          ivy          README.txt

build.xml   docs                           ivy.xml      src

c++         hadoop-ant-0.20.203.0.jar       lib          webapps

CHANGES.txt hadoop-core-0.20.203.0.jar     librecordio  word.txt

conf        hadoop-examples-0.20.203.0.jar LICENSE.txt  word.txt~

contrib     hadoop-test-0.20.203.0.jar     logs

data1       hadoop-tools-0.20.203.0.jar    NOTICE.txt

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         data2                          ivy          README.txt

build.xml   docs                           ivy.xml      src

c++         hadoop-ant-0.20.203.0.jar       lib          webapps

CHANGES.txt hadoop-core-0.20.203.0.jar     librecordio  word.txt

conf        hadoop-examples-0.20.203.0.jar LICENSE.txt  word.txt~

contrib     hadoop-test-0.20.203.0.jar     logs

data1       hadoop-tools-0.20.203.0.jar    NOTICE.txt

Messy, right?

All of the operations above serve a single purpose: reformatting HDFS. My uploads of word.txt kept failing (again, I did not record the exact exception text), and before reformatting, the files and directories left over from the previous format have to be removed; that is all the commands above are doing. The procedure is clumsy, I admit; I was a complete beginner and was looking up each command online as I went.
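In hindsight, all of that directory-by-directory deleting can be collapsed into a couple of commands. A minimal sketch, assuming the same layout as above, where data1/data2 hold DataNode blocks and datalog1/datalog2 hold NameNode metadata (the chmod 777 calls are most likely unnecessary, since the hadoop user already owns these directories):

hadoop@ubuntu:/usr/local/hadoop$ bin/stop-all.sh                 # stop the daemons before touching storage dirs
hadoop@ubuntu:/usr/local/hadoop$ rm -rf data1/* data2/*          # clear DataNode storage
hadoop@ubuntu:/usr/local/hadoop$ rm -rf datalog1 datalog2        # remove NameNode metadata dirs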

 

With the cleanup done, reformat the filesystem:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop nanmenode -format

Exception in thread "main" java.lang.NoClassDefFoundError: nanmenode

Caused by: java.lang.ClassNotFoundException: nanmenode

       at java.net.URLClassLoader$1.run(URLClassLoader.java:217)

       at java.security.AccessController.doPrivileged(Native Method)

       at java.net.URLClassLoader.findClass(URLClassLoader.java:205)

       at java.lang.ClassLoader.loadClass(ClassLoader.java:321)

       at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:294)

       at java.lang.ClassLoader.loadClass(ClassLoader.java:266)

Could not find the main class: nanmenode. Program will exit.

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         data2                          ivy          README.txt

build.xml   docs                           ivy.xml      src

c++         hadoop-ant-0.20.203.0.jar      lib          webapps

CHANGES.txt hadoop-core-0.20.203.0.jar     librecordio  word.txt

conf        hadoop-examples-0.20.203.0.jar LICENSE.txt  word.txt~

contrib     hadoop-test-0.20.203.0.jar     logs

data1       hadoop-tools-0.20.203.0.jar    NOTICE.txt

hadoop@ubuntu:/usr/local/hadoop$ cd data1

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

hadoop@ubuntu:/usr/local/hadoop/data1$ ls

hadoop@ubuntu:/usr/local/hadoop/data1$ cd ..

hadoop@ubuntu:/usr/local/hadoop$ mkdir datalog1

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                           librecordio

build.xml   hadoop-ant-0.20.203.0.jar      LICENSE.txt

c++         hadoop-core-0.20.203.0.jar     logs

CHANGES.txt hadoop-examples-0.20.203.0.jar NOTICE.txt

conf        hadoop-test-0.20.203.0.jar     README.txt

contrib     hadoop-tools-0.20.203.0.jar    src

data1       ivy                            webapps

data2       ivy.xml                        word.txt

datalog1    lib                            word.txt~

hadoop@ubuntu:/usr/local/hadoop$ mkdir datalog2

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                           LICENSE.txt

build.xml   hadoop-ant-0.20.203.0.jar      logs

c++         hadoop-core-0.20.203.0.jar     NOTICE.txt

CHANGES.txt hadoop-examples-0.20.203.0.jar README.txt

conf        hadoop-test-0.20.203.0.jar     src

contrib     hadoop-tools-0.20.203.0.jar    webapps

data1       ivy                            word.txt

data2       ivy.xml                        word.txt~

datalog1    lib

datalog2    librecordio

 

The exception above was actually caused by a typo: I ran bin/hadoop nanmenode -format instead of bin/hadoop namenode -format, and the hadoop launcher treats an unrecognized subcommand as a Java class name, hence the ClassNotFoundException for "nanmenode". In addition, I had just deleted the directories that the XML configuration points to (see the two blog posts at the top for which file configures them), so I recreated them with mkdir above before formatting again.
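For reference, the directories involved come from the HDFS configuration. On Hadoop 0.20.x they would typically be declared in conf/hdfs-site.xml roughly as below; the property names (dfs.name.dir, dfs.data.dir) are the standard 0.20 ones, but the values are my reconstruction from the paths seen in this session, not a copy of the actual file:

<configuration>
  <property>
    <!-- NameNode metadata; this is what "hadoop namenode -format" formats -->
    <name>dfs.name.dir</name>
    <value>/usr/local/hadoop/datalog1,/usr/local/hadoop/datalog2</value>
  </property>
  <property>
    <!-- DataNode block storage -->
    <name>dfs.data.dir</name>
    <value>/usr/local/hadoop/data1,/usr/local/hadoop/data2</value>
  </property>
</configuration>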

 

 

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop namenode -format

13/09/10 16:45:02 INFO namenode.NameNode: STARTUP_MSG:

/************************************************************

STARTUP_MSG: Starting NameNode

STARTUP_MSG:   host = ubuntu/127.0.1.1

STARTUP_MSG:   args = [-format]

STARTUP_MSG:   version = 0.20.203.0

STARTUP_MSG:   build = http://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20-security-203 -r 1099333; compiled by 'oom' on Wed May 4 07:57:50 PDT 2011

************************************************************/

Re-format filesystem in /usr/local/hadoop/datalog1 ? (Y or N) Y

Re-format filesystem in /usr/local/hadoop/datalog2 ? (Y or N) Y

13/09/10 16:45:07 INFO util.GSet: VM type       = 32-bit

13/09/10 16:45:07 INFO util.GSet: 2% max memory = 19.33375 MB

13/09/10 16:45:07 INFO util.GSet: capacity      = 2^22 = 4194304 entries

13/09/10 16:45:07 INFO util.GSet: recommended=4194304, actual=4194304

13/09/10 16:45:08 INFO namenode.FSNamesystem: fsOwner=hadoop

13/09/10 16:45:08 INFO namenode.FSNamesystem: supergroup=supergroup

13/09/10 16:45:08 INFO namenode.FSNamesystem: isPermissionEnabled=true

13/09/10 16:45:08 INFO namenode.FSNamesystem: dfs.block.invalidate.limit=100

13/09/10 16:45:08 INFO namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)

13/09/10 16:45:08 INFO namenode.NameNode: Caching file names occuring more than 10 times

13/09/10 16:45:08 INFO common.Storage: Image file of size 112 saved in 0 seconds.

13/09/10 16:45:08 INFO common.Storage: Storage directory /usr/local/hadoop/datalog1 has been successfully formatted.

13/09/10 16:45:08 INFO common.Storage: Image file of size 112 saved in 0 seconds.

13/09/10 16:45:08 INFO common.Storage: Storage directory /usr/local/hadoop/datalog2 has been successfully formatted.

13/09/10 16:45:08 INFO namenode.NameNode: SHUTDOWN_MSG:

/************************************************************

SHUTDOWN_MSG: Shutting down NameNode at ubuntu/127.0.1.1

************************************************************/

Once formatting finishes, start the cluster:

 

hadoop@ubuntu:/usr/local/hadoop$ bin/start-all.sh

starting namenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-namenode-ubuntu.out

127.0.0.1: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hadoop-datanode-ubuntu.out

127.0.0.1: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hadoop-secondarynamenode-ubuntu.out

starting jobtracker, logging to /usr/local/hadoop/logs/hadoop-hadoop-jobtracker-ubuntu.out

127.0.0.1: starting tasktracker, logging to /usr/local/hadoop/logs/hadoop-hadoop-tasktracker-ubuntu.out

 

Check that all the daemons started normally:

 

hadoop@ubuntu:/usr/local/hadoop$ jps

3317 DataNode

3593 JobTracker

3521 SecondaryNameNode

3107 NameNode

3833 TaskTracker

3872 Jps
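If one of these daemons is missing from the jps list after a reformat (the DataNode is the usual suspect), its log under logs/ normally says why. A quick look, assuming the standard .log file names that sit next to the .out paths printed by start-all.sh above:

hadoop@ubuntu:/usr/local/hadoop$ tail -n 50 logs/hadoop-hadoop-datanode-ubuntu.log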

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfsadmin -report

Configured Capacity: 40668069888 (37.88 GB)

Present Capacity: 28074295311 (26.15 GB)

DFS Remaining: 28074246144 (26.15 GB)

DFS Used: 49167 (48.01 KB)

DFS Used%: 0%

Under replicated blocks: 1

Blocks with corrupt replicas: 0

Missing blocks: 0

 

-------------------------------------------------

Datanodes available: 1 (1 total, 0 dead)

 

Name: 127.0.0.1:50010

Decommission Status : Normal

Configured Capacity: 40668069888 (37.88 GB)

DFS Used: 49167 (48.01 KB)

Non DFS Used: 12593774577 (11.73 GB)

DFS Remaining: 28074246144 (26.15 GB)

DFS Used%: 0%

DFS Remaining%: 69.03%

Last contact: Tue Sep 10 16:46:29 PDT 2013

 

 

hadoop@ubuntu:/usr/local/hadoop$ ls

bin         docs                           LICENSE.txt

build.xml   hadoop-ant-0.20.203.0.jar      logs

c++         hadoop-core-0.20.203.0.jar     NOTICE.txt

CHANGES.txt hadoop-examples-0.20.203.0.jar README.txt

conf        hadoop-test-0.20.203.0.jar     src

contrib     hadoop-tools-0.20.203.0.jar    webapps

data1       ivy                            word.txt

data2       ivy.xml                        word.txt~

datalog1    lib

datalog2    librecordio

 

List all files and directories on HDFS:

 

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -lsr /

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:45 /tmp

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop/mapred

drwx------  - hadoop supergroup          0 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system

-rw-------  2 hadoop supergroup          4 2013-09-10 16:46 /tmp/hadoop-hadoop/

 

 

Create a wordcount directory and upload the word.txt file. This time it worked; no exceptions or errors were thrown.

 

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -mkdir /tmp/wordcount

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -put word.txt /tmp/wordcount/

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop dfs -lsr /

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:50 /tmp

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop/mapred

drwx------  - hadoop supergroup          0 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system

-rw-------  2 hadoop supergroup          4 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system/jobtracker.info

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:50 /tmp/wordcount

-rw-r--r--  2 hadoop supergroup         83 2013-09-10 16:50 /tmp/wordcount/word.txt
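Two asides on the upload step: you can list just the target directory instead of dumping the whole tree, and -copyFromLocal is an equivalent alternative to -put. A sketch reusing the same paths (word2.txt is a hypothetical name, used only to avoid overwriting the file already uploaded):

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -ls /tmp/wordcount
hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -copyFromLocal word.txt /tmp/wordcount/word2.txt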

 

View the contents of word.txt. The first attempt reported that the file did not exist, because I typed the wrong file extension:

 

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -text /tmp/wordcount/word.text

text: File does not exist: /tmp/wordcount/word.text

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -text /tmp/wordcount/word.txt

java c++ python c

java c++ javascript

helloword hadoop

mapreduce java hadoop hbase

hadoop@ubuntu:/usr/local/hadoop$

 

Run wordcount to see the result, using the example bundled with Hadoop:

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop jar hadoop-examples-0.20.203.0.jar wordcount /tmp/wordcount/word.txt /tmp/outpu1

13/09/10 17:05:16 INFO input.FileInputFormat: Total input paths to process : 1

13/09/10 17:05:16 INFO mapred.JobClient: Running job: job_201309101645_0002

13/09/10 17:05:17 INFO mapred.JobClient:  map 0% reduce 0%

13/09/10 17:05:32 INFO mapred.JobClient:  map 100% reduce 0%

13/09/10 17:05:43 INFO mapred.JobClient:  map 100% reduce 100%

13/09/10 17:05:49 INFO mapred.JobClient: Job complete: job_201309101645_0002

13/09/10 17:05:49 INFO mapred.JobClient: Counters: 25

13/09/10 17:05:49 INFO mapred.JobClient:   Job Counters

13/09/10 17:05:49 INFO mapred.JobClient:     Launched reduce tasks=1

13/09/10 17:05:49 INFO mapred.JobClient:     SLOTS_MILLIS_MAPS=13099

13/09/10 17:05:49 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0

13/09/10 17:05:49 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0

13/09/10 17:05:49 INFO mapred.JobClient:     Launched map tasks=1

13/09/10 17:05:49 INFO mapred.JobClient:     Data-local map tasks=1

13/09/10 17:05:49 INFO mapred.JobClient:     SLOTS_MILLIS_REDUCES=10098

13/09/10 17:05:49 INFO mapred.JobClient:   File Output Format Counters

13/09/10 17:05:49 INFO mapred.JobClient:     Bytes Written=80

13/09/10 17:05:49 INFO mapred.JobClient:   FileSystemCounters

13/09/10 17:05:49 INFO mapred.JobClient:     FILE_BYTES_READ=122

13/09/10 17:05:49 INFO mapred.JobClient:     HDFS_BYTES_READ=192

13/09/10 17:05:49 INFO mapred.JobClient:     FILE_BYTES_WRITTEN=42599

13/09/10 17:05:49 INFO mapred.JobClient:     HDFS_BYTES_WRITTEN=80

13/09/10 17:05:49 INFO mapred.JobClient:   File Input Format Counters

13/09/10 17:05:49 INFO mapred.JobClient:     Bytes Read=83

13/09/10 17:05:49 INFO mapred.JobClient:   Map-Reduce Framework

13/09/10 17:05:49 INFO mapred.JobClient:     Reduce input groups=9

13/09/10 17:05:49 INFO mapred.JobClient:     Map output materialized bytes=122

13/09/10 17:05:49 INFO mapred.JobClient:     Combine output records=9

13/09/10 17:05:49 INFO mapred.JobClient:     Map input records=4

13/09/10 17:05:49 INFO mapred.JobClient:     Reduce shuffle bytes=122

13/09/10 17:05:49 INFO mapred.JobClient:     Reduce output records=9

13/09/10 17:05:49 INFO mapred.JobClient:     Spilled Records=18

13/09/10 17:05:49 INFO mapred.JobClient:     Map output bytes=135

13/09/10 17:05:49 INFO mapred.JobClient:     Combine input records=13

13/09/10 17:05:49 INFO mapred.JobClient:     Map output records=13

13/09/10 17:05:49 INFO mapred.JobClient:     SPLIT_RAW_BYTES=109

13/09/10 17:05:49 INFO mapred.JobClient:     Reduce input records=9

hadoop@ubuntu:/usr/local/hadoop$

 

View the results.

In a browser, open http://localhost:50070, browse the filesystem from the root directory, find the output directory, and click part-r-00000.

[Figure: browser screenshot of the part-r-00000 output]
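Alternatively, the result file can be copied from HDFS to the local filesystem with -get; a sketch (the local name wordcount-result.txt is arbitrary):

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -get /tmp/outpu1/part-r-00000 ./wordcount-result.txt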

 

Viewing the results from the command line:

hadoop@ubuntu:/usr/local/hadoop$

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -lsr /

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:05 /tmp

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:45 /tmp/hadoop-hadoop

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging/hadoop

drwx------  - hadoop supergroup          0 2013-09-10 17:05 /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging

drwx------  - hadoop supergroup          0 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201309101645_0001

-rw-r--r-- 10 hadoop supergroup     142469 2013-09-10 17:04 /tmp/hadoop-hadoop/mapred/staging/hadoop/.staging/job_201309101645_0001/job.jar

drwx------  - hadoop supergroup          0 2013-09-10 17:05 /tmp/hadoop-hadoop/mapred/system

-rw-------  2 hadoop supergroup          4 2013-09-10 16:46 /tmp/hadoop-hadoop/mapred/system/jobtracker.info

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1

-rw-r--r--  2 hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1/_SUCCESS

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1/_logs

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 17:05 /tmp/outpu1/_logs/history

-rw-r--r--  2 hadoop supergroup      10396 2013-09-10 17:05 /tmp/outpu1/_logs/history/job_201309101645_0002_1378857916792_hadoop_word+count

-rw-r--r--  2 hadoop supergroup      19969 2013-09-10 17:05 /tmp/outpu1/_logs/history/job_201309101645_0002_conf.xml

-rw-r--r--  2 hadoop supergroup         80 2013-09-10 17:05 /tmp/outpu1/part-r-00000

drwxr-xr-x  - hadoop supergroup          0 2013-09-10 16:50 /tmp/wordcount

-rw-r--r--  2 hadoop supergroup         83 2013-09-10 16:50 /tmp/wordcount/word.txt

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -text /tmp/output1/part-r-00000

text: File does not exist: /tmp/output1/part-r-00000

// I typed the directory name wrong: output1 instead of outpu1

hadoop@ubuntu:/usr/local/hadoop$ bin/hadoop fs -cat /tmp/outpu1/part-r-00000

c           1

c++         2

hadoop      2

hbase       1

helloword   1

java        3

javascript  1

mapreduce   1

python      1

hadoop@ubuntu:/usr/local/hadoop$ 



Note: the examples jar bundled with Hadoop 0.20.203 seems to have problems; the jar appears to be incompletely packaged. Download one yourself; it can be found online.
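One way to check whether a given examples jar is completely packaged is to list its contents before running it; a quick check, assuming the JDK's jar tool is on the PATH:

hadoop@ubuntu:/usr/local/hadoop$ jar tf hadoop-examples-0.20.203.0.jar | grep -i wordcount

If the jar is intact, this should print org/apache/hadoop/examples/WordCount.class along with its inner mapper and reducer classes.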

