Troubleshooting log:


    • Problem: Hadoop logs "org.apache.hadoop.hdfs.server.blockmanagement.DatanodeManager: Unresolved datanode registration: hostname cannot be resolved", and HBase reports "Will not attempt to authenticate using SASL (unknown error)".

      Solution: this is most likely caused by renaming the machine. Edit /etc/hosts, add a line mapping the hostname to its IP, then try it again!
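      A minimal sketch of the hosts fix: append the hostname-to-IP mapping on every node and confirm it resolves. The name master.kaiser.com and address 192.168.0.60 are example values borrowed from logs elsewhere in this document; substitute your own.

      # Append the mapping to /etc/hosts (run as root on every node):
      echo "192.168.0.60 master.kaiser.com master" >> /etc/hosts

      # Verify the hostname now resolves to the expected address:
      getent hosts master.kaiser.com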

    • Problem: after importing hadoop-common-2.2.0.jar for secondary development, for example to read and write HDFS files, the first run fails with:


      java.io.IOException: No FileSystem for scheme: hdfs
              at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2421)
              at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2428)
              at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:88)
              at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2467)
              at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2449)
              at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:367)
              at FileCopyToHdfs.readFromHdfs(FileCopyToHdfs.java:65)
              at FileCopyToHdfs.main(FileCopyToHdfs.java:26)


      Solution: this happens because the default core-default.xml in that jar does not configure the following property:


      <property>
          <name>fs.hdfs.impl</name>
          <value>org.apache.hadoop.hdfs.DistributedFileSystem</value>
          <description>The FileSystem for hdfs: uris.</description>
      </property>


      The property above specifies the implementation class behind the hdfs: scheme; once it is added, the problem is solved.
      It is recommended to download the hadoop-2.2.0 source, modify core-default.xml in the source tree, rebuild and repackage the jar, and then import the new jar into your project.
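      To check whether the copy of the jar you are using already carries the property, you can read the bundled core-default.xml straight out of it; a quick sketch (the jar path is an assumption, point it at your own copy):

      # Print core-default.xml from inside the jar and look for fs.hdfs.impl:
      unzip -p hadoop-common-2.2.0.jar core-default.xml | grep -B1 -A2 "fs.hdfs.impl"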




      References:

    • http://www.cnblogs.com/tangtianfly/p/3491133.html

    • http://blog.csdn.net/u013281331/article/details/17992077

    • Problem: a region server goes down, and its log shows "regionserver.HRegionServer: Failed deleting my ephemeral node".

      Solution: the nodes' clocks are out of sync. Synchronize the time on the affected node, then restart the region server:

      su root
      ntpdate 133.100.11.8
      cd /usr/local/hbase/bin/
      ./hbase-daemon.sh start regionserver
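      To see how far the clocks have drifted before syncing, print each node's time side by side; a sketch assuming passwordless ssh and hypothetical hostnames master, slave1 and slave2:

      # Print every node's current time in one pass:
      for h in master slave1 slave2; do
          echo -n "$h: "; ssh "$h" date
      done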

    • Problem: the datanode fails to start, and its log shows:

      2014-06-18 20:34:59,622 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool (Datanode Uuid unassigned) service to localhost/127.0.0.1:9000
      java.io.IOException: Incompatible clusterIDs in /usr/local/hadoop/hdfs/data: namenode clusterID = CID-af6f15aa-efdd-479b-bf55-77270058e4f7; datanode clusterID = CID-736d1968-8fd1-4bc4-afef-5c72354c39ce
              at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:472)
              at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:225)
              at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:249)
              at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:929)
              at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:900)
              at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:274)
              at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
              at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:815)
              at java.lang.Thread.run(Thread.java:744)

      As the log shows, the datanode's clusterID and the namenode's clusterID do not match.

      Why this happens: after dfs was formatted the first time and Hadoop was started and used, the format command (hdfs namenode -format) was run again. Formatting regenerates the namenode's clusterID, while the datanode's clusterID stays unchanged.

      Solution: open the datanode and namenode directories configured in hdfs-site.xml and look at the VERSION file inside each current folder; the clusterID values differ exactly as the log reports. Change the clusterID in the datanode's VERSION file to match the namenode's, restart dfs (run start-dfs.sh), and jps will then show the datanode running normally.
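      The comparison and the fix can be scripted; a sketch assuming the name directory /home/hadoop/dfs_name and the data directory /usr/local/hadoop/hdfs/data seen in this document (check hdfs-site.xml for your actual paths):

      # Show both clusterIDs:
      grep clusterID /home/hadoop/dfs_name/current/VERSION
      grep clusterID /usr/local/hadoop/hdfs/data/current/VERSION

      # Copy the namenode's clusterID into the datanode's VERSION file:
      NN_ID=$(grep -o 'clusterID=.*' /home/hadoop/dfs_name/current/VERSION)
      sed -i "s/clusterID=.*/${NN_ID}/" /usr/local/hadoop/hdfs/data/current/VERSION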

    • Problem: the datanode cannot connect to the namenode; its log repeats:

      WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Problem connecting to server: tmaster.kaiser.com/192.168.0.63:9000

      Solution:

      1. Check the firewall and SELinux.

      2. Make sure hosts contains no entry resolving 127.0.0.1 to the machine name, such as "127.0.0.1 localhost".
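      On the CentOS/RHEL systems these Hadoop versions typically ran on, the firewall and SELinux checks might look like the sketch below; command names differ on other distributions:

      service iptables stop       # stop the firewall for this session
      chkconfig iptables off      # keep it off across reboots
      setenforce 0                # switch SELinux to permissive mode
      getenforce                  # should now print "Permissive"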



    • Problem: running an HBase program or a shell command (./hbase shell) prints:

      SLF4J: Class path contains multiple SLF4J bindings.
      SLF4J: Found binding in [jar:file:/usr/local/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: Found binding in [jar:file:/usr/local/hadoop2-1.0.3/lib/slf4j-log4j12-1.4.3.jar!/org/slf4j/impl/StaticLoggerBinder.class]
      SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

      Solution: both HBase and Hadoop ship this jar; removing either one of the two copies is enough.
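      The duplicate bindings can be located with find, using the install paths from the warning above (adjust them to your layout); moving one aside is safer than deleting it outright:

      # List every slf4j-log4j12 binding on both classpaths:
      find /usr/local/hbase-0.92.1/lib /usr/local/hadoop2-1.0.3/lib -name 'slf4j-log4j12-*.jar'

      # Park the HBase copy out of the way:
      mv /usr/local/hbase-0.92.1/lib/slf4j-log4j12-1.5.8.jar{,.bak}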



    • Problem: HBase commands hang while the master log keeps printing:

      2013-04-13 17:13:17,374 INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe mode...
      2013-04-13 17:13:27,377 INFO org.apache.hadoop.hbase.util.FSUtils: Waiting for dfs to exit safe mode...
      (the same line repeats every ten seconds)

      Solution: Hadoop had only just started and was still in safe mode.

      [coder@h1 hadoop-0.20.2]$ bin/hadoop dfsadmin -safemode get
      Safe mode is ON
      [coder@h1 hadoop-0.20.2]$

        Either wait for Hadoop to leave safe mode before running HBase commands, or take it out of safe mode manually:

      [coder@h1 hadoop-0.20.2]$ bin/hadoop dfsadmin -safemode leave
      Safe mode is OFF
      [coder@h1 hadoop-0.20.2]$



      cd /usr/local/hadoop2/bin

      ./hadoop dfsadmin -safemode leave
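      If you would rather block until HDFS leaves safe mode by itself than force it out, dfsadmin also accepts a wait subcommand:

      cd /usr/local/hadoop2/bin
      ./hadoop dfsadmin -safemode wait    # returns once safe mode is OFF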
    • Problem: hadoop.hbase.MasterNotRunningException: The node /hbase is not in ZooKeeper. It should have been written by the master. Check the value configured in 'zookeeper.znode.parent'. There could be a mismatch with the one configured in the master.

      Solution: configure the same zookeeper.znode.parent on the client as on the master, e.g. in hbase-site.xml:

      <property>
          <name>zookeeper.znode.parent</name>
          <value>/usr/local/hbase/hbase_tmp/hbase</value>
      </property>
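      You can list the znodes ZooKeeper actually holds to see which parent the master wrote; a sketch assuming a ZooKeeper install under /usr/local/zookeeper and a quorum member on localhost:2181:

      # List the root znodes; the configured parent should appear here:
      /usr/local/zookeeper/bin/zkCli.sh -server 127.0.0.1:2181 ls /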





    • Problem: clients cannot reach the namenode on port 8020:

      closing ipc connection to master.kaiser.com/192.168.0.60:8020: Connection refused
      Call From master.kaiser.com/192.168.0.60 to master.kaiser.com:8020 failed on connection exception: java.net.ConnectException: Connection refused

      The namenode log shows that it was never formatted:

      2014-09-03 13:50:39,029 INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /home/hadoop/dfs_name/in_use.lock acquired by nodename [email protected]
      2014-09-03 13:50:39,032 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
      java.io.IOException: NameNode is not formatted.
      2014-09-03 13:50:39,141 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
      java.io.IOException: NameNode is not formatted.

      Solution: stop everything, clear the dfs working directory, reformat, and restart:
      ./stop-all.sh
      rm -rf /home/hadoop/tmp/dfs
      hadoop namenode -format
      ./start-all.sh

      Or, for a more thorough reset, remove all of the working directories before reformatting:

      rm -rf /home/hadoop/tmp
      rm -rf /home/hadoop/dfs_data
      rm -rf /home/hadoop/pids
      rm -rf /home/hadoop/dfs_name
      cd /usr/local/hadoop2/bin/
      ./hadoop namenode -format

      Note that formatting destroys everything stored in HDFS, so only do this when the cluster's data is disposable.
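      After the restart, confirm the namenode is up and listening again on the IPC port from the error above:

      jps                            # should list NameNode
      netstat -ntlp | grep 8020      # the namenode IPC port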
