Common Hadoop Exceptions and How to Fix Them

Exception 1: Bringing up interface eth0: Device eth0 does not seem to be present, delaying initialization.

[root@cs0 ~]# service network restart
Shutting down loopback interface:                     [  OK  ]
Bringing up loopback interface:                       [  OK  ]

Bringing up interface eth0:  Device eth0 does not seem to be present, delaying initialization.
                                                      [FAILED]

Solution:

1. Run ifconfig -a to see the interface name and MAC address the system actually detected (note the card now shows up as eth1):
eth1      Link encap:Ethernet  HWaddr 00:0C:29:DA:77:6C  
          inet addr:192.168.80.134  Bcast:192.168.80.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feda:776c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:25 errors:0 dropped:0 overruns:0 frame:0
          TX packets:23 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:10551 (10.3 KiB)  TX bytes:2135 (2.0 KiB)
          Interrupt:19 Base address:0x2024 

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:12 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:720 (720.0 b)  TX bytes:720 (720.0 b)
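The binding behind this problem can be inspected directly; on a RHEL/CentOS 6-style system, udev records each card's MAC-to-name mapping in a rules file:

cat /etc/udev/rules.d/70-persistent-net.rules

On a cloned VM this file typically still lists the old MAC as eth0, with the new MAC appended as eth1, which matches the ifconfig output above.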

2. Edit the config file for eth0: change DEVICE="eth0" to DEVICE="eth1" and set HWADDR to the MAC address found above:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

DEVICE="eth1"
BOOTPROTO="static"
HWADDR="00:0C:29:DA:77:6C"
IPV6INIT="yes"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
IPADDR="192.168.80.131"
NETMASK="255.255.255.0"
GATEWAY="192.168.80.2"

Save and exit, then restart the network service:

service network restart

Root cause: Linux had bound the interface name eth0 to the original MAC address, so when the card's MAC changed (typical after cloning a virtual machine), the system registered it as a new interface, eth1.
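Two optional follow-ups, both assuming the same RHEL/CentOS 6-style layout. Renaming the config file to match the new device name keeps things consistent; alternatively, if you would rather keep the eth0 name, deleting the stale udev rules and rebooting lets the system re-register the card as eth0:

mv /etc/sysconfig/network-scripts/ifcfg-eth0 /etc/sysconfig/network-scripts/ifcfg-eth1   # match DEVICE="eth1"

rm -f /etc/udev/rules.d/70-persistent-net.rules   # alternative: drop the old MAC binding
reboot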

Exception 2: ssh: Could not resolve hostname library: Temporary failure in name resolution

16/04/25 19:27:21 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) Client VM warning: You have loaded library /home/hadoop/app/hadoop/lib/native/libhadoop.so which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
cs0 cs1]
sed: -e expression #1, char 6: unknown option to `s'
which: ssh: Could not resolve hostname which: Temporary failure in name resolution
warning:: ssh: Could not resolve hostname warning:: Temporary failure in name resolution
have: ssh: Could not resolve hostname have: Temporary failure in name resolution
stack: ssh: Could not resolve hostname stack: Temporary failure in name resolution
VM: ssh: Could not resolve hostname VM: Temporary failure in name resolution
have: ssh: Could not resolve hostname have: Temporary failure in name resolution
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Temporary failure in name resolution
You: ssh: Could not resolve hostname You: Temporary failure in name resolution
disabled: ssh: Could not resolve hostname disabled: Temporary failure in name resolution
will: ssh: Could not resolve hostname will: Temporary failure in name resolution
Client: ssh: Could not resolve hostname Client: Temporary failure in name resolution
loaded: ssh: Could not resolve hostname loaded: Temporary failure in name resolution
The: ssh: Could not resolve hostname The: Temporary failure in name resolution
guard.: ssh: Could not resolve hostname guard.: Temporary failure in name resolution
library: ssh: Could not resolve hostname library: Temporary failure in name resolution
to: ssh: Could not resolve hostname to: Temporary failure in name resolution
might: ssh: Could not resolve hostname might: Temporary failure in name resolution
the: ssh: Could not resolve hostname the: Temporary failure in name resolution
Java: ssh: Could not resolve hostname Java: Temporary failure in name resolution

Solution:

Open /etc/profile and add the lines below. The stack-guard warning the JVM prints about the native library gets mixed into the host list that the start scripts pass to ssh, so each word of the warning is treated as a hostname; pointing Hadoop at the correct native-library path silences the warning, and the bogus ssh lookups with it:
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
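After saving, reload the profile and, on Hadoop 2.x, check whether the native library is now picked up (hadoop checknative is a standard subcommand in recent 2.x releases):

source /etc/profile
hadoop checknative -a

If libhadoop still reports false, the library may have been built for a different architecture than your JVM (e.g. a 32-bit library on a 64-bit JVM), which is another common cause of this warning.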

Exception 3: put: File /test/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1). There are 0 datanode(s) running and no node(s) are excluded in this operation.

org.apache.hadoop.ipc.RemoteException(java.io.IOException): File /test/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.
        at org.apache.hadoop.hdfs.server.blockmanagement.BlockManager.chooseTarget4NewBlock(BlockManager.java:1549)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:3200)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.addBlock(NameNodeRpcServer.java:641)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.addBlock(ClientNamenodeProtocolServerSideTranslatorPB.java:482)
        at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:619)
        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:962)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2039)
        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2035)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:415)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2033)

        at org.apache.hadoop.ipc.Client.call(Client.java:1468)
        at org.apache.hadoop.ipc.Client.call(Client.java:1399)
        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
        at com.sun.proxy.$Proxy14.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.addBlock(ClientNamenodeProtocolTranslatorPB.java:399)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
        at com.sun.proxy.$Proxy15.addBlock(Unknown Source)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.locateFollowingBlock(DFSOutputStream.java:1532)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.nextBlockOutputStream(DFSOutputStream.java:1349)
        at org.apache.hadoop.hdfs.DFSOutputStream$DataStreamer.run(DFSOutputStream.java:588)
put: File /test/test.txt._COPYING_ could only be replicated to 0 nodes instead of minReplication (=1).  There are 0 datanode(s) running and no node(s) are excluded in this operation.

Solution:

On every slave host, go to the directory that hadoop.tmp.dir points to (for example /home/hadoop/data/hadoop_hadoop/dfs) and delete the data folder there. Note that this discards the blocks stored on that DataNode.
Then start Hadoop again:

start-all.sh

This exception means no DataNode had registered with the NameNode. It is usually caused by starting the daemons in the wrong order: start the NameNode first, then the DataNodes, and the exception does not occur. Reformatting the NameNode without clearing the DataNodes' data directories produces the same symptom, which is why deleting the data folder helps.
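To confirm the fix, two standard checks (jps ships with the JDK):

jps                    # every slave should now list a DataNode process
hdfs dfsadmin -report  # the report should show at least one live datanode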
