java.lang.OutOfMemoryError: unable to create new native thread

2014-05-21 13:53:18,504 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Reopen already-open Block for append blk_8901346392456488003_201326
2014-05-21 13:53:18,506 ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.33.11:50010, storageID=DS-420686803-10.1.33.11-50010-1399181350037, infoPort=50075, ipcPort=50020):DataXceiver
java.lang.OutOfMemoryError: unable to create new native thread
    at java.lang.Thread.start0(Native Method)
    at java.lang.Thread.start(Thread.java:714)
    at org.apache.hadoop.hdfs.server.datanode.BlockReceiver.receiveBlock(BlockReceiver.java:576)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:404)
    at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:112)
    at java.lang.Thread.run(Thread.java:745)
2014-05-21 13:53:25,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.33.11:50010, storageID=DS-420686803-10.1.33.11-50010-1399181350037, infoPort=50075, ipcPort=50020) Starting thread to transfer blk_-2263414036967179224_200863 to 10.1.33.13:50010
2014-05-21 13:53:25,866 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.33.11:50010, storageID=DS-420686803-10.1.33.11-50010-1399181350037, infoPort=50075, ipcPort=50020) Starting thread to transfer blk_-2231241119796918398_200963 to 10.1.33.15:50010
2014-05-21 13:53:25,877 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(10.1.33.11:50010, storageID=DS-420686803-10.1.33.11-50010-1399181350037, infoPort=50075, ipcPort=50020):Failed to transfer blk_-2263414036967179224_200863 to 10.1.33.13:50010 got java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1511)
    at java.lang.Thread.run(Thread.java:745)

2014-05-21 13:53:25,877 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: checkDiskError: exception:
java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:739)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:511)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:481)
    at org.apache.hadoop.hdfs.server.datanode.DataNode$DataTransfer.run(DataNode.java:1511)
    at java.lang.Thread.run(Thread.java:745)
2014-05-21 13:53:25,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Not checking disk as checkDiskError was called on a network related exception
2014-05-21 13:53:25,877 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
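Despite the name, this OutOfMemoryError is usually not a heap problem: the JVM asked the OS for a new native thread and was refused, most often because the per-user process limit (nproc) was exhausted — every Java thread counts as a lightweight process against that limit. A quick way to check on the affected node (a sketch; the `pgrep -f DataNode` PID lookup is an assumption about your setup):

```shell
# Current soft limit on user processes -- every Java thread counts against it
ulimit -u

# Total threads (lightweight processes) running on the node right now;
# compare the DataNode user's share with the limit above
ps -eLf | tail -n +2 | wc -l

# Thread count of one JVM, if you have its PID (e.g. via `pgrep -f DataNode`):
#   grep Threads /proc/<pid>/status
```

If the thread total for the user running the DataNode is at or near `ulimit -u`, the limit is the culprit.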


Fix: increase the `ulimit -u` (max user processes) limit for the user running the DataNode. Once that user hits the nproc ceiling, the JVM cannot spawn new native threads and throws `OutOfMemoryError: unable to create new native thread` even when heap memory is plentiful.
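A sketch of the change (the `hdfs` user name and the 65535 value are examples, not from the log — pick values appropriate for your cluster):

```shell
# Raise the soft limit up to the hard limit for the current shell session
ulimit -u "$(ulimit -Hu)"
ulimit -u

# To make it permanent for the user that runs the DataNode, add to
# /etc/security/limits.conf (user "hdfs" and value 65535 are examples):
#   hdfs  soft  nproc  65535
#   hdfs  hard  nproc  65535
# then log in again and restart the DataNode so the new limit takes effect.
```

Note that `ulimit` set in a shell only affects that session and its children; the limits.conf entry (applied by pam_limits at login) is what keeps the setting across restarts.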
