Hadoop and Linux kernel 2.6.27 - epoll limits (java.io.IOException: Too many open files)

Yesterday we faced a strange problem. A newly set up Hadoop cluster got unstable after a few minutes. Logs reported a lot of exceptions like:

java.io.IOException: Too many open files
at sun.nio.ch.EPollArrayWrapper.epollCreate(Native Method)
at sun.nio.ch.EPollArrayWrapper.<init>(EPollArrayWrapper.java:68)
at sun.nio.ch.EPollSelectorImpl.<init>(EPollSelectorImpl.java:52)
at sun.nio.ch.EPollSelectorProvider.openSelector(EPollSelectorProvider.java:18)
at sun.nio.ch.Util.getTemporarySelector(Util.java:123)
at sun.nio.ch.SocketAdaptor.connect(SocketAdaptor.java:92)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.writeBlock(DataXceiver.java:281)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:102)
at java.lang.Thread.run(Thread.java:619)

or

DataXceiver
java.io.EOFException
at java.io.DataInputStream.readShort(DataInputStream.java:298)
at org.apache.hadoop.hdfs.server.datanode.DataXceiver.run(DataXceiver.java:78)
at java.lang.Thread.run(Thread.java:619)

and others. We double-checked ulimit -n and it reported 32768 on all datanodes, as expected. lsof -u hadoop | wc -l was as low as 2000, so the "Too many open files" exceptions seemed strange.
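For reference, these checks amount to something like the following (a sketch; the user name hadoop and the datanodes.txt host list are assumptions about our setup):

# Per-process file-descriptor limit in the current shell
ulimit -n

# Total file descriptors held by all processes of user hadoop
lsof -u hadoop | wc -l

# Run both checks on every datanode (datanodes.txt: one host per line)
for node in $(cat datanodes.txt); do
  ssh "$node" 'ulimit -n; lsof -u hadoop | wc -l'
done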

A day and several installation routines later we figured out that the available epoll resources were no longer sufficient. The Java 6 JDK uses epoll to implement non-blocking I/O, so every NIO selector consumes one epoll instance. Kernel 2.6.27 introduced per-user limits on these resources, and the default on openSUSE is 128 - way too low for a datanode whose many DataXceiver threads each grab a temporary selector (as the stack trace above shows).
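To confirm that this limit is the bottleneck, compare the per-user epoll instance count against the kernel limit. A sketch, assuming lsof reports epoll file descriptors under the name eventpoll (as Linux versions of lsof do):

# Current per-user limit on epoll instances (kernel 2.6.27+)
cat /proc/sys/fs/epoll/max_user_instances

# Number of epoll instances currently held by user hadoop
lsof -u hadoop | grep -c eventpoll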

Increasing the limit with echo 1024 > /proc/sys/fs/epoll/max_user_instances fixed the cluster immediately. To make the setting survive a reboot, add the following line to /etc/sysctl.conf:

fs.epoll.max_user_instances = 1024
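The same value can also be set through sysctl, which is equivalent to the echo above and lets you verify the sysctl.conf entry:

# Apply the new limit at runtime
sysctl -w fs.epoll.max_user_instances=1024

# Re-read /etc/sysctl.conf so the persisted setting takes effect
sysctl -p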


Source: http://pero.blogs.aprilmayjune.org/2009/01/22/hadoop-and-linux-kernel-2627-epoll-limits/
