The exception:
............
Oct 17, 2011 5:22:41 PM org.apache.tomcat.util.net.JIoEndpoint$Acceptor run
SEVERE: Socket accept failed
java.net.SocketException: Too many open files
        at java.net.PlainSocketImpl.socketAccept(Native Method)
        at java.net.AbstractPlainSocketImpl.accept(AbstractPlainSocketImpl.java:375)
        at java.net.ServerSocket.implAccept(ServerSocket.java:470)
        at java.net.ServerSocket.accept(ServerSocket.java:438)
        at org.apache.tomcat.util.net.DefaultServerSocketFactory.acceptSocket(DefaultServerSocketFactory.java:59)
        at org.apache.tomcat.util.net.JIoEndpoint$Acceptor.run(JIoEndpoint.java:210)
        at java.lang.Thread.run(Thread.java:636)
(the identical SocketException repeats every second or two, at 5:22:43 PM, 5:22:44 PM, ...)
............
The cause:
1. By default, Linux limits each process to 1024 simultaneously open files. Run ulimit -a to see the resource limits for the current shell (note the line open files (-n) 1024):
[root@**** bin]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16384
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 1024
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[root@**** bin]#
2. Raising the maximum number of open files usually works around the problem: ulimit -n 4096
[root@**** bin]# ulimit -n 4096
[root@**** bin]# ulimit -a
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 16384
max locked memory       (kbytes, -l) 32
max memory size         (kbytes, -m) unlimited
open files                      (-n) 4096
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 10240
cpu time               (seconds, -t) unlimited
max user processes              (-u) 16384
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
[root@**** bin]#
The maximum number of open files is now raised, but only for the current shell session.
The real bug: a static method opened a file but never closed it, so every request leaked a file descriptor. Closing the input stream in the code fixes it:
public static List<GpsPoint> getArrayList() throws IOException {
    List<GpsPoint> pointList = null;
    // Load the data file from the classpath
    InputStream in = ParseGpsFile.class.getClassLoader().getResourceAsStream("GPS1.TXT");
    if (null == in) {
        System.out.println("Failed to read the file");
        return pointList;
    }
    pointList = new ArrayList<GpsPoint>();
    try {
        BufferedReader br = new BufferedReader(new InputStreamReader(in));
        String longtude = "";
        String latude = "";
        String elevation = "";
        while ((longtude = br.readLine()) != null) {
            // Read the next line: latitude
            latude = br.readLine();
            if (null == latude) {
                break;
            }
            // Read the next line: elevation
            elevation = br.readLine();
            if (null == elevation) {
                break;
            }
            // Add one point
            pointList.add(gps2point(longtude, latude, elevation));
        }
        System.out.println("\n\n");
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // Close the stream on every exit path - this was the leak
        in.close();
    }
    return pointList;
}
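On Java 7 and later, try-with-resources removes this class of leak entirely by closing the stream on every exit path, including exceptions. Below is a minimal, self-contained sketch of the same parsing loop; GpsPoint and gps2point are simplified stand-ins for the document's own types, and the input comes from a StringReader so the example runs on its own:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;
import java.util.ArrayList;
import java.util.List;

public class GpsParser {

    // Simplified stand-in for the document's GpsPoint type
    static class GpsPoint {
        final String longitude, latitude, elevation;
        GpsPoint(String lon, String lat, String elev) {
            longitude = lon; latitude = lat; elevation = elev;
        }
    }

    // Stand-in for the document's gps2point helper
    static GpsPoint gps2point(String lon, String lat, String elev) {
        return new GpsPoint(lon, lat, elev);
    }

    public static List<GpsPoint> parse(Reader source) throws IOException {
        List<GpsPoint> points = new ArrayList<GpsPoint>();
        // try-with-resources closes br (and the underlying reader)
        // on every exit path, so no descriptor can leak
        try (BufferedReader br = new BufferedReader(source)) {
            String longitude;
            while ((longitude = br.readLine()) != null) {
                String latitude = br.readLine();
                String elevation = br.readLine();
                if (latitude == null || elevation == null) {
                    break; // incomplete trailing record
                }
                points.add(gps2point(longitude, latitude, elevation));
            }
        }
        return points;
    }

    public static void main(String[] args) throws IOException {
        // Two complete records: longitude, latitude, elevation per line
        String data = "116.39\n39.91\n43\n121.47\n31.23\n4\n";
        System.out.println(parse(new StringReader(data)).size());
    }
}
```

With this shape there is no finally block to forget: the compiler guarantees close() is called.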
Solving the problem for good:
1. Add to /etc/pam.d/login:
session required /lib/security/pam_limits.so
# note the comments in this file
The file originally contained:
[root@**** ~]# cat /etc/pam.d/login
#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    optional     pam_keyinit.so force revoke
session    required     pam_loginuid.so
session    include      system-auth
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
After the change:
[root@**** ~]# cat /etc/pam.d/login
#%PAM-1.0
auth [user_unknown=ignore success=ok ignore=ignore default=bad] pam_securetty.so
auth       include      system-auth
account    required     pam_nologin.so
account    include      system-auth
password   include      system-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    optional     pam_keyinit.so force revoke
session    required     pam_loginuid.so
session    include      system-auth
session    optional     pam_console.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open
# kevin.xie added, fixed 'too many open file' bug, limit open max files 1024, 2011-10-24
session    required     /lib/security/pam_limits.so
[root@**** ~]#
2. Add to /etc/security/limits.conf:
root - nofile 1006154
Here root is a single user; to apply the limit to all users, use * as the domain. The right value depends on your hardware, so don't set it absurdly high.
Contents before the change:

[root@**** ~]# cat /etc/security/limits.conf
# /etc/security/limits.conf
#
#Each line describes a limit for a user in the form:
#
#<domain> <type> <item> <value>
#
#Where:
#<domain> can be:
#        - an user name
#        - a group name, with @group syntax
#        - the wildcard *, for default entry
#        - the wildcard %, can be also used with %group syntax,
#          for maxlogin limit
#
#<type> can have the two values:
#        - "soft" for enforcing the soft limits
#        - "hard" for enforcing hard limits
#
#<item> can be one of the following:
#        - core - limits the core file size (KB)
#        - data - max data size (KB)
#        - fsize - maximum filesize (KB)
#        - memlock - max locked-in-memory address space (KB)
#        - nofile - max number of open files
#        - rss - max resident set size (KB)
#        - stack - max stack size (KB)
#        - cpu - max CPU time (MIN)
#        - nproc - max number of processes
#        - as - address space limit
#        - maxlogins - max number of logins for this user
#        - maxsyslogins - max number of logins on the system
#        - priority - the priority to run user process with
#        - locks - max number of file locks the user can hold
#        - sigpending - max number of pending signals
#        - msgqueue - max memory used by POSIX message queues (bytes)
#        - nice - max nice priority allowed to raise to
#        - rtprio - max realtime priority
#
#<domain>      <type>  <item>         <value>
#
#*               soft    core            0
#*               hard    rss             10000
#@student        hard    nproc           20
#@faculty        soft    nproc           20
#@faculty        hard    nproc           50
#ftp             hard    nproc           0
#@student        -       maxlogins       4
# End of file
[root@**** ~]#

Contents after the change (identical except for the two lines added just before "End of file"):

[root@**** ~]# cat /etc/security/limits.conf
(... same comment header and examples as above ...)
# kevin.xie added, fixed 'too many open file' bug, limit open max files 1024, 2011-10-24
*                -       nofile          102400
# End of file
[root@**** ~]#
3. Add to /etc/rc.local:
echo 8061540 > /proc/sys/fs/file-max
file-max before the change:
[root@**** ~]# cat /proc/sys/fs/file-max
4096
[root@**** ~]#
After the change:
[root@**** ~]# cat /proc/sys/fs/file-max
4096000
[root@**** ~]#
With these three steps done, the limits are permanent.
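To confirm from inside the JVM that the new limits actually apply to the running process, the HotSpot JDK exposes file-descriptor counters through com.sun.management.UnixOperatingSystemMXBean. A small sketch (the bean only exists on Unix-like platforms, so the cast is guarded):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

public class FdMonitor {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof com.sun.management.UnixOperatingSystemMXBean) {
            com.sun.management.UnixOperatingSystemMXBean unix =
                    (com.sun.management.UnixOperatingSystemMXBean) os;
            // Current number of descriptors held by this JVM process
            System.out.println("open fds: " + unix.getOpenFileDescriptorCount());
            // The effective per-process limit (what ulimit -n / limits.conf grant)
            System.out.println("max fds:  " + unix.getMaxFileDescriptorCount());
        } else {
            System.out.println("fd counters not available on this platform");
        }
    }
}
```

Logging these two numbers periodically from the server also makes a leak visible long before "Too many open files" starts appearing: the open count climbs steadily instead of plateauing.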
The original code:
/**
 * <pre><b>Description:</b> get an asynchronous session instance.
 *
 * @author Kevin.xie
 * <b>Created:</b> 2011-9-15 10:06:27 AM
 *
 * @return
 *
 * <b>Revision history:</b> (who, when, why/what)
 * </pre>
 */
public static IoSession getSession1() {
    // Create the client connector
    IoConnector connector = new NioSocketConnector();
    // Set the event handler
    connector.setHandler(new WebClientHandler());
    // Set the codec filter and line-based reading
    connector.getFilterChain()
            .addLast("codec", new ProtocolCodecFilter(new ObdDemuxingProtocolCodecFactory(false)));
    // Open the connection
    ConnectFuture future = connector.connect(new InetSocketAddress(ServerConfigBoundle.getServerIp(),
            ServerConfigBoundle.getServerPort()));
    // Wait for the connection to be established
    future.awaitUninterruptibly();
    // Get the session
    IoSession session = future.getSession();
    return session;
}

/**
 * <pre><b>Description:</b> both the Connector and the IoSession must be closed.
 *
 * @author Kevin.xie
 * <b>Created:</b> 2011-10-20 10:20:54 AM
 *
 * @param session the session to close
 *
 * <b>Revision history:</b> (who, when, why/what)
 * </pre>
 */
public static void closeSession(IoSession session) {
    if (session != null && !session.isClosing()) {
        // Not closed yet, so close it
        session.close(true);
        session = null;
    }
}
The fixed code:
/**
 * <pre><b>Description:</b> get the IoConnector together with the asynchronous session
 * instance, so that both can be closed later. Special reminder: the NioSocketConnector
 * must also be closed, and the method for that is dispose(). This is crucial - it was
 * the root cause of this "too many open files" problem.
 *
 * @author Kevin.xie
 * <b>Created:</b> 2011-9-15 10:06:27 AM
 *
 * @return
 *
 * <b>Revision history:</b> (who, when, why/what)
 * </pre>
 */
public static Map<String, Object> getConnectorAndSession() {
    // Create the client connector
    IoConnector connector = new NioSocketConnector();
    // Set the event handler
    connector.setHandler(new WebClientHandler());
    // Set the codec filter and line-based reading
    connector.getFilterChain()
            .addLast("codec", new ProtocolCodecFilter(new ObdDemuxingProtocolCodecFactory(false)));
    // Open the connection
    ConnectFuture future = connector.connect(new InetSocketAddress(ServerConfigBoundle.getServerIp(),
            ServerConfigBoundle.getServerPort()));
    // Wait for the connection to be established
    future.awaitUninterruptibly();
    // Get the session
    IoSession session = future.getSession();
    Map<String, Object> map = new HashMap<String, Object>();
    map.put(CONNECTOR_KEY, connector);
    map.put(SESSION_KEY, session);
    return map;
}

/**
 * <pre><b>Description:</b> both the Connector and the IoSession must be closed.
 * Special reminder: the NioSocketConnector must also be closed, via dispose().
 * This is crucial - it was the root cause of this "too many open files" problem.
 *
 * @author Kevin.xie
 * <b>Created:</b> 2011-10-20 10:20:54 AM
 *
 * @param connector the IoConnector to close; leaking it causes "too many open files"
 * @param session the session to close
 *
 * <b>Revision history:</b> (who, when, why/what)
 * </pre>
 */
public static void closeConnectorAndSession(IoConnector connector, IoSession session) {
    if (session != null && !session.isClosing()) {
        // Not closed yet, so close it
        session.close(true);
        session = null;
    }
    if (connector != null && !(connector.isDisposing() || connector.isDisposed())) {
        // Not disposed yet, so dispose it
        connector.dispose();
        connector = null;
    }
}
Always release the resources when finished:
Map<String, Object> resultMap = SocketUtils.getConnectorAndSession();
IoSession session = (IoSession) resultMap.get(SocketUtils.SESSION_KEY);
IoConnector connector = (IoConnector) resultMap.get(SocketUtils.CONNECTOR_KEY);
............
............
// Close the connection explicitly
SocketUtils.closeConnectorAndSession(connector, session);
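The "always release both" rule can also be enforced by the compiler rather than by convention: wrap the connector/session pair in an AutoCloseable so that try-with-resources releases both even when the body throws. A self-contained sketch, with stub interfaces standing in for Mina's IoConnector and IoSession (only the release methods are modeled):

```java
// Stub interfaces standing in for Mina's IoSession/IoConnector;
// only the release methods matter for this sketch.
interface Session { void close(); }
interface Connector { void dispose(); }

// Holding both resources behind one AutoCloseable means a single
// try-with-resources statement releases the pair in the right order.
class Connection implements AutoCloseable {
    final Connector connector;
    final Session session;

    Connection(Connector connector, Session session) {
        this.connector = connector;
        this.session = session;
    }

    @Override
    public void close() {
        // Dispose the connector even if closing the session throws
        try {
            session.close();
        } finally {
            connector.dispose();
        }
    }
}

public class ConnectionDemo {
    public static void main(String[] args) {
        final boolean[] released = new boolean[2];
        try (Connection conn = new Connection(
                () -> released[0] = true,    // connector.dispose()
                () -> released[1] = true)) { // session.close()
            // ... use conn.session here ...
        }
        System.out.println("session closed: " + released[1]
                + ", connector disposed: " + released[0]);
    }
}
```

With this pattern the call sites can no longer "forget" the connector: close() always runs when the try block exits.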
At the same time, one more entry was added to /etc/security/limits.conf (whether it matters is unverified):
* hard nofile 65536
# Added during the second round of fixes
# kevin.xie added, fixed 'too many open file' bug, limit open max files 1024, 2011-10-24
* - nofile 102400

# Added during this (third) round - probably unnecessary, but never verified and not worth removing
# kevin.xie added, fixed 'too many open file' bug', 2012-01-04
* soft nofile 65536
* hard nofile 65536