We have been running a Red5 server for quite some time, and as our application grew more popular, we started seeing errors we had never seen before: a bunch of "Too many open files" exceptions in the logs, and most users could no longer connect to our application.
The issue has been discussed several times on the Red5 mailing list, but most of it is scattered across several threads, and some of the content is only available in threads about other applications (the Nginx forums have a lot of valuable information about these issues). So I decided to gather all my findings in one nice blog article that would (hopefully) become the definitive guide to solving this issue.
If you are having this issue, you are almost certainly running Red5 on a Linux system. First of all, you need to know that on any Unix system (including Linux), "everything is a file". This is especially true of network connections. Therefore, when you read the error message "Too many open files", the real problem is more likely "Too many open connections"! This message comes from the operating system to tell us that our application has opened an unusually large number of connections. The Linux kernel has several safeguards to prevent an application from slowing down the system by opening too many file descriptors. In the case of a Red5 server, having a lot of connections is perfectly normal, so we will simply have to raise that limit.
There are 2 limits in Linux: a global, system-wide limit and a per-process limit.
The global limit can be read using the command:
cat /proc/sys/fs/file-nr
This will return a set of 3 numbers like this:
7072 0 796972
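These three fields are, in order, the number of allocated file handles, the number of allocated but unused handles, and the system-wide maximum. A minimal sketch that splits them into named variables:

```shell
# /proc/sys/fs/file-nr holds three whitespace-separated counters:
# allocated handles, allocated-but-unused handles, and the global maximum.
read allocated unused max < /proc/sys/fs/file-nr
echo "allocated=$allocated unused=$unused max=$max"
```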
By default, on Ubuntu systems, the maximum number of file descriptors is 294180. This should be enough for most of your needs with Red5.
If for some reason the value is lower and you want to raise it, edit the file /etc/sysctl.conf (as root).
Add/edit this line:
fs.file-max = 294180
You will need to run sysctl -p as root (or reboot) for the change to be taken into account.
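To apply the new value to the running kernel without rebooting, sysctl can push it directly (both commands assume root):

```shell
# Re-read /etc/sysctl.conf and apply its values to the running kernel...
sysctl -p
# ...or set the value directly, bypassing the file:
sysctl -w fs.file-max=294180
# Verify the value currently in effect:
cat /proc/sys/fs/file-max
```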
There is a second, per-process limit on the number of files a single application can open.
To know the value of this limit, run:
ulimit -n
On a default Ubuntu instance, the per-process limit is set to 1024. This is particularly low, as it means that Red5 will not be able to serve more than 1024 concurrent users (in practice even fewer, since Red5 also needs descriptors for its own jars, logs and listening sockets).
You can solve this in 2 ways:
Edit file /etc/security/limits.conf
In this file, you should add these 4 lines:
* hard nofile 65536
* soft nofile 65536
root hard nofile 65536
root soft nofile 65536
Let's explain what this does. The first column is the scope: * means every user on the system except root, which is why root gets its own explicit lines. The soft limit is the value actually enforced; a process may raise its own soft limit, but only up to the hard limit, which acts as a ceiling. So with those 4 lines, any application from any user on the system can open up to 65536 files, which should be enough for any Red5 application (as anyone knows, "64k should be enough for anyone" :) ).
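After logging out and back in, you can check that both values are in effect (ulimit -S reads the soft limit, -H the hard one):

```shell
# Soft limit: the value actually enforced ("Too many open files" fires here).
ulimit -Sn
# Hard limit: the ceiling a non-root user cannot raise the soft limit above.
ulimit -Hn
```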
If you don't want to play with configuration files, you can also temporarily raise the limit just before running your application. Just use:
ulimit -n 65536
This will set both the soft and hard limits to 65536 for the rest of your shell's life, and for every process started from it. However, a non-root user cannot raise the limit above the hard limit defined in /etc/security/limits.conf.
Tip: put this line near the top of the Red5 startup script:
ulimit -n 65536
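If you would rather not edit the script that ships with Red5, a small wrapper does the same job. This is only a sketch, and the /opt/red5 path is an assumption — adjust it to your own installation:

```shell
#!/bin/sh
# Hypothetical wrapper: raise the per-process descriptor limit, then hand
# control to the real Red5 startup script so the JVM inherits the new limit.
# The path below is an assumption -- point it at your own install.
ulimit -n 65536
exec /opt/red5/red5.sh
```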
On several websites, you will find that if your system uses PAM (the files under /etc/pam.d), there are additional steps to perform for the limits to be taken into account. The reason is that /etc/security/limits.conf is not read by the kernel itself: it is read and applied by the pam_limits module when a new session opens, so if that module is not enabled, the limits are silently ignored.
So you should edit /etc/pam.d/common-session
and /etc/pam.d/login
and add this line to both files:
session required pam_limits.so
You might be interested, at any given time, in knowing how many files are opened by a process. For this you first need to know the number of the process (the PID):
ps aux | grep java
Got the number? Now, run the command
lsof -p XXX | wc -l
where XXX is your process number. It will return the number of open files.
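If lsof is not installed, the same count can be read directly from /proc. A minimal sketch (the helper name is mine, not a standard command):

```shell
# Count the open file descriptors of a process by listing /proc/<pid>/fd.
count_fds() {
    ls "/proc/$1/fd" | wc -l
}

# Example: count the descriptors held by the current shell.
count_fds $$
```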