If your application makes use of SSL certificates, then some decisions need to be made about how to use them with a load balancer.
In a simple single-server setup, the client's SSL connection is usually decrypted by the server receiving the request. Because a load balancer sits between a client and one or more servers, where the SSL connection is decrypted becomes a concern.
There are two main strategies.
SSL Termination is the practice of terminating/decrypting an SSL connection at the load balancer, and sending unencrypted connections to the backend servers.
This means the load balancer is responsible for decrypting SSL connections - a relatively slow and CPU-intensive process compared to accepting non-SSL requests.
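As a sketch, SSL Termination in HAProxy might look like the following. The certificate path, backend name, and server address here are placeholder assumptions, not taken from a real configuration:

```
frontend www-https
    # Terminate SSL here; HAProxy reads the key and certificate from one .pem file
    bind *:443 ssl crt /etc/ssl/xip.io/xip.io.pem
    default_backend web-servers

backend web-servers
    # After decryption, traffic to the backend is plain HTTP
    server web1 192.168.33.10:80 check
```

The `bind ... ssl crt` directive is what tells HAProxy to decrypt incoming connections itself.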
This is the opposite of SSL Pass-Through, which sends SSL connections directly to the proxied servers.
With SSL Pass-Through, the SSL connection is terminated at each proxied server, distributing the CPU load across those servers. However, you lose the ability to add or edit HTTP headers, as the connection is simply routed through the load balancer to the proxied servers. This means your application servers will lose the ability to get the X-Forwarded-* headers, which may include the client's IP address, port, and scheme used.
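Under SSL Termination, by contrast, the load balancer can read and modify the request, so it can add such headers. In HAProxy, the `option forwardfor` directive does this for the client IP; a minimal sketch (backend name and server address are hypothetical):

```
backend web-servers
    # Append the client's IP address as an X-Forwarded-For header.
    # Only possible when HAProxy can see the HTTP request, i.e. not
    # with SSL Pass-Through.
    option forwardfor
    server web1 192.168.33.10:80 check
```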
Which strategy you choose is up to you and your application needs. SSL Termination is the most common setup I've seen, but pass-through is arguably more secure.
There is a combination of the two strategies, where SSL connections are terminated at the load balancer, adjusted as needed, and then proxied off to the backend servers as a new SSL connection. This may provide the best of both security and ability to send the client's information. The trade off is more CPU power being used all-around, and a little more complexity in configuration.
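In HAProxy, this combined approach might look roughly like the following sketch. The paths and addresses are placeholder assumptions; note that `verify none` skips verification of the backend's certificate, which you likely would not want in production:

```
frontend www-https
    # Terminate the client's SSL connection at the load balancer
    bind *:443 ssl crt /etc/ssl/xip.io/xip.io.pem
    default_backend web-servers

backend web-servers
    # Re-encrypt: open a new SSL connection to the backend server
    server web1 192.168.33.10:443 ssl verify none check
```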
An older article of mine on the consequences and gotchas of using load balancers explains these issues (and more) as well.
We'll cover the most typical use case first - SSL Termination. As stated, we need to have the load balancer handle the SSL connection. This means having the SSL Certificate live on the load balancer server.
We saw how to create a self-signed certificate in a previous edition of SFH. We'll re-use that information for setting up a self-signed SSL certificate for HAProxy to use.
Keep in mind that for a production SSL Certificate (not a self-signed one), you won't need to generate or sign a certificate yourself - you'll just need to create a Certificate Signing Request (CSR) and pass that to whomever you purchase a certificate from.
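For example, generating a private key and a CSR non-interactively might look like this. The domain and file names are hypothetical; the `-subj` flag simply pre-fills the prompts you would otherwise answer interactively:

```shell
# Generate a 2048-bit private key (hypothetical file names and domain)
openssl genrsa -out example.com.key 2048

# Create a CSR from that key; -subj answers the prompts non-interactively
openssl req -new -key example.com.key -out example.com.csr \
    -subj "/C=US/ST=Connecticut/L=New Haven/O=SFH/CN=example.com"
```

You would then send the resulting `.csr` file to the certificate vendor and keep the `.key` file private.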
First, we'll create a self-signed certificate for *.xip.io
, which is handy for demonstration purposes, and lets us use the same certificate even when our server IP addresses change while testing locally. For example, if our local server exists at 192.168.33.10, but our virtual machine's IP later changes to 192.168.33.11, we don't need to re-create the self-signed certificate.
I use the xip.io service as it allows us to use a hostname rather than directly accessing the servers via an IP address, all without having to edit my computer's hosts file.
As this process is outlined in a past edition on SSL certificates, I'll simply show the steps to generate a self-signed certificate here:
$ sudo mkdir /etc/ssl/xip.io
$ sudo openssl genrsa -out /etc/ssl/xip.io/xip.io.key 1024
$ sudo openssl req -new -key /etc/ssl/xip.io/xip.io.key -out /etc/ssl/xip.io/xip.io.csr
> Country Name (2 letter code) [AU]:US
> State or Province Name (full name) [Some-State]:Connecticut
> Locality Name (eg, city) []:New Haven
> Organization Name (eg, company) [Internet Widgits Pty Ltd]:SFH
> Organizational Unit Name (eg, section) []:
> Common Name (e.g. server FQDN or YOUR name) []:*.xip.io
> Email Address []:
> Please enter the following 'extra' attributes to be sent with your certificate request
> A challenge password []:
> An optional company name []:
$ sudo openssl x509 -req -days 365 -in /etc/ssl/xip.io/xip.io.csr -signkey /etc/ssl/xip.io/xip.io.key -out /etc/
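One follow-up step worth noting: HAProxy expects the private key and certificate combined into a single .pem file. Assuming the signed certificate was written to a hypothetical path of /etc/ssl/xip.io/xip.io.crt, that might look like:

```
# Combine certificate and key into one .pem file for HAProxy
# (the .crt path here is an assumption based on the files above)
$ sudo cat /etc/ssl/xip.io/xip.io.crt /etc/ssl/xip.io/xip.io.key \
    | sudo tee /etc/ssl/xip.io/xip.io.pem > /dev/null
```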