Nginx is a modern, open-source, high-performance web server. It can easily handle a huge number of concurrent client connections (see the C10K problem). While both nginx and the Apache HTTP Server can handle a large number of requests per second, nginx sustains a larger number of concurrent requests without the severe performance degradation and increased memory usage seen in Apache under the same conditions: nginx uses a non-blocking, event-driven I/O model, while Apache relies on threads which may block on I/O.
This makes nginx an excellent load balancer and reverse proxy: a single nginx server can accept a large number of incoming concurrent client connections and distribute them to any number of upstream servers, which actually handle the client requests. The client requests can all be for a single service or application (load balancing), or for a variety of different services and applications living together on an internal network (reverse proxying).
Using nginx in this way, it is not difficult to create a readily scalable setup that allows a given web application to handle a huge number of simultaneous requests.
The rest of this post will contain nginx configuration examples which I have used when setting up and maintaining applications and network infrastructures for both Atomic Object and our clients. I hope that you’ll find them informative and helpful.
Let’s say we have a Ruby on Rails application running on Phusion Passenger on the Apache HTTP Server. The application gets a lot of traffic, and the current Apache server can no longer keep up with the volume of client requests. We could beef up the Apache server a bit (more RAM, another CPU), or we could start to scale out with more servers.
We can set up nginx to handle all incoming requests and have it distribute them to an arbitrary number of upstream Apache servers for handling.
In nginx, the syntax for such a setup is quite concise:
upstream rails_application {
    # After one failed request, consider a server unavailable for 10 seconds
    server 10.0.0.1 max_fails=1 fail_timeout=10s;
    server 10.0.0.2 max_fails=1 fail_timeout=10s;
    # and so on: server 10.0.0.x;
}

server {
    listen 1.2.3.4:80;

    location / {
        proxy_pass http://rails_application;
    }
}
In addition, let’s say we have a legacy PHP application which runs on a separate Apache server from the Rails application. We can bring that legacy PHP application into the same internal network behind nginx.
upstream rails_application {
    server 10.0.0.1 max_fails=1 fail_timeout=10s;
    server 10.0.0.2 max_fails=1 fail_timeout=10s;
    # and so on: server 10.0.0.x;
}

upstream legacy_php_application {
    server 10.0.1.1 max_fails=1 fail_timeout=10s;
}

server {
    listen 1.2.3.4:80;
    server_name railsapp.example.com;

    location / {
        proxy_pass http://rails_application;
    }
}

server {
    listen 1.2.3.4:80;
    server_name phpapp.example.com;

    location / {
        proxy_pass http://legacy_php_application;
    }
}
As you can see, nginx configuration syntax is quite simple. In just a few lines, we have completely configured nginx as a load balancer and reverse proxy.
Often, upstream servers need certain information about the original client request. This information can be passed upstream by setting the appropriate proxy headers (more can be set; see the documentation):
server {
    listen 1.2.3.4:80;
    server_name railsapp.example.com;

    location / {
        # Preserve the original HTTP Host header
        proxy_set_header Host $host;
        # The IP address of the connecting client (which might itself be a proxy)
        proxy_set_header X-Real-IP $remote_addr;
        # Appends the client address to any incoming X-Forwarded-For header,
        # preserving the chain back to the 'origin' client
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

        proxy_pass http://rails_application;
    }
}
We can also easily tweak our setup based on the servers in our internal network. If one of the upstream Apache servers for our Rails application were twice as powerful, we could tell nginx to use it more frequently:
upstream rails_application {
    server 10.0.0.1 max_fails=1 fail_timeout=10s;
    server 10.0.0.2 max_fails=1 fail_timeout=10s;
    # weight=2 sends this server twice as many requests as the others
    server 10.0.0.3 max_fails=1 fail_timeout=10s weight=2;
    # and so on: server 10.0.0.x;
}
If we wanted only a single server to receive requests, but had a backup server available in the event the primary failed, we could tell nginx to handle it:
upstream rails_application {
    server 10.0.0.1 max_fails=1 fail_timeout=10s;
    # Only receives requests when the servers above are unavailable
    server 10.0.0.2 max_fails=1 fail_timeout=10s backup;
    # and so on: server 10.0.0.x;
}
If we wanted client requests to always be handled by the same upstream (such as for sticky sessions that are not handled by the application directly), we could tell nginx to hash client IP addresses and send each client’s requests to the same upstream:
upstream rails_application {
    ip_hash;
    server 10.0.0.1 max_fails=1 fail_timeout=10s;
    server 10.0.0.2 max_fails=1 fail_timeout=10s;
    # and so on: server 10.0.0.x;
}
Let’s say we wanted to have nginx handle SSL connections as well. It is not difficult:
server {
    listen 1.2.3.4:443;
    server_name railsapp.example.com;

    ssl on;
    # Path to the SSL certificate
    ssl_certificate cert.pem;
    # Path to the key for the SSL certificate
    ssl_certificate_key cert.key;

    location / {
        proxy_pass http://rails_application;
    }
}
We can also have nginx use SSL to communicate with the upstream servers:
upstream secure_rails_application {
    # Notice port 443
    server 10.0.0.1:443 max_fails=1 fail_timeout=10s;
    server 10.0.0.2:443 max_fails=1 fail_timeout=10s;
    # and so on: server 10.0.0.x;
}

server {
    listen 1.2.3.4:443;
    server_name railsapp.example.com;

    ssl on;
    ssl_certificate cert.pem;
    ssl_certificate_key cert.key;

    location / {
        # Notice the https://
        proxy_pass https://secure_rails_application;
    }
}
Or, we can just let the upstreams know that the client made a connection to nginx over SSL:
server {
    listen 1.2.3.4:443;
    server_name railsapp.example.com;

    ssl on;
    ssl_certificate cert.pem;
    ssl_certificate_key cert.key;

    location / {
        # Tells the upstream that the original request arrived over HTTPS
        proxy_set_header X-Forwarded-Proto https;
        proxy_pass http://rails_application;
    }
}
We can also have nginx use basic HTTP authentication (over HTTP or HTTPS):
server {
    listen 1.2.3.4:443;
    server_name railsapp.example.com;

    ssl on;
    ssl_certificate cert.pem;
    ssl_certificate_key cert.key;

    location / {
        # An arbitrary name for the authentication realm
        auth_basic "development";
        # Path to an Apache htpasswd file
        auth_basic_user_file htpasswd;
        # Prevents the HTTP Basic authorization header from being sent upstream
        proxy_set_header Authorization "";
        proxy_pass http://rails_application;
    }
}
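If you don’t already have an htpasswd file, one straightforward way to create one is with the htpasswd utility from Apache’s tool set. A minimal sketch, assuming a Debian-style system; the /etc/nginx/htpasswd path and someuser name are arbitrary choices of mine, and whatever path you use must match the auth_basic_user_file directive (a relative path like the one above is resolved against the nginx prefix):

    # apache2-utils provides htpasswd on Debian/Ubuntu
    # (the package is httpd-tools on Red Hat-style distros)
    sudo apt-get install apache2-utils

    # -c creates the file; omit -c when adding additional users
    htpasswd -c /etc/nginx/htpasswd someuser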
Hopefully these configuration snippets give a good indication of how versatile nginx can be, and how easily relatively complex web application architectures can be implemented. The nginx wiki provides a wealth of information about the available configuration parameters, and it often includes helpful examples. Nearly all Linux distros offer nginx as an available package; however, these packages are often quite out of date. Fortunately, it is not difficult to build nginx from source.
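As a rough sketch of a source build (the version number here is an assumption; check nginx.org for the current stable release, and enable the configure flags for the modules you actually need):

    # Download and unpack the source (substitute the current stable version)
    wget http://nginx.org/download/nginx-1.2.0.tar.gz
    tar -xzf nginx-1.2.0.tar.gz
    cd nginx-1.2.0

    # The SSL module is not built by default and is needed
    # for the HTTPS examples above
    ./configure --prefix=/usr/local/nginx --with-http_ssl_module
    make
    sudo make install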
What about Nginx and Unicorn? Why add another web server?
Hi Jurriann,
You can definitely use nginx and Unicorn instead of something like Passenger. My examples used Apache and Passenger as that is the combination that we’ve used most often here at Atomic Object.
The upstream configuration for something like nginx and Unicorn is quite simple as well; you can set up the upstreams with either a Unix socket (if running locally) or over TCP. An example of setting up an upstream using a Unix socket can be found on the nginx wiki page: http://wiki.nginx.org/HttpUpstreamModule#upstream
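As a minimal sketch (the socket path here is an assumption on my part and must match the listen path in your Unicorn configuration; fail_timeout=0 is the setting commonly recommended for a single local Unicorn master):

    upstream unicorn_application {
        # The socket path must match the `listen` directive in the Unicorn config
        server unix:/tmp/unicorn.sock fail_timeout=0;
    }

    server {
        listen 1.2.3.4:80;

        location / {
            proxy_pass http://unicorn_application;
        }
    }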
Have you tested Zen Load Balancer? You can manage your farms using a useful and easy web GUI. It can be installed on a virtual machine, and it supports an active-passive cluster mode for HA.
With regard to the upstreams using SSL, setting up a new SSL connection every time nginx communicates back to an upstream server is quite expensive (we’re talking performance reductions of 60%+). Is there a way to configure an upstream server to be accessible over a persistent SSL connection instead? I was thinking of a SOCKS tunnel, but hopefully a pure nginx solution exists.