**This post has been updated. Read the new version, published July 2013.**
Nginx is a modern, open-source, high-performance web server, capable of easily handling a huge number of concurrent client connections (see the C10K problem). While both nginx and Apache HTTP Server can handle a large number of requests per second, nginx can handle a larger number of concurrent requests without the severe performance degradation and increased memory usage seen in Apache under the same conditions (nginx relies on non-blocking I/O and an event-driven model, while Apache relies on threads, which may block on I/O).
This makes nginx an excellent load balancer and reverse proxy — a single nginx server can handle a large number of incoming concurrent client connections and distribute them to a number of different upstream servers to actually handle the client requests. The client requests can all be for a single service or application (load balancing) or for a variety of different services and applications which live together on an internal network (reverse proxying).
Using nginx in this way, it is not difficult to create a readily scalable solution for a given web application to handle a huge number of simultaneous requests.
The rest of this post will contain nginx configuration examples which I have used when setting up and maintaining applications and network infrastructures for both Atomic Object and our clients. I hope that you’ll find them informative and helpful.
Let’s say we have a Ruby on Rails application running on Phusion Passenger on Apache HTTP Server. The application gets a lot of traffic, and the current Apache server can no longer keep up with the volume of client requests. We could beef up the Apache server a bit (more RAM, another CPU), or we could start to scale out with more servers.
We can set up nginx to handle all incoming requests and have nginx distribute the requests to an arbitrary number of upstream Apache servers for handling.
In nginx, the syntax for such a setup is quite concise:
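A minimal sketch of such a configuration follows; the upstream hostnames, ports, and domain name here are placeholders for whatever your internal network actually uses:

```nginx
# Pool of upstream Apache servers running the Rails application.
# Hostnames and ports are hypothetical.
upstream rails_app {
  server app1.internal:8080;
  server app2.internal:8080;
  server app3.internal:8080;
}

server {
  listen 80;
  server_name example.com;

  location / {
    # Distribute requests across the upstream pool (round-robin by default).
    proxy_pass http://rails_app;
  }
}
```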
In addition, let’s say we had a legacy PHP application which was running on a separate Apache server from the Rails application. We can now bring that legacy PHP application into the same internal network behind nginx.
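Bringing the PHP application behind the same nginx server is just a matter of adding another `upstream` block and a second `server` block for its domain. Again, hostnames and domains here are illustrative:

```nginx
# The legacy PHP application's Apache server (hypothetical hostname).
upstream legacy_php {
  server php1.internal:8080;
}

server {
  listen 80;
  server_name legacy.example.com;

  location / {
    proxy_pass http://legacy_php;
  }
}
```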
As you can see, nginx configuration syntax is quite simple. In just a few lines, we have completely configured nginx as a load balancer and reverse proxy.
Often, upstream servers need certain information about the original client request. This information can be passed upstream by setting the appropriate proxy headers (more can be set, see the documentation):
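A common set of proxy headers looks like the following (assuming the `rails_app` upstream sketched earlier):

```nginx
location / {
  # Preserve the Host header the client originally sent.
  proxy_set_header Host $host;
  # Pass the client's real IP address upstream.
  proxy_set_header X-Real-IP $remote_addr;
  # Append the client IP to any existing X-Forwarded-For chain.
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

  proxy_pass http://rails_app;
}
```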
We can also easily tweak our setup based on the servers in our internal network. If one of our upstream Apache HTTP servers for our Rails application were twice as powerful, we could tell nginx to use it more frequently:
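This is done with the `weight` parameter on the `server` directive; a sketch, with the more powerful server given twice the share of requests:

```nginx
upstream rails_app {
  # app1 receives roughly twice as many requests as the others.
  server app1.internal:8080 weight=2;
  server app2.internal:8080;
  server app3.internal:8080;
}
```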
If we wanted only a single server to receive requests, but had a backup server available in the event the primary failed, we could tell nginx to handle it:
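The `backup` parameter marks a server that receives requests only when the primary is unavailable:

```nginx
upstream rails_app {
  server app1.internal:8080;
  # Used only if app1 fails health checks on incoming requests.
  server app2.internal:8080 backup;
}
```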
If we wanted clients requests to always be handled by the same upstream (such as for sticky sessions which were not handled by the application directly), we could tell nginx to track client IP addresses, and send the client request to the appropriate upstream:
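The `ip_hash` directive enables this behavior — requests from the same client IP address are consistently routed to the same upstream:

```nginx
upstream rails_app {
  # Route each client IP to the same upstream server on every request.
  ip_hash;
  server app1.internal:8080;
  server app2.internal:8080;
}
```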
Let’s say we wanted to have nginx handle SSL connections as well. It is not difficult:
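A sketch of SSL termination at nginx, with hypothetical certificate paths — the upstream connection remains plain HTTP:

```nginx
server {
  listen 443 ssl;
  server_name example.com;

  # Paths to the certificate and key are placeholders.
  ssl_certificate     /etc/nginx/ssl/example.com.crt;
  ssl_certificate_key /etc/nginx/ssl/example.com.key;

  location / {
    proxy_pass http://rails_app;
  }
}
```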
We can also have nginx use SSL to communicate with the upstream servers:
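This only requires changing the `proxy_pass` scheme to `https` (assuming the upstream Apache servers are themselves configured to accept SSL connections):

```nginx
location / {
  # Encrypt the nginx-to-upstream connection as well.
  proxy_pass https://rails_app;
}
```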
Or, we can just let the upstreams know that the client made a connection to nginx over SSL:
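A common convention is to pass the scheme of the original request in an `X-Forwarded-Proto` header, which frameworks such as Rails can inspect:

```nginx
location / {
  # Tells the upstream whether the client connected over http or https.
  proxy_set_header X-Forwarded-Proto $scheme;
  proxy_pass http://rails_app;
}
```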
We can also have nginx use basic HTTP authentication (over HTTP or HTTPS):
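A sketch using `auth_basic`; the password file path is a placeholder, and such a file can be generated with the `htpasswd` utility shipped with Apache:

```nginx
location / {
  # Prompt string shown in the browser's authentication dialog.
  auth_basic "Restricted";
  # Hypothetical path to an htpasswd-format user file.
  auth_basic_user_file /etc/nginx/.htpasswd;

  proxy_pass http://rails_app;
}
```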
Hopefully these configuration snippets give a good indication of how versatile nginx can be, and how easily relatively complex web application architectures can be implemented. The nginx wiki provides a wealth of information about the available configuration parameters, and often includes helpful examples. Nearly all Linux distros have nginx as an available package; however, these packages are often quite out of date. Fortunately, it is not difficult to build nginx from source.