Load Balancing and Reverse Proxying with Nginx, Updated

Nginx is a modern, open-source, high-performance web server. It is capable of handling a huge number of concurrent connections with ease (see the C10K problem). Over a year ago, I wrote about using nginx as a load balancer and reverse proxy. Since then, my understanding of nginx and of best practices for configuring it has progressed significantly, so I've decided to refresh that post with some of this additional knowledge.

As I explained in my previous post, nginx relies on a non-blocking, event-driven I/O model, which allows it to handle a large number of incoming concurrent client connections with ease. This makes it an excellent choice as a load balancer and reverse proxy. In contrast, the traditional Apache HTTP Server model relies on a limited number of synchronous workers, which may block on I/O.

Nginx running on a single server handles incoming client requests and distributes them to a pool of upstream application servers that actually fulfill the requests. The pool of application servers can be easily scaled up or down to handle changes in traffic levels. This flexibility provides a way to scale the capacity of almost any web application quite easily.

Following are some specific scenarios and nginx configuration examples that I have used when setting up and maintaining applications and network infrastructure for both Atomic Object and our clients. They lead up to a fairly complete, practical configuration that I've used recently.

Simple Scenario

Situation: We have a JRuby application running on Apache Tomcat. The application gets a significant amount of traffic and is no longer performing as well as required. We have hit the limit of how much additional RAM/CPU resources we can easily add to the server. Instead, we decide to scale horizontally and add more servers.

First, we set up a new nginx server to accept incoming connections and distribute them to our Tomcat application servers.

upstream jruby_application  {
  server 10.0.0.2:8080 max_fails=1 fail_timeout=10s;
  server 10.0.0.3:8080 max_fails=1 fail_timeout=10s;
  # and so on: server 10.0.0.x;
}
 
server {
  listen 80;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @proxy;
 
  location @proxy {
    proxy_pass  http://jruby_application;
  }
}

Explanation: We set nginx to listen on port 80. When a request comes in, nginx first checks whether the file /var/www/html/maintenance.html exists. If it does, nginx serves it. Otherwise, it continues down the list of try_files checks to see if the request can be handled locally. If the request cannot be served locally, nginx passes it to the @proxy named location, which proxies the request to the upstream JRuby application servers running on port 8080.

This is extremely powerful as it allows nginx to intercept certain requests before they are proxied to upstream applications. This is useful for static assets such as images and files. Additionally, the check for a maintenance page allows us to selectively put the entire site in maintenance mode simply by creating a file.
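To make the static-asset case concrete, here is a minimal sketch of a location block that serves assets directly from nginx with far-future cache headers, so those requests never reach the upstream application. The /assets/ path and 30-day expiry are illustrative assumptions, not part of the configuration above:

```nginx
# Hypothetical example: serve static assets straight from disk with
# long-lived caching. The /assets/ prefix is an assumed convention.
location /assets/ {
  root /var/www/html;
  expires 30d;
  add_header Cache-Control public;
}
```

Maintenance mode can then be toggled by simply creating or removing /var/www/html/maintenance.html on the nginx server, with no change to the upstream applications.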

Scenario with HTTP Headers

Situation: Our new cluster of servers has been running great. However, we notice that the IP addresses of clients aren’t being properly passed to the JRuby application. Instead, all clients have an IP address of 10.0.0.1.

Fortunately, all that we need to do is to tell nginx to set or pass on certain HTTP headers to the upstream JRuby application.

server {
  listen 80;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @proxy;
 
  location @proxy {
 
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
 
    proxy_pass  http://jruby_application;
  }
}

Explanation: By adding specific HTTP headers via the proxy_set_header directive, we can ensure that the JRuby application has access to the correct information about the client connection. In this instance, X-Real-IP is set to the remote address of the client. However, the client itself may just be a proxy server forwarding a request on behalf of an end user (common with certain CDNs). The $proxy_add_x_forwarded_for variable appends the connecting client's address to any X-Forwarded-For header already present in the request, so the upstream application can recover the ultimate client (end user) IP address from the chain.
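A related situation arises when nginx itself sits behind another proxy or CDN: nginx's own $remote_addr would then be the proxy's address. The ngx_http_realip_module can restore the true client address from a trusted header. A minimal sketch, assuming the upstream proxy lives at 10.0.0.1 (an assumed address for illustration):

```nginx
# Trust the X-Real-IP header only when the connection comes from the
# proxy at 10.0.0.1 (assumed), so $remote_addr reflects the real client.
set_real_ip_from 10.0.0.1;
real_ip_header X-Real-IP;
```

With this in place, nginx's logs and the headers it forwards upstream all reflect the original client rather than the intermediate proxy.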

Scenario with SSL/HTTPS

Situation: Our JRuby application now needs to be able to secure client requests. It would be a pain to set up SSL on Apache Tomcat, so we decide to use nginx as the SSL endpoint. In our current setup, all connections to upstream JRuby application servers are on a private network, and so do not need to be separately secured.

server {
  listen 443;
  server_name jrubyapp.example.com;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @proxy;
 
  ssl on;
  ssl_certificate cert.crt;
  ssl_certificate_key cert.key;
 
  location @proxy {
 
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
 
    proxy_pass  http://jruby_application;
  }
}

Explanation: We set nginx to listen on port 443 (HTTPS), specify that nginx should enable the SSL engine, and use the provided SSL certificate and SSL certificate key. Nginx does not have a separate directive for providing an SSL chain certificate (as Apache HTTP Server does), so any chain certificates need to be appended to the primary certificate (e.g. cat newcert.crt chain.crt > cert.crt). Additionally, we set the X-Forwarded-Proto HTTP header so that the JRuby application can detect that the client connected securely over HTTPS.
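As an aside, more recent nginx releases deprecate the standalone ssl on; directive in favor of an ssl parameter on listen, and it is worth pinning the allowed protocols explicitly. A sketch of the equivalent modern form (the protocol list is an assumption that depends on your nginx and OpenSSL versions):

```nginx
server {
  # On newer nginx versions, the ssl parameter on listen replaces "ssl on;".
  listen 443 ssl;
  server_name jrubyapp.example.com;

  ssl_certificate cert.crt;        # primary cert with any chain appended
  ssl_certificate_key cert.key;

  # Restrict to modern protocols; adjust to what your build supports.
  ssl_protocols TLSv1.2 TLSv1.3;
}
```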

Scenario Redirecting all Traffic to HTTPS

Situation: We decide that all traffic to our JRuby application should be secured with SSL. This tends to be the case for non-public-facing websites, such as particular client application instances. While it would be better if connections were made over SSL in the first place, we can provide a redirect as a convenience to users where appropriate.

server {
  listen 80;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @secure;
 
  location @secure {
    rewrite ^ https://jrubyapp.example.com permanent;
  }
 
}
server {
  listen 443;
  server_name jrubyapp.example.com;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @proxy;
 
  ssl on;
  ssl_certificate cert.crt;
  ssl_certificate_key cert.key;
 
  location @proxy {
 
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
 
    proxy_pass  http://jruby_application;
  }
}

Explanation: We set nginx to listen on both port 80 (HTTP) and port 443 (HTTPS). On port 80, nginx first checks for the /var/www/html/maintenance.html page, and then checks if it can otherwise handle the request locally. If it cannot, it returns a redirect to HTTPS. The permanent keyword tells nginx to respond with an HTTP 301 (Moved Permanently) as opposed to an HTTP 302 (Moved Temporarily). Note that the server block for port 80 has no means of actually proxying the request.
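Note that the rewrite above sends every insecure request to the site root. If you would rather have the redirect preserve the original path and query string, a return directive with $request_uri is a common alternative (a sketch, not part of the configuration above):

```nginx
# Redirect to HTTPS while preserving the requested path and query string.
location @secure {
  return 301 https://jrubyapp.example.com$request_uri;
}
```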

Scenario with Proxy Redirect

Situation: Our upstream JRuby application does not always respond to clients with the appropriate URI. For instance, the JRuby application occasionally responds with a redirect, but specifies a protocol of HTTP instead of HTTPS as the JRuby application itself is running as HTTP (nginx is handling the SSL termination). We can intercept any of these responses and ensure that the client is redirected properly.

server {
  listen 443;
  server_name jrubyapp.example.com;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @proxy;
 
  ssl on;
  ssl_certificate cert.crt;
  ssl_certificate_key cert.key;
 
  location @proxy {
 
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
 
    proxy_pass  http://jruby_application;
    proxy_redirect http://jrubyapp.example.com https://jrubyapp.example.com;
  }
}

Explanation: The proxy_redirect directive tells nginx to look for instances of http://jrubyapp.example.com in the Location (and Refresh) headers of responses from the upstream JRuby application. If one is found, it is rewritten to https://jrubyapp.example.com before the response is sent to the client. In this way, we ensure that any redirects or callbacks generated by the JRuby application are properly constructed before they reach a client.

Failure to handle cases such as these can result in nasty redirect loops, or cause clients to attempt HTTP connections to HTTPS ports, or vice versa. Failing to set up proper proxy_redirect directives when they are needed is one of the most common instances of 'under-configuring' nginx.

The Final Configuration

Situation: After several iterations, we finally have an nginx configuration to load balance and reverse proxy for our JRuby application. Our nginx server can serve assets locally, display a maintenance page, proxy to upstreams, forward HTTP headers, and handle both HTTP and HTTPS requests.

upstream jruby_application  {
  server 10.0.0.2:8080 max_fails=1 fail_timeout=10s;
  server 10.0.0.3:8080 max_fails=1 fail_timeout=10s;
  server 10.0.0.4:8080 max_fails=1 fail_timeout=10s;
  # and so on: server 10.0.0.x;
}
server {
  listen 80;
 
  access_log  /var/log/nginx/jrubyapp.example.com.access.log;
  error_log  /var/log/nginx/jrubyapp.example.com.error.log;
 
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @secure;
 
  location @secure {
    rewrite ^ https://jrubyapp.example.com permanent;
  }
 
}
server {
  listen 443;
  server_name jrubyapp.example.com someotherurl.com;
 
  access_log  /var/log/nginx/jrubyapp.example.com.ssl.access.log;
  error_log  /var/log/nginx/jrubyapp.example.com.ssl.error.log;
 
  root /var/www/html;
  try_files /maintenance.html $uri $uri/index.html $uri.html @proxy;
 
  if ($host != 'jrubyapp.example.com') {
    rewrite ^/(.*)$ https://jrubyapp.example.com/$1 permanent;
  }
 
  ssl on;
  ssl_certificate cert.crt;
  ssl_certificate_key cert.key;
 
  location @proxy {
 
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto https;
 
    proxy_pass  http://jruby_application;
    proxy_redirect http://jrubyapp.example.com https://jrubyapp.example.com;
  }
 
   client_max_body_size 64M;
}

Explanation: The above configuration is nearly identical to the one built up through each of the scenarios. Notably, we've added a client_max_body_size directive to allow clients to upload large files through the proxy, access_log and error_log directives so that we can properly log access requests and errors, and a check to be sure that a client is using the correct 'canonical URL' when accessing the application.
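As traffic grows, the upstream block itself can be tuned further. For example, nginx supports per-server weights and designated backup servers; the parameters below are hypothetical additions for illustration, not part of the configuration above:

```nginx
upstream jruby_application {
  # Send roughly twice as many requests to the more powerful first server.
  server 10.0.0.2:8080 weight=2 max_fails=1 fail_timeout=10s;
  server 10.0.0.3:8080 max_fails=1 fail_timeout=10s;
  # Only receives traffic when all primary servers are unavailable.
  server 10.0.0.5:8080 backup;
}
```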

Justin Kulesza

Justin is a DevOps practitioner at Atomic Object. He runs servers, troubleshoots the network, deploys apps, fixes bugs, manages backups, monitors monitoring, and does all manner of general problem solving for Atomic Object and our customers. He often works with configuration management tools like Chef and Puppet, and loves working with Linux.
