While the concept of load balancing has been around for a while, using Nginx to do this is fairly new to me. Other common load balancers in use today are LVS, HAProxy, Perlbal and Pound. In this example, I am using 3 (ve) servers from Media Temple, each running on Ubuntu 9.10. To get started, log into the server you want to set up as the load balancer and install Nginx:
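The exact command isn't shown here, but on Ubuntu 9.10 Nginx is available in the standard package repositories, so something like the following should work:

```shell
sudo apt-get update
sudo apt-get install nginx
```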
Then we'll need to edit the default Nginx virtual host,
/etc/nginx/sites-available/default. In the two server directives under the upstream backend section, be sure to put in the IP addresses or hostnames of the backends you are balancing:
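A minimal sketch of that file, using 10.0.0.1 and 10.0.0.2 as placeholder backend addresses (substitute your own):

```nginx
# /etc/nginx/sites-available/default
upstream backend {
    server 10.0.0.1;
    server 10.0.0.2;
}

server {
    listen 80;

    location / {
        # proxy all requests to the upstream pool defined above
        proxy_pass http://backend;
    }
}
```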
This is the simplest configuration possible. After you've completed the proxy configuration, test and restart Nginx:
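On this vintage of Ubuntu, the init script is the usual way to restart:

```shell
# check the configuration for syntax errors, then restart
sudo nginx -t
sudo /etc/init.d/nginx restart
```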
At this point, requests hitting the load balancer will be distributed evenly across the upstream servers (round-robin is Nginx's default). The really nice thing is that if one of the upstream servers stops responding, the load balancer will automagically stop routing requests to it. So although the configuration still lists the unavailable server, Nginx sees that it is down and routes traffic to the remaining upstream servers. If all upstreams are down, Nginx halts the proxying and simply returns a 502 Bad Gateway error. This also makes it a very useful tool for balancing Mongrel instances for those running Rails.
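The failure detection can also be tuned per backend with the max_fails and fail_timeout parameters; the values below are just illustrative:

```nginx
upstream backend {
    # take a backend out of rotation for 30 seconds
    # after 3 failed connection attempts
    server 10.0.0.1 max_fails=3 fail_timeout=30s;
    server 10.0.0.2 max_fails=3 fail_timeout=30s;
}
```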
It's also worth noting that you can add as many backend nodes as you want; I'm just using two as an example. This makes a (ve) server an ideal choice: spin up a new (ve) and simply add it to the upstream pool. There are also options for forcing an uneven load distribution, such as the weight parameter (for example, server 10.0.0.1 weight=3; sends roughly three times the traffic to that backend). You can also use proxy_set_header to make sure the user's real IP address is logged instead of the load balancer's address:
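The original snippet isn't reproduced here; a typical version, using headers from Nginx's proxy module, would look like this (the backends must then be configured to log the X-Real-IP or X-Forwarded-For header):

```nginx
location / {
    proxy_pass http://backend;
    # pass the original client address through to the backends
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    # preserve the Host header the client sent
    proxy_set_header Host $host;
}
```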
Nginx provides many other configuration directives, so I recommend checking the official Nginx documentation on the NginxHttpUpstreamModule for more information. In the next post, we'll look at using a few file synchronization tools like Unison and rsync for data replication between backends.