NGINX is powerful, versatile open-source software that excels as an HTTP server, reverse proxy, and IMAP/POP3 proxy server. Renowned for its high performance, stability, rich feature set, straightforward configuration, and minimal resource consumption, NGINX is a favorite choice for handling web traffic at any scale.

NGINX was designed from the ground up to tackle the C10K problem – the challenge of handling ten thousand concurrent connections. Unlike traditional server architectures that dedicate a thread or process to each connection, NGINX employs an event-driven (asynchronous) architecture. This approach allows NGINX to manage a large number of simultaneous requests efficiently, using a small and predictable amount of memory. Whether you’re managing a small VPS or a large cluster of servers, NGINX can scale to meet your needs.

Setting up NGINX for HTTP Load Balancing

This guide will walk you through setting up NGINX as a load balancer for HTTP traffic. We’ll assume you have three servers running your web application that you want to distribute traffic across.

Our Setup:

  • nginx-server: nginx.server.com (This is the NGINX load balancer)
  • Server1: node1.application.com (Web application server 1)
  • Server2: node2.application.com (Web application server 2)
  • Server3: node3.application.com (Web application server 3)

This guide focuses on installation on RHEL/CentOS 6. Adapt the instructions as needed for other distributions.

Step 1: Create an NGINX Repository

You can either create a dedicated NGINX repository or utilize the EPEL repository. The dedicated repo is often more up-to-date.

Option 1: Create a Dedicated NGINX Repository

Create a new repository file:

sudo vim /etc/yum.repos.d/nginx.repo

Add whichever of the following blocks matches your distribution (include only one; note that gpgcheck=0 skips package signature verification):

# CentOS 6
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/$releasever/$basearch/
gpgcheck=0
enabled=1

# RHEL 6
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/rhel/$releasever/$basearch/
gpgcheck=0
enabled=1

Option 2: Use the EPEL Repository

If you already have the EPEL repository enabled, you can skip this step. Otherwise, install it:

wget http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
sudo rpm -ivh epel-release-6-8.noarch.rpm

Important Note for EPEL: If you encounter errors during the yum install process (especially with heartbeat), you might need to change https to http in the epel.repo file:

sudo vim /etc/yum.repos.d/epel.repo

Step 2: Install NGINX

Now, install NGINX using yum:

sudo yum install nginx

Step 3: Configure NGINX for Load Balancing

This is the core of the configuration. We’ll define an upstream block to list our backend servers and then configure a server block to proxy requests to those servers.

Open the NGINX configuration file. The location may vary slightly depending on your installation, but it’s typically located at /etc/nginx/nginx.conf.

sudo vim /etc/nginx/nginx.conf

Replace the contents of the http block (or add a new one if it doesn’t exist) with the following:

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;
    keepalive_timeout  65;
    #gzip  on;

    upstream myappredirect {
        server node1.application.com;
        server node2.application.com;
        server node3.application.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://myappredirect;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
}
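Alternatively, if you installed NGINX from the nginx.org package, the stock /etc/nginx/nginx.conf already contains an http block with an include /etc/nginx/conf.d/*.conf; directive. Rather than replacing the whole http block, you can keep the stock file and drop just the load-balancing pieces into a new file under conf.d (the filename loadbalancer.conf below is arbitrary):

```nginx
# /etc/nginx/conf.d/loadbalancer.conf
# Picked up automatically by the stock "include /etc/nginx/conf.d/*.conf;"
# directive inside the http block of nginx.conf.

upstream myappredirect {
    server node1.application.com;
    server node2.application.com;
    server node3.application.com;
}

server {
    listen 80;

    location / {
        proxy_pass http://myappredirect;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

If you take this route, check whether the package also installed /etc/nginx/conf.d/default.conf with its own server listening on port 80; if so, move it aside so the two server blocks don’t compete for the same port.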

Explanation:

  • upstream myappredirect: This block defines a group of servers that NGINX will load balance across. Replace node1.application.com, node2.application.com, and node3.application.com with the actual addresses of your web application servers.
  • server { listen 80; ... }: This block defines a virtual server that listens on port 80 (the standard HTTP port).
  • location / { ... }: This block defines how to handle requests to the root path (/).
  • proxy_pass http://myappredirect;: This is the key line! It tells NGINX to forward requests to the myappredirect upstream group. NGINX will use a round-robin algorithm by default to distribute requests evenly across the servers in the group.
  • proxy_set_header ...;: These lines are crucial for passing information about the original request to the backend servers. Without these, your application may not function correctly. They forward the original host, client IP, and other useful information.
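If your backend servers are not equally powerful, you can bias the default round-robin algorithm with per-server weights. The weight values below are illustrative, not a recommendation:

```nginx
upstream myappredirect {
    server node1.application.com weight=3;  # receives roughly 3 of every 5 requests
    server node2.application.com;           # weight defaults to 1
    server node3.application.com;
}
```

A server with weight=3 is picked three times as often as a server with the default weight of 1, which is useful when one machine has noticeably more capacity than the others.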

Step 4: Start NGINX

Now that you’ve configured NGINX, start the service:

sudo service nginx start

If NGINX is already running, test the configuration for syntax errors first, then reload to apply the changes:

sudo nginx -t
sudo service nginx reload

Step 5: Testing

Open your web browser and navigate to the IP address or domain name of your NGINX server (e.g., http://nginx.server.com). Refresh the page a few times. You should see requests being distributed across your backend servers. You can verify this by checking the logs on your backend servers or by displaying the server hostname in your application.

By default, NGINX uses a round-robin algorithm for load balancing. This means it will send requests to each server in the upstream block in sequential order.

How it Works:

All inbound requests to NGINX (nginx.server.com) are received by the NGINX load balancer. NGINX then distributes the traffic to the backend servers according to the configured load balancing algorithm (round-robin by default).

[Clients]-->[NGINX (nginx.server.com)]--->[HOST1 (node1.application.com)]
                                      \-->[HOST2 (node2.application.com)]
                                      \-->[HOST3 (node3.application.com)]

Further Configuration and Considerations

  • Load Balancing Algorithms: NGINX offers several load balancing algorithms besides round-robin, including least connections, IP hash, and more. Refer to the NGINX documentation for details.
  • Health Checks: Implement health checks to automatically remove unhealthy servers from the load balancing pool. This ensures that traffic is only sent to servers that are functioning correctly.
  • SSL/TLS: Configure SSL/TLS on NGINX to encrypt traffic between the client and the load balancer.
  • Caching: Use NGINX’s caching capabilities to improve performance by caching frequently accessed content.
  • Monitoring: Monitor your NGINX server and backend servers to identify and resolve performance issues.
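As a sketch of the first two points, open-source NGINX supports alternative balancing directives in the upstream block, plus passive health checks via the max_fails and fail_timeout parameters. The values below are illustrative:

```nginx
upstream myappredirect {
    least_conn;   # send each request to the backend with the fewest active connections
    # ip_hash;    # alternative: pin each client IP to one backend (session persistence)

    # Passive health check: after 3 failed attempts, mark the server
    # unavailable for 30s before trying it again.
    server node1.application.com max_fails=3 fail_timeout=30s;
    server node2.application.com max_fails=3 fail_timeout=30s;
    server node3.application.com max_fails=3 fail_timeout=30s;
}
```

Note that these are passive checks: a backend is only marked down after real client requests to it fail. Active health checks (the health_check directive, which probes backends out-of-band) are a commercial NGINX Plus feature.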