Setting up an Nginx server to load balance several socket.io processes

An important challenge when setting out to use Socket.io in a heavily loaded environment is making sure it scales well. As you may know, it is fairly simple to set up a cluster of Node.js socket servers using RedisStore (for more information, check out this example).

Once I had N processes (each listening on a different port on a single machine), the main challenge was setting up a good load balancer in front of them. My first attempt a few months ago was not very successful and left me a bit frustrated. I focused on Nginx since it seems to be one of the best software load balancers, and I really wanted to check it out. Unfortunately, at the time it had no built-in support for proxying WebSocket connections, so all my clients fell back to XHR polling, which was not cool :(. Other tasks forced me to leave the problem unsolved...

Today I finally got a chance to get back to it. The first thing I noticed is that as of v1.3.13, Nginx supports WebSockets (http://nginx.com/news/nginx-websockets.html), so no more home-made solutions and hacks!

After some fiddling I got it to work. This might have been even easier if I had any experience configuring Nginx and knew what I was doing :). Bottom line - it's working, and here is the relevant configuration added to the Nginx config:

....

# inside the http directive
upstream my.server.com {
    # 4 instances of NodeJS
    server my.server.com:8881;
    server my.server.com:8882;
    server my.server.com:8883;
    server my.server.com:8884;
}

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

server {
    listen 80;
    server_name my.server.com;

    location / {
        proxy_pass http://my.server.com;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $connection_upgrade;
    }
}
....

 

That's it - just replace my.server.com with your own domain and you should be done!
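One caveat worth adding: socket.io's fallback transports (XHR-polling and friends) need every request from a given client to reach the same backend process. A simple way to get that affinity with Nginx is the ip_hash directive in the upstream block - a sketch of the same upstream as above (note that all traffic from a single client IP, e.g. one load-test machine, will then land on a single node):

```nginx
# inside the http directive
upstream my.server.com {
    ip_hash;    # pin each client IP to one backend

    server my.server.com:8881;
    server my.server.com:8882;
    server my.server.com:8883;
    server my.server.com:8884;
}
```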

 

**** UPDATE ****

To test the configuration above, I created a Java client that connects via socket to emulate heavy load on the server. I used this Java implementation of the socket.io client to do the actual communication.
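The linked library drives the actual socket.io handshake, but the connection-cap symptom shows up even with a plain TCP flood. Here is a minimal sketch of that kind of client using only java.net (the class name and defaults are mine, not from the original test):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.ArrayList;
import java.util.List;

public class ConnectionFlood {

    // Try to open n concurrent TCP connections; return how many succeeded
    // before the server stopped accepting (e.g. "Connection reset").
    static int openConnections(String host, int port, int n) {
        List<Socket> sockets = new ArrayList<>();
        int ok = 0;
        for (int i = 0; i < n; i++) {
            try {
                Socket s = new Socket();
                s.connect(new InetSocketAddress(host, port), 2000);
                sockets.add(s);
                ok++;
            } catch (IOException e) {
                break; // hit the server's connection cap
            }
        }
        // Release everything once we have our count.
        for (Socket s : sockets) {
            try { s.close(); } catch (IOException ignored) {}
        }
        return ok;
    }

    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "localhost";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 80;
        int target = args.length > 2 ? Integer.parseInt(args[2]) : 1000;
        System.out.println("Opened " + openConnections(host, port, target)
                + " of " + target + " connections");
    }
}
```

Pointing this at the Nginx front end and raising the target past ~511 is enough to reproduce the cap described below.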

I stumbled upon some weird behaviors (on a very strong Win7 development machine):

  1. No matter how many nodes I created, the Nginx server stops accepting connections at ~511 concurrent client threads. The client gets java.net.SocketException: Connection reset.
  2. The connections are not balanced correctly - in a 4-node configuration, 2 nodes get ~2 connections while the other 2 get ~250.

I will investigate further and keep updating.

**** UPDATE 2 ****

Further investigation suggests that this is related to:

  1. Nginx on Windows sucks :)
  2. In a reverse proxy configuration, max clients = worker_processes * worker_connections / 4 (a browser opens 2 connections, and for each one the server holds a connection to the browser plus a connection to the backend - hence 4)

Per point 2, this explains why I only get to ~511 concurrent connections (my Java client opens a single connection per client, so the effective divisor is 2, not 4). Per point 1, I haven't been able to increase the number by playing with the configuration.
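If the 511 cap really is the worker_connections limit, raising it (and the worker count) should move the ceiling. A sketch of the relevant top-level directives - the values below are illustrative, not tuned:

```nginx
# nginx.conf, top level
worker_processes  4;

events {
    worker_connections  4096;
}
```

With a single-connection client, the formula above would then allow roughly 4 * 4096 / 2 = 8192 concurrent clients.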

Testing the configuration on Linux should give a clearer picture.

Javascript Architect

Frontend Group