Load balance Tomcat with Nginx and store sessions in Redis

An awkward title, but that’s exactly what we’re going to do. For some time I had been looking for a way to push code to production systems with zero downtime and zero impact on active users. Surprisingly, the solution took very little time to implement. At a high level, Nginx load balances two instances of Tomcat, and Tomcat stores its sessions in Redis. Nginx is configured without sticky sessions, so a request can go to any node in the cluster. When we need to push new code, we simply take down one Tomcat instance; all current users get routed to the remaining active instance. Since session data is externalized in Redis, active users are not impacted. Once the inactive instance has been updated, bring it back up and repeat for the other node.

We’ll start with Nginx:
[raoul@raoul-wp ~]$ sudo rpm -ivh nginx-1.4.2-1.el6.ngx.x86_64.rpm

Edit /etc/nginx/nginx.conf and add the upstream block shown below inside the http block (only the upstream section is new; the include and default_type lines are already there):

http {
    upstream tomcat {
        server localhost:8080;
        server localhost:8081;
    }

    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

Update /etc/nginx/conf.d/default.conf and replace the location section with this:

location / {
    proxy_pass http://tomcat;
}
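If the application behind the proxy needs the original host and client address, the location block can be extended with standard nginx proxy headers (optional; this is a common pattern, not something the basic setup requires):

```nginx
location / {
    proxy_pass          http://tomcat;
    # Pass the original request details through to Tomcat
    proxy_set_header    Host $host;
    proxy_set_header    X-Real-IP $remote_addr;
    proxy_set_header    X-Forwarded-For $proxy_add_x_forwarded_for;
}
```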

Restart nginx:
[raoul@raoul-wp nginx]$ sudo service nginx restart

Next, install two instances of Tomcat. Change the server ports of the second instance so that they do not conflict with the first. At this point, if you enter http://localhost in your browser, you will be taken to the default Tomcat page. However, since we have not set up sticky sessions, every request gets load balanced round-robin, which effectively means a new session is created per request. You can easily see this behavior using the built-in Tomcat examples: navigate to http://localhost/examples/servlets/servlet/SessionExample, refresh the page a few times, and notice the session ID changing each time. Let us fix this.
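For reference, the port changes for the second instance go in its conf/server.xml. The numbers below are only an example — any free ports will do, as long as the shutdown, HTTP, and AJP ports differ from the first instance’s:

```xml
<!-- Instance 2: conf/server.xml (example ports; instance 1 keeps the defaults 8005/8080/8009) -->
<Server port="8006" shutdown="SHUTDOWN">
  ...
  <!-- HTTP connector: must match the second "server" line in the nginx upstream -->
  <Connector port="8081" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" />
  <!-- AJP connector, if enabled -->
  <Connector port="8010" protocol="AJP/1.3" redirectPort="8443" />
  ...
</Server>
```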

Download and install Redis. There is good documentation at http://redis.io/download, so I’m not going into the details. Start the server and use the client program to check that it’s working.
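A quick sanity check with redis-cli (which ships with Redis) — a running server answers PONG:

```shell
$ redis-cli ping
PONG
$ redis-cli set test hello
OK
$ redis-cli get test
"hello"
```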

Finally, we need to configure Tomcat to store its sessions in Redis. For this we’ll be using tomcat-redis-session-manager (https://github.com/jcoleman/tomcat-redis-session-manager). This did not work out of the box and required some tweaking: you will need to download the source code of this project and rebuild it after updating the dependent library versions. The versions I used are commons-pool2-2.2.jar and jedis-2.6.1.jar. Copy these jars to the lib directory of both Tomcat instances.

Update the versions of commons-pool, jedis, and Tomcat in build.gradle of tomcat-redis-session-manager to match what you are using, and build the project. Then copy the built tomcat-redis-session-manager-1.2.jar to the lib directory of each Tomcat instance. Add the following to the context.xml of both instances:

<Valve className="com.orangefunction.tomcat.redissessions.RedisSessionHandlerValve" />
<Manager className="com.orangefunction.tomcat.redissessions.RedisSessionManager"
         host="localhost"
         port="6379"
         database="0"
         maxInactiveInterval="60" />

Restart the Tomcat instances and we’re done. You can now see Tomcat’s sessions in Redis. Use the previous example and try various combinations of bringing the Tomcat instances up and down; the session data will remain unaffected. I even noticed that if you take down both instances and then bring them back up, a user’s existing session is restored.
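To see the sessions for yourself, list the keys in Redis while a browser session is open. With tomcat-redis-session-manager, each session is stored under its session ID (the `<session-id>` below is a placeholder — substitute an ID from the KEYS output):

```shell
$ redis-cli keys '*'          # one key per active Tomcat session
$ redis-cli ttl <session-id>  # should roughly track maxInactiveInterval
```

Note that KEYS scans the whole keyspace, so it is fine for poking around on a dev box but not something to run against a busy production Redis.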

Thank you for your time.
