HTTP load balancing Spring Boot servers with Nginx

While playing around with current technologies, I stumbled upon Nginx (pronounced "Engine X"), which aims to be the new hot thing for serving static and dynamic content. I was already familiar with the Apache HTTP Server for load balancing, so I wanted to find out how difficult it would be to use Nginx instead.

[Overview diagram]

Versions

component          version    download
Ubuntu             14.04      http://www.ubuntu.com/download/desktop
Java Hotspot       1.8.0_72   http://www.oracle.com/technetwork/java/javase/downloads/jdk8-downloads-2133151.html
Nginx              1.4.9      http://nginx.org/en/download.html
Maven              3.3.9      https://maven.apache.org/download.cgi
ApacheBenchmark    2.3        included in the Apache utils package

First of all, here is the very simple "application server", built with Spring Boot:

[simpleserver project overview]

The pom is straightforward, but I added the remote-shell dependency (aka CRaSH) to the classpath, so I can ssh into a running service and read stats, metrics and other helpful information. This is useful for monitoring.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.3.3.RELEASE</version>
	</parent>
	<groupId>eu.christophburmeister.playground</groupId>
	<artifactId>simpleserver</artifactId>
	<version>0.0.1-SNAPSHOT</version>

	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-remote-shell</artifactId>
		</dependency>
	</dependencies>
</project>

In application.properties I just limit the maximum number of threads of Spring Boot's embedded Tomcat. This is not relevant for the example, but it makes it easier to put a running service under heavy load.

server.tomcat.max-threads=2

The controller simply maps "/" and returns a static string:

package eu.christophburmeister.playground.simpleserver;

import org.springframework.boot.autoconfigure.EnableAutoConfiguration;
import org.springframework.stereotype.Controller;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.ResponseBody;

@Controller
@EnableAutoConfiguration
public class SimpleController {

    @RequestMapping("/")
    @ResponseBody
    String home() {
        return "Spongebob";
    }
}
And the starter class:

package eu.christophburmeister.playground.simpleserver;

import org.springframework.boot.SpringApplication;

public class Start {
    public static void main(String[] args) throws Exception {
        SpringApplication.run(SimpleController.class, args);
    }
}

After building the runnable Spring Boot app via "mvn clean package", I start five instances in the background on different ports:

christoph@apollo:~/jtools/test-workspace$ nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8081 --shell.ssh.port=10081 &
[1] 29161
christoph@apollo:~/jtools/test-workspace$ nohup: ignoring input and appending output to ‘nohup.out’

christoph@apollo:~/jtools/test-workspace$ nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8082 --shell.ssh.port=10082 &
[2] 29179
christoph@apollo:~/jtools/test-workspace$ nohup: ignoring input and appending output to ‘nohup.out’

christoph@apollo:~/jtools/test-workspace$ nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8083 --shell.ssh.port=10083 &
[3] 29205
christoph@apollo:~/jtools/test-workspace$ nohup: ignoring input and appending output to ‘nohup.out’

christoph@apollo:~/jtools/test-workspace$ nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8084 --shell.ssh.port=10084 &
[4] 29222
christoph@apollo:~/jtools/test-workspace$ nohup: ignoring input and appending output to ‘nohup.out’

christoph@apollo:~/jtools/test-workspace$ nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8085 --shell.ssh.port=10085 &
[5] 29238
christoph@apollo:~/jtools/test-workspace$ nohup: ignoring input and appending output to ‘nohup.out’

christoph@apollo:~/jtools/test-workspace$ jobs
[1]   Running                 nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8081 --shell.ssh.port=10081 &
[2]   Running                 nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8082 --shell.ssh.port=10082 &
[3]   Running                 nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8083 --shell.ssh.port=10083 &
[4]-  Running                 nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8084 --shell.ssh.port=10084 &
[5]+  Running                 nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8085 --shell.ssh.port=10085 &
christoph@apollo:~/jtools/test-workspace$ ps -ef | grep simpleserver
christo+ 29161 21496 99 10:08 pts/28   00:00:36 java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8081 --shell.ssh.port=10081
christo+ 29179 21496 99 10:08 pts/28   00:00:22 java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8082 --shell.ssh.port=10082
christo+ 29205 21496 89 10:08 pts/28   00:00:16 java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8083 --shell.ssh.port=10083
christo+ 29222 21496 83 10:08 pts/28   00:00:13 java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8084 --shell.ssh.port=10084
christo+ 29238 21496 76 10:08 pts/28   00:00:10 java -jar simpleserver-0.0.1-SNAPSHOT.jar --server.port=8085 --shell.ssh.port=10085
christo+ 29269 21496  0 10:08 pts/28   00:00:00 grep --color=auto simpleserver
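The five manual invocations above can also be generated with a small loop. This is only a sketch that prints the commands; remove the leading echo to actually launch the instances (assuming the jar sits in the current directory):

```shell
# Print the five start commands (ports 8081-8085, shell ports 10081-10085).
# Remove the leading "echo" to actually launch the instances.
for i in 1 2 3 4 5; do
  echo nohup java -jar simpleserver-0.0.1-SNAPSHOT.jar \
    --server.port=$((8080 + i)) --shell.ssh.port=$((10080 + i)) "&"
done
```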

Each of these servers is now accessible on its own port (8081-8085), and the CRaSH shells are reachable on ports 10081-10085. For the password you have to look into the nohup.out file, as it is randomly generated on each run. Connecting to a CRaSH shell works like this:

ssh -p <port> user@localhost
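Since the generated password is written to the startup log, you can fish it out of nohup.out. A sketch assuming the Spring Boot 1.x log wording ("Using default password for shell access: …"), which may differ in other versions:

```shell
# Extract the generated CRaSH passwords from the combined nohup.out.
# Log wording as in Spring Boot 1.x (an assumption); "|| true" keeps the
# command from failing when the file is missing.
grep "Using default password for shell access" nohup.out 2>/dev/null || true
```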

Ok, that's one part done. The next part is the installation and configuration of Nginx itself, which is quite easy because on my Ubuntu machine apt-get takes care of most of the work:

sudo apt-get install nginx

After installation, the configuration file is located at /etc/nginx/nginx.conf. Its content has to be replaced with the following load balancer configuration:

pid /run/nginx.pid;

events {
	worker_connections 768;
	# multi_accept on;
}

http {

    log_format formatWithUpstreamLogging '[$time_local] $remote_addr - $remote_user - $server_name to: $upstream_addr: $request';

    access_log  /var/log/nginx/access.log formatWithUpstreamLogging;
    error_log   /var/log/nginx/error.log;

    upstream simpleserver_backend {
	# default is round robin
        server localhost:8081;
        server localhost:8082;
        server localhost:8083;
        server localhost:8084;
        server localhost:8085;
    }

    server {
        listen 8080;

        location / {
            proxy_pass http://simpleserver_backend;
        }
    }
}

What exactly does this mean? The pid and events sections are boilerplate. The interesting part is the http section. First we define a log format that includes, among other things, the upstream address; this format is then attached to the access_log directive, and the locations of the access and error logs are configured. After that comes the core of the load balancer: the upstream servers, configured as a server group named "simpleserver_backend". The server section defines the port Nginx itself listens on, and the proxy_pass directive forwards every incoming request to the upstream group. Without further configuration, Nginx distributes the requests round robin. Further information on load balancing can be found here: https://www.nginx.com/resources/admin-guide/load-balancer/
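Round robin is only the default; the upstream block supports other strategies as well. A short sketch of two common alternatives, shown here as two-server variants of the upstream block above (least_conn and weight are standard Nginx directives):

```nginx
# Alternative 1: send each request to the server with the fewest
# active connections.
upstream simpleserver_backend {
    least_conn;
    server localhost:8081;
    server localhost:8082;
}

# Alternative 2: weighted round robin -- 8081 receives roughly three
# times as many requests as 8082.
upstream simpleserver_backend {
    server localhost:8081 weight=3;
    server localhost:8082;
}
```

Only one upstream block with a given name may exist in the real configuration; these are mutually exclusive alternatives.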

That's it. Now you can point your browser at http://localhost:8080, and Nginx takes over the job of distributing the incoming requests to the upstream servers. You can verify the round robin behaviour by reloading the page several times; the access log then shows that with every request, the next upstream server is chosen:

[27/Mar/2016:12:30:04 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8081: GET / HTTP/1.1
[27/Mar/2016:12:30:04 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8082: GET /favicon.ico HTTP/1.1
[27/Mar/2016:12:30:07 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8083: GET / HTTP/1.1
[27/Mar/2016:12:30:07 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8084: GET /favicon.ico HTTP/1.1
[27/Mar/2016:12:30:15 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8085: GET / HTTP/1.1
[27/Mar/2016:12:30:15 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8081: GET /favicon.ico HTTP/1.1
[27/Mar/2016:12:30:21 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8082: GET / HTTP/1.1
[27/Mar/2016:12:30:21 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8083: GET /favicon.ico HTTP/1.1
[27/Mar/2016:12:30:34 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8084: GET / HTTP/1.1
[27/Mar/2016:12:30:34 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8085: GET /favicon.ico HTTP/1.1
[27/Mar/2016:12:30:52 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8081: GET / HTTP/1.1
[27/Mar/2016:12:30:52 +0100] 127.0.0.1 - - -  to: 127.0.0.1:8082: GET /favicon.ico HTTP/1.1

For further load testing you could use "ab" (ApacheBench) like this:

christoph@apollo:~$ ab -n 200000 -c 5 http://localhost:8080/

But be aware that load testing from the same machine the application runs on doesn't make much sense 🙂
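To see how evenly an ab run was spread across the backends, you can count the requests per upstream server in the access log. A small sketch, fed here with sample lines in the log format configured above; against the real log you would pipe in /var/log/nginx/access.log instead:

```shell
# Count requests per upstream server; the "to: host:port" token comes from
# the log_format defined above. Sample lines stand in for the real log.
printf '%s\n' \
  'to: 127.0.0.1:8081: GET / HTTP/1.1' \
  'to: 127.0.0.1:8082: GET / HTTP/1.1' \
  'to: 127.0.0.1:8081: GET / HTTP/1.1' |
  grep -oE 'to: [0-9.]+:[0-9]+' | sort | uniq -c | sort -rn
```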

So, the verdict on Nginx: very simple and effective 🙂