Unit and Integration tests for Java & Redis

Recently I worked on a project that used Redis as its database. Many projects I had worked on before involved one or even multiple databases, and from those projects I still remember the endless discussions on how to unit test the code: mocking or no mocking, accessing the database or not. This time I wanted to try something else.

I am of the opinion that accessing a database during unit tests is bad. It is conceptually wrong. Thankfully I can say that most of my fellow programmers feel the same way. Most; not all.

On top of that, I feel that mocking is not always the best solution. Sometimes writing the mock definitions and behavior takes far longer than writing the test itself. I felt I was doing something wrong: I was spending my time creating and fixing the test environment instead of focusing on the tests. I also found myself re-coding the same thing with small differences, ending up with a lot of boilerplate. And even if I could push all of this into a hierarchy of utility classes, I would again be spending too much time on everything but my tests.

My approach this time was different from the start. I wanted one set of tests that were pure unit tests, and another set that were pure integration tests. The integration tests would exercise my system as a whole.

As for the structure and physical location of the tests: the unit tests are bundled in the same project as the source code, while the integration tests live in a separate project.

Unit Testing

How do I unit test functions that access the Redis database without needing to mock each call? For that I found this great project on GitHub: embedded-redis.

From within my test's @BeforeClass function I first set up an embedded Redis server. To avoid port-accessibility issues, I first ask the underlying OS for a free port:

private static int getTemporaryPort() throws IOException {
    // Binding to port 0 asks the OS to pick a free ephemeral port
    try (ServerSocket socket = new ServerSocket(0)) {
        return socket.getLocalPort();
    }
}
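As a quick standalone sanity check (plain JDK, no test framework; the class name PortProbe is just for this example), the helper can be probed to confirm the returned port is valid and immediately bindable again:

```java
import java.io.IOException;
import java.net.ServerSocket;

public class PortProbe {
    private static int getTemporaryPort() throws IOException {
        // Port 0 asks the OS for any free ephemeral port
        try (ServerSocket socket = new ServerSocket(0)) {
            return socket.getLocalPort();
        }
    }

    public static void main(String[] args) throws IOException {
        int port = getTemporaryPort();
        // The port must be in the valid TCP range
        System.out.println(port > 0 && port <= 65535);
        // Since nothing accepted on it, it can be bound again right away
        try (ServerSocket server = new ServerSocket(port)) {
            System.out.println(server.getLocalPort() == port);
        }
    }
}
```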

Using the port I retrieved, I start a new embedded Redis server:

    redisServer = new RedisServer(port);
    redisServer.start();

In case my tests depend on specific data, I can now add the data needed. This conveniently makes my tests even more independent, as each unit test is also responsible for its own test data. Now I have the data and the test in one place.

At the end of the tests, in my @AfterClass function, I call the stop function on the Redis server in order to keep each of my unit tests isolated and avoid accidental data overrides or interdependencies.

    redisServer.stop();

Integration Tests

Integration tests have an entirely different problem. Since they run against an already running environment, there is no knowing what data might be in the database. It is very hard to write data-specific tests when you do not know whether the data exists, which can lead to very inefficient and inaccurate tests.

The solution I propose is to upload our own specific testing data for each test, just as we did for the unit tests.

Using the Redis client API (Lettuce in my case), I wrote a small module where each Redis command has a function associated with it. For the simple set command I have the following function:

BiConsumer<BaseRedisCommands<String, String>, Object> setCommand = (client, data) -> {
    // mset sets multiple key/value pairs in a single call
    ((RedisStringCommands<String, String>) client).mset((Map<String, String>) data);
};

And for the sadd command (set add) I wrote the following function:

BiConsumer<BaseRedisCommands<String, String>, Object> sAddCommand = (client, data) -> {
    // Each key maps to the list of members to add to that set
    ((Map<String, List<String>>) data).forEach((key, members) ->
        ((RedisSetCommands<String, String>) client).sadd(key, members.toArray(new String[0])));
};
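These per-command functions naturally collect into a small dispatch table keyed by command name. A minimal sketch of that pattern, using a plain HashMap as a stand-in for the Lettuce connection so it runs without a server (the class and variable names here are illustrative, not from the original module):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;

public class CommandTable {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // In-memory map standing in for the Redis connection
        Map<String, Object> db = new HashMap<>();

        // set: store each key/value pair (mset-style)
        BiConsumer<Map<String, Object>, Object> setCommand = (client, data) ->
            client.putAll((Map<String, Object>) data);

        // sadd: add members to the set stored under each key
        BiConsumer<Map<String, Object>, Object> sAddCommand = (client, data) ->
            ((Map<String, List<String>>) data).forEach((key, members) ->
                ((Set<String>) client.computeIfAbsent(key, k -> new HashSet<String>()))
                    .addAll(members));

        // The registry: one entry per Redis command used by the fixtures
        Map<String, BiConsumer<Map<String, Object>, Object>> commands = Map.of(
            "set", setCommand,
            "sadd", sAddCommand);

        commands.get("set").accept(db, Map.of("key1", "value1"));
        commands.get("sadd").accept(db, Map.of("members", List.of("a", "b", "a")));

        System.out.println(db.get("key1"));
        System.out.println(((Set<String>) db.get("members")).size()); // duplicates collapse
    }
}
```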

In addition, I created a JSON file containing the data needed for each test. This file is located in the src/test/resources folder, preferably under the same package as the test itself. The structure of the JSON file is a map where each action is mapped to the data the action is performed on (note that each sadd entry maps a key to the list of members to add):

{
    "set": {
        "key1": "value1",
        "key2": "value2",
        "key3": "value3"
    },
    "sadd": {
        "key1": ["value1", "value2"],
        "key2": ["value3"]
    }
}

From within my test, in the @BeforeClass function, I first read the associated JSON file. Iterating over its keys, I invoke the function registered for each action on the data associated with that key, ending up with a database containing known data specific to my testing needs.
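The seeding loop itself is small. A sketch of it, with the fixture shown already parsed into a Map (the JSON parsing step, e.g. with Jackson, is omitted) and a stub command table that merely records which actions ran in place of real Lettuce calls:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.function.BiConsumer;

public class FixtureSeeder {
    public static void main(String[] args) {
        // The fixture as it would look after parsing the JSON file
        Map<String, Object> fixture = new LinkedHashMap<>();
        fixture.put("set", Map.of("key1", "value1"));
        fixture.put("sadd", Map.of("key2", List.of("a", "b")));

        // Stub command table recording invocations; in the real module
        // each entry wraps the corresponding Lettuce command
        List<String> invoked = new ArrayList<>();
        Map<String, BiConsumer<Object, Object>> commands = Map.of(
            "set", (client, data) -> invoked.add("set"),
            "sadd", (client, data) -> invoked.add("sadd"));

        Object client = new Object(); // the real Redis connection would go here

        // The actual @BeforeClass loop: one dispatch per action key
        fixture.forEach((action, data) -> commands.get(action).accept(client, data));

        System.out.println(invoked);
    }
}
```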

Please note that the associated data needs to be converted differently to accommodate each Redis function, as the expected input types vary. This step can easily be done by attaching a conversion function to the defined action function, to run prior to performing the action.
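One way to wire that up is plain function composition: wrap each typed command in an adapter that applies the per-command conversion first. The sketch below (names like withConversion are hypothetical) shows the idea with an sadd-style command that expects a String[] while the raw JSON value arrives as a List:

```java
import java.util.List;
import java.util.function.BiConsumer;
import java.util.function.Function;

public class DeserializingCommand {
    // Wraps a typed command so a per-command conversion step runs first
    static <T> BiConsumer<Object, Object> withConversion(
            Function<Object, T> convert, BiConsumer<Object, T> command) {
        return (client, raw) -> command.accept(client, convert.apply(raw));
    }

    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // Stand-in command that expects a String[] and records what it got
        StringBuilder sink = new StringBuilder();
        BiConsumer<Object, String[]> sAdd =
            (client, members) -> sink.append(String.join(",", members));

        // Conversion step: raw JSON value arrives as a List<String>
        BiConsumer<Object, Object> command = withConversion(
            raw -> ((List<String>) raw).toArray(new String[0]), sAdd);

        command.accept(new Object(), List.of("a", "b"));
        System.out.println(sink);
    }
}
```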

At the end of the tests, in my @AfterClass function, I again iterate over the keys in the associated JSON file, this time performing delete operations for the data I inserted. This, again, keeps my tests isolated and avoids accidental data overrides or interdependencies.
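The cleanup pass can reuse the same fixture file with a mirror table: for teardown only the keys matter, so every action can map to a delete of the keys it seeded (DEL in real Redis). A sketch, using a Set of key names as a stand-in for the keyspace:

```java
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.function.BiConsumer;

public class FixtureCleanup {
    @SuppressWarnings("unchecked")
    public static void main(String[] args) {
        // Keyspace stand-in: the keys currently in the database
        Set<String> keys = new HashSet<>(Set.of("key1", "key2", "untouched"));

        // Mirror table: each seed action maps to deleting the keys it created
        BiConsumer<Set<String>, Object> deleteKeys = (db, data) ->
            ((Map<String, ?>) data).keySet().forEach(db::remove);
        Map<String, BiConsumer<Set<String>, Object>> cleanup = Map.of(
            "set", deleteKeys,
            "sadd", deleteKeys);

        // Same fixture keys that were seeded in @BeforeClass
        Map<String, Object> fixture = Map.of(
            "set", Map.of("key1", "v"),
            "sadd", Map.of("key2", "m"));

        // The @AfterClass loop: only the test's own keys are removed
        fixture.forEach((action, data) -> cleanup.get(action).accept(keys, data));

        System.out.println(keys);
    }
}
```

Data that was already in the environment before the test ran is left alone, which is exactly the isolation property the teardown is after.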

Conclusion

This technique helped me focus on my tests and my data. I feel it increased my efficiency and testing performance, which gives me a much better feeling about my code. Along the way I even found many other "embedded servers" that can help with unit/integration tests for other technologies. Enjoy.

A similar approach, performing unit tests on Kafka, can be found here

Links from this blog: embedded-redis, Lettuce
