
Custom NUnit runner for parallel testing

In my new team there is considerable use of NUnit for unit tests, and it has been further adopted as a convenient mechanism for all tests, including a set of Selenium-based UI tests. The reasons for using NUnit are fair enough: it handles test execution control, allows for easy setup/teardown, and spews results out in a fairly standard format. It gives you a lot for free. Last week I was investigating Selenium Grid, a nice addition that allows you to run multiple Selenium RCs per system, and potentially many across many systems, and have your tests talk to a single controller. This lets you massively speed up test execution by distributing tests across as many separate browser sessions as you like.
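To illustrate what a test looks like from the client side (a minimal sketch, not from the original setup): with the old Selenium RC client for .NET, a test connects to whatever host and port you give it, so pointing it at a Grid hub instead of a local RC is mostly a configuration change. The hub host, environment name and application URL below are assumptions for illustration only.

```csharp
using NUnit.Framework;
using Selenium; // Selenium RC .NET client (DefaultSelenium / ISelenium)

[TestFixture]
public class LoginPageTests
{
    private ISelenium selenium;

    [SetUp]
    public void Start()
    {
        // Hypothetical values: point at the Grid hub ("hub-host", 4444) instead of a
        // local RC, and ask for an environment name the hub has been configured with.
        selenium = new DefaultSelenium("hub-host", 4444, "Firefox on Windows", "http://app-under-test/");
        selenium.Start();
    }

    [Test]
    public void LoginLinkIsPresent()
    {
        selenium.Open("/");
        Assert.IsTrue(selenium.IsElementPresent("link=Log in"));
    }

    [TearDown]
    public void Stop()
    {
        selenium.Stop();
    }
}
```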

And here I hit a problem: NUnit only runs your tests one by one. So it doesn't matter how much Selenium Grid awesomeness I set up; it's still just going to run the tests sequentially.

So I started looking for an NUnit runner that supports running tests in what I would call 'batch parallel', only to discover that no such thing is apparently available! OK, so maybe I'm not looking hard enough, but all I could find is PNUnit, which supports parallel execution, but only for a fairly specific use case. It is designed to let you run in parallel tests that are written to do so. E.g. it allows you to specify that tests 1, 2 and 3 must run in parallel, and define some shared locks/signals that you can use to synchronise events for specific testing needs. That's great, if that is what you wanted to achieve. But if you don't care which tests run together, and are only interested in parallelism for the sake of scaling, then it is no good. Basically it would require me to explicitly specify which test groupings I want, and keep updating them every time I want to add more tests.

Annoyingly, what I'm looking for is something I've written before, except I wrote it in Java as a test harness for a bunch of JMS tests. My test harness allowed me to specify how many tests I wanted batched together, how long I was prepared to let any given batch run before starting the next, and how long to give any slow tests to finish at the end of the run.

For instance, by default it would start 10 threads, with a test running on each, then watch for them to finish. If they all finished in under 60 seconds it just fired up the next 10. If not, it held a reference to those still running and still kicked off the next 10. Once all tests had been run, it waited another 60 seconds for the tests in its slow bucket to complete, before killing anything left. After all, these were fairly simple FV tests, not stress tests; if they were taking more than 2 minutes to run then something was up.
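The C# version I have in mind would follow the same shape. A minimal sketch of just the batching idea (not the actual harness; the batch size and timeouts mirror the numbers above, and the test-running delegate is a hypothetical stand-in for invoking one NUnit test):

```csharp
using System;
using System.Collections.Generic;
using System.Threading;

public class BatchParallelRunner
{
    private const int BatchSize = 10;
    private static readonly TimeSpan BatchTimeout = TimeSpan.FromSeconds(60);
    private static readonly TimeSpan FinalGracePeriod = TimeSpan.FromSeconds(60);

    // testNames is whatever list of tests has been discovered; runSingleTest is a
    // hypothetical hook that executes one NUnit test and records its result.
    public void RunAll(IList<string> testNames, Action<string> runSingleTest)
    {
        var slowBucket = new List<Thread>();

        for (int i = 0; i < testNames.Count; i += BatchSize)
        {
            var batch = new List<Thread>();
            for (int j = i; j < Math.Min(i + BatchSize, testNames.Count); j++)
            {
                string name = testNames[j]; // copy to a local so each thread gets its own test
                var worker = new Thread(() => runSingleTest(name)) { IsBackground = true };
                worker.Start();
                batch.Add(worker);
            }

            // Give the whole batch up to 60 seconds; anything still running goes
            // into the slow bucket and the next batch starts anyway.
            DateTime deadline = DateTime.UtcNow + BatchTimeout;
            foreach (var worker in batch)
            {
                TimeSpan remaining = deadline - DateTime.UtcNow;
                if (remaining < TimeSpan.Zero || !worker.Join(remaining))
                    slowBucket.Add(worker);
            }
        }

        // Once every batch has been started, give the stragglers one more grace
        // period; background threads that still haven't finished die with the process.
        DateTime finalDeadline = DateTime.UtcNow + FinalGracePeriod;
        foreach (var straggler in slowBucket)
        {
            TimeSpan remaining = finalDeadline - DateTime.UtcNow;
            if (remaining > TimeSpan.Zero)
                straggler.Join(remaining);
        }
    }
}
```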

As it happens my JMS test harness had to be quite a bit more complex due to the nature of the tests: they had been written to expect to run sequentially and to have exclusive access to resources. So before starting tests the harness had to figure out what references they had to resources, and clone them onto a unique reference so that they could run alongside other tests. This wasn't always possible, since some resources really can only exist once per system, so the harness could also detect when a test required exclusive access to such a resource and make sure it was the only test requiring that access in a given batch.
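The scheduling half of that idea would translate fairly directly. A rough sketch, with the caveat that every name here is hypothetical and that each test's exclusive-resource requirements are assumed to be declared somewhere the runner can read them:

```csharp
using System.Collections.Generic;
using System.Linq;

public class TestInfo
{
    public string Name;
    // Names of system-wide singleton resources this test needs exclusive access to;
    // hypothetical - in practice this would be discovered or declared via attributes.
    public HashSet<string> ExclusiveResources = new HashSet<string>();
}

public static class BatchPlanner
{
    // Group tests into batches of up to batchSize, making sure no two tests in the
    // same batch claim exclusive access to the same resource.
    public static List<List<TestInfo>> Plan(IEnumerable<TestInfo> tests, int batchSize)
    {
        var batches = new List<List<TestInfo>>();

        foreach (var test in tests)
        {
            var home = batches.FirstOrDefault(b =>
                b.Count < batchSize &&
                !b.Any(other => other.ExclusiveResources.Overlaps(test.ExclusiveResources)));

            if (home == null)
            {
                home = new List<TestInfo>();
                batches.Add(home);
            }
            home.Add(test);
        }
        return batches;
    }
}
```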

I was pretty happy with this harness, and it sped up that test suite by several orders of magnitude. Now I find myself wanting something similar, but this time in C# for running NUnit tests. I really am surprised that nothing similar is available, even commercially; it seems like a pretty obvious requirement. But I guess such things probably exist all over the place, created as proprietary solutions (much like my own test harness) inside companies that have no interest in productising or open sourcing their code.

Of course running in parallel is one thing, but I have a few other requirements that I will build into my own solution. Perhaps the most important for me is onFail processing and diagnostics capture. At the moment, if a Selenium test fails, it takes a screenshot of what was on the screen at the time. This is pretty much the bare minimum, and not terribly helpful most of the time. Think about it: the test failed because something it looked for on the screen wasn't there. The screenshot is just going to show you that it was right, that what it was looking for wasn't on the screen; what you want to know is why not. Better would be to re-run a failing test from the start with additional information captured throughout: take a screencast of the whole sequence, and request traces and logs from any component capable of generating them. The objective of an automated test should be to provide enough information on a failure for a developer to see the problem and fix it.
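In runner terms, the re-run-with-diagnostics idea might look something like this rough sketch. StartScreencast, EnableRequestTracing, CollectLogs and the IDiagnostics abstraction are all hypothetical hooks, not anything NUnit or Selenium provides out of the box:

```csharp
using System;

public class DiagnosticRetryRunner
{
    // Run a test once; if it fails, run it again with full diagnostics capture
    // switched on so the failure report explains why it failed, not just that it did.
    public void Run(string testName, Func<string, bool> runTest, IDiagnostics diagnostics)
    {
        if (runTest(testName))
            return; // passed first time, nothing to capture

        diagnostics.StartScreencast(testName);   // hypothetical: record the whole browser session
        diagnostics.EnableRequestTracing();      // hypothetical: turn on verbose tracing in the components under test
        try
        {
            runTest(testName); // the second run exists purely to gather evidence
        }
        finally
        {
            diagnostics.StopScreencast();
            diagnostics.CollectLogs(testName);   // hypothetical: pull logs from anything that can produce them
        }
    }
}

// Hypothetical diagnostics abstraction - whatever capture mechanisms sit behind
// these calls is an implementation detail of the environment under test.
public interface IDiagnostics
{
    void StartScreencast(string testName);
    void StopScreencast();
    void EnableRequestTracing();
    void CollectLogs(string testName);
}
```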

If you are spending a lot of time worrying about defect 'recreates' before you can figure out what went wrong, then you are doing your automation wrong.

So this is my new year plan: I get to learn how to do threading in C#, and I'm rather looking forward to it (I guess that makes me an incurable geek).