Wednesday, July 03, 2013

Your test suite is trying to tell you something

A few weeks ago I started wondering about 'the test that occasionally and randomly breaks' in a large test suite at my job. The test, called 'overloaded origin', tests a situation where a web server becomes overwhelmed with requests and a proxy server (the code being tested) has to handle the situation gracefully.

The test works by having a dummy web server that randomly decides, for each request, to (a) return a normal web page, (b) read the HTTP headers and then do nothing for 30 seconds, or (c) read the HTTP headers, wait 30 seconds and then send a valid response. The proxy server is hit by 5,000 clients simultaneously requesting the same URL.

And sometimes, every now and again, this test failed.

And like many engineers I'd ignored it for a long time. But it kept worrying me because it must have meant something: computers are deterministic, after all. I was spurred to action by a colleague suggesting that the test be disabled because it was 'flaky'.

It took me two days of continuous work to find out what was wrong and it explained other occasional problems that had been seen with the code. And it made the test suite 100% stable on all platforms. That 'randomly failing test' was really 'a genuine bug in the code'.

But getting to that point was tricky because this was a system-level test with clients, origin servers, the proxy, and a memcached server in the middle. It turned out that the memcached server was the problem. In the end, I had to implement my own (simple) memcached server so that I had complete control over the environment. In doing so, I discovered the root cause of the problem.

The program has a timeout used to stop it waiting for memcached if it doesn't respond quickly (within hundreds of milliseconds). Here are the lines of code that handle the memcached timeout (this is from inside the proxy server being tested):

    var Timeout time.Duration
    Timeout = time.Duration(conf("timeout", 100)) * time.Millisecond

    cache := memcache.New(Servers...)
    cache.Timeout = Timeout * time.Millisecond

The first two lines read the timeout value from a configuration file (with a default of 100) and convert it to a time.Duration in milliseconds. The following lines (later in the code) use that value to set the timeout on the memcached connection.

Oops!

There's another * time.Millisecond there. So 100ms, for example, becomes something much larger. To see how much larger, you just need to know what a time.Duration is: a value representing a number of nanoseconds.

So the initial value of Timeout is 100,000,000ns (since 1ms is 1,000,000ns). Then, when the second multiplication happens, Timeout becomes 100,000,000,000,000ns, which is close to 28 hours.

The test would fail because occasionally the connection to memcached would fail, causing the timeout to kick in. Instead of giving up gracefully after 100ms, the program was prepared to wait 28 hours.

And examination of the source code control log showed that this bug had always been there, right from the time those lines of code were written.

By me.

If you enjoyed this blog post, you might enjoy my travel book for people interested in science and technology: The Geek Atlas. Signed copies of The Geek Atlas are available.

7 Comments:

Blogger Thorsten Claus said...

Great debugging, and thanks for sharing. Since I ran into similar problems once, I use an "_" or similar suffix on variables, regardless of code comments in classes or includes stating that "all time in ms" or "all currency in cents" -- too often I start reusing code and simply forget those statements.

It would be interesting to find some sort of assertions to catch any "time > 1h" etc.

4:19 PM  
Blogger Thorsten Claus said...

This comment has been removed by the author.

4:20 PM  
Blogger Thorsten Claus said...

Oops, that should've been "_<unit>", such as "_ms"

4:21 PM  
Blogger Franklin Chen said...

Ha, these non-deterministic problems are the worst! Coincidentally, I just dealt with one also involving timeouts recently.

6:49 PM  
OpenID ingulf said...

Non-deterministic problems are indeed a pain. I wrote a bayesian version of git bisect (github.com/ealdwulf/bbchop) with the idea of finding them in an automated way. Wouldn't have found this one though, as it was there from the beginning.

11:26 PM  
Blogger Tim said...

Great post! Yes, in my experience people are all too willing to blame the test for being flaky when in actual fact it is the system that is flaky.

11:54 AM  
Blogger Ralph Corderoy said...

Did you grep out other multiplications by Duration constants to see if the mistake is repeated elsewhere?

9:13 AM  
