...

We write unit tests to test the specific implementation of a feature, not just its interface. Typically, you should be looking at the code when writing tests, and you should write unit tests to exercise each piece of the code. This means that, in general, you can't write the tests before you've written the code (but see the following rule for an exception).

When fixing a bug, write the test first

Whenever you fix a bug, there should be a test for that bug fix. Furthermore, you should write the test before fixing the bug: write the test, make sure it triggers the bug, then fix the bug and make sure that the test now passes. If you wait until after fixing the bug to write the test, it's easy to end up with a test that doesn't actually exercise the bug. Alternatively, you can write the test after the bug has been fixed, but then you should back out the bug fix temporarily to make sure that the test really fails.
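The workflow above can be sketched as follows. The function and bug here are hypothetical (a string-copy routine that used to omit the terminating NUL); the point is that the test was written first, confirmed to fail against the buggy version, and only then was the fix applied:

```cpp
#include <cassert>
#include <cstring>

// Hypothetical example: copyString() used to omit the terminating NUL.
// The regression test below was written first and verified to fail
// against the buggy version before the fix was made.
void copyString(char* dst, const char* src, size_t dstSize)
{
    size_t len = strlen(src);
    if (len >= dstSize)
        len = dstSize - 1;
    memcpy(dst, src, len);
    dst[len] = '\0';              // the fix: this line was previously missing
}

void testCopyStringTerminatesResult()
{
    char buf[4];
    memset(buf, 'x', sizeof(buf)); // poison the buffer so a missing NUL is visible
    copyString(buf, "ab", sizeof(buf));
    assert(strcmp(buf, "ab") == 0); // fails if the NUL is missing
}
```

If you temporarily back out the `dst[len] = '\0';` line, the assertion fails, which is exactly the check the rule calls for.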

Tests should be microscopic

...

Please avoid tests that take a long time to run (e.g. hundreds of milliseconds or longer). Our entire unit test suite currently runs in about 10-20 seconds, and we'd like to keep the running time from growing much beyond this, so that it's not painful to run the tests. Occasionally long-running tests are unavoidable, but try to avoid them as much as possible.

Avoid timing-dependent tests

When testing asynchronous behaviors of the system, it's tempting to write tests that look like this:

No Format
initiate some asynchronous action;
sleep for a while;
check that the action produced the expected results;

Unfortunately, the timing for the asynchronous action may not be very predictable, which creates a dilemma. If you set the sleep time low, then the test may fail because of variations in running speed (e.g., due to a high load on the machine). If you set the sleep time high enough to be absolutely certain that the action can complete, then the test will take a long time to run.

It's better to structure such tests like this:

No Format
initiate some asynchronous action;
enter a loop, waiting for a short interval and then checking for the expected results;
after many iterations of the loop, fail the test;

This has the advantage that the test will almost always complete quickly. You can be conservative in allowing the loop to execute many iterations before failing, since this will almost never happen.

Sometimes unit tests aren't feasible

Although in general we expect all code to have unit tests, there are a few situations where it isn't really possible to write unit tests. One example is our low-level network drivers. These classes tend to be highly dependent on the NIC hardware; it typically isn't possible to run tests using actual NICs, and if the NIC is mocked out then the test isn't particularly useful (the code might work with the mocked NIC but not with the real one). In addition, drivers tend to have fairly straight-line code, so almost all of it gets exercised immediately in a real RAMCloud cluster. Don't give up easily on writing unit tests, though; in most cases it's possible to find a way to write meaningful tests.