TDD and Asynchronous Behaviour

I am going to be giving an experience report at XP Day about TDD and Asynchronous Behaviour.

I will describe what I learned on several past projects applying test-driven development to distributed systems with message-oriented middleware, REST web services and user interfaces written with AJAX and Swing.

The full blurb is:

Popular frameworks for test-driven development assume that the test and the code under test execute synchronously. The test invokes the code under test and control does not return to the test until that code has finished executing.

This works well for unit testing. However, the assumption does not often hold when testing at a larger scale: integration testing, system testing, enterprise integration testing. At those scales, tests must cope with asynchrony, concurrency, and event-driven architectures.
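To illustrate the problem, here is a minimal sketch (the class and variable names are illustrative, not from any of the systems described): when the code under test does its work on another thread, an assertion made immediately after triggering it races against the background work, and only an explicit wait makes the outcome deterministic.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.atomic.AtomicBoolean;

public class AsyncRace {
    public static void main(String[] args) {
        AtomicBoolean eventHandled = new AtomicBoolean(false);

        // Trigger work that completes on a background thread.
        CompletableFuture<Void> work =
            CompletableFuture.runAsync(() -> eventHandled.set(true));

        // A synchronous-style check here may run before the work finishes,
        // so its result is nondeterministic.
        System.out.println("checked immediately: " + eventHandled.get());

        // Only after explicitly waiting is the result guaranteed.
        work.join();
        System.out.println("after waiting: " + eventHandled.get());
    }
}
```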

This experience report will describe three systems that were built using a TDD process, and how we tested them:

  • An enterprise-integration application that communicates with reliable message queues and pub/sub, and has a GUI client that runs on the desktop.
  • A system that uses a content-based pub/sub event-bus to publish and compose services in a service-oriented architecture.
  • A financial analysis application that has a compute grid and REST web services at the back-end and an AJAX web front-end written with GWT.

While at first glance the three systems are quite different and use a variety of architectures and technologies, the tests for these three systems cope with asynchrony and concurrency in similar ways.

This report will describe these commonalities as an abstract architectural style and a set of testing idioms that can be applied to any system with concurrent or asynchronous behaviour. It will describe common difficulties encountered when testing concurrent systems, and how they can be overcome:

  • "Flickering" tests that fail only occasionally (but usually just before you need to release).
  • False positives, caused by race conditions that make the test and the system get out of sync.
  • Poor test performance that slows down the TDD feedback cycle.
  • Messy test code, with ad-hoc sleeps and timeouts scattered liberally throughout the tests.
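One idiom that addresses several of these difficulties at once is to replace ad-hoc sleeps with a polling assertion: the test repeatedly probes the system until a condition holds or a timeout expires, so it runs no longer than necessary and fails with a clear diagnostic rather than flickering. A minimal sketch (the names `waitUntil`, `timeoutMs`, and `pollIntervalMs` are illustrative, not taken from any particular library):

```java
import java.util.function.BooleanSupplier;

public class WaitUntil {
    // Poll the condition until it holds or the timeout expires.
    public static void waitUntil(BooleanSupplier condition,
                                 long timeoutMs,
                                 long pollIntervalMs) throws InterruptedException {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (!condition.getAsBoolean()) {
            if (System.currentTimeMillis() > deadline) {
                throw new AssertionError(
                    "condition not met within " + timeoutMs + "ms");
            }
            Thread.sleep(pollIntervalMs);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        // Simulate an asynchronous result that becomes true after a short delay.
        final long start = System.currentTimeMillis();
        waitUntil(() -> System.currentTimeMillis() - start > 50, 1000, 10);
        System.out.println("condition met");
    }
}
```

The timeout bounds the worst case, while the short poll interval keeps the common case fast, so the TDD feedback cycle stays tight.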

A lot of the testing techniques I will describe have been implemented in the Window Licker library for testing Swing and AJAX GUIs.

Copyright © 2008 Nat Pryce. Posted 2008-11-24.