Functional Test Automation Part 1: Room for Improvement

(This post originally appeared on KSM Technology Partners’ blog. It explains our motivation for creating Toffee, so it’s reprinted here.)

From my first encounter with HTMLUnit over ten years ago, I had high hopes for automated functional testing for web applications. Comprehensive test suites would run continuously and unattended, and notify us promptly of incipient problems. We would dazzle our customers and quality auditors with slick, comprehensive, and electronically signed hypertext reports. We would dispense with hundreds of person-hours of manual testing, and pass the cost savings on to our customers. We would retire waist-high piles of paper test scripts and reports to long term storage and, after the obligatory retention period, dance around the bonfire where they burned.

Ten years later, I still have high hopes for automated functional testing, but I’m left wanting. On the upside, the arrival of tools like Selenium, PhantomJS, Zombie, and Capybara expanded the reach of automation and provided a choice of languages for test implementation. My KSM colleagues and I have written tens of thousands of lines of automated test code, generated slick hypertext reports, and presented them to our customers’ auditors. We’ve saved countless hours of test execution time and effort. On the downside, we’ve struggled to make automated functional test authoring and execution as efficient as we’d like, and too often we’ve been forced to supplement our automated tests with manual ones. We’ve authored too much documentation to fill in the testing gaps for our auditors, customers, and quality assurance staff.

This post is the first in a series that explores why reality has fallen short of expectations, and suggests how to better realize the promise of automated functional web testing. Following is a list of the challenges of automated web application testing that I plan to address in this series. This tale will grow in the telling, so this list is subject to change:

  • Close the Communication Gap: your business analysts can’t read your tests, and neither can your auditors.
  • Close the Skills Gap: your testers are not developers, and your developers are not testers.
  • Paint into the Corners: a sober, possibly cynical, look at the practical limitations of automation.
  • Show Your Work: capturing irrefutable evidence that links tests to results.
  • Roll with the Changes: the subject of the test – your application – will change in subtle ways. Subtlety is lost on automatons.
  • Measure Coverage: prove that your tests test your requirements.
  • Test in Multiple Dimensions: testing multiple browser/operating system/application server/database/whatever combinations.

In the next post in this series, I’ll define functional testing and its importance in the heavily regulated industries I serve.
