April 8, 2015

4 Automated Testing Myths BUSTED

Automated testing offers real benefits for improving the efficacy of your testing effort, but there are several myths about the practice. If you’re evaluating automated testing for your team, you’ll want to bust those myths upfront and give all stakeholders a realistic idea of what to expect. I sat down with Jeff Knee, Senior Software Test Engineer for Seapine, to talk about what you should—and more importantly, shouldn’t—expect if you add automated testing to your toolbox. Jeff’s been with Seapine for 11 years and has focused on automated testing for the past two years.

Myth #1: Automated testing can replace all our other testing!

JEFF’S TAKE: First of all, I don’t like the term “automated testing.” It’s really “automated checking.” James Bach and Michael Bolton explain the distinction in an excellent blog post they wrote a couple of years ago. In a nutshell, it’s not really “testing” because the machine performs exactly the same steps exactly the same way, and only “checks” exactly what you told it to check. Automated checking won’t replace all your other testing because scripts won’t do all the looking you need. You’ll still need a human to look at the results.

Let’s say you’re checking a form used to set up a user’s new account. You write a script that populates all the form fields with sample data, submits the form, and checks whether a new account is created from that data. The script runs and tells you everything is working. With a good QA process in place, someone eventually tests the form manually. She enters similar sample data into the form, submits it, and a new account is created—so far, so good. Because she has eyes, however, she also sees that the background image is broken. Relying only on an automated check, without any human testing, lets easy-to-spot problems like that slip through.
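Here’s a minimal sketch of that account-form check in Python with Selenium; the URL and element IDs are hypothetical stand-ins. Notice that the script passes as long as the one thing it was told to check holds, broken background image or not:

```python
# Minimal sketch of the account-form check described above (Python + Selenium).
# The URL and element IDs are hypothetical; adapt them to your application.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Firefox()
try:
    driver.get("https://example.com/signup")  # hypothetical signup page

    # Populate the form fields with sample data and submit.
    driver.find_element(By.ID, "username").send_keys("sample_user")
    driver.find_element(By.ID, "email").send_keys("sample@example.com")
    driver.find_element(By.ID, "submit").click()

    # The script checks exactly one thing: the confirmation message.
    # It happily passes even if the page's background image is broken,
    # because nobody told it to look at the background image.
    confirmation = driver.find_element(By.ID, "confirmation").text
    assert "account created" in confirmation.lower(), "account was not created"
    print("Check passed")
finally:
    driver.quit()
```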

Myth #2: Automated testing will let us automate everything and run it over and over.

JEFF’S TAKE: Automated test scripts are not endlessly reusable. Something as simple as a new intermediate dialog can really break automated checking. A human tester can easily deal with a new dialog, but the script must be modified to handle the new condition. If the new dialog shows up every time, this might seem easy—but what if the dialog only appears in edge cases? If you have a bunch of simple scripts designed just to push the user interface around, each of them may need to be manually rewritten after a change like this.

Even if you take the time to write flexible, modular scripts, changes to the user interface, process, or rules can require a ton of rework on the scripts. Time that could have been spent testing the changes gets spent analyzing which scripts to change and how to change them. And if you are testing a client/server product and want to add multi-user testing, it becomes an order of magnitude more difficult, because now the scripts running each “agent” need to be able to communicate changes to each other.
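To make that concrete, here’s a sketch of the kind of patch a new intermediate dialog forces into your scripts (Python with Selenium again; the dialog’s element ID is hypothetical). Every script that can trip over the dialog needs a call like this bolted on:

```python
# Sketch of the defensive handler a new intermediate dialog forces on you.
# The element ID is hypothetical; a human tester needs none of this.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def dismiss_optional_dialog(driver):
    """Close the new dialog if it appeared; it may only show up in edge cases."""
    try:
        driver.find_element(By.ID, "whats-new-dialog-close").click()
    except NoSuchElementException:
        pass  # Dialog didn't appear this run; carry on.
```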

Myth #3: Anybody can throw together a script.

JEFF’S TAKE: Automation solutions provide tools that make automating tests easier, but they can’t do everything for you. Or, more to the point, they can’t do everything for someone with no testing or development experience. Even a simple test takes some expertise to automate. More sophisticated tests are increasingly difficult, and there will always be kinks to work out that may be beyond the reach of an inexperienced scripter. The more engineering acumen you have, the better you will be at writing modular scripts, data-driven scripts, and scripts that build their own model to check—scripts that do more than just push the client around, in other words. You’d be disappointed if a manual tester didn’t carefully look at the results generated from their input, and you should be disappointed in an automated script that ignores the results, too.
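For illustration, here’s a minimal data-driven check in Python using pytest; validate_email is a trivial stand-in for real application logic. Separating the sample data from the checking logic is one of the skills Jeff is describing:

```python
# Minimal sketch of a data-driven check (pytest). validate_email is a
# trivial stand-in for real SUT logic; real validation is more involved.
import re
import pytest

def validate_email(address: str) -> bool:
    return re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", address) is not None

@pytest.mark.parametrize("email,expected", [
    ("user@example.com", True),
    ("no-at-sign.example.com", False),
    ("", False),
])
def test_email_validation(email, expected):
    # The same checking logic runs against every row of sample data.
    assert validate_email(email) == expected
```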

Myth #4: Once the scripts are written, magic happens. Just keep running them against new builds!

JEFF’S TAKE: Wouldn’t it be great if it were that easy? In reality, now you need to start analyzing run results and maintaining the scripts to keep up with changes to the system under test (SUT). And the more scripts you have, the more things you need to look after. If the SUT doesn’t change much, you could keep running the scripts against new builds with little script maintenance. But if that’s the case, I would argue that you don’t need to test the SUT as often, and therefore you won’t reap much benefit from automating the tests. Continually testing something that hasn’t changed doesn’t make much sense, whether you test manually or with automation. On the other hand, if the user interface is changing constantly, you’ll have a lot of script maintenance to do. In that case, testing manually might take less time than maintaining the scripts. The sweet spot for automated testing is often when the UI is stable but the underlying code still changes in small and subtle ways.
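As a rough illustration of that sweet spot, here’s a minimal Python sketch. The apply_discount function is a hypothetical stand-in for SUT logic whose implementation keeps changing while its interface stays stable, so the same automated check keeps paying off with no maintenance:

```python
# Minimal sketch: a check against a stable interface. The implementation of
# apply_discount (a hypothetical stand-in for real SUT logic) can change in
# small, subtle ways, and this check keeps catching regressions unmodified.
def apply_discount(price: float, percent: float) -> float:
    # Stand-in implementation; this is the part that keeps changing.
    return round(price * (1 - percent / 100), 2)

def test_apply_discount():
    assert apply_discount(100.00, 10) == 90.00
    assert apply_discount(19.99, 0) == 19.99
    assert apply_discount(50.00, 100) == 0.00

if __name__ == "__main__":
    test_apply_discount()
    print("All checks passed")
```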