Reducing Risk with Exploratory Testing

September 16, 2013
Many testing teams rely solely on scripted testing, both manual and automated, to decrease the risk of defects in the product release. The problem is, scripted testing is not meant to identify error conditions in scenarios that significantly deviate from the design or requirements. To find these hidden or divergent risks, you need to go off script, and that’s where adding exploratory testing can help.
It’s almost impossible to plan tests that cover every variation in data, configuration, interaction, sequence, timing, and so on. Scripted tests are designed to ensure that the application meets the requirements (using new-feature test cases) and to mitigate the risk of new features breaking existing functionality (via regression test cases).
Experienced testers can anticipate issues that might occur, but it may be too costly or time-consuming to write a test case for every scenario that comes to mind.
In her book Explore It! Reduce Risk and Increase Confidence with Exploratory Testing, Elisabeth Hendrickson says the best test strategy answers two core questions:
- Does the software behave as intended under the conditions it’s supposed to be able to handle?
- Are there any other risks?
Push the Envelope
Exploratory testing puts the thinking back in the hands (or rather, the head) of the tester. As an exploratory tester, you design the test, you execute it immediately, you observe the results, and you use what you learn to design the next test. You’re not just following steps in a test case someone else created; you’re pushing the application to its limits to gain a better understanding of how it works and where it breaks. You see it from the user’s point of view, rather than the developer’s. Ultimately, you get a more complete view of the application, including its weaknesses and hidden risks.
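To make that design-execute-observe-learn loop concrete, here is a minimal sketch in Python. The parse_quantity function is a made-up stand-in for the system under test, not anything from the article; the point is only that each probe is chosen based on what the previous result revealed, rather than being fixed in advance.

```python
# A scripted check confirms the documented behavior; an exploratory probe loop
# keeps asking "what if?" and uses each observation to pick the next input.

def parse_quantity(text):
    """Toy stand-in for the system under test: parses strings like "3 items"."""
    count, _, unit = text.partition(" ")
    return int(count), unit

# Scripted test: the documented happy path.
assert parse_quantity("3 items") == (3, "items")

# Exploratory probes: each one grows out of a question the previous result raised
# ("it handled '3 items' -- what about leading zeros? negatives? no unit at all?").
probes = ["3 items", "03 items", "-1 items", "3", "", "three items", "3  items"]
for text in probes:
    try:
        result = parse_quantity(text)
        print(f"{text!r:>15} -> {result}")                     # observe: does this result make sense?
    except Exception as exc:
        print(f"{text!r:>15} -> {type(exc).__name__}: {exc}")  # a lead worth writing up as a defect
```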
Discover More Defects

On average, 11% more defects overall are discovered through exploratory testing than through scripted testing. For defects that should be immediately obvious, such as a missing button in the UI, exploratory testing finds 29% more. For “complex” bugs (those requiring three or more user actions to cause an error or failure), the advantage jumps to 33% more defects found. (Source: Defect Detection Efficiency: Test Case Based vs. Exploratory Testing.)

You find more defects with an exploratory approach because you have more latitude to try different types of tests, drawing on your past experience and knowledge of the product. Scripted testing, by contrast, limits you to the steps outlined in test cases, which in turn limits your ability to consider other test scenarios. There are many reasons test cases don’t always lead to finding bugs: how well the test case was written (did the analyst understand the requirement?), who wrote it (does the analyst know how the product works?), how well the requirements document described the new functionality, and so on. Even with perfect test cases, exploratory testing would still find more defects over the course of a release, for several reasons:

- You tend to find a good number of defects when “testing around” functional areas while verifying defect fixes. Fixing an issue often breaks something else.
- If a defect exists and is not found during the initial test run (following the test case steps), it is unlikely that the next tester running the same test will find it. Exploratory testing in the same functional area, however, may reveal the bug.
- Exploratory testing allows you to think outside the box and come up with use cases that might not be covered in a test case. For example, you might perform one test and then ask yourself, “What if I tried this? What if I didn’t do that?”
- Some defects, typically the hardest to find, depend on a specific sequence of events. Unless your test cases are unusually deep, you can miss defects that exploratory testing uncovers in a less-structured, but longer, test session, as the sketch below illustrates.
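As a rough illustration of that last point, the sketch below uses randomized action sequences as a stand-in for a long, unscripted session. The Cart class and its deliberately planted bug are invented for this example; the assumption is simply that some invariant (here, “the total never goes negative”) only breaks after a particular order of actions that a short scripted case would never exercise.

```python
import random

class Cart:
    """Toy stateful system with a planted, sequence-dependent bug (illustration only)."""
    def __init__(self):
        self.items = 0
        self.total = 0.0

    def add(self):
        self.items += 1
        self.total += 10.0

    def remove(self):
        if self.items:
            self.items -= 1
            self.total -= 10.0          # bug: ignores any discount already applied

    def apply_discount(self):
        self.total *= 0.9               # 10% off the current total

# A scripted case such as "add, add, apply_discount" passes every time.
# Random, longer sequences of the same three actions stand in for an
# unscripted session and eventually hit an order that breaks the invariant.
random.seed(7)
failures = []
for run in range(200):
    cart = Cart()
    history = []
    for _ in range(10):
        name = random.choice(["add", "remove", "apply_discount"])
        history.append(name)
        getattr(cart, name)()
        if cart.total < -1e-9:          # invariant: the total should never go negative
            failures.append((list(history), cart.total))
            break

print(f"{len(failures)} of 200 random sequences broke the invariant")
if failures:
    shortest = min(failures, key=lambda f: len(f[0]))
    print("shortest failing sequence:", " -> ".join(shortest[0]))
```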