April 12, 2010
Manual or Automated Testing - Which is Better?
I am often asked a simple question that is anything but simple: "Which is better - manual testing or automated testing?" It's impossible to answer without knowing what's being tested. Everyone seems to want an easy answer so they can go to their boss and say, "If we only use 'insert your favorite testing preference here' testing, then all will be well with our testing." If it were that easy, there would be only one version of software, tested once and always working thereafter, with no revisions. Yeah, right! Answering the following questions can help you determine whether manual or automated testing is right for you.
- What are you trying to test? Manual testing is great when a person's judgment is required. Automated testing excels at repetitive testing, such as regression testing.
- Are there specific repeatable results every time? Automated testing is good for verifying consistent, repeatable results, while manual testing is usually better under dynamic conditions.
- What are all the variations that can occur? Automated testing handles many variations well and never gets bored, catching issues that manual testing may miss.
- Is the result text-based? Automated testing generally works well with text strings and numeric values. Manual testing is recommended for image-intensive projects.
- Is it a repetitive test? Repetitive tests are usually good candidates for automation.
- Is a judgment call required? Manual testing is recommended if you are verifying the quality of images on a web page or the clarity of an online video, since automation lacks that kind of judgment.
- Are you testing large quantities of data? Without automation, this kind of testing can be a resource hog, and it is also more prone to human error.
- Are you testing size limits or boundaries? These tests are good candidates for automation because computers don't care how few, how many, or which characters are typed. Computers also don't lose count. People have to count and visually check characters by hand, which can lead to errors.
- Does the test include characters that are hard to distinguish? Automated testing can easily tell the difference between characters, such as an 'o' (lowercase letter) and a '0' (number), which may be difficult for a human tester to distinguish.
- How are pop-up error messages handled? A mixed approach may be best, unless you're using test-driven development, which is automated from the start. Use manual testing to determine the correct responses to the message, and then automate the error handling as part of the test.
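To make the boundary and character-distinction points above concrete, here is a minimal sketch of what such automated checks might look like. The `validate_username` function and its 3-to-12-character rule are purely hypothetical examples, not something from a real application:

```python
# Hypothetical example: a field that accepts usernames of 3-12 characters.
# (This function is an assumed stand-in for the application under test.)
def validate_username(name):
    return 3 <= len(name) <= 12

# Boundary checks: the computer counts characters exactly, every time.
assert validate_username("abc") is True        # at the lower boundary (3)
assert validate_username("ab") is False        # one below the boundary
assert validate_username("a" * 12) is True     # at the upper boundary (12)
assert validate_username("a" * 13) is False    # one above the boundary

# Character-distinction check: the letter 'o' and the digit '0' look
# alike to a person but compare unequal to a computer.
assert "o" != "0"
```

A human would have to count those twelve characters by eye; the script verifies them identically on every run, which is exactly where automation pays off.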
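The mixed approach to error messages can be sketched the same way: a person manually confirms the correct message once, and from then on an automated check verifies it. Everything here is an assumed illustration; `submit_form` and the message text stand in for whatever your real application produces:

```python
# Wording confirmed once by a human tester, then frozen into the test.
EXPECTED_ERROR = "Please enter a valid email address."

def submit_form(email):
    """Assumed stand-in for the real app: returns an error string or None."""
    if "@" not in email:
        return "Please enter a valid email address."
    return None

# Once the correct response is known, the check runs unattended forever.
assert submit_form("not-an-email") == EXPECTED_ERROR
assert submit_form("user@example.com") is None
```

The judgment call (is this the right message for the user?) stays manual; the repetition (does the app still show it on every build?) becomes automated.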