May 30, 2013
How Did I Miss That Bug?
How many times have you asked: "How in the world did we miss that bug?" In my earlier posts, Kahneman and Thinking About Testing and Why First Impressions Count, I looked at how our biases can influence how we conduct tests and interpret results. We can point to concrete examples of how our biases influence our testing. Too many defects make it through to production, and many of these are defects we should have found in the testing process. That's not to say that the test cases were incomplete, or that they were poorly written. Rather, our own biases in testing and analyzing data affect how we test and what slips past us.

Christopher Chabris and Daniel Simons conducted an experiment in which they asked subjects to count the number of times a basketball was passed in a video of players passing the ball. Subjects became so focused on the counting task that the majority of them failed to see a person in a gorilla suit walk directly through the scene. It turns out that we don't easily see things we aren't looking for. This is known as inattentional blindness.

By focusing too strongly on executing our test cases and getting positive results, we fail to see indications that our cases may be incomplete. The cases may in fact provide reasonable coverage of the stated requirements, but they may not fully cover nonfunctional or assumed requirements.

In Moneyball, the book by Michael Lewis, baseball scouts became so focused on finding the prototypical baseball player that they failed to consider which skills and characteristics actually won baseball games. Oakland Athletics' general manager Billy Beane, faced with building a contending team without spending a lot on payroll, was forced to examine the data to determine how to build the least expensive team possible while still being competitive on the field. In the process, he discovered that baseball talent scouts were simply wrong in their assessments of individual players.
As we start looking for solutions, we have to acknowledge that bias in our practices is inevitable, and not something we can simply will ourselves to stop. There is no "silver bullet," but there are ways to reduce the impact of bias and improve our testing outcomes.

Combining exploratory testing with scripted testing can do a great deal to counter inattentional blindness and to root out other types of bias. Another thing we can do is let stakeholders outside the testing team review our test cases, as well as the results of test runs. Automation is another important part of controlling for bias. When we automate the repetitive parts of our work, we reduce the potential for our biases to affect that work: the task is done the same way, again and again.

In my presentation Moneyball and the Science of Building Great Testing Teams, I discuss these and other biases. I look at how they affect our ability to lead, manage, and participate in a software project team, focusing on how to better select team members and make better (less biased) decisions in our day-to-day work. If we have a systematic bias, we're probably not going to make the best decisions for our work or our team. How we think as individuals bears directly on how we perform as a team, and team performance relates directly to the success of the project. While we're not talking about a baseball team, learning how our thinking can be affected by outside influences, and how we can correct for that, will make us better team members and better team leaders.
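To make the automation point concrete, here is a minimal sketch of a table-driven check. The function and cases are hypothetical, not from the post; the point is that the same scenarios run in the same order on every execution, so a tester's expectations or fatigue cannot quietly narrow which cases get exercised.

```python
# Hypothetical system under test: a simple discount calculation.
def apply_discount(price, percent):
    """Return price reduced by percent, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# The case table deliberately includes boundary cases that a biased
# manual run might skip once the "happy path" starts passing.
CASES = [
    (100.00, 0, 100.00),    # no discount
    (100.00, 50, 50.00),    # typical case
    (100.00, 100, 0.00),    # boundary: full discount
    (19.99, 15, 16.99),     # rounding behavior
]

def run_checks():
    """Execute every case identically and report pass/fail per case."""
    results = []
    for price, percent, expected in CASES:
        actual = apply_discount(price, percent)
        results.append((price, percent, actual == expected))
    return results

if __name__ == "__main__":
    for price, percent, ok in run_checks():
        print(f"{price} @ {percent}%: {'PASS' if ok else 'FAIL'}")
```

Reviewing the case table itself with stakeholders outside the team, as suggested above, is a natural complement: automation removes bias from execution, while review reduces bias in what we chose to cover.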