May 15, 2013

An Anchor for Your Decisions

I’m currently taking a MOOC through Coursera called A Beginner’s Guide to Irrational Behavior, taught by renowned economist Dan Ariely at Duke University. The course falls within the field of behavioral economics, and it dovetails nicely with some of the concepts I’ve been developing about bias in testing. It builds on the biases I discussed in my previous posts, Kahneman and Thinking About Testing and Why First Impressions Count.

One of the most remarkable biases described by both Ariely and Kahneman is the anchoring bias. In one experiment, Kahneman asked subjects to spin a “wheel of fortune” that was rigged to stop on one of two numbers. He then asked those subjects how many countries there are on the African continent. The number they had just spun greatly influenced their guesses. It turns out that a random value can anchor us before we make other decisions involving numbers.

In economics, Ariely notes that initially setting the price of a product at one level anchors the consumer to that price. If the price then goes down, we automatically believe the lower price represents a better deal, irrespective of the profit made on the product or of our own utility in owning it. Apple has practiced this technique well, first setting the price of the initial iPhone high, then lowering it within weeks. Potential buyers had become anchored to the higher price, so they thought they were getting a great deal at the lower one.

In testing, the anchoring bias can leave us mentally transfixed by our numbers, whether they are defect counts, find-and-fix rates, tests passed, or other metrics. An initial value, early in a project, can become the anchor against which we evaluate all subsequent data. This isn’t necessarily a bad thing, except that we may have gone into the project with expectations about what those values need to be in order to meet the project goals.
One of the best ways to overcome the anchoring bias in testing is to share your metrics with other stakeholders. By sharing data as broadly as possible, we invite different perspectives and questions about what the data really means, and those questions can prompt the team to discuss the underlying meaning of the metrics.

Another way to overcome this bias is to define your expectations before the project begins. Those expectations may be based on an assessment of the required quality, and on which metric values directly correlate with that level of quality. By anchoring to numbers grounded in logic and design, we can train our biases to reach the right conclusions.

In general, the anchoring bias warns us not to depend on metrics so absolutely that we fail to consider their meaning in context. We need to encourage ourselves and our team members to think deeply about what the metrics mean in the context of the actual software. Seapine's TestTrack provides the ability to capture and report on many possible metrics, with predefined and easily customized reports that make it simple to share those measurements. And thanks to linking between project artifacts, we can delve behind the numbers to better understand their impact on the quality of our project.
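To make the "define your expectations up front" idea concrete, here is a minimal sketch of pre-committing metric targets before any project data exists, then checking observed values against those targets rather than against an anchor set by early numbers. The metric names and thresholds here are invented for illustration; they are not drawn from TestTrack or any particular project.

```python
# Sketch: commit to metric expectations before the project starts,
# then evaluate observed values against those expectations instead
# of against whatever early value happened to anchor the team.
# Metric names and thresholds are hypothetical, for illustration only.

EXPECTATIONS = {
    "defect_find_rate_per_week": lambda v: v <= 25,   # target ceiling
    "fix_rate_per_week":         lambda v: v >= 20,   # target floor
    "tests_passed_pct":          lambda v: v >= 95.0, # release bar
}

def evaluate(observed):
    """Return, for each pre-committed metric we have data for,
    whether the observed value meets the expectation."""
    return {name: check(observed[name])
            for name, check in EXPECTATIONS.items()
            if name in observed}

observed = {
    "defect_find_rate_per_week": 30,
    "fix_rate_per_week": 22,
    "tests_passed_pct": 96.5,
}
print(evaluate(observed))
# -> {'defect_find_rate_per_week': False, 'fix_rate_per_week': True,
#     'tests_passed_pct': True}
```

The design point is simply that the thresholds are written down before any data arrives, so the evaluation is anchored to reasoned targets rather than to the first numbers a project produces.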