May 21, 2013

Webinar Recording - Use the Windshield, Not the Mirror: Predictive Metrics that Drive Successful Product Releases

Thank you to everyone who participated in the "Use the Windshield, Not the Mirror: Predictive Metrics that Drive Successful Product Releases" webinar. The recording is now available if you weren't able to attend or if you would like to watch it again. Sharon Niemi, Practice Director of SQA, talks about how the right combination of predictive and reactive metrics can help you build a measurement portfolio that improves product quality and release consistency. You'll learn how to build a measurement system that incorporates leading and lagging indicators to improve your team's consistency in delivering quality products on time and within budget. Near the end, Jeff Amfahr, Director of Product Management at Seapine Software, demonstrates how Seapine's TestTrack solution for product development processes makes capturing and reporting on these metrics possible.

http://youtu.be/irVCLzuriV4

Answered Questions

Q: As someone relatively new to QA, would it be beneficial to take a Six Sigma course, or should I gain more experience before taking one?

A: Six Sigma is really a fundamental approach; I would definitely look to see whether the organization is planning to implement Six Sigma practices. If so, it would definitely be worth your while. I would also find out what the organization's approach to measurement is and then decide what training you need.

Q: What do you think of technical debt metrics, used as a software quality metric during development, as a predictive measure?

A: It is a very useful one for our customers. As your technical debt grows, and as that backlog of issues grows, it's inevitable that you will be in a constant catch-up game. We look at the growth of the backlog over time, as well as the make-up of that growth in terms of open feature requests, issues, and architecture feedback from the development team. Those are all things we look at as critical metrics for viewing overall quality.

Q: Is one person or group expected to claim ownership of the measurements, or is it a shared responsibility between multiple people to update measurement controls?

A: I think you need one person assigned to the measurement system itself. That person would identify the measurement techniques to be used, the roles that will be required, and so forth. Ownership of the measures themselves would rest with the manager responsible for each process, so there would be one owner of the measurement system but multiple owners of the metrics. The business lines would probably have someone own the requirement metrics, including requirement stability and rate of change. The test manager would be responsible for the defect metrics, and the departments would be responsible for providing the measures around capabilities. To reiterate: the measurement system has one owner, but the metrics should have shared ownership.

Q: Can you use TestTrack to predict level of effort on future requirement changes? What do you input to the tool and what do you get out?

A: You can configure TestTrack to require users to enter an estimate of effort when they create test cases. You can also require users to enter how long they've been working on a test, or how much work is left to do. When they complete a test, you can require them to enter how much time was spent on it. To use that information for predictive metrics, we look at some charts and some of the metrics run on the variance column I talked about. That shows us how we are doing relative to our estimate and what our variance is.

Our customers use that variance data in two ways. Initially, the variance gives insight into where estimations have been way off, and management can use that information to train the team and individuals on better estimation techniques. Second, once you have variance over a few development cycles or sprints, you can start to buffer estimates with common variance numbers. So if one part of the project seems to consistently under-estimate the effort by X hours or points, adding that into the final estimates can make your schedule more consistent.
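To make the buffering idea concrete, here is a minimal Python sketch of that calculation. It is not a TestTrack feature, and the sprint data and field names are hypothetical; it averages the variance (actual minus estimated effort) across past sprints and adds that buffer to a new raw estimate.

    # Hypothetical sprint history: estimated vs. actual effort in hours.
    past_sprints = [
        {"estimated": 40, "actual": 52},
        {"estimated": 30, "actual": 38},
        {"estimated": 45, "actual": 51},
    ]

    # Average amount by which past estimates fell short (positive means
    # the team consistently under-estimated).
    avg_variance = sum(s["actual"] - s["estimated"] for s in past_sprints) / len(past_sprints)

    def buffered_estimate(raw_estimate):
        """Add the team's historical average variance to a raw estimate."""
        return raw_estimate + avg_variance

    print(f"Average variance: {avg_variance:+.1f} hours")
    print(f"A raw 35-hour estimate becomes {buffered_estimate(35):.1f} hours")

A percentage-based buffer, scaling new estimates by the historical actual-to-estimate ratio, is a reasonable alternative when task sizes vary widely.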
Q: Is there a way to organize a metric review to get to actionable outcomes?

A: Yes. Hold weekly or monthly meetings to review your portfolio of measures and what is being presented. Structure the meetings so you review predictive measures first and outcome measures second. If an outcome measure isn't meeting its specified goals, ask the owner of that measure to discuss what they are doing to affect its results next week or month. Make this process part of your measurement system, building it in as part of the ongoing measurement review and meeting discussion.

Q: Which TestTrack tool was used for the demo? Is it a reporting tool? Is it based on TestTrack Pro, TestTrack TCM, TestTrack RM, or all three?

A: We used all three tools in the demo. If you're not familiar with TestTrack, the tools share a unified interface.

Q: How do you get time in a particular state in an item/defect workflow?

A: That was a calculated custom field, which was introduced in TestTrack a couple of releases back. You can now set up custom fields that are calculations based on other information, so things like time in state are all calculations based on information I would otherwise display back to the user.
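The calculated-field configuration is specific to TestTrack, but the underlying arithmetic is easy to illustrate. Here is a rough Python sketch of that kind of time-in-state calculation; the event structure is hypothetical and is not TestTrack's actual data model. Each state's duration runs from the event that entered it to the next event, or up to "now" for the item's current state.

    from collections import defaultdict
    from datetime import datetime

    # Hypothetical workflow history for one defect: each entry is the
    # timestamp at which the item entered a state.
    events = [
        (datetime(2013, 5, 1, 9, 0), "Open"),
        (datetime(2013, 5, 3, 14, 0), "In Progress"),
        (datetime(2013, 5, 8, 11, 0), "Ready for Test"),
        (datetime(2013, 5, 9, 16, 0), "Closed"),
    ]

    def time_in_state(events, now=None):
        """Return hours spent in each workflow state.

        Each state's duration runs from the event that entered it to
        the next event; if `now` is given, the current (last) state is
        counted up to that moment as well.
        """
        totals = defaultdict(float)
        for (start, state), (end, _) in zip(events, events[1:]):
            totals[state] += (end - start).total_seconds() / 3600
        if now is not None:
            last_start, last_state = events[-1]
            totals[last_state] += (now - last_start).total_seconds() / 3600
        return dict(totals)

    for state, hours in time_in_state(events).items():
        print(f"{state}: {hours:.1f} hours")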