February 4, 2011

Webinar Recording: A Software Manager's Guide to Defining Testing in an Agile Age

Thanks to everyone who joined us for the A Software Manager's Guide to Defining Testing in an Agile Age webinar. The recording is now available if you missed the training session or want to watch it again. Q&A from the session follows.

Download video in mp4 format


Should I be automating feature sets that are currently being developed? Or is it just for feature sets already released and that I have testing confidence in?

That’s an excellent question, and one that points to the inherent difficulty of using automation for functional testing. You have to test once, manually. If that test passes, there is little reason to test again, and consequently little reason to automate. Here are a couple of counterpoints. First, your automated functional test then becomes a candidate for your regression test suite. You absolutely want to automate regression testing, and the best way to do that is to start with your functional tests. Second, by automating functional tests, you can reuse some of those tests in load testing, which I believe is a great way of assessing your design and coding practices at the end of each iteration. I think there are more reasons to automate functional testing, but your mileage may vary; it depends on how many times you plan to execute those tests.

If the focus is not on documentation and detailed test plans, how do we store institutional or historical knowledge? If the testers or developers leave after one year, how will we maintain and regression test the software two years down the line?

This is where regular reflection comes into play. By pausing often to examine mistakes and do better in the future, teams have a built-in mechanism for continuous process improvement. You need to document these improvements--if not in traditional Lessons Learned documents, then in an improved process. Some of this is reflected in better and more accurate process and testing metrics, and is self-propagating. But automation comes into play here too, starting with automating your functional tests. Those tests become your regression test suite. Your measurement tools can also embody the lessons learned, because you have learned which measurements are important to your team and the software.
What you are lacking is the underlying rationale that collective experience brings, so with heavy turnover your process may seem out of whack to new people, even if it was justified by past experiences. A wiki can be a good lightweight way of documenting some of that underlying rationale – what decisions you made and why you made them.

How do we ensure close collaboration for geographically distributed development and QA teams?

You need multiple lines of communication. Meeting in person can quickly resolve misunderstandings and clarify issues. However, if you can’t meet in person, you need multiple ways to communicate between groups – instant messaging, web meetings, email, wikis, and the process tools (such as defect tracking) all need to be used effectively to make up for the lack of face-to-face interaction. You also need to build a culture that encourages and supports trust across geographic and often cultural boundaries. Once you have trust, collaboration tools can more than make up for face time. I’ve spent a long time working remotely, and trust plus the ability to communicate in multiple ways go a long way toward building an effective distributed team.

How important is it to have significant test case development for each iteration?

Not as important as you might think. In the presentation I made the point that testing and quality are iterative processes. Remember that both developers and testers are iterating – developers on features and ultimately on code, and testers on what they are able to test and the quality they are able to assure. In some iterations, you’re going to build and execute a number of test cases. In others, because of either time pressures or a small number of new features, you’re going to have fewer test cases. Ideally, every test should be run against the code in the iteration. If the team can’t run every test against the code, you need to find out why. It’s easier with automation.
The important thing is to be able to assess your quality at the end of each iteration. You need data to determine where you are, how far you need to go, and where you need to focus your efforts. Part of this is also a matter of getting underneath the user stories to understand the users’ problem domain, as I described in the presentation.

Please share some measurements of quality you use.

There are the easy ones – number and severity of defects, time to fix defects, test cases run, and the measures you traditionally use. But once you’re comfortable there, you need more granularity, even in an Agile process. Focus on the things that matter to users – open defects, defects and severity per feature, and things like that. Remember, in most cases, users are using the software before all features are implemented, so things like identifying defects and getting them fixed are important to them, and can be done in more-or-less real time. Also, get user feedback on quality. This is unlikely to be quantitative, but this is the group that ultimately judges the application on quality. Find out what they think is great, and what they think could use some work. Use some of your tools to quantify those impressions.

How do you handle testing in two-week iterations when development continues until the last day of the iteration?

That problem seems to exist whether you’re doing Agile or traditional development – development always seems to take up time that could be productively used for testing. In Agile, I think you have some options. One of the things that I noted is that testers have to work side by side with developers during coding. In itself, the interaction and the testers’ influence can improve quality by leading to better decisions. By doing so, testers also learn firsthand what is important to test, and can accelerate functional testing. However, there will be iterations where traditional types of functional testing can’t be applied.
Here is where you might have a ‘testing backlog’. With that documented, testers can make the argument for a hardening iteration as part of a plan to improve quality, at the same time that developers improve design and refactor some code. In general, if development is continuing until the last day of the iteration, something is seriously wrong with the planning cycle. Maybe the team is taking on too much work in the iteration, the team is not building its code at least daily, or the iteration length is too short.

How involved should testers be in creating user stories?

Very involved. Testers can’t wait until the product is well defined to begin to figure out how to test it. They need to be leaders in understanding the business problem and defining the software to address that problem. Testers need every bit of knowledge of the business problem and how the application is intended to address it to get a head start on testing. At the same time that user stories are being developed, testers need to be thinking about that problem domain, the features, and how they are going to test those features as they are implemented. How do they do this? As user stories become solidified, testers should already have been thinking about testing them. At this point, testers need to do a high-level breakdown of their own to determine what types of tests will constitute acceptance criteria for the user stories. Next is the process of breaking down these stories into features by the Agile team, as a prelude to building software. As the team defines features, testers have to take their high-level testing view of the stories and decompose it into specific tests for specific features. The entire team discusses the stories at iteration planning, and testers need to have enough understanding to estimate the stories. The planning session is their opportunity to clear up any misunderstandings.
Throughout the iteration, they continually refine their knowledge of the user stories to create the necessary tests. The only way for testers not to feel hopelessly behind is to take a leadership role at the beginning – with defining user stories. That gives them a head start in the quality role, and makes them a visible part of the team that is working to address user needs.

When ‘iterating on quality’ you mention that you measure quality and then improve over time. If you are not running all tests in each iteration, how do you compare these measures of quality?

That’s a good question. Clearly you’re not going to have pure apples-to-apples comparisons between iterations. But you should have the data needed to demonstrate that you’re making progress on quality in general. Take a look at your metrics. Don’t do a direct comparison, but instead see if things are trending as you would like. Here’s where you can also get user involvement. Talk to users (Product Owners, etc.) to get a feel for their take on quality. Find out if there are areas that might be candidates for further testing, and areas they are satisfied with. Adjust your testing strategies accordingly. We are trying to be Agile, after all.

Could you talk more about the interaction between testers and users? What are they communicating, how, and how often? Are users actually testing during Agile development?

There’s a lot of communication at the beginning of the process, where testers are working to understand the business problem and what the users need in order to work in that problem domain. Communication with users slacks off during individual iterations, especially early on, when the primary user contact will be the Product Owner. Communication with the Product Owner will happen daily, during the team stand-up meetings. The testers understand the user and the product, so now is the time to deliver the software and the quality needed to address the business problem.
After users start using the software, communication will pick up. Users have things to say about the features they are using and about the quality of the application as it stands. Testers should ask, listen to what users say, and adjust their testing strategies accordingly. And, yes, users are testing during Agile development, and this will happen as soon as the first build is available.

What are your recommendations for QA to automate tests when there is such a high priority on testing new features as they are delivered? QA is almost never given enough time to do both.

It’s true that automation poses an additional effort in terms of setting up and configuring the tools in order to achieve benefits down the road. The question is whether or not you have the ability to make that sacrifice now. You should automate the areas you touch. So, in planning, when testers know they will touch a certain area of the code based on the user stories selected, they automate tests in those areas. This needs to be planned into the iteration, so there could be a user story simply to automate tests. Also remember that you’re iterating on testing. This means that you pick your areas for testing focus, determining where the most important tests are at the beginning, and improving as time goes on. Part of that improvement can come through automation.
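Several of the answers above come back to the same idea: automate the functional tests you write for a new feature so they can double as regression tests in later iterations. As a minimal sketch of what that can look like in practice – the feature, function names, and values here are invented for illustration, not from the webinar – a pytest-style test file might pair a first-pass functional check with edge-case tests that stay in the regression suite:

```python
# Hypothetical example: a discount-calculation feature and its tests.
# The functional test verifies the new feature once; the regression
# tests keep covering edge cases in every later iteration.

def apply_discount(price, percent):
    """Feature under test: apply a percentage discount to a price."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

def test_apply_discount_functional():
    # First-pass functional check: does the feature meet its acceptance criteria?
    assert apply_discount(100.00, 20) == 80.00

def test_apply_discount_regression():
    # Edge cases kept in the suite so later iterations cannot
    # silently reintroduce old defects.
    assert apply_discount(100.00, 0) == 100.00
    assert apply_discount(100.00, 100) == 0.00

if __name__ == "__main__":
    test_apply_discount_functional()
    test_apply_discount_regression()
    print("all tests passed")
```

Once tests like these exist, re-running them each iteration costs almost nothing, which is what makes the up-front automation effort pay off.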