August 25, 2011

Webinar Recording: Measuring Technical Debt with Load Testing

Helix ALM
Thanks to everyone who joined us for the Measuring Technical Debt with Load Testing webinar. If you missed the event, take some time to watch the recorded webinar. Attendees asked several interesting questions, not all of which we were able to address during the webinar. To follow up, and to invite others to participate, we've included a complete Q&A below.

http://youtu.be/t4GUmHc7vys

Q: Can QA Wizard Pro support and work with non-supported languages?

A: This is really a two-part question. For load testing, the language doesn't really matter because we are just sending HTTP GETs and POSTs to the web server. For functional testing, it varies, because functional testing is really about the controls. We need to identify the controls so that we can interact with them and perform the various actions. In other words, if we see a button, can we click on it? So it's not so much about the language; it's more about the controls.

Q: Is there a limit on how many virtual users QA Wizard Pro can simulate in one load test?

A: Yes, the current limit is 62,500 virtual users. There is a limit of 250 virtual users on one machine, and QA Wizard Pro is limited to controlling 250 or fewer machines during one script execution (250 x 250 = 62,500).

Q: Can QA Wizard Pro access UNC shares?

A: Yes. QA Wizard Pro is a Windows application that runs as whatever user you are currently logged in as. If you are able to access the share, QA Wizard Pro should be able to access it as well.

Q: How do you test for memory leaks?

A: You test for memory leaks as you run load tests. What you have to do is watch the working set size in Perfmon as the load test executes. There are several counters in Perfmon you can use; for example, the percent committed bytes of the process you're monitoring is a good measure of its working set. I would run a relatively long-lived load test and look at the percent committed bytes over the course of the run. If it is trending up, that's indicative of a leak.
It doesn't tell you where the leak is in the code, but that's enough information for developers to start applying a memory analyzer to their code. In a managed environment like .NET, you can also look at garbage collections to see if there are large, long-lived objects.

Q: How do you recommend handling dynamic values, such as correlation concepts?

A: I have a degree in statistics, and there is nothing I enjoy more than playing with correlations. However, there is no need to calculate correlations or other statistical measures when simply observing trends will get you the information you need. If you see a higher level of disk activity than you do with other applications, for example, you don't have to measure it exactly, but you do have to ask why it's higher. That might lead you to adjust your load scripts to look at other aspects of the application, or to rearrange feature use in the scripts to investigate where the issue is occurring. It could also be a memory leak, so you have to investigate that possibility too. Ultimately, if it is preventing the application from scaling as required, you write it up as a defect. At the same time, you've identified some technical debt, likely brought on by poorly thought-out database calls or bad memory management.
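The leak-detection approach described above, watching whether a memory counter trends up over a long-lived load test, can be sketched in a few lines. This is a minimal illustration, not part of QA Wizard Pro or Perfmon; the sample values and the growth threshold are assumptions, standing in for counter readings you might export from Perfmon during a test run.

```python
# Sketch: flag a possible memory leak from memory samples collected at
# regular intervals during a long-lived load test (e.g. readings exported
# from a Perfmon process-memory counter). Values and threshold are
# illustrative, not tied to any specific tool.

def trend_slope(samples):
    """Least-squares slope of the samples over their index
    (average growth per sampling interval)."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
    den = sum((x - mean_x) ** 2 for x in xs)
    return num / den

def looks_like_leak(samples, threshold=0.05):
    """True if memory is trending upward faster than the (assumed)
    threshold; a sustained upward trend is what suggests a leak."""
    return trend_slope(samples) > threshold

# Working set hovering around a stable value: no leak suspected.
steady = [512, 513, 511, 514, 512, 511, 513, 512]
# Working set climbing steadily over the run: leak suspected.
climbing = [512, 530, 547, 569, 588, 605, 627, 641]

print(looks_like_leak(steady))    # False
print(looks_like_leak(climbing))  # True
```

As the webinar answer notes, this only tells you that a leak probably exists, not where it is; the trend is the signal that it's time to hand the workload to developers with a memory analyzer.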