June 24, 2009

Measuring TestTrack API Performance (Part 1 of 2)

Every so often we're asked for performance metrics on the TestTrack SDK, but unfortunately we don't have any standard metrics for timing or footprint. Performance is also critical for most of the integrations we do and, while we test it on an individual project basis, it would be helpful to have baseline numbers for the SDK itself. JMeter is a tool I've been meaning to look at for a while, and I finally found time to take it for a spin. My goal was to compile some statistics on TestTrack SOAP SDK performance and scalability. There are quite a lot of possibilities to test, but I'm going to look at just record query performance, in two posts I'll publish over the next couple of weeks.

Getting Started

First I had to learn JMeter and, luckily, it has quite a bit of quality documentation, so the learning curve was gentle. It only took me an hour or so to get a simple test plan working that pulls the project list, logs in, and logs off. Once that was running, it was just a matter of defining the test scenario.
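
As a side note, the same connect/disconnect sequence is easy to script outside JMeter. Here's a minimal Python sketch using the zeep SOAP library; the ttsoapcgi endpoint path, the credentials, and the exact call signatures are assumptions on my part, so verify them against the WSDL your TestTrack server publishes.

# Sketch of the connect/disconnect sequence against the TestTrack
# SOAP SDK using zeep. The endpoint path, credentials, and call
# signatures are assumptions; check them against your server's WSDL.
from zeep import Client

WSDL = "http://ttserver/scripts/ttsoapcgi.wsdl"  # hypothetical host/path
client = Client(WSDL)

# Pull the list of projects visible to these credentials.
projects = client.service.getProjectList("username", "password")

# Log on to the first project; the returned cookie identifies the session.
cookie = client.service.ProjectLogon(projects[0], "username", "password")

# Log off to release the server-side session.
client.service.DatabaseLogoff(cookie)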

Testing Specs

I used an internal server we have in the Services group for large custom development and data migration projects. During the testing, TestTrack and the server itself were under very light use.
  • Windows Server 2003 SP2 x64
  • Xeon 3.20 GHz
  • 8GB of RAM
  • TestTrack 2009.0 (32-bit)
  • TestTrack native database
  • TestTrack Sample Project (installed with TestTrack or available for download)

Testing Scenario

Let's keep it simple for now and just look at defect query performance. I made the following four API calls to fetch defect records:
  1. getProjectList
  2. ProjectLogon
  3. getRecordListForTable (8 columns of data per defect, no filtering)
  4. DatabaseLogoff
This test plan was run 200 times sequentially against three projects with different defect counts: 100, 1,000, and 2,500. (A Python sketch of the loop follows.)
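
To mirror what the JMeter plan measures, here's a hedged sketch of that loop in Python: time the getRecordListForTable call over 200 iterations and summarize with median and standard deviation. The table name, column list, empty filter argument, and logon parameters are illustrative; the real signatures are in the SDK's WSDL.

# Timing sketch for the defect query step: run getRecordListForTable
# 200 times and summarize with median and standard deviation.
# Table name, column names, and the empty filter argument are
# illustrative assumptions; check the WSDL.
import statistics
import time

from zeep import Client

client = Client("http://ttserver/scripts/ttsoapcgi.wsdl")  # hypothetical
projects = client.service.getProjectList("username", "password")
cookie = client.service.ProjectLogon(projects[0], "username", "password")

# Eight columns of data per defect, as in the test scenario.
columns = ["Number", "Summary", "Status", "Type",
           "Priority", "Product", "Entered by", "Date Entered"]

samples = []
for _ in range(200):
    start = time.perf_counter()
    client.service.getRecordListForTable(cookie, "Defect", "", columns)
    samples.append((time.perf_counter() - start) * 1000.0)  # milliseconds

print("median: %.1f ms, stdev: %.1f ms"
      % (statistics.median(samples), statistics.stdev(samples)))

client.service.DatabaseLogoff(cookie)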

Results

The following results were observed.
                  Apache 2.2                 IIS 6
API Function      Median (ms)  StdDev (ms)   Median (ms)  StdDev (ms)
getProjectList    166          10.9          67           14.0
ProjectLogon      190          11.4          87           11.3
DatabaseLogoff    1,345        112.6         1,222        108.7

Table 1: Connect/Disconnect Performance
                  100 records                1,000 records              2,500 records
Web Server        Median (ms)  StdDev (ms)   Median (ms)  StdDev (ms)   Median (ms)  StdDev (ms)
Apache 2.2        256          23.5          1,209        24.1          2,844        108.6
IIS 6             178          35.6          1,358        176.2         3,266        361.7

Table 2: Defect Record Query Performance

Commentary

Ignoring the query performance for now, two things stick out: first, logging off is very expensive; second, there are clear performance differences between IIS and Apache. I was aware that logging off was slow, but it's a bit of a shock to see just how slow it is in comparison. I'm not sure why IIS is faster at the smaller end and Apache at the higher end, and I hate to guess, so if you're an expert in these matters please leave a comment and educate me. On query performance, Apache was more consistent and handled larger data sets better than IIS. Again, I'm not sure why that is, so I'd love to hear from experts out there.

Next week, I'll have some graphs on performance and metrics on dataset sizes. I'll also look at pulling test run records to see how they compare to defects.