DevOps Digest 402: Web Testing Environments

Feb 28, 2017

Having briefly covered testing strategies, layers/scopes, and testing types in Issue 401, let’s now return to our sample application and illustrate how to make some choices. Again, we’re not going to build specific machinery, but rather evaluate your testing requirements against your comfort level to select the right strategy, layers, testing types, and tools.

Our sample is an MVC web application built atop Microsoft’s .NET framework using C#, which immediately winnows the field of candidate tools and narrows the focus of our lower-layer testing decisions.

For example, because we’re dealing with a web application, our architectural choices influence our testing choices. Our sample is trivial, but it at least outlines a development roadmap along which major functionality will be packaged in various library assemblies, then consumed through interfaces served up via dependency injection (DI).

There are a number of solid, freely available DI tools, including the injection mechanics Microsoft builds into the MVC framework. No matter which you use, DI helps considerably in unit testing: it makes it simple to create a single test assembly for each library assembly, which can provide the unit tests to exercise all the lowest-layer functionality.
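To make this concrete, here’s a minimal constructor-injection sketch in C#; the interface and controller are hypothetical stand-ins, not code from our actual sample:

```csharp
using System.Web.Mvc;

// Hypothetical contract for some piece of low-level functionality.
public interface INotificationService
{
    void NotifyPasswordChanged(string customerEmail);
}

public class CustomerController : Controller
{
    private readonly INotificationService _notifications;

    // The DI container supplies the real implementation at runtime;
    // a unit test can hand in a fake or mock instead.
    public CustomerController(INotificationService notifications)
    {
        _notifications = notifications;
    }
}
```

Because the controller depends only on the interface, a test assembly can exercise it without dragging along mail servers or any other production machinery.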

Where said tests are concerned, white-box testing is often the right choice, because only developers can know all the subtleties built into the underlying implementation. Crafting unit tests carefully to exercise as many code paths as possible provides assurance that the implementation works as intended. The code coverage reports we implemented in Issue 308 are very useful in determining which code remains to be exercised.
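For instance, a white-box NUnit test can deliberately target boundary conditions the developer knows exist in the implementation. The policy class below is a hypothetical illustration:

```csharp
using NUnit.Framework;

public class PasswordPolicy
{
    private readonly int _minimumLength;
    public PasswordPolicy(int minimumLength) { _minimumLength = minimumLength; }

    public bool Validate(string candidate)
    {
        return candidate != null && candidate.Length >= _minimumLength;
    }
}

[TestFixture]
public class PasswordPolicyTests
{
    // White-box knowledge: length 8 is a boundary in the implementation,
    // so we exercise both sides of it explicitly.
    [Test]
    public void Validate_RejectsSevenCharacterPassword()
    {
        Assert.IsFalse(new PasswordPolicy(minimumLength: 8).Validate("abcdefg"));
    }

    [Test]
    public void Validate_AcceptsEightCharacterPassword()
    {
        Assert.IsTrue(new PasswordPolicy(minimumLength: 8).Validate("abcdefgh"));
    }
}
```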

When it comes to functionality exposed across assembly boundaries, interface testing with a black-box approach also makes sense. Such tests are written purely from the standpoint of a consumer of that functionality, wholly ignorant of the internal implementation details.
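A sketch of what that looks like in practice, again with hypothetical types: the test relies only on the published interface, so any implementation can be dropped in without changing a line of test code.

```csharp
using System.Collections.Generic;
using NUnit.Framework;

public interface ICustomerRepository
{
    void Add(Customer customer);
    Customer FindByEmail(string email);
}

public class Customer
{
    public Customer(string email) { Email = email; }
    public string Email { get; private set; }
}

// A stand-in implementation; the test below passes unchanged against
// any other class that honors the interface contract.
public class InMemoryCustomerRepository : ICustomerRepository
{
    private readonly Dictionary<string, Customer> _byEmail = new Dictionary<string, Customer>();

    public void Add(Customer customer) { _byEmail[customer.Email] = customer; }

    public Customer FindByEmail(string email)
    {
        Customer found;
        return _byEmail.TryGetValue(email, out found) ? found : null;
    }
}

[TestFixture]
public class CustomerRepositoryContractTests
{
    [Test]
    public void Add_ThenFindByEmail_ReturnsTheSameCustomer()
    {
        ICustomerRepository repository = new InMemoryCustomerRepository();
        repository.Add(new Customer("jane@example.com"));

        Assert.AreEqual("jane@example.com", repository.FindByEmail("jane@example.com").Email);
    }
}
```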

This combination delivers a powerful one-two punch for knocking out bugs and defects. The unit tests ensure best intentions are met for the details of an implementation, while interface tests ensure that the “surface” works well to provide reusable pieces. In effect, this approach jointly satisfies producers and consumers of low-level functionality.

And as long as care is taken to keep such units of functionality isolated and maximally orthogonal, this simple architecture makes it possible to grow an application for years. It’s a poor man’s component-based development, without the additional work required to treat components as things in themselves.

We chose NUnit as our tool to illustrate executing unit tests via Jenkins in Issue 307. But as mentioned then, Microsoft Visual Studio includes its own MSTest facilities out of the box, and there are other suitable alternatives (MbUnit and xUnit) for unit tests that also serve nicely for interface testing when coupled with other tools. For example, you can inject faux expected behavior into interfaces under test using “mocking” tools such as Rhino Mocks, NMock3, or Moq, all common frameworks worth considering for .NET development.
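Here’s a taste of the mocking approach with Moq, reusing the hypothetical INotificationService sketched earlier; the service class is likewise illustrative:

```csharp
using Moq;
using NUnit.Framework;

public class PasswordResetService
{
    private readonly INotificationService _notifications;
    public PasswordResetService(INotificationService notifications) { _notifications = notifications; }

    public void Reset(string customerEmail)
    {
        // ...update the stored password here...
        _notifications.NotifyPasswordChanged(customerEmail);
    }
}

[TestFixture]
public class PasswordResetServiceTests
{
    [Test]
    public void Reset_NotifiesTheCustomer()
    {
        // Inject faux behavior in place of the real dependency...
        var notifications = new Mock<INotificationService>();
        var service = new PasswordResetService(notifications.Object);

        service.Reset("jane@example.com");

        // ...then verify the consumer used the interface as intended.
        notifications.Verify(n => n.NotifyPasswordChanged("jane@example.com"), Times.Once());
    }
}
```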

But unit and interface tests ensure only that the implementation performs as the developers intend. They don’t tell you whether the product’s resulting feature set actually satisfies the design requirements from a top-level user perspective.

As crazy as it sounds, substantial disconnects between the expectations of business stakeholders and what the developers actually create are surprisingly frequent. There are many reasons for this, not the least of which is that the various groups involved often don’t share the same language or level of understanding. Nonetheless, the issue for testing is how to make sure such problems get caught and stopped as quickly as possible, preferably before considerable time and effort is wasted on a bad direction.

This sort of disconnect is surely one of the reasons for the rise of Behavior-Driven Development (BDD) as an evolution of Test-Driven Development (TDD). BDD has implications for testing at all levels, but the way it helps facilitate discussion and understanding has led the industry to a renewed focus on acceptance testing.

Acceptance tests are often comparatively simple to exercise with web applications, particularly those built on the model-view-controller (MVC) paradigm, so they’re a good fit for our sample. The point is to tie together the business requirements with actual results through a simple, shared vocabulary.

Those unfamiliar with BDD should expect to confront a given-when-then approach to acceptance tests. The approach assumes some well-known state of affairs (given), at which point something happens (when), in light of which a given set of outcomes should prevail (then). For example, consider this: “Given a customer record on file, when a request comes in to update its password, then the database should be updated and the customer should be emailed a notification.”

Breaking high-level functionality down in this tripartite manner facilitates communication between those writing the requirements and those writing the software, capturing behavioral expectations in simple language that keeps stakeholders on the same page.

It also simplifies acceptance testing, which for our sample could easily be implemented with FitNesse, NBehave, SpecFlow (Cucumber for .NET), SpecsFor.Mvc, and the like. They all work a bit differently but provide essentially the same set of services. Again, the most important thing to remember is that they must bridge the gap between your key stakeholders to avoid wasting time building the wrong thing.
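To give a flavor of how these tools map plain language onto executable tests, here’s a SpecFlow-style step-definition sketch for the password example above; the step bodies are hypothetical placeholders:

```csharp
using TechTalk.SpecFlow;

[Binding]
public class PasswordUpdateSteps
{
    // Matches: "Given a customer record on file"
    [Given(@"a customer record on file")]
    public void GivenACustomerRecordOnFile()
    {
        // ...create or load the test customer...
    }

    // Matches: "When a request comes in to update its password"
    [When(@"a request comes in to update its password")]
    public void WhenARequestComesInToUpdateItsPassword()
    {
        // ...invoke the password-update endpoint...
    }

    // Matches: "Then the customer should be emailed a notification"
    [Then(@"the customer should be emailed a notification")]
    public void ThenTheCustomerShouldBeEmailedANotification()
    {
        // ...assert against the outgoing mail queue...
    }
}
```

The business-readable given-when-then sentences live in a plain-text feature file; bindings like these are what the tooling executes when those sentences run.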

The last sorts of testing it makes sense to consider here (for our sample) are load, stress, and performance testing. Web server technology has advanced to the point where standing up even non-trivial web applications is generally simple.

Keeping it up and running under adverse conditions, however, and responding to users in a timely fashion can be a different story. Products can falter in the market when organizations don’t do load, stress, or performance testing prior to release.

One of the upsides of creating a popular product is that you’ll discover soon enough how well it performs under load and stress. The downside is that your users will be unhappy if they’re the ones doing your testing rather than your QA personnel. Unlike your employees, your users aren’t under any obligation to keep coming back for more.

So do yourself a favor and discover how your product performs under load and stress before you release it to the general populace. For our sample application, a valid choice would be to rely upon the load-testing mechanisms built into Microsoft Visual Studio. The Microsoft Azure cloud in particular is a powerful ally in discovering how well a web application performs.

But again, there are various third-party tools that can also be helpful. Popular options include Apache’s JMeter, The Grinder, and Pylot. If you’ve got the budget, there are also commercial offerings worth considering.
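Even before standing up a real tool, a few lines of C# can give you a back-of-the-envelope feel for how an endpoint behaves under a burst of concurrent requests. This is a rough smoke check with a placeholder URL, not a substitute for proper load testing:

```csharp
using System;
using System.Diagnostics;
using System.Linq;
using System.Net.Http;
using System.Threading.Tasks;

class LoadSmokeCheck
{
    static void Main()
    {
        const string url = "http://localhost:5000/";  // placeholder endpoint
        const int burstSize = 100;

        // Note: on the .NET Framework, ServicePointManager.DefaultConnectionLimit
        // throttles per-host concurrency unless you raise it.
        using (var client = new HttpClient())
        {
            var stopwatch = Stopwatch.StartNew();

            // Fire a burst of concurrent GETs and wait for all of them.
            var requests = Enumerable.Range(0, burstSize)
                                     .Select(i => client.GetAsync(url))
                                     .ToArray();
            Task.WaitAll(requests);

            stopwatch.Stop();
            var failures = requests.Count(r => !r.Result.IsSuccessStatusCode);
            Console.WriteLine("{0} requests in {1} ms, {2} failures",
                              burstSize, stopwatch.ElapsedMilliseconds, failures);
        }
    }
}
```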

No discussion of Continuous Testing (CT) would be complete without saying at least something about your test lab itself. That’s because the state of our industry has advanced so quickly that the environments in which we test are almost always more complicated than the products being tested.

Think about it for a moment. How many lines of code make up your product: tens of thousands, hundreds of thousands, millions? The operating systems on which our products execute these days are typically measured in tens of millions of lines of code. No matter how complicated your products are, the environments in which they execute are likely more complicated, often by orders of magnitude.

Software developers know how frustrating it is to keep receiving bug reports from the field that they simply can’t reproduce. Something must be different; the question is always what. Crucial to answering that question is having repeatable environments.

Fortunately, great progress has been made in the last decade in terms of infrastructure management tools. Tools like Puppet, Chef, Ansible, Salt, Fabric, and others produce repeatable environments in a highly automated way. Despite their differences, they all come down to specifying configuration data through a series of scripts and/or input files.
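As a taste of what that configuration data looks like, here are a few illustrative lines of an Ansible playbook; the host group, package, and service names are placeholders:

```yaml
# Every run converges the target machines to the same declared state,
# which is exactly what makes the resulting environments repeatable.
- hosts: webservers
  become: yes
  tasks:
    - name: Install the web server package
      apt:
        name: nginx
        state: present

    - name: Ensure the web server is running
      service:
        name: nginx
        state: started
```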

We’ll cover these tools in greater depth when we tackle Continuous Delivery, but they can greatly simplify the process of standing up repeatable environments for testing as well. Small investments in surmounting their learning curves can pay huge dividends by ensuring your tests execute under the same circumstances, reliably and repeatably, every time.

Regardless of which strategies and tools you choose, tying it all together falls on the shoulders of your version control system (VCS). You’ll store test scripts, sample input files and other data, infrastructure management details, and so forth. Any VCS worth the name can store these sorts of files, which are usually text and comparatively small.

But only Perforce Helix gives you additional options, making it possible to store and version much larger data sets. Once you have a given set of test environments defined, Helix can store them as Docker images or even full virtual machines if needed. Infrastructure management tools can help you build well-defined environments, and Helix can help you maintain and recover them, skipping the time-consuming steps to rebuild them every time.

And so ends our high-level survey of CT. The keys to getting the best value for your testing investments are (1) to identify the strategies, test types, and tools that make the most sense for your projects, and (2) to build them on top of the repeatable foundation supplied by today’s infrastructure management tools.

Next time, we’ll move on to CD, provide more specific detail about those tools and their operation, and review some considerations to help you determine which of them best fit your organization. In the meantime, good luck getting your automated testing off the ground.

You Ask, We Answer

As previously mentioned, this is your roadmap to creating a successful DevOps pipeline. Don’t understand something? Just ask. Need to dive a little deeper? Send an email to info@perforce.com with your questions. Then, stay tuned for a live Q&A webinar at the end of this series.

Get DevOps Digest Sent to Your Inbox

You don’t need to remember to check back with us each week. Instead, get the digest delivered directly to your inbox. Subscribe to our 25-week DevOps Digest and we’ll get you where you need to go, one email at a time.

See Perforce Helix in Action!

Join us for a live demo every other Tuesday and see the best of Perforce Helix in 20 minutes. Save your spot!
