Ever had a frightful encounter with the following testing demons? They tend to lurk around complex interconnected systems, just waiting to wreak havoc by forcing you to delay testing, work at dreadful hours, or make distressing trade-offs on the completeness of your testing…
Evolving or incomplete dependent systems
You’re ready to test the part of the system that you’re responsible for, but you can’t really exercise it unless you interact with other system parts that are still evolving, or not yet implemented at all.
Dependencies with restricted access or availability
This demon includes those dependent systems available for testing only from 3 am – 5 am on Saturday morning. They’re a close cousin of the evolving/incomplete systems: the reason you can’t access them is different (e.g., security restrictions, “geopolitical” boundaries, etc.), but the impact on your testing is the same.
Unrealistic performance from staging environments
Staging environments commonly lack the computing resources required to deliver realistic performance from downstream systems, or to emulate complex network factors such as bandwidth, latency, and jitter. Testing against unrealistic conditions leads to nasty surprises later: when the environment doesn’t accurately represent real-world conditions, performance testing can deliver false assurances.
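As an illustrative sketch only (not any particular product’s API), here is how a virtualized dependency might inject configurable latency and jitter so performance tests see something closer to production network behavior. The stub class, the endpoint path, and the delay parameters are all hypothetical:

```python
# Hypothetical sketch: a stand-in for a downstream service that injects
# randomized latency, emulating network conditions a staging environment
# can't reproduce. BASE_LATENCY_S and JITTER_S are assumed values.
import random
import threading
import time
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BASE_LATENCY_S = 0.05   # assumed mean round-trip delay (seconds)
JITTER_S = 0.02         # assumed standard deviation of the delay

class SlowDependencyStub(BaseHTTPRequestHandler):
    def do_GET(self):
        # Sleep for a randomized delay before answering, emulating
        # the bandwidth/latency/jitter of the real downstream system.
        delay = max(0.0, random.gauss(BASE_LATENCY_S, JITTER_S))
        time.sleep(delay)
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

# Bind an ephemeral port and serve in the background.
server = HTTPServer(("127.0.0.1", 0), SlowDependencyStub)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = f"http://127.0.0.1:{server.server_address[1]}/quote"
start = time.monotonic()
with urllib.request.urlopen(url) as resp:
    status, payload = resp.status, resp.read()
elapsed = time.monotonic() - start
server.shutdown()
```

Because the delay distribution is a plain configuration value, a performance test can dial it up to match measured production behavior rather than whatever the staging hardware happens to deliver.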
Slow environment provisioning
Sure, you can get the test environment provisioned with all the dependencies you need to exercise…you’ll just have to wait a few weeks for it to be configured to your liking and stood up. By that time, your team is likely to be on the next iteration.
Test conditions that are difficult to achieve
To achieve the expected level of test coverage, you often need to see how dependencies configured for various edge, error, or failure conditions impact the application under test (AUT). But good luck getting these difficult-to-produce conditions configured—especially if you have limited access to (or control over) the dependency.
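To make this concrete, here is a hedged sketch of a virtual dependency whose failure behavior is a one-line switch. The `VirtualDependency` class, its `mode` attribute, and the credit-check endpoint are invented for illustration, not any real service virtualization API:

```python
# Hypothetical sketch: a virtual dependency where hard-to-produce failure
# conditions (e.g., a downstream outage) are a simple configuration flip.
import threading
import urllib.error
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class VirtualDependency(BaseHTTPRequestHandler):
    mode = "normal"  # flip to "unavailable" per test case

    def do_GET(self):
        if VirtualDependency.mode == "unavailable":
            # Simulate the downstream system being down.
            self.send_response(503)
            self.send_header("Content-Length", "0")
            self.end_headers()
        else:
            body = b'{"credit_score": 720}'
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    def log_message(self, *args):  # keep test output quiet
        pass

server = HTTPServer(("127.0.0.1", 0), VirtualDependency)
threading.Thread(target=server.serve_forever, daemon=True).start()
url = f"http://127.0.0.1:{server.server_address[1]}/credit-check"

# Happy path: the dependency answers normally.
with urllib.request.urlopen(url) as ok:
    happy_status = ok.status

# Failure path: one line turns the dependency into an outage.
VirtualDependency.mode = "unavailable"
try:
    urllib.request.urlopen(url)
    failure_status = None
except urllib.error.HTTPError as err:
    failure_status = err.code
server.shutdown()
```

The point is not the toy server but the knob: a condition that might take days to provoke in the real dependency becomes a per-test-case setting.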
Third-party access fees
Pay-per-use fees for cloud-based or shared services such as payment card processing, credit checks, etc. might be an expected expense for production usage. However, these costs can escalate quickly for continuous testing or high-volume performance testing.
Other teams working on shared test environments
It can take hours or sometimes even days to get a test environment configured exactly how you like it—then another team comes in and re-configures it to suit their needs. You can’t blame them, but it’s frustrating nevertheless.
Mainframe dependencies
Developing and testing applications that leverage a mainframe environment is commonly a complex, costly, and time-consuming endeavor. Factors such as the complexity of access, the cost of MIPS consumption, and the operational cost and delays involved in changing mainframe components make mainframe-related testing extremely frightening to testers and mainframe experts alike.
How Service Virtualization Helps Exorcise These Software Testing Demons
By exorcising—well, virtualizing—these demons through the power of service virtualization, you can test earlier, faster, and more completely.
Service virtualization is a new way to give developers and testers the freedom to exercise their applications in incomplete, constantly evolving, and/or difficult-to-access environments. It gives you flexible 24/7 access to the dependent application behavior you need in order to complete your development and testing tasks. Teams taking advantage of service virtualization are able to:
- Start testing whenever they’re ready.
- Rapidly configure the environment conditions critical to their test plan.
- Complete the desired breadth and volume of tests.
- Confidently promote the application under test to the next level.
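The capture-and-replay idea underneath those benefits can be sketched in a few lines. The `VirtualService` class and its `record`/`handle` methods are hypothetical, standing in for whatever recording tooling a team actually uses:

```python
# Hypothetical sketch of the record/playback idea behind service
# virtualization: captured request/response pairs stand in for the live
# dependency, so tests can run any time, at no per-call access fee.
class VirtualService:
    """Plays back recorded responses in place of a live dependency."""

    def __init__(self):
        self._recordings = {}

    def record(self, method, path, response):
        # Capture a real (or hand-written) response for later playback.
        self._recordings[(method, path)] = response

    def handle(self, method, path):
        # Unrecorded traffic is surfaced loudly rather than guessed at.
        return self._recordings.get(
            (method, path),
            {"status": 501, "body": "no recording for this request"},
        )

# Usage: record once, then replay on every test run.
vs = VirtualService()
vs.record("GET", "/credit-check/42", {"status": 200, "body": '{"score": 720}'})
reply = vs.handle("GET", "/credit-check/42")
miss = vs.handle("GET", "/not-recorded")
```

Once the dependency's behavior is captured, the team controls availability, conditions, and cost—which is precisely what banishes the demons above.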
Watch this 2-minute introduction for more details…