Asking a roomful of testers, “What do software defects really cost?” and then telling them, “much more than you think,” before anyone gets the chance to answer is a pretty gutsy move. It’s certainly one that could’ve easily backfired for Parasoft chief strategy officer Wayne Ariola at last week’s STAREAST conference.
It did not backfire, and after waves of evidence of the immense financial impact that defects in production can have on an enterprise, I got the feeling that many in the room made mental notes to have some very serious conversations with various departments upon returning home.
Everyone knows that software defects cause long hours of rework, push back new feature releases, and sometimes force patches to be applied. But quantifying all of that into actual financial blows is not only difficult; those numbers are rarely shared with devs, testers, and others outside of the investor and executive levels.
Citing familiar software failures from banks and insurance companies, Sony's repeated hacks, Target's 2014 identity theft problem, and American Airlines' recent iPad glitch, Ariola then moved to a series of hard-to-stomach line graphs showing the plummeting stock prices that each of these failures caused.
Immediately following these failures, as news grew and social media shares mounted, these stocks continued to tank. In some examples, once prices began to climb again, they plateaued far lower than the price per share before the release or bug discovery.
Ariola blamed these failures on a “culture of not focusing on software quality,” and no one disagreed. This isn’t to say that those testers in the room aren’t focused on quality, but is everyone else? Not likely. And for something as difficult to change as culture, it’s not the time for testers or anyone else to point fingers. It’s time to right the ship before your organization is the next one in the headlines.
So how do we fix this?
For one thing, Ariola says it’s time to start sharing financial information like stock prices with developers—and I would add product owners, designers, and anyone else who touches a release candidate before it ships. And that doesn’t mean sending a quarterly email to the company with little more than a three-month stock history screenshot in it.
This means looking at the price of your company's stock at the hour a new release came out, and then tracking other significant moments from there. When was the bug found? Who found it first? Was it publicized? How long did it take until it could be fixed? How quickly could support resolve issues and satisfy customers? These are the kinds of metrics that absolutely can impact something as seemingly distant as what's going on on Wall Street, an area where some may not realize they have so much influence.
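As a sketch of what tracking those moments might look like, here's a minimal, hypothetical Python example; the field names, timestamps, and prices are all invented for illustration, not drawn from any of the incidents above:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReleaseIncident:
    """One release-plus-defect timeline. All values are hypothetical."""
    released_at: datetime     # when the release shipped
    bug_found_at: datetime    # when the defect was first reported
    fixed_at: datetime        # when the fix reached production
    price_at_release: float   # share price at release time
    price_after_news: float   # share price after the defect was publicized

    def hours_to_detect(self) -> float:
        return (self.bug_found_at - self.released_at).total_seconds() / 3600

    def hours_to_fix(self) -> float:
        return (self.fixed_at - self.bug_found_at).total_seconds() / 3600

    def price_drop_pct(self) -> float:
        return 100 * (self.price_at_release - self.price_after_news) / self.price_at_release

# Made-up numbers, purely to show the shape of the metric.
incident = ReleaseIncident(
    released_at=datetime(2015, 4, 1, 9, 0),
    bug_found_at=datetime(2015, 4, 2, 14, 30),
    fixed_at=datetime(2015, 4, 4, 11, 0),
    price_at_release=52.00,
    price_after_news=47.32,
)

print(f"Detected in {incident.hours_to_detect():.1f} h, "
      f"fixed {incident.hours_to_fix():.1f} h later, "
      f"stock down {incident.price_drop_pct():.1f}%")
```

Even a simple record like this lets a team correlate detection and fix times with the market's reaction, which is the whole point of sharing these numbers beyond the executive level.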
And for those that don’t work for a publicly traded company, there are other metrics to go by, like the number of customers you have, or the number of people who are currently using your mobile app. Ariola asked early on, “How many of you have ever downloaded an app you hated?” Of course every hand went up, and when he then asked, “And what did you do when you realized you hated it?” we all proudly and immediately shouted in unison, “We deleted it!”
This is the mindset of today. The only thing that takes less time than downloading a mobile app is deleting it. Like a stock price, if the number of subscribers or users of your software is falling more often than it's rising, you have a serious issue.
One suggestion made to combat buggy releases was to stop asking, "Are we done testing?" and instead ask, "Does the release candidate have an acceptable level of risk?" Some may incorrectly assume the two are similar enough not to warrant changing the approach to testing, but that assumption only allows the current culture to keep putting your business and customers at risk.
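To make the distinction concrete, here's a hypothetical sketch of what a risk-based release gate might look like, as opposed to a "done testing" checkbox. The risk factors, weights, and threshold are all invented for illustration; a real team would choose its own:

```python
# Hypothetical risk-based release gate: instead of asking "are we done
# testing?", score the candidate's residual risk against a threshold.
# Factor names and weights are made up for this example.
RISK_WEIGHTS = {
    "open_severity_1_bugs": 10.0,   # each unresolved critical defect
    "untested_changed_files": 1.5,  # changed files with no test coverage
    "failed_smoke_tests": 5.0,      # each failing smoke test
}

def release_risk(metrics: dict) -> float:
    """Weighted sum of the outstanding risk indicators."""
    return sum(RISK_WEIGHTS[name] * count for name, count in metrics.items())

def acceptable_to_ship(metrics: dict, threshold: float = 10.0) -> bool:
    """The release question becomes a risk question, not a 'done' question."""
    return release_risk(metrics) <= threshold

candidate = {
    "open_severity_1_bugs": 0,
    "untested_changed_files": 4,
    "failed_smoke_tests": 1,
}
print(acceptable_to_ship(candidate))  # risk = 6.0 + 5.0 = 11.0, so False
```

The testing may well be "done" for this candidate, yet the gate still says no, which is exactly the shift in framing the question about acceptable risk is meant to force.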
Even if everyone did somehow manage to agree on a definition of "done," when disaster strikes, nobody's going to want to hear (or even say) "But we were done testing!" when someone wants to know how that bug made it into production.
As the session continued, there were some in the room who shared stories of how they were increasing coverage with continuous testing, “shifting left”, utilizing cloud-based dev/test resources—and I hope these stories helped inspire some of those who knew the challenge ahead of them back home.
As I read back over this recap, I realize I’ve made it sound like the session was some fire and brimstone sermon meant to scare us all into fleeing back to our offices and never seeing free time or our families ever again, but that’s far from the case.
Everyone got the message without being beaten over the head, it applied to every software industry in the world, there was loads of involvement from the audience, and Ariola almost managed to go the entire session without namedropping or pitching his own company’s wares a single time—something I’ve literally never seen done during a vendor presentation.
During the Q&A portion at the end, one attendee excitedly asked, pen and paper in hand, “Do you have any service virtualization, test automation, or continuous testing tools that you would recommend?”
And after a chuckle, he had no choice but to suggest Parasoft as a great option for her, and the laughter and applause from the crowd proved that he’d definitely earned the plug.
Want to learn more about how continuous testing reduces risk while actually accelerating the SDLC? Parasoft wrote the book on it! Click here to download a complimentary copy for yourself!