Noel: I recently read a piece of yours on the QAInfoTech blog where you asked, “Is it fair to compare manual and automated test results?” That’s a great question, and one that I imagine could draw a variety of answers, depending on who is asked. What made you want to take a shot at answering it? It’s certainly hotly debated these days.

Rajini: Test automation has been in the limelight for the last five years or so. As more teams embrace agile development, test automation usage has increased significantly to cut testing time. However, this is not always the best solution for product quality. While test automation does bring more reliability and consistency to a testing effort, there is a lot more involved in making sure it is “effective.” Beyond that, it is important to understand that test automation cannot replace manual testing.

I wrote a post a while back, “Technology advancements taking us back to the grass roots of testing,” that elaborated on how manual testing is regaining prominence. With all of these advancements happening in the industry today, and with teams constantly looking at which metrics to use to evaluate a product’s quality and test effort, I thought the timing could not have been better for this article.

Noel: In that same article, I thought you made a really good point, where you suggest: “Compare the kind of defects reported by the two test approaches. If manual is reporting more UI and functional defects in a certain area, while automation is not catching them, this is a good checkpoint to see how the automation tests can be enhanced.”

What are some of the ways you can enhance automation to give it better results?

Rajini: The list of things that might enhance an automation suite can be never-ending. Given the time constraints most teams operate within, it is best to identify a few core things you want your automation to achieve and work toward enhancing the suite accordingly.

For example, if test coverage enhancement is your goal, it would be wise to run code coverage tools to understand where gaps exist today and enhance your automation suite accordingly. If your automation is producing a bunch of false positives or negatives, it would be worth taking the time to review and test the automation code at multiple times of the day, with multiple data sets, and with a few simultaneous manual and automated test runs to understand what is going wrong. Analyzing test automation results, rather than taking them at face value, is also useful.
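The defect-comparison checkpoint described earlier (manual testing reporting defects in areas the automation suite misses) can be sketched in a few lines; the defect records and the `area` field below are hypothetical, not from any real tracker:

```python
from collections import Counter

def coverage_gaps(manual_defects, automated_defects):
    """Compare defect counts by area for two test approaches.

    Returns the areas where manual testing reported defects that the
    automation suite did not catch at all -- a signal that automation
    may need enhancement in those areas.
    """
    manual = Counter(d["area"] for d in manual_defects)
    automated = Counter(d["area"] for d in automated_defects)
    return {area: n for area, n in manual.items() if automated[area] == 0}

# Hypothetical defect reports from one release cycle
manual = [{"area": "UI"}, {"area": "UI"}, {"area": "checkout"}]
automated = [{"area": "checkout"}]

print(coverage_gaps(manual, automated))  # → {'UI': 2}
```

Running a comparison like this each cycle turns the checkpoint into a repeatable signal rather than a one-off review.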

A colleague I used to work with referred to any weight carried over by an inefficient test automation suite as “test debt,” and he would push the team to look at how to reduce this debt over time. Testing the automation suite really helped us get to the desired levels of test debt. I have another blog post on this topic on my company QAInfoTech’s site, titled “Testing your Test Code.”

Noel: You also wrote another piece recently that I really enjoyed and that took me a little by surprise. It’s titled “Healthy practices in mobile testing,” and it wasn’t what I thought it was going to be, which was a piece on how testers can do their jobs better, or maintain some expected amount of cultural health. Instead, it offered actual strategies for testers to remain physically and mentally healthy! I feel like this is an extremely under-covered topic given the “faster, faster, faster” mentality at work these days.

What about mobile testing in particular warrants perhaps some extra precaution or prevention, in order to remain healthy enough to do your job well?

Rajini: I really feel this is a critical topic that all of us need to be aware of today, and it will become even more important with the rate at which smartphone usage is growing. There are multiple facets of the mobile device that the tester should be wary of. The core ones are the small screen size and continuous exposure to it, especially given that physical-device testing makes up a significant chunk of the overall test effort. The resulting eye and neck issues from such exposure cannot be ignored. And sometimes we are tempted to finish all our tasks on the same device we are testing.

For example, as I am testing, I may start an email I need to send, and will be tempted to send it from the same device. Instead, if I switch to my laptop or desktop for the mail, it gives me a small but much-deserved break from my small screen and helps me quickly recharge. I strongly suggest mobile testers read the blog for further details on how to promote healthy usage of a mobile device for software testing.

Noel: I was looking over the sessions on the upcoming STARWEST 2014 agenda, and I saw that one of the speakers claims, “an alarming 65 percent of mobile apps—more than 1.3 million—have a 1-star rating or less…” He then goes on to say, “The majority of development organizations have neither the right processes nor access to the devices required to properly test mobile applications.” Do you agree with this statement, and if so, what are some of those missing processes that prevent quality mobile testing?

Rajini: Well, I am not very sure of that number, but I can see where he is coming from. Some of the main reasons I see contributing to poor-quality mobile testing include: lack of sufficient testing (too little time and too few resources to test, and an attitude of getting the application pushed out the door quickly); testing often done by the developers who wrote the app (this is where we see the need for more independent testing, that is, the app should be tested by someone who did not write it); a huge compatibility matrix that makes the testing landscape very vast; and a lack of physical devices to get the testing done.

While solutions such as test matrix optimization, bringing in a dedicated test team, and use of device emulators wherever possible can all be considered, another solution that is becoming more popular is sourcing crowd testers. They often come from diverse geographies and have access to multiple devices across networks, which helps significantly improve the quality of mobile apps.
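As a rough illustration of why the compatibility matrix gets so vast, and of the simplest form of test matrix optimization, the sketch below enumerates a hypothetical device/OS/network matrix and prunes platform combinations that cannot occur. The device and OS names are illustrative only:

```python
from itertools import product

# Hypothetical compatibility dimensions for a mobile app under test
devices = ["Galaxy S5", "iPhone 5s", "Nexus 5", "Moto G"]
os_versions = ["Android 4.4", "Android 5.0", "iOS 7", "iOS 8"]
networks = ["WiFi", "3G", "4G"]

# The full matrix grows multiplicatively: 4 * 4 * 3 = 48 combinations
full_matrix = list(product(devices, os_versions, networks))

def valid(device, os, network):
    """Crude platform check: iOS versions run only on the iPhone (illustrative)."""
    return device.startswith("iPhone") == os.startswith("iOS")

# Dropping impossible combinations is the first, cheapest optimization step
pruned = [combo for combo in full_matrix if valid(*combo)]
print(len(full_matrix), "->", len(pruned))  # 48 -> 24
```

In practice, teams take this much further with pairwise (all-pairs) selection, which cuts the matrix down while still covering every two-way interaction between parameters.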

Noel: Lastly, I read the results of a survey just today that was given to people at various levels of progress in working with agile, and very few of the respondents, maybe only 5%-7%, said that their agile efforts had “failed” on their last project. With agile able to be “done” in so many ways, what do you suggest to someone who says they “failed” at it when giving it another go? For instance, what are some signs that teams should stick with the agile version/practice they attempted last time, versus giving another version a chance?

Rajini: You are right. Agile implementation variances can be quite stark across organizations. Some have settled for a hybrid mode, some follow it to a “T,” some have gone back to waterfall, and so on. A new variation called Wagile has been picking up steam lately. A 2013 study by Ambysoft looked at how Agile the organizations claiming to be Agile really are, and a significant number (as high as the 80-percent range) felt they were Agile “enough.” The one area where many thought they were lacking was “self-organization.” I would suggest that the ones who think they have failed really try to understand where they failed, and for this, the Ambysoft study is very useful.

The study talks about five core points to evaluate how Agile an organization is. I hosted a webinar in March 2014 on the topic “Key Testing Considerations in Agile” where I discussed this study. Also, before considering an Agile effort to have failed and moving to some other technique, I strongly urge such an organization to take on an in-depth retrospective to understand what criteria they failed on and see whether it is fixable. After all, the investment made to move to Agile would have been significant, and it is not worth bailing out too quickly.
