Is It Fair to Compare Manual and Automated Test Results?

The article below was originally published by Rajini Padmanaban on the fantastic QA InfoTech blog.

Manual and automated testing have long co-existed, and both have contributed significantly to successfully signing off on a product’s quality. Over the years, test teams have debated the value of one over the other, how testers need to sharpen their test automation skills, and so on. A tester who really understands software quality and the intricacies it entails is one who appreciates the value of both approaches and recognizes that neither should be under- or over-estimated relative to the other. With more organizations embracing the Agile style of development, test automation’s scope has certainly grown. There is an increasing push for new areas to be automated, for more test engineers to be trained in automation, for new tools to help with the process, and so on. Test managers and leads are rightly cautious here, because an unplanned test automation effort can soon become a mammoth suite that is overwhelming to manage and maintain.

This is where an objective comparison between manual and automated test efforts becomes important: it helps determine their true value and make any adjustments needed to further optimize them for the product’s benefit. You may ask whether it is even fair to compare manual and automated test efforts, and that is a valid question. Often, it is not an apples-to-apples comparison. For example, the manual test focus may have been on the UI-intensive portions of the application, which tend to be buggier; as a result, the test team may have reported many valid defects through its manual efforts. Test automation’s focus, on the other hand, may have been API-intensive areas, application performance, database-level functionality, and other areas where defects are harder to find. It could also be that APIs have been re-used from earlier releases and are so stable that the tester has not reported many bugs through test automation. Does this mean the automation was not effective?

While this argument holds weight, and the comparison of manual and automated test results is not always apples to apples, it does not mean they cannot be compared. They need to be compared on the right grounds to help the team arrive at the right balance between the two approaches, and to decide what, when, how, and how much to automate. So how can they be objectively compared? Here are some tips:

  1. Pick areas that are currently automated, see what bugs are reported through them, and have some of those areas tested manually in parallel. Compare the results to see if the automation scenarios need any modification. This can also help fold the manual tester’s instinctive creativity into the automated suite.
  2. Track the percentage of valid defects reported by the manual and automated test efforts. This will help you understand whether automation has a lot of test, data, or configuration issues that need to be attended to and fixed (a simple way to compute this is sketched after this list).
  3. After an exploratory round of testing or a bug bash, compare the manual test results with the corresponding automation that may exist in those areas to see how the test automation suite can be further strengthened.
  4. Look at overall numbers, to a certain extent. If manual tests are yielding a large number of bugs but hardly any are coming in through automation, that may warrant further analysis. It may turn out to be a false alarm, but it is at least worth a quick look.
  5. Compare the kinds of defects reported by the two approaches. If manual testing is reporting more UI and functional defects in a certain area while automation is not catching them, that is a good checkpoint for seeing how the automation can be enhanced.
  6. Use automation to evaluate the manual testing effort’s efficiency as well. The comparison is not a one-sided story. Sometimes the manual testers on the team may not be very efficient and may be careless in their test effort. Automation, once proven for its reliability and consistency of results, is a great way to minimize human error. So if, for instance, a certain regression has been automated and a tester also happens to exercise that area manually, a manager can compare the results to understand the tester’s efficiency and areas for improvement.
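Tips 2 and 5 lend themselves to a simple calculation. Below is a minimal Python sketch, assuming a hypothetical export of defect records with `source`, `status`, and `category` fields (the field names and sample data are illustrative, not tied to any particular defect tracker): it computes the percentage of valid defects per test approach and tallies the categories of the valid defects, the kind of numbers a team could fold into its regular metrics.

```python
from collections import Counter

# Hypothetical defect records exported from a tracker; fields are illustrative.
defects = [
    {"source": "manual",    "status": "valid",   "category": "UI"},
    {"source": "manual",    "status": "valid",   "category": "functional"},
    {"source": "manual",    "status": "invalid", "category": "UI"},
    {"source": "automated", "status": "valid",   "category": "API"},
    {"source": "automated", "status": "invalid", "category": "test-data"},
    {"source": "automated", "status": "invalid", "category": "configuration"},
]

for source in ("manual", "automated"):
    reported = [d for d in defects if d["source"] == source]
    valid = [d for d in reported if d["status"] == "valid"]
    # Percentage of reported defects that turned out to be valid (tip 2).
    valid_pct = 100.0 * len(valid) / len(reported) if reported else 0.0
    # Breakdown of valid defects by category (tip 5).
    categories = Counter(d["category"] for d in valid)
    print(f"{source}: {len(reported)} reported, {valid_pct:.0f}% valid, "
          f"valid defects by category: {dict(categories)}")
```

A low valid percentage on the automated side is the signal tip 2 describes: the suite may be reporting test, data, or configuration problems rather than product defects.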

The above is certainly not an exhaustive list. However, once this mindset is established, the team will begin to appreciate the need for such an objective comparison between the manual and automated test approaches. It becomes a new category that can be incorporated into the metrics the team already uses, helping the two test approaches complement each other.

About the author:

Rajini Padmanaban is a Sr. Director of Testing Engagements at QA InfoTech and an active software testing evangelist. She has more than twelve years of professional experience, primarily in the software quality assurance space.
