
Is It Fair to Compare Manual and Automated Test Results?

The article below was originally published by Rajini Padmanaban on the fantastic QAInfoTech blog.

Manual and automated testing have long co-existed, and both contribute significantly to successfully signing off on a product’s quality. Over the years, test teams have debated the value of one over the other, how testers need to groom their test automation skills, and so on. A tester who truly understands software quality and its intricacies appreciates the value of both approaches and knows that neither should be under- or over-estimated relative to the other. With more organizations embracing Agile development, test automation’s scope has certainly grown: there is an increasing push for new areas to be automated, new test engineers to be trained in automation, and new tools to support the process. Test managers and leads are right to be cautious, because an unplanned test automation effort can quickly become a mammoth suite that is overwhelming to manage and maintain.

This is where an objective comparison between manual and automated test efforts becomes important: it helps determine their true value and drives any adjustments needed to further optimize them for the product’s benefit. You may ask whether it is even fair to compare manual and automated test efforts, and that is a valid question. Often it is not an apples-to-apples comparison. For example, the manual test focus may have been on the UI-intensive portions of the application, which tend to be buggier; as a result, the test team may have reported many valid defects through its manual efforts. Test automation’s focus, on the other hand, may have been API-intensive areas, the application’s performance, database-level functionality, and so on, where defects are harder to find. It could also be that the APIs were re-used from earlier releases and are so stable that the tester has not reported many bugs through automation. Does this mean the automation was not effective?

While this argument holds weight, and the comparison of manual and automated test results is not always apples to apples, it does not mean they cannot be compared. They need to be compared on the right grounds, to help the team arrive at the right balance between the two approaches and decide what, when, how, and how much to automate. So how can they be objectively compared? Here are some tips:

  1. Pick areas that are currently automated, review the bugs they report, and have some of those areas tested manually in parallel. Compare the results to see whether the automation scenarios need modification. This can also fold the manual tester’s instinctive creativity back into the automated suite.
  2. Track the percentage of valid defects reported by the manual and automated test efforts. This helps reveal whether the automation has test, data, or configuration issues that need to be attended to and fixed.
  3. After an exploratory round of testing or a bug bash, compare the manual test results with any corresponding automation in those areas to see how the automated suite can be further strengthened.
  4. Look at overall numbers, to a point. If manual tests are yielding many bugs while hardly any come in through automation, that is a signal for further analysis. It may turn out to be a false alarm, but it is at least worth a quick look.
  5. Compare the kinds of defects reported by the two approaches. If manual testing is finding UI and functional defects in a certain area that the automation is not catching, that is a good checkpoint for seeing how the automation can be enhanced.
  6. Use automation to evaluate the manual testing effort’s efficiency as well; the comparison is not a one-sided story. Manual testers can sometimes be inefficient or careless in their test effort, and automation, once vetted for reliability and consistency in its results, is a great way to minimize human error. If a certain regression has been automated and a tester also covers that area manually, a manager can compare the results to gauge the tester’s efficiency and areas for improvement.
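Several of these tips (2, 4, and 5) boil down to simple metrics over the team’s defect log. As a minimal sketch, assuming a hypothetical list of defect records tagged with the approach that found them, their category, and whether they were judged valid:

```python
from collections import Counter

# Hypothetical defect records: (approach, category, valid?) tuples.
defects = [
    ("manual", "UI", True),
    ("manual", "functional", True),
    ("manual", "UI", False),              # invalid: tester error
    ("automated", "API", True),
    ("automated", "performance", False),  # invalid: flaky test data
]

def valid_defect_rate(records, approach):
    """Tip 2: percentage of reported defects that are valid product bugs."""
    mine = [valid for a, _, valid in records if a == approach]
    return 100.0 * sum(mine) / len(mine) if mine else 0.0

def category_breakdown(records, approach):
    """Tip 5: which kinds of valid defects each approach is catching."""
    return Counter(cat for a, cat, valid in records if a == approach and valid)

print(round(valid_defect_rate(defects, "manual"), 1))   # 66.7 for this sample
print(category_breakdown(defects, "automated"))
```

A low valid-defect rate on the automated side points at test, data, or configuration problems (tip 2), while a category breakdown that differs sharply between the two approaches is the checkpoint tip 5 describes.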

The above is certainly not an exhaustive list. However, once this mindset is established, the team will begin to appreciate the need for an objective comparison between the manual and automated test approaches. It can become a new category in the metrics they track, keeping pace with current needs and helping the two approaches complement each other.

About the author:

Rajini Padmanaban is a Sr. Director of Testing Engagements at QA InfoTech and an active software testing evangelist. She has more than twelve years of professional experience, primarily in the software quality assurance space.


More Stories By Skytap Blog

Author: Noel Wurst is the managing content editor at Skytap. Skytap provides SaaS-based dev/test environments to the enterprise; its solution removes the inefficiencies and constraints that companies face in their software development lifecycle, so customers release better software faster. In this blog, we publish engaging, thought-provoking stories about agile enterprise applications and cloud-based development and testing.
