Machine Learning for GUI Test Automation
September 19, 2018. Featured article by Maxim Chernyak, A1QA
Automation has become a necessity in today's agile world, where production cycles deliver more incremental updates that need to be tested quickly. Software testing engineers are looking for ways to speed up test automation intelligently, for example by applying machine learning practices. This blog post focuses on how machine learning is currently improving GUI test automation and on the challenges that still need to be solved along the way.
Although automation tools fulfil their promise to reduce the amount of manual testing, they cannot automate all testing activities. This is especially true for GUI testing, which relies heavily on human perception and involves checking multiple elements across different states, screen resolutions, and so on. To cover this diversity, test automation engineers write large chunks of code describing every aspect of the GUI elements, such as their positions, colors, and dimensions. As a result, even small GUI modifications lead to a growing number of false positives and drive the need for test refactoring.
Test automation engineers from A1QA argue that this additional manual work calls into question the very viability of automated GUI testing and demands a more efficient approach. Recent advances in machine learning (a subfield of AI) have made such an approach possible. Now, let's take a closer look at it.
How machine learning met testing automation
Automated testing can be optimized using some form of AI enabled by machine learning. Machine learning models learn from data without being given explicit rules by a programmer (or a test automation engineer, in this case). ML models either learn from a set of structured, labeled training data (supervised learning) or detect patterns in unstructured data (unsupervised learning), identifying hidden rules and dependencies that help them "make decisions" when processing similar data in the future.
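To make the supervised case concrete, here is a minimal sketch: a 1-nearest-neighbor classifier that "learns" to label GUI elements from a handful of labeled examples. The feature choice (width, height) and all training data are invented for illustration; real systems would use far richer features.

```python
# Minimal sketch of supervised learning: a 1-nearest-neighbor classifier
# that learns to label GUI elements from labeled (width, height) examples.
# All values below are invented for illustration.
import math

training_data = [
    ((120, 40), "button"),
    ((100, 35), "button"),
    ((800, 90), "banner"),
    ((760, 100), "banner"),
]

def classify(features):
    """Return the label of the closest labeled training example."""
    _, label = min(training_data,
                   key=lambda item: math.dist(item[0], features))
    return label

print(classify((110, 38)))  # → button
```

The model was never told "buttons are small rectangles"; it inferred that rule from the labeled examples, which is exactly the property that makes ML attractive for testing.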
Expectations were high that machine learning techniques could greatly improve automated testing. If programs could teach themselves to do what they're programmed to do and get better at it over time, then why not apply the same to testing processes? However, the reality proved to be more complicated. To understand why, let's look at a case where automated testing was enriched with machine learning.
ML for GUI automated testing: use cases
A successful combination of machine learning and automated GUI testing is the use of computer vision to recognize page elements. Researchers from MIT developed a tool called Sikuli Test, which lets testing engineers write visual scripts based on computer vision. The script acts like a robot capable of seeing: it can "look" at a button on the screen, click it, and check the result of that action.
For example, the tool can identify the following events (or their absence):
* Appearance
* Disappearance
* Replacement
* Scrolling/Movement
Source: http://groups.csail.mit.edu/up/projects/sikuli/sikuli-chi2010.pdf
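The idea behind detecting such events can be sketched in a few lines. The following toy is illustrative only and is not the actual Sikuli API: screenshots are modeled as 2D lists of pixel values, and an appearance or disappearance event is detected by searching for a pattern in the screenshots taken before and after an action. Sikuli itself works on real screen captures with fuzzy template matching.

```python
# Illustrative sketch (not the real Sikuli API): classify appearance and
# disappearance events by searching for a pixel pattern in two screenshots.

def contains(screen, pattern):
    """Return True if `pattern` appears anywhere in `screen` (exact match)."""
    ph, pw = len(pattern), len(pattern[0])
    sh, sw = len(screen), len(screen[0])
    for y in range(sh - ph + 1):
        for x in range(sw - pw + 1):
            if all(screen[y + dy][x + dx] == pattern[dy][dx]
                   for dy in range(ph) for dx in range(pw)):
                return True
    return False

def detect_event(before, after, pattern):
    """Classify what happened to `pattern` between two screenshots."""
    was, now = contains(before, pattern), contains(after, pattern)
    if not was and now:
        return "appearance"
    if was and not now:
        return "disappearance"
    return "no change"

blank = [[0, 0, 0]]
shown = [[0, 1, 1]]
print(detect_event(blank, shown, [[1, 1]]))  # → appearance
```

Replacement and movement can be handled the same way by comparing the positions of two patterns rather than mere presence.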
Another research group, in collaboration with eBay, implemented a convolutional neural network (CNN, a type of supervised ML model especially successful at image analysis tasks) to detect defects on web pages.
Source: https://www.ebayinc.com/stories/blogs/tech/gui-testing-powered-by-deep-learning/
This approach proved to be successful, reducing the time required for GUI testing and revealing some defects that "would have been practically impossible to capture by any other means of manual or automated testing."
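At the heart of a CNN is the convolution operation: sliding a small filter over an image to produce a feature map. The pure-Python toy below uses a hand-picked filter rather than a learned one, but it shows how a single vertical-edge filter responds to a sharp visual boundary, which is the kind of low-level feature early CNN layers learn to detect before deeper layers combine them into "defect/no defect" judgments.

```python
# Sketch of the core CNN building block: 2D convolution of an image with a
# filter. In a trained CNN the filter values are learned from labeled
# screenshots; here they are hand-picked to highlight vertical edges.

def convolve(image, kernel):
    """Valid 2D convolution (no padding, stride 1) over 2D lists."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(len(image) - kh + 1):
        row = []
        for x in range(len(image[0]) - kw + 1):
            row.append(sum(image[y + dy][x + dx] * kernel[dy][dx]
                           for dy in range(kh) for dx in range(kw)))
        out.append(row)
    return out

# Responds strongly where pixel values jump from left to right.
edge_filter = [[-1, 1],
               [-1, 1]]

image = [[0, 0, 9, 9],
         [0, 0, 9, 9],
         [0, 0, 9, 9]]

print(convolve(image, edge_filter))  # → [[0, 18, 0], [0, 18, 0]]
```

The large values in the middle column mark exactly where the visual boundary sits; a real CNN stacks many such learned filters, followed by nonlinearities and pooling.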
Challenges of applying ML in GUI automated testing
Gerd Weishaar, Chief Product Officer at Tricentis, who has experimented with incorporating machines into continuous GUI testing, has pointed out that it's possible to develop algorithms that recognize changed controls more accurately than humans do. However, developing such algorithms takes time, while the industry is anxious to accelerate automated testing. Even more important, companies are in a way trying to reinvent the wheel here: testing is the process of verifying results, and before you can verify anything, you first need to generate test data.
Getting back to our use case, the data that testers want and need resembles the way human beings would test or interact with an application, because automated tests must be prepared for the worst-case scenario of erratic human behavior. To take one step forward, companies need to take two steps back and think about how they can generate or acquire the test data their automated tests will verify. It's no surprise that large technology companies such as Google have launched their own initiatives to generate such large quantities of test data. Rather than creating their own data, companies can turn to these big data and deep learning programs and automate their test cases.
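One simple way to approximate erratic human behavior is to sample random GUI events, which is the idea behind "monkey testing" tools such as Android's UI/Application Exerciser Monkey. The sketch below is hypothetical: the event vocabulary, screen size, and function names are invented for illustration.

```python
# Hypothetical sketch of monkey-style test-data generation: sample random
# GUI events (with coordinates) to mimic erratic user behavior. Seeding the
# generator makes a failing session reproducible.
import random

EVENTS = ["click", "double_click", "drag", "type", "scroll", "back"]

def random_session(n_events, width=1920, height=1080, seed=None):
    """Produce a reproducible list of random GUI events with coordinates."""
    rng = random.Random(seed)
    return [{"event": rng.choice(EVENTS),
             "x": rng.randrange(width),
             "y": rng.randrange(height)}
            for _ in range(n_events)]

for step in random_session(3, seed=42):
    print(step)
```

The seed matters in practice: when a random session exposes a defect, the same seed regenerates the exact event sequence for debugging.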
Additionally, the field of machine learning itself is improving quickly. AutoML tools provide custom ML solutions for non-experts, who can use them to automate parts of the data science workflow. Here again, large technology companies such as Microsoft and Google are leading the way.
Machine learning is not a fully automated process
By itself, the idea of using machine learning for automated GUI testing is somewhat self-contradictory: machine learning models are generally regarded as semi-automatic, requiring intelligent decisions from humans. Furthermore, machine learning models are easily confused by noisy data, which makes them hard to apply to automated GUI testing.
Applying machine learning concepts to automated testing requires a lot of knowledge and experience, but the good news is that progress is being made by large technology companies that are generating large quantities of test data for artificial intelligence. For non-experts, AutoML solutions provide a great alternative for automating part of the machine learning process.
Information about the author
Maxim Chernyak is Head of the Test Automation and Performance Testing Lab at A1QA and an expert in test automation methodologies and tools for functional and non-functional testing. He is accountable for educating QA teams and driving their adoption of state-of-the-art quality engineering practices.