Here are the six levels of testing so you can avoid getting caught flat-footed in the AI testing revolution.
Level 0: No autonomy
Congratulations! You write code that tests the application, and you’re happy because you can run the same tests again and again on every release. This is perfect, because now you can concentrate on the most important aspect of testing: thinking.
But nobody’s helping you write that automation code. And writing the code itself is repetitive. Adding any field to a form means adding a test. Adding any form to a page means adding a test that checks all the fields. And adding any page means checking all the components and forms in that page.
And the more tests you have, the more will fail whenever developers make a sweeping change to the application. So you must check every failed test to verify whether you have a real bug or just a new baseline.
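The tedium is easy to see in a sketch. Everything below is hypothetical — the field names and page data stand in for what a real test would read from the live page through a browser driver:

```python
# Level 0 testing: every field gets its own hand-written check,
# and every new field means another line like these.

def check_login_form(page):
    """Return a list of failures, one hand-written check per field."""
    failures = []
    if page.get("username_label") != "Username":
        failures.append("username label changed")
    if page.get("password_label") != "Password":
        failures.append("password label changed")
    if page.get("submit_text") != "Log in":
        failures.append("submit button text changed")
    return failures

# A developer renamed the button, so this run reports one failure —
# and a human still has to decide whether that failure is a bug.
page = {"username_label": "Username",
        "password_label": "Password",
        "submit_text": "Sign in"}
```

Multiply those three lines of checks by every field, form, and page in the application, and the maintenance burden becomes clear.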
Level 1: Drive assistance
The autonomous vehicle serves as a good metaphor for this level of testing. The better the vision system an autonomous car has, the more autonomous it is. Similarly, the better an AI system can see your app, the more autonomous your testing will be.
AI should be able to see not only a snapshot of the Document Object Model (DOM) of the page, but a visual picture of it as well. The DOM is only half the picture—the fact that there is an element with text on it does not mean that the user can see it. Other elements may obscure the page, or the page might not be in the correct position. Visuals let you concentrate on the data the user sees.
Once your testing framework can see the page and look at it holistically, whether through the DOM or a screenshot, it can help write checks that you would otherwise write manually.
If you take a screenshot of the page, you can check all form fields and texts in it in one fell swoop. Instead of writing code that checks each field, you test all of them at once against the baseline of your previous test.
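A minimal sketch of the one-sweep idea, using a hypothetical page snapshot represented as a plain dictionary (a real tool would capture a screenshot or a DOM snapshot instead):

```python
def diff_against_baseline(baseline, current):
    """One check covers every field: anything that differs from the
    approved baseline is reported, with no per-field test code."""
    keys = set(baseline) | set(current)
    return {k: (baseline.get(k), current.get(k))
            for k in keys
            if baseline.get(k) != current.get(k)}

# Illustrative snapshots; a real baseline comes from a previously
# approved test run.
baseline = {"title": "Checkout", "total_label": "Total", "pay_text": "Pay now"}
current = dict(baseline, pay_text="Pay")
```

Instead of one assertion per field, the diff reports every deviation at once, and only the deviations need a human decision.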
Figure 1: Checking the whole form in one fell swoop.
For this level of testing to work, your testing tools need AI algorithms that can determine which changes are not really changes at all, and which are real.
AI technology today can assist you in writing test code by writing your checks. You’re still driving the tests, but AI can do some checks automatically.
Also, AI can check that a test passes. But when it fails, AI still must notify you so you can check whether the failure is real or happened because of a correct change in the software. You must either confirm that the change is good or reject it because it’s a bug.
And having AI “see” the application means it checks the visual aspects of the application against a baseline—something that until now could only be done through manual testing. But you still need to verify every change.
Level 2: Partial automation
With Level 1 autonomy, the tester can avoid the tedious aspect of writing checks for all fields on a page by having AI test against a baseline. And AI can test the visual aspects of the page as well.
But checking every test failure to verify whether it’s a “good” failure or a bug can be tedious, especially if one change is reflected in many test failures. At Level 2, your AI needs to describe each difference in terms that a user of the application would understand. Thus, a Level 2 AI should be able to group changes from lots of pages, since it can recognize that, semantically, those are the same change.
A Level 2 AI can group these changes, tell the human when the changes are the same, and ask whether to confirm or reject all the changes as a group.
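A toy sketch of that grouping step. Here “same change” is approximated by an exact match on the (field, old value, new value) triple; a real Level 2 AI would use a learned semantic similarity measure, and the page and field names below are invented:

```python
from collections import defaultdict

def group_changes(changes):
    """Bucket per-page diffs that describe the same underlying change,
    so a human can confirm or reject each group once."""
    groups = defaultdict(list)
    for page, field, old, new in changes:
        groups[(field, old, new)].append(page)
    return dict(groups)

# A logo swap that shows up on two pages collapses into one decision.
changes = [
    ("home",     "header_logo", "logo_v1.png", "logo_v2.png"),
    ("checkout", "header_logo", "logo_v1.png", "logo_v2.png"),
    ("profile",  "bio_label",   "About",       "Biography"),
]
```

Three raw failures become two decisions: approve the new logo everywhere, and rule separately on the renamed label.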
Figure 2: Grouping similar changes with AI-powered visual test automation.
In sum, Level 2 AI helps you check changes against the baseline and turns what was a tedious effort into a simple one.
Level 3: Conditional automation
In Level 2, a human still must vet any failure or change detected in the software.
A Level 2 AI can analyze the change but can’t determine whether a page is correct or not just by looking at it. It needs a baseline against which to compare. But a Level 3 AI can do that and more by applying machine-learning techniques to the page.
For example, a Level 3 AI can examine the visual aspects of a page and decide if the design is off based on standard rules of design, including alignment, whitespace use, color and font usage, and layout.
Figure 3: A Level 3 AI will autonomously determine that this is a bug.
And what about the data aspects? A Level 3 AI can check the data and learn, for example, that values in a given field have always fallen within a specific range; that this field must hold an email address; and that another must be the sum of the fields above it. It knows that on a specific page, the table must be sorted by a given column.
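A rough sketch of both kinds of rules. The alignment tolerance, field names, and data rules below are illustrative assumptions, not drawn from any real tool — a Level 3 AI would learn such rules rather than have them hand-written:

```python
import re

def check_alignment(elements, tolerance=2):
    """Flag elements whose left edge drifts from the common column —
    a stand-in for one of many layout rules a design check might apply."""
    xs = sorted(e["x"] for e in elements)
    column = xs[len(xs) // 2]  # median left edge as the expected column
    return [e["id"] for e in elements if abs(e["x"] - column) > tolerance]

def check_data_rules(record):
    """Apply learned data rules: a value range, an email format,
    and a field that must equal the sum of two others."""
    failures = []
    if not 0 <= record["quantity"] <= 100:
        failures.append("quantity outside learned range")
    if not re.fullmatch(r"[^@\s]+@[^@\s]+\.[^@\s]+", record["email"]):
        failures.append("email field is not an email address")
    if record["total"] != record["price"] + record["tax"]:
        failures.append("total is not price plus tax")
    return failures
```

With rules like these, a page can pass or fail without any baseline to compare against, which is exactly what separates Level 3 from Level 2.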
Your AI can now evaluate pages without human intervention, just by understanding your design and data rules. And even if there were a change in the page, AI can understand that the page is still fine and doesn’t need to be passed to a human for review.
Because AI looks at hundreds of test results, it can see how things change over time. And by applying machine-learning techniques, the AI system can detect anomalies among those changes and submit only the anomalous ones to a human for verification.
Level 4: High automation
Until now, AI has only run checks automatically. Humans still drove the tests and clicked on the links (using automation software). Level 4 is where AI will drive the tests itself.
Because Level 4 AI can examine a page and understand it just as a human would, it understands when it’s looking at a login page versus a profile, registration or shopping cart page. And because it understands the page semantically, as a page that is part of the flow of interaction, AI can drive the tests.
While pages such as login and registration are standard, most others aren’t. But a Level 4 AI will be able to look at user interactions over time, visualizing the interactions, and understand the pages and the flow through them, even if they are pages of a type the AI system never saw.
Once AI understands the type of page, using techniques such as reinforcement learning—a type of machine learning—it can start driving tests automatically. It can write the tests, not just the checks for them.
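A toy sketch of the page-classification idea. The hand-written rules below stand in for the learned classifier the text describes, and the field names are invented:

```python
def classify_page(field_names):
    """Guess the page type from its form fields, so a test driver
    knows which interaction flow applies to the page it is looking at."""
    fields = set(field_names)
    if "card_number" in fields:
        return "checkout"
    if {"password", "confirm_password"} <= fields:
        return "registration"
    if "password" in fields:
        return "login"
    return "unknown"
```

Once the driver knows it is on a login page, it knows which flow to exercise next: fill credentials, submit, and expect a profile page — the seed of a test written, not just checked, by AI.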
Level 5: Full automation
This level is science fiction for now. At this level, the AI would be able to converse with the product manager, understand the application, and fully drive the tests by itself.
But given that no one has ever been able to understand a product manager’s description of an application, a Level 5 AI would need to be much smarter than humans.
The current state of the art
Advanced tools are currently at Level 1, and they’re progressing nicely toward Level 2 functionality. While tool vendors are working on Level 2, Level 3 AI still needs work. But it’s doable.
A Level 4 AI is far in the future, but in the next decade or so you can expect to see AI-assisted testing without the nasty side effects.
So don’t start looking for another job because you think computers are going to automate all software testing, including visual tests. Software testing is similar to driving in some aspects, but it is more complicated because these systems must understand complex human interactions. AI today doesn’t understand what it’s doing. It merely automates tasks based on lots and lots of historical data.
Your role is to be the automator, not the automated. Test and QA teams have an opportunity to leverage new automation techniques that will spur new ways of working. Once you’re freed up to focus more on how to test and ensure quality, as automation does its job, you’ll be able to create new value for the business.