“Your job is to automate yourself out of a job.”
“100% automation means we will be able to get rid of our testers!”
“In order to do continuous deployment we need to have 100% of our tests automated!”
Do any of those quotes sound familiar? Maybe you’ve even caught yourself saying something similar? While these aren’t direct quotes, I have heard variations on them for years, and I believe they are a symptom of an underlying problem. Test automation is being sold as something it was never intended to be and is (unsurprisingly) falling short.
Test automation was never about replacing humans.
Its goal was never to test everything, and it was certainly never intended to be the panacea for software quality problems. Test automation was a natural evolution of task automation: having computers execute repetitive, menial tasks in order to free up time for the operator to do other tasks better suited to humans. Quite frankly, humans suck at repetition. We will invariably screw up somewhere along the line, whereas computers are designed to execute a set of binary commands and return the same value every… single… time.
Automation (whether test or task) is only as good as the instructions provided.
It cannot learn; it cannot adapt; it cannot add new tests; and it cannot interpret the output of those tests without explicitly being told what counts as a pass or a fail (oftentimes via an oracle). Maybe AI will someday reach the point where automation can do all of those things; but even then, it will not be able to think creatively about the problem. That is very much a human trait, and one that will likely not be replicated by computers in any of our lifetimes.
I’d like to propose that we look at test automation through a different lens and start thinking of it as tester augmentation. What we call it is irrelevant (I’d just as soon keep calling it “automation”). What is important is that we understand the purpose of tester augmentation.
What can well-designed tester augmentation do?
Remove the need to execute scripted test cases
Great examples of tester augmentation are automated test cases that need to be executed repetitively and whose pass/fail status can be determined by a simple oracle. They would also be executed at the right level (unit vs. component, component vs. API, API vs. UI, etc.).
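To make that concrete, here is a minimal sketch of the kind of check that suits automation well: repetitive, deterministic, and judged by a trivial oracle. The calculate_discount function and its business rule are hypothetical, and pytest is just one convenient runner.

```python
# A minimal sketch of a test suited to automation: repetitive, deterministic,
# and judged by a simple oracle. calculate_discount and its rule are
# hypothetical, used here purely for illustration.
import pytest


def calculate_discount(order_total: float, is_member: bool) -> float:
    """Hypothetical business rule: members get 10% off orders over 100."""
    if is_member and order_total > 100:
        return round(order_total * 0.10, 2)
    return 0.0


@pytest.mark.parametrize(
    "order_total, is_member, expected",
    [
        (150.00, True, 15.00),   # member over the threshold gets the discount
        (150.00, False, 0.00),   # non-member gets nothing
        (99.99, True, 0.00),     # member under the threshold gets nothing
    ],
)
def test_calculate_discount(order_total, is_member, expected):
    # The oracle is a plain equality check against a known expected value.
    assert calculate_discount(order_total, is_member) == expected
```

Nothing about this test requires human judgment once it is written, which is exactly why it is worth handing to a machine.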
Automate the execution of repetitive tasks such as data entry / environment setup
Data entry and environment setup are boring and can oftentimes be randomized or scripted. Making these tasks repeatable and accurate also increases the reproducibility of any tests the tester runs on top of that data, which makes the results of those tests more valuable than results with lower reproducibility.
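As a sketch of what scripted setup can look like, the snippet below generates randomized but seeded test data, so every run produces an identical dataset. The User shape, field names, and seed value are invented for illustration, not taken from any particular system.

```python
# A minimal sketch of scripted, reproducible test data setup. The fixed seed
# makes the "random" data identical on every run, so any test built on it can
# be reproduced exactly. The User fields are hypothetical.
import random
from dataclasses import dataclass


@dataclass
class User:
    username: str
    age: int
    country: str


def generate_test_users(count: int, seed: int = 42) -> list[User]:
    """Generate a deterministic batch of randomized test users."""
    rng = random.Random(seed)  # fixed seed => same users every run
    countries = ["US", "DE", "JP", "BR"]
    return [
        User(
            username=f"user_{i:04d}",
            age=rng.randint(18, 90),
            country=rng.choice(countries),
        )
        for i in range(count)
    ]


if __name__ == "__main__":
    # Running this twice yields byte-for-byte identical data.
    for user in generate_test_users(3):
        print(user)
```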
Record what the tester is doing to allow for easy reproduction
Having either a macro recorder or a screen recorder logging what the tester actually does will allow for more detailed reproduction steps and will likely drive up the reproducibility of the tests being executed.
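A full macro or screen recorder is obviously richer than anything that fits here, but a toy sketch of the underlying idea is a simple action log: every step the tester takes gets a timestamped entry that can be replayed or pasted straight into a bug report. The log format and method names below are assumptions, not any particular tool’s API.

```python
# A toy sketch of action recording: log each step a tester (or script)
# performs, with a timestamp, so the exact sequence can be reconstructed later.
# The JSON-lines format and field names are invented for illustration.
import json
from datetime import datetime, timezone


class ActionLog:
    def __init__(self, path: str = "session_actions.jsonl"):
        self.path = path

    def record(self, action: str, **details) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "action": action,
            **details,
        }
        with open(self.path, "a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")


# Usage: sprinkle record() calls at the points where the tester acts.
log = ActionLog()
log.record("open_page", url="https://example.test/login")
log.record("enter_text", field="username", value="user_0001")
log.record("click", element="submit_button")
```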
So, what can’t tester augmentation/test automation do?
Determine the right test coverage
Determining the right amount of test coverage is a risk analysis process. Many different aspects need to be weighed, some quantitative and others qualitative. Ultimately, it takes a human to make the final call on when coverage is “good enough”.
Determine connections between failures and causes
In many (but not all) cases, there isn’t enough information available to the system to make this determination. Certainly there are cases where autonomous systems can determine the root cause of a failure, but they are less common than you might imagine.
Explore uncharted scenarios and think on their own
Automation only knows how to do what it’s been told to do. One of the things that makes humans so unique is their ability to learn and think creatively. When using a piece of software, a human may see something and say “Huh, that looks odd…”, and then start down a completely different path than the one they had originally planned. This is something computers haven’t yet been able to replicate; they don’t have a mind of their own, and they can’t decide to go down a path they haven’t been programmed to handle.
In closing, tester augmentation does not replace the tester. The goal should never be to automate yourself out of a job. It should never replace the act of sapient testing, and it should certainly never be something that is purchased out of a box.
On the contrary, tester augmentation frees up time for creativity, learning, and exploratory testing. It augments, not replaces, the tester.