Test efficacy is the idea that an activity can produce the expected results; test inefficacy is the idea that an activity done poorly produces a worse result than if you had taken no action at all. This note describes how we should evaluate our testing from both of those perspectives.
Test Efficacy
How effective, or efficacious, is your testing? What defects are you finding, and what are you allowing to escape? You should always evaluate the effectiveness of your testing, not with an eye for blame but with an eye for improvement. And remember: the tester didn't insert the bug; you are simply tasked with uncovering it to assure the quality of the product.
Your testing is layered; it moves from small and simple to large and complex. When you evaluate your effectiveness, consider all the methods of test you perform and how effective each one is. You may even find that some methods of test are not worth your time and need to be pared down.
For example: you have bread-and-butter testing that varies with each iteration you run. You certainly have to do novel testing on your new work product, or craft regression tests for your rework, with an eye toward validating that the development effort met its objectives – confirmation tests. What else are you testing? What test vectors are you engaging? If all you do is scripted, GUI, or CRUD testing, are you actually testing the solution? Does that level of testing, albeit required, meet the standard of effectiveness? I am of course speaking to the Quality Model attributes, with an additional category of Safety.
The Quality Model has 31 attributes, each with its own test approaches. They include UX, Performance, Security, Maintainability, Portability, Reliability, and Compatibility. Each of these has sub-attributes, but more importantly, each has different methods of testing that will allow you to uncover discrepancies in those attributes. If you have not already inserted those methods into your iteration testing, doing so is almost guaranteed to find a significant set of bugs – and, more importantly, to improve the quality of your solution.
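One way to make that audit concrete is to check each iteration's planned test methods against the attribute list. The sketch below is illustrative only: the attribute-to-method mapping is an assumption (a small sample, not the full set of attributes or their standard test approaches).

```python
# Illustrative sketch: auditing an iteration's test plan against
# quality-model attributes. The attribute/method pairings here are
# example assumptions, not a standard or a complete list.

QUALITY_ATTRIBUTE_METHODS = {
    "UX":              ["heuristic review", "task-based UX testing"],
    "Performance":     ["load testing", "response-time profiling"],
    "Security":        ["fuzzing", "authz/authn abuse cases"],
    "Maintainability": ["static analysis", "code-review checklists"],
    "Portability":     ["cross-platform install tests"],
    "Reliability":     ["fault injection", "soak testing"],
    "Compatibility":   ["interface/interoperability tests"],
    "Safety":          ["hazard-scenario tests"],  # the note's added category
}

def coverage_gaps(planned_methods):
    """Return attributes with no planned test method this iteration."""
    planned = set(planned_methods)
    return sorted(
        attr for attr, methods in QUALITY_ATTRIBUTE_METHODS.items()
        if not planned.intersection(methods)
    )

# Example: a plan that only covers GUI-style and load testing
plan = ["task-based UX testing", "load testing"]
print(coverage_gaps(plan))
# -> ['Compatibility', 'Maintainability', 'Portability',
#     'Reliability', 'Safety', 'Security']
```

Even a checklist this simple makes the gap visible: a plan that looks busy can still leave most quality attributes untested.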
In a slight variant on test efficacy, there is extension testing. When you have a complex problem, how are you setting up your test? If you have interactions with other systems, how are you modeling those interactions? Are you getting samples from the originating system so that you understand the variety of those messages, or are you assuming the messages will be compliant with some specification? If you are going to conduct a performance effort, what attributes of the problem are you going to exploit? If you are going to test allergies, are you testing non-med allergies – like an allergy to stainless steel or glass – or are you testing a couple of med allergies from the top-10 list and some random meds? Then, are you running the next test: how does DUR perform given your newly added allergies? I am not saying to test only med allergies – food, latex, and dye allergies are also significant – but what is the clinical impact of testing an allergy to glass? That test isn't effective or efficacious.
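As a sketch of that selection step, the code below filters allergy test vectors by clinical impact before running the follow-on DUR check. Everything here is a hypothetical illustration: the allergen list, the `clinical_impact` flags, and the `triggers_dur_alert` stub are assumptions, not a real DUR engine or clinical data.

```python
# Hypothetical sketch: prioritizing allergy test vectors by clinical
# impact, then running a follow-on drug-utilization-review (DUR) check.
# The allergen data and the DUR stub are illustrative, not real.

ALLERGY_VECTORS = [
    {"allergen": "penicillin", "kind": "med",     "clinical_impact": True},
    {"allergen": "latex",      "kind": "non-med", "clinical_impact": True},
    {"allergen": "peanut",     "kind": "food",    "clinical_impact": True},
    {"allergen": "red dye",    "kind": "dye",     "clinical_impact": True},
    {"allergen": "glass",      "kind": "non-med", "clinical_impact": False},
]

def effective_vectors(vectors):
    """Keep only vectors whose result could change clinical behavior."""
    return [v for v in vectors if v["clinical_impact"]]

def triggers_dur_alert(allergen, ordered_med):
    """Stub DUR check: alert when an ordered med matches a med allergy."""
    return allergen == ordered_med

selected = effective_vectors(ALLERGY_VECTORS)
assert all(v["allergen"] != "glass" for v in selected)

# The "next test": does DUR actually fire against the added allergy?
assert triggers_dur_alert("penicillin", "penicillin")
assert not triggers_dur_alert("latex", "penicillin")
```

The point is the shape of the effort, not the code: pick vectors whose outcome matters clinically, then chain the downstream check rather than stopping at data entry.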
Your mission as testers is to ensure that the product is fit for purpose and fit for use. Having a large toolkit to draw from, and knowing when to use specific tools, will dramatically improve your outcomes – and dramatically improve the efficacy of your tests.
Inefficacious Testing
Again, testing done in such a way that the results are worse than no results at all. How can that possibly be? I had an experience about a decade ago when the contract house I was working for was asked to update an eCommerce site for a very famous company. The coding was done stateside and the testing overseas. My boss asked me to look at a statement from the Test Lead indicating that the product was defect-free. Five or six testers for a couple of months – and the product was defect-free… I asked him to show me the defects that had been written, and there were a couple – a couple. Five developers over six months, and they had only created a couple of bugs (some of which weren't even bugs)? I sat down at his station and tried a few quick workflow tests. I found so many bugs that within five minutes he had a page full of notes. I wrote up the notes, and he asked the test team to check those areas again.
Four weeks later they came back with a vague 'cache' defect. My Director asked me to spend a day and see what I could find. In that day I found 16 critical and high defects, 14 of which the customer needed fixed before go-live. That wasn't a complete test set, just what I could do in one day. Their testing certainly wasn't effective, but more importantly, they had given their 'assurance' that the 'quality' of the product was impeccable, "defect free" – when in fact it would have completely shut down that company's business had it been deployed. I am not that good a tester, so if I could find and document that many critical and high issues in such a short time, the software was not deployable. Their testing was inefficacious, ineffective, and potentially exposed the company to a malpractice suit. If they had done no testing at all, at least the customer would have known they needed more than just a UAT battery before accepting the solution.
Test adulteration occurs for multiple reasons: lack of training and experience in the test team, lack of understanding of the solution, bad process, lack of commitment and ethics, and naivety about the nature of the test problem. In an inefficacious test process you aren't even aware that the testing is ineffective until the feature release hits the proverbial fan. If your release reaches the customer and is completely rejected, then either you weren't testing, or you didn't understand the usage and purpose of the solution.
In Health IT we must be vigilant both that our efficacy is as complete as possible and that we recognize when our efforts are ineffective, or when the team is conducting inefficacious testing. When we are not doing the testing we need to do, it is our responsibility to call out what is ineffective and, hopefully, how it might be resolved. If you don't know how to solve the problem, there are plenty of people capable of consulting on it. Inefficacious testing is a different beast altogether, as the discovery that the effort was adulterated often comes at the end of the mission – so all that remains is to pick up the pieces. We must recognize when this has happened, both as leaders and as individual contributors, and address the failures appropriately. As with ineffective testing, inefficacious testing requires both introspection and retrospection, as well as a significant portion of humble pie.
My ask is that you be vigilant, be cognizant, and constantly seek to improve your craft – and that you apply the standard adage: if you see something, say something.
As always, watch out for brown M&Ms.