
The Economics of Software Testing: The Law of Diminishing Returns

 

What is the law of diminishing returns?

"If one input in the production of a commodity is increased while all other inputs are held fixed, a point will eventually be reached at which additions of the input yield progressively smaller, or diminishing, increases in output." (Source: https://www.britannica.com/topic/diminishing-returns)

                    

How does the law of diminishing returns apply to testing?

This picture below shows how testing does NOT work.



This is a more accurate portrayal.


At the start of your testing period, increasing your testing effort increases the value you receive from testing (say, from Point A to Point B). Later in the project, the same increase in testing effort results in a smaller increase in value than it did earlier on; this is shown in the picture above from Point C to Point D.
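If you like to see the idea as a rough model, here is a minimal Python sketch. The logarithmic value curve and the numbers are purely illustrative assumptions (real projects won't follow a neat formula); the point is only that each extra block of effort adds less value than the one before it.

import math

def testing_value(effort_hours):
    """Illustrative model: cumulative value grows with effort,
    but each extra hour adds less than the one before
    (modelled here with a logarithmic curve, an assumption)."""
    return 100 * math.log(1 + effort_hours)

previous = 0.0
for hours in (10, 20, 30, 40):
    value = testing_value(hours)
    print(f"{hours:>3}h of testing -> total value {value:6.1f}, "
          f"gain from the last 10h: {value - previous:5.1f}")
    previous = value

Running this, the gain from the first 10 hours is several times the gain from the last 10 hours, which is the same story as Point A to Point B versus Point C to Point D above.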


Being aware of the law of diminishing returns will help you know when you can STOP testing.

It's important to be aware that much of the value from testing (and many of the bugs that will be found) will be uncovered early on. That's not to say there is no value to be gained from testing later on (say, from Point C to Point D); it's just that the rate of pay-off changes.

Edit: Note that this assumes you are taking a Risk Based Approach to testing and are testing the most risky areas first.
