
Step by step guide: Testing without requirements / with few requirements

If you haven't already, you may one day find yourself on a project where there are no (explicit) requirements, or very few requirements to go on.

Software can be created from conversations and assumptions. People may also assume that the few requirements they do write are enough to code against or test from, but you later realise that those few lines actually pose more questions than they answer.

Your first time testing on a project without requirements can be a bit scary, but I'm hoping this step by step guide will help you.


1. Understand the difference between implicit requirements and explicit requirements

There are ALWAYS implicit requirements. But there aren't always explicit requirements.

An implicit requirement is a requirement on the SUT (Software Under Test) that is not explicitly stated anywhere (e.g. in documentation).

An explicit requirement is a requirement on the SUT that is explicitly stated.

Some potential examples:

Implicit requirements:

  • I've found that browsers like Chrome and Firefox are generally expected to be supported, even though this is often never stated.
  • Useful error messages are shown to users so they can recover from problems

Explicit requirements:

  • Only a user with certain permissions can perform certain actions (see the sketch after these lists)
  • Performance - for example, how fast should a page load?
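To make the distinction concrete, here is a minimal pytest sketch showing how an explicit requirement (permissions) and an implicit one (useful error messages) might be captured as checks. Everything in it is hypothetical: can_delete_post and friendly_error are made-up stand-ins for whatever your SUT actually does, not functions from any real product.

```python
# Minimal pytest sketch: capturing one explicit and one implicit
# requirement as checks. All names here are hypothetical stand-ins.
import pytest


def can_delete_post(role: str) -> bool:
    """Stand-in for the SUT's permission rule: only admins may delete."""
    return role == "admin"


def friendly_error(code: int) -> str:
    """Stand-in for the SUT's error handling: failures map to recoverable messages."""
    messages = {404: "We couldn't find that page. Check the link and try again."}
    return messages.get(code, "Something went wrong. Please try again.")


@pytest.mark.parametrize("role, allowed", [
    ("admin", True),    # explicit requirement: permitted roles can act
    ("viewer", False),  # ...and everyone else cannot
])
def test_only_permitted_roles_can_delete(role, allowed):
    assert can_delete_post(role) == allowed


def test_error_message_helps_the_user_recover():
    # Implicit requirement: the message suggests a next step, not just a code
    assert "try again" in friendly_error(404).lower()
```

The point isn't the implementation; it's that once an implicit expectation has been confirmed with your team, it can graduate into an explicit, checkable one.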


2. Ask for the requirements or documentation

It might be obvious, but it's good to see what written documentation you have to go on for your testing.


3. Ask if this is a reliable source of truth or for other oracles

Is this actually what you can expect? Maybe it's outdated, or people have forgotten to update it.

At this stage you can also ask if there is someone who knows what the expected behaviour should be. This person would be one of your oracles.

Here is my blog post on my most useful test heuristics.


4. Ask if there are any designs you can look at

Designs are a great way of understanding not only how the SUT is expected to behave, but also how the designer in your team interprets the implicit or sparse requirements.


5. Document your understanding of what should happen

Now, based on the information you have, document your understanding of what should happen and be prepared to back it up. (You may never have to explain yourself, but it's good to have your reasoning and evidence ready, should someone ask why you have certain expectations.)

A great way to come up with some ideas on what should happen is to use consistency heuristics. (You can learn more about these in the BBST Foundations course, which I co-instruct.)


A few examples below:

  • Supported browsers - Some browsers are only available in certain markets, e.g. QQ in China (websites like Statcounter have information on this)
  • Mandatory fields - In general, mandatory fields are marked to show the user that they are mandatory (sketched in code after this list)
  • Mobile testing - Offline support. What happens if the user goes offline? You can check comparable/similar apps to see how they handle the user's internet connection dropping
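As an illustration of the mandatory-fields heuristic above, here is a small self-contained sketch using Python's standard-library HTML parser. The signup form HTML is made up for the example; in a real project you would fetch the page from your SUT (for instance via requests or a browser driver).

```python
# Sketch of checking the "mandatory fields are marked" heuristic.
# The form HTML below is hypothetical example data.
from html.parser import HTMLParser

SIGNUP_FORM = """
<form>
  <label for="email">Email *</label><input id="email" required>
  <label for="nickname">Nickname</label><input id="nickname">
</form>
"""


class FormScanner(HTMLParser):
    """Collects which inputs are required and which labels carry a * marker."""

    def __init__(self):
        super().__init__()
        self.required_ids = []
        self.marked_labels = []
        self._current_label = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "input" and "required" in attrs:
            self.required_ids.append(attrs.get("id"))
        if tag == "label":
            self._current_label = attrs.get("for")

    def handle_data(self, data):
        if self._current_label and "*" in data:
            self.marked_labels.append(self._current_label)
        self._current_label = None


def test_mandatory_fields_are_visibly_marked():
    scanner = FormScanner()
    scanner.feed(SIGNUP_FORM)
    # Heuristic: every required input's label should carry the * marker
    assert set(scanner.required_ids) <= set(scanner.marked_labels)
```

Here the expected marker is an asterisk because that's what users tend to expect (consistency with user expectations); if comparable products in your domain mark mandatory fields differently, adjust the check accordingly.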

6. Communicate your understanding to the people in your team

  • Be sure to state that it is your understanding/interpretation of what should happen
  • Be explicit and specific
  • Use examples if and where possible

Once you have reached a shared understanding of the expected behaviour, make sure this information is recorded and shared with the whole team.


7. Hands on testing!

Whether you like to write and execute test cases or to create and use test charters - now's the time to get to know the SUT a little better!
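If you go the charter route, a charter can be as lightweight as the common "Explore <target> with <resources> to discover <information>" template. A tiny sketch, with made-up contents:

```python
# A tiny sketch of a test charter using the common
# "Explore <target> with <resources> to discover <information>" template.
# The contents below are made-up examples, not from any real project.
from dataclasses import dataclass


@dataclass
class TestCharter:
    explore: str
    with_: str          # trailing underscore: "with" is a Python keyword
    to_discover: str

    def __str__(self) -> str:
        return (f"Explore {self.explore}\n"
                f"With {self.with_}\n"
                f"To discover {self.to_discover}")


charter = TestCharter(
    explore="the signup form",
    with_="boundary-length and invalid email addresses",
    to_discover="whether error messages help the user recover",
)
print(charter)
```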



Recommended reading

https://associationforsoftwaretesting.org/bbst-black-box-software-testing-courses/  (The consistency heuristics you learn in this course are VERY useful for testing without requirements)


Comments

  1. For #2, if there is nothing else, many applications have some type of help available online. It isn't always the best, but it can at least give you an idea of what the software is supposed to be doing for the users.


