Nicky Tests Software

Friday, June 8, 2018

Introducing people to Exploratory Testing Part II

It's been over 6 months since I published my initial post, Introducing people to Exploratory Testing Part I, and I have a few updates to share.

Reception of Exploratory Testing
From testers, the reception has been mostly very positive. People on our project are eager to learn something new and to try a different approach to testing.

Here are excerpts from two pieces of feedback we have received:
1. By doing exploratory testing here, my testing isn't restricted. It invites me to investigate further and to pinpoint problems better. This leads to better and more precise defects being created as well.

Another benefit that we've seen with ET is that we can feel more confident about the testing that has been done. Usually the checks (test cases) that have been created beforehand only cover the minimum required. The checks are usually created directly from Acceptance Criteria and only cover those.
2. Exploratory testing gave me more freedom to think more, analyse more and test more. So it helped me to find issues earlier and deliver product with better quality.

Some concerns raised in the workshops (more about the workshops below) include how to hand over the test cases to another team (e.g. the System Integration Test team and the Automation Test team) and how to know whether something passed or not.

Ideally all of the testing should be handled within a scrum team, so the first concern is somewhat redundant under our current team set-up. When it was raised a few months ago (I'm writing this blog post a few months late), however, it was valid, as we were going through a transition at the time. In terms of knowing whether something passed or not, we'd like to encourage more of an "informative" mentality.

Workshops given
My colleague on the project, Maria Kedemo, has given a few workshops to teach testers on our project what Exploratory Testing is and how to do Session Based Test Management (SBTM) - the approach we are focusing on at the moment.

At the moment we don't have any workshops scheduled - but the Test Community leaders on our project plan to discuss this and figure out what information we can share that would be most beneficial to testers on our project.

Presentation to Test Managers about our project
About two weeks ago, I gave a presentation to Test Managers about how our project handles documentation without test cases - focusing in particular on how we document our testing using charters.

I started off by describing what Exploratory Testing is and is not, to (hopefully) get them to understand what I mean when I use the term in the presentation. I then talked a bit about what SBTM is and what a charter is.

I then showed an example charter I used for a past feature so they could see what one looks like (I didn't want this to be theoretical; I wanted them to see what we do and how we do it).
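For readers who haven't seen one, here's a rough sketch of what a session charter and its session notes can look like. The feature, areas and findings below are hypothetical illustrations in the SBTM style - this is not the actual charter from my presentation:

```
CHARTER: Explore the login flow with invalid credentials
         to discover how errors are reported to the user

ACCEPTANCE CRITERIA COVERED: AC-3, AC-4  (hypothetical IDs)
AREAS: Login page, error messages, account lockout
DURATION: Short (~60 min)

TEST NOTES:
- Tried a wrong password repeatedly; lockout triggered as expected
- Error message doesn't say whether username or password was wrong

BUGS:
- Lockout message shown in English on the localised site

ISSUES:
- Could not test password reset; test environment unavailable
```

The charter line states the mission; the notes, bugs and issues record what actually happened during the session. The headings roughly follow the session sheet structure from Session Based Test Management - teams adapt the details to their own context.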

After delving deeper into how we test and document things on our project, I explained exactly how we transitioned from test cases to Exploratory Testing: we started with a pilot in one team, slowly spread it to other teams, organised workshops and adapted to the testing tool we were required to use. Adapting to the tool didn't affect the Exploratory Testing itself, just how we attached our charters to it.

Some questions (that I remember) that arose after the presentation included:

  • Has the quality of the software improved since we started this approach?
  • How do you use the testing tool, CLM, to record testing?
  • How do people find it?
  • Does everyone on our project do ET?

Challenges ahead
One challenge we still have ahead of us is around expectations of what Exploratory Testing has to offer. There's still a misconception that it is a strict replacement for test cases and that you should be able to measure progress by counting the number of charters. It took some time to get rid of the pass/fail mentality that test cases encourage, but people still like to count something, so for now upper management is making do with counting charters.

Another challenge is getting people to do Exploratory Testing properly. It seems some people are using it as an excuse to skip test cases, do ad-hoc testing and not document anything whatsoever. We are working on handling this and figuring out how to give testers enough freedom to explore without creating a "big brother" situation where we constantly monitor everyone. (We would like to show trust.)

Saturday, February 10, 2018

An analogy to explain the limitations of test cases

I love analogies. They help me explain things in a way that (hopefully) others can understand and relate to.

When I was thinking about how to explain the limitations of test cases (because knowing test cases aren't all they're cracked up to be and explaining that to someone are two different things), the first thing that came to mind was job interviews. We've all been to job interviews - it's a familiar concept we can all relate to.

So here we go:

First, let's agree that both testing and job interviews are information-seeking activities.
In testing, we are trying to find out information about the Software Under Test.
In job interviews, the company is trying to find out information about the candidate (actually it goes both ways - the candidate is also seeking information about the company).

Second, let's agree that in both examples you want to make an informed decision.
In testing, you want to know if the software is ready to go live or to proceed to another testing phase (there are other missions related to testing, but let's stick to this one for the sake of the analogy).
In job interviews, the company wants to know whether they want to hire you (and the candidate wants to know: do I actually want to work here?).

Using test cases is like coming to the job interview with all of your questions pre-planned (on both sides, candidate and company).

This means both sides come to the interview with a set of questions they plan to ask and only seek the answers to THOSE questions - no follow-up or investigation based on what the other side says.

Interviewer: Do you have any experience working in an Agile environment? (planned question)
Candidate: Yes, I do. In my previous project, we were working in scrum teams but we didn't have scrum masters.

This answer could be considered strange and would warrant a follow-up. Technically it may "pass" the interviewer's definition of acceptable, but not having a scrum master could warrant investigation and further questioning to see whether they were actually working in Scrum teams.

Even worse, using metrics to dictate success could be misleading.

If 89 out of 90 test cases passed, all that tells me is that 89/90 test cases passed. I don't know whether that's great, amazing or concerning - to me, it's just a number. But to many people who look at test case metrics, that's not the case (see what I did there :D).

With these kinds of metrics alone, we don't know the quality of the test cases, the coverage, how much overlap there is, or whether the one failing test case is a blocker. A high (or low) number of test cases is also no indication of how well tested the feature is. Does having 90 test cases mean the SUT is better tested than one with only 30? Maybe having 200 test cases would have been preferable?

Back to our analogy:

Let's say the interviewer has 15 pre-planned questions for the candidate, but some of the questions are much "shallower" than others. We shouldn't put equal weight on each question.

Some examples of questions that may be asked at a job interview:

  • Why do you want to leave your current company? (interesting to know for the Interviewer)
  • Do you have any experience with XXX technology? (depending on the technology, could be a nice to have but not mandatory, you could learn this)
  • Have you worked in XX Industry before? (depending on the Interviewer, may be out of curiosity or an important question)

If you ever find yourself trying to explain the limitations of test cases to someone, try using an analogy. Use specific examples of job interviews and the questions both sides asked. Did both sides only ask the questions they planned beforehand? Was there a "right" number of questions that had to be answered correctly? Did both sides know the "right" expected answer to every question?

Monday, November 27, 2017

Reminding myself about how one's experience shapes one's point of view

As I am helping introduce Exploratory Testing to our current project, there is one thing I've had to remind myself over and over and over again.

One's experience shapes one's point of view.

When having a discussion, or trying to convince someone of my point of view, I try to consciously remember this.

If the people I am having a discussion with have a different point of view to me, that doesn't necessarily mean I should jump to the conclusion that they are wrong and I am right (or vice versa). Based on our own experiences, chances are we are both right in our own minds. Which means it's not up to me to figure out how to convince them that they are wrong and I am right.

I need to figure out how to close the information gap.

I love analogies so let me use an analogy to further explain what I mean:

Working Remotely Analogy
Let's say you want to have the option to work from home and are going to propose remote working in your team.
You have had great experiences working from home. You've been able to get more done (less disturbance), you get to enjoy having no commute and you've had access to the right tools etc. so you can still get your job done and communicate with your team.
But then one of your teammates raises a concern, because they have also worked remotely and it didn't work out so well for them. Your teammate says that they struggled to contact people who were working remotely, and that those people often had problems logging into the VPN and using the communication tool.

We're not going to get anywhere by just having one person be right and another person be wrong. Each person's experience resulted in that person's opinion. Therefore each opinion is valid.

The goal here is to first get a shared understanding of what working remotely is (should be easy enough) but, more importantly, of what working remotely requires of both the project and each individual.

Some questions that may run through my mind when discussing working remotely with the team include:

  • What are their experiences of working remotely?
  • How have these experiences affected their understanding and opinion of what working remotely is?
  • Since I can't just share my own experiences (I can't just tell them), is there any way I can get them to experience what I experienced when it comes to working remotely?

Ideas on the thought process
When it comes to introducing Exploratory Testing to our current project and helping dispel people's misconceptions about ET, I'm keeping the following in mind:

  • What do they think Exploratory Testing is?
  • How can I check that our understanding of Exploratory Testing is the same? (Before trying to advocate for the use of Exploratory Testing, it might be worthwhile to see if we are discussing the same concept or only using the same term)
  • What are their experiences with Exploratory Testing?
  • How have these experiences affected their understanding and opinion of what Exploratory Testing is?

Self reflection
  • Am I happy with my use of words to describe and explain Exploratory Testing?
  • Am I listening to understand, not to answer? (this is a very difficult one for me; I'm working on it)
  • With my use of words and how I say things, am I showing that I am open to discussion about the topic and open to questions?
  • Since I can't just share my own experiences (I can't just tell them), is there any way I can get them to experience what I experienced when it comes to Exploratory Testing?
Note: This is an effort to document my thought process when it comes to certain discussions at work; not all of these questions run through my mind in each and every conversation. But I do try to be aware of these questions and again remind myself that:

One's experience shapes one's point of view.

Thursday, October 12, 2017

Introducing people to Exploratory Testing Part I

 A bit of context

For the past 2 months(ish) I've been working on introducing Exploratory Testing to people on my project, starting with my immediate team of 3 testers, which is distributed across 3 countries. The project as a whole has many more testers than that, but the plan is to introduce this as a pilot, see what the testers think of it, and then (hopefully) introduce the approach to other teams and other features.

I'm still relatively new on this project, having been on it for 6 months, while the other testers in my team have been on it for 3-5 years. So I've made sure to ask their thoughts, listen to their ideas and address their concerns - they know things about our context (which I don't) because of their experience here.

Currently, on the project, we write test scripts, link them to test cases, then execute those test cases. I'm under the impression that a lot of people on the project have only ever used test cases to formally do testing (when they're not doing "Exploratory Testing").

Another thing to keep in mind is that this process is still ongoing (hence "Part I"), but I wanted to get these thoughts down while they are still fresh in my mind.

Sites/resources I shared

Spotify Offline: Exploratory Testing by Rikard Edgren
I asked my Test Manager (who also thinks Exploratory Testing is effective) about resources I could share, and she recommended two YouTube videos by Rikard Edgren.

James Bach has a lot of useful posts on his blog helping explain what Exploratory Testing is.
Few examples:
Exploratory Testing 3.0
What is Exploratory Testing

I also shared a post from Michael Bolton's series - what Exploratory Testing is not

Lastly, I decided that Session Based Test Management (SBTM) would be a great way to help us structure our Exploratory Testing, so I shared some resources around that, including this PowerPoint presentation by Anders Claesson.

Managing others' expectations

I've noticed that a lot of people on this project have a very different understanding (to me) of what Exploratory Testing is. Based on what people on this project say, it seems that they think Exploratory Testing and Ad hoc testing are the same thing.

Since initially introducing the idea to the other two testers I work with, I'd say I've been met with a cautious reception. Test cases are seen by one of them as proper testing, and Exploratory Testing is not - I'm still working on breaking that misconception. Aside from that, it does seem to be a welcome idea: you get to see results faster and can react to what you find as you test.

In terms of time estimates and how this affects our team meeting its goals, I've made sure to communicate to the team that this is a new way of working which we need to learn, so any time savings may not be seen straight away.

Lastly, there is the idea of coverage - to address this, I've decided to specifically mention, at the start of each charter, which Acceptance Criteria are covered by it. People on this project like reports and seeing the number of passed test cases, etc. - I'm still learning how to deal with that mindset and any obstacles which arise there.

Managing my own expectations

This has been tough. It's been a while since I've been in a work environment with people who are avid fans of test cases. I don't think test cases offer no value at all, but I think people overestimate the value they provide. Test cases can give people a false sense of security about the state of the product when they see that 95% of their test cases have passed, but they can't properly attach meaning to how a 95% pass rate affects the customer. It's just a number.

I've also been trying to manage my expectations around other people's understanding of good testing versus my own. I constantly remind myself that their understanding is based on their own experiences. So neither of us is necessarily wrong - we are both right in our own minds.

My goal is to effectively show people on my project another way of doing testing. Then they can make more informed decisions in the future and choose which approach is best for their context.

Moving Forward

I'm hoping to sit with the tester who'll soon be on-site and pair test with them as we do Exploratory Testing. I should also organise a pair testing session with the tester who'll still be offsite. Once we've done this, I'll seek more feedback on this approach and see what they like and what they are concerned about (in the context of this project). We're also working on a Low Tech Dashboard to help us communicate the testing status for our features and help others attach meaning to what we present. 

In time, I'm hoping to help introduce this approach to other teams in our project - but for now we need to continue the pilot first. 

Tuesday, May 30, 2017

The limitations of Acceptance Criteria

According to Software Testing Class, Acceptance Criteria are conditions which a software application should satisfy to be accepted by a user or customer.

Often these are also used to guide the testing team's work. If the acceptance criteria are met, then the story has passed. You can choose to test strictly against the acceptance criteria using test cases, exploratory testing, etc., and then, once each acceptance criterion has been "ticked off", mark the testing as done.

The thing is - acceptance criteria have their limitations.

You are expecting someone to know in advance, before seeing the software, exactly how the software application should be. So if you test strictly against the acceptance criteria, you are in essence trusting that the person (or group of people) who wrote them knew everything about what was needed before the software was built.

People don't know what they want until they see it (the same goes for knowing what they don't want)
I think it's possible to build a product that the customer did say they wanted and still find them unhappy, because once they saw it they realised it wasn't quite what they wanted. After seeing something, they are better able to articulate what they want or need from the software application. They may not be able to articulate it clearly until they have something in front of them.

Here's a fun analogy to help me explain this further
Finding a romantic partner
Now of course finding a romantic partner and acceptance criteria are vastly different things, but let me explain. If you've ever, as a single person, talked to your friends about what you want in a potential romantic partner, you may rattle off things such as:

  • Funny
  • Kind
  • Attractive
  • Likes sports
  • Same religion

Among other things, these are qualities you may deem important. So let's assume the above criteria are all must-haves - they are your "acceptance criteria".

But as I said before - acceptance criteria have their limitations.

If you think you know exactly what you want in a romantic partner, then technically a friend can introduce you to someone who is funny, kind, etc., and you'll be happy.

But it's not that simple. While there are some things you may not want to compromise on, there are also things you may not realise are important to you until you meet someone who has those special qualities which you didn't think to define (or who is missing some qualities you didn't think to define).


Wednesday, May 24, 2017

My Testbash Belfast 2017 Experience Report Part I

This is a two-part experience report: the first part covers preparing my talk, the pre-Testbash shindig and the first half of the conference; the second part will cover the second half of the conference and the post-Testbash shindig.

Preparing my talk

I started preparing my talk around 2-2.5 months before the conference, but I didn't properly gain momentum until about 1.5 months before my talk. Initially I tried to write the whole talk out in Google Docs, but I found that didn't work for me. Instead, I ended up creating the slides and writing speaker notes below each one.

I aimed to have a completed presentation ASAP and then just edit it continuously up until I gave my talk. I find it a lot easier to edit a presentation that's complete than to add more to one that is incomplete.

Tuesday, May 16, 2017

My Experience at Romanian Testing Conference 2017 - told through photos

I thought I'd share my experience at Romanian Testing Conference 2017 with the use of photos :)

Here are some photos of our talented speakers on the evening of Thursday 11 May, before the main conference day.

Here's Rob Lambert, the conference chair, welcoming all of us to the Romanian Testing Conference 2017

One of my favourite slides from Santhosh Tuppad's opening keynote

Some photos from Adam Knight's talk on communicating risk

Marcel Gehlen sharing his expertise on creating a test friendly environment

One of the slides from Elizabeth Zagroba's talk on how to succeed as an introvert

At lunch we had quite the dessert offering. I ate less than half of what's on this plate - it was very rich - but I saw lots of other people eat twice this amount for dessert.

My certificate :)

Harry Girlea giving his closing keynote

Sightseeing - I'm doing my classic thumbs up pose here

The Opera House in Cluj

View over Cluj

Going for a walk in Central Park