Nicky Tests Software

Friday, December 7, 2018

How to set up a proxy on Charles (so you can test other devices on your local dev machine)

Have you ever wondered how to test your fixes/changes from your local machine on IE 11?
Or more generally, how to test your fixes/changes on your local machine when you don't actually have that browser on your laptop?

If so, the Charles Proxy tool might be your answer.

I learned this a few years ago, and I thought it could be useful for you if any of the below apply:
You are developing on a Mac and want to test Internet Explorer 11 or Microsoft Edge on your local machine
You want to test your fixes on a (physical) mobile device, with fixes done on your local machine
You don't have access to a mobile device/internet browser simulator like Browserstack or Perfecto

For this, I'm going to explain how to set this up using your development laptop and a "test device".

1. Download Charles and install this on your laptop where you do your development
2. Make sure both the laptop and the test device are on the same internet network
3. Get your IP address from your laptop (you need your internal IP address, not the public facing one)
4. On your test device, go to your internet settings and enable HTTP proxy manually. In the server field, enter the IP address of your laptop, and in the port field, type 8888 (the port must match the one configured in Charles; you can also use 8080, as long as you enter it in both places)
5. In Charles, click on the Proxy menu, then click Proxy Settings, and check that the HTTP proxy port matches the port you entered on your test device.

6. Once you click OK, go back to your test device and access your normal localhost URL - you should be able to see your traffic in Charles.
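For step 3, you can find your internal IP address in your OS network settings, or with a short script. Here's a minimal Python sketch (the `internal_ip` helper is my own naming, nothing to do with Charles itself) - the UDP "connect" trick just asks the OS which local address it would use for an outbound route; no packets are actually sent:

```python
import socket

def internal_ip():
    # Connecting a UDP socket to a public address makes the OS choose the
    # outbound interface, so getsockname() returns the internal (LAN) IP.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # UDP connect sends no traffic
        return s.getsockname()[0]
    except OSError:
        # Fallback when there is no default route (e.g. offline)
        return "127.0.0.1"
    finally:
        s.close()

print(internal_ip())  # prints something like 192.168.1.23 - enter this on the test device
```

On a Mac you can get the same information in Terminal with `ipconfig getifaddr en0`, assuming en0 is your Wi-Fi interface.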

Wednesday, November 21, 2018

My experience at Agile Testing Days 2018

I had seen blog posts and activity on Twitter in the years leading up to my first ever Agile Testing Days (ATD) - it seemed like people really enjoyed this conference, so it's safe to say I was glad to get this email from Uwe saying:

"Congratulations, you are part of the Agile Testing Days 10th anniversary as a speaker with: Bringing in change when you don’t have a “leadership” job title! "

For me, this is a BIG conference: a lot of people, a lot of attendees and A LOT of tracks - I won't deny I found this intimidating, as there were a lot of interesting talks/workshops at the same time. Looking back at this conference, one thing that really stands out to me is choice: Which will I see? Which am I prepared to miss out on?

Saturday, November 17, 2018

My experience at Belgrade Test Conference 2018

Just under two weeks ago, I flew to Belgrade for the first time, to present my talk: "What I wish I knew in my first year of testing" at the Starter track. 

According to the conference site the Starter track:
"will focus more on the role of testing in general and software development basics, as well as some technical showcases of what testing actually looks like. 
It is well suited for people who want to start their careers in software testing or deepen their understanding of testing."
The conference had 3 parallel tracks - 1 Starter track and 2 Tester tracks. Based on my understanding, people could only get access to either the 2 Tester tracks or the 1 Starter track.

It was my second time presenting this talk (after presenting it at EuroSTAR 2016), but this time the talk was very different: the same core idea, but the actual material was about 30-40% different and organised/structured very differently.

I arrived in Belgrade on the evening of November 8, about 6 hours late due to a missed connection, so unfortunately I missed out on the Speakers' dinner (while the other speakers were having delicious Serbian food, I was eating lots of bread and chocolate in various airports).

Friday, June 8, 2018

Introducing people to Exploratory Testing Part II

It's been over 6 months since I published Introducing people to Exploratory Testing Part I, and I have a few updates to share.

Reception of Exploratory Testing
From testers, the reception has been mainly really good. People on our project are eager to learn something new and learn a different approach to testing.

Here are excerpts from two pieces of feedback we have received:
1. By doing exploratory testing here it doesn't restrict my testing. It invites me to investigate further and also be able to pinpoint problems better. This leads to better and more precise defects being created also.

Another benefit that we've seen with ET is that we can feel more confident about the testing that has been done. Usually the checks (test cases) that have been created beforehand only cover the minimum required. Usually the checks are created directly from acceptance criteria and only cover those
2. Exploratory testing gave me more freedom to think more, analyse more and test more. So it helped me to find issues earlier and deliver product with better quality.

Some concerns that were raised in the workshops (more about the workshops below) include how to hand over the test cases to another team (e.g. the System Integration Test team and the Automation Test team) and how to know if something passed or not.

Ideally all of the testing should be handled within a scrum team, so the first concern is rather redundant under our current team set-up; but when the concern was raised a few months ago (I'm writing this blog post a few months late), it was valid, as we were going through a transition at the time. In terms of knowing if something passed or not, we'd like to encourage more of an "informative" mentality.

Workshops given
My colleague on this project, Maria Kedemo, has given a few workshops to teach testers on our project what Exploratory Testing is and how to do SBTM (that's the approach we are focusing on here at the moment).

At the moment we don't have any workshops scheduled - but the Test Community leaders on our project plan to discuss this and figure out what information we can share that would be most beneficial to testers on our project.

Presentation to Test Managers about our project
About two weeks ago, I gave a presentation to Test Managers about how our project is tackling documentation without test cases - this presentation also focused a lot on how we document our testing without test cases, using charters.

I started off by describing what Exploratory Testing is and is not - to (hopefully) get them to understand, what I mean when I use the term in the presentation. I then talked a bit about what SBTM is and what a charter is.

I then showed an example charter I used for a past feature so they could see what it looks like (I didn't want this to be theoretical; I wanted them to see what we do and how we do it).

After delving deeper into how we test on our project and document things, I explained exactly how we transitioned from test cases to Exploratory Testing: we started with a pilot in one team, slowly spread it to other teams, organised workshops and adapted to the testing tool we were forced to use. Adapting to that tool didn't affect the ET itself; it just affected how we attached our charters.

Some questions (that I remember) that arose after the presentation included:

  • Has the quality of the software improved since we started this approach?
  • How do you use the testing tool, CLM, to record testing?
  • How do people find it?
  • Does everyone on our project do ET?

Challenges ahead
One challenge we still have ahead of us is around expectations of what Exploratory Testing has to offer. There's still a misconception that it is a strict replacement for test cases and that you should be able to measure progress by counting the number of charters. It took a bit of time to get rid of the pass/fail mentality that test cases encourage, but people still like to count something, so for now upper management is making do with counting charters.

Another thing is getting people to do Exploratory Testing properly. It seems to me some people are just using it as an excuse to skip test cases, do ad-hoc testing and not document anything whatsoever. We are working on handling this and figuring out how to give testers enough freedom to explore without creating a "big brother" situation where we constantly monitor everyone (we would like to show trust).

Saturday, February 10, 2018

An analogy to explain the limitations of test cases

I love analogies. They help me explain things in a way that (hopefully) others can understand and relate to.

When I was thinking about how to explain the limitations of test cases (because knowing test cases aren't all they're cracked up to be, and explaining that to someone, are two different things), the first thing that came to mind was job interviews. We've all been to job interviews - it's an understandable concept and we can all relate.

So here we go:

First, let's agree that both testing and job interviews are information seeking activities.
In testing, we are trying to find out information about the Software Under Test.
In job interviews, the company is trying to seek information on the candidate (actually it goes both ways - the candidate is also trying to seek information on the company)

Second, let's agree that in both examples you want to make an informed decision.
In testing, you want to know if the software is ready to go live or proceed to another testing phase (there are other missions related to testing, but sticking to this, for the sake of the analogy).
In job interviews, the company wants to know if they want to hire you. (and the candidate wants to know, do I actually want to work here)

Using test cases is like coming to the job interview with all of your questions pre-planned (on both sides, candidate and company).

This means when you come to the job interview, both sides have a set of questions that they plan to ask and are only seeking the answers to THOSE questions. No follow-up or investigation based on what the other side said.

Interviewer: Do you have any experience working in an Agile environment? (planned question)
Candidate: Yes, I do. In my previous project, we were working in scrum teams but we didn't have scrum masters.

**This answer could be considered strange or would warrant a follow-up. Technically it may "pass" the interviewer's definition of acceptable, but not having a scrum master could be something that warrants investigation and further questioning to see if they were actually working in Scrum teams.**

Even worse, using metrics to dictate success could be misleading.

If 89 test cases out of 90 passed, all that tells me is that 89/90 test cases passed. I don't know if that's great, amazing or concerning... to me, it's just a number. But to many people who look at test case metrics, that's not the case (see what I did there :D).

With just these types of metrics, we don't know the quality of the test cases, the coverage, how much overlapping material there is, or whether the 1 failing test case is a blocker. A high (or low) number of test cases is also no indication of how well tested the feature is. Do 90 test cases mean the SUT is better tested than one with only 30? Maybe having 200 test cases would've been preferable?

Back to our analogy:

Let's say the interviewer has 15 preplanned questions for the candidate, but some of the questions are a lot more "shallow" than others. We shouldn't put equal weighting on each question.

Some examples of questions that may be asked at a job interview:

  • Why do you want to leave your current company? (interesting to know for the Interviewer)
  • Do you have any experience with XXX technology? (depending on the technology, could be a nice to have but not mandatory, you could learn this)
  • Have you worked in XX Industry before? (depending on the Interviewer, may be out of curiosity or an important question)

If you ever find yourself trying to explain the limitations of test cases to someone, try using an analogy. Use specific examples of job interviews and the questions both sides asked. Did both sides only ask the questions they planned beforehand? Was there a "right" number of questions that had to be answered correctly? Did both sides know what the "right" expected answer was for all questions?

Monday, November 27, 2017

Reminding myself about how one's experience shapes one's point of view

As I am helping introduce Exploratory Testing to our current project, there is one thing I've had to remind myself over and over and over again.

One's experience shapes one's point of view.

When having a discussion, or trying to convince someone of my point of view, I try to consciously remember this.

If the people I am having a discussion with have a different point of view to me, that doesn't necessarily mean I should jump to the conclusion that they are wrong and I am right (or vice versa). Based on our own experiences, chances are we are both right in our own minds. Which means it's not up to me to try and figure out how to convince them that they are wrong and I am right.

I need to figure out how to close the information gap.

I love analogies so let me use an analogy to further explain what I mean:

Working Remotely Analogy
Let's say you want to have the option to work from home and are going to propose remote working in your team.
You have had great experiences working from home. You've been able to get more done (less disturbance), you get to enjoy having no commute and you've had access to the right tools etc. so you can still get your job done and communicate with your team.
But then one of your teammates raises their concern about this because they have also worked remotely and it didn't work out so well for them. Your teammate says that they struggled to contact people who were working remotely and that people who worked remotely often had problems around logging into the VPN and around the communication tool.

We're not going to get anywhere by just having one person be right and another person be wrong. Each person's experience resulted in that person's opinion. Therefore each opinion is valid.

The goal here is to first get a shared understanding of what working remotely is (should be easy enough) but more importantly what working remotely requires by both the project and each individual.

Some questions that may run through the team's mind when discussing working remotely may include:

  • What are their experiences of working remotely?
  • How have these experiences affected their understanding and opinion of what working remotely is?
  • Since I can't just share my own experiences (I can't just tell them), is there any way I can get them to experience what I experienced when it comes to working remotely?

Ideas on the thought process
When it comes to introducing Exploratory Testing to our current project and helping dispel people's misconceptions about ET, I'm keeping the following in mind:

  • What do they think Exploratory Testing is?
  • How can I check to see our understanding of Exploratory Testing is the same thing? (Before trying to advocate for the use of Exploratory Testing, it might be worthwhile seeing if we are discussing the same concept or only the same term)
  • What are their experiences with Exploratory Testing?
  • How have these experiences affected their understanding and opinion of what Exploratory Testing is?

  Self reflection:
  • Am I happy with my use of words, to describe and explain Exploratory Testing?
  • Am I listening to understand, not to answer? (this is a very difficult one for me, working on this)
  • With my use of words and how I say things, am I showing I am open to discussion about the topic and that I am open to questions?
  • Since I can't just share my own experiences (I can't just tell them), is there any way I can get them to experience what I experienced when it comes to Exploratory Testing?
Note: This is an effort to document my thought process when it comes to certain discussions at work, not all of these questions run through my mind with each and every conversation. But I do try to be aware of these questions and again remind myself that:

One's experience shapes one's point of view.

Thursday, October 12, 2017

Introducing people to Exploratory Testing Part I

 A bit of context

For the past 2 months(ish) I've been working on introducing Exploratory Testing to people on my project, starting with my immediate team of 3 testers, which is distributed across 3 countries. The project, as a whole, has a lot more than that, but the plan is to introduce this as a pilot, see what the testers think of it, and then (hopefully) introduce this approach to other teams and other features.

I'm still a relatively new person on this project as I've been on it for 6 months, but the other testers in my team have been on this project for 3-5 years. So I've made sure to ask their thoughts, listen to their ideas and address their concerns about this - they know things about our context (which I don't) because of their experience here.

Currently, on the project, we write test scripts, link them to test cases, then execute those test cases. I'm under the impression that a lot of people on the project have only ever used test cases to formally do testing (when they're not doing "Exploratory Testing").

Another thing to keep in mind, is that this process is still ongoing (hence 'Part I'), but while these thoughts are still fresh on my mind, I wanted to get them down.

Sites/resources I shared

Spotify Offline: Exploratory Testing by Rikard Edgren
I asked my Test Manager (who also thinks Exploratory Testing is effective) about resources I could share, and she recommended two YouTube videos by Rikard Edgren.

James Bach has a lot of useful posts on his blog helping explain what Exploratory Testing is.
Few examples:
Exploratory Testing 3.0
What is Exploratory Testing

I also shared a post from Michael Bolton's series - what Exploratory Testing is not

Lastly, I decided that Session Based Test Management (SBTM) would be a great way to help us structure our Exploratory Testing, so I shared some resources around that, including this PowerPoint presentation by Anders Claesson

Managing others' expectations

I've noticed that a lot of people on this project have a very different understanding (to me) of what Exploratory Testing is. Based on what people on this project say, it seems that they think Exploratory Testing and Ad hoc testing are the same thing.

Since initially introducing the idea to the other two testers I work with, I'd say I've been met with a cautious reception. Test cases are seen, by one, as proper testing and Exploratory Testing is not - I'm still working on breaking that misconception. Aside from that, it does seem to be a welcome idea - you get to see results faster and are able to react to what you find as you test.

In terms of time estimates and how this affects our team meeting its goals, I've been sure to communicate with our team that this is a new way of working which we need to learn - so any time savings may not be seen straight away.

Lastly, there is the idea of coverage - to address this, I've decided to specifically mention, at the start of each charter, which Acceptance Criteria are covered in the charter. People on this project like reports and seeing the number of passed test cases etc. - I'm still learning how to deal with that mindset and any obstacles which arise there.

Managing my own expectations

This has been tough. It's been a while since I've been in a work environment with people who are avid fans of test cases. I don't think that test cases offer no value at all, but I think people overestimate the value they provide. Test cases can give people a false sense of security about the state of the product when they see that 95% of their test cases have passed, but they can't properly attach meaning to how a 95% pass rate affects the customer. It's just a number.

I've also been trying to manage my expectations around other people's understanding of good testing and my own understanding of good testing. I constantly remind myself that their understanding is based on their own experiences. So neither of us are necessarily wrong - we are both right in our own mind.

My goal is to effectively show people on my project another way of doing testing. Then they can make more informed decisions in the future and choose which approach is best for their context.

Moving Forward

I'm hoping to sit with the tester who'll soon be on-site and pair test with them as we do Exploratory Testing. I should also organise a pair testing session with the tester who'll still be offsite. Once we've done this, I'll seek more feedback on this approach and see what they like and what they are concerned about (in the context of this project). We're also working on a Low Tech Dashboard to help us communicate the testing status for our features and help others attach meaning to what we present. 

In time, I'm hoping to help introduce this approach to other teams in our project - but for now we need to continue the pilot first.