Nicky Tests Software

Thursday, February 7, 2019

Reflecting on leading a Testing Community of Practice Part II

For Part I, go here.

Devoting time and effort - when I have it

While I'm on my project, my priority is my role as a tester in my scrum team. Therefore, I only devote time and effort to the CoP when I have it. Some weeks I'm very busy in my team and barely give the CoP a second thought; other weeks I have more time to prepare a presentation, approach people to give presentations, or look up topics that people may find interesting to hear about from others.

I really appreciate the flexibility. While there is an expectation that something happens regularly, it seems the definition of "regularly" has settled at roughly once a month.

Merging the Test Automation CoP and the Testing CoP

The lead of the Test Automation CoP pinged me on Slack a few weeks ago to see what I thought about merging the two. I said I was all for it (after all, I saw test automation as part of testing, a way to approach testing, and so did he).

We both posted messages in our Slack channel saying we had this idea and wanted to hear what people thought of it, or whether anyone was concerned or worried about the move. Based on the feedback, people seemed OK with it.

Now that we had merged, we updated the Confluence page (for those who read it, that is. Are there page view counters in Confluence? 💁).
I also sent out a survey asking how testing was going in people's teams, what their biggest testing problems were, and what they wanted to learn more about. I also asked whether they had automation set up in their team or wanted help getting it set up. (I've found people don't always actively seek out help, but if you offer it, they may take you up on the offer.)

Wednesday, February 6, 2019

Reflecting on leading a Testing Community of Practice Part I

For the past 4-6 months, I have been leading the Testing Community of Practice at my current project. Before that, there were four of us acting as co-leads (for roughly six months) until I was approached to see if I wanted to drive it and be the lead. I said yes, with the caveat that I wanted to see whether I was a good fit as a lead, whether I had the energy and desire for it, and whether there was a need or desire for a Testing CoP in the first place.

Finding out what people expected from this Community of Practice

My first focus was to find out what people expected from a Community of Practice. I sent out surveys to those already in the Testing Slack channel, and held two discussion groups in our Malmö and Helsingborg offices.
The hard part was that I already had my own opinions on what it was and what it would involve, so when I was holding these discussions I had to watch what I said, and how I said it, so as not to influence people's opinions.
The two main things people expected were to share information about how they test in their teams, and to learn about testing concepts and tools that were new to them.

Getting people involved

While the majority of people expected information sharing and wanted to hear how testing is done in other teams (we are in scrum teams distributed across two offices), people aren't exactly jumping up and down to share how they test in their own team.
If I ask in a Slack channel, "Does anyone have anything to share, or want to share what they learned this week? Or a tool they are using?", chances are I don't hear anything (I do get the occasional response, but very rarely).

I have found a much better approach is to contact people directly, tell them what you've noticed about their skillset, and ask if they'd like to talk about their experiences. It seems a lot of people don't realise that what they consider "easy", "normal", or "not interesting to listen to" is actually something others would benefit from hearing about.

Some upcoming sessions I'm really looking forward to in our Testing Community of Practice are one on how a tester implemented and is using Cypress, and another on how a tester works with developers.


Figuring out if there's value in this, and if I'm adding value

This is a big one for me. As I mentioned at the start, I said yes, but I wanted to see if I was a good fit as a lead, if I had the energy and desire for it, and if there was a need or desire for a Testing CoP in the first place.

In terms of how testing is done in my project and what people want, part of me can't help but feel a little helpless. A very small minority tell me what they want or expect, and it's those same people who share how they test in their team and which new tools they use. I don't know what the majority wants, or whether they even care about the links posted in the Slack channel or the automation workshops I arranged with an automation teacher (he actually taught automation courses before joining our current project), because I hear nothing from a lot of people.

Another thing: there are fewer and fewer testers in my project. I'm seeing testers either being forced out or choosing to leave because they feel testing (and thus their skills) is no longer valued. And you know what, that sucks! It leaves me wondering: is this Testing CoP adding value? Am I adding value in this role?

The thing is, I have no "real authority" or leadership title on this project (which is interesting considering most people involved in testing here seem to be "Test Managers"). So when it comes to upskilling people or trying to inspire them to get involved and learn more, I'm not sure exactly how I'm perceived by my peers.

Maybe I care too much? Maybe.
But if I ever realise I'm feeling apathetic, that's almost certainly a sign it's time for me to leave the project.


For Part II, go here.



Friday, December 7, 2018

How to set up a proxy on Charles (so you can test other devices on your local dev machine)

Have you ever wondered how to test the fixes/changes on your local machine in IE 11?
Or, more generally, how to test the fixes/changes on your local machine when you don't actually have that browser on your laptop?

If so, the Charles Proxy tool might be your answer.

I learned this a few years ago, and I thought it could be useful if any of the below apply to you:

  • You are developing on a Mac and want to test Internet Explorer 11 or Microsoft Edge against your local machine
  • You want to test fixes made on your local machine on a (physical) mobile device
  • You don't have access to a mobile device/browser simulator like BrowserStack or Perfecto

Below, I'll explain how to set this up using your development laptop and a "test device".

1. Download Charles (https://www.charlesproxy.com/download/) and install it on the laptop where you do your development
2. Make sure the laptop and the test device are on the same network
3. Find your laptop's IP address (you need the internal IP address, not the public-facing one; see the sketch after these steps if you're not sure how to find it)
4. On your test device, go to your network settings and manually enable an HTTP proxy. In the server field, enter your laptop's IP address, and in the port field enter 8888 (the port must match the one configured in Charles; you can also use 8080, as long as both match)
5. In Charles, click on the Proxy menu, then Proxy Settings, and configure the port and checkboxes as shown in the screenshot below.


6. Once you click OK, go back to your test device and access your normal localhost URL; you should see the traffic appear in Charles.
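For step 3, running ifconfig (macOS/Linux) or ipconfig (Windows) will show your internal IP address. If you prefer a script, here is a minimal Python sketch (my own illustration, not part of the original steps) that asks the OS which local address it would use for outbound traffic:

```python
import socket

def get_internal_ip() -> str:
    """Return the internal (LAN) IP address this machine uses for outbound traffic."""
    # "Connecting" a UDP socket sends no packets, but it makes the OS pick the
    # local interface (and therefore the internal IP) it would route through.
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.connect(("8.8.8.8", 80))
        return s.getsockname()[0]

if __name__ == "__main__":
    print(get_internal_ip())  # e.g. 192.168.1.23
```

The address printed here (typically something like 192.168.x.x or 10.x.x.x) is what goes into the proxy server field on your test device.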




Wednesday, November 21, 2018

My experience at Agile Testing Days 2018

I had seen blog posts and activity on Twitter in the years leading up to my first ever Agile Testing Days (ATD). It seemed like people really enjoyed this conference, so it's safe to say I was glad to get this email from Uwe saying:



"Congratulations, you are part of the Agile Testing Days 10th anniversary as a speaker with: Bringing in change when you don’t have a “leadership” job title! "


For me, this is a BIG conference: a lot of people, a lot of attendees, and A LOT of tracks. I won't deny I found this intimidating, as there were a lot of interesting talks and workshops running at the same time. Looking back at the conference, the one thing that really stands out to me is choice: which sessions will I see? Which am I prepared to miss out on?

Saturday, November 17, 2018

My experience at Belgrade Test Conference 2018

Just under two weeks ago, I flew to Belgrade for the first time, to present my talk: "What I wish I knew in my first year of testing" at the Starter track. 

According to the conference site the Starter track:
"will focus more on the role of testing in general and software development basics, as well as some technical showcases of what testing actually looks like. 
It is well suited for people who want to start their careers in software testing or deepen their understanding of testing."
The conference had three parallel tracks: one Starter track and two tester tracks. Based on my understanding, attendees had access to either the two tester tracks or the one Starter track, but not both.

It was my second time presenting this talk (after presenting it at EuroSTAR 2016), but this time it was quite different: the core idea was the same, but the material itself was about 30-40% new, and it was organised and structured very differently.

I arrived in Belgrade on the evening of November 8, about six hours late due to a missed connection, so unfortunately I missed out on the speakers' dinner (while the other speakers were enjoying delicious Serbian food, I was eating lots of bread and chocolate in various airports).

Friday, June 8, 2018

Introducing people to Exploratory Testing Part II

It's been over 6 months since my initial post, Introducing people to Exploratory Testing Part I, and I have a few updates to share.

Reception of Exploratory Testing
From testers, the reception has been mostly very good. People on our project are eager to learn something new and to try a different approach to testing.

Here are excerpts from two pieces of feedback we have received:
1. By doing exploratory testing here it doesn't restrict my testing. It invites me to investigate further and also be able to pinpoint problems better. This leads to better and more precise defects being created also.

Another benefit that we've seen with ET is that we can feel more confident about the testing that has been done. Usually the checks (test cases) that have been created before only cover the minimum required. Usually the checks are created directly from acceptance criteria and also only cover those.
2. Exploratory testing gave me more freedom to think more, analyse more and test more. So it helped me to find issues earlier and deliver product with better quality.

Some concerns that were raised in the workshops (more about the workshops below) include how to hand over the test cases to another team (e.g. the System Integration Test team or the Automation test team) and how to know whether something passed or not.

Ideally, all of the testing should be handled within a scrum team, so the first concern is rather redundant under our current team set-up. But when the concern was raised a few months ago (I'm writing this blog post a few months late), it was valid, as we were going through a transition at the time. In terms of knowing whether something passed or not, we'd like to encourage more of an "informative" mentality.


Workshops given
My colleague on the project, Maria Kedemo, has given a few workshops to teach testers on our project what Exploratory Testing is and how to do Session-Based Test Management (SBTM), which is the approach we are focusing on at the moment.

At the moment we don't have any workshops scheduled, but the Test Community leaders on our project plan to discuss this and figure out what information would be most beneficial to share with testers on our project.


Presentation to Test Managers about our project
About two weeks ago, I gave a presentation to Test Managers about how our project tackles documentation without test cases, focusing in particular on how we document our testing using charters.

I started off by describing what Exploratory Testing is and is not, to (hopefully) make sure they understood what I meant when I used the term during the presentation. I then talked a bit about what SBTM is and what a charter is.

I then showed an example charter I used for a past feature so they could see what it looks like (I didn't want this to be purely theoretical; I wanted them to see what we do and how we do it).
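For readers who haven't seen one: a commonly used charter template (this is a generic illustration, not the actual charter from the presentation) is along the lines of "Explore <target area> with <resources/tools/techniques> to discover <information we care about>", for example "Explore the checkout flow with invalid payment details to discover how errors are handled and reported". A charter frames a session of exploration without prescribing step-by-step test cases.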

After delving deeper into how we test and document things on our project, I explained exactly how we transitioned from test cases to Exploratory Testing: we started with a pilot in one team, slowly spread it to other teams, organised workshops, and adapted to the testing tool we were required to use. Adapting to that tool didn't affect the ET itself; it only affected how we attached our charters to it.

Some questions (that I remember) that arose after the presentation included:

  • Has the quality of the software improved since we started this approach?
  • How do you use the testing tool, CLM, to record testing?
  • How do people find it?
  • Does everyone on our project do ET?



Challenges ahead
One challenge still ahead of us is around expectations of what Exploratory Testing has to offer. There's still a misconception that it is a strict replacement for test cases and that you should be able to measure progress by counting the number of charters. It took a bit of time to get rid of the pass/fail mentality that test cases encourage, but people still like to count something, so for now upper management are making do with counting charters.

Another challenge is getting people to do Exploratory Testing properly. It seems to me that some people are just using it as an excuse to skip test cases, do ad-hoc testing, and not document anything whatsoever. We are working on handling this and figuring out how to give testers enough freedom to explore without creating a "big brother" situation where we constantly monitor everyone. (We would like to show trust.)

Saturday, February 10, 2018

An analogy to explain the limitations of test cases

I love analogies. They help me explain things in a way that (hopefully) others can understand and relate to.

When I was thinking about how to explain the limitations of test cases (because knowing test cases aren't all they're cracked up to be, and explaining that to someone, are two different things), the first thing that came to mind was job interviews. We've all been to job interviews; it's a familiar concept we can all relate to.

So here we go:

First, let's agree that both testing and job interviews are information seeking activities.
In testing, we are trying to find out information about the Software Under Test.
In job interviews, the company is trying to seek information about the candidate (actually it goes both ways: the candidate is also seeking information about the company).

Second, let's agree that in both examples you want to make an informed decision.
In testing, you want to know if the software is ready to go live or proceed to another testing phase (there are other missions related to testing, but sticking to this, for the sake of the analogy).
In job interviews, the company wants to know whether they want to hire you (and the candidate wants to know: do I actually want to work here?).



Using test cases is like coming to the job interview with all of your questions pre-planned (on both sides, candidate and company).

This means when you come to the job interview, both sides have a set of questions that they plan to ask and are only seeking the answers to THOSE questions. No follow-up or investigation based on what the other side said.

Scenario:
Interviewer: Do you have any experience working in an Agile environment? (planned question)
Candidate: Yes, I do. In my previous project, we were working in scrum teams but we didn't have scrum masters.

**This answer could be considered strange and would warrant a follow-up. Technically it may "pass" the interviewer's definition of acceptable, but not having a scrum master is something worth investigating with further questions to see if they were actually working in scrum teams.**


Even worse, using metrics to dictate success could be misleading.

If 89 test cases out of 90 passed, all that tells me is that 89/90 test cases passed. I don't know if that's great, amazing, or concerning; to me, it's just a number. But to many people who look at test case metrics, that's not the case (see what I did there :D).

With just these types of metrics, we don't know the quality of the test cases, the coverage, how much overlapping material there is, or whether the one failing test case is a blocker. A high (or low) number of test cases is also no indication of how well tested the feature is. Do 90 test cases mean the SUT is better tested than one with only 30? Maybe having 200 test cases would have been preferable?

Back to our analogy:

Let's say the interviewer has 15 pre-planned questions for the candidate, but some of those questions are much more "shallow" than others. We shouldn't put equal weight on each question.

Some examples of questions that may be asked at a job interview:

  • Why do you want to leave your current company? (interesting to know for the Interviewer)
  • Do you have any experience with XXX technology? (depending on the technology, could be a nice to have but not mandatory, you could learn this)
  • Have you worked in XX Industry before? (depending on the Interviewer, may be out of curiosity or an important question)


Summary:
If you ever find yourself trying to explain the limitations of test cases to someone, try using an analogy. Use specific examples of job interviews and the questions both sides asked. Did both sides only ask the questions they planned beforehand? Was there a "right" number of questions that had to be answered correctly? Did both sides know what the "right" expected answer was for every question?