Nicky Tests Software

Sunday, April 7, 2019

TestBash Brighton 2019: Morning Workshop on "Context Driven Coaching... There is no script."

In the morning I attended Martin Hynie's and Chris Blain's workshop on Context Driven Coaching... There is no script.

Here are my takeaways on what I learned in this workshop:

1. It's common for people to be promoted to managerial positions without having the skills to coach people.
Martin and Chris have coached managers who are in charge of coaching people - but people aren't necessarily promoted to these positions because they have coaching skills.

2. If you want people to work together, you need to give them a chance to get to know each other, to build trust and to show some vulnerability
To kick off the workshop, Martin and Chris had us answer four questions within our table groups (roughly six people per table), presumably to achieve exactly this.

3. The person you are coaching needs to trust you and to trust that you know what you're doing.

4. When you are coaching, there is a high probability that you need to explain why the skill (you are coaching) is important/needed

5. No major change has come from logic; it comes from energy

6. It's important to remember that everyone is trying (in their own mind) to make the world a better place.
If things come up that stress you out, think about:

  • What can I control?
  • What can't I control? → What can I influence?

7. The difference between Feedback, Coaching and Mentoring
Feedback - giving someone information about past behaviour, in the present, to affect future behaviour
Coaching - trying to transfer knowledge/skills
Mentoring - reflecting on a journey you've taken; sharing your experience with the mentee and how that experience can be related to the mentee's current experiences

8. Coaching vs. Training
Coaching - more likely to be individual
Training - more likely to be a larger audience

9. Coaching vs. Mentoring

Coaching:
  • "Transaction" - clear end/objective
  • Shorter timeframes - a few months
  • You can measure if it was successful

Mentoring:
  • Reflecting on a journey - no clear objective
  • Timeframe could be years
  • Can't clearly measure success

10. Johanna Rothman's 4 step approach to coaching
  1. Verify help is wanted/needed
  2. Generate options - at least three
  3. Review options, let them choose
  4. Create an action plan (both the coach and the person being coached are responsible for this)

Thursday, February 7, 2019

Reflecting on leading a Testing Community of Practice Part II

For Part I go here

Devoting time and effort - when I have it

While I'm on my project my priority is as a tester in my scrum team. Therefore, I only devote time and effort when I have it. Some weeks I'm very busy in my team and barely give the CoP a second thought; other weeks I have more time to prepare a presentation or approach people to give presentations (or look up topics to see what people may find interesting to hear about from others).

I really appreciate the flexibility. While there is an expectation that something happens regularly, it seems that definition of "regularly" has become roughly once a month.

Merging the Test Automation COP and Testing COP

The lead of the Test Automation COP pinged me on Slack a few weeks ago to see what I thought about merging the two. I said I was all for it (after all, I saw Test Automation as a part of testing - a way to approach testing - and so did he).

We both posted messages in our Slack channel saying we had this idea and wanted to hear what people thought of it, or whether anyone was concerned or worried about the move. Based on the feedback, people seemed OK with it.

Now that we had merged, we updated the Confluence page (for those who read it, that is. Are there page counters in Confluence? 💁).
I also sent out a survey asking how testing was going in their teams, what their biggest testing problems were, and what they wanted to learn more about. I also asked whether they had automation set up in their team, or wanted help getting it set up. (I've found people don't always actively seek out help, but if you offer it, they may take you up on the offer.)

Wednesday, February 6, 2019

Reflecting on leading a Testing Community of Practice Part I

For about 4-6 months, I have been leading the Testing Community of Practice on my current project. Before that, four of us were co-leads (for roughly six months) until I was approached to see if I wanted to drive it and be the lead. I said yes - and said I wanted to see if I was a good fit as a lead, if I had the energy/desire for it, and if there was a need/desire for a Testing CoP in the first place.

Finding out what people expected from this Community of Practice

My first focus was to find out what people expected from a Community of Practice. I sent out some surveys to those already in the Testing slack channel, and had two discussion groups in our Malmö and Helsingborg offices.
The hard part was that I already had my own opinions on what it was and what it would involve, so when I was holding these discussions I had to watch what I said, and how I said it, in an effort not to affect people's opinions.
The main thing people expected was to share information about testing with each other - both in terms of how they test in their team, and learning about testing concepts and tools that were new to them.

Getting people involved

While the majority of people expected information sharing and wanted to hear about how testing is done in other teams (we are in scrum teams distributed across two offices), people aren't exactly jumping up and down to share how they test in their team.
If I ask in a slack channel, "Does anyone have anything to share or want to share what they learned this week? Or a tool they are using?" then chances are I hear nothing (responses have happened, but very rarely).

I have found it works much better to approach people directly, tell them what you have noticed about their skillset, and ask if they want to talk about their experiences. It seems a lot of people don't realise that what they think is "easy" or "normal" or "not interesting to listen to" is actually something others would benefit from hearing about.

Some upcoming sessions I'm really looking forward to in our Testing Community of Practice are on how one tester implemented and is using Cypress, and how another works with developers.

Figuring out if there's value in this, and if I'm adding value

This is a big one for me. At the start, I said yes, then said I wanted to see if I was a good fit as a lead, if I had the energy/desire for it and if there was a need/desire for a Testing CoP in the first place.

In terms of how testing is done on my project and what people want, part of me can't help but feel a little helpless. A very small minority tell me what they want/expect, and those same people share information about how they test in their team and the new tools they use. I don't know what the majority wants, or whether they even care about the links posted in the slack channel, or the automation workshops I arranged with an automation teacher (he taught automation in a course before our current project), as I don't hear anything from a lot of people.

Another thing: there are fewer and fewer testers on my project. I'm seeing testers either being forced out or choosing to leave, as they don't feel testing (and thus their skills) is valued anymore. And you know what - that sucks! So here I am wondering: is this Testing CoP adding value? Am I adding value in this role?

The thing is, I have no "real authority" or leadership title on this project (which is interesting, considering most people involved in testing on my project seem to be "Test Managers"). So in terms of upskilling people, or trying to inspire them to get involved and learn more, I'm not sure exactly how I'm perceived by my peers.

Maybe I care too much? Maybe.
But if I realise that I'm feeling apathy, then that's almost a definite sign it's time for me to leave the project.

For Part II, go here.

Friday, December 7, 2018

How to set up a proxy on Charles (so you can test other devices on your local dev machine)

Have you ever wondered, how do I test my fixes/changes on my local machine on IE 11?
Or in general, how do I test my fixes/changes on my local machine, when I don't actually have that browser on my laptop?

If so, the Charles Proxy tool might be your answer.

A few years ago I learned this, and I thought it could be useful for you if any of the below apply:

  • You are developing on a Mac and want to test Internet Explorer 11 or Microsoft Edge on your local machine
  • You want to test your fixes on a (physical) mobile device, with fixes done on your local machine
  • You don't have access to a mobile device/internet browser simulator like BrowserStack or Perfecto

For this, I'm going to explain how to set it up using your development laptop and a "test device".

1. Download Charles and install this on your laptop where you do your development
2. Make sure both the laptop and the test device are on the same internet network
3. Get your IP address from your laptop (you need your internal IP address, not the public facing one)
4. On your test device, go to your internet settings and enable the HTTP proxy manually. In the server field, enter the IP address of your laptop, and in the port field type 8888 (the port must match the one configured in Charles; you can also use 8080 in both places)
5. In Charles, click on the Proxy menu, then click Proxy Settings, and tick the relevant checkboxes. [Screenshot of the Proxy Settings dialog]

6. Once you click OK, go back to your test device and access your normal localhost URL - you should be able to see your traffic in Charles.
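If you're unsure how to find your internal IP address (step 3), it can also be done programmatically. Here's a small Python sketch using a well-known trick: opening a UDP socket toward a routable address (8.8.8.8 here is just a placeholder; no packets are actually sent) makes the OS pick the outbound interface, revealing your LAN-facing IP rather than 127.0.0.1.

```python
import socket

def get_lan_ip():
    """Return the machine's internal (LAN) IP address.

    Connecting a UDP socket doesn't send any packets, but it does make
    the OS choose the outbound network interface - whose address is the
    internal IP you want to enter on the test device.
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("8.8.8.8", 80))  # any routable address works
        return s.getsockname()[0]
    finally:
        s.close()

print(get_lan_ip())  # enter this value in the test device's proxy settings
```

Alternatively, `ipconfig getifaddr en0` on a Mac (or `ipconfig` on Windows) gives you the same information from the command line.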

Wednesday, November 21, 2018

My experience at Agile Testing Days 2018

I had seen blog posts and activity on Twitter in the years leading up to my first ever Agile Testing Days (ATD) - it seemed like people really enjoyed this conference, so it's safe to say I was glad to get this email from Uwe saying:

"Congratulations, you are part of the Agile Testing Days 10th anniversary as a speaker with: Bringing in change when you don’t have a “leadership” job title! "

For me, this is a BIG conference; a lot of attendees and A LOT of tracks - I won't deny I found this intimidating, as there were a lot of interesting talks/workshops at the same time. Looking back at the conference, the one thing that really stands out to me is choice: Which talks will I see? Which am I prepared to miss out on?

Saturday, November 17, 2018

My experience at Belgrade Test Conference 2018

Just under two weeks ago, I flew to Belgrade for the first time, to present my talk: "What I wish I knew in my first year of testing" at the Starter track. 

According to the conference site the Starter track:
"will focus more on the role of testing in general and software development basics, as well as some technical showcases of what testing actually looks like. 
It is well suited for people who want to start their careers in software testing or deepen their understanding of testing."
The conference had 3 parallel tracks - 1 Starter track and 2 Tester tracks. Based on my understanding, people could only get access to either the 2 Tester tracks or the 1 Starter track.

It was my second time presenting this talk (after presenting it at EuroSTAR 2016), but this time the talk was very different: the same core idea, but the material itself was about 30-40% different, and organised/structured very differently.

I arrived in Belgrade on the evening of November 8, about 6 hours late due to a missed connection, so unfortunately I missed out on the Speakers' dinner (while the other speakers were having delicious Serbian food, I was eating lots of bread and chocolate in various airports).

Friday, June 8, 2018

Introducing people to Exploratory Testing Part II

It's been over 6 months since I posted my initial post on Introducing people to Exploratory Testing Part I and I have a few updates to share.

Reception of Exploratory Testing
From testers, the reception has been mainly really good. People on our project are eager to learn something new and learn a different approach to testing.

Here are excerpts from two pieces of feedback we have received:
1. By doing exploratory testing here it doesn't restrict my testing. It invites me to investigate further and also be able to pinpoint problems better. This leads to better and more precise defects being created also.

Another benefit that we've seen with ET is that we can feel more confident about the testing that has been done. Usually the checks(test cases) that has been created before, only covers the minimal required. Usually the checks are created directly from Acceptance criterias and also only covers those
2. Exploratory testing gave me more freedom to think more, analyse more and test more. So it helped me to find issues earlier and deliver product with better quality.

Some concerns that were raised in the workshops (more about the workshops below) include how to hand over test cases to another team (e.g. the System Integration Test team and the Automation test team), and how to know if something passed or not.

Ideally all of the testing should be handled within a scrum team, so the first concern is rather redundant under our current team set-up - but when the concern was raised a few months ago (I'm writing this blog post a few months late), it was valid, as we were going through a transition at that time. In terms of knowing if something passed or not, we'd like to encourage more of an "informative" mentality.

Workshops given
My colleague on my project, Maria Kedemo, has given a few workshops to teach testers on our project what Exploratory Testing is and how to do SBTM (Session-Based Test Management - the approach we are focusing on here at the moment).

At the moment we don't have any workshops scheduled - but the Test Community leaders on our project plan to discuss this and figure out what information we can share that would be most beneficial to testers on our project.

Presentation to Test Managers about our project
About two weeks ago, I gave a presentation to Test Managers about how our project tackles documentation without test cases - focusing in particular on how we document our testing using charters.

I started off by describing what Exploratory Testing is and is not - to (hopefully) get them to understand, what I mean when I use the term in the presentation. I then talked a bit about what SBTM is and what a charter is.

I then showed an example charter I used for a past feature, so they could see what it looks like (I didn't want this to be theoretical; I wanted them to see what we do and how we do it).
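The charter from the presentation isn't reproduced in this post, but for readers unfamiliar with SBTM, a typical session charter sketch (all names and details below are hypothetical, not from my project) looks something like:

```text
CHARTER:  Explore the checkout flow with invalid payment details
          to discover how errors are reported to the user.

AREAS:    checkout, payment validation, error messages
TESTER:   (your name)
DURATION: 90 minutes

TASK BREAKDOWN
  Test design & execution: 70%
  Bug investigation:       20%
  Setup:                   10%

NOTES:    Free-form observations made while testing
BUGS:     IDs of any defects raised during the session
ISSUES:   Open questions / obstacles encountered
```

The key idea is that the charter states a mission up front, while the notes, bugs and issues capture what actually happened - which is what makes the testing reviewable without scripted test cases.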

After delving deeper into how we test on our project and document things, I described exactly how we transitioned from test cases to Exploratory Testing on our project - we started off with a pilot in one team, slowly spread it to other teams, organised workshops, and adapted to the testing tool we were forced to use. Adapting to that tool didn't affect the ET itself; it just affected how we attached our charters.

Some questions (that I remember) that arose after the presentation included:

  • Has the quality of the software improved since we started this approach?
  • How do you use the testing tool, CLM, to record testing?
  • How do people find it?
  • Does everyone on our project do ET?

Challenges ahead
One challenge we still have ahead of us is around expectations of what Exploratory Testing has to offer. There's still a misconception that it is a strict replacement for test cases, and that you should be able to measure progress by counting the number of charters. It took a bit of time to get rid of the pass/fail mentality that test cases encourage, but people still like to count something, so for now upper management are making do with counting charters.

Another challenge is getting people to do Exploratory Testing properly. It seems to me some people are just using it as an excuse to skip test cases, do ad-hoc testing, and not document anything whatsoever. We are working on handling this, and on figuring out how to give testers enough freedom to explore without creating a "big brother" situation where we constantly monitor everyone (we would like to show trust).