

Showing posts from January, 2021

The Economics of Software Testing: Opportunity Cost

When I was in high school, I learned about a very interesting concept called opportunity cost in my Economics class. To be honest, it's probably one of the only things from high school that I still remember, and that's because it's everywhere.

What is opportunity cost? It's the value of the next best alternative use of a resource. In other words, it's the "price" you pay for making a choice. Every time you decide to buy something, or to spend time doing something, you are ultimately paying an opportunity cost by making that decision.

Example I

Here is a non-work example of how opportunity cost has affected me recently: I was in the final steps of going live with a personal project. I had registered the business with the tax agency, done my research on logistics, the website was ready, the budget was done, and so on. But then I took a step back. My time is limited. Aside from wor…

What I wish I knew when I started testing: Expectations vs Reality

In November 2018, I gave a talk at Belgrade Test Conference on 'What I wish I knew in my first year of testing'. This is the first post in a series covering some key areas from that talk. (For the second part of this series: What I wish I knew when I started testing: Get involved with the testing community.) This post will focus on Expectations vs Reality. (Since it's been over two years since I gave that talk, there'll be some mismatch between the talk and this post, to reflect new things I've learned.) These are the expectations I had when I started my software testing career back in 2012.

Expectations vs Reality

Perception of quality

My understanding of quality was close to perfection: I thought all (known) bugs had to be addressed before you went live. Bugs were something that hurt the quality of the software. As time has passed, my understanding of quality and "good enough" has changed. Now I don't only focus on bugs, but…

My Most Used Test Heuristics (with examples)

First, what is a heuristic? A heuristic is a guideline, and it is fallible. It will give you a good idea of what behaviour you should see, but it isn't definitively what should happen - it's up to you to confirm that the behaviour you are seeing is correct. In a previous blog post I shared a step-by-step guide on how to test with no requirements (or very few requirements). But I figured it's worth sharing the test heuristics I use most when testing without requirements. They are:

Consistency with History
Consistency with User Expectations
Consistency within Product

Let's take a look at each of them, along with some examples to illustrate the concept.

1. Consistency with History

The feature's or function's current behaviour should be consistent with its past behaviour, assuming there is no good reason for it to change. This heuristic is especially useful when testing a new version of an existing program. (Source: developsense) Example: Whitcoulls, a…
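To make the "consistency with history" idea concrete in code: in automated form it often looks like comparing the current behaviour against a "golden" result recorded from the previous version. This is a hypothetical sketch of mine, not from the post; the function under test and the recorded value are made-up examples.

```swift
import Foundation

// Made-up behaviour we are re-testing in the new version of the product.
func formatPrice(_ cents: Int) -> String {
    return String(format: "$%d.%02d", cents / 100, cents % 100)
}

// Golden value recorded from the previous release.
let historicalResult = "$19.99"

// The heuristic: the new version should match history
// unless there is a good, known reason for the change.
let currentResult = formatPrice(1999)
assert(currentResult == historicalResult,
       "Behaviour changed from last release - is there a good reason?")
```

If the assertion fails, that isn't automatically a bug: the heuristic is fallible, so the next step is to ask whether there was a deliberate reason for the change.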

How to add an image for multiple locales using XCTest

An annoying thing I've had to deal with recently is adding an image in a test (both taking a photo with the camera and choosing one from the gallery). You might think that sounds pretty straightforward, but when you want to run your test in multiple locales, hard-coded labels like "Choose from library" and "Upload photo" just don't cut it. Attempts to record the steps I took within the gallery view also proved futile, as I could not actually record what I did there. Here is how I went about adding images to my tests:

    func testAddPhoto() {
        let addPhotoButton = app.tabBars.buttons[LocalizedString("")].firstMatch
        let photoLibraryButton = app.staticTexts[LocalizedString("photo.library.button")].firstMatch
        let takePhotoButton = app.staticTexts[LocalizedString("")].firstMatch

        addPhotoButton.tap()
        XCTAssertEqual(waiterResultWithExpectation(photoLibraryButton…
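The excerpt above relies on two helpers that aren't shown: `LocalizedString(_:)` and `waiterResultWithExpectation(_:)`. Here is a minimal sketch of how such helpers might look, inferred purely from the call sites above; the names, signatures, and the `UITestBundleToken` class are my assumptions, not the author's actual implementation.

```swift
import XCTest

// Token class used only to locate the UI test bundle; hypothetical.
private final class UITestBundleToken {}

// Hypothetical helper: resolves a localization key against the test bundle's
// .strings files, so element queries work in whichever locale the test runs.
func LocalizedString(_ key: String) -> String {
    let bundle = Bundle(for: UITestBundleToken.self)
    return NSLocalizedString(key, bundle: bundle, comment: "")
}

// Hypothetical helper: waits for an element to exist and returns the
// XCTWaiter result, so the caller can assert it equals .completed.
func waiterResultWithExpectation(_ element: XCUIElement,
                                 timeout: TimeInterval = 10) -> XCTWaiter.Result {
    let predicate = NSPredicate(format: "exists == true")
    let expectation = XCTNSPredicateExpectation(predicate: predicate,
                                                object: element)
    return XCTWaiter().wait(for: [expectation], timeout: timeout)
}
```

With helpers along these lines, the assertion in the test would read something like `XCTAssertEqual(waiterResultWithExpectation(photoLibraryButton), .completed)`, keeping the test locale-independent.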