Testing: 3 lessons learned


In Summer 2013 I made the difficult decision to move away from my beloved Cardiff to live in Yorkshire with my (now-) wife. During my 6-month job-hunting period I blogged about my frustrations with Jobcentre Plus and shared my advice for dealing with recruiters. Dozens of applications and 3 job interviews later, I found a new career as a Web Tester for Numiko, a digital agency in Leeds. Like many others, I hadn’t planned on testing as a career path, but it had always interested me, so I jumped at the chance to try it. As well as a switch from marketing to testing, this was also a change in company type (tiny SME to medium-sized agency), industry sector (desktop software to web development) and location (Cardiff to Leeds)! In October 2015 I joined Byng as their first test engineer. This is my first blog post since switching careers – it’s been a busy 3 years, but I’ve learned a lot. Here are my top three lessons from this time:

1. Testing is an invisible output of software development.

Testing is a very strange profession. As a professional critic of other people’s work, I rarely have much to show for my efforts. In fact, I’d argue that a tester’s main deliverable at the end of a project (or sprint/iteration, if you’re working in an agile context) is a product that is lacking in bugs (or at least has fewer bugs than before testing began). The users (and sometimes the client) have no idea how much testing was done, the testing approach used, or indeed whether the product was really tested at all! The main tangible artefacts of my work are:

  • bug reports
  • screenshots and videos
  • reports about the current state of the product
  • miscellaneous documentation

And, let’s be honest, who cares about any of these after a project/sprint/iteration is finished? The users certainly don’t; they just want the product to work well and do what they want it to. The client probably doesn’t care either (for similar reasons). Even our colleagues will rarely have need of these artefacts after they have been acted upon. Yet without the work involved in creating these artefacts – and the action that was taken as a result – the product would almost certainly be in a worse state. If we accept that the above testing artefacts have little long-term value, then it follows that testing has no real tangible deliverables. For the sake of our own career development and the growth of the testing profession as a whole, it’s up to us to make sure that stakeholders are aware of (and value) our contributions.

2. Testing is not (just) checking!

To the uninformed, testing seems like a linear task. Here’s the specification, here’s the product, does it work or not? The term ‘Quality Assurance’, often used as a synonym for testing, doesn’t help this perception. Much of testing is a series of checks – there’s no denying that. But it’s the design, planning and execution of these checks that gives them value. A tester should also ask difficult questions and help predict likely edge cases. To put it another way: testing is not checking; checking is just one of its outputs. Getting this message across to project managers, developers and clients is vital if you want your work to be valued. Understanding what a tester actually does is equally vital for planning and budgeting – if the only testing time scheduled is for checking after development has been done, much of the value is lost.
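
To make the distinction concrete, here’s a minimal sketch in Python. The function and voucher code below are hypothetical, invented purely for illustration – they don’t come from any real project. The assertion is a check that a machine can run unattended; the questions in the comments are the testing that has to surround it.

```python
# Hypothetical function under test, invented for illustration only.
def apply_discount(price, voucher_code):
    """Apply a 10% discount when a valid voucher code is supplied."""
    if voucher_code == "SAVE10":
        return round(price * 0.9, 2)
    return price

# Checking: a single pre-scripted assertion against the specification.
# A machine can run this on every build, with no human judgement needed.
assert apply_discount(100.00, "SAVE10") == 90.00

# Testing: the thinking that surrounds and goes beyond that check, e.g.:
# - What should happen with a price of 0, or a negative price?
# - Should "save10" (lower case) be accepted? The spec doesn't say.
# - Can the same voucher be applied twice? Should it be?
# - Does rounding to 2 decimal places ever short-change the customer?
# Each question might lead to a new check, a bug report, or a conversation
# with the PM or client before a decision gets baked into the product.
```

The assertion is the only part that can be scheduled as ‘checking after development has been done’; everything in the comments is far cheaper to raise while requirements and designs are still being discussed.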

In an agency environment I work with multiple PMs (depending on the project), and in the past I’ve found that different PMs have different views of what testing involves. When a PM thinks of testing as post-development checking, they tend to allocate testing resources sparingly. The resulting lack of testing time often means that I’m forced to operate on a ‘works as expected’ basis. There probably won’t be any opportunity to scrutinise requirements, play around with work in progress, or suggest alternative solutions before a decision is made! Conversely, PMs who understand that testing is much more than checking, and who allocate resources accordingly, should find that testing provides them with greater value. Once again, it’s up to us as testers to demonstrate our value by educating our colleagues about what testing is, and by taking the initiative to get involved in every stage of the development process.

3. Testing performance cannot be measured.

Many software development teams attempt to measure performance in some way. This is often linked to salary or bonus. Unfortunately, there doesn’t seem to be a way to measure the performance of testers. Almost all of the testing metrics that could possibly be measured are also affected by the project and the developers who worked on it:

  • Bugs found – the more complex the project, the more bugs there are likely to be. Even if you control for complexity, this number is directly linked to the quality of the original code!
  • Bugs reported by end users – again, this depends on the quality of the code, not just the quality of testing. Even then, users’ technical competency can greatly affect their ability to find bugs.
  • User satisfaction – assuming you find a way to measure this, you can’t directly link the user’s happiness to the quality of testing, for the reasons mentioned above.
  • Client satisfaction – as above.
  • Time spent testing – as well as depending on project complexity, it doesn’t really tell you anything. A good tester will adapt their test approach to the time they have available, and not enough time spent testing will inevitably produce a poorer result.

All of the above boils down to two key conclusions:

  • Quality cannot be objectively measured. You can track metrics relating to quality, but a subjective judgement still has to be made. Even then, the perceived quality of a product varies depending on who you ask.
  • Testing can’t be measured in isolation. Almost everything a tester does depends on other people’s work.

So, what’s the solution to this dilemma? No idea, sorry. Much has been written about this topic, with no clear answers. But the conclusion I’ve reached so far is that, since quality cannot be measured or directly linked to a tester’s skill, their performance has to be judged subjectively. Are projects smoother when a tester is involved? Are there fewer rounds of revisions because the tester is catching issues earlier? Is the PM spending less time focusing on minute technical details and more time managing the project and the people involved? If the answer to these and similar questions is yes, your tester is performing well. Reward them appropriately!

Conclusion

If these three lessons have anything in common, it’s that testing’s value is not always obvious to those who have little direct knowledge of what it involves. To be successful, we need to make proactive engagement with colleagues and stakeholders part of our role. Being a good tester isn’t enough on its own!

I expect fellow newbie testers would recognise some, if not all, of these lessons, although I’m sure some will disagree with my conclusions. I’d love to hear what others have learnt early on in their careers and how their experiences have differed from mine.


8 responses to “Testing: 3 lessons learned”

  1. randomactsofcartography

    Three very good points here. Lesson 2 especially shows that the more you put into something, the more value you can get out.

    We just need to persuade those PMs that haven’t yet seen the light!

    1. James Thomas

      A belated thanks for your comment – I agree entirely! I guess the overriding theme of my post is that testing advocacy is a crucial part of a tester’s role, especially if we’re the only tester in a sea of PMs, devs, designers and account managers.

  2. Michael Bolton

    “The resulting lack of testing time often means that I’m forced to operate on a ‘works as expected’ basis. There probably won’t be any opportunity to scrutinise requirements, play around with work in progress, or suggest alternative solutions before a decision is made!”

    Instead of working on an “as expected” basis, why not work on a “problem or no problem” basis? “As expected” makes sense if you’re trying to program checks for a machine to execute. But you have a human brain: the capacity to recognize a problem even when you have not been programmed to recognize that problem. “As expected” doesn’t mean “no problem”; and a problem may or may not exist irrespective of expectations. (See http://www.developsense.com/blog/2014/01/not-so-great-expectations/) At least you could work on an “as desired” basis, framing testing in terms of looking for inconsistencies between the product and desirable things.

    You might like to look at the series of blog posts starting here: http://www.developsense.com/blog/2015/09/oracles-from-the-inside-out/ – there I try to work out some of the issues in detail.

    You’re writing vigorously and well. Keep it up.

    Cheers,

    —Michael B.

    1. James Thomas

      Hi Michael,

      Thanks for your comments, reading suggestions and words of encouragement; they’re very much appreciated. Your point about taking a “problem or no problem” approach instead of an “as expected” approach is a good one, and on reflection I suppose that is the mindset I adopt when the testing budget is severely limited and there’s no time to stray off the beaten track. Indeed, I’d argue that ‘Is there a problem here?’ is a fundamental basis for my (and, I suspect, many others’) approach to testing – that gut feeling that something isn’t quite right and/or requires further investigation.

      Thanks,
      James

  3. […] Testing: 3 lessons learned: James Sheasby Thomas (@RightSaidJames), blogger at rightsaidjames.com, talks about his three learned lessons in Testing. […]

  4. Dream Cyber Infoway

    I really appreciate your effort here.
    All the information provided above about testing is very helpful to anyone who regularly searches for this kind of guidance.

  5. […] accessibility in mind. That’s a topic for another blog post, but you should keep in mind the importance of advocacy in your role. As I concluded in my last post, an effective tester should advocate for good […]
