In this blog post series I’ll share some tips and tricks for extending Cypress with some clever JS wizardry. All of these solutions are based on my real-world use of the tool on a complex greenfield project.
In March 2017 I attended TestBash Brighton. Despite being a long-time fan of the Ministry of Testing (as well as their busy Testers’ Slack), I’d never been to any of their events before. I expected an enjoyable and engaging day, and I was not disappointed! Both speakers and attendees were friendly and approachable, and each talk was directly relevant to my role at Inviqa. Above all, attending TestBash feels like joining a ready-made community for a day. From the pub drinks the night before, to the board games at the end, it felt like I’d known my fellow attendees for years.
A key thing that struck me was that there seemed to be a unifying theme to all of the talks. This theme wasn’t explicit or predetermined, but revealed itself as the day unfolded.
Continuous Delivery and the evolution of QA
If you follow me on Twitter, you might have noticed that I’ve already blogged about this conference on the Inviqa blog. In that post, I reflected on Amy Phillips’ Continuous Delivery talk and how CD was changing the way that Inviqa’s QA team operates, both as individuals and in partnership with colleagues in other roles. Here’s a little snippet of that post:
QA has always been a bottleneck – most teams have more developers than testers / QAs – but on CD projects that bottleneck has the potential to become even more pronounced.
One solution to this problem is to add more QAs to the project, but another option is to get other team members involved in your testing. Testing is a job role, but it’s also a skill that can be taught to others fairly quickly.
On my projects at Inviqa, I’ve had success with asking developers and PMs to help me set up environments ready for testing, explore specific edge cases, and document the implementation details of a feature that’s ready for UAT.
This is especially helpful when deadlines are tight or the tickets are piling up in the QA column, and it fits well with the collaborative nature of continuous delivery projects. More importantly, by teaching our colleagues about testing we can help to spread quality throughout our teams and the organisation as a whole. This fits in well with the ‘shift left’ theory of QA, where quality is a key component of each stage of the process.
Check out the full post for my thoughts on the changing role of testing/QA in a Continuous Delivery context. Some of my reflections were cut from that post for length, so I’ve included them here instead.
Pick-your-own testing career
Del Dewar gave a talk titled ‘Step Back to Move Forwards: A Software Testing Career Introspective’. He shared his reflections on his own career and how the world of testing has changed during that time. Many experienced testers will have trodden the path of Tester > Lead Tester > Test Manager during their careers. Over time these role distinctions have become less relevant, and many more niche roles have sprung up in between.
In organisations with agile, self-organising teams, traditional role expectations may become outdated. A tester’s day-to-day responsibilities may also bear little relation to their job description. The key message I took from this talk is that testing has become such a broad church that we, as testers, must forge a career path to suit our own skills and the needs of the organisations we work in. Sticking to the old role archetypes and expectations of what a tester does/doesn’t do simply won’t cut it anymore!
Another of my favourite talks, ‘Rediscovering Test Strategy’, was given by the aptly-named Mike Talks. Like Del, he reflected on how testing has drastically changed during the course of his career. In the past 20 years, systems under test have evolved from standalone programs that ran on a single platform (e.g. Windows) to complex, connected, multi-component software. Modern software runs on a seemingly infinite combination of operating systems, hardware form factors, browsers, screen sizes and so on. This increase in complexity has largely driven a shift from explicit, repeatable test cases to exploratory, constantly evolving testing approaches. However, the move towards exploratory testing doesn’t remove the need for effective test planning. Mike shared his tips for developing test strategies, including looking at the bigger picture, capturing lots of ideas and identifying weak points to focus on.
.@TestSheepNZ on test strategy:
– Big picture not fine details
– Capture all your ideas
– Cluster ideas
– Then find the holes! #testbash
My final highlight among so many excellent talks came from Harry Collins, Professor of Sociology at Cardiff University and author of – among many other publications – Gravity’s Kiss, the story of the discovery of gravitational waves. He gave a riveting lecture on the commonalities between software testing and artificial intelligence, and shared his thoughts on the importance of testers in shaping the future of AI.
Professor Collins pointed out that all software is a prosthesis (or model) of human behaviour. In the same way that a prosthetic leg can never work exactly like an ‘organic’ leg, a computer program can never be a perfect reproduction of the same function performed manually by humans. However, this isn’t necessarily a bad thing; if designed well, computer programs can perform specific tasks many times more efficiently than a human can. This frees us up to focus our attention on other things that cannot (yet) be automated.
Collins’ talk also helped me to think about the way I design my own testing. If we consider a test (manual or automated) as an imperfect model of human behaviour, we can use this knowledge to identify weak points and areas for improvements in our testing. This insight could lead us to change our testing approach in order to better match user behaviour.
Riveting talk on AI & testing by Prof Harry Collins. Automation is prosthesis 4 human behaviour, but there’s always new edge cases #testbash
The above highlights represent less than half of that day’s brilliant speakers – there were also talks on ethics and testing, API testing, tool-driven testing and running a startup. David Christiansen, a tester-turned-developer-turned-CEO, gave an insightful talk that helped us to consider how testers can be more mindful of the strengths and weaknesses of the developer mindset. As I alluded to in the introduction of this post, all the talks seemed to converge on a single theme – the evolving role of testers in the fast-paced world of software development.
One thing I love about conferences is the buzzy feeling that you get when it’s all over. You might be bursting to try out the new technologies or approaches that you’ve just learned about. Or perhaps a talk has helped you to think differently about a challenging situation you’ve encountered at work? It’s rarely possible to remember everything you learned or to try out every new tool you’ve discovered. Nonetheless, the right mix of talks and fellow travellers can help you to synthesise your own work with the wider community.
If you’ve never been to a TestBash before, then hopefully this post gives you an idea of what it’s like. I really enjoyed my time in Brighton, and it inspired me to apply to speak at future TestBashes. I was therefore thrilled to be invited to give my Accessibility Testing Crash Course talk at TestBash Manchester in October! Please do take a look at the event if you’re interested, browse their full event calendar, or even apply to be a speaker. If any of those talk summaries tickled your fancy, you can also find videos for all of TestBash Brighton’s talks at The Dojo.
While this was a case of using test data when real data was required, it got me thinking about some of the patterns I use when entering fake, placeholder or test data into forms or web apps.
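To make the idea of deliberate, recognisable test data concrete, here’s a minimal Python sketch (my own illustration, not a pattern prescribed in the post) that generates placeholder values which can never collide with real user data – the function and field names are hypothetical:

```python
import random
import string


def fake_email(domain="example.com"):
    """Generate a recognisably fake email address for form testing.

    example.com is reserved for documentation and testing (RFC 2606),
    so mail sent to this address can never reach a real mailbox.
    The "test+" prefix makes the data easy to spot and clean up later.
    """
    local = "test+" + "".join(random.choices(string.ascii_lowercase, k=8))
    return f"{local}@{domain}"


def fake_postcode():
    """Return a fixed placeholder postcode for UK address fields.

    Using an obviously artificial constant avoids accidentally entering
    a real household's address into a form under test.
    """
    return "ZZ99 9ZZ"
```

The design choice here is that test data should be *loud*: anyone scanning a database or an inbox should immediately recognise it as fake, which makes post-test cleanup and debugging far easier than using realistic-looking values.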
Accessibility is arguably the ‘last mile’ of web development. No matter how good your site’s design, tech stack, code and testing are, its accessibility is probably passable at best unless you’ve invested time and resources in getting it right. It’s also fair to say that a high-quality site is probably more accessible than a poor-quality one, but this doesn’t guarantee that people with disabilities will actually be able to use it. But what can you, as a tester, do about this? This post introduces some key accessibility testing tools and approaches, and provides some business context to help you advocate for accessibility in your organisation.
What is an accessible website?
In simple terms, your website is accessible if people with a range of disabilities are able to use it. An accessible site should also play nicely with common accessibility tools such as screen readers and alternative input devices. That’s it, really. In terms of compliance, you should aim to comply with WCAG 2.0 Level AA or better, but a WCAG-compliant site is not necessarily an accessible site. Likewise, an accessible site may not be WCAG-compliant, even if it is easy for people with disabilities to use!
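Automated tooling can only catch a subset of accessibility problems, but some WCAG checks are purely mechanical. As a minimal illustration (my own sketch, not one of the audit tools discussed here), this stdlib-only Python snippet flags img elements with no alt attribute – one of the most common WCAG 2.0 failures (Success Criterion 1.1.1):

```python
from html.parser import HTMLParser


class AltTextChecker(HTMLParser):
    """Collect <img> tags that lack an alt attribute entirely.

    Note: alt="" is a *valid* value for decorative images, so only a
    completely missing attribute is flagged here.
    """

    def __init__(self):
        super().__init__()
        self.missing_alt = []

    def handle_starttag(self, tag, attrs):
        attr_dict = dict(attrs)
        if tag == "img" and "alt" not in attr_dict:
            # Record the src (or a placeholder) so the failure is traceable
            self.missing_alt.append(attr_dict.get("src", "(no src)"))


def images_missing_alt(html):
    """Return the src values of all <img> tags missing an alt attribute."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.missing_alt
```

A check like this is the kind of thing full audit tools automate at scale, but it also shows their limit: a program can verify that alt text *exists*, not that it meaningfully describes the image – that judgement still needs a human tester.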
Why should my organisation bother with accessibility testing?
Other than the fact that it’s the Right Thing To Do, there are several key reasons for an organisation to make its site(s) accessible:
In summer 2013 I made the difficult decision to move away from my beloved Cardiff to live in Yorkshire with my (now) wife. During my six-month job hunt I blogged about my frustrations with Jobcentre Plus and shared my advice for dealing with recruiters. Dozens of applications and three job interviews later, I found a new career as a Web Tester for Numiko, a digital agency in Leeds. Like many others, testing wasn’t a career path I had planned, but it had always interested me, so I jumped at the chance to try it. As well as a switch from marketing to testing, this was also a change in company type (tiny SME to medium-sized agency), industry sector (desktop software to web development) and location (Cardiff to Leeds)! In October 2015 I joined Byng as their first test engineer. This is my first blog post since switching careers – it’s been a busy three years, and I’ve learned a lot. Here are my top three lessons from this time:
1. Testing is an invisible output of software development.