Crafting a Browser Matrix

When developing a website, you will likely have a list of browsers and devices that you expect the site to work with. When this list is formalised, it is often called a ‘browser matrix’ or ‘browser support matrix’. If you build sites for external clients, a browser matrix can form part of your agreement with them. If no matrix exists, you risk extra development work to fix bugs on environments you never intended to support. This post outlines the what, why and how of browser matrices, and gives some pointers for creating your own.

Example Browser Matrix

A snippet of an (out-of-date) browser matrix. In this example, each browser’s share of desktop usage (obtained from StatCounter) is multiplied by desktop’s share of overall usage (54.77%) to give that browser’s overall market share.
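
To make the arithmetic concrete, here’s a minimal sketch of that calculation in Python. All of the share figures below are invented for illustration – they’re not real StatCounter data:

```python
# Convert a browser's share *within* a platform into its share of
# *all* traffic by weighting it with that platform's overall share.
# All figures here are made up for illustration.

DESKTOP_SHARE = 0.5477  # desktop's share of all platforms (54.77%)

desktop_browser_share = {  # each browser's share of desktop traffic only
    "Chrome": 0.63,
    "Firefox": 0.15,
    "IE 11": 0.08,
}

overall_share = {
    browser: share * DESKTOP_SHARE
    for browser, share in desktop_browser_share.items()
}

for browser, share in overall_share.items():
    print(f"{browser}: {share:.2%} of all traffic")  # e.g. Chrome: 34.51%
```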

What is a Browser Matrix?

A browser matrix is a document that serves one or both of the following purposes:

  • To limit the scope (and cost) of web development to a specific set of browsers and devices.
  • To limit the scope (and cost) of testing to a specific set of browsers and devices.

These two purposes go hand in hand, of course; if browser choice changes how you develop a site, it will also inform your testing of that site. Beyond that, the number and variety of supported browsers can greatly affect the breadth and depth of your testing.

Your browser matrix should, at minimum, list the browsers you will support as part of your project. Ideally, you should also list which browsers you are not supporting, and give an indication of your decision-making criteria.

Why create a browser matrix?

As noted above, its main purpose is to limit the scope of development and testing to a specific set of browsers. Limiting the browsers you support can help focus your technology choices, and can also reduce your exposure to cross-browser bugs. Choosing to support only the most recent versions of browsers allows you to harness the latest technologies. Scoping your development and testing can lead to a better user experience for the environments that are supported!

Of course, any browsers not on your matrix are more likely to have bugs and missing functionality. However, this is not guaranteed! If a non-supported browser is based on the same rendering engine as a supported browser, then your site is likely to behave very similarly in both. Opera uses Chrome’s Blink rendering engine, and all iOS browsers use the same base rendering engine (WebKit) as iOS Safari. Likewise, if you support Chrome 60 (the most recent public release at the time of writing), then Chrome 59 and below will probably behave in broadly the same way as 60. Obviously, the older the version, the more bugs or missing functionality you’re likely to encounter compared to the current version.

When a project has a browser support matrix in place, I tend to find that it makes my testing smoother. When testing a feature or fix, I’ll decide if I need to test it in one, some or all browsers. Having a definitive list of potential test environments makes this decision much simpler. Without it, you can never be sure how many browsers is ‘enough’!

What data sources should I use?

A browser matrix is only as good as the data used to create it. Picking browsers and devices based on what you/your company/your client uses will result in a matrix, but it’s unlikely to reflect real-world usage. Instead, there are a few more reliable sources you can use. In order of preference, they are:

  • Data relating to existing users e.g. from Google Analytics.
  • Data for similar sites, or sites with a similar target audience. You’ll probably have to ask your industry colleagues nicely to share this with you.
  • Regional data. For example, if your target audience is in the UK, you could use UK data from a source like StatCounter. This approach is okay for sites with a mainstream consumer audience, but probably not suitable if you have a niche or business-focused product.
  • Global data. This isn’t recommended unless your intended audience is truly global!

If your matrix is for a brand new site, data on existing users might not exist. A significant revamp of an existing site could also lead to a change in browser usage. Still, if you have access to this data then it’s a good place to start.

Public data sources

If you have no data for existing users or similar sites, there are plenty of options for regional or global data. I recommend these sources:

  • StatCounter Global Stats – has filter options for countries, date ranges, OSes, platforms etc. It also lets you download its data in CSV format so you can process or combine it as needed (see the sketch after this list). Check out their FAQs for more details on how they collect their stats.
  • NetMarketShare – a very similar service to StatCounter, but it can give wildly different results. I’ve heard that this is because NMS’ data is more biased towards B2B sites, so if your target audience is office workers then this might be a good option. NMS has some advanced filtering options, but many of these are behind a paywall.
  • MixPanel Trends – MixPanel is an analytics service that regularly publishes its aggregate data in the form of glossy reports and charts. The data is quite US-centric, but it’s still a useful reference.
  • Apple’s App Store Support page – contains a regularly-updated pie chart of global iOS version usage. Useful as a secondary data source, or for deciding when to upgrade test devices to a newer iOS version.
  • Android Dashboards – pie charts and tables of global Android version usage, screen sizes and densities, and OpenGL ES versions. Again, a helpful secondary source that helps you choose which test devices to buy and which Android versions to install.
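
To give a rough idea of what processing a StatCounter CSV export might look like, here’s a small Python sketch. The file name and column headings are hypothetical – inspect the CSV you actually download, as its headers vary by report type:

```python
import csv

# Hypothetical file name and column headings - check your own
# StatCounter export, as headers differ between report types.
with open("browser_version-GB-monthly.csv", newline="") as f:
    rows = list(csv.DictReader(f))

# Assume rows like {"Browser Version": "Chrome", "Market Share": "63.1"}.
shares = {row["Browser Version"]: float(row["Market Share"]) for row in rows}

# Fold long-tail browsers into an 'Others' bucket, as in the matrix above.
THRESHOLD = 1.0  # percent
kept = {b: s for b, s in shares.items() if s >= THRESHOLD}
kept["Others"] = sum(s for s in shares.values() if s < THRESHOLD)
print(kept)
```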

Ideally, you should combine these data sources to give a better picture of your site’s likely browser usage. You can also use these public sources to augment private data. For example, I’ve often struggled to get info on iOS and Android versions from Google Analytics.

Which data points should I use for my browser matrix?

So, you’ve decided where your data is coming from. Next, decide what data you actually want to collect. For me, the main ones are:

  • Platform (desktop vs mobile vs tablet, or desktop vs mobile + tablet). If you’re not building a dedicated tablet experience, you can probably get away with combining mobile and tablet data.
  • Desktop browser usage. If you’re using StatCounter, I recommend looking at Browser Versions (with the ‘Combine Chrome (all versions) & Firefox (5+)’ option selected) so major versions of IE and Safari are split out as separate browsers.
  • Mobile browser usage and tablet browser usage, or mobile + tablet browser usage (combined).
  • iOS vs Android usage on mobile and tablet (or mobile + tablet).

Browser Version options for StatCounter charts

Selecting ‘Edit Chart Data’ on a StatCounter chart will allow you to choose the ‘Browser Version’ and ‘Combine Chrome (all versions) & Firefox (5+)’ options. This will give you a better insight into usage of older versions of IE and Safari.

StatCounter Desktop Browser Bar Chart

… and here’s the resulting bar chart showing data from the last 6 months, with IE and Safari versions tracked separately.

You’ll notice that I’m recommending that you collect desktop data separately to mobile and tablet. Many browsers exist on both platforms but their capabilities can vary, so I find that it’s best to track them separately.

Secondary data points

There are also some additional data points you might want to collect. These secondary data points can be useful for deciding which test devices to buy, which OS versions to install and where to prioritise your efforts.

  • Desktop OS + Version (e.g. Windows 7, Windows 10, OS X El Capitan, macOS Sierra).
  • Mobile OS versions. iOS users are generally quite quick to update to the latest version, while Android versions usage is more fragmented.
  • Screen resolutions, especially on mobile and tablet.
  • Mobile device manufacturers.

Unless you have the budget to buy every popular device, choosing test devices can’t really be an exact science. However, you can use these data points to help you pick a representative cross-section of devices.
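
If you want to be a little more systematic about it, you could treat device selection as a coverage problem. Here’s a rough, greedy Python sketch – every device name, attribute and share figure below is invented for illustration:

```python
# Greedily pick the devices that cover the most usage-weighted,
# not-yet-covered attributes (OS version, screen resolution).
# All device data below is invented.

devices = [
    # (name, attributes the device covers, usage weight of its segment)
    ("iPhone 6",  {"iOS 10", "750x1334"},     0.18),
    ("iPhone SE", {"iOS 10", "640x1136"},     0.07),
    ("Galaxy S7", {"Android 7", "1440x2560"}, 0.12),
    ("Moto G4",   {"Android 6", "1080x1920"}, 0.09),
]

BUDGET = 2  # how many devices you can afford
chosen, covered = [], set()

for _ in range(BUDGET):
    # Pick the device that adds the most new, usage-weighted coverage.
    name, attrs, weight = max(
        (d for d in devices if d[0] not in chosen),
        key=lambda d: len(d[1] - covered) * d[2],
    )
    chosen.append(name)
    covered |= attrs

print(chosen)  # e.g. ['iPhone 6', 'Galaxy S7']
```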

Building your browser matrix

If you want your browser matrix to be transparent and reproducible, you’ll need to store and present the data in an accessible way. If you have good Google Analytics data, you could do this with a GA report. However, if your data comes from multiple sources you’ll probably need a spreadsheet. Here’s one I made last year:

Browser Support Matrix

A full screenshot of a browser matrix from September 2016.

This spreadsheet has the following components:

  • Desktop browser usage, weighted by the overall desktop usage share.
    • I combined minor versions of Safari and Edge. Previous versions of these browsers are in the ‘Others’ row.
  • Mobile + tablet browser usage, weighted by the overall mobile + tablet usage share.
    • The list of mobile devices is based on what we had in the office, or what devices I’d persuaded the company to buy.
  • Definition and threshold of support levels (full, limited, none).
  • Mobile OS Market Share (UK).
  • iOS Version Distribution (Global).
  • Android Version Distribution (Global).
  • Mobile/Tablet Screen Resolutions (UK).
  • A list of sources.

Note: the data in this spreadsheet is almost a year old!
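
The support-level thresholds from the spreadsheet are easy to express in code. Here’s a minimal sketch – the threshold values and share figures are invented, so pick numbers that suit your own project:

```python
# Classify each browser by its overall market share.
# Thresholds and shares are illustrative only.

FULL = 2.0     # >= 2% overall share: full support
LIMITED = 0.5  # >= 0.5%: limited support; anything below: no support

overall_share = {
    "Chrome": 34.5,
    "Safari 10": 3.1,
    "IE 9": 0.7,
    "Opera Mini": 0.2,
}

def support_level(share: float) -> str:
    if share >= FULL:
        return "full"
    if share >= LIMITED:
        return "limited"
    return "none"

for browser, share in overall_share.items():
    print(f"{browser}: {share}% -> {support_level(share)} support")
```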

It’s not just about testing…

Deciding what browsers and devices to test with is all well and good, but that’s only half of the story. Your browser matrix should also help inform the technologies you use to develop your site. The amazing reference site Can I Use allows you to see which browsers support modern web technologies like CSS Grid or Date and Time input types. You can even import your GA data straight into Can I Use to get an accurate picture of how many users your technical decisions might affect.

A browser matrix can also be used during project planning to help the team decide what technical approach to take. If you have a high level of IE 8/9/10/11 usage, you might want to shy away from building a JS-heavy app. Likewise, if you have a high percentage of mobile users you might decide to prioritise mobile UX or performance.

Final thoughts

It’s clear that crafting a browser matrix is not an exact science. But taking a data-driven approach can help you to make informed decisions during development and testing. It’s also important to update your matrix on a regular basis, to keep track of upcoming browsers (e.g. Edge), declining browsers (e.g. IE) and mobile trends. An up-to-date browser matrix helps you and your team to develop and test with users in mind. It also ensures that your technology and design choices reflect market realities.

Further reading

  • The Browser Statistics That Matter – Chris Coyier, Media Temple

    The reason you can’t use global statistics as a stand-in for your own is because they could be wildly wrong. Even keeping a wide angle lens here, different continents (and even countries) have different breakdowns in usage. Zoom in a little and different industries and markets have different breakdowns. Zoom all the way in and your website will have browser usage statistics totally unique to you.

  • Browser Trends December 2016: Mobile Overtakes Desktop – Craig Buckler, Sitepoint

    Does the mobile explosion change our development lives? Probably not if you’ve been reading SitePoint and watching industry trends: you’re already mobile aware. Fortunately, it will be a wake-up call for any client or boss who doubted the growth of the mobile platform or didn’t think it would affect their business. Be prepared for several “how can we make our digital experience better on a smartphone” conversations very soon.

TestBash Brighton and the Evolution of Testing

TestBash Brighton 2017 (testing conference) logo

In March 2017 I attended TestBash Brighton. Despite being a long-time fan of the Ministry of Testing (as well as their busy Testers’ Slack), I’d never been to any of their events before. I expected an enjoyable and engaging day, and I was not disappointed! Both speakers and attendees were friendly and approachable, and each talk was directly relevant to my role at Inviqa. Above all, attending TestBash feels like joining a ready-made community for a day. From the pub drinks the night before, to the board games at the end, it felt like I’d known my fellow attendees for years.

A key thing that struck me was that there seemed to be a unifying theme to all of the talks. This theme wasn’t explicit or predetermined, but revealed itself as the day unfolded.

Continuous Delivery and the evolution of QA

If you follow me on Twitter, you might have noticed that I’ve already blogged about this conference on the Inviqa blog. In that post, I reflected on Amy Phillips’ Continuous Delivery talk and how CD was changing the way that Inviqa’s QA team operates, both as individuals and in partnership with colleagues in other roles. Here’s a little snippet of that post:

QA has always been a bottleneck – most teams have more developers than testers / QAs – but on CD projects that bottleneck has the potential to become even more pronounced.

One solution to this problem is to add more QAs to the project, but another option is to get other team members involved in your testing. Testing is a job role, but it’s also a skill that can be taught to others fairly quickly.

On my projects at Inviqa, I’ve had success with asking developers and PMs to help me set up environments ready for testing, explore specific edge cases, and document the implementation details of a feature that’s ready for UAT.

This is especially helpful when deadlines are tight or the tickets are piling up in the QA column, and it fits well with the collaborative nature of continuous delivery projects. More importantly, by teaching our colleagues about testing we can help to spread quality throughout our teams and the organisation as a whole. This fits in well with the ‘shift left’ theory of QA, where quality is a key component of each stage of the process.

Check out the full post for my thoughts on the changing role of Testing/QA in a Continuous Delivery context. Some of this post was left out for length reasons, so I’ve put it here instead.

Pick-your-own testing career

Del Dewar gave a talk titled ‘Step Back to Move Forwards: A Software Testing Career Introspective’. He shared his reflections on his own career and how the world of testing has changed during that time. Many experienced testers will have trodden the path of Tester > Lead Tester > Test Manager during their careers. Over time these role distinctions have become less relevant, and many more niche roles have sprung up in between.

In organisations with agile, self-organising teams, traditional role expectations may become outdated. A tester’s day-to-day responsibilities may also bear little relation to their job description. The key message I took from this talk is that testing has become such a broad church that we, as testers, must forge a career path to suit our own skills and the needs of the organisations we work in. Sticking to the old role archetypes and expectations of what a tester does/doesn’t do simply won’t cut it anymore!

Reimagining test strategy

Another of my favourite talks, ‘Rediscovering Test Strategy’, was given by the aptly-named Mike Talks. Like Del, he reflected on how testing has drastically changed during the course of his career. In the past 20 years, systems under test have evolved from standalone programs that ran on a single platform (i.e. Windows) to complex, connected and multi-component software. Modern software runs on a seemingly infinite combination of operating systems, hardware form factors, browsers, screen sizes etc. This increase in complexity has also largely resulted in a shift from explicit, repeatable test cases to exploratory and constantly evolving testing approaches. However, the move towards exploratory testing doesn’t remove the need for effective test planning. Mike shared his tips for developing test strategies, including looking at the bigger picture, capturing lots of ideas and identifying weak points to focus on.

AI and Testing: prostheses for human behaviour?

My final highlight among so many excellent talks was Professor Harry Collins, Professor of Sociology at Cardiff University and author of – among many other publications – Gravity’s Kiss, the story of the discovery of gravitational waves. He gave a riveting lecture on the commonalities between software testing and artificial intelligence. He also shared his thoughts on the importance of testers in shaping the future of AI.

Professor Collins pointed out that all software is a prosthesis (or model) of human behaviour. In the same way that a prosthetic leg can never work exactly like an ‘organic’ leg, a computer program can never be a perfect reproduction of the same function performed manually by humans. However, this isn’t necessarily a bad thing; if designed well, computer programs can perform specific tasks many times more efficiently than a human can. This frees us up to focus our attention on other things that cannot (yet) be automated.

Collins’ talk also helped me to think about the way I design my own testing. If we consider a test (manual or automated) as an imperfect model of human behaviour, we can use this knowledge to identify weak points and areas for improvements in our testing. This insight could lead us to change our testing approach in order to better match user behaviour.

But wait – there’s more!

The above highlights represent less than half of that day’s brilliant speakers – there were also talks on ethics and testing, API testing, tool-driven testing and running a startup. David Christiansen, a tester-turned-developer-turned-CEO, gave an insightful talk that helped us to consider how testers can be more mindful of the strengths and weaknesses of the developer mindset. As I alluded to in the introduction of this post, all the talks seemed to converge on a single theme – the evolving role of testers in the fast-paced world of software development.

One thing I love about conferences is the buzzy feeling that you get when it’s all over. You might be bursting to try out the new technologies or approaches that you’ve just learned about. Or perhaps a talk has helped you to think differently about a challenging situation you’ve encountered at work? It’s rarely possible to remember everything you learned or to try out every new tool you’ve discovered. Nonetheless, the right mix of talks and fellow travellers can help you to synthesise your own work with the wider community.

Shameless plugs

If you’ve never been to a TestBash before, then hopefully this post gives you an idea of what it’s like. I really enjoyed my time in Brighton, and it inspired me to apply to speak at future TestBashes. I was therefore thrilled to be invited to give my Accessibility Testing Crash Course talk at TestBash Manchester in October! Please do take a look at the event if you’re interested, browse their full event calendar, or even apply to be a speaker. If any of those talk summaries tickled your fancy, you can also find videos for all of TestBash Brighton’s talks at The Dojo.

We need to talk about test data

A sample debit card with test data on it.

Last month, I was hurriedly booking a vet’s appointment using the surgery’s online form. In the process, I accidentally used test data instead of my own!

While this was a case of using test data when real data was required, it got me thinking about some of the patterns I use when entering fake, placeholder or test data into forms or web apps. Continue Reading…

Accessibility testing crash course

A demonstration of Lea Verou's Color Contrast accessibility testing tool

This post is a companion to my ‘Accessibility testing crash course’ talk that I gave at Leeds Testing Atelier 2016. I gave a revised version of this talk at Inviqa DevDay in December 2016.

Accessibility is arguably the ‘last mile’ of web development. No matter how good your site’s design, tech stack, code and testing are, its accessibility is probably passable at best unless you’ve invested time and resources in getting it right. It’s also fair to say that a high-quality site is probably more accessible than a poor-quality site, but this doesn’t mean that people with disabilities will actually be able to use it. But what can you, as a tester, do about this? This post introduces some key accessibility testing tools and approaches, and also provides some business context to help you advocate for accessibility in your organisation.

What is an accessible website?

In simple terms, your website is accessible if people with a range of disabilities are able to use it. An accessible site should also play nicely with common accessibility tools such as screen readers and alternative input devices. That’s it, really. In terms of compliance, you should aim for WCAG 2.0 Level AA or better, but a WCAG-compliant site is not necessarily an accessible site. Likewise, a site that’s easy for people with disabilities to use may not be WCAG-compliant!

Why should my organisation bother with accessibility testing?

Other than the fact that it’s the Right Thing To Do, there are several key reasons for an organisation to make its site(s) accessible:

Continue Reading…

Testing: 3 lessons learned

Testing Animated GIF

In Summer 2013 I made the difficult decision to move away from my beloved Cardiff to live in Yorkshire with my (now-) wife. During my 6-month job hunt I blogged about my frustrations with Jobcentre Plus and shared my advice for dealing with recruiters. Dozens of applications and 3 job interviews later, I found a new career as a Web Tester for Numiko, a digital agency in Leeds. Like many others, I didn’t plan a career in testing, but it had always interested me, so I jumped at the chance to try it. As well as a switch from marketing to testing, this was also a change in company type (tiny SME to medium-sized agency), industry sector (desktop software to web development) and location (Cardiff to Leeds)! In October 2015 I joined Byng as their first test engineer. This is my first blog post since switching careers – it’s been a busy 3 years, but I’ve learned a lot. Here are my top three lessons from this time:

1. Testing is an invisible output of software development.

Continue Reading…