Accessibility is a crucial aspect of web and app development. Ensuring that digital content is
accessible to all users, including those with disabilities, is not only a legal requirement in many
places but also a moral imperative. One effective way to achieve accessibility is through automated testing tools such as WAVE, axe, or Lighthouse, which provide actionable insights into issues like missing alt text, low contrast, and broken keyboard accessibility, improving the user experience for everyone. Manual testing techniques such as navigation, zooming, and resizing can also be used to check how content behaves and responds to different user interactions and preferences.
In this blog post, we will explore the importance of automated test results in enhancing
accessibility and provide a comprehensive guide for developers and testers.
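As a concrete starting point, here is a minimal sketch of how such an automated scan might be wired into an end-to-end test, assuming a Playwright setup with the @axe-core/playwright package; the URL and test name are placeholders.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no detectable accessibility violations', async ({ page }) => {
  // Placeholder URL; point this at the page you want to audit.
  await page.goto('https://example.com');

  // Run the axe-core rule set against the rendered page.
  const results = await new AxeBuilder({ page }).analyze();

  // Each violation describes the failed rule, its impact level, the offending nodes,
  // and a help URL, covering issues such as missing alt text or low contrast.
  expect(results.violations).toEqual([]);
});
```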
Digital accessibility is the practice of designing and developing digital content and technologies
in a way that ensures equal access and usability for people with disabilities. It aims to remove
barriers that may prevent individuals with disabilities from using digital products and services
effectively. By prioritizing accessibility, organizations can not only avoid legal troubles but also
contribute to a more inclusive and equitable society.
Manual testing often relies on human judgment, making it subjective, and it can be labour-intensive and time-consuming. It tends to focus on common use cases, which may not cover all possible user scenarios. Due to resource constraints, manual testing may not cover every aspect of a website or application, and it does not scale well, making it challenging to test frequently updated content or applications. Testers can also make mistakes or overlook issues during manual testing. Effective manual testing requires expertise in accessibility guidelines and assistive technologies, and finding and training testers with this expertise can be difficult and costly. In some cases, accessibility issues need to be identified and fixed quickly, such as during a website launch or when responding to user complaints, and manual testing may not always provide timely results. Finally, websites and applications with dynamic content that changes based on user input or other factors can be challenging to test manually. Automated testing addresses many of these limitations.
Efficiency: Automated tests can execute much faster than manual tests, allowing for quicker
feedback on the quality of the software. This speed is particularly useful in agile and continuous
integration/continuous deployment (CI/CD) environments, where rapid feedback is crucial.
Time-Saving: Automated testing reduces the need for manual testers to perform repetitive, time-
consuming, and monotonous tasks. This frees up human testers to focus on more creative and
exploratory testing.
Enhanced Test Coverage: Automated tests can cover a wide range of test scenarios, including
edge cases and stress testing, which might be impractical to perform manually.
Accuracy: Automated tests follow predefined scripts and instructions, minimizing human errors.
They are less prone to overlook issues that can occur due to tester fatigue or oversight.
Resource Savings: Because automated checks can be rerun at little incremental cost, teams spend fewer testing hours on repetitive passes and can redirect that effort to higher-value work.
Continuous Integration: Automated tests can be integrated into CI/CD pipelines, allowing for
automated testing after every code change. This helps catch issues early in the development
process, reducing the cost and effort required to fix them later.
Parallel Testing: Automated tests can be run in parallel on different environments and
configurations, which accelerates testing for various platforms, browsers, and devices
simultaneously.
Data-Driven Testing: Automated tests can be parameterized to run with different sets of test data, helping to evaluate the behaviour of the software under various conditions (see the sketch that follows this list).
Reusable Test Scripts: Once created, automated test scripts can be reused across different versions of the software, which saves time and effort in the long run.
Logging and Reporting: Automated testing tools generate detailed logs and reports, making it easier to track results over time and share findings with the team.
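As a rough illustration of the data-driven and reusable-script points above, the sketch below assumes a Playwright project with @axe-core/playwright installed and a baseURL configured in playwright.config.ts; the list of routes is invented for the example. The same reusable script covers every route, and running it in a CI pipeline provides the early feedback described above.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

// Assumed list of routes to scan; in practice this could come from a sitemap or a fixture file.
const routes = ['/', '/products', '/checkout', '/contact'];

for (const route of routes) {
  test(`accessibility scan: ${route}`, async ({ page }) => {
    // Relative URLs resolve against the baseURL set in playwright.config.ts.
    await page.goto(route);
    const results = await new AxeBuilder({ page }).analyze();
    expect(results.violations).toEqual([]);
  });
}
```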
Without proper guidance on what to do with the results, an increase in the amount of accessibility testing does not correlate with increased accessibility of the digital world. To tackle this problem, development teams should make the most of the output from automated accessibility testing, and using that output effectively requires a more strategic approach.
Quality gates are an essential part of continuous integration/continuous deployment (CI/CD) pipelines and other software development workflows. In automated testing, quality gates are predefined checkpoints that verify, for example, that there are no linting errors, that the project builds without errors, or that all test cases have passed.
A quality gate can operate in two different ways: as a soft check or as a hard assertion. A soft check is simpler than a hard assertion: it only verifies that the accessibility checks were run, and the test then passes regardless of what they found. In contrast, a hard assertion fails if it finds even one issue.
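Here is a minimal sketch of both styles, again assuming an axe-core scan run through Playwright: the soft check only confirms the scan ran and reports what it found, while the hard assertion fails the pipeline on any violation.

```typescript
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('accessibility quality gate', async ({ page }) => {
  await page.goto('/');
  const results = await new AxeBuilder({ page }).analyze();

  // Soft check: the scan ran; log the findings but do not fail the build.
  console.log(`axe-core evaluated ${results.passes.length + results.violations.length} rules`);
  console.warn(`${results.violations.length} violation(s) found`);

  // Hard assertion: fail the test (and the pipeline) if even one violation is found.
  expect(results.violations).toEqual([]);
});
```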
Creating gates that are data-driven can help you build a more effective gate, one that matches your accessibility goals.
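One way to make a gate data-driven is to enforce per-severity budgets that the team tightens over time. The helper below is only a sketch: the budget numbers and the minimal violation type are assumptions for illustration, not values taken from any particular tool.

```typescript
// Minimal shape of a violation for this sketch; real axe-core results carry many more fields.
type Violation = { id: string; impact?: 'critical' | 'serious' | 'moderate' | 'minor' | null };

// Assumed budgets: block the worst issues outright, allow a shrinking allowance for the rest.
const budgets: Record<string, number> = { critical: 0, serious: 0, moderate: 5, minor: 10 };

export function gatePasses(violations: Violation[]): boolean {
  const counts: Record<string, number> = {};
  for (const v of violations) {
    const impact = v.impact ?? 'minor';
    counts[impact] = (counts[impact] ?? 0) + 1;
  }
  // The gate fails as soon as any severity level exceeds its budget.
  return Object.entries(budgets).every(([impact, limit]) => (counts[impact] ?? 0) <= limit);
}
```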
It is necessary to prioritize accessibility test results so that limited resources can be applied to a seemingly unlimited list of bugs, fixing them in as efficient and effective a way as possible. Here, context is the top-level filter: attack bugs and blockers that exist in critical user flows and on high-traffic pages or screens. Another filter is prioritising bugs by their severity. The third filter is the level of effort required to implement a fix. This way, a developer always has a result that clearly states what's wrong, where it's wrong, and how to fix it.
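A small helper can encode those three filters. The sketch below is illustrative: the Finding type, the criticalFlow flag, and the effort field are assumptions rather than any tool's real output, although axe-core does report an impact level and a help URL for each violation.

```typescript
type Finding = {
  rule: string;                                           // what's wrong
  page: string;                                           // where it's wrong
  helpUrl: string;                                        // how to fix it
  impact: 'critical' | 'serious' | 'moderate' | 'minor';  // severity filter
  criticalFlow: boolean;                                   // context filter: checkout, sign-up, high-traffic pages
  effort: 'low' | 'medium' | 'high';                       // level-of-effort filter
};

const severityRank = { critical: 0, serious: 1, moderate: 2, minor: 3 };
const effortRank = { low: 0, medium: 1, high: 2 };

// Order findings by context first, then severity, then the effort needed to fix them.
export function prioritise(findings: Finding[]): Finding[] {
  return [...findings].sort((a, b) => {
    if (a.criticalFlow !== b.criticalFlow) return a.criticalFlow ? -1 : 1;
    if (severityRank[a.impact] !== severityRank[b.impact]) {
      return severityRank[a.impact] - severityRank[b.impact];
    }
    return effortRank[a.effort] - effortRank[b.effort];
  });
}
```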
To prevent similar issues from recurring, several free tools exist in the community that ultimately improve downstream accessibility while maintaining development velocity. Accessibility browser extensions are one such family of tools: they allow a developer to quickly scan a page in the browser and identify issues on it. Another group of free tools is linters, which are fully automated and require little to no effort from the developer. Browser extensions and linters help the development team find and fix accessibility bugs immediately.
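For example, in a React codebase one widely used linter is eslint-plugin-jsx-a11y. Assuming ESLint is already set up in the project, a minimal .eslintrc.json that extends the plugin's recommended rule set might look like this:

```json
{
  "plugins": ["jsx-a11y"],
  "extends": ["plugin:jsx-a11y/recommended"]
}
```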
Ultimately, what makes a real impact on overall accessibility is executing tests in efficient ways, using the test results to drive rapid remediation, governing overall quality, and putting safeguards in place to prevent regressions.