
The Modern Accessibility Testing Stack: Automation, Manual, and User Testing

TestParty
January 12, 2025

Accessibility testing isn't a single tool or a single moment. It's a layered approach that combines automated accessibility testing for speed and coverage, manual testing for nuance and context, and user testing for real-world validation. Organizations that rely on any single layer leave significant gaps.

The WebAIM Million study consistently finds that about 96% of home pages have automatically detectable WCAG failures. Yet automated tools catch only an estimated 30-40% of accessibility issues; the rest require human judgment. And even expert human judgment can miss how real users with disabilities actually experience your product.

Building a modern accessibility testing stack means understanding what each testing layer does well, where it falls short, and how to combine them into a workflow that catches issues early, cheaply, and comprehensively.

The Three Testing Layers

Automation – Catching the Obvious at Scale

Automated accessibility testing tools scan code and rendered pages to identify issues that can be programmatically detected.

What automation catches well:

  • Missing alt text on images. Tools can verify alt attributes exist (though not whether the text is meaningful).
  • Color contrast failures. Algorithms compare foreground and background colors against WCAG contrast ratios (sketched in code after this list).
  • Missing form labels. Tools detect <input> elements without associated <label> elements or aria-label attributes.
  • Empty links and buttons. Interactive elements without accessible names are flagged.
  • Duplicate IDs. Structural issues that break ARIA references.
  • Language attributes. Missing lang attribute on the HTML element.
  • Heading hierarchy issues. Skipped heading levels or illogical structure.
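
To give a sense of what these checks actually compute, the contrast rule above reduces to WCAG's relative-luminance math. Here is a minimal TypeScript sketch of that calculation (the function names are ours, not any particular tool's):

```ts
// Linearize an sRGB channel (0-255) per the WCAG relative-luminance definition.
function linearize(channel: number): number {
  const c = channel / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

// Relative luminance: weighted sum of the linearized R, G, B channels.
function relativeLuminance([r, g, b]: [number, number, number]): number {
  return 0.2126 * linearize(r) + 0.7152 * linearize(g) + 0.0722 * linearize(b);
}

// Contrast ratio = (L_lighter + 0.05) / (L_darker + 0.05), ranging 1:1 to 21:1.
function contrastRatio(fg: [number, number, number], bg: [number, number, number]): number {
  const [hi, lo] = [relativeLuminance(fg), relativeLuminance(bg)].sort((a, b) => b - a);
  return (hi + 0.05) / (lo + 0.05);
}

// WCAG AA requires 4.5:1 for normal text; #767676 on white just passes.
console.log(contrastRatio([118, 118, 118], [255, 255, 255]).toFixed(2)); // ~4.54
```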

Where automation falls short:

  • Alt text quality. Automation can confirm alt text exists but can't judge whether "image" is useful alt text for a product photo.
  • Keyboard navigation logic. Tools can check if elements are focusable but can't evaluate whether tab order is logical or whether all functionality is reachable.
  • Dynamic content. AJAX updates, single-page app navigation, and state changes often require interaction patterns automation struggles with.
  • Context-dependent issues. Whether a carousel's auto-play is problematic depends on content and duration, a judgment automation can't make.
  • Screen reader experience. How content is announced, whether it makes sense, whether the reading order is logical.

Role in the stack: Automation provides broad coverage at low cost. Run automated tests on every commit, every page, every build. Catch the easy issues continuously so humans can focus on harder problems.
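
In practice, this layer is often wired up with an open-source engine such as axe-core. As a minimal sketch (assuming Playwright and the @axe-core/playwright package; the URL is a placeholder), a scan that runs on every build might look like:

```ts
import { test, expect } from '@playwright/test';
import AxeBuilder from '@axe-core/playwright';

test('home page has no automatically detectable violations', async ({ page }) => {
  await page.goto('https://example.com/'); // placeholder URL

  // Restrict the scan to WCAG 2.0/2.1 A and AA rules.
  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa'])
    .analyze();

  // Any violation fails the build, keeping the baseline clean.
  expect(results.violations).toEqual([]);
});
```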

Manual Testing – Expert Judgment at Key Points

Manual accessibility testing involves humans evaluating experiences using assistive technologies and accessibility expertise.

What manual testing catches well:

  • Keyboard navigation flows. Humans can evaluate whether tab order is logical, whether focus is visible, whether all functionality is keyboard-operable.
  • Screen reader comprehension. Does the audio experience make sense? Is content announced in logical order? Are state changes communicated?
  • Form interaction patterns. Are error messages helpful? Does validation behavior make sense? Is recovery from errors intuitive?
  • Cognitive accessibility. Is language clear? Are instructions understandable? Is the interface overwhelming or confusing?
  • ARIA implementation quality. Are roles, states, and properties used correctly? Do custom components behave as expected?

Who performs manual testing:

  • Accessibility specialists with deep WCAG knowledge and assistive technology expertise
  • QA engineers trained in accessibility testing methodologies
  • Developers testing their own work as part of the development process
  • Third-party auditors providing independent evaluation and compliance documentation

When manual testing is needed:

  • Before major releases to catch issues automation misses
  • For new patterns and components that don't have established accessible implementations
  • When automated tests flag potential issues that need human verification
  • For compliance audits where documented expert evaluation is required

Role in the stack: Manual testing provides depth and judgment where automation can't. Schedule manual reviews at key milestones and for high-risk changes. Build manual testing skills across your team.

User Testing – Validation from Real Users

Testing with people who actually use assistive technologies daily reveals problems that even expert testers miss.

What user testing reveals:

  • Real workflow barriers. Users attempt actual tasks, revealing barriers in paths experts might not think to test.
  • Assistive technology diversity. Users bring their own tools, settings, and techniques—often different from what testers use.
  • Workaround discoveries. Users often have strategies for dealing with common inaccessible patterns—insight into what they tolerate versus what stops them.
  • Prioritization guidance. User frustration and success rates reveal which issues matter most in practice.

Approaches to user testing:

  • Moderated usability sessions. Watch and interview users as they complete tasks, asking about their experience.
  • Unmoderated remote testing. Users complete tasks independently, recording their experience for later review.
  • Beta programs and feedback channels. Ongoing channels for users with disabilities to report issues.
  • Disability community partnerships. Relationships with organizations that can facilitate user research.

The W3C's guidance on involving users in accessibility provides frameworks for conducting this research ethically and effectively.

Role in the stack: User testing validates that your accessibility work actually helps real people. Conduct user testing for major features, after significant remediation work, and periodically to maintain connection with actual user experience.

Building Your Accessibility Testing Stack

Coverage Goals – What to Test and When

Define what gets tested, how thoroughly, and at what points in your workflow.

Unit/component level: Test individual components for accessibility as part of component development. Automated tools integrated into the component library or Storybook can verify basics.
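
As a sketch of what a component-level check can look like with jest-axe and React Testing Library (SearchForm is a hypothetical component standing in for one of yours):

```tsx
import { render } from '@testing-library/react';
import { axe, toHaveNoViolations } from 'jest-axe';
import { SearchForm } from './SearchForm'; // hypothetical component

expect.extend(toHaveNoViolations);

test('SearchForm renders with no detectable violations', async () => {
  const { container } = render(<SearchForm />);
  // axe runs against the rendered DOM fragment for this component alone.
  expect(await axe(container)).toHaveNoViolations();
});
```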

Page/template level: Test complete pages combining multiple components. Automated scanning plus spot manual checks for keyboard navigation and screen reader basics.

User journey level: Test complete user flows—signup, checkout, account management. Both automated crawling and manual walkthrough to verify end-to-end accessibility.
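
Parts of the journey-level keyboard walkthrough can be automated too. The sketch below (Playwright; the URL and data-testid values are assumptions) tabs through a signup form and asserts that focus lands in the expected order:

```ts
import { test, expect } from '@playwright/test';

test('signup form is reachable by keyboard in a logical order', async ({ page }) => {
  await page.goto('https://example.com/signup'); // placeholder URL

  // The visual order we expect focus to follow; test ids are assumptions.
  const expectedOrder = ['email', 'password', 'submit'];

  for (const id of expectedOrder) {
    await page.keyboard.press('Tab');
    await expect(page.getByTestId(id)).toBeFocused();
  }
});
```

Automation verifies the order holds on every run; a human still judges whether that order makes sense.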

Full property level: Comprehensive testing of your entire digital presence. Deep automated scans plus selective manual audits of key experiences.

Frequency framework:

  • Every commit/PR: Automated tests on changed code
  • Every deploy: Automated scan of affected pages
  • Weekly/monthly: Broader automated scans, review of issue trends
  • Quarterly: Manual audits of priority user journeys
  • Annually: Comprehensive audit, user testing, VPAT (Voluntary Product Accessibility Template) updates

Tooling Choices – Selecting and Integrating Tools

Build a toolchain that works together across your development workflow.

Development environment:

  • Linters that flag accessibility issues in code (like eslint-plugin-jsx-a11y for React; see the config sketch after this list)
  • IDE extensions that highlight problems as developers write code
  • Browser extensions for ad-hoc testing during development
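
The linter setup is often just a few lines. A minimal sketch for a React project using eslint-plugin-jsx-a11y's recommended preset (adjust to your existing ESLint setup):

```js
// .eslintrc.cjs — minimal sketch
module.exports = {
  plugins: ['jsx-a11y'],
  extends: ['plugin:jsx-a11y/recommended'],
  rules: {
    // Escalate individual rules beyond the preset where you want hard failures.
    'jsx-a11y/alt-text': 'error',
  },
};
```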

CI/CD integration:

  • Automated accessibility checks that run on every pull request
  • Build failures for critical issues that prevent merge (a generic gate is sketched below)
  • Reports that track issues over time
  • TestParty integrates directly into CI/CD pipelines to catch issues before deployment
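
At its core, a merge gate is severity triage. The sketch below is generic (not TestParty's implementation): it assumes an axe-core results object and a policy, chosen here for illustration, that serious and critical issues block the merge:

```ts
import type { AxeResults } from 'axe-core';

// Log every violation, but only fail CI on the blocking severities.
export function gateOnViolations(results: AxeResults): void {
  const blocking = results.violations.filter(
    (v) => v.impact === 'serious' || v.impact === 'critical',
  );

  for (const v of results.violations) {
    console.warn(`[${v.impact}] ${v.id}: ${v.help} (${v.nodes.length} nodes)`);
  }

  if (blocking.length > 0) {
    console.error(`${blocking.length} blocking accessibility violations found`);
    process.exit(1); // non-zero exit fails the PR check
  }
}
```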

Manual testing toolkit:

  • Screen readers: NVDA (Windows, free), JAWS (Windows, commercial), VoiceOver (Mac/iOS, built-in), TalkBack (Android, built-in)
  • Keyboard-only navigation testing procedures
  • Structured testing checklists and protocols
  • Issue documentation templates

Monitoring and reporting:

  • Dashboards showing accessibility status across properties
  • Trend tracking showing improvement or regression over time
  • Executive-level reporting for compliance documentation
  • Alerting for critical regressions

Workflow Integration – Embedding Testing in Development

Accessibility testing works best when embedded in existing workflows, not added as a separate track.

In design:

  • Accessibility review during design critiques
  • Annotation of accessibility requirements in design handoff
  • Design system components with documented accessibility behaviors

In development:

  • Automated linting during coding
  • PR checks that include accessibility verification
  • Code review guidelines that include accessibility criteria
  • Definition of done that requires accessibility sign-off

In QA:

  • Accessibility testing as part of standard test plans
  • Manual testing protocols for keyboard and screen reader
  • Bug templates that capture accessibility-specific information
  • Severity classifications that account for accessibility impact

In production:

  • Continuous monitoring for regressions
  • User feedback channels that surface accessibility issues
  • Incident response processes that treat accessibility blockers as high-priority

TestParty in the Testing Stack

TestParty functions across multiple layers of your accessibility testing stack, providing capabilities that individual point tools can't match.

Automated scanning with breadth and depth. TestParty scans your entire digital presence—not just pages you remember to test, but all pages including those generated dynamically. Comprehensive coverage means issues don't hide in untested corners.

Remediation, not just detection. Unlike tools that only report problems, TestParty provides code-level fixes. When an issue is detected, you see exactly how to fix it—accelerating remediation and teaching developers in context.

CI/CD integration for prevention. TestParty in your deployment pipeline catches issues before they ship. Automated gates prevent new accessibility problems from reaching production.

Dashboards for visibility. TestParty reporting shows accessibility status across your properties, trends over time, and progress toward compliance goals. Visibility keeps accessibility on the agenda and demonstrates program impact.

Foundation for manual testing. TestParty's automated scanning handles the baseline issues that don't require human judgment, freeing your experts to focus on nuanced problems automation can't catch.

Conclusion – Layered Testing for Complete Coverage

A modern accessibility testing stack combines automated accessibility testing for speed and coverage, manual testing for judgment and depth, and user testing for real-world validation. No single layer is sufficient alone.

The stack should be:

  • Continuous: Testing happens at every stage, not just before release
  • Integrated: Testing is part of normal workflows, not a separate track
  • Layered: Different testing types complement each other's strengths and weaknesses
  • Actionable: Testing produces fixes, not just findings

The goal isn't perfect automated coverage or endless manual audits. It's an efficient combination that catches most issues early and cheaply, catches nuanced issues through expert review, and validates through real user experience that your work actually helps.

With the right a11y testing tools and processes, accessibility testing becomes sustainable—part of how your team builds products, not a burden added at the end.

Ready to see how TestParty fits in your testing stack? Book a demo and we'll show how automated scanning, remediation, and monitoring work together.

