Screen Reader Testing Guide: NVDA, JAWS, and VoiceOver for Developers

TestParty
July 19, 2025

Screen reader testing is essential for verifying web accessibility—automated scanners catch 30-40% of WCAG violations, but real assistive technology testing reveals how users actually experience your site. This guide covers practical testing with NVDA (Windows), JAWS (Windows), and VoiceOver (macOS/iOS)—the three most-used screen readers.

Understanding screen reader behavior helps developers write code that works for blind and low-vision users. More importantly, it reveals accessibility failures that automated tools miss—missing context, confusing navigation, and interactions that simply don't work.

Q: Which screen reader should I test with?

A: Test with at least two: NVDA (free, Windows) for broad coverage, and VoiceOver (built into Mac/iOS) for Apple users. JAWS remains the enterprise standard but requires licensing. Testing across multiple screen readers catches browser/AT combination issues.

Screen Reader Fundamentals

How Screen Readers Work

Screen readers parse web content and present it audibly (text-to-speech) or through refreshable braille displays. They navigate through:

Virtual buffer/browse mode: Screen reader builds its own representation of the page, allowing users to navigate by headings, landmarks, links, and other elements.

Focus/forms mode: Screen reader passes keystrokes directly to the browser for interactive elements—forms, custom widgets, applications.

Understanding this distinction explains why ARIA roles matter and why keyboard-only testing differs from screen reader testing.
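A minimal markup sketch (our own illustration; names and values are placeholders) shows the distinction in practice:

```html
<!-- Native input: NVDA and JAWS switch to focus/forms mode automatically -->
<label for="city">City</label>
<input id="city" type="text" />

<!-- Custom widget: role="slider" is what tells the screen reader this element
     handles its own keystrokes; without it, Arrow keys keep navigating the
     page in browse mode and never reach the widget -->
<div role="slider" tabindex="0" aria-label="Volume"
     aria-valuemin="0" aria-valuemax="100" aria-valuenow="50"></div>
```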

Key Navigation Patterns

Screen reader users don't read pages linearly. They:

  • Jump by headings (H key in NVDA/JAWS) to scan page structure
  • Navigate by landmarks (D key) to move between regions
  • Tab through interactive elements for forms and links
  • Use element lists to see all headings, links, or forms at once
  • Search for text to find specific content

Your site's accessibility depends on how well these navigation patterns work.
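All of these patterns lean on semantic markup. A hedged sketch of the structure they rely on (headings and labels are illustrative):

```html
<header>                               <!-- announced as "banner" -->
  <nav aria-label="Primary"><a href="/">Home</a></nav>
</header>
<main>                                 <!-- the main landmark: the D key's most useful stop -->
  <h1>Search results</h1>
  <section aria-labelledby="filters-heading">
    <h2 id="filters-heading">Filters</h2>
  </section>
</main>
<footer><p>© Example Co.</p></footer>  <!-- announced as "contentinfo" -->
```

The aria-label="Primary" on the nav is what lets a user tell multiple navigation landmarks apart.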

NVDA Testing (Windows)

Setup

NVDA (NonVisual Desktop Access) is free, open-source, and widely used.

Download: nvaccess.org

Browser pairing: NVDA works best with Firefox. Chrome support is good. Test both.

Basic configuration:

  • Enable speech viewer (Tools > Speech Viewer) to see what NVDA announces
  • Consider slowing speech rate initially (NVDA+Ctrl+Left/Right selects a synth setting, NVDA+Ctrl+Up/Down adjusts it)
  • Learn the NVDA modifier key (Insert or Caps Lock)

Essential Commands

| Action            | Command    |
|-------------------|------------|
| Stop speaking     | Ctrl       |
| Read current line | NVDA+Up    |
| Read from cursor  | NVDA+Down  |
| Next heading      | H          |
| Previous heading  | Shift+H    |
| Heading level 1-6 | 1-6        |
| Next landmark     | D          |
| Next link         | K          |
| Next form field   | F          |
| Next button       | B          |
| Elements list     | NVDA+F7    |
| Toggle forms mode | NVDA+Space |

Testing Workflow

1. Page structure test:

  • Press H repeatedly to navigate by headings
  • Verify logical heading hierarchy (H1 > H2 > H3)
  • Check that heading text accurately describes sections

2. Landmark test:

  • Press D to navigate by landmarks
  • Verify main, navigation, banner, contentinfo are present
  • Check landmark labels distinguish multiple navs/regions

3. Link test:

  • Press NVDA+F7, select Links tab
  • Review link text—should make sense out of context
  • Identify "click here" or "read more" failures (see the example after this workflow)

4. Form test:

  • Press F to navigate form fields
  • Verify each input has announced label
  • Check required fields are indicated
  • Test error messages are announced

5. Interactive element test:

  • Tab through page
  • Verify custom widgets announce role and state
  • Test that focus management works correctly
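For the link test in step 3, a hedged before/after sketch (the CSS class stands in for whatever visually-hidden utility your codebase uses):

```html
<!-- Fails out of context: the elements list shows identical "Read more"
     links with no way to tell them apart -->
<a href="/pricing">Read more</a>

<!-- Passes: the accessible name carries the destination; visually hidden
     text is one common technique -->
<a href="/pricing">
  Read more<span class="visually-hidden"> about pricing plans</span>
</a>
```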

Common NVDA-Revealed Issues

Missing labels: NVDA announces "edit" with no context—form field lacks label
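A hedged sketch of the fix (field names are illustrative):

```html
<!-- NVDA announces only "edit": nothing supplies an accessible name -->
<input type="text" name="email" />

<!-- NVDA announces "Email, edit": explicit association via for/id -->
<label for="email">Email</label>
<input id="email" type="text" name="email" />
```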

Duplicate announcements: Same content read multiple times due to redundant ARIA

Missing landmark labels: "Navigation" announced three times—can't distinguish

Broken live regions: Dynamic content updates without announcement
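A hedged sketch (the id and message are placeholders). Note that the live region must already exist in the DOM before it is updated, or the announcement may be missed:

```html
<div id="cart-status" aria-live="polite"></div>

<script>
  // Run after the cart actually changes: inserting text into an existing
  // polite live region is what triggers the announcement.
  document.getElementById('cart-status').textContent = 'Item added to cart';
</script>
```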

Focus loss: After interaction, focus disappears or jumps unexpectedly
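One hedged pattern for the fix (names are illustrative): make a stable element programmatically focusable with tabindex="-1" and send focus there after the focused element disappears:

```html
<h2 id="results-heading" tabindex="-1">Saved items</h2>

<script>
  // Call after a Delete action removes the item that had focus; without this,
  // focus silently falls back to <body> and the user loses their place.
  function restoreFocusAfterDelete() {
    document.getElementById('results-heading').focus();
  }
</script>
```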

JAWS Testing (Windows)

Setup

JAWS (Job Access With Speech) is the enterprise standard screen reader.

Download: freedomscientific.com (40-minute mode available without license)

Browser pairing: JAWS works well with Chrome, Edge, and Firefox.

Note: JAWS behaviors differ from NVDA. Testing both reveals compatibility issues.

Essential Commands

| Action                | Command                       |
|-----------------------|-------------------------------|
| Stop speaking         | Ctrl                          |
| Read current line     | Insert+Up                     |
| Start reading         | Insert+Down                   |
| Next heading          | H                             |
| Heading level 1-6     | 1-6                           |
| Next landmark         | R (or ; depending on version) |
| Next link             | Tab or K                      |
| Next form field       | F                             |
| List headings         | Insert+F6                     |
| List links            | Insert+F7                     |
| Virtual cursor toggle | Insert+Z                      |

JAWS-Specific Behaviors

Virtual PC cursor: JAWS's virtual buffer works similarly to NVDA's browse mode but with different behaviors for custom widgets.

Form field handling: JAWS may auto-switch to forms mode differently than NVDA.

ARIA support: JAWS implements some ARIA features differently—test complex widgets.

Testing Differences from NVDA

Some sites work in NVDA but fail in JAWS (or vice versa):

  • Custom combobox implementations
  • Complex ARIA widget patterns
  • Live region timing and behavior
  • Table navigation in complex layouts

Test critical functionality in both screen readers.
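Custom comboboxes, the first item above, make a good concrete check. A minimal ARIA 1.2 combobox skeleton (behavior omitted; ids are illustrative) to exercise in both screen readers:

```html
<label for="country">Country</label>
<input id="country" type="text" role="combobox"
       aria-expanded="false" aria-controls="country-list"
       aria-autocomplete="list" />
<ul id="country-list" role="listbox" hidden>
  <li id="opt-ca" role="option">Canada</li>
  <li id="opt-us" role="option">United States</li>
</ul>
```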

VoiceOver Testing (macOS)

Setup

VoiceOver is built into macOS—no installation needed.

Enable: System Settings > Accessibility > VoiceOver (System Preferences on older macOS), or press Cmd+F5

Browser pairing: Safari has best VoiceOver support. Test Chrome as secondary.

Initial learning: Complete the built-in training (VoiceOver Utility > Open VoiceOver Training)

Essential Commands

VoiceOver uses VO (Ctrl+Option) as modifier:

| Action                | Command           |
|-----------------------|-------------------|
| Stop speaking         | Ctrl              |
| Read current item     | VO+A              |
| Next item             | VO+Right          |
| Previous item         | VO+Left           |
| Interact with element | VO+Shift+Down     |
| Stop interacting      | VO+Shift+Up       |
| Open rotor            | VO+U              |
| Next heading          | VO+Cmd+H          |
| Next link             | VO+Cmd+L          |
| Web item rotor        | VO+U, then arrows |

The Rotor

VoiceOver's rotor (VO+U) provides element navigation:

  • Use left/right arrows to switch categories (headings, links, landmarks)
  • Use up/down arrows to navigate items
  • Press Enter to jump to selected item

macOS vs iOS VoiceOver

iOS VoiceOver uses touch gestures:

| Action           | Gesture            |
|------------------|--------------------|
| Read current     | Single tap         |
| Move to next     | Swipe right        |
| Move to previous | Swipe left         |
| Activate         | Double tap         |
| Rotor            | Two-finger rotate  |
| Scroll           | Three-finger swipe |

Test mobile sites on actual iOS devices for accurate results.

VoiceOver-Specific Issues

Safari rendering differences: Safari sometimes handles ARIA differently than Chrome/Firefox.

macOS vs iOS inconsistency: Some patterns work on desktop but fail mobile.

Focus handling: VoiceOver focus behavior differs from Windows screen readers.

Testing Methodology

Systematic Testing Checklist

Structure:

  • [ ] Page has exactly one H1
  • [ ] Heading hierarchy is logical (no skipped levels)
  • [ ] Landmarks present and labeled
  • [ ] Page regions are navigable

Navigation:

  • [ ] Skip link works and is first focusable element
  • [ ] Tab order matches visual order
  • [ ] No keyboard traps
  • [ ] Focus visible throughout

Content:

  • [ ] Images have appropriate alt text
  • [ ] Links have descriptive text
  • [ ] Tables have headers associated with cells
  • [ ] Lists use proper list markup

Forms:

  • [ ] All inputs have labels
  • [ ] Required fields indicated
  • [ ] Error messages associated with fields (see the sketch after this checklist)
  • [ ] Form submission feedback announced
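A hedged sketch covering the first three items (ids and copy are placeholders):

```html
<label for="zip">ZIP code</label>
<input id="zip" type="text"
       required aria-required="true"
       aria-invalid="true"
       aria-describedby="zip-error" />
<!-- aria-describedby is what makes the error read out with the field;
     aria-invalid flags the failed state -->
<p id="zip-error">Enter a five-digit ZIP code.</p>
```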

Dynamic Content:

  • [ ] Modal focus management correct (see the sketch after this list)
  • [ ] Live regions announce updates
  • [ ] Loading states communicated
  • [ ] State changes announced
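For modal focus management, native <dialog> is the least code. A hedged sketch (ids are illustrative); in current browsers, showModal() traps focus inside the dialog and close() returns it to the trigger:

```html
<button id="open-confirm">Remove item</button>

<dialog id="confirm-dialog">
  <h2>Remove this item?</h2>
  <button id="confirm-remove">Remove</button>
  <button id="confirm-cancel">Cancel</button>
</dialog>

<script>
  const dialog = document.getElementById('confirm-dialog');
  document.getElementById('open-confirm')
    .addEventListener('click', () => dialog.showModal());
  document.getElementById('confirm-cancel')
    .addEventListener('click', () => dialog.close());
</script>
```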

Testing User Flows

Don't just test isolated elements—test complete workflows:

E-commerce checkout:

  1. Add product to cart (announced?)
  2. Open cart (focus managed?)
  3. Update quantity (live update announced?)
  4. Proceed to checkout (navigation clear?)
  5. Complete forms (labels, errors working?)
  6. Submit order (confirmation announced?)

Account creation:

  1. Navigate to sign-up
  2. Complete form fields
  3. Handle validation errors
  4. Submit and verify confirmation

Recording Issues

Document screen reader issues precisely:

Issue: Product price not announced with product name
Screen reader: NVDA 2024.1
Browser: Firefox 120
Steps: Navigate to product listing, press H to reach product heading
Expected: "Product Name, $29.99"
Actual: "Product Name" (price in separate unassociated element)
WCAG: 1.3.1 Info and Relationships

Common Accessibility Failures

What Screen Readers Reveal That Scanners Miss

Missing context:

  • Images with generic alt text ("image", "photo")
  • Links without meaningful text ("click here")
  • Form fields with visual-only labels

Broken relationships:

  • Data tables without header associations
  • Form fields without programmatic labels
  • Error messages not linked to inputs

Navigation failures:

  • Missing or incomplete landmark structure
  • Illogical heading hierarchy
  • Focus management in modals and SPAs

State communication (see the sketch after this list):

  • Expanded/collapsed not announced
  • Selected state missing on tabs
  • Loading states silent
  • Form validation issues not conveyed
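A hedged disclosure sketch for the first item (ids are illustrative); tabs follow the same idea with aria-selected on the active tab:

```html
<button id="details-toggle" aria-expanded="false" aria-controls="order-details">
  Order details
</button>
<div id="order-details" hidden>Shipped via ground, arriving Friday.</div>

<script>
  // Toggling aria-expanded alongside visibility is what makes screen readers
  // announce "expanded"/"collapsed" on the button.
  const toggle = document.getElementById('details-toggle');
  toggle.addEventListener('click', () => {
    const open = toggle.getAttribute('aria-expanded') === 'true';
    toggle.setAttribute('aria-expanded', String(!open));
    document.getElementById('order-details').hidden = open;
  });
</script>
```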

Fixing What You Find

TestParty's automated scanning catches many issues screen reader testing reveals:

For e-commerce sites: TestParty provides implementable code fixes for common patterns—form labels, ARIA states, heading structure.

For development teams: Bouncer catches issues in PRs before deployment. PreGame provides real-time feedback during development.

Screen reader testing validates that fixes actually work—automation ensures issues don't recur.

FAQ

Q: How often should I test with screen readers?

A: Test major features and user flows during development. Test complete site after significant releases. Automated scanning (like TestParty's Spotlight) catches regressions between manual tests.

Q: Do I need to buy JAWS for testing?

A: JAWS offers 40-minute evaluation sessions without purchase. For regular testing, NVDA is free and sufficient for most needs. JAWS testing matters for enterprise users.

Q: Why test multiple screen readers?

A: Screen readers interpret code differently. A pattern working in NVDA might fail in JAWS. Testing two screen readers catches most compatibility issues.

Q: Should I test with braille displays?

A: Braille display testing is valuable but optional for most teams. Focus on speech output testing first—it covers most accessibility requirements.

Q: How do I become proficient at screen reader testing?

A: Use a screen reader regularly—even for quick tasks. Try navigating familiar sites without looking at the screen. Proficiency develops through practice.

Key Takeaways

  • Screen reader testing reveals issues automation misses. Only 30-40% of WCAG issues are programmatically detectable; the rest require real AT testing.
  • Test with at least two screen readers. NVDA + VoiceOver covers most users. Add JAWS for enterprise contexts.
  • Learn navigation patterns, not just commands. Understand how users actually navigate—by headings, landmarks, and element lists.
  • Test complete user flows, not just isolated elements. Checkout, sign-up, and search must work end-to-end.
  • Combine manual and automated testing. TestParty's scanning catches regressions; screen reader testing validates user experience.
  • Document issues precisely with screen reader version, browser, and steps to reproduce.

Conclusion

Screen reader testing transforms accessibility from abstract compliance to concrete user experience. Watching your site fail to communicate essential information—prices not associated with products, forms impossible to complete, navigation that makes no sense—motivates fixes in ways automated scan reports don't.

The investment in learning screen reader testing pays dividends: you'll write better code initially, catch issues earlier, and build empathy for users who depend on accessible websites.

TestParty's automated scanning catches programmatically-detectable issues and provides code fixes. Screen reader testing validates that your site actually works for users with disabilities. Together, they achieve genuine accessibility.

Ready to start fixing what screen readers reveal? Get a free accessibility scan to identify violations, then verify fixes with real screen reader testing.


Hey, transparency matters to us—AI helped produce this content, with humans guiding the process. TestParty works on Shopify accessibility and WCAG compliance, but we're not lawyers. For legal questions or major compliance decisions, please get proper professional advice.
