
How Do Screen Readers Work? A Guide for Web Developers

TestParty
September 14, 2025

When I ask developers about accessibility, many understand they should "make things work for screen readers" but have fuzzy ideas about what screen readers actually do. This knowledge gap leads to well-intentioned but ineffective accessibility efforts—developers adding ARIA attributes randomly or assuming visual layout translates to screen reader experience.

Understanding how screen readers actually interpret your code transforms accessibility from mysterious requirement to logical engineering challenge. Let me walk you through how these essential tools work and what it means for your development decisions.

Q: How do screen readers work?

A: Screen readers are software applications that convert digital text and interface elements into speech or braille output. They parse the accessibility tree (derived from HTML/DOM), interpret semantic meaning from markup, and present content sequentially to users. Users navigate using keyboard commands that let them jump between headings, links, form fields, and other elements.

What Screen Readers Actually Do

The Core Function

Screen readers perform several interconnected tasks:

Text-to-speech conversion: The most visible function—converting text content into spoken audio. But this is just the output layer.

Accessibility tree interpretation: Screen readers don't read your visual layout. They read the accessibility tree—a parallel structure derived from the DOM that represents how assistive technology should interpret content.

Semantic interpretation: Screen readers announce not just text but what things are. "Button, Submit" tells users both the element type and its label. This semantic information comes from HTML elements and ARIA attributes.

Navigation support: Users don't listen to pages linearly. They navigate using keyboard commands—jumping to headings, cycling through links, moving between form fields. Screen readers provide this navigation layer.

The Accessibility Tree

This concept is crucial for developers: screen readers interact with the accessibility tree, not directly with your HTML or visual output.

The browser builds the accessibility tree from your DOM, applying rules about which elements expose what information. The tree includes:

  • Accessible name: What the element is called (button text, image alt, form label)
  • Role: What the element is (button, link, heading, textbox)
  • State: Current conditions (checked, expanded, disabled)
  • Properties: Additional information (required, described by, level)

You can inspect the accessibility tree in browser DevTools (Chrome: DevTools → Elements → Accessibility pane).

When your HTML is semantic and properly structured, the accessibility tree accurately represents your content. When you use divs for everything or misuse ARIA, the tree becomes confusing or misleading.
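As a concrete illustration, a single native checkbox exposes all four pieces of information listed above. (The element names are standard HTML; the announced output shown in the comment is approximate and varies by screen reader.)

```html
<!-- A labeled, required checkbox -->
<input type="checkbox" id="terms" required checked>
<label for="terms">I accept the terms</label>

<!-- Approximate accessibility tree entry:
     name:     "I accept the terms"  (from the associated <label>)
     role:     checkbox              (from type="checkbox")
     state:    checked               (from the checked attribute)
     property: required              (from the required attribute) -->
```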

How Users Navigate with Screen Readers

Navigation Modes

Screen reader users don't experience pages like sighted users scrolling visually. They navigate through structured interaction:

Browse/read mode: Arrow keys move through content sequentially. Users hear each element in order.

Focus mode: Tab key moves between interactive elements (links, buttons, form fields). Users interact with forms and controls.

Navigation shortcuts: Users jump directly to specific element types:

  • H: Move to next heading
  • K: Move to next link
  • F: Move to next form field
  • T: Move to next table
  • D: Move to next landmark (page region)

(Exact keys vary slightly between screen readers; these are NVDA's quick-navigation keys.)

These shortcuts explain why semantic HTML matters so much. If your "button" is actually a styled div, it won't appear when users press B to find buttons.
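A minimal comparison makes the difference concrete (the `save()` handler is a hypothetical placeholder; announcements are approximate):

```html
<!-- Invisible to button navigation: no role, no keyboard focus -->
<div class="btn" onclick="save()">Save</div>

<!-- Appears in button navigation, focusable with Tab,
     activates with Enter/Space, announced as "Save, button" -->
<button type="button" onclick="save()">Save</button>
```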

What Users Hear

When a screen reader encounters elements, it announces:

  • Headings: "Heading level 2, Product Features"
  • Links: "Link, Read our privacy policy"
  • Buttons: "Button, Submit order"
  • Images: "Image, Golden retriever playing fetch" (if alt text exists)
  • Form fields: "Edit text, Email address, required"
  • Lists: "List, 5 items" followed by individual items

This contextual information comes from your semantic HTML. Without it, users hear text without understanding what they're interacting with.
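These announcements map directly to ordinary HTML. The snippet below pairs each element with a typical announcement (exact wording varies by screen reader; the URLs and filenames are illustrative):

```html
<h2>Product Features</h2>                       <!-- "Heading level 2, Product Features" -->
<a href="/privacy">Read our privacy policy</a>  <!-- "Link, Read our privacy policy" -->
<button>Submit order</button>                   <!-- "Button, Submit order" -->
<img src="dog.jpg"
     alt="Golden retriever playing fetch">      <!-- "Image, Golden retriever playing fetch" -->
<label for="email">Email address</label>
<input type="email" id="email" required>        <!-- "Edit text, Email address, required" -->
```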

Common User Tasks

Understanding how users accomplish tasks helps you design accessible experiences:

Finding specific content: Users often use heading navigation (H key) to scan page structure, then drill into relevant sections. Good heading hierarchy enables this.

Completing forms: Users tab through form fields, hearing labels and instructions. They expect focus to move logically and errors to be announced.

Understanding tables: Users navigate tables by cell, hearing row/column headers to understand data relationships. Complex tables without proper headers become incomprehensible.

Operating custom widgets: Users expect consistent patterns. A disclosure widget should work like other disclosure widgets—Enter to toggle, clear state announcement.
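A disclosure widget following that expected pattern can be sketched like this (a minimal example; the ids and the shipping copy are illustrative). Because a native `<button>` is used, Enter and Space already activate it; the script only toggles state and visibility, and the `aria-expanded` attribute gives screen readers the "collapsed"/"expanded" announcement:

```html
<button id="ship-toggle" aria-expanded="false" aria-controls="ship-details">
  Shipping details
</button>
<div id="ship-details" hidden>Orders ship within 2 business days.</div>

<script>
  // Flip aria-expanded and the hidden attribute together so the
  // announced state always matches what is visible.
  const btn = document.getElementById('ship-toggle');
  btn.addEventListener('click', () => {
    const open = btn.getAttribute('aria-expanded') === 'true';
    btn.setAttribute('aria-expanded', String(!open));
    document.getElementById('ship-details').hidden = open;
  });
</script>
```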

Popular Screen Readers

JAWS (Windows)

JAWS (Job Access With Speech) is the most widely used commercial screen reader, dominant in professional and enterprise settings.

Key characteristics:

  • Extensive customization options
  • Strong support for complex applications
  • Paid software (subscription or perpetual license)
  • Often used in workplace accommodations

NVDA (Windows)

NVDA (NonVisual Desktop Access) is a free, open-source screen reader that has gained significant market share.

Key characteristics:

  • Free and actively developed
  • Growing user base, especially outside enterprise
  • Slightly different behaviors from JAWS in some areas
  • Good choice for developer testing

VoiceOver (Mac/iOS)

VoiceOver is Apple's built-in screen reader, included with macOS and iOS.

Key characteristics:

  • Built into Apple devices at no additional cost
  • Different navigation model than Windows screen readers
  • Primary screen reader for iOS
  • Activated via Command+F5 (Mac) or Settings (iOS)

Others

Narrator: Built into Windows. Less feature-rich than JAWS/NVDA but useful for basic testing.

TalkBack: Android's built-in screen reader.

Orca: Linux screen reader for GNOME desktop.

What This Means for Developers

HTML Is Your Primary Tool

Semantic HTML creates the accessibility tree that screen readers interpret. Your most important accessibility decisions involve HTML element choice:

Use `<button>` for buttons: Not clickable divs. Buttons are automatically keyboard-focusable, announced as buttons, and work with screen reader navigation.

Use heading elements for headings: `<h1>` through `<h6>` create navigable structure. Bold text doesn't create headings.

Use `<nav>`, `<main>`, `<footer>`: Landmark elements create regions users can jump to directly.

Use proper form elements: `<label>` associated with inputs, `<fieldset>` and `<legend>` for groups, and native form controls rather than custom implementations.

Use lists for lists: `<ul>`, `<ol>`, and `<li>` announce "list, 5 items" and enable list navigation.
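Putting the form-related advice together in one fragment (the field names are illustrative):

```html
<form>
  <fieldset>
    <legend>Shipping address</legend>  <!-- announced as group context for the fields -->
    <label for="street">Street</label>
    <input id="street" name="street" required>
    <label for="city">City</label>
    <input id="city" name="city" required>
  </fieldset>
  <button type="submit">Place order</button>  <!-- focusable and announced automatically -->
</form>
```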

When ARIA Is Needed

ARIA (Accessible Rich Internet Applications) extends HTML's accessibility capabilities, but it's a supplement, not a replacement.

Use ARIA when:

  • HTML alone can't express the semantics (custom widgets)
  • You need to add relationships (`aria-describedby`, `aria-labelledby`)
  • You're creating patterns HTML doesn't natively support (tabs, trees, dialogs)

Don't use ARIA when:

  • Native HTML elements do the job
  • You're trying to "fix" semantics that should come from HTML
  • You don't understand what the attributes do

The first rule of ARIA from W3C: "If you can use a native HTML element or attribute with the semantics and behavior you require already built in, instead of re-purposing an element and adding an ARIA role, state or property to make it accessible, then do so."
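One legitimate ARIA use is wiring up a relationship HTML alone can't express, such as attaching hint text to a field (the ids and hint wording are illustrative; announcement is approximate):

```html
<label for="pw">Password</label>
<input type="password" id="pw" aria-describedby="pw-hint">
<!-- The hint is announced after the label and role, roughly:
     "Password, edit text, protected, Must be at least 12 characters" -->
<p id="pw-hint">Must be at least 12 characters</p>
```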

Testing with Screen Readers

Developers should test with actual screen readers, not just assume automated testing catches everything.

Basic testing process:

  1. Install a screen reader: NVDA is free for Windows; VoiceOver is built into Mac.
  2. Learn basic navigation: Arrow keys, heading navigation (H), link navigation (K), form navigation.
  3. Close your eyes or look away: Experience the page without visual context.
  4. Attempt key tasks: Can you understand the page structure? Complete forms? Navigate to important content?
  5. Note confusion points: Where did you get lost? What wasn't announced? What was confusing?

This reveals issues automated tools miss: confusing announcements, missing context, illogical navigation order.

Common Developer Mistakes

Hiding content incorrectly: `display: none` and `visibility: hidden` remove content from screen readers as well as from view. Sometimes that's what you want; sometimes you need visually-hidden-but-accessible techniques instead.

Focus management failures: When modals open, focus should move into the modal. When they close, focus should return. Screen reader users get lost without proper focus management.

Announcing too much or too little: Every change doesn't need announcement. Use ARIA live regions thoughtfully.

Assuming visual layout equals reading order: Screen readers follow DOM order, not visual position. CSS Grid and Flexbox can create visual orders that don't match logical reading order.

Image alt text failures: An image with no alt attribute is typically announced by its filename. Decorative images should have an empty alt (`alt=""`), not a missing alt attribute.
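The alt text rules in practice (filenames and alt wording are illustrative):

```html
<!-- Informative image: describe the content, not the format -->
<img src="chart.png" alt="Revenue grew 40% from Q1 to Q4">

<!-- Decorative image: empty alt tells screen readers to skip it -->
<img src="divider.png" alt="">

<!-- Missing alt: screen readers fall back to the filename -->
<img src="IMG_20250914_093012.jpg">  <!-- avoid this -->
```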

Designing for Screen Reader Experience

Information Architecture

Heading structure matters enormously. Users navigate by heading. Your heading hierarchy should let users understand page organization without reading every word.

Logical reading order. Content should make sense read linearly, even if visual layout is complex.

Descriptive links. "Click here" tells users nothing. "Download our annual report (PDF, 2.4MB)" tells them everything.
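Side by side (the URL is illustrative):

```html
<!-- Meaningless when heard out of context in a links list -->
<a href="/annual-report.pdf">Click here</a>

<!-- Self-describing, useful on its own -->
<a href="/annual-report.pdf">Download our annual report (PDF, 2.4MB)</a>
```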

Dynamic Content

Announce important changes. When content updates (form errors, search results, notifications), use ARIA live regions appropriately.

Don't announce everything. A live region announcing every character typed would be unusable.

Manage focus on navigation. Single-page apps that change content should manage focus so users know the view changed.
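A polite live region for announcing search results might look like this (a sketch; the id and message are illustrative). Note that the region should already exist in the DOM before its text changes, otherwise the update may not be announced:

```html
<div aria-live="polite" id="results-status"></div>

<script>
  // After results load, update the region's text; screen readers
  // announce the new content without moving the user's focus.
  document.getElementById('results-status').textContent =
    '12 results found';
</script>
```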

Interactive Patterns

Follow established patterns. Screen reader users expect tabs, accordions, and dialogs to behave consistently. The WAI-ARIA Authoring Practices Guide documents expected patterns.

Provide keyboard access. If screen reader users can hear something but can't interact with it via keyboard, it's effectively broken.

FAQ Section

Q: Do screen reader users browse the web differently than sighted users?

A: Yes, fundamentally. Screen reader users navigate by structure (headings, landmarks, links) rather than scanning visually. They often listen at 2-3x normal speech speed and jump between elements rather than reading linearly. Page layouts optimized for visual scanning may be poorly organized for screen reader navigation.

Q: Which screen reader should developers test with?

A: Test with at least one, ideally multiple. NVDA (Windows, free) and VoiceOver (Mac/iOS, built-in) are accessible to most developers. The WebAIM screen reader survey shows usage statistics. Testing with the most common combinations (NVDA/Chrome, JAWS/Chrome, VoiceOver/Safari) covers most users.

Q: Do ARIA attributes fix accessibility issues?

A: ARIA supplements HTML but doesn't fix fundamental issues. A `<div>` with `role="button"` still needs JavaScript for keyboard interaction and focus handling. A native `<button>` provides all of this automatically. ARIA misuse creates more problems than it solves.

Q: How do screen readers handle JavaScript-heavy sites?

A: Modern screen readers work with dynamic content, but developers must help. Use semantic HTML, announce changes via live regions, manage focus during navigation, and ensure keyboard access to all functionality. Single-page apps without accessibility consideration are often very difficult to use.

Q: Can automated accessibility testing replace screen reader testing?

A: No. Automated tools catch about 30-40% of issues—mostly code-level problems. They can't evaluate whether content makes sense, whether focus management works properly, or whether the user experience is logical. Automated testing is valuable but insufficient alone.

Building Screen Reader-Friendly Sites

Understanding how screen readers work transforms accessibility from checkbox compliance to thoughtful user experience design. The same principles that help screen reader users—semantic structure, logical organization, clear labeling—improve experiences for everyone.

Key takeaways:

  • Use semantic HTML as your primary accessibility tool
  • Test with actual screen readers, not just automated tools
  • Design for structural navigation, not just visual scanning
  • Follow established patterns for interactive components
  • Manage focus and announce changes thoughtfully

Ready to identify accessibility issues affecting screen reader users? Get a free accessibility scan to find problems in your code that impact assistive technology.


Content disclosure: This article was produced using AI-assisted tools and reviewed by TestParty's team of accessibility specialists. As a company focused on source code remediation and continuous accessibility monitoring, we aim to share practical knowledge about WCAG and ADA compliance. That said, accessibility is complex and context-dependent. The information here is educational only—please work with qualified professionals for guidance specific to your organization's needs.
