Seeing assistive technology
Text-to-speech
Text-to-speech devices and applications (e.g., Read Aloud, Spoken Content, Immersive Reader) read text from the screen or from special codes in digital resources. They allow people who cannot see the screen to build a mental picture of the content.
Text read from the screen:
- paragraphs
- headings
- lists
- table headers
- table cells
Text supplied by special codes:
- alternative text for describing an image
- closed captions for dialogue or audio
- transcripts of audio descriptions for visuals or video
- form fields, including their labels and instructions
- form field validation and error messages
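The special codes for form fields live in the markup itself. As a minimal HTML sketch (the field name and message text are invented for illustration), a label and an error message can be programmatically tied to their input:

```html
<label for="email">Email address</label>
<input type="email" id="email" name="email"
       aria-describedby="email-error" aria-invalid="true">
<!-- The aria-describedby association lets text-to-speech
     read the error message together with the field -->
<p id="email-error">Error: enter a valid email address.</p>
```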

While headings, lists, table headers, and table cells are all text, they must be semantically defined as those elements. Otherwise, they appear to text-to-speech technology as paragraphs with no special meaning.
If you make a semantic list, text-to-speech technology tells you how many items the list contains. It even tells you which item you are on.
If you make a semantic table, identifying the headers on each row and/or column, the technology tells you where you are before it reads the contents of the data cell. You do not get lost in a semantic table.
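These semantics are plain markup. A minimal HTML sketch (the list and table contents are invented for illustration):

```html
<!-- A semantic list: announced with its item count,
     e.g. "list, three items" -->
<ul>
  <li>paragraphs</li>
  <li>headings</li>
  <li>lists</li>
</ul>

<!-- Semantic table headers: <th> with scope lets the
     screen reader announce the row and column before
     the contents of each data cell -->
<table>
  <tr><th scope="col">Technology</th><th scope="col">Output</th></tr>
  <tr><td>Screen reader</td><td>Speech</td></tr>
  <tr><td>Braille display</td><td>Braille</td></tr>
</table>
```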

Special codes are needed to describe multimedia (e.g., images, audio, and video) in text form. While video or visuals can be described orally, a text version makes them inclusive to everyone.
For example, subtitles burned into the video help people who are deaf but not people who are deaf-blind. Closed captions are machine-readable text, so they help everyone.
Audio descriptions help people who are blind but not people who are deaf-blind. Transcripts of those audio descriptions are text, so they help everyone.
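In HTML, closed captions and audio descriptions can both ship as machine-readable text tracks. A sketch (the file names are made up for illustration):

```html
<video controls>
  <source src="intro.mp4" type="video/mp4">
  <!-- Closed captions: dialogue and audio as text -->
  <track kind="captions" src="intro-captions.vtt"
         srclang="en" label="English captions">
  <!-- Text descriptions of the visuals -->
  <track kind="descriptions" src="intro-descriptions.vtt"
         srclang="en" label="English descriptions">
</video>
```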
Screen Readers
A screen reader is a type of text-to-speech device or application. Text-to-speech applications allow you to select what you want to hear with a mouse click or screen tap. Screen readers have a different navigation system.
Screen readers are designed for people who cannot tell where their mouse or finger is pointing. Their navigation is based on keyboard-only functionality:
- They may press Tab on a keyboard or swipe on a touchscreen to move from element to element.
- They may press Enter or Space on a keyboard or double-tap on a touchscreen to select.
This navigation can be aided with auditory or haptic feedback for people who need confirmation of their actions through sound or touch.
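Native HTML controls support this keyboard-only navigation automatically. A sketch (the save() handler is hypothetical):

```html
<!-- A native button is reachable with Tab and activated
     with Enter or Space without any extra code -->
<button type="button" onclick="save()">Save changes</button>

<!-- A clickable <div> is not reachable by Tab or announced
     as a control unless tabindex, a role, and key handlers
     are bolted on -->
<div onclick="save()">Save changes</div>
```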
Here are some reasons why semantics are important for people who use screen readers:

This is a simulation of what the screen reader might say. This does not appear on the webpage.
Alternative text allows people who cannot see an image to create a mental picture of it. When an image has no alternative text, the screen reader announces that there is an image and nothing else. Someone needs to describe the relevant points or takeaways of the image in its alternative text.
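As an HTML sketch (the image and its alt text are invented for illustration):

```html
<!-- The screen reader can announce something like:
     "Image, Bar chart: sales doubled from 2023 to 2024" -->
<img src="sales.png" alt="Bar chart: sales doubled from 2023 to 2024">

<!-- With no alt attribute, the screen reader announces an
     image and nothing else, or may fall back to the file name -->
<img src="sales.png">
```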

This is a simulation of what the screen reader might say. This does not appear on the webpage.
Screen readers read digital content in the order of the underlying code, going top to bottom and left to right. That can be a lot to listen to. Sometimes it is repetitive.
Screen readers have a "headings" view that allows people to skip around. It scans for semantic headings only. It reads them based on their level, like a telephone menu system.
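Only real heading elements appear in that view. A sketch using this guide's own section names:

```html
<h1>Seeing assistive technology</h1>
<h2>Text-to-speech</h2>
<h2>Screen readers</h2>
<h2>Refreshable Braille display</h2>

<!-- Bold text styled to look like a heading is skipped
     by the headings view; it is just a paragraph -->
<p><strong>Screen readers</strong></p>
```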

This is a simulation of what the screen reader might say. This does not appear on the webpage.
Screen readers also have a "links" view that allows people to skip around. It scans for links only and reads just the link text. It does not read the surrounding paragraph text.
Proper context should be included in the link text. Two links that say "read more" will be very confusing. Where do they go? Do they go to the same place?
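A minimal HTML sketch (the URLs are made up for illustration):

```html
<!-- In the links view, these are indistinguishable: -->
<a href="/captions-guide">Read more</a>
<a href="/braille-guide">Read more</a>

<!-- Link text that carries its own context: -->
<a href="/captions-guide">Read more about closed captions</a>
<a href="/braille-guide">Read more about Braille displays</a>
```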
Refreshable Braille Display
A refreshable Braille display is a screen reader that presents what is on the screen in Braille. People who use this assistive technology must know Braille. It is useful when they need to read a screen but cannot hear it read aloud. The device may also include a Braille writer. This is not the same as a Braille keyboard, which is a QWERTY keyboard with Braille characters embossed on the keys.

The four blue buttons on the left and right are for typing Braille characters. The white buttons on the edge are for navigation and selection. The white pins in the center form the Braille characters; they rise and fall as people use their screen readers.