Items tagged with: screenReader
- TalkBack (0 votes)
- Prudence (0 votes)
- Jieshuo (0 votes)
- Other, please say what in a reply (0 votes)
- I don’t use a screen reader (0 votes)
- I don’t have an Android (0 votes)
I’ve published Part 3 of “I Want to Love Linux. It Doesn’t Love Me Back.”
This one’s about the so-called universal interface: the console. The raw, non-GUI, text-mode TTY. The place where sighted Linux users fall back when the desktop breaks, and where blind users are supposed to do the same. Except — we can’t. Not reliably. Not safely. Not without building the entire stack ourselves.
This post covers Speakup, BRLTTY, Fenrir, and the audio subsystem hell that makes screen reading in the console a game of chance. It dives into why session-locked audio breaks espeakup, why BRLTTY fails silently and eats USB ports, why the console can be a full environment — and why it’s still unusable out of the box. And yes, it calls out the fact that if you’re deafblind, and BRLTTY doesn’t start, you’re just locked out of the machine entirely. No speech. No visuals. Just a dead black box.
There are workarounds. Scripts. Hacks. Weird client.conf magic that you run once as root, once as a user, and pray to PipeWire that it sticks. Some of this I learned from a reader of post 1. None of it is documented. None of it is standard. And none of it should be required.
This is a long one. Technical, and very real. Because the console should be the one place Linux accessibility never breaks. And it’s the one place that’s been left to rot.
Link to the post: fireborn.mataroa.blog/blog/i-w…
#Linux #Accessibility #BlindTech #BRLTTY #Speakup #Fenrir #TTY #PipeWire #ScreenReader #DisabilityTech #ConsoleComputing #LinuxAccessibility #FOSS
nvaccess.org/about-nv-access/
#ScreenReader #Accessibility
🌟 Excited to share Thorsten-Voice's YouTube channel! 🎥 🗣️🔊 ♿ 💬
Thorsten presents innovative TTS solutions and a variety of voice technologies, making it an excellent starting point for anyone interested in open-source text-to-speech. Whether you're a developer, accessibility advocate, or tech enthusiast, his channel offers valuable insights and resources. Don't miss out on this fantastic content! 🎬
Follow him here: @thorstenvoice
or on YouTube: youtube.com/@ThorstenMueller
#Accessibility #FLOSS #TTS #ParlerTTS #OpenSource #VoiceTech #TextToSpeech #AI #CoquiAI #VoiceAssistant #Sprachassistent #MachineLearning #AccessibilityMatters #Inclusivity #FOSS #Coqui #VoiceTechnology #KünstlicheStimme #Python #Rhasspy #STT #SpeechSynthesis #SpeechRecognition #Sprachsynthese #ArtificialVoice #VoiceCloning #Spracherkennung #CoquiTTS #voice #a11y #ScreenReader
Thorsten-Voice
Guude! (hi, nice to see you) 👋, I'm Thorsten 😊. You like open source, privacy-aware and locally running voice technology? Me too 😎. You'll find cooking-recipe-like tutorials on TTS, STT, voice assistants, AI, ML and way more cool stuff here. (YouTube)
Hey everyone,
I’m looking for an accessible Reddit app for Mac that works well with VoiceOver. What do you suggest? I’d love to hear what’s working for you.
#BlindTech #MacAccessibility #VoiceOver #Reddit #AccessibleApps #Apple #ScreenReader #TechForAll #BlindCommunity
I have already used #OpenTalk successfully for recordings and conferences with blind people.
Now, according to the project itself, use with a #ScreenReader has been further improved.
opentalk.eu/de/news/opentalk-u…
@OpenTalkMeeting@social.opentalk.eu #a11y #Inklusion #digitaleTeilhabe #blind #Video #VideoKonferenz
OpenTalk Update: Improved Security & Usability
Digital communication is constantly evolving, and so is OpenTalk. The update of our video conferencing solution to version 25.1.3 brings significant improvements. (OpenTalk)
It is bewildering to me how many #blind people share content that they've copied from another source without fixing up issues with #screenReader readability. The latest example is an email starting with 13 lines—for NVDA and Chrome at least—of silent unicode characters at the beginning.
Actually, it's confusing why so many blind people copy and republish content from other sources rather than linking to the original, but that's a separate conversation.
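A quick, hedged illustration of the cleanup that would help here: a small TypeScript helper that strips common invisible Unicode characters (zero-width spaces and joiners, BOMs, directional marks, soft hyphens) from copied text before republishing. The character list is a reasonable subset chosen for the sketch, not an exhaustive one.

```typescript
// Sketch: remove common "silent" Unicode characters that screen readers
// announce as nothing (or as noise) when they pile up in copied content.
// The set below covers zero-width space/joiners, word joiner, BOM,
// directional marks, and soft hyphen; it is illustrative, not exhaustive.
const INVISIBLES = /[\u200B\u200C\u200D\u2060\uFEFF\u200E\u200F\u00AD]/g;

export function cleanForRepublishing(text: string): string {
  return text
    .replace(INVISIBLES, "")
    // Collapse the blank lines left behind once the invisible characters go.
    .replace(/\n{3,}/g, "\n\n")
    .trim();
}

// Example: a copied email that starts with several "empty" lines of
// zero-width characters becomes just the actual text.
console.log(cleanForRepublishing("\uFEFF\u200B\n\n\n\nActual announcement text"));
```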
NVDA 2025.1 Beta 6 is now available!
Read the full details and download from: nvaccess.org/post/nvda-2025-1b…
Changes introduced in Beta 6:
- Updates to translations
- Fix for the COM registration fixing tool: don’t run when cancelling with alt+f4
- Minor fix for SAPI 4 voices
- Fix for Braille display detection
- Minor improvements to the user experience of Remote Access
Please continue to test and give feedback via GitHub!
#NVDA #NVDAsr #Beta #PreRelease #News #ScreenReader #Accessibility
This Global Accessibility Awareness Day we'd like to give a shout-out to our translators! NVDA has been translated into over 55 languages to provide access to technology for blind and vision impaired people around the world. Recently we spoke with Harun in Türkiye, who shared how access, in his own language, has helped him: nvaccess.org/post/harun-fast-l…
#GAAD #Accessibility #NVDA #NVDAsr #ScreenReader #Localization #Awareness #Translation
For the new versions of @joplinapp@mastodon.social, not only has #ScreenReader compatibility for #Desktop and #Mobile been improved, there is now also an #Accessibility help page for developers who want to add new components or change existing ones:
joplinapp.org/help/dev/accessi…
#a11y #Joplin #ToDo #Inklusion #digitaleTeilhabe #GlobalAccessibilityAwarenessDay
Development: Accessibility | Joplin
Joplin has a strong focus on accessibility. It's important to make sure that new pull requests and features keep Joplin accessible. Making new components accessible: when creating new components, it... (joplinapp.org)
For whatever reason, the JAWS "Check For Updates" menu option under the Help menu is always unavailable when running Windows and JFW in a virtual machine on a Qemu-KVM hypervisor.
What I'd love to know is whether this is a bug or a feature. If it's a bug, is FS even aware of it? Are there any planned fixes? I tried to contact FS about this and got about as much as I expected: nothing.
If it's a feature, does anyone understand the rationale for making JAWS automatic updates unavailable on virtual machines?
#Accessibility #A11Y #automaticUpdates #JAWS #JFW #ScreenReader #Windows
NVDA 2025.1 Beta 5 is now available! Changes since beta 4 include:
- Updates to translations
- Fixes for reading math attributes in PDFs
- Minor improvements to the user experience of Remote Access
Read the full details and download at: nvaccess.org/post/nvda-2025-1b…
We are getting closer to NVDA 2025.1! Thank you to all those who have been trying out betas and giving us feedback, we greatly appreciate it!
#NVDA #NVDAsr #ScreenReader #Beta #PreRelease #Testing #Accessibility #NewVersion
Following today's post on the Microsoft Edge blog, I've started a page of ARIA Notify examples that currently only function in Edge Canary:
jscholes.github.io/ariaNotify.…
The blog post:
blogs.windows.com/msedgedev/20…
#accessibility #screenReader #screenReaders
Creating a more accessible web with ARIA Notify
We're excited to announce the availability, as a developer and origin trial, of ARIA Notify, a new API that's designed to make web content… (Microsoft Edge Blog)
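For anyone wanting to try the examples, here is a minimal sketch of how a call might look, assuming the shape described in the explainer and blog post: an `ariaNotify()` method on `Document` that takes a message string. The options bag is still in flux, so this sticks to the message-only form and falls back to a conventional live region elsewhere; the helper and element names are made up.

```typescript
// Hedged sketch: feature-detect the proposed ariaNotify() method (currently
// behind the Edge Canary developer/origin trial) and fall back to an ARIA
// live region everywhere else. Method name per the explainer; everything
// else here is illustrative.
type AriaNotifyCapable = Document & { ariaNotify?: (message: string) => void };

export function announce(message: string): void {
  const doc = document as AriaNotifyCapable;
  if (typeof doc.ariaNotify === "function") {
    doc.ariaNotify(message); // screen reader announcement without a live region
    return;
  }
  // Fallback: a visually hidden, polite live region (the pre-ariaNotify approach).
  let region = document.getElementById("sr-announcer");
  if (!region) {
    region = document.createElement("div");
    region.id = "sr-announcer";
    region.setAttribute("aria-live", "polite");
    Object.assign(region.style, {
      position: "absolute",
      width: "1px",
      height: "1px",
      overflow: "hidden",
      clipPath: "inset(50%)",
    });
    document.body.append(region);
  }
  region.textContent = message;
}

// Usage: announce("Message sent");
```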
Our latest In-Process blog is out: nvaccess.org/post/in-process-3…
Featuring:
- The NVDA 2025.1 Beta
- What’s New in 2025.1
- Updated NVDA Expert Certification 2025
- Gene Empowers New Zealand
- Changes for Developers
- NVDA Add-ons and API Breaking Changes
#NVDA #NVDAsr #Blog #News #Newsletter #ScreenReader #Blind #Accessibility #Update #Changes
NVDA 2025.1 Beta 3 is now available for testing. As well as all the NVDA 2025.1 updates, beta3 adds:
- Updates to translations
- Disallow new Remote Access session in Secure Mode
- Update Remote Access interface
- Add unassigned command for Remote Access settings
- SAPI 4 bug fixes
Read the full release notes (including all the 2025.1 features & fixes) & download from: nvaccess.org/post/nvda-2025-1b…
#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader
NVDA 2025.1beta3 available for testing
Beta3 of NVDA 2025.1 is now available for download and testing. For anyone who is interested in trying out what the next version of NVDA has to offer before it is officially released, we welcome yo… (NV Access)
I mean really. This is some sample NVDA #screenReader speech for reading a single checklist item in a GitHub issue. Unique accessible names on controls are important, but could they not have found an alternative like task numbers instead of making me hear the entire task four times?
"
button Move @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
check box not checked @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 checklist item
link @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
menu button collapsed subMenu Open @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 task options
"
Obviously that only works for speech users and if the tags make sense within the context of the words.
My rule of thumb is to embed them within my text if they'd make syntactic sense without the octothorpe, and add the ones that don't afterward.
I have an #accessibility question for #screenReader users.
If I use hashtags within flowing text, like #this, does that annoyingly interrupt the narration flow and should I rather list them all at the end?
Or is in-text tagging the preferable alternative to a large wall of hashtags at the end, like this:
NVDA 2025.1 Beta 2 is now available for testing. As well as all the amazing updates in NVDA 2025.1 (from Beta 1), this new beta includes updates to some translations, as well as a minor bug fix for SAPI 5 voices using rate boost. Read the full release notes (including all the 2025.1 features & fixes) and download from: nvaccess.org/post/nvda-2025-1b…
#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader
NVDA 2025.1beta2 available for testing
Beta2 of NVDA 2025.1 is now available for download and testing. For anyone who is interested in trying out what the next version of NVDA has to offer before it is officially released, we welcome yo… (NV Access)
It bothers me quite a lot that in the `ariaNotify` explainer, relating to a more robust mechanism for web apps to fire #screenReader messages, #braille is demoted to a "future consideration". Even there, it's listed under a heading of "Braille and speech markup", as though it doesn't even warrant a devoted section of its own.
Braille being treated with the same priority as speech is long overdue. We're clearly not there yet.
github.com/MicrosoftEdge/MSEdg…
#accessibility
MSEdgeExplainers/Accessibility/AriaNotify/explainer.md at main · MicrosoftEdge/MSEdgeExplainers
Home for explainer documents originated by the Microsoft Edge team - MicrosoftEdge/MSEdgeExplainers (GitHub)
In-Process is out, featuring a hint on 2025.1 beta timing, details on the updated Basic Training for NVDA training module, our recent server updates, AND what you need to know about reading info in a command prompt. Read now: nvaccess.org/post/in-process-1…
And don't forget to subscribe to get the next edition (AND notification when the beta comes out) via email: eepurl.com/iuVyjo
#NVDA #NVDAsr #ScreenReader #Accessibility #News #Newsletter #Blog
In-Process 11th April 2025
We are getting close to a beta for NVDA 2025.1. We are on track for 2025.1 Beta 1 to be released early next week. Please do subscribe to be notified by email as soon as it is available! In the mean… (NV Access)
Do you use a screen reader and read Arabic content with it? Have you ever wondered why Arabic TTS literally always sucks, being either super unresponsive or getting most things wrong all the time? I've been wanting to rant about this for ages!
Imagine if English dropped most vowels: "Th ct st n th mt" for "The cat sat on the mat" and expected you to just KNOW which vowels go where. That's basically what Arabic does all day every day! Arabic uses an abjad, not an alphabet. Basically, we mostly write consonants, and the vowels are just... assumed? Like, they are very important in speech but we don't really write them down except in very rare and special cases (children's books, religious texts, etc). No one writes them at all otherwise and that is very acceptable because the language is designed that way.
A proper Arabic TTS needs to analyze the entire sentence, maybe even the whole paragraph, because the exact same word could have different unwritten vowels depending on its location, which actually changes its form and meaning! But for screen readers, you want your TTS to be fast and responsive. And you get that by skipping all of that semantic processing. Instead it's literally just half-assed guesswork which is wrong almost all the time, so we end up hearing everything the wrong way and just cope with it.
It gets worse. What if we give the tts a single word to read (which is pretty common when you're more closely analyzing something). Let's apply that logic to English. Imagine you are the tts engine. You get presented with just 'st', with no surrounding context and have to figure out the vowels here. Is it Sit? Soot? Set? Maybe even stay? You literally don't know, but each of those might be valid even with how wildly the meaning could be different.
It's EXACTLY like that in Arabic, but much worse because it happens all the time. You highlight a word like 'كتب' (ktb) on its own. What does the TTS say? Does it guess 'kataba' (he wrote)? 'Kutiba' (it was written)? 'Kutub' (books (a freaking NOUN!))? Or maybe even 'kutubi' (my books)? The TTS literally just takes a stab in the dark, and usually defaults to the most basic verb form, 'kataba', even if the context screams 'books'!
So yeah. We're stuck with tools that make us work twice as hard just to understand our own language. You will get used to it over time, but it adds this whole extra layer of cognitive load that speakers of, say, English just don't have to deal with when using their screen readers.
#screenreader #blind #tts
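To make the single-word problem above concrete, here is a purely illustrative sketch: the unvocalised string كتب maps to several valid readings, and a context-free engine can only pick one. The candidate list and glosses are the ones from the post; nothing here is a real morphological analyser.

```typescript
// Illustrative sketch of the ambiguity: one unvocalised word, several valid
// pronunciations, no way to choose correctly without sentence-level context.
const readings: Record<string, { vocalised: string; gloss: string }[]> = {
  "كتب": [
    { vocalised: "kataba", gloss: "he wrote" },
    { vocalised: "kutiba", gloss: "it was written" },
    { vocalised: "kutub", gloss: "books" },
    { vocalised: "kutubi", gloss: "my books" },
  ],
};

// A fast, context-free engine effectively does this: pick the first/most
// common form and hope, which is exactly the guesswork described above.
function guessPronunciation(word: string): string {
  return readings[word]?.[0]?.vocalised ?? word;
}

console.log(guessPronunciation("كتب")); // "kataba", even when the context meant "books"
```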
Recent datepicker experience:
1. Control is presented as three separate spin controls, supporting the Up/Down Arrow keys to increment and decrement the value as well as manual typing. But because they're not text inputs, I can't use the Left/Right Arrow keys to review what each separate one contains, only to move between day, month, and year.
2. I tab to year.
3. I press Down Arrow, and the value is set to 2075. I'm unclear how many use cases require the year to be frequently set to 2075, but I can't imagine it's many, so this seems like a fairly ridiculous starting point.
4. I press Up Arrow, and the value gets set to 0001. The number of applications for which 0001 is a valid year is likewise vanishingly small.
5. I delete the 0001, at which point my #screenReader reports that the current value is "0". Also not a valid year.
6. Out of curiosity, I inspect the element to see which third-party component is being used to create this mess... only to find that it's a native `<input>` with `type="date"` and this is just how Google Chrome presents it.
A good reminder that #HTML is not always the most #accessible or user-friendly.
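For what it's worth, a hedged mitigation sketch: giving the native control an explicit starting value and a plausible min/max range means arrowing in the year segment at least starts from something sensible. Whether the segment spinners fully honour min/max varies by browser, so treat this as damage control rather than a fix; the label text is made up.

```typescript
// Sketch: constrain and pre-fill a native date input so keyboard adjustment
// starts from today rather than 2075 or 0001. Mitigation only; the underlying
// Chrome behaviour is unchanged.
const dateInput = document.createElement("input");
dateInput.type = "date";
dateInput.min = "1900-01-01";
dateInput.max = "2100-12-31";
dateInput.value = new Date().toISOString().slice(0, 10); // yyyy-mm-dd
dateInput.setAttribute("aria-label", "Appointment date"); // hypothetical label
document.body.append(dateInput);
```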
Can you guess what I'm reading about from this nonsensical #screenReader output? I loaded the webpage myself and not even I understand. #accessibility
"
heading level 2 How it Works
Slides carousel 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide button Previous slide
button Next slide
button current Go to slide 1
button Go to slide 2
button Go to slide 3
out of carousel link app-tutorial
link App Tutorial
heading level 2 No One Does it Alone...
"
In-Process is now available, featuring all the info on CSUN ATC 2025, Thorium Reader, how the NVDA 2025.1 Update is going, Open-Source software and a new RH Voice Update! Read it all here: nvaccess.org/post/in-process-2… and don't forget to subscribe via email: eepurl.com/iuVyjo
#NVDA #NVDAsr #ScreenReader #Accessibility #CSUNATC #CSUNATC25 #CSUN #Thorium #RHVoice #OpenSource #FOSS #FLOSS #News #Newsletter #Update
I'm having trouble signing PDF documents with a digital certificate using my #screenreader (NVDA on Windows). I can do it in Adobe Reader but it's quite cumbersome and requires sighted assistance.
Does anyone have a more accessible workflow or software recommendation for signing PDFs with a digital certificate using the keyboard and a screen reader? Any tips or advice would be greatly appreciated!
Could you please #Boost this so it reaches more people? Thank you in advance! 🙏 #Accessibility #NVDA #PDF #DigitalSignature #AssistiveTechnology @NVAccess
Resources for screen reader usage and keyboard commands:
webaim.org/articles/voiceover/
webaim.org/articles/nvda/
webaim.org/articles/jaws/
tpgi.com/basic-screen-reader-c…
dequeuniversity.com/screenread…
#a11y #screenreader #tips
Basic screen reader commands for accessibility testing - TPGi
Updated 1st Feb 2016. When you test your website with a screen reader there are a few basic commands you should know. Just remember not to make design decisions based... (Léonie Watson, TPGi)
Today I learned: If you use #Chrome and are annoyed by those "Sign in with Google" dialogs stealing keyboard focus on certain websites, you can disable it at the browser level.
In the address bar, type or paste in "chrome://settings/content/federatedIdentityApi" (without the quotes). You should land on the "Third-party sign-in" Settings page.
On that page, there'll be two radio buttons: "Sites can show sign-in prompts from identity services", and "Block sign-in prompts from identity services". Set it to the second one, and you should find that the problematic dialogs are no longer present.