Items tagged with: screenReader


NV Access is pleased to announce that version 2025.1 of NVDA, the free screen reader for Microsoft Windows, is now available for download. We encourage all users to upgrade to this version. This release introduces NVDA Remote Access, brings speech, braille, OCR & Office improvements, and adds native selection in Chrome & Edge.

Full info & Download: nvaccess.org/post/nvda-2025-1/

#NVDA #NVDAsr #ScreenReader #Accessibility #FOSS #NewVersion #Update #News #Free


"By passing on my knowledge of using NVDA to new users of the NVDA screen reader, my aim was to help people in the same boat as me, and to help sighted people in the community learn ways that they could help us out in the #community."

Like NV Access, Gene recognises the importance of #empowering people to identify needed change, and enact it! Full #interview with Gene at: nvaccess.org/post/gene_empower…

#NVDA #NVDAsr #Power #ScreenReader #Accessibility




It has been an incredible two years since NV Access founders Mick Curran & Jamie Teh featured on Australian Story: abc.net.au/news/2023-06-05/mic…

The impact NVDA has for blind people around the world has only grown & the need is as great now as ever!

You can watch the Audio Description enabled version of Australian Story: youtu.be/3i7gkN-1sAI

Regular version: youtu.be/jwHbXh3WzSw

#NVDA #NVDAsr #ScreenReader #Blind #Accessibility #FreeSoftware #FOSS #Impact #Australia #AustralianStory


#AudioMo day 5: A Quick Look At The Nintendo Switch 2 TTS Accessibility youtu.be/xt5sPvaoshc

I've just gotten hold of this console, so I don't know much yet, but I will learn more over the coming days and weeks.
This is a quick demo, recorded after only having had access to it for about 30 minutes, if that.
#Nintendo #Switch2 #ScreenReader #TTS #Accessibility


NV Access are pleased to advise that Beta (and alpha) versions of NVDA are once again available. To celebrate, we've released Beta 10 of NVDA 2025.1: nvaccess.org/post/nvda-2025-1b…

Beta 10 includes:
* Updates to translations
* Correct context help navigation for Remote Access dialogs

Thank you everyone for your patience and support, and as always with pre-release builds, please do file any issues on GitHub: github.com/nvaccess/nvda/issue…

#NVDA #NVDAsr #ScreenReader #News #PreRelease #FOSS #Beta



What screen readers do you use regularly on your Android device? I’m looking for an anecdotal idea of how popular alternative #screenreader apps on #android really are. Please boost! Thanks so much for your help. #screenreaders #a11y #accessibility #blind

  • Talkback (0 votes)
  • Prudence (0 votes)
  • Jieshuo (0 votes)
  • Other, please say what in a reply (0 votes)
  • I don’t use a screen reader (0 votes)
  • I don’t have an android (0 votes)
Poll end: 1 week ago


I’ve published Part 3 of “I Want to Love Linux. It Doesn’t Love Me Back.”

This one’s about the so-called universal interface: the console. The raw, non-GUI, text-mode TTY. The place where sighted Linux users fall back when the desktop breaks, and where blind users are supposed to do the same. Except — we can’t. Not reliably. Not safely. Not without building the entire stack ourselves.

This post covers Speakup, BRLTTY, Fenrir, and the audio subsystem hell that makes screen reading in the console a game of chance. It dives into why session-locked audio breaks espeakup, why BRLTTY fails silently and eats USB ports, why the console can be a full environment — and why it’s still unusable out of the box. And yes, it calls out the fact that if you’re deafblind, and BRLTTY doesn’t start, you’re just locked out of the machine entirely. No speech. No visuals. Just a dead black box.

There are workarounds. Scripts. Hacks. Weird client.conf magic that you run once as root, once as a user, and pray to PipeWire that it sticks. Some of this I learned from a reader of post 1. None of it is documented. None of it is standard. And none of it should be required.

This is a long one. Technical, and very real. Because the console should be the one place Linux accessibility never breaks. And it’s the one place that’s been left to rot.

Link to the post: fireborn.mataroa.blog/blog/i-w…

#Linux #Accessibility #BlindTech #BRLTTY #Speakup #Fenrir #TTY #PipeWire #ScreenReader #DisabilityTech #ConsoleComputing #LinuxAccessibility #FOSS


🌟 Excited to share Thorsten-Voice's YouTube channel! 🎥 🗣️🔊 ♿ 💬

Thorsten presents innovative TTS solutions and a variety of voice technologies, making it an excellent starting point for anyone interested in open-source text-to-speech. Whether you're a developer, accessibility advocate, or tech enthusiast, his channel offers valuable insights and resources. Don't miss out on this fantastic content! 🎬

Follow him here: @thorstenvoice
or on YouTube: youtube.com/@ThorstenMueller

#Accessibility #FLOSS #TTS #ParlerTTS #OpenSource #VoiceTech #TextToSpeech #AI #CoquiAI #VoiceAssistant #Sprachassistent #MachineLearning #AccessibilityMatters #Inclusivity #FOSS #Coqui #VoiceTechnology #KünstlicheStimme #Python #Rhasspy #STT #SpeechSynthesis #SpeechRecognition #Sprachsynthese #ArtificialVoice #VoiceCloning #Spracherkennung #CoquiTTS #voice #a11y #ScreenReader




I have already used #OpenTalk successfully for recordings and conferences with blind people.

Now, according to the project's own announcement, usability with a #ScreenReader has been further improved.

opentalk.eu/de/news/opentalk-u…
@OpenTalkMeeting@social.opentalk.eu #a11y #Inklusion #digitaleTeilhabe #blind #Video #VideoKonferenz


It is bewildering to me how many #blind people share content that they've copied from another source without fixing up issues with #screenReader readability. The latest example is an email starting with 13 lines—for NVDA and Chrome at least—of silent unicode characters at the beginning.

Actually, it's confusing why so many blind people copy and republish content from other sources rather than linking to the original, but that's a separate conversation.
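For anyone who does want to clean up copied text before republishing, a minimal Python sketch of the idea (the function name is my own; it relies on the fact that zero-width spaces, joiners, and BOMs all fall into Unicode's "Cf" format category):

```python
import unicodedata

# Minimal sketch: remove invisible Unicode "format" characters
# (zero-width spaces, joiners, BOMs) that screen readers such as
# NVDA may announce as silent lines, then drop any leading lines
# that are left empty as a result.
def strip_silent_chars(text: str) -> str:
    cleaned = "".join(
        ch for ch in text
        if unicodedata.category(ch) != "Cf"  # Cf = format characters
    )
    lines = cleaned.splitlines()
    while lines and not lines[0].strip():
        lines.pop(0)  # discard leading blank lines left behind
    return "\n".join(lines)

print(strip_silent_chars("\u200b\ufeff\nHello"))  # Hello
```

This keeps normal whitespace and printable text intact; only the invisible format characters are dropped.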


NVDA 2025.1 Beta 6 is now available!

Read the full details and download from: nvaccess.org/post/nvda-2025-1b…

Changes introduced in Beta 6:
- Updates to translations
- Fix for the COM registration fixing tool: don’t run when cancelling with alt+f4
- Minor fix for SAPI 4 voices
- Fix for Braille display detection
- Minor improvements to the user experience of Remote Access

Please continue to test and give feedback via GitHub!

#NVDA #NVDAsr #Beta #PreRelease #News #ScreenReader #Accessibility


This Global Accessibility Awareness Day we'd like to shout out to our translators! NVDA has been translated into over 55 languages to provide access to technology for blind and vision impaired people around the world. Recently we spoke with Harun in Türkiye, who shared how access, in his own language, has helped him: nvaccess.org/post/harun-fast-l…

#GAAD #Accessibility #NVDA #NVDAsr #ScreenReader #Localization #Awareness #Translation


The new versions of @joplinapp@mastodon.social not only improve #ScreenReader compatibility on #Desktop and #Mobile; there is now also an #Accessibility help page for developers who want to add new components or change existing ones:

joplinapp.org/help/dev/accessi…
#a11y #Joplin #ToDo #Inklusion #digitaleTeilhabe #GlobalAccessibilityAwarenessDay


Trying this again, since I got no answers last time, and only one confirmation that I'm not alone in seeing this behaviour:
For whatever reason, the JAWS "Check For Updates" menu option under the Help menu is always unavailable when running Windows and JFW in a virtual machine on a QEMU/KVM hypervisor.
What I'd love to know is whether this is a bug or a feature. If it's a bug, is FS even aware of it? Are there any planned fixes? I tried to contact FS about this and got about as much as I expected: nothing.
If it's a feature, does anyone understand the rationale for making JFW automatic updates unavailable on virtual machines?
#Accessibility #A11Y #automaticUpdates #JAWS #JFW #ScreenReader #Windows


NVDA 2025.1 Beta 5 is now available! Changes since beta 4 include:

- Updates to translations
- Fixes for reading math attributes in PDFs
- Minor improvements to the user experience of Remote Access

Read the full details and download at: nvaccess.org/post/nvda-2025-1b…

We are getting closer to NVDA 2025.1! Thank you to all those who have been trying out betas and giving us feedback, we greatly appreciate it!

#NVDA #NVDAsr #ScreenReader #Beta #PreRelease #Testing #Accessibility #NewVersion



Can anyone recommend a screen-reader-accessible, self-hosted package that provides a web interface reporting the status of multiple machines: up, down, maintenance, etc.? I think Uptime Kuma can do this, so I will check that out, but I'm also very interested in any recommendations. Please boost for reach. Much appreciated.
#Linux #OpenSource #Self-Hosted #StatusReporting #WebInterface #ScreenReader #Accessible #A11Y



NVDA 2025.1 Beta 3 is now available for testing. As well as all the NVDA 2025.1 updates, Beta 3 adds:
- Updates to translations
- Disallow new Remote Access session in Secure Mode
- Update Remote Access interface
- Add unassigned command for Remote Access settings
- SAPI 4 bug fixes

Read the full release notes (including all the 2025.1 features & fixes) & download from: nvaccess.org/post/nvda-2025-1b…

#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader


I mean really. This is some sample NVDA #screenReader speech for reading a single checklist item in a GitHub issue. Unique accessible names on controls are important, but could they not have found an alternative like task numbers instead of making me hear the entire task four times?

"
button Move @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
check box not checked @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 checklist item
link @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
menu button collapsed subMenu Open @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 task options
"

#accessibility


as with most things it comes down to preference. however having them in your text directly means that a #ScreenReader user might choose to lower the #Verbosity of the punctuation they hear, thus not even realising that there were 2 tags there already.
Obviously that only works for speech users and if the tags make sense within the context of the words.
My rule of thumb is to embed them within my text if they'd make syntactical sense without the octothorpe, and add the ones that don't afterward.


I have an #accessibility question for #screenReader users.

If I use hashtags within flowing text, like #this, does that annoyingly interrupt the narration flow and should I rather list them all at the end?

Or is in-text tagging the preferable alternative to a large wall of hashtags at the end, like this:

#blind #AskFedi #screenReaders


NVDA 2025.1 Beta 2 is now available for testing. As well as all the amazing updates in NVDA 2025.1 (from Beta 1), this new beta includes updates to some translations, as well as a minor bug fix for SAPI 5 voices using rate boost. Read the full release notes (including all the 2025.1 features & fixes) and download from: nvaccess.org/post/nvda-2025-1b…

#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader


Hi guys, I'm a totally #Blind woman interested in learning French to start with. On coming back to #Duolingo, I found that the first thing you have to do right off the bat is "choose the correct image". You can choose based on the audio you are hearing, absolutely, but you have no way of actually knowing what the so-called correct image is. That makes it pretty much impossible, from an accessibility standpoint, to know what words are being introduced and/or what they mean in English. Does anyone know of a good language learning app that is actually accessible? #A11y #technology #Language #French #Learning #Screenreader


It bothers me quite a lot that in the `ariaNotify` explainer, relating to a more robust mechanism for web apps to fire #screenReader messages, #braille is demoted to a "future consideration". Even there, it's listed under a heading of "Braille and speech markup", as though it doesn't even warrant a devoted section of its own.

Braille being treated with the same priority as speech is long overdue. We're clearly not there yet.

github.com/MicrosoftEdge/MSEdg…
#accessibility


In-Process is out, featuring a hint on 2025.1 beta timing, details on the updated Basic Training for NVDA training module, our recent server updates, AND what you need to know about reading info in a command prompt. Read now: nvaccess.org/post/in-process-1…

And don't forget to subscribe to get the next edition (AND notification when the beta comes out) via email: eepurl.com/iuVyjo

#NVDA #NVDAsr #ScreenReader #Accessibility #News #Newsletter #Blog


If I decide to start blogging again (it's been years), what's more #screenreader #accessible: Ghost or micro.blog or something else? Don't even mention #wordpress; my desire to run a PHP app for any reason is zero. Plus the Gutenberg stuff. #a11y


Do you use a screen reader and read Arabic content with it? Have you ever wondered why Arabic TTS literally always sucks, being either super unresponsive or getting most things wrong all the time? I've been wanting to rant about this for ages!
Imagine if English dropped most vowels: "Th ct st n th mt" for "The cat sat on the mat" and expected you to just KNOW which vowels go where. That's basically what Arabic does all day, every day! Arabic uses an abjad, not an alphabet. Basically, we mostly write consonants, and the vowels are just... assumed. They are very important in speech, but we don't really write them down except in very rare and special cases (children's books, religious texts, etc.). No one writes them otherwise, and that is perfectly acceptable because the language is designed that way.
A proper Arabic TTS needs to analyze the entire sentence, maybe even the whole paragraph, because the exact same word could have different unwritten vowels depending on its location, which actually changes its form and meaning! But for screen readers, you want your TTS to be fast and responsive, and you get that by skipping all of that semantic processing. Instead it's literally just half-assed guesswork that is wrong almost all the time, so we end up hearing everything the wrong way and just cope with it.
It gets worse. What if we give the TTS a single word to read (which is pretty common when you're more closely analyzing something)? Let's apply that logic to English. Imagine you are the TTS engine. You get presented with just 'st', with no surrounding context, and have to figure out the vowels. Is it sit? Soot? Set? Maybe even stay? You literally don't know, yet each of those might be valid, however wildly different the meanings.
It's EXACTLY like that in Arabic, but much worse because it happens all the time. You highlight a word like 'كتب' (ktb) on its own. What does the TTS say? Does it guess 'kataba' (he wrote)? 'Kutiba' (it was written)? 'Kutub' (books, a freaking NOUN!)? Or maybe even 'kutubi' (my books)? The TTS literally just takes a stab in the dark, and usually defaults to the most basic verb form, 'kataba', even if the context screams 'books'!
So yeah. We're stuck with tools that make us work twice as hard just to understand our own language. You get used to it over time, but it adds a whole extra layer of cognitive load that speakers of, say, English just don't have to deal with when using their screen readers.
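The single-word failure mode described above can be sketched in a few lines of Python (the lexicon and function are purely illustrative; no real TTS engine works from a table this small):

```python
# Toy illustration of abjad ambiguity: one unvocalized consonant
# skeleton maps to several fully vowelled readings. The lexicon is
# made up for illustration; real engines use far larger dictionaries.
READINGS = {
    "ktb": [                           # Arabic كتب, written without vowels
        ("kataba", "he wrote"),        # basic verb form
        ("kutiba", "it was written"),  # passive verb
        ("kutub", "books"),            # noun
    ],
}

def naive_tts_guess(skeleton: str) -> str:
    """Mimic a context-free TTS: always pick the first, most basic
    reading, no matter what the surrounding sentence implies."""
    readings = READINGS.get(skeleton)
    return readings[0][0] if readings else skeleton

# Highlighting the bare word always yields the default verb form,
# even when the context clearly calls for 'kutub' (books).
print(naive_tts_guess("ktb"))  # kataba
```

Doing better than this requires exactly the sentence- or paragraph-level analysis described above, which is the processing that fast, responsive screen reader TTS engines skip.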

#screenreader #blind #tts


Recent datepicker experience:
1. Control is presented as three separate spin controls, supporting the Up/Down Arrow keys to increment and decrement the value as well as manual typing. But because they're not text inputs, I can't use the Left/Right Arrow keys to review what each separate one contains, only to move between day, month, and year.
2. I tab to year.
3. I press Down Arrow, and the value is set to 2075. I'm unclear how many use cases require the year to be frequently set to 2075, but I can't imagine it's many so this seems like a fairly ridiculous starting point.
4. I press Up Arrow, and the value gets set to 0001. The number of applications for which 0001 is a valid year is likewise vanishingly small.
5. I delete the 0001, at which point my #screenReader reports that the current value is "0". Also not a valid year.
6. Out of curiosity, I inspect the element to see which third-party component is being used to create this mess... only to find that it's a native `<input>` with `type="date"` and this is just how Google Chrome presents it.

A good reminder that #HTML is not always the most #accessible or user-friendly.

#accessibility #usability


So, an update on Guide: I've exchanged some long emails with Andrew, the lead developer. He's open to dialogue and to moving the project in the right direction: well-scoped single tasks, more granular controls and permissions, etc. He doesn't strike me as an #AI maximalist, "it can and should do everything all the time" kind of guy. He's also investigating deeper screen reader interaction, to let AI do just the things it's best at that we can't do. I stand by my thoughts that the project isn't yet ready for prime time. But as someone else in the thread said, I don't think it should be written off entirely as yet another "AI will save us from inaccessibility" hype train. There is, in fact, something here if it gets polished and scoped a bit more. #blind #screenreader #a11y


Can you guess what I'm reading about from this nonsensical #screenReader output? I loaded the webpage myself and not even I understand. #accessibility

"
heading level 2 How it Works
Slides carousel 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide button Previous slide
button Next slide
button current Go to slide 1
button Go to slide 2
button Go to slide 3
out of carousel link app-tutorial
link App Tutorial
heading level 2 No One Does it Alone...
"



Hello #Blind and #VisuallyImpaired community! 👋
I'm having trouble signing PDF documents with a digital certificate using my #screenreader (NVDA on Windows). I can do it in Adobe Reader but it's quite cumbersome and requires sighted assistance.
Does anyone have a more accessible workflow or software recommendation for signing PDFs with a digital certificate using the keyboard and a screen reader? Any tips or advice would be greatly appreciated!
Could you please #Boost this so it reaches more people? Thank you in advance! 🙏 #Accessibility #NVDA #PDF #DigitalSignature #AssistiveTechnology @NVAccess