Items tagged with: screenreader
#Linux #OpenSource #Self-Hosted #StatusReporting #WebInterface #ScreenReader #Accessible #A11Y
Our latest In-Process blog is out: nvaccess.org/post/in-process-3…
Featuring
- The NVDA 2025.1 Beta
- What’s New in 2025.1
- Updated NVDA Expert Certification 2025
- Gene Empowers New Zealand
- Changes for Developers
- NVDA Add-ons and API Breaking Changes
#NVDA #NVDAsr #Blog #News #Newsletter #ScreenReader #Blind #Accessibility #Update #Changes
NVDA 2025.1 Beta 3 is now available for testing. As well as all the NVDA 2025.1 updates, beta3 adds:
- Updates to translations
- Disallow new Remote Access session in Secure Mode
- Update Remote Access interface
- Add unassigned command for Remote Access settings
- SAPI 4 bug fixes
Read the full release notes (including all the 2025.1 features & fixes) & download from: nvaccess.org/post/nvda-2025-1b…
#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader
NVDA 2025.1beta3 available for testing
Beta3 of NVDA 2025.1 is now available for download and testing. For anyone who is interested in trying out what the next version of NVDA has to offer before it is officially released, we welcome yo… (NV Access)
Which videos, webinars, blog posts, or training courses would you recommend to people who want to learn how to get started with using screen readers?
I was just asked this. Off the top of my head, I thought of these articles. Though they're primarily about laying the groundwork first. Otherwise none of it is much use anyway.
"The cognitive model of blind people in digital user interfaces" (build up basic knowledge first)
netz-barrierefrei.de/wordpress…
1/3
The cognitive model of blind people in digital user interfaces – Digitale Barrierefreiheit
A cognitive model describes how something is perceived. To be able to process complex objects, we need a representation of how it is structu… (Digitale Barrierefreiheit)
I mean really. This is some sample NVDA #screenReader speech for reading a single checklist item in a GitHub issue. Unique accessible names on controls are important, but could they not have found an alternative like task numbers instead of making me hear the entire task four times?
"
button Move @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
check box not checked @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 checklist item
link @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
menu button collapsed subMenu Open @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 task options
"
Obviously that only works for speech users and if the tags make sense within the context of the words.
My rule of thumb is to embed them within my text if they'd make syntactical sense without the octothorpe, and add the ones that don't afterward.
I have an #accessibility question for #screenReader users.
If I use hashtags within flowing text, like #this, does that annoyingly interrupt the narration flow and should I rather list them all at the end?
Or is in-text tagging the preferable alternative to a large wall of hashtags at the end, like this:
NVDA 2025.1 Beta 2 is now available for testing. As well as all the amazing updates in NVDA 2025.1 (from Beta 1), this new beta includes updates to some translations, as well as a minor bug fix for SAPI 5 voices using rate boost. Read the full release notes (including all the 2025.1 features & fixes) and download from: nvaccess.org/post/nvda-2025-1b…
#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader
NVDA 2025.1beta2 available for testing
Beta2 of NVDA 2025.1 is now available for download and testing. For anyone who is interested in trying out what the next version of NVDA has to offer before it is officially released, we welcome yo… (NV Access)
It bothers me quite a lot that in the `ariaNotify` explainer, relating to a more robust mechanism for web apps to fire #screenReader messages, #braille is demoted to a "future consideration". Even there, it's listed under a heading of "Braille and speech markup", as though it doesn't even warrant a devoted section of its own.
Braille being treated with the same priority as speech is long overdue. We're clearly not there yet.
github.com/MicrosoftEdge/MSEdg…
#accessibility
MSEdgeExplainers/Accessibility/AriaNotify/explainer.md at main · MicrosoftEdge/MSEdgeExplainers
Home for explainer documents originated by the Microsoft Edge team - MicrosoftEdge/MSEdgeExplainers (GitHub)
In-Process is out, featuring a hint on 2025.1 beta timing, details on the updated Basic Training for NVDA training module, our recent server updates, AND what you need to know about reading info in a command prompt. Read now: nvaccess.org/post/in-process-1…
And don't forget to subscribe to get the next edition (AND notification when the beta comes out) via email: eepurl.com/iuVyjo
#NVDA #NVDAsr #ScreenReader #Accessibility #News #Newsletter #Blog
In-Process 11th April 2025
We are getting close to a beta for NVDA 2025.1. We are on track for 2025.1 Beta 1 to be released early next week. Please do subscribe to be notified by email as soon as it is available! In the mean… (NV Access)
Do you use a screen reader and read Arabic content with it? Have you ever wondered why Arabic TTS literally always sucks, being either super unresponsive or getting most things wrong all the time? I've been wanting to rant about this for ages!
Imagine if English dropped most vowels: "Th ct st n th mt" for "The cat sat on the mat" and expected you to just KNOW which vowels go where. That's basically what Arabic does all day every day! Arabic uses an abjad, not an alphabet. Basically, we mostly write consonants, and the vowels are just... assumed? Like, they are very important in speech but we don't really write them down except in very rare and special cases (children's books, religious texts, etc). No one writes them at all otherwise and that is very acceptable because the language is designed that way.
A proper Arabic TTS needs to analyze the entire sentence, maybe even the whole paragraph, because the exact same word could have different unwritten vowels depending on its location, which actually changes its form and meaning! But for screen readers, you want your TTS to be fast and responsive. And you do that by skipping all of that semantic processing. Instead it's literally just half-assed guesswork which is wrong almost all the time, so we end up hearing everything the wrong way and just cope with it.
It gets worse. What if we give the TTS a single word to read (which is pretty common when you're more closely analyzing something)? Let's apply that logic to English. Imagine you are the TTS engine. You get presented with just 'st', with no surrounding context, and have to figure out the vowels. Is it sit? Soot? Set? Maybe even stay? You literally don't know, but each of those might be valid, even though the meanings are wildly different.
It's EXACTLY like that in Arabic, but much worse because it happens all the time. You highlight a word like 'كتب' (ktb) on its own. What does the TTS say? Does it guess 'kataba' (he wrote)? 'Kutiba' (it was written)? 'Kutub' (books (a freaking NOUN!))? Or maybe even 'kutubi' (my books)? The TTS literally just takes a stab in the dark, and usually defaults to the most basic verb form, 'kataba', even if the context screams 'books'!
So yeah. We're stuck with tools that make us work twice as hard just to understand our own language. You will get used to it over time, but it adds this whole extra layer of cognitive load that speakers of, say, English just don't have to deal with when using their screen readers.
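To make the ambiguity concrete, here's a toy sketch in Python. The lookup table and function name are hypothetical illustrations (real engines use statistical models, not dictionaries), but the failure mode is the one described above: one written skeleton maps to several valid readings, and a fast, context-free engine just picks a default.

```python
# Hypothetical lookup table for illustration only: one consonantal
# skeleton ("ktb"), several valid vocalized readings with different meanings.
READINGS = {
    "كتب": [
        ("kataba", "he wrote"),
        ("kutiba", "it was written"),
        ("kutub", "books"),
        ("kutubi", "my books"),
    ],
}

def naive_tts_reading(word: str) -> str:
    """Mimic a fast, context-free TTS: always return the first
    (most basic verb) reading, ignoring sentence context entirely."""
    candidates = READINGS.get(word)
    return candidates[0][0] if candidates else word

# Even when the surrounding sentence clearly means "books",
# the naive engine still says the default verb form:
print(naive_tts_reading("كتب"))  # kataba
```

Resolving the ambiguity properly would mean analyzing the whole sentence (or paragraph) before speaking a single word, which is exactly the responsiveness trade-off the rant describes.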
#screenreader #blind #tts
Recent datepicker experience:
1. Control is presented as three separate spin controls, supporting the Up/Down Arrow keys to increment and decrement the value as well as manual typing. But because they're not text inputs, I can't use the Left/Right Arrow keys to review what each separate one contains, only to move between day, month, and year.
2. I tab to year.
3. I press Down Arrow, and the value is set to 2075. I'm unclear how many use cases require the year to be frequently set to 2075, but I can't imagine it's many so this seems like a fairly ridiculous starting point.
4. I press Up Arrow, and the value gets set to 0001. The number of applications for which 0001 is a valid year is likewise vanishingly small.
5. I delete the 0001, at which point my #screenReader reports that the current value is "0". Also not a valid year.
6. Out of curiosity, I inspect the element to see which third-party component is being used to create this mess... only to find that it's a native `<input>` with `type="date"` and this is just how Google Chrome presents it.
A good reminder that #HTML is not always the most #accessible or user-friendly.
Can you guess what I'm reading about from this nonsensical #screenReader output? I loaded the webpage myself and not even I understand. #accessibility
"
heading level 2 How it Works
Slides carousel 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide button Previous slide
button Next slide
button current Go to slide 1
button Go to slide 2
button Go to slide 3
out of carousel link app-tutorial
link App Tutorial
heading level 2 No One Does it Alone...
"
In-Process is now available, featuring all the info on CSUN ATC 2025, Thorium Reader, how the NVDA 2025.1 Update is going, Open-Source software and a new RH Voice Update! Read it all here: nvaccess.org/post/in-process-2… and don't forget to subscribe via email: eepurl.com/iuVyjo
#NVDA #NVDAsr #ScreenReader #Accessibility #CSUNATC #CSUNATC25 #CSUN #Thorium #RHVoice #OpenSource #FOSS #FLOSS #News #Newsletter #Update
I'm having trouble signing PDF documents with a digital certificate using my #screenreader (NVDA on Windows). I can do it in Adobe Reader but it's quite cumbersome and requires sighted assistance.
Does anyone have a more accessible workflow or software recommendation for signing PDFs with a digital certificate using the keyboard and a screen reader? Any tips or advice would be greatly appreciated!
Could you please #Boost this so it reaches more people? Thank you in advance! 🙏 #Accessibility #NVDA #PDF #DigitalSignature #AssistiveTechnology @NVAccess
Resources for screen reader usage and keyboard commands:
webaim.org/articles/voiceover/
webaim.org/articles/nvda/
webaim.org/articles/jaws/
tpgi.com/basic-screen-reader-c…
dequeuniversity.com/screenread…
#a11y #screenreader #tips
Basic screen reader commands for accessibility testing - TPGi
Updated 1st Feb 2016. When you test your website with a screen reader there are a few basic commands you should know. Just remember not to make design decisions based… (Léonie Watson, TPGi)
Today I learned: If you use #Chrome and are annoyed by those "Sign in with Google" dialogs stealing keyboard focus on certain websites, you can disable it at the browser level.
In the address bar, type or paste in "chrome://settings/content/federatedIdentityApi" (without the quotes). You should land on the "Third-party sign-in" settings page.
On that page, there'll be two radio buttons: "Sites can show sign-in prompts from identity services", and "Block sign-in prompts from identity services". Set it to the second one, and you should find that the problematic dialogs are no longer present.
What's the state of #matrix, #xmpp and #IRC as far as #screenReader -accessible clients are concerned? Desktop (Windows, Mac) and mobile (iOS, Android).
Hoping for some input, please feel free to boost. As far as I know:
Matrix does not have a lightweight, fully accessible client for desktop, but one could be modified, such as #gomuks. On mobile, Element has scrolling issues, which is unacceptable for large rooms.
XMPP has accessible desktop clients (I used to run #Adium on the Mac), also #WeeChat. No idea about mobile.
IRC is perhaps the one which everything supports on desktop, from #MirandaIM through #WeeChat to the old Freedom Chat, which I could probably rewrite if I had to. Also not sure about mobile, but it would definitely need push notifications, because we can't expect people to stay constantly online on the go. #a11y #accessibility
A lot of #accessibility issues are easy to visualize: a missing ramp in front of a building, bad contrast, missing captions etc. but #screenReader accessibility is a lot more nebulous because there's actually not that much reading of the screen happening. I can't "point" at a screen reader accessibility issue because it happens behind the curtain, in the land of metadata, APIs and standards, rarely on the actual screen, which also makes it more difficult to "visualize" for devs. hrmm.
Our In-Process blog is out! Featuring:
- Update on NVDA 2025.1
- Planning for CSUN
- What's on the web
- Reading paragraphs in Braille
And bonus history of the Pilcrow! (Ok I was interested)
nvaccess.org/post/in-process-2…
#NVDA #NVDAsr #ScreenReader #Blog #News #Newsletter #Typography #CSUNATC
In-Process 27th February 2025
Welcome to another In-Process, whether you are reading on the web, or via email. This time around we cover: NVDA 2025.1, Planning for CSUN, What's on the web, Reading paragraphs in Braille, NVDA 2025.1… (NV Access)
It's modern, in rapid development, has some great #screenReader #accessibility features, and overall I like it very much.
Adding meaningful alt-text is not only important for everyone using a screen reader, but also an essential #inclusion and #accessibility requirement. Alt-text is also searchable and used by filters. People who prefer to filter certain people and topics for mental health and other reasons can't filter memes or images without it. Please use alt-text and CWs generously 🙏
Added bonus: with added alt-text you can find images in your own posts with "from:me" and people are more likely to boost your posts.
In-Process is out, featuring:
- The NV Access 2025 Roadmap
- Thanks to our sponsors
- Corporate support with Benevity
- Our new VPAT & multiple key press timeout
Read it all here: nvaccess.org/post/in-process-7…
#NVDA #NVDAsr #ScreenReader #Accessibility #Support #CorporateGiving #VPAT #Blog #News #Newsletter
Be wary when adding additional context only for #screenReader users. An example:
Say you're working on an e-commerce site, and some products have two prices to show how great a sale discount is. The before and after is made visually apparent via some aspect of text formatting, and you want to make it explicit for screen reader users too.
The first step is to ask if this is necessary. If a user encounters two consecutive prices and one is lower than the other, they may intuitively understand what's going on without any explicit signposting, and can verify how much they're going to pay during the checkout process. Only your users can provide this verdict.
If it's determined that some additional context is helpful, you could format it as something like: "Was $14.99, now $8.99" (optionally swapping the prices). It's short and punchy in braille and speech, perfectly descriptive of the situation at hand, and mirrors how it may be spoken out loud on an ad.
Resist the temptation to go further than this. You do not need to say "original price: $14.99, current sale price: $8.99". This is much longer and more verbose, while adding nothing. It also implies that you think screen reader users need to be told what a price is and explained the concept of a sale, even though you're not doing so for other audiences.
You also don't need to spell out the word "dollars", format the price in words, repeat the product name, and so on. If you find yourself with screen-reader-only text like: "The current price of 500 Grams of Premium Oolong Tea was fourteen dollars and ninety-nine cents, and is now on sale for eight dollars and ninety-nine cents", it has gone way too far.
In short: Set out to identify the problems that actually need solving, and only solve those problems.
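The recommended format above can be sketched as a tiny helper. The function name and fixed two-decimal rounding are my own assumptions for illustration, not from any real codebase; the point is simply that the concise form mirrors spoken language instead of spelling everything out.

```python
def sale_price_label(was: float, now: float) -> str:
    """Concise screen-reader-friendly price context:
    'Was $14.99, now $8.99' rather than a long-winded explanation
    of what a price or a sale is."""
    return f"Was ${was:.2f}, now ${now:.2f}"

print(sale_price_label(14.99, 8.99))  # Was $14.99, now $8.99
```

However this text is attached (visually hidden text, an accessible name, etc.), the same rule applies: short, punchy, and descriptive, in both speech and braille.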
In-Process is out - featuring news on NVDA 2024.4.2, our new add-on survey, a very successful SPEVI 2025 conference, and a User's guide: What to do if your add-on breaks?
Read the full issue now at:
nvaccess.org/post/in-process-2…
and remember, you can now subscribe to receive In-Process via email at: eepurl.com/iuVyjo
#NVDA #NVDAsr #Blog #News #Newsletter #WhatsOn #ScreenReader #Accessibility
In-Process 20th January 2025
Welcome to 2025! And to those subscribed to receive In-Process via email, welcome to the first edition direct to your inbox! We hope you all had a bit of a break and some time with loved ones. We a… (NV Access)