Items tagged with: Screenreader



Can anyone recommend a screen-reader-accessible, self-hosted package that provides a web interface communicating the status of multiple machines? Up, down, maintenance, etc.? I think UptimeKuma can do this, so I'll check that out, but I'm also very interested in any recommendations. Please boost for reach. Much appreciated.
#Linux #OpenSource #Self-Hosted #StatusReporting #WebInterface #ScreenReader #Accessible #A11Y



NVDA 2025.1 Beta 3 is now available for testing. As well as all the NVDA 2025.1 updates, Beta 3 adds:
- Updates to translations
- Disallow new Remote Access session in Secure Mode
- Update Remote Access interface
- Add unassigned command for Remote Access settings
- SAPI 4 bug fixes

Read the full release notes (including all the 2025.1 features & fixes) & download from: nvaccess.org/post/nvda-2025-1b…

#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader


Which videos, webinars, blog articles, or training courses would you recommend to people who want to learn how to get started with screen readers?

I was just asked this. Off the top of my head, I thought of these articles, though they're mostly about laying the groundwork first. Without that, none of it is much use anyway.

"The cognitive model of blind people with digital user interfaces" (build up basic knowledge first)

netz-barrierefrei.de/wordpress…

#Screenreader

1/3


I mean really. This is some sample NVDA #screenReader speech for reading a single checklist item in a GitHub issue. Unique accessible names on controls are important, but could they not have found an alternative like task numbers instead of making me hear the entire task four times?

"
button Move @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
check box not checked @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 checklist item
link @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25
menu button collapsed subMenu Open @jscholes will generate initial list of topics from 4 most recent design reviews and share the output for further review and refinement: April 25 task options
"

#accessibility


as with most things it comes down to preference. however having them in your text directly means that a #ScreenReader user might choose to lower the #Verbosity of the punctuation they hear, thus not even realising that there were 2 tags there already.
Obviously that only works for speech users and if the tags make sense within the context of the words.
My rule of thumb is to embed them within my text if they'd make syntactical sense without the octothorpe, and add the ones that don't afterward.


I have an #accessibility question for #screenReader users.

If I use hashtags within flowing text, like #this, does that annoyingly interrupt the narration flow and should I rather list them all at the end?

Or is in-text tagging the preferable alternative to a large wall of hashtags at the end, like this:

#blind #AskFedi #screenReaders


NVDA 2025.1 Beta 2 is now available for testing. As well as all the amazing updates in NVDA 2025.1 (from Beta 1), this new beta includes updates to some translations, as well as a minor bug fix for SAPI 5 voices using rate boost. Read the full release notes (including all the 2025.1 features & fixes) and download from: nvaccess.org/post/nvda-2025-1b…

#NVDA #NVDAsr #Update #Beta #FLOSS #FOSS #PreRelease #News #Testing #Test #FreeSoftware #ScreenReader


Hi guys, I'm a totally #Blind woman interested in learning French to start with. Coming back to #Duolingo, I found that the first thing you have to do right off the bat is "choose the correct image". You can choose based on the audio you're hearing, absolutely, but you have no way of actually knowing what the so-called correct image is, making it pretty much impossible, from an accessibility standpoint, to know what words are being introduced and/or what they mean in English. Does anyone know of a good language-learning app that is actually accessible? #A11y #technology #Language #French #Learning #Screenreader


It bothers me quite a lot that in the `ariaNotify` explainer, relating to a more robust mechanism for web apps to fire #screenReader messages, #braille is demoted to a "future consideration". Even there, it's listed under a heading of "Braille and speech markup", as though it doesn't even warrant a devoted section of its own.

Braille being treated with the same priority as speech is long overdue. We're clearly not there yet.

github.com/MicrosoftEdge/MSEdg…
#accessibility


In-Process is out, featuring a hint on 2025.1 beta timing, details on the updated Basic Training for NVDA training module, our recent server updates, AND what you need to know about reading info in a command prompt. Read now: nvaccess.org/post/in-process-1…

And don't forget to subscribe to get the next edition (AND notification when the beta comes out) via email: eepurl.com/iuVyjo

#NVDA #NVDAsr #ScreenReader #Accessibility #News #Newsletter #Blog


If I decide to start blogging again (it's been years), what's more #screenreader #accessible: Ghost or micro.blog or something else? Don't even mention #wordpress; my desire to run a PHP app for any reason is zero. Plus the Gutenberg stuff. #a11y


Do you use a screen reader and read Arabic content with it? Have you ever wondered why Arabic TTS literally always sucks, being either super unresponsive or getting most things wrong all the time? I've been wanting to rant about this for ages!
Imagine if English dropped most vowels: "Th ct st n th mt" for "The cat sat on the mat" and expected you to just KNOW which vowels go where. That's basically what Arabic does all day, every day! Arabic uses an abjad, not an alphabet. Basically, we mostly write consonants, and the vowels are just... assumed? They're very important in speech, but we don't write them down except in very rare and special cases (children's books, religious texts, etc.). No one writes them otherwise, and that's perfectly acceptable because the language is designed that way.
A proper Arabic TTS needs to analyze the entire sentence, maybe even the whole paragraph, because the exact same word can have different unwritten vowels depending on its location, which actually changes its form and meaning! But for screen readers, you want your TTS to be fast and responsive, and you get that by skipping all of that semantic processing. Instead it's literally just half-assed guesswork that's wrong almost all the time, so we end up hearing everything the wrong way and just coping with it.
It gets worse. What if we give the TTS a single word to read (which is pretty common when you're closely analyzing something)? Let's apply that logic to English. Imagine you're the TTS engine. You get presented with just 'st', with no surrounding context, and have to figure out the vowels. Is it sit? Soot? Set? Maybe even stay? You literally don't know, but each of those might be valid, even though the meanings are wildly different.
It's EXACTLY like that in Arabic, but much worse, because it happens all the time. You highlight a word like 'كتب' (ktb) on its own. What does the TTS say? Does it guess 'kataba' (he wrote)? 'Kutiba' (it was written)? 'Kutub' (books, a freaking NOUN!)? Or maybe even 'kutubi' (my books)? The TTS literally just takes a stab in the dark, and usually defaults to the most basic verb form, 'kataba', even if the context screams 'books'!
So yeah. We're stuck with tools that make us work twice as hard just to understand our own language. You get used to it over time, but it adds a whole extra layer of cognitive load that speakers of, say, English just don't have to deal with when using their screen readers.
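To make that ambiguity concrete for non-Arabic readers, here's a toy Python sketch. The lexicon is just the 'كتب' example from above, hard-coded for illustration; a real TTS front end would need full morphological analysis, not a lookup table.

```python
# Toy lexicon: one unvocalized spelling maps to several readings.
# Entries are illustrative only, taken from the 'كتب' example.
AMBIGUOUS = {
    "كتب": [  # written consonants: k-t-b
        ("kataba", "he wrote"),
        ("kutiba", "it was written"),
        ("kutub", "books"),
        ("kutubi", "my books"),
    ],
}

def possible_readings(word):
    """Return every plausible vocalization for an isolated word."""
    return AMBIGUOUS.get(word, [(word, "unknown")])

# With no sentence context, all four readings are equally "valid".
for pronunciation, gloss in possible_readings("كتب"):
    print(f"{pronunciation}: {gloss}")
```

A context-free engine has to pick one of those four, which is exactly the stab in the dark described above.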

#screenreader #blind #tts


Recent datepicker experience:
1. Control is presented as three separate spin controls, supporting the Up/Down Arrow keys to increment and decrement the value as well as manual typing. But because they're not text inputs, I can't use the Left/Right Arrow keys to review what each separate one contains, only to move between day, month, and year.
2. I tab to year.
3. I press Down Arrow, and the value is set to 2075. I'm unclear how many use cases require the year to be frequently set to 2075, but I can't imagine it's many so this seems like a fairly ridiculous starting point.
4. I press Up Arrow, and the value gets set to 0001. The number of applications for which 0001 is a valid year is likewise vanishingly small.
5. I delete the 0001, at which point my #screenReader reports that the current value is "0". Also not a valid year.
6. Out of curiosity, I inspect the element to see which third-party component is being used to create this mess... only to find that it's a native `<input>` with `type="date"` and this is just how Google Chrome presents it.

A good reminder that #HTML is not always the most #accessible or user-friendly.

#accessibility #usability


So an update on Guide: I've exchanged some long emails with Andrew, the lead developer. He's open to dialogue, and to moving the project in the right direction: well-scoped single tasks, more granular controls and permissions, etc. He doesn't strike me as an "#AI can and should do everything all the time" maximalist kind of guy. He's also investigating deeper screen reader interaction, to let AI just do the things we can't do that it's best at. I stand by my thoughts that the project isn't yet ready for prime time. But as someone else in the thread said, I don't think it should be written off entirely as yet another "AI will save us from inaccessibility" hype train. There is, in fact, something here if it gets polished and scoped a bit more. #blind #screenreader #a11y


Can you guess what I'm reading about from this nonsensical #screenReader output? I loaded the webpage myself and not even I understand. #accessibility

"
heading level 2 How it Works
Slides carousel 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide 1 / 3 slide
out of slide 2 / 3 slide graphic How it works
out of slide 3 / 3 slide
out of slide button Previous slide
button Next slide
button current Go to slide 1
button Go to slide 2
button Go to slide 3
out of carousel link app-tutorial
link App Tutorial
heading level 2 No One Does it Alone...
"



Hello #Blind and #VisuallyImpaired community! 👋
I'm having trouble signing PDF documents with a digital certificate using my #screenreader (NVDA on Windows). I can do it in Adobe Reader but it's quite cumbersome and requires sighted assistance.
Does anyone have a more accessible workflow or software recommendation for signing PDFs with a digital certificate using the keyboard and a screen reader? Any tips or advice would be greatly appreciated!
Could you please #Boost this so it reaches more people? Thank you in advance! 🙏 #Accessibility #NVDA #PDF #DigitalSignature #AssistiveTechnology @NVAccess



Today I learned: If you use #Chrome and are annoyed by those "Sign in with Google" dialogs stealing keyboard focus on certain websites, you can disable it at the browser level.

In the address bar, type or paste in "chrome://settings/content/federatedIdentityApi" (without the quotes). You should land on the "Third-party sign-in" settings page.

On that page, there'll be two radio buttons: "Sites can show sign-in prompts from identity services", and "Block sign-in prompts from identity services". Set it to the second one, and you should find that the problematic dialogs are no longer present.

#accessibility #screenReader


Tried the free edition of the #ZDSR #screenreader because I was curious. I will say that for my use case, if #ZDSR were fully documented in English, it would compete a lot more directly with #nvda than #jaws does; it runs fine in a virtual machine and has a bunch of stuff built in that I use NVDA add-ons for. If it had an English website, I'd probably grab a copy for the entirely reasonable price. But I'm not willing to purchase through translation.


The app for the TP-Link Deco mesh system is nice and #screenreader #accessible if you're in the market for a router. More than can be said for Linksys Velop. Amplifi was okay, but the TP-Link app is actually better, at least on #IOS with #voiceover. I haven't had the system long enough to give a review, other than that setup was quick and all the nodes joined without issue. Also, the default DHCP settings are weird. Who wants to start at 192.168.68.1? Changing it isn't obvious. First you have to change the LAN address; then it changes the gateway for you as part of that. Only then can you change the DHCP allocation. And even then it wants to start allocating at 192.168.0.1. That's too weird for me when the router lives at 192.168.1.1. And even after fiddling, it's still randomly assigning devices in 192.168.2.x and 192.168.3.x. But at least it let me reserve the devices that I need to be static. I probably have to go somewhere and do something incomprehensible with netmasks, but whatever. It's fine. #a11y
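For the curious: devices landing in 192.168.2.x and 192.168.3.x alongside a router at 192.168.1.1 would be consistent with a /22 netmask, which spans 192.168.0.0 through 192.168.3.255. That's my guess, not something verified against TP-Link's docs, but Python's ipaddress module makes the "incomprehensible netmask" part easy to sanity-check:

```python
import ipaddress

# Hypothesis: the LAN is a /22, so four adjacent /24-looking ranges
# (192.168.0.x through 192.168.3.x) are all one subnet.
lan = ipaddress.ip_network("192.168.0.0/22")

for addr in ("192.168.1.1", "192.168.2.40", "192.168.3.7", "192.168.68.1"):
    inside = ipaddress.ip_address(addr) in lan
    print(addr, "is in" if inside else "is NOT in", lan)
```

If that's what's going on, the "random" 192.168.2.x and 192.168.3.x assignments aren't random at all; they're just the DHCP pool wrapping across the wider subnet.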


Does anyone have the #eloquence 6.1 DLLs with Korean and Japanese support? I can't use 6.6 in this context because it needs to be portable, so I can't register the DLL files. #nvda #screenreader #a11y Maybe @Tamasg?


So the primary thing I've learned from trying out the Ducky One #mechanical #keyboard is just how much lag my previous keyboards (a CODE keyboard and a Razer) were adding. I use a #screenreader, so I'm deeply aware of the audio lag from headphones and the audio subsystem, and configure things to reduce that as much as I can. I also knew Bluetooth can introduce lag. But on picking up the Ducky... wow! A few ms can really matter! My system feels faster than it's ever felt before. Even if I wind up returning this, now I know that keyboard refresh rate is a thing I care about. Also, the inductive switches feel really nice and clicky to press, but are reasonably quiet for those around me.


Hey, #MechanicalKeyboard people! I just got myself a #Ducky One X wireless mechanical keyboard. As a #screenreader user who finds myself constantly running out of keyboard shortcuts, multi-level actuation sounded really exciting to me. Unfortunately, the browser-based programmer at duckyhub.io is entirely inaccessible. Apparently it uses a standard called #QMK or something? I don't build mechanical keyboards; I use them because I love the feedback, so I'm not deep enough into the hobby to know much about this. A browser-based programmer gave me hope that programming would be #accessible, but at least with the official website, that hope turned out to be false. So before I return this thing, do these standards mean there might be other software I can try to program the keyboard, to see if it's more accessible for me? #a11y #keyboard


Does anyone know if #LibreWolf or any other privacy-focused #firefox forks keep #screenreader and other #accessibility features? Sadly, most of these projects seem to consider accessibility an unneeded feature that adds bloat and security issues, and just strip it out completely, so I don't have high hopes. This just seems to be the latest one getting popular after the recent #firefox issues. #a11y


What's the state of #matrix, #xmpp and #IRC as far as #screenReader -accessible clients are concerned? Desktop (Windows, Mac) and mobile (iOS, Android).

Hoping for some input, please feel free to boost. As far as I know:

Matrix does not have a lightweight, fully accessible client for desktop, but one could be modified, such as #gomuks. On mobile, Element has scrolling issues, which is unacceptable for large rooms.

XMPP has accessible desktop clients (I used to run #Adium on the Mac), also #WeeChat. No idea about mobile.

IRC is perhaps the one which everything supports on desktop, from #MirandaIM through Weechat to the old Freedom Chat, which I could probably rewrite if I had to. Also not sure about mobile, but it would definitely need push notifications, because we can't expect people to stay constantly online on the go. #a11y #accessibility



Been playing with a video/article concept as of late with the working title "What you see is NOT what you get", pertaining to making things #accessible to fully #blind users.
A lot of #accessibility issues are easy to visualize: a missing ramp in front of a building, bad contrast, missing captions, etc. But #screenReader accessibility is a lot more nebulous, because there's actually not that much reading of the screen happening. I can't "point" at a screen reader accessibility issue because it happens behind the curtain, in the land of metadata, APIs, and standards, rarely on the actual screen, which also makes it more difficult to "visualize" for devs. Hrmm.


Our In-Process blog is out! Featuring:
- Update on NVDA 2025.1
- Planning for CSUN
- What's on the web
- Reading paragraphs in Braille

And bonus history of the Pilcrow! (Ok I was interested)

nvaccess.org/post/in-process-2…

#NVDA #NVDAsr #ScreenReader #Blog #News #Newsletter #Typography #CSUNATC



Also, the silly voice acting and humorous sound effects are almost worth the price of admission all on their own. Tip for #screenreader players: press zed to advance. The game does tell you that, but it tells you one screen too late. When it's showing a picture, pressing Enter will activate one of the menu options like pause or save, instead of advancing. Pressing the letter z will always advance. I don't actually know if that's just a standard #renpy thing I never needed before in other games? It would make sense; in Infocom and other Z-machine parser games, "z" is the standard "do nothing and wait" shortcut. But anyway, if you're stuck on advancing past the second thing after New Game, z is what you need.


Spent a couple of hours playing the visual novel Pizza Game. If you're the kind of person who finds bad My Immortal style #fanfic funny, this will work for you. It's fully #screenreader #accessible in the standard #renpy way, but also has descriptions of the visual jokes; the developer spent time thinking about #A11y, and it didn't just happen thanks to the engine. store.steampowered.com/app/710710/Pizza_Game/ #visualnovel


Adding meaningful alt-text isn't just important for everyone using a screen reader, and an essential #inclusion and #accessibility requirement. Alt-text is also searchable and used by filters. People who prefer to filter certain people and topics, for mental health and other reasons, can't filter memes or images without it. Please use alt-text and CWs generously 🙏

Added bonus: with added alt-text you can find images in your own posts with "from:me" and people are more likely to boost your posts.

#screenreader #fediverse #Mastodon



This is a long shot I know, but I know there were versions of the iOS Siri voices floating around for NVDA at one point. Does anyone have up to date addons of these voices somewhere and would be willing to share? If so, please feel free to reply or dm if you don't wish it to be public. #Windows #accessibility #screenreader #NVDASR #TTS



@Lukáš Tyrychtr @Jure Repinc @Gregarious I would be interested to know how you guys use the @Dino #XMPP messaging app with a #screenReader. When I launch it, it opens one of my recent conversations, and the input box has focus. I can't figure out how to use the keyboard to navigate to the incoming chat, to the toolbar, to the menu, and similar. I can use Ctrl+Tab to cycle through open conversations. I am using version 0.4.4.


Be wary when adding additional context only for #screenReader users. An example:

Say you're working on an e-commerce site, and some products have two prices to show how great a sale discount is. The before and after is made visually apparent via some aspect of text formatting, and you want to make it explicit for screen reader users too.

The first step is to ask if this is necessary. If a user encounters two consecutive prices and one is lower than the other, they may intuitively understand what's going on without any explicit signposting, and can verify how much they're gonna pay during the checkout process. Only your users can provide this verdict.

If it's determined that some additional context is helpful, you could format it as something like: "Was $14.99, now $8.99" (optionally swapping the prices). It's short and punchy in braille and speech, perfectly descriptive of the situation at hand, and mirrors how it may be spoken out loud on an ad.

Resist the temptation to go further than this. You do not need to say "original price: $14.99, current sale price: $8.99". This is much longer and more verbose, while adding nothing. It also implies that you think screen reader users need to be told what a price is and have the concept of a sale explained to them, even though you're not doing so for other audiences.

You also don't need to spell out the word "dollars", format the price in words, repeat the product name, and so on. If you find yourself with screen-reader-only text like: "The current price of 500 Grams of Premium Oolong Tea was fourteen dollars and ninety-nine cents, and is now on sale for eight dollars and ninety-nine cents", it has gone way too far.

In short: Set out to identify the problems that actually need solving, and only solve those problems.
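If it helps, the "short and punchy" version is trivial to generate. A minimal Python sketch, assuming a hypothetical helper (the function name and cent-based inputs are my invention, not any standard API):

```python
def sale_price_label(was_cents: int, now_cents: int) -> str:
    """Build short, screen-reader-friendly sale text.

    Deliberately keeps the wording as terse as the visible formatting
    implies: no "original price:" preamble, no spelled-out "dollars",
    no repeated product name.
    """
    return f"Was ${was_cents / 100:.2f}, now ${now_cents / 100:.2f}"

print(sale_price_label(1499, 899))  # → Was $14.99, now $8.99
```

Anything longer than that output is probably the over-explaining this post warns against.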

#accessibility


In-Process is out - featuring news on NVDA 2024.4.2, our new add-on survey, a very successful SPEVI 2025 conference, and a User's guide: What to do if your add-on breaks?

Read the full issue now at:
nvaccess.org/post/in-process-2…

and remember, you can now subscribe to receive In-Process via email at: eepurl.com/iuVyjo

#NVDA #NVDAsr #Blog #News #Newsletter #WhatsOn #ScreenReader #Accessibility