Items tagged with: nvda



Our In-Process blog is back for 2026! And we've got a bumper issue to start with:

We highlight the "Switching from Jaws to NVDA" guide, and we have tips for running NVDA on a Mac and creating a new NVDA shortcut.
We hear from a user about their achievements in 2025, and we want to hear yours! And finally, a quick tip from @JenMsft here on Mastodon on using Clip with the command line!

All available now at: nvaccess.org/post/in-process-2…

#NVDA #NVDAsr #ScreenReader #Accessibility #Blog #News


Which implementation of the NVDA Remote server are people currently using? Preferably one hostable in Docker?
Ping @simon since I know you're running a bunch.
#NVDA #NVDASR #Blind #NVDARemote


On one hand, I absolutely love and adore pull requests. On the other hand, they make me realize just how bad I am at pretending to be a developer. Anyway, if you use #NVDA 2026, here's another release of unspoken-ng that fixes Firefox errors while also making everything better because I am apparently incapable of correctly thinking through the effect of threads. You should upgrade ASAP if you use this addon: github.com/fastfinge/unspoken-ng/releases/tag/v1.0.4
#nvda


They broke #WhatsApp for #Windows completely. At least with JAWS, I hear "Press right and left arrows" instead of each and every message with virtual cursor turned off. Also, it randomly starts calls on arrow keys sometimes. Absolutely unacceptable behavior.
UPDATE. Tested without Doug's scripts and with #NVDA, same results. Random messages, especially missed calls, are read as "Press left and right arrows...".
#Accessibility #Blind


My guide for #discord with #nvda isn't finished yet, but I've laid out the new structure for where I want it to go. It's got more headings and lists to make jumping around with a screen reader easier. Sections that need more work are tagged in the unrendered markdown.

The guide focuses on the structure of the Discord desktop and web interface from the perspective of someone who knows screen reader basics and can move around a website but struggles with very complex web interfaces.

There's some useful info in there already. It'll take me a while to write up all the features, but if you know how something works and want to help fill it in or make a correction, feel free to make a PR on the rework branch!

github.com/PepperTheVixen/Disc…


*Update. RIM may work with screen readers other than JAWS.*
(Note: You will need to skip down several headings to find the beginning of the article)
I can't comment on this from a business perspective. But I do know that I have never been able to connect remotely to any of my computers, either from Windows to Windows or from Android or iOS to Windows, with any commercially made program for the purpose. The only one that works for me is NVDA Remote, which works on all three platforms, with Windows and the NVDA screen reader being a requirement. The limitation, however, is that I can't hear the sound on the controlled computer, nor can I transfer files between it and the controller. Fortunately, I don't really need these features and am happy just being able to control my machines at all. But for those who do need them, RIM allows this, but only for users of JAWS (which costs several hundred dollars, whereas NVDA is free), and the last time I checked, it was also very expensive for an individual user who doesn't require it for work purposes. I'm also not sure whether it is cross-platform, so it may only work with Windows. If anyone knows of a free, accessible solution that works with NVDA, please let me know.

Remote Incident Manager (RIM)

at-newswire.com/remote-inciden…
#accessibility #Android #blind #computers #IOS #JAWS #NVDA #RemoteAccess #Talkback #technology #Voiceover #Windows


This seems to have been a one-off, a fluke if you will, but have a look at the #NVDA error I got when pressing insert+q and then telling it to restart NVDA.

Error dialog: "Couldn't terminate existing NVDA process, abandoning start: Exception: [WinError 5] Access is denied."

#nvda


Do any NVDA users ever use annotation navigation (a) or error navigation (w) in browse mode? Every time I try to use the latter, it tells me it's not supported in the current document, and I don't remember the last time I even had the need or desire to use the former.
#nvda #nvaccess


Ok @main @mastoblind @NVAccess, got a puzzler for all my #nvda users. Somehow yesterday I managed to bungle my way into a state where NVDA acted like it was locked into focus mode everywhere and could not be brought out of it: I got "unsupported input" for single-letter nav everywhere, and hotkeys like alt+tab would not register. I ended up having to do a full reset on it to fix the problem. I use #Braille Extender and a full #BrailleDisplay setup. Anyone got a clue what the fuck I did?


I made my first 2 #FT8 contacts today thanks to a new accessible companion app built to work in conjunction with #WSJTX and the #JAWS and #NVDA screen readers. We're making huge progress toward a solution we can provide to the entire #blind #AmateurRadio community. #HamRadio


Hey y’all, hope you’re doing well. Quick question: I’m trying to use Tweezcake on my Windows computer. I can open it and hear sound effects, but NVDA doesn’t seem to detect the actual window at all. Has anyone run into this or have any suggestions?

Thanks so much.

#Accessibility #Blind #NVDA #Windows #AssistiveTechnology


I wish I could use the Be My Eyes describe-my-screen feature with the #nvda screen curtain on.
#nvda


Looks like vscode no longer announces the current line and character when hitting ctrl+g, at least when using NVDA. Anybody know if there's an alternative method to get the current position?
#nvda #nvaccess #vscode #blind


Yes! Yes yes yes, Hallelujah! #Slack added the Copy Message option by my request to the context menu. Now you blindies don't need to turn off and on your virtual cursor, browse/focus mode or whatever, you just press Ctrl+C or go to the context menu and select Copy Message! 🕺🏻
#Accessibility #Blind #JAWS #NVDA #Windows


A friend of mine, Beqa Gozalishvili, a very talented developer from Georgia (the country), has announced an early stage of his #SAPI5 wrapper for the popular #ESpeakNG #TTS engine. Bug reports and feature requests are welcome, he says in his Telegram channel. He does speak English. github.com/gozaltech/espeak-ng… #Accessibility #ScreenReader #Windows #JAWS #NVDA


If you use #eloquence in #NVDA, an extremely uninteresting bugfix is now available. Previously, automatic language switching in NVDA didn't work for any language with a dialect specified, like English United States. This is now fixed. It literally just changes from calling languages en-gb and en-us to calling them en_gb and en_us to make NVDA happy. But if you need that, you can get the bugfix here: github.com/fastfinge/eloquence_64/releases/tag/v6
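For anyone curious what a fix like this amounts to: it's just normalizing hyphenated language tags to the underscore form. A minimal sketch of that idea (the function name is mine, not from the add-on):

```python
def to_nvda_lang(tag):
    """Convert a hyphenated language tag such as "en-gb" to the
    underscore form ("en_gb") described in the release notes."""
    return tag.replace("-", "_")

print(to_nvda_lang("en-gb"))  # en_gb
print(to_nvda_lang("en-us"))  # en_us
```

Tags without a dialect, like plain "en", pass through unchanged.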


I bought myself a new keyboard with Christmas money, and after just a day of using it, I'm honestly kind of stunned by how much of a difference it's making.
I picked up a Keychron K10 Max from Amazon and got it yesterday, and I don't think I ever want to go back to a membrane keyboard again.
For context: before this, I was using a Logitech Ergo K860. It's a split, membrane keyboard that a lot of people like for ergonomics, and it did help in some ways — but for me, it was also limiting. My hands don't stay neatly parked in one position, and the enforced split often worked against how I naturally move. It also wasn't rechargeable, and the large built-in wrist rest (which I know some people love) mostly became a dirt-collecting obstacle that I had to work around.
Another big factor for me is that I often work from bed. That means my keyboard isn't sitting on a perfectly stable desk. It's on a tray, my lap, or bedding that shifts as I move.
The Logitech Ergo K860 is very light, which sounds nice on paper, but in practice it meant the keyboard was easy to knock around, slide out of position, or tilt unexpectedly. Combined with the split layout, that meant I was constantly re-orienting myself instead of just typing.
The Keychron, by contrast, is noticeably heavier — and that turns out to be a feature. It stays put. It doesn’t drift when my hands move. It feels planted in a way that reduces both physical effort and mental overhead. I don't have to think about where the keyboard is; I can just use it.
For a bed-based workflow, that stability matters more than I realized.
With chronic pain, hand fatigue, and accessibility needs, keyboards are not a neutral tool. They shape how long I can work, how accurately I can type, and how much energy I spend compensating instead of thinking.
This new keyboard feels solid, responsive, and predictable in a way I didn't realize I was missing. The keys register cleanly without requiring force, and the feedback is clear without being harsh. I'm not fighting the keyboard anymore. It's just doing what I ask.
What surprised me even more is how much better the software side feels from an accessibility perspective. Keychron's Launcher and its use of QMK are far more usable for me than Logitech Options Plus ever was. Being able to work with something that’s web-based, text-oriented, and closer to open standards makes a huge difference as a screen reader user. I can reason about what the keyboard is doing instead of wrestling with a visually dense, mouse-centric interface.
That matters a lot. When your primary interface to the computer is the keyboard, both the hardware and the configuration tools need to cooperate with you.
I know mechanical keyboards aren't new, but this is my first one, and I finally understand why people say they'll never go back. For me, this isn't about aesthetics or trends. It's about having a tool that respects my body and my access needs and lets me focus on the work itself.
I'm really grateful I was able to get this, and I'm genuinely excited to keep dialing it in. Sometimes the right piece of hardware, paired with software that doesn’t fight you, doesn’t just improve comfort. It quietly expands what feels possible.
#Accessibility #DisabledTech #AssistiveTechnology
#ScreenReader #NVDA
#MechanicalKeyboards #Keychron
@accessibility @disability @spoonies @mastoblind


Today I learned why Sonata #TTS created a framework to run AI voices outside of #NVDA. First, NVDA doesn't come with all of the #Python standard libraries. Second, there's no good way of updating dependencies in a bundled addon. Third, NVDA really, really hates it if you include several hundred dependencies in your addon. Anyway, here's Kitten TTS, the other synthesizer I wanted to try with NVDA. Unfortunately, the model doesn't support streaming output, so even though it's actually faster than Supertonic, it feels slower when used with NVDA. Also, it takes several minutes to install, makes NVDA startup 30 seconds slower, and freezes the change synthesizer dialogue for about 45 seconds when you open it. It does miss words less frequently, though, and pronounces text better. The ultimate result of my two-day investigation is that even the highly optimized open-source embedded AI models are not yet ready for screen reader use. Some tree-shaking could fix some of these issues, but it still won't allow for streaming, so it's not worth it. I'd really like to know what Microsoft and Narrator are doing to get the natural voices so snappy. github.com/fastfinge/kittentts-nvda/ #screenreader
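The streaming point is worth spelling out: with a non-streaming model, the screen reader can't play anything until the whole utterance is synthesized, so perceived latency grows with utterance length even if the model is fast overall. A toy model of that (all numbers invented for illustration):

```python
def time_to_first_audio(n_chunks, secs_per_chunk, streaming):
    """Seconds until the user hears anything.

    A streaming synthesizer can start playback after the first chunk;
    a batch synthesizer must finish every chunk before playback starts.
    """
    if streaming:
        return secs_per_chunk
    return n_chunks * secs_per_chunk

# A slower streaming engine can still *feel* faster than a quicker batch one:
slow_streaming = time_to_first_audio(10, 0.05, streaming=True)   # 0.05 s
fast_batch = time_to_first_audio(10, 0.03, streaming=False)      # ~0.3 s
```

This is why an engine that is "actually faster" end to end can still feel slower under a screen reader, where responsiveness is dominated by time to first audio.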


#nvda #tts


#nvda



I'm so glad NVDA, Narrator, Orca, VoiceOver, TDSR, and Fenrir exist. I'm so, so glad JAWS is not the only desktop screen reader, and that FS did not pursue JAWS for Mac.

I'm so glad that NVDA not only supports addons, but shows them off with the addon store! I'm so glad that NVDA is so inescapably popular that even big corporations support them, like Google Docs and Microsoft Office and countless others that say in their documentation that NVDA is supported.

#accessibility #blind #nvdasr #nvda #technology


Found what looks like an #NVDA bug, or possibly a #notepad++ bug. An & followed by a space triggers a fake command-key suggestion when focus changes to the Notepad++ window.

NVDA: 2025.3.2.
Notepad++: 8.8.8.

Minimum reproduction:

Open notepad++; if a file is open, use ctrl-n for a new one.
Type the following string in the file: "& ".
Alt-tab out of and back into the window. On focus, NVDA will announce "alt+space".
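A guess at the mechanism, offered purely as a hypothesis: in Win32 UI labels, an ampersand marks the next character as an access key ("&File" gives Alt+F), and something in the chain seems to parse the document text containing "& " that way, making the "access key" a space and producing the phantom "alt+space". A sketch of that parsing rule (the function is mine, illustrative only):

```python
def access_key(label):
    """Return the Win32-style access key of a label: the character
    after the first single '&' ('&&' is a literal ampersand)."""
    i = 0
    while i < len(label) - 1:
        if label[i] == "&":
            if label[i + 1] == "&":
                i += 2  # escaped ampersand, skip both characters
                continue
            return label[i + 1]
        i += 1
    return None

print(access_key("&File"))  # "F"
print(access_key("& "))     # " " -- a space "shortcut", read as alt+space
```

If that hypothesis is right, the fix on the screen reader or app side would be to only apply mnemonic parsing to actual control labels, never to document content.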

I just wanted to read before bed.

#a11y #bug #nvda


Somebody save us from vibe coded #NVDA add-ons. The latest has a global shortcut layer that can only be activated once, and then needs NVDA to be restarted for it to work again. 🤦‍♂️
#nvda



I normally use my computer with a regular QWERTY keyboard. But since it's a seven-inch Toughpad, I wanted to try it with my Orbit Writer, due to the size. I bought it to use with my iPhone, which it does very well (better than with Android). I read the manual and even saved the HID keyboard commands so that I could refer to them quickly. But I don't understand a few things.

1. It is missing the Windows key. Due to this, I can't get to the start menu as I usually do. I also can't get to the desktop in the regular way.
2. I created a desktop shortcut which I put on the start menu, but I can't type ctrl+escape at the same time, so that method of getting to the start menu is also blocked, meaning that I still can't get to the desktop.
3. I can't type NVDA+F11 or F12 for the system tray or the time and date, respectively. I was able to create new commands for both under Input Gestures. But I also tried NVDA+1 for key identification, with both caps lock and insert, and that didn't work either. Fortunately, I was able to create another gesture to get into the NVDA menu.
4. On a qwerty keyboard, I can type alt+tab to switch between windows. If I hold the alt key, I can also continue pressing tab to switch between more than two windows. But with the Orbit Writer, while the command works, it seems to only work for two windows, i.e. I can't hold alt and continue pressing tab.

Am I missing something here or is this a half-implemented system? How can they say it works with Windows when basic commands can't even be performed? If there are ways around these problems, please let me know.

#accessibility #blind #braille #NVDA #OrbitWriter #technology #Windows


Today is audio ducking day here at interfree! If you use the 64-bit #NVDA, there are a couple small releases for you:
* eloquence: audio ducking now works thanks to akj: github.com/fastfinge/eloquence_64/releases/tag/v5
* unspoken-ng: if you use this addon, you also need to update, or audio ducking will remain broken, because someone (glares at himself) didn't quite understand NVWavePlayer: github.com/fastfinge/unspoken-ng/releases/tag/v1.0.3
#nvda


The World #Blind Union General Assembly and World #Blindness Summit in São Paulo, #Brazil in September was an amazing opportunity not only to talk about NVDA, but to give a presentation on the amazing MOVEMENT behind the world's favourite free #screenreader! We have two videos of the presentation and a full transcript for you, complete with an audience-initiated chant of "#NVDA NVDA NVDA!" at the end!

nvaccess.org/post/world-blind-…

#NVDAsr #Accessibility #Movement #Social


I can't decide whether I want to keep this #NextCloud. While I like the general idea, the Windows desktop client still has a bunch of unlabeled buttons, you don't seem to be able to simply copy a public share URL from the context menu like with other solutions, and so on. I hear some blind people use it despite that. Are there any tweaks, #NVDA Add-Ons, or anything else I could do to make it more accessible? Yes, I could try to open GH issues, but honestly, the way they treated the issue about the removal of the copy-public-link context menu item makes me doubt anything productive would come of it.
#Blind #Accessibility
Edit: While I'm still interested in any responses to this question, I did not go with Nextcloud in the end. I prefer solutions which do only one thing, and do it properly and accessibly, and while Seafile isn't perfect, it's surely better for my needs. Thanks for the interactions with the post nevertheless.


Do you use #eloquence on the 64-bit #nvda #screenreader? If so, a new release is available, and we could use your help! You can find more info on the release page: github.com/fastfinge/eloquence_64/releases/tag/v4 #blind #accessibility #a11y



Happy International Day of Persons with Disabilities!

This year's #IDPwD theme is: "Fostering #disability #inclusive societies for advancing social progress"

What better way than ensuring everyone has access to, and awareness of #Accessibility of technology.

* #NVDA is FREE for anyone
* Right now, get 10% off certification to show your skills (& cheap training materials to get you to that point)

un.org/en/observances/day-of-p…

#DisabilityAwareness #Inclusion #IDPwD2025 #NVDA #Accessibility


Reposting. Slots available.

After a short break, I’m returning to accessibility training services.

I provide one-on-one training for blind and visually impaired users across multiple platforms. My teaching is practical and goal-driven: not just commands, but confidence, independence, and efficient workflows that carry into daily life, study, and work.

I cover:
iOS: VoiceOver gestures, rotor navigation, Braille displays, Safari, text editing, Mail and Calendars, Shortcuts, and making the most of iOS apps for productivity, communication, and entertainment.
macOS: VoiceOver from basics to advanced, Trackpad Commander, Safari and Mail, iWork and Microsoft Office, file management, Terminal, audio tools, and system upkeep.
Windows: NVDA and JAWS from beginner to advanced. Training includes Microsoft Office, Outlook, Teams, Zoom, web browsing, customizing screen readers, handling less accessible apps, and scripting basics.
Android: TalkBack gestures, the built-in Braille keyboard and Braille display support, text editing, app accessibility, privacy and security settings, and everyday phone and tablet use.
Linux: Orca and Speakup, console navigation, package management, distro setup, customizing desktops, and accessibility under Wayland.

Concrete goals I can help you achieve:
Set up a new phone, tablet, or computer
Send and manage email independently
Browse the web safely and efficiently
Work with documents, spreadsheets, and presentations
Manage files and cloud storage
Use social media accessibly
Work with Braille displays and keyboards
Install and configure accessible software across platforms
Troubleshoot accessibility issues and build reliable workflows
Make the most of AI in a useful, productive way
Grow from beginner skills to advanced, efficient daily use

I bring years of lived experience as a blind user of these systems. I teach not only what manuals say, but the real-world shortcuts, workarounds, and problem-solving skills that make technology practical and enjoyable.

Remote training is available worldwide.

Pricing: fair and flexible — contact me for a quote. Discounts available for multi-session packages and ongoing weekly training.

Contact:
UK: 07447 931232
US: 772-766-7331
If these don’t work for you, email me at aaron.graham.hewitt@gmail.com

If you, or someone you know, could benefit from personalized accessibility training, I’d be glad to help.

#Accessibility #Blind #VisuallyImpaired #ScreenReaders #JAWS #NVDA #VoiceOver #TalkBack #Braille #AssistiveTechnology #DigitalInclusion #InclusiveTech #LinuxAccessibility #WindowsAccessibility #iOSAccessibility #AndroidAccessibility #MacAccessibility #Orca #ATTraining #TechTraining #AccessibleTech


I really do like it when people put descriptions on their pictures, and I try to every time I post one. But when someone doesn't, before I switched to #NVDA from #JFW almost two months ago, I would just open the picture and use Picture Smart to describe it for me. Now that I'm using NVDA, I don't quite know how to do that. I do have Basilisk on my computer, which uses ChatGPT, but I've never tried opening the picture to get its URL and then seeing whether Basilisk can describe from a URL. If it can't, then I need to find something that makes this really easy for me. I hope this makes sense.
#nvda #JFW



Ok, so this is another of those weird questions which I post because people here use their brains more than most. I want to have one NVDA Remote client control two machines at the same time. That is, I want my laptop to control my desktop and my second desktop, but I want both desktop1 and desktop2 to have their NVDA heard through the laptop at the same time. I also want to switch back and forth between the two machines, as well as to and from the laptop itself, with a keystroke.

There are two solutions I've found for this, but both are a bit of a mess. First, I can use the nvdaremote:// URLs to disconnect from desktop1 and connect to desktop2 with one keystroke, and have another keystroke to do the reverse. The problem there is that I can't hear both machines' NVDA at the same time. Also (this isn't as big a deal, I'm just a perfectionist), switching takes a few seconds. Secondly, I can run a virtual machine and have it connected to desktop2, with the host machine connected to desktop1. That allows easy switching of the keyboard: just alt+tab to the VM window and hit ctrl+g when I want to control desktop2. It also lets both desktops play their NVDA through the speakers at the same time, and (which is very nice) lets braille swap with any display which supports channels. The problem there is the latency of the VM audio, which I can't seem to shrink. It also seems overkill to run an entire Windows OS just for NVDA Remote in a VM.

Does anyone have any better solutions? Can anyone think of something which would deliver all three things: fast switching, simultaneous NVDA, and no latency? Ideas would be very gratefully received and boosts would be appreciated. #NVDA #blind #a11y #screenreader #remote #nvdaremote
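One low-tech way to script at least the URL-switching half of option one: bind a hotkey (via NVDA input gestures, AutoHotkey, or similar) to a tiny script that cycles through your machines' nvdaremote:// URLs. A sketch of the cycling logic only; the URL strings below are placeholders, since the exact nvdaremote:// format depends on your server and key, and this does nothing about simultaneous audio:

```python
import itertools
import webbrowser

# Placeholder URLs: substitute whatever nvdaremote:// links work for your
# server setup; the format here is illustrative, not documented.
TARGETS = [
    "nvdaremote://example-server/desktop1",
    "nvdaremote://example-server/desktop2",
]

_cycle = itertools.cycle(TARGETS)

def switch_to_next(open_url=webbrowser.open):
    """Open the next target's nvdaremote:// URL, which the NVDA Remote
    client handles as a disconnect-and-reconnect. Returns the URL opened."""
    url = next(_cycle)
    open_url(url)
    return url
```

Each hotkey press then hops to the next machine in the list; this still has the few-seconds reconnect delay and single-audio limitation described above.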