After all the reports and concerns about the weak offline AI model used in NVDA 64-bit, it now seems that NVAccess is preparing to label this feature as experimental for the 2026.1 release. The word “experimental” is not explicitly used in their comment, but the tone and the explanation strongly imply that this is where things are heading. At least the status is starting to line up with reality.
What still feels strange is how firmly NVAccess is holding on to the offline model idea. Yes, offline models can perform well with a strong GPU or an NPU, and the technology is improving fast. But will the majority of NVDA’s target users, now or even in the future, actually have the hardware needed to run these models properly? That part is still very uncertain.
Marking the feature as experimental is a good step forward, but the bigger question about practicality and user hardware remains unanswered.
github.com/nvaccess/nvda/issue…
@NVAccess
in reply to Amir

What would you like to see happen? If you want an online model, there are multiple add-ons which will give you that. The offline model will work on any PC we've heard of people testing it with - yes, it has limitations, which you have been a loud critic of - but not being able to run it hasn't been one of them - and yes, when we find a better model which will work on reasonable hardware and produce better results, we will look at changing to it. And, as always, we're open to feedback :)
in reply to NV Access

Maybe it is already too late to hope for what I would have preferred to see. But if it were up to me, I would rather not see so much time, energy, and development effort go into offline AI models that ultimately will not benefit much of NVDA’s target user base. With the same time and energy, other bugs and features could have been addressed instead. Given NVDA’s development constraints, it might have been better not to pursue offline AI features at all. Of course, this is just my opinion.
The current offline model is extremely limited and not reliable in practice. And if I ever wanted to use a stronger offline model, I would prefer to run it outside of NVDA rather than inside it.
in reply to Amir

Given it is already there... yes, it's a bit late to cancel development which has already occurred. The development work for this feature was not done by us, but submitted as part of an external project, with the ideas & plan submitted by them. Regardless of whether you use THIS feature or not, surely having more developers able to write code for NVDA is a good thing? So you'll be pleased to know, OUR internal developers HAVE spent their time working on addressing other bugs :)
in reply to Amir

Well, we know it is one of those features which will generate a lot of attention - people who may not have any intention of using it regularly will still test it out just to see what they think and give their feedback - where we might not get that for some new feature in, say, Excel. And anything AI is something which generates a lot of passion, on all sides. We don't want to be seen to be slapping something in just to have AI features, but this is a genuinely promising use case.
in reply to Amir

@mckensie That's right - we're not discontinuing SAPI 5. For SAPI 4, if someone would like to make an add-on for it, please go for it. And we have already started talking to synthesizer manufacturers about what might be required to update to 64-bit. Even if they aren't there now, if they are still developing their synthesizers, they will likely get to the same point we are at which prompted us to go 64-bit - dependencies, development environments, etc., are all dropping support for 32-bit.
in reply to Valley_prime

@Valley_prime @mckensie It's worth asking, though it's not on our roadmap currently. While it is possible to "port" programs from one OS to another, this works best for high-level programs (like word processors). A screen reader like NVDA uses a lot of OS-specific commands so to work on something else we'd need to rewrite the majority of the code. Additionally, if you're asking about iOS, no - it's too locked down for ANY third-party screen reader.
in reply to Amir

Eloquence just doesn't cut it, and I may not be able to use Vocalizer, as it might be discontinued for NVDA. I might just have to suck it up and use Eloquence, as much as I really don't want to. But since the SAPI adapter doesn't always work, and may be broken with newer versions, I may not be able to use the natural voices. eSpeak I can't stomach for long either. So I might just have to consider Blastbay my new daily driver. I can actually stomach Libby, but if he makes a voice out of himself, I'll find myself switching to that one. I wish I weren't torn between switching to a speech synth I don't use, or perhaps even foregoing all third parties and using OneCore instead - which I doubt is going to stay around much longer, because of these voices.