The State of Modern AI Text To Speech Systems for Screen Reader Users: The past year has seen an explosion in new text to speech engines based on neural networks, large language models, and machine learning. But has any of this advancement offered anything to those using screen readers? stuff.interfree.ca/2026/01/05/ai-tts-for-screenreaders.html #ai #tts #llm #accessibility #a11y #screenreaders


in reply to PepperTheVixen ΘΔ

@PepperTheVixen The reason it's grating is that, unlike Eloquence and DECtalk, eSpeak only uses formant synthesis for the vowel sounds. For consonants and plosives, it instead uses concatenative recordings based on human speech. That's why, even when you switch to a voice that sounds less sharp, the "t", "b", "p", and other sounds are still too sharp. This seems to be the primary cause of the fatigue most people experience while using eSpeak.
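
For anyone curious what formant synthesis actually looks like in code, here's a minimal toy sketch (my own illustration, not code from eSpeak, Eloquence, or DECtalk): a vowel is approximated by pushing a glottal buzz through a few resonant filters centred on the vowel's formant frequencies.

```python
# Toy formant synthesiser: an illustration of the technique only,
# not code from eSpeak, Eloquence, or DECtalk.
import numpy as np
from scipy.signal import lfilter
from scipy.io import wavfile

FS = 16000  # sample rate in Hz

def resonator(signal, freq, bandwidth, fs=FS):
    """Second-order resonant filter centred on one formant frequency."""
    r = np.exp(-np.pi * bandwidth / fs)
    theta = 2 * np.pi * freq / fs
    a = [1.0, -2.0 * r * np.cos(theta), r * r]   # resonator poles
    gain = 1.0 - 2.0 * r * np.cos(theta) + r * r
    return lfilter([gain], a, signal)

def vowel(formants, f0=110.0, duration=0.5, fs=FS):
    """Approximate a vowel: an impulse train at the pitch, filtered by
    resonators at each (frequency, bandwidth) pair, summed in parallel."""
    n = int(duration * fs)
    source = np.zeros(n)
    source[::int(fs / f0)] = 1.0
    out = sum(resonator(source, freq, bw) for freq, bw in formants)
    return out / np.max(np.abs(out))

# Rough formant values for an "ah" vowel: (frequency, bandwidth) in Hz.
ah = vowel([(730, 90), (1090, 110), (2440, 170)])
wavfile.write("ah.wav", FS, (ah * 32767).astype(np.int16))
```

Consonants are exactly where this approach gets hard, which is why engines end up splicing in recorded samples for them.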
in reply to Andre Louis

There is a 32-bit compatibility layer in the works for NVDA itself (although it currently only references SAPI4). But with any luck, the need for every add-on to implement its own will go away.

github.com/nvaccess/nvda/pull/…

@cachondo @amir @fastfinge

in reply to 🇨🇦Samuel Proulx🇨🇦

I see the "Secure add-on runtime" on the roadmap, with the note that "The first version of the runtime will provide support for speech synthesis and braille devices."

I don't see any implication that the 32-bit compatibility layer will only work for secure add-ons; hopefully that worry is a bit of a leap.

Still, the fact that people don't know what will or won't be happening, or whether their preferred synthesiser(s) will work or not, continues to be a big part of the problem. @cachondo @FreakyFwoof @amir

in reply to Andre Louis

@FreakyFwoof @cachondo @amir You should be able to get either Gemini or Codex to help you, depending on what AI you have access to. The workflow would be:
1. Download gemini-cli or codex-cli, and get them installed and configured.
2. Clone all of the source code from github.com/fastfinge/eloquence_64/
3. Delete the tts.txt and tts.pdf files, so you don't confuse it with incorrect documentation.
4. Find any API documentation for Orpheus that's available, and add it into the folder.
5. Run codex-cli or gemini-cli, and tell it something like: "Using the information about how to develop NVDA add-ons you can find in agents.md, and the information about the Orpheus API I've provided in the file Orpheus-documentation-filename.txt, I would like you to modify the code in this folder to work with Orpheus instead of Eloquence."

It will go away for five or ten minutes, ask you for permission to read and write the files it's interested in, and then give you something that mostly works. Now, build the addon, run it, and tell it about the errors and problems you have and ask it to fix them. In the case of errors, include the error right from the NVDA log, and for bugs and problems, tell it exactly what it's doing wrong, and exactly what you want it to do instead. Keep doing this until you wind up with a working addon.
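
For a sense of what the tool should end up producing in that folder, here's a rough skeleton of an NVDA synth driver. I'm writing this from memory of the synthDriverHandler interface, so treat the details as approximate; the _orpheus_* helpers are purely hypothetical placeholders for whatever the real Orpheus API provides.

```python
# Rough skeleton of an NVDA synth driver; it only runs inside NVDA, and the
# _orpheus_* helpers below are hypothetical placeholders, not a real API.
import synthDriverHandler

def _orpheus_available():
    return False  # placeholder: report whether the Orpheus library loads

def _orpheus_init():
    pass          # placeholder: load and initialise the engine

def _orpheus_speak(text):
    pass          # placeholder: hand text to the engine asynchronously

def _orpheus_stop():
    pass          # placeholder: stop speech immediately

def _orpheus_shutdown():
    pass          # placeholder: unload the engine

class SynthDriver(synthDriverHandler.SynthDriver):
    name = "orpheus"
    description = "Orpheus (64-bit)"

    @classmethod
    def check(cls):
        return _orpheus_available()

    def __init__(self):
        super().__init__()
        _orpheus_init()

    def speak(self, speechSequence):
        # The sequence mixes text with commands (indexes, pitch changes, etc.);
        # a minimal first pass can just speak the text parts.
        text = " ".join(item for item in speechSequence if isinstance(item, str))
        _orpheus_speak(text)

    def cancel(self):
        # Called constantly as the user navigates, so it must return fast.
        _orpheus_stop()

    def terminate(self):
        _orpheus_shutdown()
```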

Think of AI as a particularly stupid programmer, and you're the manager in charge of the project. You should be able to get this done without paying anyone.


in reply to Andre Louis

@FreakyFwoof @cachondo @amir Yeah, you can get AI to modify the 32-bit add-on for you. That's how I got the first two Eloquence prototypes; it helped me understand the problem and what approaches would work and what wouldn't. If you give it the 32-bit Orpheus add-on and the 64-bit Eloquence add-on, it should be able to understand the working approach for making an add-on 64-bit, and make the modifications itself. The reason to give it the 64-bit Eloquence add-on as an example is so it doesn't decide to go down the gRPC route and include protobuf and a bunch of other nonsense.
in reply to Andre Louis

Oh happy days 😊 That was the voice that used to come with the Hal screen reader, wasn't it? That was my first screen reader after my accident back in 1996, and I seem to remember the plug-in synth was called something like Apollo 2 or thereabouts. Such happy memories 🙂 but not really: I used to sit up till about 4 am banging my head against the brick wall trying to figure it out, but hey ho.
in reply to Luis Carlos

@luiscarlosgonzalez @cachondo @FreakyFwoof @amir I didn't try Kokoro, because it cannot achieve a real-time factor of 1 on CPU. By that I mean: to be fit for consideration with a screen reader, a text to speech voice must be able to generate one second of speech in one second or faster. In general, Kokoro takes two seconds to generate one second of speech. So it's not suitable.
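
If anyone wants to check an engine on their own machine, the real-time factor is easy to measure. The synthesize argument below is a hypothetical stand-in for whichever engine you're testing (Kokoro, Piper, or anything else):

```python
# Measure real-time factor (RTF): synthesis time divided by audio duration.
# RTF <= 1 means the engine keeps up with playback; lower is better.
import time

def real_time_factor(synthesize, text):
    """`synthesize` is a hypothetical stand-in for the engine under test;
    it should return (samples, sample_rate) for the given text."""
    start = time.perf_counter()
    samples, sample_rate = synthesize(text)
    elapsed = time.perf_counter() - start
    audio_seconds = len(samples) / sample_rate
    return elapsed / audio_seconds

# An RTF of 2.0 means two seconds of compute per second of speech,
# which is what rules Kokoro out for screen reader use on CPU.
```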
in reply to 🇨🇦Samuel Proulx🇨🇦

Really interesting article. I'm particularly passionate about this subject; I've been fascinated with TTS for a number of years. I've trained many voices, both for Piper and some of the newer LLM-based systems, and while I can't speak to the speed issue, training data is extremely important.

What you feed into these models has a big impact on the voice's performance overall. If you give it stuff scraped from the web, random audiobooks that weren't optimized for TTS, things like that, you're not going to get good results for the type of work screen reader users do every day. This applies to all of these systems, not just neural networks. The latency / responsiveness issue is something we'll have to solve at some point, because I don't think using TTS systems last updated in 2003 is going to work out in the long term, as much as I love Eloquence.

In my ideal world, we would have either a machine learning based or formant-based system that is easy to train and maintain. Big companies have lost interest in on-device TTS, and not just for screen reader users. Many of the solutions being put out now are cloud based, and while developers are still creating on-device models, as said in the article, they're not optimized for our needs and may never be. I think we have to take matters into our own hands and figure this out, but I believe that with enough people we can make it happen.

in reply to Zach Bennoui

@ZBennoui We need a good formant system. Machine learning is useful for setting the model parameters. But I think the word-to-phoneme rules can’t be a neural network, because they have to be reproducible and modifiable. Even here, though, machine learning could help. I’d love a system where a user could submit a recording of a word, and the system could create the phonetic representation.
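
As a toy illustration of what "reproducible and modifiable" means at that layer (my own sketch, not any existing engine): ordinary letter-to-sound rules you can read and edit, with a user exception dictionary on top that a recording-to-phonemes tool could eventually populate.

```python
# Toy letter-to-sound pass: plain, inspectable rules plus a user exception
# dictionary. Not any real engine; the phoneme notation is made up. The
# point is that this layer stays deterministic and editable.
USER_EXCEPTIONS = {
    # Entries a user (or a future recording-to-phonemes tool) could add.
    "proulx": "p r u",
}

RULES = [
    # (spelling fragment, phonemes), longer fragments listed first.
    ("tion", "S @ n"), ("th", "T"), ("ee", "i"), ("sh", "S"),
    ("a", "a"), ("e", "e"), ("i", "i"), ("o", "o"), ("u", "u"),
    ("b", "b"), ("c", "k"), ("d", "d"), ("f", "f"), ("g", "g"),
    ("h", "h"), ("k", "k"), ("l", "l"), ("m", "m"), ("n", "n"),
    ("p", "p"), ("r", "r"), ("s", "s"), ("t", "t"), ("v", "v"),
    ("w", "w"), ("y", "j"), ("z", "z"),
]

def to_phonemes(word):
    word = word.lower()
    if word in USER_EXCEPTIONS:         # user overrides always win
        return USER_EXCEPTIONS[word]
    out, i = [], 0
    while i < len(word):
        for spelling, phones in RULES:  # first (longest) match wins
            if word.startswith(spelling, i):
                out.append(phones)
                i += len(spelling)
                break
        else:
            i += 1                      # no rule: skip the character
    return " ".join(out)

print(to_phonemes("creation"))  # same output every run: reproducible
```

The machine learning part would live in the tool that turns a user's recording into a new USER_EXCEPTIONS entry, not in the rules themselves.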
in reply to 🇨🇦Samuel Proulx🇨🇦

Yeah, I completely agree. I happen to know Philip and have been talking with him extensively about his experiments with TTS. I can't go into a ton of detail, but I'll say what he's said publicly: the system he's using is a hybrid approach of neural networks and formant synthesis, where he trains a model to output formant frequencies based on the audio data he feeds into it. I won't pretend to understand all the details, this is way above my pay grade, but as far as I understand, this has never been done before by another developer.
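
To be clear, the sketch below is not Philip's system; it only illustrates the general shape of "a model predicts formant parameters, a conventional synthesiser renders them", and the predictor interface is entirely hypothetical.

```python
# Toy illustration of the "neural network drives a formant synthesiser"
# split. NOT Philip's system; the predictor interface is hypothetical.
# The key idea: the network outputs a few interpretable parameters per
# frame (pitch plus formant frequencies), and a deterministic renderer
# turns those parameters into audio.
import numpy as np

FS = 16000
FRAME_SECONDS = 0.01  # one parameter set every 10 ms

def render_frame(f0, formants, fs=FS, seconds=FRAME_SECONDS):
    """Crude deterministic renderer: a buzz at f0, shaped by sinusoids at
    the formant frequencies. A real formant synthesiser would use resonant
    filters; this is only enough to show where the parameters go."""
    t = np.arange(int(fs * seconds)) / fs
    buzz = np.sign(np.sin(2 * np.pi * f0 * t))
    return buzz * sum(np.sin(2 * np.pi * f * t) for f in formants) / len(formants)

def synthesize(feature_frames, predictor):
    """predictor.predict(features) -> (f0, [F1, F2, F3]); hypothetical model."""
    return np.concatenate([render_frame(*predictor.predict(f)) for f in feature_frames])
```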
in reply to 🇨🇦Samuel Proulx🇨🇦

@FreakyFwoof The major problem with incorporating AI TTS into screen readers is latency. Maybe you can use it for say all, but it is not suitable for navigation. There are a couple of tiny TTS engines with low latency, like Piper TTS, but the quality is not the best. Also, multilingual support and pronunciation of many uncommon words are issues for AI TTS.
in reply to 🇨🇦Samuel Proulx🇨🇦

Isn't it possible to "pregenerate" the speech with all the necessary IDs, so that you can navigate and interrupt at will? Just as one generates SSML from rich text (including maths formulas) before generating speech.

It would be even better to capture intonations, breaths, and so on unchanged, instead of letting the TTS generate a "pleasant full phrase" (a wrong expectation).

I find your post intriguingly close to the emerging reaction against the AI-generated #mundaneslop ;-).

in reply to Paul L

@polx Maybe, but probably not. Doing that would result in a lot of wasted resources generating speech for text I'm never going to listen to. Think about the average user interface: dozens of menus, and toolbars, and ads, and comments, and so on. Plus, the text changes constantly, even on simple websites. That's not even taking into account websites that just scroll constantly. It might be possible to create some kind of algorithm to predict the most likely text I'll want next, but now we've just added another AI on top of the first AI.

I think a better solution might be to make the text to speech system run on different hardware from the computer itself. This is, in fact, how text to speech was done in the past, before computers had multi-channel sound cards. This has a few advantages. First, even if the computer itself is busy, the speech never crashes or falls behind. Second, if the computer crashes, it could be possible to actually read out the last error encountered. Third, specialized devices could perhaps be more power- and CPU-efficient.
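
As a very rough sketch of what that could look like today, assuming a hypothetical dedicated speech box on the local network (the address, port, and newline protocol are all made up for illustration, not an existing product):

```python
# Minimal sketch of handing speech off to a separate device. Everything
# here is hypothetical: the speech box address, port, and newline-based
# protocol are invented to illustrate the idea.
import socket

SPEECH_BOX = ("192.168.1.50", 5500)  # hypothetical dedicated TTS device

def say(text):
    """Send text to the external device, which synthesises and plays it
    on its own CPU, so a busy or crashing host can't stall the speech."""
    with socket.create_connection(SPEECH_BOX, timeout=0.5) as conn:
        conn.sendall(text.encode("utf-8") + b"\n")

say("Desktop, list view, 14 items")
```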

The reason text to speech systems became software, instead of hardware, is largely because of cost. It's much cheaper to just download and install a program than it is to purchase another device. Also, it means you don't have to carry around another dongle and plug it into the computer.