in reply to Bri🄰

Speculation: I think your speech queuing works fine; that's not where the problem is. I suspect the problem is somewhere in the protocol handling itself, because now that you've made it queue everything, it does what it should. The problem with the previous builds was that when NVDA received one string while reading another, those two weren't queued. All of the above was using VoiceOver announcements for me, though I could try with TTS as well.
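To make the "queue everything" idea concrete, here is a minimal sketch of the behaviour being described, not the app's actual code; `on_string_received` and `flush_speech` are hypothetical names, and `speak` stands in for whatever announcement call is really used:

```python
from queue import Queue

# Illustrative sketch: enqueue every incoming string so that one
# received mid-utterance is spoken afterwards instead of being dropped.
speech_queue: Queue[str] = Queue()

def on_string_received(text: str) -> None:
    # Never interrupt or discard; always enqueue in arrival order.
    speech_queue.put(text)

def flush_speech(speak) -> None:
    # `speak` is a placeholder for the real blocking TTS/VoiceOver
    # request, so each utterance finishes before the next begins.
    while not speech_queue.empty():
        speak(speech_queue.get())
```

The previous builds' symptom corresponds to skipping the queue and calling `speak` directly, so a second string arriving while one was being read would collide with it.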
in reply to James Scholes

@jscholes @NikJov Yeah, I was afraid someone was going to ask for this. Understandable as it is, I've never really been sure how to implement something like this. I believe NVDA sends language codes and whatever, but the issue is selecting a voice to use for that language without creating some kind of interface for mapping languages to their voices. The first one it finds? I dunno.
in reply to Bri🄰

@NikJov This is one of the few areas where VoiceOver outstrips the competition. As a language learner, I can set up that mapping, prioritise my list of profiles, set parameters like rate on a per-language basis, have it switch speech engines on the fly, and quickly opt into and out of automatic switching via the rotor. It's not perfect, but it does mostly just work.

Contrast that with NVDA's frankly abysmal support for multilingual users. It can only switch between voices of the same synthesiser, uses the same rate regardless of language, doesn't support the concept of multiple voice profiles at all without an add-on, doesn't offer a way to quickly switch languages manually, and gives the user virtually no control over how the voice for each language is selected.
