Edit: This is now released. Say all works, though the audio sometimes gets choppy. But it doesn't crash.
Right! I now have a copy of Eloquence that works on the 64-bit alphas of #NVDA, with the following issues: say all on the web doesn't work (it stops whenever the element type changes, for reasons I don't understand), and dialect switching doesn't work (but it no longer crashes everything). If you want to play, you'll need to follow the build instructions; I only understand about a quarter of this code and have no intention of actually releasing something that's still broken: github.com/fastfinge/eloquence_64/
#nvda
in reply to James Scholes

@jscholes Hah, no worries. Your question got me thinking about what that even means. Like, if my collaborator doesn't speak my language, does that mean I should label the code as AI assisted? If the code started off entirely human generated, and an AI rewrote it, is it now AI generated? If a human rewrote large parts of what the AI did, when does the code stop being AI? I really don't know.
in reply to James Scholes

@jscholes So with more code updates this morning, the thing I'm noticing is that the more rewriting we do, the less code survives from the initial AI rewrite. The AI solution mostly worked, but it was over-complicated and multi-threaded where it didn't need to be. We're slowly arriving at code that is both simpler and works better.
in reply to 🇨🇦Samuel Proulx🇨🇦

I suppose I initially asked because it defaulted to a helper process written in Python, using sockets as the IPC mechanism. Which is very AI, based on what will have been most common in the training data.

But for this sort of thing, I wonder about the performance gains from shared memory, COM, or whatever, with something other than Python on the other end.
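
(For anyone following along, here's a minimal sketch of the socket pattern in question. This is illustrative only, not the actual eloquence_64 code: a 32-bit helper process owns the DLL, and the 64-bit side sends it line-delimited commands over a local socket. The port number and the command protocol are made up.)

import socket

HOST, PORT = "127.0.0.1", 8901  # hypothetical port

def serve():
    # Runs inside the 32-bit helper that has loaded the DLL.
    with socket.create_server((HOST, PORT)) as srv:
        conn, _ = srv.accept()
        with conn, conn.makefile("rwb") as f:
            for line in f:
                cmd = line.decode().strip()
                if cmd == "quit":
                    break
                # ... hand cmd to the 32-bit DLL here ...
                f.write(b"ok\n")
                f.flush()

def send(cmd):
    # Called from the 64-bit side; one round trip per command.
    with socket.create_connection((HOST, PORT)) as s, s.makefile("rwb") as f:
        f.write(cmd.encode() + b"\n")
        f.flush()
        return f.readline().decode().strip()

(Every round trip pays socket and serialization overhead, which is why shared memory or COM might win for something as latency-sensitive as speech.)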

in reply to James Scholes

@jscholes So the reason I wanted Python was that I naively thought a lot of the existing code could be reused, as well as some of the lessons learned from IBM TTS, eloquence threshold, and the sonata voices. That turned out to be entirely wrong. The "correct" way to do this would be to write a 64-bit, API-compatible wrapper for ECI.dll. But that's way beyond my abilities as a programmer, and AI can't help, because we don't have the development headers for ECI.dll to feed it.
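
(To make the headers problem concrete, here's a sketch of what the helper has to do, from 32-bit Python, since a 32-bit DLL can only load into a 32-bit process. eciNew is a function name that shows up in existing community drivers, but the signature here is an assumption; that's the point.)

import ctypes

# Must run under 32-bit Python; a 32-bit eci.dll won't load into a
# 64-bit process, which is the whole reason the helper exists.
eci = ctypes.WinDLL("eci.dll")

# Assumed prototype: eciNew() returns an engine handle. Without the
# real headers, ctypes defaults every return type to a C int, which
# silently truncates pointers. Declaring restype explicitly is exactly
# the kind of thing the development headers would normally tell you.
eci.eciNew.restype = ctypes.c_void_p
handle = eci.eciNew()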
in reply to James Scholes

@jscholes Those are for version 6.4 of the DLL, and we use 6.1, because 6.4 made a bunch of changes (like requiring registry entries for languages and voices) that make it non-portable, plus it has several annoying bugs. I believe 6.4 also changed a bunch of things around threading. I've already run into issues with this, because tts.txt is the manual for 6.4, and we need to use 6.1, the last release before IBM took it over.
in reply to Day Garwood

@daygar @jscholes @matt If you look at this file, you should be able to understand what's expected of you by both ECI.dll and NVDA. github.com/fastfinge/eloquence_64/blob/master/host_eloquence32.py
in reply to Day Garwood

I suck ass at coding without help from either someone who knows way more, or, in most cases, Gemini or ChatGPT. I've used Python as it's super easy to get a program going, and things like Nuitka and PyInstaller let you compile stuff with a couple of commands. C++, I've heard, is way more complex, and if you don't want to deal with the bloat that comes with Visual Pubio, sorry, I mean Visual Studio, then your other option is GCC with MinGW, and when I tried to get that working, the installation manager UI was borked with NVDA.
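
(The "couple of commands", for reference; --onefile is a real flag in both tools, and I'm just using the helper script from the repo as the example input:)

pyinstaller --onefile host_eloquence32.py
python -m nuitka --onefile host_eloquence32.py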
in reply to Alex Chapman

@alexchapman @daygar @jscholes @matt Meh. You can install the Visual Studio build tools from the command line these days, then just add the workload to vscode. Getting a development environment set up isn't the hard part; I actually already have one because of unspoken-ng and working with steamaudio. I'm just not comfortable in C++ doing anything other than compiling other people's code and making the odd, extremely basic, change.
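
(If anyone wants the command-line route: the Build Tools bootstrapper takes workload IDs directly. Something like the following, though double-check the workload ID against Microsoft's docs before trusting me on it:)

vs_buildtools.exe --quiet --wait --add Microsoft.VisualStudio.Workload.VCTools --includeRecommended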