Last year I gave a presentation at #ADC24 about #AbletonMove, and that presentation is finally online. It's part of a much longer video, so I've shared where my bit starts, which is around the 1 hour 21 minute mark. I was told to keep it to 10 minutes, so that's what I did. youtu.be/ZkZ5lu3yEZk?si=zXyftn…
Workshop: Inclusive Design within Audio Products - What, Why, How? - Accessibility Panel - ADC 2024
https://audio.dev/ -- @audiodevcon -- Accessibility Panel: Jay Pocknell, Tim Yates, Eliz... (YouTube)
Erion, in reply to Andre Louis:

Andre Louis, in reply to Erion:

Erion, in reply to Andre Louis:

Andre Louis, in reply to Erion:

Erion, in reply to Andre Louis:
Essentially yes, it's just better and it's local as well.
Yep, the reason it did that is that it tried to imitate the bass with a voice, since the frequency ranges were similar.
Toni Barth, in reply to Erion:

Erion, in reply to Toni Barth:
There is no training involved; it simply uses your source material as a reference. It has a model trained on various voices in various languages, which it uses for multiple steps (these are what you mentioned as polish). Compared to Adobe, it does more, and it runs locally on your computer without you having to send your audio files to their servers for processing. That's definitely better in my book, but of course you may think otherwise.
As far as quality goes, it largely depends on the source material; you will usually get better results if you use one plugin instance per voice. Some models may work better for a specific voice: for example, their Studio 2 model handles lower frequencies better.
Toni Barth, in reply to Erion:

Erion, in reply to Toni Barth: