A #deafblind client can no longer listen and is missing out on talk radio. How easy would it be to set up a receiver that outputs text they could read in Braille?
I can presumably get a Raspberry Pi to stream inbound audio, either from an SDR or the web. Is it viable to use something like #Whisper to then pipe the transcription out somewhere?
I don't necessarily want to store the data, although that could be useful I suppose; the real aim is to have the transcript coming out over a local network connection (telnet, probably, for prototyping) as close to real time as possible.
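For what it's worth, here's a minimal sketch of what I'm imagining, assuming the `openai-whisper` and `sounddevice` packages, audio on the default input at 16 kHz mono, and a hypothetical port 5555 for the reader to telnet into (none of these details are settled):

```python
# Sketch: mic/SDR audio -> Whisper -> plain text over a TCP socket.
# Assumptions: `openai-whisper` and `sounddevice` are installed, the audio
# source appears as the default input device, and a single reader (e.g.
# `telnet pi.local 5555` feeding a Braille display) connects on port 5555.
import socket

SAMPLE_RATE = 16_000   # Whisper expects 16 kHz mono float32 audio
CHUNK_SECONDS = 5      # latency/accuracy trade-off: shorter = faster, choppier

def chunk_samples(seconds: int, rate: int = SAMPLE_RATE) -> int:
    """Number of audio samples in one transcription chunk."""
    return seconds * rate

def serve_transcripts(host: str = "0.0.0.0", port: int = 5555) -> None:
    # Third-party imports kept local so the outline reads without them.
    import sounddevice as sd
    import whisper

    model = whisper.load_model("tiny.en")  # smallest model; a Pi will struggle with more
    srv = socket.create_server((host, port))
    conn, _ = srv.accept()  # one reader is enough for a prototype
    while True:
        # Record one chunk, blocking; real code would double-buffer so
        # transcription of chunk N overlaps recording of chunk N+1.
        audio = sd.rec(chunk_samples(CHUNK_SECONDS), samplerate=SAMPLE_RATE,
                       channels=1, dtype="float32")
        sd.wait()
        result = model.transcribe(audio.flatten(), fp16=False)  # fp16 off for CPU
        text = result["text"].strip()
        if text:
            conn.sendall((text + "\r\n").encode("utf-8"))

if __name__ == "__main__":
    serve_transcripts()
```

The fixed-chunk approach is crude (words can be cut at chunk boundaries); I gather faster-whisper or a VAD-based splitter would help, but even this might be enough to test whether a Pi can keep up.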
Anyone have any experience of this?