I'm currently exploring options for integrating local LLMs into Home Assistant. Does anyone here have experience with this already? For example, did you try running a Raspberry Pi 5 with one of the AI HATs? What were your results? Do you have any other affordable local AI setups running with Home Assistant, and which models work best for you?
#homeassistant #ai #raspberrypi #ollama #smarthome


in reply to Jonathan

@jonathan859 Nope, that is basically what happens in the end. You need a component that transcribes speech to text, interprets it, acts on it, and then generates a text response that is spoken back via TTS (or produces voice output directly). Technically all of that can run locally, but you can also offload it to cloud services like OpenAI. I'd like to run as much as possible locally, or at least on hardware I control. If push comes to shove, I'll probably get a vGPU server.
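To make the middle of that pipeline concrete, here is a minimal sketch of the "interpret and act" steps, assuming a default local Ollama install and a Home Assistant long-lived access token. The URLs, token, model tag, entity names, and prompt are placeholders, not a tested setup; STT and TTS would sit on either side of this.

```python
# Sketch: map a transcribed voice command to a Home Assistant service call
# using a local Ollama model. All names/URLs below are assumptions.
import json
import requests

OLLAMA_URL = "http://localhost:11434/api/chat"  # default Ollama endpoint
HASS_URL = "http://homeassistant.local:8123"    # adjust to your instance
HASS_TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"     # created on your HA profile page

SYSTEM_PROMPT = (
    "You translate smart-home commands into JSON. "
    'Respond only with {"domain": ..., "service": ..., "entity_id": ...}. '
    "Known entities: light.living_room, switch.coffee_maker."  # hypothetical entities
)

def interpret(text: str) -> dict:
    """Ask the local LLM to turn free text into a structured action."""
    resp = requests.post(OLLAMA_URL, json={
        "model": "llama3.2:3b",   # small model, plausible for modest hardware
        "stream": False,
        "format": "json",         # Ollama can constrain output to valid JSON
        "messages": [
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": text},
        ],
    }, timeout=120)
    resp.raise_for_status()
    return json.loads(resp.json()["message"]["content"])

def act(action: dict) -> None:
    """Call the Home Assistant REST API with the interpreted action."""
    url = f"{HASS_URL}/api/services/{action['domain']}/{action['service']}"
    requests.post(url,
                  headers={"Authorization": f"Bearer {HASS_TOKEN}"},
                  json={"entity_id": action["entity_id"]},
                  timeout=10).raise_for_status()

if __name__ == "__main__":
    # In a full pipeline this text would come from a local STT engine
    # (e.g. Whisper) and a confirmation would go back out via TTS (e.g. Piper).
    act(interpret("turn on the living room light"))
```

In practice you'd likely let Home Assistant's Assist pipeline handle the STT/TTS ends (e.g. the Wyoming Whisper and Piper add-ons) and plug the local LLM in as the conversation agent instead of scripting it by hand.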


in reply to Jage

@Jage I'm thinking about the same, but renting a vGPU server would be way more expensive than buying even a single graphics card, at least in the long run. That's why I'm asking whether people already have experience with this. I don't need to interpret live video, and probably not images either. Mostly just text to control entities and such, at least for now. But having a bit of GPU power in reserve wouldn't hurt either.