in reply to André Polykanine

This reminded me to work on an #LLM RSS reader.
It simply pulls the CBS News RSS feed into a list. That list gets passed to the LLM to decide the reading order. Then each article is summarized and, if `--speak` is used, read aloud by eSpeak.
Smaller models sometimes had issues reproducing the links, getting confused and trying to construct a URL from the title. Qwen3-30B-A3B-Instruct-2507 did a good job.
`--save` saves the read links to a file so they aren't repeated.
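The flow above (filter out already-read links, then ask the LLM for a reading order) could be sketched roughly like this. Function names and the save-file format here are my guesses for illustration, not necessarily what the script actually does:

```python
from pathlib import Path

def load_seen(save_path):
    """Load previously read links from the --save file (assumed one URL per line)."""
    p = Path(save_path)
    return set(p.read_text().splitlines()) if p.exists() else set()

def filter_unread(entries, seen):
    """Drop feed entries whose link has already been read."""
    return [e for e in entries if e["link"] not in seen]

def order_prompt(entries):
    """Build a prompt asking the LLM to choose a reading order.

    Giving it the exact links to echo back (instead of just titles) is
    what keeps smaller models from inventing URLs from the headlines.
    """
    lines = [f"{e['title']} :: {e['link']}" for e in entries]
    return ("Order these headlines by importance and reply with the links, "
            "one per line, most important first:\n" + "\n".join(lines))

entries = [
    {"title": "Big story", "link": "https://example.com/a"},
    {"title": "Old story", "link": "https://example.com/b"},
]
seen = {"https://example.com/b"}
print(order_prompt(filter_unread(entries, seen)))
```

The real script would parse the feed with something like feedparser and send the prompt to the LLM backend; this just shows the dedup and prompt shape.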
github.com/Jay4242/llm-scripts…
#llm
in reply to André Polykanine

I just changed it to use a Python text-to-speech library instead to make it less Linux-focused. I think the library simply calls eSpeak on Linux or the Windows speech engine.
The command-line text part should have worked.
It does take an LLM backend though; I use llama.cpp. I haven't tested it with Ollama yet.
If you wanted ChatGPT to do it, we could probably add a spot for an API key. I'm normally more focused on using local models.
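Since llama.cpp's server speaks the OpenAI-style chat API, pointing the script at ChatGPT would mostly mean swapping the base URL and adding a key. A minimal sketch of what building that request might look like (the default URL and the "local" model name are assumptions, not the script's actual config):

```python
import json

def build_request(prompt, base_url="http://localhost:8080/v1", api_key=None):
    """Build the pieces of an OpenAI-compatible chat completion request.

    With the defaults this targets a local llama.cpp server; pass a
    different base_url and an api_key to use a hosted API instead.
    """
    headers = {"Content-Type": "application/json"}
    if api_key:
        headers["Authorization"] = f"Bearer {api_key}"
    body = json.dumps({
        "model": "local",  # llama.cpp serves whatever model it loaded; hosted APIs need a real name
        "messages": [{"role": "user", "content": prompt}],
    })
    return f"{base_url}/chat/completions", body, headers

url, body, headers = build_request("Summarize this article: ...")
print(url)
```

Sending it is then just an HTTP POST with any client (urllib, requests), so the local/remote switch stays in one place.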