Re last: Waiting waiting waiting! Waiting and looking forward! Given that most news sites are an accessibility nightmare, a morning brief from #ChatGPT would be an absolutely awesome feature. #AI
This reminded me to work on an #LLM RSS reader. It simply pulls the CBS News RSS feed into a list, which gets passed to the LLM to decide the reading order. Each article is then summarized and read aloud by eSpeak if `--speak` is used. Smaller models sometimes had issues reproducing the links, getting confused and trying to construct a URL from the title. Qwen3-30B-A3B-Instruct-2507 did a good job. `--save` writes the read links to a file so they aren't repeated. github.com/Jay4242/llm-scripts…
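Not the actual llm-rss.py (the link above is truncated), but a minimal sketch of that flow, assuming feedparser and requests, pyttsx3 standing in for the speech side, and a llama.cpp server on its OpenAI-compatible /v1/chat/completions endpoint; the feed URL, prompts, and the read_links.txt filename are placeholders:

```python
# Minimal sketch: pull an RSS feed, let an LLM pick the reading order,
# summarize each article, optionally speak it, and remember what was read.
import argparse
import feedparser   # RSS parsing
import pyttsx3      # cross-platform TTS; eSpeak on Linux, SAPI on Windows (assumption)
import requests

LLM_URL = "http://localhost:8080/v1/chat/completions"  # llama.cpp server (assumed default port)
FEED_URL = "https://www.cbsnews.com/latest/rss/main"   # placeholder CBS News feed URL
SEEN_FILE = "read_links.txt"                           # where --save remembers read links


def ask_llm(prompt: str) -> str:
    """One-shot chat request to an OpenAI-compatible endpoint (e.g. llama.cpp's server)."""
    resp = requests.post(LLM_URL, json={
        "model": "local",
        "messages": [{"role": "user", "content": prompt}],
    }, timeout=300)
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]


def main() -> None:
    parser = argparse.ArgumentParser()
    parser.add_argument("--speak", action="store_true", help="read summaries aloud")
    parser.add_argument("--save", action="store_true", help="remember read links")
    args = parser.parse_args()

    # Links already read, so they aren't repeated.
    try:
        seen = set(open(SEEN_FILE).read().split())
    except FileNotFoundError:
        seen = set()

    # Pull the feed into a (title, link) list, skipping anything already read.
    feed = feedparser.parse(FEED_URL)
    items = [(e.title, e.link) for e in feed.entries if e.link not in seen]

    # Let the LLM decide the reading order. Passing links verbatim and asking the
    # model to echo them avoids the URL-mangling smaller models were prone to.
    listing = "\n".join(f"{title} | {link}" for title, link in items)
    ordered = ask_llm("Reorder these headlines by importance. "
                      "Echo one 'title | link' per line, links unchanged:\n" + listing)

    engine = pyttsx3.init() if args.speak else None
    for line in ordered.splitlines():
        if " | " not in line:
            continue
        title, link = (part.strip() for part in line.rsplit(" | ", 1))
        # Crude article fetch; the real script presumably does proper text extraction.
        page = requests.get(link, timeout=30).text
        summary = ask_llm("Summarize this news article in a few sentences:\n" + page[:8000])
        print(f"{title}\n{summary}\n")
        if engine:
            engine.say(summary)
            engine.runAndWait()
        if args.save:
            with open(SEEN_FILE, "a") as f:
                f.write(link + "\n")


if __name__ == "__main__":
    main()
```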
I just changed it to a Python text-to-speech library instead to make it less Linux-focused. I think the library simply calls eSpeak on Linux or Windows' built-in speech. The command-line text part should have worked. It does take an LLM backend, though; I use llama.cpp. I haven't tested it with Ollama yet. If you wanted ChatGPT to do it, we could probably add a spot for an API key. I'm normally more focused on using local models.
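On the API-key idea: a small sketch, assuming the official openai Python client, of how the same code could target either a local llama.cpp server or a hosted ChatGPT model just by swapping the base URL and key; the URL, model names, and the `use_openai` flag are placeholders, not how the script currently does it:

```python
# Sketch: one client, two backends. Because llama.cpp's server exposes an
# OpenAI-compatible API, the same client can point at a local model or at ChatGPT.
import os
from openai import OpenAI

def make_client(use_openai: bool = False) -> OpenAI:
    if use_openai:
        # Hosted backend: needs an API key in the environment.
        return OpenAI(api_key=os.environ["OPENAI_API_KEY"])
    # Local backend: e.g. a llama.cpp server started with `llama-server -m model.gguf`.
    return OpenAI(base_url="http://localhost:8080/v1", api_key="none")

client = make_client(use_openai=False)
reply = client.chat.completions.create(
    model="local",  # llama.cpp accepts any name here; a real OpenAI model name otherwise
    messages=[{"role": "user", "content": "Summarize today's top headline in two sentences."}],
)
print(reply.choices[0].message.content)
```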