
Items tagged with: ollama


Wow, glm-4.7-flash (30B-A3B MoE) was posted on #Ollama 3 days ago, and there are already 19.2K downloads! I haven't tried it, but people seem to say it works better at agentic coding than other models around 30B! It also seems to be slower and to use a lot more memory than other models around 30B for some reason. #LLM #ML #AI ollama.com/library/glm-4.7-fla…


I'm currently exploring options for local LLMs to be integrated into Home Assistant. Does anyone here have experience with this already? Like, did you try running a Raspi5 with one of the AI HATs? What were your results? Do you have any other affordable local AI systems running with Home Assistant? Which models work best for you?
#homeassistant #ai #raspberrypi #ollama #smarthome
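For anyone exploring this: Home Assistant's Ollama integration just needs to reach an Ollama server over HTTP, so a quick sanity check of the server from another machine on the LAN can save some head-scratching before wiring it into Home Assistant. A minimal sketch, assuming Python with requests, a server at homelab.local on Ollama's default port, and a small llama3.2:3b model (host and model are placeholders for your own setup):

```python
import requests  # pip install requests

OLLAMA_URL = "http://homelab.local:11434"  # placeholder host; 11434 is Ollama's default port

# List the models the server has pulled.
tags = requests.get(f"{OLLAMA_URL}/api/tags", timeout=10).json()
print([m["name"] for m in tags.get("models", [])])

# One-shot generation against a small model suited to low-power hardware.
reply = requests.post(
    f"{OLLAMA_URL}/api/generate",
    json={
        "model": "llama3.2:3b",  # placeholder; use whatever model you have pulled
        "prompt": "Say hello to Home Assistant.",
        "stream": False,
    },
    timeout=120,
).json()
print(reply["response"])
```

If that responds in a reasonable time on your hardware, pointing the Home Assistant integration at the same URL is the easy part.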


#Ollama v0.14.1 has experimental image generation models.
ollama run x/z-image-turbo
Only available on Apple Silicon Macs and on Linux with CUDA, and apparently more models are coming soon, such as GLM-Image, Qwen-Image-2512, Qwen-Image-Edit-2511... #LLM #ML #AI github.com/ollama/ollama/relea…


I started a little Python project to take my ~20 years of #Flickr photos, categorize them, and analyze the content using #ollama locally. So far the results are pretty good. Some hallucinations, but mostly useful content.

I'd be interested in knowing if others have tried this. What information has been useful to collect? Face identification is a big area my tool needs some work on. I want to group family/friends.
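In case it helps anyone thinking of doing the same, here's a minimal sketch of the kind of per-photo call this involves, assuming the ollama Python package and a locally pulled llava vision model (the model choice and file name are placeholders, not necessarily what this project uses):

```python
import ollama  # pip install ollama; talks to a locally running Ollama server

def describe_photo(path: str) -> str:
    """Ask a local vision model to describe and tag one image file."""
    response = ollama.chat(
        model="llava",  # assumption: any locally pulled vision model
        messages=[{
            "role": "user",
            "content": "Describe this photo and suggest a few category tags.",
            "images": [path],  # the ollama package accepts local file paths here
        }],
    )
    return response["message"]["content"]

if __name__ == "__main__":
    print(describe_photo("photo.jpg"))  # placeholder file name
```

Looping that over an export of the photo library and storing the descriptions alongside EXIF data is roughly the shape of the pipeline; face grouping would need a separate tool.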


Remember Aaron Swartz, because Zucc won't be getting any jail time for this.

youtube.com/watch?v=bBa5TO_nBJ…

Meta leeched over 80 terabytes of books off of torrents for commercial purposes. Not personal use. They made sure not to seed the books to cover their behinds.

When you do it, it's 30 years in prison; when they do it, it's a fine.

#AI #ProfessionalPiracy #Meta #Llama #Ollama



I'm not the biggest fan of #IA, far from it (ethical considerations aside, I think there's still a lot of hype and little substance), but I also think that some part of everything coming out of this boom will end up sticking around in the long run. What interests me most is tinkering with the capabilities of a private local instance, and I ended up setting up a little project I found on Github to build a small chatbot for analyzing PDF documents, built on #ollama as the engine and Mistral as the #LLM. Although I've already noticed a certain tendency to make things up, it's a curious and even potentially useful tool. It's relatively simple to set up once you get past the Python dependency hell that forces you to downgrade some module, but it consumes an enormous amount of resources. A Mac Mini with an M2 struggles with every question. It has also been useful for understanding the resources that generative AI with a modest LLM demands and, once again, for being suspicious of anyone who offers you this for free as a service. If you're curious to try it yourselves, here's the project I cloned: github.com/SonicWarrior1/pdfch…
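For the curious, the core loop of this kind of tool is small. A bare-bones sketch (not the pdfchat project's actual code), assuming the pypdf and ollama Python packages, a locally pulled mistral model, and a placeholder document and question; it crudely truncates the text instead of doing the chunking and retrieval a real tool would:

```python
from pypdf import PdfReader  # pip install pypdf
import ollama                # pip install ollama

# Pull the raw text out of the PDF (placeholder file name).
reader = PdfReader("document.pdf")
text = "\n".join(page.extract_text() or "" for page in reader.pages)

# Ask a local Mistral model about it, with naive truncation to keep the prompt small.
answer = ollama.chat(
    model="mistral",
    messages=[
        {"role": "system", "content": "Answer questions using only the provided document."},
        {"role": "user", "content": f"Document:\n{text[:8000]}\n\nQuestion: What is this document about?"},
    ],
)
print(answer["message"]["content"])
```

Even this stripped-down version makes the resource point obvious: every question means pushing a big chunk of the document through the model again.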