Wow, glm-4.7-flash (30B-A3B MoE) was posted on #Ollama 3 days ago, and there are already 19.2K downloads! I haven't tried it yet, but people seem to say it handles agentic coding better than other ~30B models. That said, it also seems to be slower and use noticeably more memory than other models in that size range, for some reason. #LLM #ML #AI ollama.com/library/glm-4.7-fla…
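If I do get around to trying it, something like this minimal sketch with the Ollama Python client would probably do (assuming the library tag really is `glm-4.7-flash` as in the post, and a local Ollama server is already running with the model pulled):

```python
# Sketch only: quick local test via the Ollama Python client.
# Assumes `ollama serve` is running and the model tag matches the post.
import ollama

response = ollama.chat(
    model="glm-4.7-flash",  # tag assumed from the post; check the Ollama library page
    messages=[
        {"role": "user", "content": "Write a small Python function that reverses a string."}
    ],
)
print(response["message"]["content"])
```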