Wow, glm-4.7-flash (30B-A3B MoE) was posted on #Ollama 3 days ago, and there are already 19.2K downloads! I haven't tried it yet, but people seem to say it handles agentic coding better than other ~30B models! It also seems to be slower and use a lot more memory than other ~30B models for some reason. #LLM #ML #AI ollama.com/library/glm-4.7-fla…
glm-4.7-flash
"As the strongest model in the 30B class, GLM-4.7-Flash offers a new option for lightweight deployment that balances performance and efficiency." (Ollama)