

After a long period of inactivity on vision language models, llama.cpp merged support for MiniCPM-V-2.5. Hopefully support for 2.6 is also on the way soon. #LLM #Multimodal #AI #ML
huggingface.co/openbmb/MiniCPM…
huggingface.co/openbmb/MiniCPM…
github.com/ggerganov/llama.cpp…


in reply to Chi Kim

Do you think this MiniCPM is any good? I can't even find a description on this page of what the model does! :)
in reply to victor tsaran

@vick21 Definitely better than what's available on Ollama right now. The description is in the model summary for the non-quantized model, and it's on one of the links I included in my post. V2.6 is even better, but not supported in llama.cpp yet.
in reply to victor tsaran

@vick21 For anyone having trouble finding the description (like me) this is the direct link

huggingface.co/openbmb/MiniCPM…

in reply to Chi Kim

Ah, the description was under the second link. I clicked on the first one, of course! I'll have to try it before I believe the claims, though. So many people claim to outperform OpenAI these days! :)
in reply to victor tsaran

@vick21 You can't trust numbers from benchmarks alone, but I'm waiting for InternVL2-Llama3-76B, which is #1 on MMBench! GPT-4o (0513) is #3, the old original GPT-4V-1106 is #31, and MiniCPM-Llama3-V2.5 is #35. mmbench.opencompass.org.cn/lea…