Українська правда

Chinese open-source LLMs are ahead of Western competitors

- 21 July, 01:30 PM

Large open-source language models from Chinese startups like Moonshot AI and DeepSeek are outperforming Western counterparts from big companies such as Meta, according to a new report from LMArena, an open, crowdsourced AI benchmarking platform.

The top open-source model in the ranking is Kimi K2 from Chinese startup Moonshot AI. It is built on a mixture-of-experts (MoE) architecture with a total of 1 trillion parameters, of which 32 billion are active for any given query. LMArena says this design helps balance efficiency and performance.
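To make the distinction between total and active parameters concrete, here is a minimal top-k routing sketch in Python. The expert count, hidden dimension, and top-k value are toy assumptions for illustration only, not Moonshot AI's actual configuration.

```python
# Minimal sketch of sparse mixture-of-experts routing: all experts count toward
# total parameters, but only the top-k selected per token are active at query time.
import numpy as np

rng = np.random.default_rng(0)

n_experts = 8      # total experts (all contribute to total parameter count)
top_k = 2          # experts actually used per token (the "active" parameters)
d_model = 16       # toy hidden dimension

# Each expert is a small feed-forward matrix; together they hold most parameters.
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]
router = rng.normal(size=(d_model, n_experts))   # gating network

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector to its top-k experts and mix their outputs."""
    logits = x @ router                          # score every expert
    top = np.argsort(logits)[-top_k:]            # keep only the best-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                     # softmax over the selected experts
    # Only top_k expert matrices are used here, so per-token compute scales
    # with top_k rather than with n_experts.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.normal(size=d_model)
print(moe_forward(token).shape)   # (16,) -- output keeps the model dimension
```

In a real MoE model the experts are full feed-forward blocks inside each transformer layer, but the routing idea is the same: most parameters sit idle for any single query.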

Next comes the better-known Chinese startup DeepSeek, which gained global attention in early 2025 with claims that its DeepSeek R1 model matched OpenAI's o1 on programming and reasoning tasks while being 90-95% cheaper. The startup's latest flagship model, DeepSeek R1-0528, came in second in performance and efficiency.

Rounding out the top three is another Chinese model, Qwen 235B A22B from Alibaba. LMArena notes that it is a base model without instruction tuning, which makes it strong at raw generation, and it has a high rating in the community thanks to its thinking capabilities.

The first non-Chinese open model appears in fifth place: Google DeepMind's Gemma 3 27B. It can process both text and image data, excels at reasoning, and handles long-context tasks. The community notes that Gemma has improved memory efficiency and supports a wider context window compared to previous versions.

In addition, the list includes a model from perhaps Europe's only notable AI developer: Mistral Small 2506 from France's Mistral AI. Llama also appears twice in the ranking, once from its original developer Meta and once from NVIDIA.