The Qwen Team Launches the New Qwen3 Series, Featuring MoE and Dense Models with Comprehensive Performance Upgrades
The Qwen team has released its latest large language model family, Qwen3, comprising two Mixture-of-Experts (MoE) models and six dense models ranging from 0.6 billion to 235 billion parameters. The flagship model, Qwen3-235B-A22B, demonstrates outstanding benchmark performance in coding, mathematics, and general capabilities, surpassing other top-tier models. The smaller MoE model, Qwen3-30B-A3B, activates only about 3 billion of its 30 billion parameters per token, giving it a significant efficiency advantage, and even the compact Qwen3-4B can compete with much larger models. Qwen3 supports 119 languages, has been optimized for coding and agent capabilities, and can be deployed easily through a variety of platforms and tools.
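As a concrete illustration of local deployment, the sketch below loads a Qwen3 model through Hugging Face `transformers`. The repository name `Qwen/Qwen3-4B` and the generation settings are assumptions for illustration, not an official recipe; the heavy imports are deferred so the lightweight prompt helper works without the library installed.

```python
MODEL_ID = "Qwen/Qwen3-4B"  # assumed Hugging Face repo name for the compact dense model


def build_chat(prompt: str) -> list[dict]:
    """Wrap a user prompt in the chat-message format that apply_chat_template expects."""
    return [{"role": "user", "content": prompt}]


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Run one chat turn against the model. Downloads several GB of weights on first use."""
    # Deferred import: only needed when actually generating.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForCausalLM.from_pretrained(
        MODEL_ID, torch_dtype="auto", device_map="auto"
    )
    # Render the chat messages into the model's prompt template.
    text = tokenizer.apply_chat_template(
        build_chat(prompt), tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Strip the prompt tokens and decode only the newly generated completion.
    return tokenizer.decode(
        output_ids[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True
    )
```

Calling `generate("Write a Python function that reverses a string.")` would then return the model's completion; the same pattern applies to the larger MoE checkpoints, subject to available GPU memory.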