DeepSeek Open Sources Again: Releases 3B MoE OCR Model DeepSeek-OCR

AI Daily News · updated 4 days ago · dongdong

DeepSeek has released a new visual text compression model, DeepSeek-OCR. With only 3 billion parameters in a Mixture-of-Experts (MoE) architecture, the model represents page text with far fewer visual tokens, reaching up to a 20× compression ratio, and can process 33 million pages per day across 20 nodes. On the Fox benchmark, it achieves over 85% accuracy across all text length ranges. DeepSeek-OCR supports multiple resolution configurations, multilingual processing, and complex chart interpretation, and it is reported to deliver roughly 10× compression efficiency in multi-turn dialogues.
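As a rough illustration of what a "20× compression ratio" means here, the sketch below computes the ratio of text tokens a page would otherwise need to the visual tokens used to encode it. The token counts are hypothetical examples, not figures reported for DeepSeek-OCR.

```python
# Illustrative arithmetic only: how a vision-token compression ratio is
# typically defined. The counts below are made-up examples, not numbers
# reported for DeepSeek-OCR.

def compression_ratio(text_tokens: int, vision_tokens: int) -> float:
    """Ratio of text tokens a page would require to the vision tokens used to encode it."""
    return text_tokens / vision_tokens

# Hypothetical page: ~2,000 text tokens rendered into 100 vision tokens -> 20x compression.
print(compression_ratio(text_tokens=2000, vision_tokens=100))  # 20.0

# A lighter setting: ~1,000 text tokens into 100 vision tokens -> 10x compression.
print(compression_ratio(text_tokens=1000, vision_tokens=100))  # 10.0
```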
