🚀 Alibaba drops Qwen3.6 — 3B Active, competing with giants
Alibaba just open-sourced Qwen3.6-35B-A3B under the Apache 2.0 license: a sparse MoE model with 35B total parameters but only 3B active per token.
Why it matters:
• Agentic coding rivals models 10× larger
• Native multimodal (vision + text) reasoning
• Supports both thinking + non-thinking modes
Performance punch:
• Beats its predecessor Qwen3.5-35B-A3B by a wide margin
• Outperforms dense 27B models on key coding benchmarks
• Matches (and sometimes beats) Anthropic’s Claude Sonnet 4.5 on vision-language tasks
• Strong spatial intelligence