I might be wrong, but I believe Alibaba’s Qwen offerings have become more interesting than DeepSeek’s, at least when coding is what matters most to you.

When I first mentioned Qwen3, I noted that the Android app was not only unavailable “for my device” but most likely unavailable outside a few Asian countries. A QR code allowed downloading version 1.0.0 of the APK, which was frustrating.

The other day, I noticed that the default model in the web app had become Qwen3-Coder, and the Android app asked me to download an update. There is a new page for those wishing to Download Qwen, with links for Android and macOS (no iOS). The app in the Google Play Store is still “not available for any of [my] devices”; however, the QwenChat_release.apk offered on that page is now at version 1.4.0, which is currently the latest.

The default model is now Qwen3-Coder:

  • Maximum context length: 1,048,576 tokens
  • Maximum generation length: 65,536 tokens

The flagship model has been updated to the July release, Qwen3-235B-A22B-2507:

  • Maximum context length: 131,072 tokens
  • Maximum summary generation length: 8,192 tokens
  • Maximum generation length: 81,920 tokens

Obviously, Qwen is betting on code generation.

However, the Building AI products with Qwen page seems out of sync: the context lengths shown there don’t match the models above. Alibaba Cloud Model Studio is where the truth can be found. 😉

The featured models are the newest ones.

qwen3-coder-plus

Pricing here.

  • Context window 1,048,576
  • Maximum input (tokens) 1,000,000
  • Maximum output (tokens) 65,536

Tiered pricing:

| Input token count | Input price (per 1M tokens) | Output price (per 1M tokens) |
|---|---|---|
| 0–32K | $1 | $5 |
| 32K–128K | $1.8 | $9 |
| 128K–256K | $3 | $15 |
| 256K–1M | $6 | $60 |
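To get a feel for what the tiers mean in practice, here is a small cost estimator based on the table above. It assumes (my reading, worth verifying against the Model Studio docs) that the tier is selected by the request’s input token count, that the tier’s rates apply to both input and output, and that the “K” boundaries are binary (32K = 32,768), matching the 1,048,576-token context window.

```python
# Hypothetical cost estimator for qwen3-coder-plus tiered pricing.
# Assumptions: tier chosen by input token count; the tier's rates apply
# to both input and output; "K" boundaries are binary (32K = 32,768).

# (input-token ceiling, input $/1M tokens, output $/1M tokens)
TIERS = [
    (32_768, 1.0, 5.0),
    (131_072, 1.8, 9.0),
    (262_144, 3.0, 15.0),
    (1_048_576, 6.0, 60.0),
]

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost in USD for a single request."""
    for ceiling, in_price, out_price in TIERS:
        if input_tokens <= ceiling:
            return (input_tokens * in_price + output_tokens * out_price) / 1_000_000
    raise ValueError("input exceeds the 1,048,576-token context window")

# A 200K-token codebase dump with a 10K-token answer lands in the
# 128K-256K tier: 200,000 × $3/1M + 10,000 × $15/1M = $0.75
print(f"${estimate_cost(200_000, 10_000):.2f}")
```

The takeaway: dumping a whole repository into the context quadruples the per-token rate compared to a small prompt, so the huge window is priced accordingly.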

qwen-plus-2025-07-14

Price per 1M tokens:

  • Input $0.4
  • Output $1.2
  • Output (Thinking) $4

qwen-omni-turbo

Pricing here.

Qwen-Omni supports multiple input modalities, including video, audio, image, and text. It can output audio and text.

Price per 1M tokens when the input and output are text-only:

  • Input $0.07
  • Output $0.27

Note that the coder isn’t that cheap, but with real code, not just toy projects, the huge context is important.

Many other models are available, and you can even compare them 3 at a time. Example:

These “plus” models aren’t available in the web app or mobile app, only via API calls. Fair enough. The 1-million-token-context Qwen3-Coder can nonetheless be tested via the browser.
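For those API calls, Model Studio exposes an OpenAI-compatible endpoint. The sketch below only builds the request payload without sending it; the base URL and model name are my assumptions and should be checked against the Alibaba Cloud documentation.

```python
import json

# Assumed values; verify against the Alibaba Cloud Model Studio docs.
BASE_URL = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
MODEL = "qwen3-coder-plus"

def build_chat_request(prompt: str, max_tokens: int = 4096) -> dict:
    """Build an OpenAI-style chat completion payload (not sent here)."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }

payload = build_chat_request("Refactor this function to be iterative: ...")
print(json.dumps(payload, indent=2))
# To actually send it: POST {BASE_URL}/chat/completions with an
# "Authorization: Bearer <your API key>" header, e.g. via requests.post().
```

Since the endpoint speaks the OpenAI dialect, existing coding tools that accept a custom base URL should work against it with only configuration changes.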

🐼

If I want to compare Alibaba’s offerings with DeepSeek’s, DeepSeek’s website falls short of convincing: there’s a link to the web app (which is not bad, thank you), another to the mobile app (not a true link; it only shows a QR code pointing to download.deepseek.com/app/), and a link to their API platform.

The Models & Pricing page kind of disappoints. Still, it’s nice that they have this offer:

Off-Peak Discounts: DeepSeek-V3 with 50% off and DeepSeek-R1 with 75% off at off-peak hours (16:30–00:30 UTC daily).

  • The deepseek-chat model points to DeepSeek-V3-0324.
  • The deepseek-reasoner model points to DeepSeek-R1-0528.
  • The maximum input context length is 64K for both models, which means they cannot compete with Alibaba’s Qwen models! Also, to cut costs, you should use context caching.
  • The maximum output length is 8K for deepseek-chat and 64K for deepseek-reasoner (including the Chain-of-Thought!), with the default values being half of the maximums (this is what I’d expect in the web app and mobile app).

The standard (not discounted) prices per 1M tokens:

| Model | deepseek-chat | deepseek-reasoner |
|---|---|---|
| Input (cache hit) | $0.07 | $0.14 |
| Input (cache miss) | $0.27 | $0.55 |
| Output | $1.10 | $2.19 |
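Combining this table with context caching and the off-peak discount quoted earlier gives a feel for where DeepSeek claws back ground on price. The helper below is a back-of-the-envelope sketch; whether the off-peak discount applies uniformly to cache-hit, cache-miss, and output rates is my assumption, so check the Models & Pricing page.

```python
# Rough per-request cost for the DeepSeek API, using the table above.
# Assumption: the off-peak discount applies uniformly to all rates.
PRICES = {  # $ per 1M tokens
    "deepseek-chat": {"hit": 0.07, "miss": 0.27, "out": 1.10},
    "deepseek-reasoner": {"hit": 0.14, "miss": 0.55, "out": 2.19},
}

def request_cost(model, in_tokens, out_tokens, cached_tokens=0, discount=0.0):
    """Cost in USD; cached_tokens are billed at the cache-hit rate."""
    p = PRICES[model]
    miss_tokens = in_tokens - cached_tokens
    cost = (cached_tokens * p["hit"]
            + miss_tokens * p["miss"]
            + out_tokens * p["out"]) / 1_000_000
    return cost * (1 - discount)

# 50K-token prompt (40K already cached), 8K-token reasoning answer:
full = request_cost("deepseek-reasoner", 50_000, 8_000)
cached = request_cost("deepseek-reasoner", 50_000, 8_000, cached_tokens=40_000)
off_peak = request_cost("deepseek-reasoner", 50_000, 8_000,
                        cached_tokens=40_000, discount=0.75)
print(f"no cache: ${full:.4f}  cached: ${cached:.4f}  off-peak: ${off_peak:.4f}")
```

Even at full price, a cached off-peak reasoner request costs a fraction of a cent; the 64K window, not the price, is what keeps DeepSeek out of the big-codebase game.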

🐼

I wonder how many people outside China (or outside BRICS) have used a paid account with either Alibaba or DeepSeek to get code assistance via API. Do they trust the Chinese?

Not that Anthropic, OpenAI, or xAI should be trusted with your data and your code… (Or Amazon, if you’re using Claude via Kiro.)

Either way, at the annual World Artificial Intelligence Conference in Shanghai, China just proposed a new global AI cooperation organization.

🐼

UPDATE: Oh, but there is more! Kimi and Z.AI: The more Chinese, the merrier!