SpaceX recently secured a deal to acquire Cursor, the AI coding platform, for $60 billion.

Cursor's core model is a fine-tune of a large Chinese mixture-of-experts model. It updates its weights every 90 minutes based on real user feedback.

The ownership transferred to American aerospace. The architectural dependency did not.

This is not a scandal. It is a precise description of where the race actually stands. The search platform Perplexity, used by millions of Western professionals, officially lists Kimi K2 Thinking, a Chinese frontier model, as a hosted option in its interface. Numerous US startups are selling inference on Chinese open-weight models to Western enterprises without advertising that fact prominently. The American tech stack and the Chinese model layer are already deeply interwoven. The acquisition announcements and the export control headlines are happening on top of an integration that is already complete at the product level.

The conventional framing is that the US controls the AI frontier because it controls the most powerful closed models and the chips required to train them. That framing was accurate in 2023. The open-weight model landscape has changed the calculation in ways that chip export controls cannot address.

Alibaba's Qwen 3.6 35B model is outperforming equivalently sized American models on autonomous coding benchmarks, specifically Terminal-Bench and SWE-Bench Pro, and on competitive mathematics. A separate Alibaba model called Ternary Bonsai, at 1.7 gigabytes in size, outperforms Meta's Llama 3.1 on reasoning, coding, and general knowledge tasks. These are not theoretical capability claims. They are benchmark results on the specific tasks developers use to make purchasing decisions.

The adoption pattern outside China is following the logic of those results. A significant number of US-based startups have deployed Chinese open-weight models and are selling inference on them to Western developers and enterprises. Chinese AI companies including Z.ai and MiniMax have filed IPO paperwork targeting international investment and Western market share. The expansion is not covert. It is commercial.
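
What that looks like in practice: these resellers typically expose the models behind OpenAI-compatible APIs, so moving a product from an American closed model to a Chinese open-weight one can be a configuration change. A minimal sketch, where the provider URL and model ID are hypothetical placeholders, not a real vendor:

```python
# Minimal sketch: consuming a Chinese open-weight model through a US
# inference reseller's OpenAI-compatible API. The base_url and model
# name are hypothetical placeholders, not an actual provider.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-inference.us/v1",  # hypothetical reseller endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="qwen3-coder",  # open-weight model family; exact ID varies by host
    messages=[{"role": "user", "content": "Write a binary search in Python."}],
)
print(resp.choices[0].message.content)
```

Because the interface is identical either way, nothing in the calling code reveals which lab trained the underlying weights, which is how this integration spread faster than the headlines.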

The US government's response to this has shifted in the last 60 days, and the direction of the shift is revealing.

The previous approach was restriction. Chip export controls, entity lists, attempts to prevent Chinese labs from accessing the hardware needed to train frontier models. That approach produced workarounds almost immediately, with companies routing purchases through European cloud intermediaries to bypass the restrictions.

The current approach, under Commerce Secretary Lutnick, is different. Export licenses for Nvidia and AMD chips to China are being approved again. In parallel, the government implemented a 15% revenue-sharing requirement on chip sales to China rather than a ban. The logic has shifted from denial to extraction. If China is going to train on American chips, the US government wants a percentage of the transaction.
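
The arithmetic of that extraction logic is simple. A back-of-envelope sketch, with the order size purely hypothetical:

```python
# Back-of-envelope sketch of the reported 15% revenue-share clause on
# Nvidia/AMD chip sales to China. The order value is a made-up example.
REVENUE_SHARE = 0.15

order_value = 1_000_000_000   # hypothetical $1B chip order
to_us_government = order_value * REVENUE_SHARE
to_vendor = order_value - to_us_government

print(f"US government take: ${to_us_government:,.0f}")  # $150,000,000
print(f"Vendor keeps:       ${to_vendor:,.0f}")         # $850,000,000
```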

What that shift signals is worth naming directly. It signals that the US government no longer believes it can prevent Chinese labs from accessing the compute needed to compete. The question has moved from whether Chinese models will reach parity to who captures the revenue when they do.

Nathan Lambert, the post-training lead at the Allen Institute for AI, frames the structural situation without diplomatic softening. Chinese models are being optimized by the global developer community at a rate that closed American models cannot match. Open-weight models that attract global contribution improve faster and cost less to serve. If that dynamic continues, Lambert argues, open models win the long-term enterprise market regardless of which country produced the original weights.

His response was to launch the ATOM Project (American Truly Open Models), funded by a $100 million NSF grant. OpenAI released its first genuinely open-weight model since GPT-2, called gpt-oss-120b, specialized in tool use and search. Nvidia continued pushing its open Nemotron 3 series. The US open-source response is now funded, named, and moving. It is also about three years behind the pace China set after DeepSeek.
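
Open weights are the operative detail there: gpt-oss-120b can be downloaded from Hugging Face and run locally rather than accessed through an API. A minimal sketch using the transformers library, with hardware requirements and generation settings glossed over:

```python
# Minimal sketch: running an open-weight model locally via Hugging Face
# transformers. "openai/gpt-oss-120b" is the published model ID; at this
# size it realistically needs a multi-GPU box or a quantized variant.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="openai/gpt-oss-120b",
    device_map="auto",  # spread weights across available GPUs (needs `accelerate`)
)

messages = [{"role": "user", "content": "Summarize the UNIX philosophy."}]
out = generator(messages, max_new_tokens=200)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```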

The story is not that China is winning. It is that the competition is real, the scoreboard is public, and the outcome is genuinely uncertain for the first time since the modern AI era began.

Anyone building products or businesses that depend on model access, model cost, or model architecture should have a view on this. The infrastructure underneath your tools is not neutral. It is the output of a geopolitical competition that is still in progress.

The developers who treated model selection as a technical decision in 2025 are now discovering it was also a strategic one.

404 Found covers AI developments from a European insider's perspective, three times a week.
