Interconnected

Kimi Kimi on the Wall

Who is the fairest (open weight) model of all?

Kevin Xu
Nov 13, 2025
∙ Paid

Today’s post is for our premium members only. If you are reading this in full, thank you for being an Interconnected Premium member! If you aren’t, I hope you become one by scrolling down and tearing down that paywall! 😎


Kimi K2 Thinking, the newest reasoning model from Moonshot AI, is now the “smartest” open weight AI model according to the independent analysis firm Artificial Analysis, beating out OpenAI’s gpt-oss. It has climbed to the top of the wall (for now) of frontier language model intelligence.

While you may think this is another “DeepSeek moment”, Mr. Market disagrees. Nvidia did not fall 17% in a single day upon Kimi K2 Thinking’s release. The entire cohort of Chinese tech stocks did not swell. Alibaba, a main investor in Moonshot AI, the lab that created Kimi, would have directly benefited from the model’s popularity; instead, its stock fell on disappointing Singles Day online shopping sentiment and lackluster consumer spending in China overall.

Mr. Market learns quickly, even if it oftentimes over-learns (or under-learns) the wrong thing.

Kimi K2 Thinking’s release follows a familiar pattern: a large base model first, followed by a smart reasoning model later. DeepSeek walked this path when it shipped its large V3 model over Christmas 2024, when no one was paying attention. A month later, in January 2025, it released R1, the reasoning model trained on top of V3, which caught the entire world by surprise. Similarly, Moonshot shipped its (very) large 1-trillion-parameter base model, Kimi K2, in July. More people were paying attention this time, like Azeem Azhar of Exponential View and Nathan Lambert of Interconnects (yes, almost the same name as this newsletter, but present tense). Predictably, a few months later, Moonshot released Kimi K2 Thinking.

All this is happening against an industry-wide backdrop where more open weight AI models produced by labs in China are being adopted as the foundation of American tech companies. Cursor is using Chinese models. Airbnb is using Chinese models. UiPath is using Chinese models. And many startups I and others have talked to have confided in private that they, too, are using Chinese models. Why in private? Because few companies want to be caught in the geopolitical quagmire of the US-China AI competition; they want customizable, performant, and cheap (or free) models that suit the purposes and use cases of their customers. And labs in China are shipping them non-stop, leaning into both the country’s structural advantages and the natural flywheel effect of open source development. Kimi K2 Thinking is the latest, but by no means the last.

With so many models out there, so many benchmarks to climb (and game), and switching costs between models so low (usually just a dropdown menu), whether you like a model or not is becoming more subjective. A model’s utility and personality are becoming more of the reason why someone uses one versus another.
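To make the low-switching-cost point concrete: because most frontier labs, Moonshot included, expose OpenAI-compatible APIs, moving between models is often just swapping two strings. Here is a minimal sketch using the official openai Python SDK; the base URL and model identifier below are illustrative assumptions, so check the provider’s documentation for the exact values.

```python
# Minimal sketch: switching between frontier models is often just
# swapping two strings in an OpenAI-compatible client.
# NOTE: the base_url and model name below are illustrative assumptions,
# not confirmed values -- consult the provider's docs.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.moonshot.cn/v1",  # assumed Moonshot endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="kimi-k2-thinking",  # assumed model identifier
    messages=[
        {"role": "user", "content": "Who is the fairest open weight model of all?"}
    ],
)
print(response.choices[0].message.content)
```

Change the base_url and model strings and the same code talks to a different provider, or to an open weight model you host yourself; that, in effect, is the whole “dropdown menu.”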

I have tried Kimi K2 Thinking, both on my local machine and via the cloud-hosted chatbot on Kimi’s official website (I will explain why later in this post). I have also closely watched Kimi’s growing online mindshare and chatter, both on Zhihu, a Q&A community platform similar to (but probably bigger than) Quora, and on Reddit, where its CEO, Yang Zhilin, and the core team did a highly engaging AMA (ask me anything) session.

Instead of sharing what I like and don’t like, which at this point is purely a question of personal taste, I’ll share some analysis and observations from my hands-on Kimi experience related to censorship, the model’s choice of data format as it pertains to export control, and Team Kimi’s community engagement strategy.

Censorship
