What happens when a $29 billion company forgets to rename a model ID, and what it means for every organization using open-source AI.
On March 19, 2025, Cursor, the AI-powered coding tool valued at $29 billion and generating an estimated $2 billion in annual recurring revenue, launched Composer 2, its newest and most powerful coding model. The announcement was bold: Cursor claimed Composer 2 was built through “continued pre-training of a base model combined with reinforcement learning,” positioning it as a proprietary, in-house breakthrough.
Less than 24 hours later, a developer named Fynn was debugging Cursor’s OpenAI-compatible API endpoint when something unexpected appeared in the response: accounts/anysphere/models/kimi-k2p5-rl-0317-s515-fast.
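For anyone unfamiliar with how that kind of detail leaks: OpenAI-compatible endpoints echo a model field in every chat-completion response, which is exactly the sort of place an upstream model ID can surface. The sketch below is illustrative only; the endpoint, key, and request shape are hypothetical stand-ins, not Cursor’s actual internals.

```python
# Minimal sketch of the kind of debugging that can surface a backend model ID.
# The base URL and API key are hypothetical; OpenAI-compatible APIs return a
# "model" field in the response body alongside the generated text.
import json
import requests

BASE_URL = "https://api.example-provider.com/v1"  # hypothetical endpoint
API_KEY = "sk-..."                                # hypothetical key

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "composer-2",  # the name the product advertises
        "messages": [{"role": "user", "content": "hello"}],
    },
    timeout=30,
)
body = resp.json()

# The "model" field reports what the backend actually served -- this is where
# an upstream ID like accounts/.../kimi-k2p5-rl-... could show up.
print(json.dumps({"requested": "composer-2", "served": body.get("model")}, indent=2))
```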
That model ID wasn’t a Cursor internal name. It was a near-literal description of what Composer 2 actually was: Kimi K2.5, the open-weight model from Chinese AI company Moonshot AI, fine-tuned with reinforcement learning.
The developer’s response was matter-of-fact: “so composer 2 is just Kimi K2.5 with RL. at least rename the model ID.”
Moonshot AI’s Kimi K2.5 is released under a Modified MIT License: permissive on its face, but with one critical addition that Moonshot AI wrote specifically because it anticipated exactly this scenario.
The license states that any commercial product or service that either serves more than 100 million monthly active users or generates more than $20 million (or equivalent in other currencies) in monthly revenue must prominently display “Kimi K2.5” in its user interface.
At its current revenue run rate, roughly $167 million a month against $2 billion in ARR, Cursor sits at more than eight times the $20 million monthly threshold. The license requirement was clear, enforceable, and apparently ignored.
Yulun Du, Head of Pretraining at Moonshot AI, publicly confirmed that Composer 2’s tokenizer was “completely identical to our Kimi tokenizer,” calling it “almost certainly the result of further fine-tuning of our model.” He directly tagged Cursor’s co-founder and asked: “Why aren’t you respecting our license, or paying any fees?”
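Tokenizer identity is one of the strongest fingerprints for a claim like this: the vocabulary and token-to-ID mapping act as a signature of a model family, and they survive fine-tuning untouched. A rough sketch of that comparison, with placeholder model names since Composer 2’s tokenizer isn’t published, might look like this:

```python
# A rough sketch of tokenizer fingerprinting, not the analysis Moonshot AI ran.
# If two models share an identical vocabulary and token-to-ID mapping, they
# almost certainly share a lineage. Both repo names below are placeholders;
# some custom tokenizers also require trust_remote_code=True to load.
from transformers import AutoTokenizer

tok_ref = AutoTokenizer.from_pretrained("reference-org/reference-model")     # hypothetical
tok_sus = AutoTokenizer.from_pretrained("suspect-org/suspected-derivative")  # hypothetical

vocab_ref, vocab_sus = tok_ref.get_vocab(), tok_sus.get_vocab()
shared = set(vocab_ref) & set(vocab_sus)
identical_ids = sum(1 for tok in shared if vocab_ref[tok] == vocab_sus[tok])

print(f"vocab sizes: {len(vocab_ref)} vs {len(vocab_sus)}")
print(f"shared tokens with identical IDs: {identical_ids}/{len(shared)}")
```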
The Cursor/Moonshot incident is a symptom of three massive gaps in how most companies currently handle AI: no reliable inventory of the models actually embedded in their products, no tracking of the license obligations those models carry, and no governance process to catch violations before they ship.
To avoid being caught off guard by the next model update or third-party integration, organizations need to move toward Automated AI Governance.
At Mend.io, we’ve built our platform specifically to account for the reality that AI models are now first-class components of the software supply chain and carry their own distinct risk profile.
AI model inventory and AI-BOM: Mend AI automatically discovers models, frameworks, agents, and RAG pipelines embedded in your applications, including “Shadow AI” that teams didn’t officially sanction. Every discovered model is tracked in a continuously updated AI Bill of Materials, giving security, legal, and compliance teams real-time visibility into what’s actually in use. (A minimal sketch of this kind of discovery appears below.)
Vulnerability and risk tracking for AI models: The risk profile of an AI model doesn’t end at licensing. The Mend.io platform also scans for known security vulnerabilities in OSS models and evaluates risks from malicious packages, bringing the same rigor to AI components that mature organizations already apply to open-source software.
Policy enforcement and governance: Mend.io enables organizations to define and enforce policies around AI component usage, blocking or escalating based on license risk, known vulnerabilities, malicious model detection, and compliance gaps. It’s not just visibility; it’s control.
License compliance for OSS AI models: Mend.io’s platform includes license compliance capabilities specifically for open-source AI models sourced from Hugging Face, Kaggle, and other repositories. License terms are surfaced and tracked automatically, including the kind of modified MIT conditions that tripped up Cursor. When a model’s license has specific commercial usage thresholds or attribution requirements, Mend.io flags them before they become a legal liability.
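To make the ideas above concrete, here is a deliberately minimal sketch, not Mend.io’s implementation, of the discovery-plus-license-check loop: grep a codebase for Hugging Face model references, then pull each model’s license tag from the Hub so anything non-standard gets routed to a human.

```python
# Illustrative sketch only -- not Mend.io's implementation. It shows, in
# miniature: (1) naive "shadow AI" discovery by scanning source files for
# from_pretrained("org/model") calls, and (2) looking up each model's license
# tag on the Hugging Face Hub so unusual terms get flagged for review.
import re
from pathlib import Path

from huggingface_hub import model_info

MODEL_REF = re.compile(r'from_pretrained\(\s*["\']([\w.-]+/[\w.-]+)["\']')

def discover_models(repo_root: str) -> set[str]:
    """Collect Hugging Face repo IDs referenced in a codebase."""
    found: set[str] = set()
    for path in Path(repo_root).rglob("*.py"):
        found.update(MODEL_REF.findall(path.read_text(errors="ignore")))
    return found

def license_tag(repo_id: str) -> str:
    """Return the Hub's license tag for a model, or 'unknown' if absent."""
    info = model_info(repo_id)
    tags = [t.split(":", 1)[1] for t in (info.tags or []) if t.startswith("license:")]
    return tags[0] if tags else "unknown"

REVIEW = {"other", "unknown"}  # illustrative watchlist of tags needing legal review

for repo_id in sorted(discover_models(".")):
    lic = license_tag(repo_id)
    flag = "  <-- review license terms" if lic in REVIEW else ""
    print(f"{repo_id}: license={lic}{flag}")
```

In practice, discovery has to cover far more than from_pretrained calls (serialized weights, vendored checkpoints, API clients, agents, and RAG pipelines), which is precisely the gap a dedicated AI-BOM fills.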
Cursor didn’t intend to expose a hidden model ID to anyone who poked at their API. And they likely didn’t intend to violate Kimi K2.5’s license terms. But intent doesn’t matter when the violation is public, the license is clear, and your annual revenue is $2 billion.
As your organization accelerates AI adoption, whether you’re building AI-powered products or integrating open-weight models into internal tools, the question isn’t whether your AI components carry license and compliance obligations. They do. The question is whether you have the visibility and governance infrastructure to know about them before a developer, journalist, or regulator finds them for you.
That’s exactly what we built at Mend.io.
*** This is a Security Bloggers Network syndicated blog from Mend authored by Tiffany Jennings. Read the original post at: https://www.mend.io/blog/cursor-moonshot-kimi-ai-governance-lessons/