A high-stakes dispute between OpenAI and its Chinese rival DeepSeek has catapulted API security to priority number one in the AI community.
According to multiple reports, OpenAI and Microsoft have been investigating whether DeepSeek improperly used OpenAI’s API to train its own AI models. Bloomberg reported that Microsoft security researchers “detected that large amounts of data were being exfiltrated through OpenAI developer accounts in late 2024, which the company believes are affiliated with DeepSeek.”
The Financial Times also noted that OpenAI found evidence linking DeepSeek to distillation, a technique developers use to extract knowledge from larger AI models. This method allows companies to train AI models at a fraction of the cost incurred by OpenAI, which reportedly spent over $100 million on training GPT-4.
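To make the accusation concrete: in its simplest form, distillation means querying a large "teacher" model through its public API and training a smaller "student" model on the collected responses. The sketch below is a deliberately simplified illustration of that idea, not a description of what DeepSeek is alleged to have done; the prompts, model name, and output file are placeholders.

```python
# Minimal illustration of API-based distillation: collect teacher-model
# outputs to build a fine-tuning dataset for a smaller "student" model.
# Prompts, model name, and file name are illustrative placeholders.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompts = [
    "Explain the difference between TCP and UDP.",
    "Summarize the CAP theorem in two sentences.",
]

records = []
for prompt in prompts:
    # Query the larger "teacher" model through its public API.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
    )
    # Store prompt/answer pairs; a student model would later be
    # fine-tuned on thousands or millions of such pairs.
    records.append({
        "prompt": prompt,
        "completion": response.choices[0].message.content,
    })

with open("distillation_dataset.jsonl", "w") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")
```

At the scale required to train a competitive model, this kind of harvesting produces exactly the unusual traffic pattern that API providers would want to detect, which is where the security angle comes in.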
OpenAI stated, “We know PRC (China) based companies—and others—are constantly trying to distill the models of leading US AI companies.” The company has since blocked the accounts it believes were associated with DeepSeek.
David Sacks, the White House AI and crypto czar, weighed in on the controversy when speaking with Fox News on Tuesday, stating, “There’s substantial evidence that what DeepSeek did here is they distilled knowledge out of OpenAI models, and I don’t think OpenAI is very happy about this.” Fox News reported that OpenAI considers such activity a direct threat to its intellectual property and is now working closely with the U.S. government to counter similar threats in the future.
While OpenAI’s concern over API abuse is valid, the irony of the situation has not gone unnoticed. The company has itself faced multiple lawsuits over data scraping and the unauthorized use of copyrighted material. As the Financial Times pointed out, “OpenAI has been accused of ‘sucking down the entirety of the written web without consent,’ yet now finds itself a victim of unauthorized data extraction.”
The DeepSeek case underscores a broader issue that many organizations face: API abuse. API abuse occurs when adversaries exploit API endpoints to extract, manipulate, or misuse data. The attack can take many forms, from large-scale data scraping and credential stuffing to the exploitation of business logic flaws and bulk data exfiltration.
Many companies fail to monitor API activity effectively, allowing adversaries to extract sensitive data undetected, as DeepSeek allegedly did to OpenAI and, some would argue, as OpenAI itself did to the open web. The episode serves as a cautionary tale for organizations that rely on APIs without proper security measures in place.
To mitigate API abuse risks, businesses need advanced API security solutions like Wallarm. Wallarm’s comprehensive API security suite provides the following capabilities:
Wallarm employs behavior-based anomaly detection to identify and block malicious API activity in real-time. Whether an attacker is attempting large-scale data scraping or exploiting business logic flaws, Wallarm detects deviations from normal API usage patterns and mitigates threats automatically.
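As a rough illustration of behavior-based detection (a generic sketch of the technique, not Wallarm's implementation), the snippet below flags clients whose request volume deviates sharply from their own historical baseline; the thresholds and data shapes are assumptions.

```python
# Toy behavior-based anomaly detector: flag API clients whose request
# volume in the current window deviates sharply from their historical
# baseline. Thresholds and data shapes are illustrative assumptions.
from statistics import mean, pstdev

def find_anomalous_clients(history, current, z_threshold=3.0, min_requests=100):
    """history: {client_id: [requests_per_window, ...]} from past windows.
    current: {client_id: requests_this_window}."""
    anomalies = []
    for client_id, count in current.items():
        baseline = history.get(client_id, [])
        if len(baseline) < 5 or count < min_requests:
            continue  # not enough history, or too little traffic to matter
        mu, sigma = mean(baseline), pstdev(baseline)
        z = (count - mu) / sigma if sigma > 0 else float("inf")
        if z > z_threshold:
            anomalies.append((client_id, count, round(z, 1)))
    return anomalies

# Example: a developer account suddenly pulling far more data than usual.
history = {"acct_123": [40, 55, 47, 52, 44, 60]}
current = {"acct_123": 5_000}
print(find_anomalous_clients(history, current))
# [('acct_123', 5000, ...)] -> candidate for throttling or manual review
```

A production system would score many more signals (endpoints touched, payload sizes, timing, geography), but the principle is the same: model normal behavior per client and alert on deviations.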
Wallarm’s API Sessions feature is designed to track API requests across entire sessions, ensuring full visibility into user behavior. This is particularly useful for monitoring high-value business transactions, because it lets organizations correlate individual requests into a complete picture of what a client actually did, rather than judging each call in isolation.
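As a rough sketch of what session-level visibility means in practice (again, not Wallarm's implementation; the log field names are assumed), the snippet below groups raw request logs by session token so an entire interaction can be reviewed as one unit:

```python
# Group raw API request logs by session token so a whole interaction can
# be reviewed as a single unit. Field names are assumed for illustration.
from collections import defaultdict

def group_by_session(request_logs):
    sessions = defaultdict(list)
    for entry in request_logs:
        sessions[entry["session_id"]].append(entry)
    # Keep each session's requests in chronological order.
    for requests in sessions.values():
        requests.sort(key=lambda e: e["timestamp"])
    return sessions

logs = [
    {"session_id": "s1", "timestamp": 2, "endpoint": "/v1/export", "status": 200},
    {"session_id": "s1", "timestamp": 1, "endpoint": "/v1/login", "status": 200},
    {"session_id": "s2", "timestamp": 1, "endpoint": "/v1/login", "status": 401},
]

for sid, requests in group_by_session(logs).items():
    print(sid, [r["endpoint"] for r in requests])
```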
Wallarm leverages AI-driven threat intelligence to proactively defend against emerging API attack techniques. By continuously analyzing API traffic, it can detect patterns indicative of credential stuffing, unauthorized scraping, and automated API exploitation.
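To show the kind of pattern such analysis looks for, here is a minimal credential-stuffing heuristic: a single source generating many failed logins spread across many distinct accounts. It is a simplified sketch under assumed thresholds and log fields, not a vendor algorithm.

```python
# Toy credential-stuffing heuristic: flag source IPs that generate many
# failed logins spread across many distinct accounts in a short window.
# Thresholds and log fields are illustrative assumptions.
from collections import defaultdict

def detect_credential_stuffing(login_events, max_failures=20, min_accounts=10):
    failed_accounts = defaultdict(set)  # ip -> usernames with failed logins
    failure_counts = defaultdict(int)
    for event in login_events:
        if event["status"] == "failed":
            failed_accounts[event["ip"]].add(event["username"])
            failure_counts[event["ip"]] += 1
    return [
        ip for ip in failure_counts
        if failure_counts[ip] >= max_failures
        and len(failed_accounts[ip]) >= min_accounts
    ]

# Example: one IP trying a leaked password list against many accounts.
events = [
    {"ip": "203.0.113.7", "username": f"user{i}", "status": "failed"}
    for i in range(25)
]
print(detect_credential_stuffing(events))  # ['203.0.113.7']
```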
To prevent large-scale exfiltration attacks, such as those allegedly carried out by DeepSeek, Wallarm offers fine-grained rate limiting and access control mechanisms.
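For readers unfamiliar with how fine-grained rate limiting works, below is a minimal per-client token-bucket limiter in Python. It is a generic sketch of the technique, not Wallarm's product, and the limits shown are arbitrary examples.

```python
# Minimal per-client token-bucket rate limiter: each client accumulates
# "tokens" at a fixed rate and spends one per request; requests beyond
# the bucket's capacity are rejected. Limits here are arbitrary examples.
import time

class TokenBucket:
    def __init__(self, rate_per_sec, capacity):
        self.rate = rate_per_sec
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

# One bucket per API key: 5 requests/second, bursts of up to 10.
buckets = {}

def is_allowed(api_key):
    bucket = buckets.setdefault(api_key, TokenBucket(rate_per_sec=5, capacity=10))
    return bucket.allow()

# A client hammering the API quickly exhausts its burst allowance.
results = [is_allowed("acct_123") for _ in range(15)]
print(results.count(True), "allowed,", results.count(False), "rejected")
```

In practice, limits like these are applied per API key, per endpoint, and per data volume, so that no single credential can quietly pull an outsized share of a provider's data.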
The OpenAI-DeepSeek controversy is a stark reminder that API security is no longer optional—it’s a necessity. With API-driven ecosystems expanding rapidly, businesses must adopt robust security measures to protect their intellectual property, user data, and critical business functions.
Wallarm’s API security solutions, including API Abuse Prevention and API Sessions, provide organizations with the tools they need to stay ahead of evolving API threats. As companies increasingly rely on APIs for mission-critical operations, investing in comprehensive API security is the only way to prevent future cases of unauthorized data extraction and API abuse.
To learn more about how Wallarm can help secure your APIs, visit Wallarm’s official site.