Microsoft Targets DeepSeek Over Potentially Unlawful AI Model Training
The artificial intelligence sector is witnessing a legal standoff as Microsoft, a major investor in OpenAI, turns its attention to the Chinese company DeepSeek. Reports indicate that Microsoft is probing whether DeepSeek used questionable practices to develop its R1 reasoning model.
According to Bloomberg Law, Microsoft suspects that DeepSeek may have violated its terms of service by using its application programming interface (API) while training the R1 model. The accusation comes amid increasing scrutiny of AI development practices, especially regarding intellectual property.
Adding to the controversy, White House AI advisor David Sacks recently suggested in a Fox News interview that there was a possibility DeepSeek may have “stolen intellectual property from the United States.” He claimed, “There’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s models.”
DeepSeek has gained considerable attention within the AI industry for its innovative approach, reportedly training AI models efficiently and at low cost—approximately $5.6 million over the course of a year. This has led to speculation that the company's cost-effectiveness stems from building on existing models, potentially using them as a foundation for its own work.
One process in question is known as distillation, a teacher-student technique in which a smaller "student" model is trained to reproduce the outputs of a larger "teacher" model rather than learning from raw data alone. This method could explain DeepSeek's operational efficiency while using less powerful Nvidia H800 chips. Following these claims, DeepSeek may need to provide transparency and evidence demonstrating that its models were developed in compliance with legal standards.
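To make the teacher-student dynamic concrete, here is a minimal, illustrative sketch of the core of knowledge distillation: the student is penalized for diverging from the teacher's temperature-softened output distribution. This is a generic textbook formulation, not a description of DeepSeek's or OpenAI's actual systems; all names and numbers below are hypothetical.

```python
import math

def softmax(logits, temperature=1.0):
    # Convert raw scores to probabilities; a higher temperature
    # flattens the distribution, exposing the teacher's "soft" preferences.
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence between the softened teacher and student distributions:
    # the training signal a student model would minimize during distillation.
    t = softmax(teacher_logits, temperature)
    s = softmax(student_logits, temperature)
    return sum(p * math.log(p / q) for p, q in zip(t, s))

teacher = [4.0, 1.0, 0.2]    # teacher strongly prefers answer 0
aligned = [3.8, 1.1, 0.3]    # student roughly agrees -> small loss
diverged = [0.2, 1.0, 4.0]   # student prefers answer 2 -> large loss

print(distillation_loss(teacher, aligned))   # small
print(distillation_loss(teacher, diverged))  # large
```

In practice this loss would be computed over a model's full vocabulary and combined with a standard training objective; the sketch only shows why querying a teacher model's outputs at scale can substitute for expensive training from scratch.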
Prior to these recent allegations, industry specialists speculated that DeepSeek might have employed reverse engineering techniques to improve its AI capabilities. This practice, which involves analyzing models to better understand their patterns and biases, is a common and legal approach within open-source development.
Moreover, security researchers associated with Microsoft have suggested that DeepSeek may have exfiltrated a significant amount of data through OpenAI's API during the fall of 2024, a potential breach that Microsoft had already flagged to OpenAI at the time. The unveiling of DeepSeek's R1 model last week has intensified the spotlight on the company and the surrounding controversies.
DeepSeek also promotes itself as an open-source AI platform that invites user development, which has contributed to its growing popularity. By contrast, while OpenAI provides API access, it is not open source, and its terms of service explicitly prohibit using its outputs to train other AI models, as noted by TechCrunch.
An OpenAI spokesperson commented on the situation, stating that attempts by international firms to replicate well-established models are increasingly commonplace. The spokesperson affirmed that the company takes measures to protect its intellectual property, including working with its partner Microsoft, and emphasized the importance of collaboration with the U.S. government in safeguarding advanced models from adversarial attempts.