OpenAI Has Little Legal Recourse Against DeepSeek, Tech Law Experts Say
OpenAI and the White House have accused DeepSeek of using ChatGPT to cheaply train its new chatbot.
- Experts in tech law say OpenAI has little recourse under copyright and contract law.
- OpenAI's terms of use may apply but are largely unenforceable, they say.
This week, OpenAI and the White House accused DeepSeek of something akin to theft.
In a flurry of press statements, they said the Chinese upstart had bombarded OpenAI's chatbots with queries and hoovered up the resulting data trove to quickly and cheaply train a model that's now nearly as good.
The Trump administration's top AI czar said this training process, called "distilling," amounted to copyright theft. OpenAI, for its part, told Business Insider and other outlets that it's investigating whether DeepSeek "may have inappropriately distilled our models."
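In broad strokes, distillation means using a stronger "teacher" model's answers as training data for a smaller "student" model. The sketch below is a minimal, hypothetical illustration of the data-collection step using OpenAI's Python client; the prompts, file name, and model choice are our own assumptions for illustration, not a description of what DeepSeek actually did.

```python
# Minimal, hypothetical sketch of the data-collection half of distillation:
# query a "teacher" model via its public API and save prompt/answer pairs
# that could later serve as supervised fine-tuning data for a "student" model.
import json

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Illustrative prompts only; a real distillation run would use far more.
prompts = [
    "Explain the fair-use doctrine in one paragraph.",
    "Summarize the Digital Millennium Copyright Act.",
]

with open("distill_data.jsonl", "w", encoding="utf-8") as f:
    for prompt in prompts:
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # the "teacher" model being queried
            messages=[{"role": "user", "content": prompt}],
        )
        answer = response.choices[0].message.content
        # One JSON line per example, ready for a student model's training pipeline.
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```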
OpenAI won't say whether it plans to pursue legal action, instead promising what a spokesperson described as "aggressive, proactive countermeasures to protect our technology."
But could it? Could it sue DeepSeek on "you stole our content" grounds, much like the grounds on which OpenAI itself was sued in an ongoing copyright case filed in 2023 by The New York Times and other news outlets?
BI put this question to experts in technology law, who said challenging DeepSeek in the courts would be an uphill battle for OpenAI now that the content-appropriation shoe is on the other foot.
OpenAI would have a hard time proving a copyright or intellectual-property claim, these lawyers said.
"The concern is whether ChatGPT outputs" - indicating the answers it produces in reaction to queries - "are copyrightable at all," Mason Kortz of Harvard Law School stated.
That's because it's uncertain whether the answers ChatGPT spits out certify as "imagination," he stated.
"There's a teaching that says imaginative expression is copyrightable, but realities and ideas are not," Kortz, who teaches at Harvard's Cyberlaw Clinic, stated.
"There's a big concern in intellectual home law right now about whether the outputs of a generative AI can ever make up creative expression or if they are always unguarded truths," he included.
Could OpenAI roll those dice anyway and claim that its outputs are protected?
That's unlikely, the lawyers said.
OpenAI is already on the record in The New York Times' copyright case arguing that training AI is a permissible "fair use" exception to copyright protection.
If they do a 180 and tell DeepSeek that training is not a fair use, "that might come back to kind of bite them," Kortz said. "DeepSeek could say, 'Hey, weren't you just saying that training is fair use?'"
There may be a difference between the Times and DeepSeek cases, Kortz added.
"Maybe it's more transformative to turn news articles into a model" - as the Times accuses OpenAI of doing - "than it is to turn outputs of a model into another model," as DeepSeek is said to have done, Kortz said.
"But this still puts OpenAI in a pretty predicament with regard to the line it's been toeing regarding fair use," he added.
A breach-of-contract suit is more likely
A breach-of-contract lawsuit is much likelier than an IP-based suit, though it comes with its own set of problems, said Anupam Chander, who teaches technology law at Georgetown University.
The terms of service for Big Tech chatbots like those developed by OpenAI and Anthropic forbid using their content as training fodder for a competing AI model.
"So possibly that's the suit you may potentially bring - a contract-based claim, not an IP-based claim," Chander said.
"Not, 'You copied something from me,' but that you took advantage of my design to do something that you were not permitted to do under our contract."
There may be a hitch, Chander and Kortz said. OpenAI's terms of service require that most claims be resolved through arbitration, not lawsuits. There's an exception for lawsuits "to stop unauthorized use or abuse of the Services or intellectual property infringement or misappropriation."
There's a bigger hitch, though, the experts said.
"You ought to know that the fantastic scholar Mark Lemley and a coauthor argue that AI terms of use are most likely unenforceable," Chander said. He was referring to a January 10 paper, "The Mirage of Expert System Terms of Use Restrictions," by Stanford Law's Mark A. Lemley and Peter Henderson of Princeton University's Center for trademarketclassifieds.com Information Technology Policy.
To date, "no model developer has actually tried to impose these terms with financial charges or injunctive relief," the paper states.
"This is most likely for excellent factor: we think that the legal enforceability of these licenses is doubtful," it adds. That's in part due to the fact that model outputs "are largely not copyrightable" and because laws like the Digital Millennium Copyright Act and the Computer Fraud and Abuse Act "offer minimal option," it says.
"I believe they are most likely unenforceable," Lemley told BI of OpenAI's regards to service, "since DeepSeek didn't take anything copyrighted by OpenAI and due to the fact that courts usually will not impose agreements not to complete in the absence of an IP right that would avoid that competitors."
Lawsuits between parties in different countries, each with its own legal and enforcement systems, are always tricky, Kortz said.
Even if OpenAI cleared all the above hurdles and won a judgment from a US court or arbitrator, "in order to get DeepSeek to turn over money or stop doing what it's doing, the enforcement would come down to the Chinese legal system," he said.
Here, OpenAI would be at the mercy of another area of law - the enforcement of foreign judgments and the balancing of individual and corporate rights and national sovereignty - that goes back to before the founding of the US.
"So this is, a long, complicated, laden process," Kortz included.
Could OpenAI have protected itself better from a distilling attack?
"They might have used technical measures to obstruct repeated access to their website," Lemley said. "But doing so would likewise hinder normal consumers."
He added: "I don't think they could, or should, have a valid legal claim against the scraping of uncopyrightable information from a public site."
Representatives for DeepSeek did not immediately respond to a request for comment.
"We understand that groups in the PRC are actively working to use methods, including what's called distillation, to try to duplicate sophisticated U.S. AI models," Rhianna Donaldson, an OpenAI representative, informed BI in an emailed declaration.