Judge rules Anthropic can legally train AI on copyrighted material

One of the big gray areas of generative AI is whether training AI models on copyrighted material without the permission of rights holders constitutes copyright infringement. This question led a group of authors to sue Anthropic, the company behind the AI chatbot Claude. Now a US federal judge has ruled that AI training is covered by the so-called "fair use" doctrine and is therefore legal, according to reports.
Under US law, fair use allows copyrighted material to be used if the result is considered "transformative". In other words, the resulting work must be something new rather than entirely derived from, or a substitute for, the original work. This is one of the first court rulings of its kind, and the decision may set a precedent for future cases.
However, the ruling also notes that the plaintiff authors still have the option of suing Anthropic over piracy. The ruling states that the company illegally downloaded (pirated) more than 7 million books without paying for them, and kept them in its internal library even after deciding they would not be used to train its AI models in the future.
The judge wrote: “The authors argue that Anthropic should have paid for these pirated library copies. This order agrees.”
This article originally appeared on our sister publication PC för Alla and was translated and localized from Swedish.