cross-posted from: https://programming.dev/post/8121669
Japan determines copyright doesn’t apply to LLM/ML training data.
On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.
I am going to live in the sea.
www.biia.com/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/
Um - your examples are so old the copyright expired centuries ago. Of course you can copy them. And you can absolutely use an image of the Mona Lisa without attribution or licensing.
Painting and selling an exact copy of a recent work, such as a Banksy, is a crime.
… however, making an exact copy of a Banksy for personal use, or to learn, or to teach other people, or copying the style… that’s all perfectly legal.
I don’t think this is a black and white issue. Using AI to copy something might be a crime. You absolutely can use it to infringe on copyright. The real question is who’s at fault? I would argue the person who asked the AI to create the copy is at fault - not the company running the servers.
And that was the bait and switch of OpenAI! They sold themselves as being a non-profit simply doing research, for which it would be perfectly legal to consume and reproduce large quantities of data… And then, once they had the data, they started selling access to it.
I would say that, between that and the fact that they function as gatekeepers to the technology (one does not simply purchase the model from OpenAI, after all), they are hardly free of culpability… But it definitely depends on the person trying to use their black box too.
Huh? What does being non-profit have to do with it? Private companies are allowed to learn from copyrighted work. Microsoft and Apple, for example, look at each other’s software and copy ideas (not code, just ideas) all the time. The fact that Linux is non-profit doesn’t give it any additional rights or protection.
Thanks for your response. I realize I muddied the waters on my question by mentioning exact copies.
My real question is based on the ‘everything is a remix’ idea. I can create a work ‘in the style of Banksy’ and sell it. The US copyright and trademark laws state that a work only has to be 10% differentiated from the original in order to be legal to use, so creating a piece of work that ‘looks like it could have been created by Banksy, but was not created by Banksy’ is legal.
So since most AI does not create exact copies, this is where I find the licensing argument possibly weak. I really haven’t seen AI like Midjourney creating exact replicas of works - but admittedly, I am not following every single piece of art created on Midjourney, Stable Diffusion, DALL-E, or any of the other platforms, and I’m not an expert in trademark law to the extent that I can answer these questions.
Always happy to discuss copyright. :-) Our IP laws are long overdue for an overhaul in my opinion. And the only way to make that happen is for as many people as possible to discuss the issues. I plan to spend the rest of my life creating copyrighted work, and I really hope I don’t spend all of it under the current rules…
The law doesn’t say that. The Blurred Lines copyright case, for example, involved far less than 10% - probably less than 1% - and it was still unclear whether it was infringement or not. It took five years of lawsuits to reach a contested conclusion: the first court found it to be infringing, and an appeals panel of judges later upheld that verdict in a split decision, with one judge dissenting.
Copyright is incredibly complex and unclear. It’s generally best to just not get into a copyright lawsuit in the first place. Usually, when someone accuses you of copyright infringement, you try to pay them some amount of money (in the Blurred Lines case, there were discussions of 50% of the artist’s income from the song) to make them go away, even if your lawyers tell you you’d probably win in court.
To be at fault, the user would have to know that the AI creation they distributed infringes copyright. How can you tell? Is everyone supposed to do months of research to be vaguely sure it isn’t too close to someone else’s work?
Even if you had an AI trained only on public domain assets, you could still end up entering a prompt that produces something that infringes an existing copyrighted work.
Companies created a random copyright infringement tool for users to randomly infringe copyright.
Your example is a dude who paints unsolicited on other people’s property. What kind of copyright does a ghost have?