cross-posted from: https://programming.dev/post/8121669
Japan determines copyright doesn’t apply to LLM/ML training data.
On a global scale, Japan’s move adds a twist to the regulation debate. Current discussions have focused on a “rogue nation” scenario where a less developed country might disregard a global framework to gain an advantage. But with Japan, we see a different dynamic. The world’s third-largest economy is saying it won’t hinder AI research and development. Plus, it’s prepared to leverage this new technology to compete directly with the West.
I am going to live in the sea.
www.biia.com/japan-goes-all-in-copyright-doesnt-apply-to-ai-training/
Japan tech economy go brrrrr
Or it leads the way in producing the most useless, misleading bullshit more efficiently. We’ll see.
I didn’t say it’d produce good things other than economy points.
I like to exchange my economy points for things
Things, or good things 😉
Good for whom?
ECONOMY GO BRRRRR
Is there a c/WTFJapan? Asking to keep track of future developments
Maybe that would finally get them to stop using fax machines.
Not sure this is the flex you think it is. The US health industry uses fax to send client health information millions of times a day, and it’s still considered a secure form of communication.
I think this is a difficult concept to tackle, but the main argument I see about using existing works as ‘training data’ is the idea that ‘everything is a remix’.
I, as a human, can paint an exact copy of a work by Picasso or any other artist. This is not illegal, and I have no need of a license to do it. I definitely don’t need a license to paint something ‘in the style of Picasso’, and I can definitely sell it with my own name on it.
But the question is, what about when a computer does the same thing? What is the difference? Speed? Scale? Anyone can view a picture of the Mona Lisa at any time and make their own painting of it. You can’t use the image of the Mona Lisa without attribution and licensing, but what about a recreation of the Mona Lisa?
I’m not really arguing pro-AI here, although it may sound like it. I’ve just heard the ‘licensing’ argument many times, and I’d really like to hear what the difference between a human copying and a computer copying is, if someone knows more about the law.
Um - your examples are so old the copyright expired centuries ago. Of course you can copy them. And you can absolutely use an image of the Mona Lisa without attribution or licensing.
Painting and selling an exact copy of a recent work, such as Banksy, is a crime.
… however making an exact copy of Banksy for personal use, or to learn, or to teach other people, or copying the style… that’s all perfectly legal.
I don’t think this is a black and white issue. Using AI to copy something might be a crime. You absolutely can use it to infringe on copyright. The real question is who’s at fault? I would argue the person who asked the AI to create the copy is at fault - not the company running the servers.
Painting and selling an exact copy of a recent work, such as Banksy, is a crime.
… however making an exact copy of Banksy for personal use, or to learn, or to teach other people, or copying the style… that’s all perfectly legal.
And that was the bait and switch of OpenAI! They sold themselves as being a non-profit simply doing research, for which it would be perfectly legal to consume and reproduce large quantities of data… And then, once they had the data, they started selling access to it.
I would say that alone, along with the fact that they function as gatekeepers to the technology (one does not simply purchase the model from OpenAI, after all), means they are hardly free of culpability… But it definitely depends on the person trying to use their black box, too.
Huh? What does being non-profit have to do with it? Private companies are allowed to learn from copyrighted work. Microsoft and Apple, for example, look at each other’s software and copy ideas (not code, just ideas) all the time. The fact that Linux is non-profit doesn’t give it any additional rights or protections.
Thanks for your response. I realize I muddied the waters on my question by mentioning exact copies.
My real question is based on the ‘everything is a remix’ idea. I can create a work ‘in the style of Banksy’ and sell it. The US copyright and trademark laws state that a work only has to be 10% differentiated from the original in order to be legal to use, so creating a piece of work that ‘looks like it could have been created by Banksy, but was not created by Banksy’ is legal.
So since most AI does not create exact copies, this is where I find the licensing argument possibly weak. I really haven’t seen AI like Midjourney creating exact replicas of works - but admittedly, I am not following every single piece of art created on Midjourney, or Stable Diffusion, or DALL-E, or any of the other platforms, and I’m not an expert in trademark law to the extent that I can answer these questions.
Thanks for your response
Always happy to discuss copyright. :-) Our IP laws are long overdue for an overhaul in my opinion. And the only way to make that happen is for as many people as possible to discuss the issues. I plan to spend the rest of my life creating copyrighted work, and I really hope I don’t spend all of it under the current rules…
The US copyright and trademark laws state that a work only has to be 10% differentiated from the original in order to be legal to use
The law doesn’t say that. The Blurred Lines copyright case, for example, involved far less than 10%, probably less than 1%, and it was still unclear whether it was infringement or not. It took five years of lawsuits to reach a murky conclusion: the first court found it to be infringing, and an appeals panel of judges later reached a split decision upholding that finding, with one judge strongly dissenting.
Copyright is incredibly complex and unclear. It’s generally best to just not get into a copyright lawsuit in the first place. Usually when someone accuses you of copyright infringement, you try to pay them whatever amount of money it takes (in the Blurred Lines case, there were discussions of 50% of the artist’s income from the song) to make them go away, even if your lawyers tell you you’d probably win in court.
To be at fault, the user would have to know that the AI creation they distributed infringes copyright. How can you tell? Is everyone supposed to do months of research just to be vaguely sure it doesn’t resemble someone else’s work?
Even if you had an AI trained only on public-domain assets, you could still end up putting in a prompt that generates something matching a copyrighted work.
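For what it’s worth, here’s the kind of naive check that’s even available to an ordinary user: a rough sketch in plain Python (made-up strings, purely illustrative) of n-gram overlap against a known work. It only catches near-verbatim copying, and only against works you already have a copy of, which is rather the point.

```python
# Purely illustrative: a naive n-gram overlap check between a generated text
# and a known work. It can flag near-verbatim copying, but it says nothing
# about style, composition, or the millions of works you don't have on hand.

def ngrams(text: str, n: int = 5) -> set:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_ratio(generated: str, known_work: str, n: int = 5) -> float:
    """Fraction of the generated text's n-grams that also appear in the known work."""
    gen = ngrams(generated, n)
    if not gen:
        return 0.0
    return len(gen & ngrams(known_work, n)) / len(gen)

original = "the quick brown fox jumps over the lazy dog near the quiet riverbank at dawn"
output_a = "the quick brown fox jumps over the lazy dog near the quiet riverbank at dusk"
output_b = "a slow red fox hops across a sleepy hound by the river in the morning"

print(overlap_ratio(output_a, original))  # ~0.9: near-verbatim copy
print(overlap_ratio(output_b, original))  # 0.0: same idea, different words
```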
Companies created a random copyright infringement tool for users to randomly infringe copyright.
Your example is a dude who paints unsolicited on other people’s property. What kind of copyright does a ghost have?
Nice, time to train one with all the Nintendo leaks and generate some Zelda art and a new Mario title!
What’s stopping somebody from making an LLM that can reproduce media that was used in its training with close to 100% accuracy? If that happens, then we’ll have a copyright laundering service.
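To make the question concrete: deliberate memorization isn’t hard, it’s just overfitting on purpose. Here’s a minimal sketch (assuming PyTorch, with a single sentence standing in for a corpus, purely illustrative) of a tiny model trained far past the point of generalization until it spits its training text back out near-verbatim. Scaled up, that’s the laundering scenario.

```python
# Purely illustrative: a tiny character-level model deliberately overfit on a
# single sentence until it can regurgitate it verbatim from a one-character prompt.
import torch
import torch.nn as nn

text = "All human beings are born free and equal in dignity and rights."
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}
itos = {i: c for c, i in stoi.items()}
data = torch.tensor([stoi[c] for c in text])

class TinyLM(nn.Module):
    def __init__(self, vocab: int, hidden: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab)

    def forward(self, x, h=None):
        out, h = self.rnn(self.embed(x), h)
        return self.head(out), h

model = TinyLM(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
x, y = data[:-1].unsqueeze(0), data[1:].unsqueeze(0)

# Train far past the point of generalization: the goal here *is* memorization.
for _ in range(500):
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(logits.squeeze(0), y.squeeze(0))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Prompt with the first character and let the model "write" the rest back out.
out_chars = [data[0].item()]
inp, h = data[:1].unsqueeze(0), None
for _ in range(len(text) - 1):
    logits, h = model(inp, h)
    nxt = logits[0, -1].argmax().item()
    out_chars.append(nxt)
    inp = torch.tensor([[nxt]])

print("".join(itos[i] for i in out_chars))  # typically the training sentence, verbatim
```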
Reproducing copyrighted works would be a problem. Consuming them is not.
In your example, a copyright case would be able to move forward and be tested in court, and I’d think it stands a good shot at prevailing. It would be the same as a case against someone who wrote a script for a website to reproduce copyrighted work on command. The difference is that this isn’t that. And if and when it does do that, the AI can be tuned to prevent it from continuing to do it.