Something about this feels oddly like a captcha.
Is the puppy mechanical in any way?
Hmm. I think you are missing some important information here.
I’m sure you know how it goes for most people who create property: e.g., factory workers make some product, are paid for it, but do not own the product. The same is true for people who create intellectual property. They get paid for their work, but the employer owns the property. You only own what you make in your own time, unless or until you sell it.
You’re talking about paying property owners for providing no services at all.
I just received this 6-day-old post as new. I guess that’s due to the issues with federation.
I’m not really sure what you are going for here. Are you saying that Americans need to work more hours to make up for the slack of Europeans?
You did ask if ChatGPT had ever cited sources. Bing uses it, and besides, you can ask for that manually.
Whether it defeats the purpose depends on your original purpose.
It’s been only 13 years since the last conscripts were called up. Crazy. I really thought it was over. It’s probably not going to be brought back immediately, but the way things are heading…
That’s probably true in some contexts. But how many, e.g., Raspberry Pi enthusiasts know the name of the senior engineer, let alone their relationship status?
I’m sure you can admire someone’s music or writing without caring one bit about their personal life. But I don’t think you could say the same about an actor. What’s more important for your life: movies or smartphones? So why do we know the names of so many actors but not scientists or engineers?
Yes. The Commission tried to get manufacturers to adopt this voluntarily for years. They almost all did. Almost. Basically, this needs to be binding legislation just for Apple.
I guess it’s answered. On some level, our brain decides that some perfect strangers are friends or family. How else would one explain that we follow gossip about the lives and relationships of people that we, almost certainly, will never meet?
You have a point. But one could equally well predict that influencers - or celebrities in general - lose their appeal once people understand that they are not really their friends. The neurotypical mind simply seems not to be wired that way.
Do machines have a right to fair use?
Machines do not have rights or obligations. They cannot be held liable to pay damages or be sentenced for crimes. They cannot commit copyright infringement. But I don’t think we’ll see “the machine did it” as a defense in court.
Are works that AI generates truly transformative?
Usually they are original and not transformative.
Transformative implies that there is some infringement going on. Say, you make a cartoon with the recent Mickey Mouse. But instead of making the same kind of cartoon as Disney would, you use MM to criticize the policies of the Disney corporation (like South Park did). That transforms the work.
Sometimes AI spits out verbatim copies of training data. That, arguably, is transformative: a couple of pages of Harry Potter turn into a technical malfunction.
I hope you’ll answer a question in return:
Software created by a for-profit privately held company is inherently created to consume data with the explicit purpose of generating monetary value. If that is the specific intent and design then all contributors should be compensated.
Why? What’s the ethical/moral justification for this?
I know how anarcho-capitalists, so-called libertarians, and other such ideologies see it, but perhaps you have a different take. These groups are also not necessarily on board with the whole intellectual property concept. So that’s what I am curious about. Full disclosure: I am absolutely not on board with that kind of thinking and am unlikely to be convinced. But I am genuinely interested in learning more.
Let me ask you this: when have you ever seen ChatGPT cite its sources and give appropriate credit to the original author?
Bing Chat now does that by default. Otherwise, you have to prompt for it manually.
If I were to just read the NYT and make money by simply summarizing articles and posting those summaries on my own website without adding anything to it like my own commentary and without giving credit to the author, that would rightfully be considered plagiarism.
No. It would be considered journalism. If you read the news at all, you will find that outlets reference the output of other news corporations quite a bit. If your preferred news source does not do that, then it simply doesn’t cite its sources.
Perhaps the reason hallucination is such a problem for LLMs is that, in the social media data that makes up a large chunk of their training, everyone is so full of shit?
Heh. I think it simply shows us that the fundamental principle of artificial neural nets really captures how the brain works.
With further refinement, DeWave could help stroke and paralysis patients communicate and make it easier for people to direct machines like bionic arms or robots.
The article doesn’t even hint at any use in a justice system. There’s nothing to suggest that this could even in principle be used as a lie detector.
The complaint looks like a serious stretch. The one bit where I have some sympathy is about false attributions to the NYT. IMHO, MS and OAI should emphasize the limitations of the chat AIs far more. It can’t be taken as a given that everyone knows that they make stuff up.
Direct link to the complaint:
https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec2023.pdf
Look around the world. In poor countries, productivity is low. There are not many machines. People do a lot of manual labor. Rich countries have lots of automation.
If you want to live in a country with less automation, moving is an option. Migrating from a rich to a poor country is much easier than vice versa. But if that looks unappealing, then taxing automation should also be unappealing.
Working less isn’t horrible. The OECD estimates that an average employee in the USA works 1,811 hours per year. In Germany, it is only 1,341; that’s 470 fewer hours, or nearly twelve 40-hour weeks. You can always volunteer in a non-profit if you feel you don’t have enough to do. There’s nothing to be afraid of. I don’t even know why or on what Americans work so much. It feels like they spend half the office day on social media, complaining that they can’t afford things.
What should we conclude about most humans who cannot solve these crosswords?
It should be relatively easy to train an LLM to solve these puzzles. I am not sure what that would show.
Good. Now do you understand how you have misrepresented the paper?
Can you please explain the reasoning behind the test?
Crazy. Looks like the world is full of people who believe that the fact that a human claims something is enough reason to believe it. That explains a lot, once you think about it, except why people would be so gullible.
I hope it’s too obviously a terrible idea to get far, but I fear it might get pushed quite a bit. I can see the copyright lobby getting behind this in a big way, as they are all about controlling the spread of information.