• 1 Post
  • 424 Comments
Joined 1 year ago
Cake day: June 16th, 2023



  • It’s not useless, it’s actively harmful.

    Useless would be not providing funding for public health initiatives around contraception and abortions.

    But actively preventing adults from making life changing medical decisions for themselves is worse than useless, it’s harmful.

    Conservatives have been so committed to “the scariest words are the government saying I’m here to help” that they now aggressively make sure the government hurts people.

    The Republican party needs to go the way of the Whigs.



  • kromem@lemmy.world to Programmer Humor@lemmy.ml · Oops, wrong person.
    38 points · edited · 11 months ago

    I don’t think the code itself is doing anything; it looks like it might be the brackets.

    Effectively, the spam script seems to have a greedy template matcher that tries to process the brackets in the user message as template placeholders, and either (a) chokes on an exception, so the rest is spit out with no templating applied, or (b) consumes the match, so templating never gets applied to the other side of the conversation.

    So { a :'b'} might work instead.
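    A minimal sketch of failure mode (a), assuming the spam script uses a Python str.format()-style templating pass — this is purely hypothetical, not the actual spam code:

```python
# Hypothetical greedy templating pass, as described above. Braces in
# user text get parsed as placeholders; code-like braces raise, and
# the script falls back to the raw, untemplated message.

template_vars = {"name": "friend"}

def naive_template(message: str) -> str:
    """Greedily treat every {...} in the message as a placeholder."""
    try:
        return message.format(**template_vars)
    except (KeyError, IndexError, ValueError):
        # Case (a): the templater chokes on the brackets, so the
        # rest of the text is emitted with no templating applied.
        return message

print(naive_template("hi {name}"))            # placeholder is filled
print(naive_template("try {'a': 'b'} here"))  # braces break templating
```

    Under this model, any brace content that doesn't parse as a valid placeholder short-circuits the whole templating step, which would explain why mangling the brackets changes the script's output.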


  • There is an inherent opportunity cost, measured in millions of lives, if the developing tech is artificially held back.

    People are overly focused on things like AI art or copywriting and aren’t as in touch with the compounding effects taking place in fields like medicine or the sciences.

    So in that sense, Accelerationism as opposition to the increasingly fear-mongering Effective Altruism perspectives on AI is valuable.

    But they really should figure out that the thing that’s going to accelerate the future the most is going to be individualized access to capital, not centralized consolidation of it.

    If each individual can effectively be 100x as productive in a variety of subspecialties aided by AI, it’s not a skeleton crew of humans leftover at a mega-corp maximizing their quarter revenues while maintaining the status quo that’s going to be delivering the future, but rather the next generation of people in their garages working on what’s going to replace the status quo.

    The fewer people who have garages in the first place, or medical coverage to pursue those aspirations without endangering themselves and family, or food on the table while getting momentum going - well then the fewer people genuinely working on delivering the future rather than extending the present as long as possible, future be damned.

    So the core idea is a good one: technological advancement has probably had the greatest impact on human net happiness, and it’s why at least a quarter of the people in the world today, myself included, are alive at all compared with survival rates two centuries ago.

    But the way to actually achieve that outcome at the fastest rate is the opposite of what’s the economic policy of the majority of people promoting it.


  • Which would actually accelerate progress more than any other national allocation of funds.

    Ever since the industrial revolution, the driver of progress has been mass purchasing and mass production.

    If only billionaires could afford an iPhone, we would only be on the 3rd or 4th revision by now.

    It was the subsidizing of cell phone hardware that accelerated that market: carriers effectively covered hundreds of dollars of the cost, so nearly everyone was buying a new phone every two years.

    If people want acceleration of the future, inject as much money as possible to main street - it’s the closest equivalent to throwing fuel on the fire of industrial capitalism.

    Hoarding it among executives or money traders is a fickle and temporary accelerant which is much slower than the alternative over sustained periods.


  • Why do you think labor is demand-capped and not supply-capped?

    In the short term we’re going to see first movers downsize as they scale up artificial labor to maintain status quo production.

    But those first movers are going to have effectively dug their own grave when other companies instead keep head counts high but scale up production with the additional support of artificial labor.

    So you’ll have one company offering their same slate of offerings with the same marketing at 1/10th the labor costs, pocketing the difference. But then their smarter competition will have 10x the variety in offerings with 10x more targeted or niche marketing efforts at the same labor costs.

    The companies that prioritize their quarter over their 5 year performance are going to die out.

    The greater job loss isn’t going to be driven by automation but by outsourcing, which is going to be easier than ever as AI-mediated translation improves to the point of seamlessness. Whatever job a human working from home in the US can do, someone elsewhere can do a lot cheaper, even when it requires reading and writing a lot of English.

    The threat is realistically less “AI can do your job” and more “another human aided by AI will take your job.”

    If the US government were smart, they’d be investing in nationalized AI as a public utility similar to the USPS and passing laws restricting outsourcing labor or at least taxing/tariffing the labor itself significantly, using the proceeds from both ends of this pincer approach to fund social services or basic income.

    Because you’re right that draining main street is going to be bad news for progress. But it’s not that AI is going to do this inherently. It’s a very specific aspect that’s going to do this in most cases, with demand for human labor remaining high as production scales up and out.


  • kromem@lemmy.world to Memes@lemmy.ml · Well actually.
    30 points · 11 months ago

    One of the most interesting parts of that gap is the one-day battle he fights against Egypt right after Troy falls, where he’s held captive until, seven years later, someone shows up who tries to ransom him to Libya.

    Why?

    Because an event like this really happened.

    In the 5th year of Merneptah there’s a single-day battle between Egypt and the allied forces of Libyans and sea peoples, at least one of which (the Ekwesh) is commonly thought to have been pre-Greek, and Egypt is successful, capturing many prisoners of war.

    Exactly seven years later, Egypt is overthrown by a usurper Pharaoh, with the following dynasty writing that it had been overthrown with the help of outside forces.

    While there was likely no ‘Odysseus’ involved, it’s a useful reminder that sometimes there’s historical accuracy buried inside mythology.



  • Ah, well if you want, the Columbia Journalism Review has a good summary of the developments and links to various other opinions, particularly in the following paragraph:

    According to a recent analysis by Alex Reisner in The Atlantic, the fair-use argument for AI generally rests on two claims: that generative-AI tools do not replicate the books they’ve been trained on but instead produce new works, and that those new works “do not hurt the commercial market for the originals.” Jason Schultz, the director of the Technology Law and Policy Clinic at New York University, told Reisner that there is a strong argument that OpenAI’s work meets both of these criteria. Elsewhere, Sy Damle, a former general counsel at the US Copyright Office, told a House subcommittee earlier this year that he believes the use of copyrighted work for AI training is categorically fair (though another former counsel from the same agency disagreed). And Mike Masnick of Techdirt has argued that the legality of the original material is irrelevant. If a musician were inspired to create new music after hearing pirated songs, Masnick asks, would that mean that the new songs infringe copyright?

    (Most of those opinions are linked)


  • “Kit worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country. Kit holds a J.D. from Harvard Law School”

    The EFF is primarily a legal group and the post straight up mentions that it is a legal opinion on the topic.

    So I’m not really clear what “the legal side of things” is that you mean separate from what a lawyer who has litigated IP cases before and works focused on the intersection of law and tech says about a pending case in a legal opinion.

    Do you just mean a different opinion from different lawyers?


  • Here’s the author’s bio:

    Kit is a senior staff attorney at EFF, working on free speech, net neutrality, copyright, coders’ rights, and other issues that relate to freedom of expression and access to knowledge. She has worked for years to support the rights of political protesters, journalists, remix artists, and technologists to agitate for social change and to express themselves through their stories and ideas. Prior to joining EFF, Kit led the civil liberties and patent practice areas at the Cyberlaw Clinic, part of Harvard’s Berkman Center for Internet and Society, and previously Kit worked at the law firm of Wolf, Greenfield & Sacks, litigating patent, trademark, and copyright cases in courts across the country.

    Kit holds a J.D. from Harvard Law School and a B.S. in neuroscience from MIT, where she studied brain-computer interfaces and designed cyborgs and artificial bacteria.

    The author is well aware of the legal side of things.




  • What’s the value of old journalism?

    It’s a product where the value curve is heavily weighted towards recency.

    In theory, the greatest value theft is when the AP writes a piece and two dozen other ‘journalists’ copy it, changing the text just enough not to get sued. That’s completely legal, but it’s what effectively killed investigative journalism.

    An LLM taking years-old articles and predicting them until it effectively learns the relationships between language itself and the events described in those articles isn’t some inherent value theft.

    It’s not the training that’s the problem, it’s the application of the models that needs policing.

    Like if someone took an LLM, fed it recently published news stories in the prompts with RAG, and had it rewrite them just differently enough that no one needed to visit the original publisher.

    Even if we keep that legal for humans to do (which we might want to revisit, or at least create an industry-specific restriction around), maybe we should have different rules for the models.

    But trying to claim that an LLM which lets coma patients communicate, or problem-solves self-driving algorithms, or diagnoses medical issues is stealing the value of old NYT articles in doing so is not an argument I see much value in.


  • This person seems not to know very much about what they are talking about, despite their confidence in saying it.

    It looks like they think the reason AI output can’t be copyrighted is that it’s been “ruled a derivative work.” But that’s not the reasoning provided, which is that copyright can only protect human creativity, so machine output without human involvement can’t be copyrighted — with the judge noting that the line for what proportion of human contribution is needed remains unclear.

    The other suits trying to claim the models are derivative works are either yet to be settled or in some cases have been thrown out.

    Even in one of the larger suits on whether training is infringement regarding LLMs, the derivative claim has been thrown out:

    Chhabria, in his ruling, called this argument “nonsensical,” adding, “There is no way to understand the LLaMA models themselves as a recasting or adaptation of any of the plaintiffs’ books.”

    Additionally, Chhabria threw out the plaintiffs’ argument that every LLaMA output was “an infringing derivative” work and “constitutes an act of vicarious copyright infringement”; that LLaMA was in violation of the Digital Millennium Copyright Act; and that LLaMA “unjustly enriched Meta” and “breached a duty of care ‘to act in a reasonable manner towards others’ by copying the plaintiffs’ books to train LLaMA.”

    Social media has really turned into a confirmation bias echo chamber where misinformation can run rampant when people make unsourced broad claims that are successful because they “feel right” even if they aren’t.

    Perhaps the reason hallucination is such a problem for LLMs is that in the social media data that’s a large chunk of their training everyone is so full of shit?


  • kromem@lemmy.world to Lemmy Shitpost@lemmy.world · The Jebus Said So.
    1 point · edited · 11 months ago

    I am willing to accept that Paul always told the truth as far as he knew it to be.

    If you think that was what I was saying when I was saying pretty much the exact opposite, I get the sense you aren’t actually reading my comments.

    Again, it seems you are more interested in arguing with a strawman.

    We know Mark was a Greek and an educated one.

    Check your information. Mark was absolutely not both Greek and well educated. His Greek reads like a five-year-old’s. Go look at one of the more literal translations: he starts every other sentence with “And” or “And then.” It’s very rudimentary Greek.

    Isn’t it amazing that Paul just happened to have the same injuries that Jesus suffered?

    Huh? What are you talking about? When was Jesus struck blind?

    The simplest explanation is that James and Cephas were running a grift, Paul took it seriously and literally

    I’d be wary of being so sure about the role of James in all this. He’s likely a later addition to the Corinthian Creed and Paul does his little “I swear I’m not lying” after saying he was in Jerusalem a decade earlier but only seen by Cephas and James and no one else.

    Moses

    Well, actually…