• cryshlee@lemm.ee

    Not sure if you read the article, but in this specific instance I believe they are denouncing studios’ ability to copy their likeness and voice without their consent. Think deepfakes and simulated voices such as ElevenLabs’ AI voice tool. That is something that is actively being tested, and actors and voice actors want control over how their likeness and voice are used.

    I think this is a reasonable and valid argument and should be protected.

    • Mereo@lemmy.ca

      I read the article, but I felt like writing a “generic” comment about AI, as various studios also want to replace writers with AIs. I’ve been thinking about this for a long time.

      • cryshlee@lemm.ee

        I agree with your points. The term “AI” is a buzzword; “machine learning” is the more accurate term for what most people consider things like ChatGPT or Midjourney to be. I think it’s very important to differentiate between the different tools and not use a catchall term such as “AI”, because that leads to widespread demonization of all tools that use machine learning, when the truth is that some models are exploitative while others are not.

        I think working towards accurate language should be a priority when litigating the use of machine learning, but people also have a responsibility to do their due diligence and learn what machine learning is and what it can do.

        • Mikina@programming.dev

          Ranting about this at length was one of my last posts on Reddit. The whole AI situation feels exactly like COVID, where you had many “expert doctors” doomsaying about how vaccines would be the end of us all.

          I’ve seen a few podcasts with industry experts on AI where the guest managed to say things like “to me, it felt sentient” or “in a few years, we will have AGIs that are a hundred times more intelligent than us. Imagine a standard commoner of that era, with an IQ of around 70, talking to Einstein - except now the smartest people alive will be the commoners compared to the AI.” That’s such bullshit. As far as I can tell from my limited ML knowledge from college, I don’t see any way anything using machine learning can become an AGI - or get smarter than humans.

          Because ML needs feedback, and we can’t give feedback on something that’s more intelligent than we are. It’s as if the lowest commoner in the metaphor were staring over Einstein’s shoulder, and Einstein would only continue working on a theory if the commoner agreed that it was the correct approach; if the commoner said no, he would try an entirely different approach. I don’t see how that can get smarter than people, or how it can learn to “escape into the internet and destroy humanity”. Unless I’m missing a major advancement in ML algorithms, I can’t imagine any approach I know being capable of something like that. It just doesn’t work like that; it’s by definition not possible. (But if anyone knows more about the topic and disagrees with me, please let me know - I would really love to discuss this, since it’s pretty important to me.)
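          To make what I mean concrete, here’s a toy sketch (every name in it is invented, and it’s hugely simplified): the learner’s only definition of “better” is whatever its human judge approves of, so the judge’s own competence is the ceiling.

          ```python
          KNOWN_GOOD_WORDS = {"energy", "mass", "speed", "light"}

          def human_judge(answer: str) -> int:
              """Stand-in for human feedback: rates an answer by how good it
              looks *to the judge*, not by how correct it actually is."""
              return sum(word in KNOWN_GOOD_WORDS for word in answer.split())

          def train_step(candidates: list[str]) -> str:
              # The model is reinforced toward whichever output scores highest.
              # An answer the judge can't recognize as good is never selected,
              # even if it were objectively the best one.
              return max(candidates, key=human_judge)

          print(train_step([
              "energy equals mass times the speed of light squared",  # judge-pleasing
              "a genuinely novel theory the judge cannot evaluate",   # never wins
          ]))
          ```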

          But, on an entirely different point - ML will be better than people at the single task it’s given. Give it someone’s profile, built from data collected about them from the internet and smart devices, and let the model select the marketing campaign, email text, or video most likely to convince them to vote for XY. Given enough time to experiment and train, the model will get results - and there’s nothing you can do about it. Even if you know you will be manipulated, the model knows that too - it’s in your data - and will figure out a way to manipulate you anyway. That’s what I’m worried about, especially since Facebook has had years of data from billions of users to train and perfect its model. The Facebook feed is an ML training wet dream.
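          Here’s a toy epsilon-greedy bandit sketching that kind of per-user optimization (campaign names and profiles are invented; real systems are far more sophisticated). It needs no understanding of you at all - it just tries variants, observes engagement, and drifts toward whatever works on profiles like yours:

          ```python
          import random
          from collections import defaultdict

          CAMPAIGNS = ["fear_ad", "feelgood_video", "outrage_article"]
          # engagement stats keyed by (user_profile, campaign);
          # "shown" starts at 1 to avoid division by zero
          stats = defaultdict(lambda: {"shown": 1, "engaged": 0})

          def pick_campaign(profile: str, epsilon: float = 0.1) -> str:
              if random.random() < epsilon:      # explore: try a random variant
                  return random.choice(CAMPAIGNS)
              # exploit: the best observed engagement rate for this profile
              return max(CAMPAIGNS, key=lambda c: stats[(profile, c)]["engaged"]
                                                  / stats[(profile, c)]["shown"])

          def record_result(profile: str, campaign: str, engaged: bool) -> None:
              stats[(profile, campaign)]["shown"] += 1
              stats[(profile, campaign)]["engaged"] += int(engaged)

          # usage: show the chosen campaign, then feed the outcome back in
          choice = pick_campaign("undecided_voter_45_54")
          record_result("undecided_voter_45_54", choice, engaged=True)
          ```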

          We’re fucked; the only way to defend yourself is to avoid any kind of personalized content. Google search, the YouTube feed, news sites, streaming services… anything that’s personalized will potentially be able to manipulate you. That’s the biggest issue with ML.

    • tony@lemmy.hoyle.me.uk

      I actually wonder why, to play devil’s advocate a bit.

      If I’m watching a film and it’s starring, say, Arnold Schwarzenegger, part of the deal is that I’m getting him specifically. The whole package. An AI that looks like him isn’t the same thing at all and can’t be said to be starring him.

      But his face or his voice isn’t valuable on its own - it’s his reputation as a good actor. That’s why he’s paid the big bucks.

      Say the studio has an AI that can replicate an actor’s acting ability perfectly… they don’t… but let’s say they do one day… Why would they need the face? Once you can generate a near-infinite number of good actors, individual personalities don’t mean a lot.

      • exponential_wizard@lemm.ee

        You said it perfectly: it’s his reputation as a good actor, not the good acting itself. Stars get paid a ludicrous amount of money; you can easily find a decent actor for less and have plenty to spare to train them up.

        The face is everything. They plaster it all over the advertising, and it works. People will talk about the new movie: “Oh, and it has actor C in it.” “In that case, I’ll take a look.”

      • kromem@lemmy.world

        Let’s say you have a Terminator video game.

        If made today, that might mean 20 hours of preplanned main-story gameplay written by the main story writers and featuring expensive voice actors, and then another 40 hours of side content with less expensive talent.

        But what if, soon, you could have a game that still has around 60 hours of gameplay, yet no playthrough is exactly the same? Everyone gets the same core 20-hour main story, but the side content adapts to the things you focus on and enjoy.

        Hate lore but love combat? Arnold will take you to the future to help fight in the first robot war.

        Love lore and hate combat? He acts as your bodyguard in a more thriller-paced sequence, infiltrating Dyson labs while hiding from another robot hunting you.

        With adaptable generative tech powering pipelines, you might end up with a thousand hours of different content pulled into a 60-hour experience that changes based on the player.
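        A rough sketch of what that routing might look like (the arc names, profile fields, and scoring are all invented for illustration - a real pipeline would be vastly more involved):

        ```python
        from dataclasses import dataclass

        @dataclass
        class PlayerProfile:
            hours_in_combat: float = 0.0
            hours_in_dialogue: float = 0.0

        # each side arc scores its affinity for a given player
        SIDE_ARCS = {
            "future_war": lambda p: p.hours_in_combat,            # combat-heavy players
            "dyson_infiltration": lambda p: p.hours_in_dialogue,  # lore-loving players
        }

        def next_side_arc(profile: PlayerProfile) -> str:
            # serve whichever arc scores highest for this player
            return max(SIDE_ARCS, key=lambda arc: SIDE_ARCS[arc](profile))

        player = PlayerProfile(hours_in_combat=12.0, hours_in_dialogue=3.5)
        print(next_side_arc(player))  # -> future_war
        ```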

        I think many currently can’t really comprehend just how crazy the future is going to be.

        Now - the best way to do this is to have Arnold paid for both the primary work and the extended content, and to involve him in making sure the extended content stays true to his performances.

        As for getting rid of humans entirely - while that will likely happen for small roles, big stars have a marketing value that AI won’t be able to match until it walks red carpets, appears on talk shows, and gets mentioned in gossip tabloids.

        Arguably, many big stars already decrease the quality of voice-acting roles compared to dedicated voice actors, but they still get the jobs because more people buy the movie/game due to the big name.