• 0 Posts
  • 73 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • I guess it adds to the problem that it’s very context-specific. When you are in your own country, talking in your mother tongue with someone, you would probably only say “the south” to refer to the south of your country (or some other societally predefined south).

    And while we are on a mostly English-speaking platform inhabited mostly by US people, I’ve heard US people throwing around US-specific terms in a lot of different contexts/countries without checking the context they are in.


  • I think that was a stab at you saying “living in the south” as if it automatically meant the south of the USA. So your US-centric world view shines through. I think no one wanted to attack your world view per se, but rather your bias.

    And regarding your second comment, why so passive-aggressive? Obviously the US lives rent-free in everyone’s head, because it messes around with the whole world. Don’t get offended by people trying to point out that there is more to the world than one single country.


  • If you were really curious about the answer, you practically gave yourself the right search term there: search for “racial bias in general-purpose LLMs” and you’ll find answers.

    However, the way your question is phrased, you just seem to be trolling (i.e. secretly disagreeing and pretending to want to know, just to then object).


  • Yes. But this would probably cause friction with the general public, as the AI would then output the full range of human traits while people still expect very narrow default outputs. And thinking more about it, what is the full range of human traits anyway? Does such a thing exist? Can we access it? Even if we only looked at the societies the AI is present in, not all of those people are actually documented in a way the AI could be trained on. That’s partially the cause of the racist bias of AI in the first place, isn’t it? Because white, cishet, able-bodied people are depicted proportionally much more frequently in media.

    If you gave the AI a prompt, e.g. “a man with a hat”, what would you expect a good AI to produce? You have a myriad of choices to make, and a machine, i.e. the AI, will not be able to make all these choices by itself. Will the result be a Black person? Visibly queer or trans? In a wheelchair?

    I guess the problem really is that there is no default output for anything. But when people draw something, they do have a default option ready in their mind, because of societal biases and personal experiences. So I would probably draw a white cishet man with a boring hat if I were to execute that prompt, because I’m unfortunately heavily biased, like we all are. And an AI, trained on our biases, would draw the same.

    But to repeat the question from before: what would we expect a “fair” and “neutral” AI to draw? This is really tricky. In the meantime, your solution is probably good, i.e. training the AI on more diverse data.

    (Oh, and I ignored the whole celebrity/known-people thing; there, your solution is definitely the way to go.)


  • I’m sorry if my comment sounded rude. Reading through your comments again, I cannot say that you were blindly ranting. However, what frustrated me was the complete rejection of a new technology you seem to propose. I think we do agree on many or most points; I just want to preserve complexity in this discussion. I disagree that machines/AI are inherently bad and that we should boycott them. I agree that how AI is implemented in our capitalist society will exacerbate and cause many problems. (But it will also fix many.)