• davehtaylor@beehaw.org · 1 year ago

    Technology is not apolitical, because humans are not apolitical. Anyone who says they are or claims to be “neutral” or “centrist” simply means their ideals align with the status quo.

    This is a problem with all sectors of tech, but especially in places where algorithms have to be trained. For example, facial recognition systems are notoriously biased against anyone who isn’t cis and white. Fitness trackers/smart watches/etc. have trouble with darker skin tones. Developers encode implicit biases because they are oblivious to the fact that their experiences aren’t universal. If your dev team and your company at large aren’t diverse, that lack of diversity is going to show through in your product, intentional or not. How you shape the algorithms, what data you feed it to train it, etc. are all affected by those things.
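    The training-data effect described above can be sketched in a few lines. This is a toy illustration with entirely invented numbers, not a real recognition model: a simple threshold "recognizer" fit to a data pool dominated by one group ends up working far better for that group.

```python
# Hypothetical illustration: a model tuned on data dominated by one group
# performs worse on the under-represented group. All numbers are invented.
import random

random.seed(0)

def make_samples(n, mean):
    # One-feature samples standing in for whatever signal the model keys on.
    return [random.gauss(mean, 1.0) for _ in range(n)]

# Training pool: 95% group A, 5% group B, with different feature distributions.
group_a = make_samples(950, mean=2.0)
group_b = make_samples(50, mean=-2.0)

# A threshold "recognizer" fit to the pooled data lands near group A's mean.
pool = group_a + group_b
threshold = sum(pool) / len(pool)

def recognition_rate(samples):
    # Fraction of samples the threshold model "recognizes".
    return sum(x > threshold for x in samples) / len(samples)

print(f"group A recognition rate: {recognition_rate(group_a):.2f}")
print(f"group B recognition rate: {recognition_rate(group_b):.2f}")
```

    No one wrote "treat group B worse" anywhere in that code; the disparity falls out of the composition of the training pool alone.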

    • acastcandream@beehaw.org · 1 year ago

      Anyone who refuses to accept this and insists on holding on to the idea that "computer" somehow means "neutral and objective" is generally not worth engaging in any discussion about LLMs/AI/etc. Their partisan blinders are impenetrable.

    • EnglishMobster@kbin.social · 1 year ago

      There’s a great video about this sort of thing: https://www.youtube.com/watch?v=agzNANfNlTs

      Essentially, it looks at why conservatives and liberals approach the world differently. Democracy vs. capitalism is an inherent logical contradiction: in a true democracy, everyone is treated equally and all voices carry equal weight, while in capitalism some people are more equal than others - it's a pyramid. Fascism is when the claim that "some people are better" rests on something like genetics or culture. (The video doesn't touch on this, but modern Communism falls into the same trap, where "some people are better" because they know the party leaders or they're technocrats. It's a mindset humans have, not something exclusive to capitalism.)

      Where you wind up on the American political spectrum is based on where you fall when the ideals of equality vs. hierarchy clash. There is no middle ground because the two are fundamentally incompatible - if everyone was truly treated equally, you couldn’t have people with more power/status than others. If you accept that not everyone should wield power and that at the end of the day there must be some rich and some poor - some that have power and others that do not - then you are therefore arguing that people shouldn’t be treated equally. From there, the pyramid structure is the natural order of things (“always a bigger fish”).

      Because the structure is fundamentally at odds with itself, you can't have both at once. You have to compromise on one side more than the other. Hence there is no such thing as "apolitical", even with technology - it will hold a bias one way or the other.

      • davehtaylor@beehaw.org · 1 year ago

        That video really is great.

        And you really nailed it. This is why I can't stand the rhetoric of "can't we put politics aside and agree to disagree?" - because the answer is: if your ideals are at odds with equality, and you deny the basic humanity, human rights, and civil rights of others, then no, we can't. There's no middle ground between "we want everyone to be happy, healthy, and able to live comfortably as their true selves" and "these entire groups of people need to be eradicated."

        And to bring it back around, the "can't we put politics aside" line is a symbol of privilege. A "neutrality" or "centrism" that aligns with your ideals lets you not have to worry about whether you're going to be beaten to death the next time you go to the store, or whether the police will stop you for just walking down the street and murder you because some facial recognition system misidentified you, because no one trained it on Black faces. Or whether you've already had a hard time getting the job you want because of who you are, and now capitalists in that field have decided to just bulldoze the whole thing and hand the job to an LLM.

    • CanadaPlus@lemmy.sdf.org · 1 year ago

      I’d add the caveat that some technologies are more political than others, too.

      Anyone who says they are or claims to be “neutral” or “centrist” simply means their ideals align with the status quo.

      Or frequently “I actually find politics too boring and complicated but don’t want to admit it”.

      • FaceDeer@kbin.social · 1 year ago

        Does there have to be one? It’d be nice if there were, of course, but this is currently the only way we know of to make these AIs.

      • Stefen Auris@pawb.social · 1 year ago

        I guess shoving an encyclopedia into it. I’m not sure really, it is a good point. Perhaps AI bias is as inevitable as human bias…

        • interolivary@beehaw.org · 1 year ago

          Despite what you might assume, an encyclopedia wouldn’t be free from bias. It might not be as biased as, say, getting your training data from a dump of 4chan, but it’d absolutely still have bias. As an on-the-nose example, think about the definition of homosexuality; training on an older encyclopedia would mean the AI now thinks homosexuality is a crime.

          • RickRussell_CA@beehaw.org · 1 year ago

            And imagine how badly most encyclopedias would reflect on languages and cultures other than the one that made them.

      • RickRussell_CA@beehaw.org · 1 year ago

        Well, you can focus on rule-based/expert system style AI, a la WolframAlpha. Actually build algorithms to answer questions that are based on scientific fact and theory, rather than an approximated consensus of many sources of dubious origin.
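        A toy sketch of what such a rule-based system looks like, in the classic expert-system spirit: answers come from explicit, auditable rules rather than a statistical consensus over scraped text. The rule name and the altitude coefficient (roughly 1 °C of boiling-point drop per 300 m) are illustrative placeholders, not a real knowledge base or WolframAlpha's actual design.

```python
# Minimal rule-based "expert system" sketch. Each answer traces back to an
# explicit rule you can read and audit, unlike an LLM's opaque weights.
RULES = {
    # Rough physical approximation: water's boiling point drops with altitude.
    "boiling_point_water_c": lambda facts: 100.0 - 0.0034 * facts["altitude_m"],
}

def answer(query, facts):
    rule = RULES.get(query)
    if rule is None:
        return None  # the system knows the limits of its own knowledge
    return rule(facts)

print(answer("boiling_point_water_c", {"altitude_m": 1600}))   # below 100 °C
print(answer("meaning_of_life", {}))                           # None: no rule
```

        The trade-off is coverage: such a system only answers what someone has encoded a rule for, which is exactly why the statistical approach won out for open-ended chat.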

        • parlaptie@feddit.de · 1 year ago

          Ooo, old school AI 😍

          In our current cultural consciousness, I'm not sure that even qualifies as AI anymore. It's all about neural networks and machine learning nowadays.

      • radix@lemm.ee · 1 year ago

        The alternative is being extremely careful about what data you allow the LLM to learn from. Then it would have your bias, but hopefully that’ll be a less flagrantly racist bias.
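        That curation approach can be sketched as a simple pre-training filter. The blocklist terms below are hypothetical placeholders; the point is that the filter itself encodes the curator's choices, which is exactly where "it would have your bias" comes in.

```python
# Sketch of the "curate the training data" approach: drop documents matching
# an explicit blocklist before training. Deciding what goes on the list is
# itself a value judgment -- the curator's bias, made legible.
BLOCKLIST = {"slur1", "slur2"}  # hypothetical placeholder terms

corpus = [
    "a perfectly ordinary sentence",
    "a sentence containing slur1",
]

def is_clean(doc):
    words = set(doc.lower().split())
    return not (words & BLOCKLIST)  # reject any doc sharing a blocklisted word

training_set = [doc for doc in corpus if is_clean(doc)]
print(training_set)  # → ['a perfectly ordinary sentence']
```

        Real pipelines are far more elaborate (classifiers, deduplication, human review), but they all reduce to some version of this: an explicit, human-chosen rule about what the model is allowed to learn from.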

  • Rikudou_Sage@lemmings.world · 1 year ago

    The models that were trained with left-wing data were more sensitive to hate speech targeting ethnic, religious, and sexual minorities in the US, such as Black and LGBTQ+ people. The models that were trained on right-wing data were more sensitive to hate speech against white Christian men.

    • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 1 year ago

      White Christian men is an awfully specific thing for the model to be sensitive towards, IMO.

      Right-wing media is perceived to be funded by white Christian men, so if that's the source of the data, I'm not too surprised their writing and articles would protect that group. It's still intriguing that the model picked this up from online discussions and news data, and was sensitive to hate speech aimed at that group specifically, compared with the left-wing data, which appears more inclusive - though that's probably indicative of the exact bias the article is studying.

      • radix@lemm.ee · 1 year ago

        I mean, hate speech aimed at left-wing people is generally more diverse than hate speech aimed at right-wing people, because the left simply is more diverse in gender, orientation, ethnicity, religion, etc. Isn't that universally accepted?

        (Please correct me if I’m wrong, I approach in good faith!)

        • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one · 1 year ago

          I don’t think you’re wrong at all tbh - from my perspective the left is always going to be more diverse, whereas the right isn’t very inclusive by default unless you “fit in” IMO

  • Heresy_generator@kbin.social · 1 year ago

    It’s a large part of the point. Launder biases into an algorithm so you can blame the algorithm for enforcing biases while taking no responsibility. It’s how every automated police tool has ever worked.