New York City businesses that use artificial intelligence to help find hires now have to show the process was free from sexism and racism.

  • Godort@lemm.ee

    Some overworked help desk tech is going to have a rough day trying to explain to their HR manager why this isn’t just a yes or no question.

  • CoderKat@lemm.ee

    Good fucking luck. Here’s a fascinating article about Amazon’s attempt to use AI for hiring, which, to their credit, they realized was a bad idea and scrapped: https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

    In short, it was trained on past hiring data, so it taught itself the sexist hiring preferences of the humans who made those decisions. It was absolutely not designed to be sexist, and I’m sure the devs had good intentions, but it taught itself how to be sexist. From the article:

    In effect, Amazon’s system taught itself that male candidates were preferable. It penalized resumes that included the word “women’s,” as in “women’s chess club captain.” And it downgraded graduates of two all-women’s colleges, according to people familiar with the matter. They did not specify the names of the schools.

    And here’s a different but similar AI having some even subtler issues:

    […] The algorithms learned to assign little significance to skills that were common across IT applicants, such as the ability to write various computer codes, the people said.

    Instead, the technology favored candidates who described themselves using verbs more commonly found on male engineers’ resumes, such as “executed” and “captured,” one person said.

    To be very clear, these issues stem at their root from human biases, so not using an AI won’t save you from bias; a human-run process may well be even more biased, since an AI can at least be the work of entire teams doing their best to combat bias. But an AI can end up discriminating in very subtle and unfair ways, like penalizing certain schools, and it can perpetuate past bad behavior and make it harder to improve.

    Finally, this article is about Amazon noticing these biases and actively trying to correct them. While still imperfect, Amazon at least played whack-a-mole trying to root out biases for a while before giving up. Many companies won’t even do that, which is why this law is a good thing: we need laws like this to force them to at least try. Of course, ideally anti-bias laws would also apply to humans, since we are just as vulnerable.

  • keet@kbin.social

    AI shouldn’t be involved in hiring or firing decisions as it can’t be held accountable in the same way a human can. Yes, it is more efficient. But equity, not efficiency, should be the goal.

  • Iunnrais@kbin.social

    You definitely don’t even need to include sex or race as an input for the AI to show bias. An AI can pick up on other signals that correlate with sex or race… perhaps your school, perhaps your address, or perhaps the very style you tend to write your cover letter in.
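    As a rough illustration (made-up data and feature names, using scikit-learn), a model trained without the sex column can still score men and women differently when one of its inputs correlates with sex:

    ```python
    # Toy sketch: "sex" is never an input, but a correlated proxy
    # (a made-up "college_a" flag) lets the model reproduce biased labels.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 2000

    sex = rng.choice(["M", "F"], size=n)
    # Proxy feature: in this toy data, attending "college_a" correlates with sex
    college_a = np.where(sex == "M", rng.random(n) < 0.8, rng.random(n) < 0.2).astype(int)
    years_exp = rng.integers(0, 10, size=n)

    # Historical labels reflect biased past decisions: men were hired at a lower bar
    hired = (((years_exp > 3) & (sex == "M")) |
             ((years_exp > 6) & (sex == "F"))).astype(int)

    # Train WITHOUT the sex column -- only the proxy and experience are inputs
    X = np.column_stack([college_a, years_exp])
    model = LogisticRegression().fit(X, hired)

    scores = model.predict_proba(X)[:, 1]
    print("mean predicted score, men:  ", scores[sex == "M"].mean())
    print("mean predicted score, women:", scores[sex == "F"].mean())
    # The gap persists even though sex was never an explicit input.
    ```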

  • nodsocket@lemmy.world

    One easy way to show that an AI is racist is to check whether race is one of its input parameters. However, an AI can be racist even without any explicit race parameter.
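    In that case you can’t just inspect the inputs; you have to look at the outcomes. A rough sketch of that kind of check, with made-up numbers, using the “four-fifths rule” heuristic from US employment guidelines:

    ```python
    # Outcomes-based check: compare selection rates across groups rather than
    # inspecting the model's inputs. All numbers below are made up.

    def selection_rate(decisions):
        """Fraction of candidates in a group that the tool advanced."""
        return sum(decisions) / len(decisions)

    # 1 = advanced to interview, 0 = rejected, grouped by demographic category
    decisions_by_group = {
        "group_a": [1, 1, 0, 1, 0, 1, 1, 0, 1, 1],
        "group_b": [1, 0, 0, 0, 1, 0, 0, 1, 0, 0],
    }

    rates = {group: selection_rate(d) for group, d in decisions_by_group.items()}
    highest = max(rates.values())

    for group, rate in rates.items():
        impact_ratio = rate / highest
        flag = "  <-- potential adverse impact" if impact_ratio < 0.8 else ""
        print(f"{group}: selection rate {rate:.2f}, impact ratio {impact_ratio:.2f}{flag}")
    ```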

    • reilwin@lemmy.world

      Not necessarily; something that overt would be obvious and avoided. I’m pretty sure what they’re looking for is the subtler bias that comes from training the AI on bad datasets.

      For instance, you train the AI and tell it which candidates are good or bad. But maybe, by pure happenstance, the best candidates in your dataset are all male. If so, the AI might be accidentally trained to believe that all good candidates are male.
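      A toy version of that failure mode (made-up data, using scikit-learn):

      ```python
      # The training labels happen to mark only men as "good", so the model
      # learns to use gender even though skill is what we actually care about.
      import numpy as np
      from sklearn.tree import DecisionTreeClassifier

      rng = np.random.default_rng(1)
      n = 500

      is_male = rng.integers(0, 2, size=n)
      skill = rng.random(n)  # the thing we actually want to select on

      # By happenstance, every "good" candidate in this training set is male
      good_candidate = ((skill > 0.7) & (is_male == 1)).astype(int)

      X = np.column_stack([is_male, skill])
      model = DecisionTreeClassifier(max_depth=3).fit(X, good_candidate)

      # Two equally skilled candidates who differ only in gender
      print(model.predict([[1, 0.9], [0, 0.9]]))   # -> [1 0]: same skill, different outcome
      print(dict(zip(["is_male", "skill"], model.feature_importances_)))
      ```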

      • HobbitFoot @thelemmy.club

        Or you train the AI on data from humans who are racist. If an AI learns how to hire from a racist set of human managers, it may absorb those biases and apply them when it is choosing who to hire.

      • DreamerOfImprobableDreams@kbin.social

        I remember hearing about a high-profile case where the AI would dock points if someone’s resume listed them as participating in women’s sports as an extracurricular, while giving extra points if it listed them as participating in men’s sports.

        Also, bias doesn’t necessarily have to come from happenstance. Unfortunately, humans tend to have unconscious (or, sometimes, not-so-unconscious) biases against women and people of color. There was a study where researchers sent identical resumes to a random group of recruiters-- but half of the resumes had a male name and half had a female name.

        They found that both male and female recruiters rated the resumes with the male name higher and were more likely to recommend advancing them to the next round of interviews. IIRC, similar studies have found similar results if you give the resumes a “Black-sounding” name versus a “white-sounding” name.

        So if you train an AI on your own company’s hiring data-- which is likely to be tainted by the unconscious bias of your own recruiters and hiring managers-- then the AI might pick up on that and replicate it in its results.