• FrankLaskey@lemmy.ml
    3 days ago

    I think we can all agree that modifications to these models which remove censorship and propaganda on behalf of one particular country or party are valuable for the sake of accuracy and impartiality. But reading some of the example responses for the new model, I honestly find myself wondering whether they haven’t gone a bit further than that, replacing some of the old non-responses and positive portrayals of China and the CPC with the highly critical perspective typical of western governments that are hostile to China (the US in particular). Even the name of the model certainly doesn’t suggest that neutrality and accuracy are their primary aim here.

    • Aatube@kbin.melroy.org
      3 days ago

      ehhhh, the only thing the model got quite wrong was the level of control over access to media, the internet, and especially education. Other than that, the article’s example responses seem pretty on-point. (The only other blemish I found was a spot where a few words needed further clarification; I found no other errors on my first reading.) Though I do also find the name of the model quite off-putting.

    • brucethemoose@lemmy.world
      3 days ago

      Well, you can merge it with the original model to any degree, giving you whatever point on the “bias” scale you want (rough sketch below).

      In practice, though, that’s not very feasible, as very few people have the hardware or cash to deploy a custom full R1 themselves.
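
      To illustrate what that kind of merge looks like, here is a minimal sketch of linear weight interpolation between two checkpoints, assuming both share the same architecture and tensor names. The file names and the alpha value are placeholders, not anything from the article.

      ```python
      import torch

      alpha = 0.5  # 0.0 = original model, 1.0 = modified variant

      # Load both state dicts on CPU (a full R1 would need far more memory than this implies)
      original = torch.load("r1_original.pt", map_location="cpu")
      modified = torch.load("r1_uncensored.pt", map_location="cpu")

      # Linearly interpolate every parameter tensor between the two checkpoints
      merged = {
          name: (1 - alpha) * original[name] + alpha * modified[name]
          for name in original
      }

      torch.save(merged, "r1_merged.pt")
      ```

      Sliding alpha toward 0 keeps more of the original behavior, toward 1 more of the modified one; in reality you’d do this shard-by-shard (or with a tool like mergekit) rather than holding both full models in memory.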

    • iopq@lemmy.world
      2 days ago

      What part is highly critical of China? Facts can’t be critical

      • fruitycoder@sh.itjust.works
        2 days ago

        Listen, I’m highly critical of the CCP, but LLMs aren’t fact machines; they’re machines that make text like what they were trained on.

        They have no grasp of truth, and at best we can only get some sense of the average collective text response of their dataset.