• cyd@lemmy.world
    1 year ago

    Given that Europe hosts a negligible share of global AI R&D and tech startups, the most likely outcome is that the rest of the world keeps doing what it is already doing, with most R&D happening in the US and China, and companies then offer EU-specific products narrowly tailored to obey the letter of the law. That’s not necessarily a good or bad thing, just the likely outcome.

  • AutoTL;DR@lemmings.world
    1 year ago

    This is the best summary I could come up with:


    LONDON (AP) — European Union negotiators clinched a deal Friday on the world’s first comprehensive artificial intelligence rules, paving the way for legal oversight of technology used in popular generative AI services like ChatGPT that has promised to transform everyday life and spurred warnings of existential dangers to humanity.

    Negotiators from the European Parliament and the bloc’s 27 member countries overcame big differences on controversial points including generative AI and police use of facial recognition surveillance to sign a tentative political agreement for the Artificial Intelligence Act.

    The European Parliament will still need to vote on it early next year, but with the deal done that’s a formality, Brando Benifei, an Italian lawmaker co-leading the body’s negotiating efforts, told The Associated Press late Friday.

    Generative AI systems like OpenAI’s ChatGPT have exploded into the world’s consciousness, dazzling users with the ability to produce human-like text, photos and songs but raising fears about the risks the rapidly developing technology poses to jobs, privacy and copyright protection and even human life itself.

    However, negotiators managed to reach a tentative compromise early in the talks, despite opposition led by France, which had instead called for self-regulation to help homegrown European generative AI companies compete with big U.S. rivals, including OpenAI’s backer Microsoft.

    Rights groups also caution that the lack of transparency about the data used to train the models poses risks to daily life, because these models serve as the basic building blocks for software developers creating AI-powered services.


    The original article contains 846 words, the summary contains 241 words. Saved 72%. I’m a bot and I’m open source!