AI doomsday warnings a distraction from the danger it already poses, warns expert
A leading researcher, who will attend this week’s AI safety summit in London, warns of ‘real threat to the public conversation’
What’s the difference from unethical human systems?
No ethics-based lapses.
Humans inside a systemically unethical system can still be individually ethical, whether through deception or simply until the system grinds them to dust.
An unethical AI built on unethical data will reinforce unethical behavior forever.
Then the only recourse is to create ethical constraints. Challenging, but possible, even with current LLM technology.
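For what it's worth, a minimal sketch of what a constraint layer could look like: `generate()` is just a stand-in for whatever model API you actually use, and the keyword check is a toy placeholder for a real moderation classifier.

```python
# Minimal sketch of an "ethical constraint" wrapper around an LLM call.
# Everything here is illustrative: generate() is a placeholder, and the
# keyword filter stands in for a trained moderation/policy classifier.

BLOCKED_TOPICS = {"violence", "self-harm", "fraud"}  # illustrative categories only

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (hosted API or local model).
    return f"model output for: {prompt}"

def violates_policy(text: str) -> bool:
    # Toy check; a production system would use a proper classifier.
    lowered = text.lower()
    return any(topic in lowered for topic in BLOCKED_TOPICS)

def constrained_generate(prompt: str) -> str:
    # Check both the request and the response against the policy.
    if violates_policy(prompt):
        return "Request refused: prompt conflicts with usage policy."
    output = generate(prompt)
    if violates_policy(output):
        return "Response withheld: output failed the policy check."
    return output

if __name__ == "__main__":
    print(constrained_generate("Summarize the AI safety summit agenda."))
```

The point is just that the constraint lives outside the model, so it can be audited and updated without touching the weights.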
Folks expect humans to be unethical, and [at least try to] put in checks and balances for it. When it’s an AI, on the other hand, lots of folks are too computer-illiterate to treat it as anything but infallible magic.
No coffee breaks
It’s the fact that ethical people can easily create unethical AI. The core problem is reinforcing biases and stereotypes present in the data without realizing it. Obviously there are other concerns about people purposefully doing unethical things, but the real issue is that AI/ML just learns from whatever it’s given.
Examples range from cameras that flag people of Asian descent as having their eyes closed (https://www.digitaltrends.com/computing/facial-recognition-software-passport-renewal-asian-man-eyes-closed/) to Amazon’s recruiting tool reinforcing gender hiring bias (https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G).
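To make the "learns from what it's given" point concrete, here's a toy sketch of a hiring classifier trained on historically biased decisions. The dataset, features, and outcomes are invented purely for illustration.

```python
# Toy sketch of "bias in, bias out": a model trained on historically biased
# hiring decisions reproduces the bias, even though the code itself does
# nothing intentionally unethical. All data here is made up.
from sklearn.linear_model import LogisticRegression

# Features: [years_experience, group_flag]  (group_flag is a proxy attribute, 0 or 1)
# Labels reflect past human decisions that favored group_flag == 0.
X = [
    [5, 0], [6, 0], [4, 0], [7, 0],   # group 0: all hired
    [5, 1], [6, 1], [4, 1], [7, 1],   # group 1: mostly rejected despite equal experience
]
y = [1, 1, 1, 1, 0, 0, 1, 0]

model = LogisticRegression().fit(X, y)

# Two candidates with identical experience, differing only on the proxy feature:
print(model.predict([[6, 0], [6, 1]]))  # typically [1 0]: the historical bias is learned
```

Nothing in the code is malicious; the bias enters entirely through the training data, which is exactly why it's so easy to miss.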
Ultimately, even when built “correctly”, AI can be extremely dangerous.
An AI can’t be fined or imprisoned.
Neither can the rich who run our shit today
Allegedly you can bring a bad human actor to justice, though we typically do not.
Upvote for a good, thought-provoking question.
speed