[…] While the widespread AI-driven “apocalypse” predicted by some experts did not materialize, there was still a significant amount of misinformation. The Biden robocall was the most notable deepfake of this election cycle. But as Tim Harper, senior policy analyst and project lead for the Center for Democracy and Technology, explained, there were several other instances of AI misuse, including fake websites generated by foreign governments and deepfakes spreading misinformation about candidates.
In addition to that kind of misinformation, Harper emphasized that a major concern was how AI tools could target voters at a more granular level than previously seen, something he said did occur during this election cycle. Examples include AI-generated texts to Wisconsin students that were deemed intimidating, and non-English misinformation campaigns aimed at Spanish-speaking voters and intended to create confusion. AI’s role in this election, Harper said, has affected public trust and the perception of truth.
A positive trend this year, according to Jennifer Huddleston [senior fellow in technology policy at the Cato Institute], was that the existing information ecosystem helped combat AI-powered misinformation. The Biden robocall, for example, drew a quick response, allowing voters to be more informed and discerning about what to believe.
Huddleston said she believes it is too soon to predict precisely how the technology will evolve or how public perception and adoption of AI will develop. But she said education, used as a policy tool, can improve understanding of AI risks and reduce misinformation.
Internet literacy is still developing, Harper said; he expects to see a similarly slow increase in AI literacy and adoption: “I think public education around these sorts of threats is really important.”
[…]