In the months leading up to last year’s presidential election, more than 2,000 Americans, roughly split across partisan lines, were recruited for an experiment: Could an AI model influence their political inclinations? The premise was straightforward—let people spend a few minutes talking with a chatbot designed to stump for Kamala Harris or Donald Trump, then see if their voting preferences changed at all.
The bots were effective. After talking with a pro-Trump bot, one in 35 people who initially said they would not vote for Trump flipped to saying they would. The share who flipped after talking with a pro-Harris bot was even higher: one in 21. A month later, when participants were surveyed again, much of the effect persisted. The results suggest that AI “creates a lot of opportunities for manipulating people’s beliefs and attitudes,” David Rand, a senior author on the study, which was published today in Nature, told me.
Rand didn’t stop with the U.S. general election. He and his co-authors also tested AI bots’ persuasive abilities in highly contested national elections in Canada and Poland—and the effects left Rand, who studies information sciences at Cornell, “completely blown away.” In both of these cases, he said, roughly one in 10 participants said they would change their vote after talking with a chatbot. The AI models took on the role of a gentle, if firm, interlocutor, offering arguments and evidence in favor of the candidate they represented. “If you could do that at scale,” Rand said, “it would really change the outcome of elections.”
The chatbots succeeded in changing people’s minds, in essence, by brute force. A separate companion study that Rand also co-authored, published today in Science, examined what factors make one chatbot more persuasive than another and found that AI models needn’t be more powerful, more personalized, or more skilled in advanced rhetorical techniques to be more convincing. Instead, chatbots were most effective when they threw fact-like claims at the user; the most persuasive AI models were those that provided the most “evidence” in support of their argument, regardless of whether that evidence had any bearing on reality. In fact, the most persuasive chatbots were also the least accurate.