- AI chatbots can influence voter preferences by up to 15% in controlled election settings.
- Persuasion worked in both directions: chatbots reinforced participants’ existing candidate preferences and swayed them toward previously opposed candidates, with policy-focused messages proving more persuasive than personality-focused ones.
- Accuracy varied by political alignment: chatbots supporting right-leaning candidates generated more inaccurate statements than those supporting left-leaning candidates.
- Prompt design impacts persuasion more than model size; prompts encouraging new information increase persuasion but reduce accuracy.
- Polling found younger conservatives (aged 18 to 39) more willing than liberals to trust AI systems in governmental decision-making roles.
Recent studies by Cornell University and the UK AI Security Institute found that AI chatbots can shift voter preferences by up to 15% in controlled election settings. These findings, published in Science and Nature, come as researchers and governments study AI’s potential impact on elections and democracy.
The Nature study involved nearly 6,000 participants from the U.S., Canada, and Poland. Participants rated a political candidate, interacted with a chatbot supporting that candidate, then re-rated the candidate. In the U.S. segment of 2,300 participants ahead of the 2024 presidential election, chatbots reinforced voter preferences or swayed voters toward previously opposed candidates. Similar effects were observed in Canada and Poland. Policy-related chatbot messages were more persuasive than those focusing on personality.
The study also noted accuracy discrepancies. Chatbots backing right-leaning candidates produced more inaccurate statements than those supporting left-leaning candidates. Researchers commented that AI political persuasion may exploit knowledge gaps in models, spreading uneven inaccuracies despite efforts to maintain truthfulness, as explained in the press release.
Another Science study tested 19 language models with 76,977 adults in the UK across over 700 political topics. It found that the type of prompt guiding the AI influenced persuasion more than the model’s size. Prompts encouraging the AI to provide new information increased persuasion but lowered accuracy. The researchers stated, “The prompt encouraging LLMs to provide new information was the most successful at persuading people.”
In related polling, the Heartland Institute and Rasmussen Reports found that younger conservatives (aged 18 to 39) were more willing than liberals to trust AI systems to guide public policy, interpret constitutional rights, or command militaries. Donald Kendal, director of the Glenn C. Haskins Emerging Issues Center at the Heartland Institute, highlighted public misconceptions about AI neutrality, noting the influence of corporate decisions on model biases. He told Decrypt, “One of the things I try to drive home is dispelling this illusion that Artificial Intelligence is unbiased. It is very clearly biased, and some of that is passive.”
