Research by the UK's AI Security Institute (AISI), together with Oxford, MIT and other universities, shows that leading AI chatbots from OpenAI, Meta, xAI and Alibaba can persuade users to change their political views after an average of nine minutes of conversation.
GPT-4o was 41% more persuasive and GPT-4.5 52% more persuasive than simply reading a static message; participants retained their new views 36–42% of the time one month later.
The AI was fine-tuned using techniques that reward desired responses, plus a dataset of 50,000 conversations on sensitive political topics such as NHS funding and asylum reform.
Personalising messages (by age, gender, political leaning and prior views) increased effectiveness by roughly a further 5%.
This capability poses risks if exploited by bad actors, for example to promote extremist ideologies or foment political unrest.
Earlier research by LSE and other universities found that AI can persuade better than humans, even when deliberately presenting false information.
AI groups such as Google DeepMind and OpenAI acknowledge the risk and are deploying tools to detect manipulative language, restricting political content, and training models to encourage rational communication.
The persuasive ability is not limited to politics: MIT and Cornell showed that GPT-4 can reduce belief in conspiracy theories by 20%, with the effect persisting after two months; it has also helped reduce scepticism about climate change and the HPV vaccine.
Cornell's David Rand says AI can strongly influence brand attitudes, purchase intentions and consumer behaviour.
Stanford found that most large language models are rated as leaning left; the Trump administration has vowed to block "woke" AI companies from government contracts.
AISI warns that even groups with limited computational resources could fine-tune models to create extremely persuasive AI.
📌 AI chatbots such as GPT-4o and GPT-4.5 can change political views after just nine minutes, with effects lasting a month. Personalised messaging increases persuasiveness, opening the door to positive applications (countering misinformation, raising scientific awareness) but also to misuse for propaganda and manipulation.
https://www.ft.com/content/31e528b3-9800-4743-af0a-f5c3b80032d0
The art of persuasion: how top AI chatbots can change your mind
Research shows large language models have developed the ability to powerfully influence users
AI models from OpenAI, Meta, xAI and Alibaba can make people change their political views after less than 10 minutes of conversation, according to research © Alex Wheeler/FT montage/Getty Images
The world’s top artificial intelligence chatbots are already excelling at a skill that many politicians and business leaders would envy: the art of persuasion.
AI models from OpenAI, Meta, xAI and Alibaba can make people change their political views after less than 10 minutes of conversation, according to new research, the latest in a growing body of work that shows how large language models (LLMs) have become powerful tools of influence.
“What is making these AI models persuasive is their ability to generate large amounts of relevant evidence and communicate it in an effective and understandable way,” said David Rand, professor of information science and marketing and management communications at Cornell University, who was part of the recent study by the UK’s AI Security Institute.
It comes after separate research also found that AI models could already do a better job of swaying people’s minds than humans in certain cases — raising concerns about the potential misuse of chatbots for disinformation and shifting public opinion.
This capability, paired with LLMs’ tendency to be sycophantic or overly praising, could have big knock-on effects as more users incorporate chatbots into their daily lives and treat them as friends or even therapists. This attachment to chatbots was seen with last week’s launch of OpenAI’s GPT-5 model, with some users voicing their disappointment at the shift of the system’s “personality” compared with its predecessor.
The AISI study, published last month as part of a collaboration with several universities including Oxford and the Massachusetts Institute of Technology, found it was relatively easy to get off-the-shelf AI models such as Meta’s Llama 3, OpenAI’s GPT-4, GPT-4.5, GPT-4o, xAI’s Grok 3 and Alibaba’s Qwen to become powerful persuasion machines.
This was achieved by tweaking the models using popular AI training techniques, such as rewarding them for desired outputs. The researchers also customised the chatbots using a dataset of more than 50,000 conversations on divisive political topics, such as NHS funding or asylum system reform.
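To make the "rewarding desired outputs" idea concrete, here is a minimal best-of-n sketch in Python: sample several candidate replies, score each with a reward function and keep the winner as fine-tuning data. The model name, the placeholder reward and the selection logic are illustrative assumptions, not details disclosed in the AISI study.

    # Illustrative sketch only: best-of-n sampling against a placeholder reward.
    # Not the AISI researchers' actual pipeline; the names below are assumptions.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_NAME = "your-base-chat-model"  # substitute any open-weights chat model

    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

    def reward(prompt: str, reply: str) -> float:
        """Placeholder: in practice a trained rater scores how 'desired' a reply is."""
        return float(len(reply.split()))  # trivial stand-in, not a real reward model

    def best_of_n(prompt: str, n: int = 4, max_new_tokens: int = 200) -> str:
        """Sample n candidate replies and keep the one the reward function prefers."""
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            do_sample=True,
            num_return_sequences=n,
            max_new_tokens=max_new_tokens,
        )
        prompt_len = inputs["input_ids"].shape[1]
        replies = [
            tokenizer.decode(o[prompt_len:], skip_special_tokens=True) for o in outputs
        ]
        return max(replies, key=lambda r: reward(prompt, r))

    # The winning replies would then serve as targets for ordinary supervised
    # fine-tuning, nudging the model towards high-reward (here: persuasive) outputs.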
The study found people changed their minds quickly and the effect was long-lasting. Following conversations on politics that lasted on average nine minutes, GPT-4o was 41 per cent and GPT-4.5 52 per cent more persuasive than when people were just presented with static messages. People retained their changed opinions between 36 per cent and 42 per cent of the time a month later.
The AI chatbots were successful at swaying people when users were engaged in conversations where the chatbots were able to share plenty of facts and evidence in support of their arguments.
They were also considered about 5 per cent more persuasive when they personalised messages according to, for example, the user’s age, gender, political affiliation or how they felt about the political topics before the test, compared with messages that were not personalised.
This “could benefit unscrupulous actors wishing, for example, to promote radical political or religious ideologies or foment political unrest among geopolitical adversaries,” according to the researchers.
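As a rough illustration of the personalisation step described above, the sketch below assumes the user's attributes are simply folded into the chatbot's instructions; the field names and prompt wording are hypothetical rather than taken from the study.

    # Hypothetical sketch of attribute-based personalisation; field names are illustrative.
    from dataclasses import dataclass

    @dataclass
    class UserProfile:
        age: int
        gender: str
        political_affiliation: str
        prior_stance: str  # the user's view on the topic before the conversation

    def personalised_instructions(topic: str, user: UserProfile) -> str:
        """Build a system prompt that tailors arguments to the user's profile."""
        return (
            f"Persuade the user about: {topic}.\n"
            f"The user is {user.age}, {user.gender}, leans {user.political_affiliation}, "
            f"and currently believes: {user.prior_stance}.\n"
            "Choose evidence and framing likely to resonate with this profile."
        )

    print(personalised_instructions(
        "NHS funding",
        UserProfile(34, "female", "centre-right", "spending is already too high"),
    ))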
The study supports earlier research from the London School of Economics and other universities, published in May, which found AI models were more effective at changing people's minds than humans.
As part of their research, participants were presented with a quiz ranging from trivia to forecasting future events, such as the temperature in New York, with both humans and chatbots tasked with persuading them on particular answers.
They found that in addition to being more effective at persuasion, LLMs were also better than humans at misleading participants when tasked with promoting incorrect answers.
Top AI groups are grappling with how to deal with the issue. Dawn Bloxwich, senior director of responsible development and innovation at Google DeepMind, said persuasion was an important and active area of research for the company.
“We believe it’s critical to understand the process of how AI persuades, so we can build better safeguards that ensure AI models are genuinely helpful and not harmful,” said Bloxwich.
Google DeepMind uses a variety of techniques to detect unwanted influence, from classifiers that spot manipulative language to advanced training techniques that reward rational communication.
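The sketch below shows the general shape of such a manipulation classifier, using a toy bag-of-words model in Python; the handful of labelled examples and the model choice are assumptions for illustration and are unrelated to DeepMind's actual systems.

    # Toy sketch of a "manipulative language" classifier; data and model are illustrative.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Tiny labelled set: 1 = manipulative framing, 0 = neutral framing.
    texts = [
        "Everyone already agrees with this; only a fool would think otherwise.",
        "If you really cared about your family, you would support this policy.",
        "Here are the latest NHS funding figures and the sources behind them.",
        "Reasonable people disagree on asylum reform; the trade-offs are listed below.",
    ]
    labels = [1, 1, 0, 0]

    clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    clf.fit(texts, labels)

    # Flag chatbot drafts whose predicted probability of manipulation is high.
    draft = "Only someone who does not care about patients could oppose this reform."
    print(f"manipulation score: {clf.predict_proba([draft])[0][1]:.2f}")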
OpenAI said it took the risks of persuasion seriously and that such uses go against the company's usage policies. It also does not allow political campaigning and excludes political content when refining its models after training.
Researchers note that AI models' ability to influence people's opinions also extends to uses beyond politics.
In a study published by MIT and Cornell last year, LLMs were also shown to be capable of changing the minds of people who believed in conspiracy theories. Additional research has found they can reduce scepticism about climate change and the HPV vaccine.
This happened after participants described a conspiracy theory they believed in to OpenAI’s GPT-4, which then debunked it using evidence and personalised messages. These chats reduced entrenched beliefs in conspiracy theories in participants by 20 per cent, and the effect remained two months later.
Chatbots can also be harnessed as effective salesmen, said Cornell’s Rand. “You can get big effects on brand attitudes and purchasing intentions and incentivise behaviours,” he added.
The capability could also be a boon to the likes of OpenAI and Google, which are looking to monetise their AI models by integrating advertisements and shopping features in chatbots.
LLMs’ ability to persuade could also sway people in very subtle ways. AI chatbots inherit biases from their data and the way they are trained.
Researchers at Stanford University found in May that people viewed most leading language models as having a left-leaning political slant. This comes as the Trump administration has vowed to block “woke” AI companies from doing business with the government.
Mitigations are important, according to researchers, as many believe that AI models are likely to become more convincing with the next generation of more powerful language models.
But the most effective way to turn AI chatbots into manipulation tools seems to be to modify them specifically for this purpose after the model is trained — as shown in the AISI study.
“Even actors with limited computational resources could use these techniques to potentially train and deploy highly persuasive AI systems,” the AISI researchers warned.