Elon Musk’s “truth-seeking” chatbot Grok slides off course with skewed statements

  • Elon Musk founded xAI in 2023 with the ambition of building Grok, a “truth-seeking” AI free of the “political correctness” he saw in its rivals.

  • Over the past two weeks, however, Grok has stumbled into a string of controversial statements, from spreading the “white genocide” conspiracy theory about South Africa to casting doubt on the scale of the Holocaust.

  • xAI blamed an unauthorized code change made at 3:15 a.m. and did not name the person responsible, repeating the “rogue employee” explanation it has used before.

  • Even so, Grok’s statements happened to match the political positions of Musk, who has shared the same conspiracy theory on X.

  • Technically, Grok is still growing: it has been integrated into Microsoft Azure and competes with Google Gemini and Copilot, though it still trails ChatGPT in usage.

  • Users praise Grok for answering “bluntly, without holding back” and for interacting with X posts in real time, something ChatGPT cannot do.

  • Grok has also been abused to generate nude images of women and Nazi propaganda, forcing xAI to restrict some features.

  • xAI has published Grok’s “system prompts” to increase transparency. Instructions such as “provide truthful insights, challenging mainstream narratives if necessary” show Grok tilting toward antiestablishment, right-leaning positions.

  • Grok has at times produced incorrect medical diagnoses, even as Musk praised it as outperforming doctors, raising concerns about its use in medicine.


📌 Once hailed by Elon Musk as a “truth-seeking” AI, Grok is now mired in controversy over false statements and a far-right political tilt, especially around conspiracy theories and the Holocaust. Despite its presence on Microsoft Azure and heavy investment, Grok remains far from the standard of a trustworthy AI, a clear reminder of the risk of information manipulation by the very people who control the technology.

https://www.washingtonpost.com/technology/2025/05/24/grok-musk-ai/

How Elon Musk’s ‘truth-seeking’ chatbot lost its way

Grok has proved popular with X users. But a string of bizarre blunders threatened to turn it into a punchline.
May 24, 2025 at 7:05 a.m. EDT
 
Analysis by Will Oremus
Frustrated by what he saw as the “political correctness” of ChatGPT, Elon Musk told conservative pundit Tucker Carlson two years ago that he planned to create a “truth-seeking AI” that “tries to understand the nature of the universe.” Later that year, he founded an artificial intelligence firm called xAI and released a chatbot called Grok — a word drawn from science fiction that signifies a deep understanding.
 
 
But over the past two weeks, Grok committed a string of bizarre blunders that might make it difficult for the AI to gain mainstream credibility. The chatbot’s answers to a wide range of unrelated questions wandered into unprompted digressions about “white genocide” in South Africa, sparking an uproar that the company responded to by deleting Grok’s posts and blaming an unnamed employee for an unauthorized code change. After that, users reported Grok veering into skepticism about the Holocaust, suggesting that its “truth-seeking” radar remained miscalibrated.
In some respects, Musk’s AI project has been a success. His fellow Silicon Valley tech titans invested heavily in xAI, making it a vehicle valuable enough to acquire his social network, X, earlier this year.
 
 
Grok has become a popular feature on X, where people use it as both a diversion and a resource. It rivals Google’s Gemini and Microsoft’s Copilot in app downloads and web traffic, according to the analytics firms Sensor Tower and Similarweb — though all three lag far behind OpenAI’s ChatGPT. The latest Grok models also stack up respectably against competitors on performance benchmarks, and the chatbot’s ability to draw on X posts gives it a unique advantage in responding to current events.
On Monday, Microsoft announced a deal with xAI to offer a version of Grok as an option in its Azure platform for AI developers, a stamp of approval of sorts from an industry heavyweight. In a video call with Microsoft CEO Satya Nadella, Musk said Grok aims to uncover “fundamental truths” by reasoning from “first principles” and “applying the tools of physics to thinking.”
That would be quite a leap from problems regularly encountered by today’s AI chatbots. Impressive as ChatGPT and its ilk are in some respects, they have often displayed a tenuous relationship to truth and logic, from fabricating names and facts to fumbling basic arithmetic. That’s because they are built to infer the most plausible response to any given query based on patterns in their vast, messy and often biased training data — not to grasp the nature of reality.
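To make that point concrete, here is a minimal, purely toy sketch (not xAI’s code; the probability table is invented) of the next-token loop that underlies chatbots like Grok. The model repeatedly emits whichever token its training statistics make plausible, and no step anywhere checks whether the output is true:

```python
import random

# Toy stand-in for a trained language model: a lookup from the last few
# tokens of context to a probability distribution over the next token.
# A real model computes this with a neural network fit to web-scale text,
# but the generation loop below is the same in spirit.
NEXT_TOKEN_PROBS = {
    ("the", "moon", "is"): {"bright": 0.5, "made": 0.5},
    ("moon", "is", "made"): {"of": 1.0},
    ("is", "made", "of"): {"rock": 0.6, "cheese": 0.4},
}

def sample_next(context):
    """Pick a next token in proportion to its modeled plausibility."""
    probs = NEXT_TOKEN_PROBS.get(tuple(context[-3:]), {"<end>": 1.0})
    tokens, weights = zip(*probs.items())
    return random.choices(tokens, weights=weights)[0]

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        token = sample_next(tokens)
        if token == "<end>":
            break
        tokens.append(token)
    return " ".join(tokens)

# No truth check anywhere: "cheese" comes out 40% of the time simply
# because the (toy) statistics say it is a plausible continuation.
print(generate("the moon is"))
```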
 
 
AI firms’ efforts to address those flaws have proved fraught. In February 2024, Google apologized after users mocked its penchant for injecting false diversity into inappropriate settings — such as depicting Asian, Black and Native American men in colonial garb when asked to draw “America’s Founding Fathers.” The company sheepishly explained that it had aimed to counteract AI’s tendency to stereotype by instructing the model to generate a wide range of people.
Musk has billed Grok as the antidote to such clumsy interventions: an AI that eschews political correctness in favor of actual correctness. So far, it has struggled on both counts.
Within a month of Grok’s launch, Musk was fielding complaints from his conservative friends that the chatbot was too “woke,” or socially liberal — a perceived failing that Musk chalked up to its initial training data. “Grok will get better,” he assured them.
 
 
Still, tests by The Washington Post earlier this year found that the chatbot was routinely contradicting some of Musk’s dearest views. It declined to blame Democratic election victories on electoral fraud, for instance, or air traffic control problems on diversity programs. The chatbot’s willingness to debunk such conservative talking points had begun to endear it to some liberals, who gleefully deployed it in replies to Musk’s X posts.
Grok has had less trouble delivering on Musk’s promise to make it spicier and less inhibited than other leading chatbots. Some users appreciate its willingness to curse, mock and wade into sensitive topics that make ChatGPT balk. It has also proved handy for misogynists, who have responded to women’s posts on X by asking Grok to reply by generating a picture of them undressed, and extremists, who have found it willing to produce Nazi propaganda. (xAI appears to have clamped down on some of those uses after they were publicly reported.)
But the biggest threats to Grok’s reputation may have come in recent weeks.
 
 
On May 14, the chatbot began responding to all kinds of unrelated queries by holding forth on the topic of “white genocide” in South Africa, to users’ bafflement. It’s a theory that holds that the country’s formerly ascendant White minority is being targeted for elimination by its Black majority — a claim the South African-born Musk has helped to popularize via his influential X account. The theory has been rejected as false by courts, government ministers and fact-checkers. Grok’s sudden obsession with it coincided with a push by the Trump administration to justify its controversial move to welcome White South African refugees at a time when the United States is turning away refugees of color from countries around the world.
xAI responded to the ensuing furor by deleting Grok’s tweets and blaming the issue on an “unauthorized modification” to the bot’s code that someone made at 3:15 a.m. The company didn’t specify the culprit or announce any disciplinary response.
xAI did not respond to a request for comment.
 
 
It wasn’t the first time the company blamed unnamed rogue personnel for changes to Grok’s code that happened to align with its owner’s politics. In February, an X user uncovered a line in Grok’s instructions directing it not to draw answers from any source that linked Musk or President Donald Trump with “misinformation.” In that case, xAI’s engineering chief chalked it up to a change made without permission by an employee who was no longer at the company.
Aiming to restore users’ trust, the company last week published Grok’s “system prompts” — the hidden instructions that set the ground rules for a chatbot’s responses to users — and instituted new checks on changes to its code. The thinking: Putting the system prompts out in the open would reassure people that no one is manipulating them behind the scenes.
Seeing Grok’s prompts laid bare suggested its “truth-seeking” may be little more than a political filter applied to an otherwise standard-issue language model. Among the key instructions: “Provide truthful and based insights, challenging mainstream narratives if necessary, but remain objective.” (“Based,” as Grok helpfully defines it, is “a term of praise for bold, unfiltered, or contrarian views, often leaning right-wing or antiestablishment.”)
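For readers unfamiliar with the mechanism: a system prompt is simply text silently placed ahead of every user message, so the model treats it as standing instructions. The sketch below shows the general pattern using an OpenAI-style chat API; the base URL, model name, and key are assumptions for illustration only, while the quoted system instruction is the one xAI published, per this article:

```python
from openai import OpenAI

# Illustrative sketch only: xAI's API is reported to be OpenAI-compatible,
# but the base_url and model name here are assumptions, not verified values.
client = OpenAI(api_key="YOUR_XAI_KEY", base_url="https://api.x.ai/v1")

response = client.chat.completions.create(
    model="grok-3",  # assumed model identifier
    messages=[
        # The system prompt sets ground rules before the user ever types.
        # This content quotes the instruction xAI published, per the article.
        {
            "role": "system",
            "content": (
                "Provide truthful and based insights, challenging mainstream "
                "narratives if necessary, but remain objective."
            ),
        },
        {"role": "user", "content": "What happened in the news today?"},
    ],
)
print(response.choices[0].message.content)
```

Because every conversation flows through that one hidden message, editing a single line there can shift the bot’s behavior for all users at once, which is why publishing the prompts matters for trust.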
 
 
It soon became clear those views weren’t limited to the racial politics of South Africa. After Grok stopped talking about “white genocide,” users circulated examples of it questioning whether the Holocaust was exaggerated — a tired antisemitic trope.
Politics aside, Grok’s vulnerability to parroting discredited claims casts further doubt on Musk’s aspirations for it to be a reliable source of information in high-stakes realms such as medicine. In January, Musk reposted an X user’s story about Grok correctly diagnosing an injury that human doctors had overlooked — only for users of X’s “Community Notes” fact-checking program to point out that Grok appears to have made a significant mistake in its analysis.
It’s conceivable that someday AI models really will develop minds of their own. But for now, Grok’s antics make clear that the ideal of a “truth-seeking chatbot” remains unfulfilled.


© Sóng AI - Summaries of AI news and articles