The AI Civil War Has Erupted: A World Split Between Tech Faith and Tech Skepticism

  • Roy Lee, a Columbia dropout, used AI to cheat his way through school and through job interviews at Amazon and Meta, receiving offers from both. He then founded the start-up Cluely, whose AI assistant runs in the background of meetings and sales calls so users can "cheat on everything," and recently raised $15 million from Andreessen Horowitz.

  • Lee insists that using AI is not cheating but inevitable: AI will replace most knowledge work, and even if ChatGPT never improved again, it would already be enough to eliminate 20–30 percent of white-collar jobs in the U.S.

  • Lee's conviction reflects a belief widespread among Bay Area technologists, many of whom see AI as the single most important thing happening on the planet.

  • Sam Altman (CEO of OpenAI) likewise describes AI in near-religious terms: the singularity has already begun, and AI will generate so much wealth that humanity can seriously entertain policy ideas it never dared consider before.

  • In contrast to the wave of AI worship, critics such as Emily Bender call chatbots "a racist pile of linear algebra" and "stochastic parrots," while Gary Marcus likens chatbots to calculators mistaken for intelligent beings.

  • A study by Apple underscored AI's weaknesses: "large reasoning models" failed at scaled-up logic puzzles even though solving them required only repeating a simple procedure, reaffirming critics' concerns about generative AI's limited reasoning ability.

  • Many AI boosters mocked the study, however, arguing that humans also fail at complex tasks, which, if anything, makes AI all the more humanlike.

  • The conflict between the two camps has grown increasingly public and personal: Gary Marcus is pilloried on social media, while Altman fires back sarcastically, saying Marcus "keeps ordering us off his lawn."

  • Meanwhile, Kevin Roose (a New York Times journalist) argues that denying AI's capabilities is how people lull themselves to sleep in the face of an unavoidable future.

  • In practice, AI is already everywhere: ChatGPT is the fifth-most-visited website in the world, OpenAI's AI image generator reached more than 130 million users in its first week, and AI surfaces constantly in products such as Google, Facebook, X, and the iPhone.

  • Tech companies are pouring hundreds of billions of dollars into AI despite no clear path to profitability. Meta has reportedly offered nine-figure pay packages to recruit top AI researchers.

  • Against this backdrop, founders like Lee admit that the "provocation" and "performance" in their statements are a way to attract venture capital and public attention.

  • The fight between absolute believers and extreme skeptics in AI has become a culture war in which evidence can be interpreted in any direction to reinforce preexisting beliefs.

  • Experts such as Subbarao Kambhampati warn that today's AI is merely a sophisticated statistical system lacking a genuine foundation for reasoning, yet tech companies continue to claim they are approaching AGI (artificial general intelligence).

  • Research shows that 70 percent of AI Ph.D.s work in industry, which owns most of the strongest AI models, pointing to private-sector dominance and a lack of independent oversight.

  • As a result, society is splitting in two: one group lives in an AI-saturated future while the other is left behind in the dark. As Lee puts it, if only half of America embraces AI, the gap between the two halves would amount to "a dystopian society."

  • But technological inequity is nothing new: automation is blamed for at least half of the growth in wage inequality over the past 40 years; billions of people still lack internet access; and platforms such as Uber and Amazon have wiped out entire classes of businesses without creating fair replacements.

📌 A bitter internal war over AI is raging between technologists who treat it as a religion and critics who call it a dangerous illusion. Even as AI delivers real progress, it also brings unemployment, inequity, and the erosion of social trust. With investment exceeding hundreds of billions of dollars while models still fail at basic reasoning, the debate is no longer just about technology but about the future of the world.

https://www.theatlantic.com/technology/archive/2025/07/ai-radicalization-civil-war/683460/

The AI Civil War Is Here

The tech industry and its critics occupy parallel universes.
The story unfolds so rapidly that it can all seem, at a glance, preordained. After transferring to Columbia last fall, as Chungin “Roy” Lee tells it, he used AI to cheat his way through school, used AI to cheat his way through internship interviews at Amazon and Meta—he received offers from both—and in the winter broadcast his tool on social media. He was placed on probation, suspended, and, more keen on AI than education, dropped out this spring to found a start-up. That start-up, Cluely, markets the ability to “cheat on everything” using an AI assistant that runs in the background during meetings or sales calls. Last month, it finished a $15 million fundraising round led by Andreessen Horowitz, the storied venture-capital firm. (Columbia, Meta, and Amazon declined to comment on the record about Lee’s case.)
 
Lee unapologetically believes that the arrival of omniscient AI is inevitable, that bots will soon automate every job. The language about “cheating” is really just a provocative way to get everyone on board with the idea, Lee told me when we spoke recently. “We have no choice but to keep spreading the word: Do not think it’s cheating,” he said. (“Every time technology makes us smarter, the world panics. Then it adapts. Then it forgets. And suddenly, it’s normal,” Cluely states on its website.) Lee said that it may seem unfair to some people if others can use AI to “be 1,000 times better or more efficient,” but soon this will simply be how the world operates. Even if ChatGPT didn’t get an iota more capable than it is today, already “every single white-collar job in America should essentially be gone already,” Lee said (or “conservatively,” 20 to 30 percent of them). And “I would bet my entire life on AI getting exponentially better.”
 
As we spoke over Zoom, Lee munching on the occasional corn chip while opining on superintelligence, his pitch began to sound familiar. He seemed an awful lot like OpenAI CEO Sam Altman. Both founders treat selling a product like evangelizing a faith. In a recent essay, Altman wrote that the singularity—the period after which technology eclipses human control and comprehension—has already begun. “The rate of technological progress will keep accelerating, and it will continue to be the case that people are capable of adapting to almost anything,” Altman wrote. “There will be very hard parts like whole classes of jobs going away, but on the other hand the world will be getting so much richer so quickly that we’ll be able to seriously entertain new policy ideas we never could before.”
 
AI zealots are everywhere in the Bay Area. I’ve met dozens of them: people who believe that AI’s rapid ascension is inevitable and by far the most important thing happening on this planet. (Some told me it’s the only thing worth caring about at all.) Their vision is in some way optimistic—the idea, however naive, is that superintelligence will eventually make life better for everyone—which allows them to easily dismiss the immediate downsides (such as job loss and resource guzzling). AI start-ups promise “full automation of the economy,” “unbounded connection” with millions of AI personas, “limitless” memory, a solution to “all disease.” In recent weeks, several AI researchers and founders have told me they’re rethinking the value of school: One entrepreneur told me that today’s bots may already be more scholastically capable than his teenage son will ever be, leading him to doubt the value of a traditional education.
Yet AI’s radicalizing effects go beyond the technology’s proponents. To match Silicon Valley’s escalating rhetoric, AI skeptics have ramped up their own, like atheists heckling from the pews at Mass. They dismiss AI as overhyped and practically useless, and pronounce the technology’s certain collapse. One of the industry’s chief opponents, the computational linguist Emily Bender, recently co-authored a book titled The AI Con and encourages referring to chatbots as “a racist pile of linear algebra”—a reference to well-documented algorithmic biases against people of color—or “stochastic parrots.” Gary Marcus, another prominent critic of the AI industry and a cognitive scientist at NYU, recently summed up one of his major points to me. Are chatbots intelligent? “I mean, you could say your calculator thinks, depending on how you define the word thinking,” he said.
 
The two camps are more and more frequently coming into direct conflict. A few days before we spoke, Marcus had triggered his latest online spat with the AI industry after posting an edited image showing Altman’s face plastered over a photograph of the infamous Elizabeth Holmes. “True performance art,” Altman quipped in response. Ed Zitron, a prominent AI critic, recently wrote a nearly 7,000-word essay insisting that he is “sick and tired of everybody pretending that generative AI is the next big thing,” which the political analyst Nate Silver described as “old man yells at cloud vibes” and “detached from reality.”
 
This war has transcended reality, and perhaps evidence, to become a contest between cosmologies. There are now two parallel AI universes, and most of us are left to occupy the gap in between them.
There have been disagreements between boosters and skeptics for as long as AI has existed. But in recent months, the argument has intensified as the industry aggressively expands across digital space. Billions of people are now likely to encounter generative AI each day through Google, Facebook, Instagram, X, their iPhones, Amazon review summaries, various voice assistants, and more—not necessarily because they want to, but because there’s simply no avoiding it. Many people are deliberately seeking out the tools as well. ChatGPT is now the fifth-most-visited website in the world, and OpenAI’s new image generator was reportedly used by more than 130 million people in its first week, putting a massive strain on the company’s servers. (Whoever commands the White House X account was one of those people, sharing an AI-generated meme of a weeping immigrant being detained by ICE.)
 
As the technology and its outputs become ubiquitous, AI executives have grown strident, even brazen, about the technology’s stakes. Two weeks ago, Jack Clark, a co-founder of Anthropic, warned Congress that there are perhaps 18 months until the arrival of “truly transformative technology”—AI systems that far exceed any existing chatbot or brain. The day after Donald Trump’s second inauguration, Alexandr Wang, the recently hired chief AI officer at Meta, wrote to the president that the United States and China are in an “AI war.”
 
The extreme rhetoric is accompanied by extreme spending. The tech industry has collectively burned through hundreds of billions of dollars since the arrival of ChatGPT to train more powerful AI systems and build the physical infrastructure they require, and it shows no signs of stopping. In recent weeks, Meta CEO Mark Zuckerberg, apparently desperate to catch up in the AI race, has been on a recruiting spree in which he has reportedly offered nine-figure packages to top researchers. (Meta says that the numbers have been exaggerated or misrepresented.) Exactly how generative AI will make a profit is not at all clear, but tech companies seem to have faith that the money will flow once the technology has completely rewired the world. As for the skeptics: “When the AI bubble bursts, I don’t think the tech industry is ready for how many people are going to take genuine pleasure in it,” Zitron wrote last week.
 
There may be no better illustration of the rift than the response to a recent paper, published by a team at Apple, titled “The Illusion of Thinking.” The researchers gave advanced AI programs, known as “large reasoning models,” from OpenAI, Anthropic, and DeepSeek various tasks to accomplish: rearranging checkers according to a pattern, for instance, or restacking blocks in the smallest number of moves possible. The puzzles were all solvable by following the same underlying logic, no matter their length—nothing changes about the process for rearranging the blocks, even if many more blocks need to be moved. But these “reasoning” AI models failed completely once the puzzles got large enough. “That’s sort of like a little kid saying, I’m actually a great mathematician, but I can’t add these numbers that you’re asking me to add because I don’t have enough toes and fingers,” Subbarao Kambhampati, a computer scientist at Arizona State University who was not involved with the study, told me.
 
Kambhampati has been at the forefront of exploring “reasoning” models’ abilities and limitations, and to him and like-minded researchers, including Marcus, the Apple paper reaffirmed long-held doubts. “Things I’ve been warning about as an Achilles’ heel for the field for 30 years are real,” Marcus told me. “I won’t deny that there’s some vindication in that.” In this view, generative-AI models are not “thinking” entities but statistical approximators, stellar at reapplying patterns in their training data but not much else. The original ChatGPT struggled to count, and today’s ChatGPT fails at some basic puzzles.
 
Yet many AI boosters descended on the Apple paper with gleeful scorn. In one meme shared to a large AI discussion group on Facebook, giant robots incinerate a city while a group of humans huddle nearby and say, “But they’re not actually ‘reasoning.’” Who cares if AI “thinks” like a person if it’s better than you at your job? If anything, some of the paper’s detractors argued, the findings simply demonstrated how humanlike AI models are through their shortcomings. (Who among us doesn’t fail to solve a long, complex problem on occasion?)
 
Marcus’s gloating about the paper on X turned him into a target for those who find AI’s abilities undeniable, including Altman, who wrote, “We deliver, he keeps ordering us off his lawn.” Kevin Roose, a tech journalist at The New York Times, took his own shot at Marcus, responding to Altman’s post: “A man predicts 85 of the last 0 AI crashes and this is how you treat him?”
 
Roose’s comment struck me as particularly illuminating; he doesn’t quite adore the technology like Altman, but he does regard it as powerful and present. His recent work for the Times has been focused on issues such as what to do if AI systems become conscious and whether AI will pose an existential risk to humans in a few years. He is writing a book about the “race to build artificial general intelligence,” a version of the technology that matches or exceeds the capabilities of humankind. More recently, he has likened some AI skeptics to “an antinuclear movement that didn’t admit fission was real.” When I reached out to ask Roose about this seemingly hard-line stance, he told me, “Increasingly, I feel like the people who are denying the capabilities of these models are just telling feel-good bedtime stories to people who don’t want to believe that change is coming.”
The conflict between AI believers and atheists may be destined to carry on for some time. Generative AI is labyrinthine, and the terms used to describe it are fuzzy—is it “intelligent” or “conscious,” or both or neither, and does it matter? The firms behind the technology are also unwilling to provide any kind of straightforward definitions or fixed goalposts for “generally” or “super” intelligent capabilities. “We don’t know how to even ask the questions about the best way to understand these things,” Kambhampati said. Without questions, let alone answers, faith fills the void. Anything can be spun to support either side of the debate.
 
Independent and industry research—by Kambhampati, Bender, researchers at Apple, and countless others—has continuously shown chatbots failing at various tasks: basic arithmetic, logic, conceptual reasoning, you name it. Yet tech companies also regularly produce chatbots that are better, sometimes drastically so, at those same tasks. Is there a deep, systemic flaw to generative AI, or is the technology hurtling down a path toward unlimited advancement? You could make an argument either way, based on the same exact evidence, and people do so constantly.
The problem with the radicalization of AI is that it pushes people to look beyond the material conditions of the world as it exists. In reality, AI models are speeding up scientific discovery and software engineering while also fabricating information and pushing people into mental breakdowns. Ignoring the chatbot era or insisting that the technology is useless distracts from more nuanced discussions about its effects on employment, the environment, education, personal relationships, and more. Perhaps worse, accepting that superintelligence is around the corner permits trivializing just about any concern with the technology in its present form.
 
Beneath many, many layers of digital vitriol, there may even be room for agreement between the two camps. For all his bombast online, for instance, Marcus has said that today’s chatbots are a legitimate breakthrough, just far from the breakthrough; for all of Altman’s petulance, OpenAI’s latest large reasoning models rely on new approaches not so dissimilar from Marcus’s own, decades-old ideas. AI can be both very powerful and very bad, Kevin Roose told me. “What I am not saying is: We should take the industry at its word,” he said. If OpenAI is truly “confident we know how to build AGI,” as Altman wrote this year, he must prove it.
 
After all, today’s incarnation of generative AI was not inevitable. When the field of “artificial intelligence” emerged in the 1950s, there were two main schools of thought: “Connectionists” thought digital “neural networks” gradually learning from data would be sufficient to produce intelligence. “Symbolists” thought intelligence would come only from hard-coded rules, logic, and knowledge. Neural networks won out: They are the foundation of today’s chatbots, and what much of the modern tech industry is built on.
 
Companies such as Meta and Google spent the 2010s constructing ever bigger neural networks and data centers to power digital advertisements, social media, search engines, shopping algorithms, and so on. As consumers were funneled into these products, the tech firms accumulated huge amounts of data, which they were then able to exploit for tremendous profits. Now those datasets are a treasure trove for training chatbots.
 
In 2023, researchers at MIT found that 70 percent of people with Ph.D.s in AI go into industry and that almost all of the largest, and thus most powerful, AI models are corporate. With hundreds of billions of dollars already invested into generative-AI products and profitability seemingly still years away, these firms cannot afford to show any signs of weakness. They have radicalized at least in part because they need their vision to come true. Even Lee, near the end of our conversation about Cluely, admitted to some cynicism: “Sure, it is a ploy to gain the attention of venture capitalists, but that’s only downstream of getting the attention of hundreds of millions of regular people.” He reminded me, once again, of Altman, whose ability to tell and capitalize on a story has transformed OpenAI from a research lab to a factory for new AI products.
 
As we spoke about radicalization, Lee made another point that interested me. Imagine, he said, if “half of America had moralized against the internet and technology, and half of America had openly embraced it.” Half of the nation would “be living as if electricity was never invented,” the other half floodlit with prosperity. “There would be such a massive gap in outcomes,” Lee said. “This is living in a dystopian society. This sort of unfairness is crazy.”
 
Of course, half the nation did not reject the internet, much less electricity. And a “crazy” unfairness will have existed long before the theoretical arrival of superintelligence, much of it driven by technology. Automation is responsible for at least half of the nation’s growing wage gap over the past 40 years, according to one economist. Tens of millions of Americans and billions of people around the world lack broadband internet access. Amazon, Uber, Airbnb, and other platforms have destroyed entire classes of businesses without offering clear, equally compensated replacements. The 10 richest tech billionaires in the world are collectively worth nearly $2 trillion, more than the GDP of all but 11 countries in the world. Singularity or not, Silicon Valley has already erected a parallel universe.


© Sóng AI – AI news and article summaries