Anthropic: The ethical AI company quietly making billions from enterprise customers

 

  • Anthropic, founded in 2021 by Dario Amodei and fellow founders who left OpenAI over safety concerns, is becoming a formidable contender in enterprise AI.

  • Despite putting ethics and safety first, Anthropic's annualised recurring revenue (ARR) grew roughly tenfold last year and now exceeds $4bn, with potentially another 10x of growth in 2025.

  • 80% of Anthropic's revenue comes from its B2B business, with Claude 4 prized by coding startups and software firms for its reliability and ability to handle complex tasks.

  • Claude Code, a programming tool originally built for internal use with no commercial ambitions, has become a fast-growing product on the back of demand for AI-assisted coding.

  • Anthropic focuses on building AI for work and steers clear of potentially addictive entertainment products, which builds trust with businesses that prize reliability and ethics.

  • Claude 4 can operate autonomously for long stretches and interact with other software, letting companies save labour costs on high-value tasks.

  • The huge cost of training AI models, however, forces Anthropic to raise capital continually. It is in talks to expand investment from Amazon and Middle Eastern funds, which could lift its valuation to $100bn (from $61.5bn in March 2025).

  • A leaked Slack message revealed CEO Amodei's inner conflict over accepting investment from Gulf states, even though he had opposed building data centres there on security grounds.

  • Amodei maintains that the ethical mission not only builds trust but also exerts healthy competitive pressure on the industry. He warns that a US decision to loosen exports of AI chips to China would be a grave geopolitical mistake.

  • According to investor Ravi Mhatre, when AI one day suffers a major failure, Anthropic will stand out for having focused on safety from the start.

📌 Despite placing ethics and safety above profit, Anthropic has pushed revenue past $4bn with a strategy centred on AI for business. Claude 4 has become a trusted choice in B2B settings, and the company is preparing to raise capital at a $100bn valuation. The combination of a humane mission and commercial effectiveness is turning Anthropic into the "dark horse" of the AI industry.

https://www.economist.com/business/2025/07/23/the-dark-horse-of-ai-labs

The dark horse of AI labs

How Anthropic’s missionary zeal is fuelling its commercial success

 
Perhaps it is inevitable that Anthropic, an artificial-intelligence (AI) lab founded by do-gooders, attracts snark in Silicon Valley. The company, which puts its safety mission above making money, has an in-house philosopher and a chatbot with the Gallic-sounding name of Claude. Even so, the profile of some of those who have recently attacked Anthropic is striking.
One is Jensen Huang, boss of Nvidia, the most valuable company on Earth. After Dario Amodei, Anthropic’s chief executive, raised the spectre of big job losses as a result of advances in AI, Mr Huang bluntly retorted: “I pretty much disagree with almost everything he says.” Another is David Sacks, a venture capitalist (VC) who is one of President Donald Trump’s closest tech advisers. In a recent podcast, he and his co-hosts accused Anthropic of being part of a “doomer industrial complex”.
Mr Amodei gives short shrift to such criticisms. In an interview on the eve of the release of Mr Trump’s AI Action Plan, he laments that the political winds have shifted against safety. Yet even as he cuts a lonely figure in Washington, Anthropic is quietly becoming a powerhouse in business-to-business (B2B) AI. Mr Amodei can barely suppress his excitement. After his firm’s annualised recurring revenue grew roughly tenfold over the course of last year, to $1bn, it is now “substantially beyond” $4bn, putting Anthropic possibly “on pace for another 10x” growth in 2025. He doesn’t want to be held to that prediction, but he is over the moon: “I don’t think there’s a precedent for this in the history of capitalism.”
Schadenfreude helps, too. Mr Amodei and his co-founders, including his sister Daniela, set up Anthropic after abandoning OpenAI in 2021 because of safety concerns. Their rival went on to make history by launching ChatGPT. OpenAI’s revenue, which hit a $10bn annualised run rate in June, far eclipses Anthropic’s. So does its latest valuation, of about $300bn, almost five times that of Mr Amodei’s lab. Yet even as ChatGPT’s popularity continues to soar, Anthropic has muscled in on OpenAI’s enterprise business. B2B accounts for 80% of Anthropic’s revenue, and its data suggest it is now in the lead when it comes to providing companies access to models via plug-ins known as APIs. Its latest model, Claude 4, is a hit among fast-growing coding startups, such as Cursor, as well as software developers in more established firms. Coders, Anthropic believes, are early adopters of technology, and it hopes they will open doors to the rest of their companies.
Among some of Anthropic’s founders, there is a pinch-me quality to this commercial success. Many are science nerds, not wannabe plutocrats. Their expertise is in scaling laws—the more computational power you throw at a model the better it gets—and safety, not sales. When they gather for dinner they discuss how “weird” the growth is. Anthropic continues safety-testing products when competitors are about to ship theirs. Claude Code, a fast-growing programming bot built for internal use, was only commercialised as an afterthought.
Yet while safety is central to Anthropic’s mission, it turns out it sells well, too. Early on Anthropic decided that its ethical concerns precluded it from building entertainment or leisure products, which were potentially addictive. Instead it focused on work, where most people spend the majority of their time anyway. This, Mr Amodei says, has become “synergistic” with the safety mission. Like Anthropic, businesses want trustworthy and reliable AI. They respect its interest in interpreting models to understand why things go wrong. At the same time, Anthropic’s focus on scaling has kept it competitive. Companies need access to the best models. Claude 4, which operates autonomously for long periods and is able to use other computer programs, allows companies to outsource well-paid work.
The huge cost of training Anthropic’s models is the problem. Like its peers, it is burning through cash. That requires regular fundraising. Once again Anthropic appears to be preparing to go cap in hand to investors. Press reports speculate that Amazon is considering upping its stake, and that some VCs are willing to provide money at a $100bn valuation, up from $61.5bn in March.
Yet the dash for cash highlights glaring paradoxes, as Anthropic’s mercantile needs clash with its missionary zeal. On July 21st Wired, a tech publication, leaked an agonised Slack message from Mr Amodei in which he explained to co-workers why he had reluctantly decided to seek money from Gulf states. “‘No bad person should ever profit from our success’ is a pretty difficult principle to run a business on,” he wrote. He tells The Economist that he continues to have security concerns about American companies building data centres in the Gulf. But as for investment from the region, his scruples have eased. “Those are big sources of capital.”
Anthropic’s safety mission may, at times, prove awkward, but it breeds a race to the top, argues Mr Amodei, as other companies feel compelled to follow his firm’s example. His convictions appear to be deeply held. AI will become such a powerful technology, he argues, that it is vital not just to consider its promise—in terms of better health care, growth and productivity—but how to manage the societal costs, including job losses. He also believes that the power AI confers will be safer in the hands of a democracy like America’s than an autocracy like China’s. For Mr Trump to relax exports of AI chips to China, in response to lobbying by Nvidia’s Mr Huang, would be “an enormous geopolitical mistake”, he says.

The enshittification of AI

Advocating his cause is hard work. But Ravi Mhatre of Lightspeed Venture Partners, a big Anthropic backer, says that when models one day go off the rails, Anthropic’s safety focus will pay dividends. “We just haven’t had the ‘oh shit’ moment yet,” he says. ■
