Paul Triolo is a Partner at DGA-Albright Stonebridge Group and Global Tech Policy Lead of DGA Group.
The Trump administration appears poised to take a series of actions targeting DeepSeek, a fast-rising Chinese artificial intelligence startup whose advanced AI models have quickly gained traction among developers and tech enthusiasts worldwide. A recent congressional report called DeepSeek a "profound threat" to national security, citing concerns about potential data transfers to China, censorship applied to model outputs, and allegations that the firm used restricted Nvidia chips to train its models.
But the DeepSeek debate reflects something larger: the increasingly complicated task of controlling advanced AI. For years, Washington's export controls have focused on restricting China's access to training-grade chips, with the aim of slowing the development of frontier models. But once models are trained and publicly released, managing their global diffusion presents a different and far more difficult challenge.
Recently, the U.S. moved to revise the AI diffusion rule following pushback from allies and industry. Behind the headlines, the logic for a sweeping ban looks more like a blunt instrument aimed at a much deeper fear: that Chinese firms are catching up in frontier AI and doing so faster than many expected.
The congressional report outlines how DeepSeek appears to route user data through infrastructure linked to China Mobile, a Chinese telecom firm designated by the U.S. as a military-affiliated entity. Its responses to smartphone-based queries have been shown to suppress or avoid politically sensitive topics in line with Beijing's censorship guidelines. And there are credible allegations that it engaged in model distillation -- replicating the capabilities of leading U.S. models like ChatGPT by using their outputs as training data -- in violation of those models' terms of service. At the same time, several industry sources have speculated that DeepSeek had access to large numbers of restricted AI chips, but no credible evidence of this has emerged.
Many of these concerns start to blur into general anxieties about the Chinese tech ecosystem, rather than specific aspects of DeepSeek itself. Take the model distillation charge: It lives in a regulatory gray zone. Many developers have used the outputs of other models to train their own, often without explicit authorization. The practice may skirt terms of service, but it is common across the industry, and DeepSeek is far from the only actor. While it is true that DeepSeek aligns with Beijing's red lines on politically sensitive content, so do virtually all AI products from Chinese firms, due to domestic laws requiring companies to enforce "core socialist values." This is a structural feature of the entire ecosystem, not a DeepSeek-specific anomaly.
The infrastructure links to China Mobile are less sensational than they sound. All major telecom providers in China are state-owned. A connection to China Mobile is the norm for Chinese apps and services. And while data transfer back to Chinese servers is concerning, particularly under China's expansive intelligence laws, many apps have operated with similarly murky data practices for years. What makes DeepSeek different isn't the nature of the risk, but the timing and velocity of its success.
The real issue seems to be that DeepSeek represents, for the first time, a Chinese AI developer giving top-tier U.S. models a run for their money -- and doing it via open-source distribution that is hard to regulate or cordon off.
For the U.S., competing with China in AI is essential. What's needed is a smarter and more sustainable strategy than a reflexive cycle of bans and restrictions. While export controls have slowed China's access to training-grade chips, DeepSeek's very existence -- and its rapid development -- suggests that these measures have not prevented the development of frontier AI models in China. Alibaba, Tencent Holdings and ByteDance also have very capable models. Despite nearly three years of controls, these models still emerged and are now freely accessible, often hosted on U.S.-based infrastructure with no back-end ties to China.
While DeepSeek's mobile app could feasibly be banned from U.S. government devices or even delisted from Apple and Google's app stores -- as was attempted with TikTok -- this would not blunt the appeal of DeepSeek's models for developers. The real action is happening in the open-source world, where DeepSeek's R1 and V3 models, stripped of their app layer, are freely available on platforms like GitHub and Hugging Face. Developers in the U.S. and elsewhere can download and run the models locally or on U.S.-based cloud servers. No data needs to be sent back to China. No back-end infrastructure is required. Just code.
Trying to prevent U.S. citizens from using or hosting open-source models would be legally fraught and technically close to an impossible task, and would put Washington in direct conflict with the large and influential open-source AI community. So far, that community has largely been left alone in export control discussions. A clampdown could break that detente, setting a troubling precedent that would extend far beyond DeepSeek.
The national security rationale also remains fuzzy. With TikTok, the concern was the potential for massive data collection from over 100 million U.S. users. With DeepSeek, the scale is far smaller, and much of the usage happens in isolated developer environments or enterprise sandboxes. The argument that DeepSeek constitutes an immediate national security threat thus feels overstated -- more about theoretical future risk than present danger. The nature of open-source software, and the fact that this is a generative AI model rather than a social media app, makes the data-related issues qualitatively different.
Optically, a sweeping ban could also backfire. If the U.S. pushes too far -- pressuring cloud providers to delist open-source models or blocking GitHub-hosted AI tools -- it risks undermining its own credibility as a defender of internet openness and innovation. It hands Beijing a ready-made talking point: that the U.S. lacks the confidence to compete on a level playing field and is resorting to bans rather than breakthroughs.
The Huawei Technologies connection adds another layer of complexity. DeepSeek is reportedly working with Huawei to optimize model performance on Huawei chips. But that same kind of collaboration exists across China's tech ecosystem. Alibaba, Tencent and Baidu all engage in similar optimization and hardware-software pairings. Drawing the line at DeepSeek, while ignoring bigger and better-capitalized players doing the same thing, feels less like a principled stand and more like a targeted strike against whichever Chinese AI company happens to gain global traction first. And, of course, the reason DeepSeek has turned to Huawei, and not, say, Nvidia, is the export controls in the first place.
The U.S. strategy over the past two years has centered on restricting China's access to high-end AI chips. That was supposed to slow the training of large-scale AI models. But now that models like DeepSeek's R1 and V3, and soon R2 and V4, exist and are openly distributed, the emphasis is shifting to limiting inference chips -- hardware used to run models after training. This expansion risks punishing U.S. companies more than Chinese ones. Nvidia, for instance, is already under pressure, losing sales and scrambling to design downgraded chips for the Chinese market. Meanwhile, the marginal effect of each new ban on Chinese firms diminishes. They find workarounds or simply scale using second-best options.
U.S. policy will need to evolve. A strategy that keeps tightening the screws on hardware exports while ignoring the rapid diffusion of open-weight models is not just incomplete; it is demonstrably obsolete.
There are ways to compete more effectively without resorting to blanket bans. Washington can do more to ensure visibility into how and where open-source models are hosted and used. It can promote the use of watermarking tools, provenance trackers and audit mechanisms to help identify the origins and safety of AI systems. It can work with allies -- and eventually even with China -- to set baseline standards for transparency in AI development, model distillation and training data sources. And it can build a clearer, more coherent legal framework around responsible AI use -- one that safeguards openness without leaving critical vulnerabilities exposed.
DeepSeek may indeed represent a warning shot in the AI race. But if the U.S. overreacts, it risks shooting itself in the foot.