US Lawmakers Push for Expanded AI Safety Measures Against China's Advances
US lawmakers urge the Department of Commerce to broaden the US AI Safety Institute's duties to address national security challenges from China's AI advancements. Highlighting DeepSeek's R1 model, they stress the need for a comprehensive government strategy to maintain US dominance in AI and prevent strategic surprises.
In a move to bolster national security, Chairman John Moolenaar and Ranking Member Raja Krishnamoorthi of the House Select Committee on the Strategic Competition between the United States and the Chinese Communist Party are advocating for new measures to address the challenges posed by China's advancements in artificial intelligence. According to a release from the Select Committee on the Chinese Communist Party, the lawmakers have reached out to the Department of Commerce, encouraging an expansion of the US AI Safety Institute's (AISI) responsibilities.
The January 2025 launch of DeepSeek's large language model, R1, has been cited as an urgent wake-up call. A Committee investigation into DeepSeek found significant national security risks, notably the covert transfer of Americans' private data to China and the potential for the model's outputs to be manipulated under PRC law. The investigation also raised concerns about AI technology theft through model distillation. In light of these findings, Moolenaar and Krishnamoorthi emphasized the growing need to anticipate and prepare for advancements in PRC AI in order to stave off strategic surprises.
The lawmakers proposed an expanded role for AISI, recommending a whole-of-government approach to sustain US leadership in frontier AI. They highlighted AISI's technical expertise, industry relationships, and testing experience as key US assets. Specific areas of focus include evaluating the capabilities and vulnerabilities of advanced PRC AI models and developing strategies to protect against AI technology theft. The letter concluded with an invitation to engage with the AI Safety Institute on its ongoing efforts and resource requirements.
(With inputs from agencies.)