Sen. Rick Scott Leads Legislation to Stop U.S. Government Agencies from Using Adversarial AI Technology
June 25, 2025
WASHINGTON, D.C. – Today, Senator Rick Scott was joined by Senator Gary Peters and bipartisan members of the House Select Committee on the Chinese Communist Party (CCP) to announce the introduction of his bipartisan No Adversarial AI Act to prohibit federal agencies from using artificial intelligence technologies controlled by foreign adversaries, including Communist China. This comes as companies like DeepSeek reportedly have ties to the Chinese Communist Party and store U.S. user data in China, putting U.S. national security and critical information at risk.
Congressmen John Moolenaar, Raja Krishnamoorthi, Darin LaHood, and Ritchie Torres are leading the bipartisan legislation in the House of Representatives.
Senator Rick Scott said, “The Communist Chinese regime will use any means necessary to spy, steal, and undermine the United States, and as AI technology advances, we must do more to protect our national security and stop adversarial regimes from using technology against us. With clear evidence that China can have access to U.S. user data on AI systems, it’s absolutely insane for our own federal agencies to be using these dangerous platforms and subject our government to Beijing’s control. Our No Adversarial AI Act will stop this direct threat to our national security and keep the American government’s sensitive data out of enemy hands.”
Senator Gary Peters said, “Artificial intelligence holds immense promise for our economy and society—but it also presents real security risks when leveraged by foreign adversaries. This legislation helps safeguard U.S. government systems from AI developed by foreign adversaries that could compromise our national security or put Americans’ personal data at risk. It’s a smart, focused step to ensure our government technology infrastructure keeps pace with the evolving threats we face while still allowing room for scientific research, evaluation, and innovation. I’m proud to support this effort to protect Michiganders’ personally identifiable information from bad actors who could exploit their data housed on government systems.”
Congressman John Moolenaar said, “We are in a new Cold War—and AI is the strategic technology at the center. The CCP is weaponizing its most advanced firms, like DeepSeek, to train AI for the battlefield and embed it into the PLA’s warfighting arsenal. The U.S. must draw a hard line: hostile AI systems have no business operating inside our government. This legislation creates a permanent firewall to keep adversary AI out of our most sensitive networks—where the cost of compromise is simply too high.”
Congressman Raja Krishnamoorthi said, “Artificial intelligence controlled by foreign adversaries poses a direct threat to our national security, our data, and our government operations. We cannot allow hostile regimes to embed their code in our most sensitive systems. This bipartisan legislation will create a clear firewall between foreign adversary AI and the U.S. government, protecting our institutions and the American people. Chinese, Russian, and other adversary AI systems simply do not belong on government devices, and certainly shouldn’t be entrusted with government data.”
The No Adversarial AI Act would:
- Create a federal list of adversarial AI:
  - Requiring the Federal Acquisition Security Council to identify AI developed by foreign adversary companies (e.g., those based in or controlled by China, Russia, Iran, and North Korea) and publish the list publicly.
- Prohibit federal use of listed AI:
  - Restricting executive agencies from using artificial intelligence developed by adversarial entities, including companies like DeepSeek with ties to the Chinese Communist Party.
- Allow limited exceptions with oversight:
  - Permitting exceptions for research, testing, or mission-critical functions, but only with written justification and notice to Congress and OMB.
- Mandate regular updates:
  - Requiring the adversarial AI list to be updated at least every 180 days to reflect emerging threats and new technologies.
- Empower agency enforcement:
  - Directing agencies to use existing authorities to exclude and remove covered AI products from federal systems.
###