White House Will "Approve" Leading AI Models Before Release
The Trump administration announced Tuesday that it has reached expanded agreements with Microsoft, Google DeepMind and Elon Musk’s xAI to collaborate on artificial intelligence testing, security research and pre-release evaluations of advanced AI systems. OpenAI, which recently signed a major contract with the White House, appears to be excluded.

The agreements will be led by the Center for AI Standards and Innovation, known as CAISI, which operates under the Commerce Department’s National Institute of Standards and Technology. Under the arrangements, the companies will work with the government on pre-deployment evaluations, research into frontier AI capabilities and security testing.
The agreements build on earlier partnerships between CAISI and major AI developers, including arrangements reached under the Biden administration with companies such as OpenAI and Anthropic. The new pacts are meant to expand information-sharing between government and industry, support voluntary product improvements and give federal officials a clearer view of advanced AI capabilities and of international competition in the sector.
CAISI Director Chris Fall said independent and rigorous testing is needed to understand frontier AI systems and their national security implications. He said the expanded partnerships would help the government scale its work “in the public interest at a critical moment.”
AI developers often provide CAISI with models that have reduced or removed safeguards so government evaluators can assess national security-related risks and capabilities. Evaluators from multiple agencies may participate in the reviews, with feedback coordinated through the TRAINS Taskforce, an interagency group focused on AI national security concerns.
The agreements also allow testing in classified environments and were written to give the government flexibility as AI capabilities continue to advance.
Microsoft said the partnership would help improve the science of AI testing and evaluation. Natasha Crampton, the company’s chief responsible AI officer, said the collaboration would include testing Microsoft’s frontier models, assessing safeguards and working to reduce national security and large-scale public safety risks. Microsoft also announced a similar agreement with the United Kingdom’s AI Security Institute.
Google DeepMind’s best-known AI product is Gemini, while Microsoft is widely associated with Copilot. xAI produces Grok, a chatbot that has faced public scrutiny over safety and misuse concerns.
CAISI said Tuesday that it has already conducted 40 evaluations of AI systems, including some state-of-the-art models that have not been released publicly. The center did not identify those models or say whether any release plans had been changed following testing.
The announcement comes as the Trump administration has generally emphasized reducing regulation of AI development through its broader AI Action Plan. But the expansion of government testing suggests a greater focus on national security risks as the US military and federal agencies increase their use of advanced AI tools.