
U.S. Government Shifts AI Tools: Claude Dropped, ChatGPT, Gemini and Copilot Approved

Trump Directive Leads to Claude’s Removal

The U.S. State Department announced that it is discontinuing use of Anthropic’s Claude model following a direct order from President Donald Trump to cancel all Anthropic contracts. An internal memo cited the president’s direction as the reason for the immediate switch, and a State Department spokesperson confirmed compliance with the directive.

As part of the transition, the department is moving from Claude Sonnet 4.5 to OpenAI’s GPT‑4.1 as the backbone of its internal chatbot. The change has caused some operational challenges: the chatbot’s data is available only from May 2024 onward.

Other Agencies Follow Suit

Both the Treasury Department and the Department of Health and Human Services (HHS) are also dropping Claude from official use. HHS has specifically urged its employees to switch to ChatGPT or Google’s Gemini for their AI needs.

Senate Approves Three Major AI Models

In a separate development, the U.S. Senate’s chief information officer for the Sergeant‑at‑Arms office issued a memo approving the use of three AI models: Google’s Gemini, OpenAI’s ChatGPT, and Microsoft’s Copilot. The memo outlines permissible uses, including drafting and editing documents, summarizing information, preparing talking points and briefing material, and conducting research and analysis.

Copilot is already integrated into some Senate platforms, with data protected by the Senate’s secure Microsoft 365 Government environment. Details on how Gemini and ChatGPT will be deployed have not been released, and questions remain about handling confidential information within the Senate’s AI ecosystem.

Policy Context and Legal Issues

Anthropic’s relationship with the federal government has been strained after the company refused to allow Claude to be used for mass domestic surveillance and autonomous weapons systems. The Pentagon faced two lawsuits from Anthropic after Defense Secretary Pete Hegseth labeled the AI firm a “supply chain risk” and terminated a $200 million contract.

The New York Times notes that a House policy adopted in September 2024 restricts entering sensitive information into AI chatbots, though specific guidance has not been publicly detailed.

Implications for Federal AI Use

The shift away from Claude and the approval of Gemini, ChatGPT, and Copilot illustrate a broader realignment of AI policy within the federal government. Agencies are moving toward tools that are perceived as lower risk or more aligned with current administration priorities, while also establishing clearer usage guidelines to protect sensitive data.


Source: TechRadar
