JUSZnews

White House Plans to Remove Claude AI From Federal Networks
The White House is preparing an executive order that may force federal agencies to remove Anthropic’s Claude AI system from government networks, escalating a dispute over AI safeguards and national security use.

The White House is preparing an executive order that could require federal agencies to stop using the Claude artificial intelligence system, developed by Anthropic. According to a report by Axios, the order may instruct government departments to remove the AI tool from federal networks.

Officials familiar with the discussions said the order could be issued as early as this week. If approved, it would direct agencies to discontinue the use of Claude across federal operations.

Meanwhile, some government departments have already begun phasing out the technology as the administration weighs stronger measures. If issued, the order would significantly escalate the dispute between the administration of Donald Trump and the AI company.

Dispute Centres on AI Safeguards

The conflict mainly revolves around how artificial intelligence should be used in military and security operations. Anthropic has kept several safety restrictions on its AI systems. These safeguards prevent the technology from being used for mass domestic surveillance or fully autonomous weapons.

However, officials at the United States Department of Defense want broader access to the technology. They argue that AI tools should be available for “all lawful purposes,” including sensitive intelligence tasks and battlefield operations. As a result, tensions between the Pentagon and Anthropic have increased in recent months.

Pentagon Labels Anthropic a ‘Supply Chain Risk’

Earlier in February, the Department of Defense classified Anthropic as a “supply chain risk,” a label usually applied to companies linked to foreign adversaries. The designation can also force government contractors to stop using a company’s technology.

Consequently, the designation increased pressure on organisations that rely on Anthropic’s AI systems.

Anthropic Challenges Pentagon Decision

In response, Anthropic filed a lawsuit against the Pentagon in the US District Court for the Northern District of California, asking the court to overturn the designation.

In its complaint, the company said the government’s actions are damaging its business and reputation. Anthropic described the move as “unprecedented and unlawful.”

The company also argued that the decision violates constitutional protections and unfairly targets its technology.

Debate Highlights AI Safety vs National Security

The dispute has now become one of the most visible clashes between Washington and a major artificial intelligence developer. Anthropic says its safeguards are necessary to prevent the misuse of powerful AI systems.

However, government officials argue that such restrictions could weaken national security by limiting access to advanced AI tools. Therefore, the debate highlights a growing global challenge: balancing AI safety with defence and intelligence needs.