
Anthropic’s Surveillance Restrictions Spark Tension with White House

Ars Technica

Background

Anthropic’s Claude models are used in high‑security contexts and are cleared for top‑secret work via Amazon Web Services’ GovCloud. The company has a special arrangement that provides its services to the federal government for a nominal $1 fee.

Policy Restrictions

Anthropic’s usage policies prohibit the use of its AI models for domestic surveillance, a stance that has drawn criticism from senior White House officials. Contractors working with agencies such as the FBI and the Secret Service have encountered obstacles when attempting to use Claude for surveillance‑related tasks. Officials worry that the company enforces its policies selectively and relies on vague terminology that leaves room for broad interpretation.

Federal Agreements

In addition to Anthropic’s $1‑fee deal, the General Services Administration recently signed a blanket agreement allowing OpenAI, Google, and Anthropic to supply AI tools to federal workers. OpenAI announced a separate contract to provide more than 2 million federal executive‑branch employees with ChatGPT Enterprise access for $1 per agency for one year.

Industry Context

The friction highlights a tension between private AI providers’ ethical usage policies and government agencies’ demand for advanced AI capabilities in law‑enforcement and national‑security operations. While Anthropic also works with the Department of Defense, its policies continue to prohibit the use of its models for weapons development.


Source: Ars Technica
