Anthropic introduced a special version of Claude AI for defense and intelligence agencies
Anthropic has announced Claude Gov, a separate line of its AI models built specifically for defense and intelligence work. The company says the models are already in use by agencies at the highest levels of U.S. national security, though it has not disclosed when they were deployed or how widely.
Claude Gov is designed to meet the specific needs of government customers, from threat analysis to processing classified material. Unlike the standard Claude models, the government versions operate under relaxed restrictions: for example, they do not refuse to handle classified information that consumer versions are trained to avoid. Anthropic also says the models navigate national security contexts more capably and have a stronger grasp of languages and dialects important to intelligence work.
At the same time, the use of AI by government bodies raises concerns among human rights advocates, since abuses of the technology have been documented before, such as bias in facial recognition systems and unfair algorithmic decisions in the allocation of social benefits.
Anthropic acknowledges these risks and says it maintains a strict policy: its AI may not be used to create weapons or conduct malicious cyber operations. However, the company carves out contractual exceptions for select government entities, under which some restrictions can be relaxed.
Claude Gov is Anthropic's answer to OpenAI's ChatGPT Gov, which launched in January. It is also part of a broader push by tech companies to work more closely with the government, especially amid the uncertain regulatory landscape for AI in the U.S.
As a reminder, Anthropic also recently launched a blog written by AI, and the company has opened up web search to all Claude users.