OpenAI is working to contain fallout after striking a fast-moving agreement with the U.S. Department of Defense (DoD), stepping in shortly after negotiations between the Pentagon and rival Anthropic collapsed over safety guardrails. The deal immediately ignited concerns about surveillance, military use of artificial intelligence, and the balance of power between tech companies and government.

User Backlash and Market Signals. The response was swift. Data from market intelligence firm Sensor Tower showed ChatGPT uninstalls surged between 200% and 295% above normal daily rates following the announcement. At the same time, Anthropic’s Claude app climbed to No. 1 on Apple’s App Store, signaling a wave of user migration. Online critics called for OpenAI to release the full contract, arguing that transparency was necessary to rebuild trust.

Altman Acknowledges Misstep. OpenAI CEO Sam Altman said the company moved too quickly in announcing the deal. “We shouldn’t have rushed to get this out on Friday,” Altman wrote on X. “The issues are super complex and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.” In an internal meeting, he described the episode as “really painful” but maintained it was “complex but the right decision” despite the short-term reputational damage.

Contract Revised With Guardrails. In response to the backlash, OpenAI amended the agreement to explicitly prohibit its systems from being “intentionally used for domestic surveillance of U.S. persons or nationals.” The updated language states that the Department of Defense understands this limitation to bar deliberate tracking or monitoring of Americans, including through commercially acquired personal data. Altman also said intelligence agencies such as the National Security Agency would not be permitted to use OpenAI’s systems without a separate contractual modification, calling the protections “critical to protect the civil liberties of Americans.”

Broader Political and Industry Fallout. The controversy has widened into a broader debate about AI’s use in warfare. Nearly 900 employees from OpenAI and Google signed an open letter urging their companies to resist government demands that could enable mass surveillance or fully autonomous lethal systems. Meanwhile, Anthropic was designated a supply chain risk by the Trump administration after refusing to drop internal “red line” principles against mass surveillance and autonomous weapons, a move the company plans to challenge in court.

As AI firms deepen ties with national security agencies, the episode underscores the growing tension between innovation, government partnerships, and public trust. How firms navigate those tensions may shape not only future defense contracts, but also the public legitimacy of advanced AI systems.