Deep Dive Into the OpenAI–Department of War Deal: Ethics, Power, and the Pentagon’s AI Pivot

Inside OpenAI’s classified Pentagon deployment: safety claims, political pressure, and the AI arms race reshaping Washington.

SiliconSnark robot smirks in a Pentagon war room as shadowy figures finalize a glowing classified AI deal behind it.

OpenAI says it “reached an agreement” to deploy its models on the U.S. Department of War’s classified networks, with two headline “red lines” baked into the deal: no domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems.  That’s the public, quotable version—and yes, it arrived right after a week in which the Trump administration set the Pentagon on fire (metaphorically… mostly) by blacklisting Anthropic for refusing to relax its own guardrails on surveillance and fully autonomous weapons. 

Here’s the central irony: the Department of War’s own January 2026 AI strategy memo explicitly pushes for “any lawful use” contract language and tells the department to procure models “free from usage policy constraints that may limit lawful military applications.”  Meanwhile, OpenAI’s CEO says the agreement preserves specific constraints—constraints the Pentagon had just framed as unacceptable when demanded by Anthropic.  If you’re sensing vibes-based procurement, congratulations: you have passed the entrance exam for modern tech governance.

What we can verify from primary and near-primary sources is narrower than the discourse wants it to be:

  • The Trump White House authorized “Department of War” as a secondary title for the statutory Department of Defense via a September 5, 2025 executive order; statutory references to “Department of Defense” remain controlling until changed by law. 
  • The War Department launched GenAI.mil in December 2025 with Google Cloud’s Gemini for Government as the initial model, framed as an “AI-first” workforce platform at (at least) Impact Level 5 / Controlled Unclassified Information (CUI). 
  • The War Department then announced an agreement to add xAI’s Grok family to GenAI.mil and promised “real-time global insights” from X—a detail that aged like milk the moment anyone remembered what “real-time global insights from X” usually looks like. 
  • In February 2026, the Pentagon/DoW publicly escalated its dispute with Anthropic toward a deadline, demanding “any lawful use” without company-imposed guardrails, while simultaneously claiming it had “no interest” in mass surveillance or autonomous weapons without human involvement (and calling the contrary narrative “fake”). 
  • After the Anthropic rupture, OpenAI announced the classified-network agreement. Reuters and multiple outlets tie the timing directly to the Anthropic fight; OpenAI frames the War Department as safety-respecting; the government side has not (as of the sourced record below) published the full contract terms. 

Everything else—contract value, technical architecture, which specific models, where they run (cloud vs edge), what “technical safeguards” mean in practice, audit rights, data retention, and enforcement—ranges from partly specified (in press statements) to not specified (in public contract text) to reported secondhand (in journalism) to pure rumor (in social posts and comment threads). 

The deal’s governance significance isn’t just “AI goes to classified networks.” It’s that the DoW is trying to standardize a procurement posture—“any lawful use”—while the AI labs are trying to keep at least some say over high-risk use. The outcome may set a precedent for whether “responsible AI” in national security is enforced by law, by contract, by platform controls, or by whoever won the last screaming match on social media. 

What the public record actually says about the deal

The cleanest on-the-record description comes from OpenAI CEO Sam Altman’s social post announcing the agreement, which multiple outlets reproduce or quote: OpenAI will deploy its models in the DoW’s classified network; the department agreed with OpenAI on prohibitions on domestic mass surveillance and on human responsibility for the use of force, including autonomous weapon systems; and OpenAI will build “technical safeguards” to keep model behavior within expectations. 

Reuters similarly describes the agreement as deploying OpenAI models on the Department of War’s classified cloud networks and ties the announcement to the broader dispute over AI use in the military. 

At the same time, the public record makes equally clear what we don’t have:

  • No publicly released, signed contract document in the cited materials. 
  • No public statement from the DoW that independently enumerates the same “red lines” in contract language (as opposed to policy posture). 
  • No disclosed scope: whether the deployment is Secret, Top Secret, multiple domains, or a specific classified enclave; OpenAI’s announcement simply says “their classified network.” 

That matters because the DoW is simultaneously running an enterprise unclassified/CUI platform (GenAI.mil) and publicly pushing vendors toward “all classification levels” deployments.  Meaning: the deal might be an incremental pilot—or the first domino in a bigger procurement shift.

Key actors that appear in verified reporting and official documents

On the government side:

  • Donald Trump: ordered federal agencies to stop using Anthropic technology and framed the dispute as Anthropic trying to force the government to follow its terms rather than the Constitution. 
  • Pete Hegseth: as Secretary of War, publicly endorsed an “AI-first” posture and is quoted in War.gov releases and Reuters coverage as central to the Anthropic standoff and vendor posture. 
  • Emil Michael: Under Secretary of War for Research and Engineering and DoW Chief Technology Officer; appears repeatedly as the official driving fast adoption and vendor partnerships. 
  • Sean Parnell: chief Pentagon spokesperson; publicly posted that the department has “no interest” in mass domestic surveillance or fully autonomous weapons without human involvement and called opposing claims “fake.” 

On the vendor side:

  • Dario Amodei: authored statements explaining why Anthropic refused to accept “any lawful use” without two exceptions (mass domestic surveillance, fully autonomous weapons) and said the company would challenge a supply-chain-risk designation in court. 
  • Jeff Dean: individually posted opposition to mass surveillance, cited in coverage of employee reactions/letters. 

On Capitol Hill (visible in Reuters coverage):

  • Elissa Slotkin: argued that Americans do not want weapons systems to kill without human oversight and do not want mass surveillance; cited in Reuters’ framing of the dispute. 
  • Ted Lieu: publicly questioned why the DoD/DoW agreed to the same constraints with OpenAI that it contested with Anthropic. 

What the deal seems to cover—and what remains unspecified

A conservative, evidence-based “scope guess” (labeled as inference) looks like this:

  • Deployment target: classified government cloud networks (explicit), implying operation in accredited environments rather than consumer SaaS. 
  • Likely product family: frontier language models and associated tooling (inference), consistent with OpenAI’s broader “government” offerings and DoW’s stated goal to put “America’s leading AI models” into the workforce. 
  • Not specified: model versions, retraining/fine-tuning rights, logging and retention, how “technical safeguards” are validated, oversight mechanisms, how violations are detected and punished, and whether the deal is an exclusive arrangement. 

The history and policy runway that made the deal possible

If you want the origin story of “AI goes classified,” it’s less “secret cabal” and more “a pile of memos, platforms, and executive-branch branding choices.”

President Trump’s September 5, 2025 executive order authorized “Department of War” as an additional secondary title for the Department of Defense and authorized corresponding titles like “Secretary of War,” while explicitly stating that statutory references to DoD remain controlling until changed by law.  Major reporting characterized the move as largely symbolic absent congressional action, while acknowledging real costs and messaging implications. 

That nuance matters because the OpenAI deal is constantly described as a DoW deal—yet many legal authorities, acquisition statutes, and oversight structures remain those of DoD. 

The AI policy posture from the White House

The White House published “America’s AI Action Plan” in July 2025 under the Office of Science and Technology Policy branding.  The plan includes national-security-adjacent recommendations tying defense/intelligence adoption to broader AI competitiveness framing. 

GenAI.mil: the unclassified enterprise wedge

On December 9, 2025, the War Department announced GenAI.mil as a bespoke platform, launching with Gemini for Government as the first model and stating that additional “world-class AI models” would follow.  A War.gov news story describes GenAI.mil as a secure generative AI platform pushed to desktops across the DoW workforce, quoting Secretary Hegseth’s email introducing it as live and department-wide. 

Just two weeks later, on December 22, 2025, the War Department announced an agreement with xAI to bring Grok-family models to GenAI.mil at IL5 for CUI workloads and explicitly promised personnel access to “real-time global insights from the X platform.” 

Those details are important for two reasons:

  1. GenAI.mil normalized the idea of multiple frontier models inside a government environment, lowering political and operational friction for use-cases like drafting, summarization, and internal workflows. 
  2. The xAI announcement explicitly blends government AI tooling with a commercial social platform’s data exhaust—raising future questions about provenance, manipulation, and bias in “real-time insights.” 

The War Department’s January 2026 “AI-first” strategy—where the procurement philosophy is stated out loud

On January 9, 2026, a memorandum titled “Artificial Intelligence Strategy for the Department of War: Accelerating America’s Military AI Dominance” directed the department to become “AI-first” and outlined “Pace-Setting Projects” including GenAI.mil, warfighting agent networks, and “turning intel into weapons in hours not years.” 

Several provisions in the memo foreshadow the later conflict with Anthropic—and, by extension, the environment in which OpenAI’s deal lands:

  • The memo frames speed as decisive and directs deployment cadence expectations, including the ability to deploy “the latest models” in short windows after public release. 
  • It calls for a “wartime approach to blockers,” including accelerating authorizations to operate (ATOs) and cross-domain data access. 
  • It explicitly directs procurement changes: establishing benchmarks for model “objectivity” as a primary procurement criterion and incorporating standard “any lawful use” language into DoW AI services contracts within 180 days. 
  • It also states the department “must… utilize models free from usage policy constraints that may limit lawful military applications.” 

That is the conceptual skeleton of the February 2026 vendor standoff: vendors say “two narrow exceptions”; the DoW says “any lawful use”; and everyone’s pretending the contract dispute is about “values” rather than about who controls the kill switch. 

OpenAI’s own policy evolution on military use

In January 2024, multiple outlets reported that OpenAI removed explicit language prohibiting “military and warfare” use from its usage policy, while keeping prohibitions tied to harm and weapons. 

This matters because the February 2026 deal is not occurring in a vacuum; it follows a multi-year trend in which frontier labs increasingly support “national security” use while trying (with uneven success) to carve out bright lines around the worst-case scenarios. 

How the deal came together, plus the rumors that swirled around it

The immediate trigger: the Anthropic showdown as a proxy war over “who sets the guardrails”

Reuters’ reporting frames the Anthropic dispute as a “referendum” on how AI is deployed by the military and how risks are managed, with the Pentagon demanding “any lawful use” and threatening business consequences. 

Anthropic’s CEO argued that AI can undermine democratic values in narrow cases, and that even if certain surveillance practices are “legal,” AI changes the scale and character of the harm—while also asserting that today’s frontier systems are not reliable enough for fully autonomous weapons. 

The DoW’s public stance—via spokesperson Sean Parnell—was to deny interest in mass domestic surveillance (calling it illegal) and deny interest in autonomous weapons without human involvement, claiming the “narrative” is fake. 

Read that again: the department simultaneously (a) denies it wants the thing, (b) demands contractual access to do “any lawful use,” and (c) threatens coercive tools when a vendor says “fine, but not those two.”  This is not a semantic dispute; it is a governance dispute over where “lawful” ends and “catastrophically abusable” begins.

The pre-trigger: the Pentagon was already pushing vendors toward classified networks and fewer restrictions

Two weeks before the public blowup, Reuters reported that the Pentagon was pushing top AI companies to expand onto classified networks without many standard restrictions and described GenAI.mil as an unclassified network rolled out to over 3 million personnel.  Reuters also reported (citing sources) that as part of the GenAI.mil deal, OpenAI agreed to remove many typical user restrictions while some guardrails remain. 

That earlier reporting matters because it shows the dispute wasn’t just about Anthropic’s temperament or “wokeness” (the discourse’s favorite deodorant). It was about a procurement posture: deploy frontier AI “across all classification levels,” and do it on terms that treat vendor safety policies as negotiable friction. 

The “solidarity letter” and internal dissent signals

A public letter site titled “We Will Not Be Divided” invited current/former employees of Google and OpenAI to sign, describing itself as organized by unaffiliated citizens concerned about misuse of AI against Americans and emphasizing verified signatures. 

TechCrunch reported that more than 300 Google employees and over 60 OpenAI employees signed a letter urging leaders to uphold Anthropic’s “red lines” and resist unilateral DoW demands.  In the same coverage, Jeff Dean’s statement opposing mass surveillance is quoted, underscoring that even individual leaders see the surveillance demand as constitutionally radioactive. 

Then the hammer: Trump’s directive and the supply-chain-risk play

Trump ordered federal agencies to stop using Anthropic technology; Reuters describes it as an unprecedented move and connects it to the broader standoff over whether companies can set restrictions on government use of their models. 

Anthropic’s own Feb 27 statement says Hegseth shared on X that he was directing the Department to designate Anthropic a supply chain risk and that Anthropic would challenge any designation in court, calling it legally unsound and unprecedented against a U.S. company. 

The legal hook Anthropic points to is 10 U.S.C. § 3252, which describes authorities for covered procurement actions related to supply chain risk and requires determinations and notice to congressional committees.  That statute’s specifics matter because they speak to whether the “no contractor may conduct any commercial activity” rhetoric is enforceable as stated, or is (to be charitable) ahead of its paperwork. 

OpenAI steps in: the classified-network agreement announcement

Altman’s announcement framed the DoW as having “deep respect for safety” and claimed the department agreed with two core safety principles and wanted technical safeguards.  Reuters reported the “agreement to deploy models” on classified networks shortly thereafter. 

At least one prominent lawmaker reaction (Ted Lieu) focused on the apparent contradiction: if DoD/DoW agreed to those conditions with OpenAI, why were they treated as dealbreakers with Anthropic? 

Leaked or rumored details (explicitly labeled unverified unless corroborated)

Because the full contract is not public in the sourced record above, the internet filled the vacuum with… the internet. A responsible way to treat this is: if it isn’t in an official doc or credible reporting, it’s unverified.

  • Unverified speculation: that OpenAI’s agreement is meaningfully different from the terms offered to Anthropic (e.g., some hidden carve-out, or enforcement discretion). This appears as open questions in comment threads; it is not established fact. 
  • Unverified interpretation: that the DoW is comfortable with “red lines” only when the vendor frames them as aligned with existing law/policy rather than as company-imposed constraints. This is an inference drawn from the contradiction between the DoW’s “any lawful use” procurement memo and OpenAI’s claims about contractual red lines. 
  • Reported but still not primary: Reuters’ and other outlets’ characterization that OpenAI removed “many” user restrictions for GenAI.mil, with some guardrails remaining. This is sourced reporting, but the exact restrictions removed are not enumerated publicly in the cited materials. 

Contract terms, legal/ethical constraints, and the “lawful vs. wise” problem

The contractual core: OpenAI’s claimed “red lines” versus DoW’s stated procurement posture

OpenAI (via Altman’s public statement) says the agreement includes:

  • prohibitions on domestic mass surveillance, and
  • human responsibility for the use of force, including for autonomous weapon systems, plus
  • “technical safeguards” ensuring model behavior. 

Anthropic says it asked for two exceptions to “lawful use”: mass domestic surveillance and fully autonomous weapons, and refused to accede without those exceptions. 

Meanwhile, the DoW’s January 2026 AI Strategy memo directs incorporation of standard “any lawful use” language into DoW AI services contracts and says models should be free from usage policy constraints that limit lawful military applications. 

If you’re looking for the real analytical question, it’s not “is OpenAI pro-military now?” (that ship sailed in 2024).  It’s: how can the DoW simultaneously standardize ‘any lawful use’ contracts and agree to enforceable carve-outs?

There are only a few plausible explanations, each requiring different levels of charitable interpretation:

  1. The DoW views the “red lines” as already imposed by existing law/policy, so it does not experience them as vendor-imposed constraints. OpenAI’s phrasing explicitly claims the DoW “reflects them in law and policy.” 
  2. The OpenAI contract language may be narrower than it sounds (e.g., “we won’t build X” rather than “you won’t do X”). This is possible, but not verifiable without the contract. 
  3. The DoW treated Anthropic as a special case for reasons unrelated to the two exceptions—personality, messaging, industrial-policy leverage, or internal politics—and the “any lawful use” principle was enforced selectively. Selective enforcement is not a conspiracy; it’s practically a form of government. Reuters and other reporting indicate the dispute “divided” actors and became politically charged. 

Supply-chain risk: what the law actually authorizes (and what it doesn’t)

Anthropic points to 10 U.S.C. § 3252 to argue that a “supply chain risk” designation can only extend to contractors’ use of Claude on DoW/DoD contract work, not to all commercial activity.  The statute describes requirements for determinations, consultation, and notification to congressional committees for covered procurement actions tied to reducing supply chain risk. 

Reporting notes that legal ambiguity exists around the government’s move and that formal assessments and congressional notifications are typically required. 

Autonomous weapons and “human responsibility”: policy baseline is messier than slogans

OpenAI’s phrasing (“human responsibility for the use of force”) is rhetorically clean. The U.S. policy landscape on autonomous weapon systems is not.

A widely cited DoD policy baseline (DoD Directive 3000.09) is often summarized as requiring “appropriate levels of human judgment” over the use of force, but not as an absolute ban on autonomous targeting/engagement in all contexts. 

So “human responsibility” can mean anything from:

  • “a human signs off on a target list,” to
  • “a human approves each engagement,” to
  • “a human is accountable in chain-of-command for deploying the system.” 

Without the contract, we cannot specify which interpretation OpenAI and the DoW operationalized in the agreement. 

Classified network access: likely technical and compliance constraints (inference grounded in cited standards)

The DoW’s GenAI.mil program and broader DoD ecosystem use cloud “Impact Levels,” where IL5 maps to Controlled Unclassified Information requiring stronger protections than IL4.  Microsoft’s compliance documentation describes IL5 as covering CUI requiring higher protection than IL4. 

OpenAI’s government product strategy includes self-hosting options for government agencies in Azure commercial or Azure Government clouds and references compliance frameworks including IL5, CJIS, ITAR, and FedRAMP High, while stating use is subject to OpenAI usage policies. 

From those pieces, a reasonable inference is that the classified-network deployment likely involves:

  • accredited government cloud regions or classified enclaves,
  • strict identity/access controls, logging, and monitoring,
  • data-handling rules that prevent model training on sensitive inputs by default, and
  • clearance/insider-threat constraints for those with access to prompts/outputs. 

But again: the specific architecture (air-gapped, cross-domain, federated, etc.) is not specified in the public record cited here. 

Claims vs. evidence: does the deal match OpenAI’s public commitments?

OpenAI’s public self-description includes broad-benefit commitments and a concept of “misuse” that explicitly includes surveillance.  The question isn’t whether OpenAI can write beautiful principles; it’s whether the DoW deal is built to enforce them when incentives turn ugly.

Table: OpenAI public claims compared to available evidence about the DoW deal

OpenAI public claim (or principle)Source for the claimEvidence in the DoW deal (publicly stated)What remains unknown
“Primary fiduciary duty is to humanity,” avoid enabling harmful uses or undue power concentrationOpenAI Charter Altman claims “no domestic mass surveillance” and “human responsibility for use of force” in contractHow enforcement works under classified operations; who audits; remedies for breach 
Misuse includes suppression of free speech/thought via “surveillance”OpenAI safety/alignment doc Altman claims prohibition on domestic mass surveillance in agreementWhether “mass surveillance” is defined; whether loopholes (data brokerage, narrow authorities) are addressed 
Usage policies prohibit harmful uses, including “develop or use weapons”OpenAI policy revision text describes this prohibition Altman emphasizes “human responsibility for use of force,” not “no weapons use”Whether the deal is framed as analysis/support vs. targeting/engagement; whether models can be used to design weapons or tactics under “lawful use” 
OpenAI supports government adoption in secure environments; “custom models for national security” on limited basis“OpenAI for Government” announcementDeal explicitly targets classified networks (consistent with national security offering direction)Whether custom models are part of this deal; whether any fine-tuning occurs on classified data; control of weights 
OpenAI claims the DoW respects safety and wanted technical safeguardsAltman quote as reproduced in coverageMultiple sources quote Altman saying technical safeguards are part of agreementWhat safeguards are (policy gating, filtering, monitoring, red-teaming), who can override them, and what logs exist 

The biggest alignment point

On paper, the “red lines” Altman describes are consistent with OpenAI’s broader “misuse” framing—surveillance is explicitly named as a democratic-values violation in OpenAI’s safety/alignment writeup. 

If the contract truly implements enforceable guardrails—rather than merely restating policy aspirations—then OpenAI can plausibly argue it is living out its public commitments while engaging in national security work. 

The biggest tension point

The DoW’s procurement direction explicitly seeks models free from “usage policy constraints” limiting lawful military applications and directs insertion of “any lawful use” language in AI contracts. 

So if OpenAI’s agreement truly includes meaningful restrictions, either:

  • the DoW is not applying its “any lawful use” posture uniformly, or
  • the restrictions are framed in a way that preserves DoW discretion, or
  • the agreement is narrower than public debate assumes (e.g., deployment only for specific analytic/enterprise uses, not for weapons employment). 

With the contract not public, we can’t definitively pick among these. 

Industry implications, governance precedent, and risks

Precedent for AI governance: contract-as-constitution vs constitution-as-constitution

The Trump administration’s framing of Anthropic’s stance—“terms of service vs the Constitution”—is politically effective rhetoric, but it ducks the real issue: law is often behind technology, and contracts often become the de facto governance layer long before Congress updates statutes. Anthropic explicitly argued that AI changes what is technically possible under existing legal loopholes, especially around aggregation of commercially available personal data. 

The DoW, by contrast, wants “any lawful use” and treats vendor guardrails as interference. 

If OpenAI’s deal is seen as a workable compromise, it could become the template: vendors keep narrow red lines, government gets broad use, and technical safeguards mediate the boundary. 

If the deal is seen as opportunistic (OpenAI “gets in the door” while Anthropic is punished), it could chill future vendor willingness to negotiate on safety at all—because the punishment for negotiation becomes worse than the risk of compliance. 

Competitive dynamics: the “divide-and-conquer” risk is not imaginary

The employee open-letter discourse explicitly warns that the government can “divide” companies by pressuring one and rewarding another. 

Regardless of whether that is anyone’s conscious strategy, the incentives are real: classified access is a prestige moat, and losing it invites competitors to claim the “serious national security partner” mantle. 

Technical capability likely involved—and the risk envelope

Even bounded to language models, classified-network deployments can materially affect:

  • intelligence analysis and summarization,
  • drafting operational plans or courses of action,
  • cyber defense support, and
  • simulation and wargaming assistance. 

Those are exactly the “mission-critical applications” Anthropic claims Claude is already used for across national security agencies. 

The risks are correspondingly familiar but amplified in classified contexts:

  • Overreliance and automation bias: outputs look authoritative; humans defer; mistakes propagate. (General risk; intensified by operational tempo.) 
  • Prompt injection / manipulation: if models ingest or rely on untrusted text streams (especially anything like “real-time global insights”), adversaries can poison the input well. 
  • Policy override ambiguity: OpenAI says it will build safeguards; DoW says it wants “any lawful use.” The unresolved question is who wins when operational urgency collides with guardrails. 
  • Surveillance creep: even if “mass domestic surveillance” is forbidden, pressure can reappear as narrower “law enforcement support,” “force protection,” “counterinsider threat,” or data-broker aggregation—areas where law, policy, and definitions matter more than vibes. 

Political context: “AI dominance,” anti-DEI framing, and coercive procurement tools

The DoW’s strategy memo frames “Hard-Nosed Realism” and explicitly rejects DEI and “ideological tuning,” then links that to procurement of “objective” models and the “any lawful use” clause.  This isn’t just culture-war garnish; it is a procurement signal about alignment, controllability, and the government’s desire to avoid vendor policy constraints. 

The conflict also involved threats to invoke the Defense Production Act, according to Reuters’ reporting, which quoted a senior Pentagon official describing DPA invocation as an option if a vendor did not “get on board.” 

Bottom line risk assessment (analytic)

Based on the cited record, the OpenAI–DoW deal appears to reflect a fragile compromise:

  • OpenAI publicly claims meaningful guardrails are embedded. 
  • The DoW’s policy posture and procurement memo pushes toward maximum discretion and minimal vendor constraint. 
  • The government has shown willingness to deploy supply-chain-risk designations as leverage in the AI ethics fight. 

The deal may “match” OpenAI’s public claims at the level of stated principles, but whether it matches at the level that counts—enforcement under pressure—is currently not verifiable from public contract text. 

Sources

  • The White House — “Restoring the United States Department of War” executive order (Sept 5, 2025). 
  • The White House — Fact sheet on restoring “Department of War” as secondary title (Sept 5, 2025). 
  • U.S. Department of War — GenAI.mil launch release (Dec 9, 2025). 
  • U.S. Department of War — xAI/Grok expansion announcement (Dec 22, 2025). 
  • U.S. Department of War — “AI Acceleration Strategy” press release (Jan 12, 2026). 
  • U.S. Department of War — OpenAI partnership for GenAI.mil (Feb 9, 2026). 
  • U.S. Department of War — Hegseth “introduces” GenAI.mil news story quoting email (Dec 9, 2025). 
  • DoW “Artificial Intelligence Strategy for the Department of War” memo (Jan 9, 2026) — key pages/screenshots (PDF via media.defense.gov). 
  • White House OSTP — America’s AI Action Plan (July 2025) — cover and selected page screenshot. 
  • Reuters — “Pentagon Anthropic feud…” (Feb 27, 2026). 
  • Reuters — “OpenAI reaches deal…” (Feb 28, 2026). 
  • Anthropic — Dario Amodei statement on discussions with DoW (Feb 26, 2026). 
  • Anthropic — Statement responding to Hegseth comments (Feb 27, 2026). 
  • TechCrunch — Coverage of employee open letter and signatory counts (Feb 27, 2026). 
  • notdivided.org — “We Will Not Be Divided” letter site and verification details. 
  • OpenAI — Charter principles (Broadly distributed benefits, safety, etc.). 
  • OpenAI — Safety & alignment framing that includes surveillance as misuse. 
  • OpenAI — Usage policies revision page (effective Jan 29, 2025). 
  • OpenAI — ChatGPT Gov announcement (Jan 28, 2025) (used for deployment/compliance framing via snippet). 
  • Microsoft Learn — DoD IL5 description (CUI requiring higher protection than IL4). 
  • U.S. House Office of the Law Revision Counsel — 10 U.S.C. § 3252 supply chain risk requirements. 
  • The Wrap — Reaction roundup including Altman and Lieu quotes (Feb 27–28, 2026). 
  • Hacker News — Example of public speculation/questions (unverified) about contract differences.