DOGE Used ChatGPT to Cut Grants — A Judge Called It Illegal

A US federal judge has ruled that the Department of Government Efficiency acted unconstitutionally when it used ChatGPT to cancel over $100 million in humanities grants. The 143-page decision found DOGE's AI-driven process was both legally flawed and discriminatory.

By ottown · 3 min read

A Judge Just Ruled DOGE's Grant-Cutting AI Tool Was Unconstitutional

The Department of Government Efficiency — the cost-cutting initiative spearheaded under the Trump administration — has taken a significant legal hit. A US federal judge ruled this week that DOGE's cancellation of more than $100 million in grants from the National Endowment for the Humanities (NEH) was unconstitutional, and the method used to make those cuts is drawing serious scrutiny: the agency used ChatGPT.

In a sweeping 143-page decision, US District Judge Colleen McMahon found that DOGE had used OpenAI's ChatGPT to scan grant descriptions and flag any that appeared to be related to diversity, equity, and inclusion (DEI). Grants that triggered those flags were cancelled — often without review, context, or human judgment applied to the decision.

What the Judge Actually Found

Judge McMahon was blunt in her assessment. She wrote that "it could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding."

The ruling stems from a 2025 lawsuit filed by humanities organizations that saw their funding abruptly pulled. The judge found that the process violated constitutional protections, essentially using a blunt AI tool to discriminate against projects on the basis of who they served or what topics they addressed — not on their merits or policy compliance.

The NEH, which funds academic research, cultural projects, and educational programming across the United States, had dozens of grants cancelled as part of the broader DOGE efficiency sweep. Many of those grants supported museums, archives, universities, and Indigenous cultural programs.

The Problem With Using AI for Policy Decisions

The ruling highlights a growing concern about governments and institutions using AI tools to make consequential decisions — particularly when those tools lack transparency, nuance, or accountability.

ChatGPT, like most large language models, is not designed to adjudicate legal or policy questions. It can summarize, flag keywords, and generate text — but it has no understanding of constitutional law, grant compliance requirements, or the intent behind funding legislation. Using it as a filter to determine which federally funded projects live or die is, as the judge essentially found, a recipe for arbitrary and discriminatory outcomes.
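To see why presence-of-a-term filtering produces the "arbitrary and discriminatory outcomes" the judge described, consider a minimal sketch of that kind of filter. This is purely hypothetical — not DOGE's actual tool or prompt — and the watchlist, function name, and sample grant descriptions are all invented for illustration:

```python
# Hypothetical sketch of presence-of-a-term flagging -- NOT DOGE's actual
# pipeline. The watchlist and sample grants below are invented.

FLAG_TERMS = {"diversity", "equity", "inclusion"}  # assumed watchlist

def flag_grant(description: str) -> bool:
    """Return True if any watchlist term appears anywhere in the text."""
    words = {w.strip(".,;:").lower() for w in description.split()}
    return not FLAG_TERMS.isdisjoint(words)

grants = [
    "Oral histories of farming communities in the Ottawa Valley",
    "Digitizing records of equity courts in colonial Virginia",
]

for g in grants:
    # The second grant is legal history, yet the bare word "equity" flags it.
    print(flag_grant(g), "-", g)
```

The second description gets flagged solely because the word "equity" appears, even though equity courts are an 18th-century legal institution with nothing to do with DEI policy. That is exactly the context-free, merits-free disqualification the ruling describes.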

Critics of DOGE's methods have long argued that the initiative prioritized speed and optics over due process. This ruling appears to validate those concerns in a formal legal context.

What Happens Next

The decision is a major setback for DOGE's broader effort to rapidly defund programs it deemed ideologically misaligned with the administration. It opens the door for other organizations whose grants were cancelled to challenge those decisions, potentially forcing the reinstatement of funding across multiple federal agencies.

For the humanities community in particular — a sector already underfunded compared to STEM fields — the ruling offers a measure of relief, though many organizations have already had to scale back or shutter programs in the months since their funding was cut.

The case is also likely to fuel wider debate about how AI tools should (and shouldn't) be used in government decision-making — a conversation that's increasingly urgent as agencies on both sides of the border explore automation.

Source: The Verge
