Hallucinations 


A ProPublica investigation reveals that the US DOT wants to “flood the zone” with AI-generated transportation regulations.

The US Department of Transportation is a massive agency that touches almost every aspect of how Americans get around. The National Highway Traffic Safety Administration (NHTSA), Federal Aviation Administration (FAA), Federal Highway Administration (FHWA), Federal Motor Carrier Safety Administration (FMCSA), Federal Railroad Administration (FRA), and Federal Transit Administration (FTA) are housed within the US DOT. So when, as ProPublica reports, the US DOT’s general counsel, Gregory Zerzan, told the agency’s leadership that it is “the first agency that is fully enabled to use AI to draft rules,” the regulations that would result from such an AI-enabled process could broadly impact every American’s life and safety.

ProPublica reviewed notes revealing that Zerzan continued, “We don’t need the perfect rule on XYZ. We don’t even need a very good rule on XYZ. We want good enough. We’re flooding the zone.” 

The agency’s vision is to create rules quickly, with humans essentially reviewing the AI output (draft rules) for accuracy. ProPublica reports that Justin Ubert, division chief for US DOT cybersecurity, told other federal agency leaders that the goal is for humans to eventually just monitor AI-to-AI interactions. 

It’s an attempt to find a quick fix for what is admittedly a cumbersome regulatory process. But the process is cumbersome, at least in part, because of the gravity of the sectors the US DOT’s subsidiary agencies regulate. It takes time for human experts to apply their expertise to a potential rule, and time for those experts to evaluate the rule for issues or unexpected interpretations. Ultimately, these regulations are what protect people from safety hazards that would otherwise not be adequately addressed by the responsible party.

Regulatory agencies wouldn’t exist if corporations and public transportation agencies could always be counted on to address significant safety issues. The FAA, at a minimum, ensures that airlines don’t cut certain corners in a drive for increased efficiency or profit. Hell, the agency was formed in the wake of unimaginable air tragedies. In 1956, two passenger airliners collided over the Grand Canyon, in uncontrolled airspace, resulting in 128 deaths. No survivors. The government, through the new FAA, stepped in to rationalize air traffic control to prevent it from happening again. 

As technology that can improve safety is developed, it’s the job of the agencies to prod reluctant corporations to include it—even if it’s expensive. The Intermodal Surface Transportation Efficiency Act of 1991, passed by Congress, directed NHTSA to draft the rules that would require vehicles to carry airbags. The agency serves to actually craft the regulations that put legislative or agency priorities into a form that can be implemented. 

These are, in an ideal sense, the experts who, unlike politicians, have the in-depth subject matter experience to craft rules that achieve the desired outcome. It’s not easy. Language is imperfect; misunderstandings and imprecise wording can cause unintended consequences. 

And AI is one of the grandest misnomers of our time. There is no “intelligence” to speak of in a large language model; these systems can’t parse what they’ve generated to ensure it makes sense. Hallucinations—results that look superficially accurate but are wrong, or entirely made up—are common, a fundamental limitation of a system that has no ability to actually understand what it has generated.

The US DOT’s AI-promoting leadership sees the technology as a means to agility. It’s a common misunderstanding that quickly generating material that cannot automatically be relied upon, then using humans as quality control, will save time and resources. Anyone who has worked the QC end of such a system knows how badly this works in practice. 

It is a terrifying idea that this uniquely important agency, with such a direct impact on every American’s life, is determined to rely on such a demonstrably flawed technology. It is dangerous. There is a reason that regulatory agencies have a legal duty to “engage in reasoned decision-making.” 

AI cannot be held accountable for deaths and injuries resulting from bad regulations. AI doesn’t even know what accountability is. But you can be sure that offloading responsibility to a non-entity that can’t be held accountable is, at least hypothetically, a feature.
