Diagnose Thyself
April 9, 2026
Alex Kierstein
Service departments bring in the majority of most dealerships’ income. With a service tech shortage, the easy button is labeled AI.
The pitch is compelling. With twice as many service techs retiring as are coming into the workforce, lucrative dealership service departments are already understaffed. And the trend lines don’t look great moving forward. Those techs that do hit the job market are green, and it takes experience to diagnose issues. So, as Automotive News reports, tech companies are stepping in with AI products, pitched as solutions to this workforce calamity.
AN references a company called AutoSonix in particular. The idea is that, by listening to the sounds a sick car is making, its device will assist the green mechanic with diagnosis. The AutoSonix unit purports to compare those sounds to a database of known sounds, and also to use AI to analyze the data in order to provide a cause (and a solution) for the noise.

AI is, for many reasons, problematic. The type we’re most familiar with, the generative pre-trained transformer (GPT), isn’t capable of vetting its output in any way for accuracy. It merely cobbles together something that resembles a fluent answer to a prompt. The goal is for it to seem superficially like an answer. The answer may, or may not, be accurate. The GPT model doesn’t know, and can’t know. You can’t trust the answer or know GPT’s confidence level in its accuracy, since accuracy isn’t the goal.
For anyone who has conducted research or produced something for which accuracy is vitally important, that should be a horrifying, damning indictment of the tech. And yet, there’s this undeniable appeal of its apparent benefits. The answers sound so slick, so convincing, maybe it’s right more than it’s wrong. Maybe the inaccuracy can be mitigated (by humans in low paying jobs, perhaps?) and it can still be an “easy” end-run around some sort of intractable problem. After all, many of the systems’ issues—power consumption, cornering the market on processors, soaking up investment funds without a clear path to profit—are out of sight, and out of mind.
I think there’s a more perverse consequence. Let’s assume, for a second, that the underlying Globalsense AI model that AutoSonix is applying is less GPT and more deep learning, training itself for accuracy by repeatedly comparing real-world sounds to a validated data set. That’s the strategy automakers employ to optimize their ECU programming, and one less prone to hallucinations, since the goal is a feedback loop of ever-increasing accuracy. Sounds great. An automatic audio diagnosis tool, quick and reliable.
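At its very simplest, the "compare a sound against a validated database" idea can be sketched as a nearest-neighbor lookup over feature vectors. To be clear, everything below (the feature values, the labels, the tiny database) is hypothetical, invented purely to illustrate the concept; it is not AutoSonix's actual method or data.

```python
import math

# Hypothetical labeled "sound database": each entry pairs a feature
# vector (imagine averaged spectral-band energies) with a known diagnosis.
SOUND_DB = [
    ([0.9, 0.1, 0.2], "healthy idle"),
    ([0.2, 0.8, 0.7], "rod knock"),
    ([0.4, 0.9, 0.1], "belt squeal"),
]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def diagnose(features, db=SOUND_DB):
    """Return the diagnosis whose reference sound sits nearest the input."""
    return min(db, key=lambda entry: distance(features, entry[0]))[1]

# A recording whose features land near the "healthy idle" reference:
print(diagnose([0.85, 0.15, 0.25]))  # → healthy idle
```

A real system would extract far richer features and learn the mapping from many labeled examples rather than three, but the shape of the task is the same: new sound in, nearest validated match out.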
What it can do, and how it will be employed, matter. It could supplement skilled professional mechanics, saving them some time, and also allowing them to validate the device’s notions using their own personal expertise. This seems good.
A likelier scenario is that it will be used to replace those mechanics, allowing less-skilled mechanics (or even untrained employees) to run a quick check. If the AutoSonix device doesn’t hear any issues, push the car out the door. It will let dealerships quickly replace the experienced mechanics exiting the workforce with inexperienced ones who can’t truly validate what the AutoSonix system is outputting, all at lower overhead, and perhaps with fewer mechanics overall. In the face of an undeniable labor shortage, that’s a benefit, I suppose.
It’s also possible that the mechanics will be replaced by a system that never actually works and this will be another costly AI experiment that hurts everyone involved.
But it’s also the herald of a trend: human laborers always seem to be the targets of these systems. The pitch is efficiency, ease, quickness. But the outcome seems to be fewer humans employed, with the increased revenue flowing upward to the business owners, and a lower expertise requirement across the board. The motivation to invest in human resources, to encourage retention, and to build expertise is further eroded.
Systems like AutoSonix’s are always pitched as augmentation of human capability, but, given the perverse incentives of our system, can be employed as a replacement for human capability and, especially, expertise. Ceding our human expertise and judgement to machines, which cannot reason and cannot be wise, seems foolish. And it significantly downplays the human ability to come up with creative, unconventional solutions to problems. I guarantee that there are human mechanics who can listen to an engine sound that would seem normal by any objective analysis and spot, using intuition and experience, a subtle problem.
Maybe machines will get there and mechanics can all take a universal basic income retirement to sip piña coladas on some sunny beach. But do you see that happening in this timeline?