Caveat Emptor
January 5, 2026
Alex Kierstein
A viral post on Reddit alleges some truly cynical policies that pay “desperate” drivers less, among other things. Is this the price of convenience?
A few days ago, u/Trowaway_whistleblow posted a few paragraphs on the r/confession subreddit that subsequently went viral. The post spread to many other subs, was covered by news outlets, and compelled DoorDash to respond with an “it wasn’t us!” note on its site.
But there’s a pattern of troubling allegations and subsequent settlements over deceptive practices by food delivery services. Just recently DoorDash settled a lawsuit alleging it stole tips from drivers, and GrubHub settled with the FTC over “unlawful practices” that allegedly deceived consumers, drivers, and restaurants. DoorDash listing restaurants on its site without their permission? That’s a lawsuit. (And then, a settlement.)
Despite that pattern of lawsuits and settlements over deceptive behavior and opaque policies, the veracity of the poster’s specific allegations is unknown. I hope a good investigative reporter develops the story in a way that brings even broader outrage at the gig industry in general, because these gamified, algorithm-driven jobs are deliberately exploitative in a way that is intended to circumvent various worker protections. You know this, I know this, the companies know this, and the drivers know this.
Even with that background knowledge, the specific ways in which the unnamed food delivery company is alleged to be exploiting its employees (I’m calling them that despite the useful fiction that they are independent contractors) are particularly odious. We assume that, within the bounds of the law, companies will do whatever they can to maximize profits from consumers. But it is distressing to think of those same companies squeezing their workers, in this case delivery drivers, dynamically. And targeting the weakest among them, too.
Here is how the original poster describes the practice:
But the thing that actually makes me sick—and the main reason I’m quitting—is the “Desperation Score.” We have a hidden metric for drivers that tracks how desperate they are for cash based on their acceptance behavior.
If a driver usually logs on at 10 PM and accepts every garbage $3 order instantly without hesitation, the algo tags them as “High Desperation.” Once they are tagged, the system then deliberately stops showing them high-paying orders. The logic is: “Why pay this guy $15 for a run when we know he’s desperate enough to do it for $6?” We save the good tips for the “casual” drivers to hook them in and gamify their experience, while the full-timers get grinded into dust.
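The post includes no code or internals, but the alleged mechanism is simple enough to sketch. Everything below is hypothetical: the names, the thresholds, and the scoring formula are my invention, a minimal sketch assuming only the two behavioral signals the poster mentions (accepting cheap orders, and accepting them instantly) and the alleged consequence (high-paying orders are simply never shown).

```python
# Hypothetical sketch of the alleged "Desperation Score" mechanism.
# No names, thresholds, or formulas here come from any real system.
from dataclasses import dataclass


@dataclass
class Driver:
    accepted_low_pay: int = 0   # count of low-pay (e.g. sub-$5) orders accepted
    accepted_total: int = 0     # count of all orders accepted
    instant_accepts: int = 0    # count of orders accepted within seconds


def desperation_score(d: Driver) -> float:
    """Blend of two signals: how often the driver takes cheap orders,
    and how often they accept without hesitation."""
    if d.accepted_total == 0:
        return 0.0
    low_pay_rate = d.accepted_low_pay / d.accepted_total
    instant_rate = d.instant_accepts / d.accepted_total
    return (low_pay_rate + instant_rate) / 2


def visible_offers(d: Driver, offers: list[float],
                   threshold: float = 0.7) -> list[float]:
    """The alleged filter: once flagged "High Desperation", a driver
    never sees the high-paying runs at all."""
    if desperation_score(d) >= threshold:
        return [pay for pay in offers if pay < 8.0]  # arbitrary cutoff
    return offers
```

Under this sketch, a full-timer who grabbed 40 of their 50 accepted orders at low pay, 45 of them instantly, scores 0.85 and sees only the cheap runs, while a casual driver with the same offers available sees everything, including the $15 run. The point the poster makes is exactly this asymmetry: the filtering happens upstream of the driver, so nothing the driver sees ever reveals what was withheld.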
Sometimes workers will accept a lower wage to retain a job, or accept a lower-paying new one after a long period of unemployment. That’s an internal calculus that’s voluntary, at least to the degree that the worker has any insight into the prevailing wages for that type of work. But this system allegedly changes the terms opaquely; the worker never enters into an agreement with the company to alter the parameters of the job.

Nor does this system reward commitment and efficiency. In this anti-meritocracy, those characteristics earn the stunningly cynical “High Desperation” flag in the delivery distribution system. And the flag is assigned by an algorithm, mind you.
There’s a famous IBM presentation from the dawn of the personal computing age, as it was quaintly known in the late 1970s, that includes the line, “A computer can never be held accountable, therefore a computer must never make a management decision.” Google delivers an IBM page containing that quote as the first result, for me at least (opacity also being a fundamental characteristic of Google’s algorithm). But that page is IBM’s self-serving attempt to revisit the statement’s relevance as the company weighs whether AI can make management decisions.
Do we need a human in the loop? IBM ponders. AI can do so much.
Of course, there’s no consensus or legal precedent for who is at fault when a computer, for example, generates morally reprehensible content. Or when that computer, at the instigation of management and through the actions of a product team, relentlessly squeezes its hapless drivers.
It’s not just how much an Uber will cost during a period of higher usage. It’s how Uber might use its immense data ecosystem in creative ways to decide, for example, that your pickup point and your destination mean you’re wealthy. Everybody’s doing it, man. You’re uncomfortable with it because it’s using such personalized data to alter your costs, your experience. If Uber were, hypothetically, merely using generalized data, like ride volume or tip totals, the distress would be lessened, even if the outcome were the same but spread more broadly.
We’re more uncomfortable when we realize how much we may be singled out as consumers for a negative outcome. When we realize that the convenience of “smart” devices means mass surveillance. When it becomes clear that with a broad enough swath of individually innocuous data, your characteristics can be ascertained to a stunningly accurate degree.
And yes, these systems can now be used to manipulate and exploit gig drivers more than ever before. Can be, because clearly they can, regardless of the truth of these specific allegations.