Perspective

The Uber and Lyft logos displayed on a phone and backdrop. (Stock Catalog)

In May, the nonprofit Human Rights Watch released a damning investigation that should alarm policymakers worldwide.

The group argued that the seven largest gig platforms in the United States — Amazon Flex, DoorDash, Favor, Instacart, Lyft, Shipt and Uber — are using algorithmic systems not simply to manage workers but to systematically extract labor while evading fundamental legal obligations. The report, “The Gig Trap: Algorithmic, Wage and Labor Exploitation in Platform Work in the US,” exposes what amounts to a crisis of AI governance: algorithms have become the primary instrument through which corporations deprive workers of minimum wage protections, disable collective bargaining, and execute instant termination without due process.

Yet the response in Washington and state legislatures has been fragmented, timid and ultimately inadequate.

The problem is not that algorithms manage workers — it is that they have become employers without accountability, and our legal system has not caught up.

How opacity disguises inequality, not efficiency

The platforms frame algorithmic management as a neutral, efficient technology. They claim their algorithms optimize matching, reduce transaction costs and enable the flexibility workers desire. The HRW report reveals this framing to be false.

Consider dynamic wage-setting. A DoorDash or Uber driver does not know why their pay for the same delivery route fluctuates week to week. The platforms justify this through what they describe as “dynamic pricing” based on “real-time demand,” “market factors” and other undisclosed variables. As some legal reviews have argued, this opacity allows for “algorithmic wage discrimination,” in which platforms leverage granular data — acceptance rates, response times, location history and braking patterns — to calculate the minimum pay each individual will accept. A desperate worker with few outside options may be systematically offered lower wages for identical work. This is not pricing efficiency; it is algorithmic wage theft, tailored to each worker's vulnerability.

The International Labour Organization debate on algorithmic management, held at the 113th International Labour Conference in June, crystallized this contradiction. Employers and some governments argued that regulating algorithmic systems amounted to interference in commercial law. Workers' representatives responded that this framing evades a fundamental truth: algorithms are management systems, and management systems that set pay, assign tasks and discipline workers are inherently labor matters. The European Union, the Workers' group, and governments such as Chile insisted that algorithmic governance is inseparable from labor regulation, as it directly determines pay, hours and conditions of work. The Trump administration, represented by U.S. Department of Labor officials, sided with employers, as did China — arguing that algorithmic governance is beyond the ILO's mandate.

This position is untenable. An algorithm that unilaterally sets a worker's hourly wage is not a commercial feature but an employment practice.

Surveillance as disciplinary infrastructure

The HRW findings on worker surveillance should disturb anyone committed to human dignity. Platforms track location, speed, braking habits and even phone usage, often extending surveillance to off-duty time. This data feeds into behavior-scoring systems that determine wages, offer frequency and, ultimately, whether a worker remains on the platform. Gamified incentives — such as Uber's “Quest” bonuses for completing consecutive rides or DoorDash's “Delivery Streaks” for accepting back-to-back orders — are psychologically coercive, compelling longer hours and schedule conformity in ways that mimic direct employment while evading employment protections.

And then there is algorithmic deactivation — instant termination by algorithm, with no human review and limited recourse. A customer gives a poor rating, or the algorithm detects suspicious behavior, and a worker's income vanishes. The appeals process, when it exists, is often fully automated. These are not disciplinary systems designed for fairness; they are systems designed for total corporate control without corporate accountability.

Misclassification as a legal shield

The deeper question is why these practices persist. The answer is misclassification.

By reclassifying employees as independent contractors, platforms sidestep minimum wage laws, overtime protections, workers' compensation and — critically — the duty to bargain with unions. A conventional employer who surveilled workers with this intensity, set wages this opaquely and terminated workers this arbitrarily would face immediate legal exposure. But because gig workers are classified as contractors, the legal system treats these practices as between a platform and an autonomous service provider, not employer and employee.

This classification is not a legal accident; it is the underlying architecture, the foundation upon which wage suppression is built. And recent policy developments suggest the structure is hardening.

In December, President Donald Trump signed an executive order explicitly directing federal agencies to challenge state artificial intelligence regulations—specifically naming Colorado's algorithmic discrimination law and targeting California's AI safety disclosure requirements—as unconstitutional interference with commerce. The stated rationale is to ensure that American AI companies are free to innovate without cumbersome regulation. This order signals that the federal government will actively defend platforms' right to use opaque, discriminatory algorithms against workers. At the same moment, the European Parliament voted to ban automated hiring, firing and pay decisions, requiring human review and worker contestability for all algorithmic employment decisions.

The transatlantic divergence is stark, and it favors platforms. The EU Platform Work Directive imposes enforceable obligations on platforms — including human oversight of algorithmic decisions and bans on automated dismissals — while the Trump administration's executive order uses litigation to eliminate state-level worker protections rather than establish new ones.