Algorithmic Harms to Workers in
the Platform Economy: The Case of
Uber
ZANE MULLER*
Technological change has given rise to the much-discussed gig or
“platform economy,” but labor law has yet to catch up. Platform firms,
most prominently Uber, use machine learning algorithms processing
torrents of data to power smartphone apps that promise efficiency,
flexibility, and autonomy to users who both deliver and consume services.
These tools give firms unprecedented information and power over their
services, yet they are little-examined in legal scholarship, and case law has
yet to meaningfully address them. The potential for exploitation of workers is immense; however, the remedies available to workers who are harmed by algorithm design choices are as yet undeveloped.
This Note analyzes a set of economic harms to workers uniquely enabled
by algorithmic work platforms and explores common law torts as a
remedy, using Uber and its driver-partners as a case study. Part II places
the emerging “platform economy” in the context of existing labor law. Part
III analyzes the design and function of machine learning algorithms,
highlighting the Uber application. This Part of the Note also examines
divergent incentives between Uber and its users alongside available
algorithm design choices, identifying potential economic harms to workers
that would be extremely difficult for workers to detect. Part IV surveys
existing proposals to protect platform workers and offers common law
causes of action sounding in tort and contract as recourse for workers
harmed by exploitative algorithm design.
* Executive Editor, Colum. J.L. & Soc. Probs., 2019–2020. J.D. Candidate 2020,
Columbia Law School. The author is grateful to Dean Gillian Lester for her engagement
and thoughtful feedback throughout the writing process. Additionally, the author would
like to thank the Columbia Journal of Law and Social Problems staff for their tireless
editing and attention to detail.
I. INTRODUCTION
The past two decades have seen the rise of algorithmic management: the use of algorithms to allocate, manage, optimize, and evaluate workers across a wide range of industries.[1] This
trend, coupled with the widespread adoption of smartphones, has
given rise to what has been variously termed the “gig economy,”
the “sharing economy,” the “on-demand economy,” or the “platform economy”: an ill-defined[2] grouping meant to describe firms that facilitate peer-to-peer services via digital platform marketplaces. These include, most prominently, the transportation network companies Uber and Lyft, as well as firms providing a host of other services, such as moving, cleaning, delivery, repair, or even personal massage.[3] Proprietary algorithms match workers to customers who summon them with the tap of a smartphone, promising a seamless, optimized transaction to users on both sides of the market. In return, the firm providing the platform marketplace collects a percentage of the cost of the service in addition to valuable user data.[4]
The premise of the platform economy is simple: technology firms create app-based digital marketplaces where buyers and sellers can transact in perfect algorithmic harmony. Ostensibly, the interests of buyers, sellers, and platform providers are aligned: classical microeconomic theory predicts that nearly frictionless online marketplaces will be governed efficiently by supply and demand.[5]
1. See Min Kyung Lee et al., Working with Machines: The Impact of Algorithmic and Data-Driven Management on Human Workers, in CHI ’15 PROC. 33RD ANN. CONF. ON HUMAN FACTORS IN COMP. SYS. 1603 (Seoul, S. Kor., April 18–23, 2015).
2. See Charlie Warzel, Let’s All Join the AP Stylebook in Killing the Term “Ride-Sharing,” BUZZFEED NEWS (Jan. 8, 2015), https://www.buzzfeednews.com/article/charliewarzel/lets-all-join-the-ap-stylebook-in-killing-the-term-ride-shar [perma.cc/YRV7-ENCW].
3. These include TaskRabbit and Dolly for moving or handyman assistance; Handy for maid services; Postmates, DoorDash, and Caviar for food delivery; and Soothe for in-home massage services. See Jeff Desjardins, The 150 Apps that Power the Gig Economy, VISUAL CAPITALIST (May 6, 2019), https://www.visualcapitalist.com/150-apps-power-gig-economy [https://perma.cc/2GSZ-TF9U].
4. See Alex Moazed, Can Uber Reach Profitability?, INC. (Feb. 18, 2018), https://www.inc.com/alex-moazed/ubers-path-to-profitability.html [perma.cc/ZU7W-AQZE]. See also How the Gig Economy Will Bring Big Data to Every Market, INSIDE BIG DATA (Apr. 16, 2019), https://insidebigdata.com/2019/04/16/how-the-gig-economy-will-bring-big-data-to-every-market [https://perma.cc/B4EE-JJHA].
The more transactions occur, the more cus-
tomers’ needs are met, the more workers earn, and the more the
platform operators collect. Machine learning algorithms with
theoretically perfect information on the market and instructions
to maximize profits will get better and better at matching buyers
and sellers of services, and everybody wins.
The reality is not so simple. A closer look at the incentives
and constraints on platform firms illuminates a set of situations
where their interests diverge from, and may directly oppose,
those of their users. The vast asymmetries of information and
market power that firms enjoy over their users invite closer scru-
tiny of the power dynamics at play and the behavior of platform
firms compared with how they represent themselves to users.[6]
The remedies available to workers who are harmed by these ef-
fects are as yet undeveloped, but common law principles applied
to these novel challenges may yield the first steps towards a re-
gime for regulating the platform economy.
This Note analyzes a set of economic harms to workers
uniquely enabled by algorithmic work platforms and explores
common law torts as a remedy, using Uber and its driver-
partners as a case study.[7] Part II describes the rise of the “platform economy”[8] and surveys the current state of employment law as it applies to workers who use algorithmically-mediated smartphone platforms to directly mediate labor.
5. See Alison Griswold, Uber’s Secret Weapon Is Its Team of Economists, QUARTZ (Oct. 14, 2018), https://qz.com/1367800/ubernomics-is-ubers-semi-secret-internal-economics-department [perma.cc/Z5JQ-9VUU].
6. Ryan Calo & Alex Rosenblat, The Taking Economy: Uber, Information, and Power, 117 COLUM. L. REV. 1623, 1649 (2017).
7. Uber is one of the most widely-recognized and widely-studied work platforms. Analysis of harms resulting from decisions made in the design and implementation of machine learning algorithms [hereinafter “algorithmic harms”] to workers on the Uber platform will necessarily involve a degree of speculation, as Uber’s algorithm is a “black box” whose particulars are closely-guarded intellectual property. There is, however, a small but growing body of observations by researchers and drivers, in addition to publicly-available information about the app and analogies to algorithms used in similar contexts, that allows for inferences about design choices made by its developers and how those impact driver-partners. Id. at 1654.
8. There is no widely agreed-upon definition of this sector, nor a consistent terminology used by scholars for this group of workers. The terms “platform economy” and “platform worker” have been chosen because they are relatively neutral, in contrast to the misleadingly benign or altruistic-sounding “sharing economy” or the dismissive “gig economy.” The word “platform” also usefully serves to highlight the legal precarity of workers who rely on smartphone applications such as Uber, TaskRabbit, or Handy for their incomes. See generally Juliet B. Schor et al., Dependence and Precarity in the Platform Economy (2018) (unpublished paper), https://www.bc.edu/content/dam/bc1/schools/mcas/sociology/pdf/connected/Dependence%20and%20Precarity%20Aug%202018.pdf [https://perma.cc/YH9T-E2KJ] (on file with Colum. J.L. & Soc. Probs.).
Part III analyzes the design and function of machine learning algorithms, high-
lighting the Uber application to illustrate a range of potential
economic harms to workers enabled specifically by the two-sided
platform model and resulting directly from the firm’s design choices.[9]
Part IV explores possible legal and regulatory responses to these
emerging issues, proposing common law causes of action for
breach of the implied duty of good faith and fair dealing and the
tort of misrepresentation as a recourse to workers whose inter-
ests have been harmed by algorithmic design choices in platform
markets. This Note concludes by arguing that causes of action
against platform employers generally, and Uber in particular, are
viable under existing law and may represent a promising ap-
proach for protecting the interests of this new and growing class
of workers against abusive practices or economic harms unique to
algorithmically-mediated work.
II. THE RISE OF THE “PLATFORM ECONOMY” AND THE LEGAL STATUS OF PLATFORM WORKERS
To understand the worker-firm relationships that define the
platform labor model and the legal rights and responsibilities
they may give rise to, it is helpful to first examine how they
emerged and what they replaced. Part II.A identifies structural
economic and technological factors that increase the viability of
the platform model. Part II.B then surveys the current state of
labor law applied to the platform economy, focusing on gaps in
prevailing worker classification schemes and the litigation these
have given rise to. The design and structure of the algorithms
that mediate platform work is analyzed in greater detail in Part
III.
9. This Note analyzes in detail specific practices and design choices, both documented and hypothetical, building on the work of Alex Rosenblat, Ryan Calo, and others who have studied algorithmically-mediated work platforms and drawn attention to information asymmetries and potential for harm and abuse inherent in these models. See generally Calo & Rosenblat, supra note 6.
A. STRUCTURAL FACTORS SUGGEST GROWTH OF PLATFORM
MODEL
There is no consensus on how big the platform economy is, or
how big it will get, but high-end estimates put the number of full-
time platform workers at fifteen percent of the working population.[10] Setting aside the hype[11] and the somewhat ambiguous[12] data[13] on the growth of gig or platform work, economic theory suggests that this model offers substantial efficiencies.
Platform company boosters assert that these new organizational models have disrupted the most basic unit of economic organization: the firm.[14]
In The Nature of the Firm, economist
Ronald Coase provided a theory to explain why firms emerged in
markets where individuals were free to independently contract
for goods or services in supposedly-efficient open markets.[15] A firm is, essentially, a closed internal market; it is able to subsist to the extent that the benefits of internalized transaction costs exceed the costs of overhead (maintaining capital and labor) and inefficiencies of resource allocation.[16]
10. See Steven Dubner, What Can Uber Teach Us About the Gender Pay Gap?, FREAKONOMICS: RADIO (Feb. 6, 2018), http://freakonomics.com/podcast/what-can-uber-teach-us-about-the-gender-pay-gap [perma.cc/V7A5-G2RF].
11. Jon Bruner, Platform Economies: A Conversation with Erik Brynjolfsson and Jon Bruner, O’REILLY MEDIA (2015), https://www.oreilly.com/radar/platform-economies [https://perma.cc/DT43-ZYCC].
12. See Caleb Gayle, US Gig Economy: Data Shows 16M People in ‘Contingent or Alternative’ Work, GUARDIAN (June 7, 2018), https://www.theguardian.com/business/2018/jun/07/america-gig-economy-work-bureau-labor-statistics [https://perma.cc/B4G9-AF35]. Many workers interviewed were unsure how to define their app-based platform work; Uber drivers, for example, expressed confusion as to whether they were “employees” of Uber or independent small business owners. Id.
13. See Natasha Bach, Everyone Thought the Gig Economy Was Taking Over. Turns Out It’s Not, FORTUNE (June 8, 2018), http://fortune.com/2018/06/08/gig-economy-shrinking-data [perma.cc/XY3D-UMT9]. The U.S. Department of Labor has been cautious in defining platform work; the most recent Bureau of Labor Statistics survey included a caveat that it would report on app-based, electronically-mediated employment at a later date. See Contingent and Alternative Employment Arrangements Summary, U.S. BUREAU OF LABOR STAT. (June 7, 2018), https://www.bls.gov/news.release/conemp.nr0.htm [perma.cc/3ZFE-8XXZ].
14. See Bruner, supra note 11.
15. See generally R.H. Coase, The Nature of the Firm, 4 ECONOMICA 386 (1937). Classical economic models had previously ignored the costs inherent in contracting on an open market, i.e., the searching, bargaining, and policing of rules endemic to all transactions. Firms reduce these by formalizing relationships with laborers and suppliers, reducing the need to search and bargain in the course of each individual transaction. See id.
16. Id.
Information technology has been altering this equation for
some time. Coase, after all, was writing in an era predating the
modern fax machine. In the middle of the twentieth century, the
vertically-integrated industrial conglomerates that dominated
Western economies increasingly sought to reduce the costs of
maintaining a workforce by subcontracting, franchising, and ex-
ternalizing their supply chains.[17]
Advances in information tech-
nology made it possible for globalized, “fissured” firms to exter-
nalize many of the costs of production while maintaining central-
ized control.[18] For example, Nike does not exactly “make” shoes: it directs their design, marketing, manufacture, and distribution.[19]
Digital networks have accelerated this trend. In 2002, Yochai
Benkler used Coase’s theory of transaction costs to explain the
then-emerging trend of networked peer-to-peer production.[20]
He
applied Coase’s theory to online networks and marketplaces, ar-
guing that individuals with differing motivations and goals can
nonetheless productively collaborate on large-scale projects via
digital networks.[21] This third model of production, the platform, found its purest expressions in decentralized platforms
such as Wikipedia and Napster, where individuals were able to
participate in information exchange via a digital network accord-
ing to their own motivations and abilities, without central control
or direction.
Over the past fifteen years, digital platforms have proliferated.[22]
The rise of Big Data and machine learning, along with the ubiquity of smartphones, has unlocked a new market for firms, workers, and consumers in the form of “on-demand” “gig economy” services.[23]
17. See generally DAVID WEIL, THE FISSURED WORKPLACE: WHY WORK BECAME SO BAD FOR SO MANY AND WHAT CAN BE DONE TO IMPROVE IT 122–24, 159–60 (2014).
18. Id. at 53–55.
19. “Nikefication” refers to the transformation of a firm into a “nexus of contracts.” See Gerald F. Davis, What Might Replace the Modern Corporation? Uberization and the Web Page Enterprise, 39 SEATTLE U. L. REV. 501, 502 (2016). Nike was one of the first firms to outsource the production of shoes to suppliers overseas while still maintaining control of the brand. See id.
20. Yochai Benkler, Coase’s Penguin, or, Linux and The Nature of the Firm, 112 YALE L.J. 369, 375 (2002).
21. Id.
22. Professor Julie Cohen has argued that platforms (defined expansively to include online marketplaces like eBay, or desktop and mobile operating systems such as Android) represent nothing less than a replacement of markets as the core organizational form of the information economy. See generally Julie Cohen, Law for the Platform Economy, 51 U.C. DAVIS L. REV. 133 (2017).
Smartphones gather reams of data about their
users, allowing firms to track the behavior of all participants
within a platform market and use machine-learning algorithms
to gather, analyze, and leverage market information.[24]
Platforms
grant the firms that operate them extraordinary information ad-
vantages while simultaneously raising questions about competi-
tion, privacy, and manipulation.[25]
The firms themselves have not been shy about promoting the
tantalizing promise of machine learning algorithms and their
ability to “perfect” markets, and in doing so distinguish them-
selves from traditional firms.[26] Companies such as Lyft and Uber efficiently “disrupt” traditional businesses such as taxi operators, but insist that they are not becoming what they displace. They instead present themselves as “transportation network companies” or simply “technology companies.”[27] In their telling, they operate in the new “sharing economy” as benevolent innovator-matchmakers enabling independent entrepreneurs to drive, host, and share their way to a freer, better work-life balance. However, as in other markets where algorithms are entrusted with decision-making, there is great potential for harm and abuse.
B. WORKER CLASSIFICATION AND TRANSPORTATION NETWORK
COMPANIES: LITIGATION WITHOUT RESOLUTION
Most of the legal scrutiny in this emerging field has focused on
the classification of the workers who use these platforms: are
they employees, independent contractors, or something in be-
tween?[28] Recent high-profile cases brought by drivers against
Uber and Lyft have focused on the issue of worker classification and have highlighted the difficulties of applying existing employment law to novel work relationships.[29]
23. Professor Orly Lobel has described a “transaction costs revolution” enabled by algorithm-driven digital platforms that promises a virtuous cycle of efficiency to “perfect” the market. See Orly Lobel, The Law of the Platform, 101 MINN. L. REV. 87, 106–07 (2016).
24. See Kenneth A. Bamberger & Orly Lobel, Platform Market Power, 32 BERKELEY TECH. L.J. 1051, 1053 (2017).
25. Id.
26. Lobel, supra note 23, at 99–101. Platform companies have strong legal and public-relations incentives to define themselves in particular ways; “definitional defiance” is, in many cases, central to platform companies’ business models. Id. These are discussed in more detail in Part III, infra.
27. See O’Connor v. Uber Techs., Inc., 82 F. Supp. 3d 1133, 1137 n.10 (N.D. Cal. 2015).
28. See Pamela A. Izvanariu, Matters Settled but Not Resolved: Worker Misclassification in the Rideshare Sector, 66 DEPAUL L. REV. 133, 137 (2016).
O’Connor v. Uber Technologies was the first major test of
worker classification as applied to the platform economy.[30] In
2015, a group of drivers filed a putative class action lawsuit
against Uber in the Northern District of California asserting that
they were employees of Uber and thus entitled to various protec-
tions under California law. Uber sought a summary judgment
declaring that drivers who used its platform were independent
contractors as a matter of law. Judge Edward Chen denied Ub-
er’s motion, holding that the plaintiffs had met their burden of
showing performance of services and that a genuine issue of ma-
terial fact remained as to the extent of control exercised over the
drivers by Uber, a key element of the classification test.[31] In the
opinion, Judge Chen rejected Uber’s claim that it is merely a
“technology company” rather than a transportation provider.
Judge Chen’s opinion contained language that has potentially
far-reaching implications for the platform economy at large, stat-
ing:
Uber engineered a software method to connect drivers with
passengers, but this is merely one instrumentality used in
the context of its larger business. Uber does not simply sell
software; it sells rides. Uber is no more a ‘technology company’ than Yellow Cab is a ‘technology company’ because it uses CB radios to dispatch taxi cabs.[32]
Following the denial of summary judgment, a federal appeals
court agreed to review an order certifying the drivers as a class,
leading Uber to ultimately settle with the plaintiffs for $100 million.[33] Drivers for Lyft brought a nearly identical suit in California in 2015; it, too, was settled along similar lines.[34]
29. See, e.g., Cotter v. Lyft, Inc., 176 F. Supp. 3d 930 (N.D. Cal. 2016).
30. See O’Connor, 82 F. Supp. 3d at 1133.
31. Id. at 1141, 1148.
32. Id. at 1141.
33. Uber Drivers Remain Independent Contractors as Lawsuit Settled, 30 No. 20 WESTLAW J. EMP. 1, 1 (Apr. 26, 2016). According to the terms of the settlement agreement, the status of drivers as independent contractors was unchanged, though the plaintiffs’ attorney maintained that nothing in the agreement prevented future courts or agencies from determining that drivers are employees. Id.
34. See Izvanariu, supra note 28, at 161.
During the same period that these high-stakes cases were
moving through the court system, Uber and Lyft embarked on a
nationwide lobbying and public-relations campaign aimed at
fending off unwelcome regulation. The rapid expansion of their
services was mirrored by the aggressiveness with which they
conducted their legislative campaign.[35] Uber typically enters a
marketplace and begins operating without consulting local au-
thorities, pressing them to update local regulations on terms
favorable to the company.[36] This has led to friction with local au-
thorities and incumbent taxi providers, who take exception to
what they view as the arrogant flouting of local laws. It also
leads to a rapidly-growing base of drivers and riders who can be
mobilized to apply pressure to local officials.[37] Uber has married
this insurgent approach with a traditional lobbying apparatus.
As of 2015, Uber had registered 250 lobbyists and twenty-nine
lobbying firms nationwide, outspending the likes of Wal-Mart.[38] By 2016, Uber was paying 370 active lobbyists in forty-four states.[39]
In the past five years, forty-eight states and the District of Co-
lumbia have passed legislation pertaining specifically to self-
identified transportation network companies (TNCs).[40] In forty-
one states, such laws preempt local municipal regulation of these
industries to varying degrees.[41] However, Uber and Lyft have
scored significant successes, perhaps most crucially in defining
themselves as TNCs rather than as taxi dispatch services. Uber
and Lyft have also been successful in gaining carve-outs to ensure
that drivers who use their platforms are not classified as employ-
ees; in twenty-five states, drivers are explicitly defined as, or presumed to be, independent contractors, and in eleven states, they have been granted a mix of specific exemptions from state employment laws.[42]
35. Karen Weise, This Is How Uber Takes Over a City, BLOOMBERG (June 23, 2015), https://www.bloomberg.com/news/features/2015-06-23/this-is-how-uber-takes-over-a-city [perma.cc/4MUG-9GP4].
36. See id.
37. See id.
38. See id.
39. See Joy Borkholder et al., Uber State Interference: How Transportation Network Companies Buy, Bully and Bamboozle Their Way to Deregulation, NAT’L EMP. L. PROJECT 5 (Jan. 2018), https://www.forworkingfamilies.org/sites/default/files/publications/Uber%20State%20Interference%20Jan%202018.pdf [https://perma.cc/FKW9-28GL].
40. Id.
41. These laws are not uniform, and often represent “compromises” between Uber and Lyft and local authorities seeking to impose requirements such as insurance minimums, background checks for drivers, and other measures to protect the public. Id.
In fairness, the grassroots support TNCs receive from their
users is genuine. Services like Uber and Airbnb are wildly popu-
lar with consumers, and many consumer advocates have praised
their utility. Uber and Lyft, for example, provide reliable on-
demand transportation to low-income urban areas that previously lacked taxi services.[43]
These firms’ legislative victories would
not have been possible without pressure generated by users of the
services, both riders and drivers, who were willing to vocally sup-
port these companies.
Barring a significant legislative reversal, platform workers
seem destined to remain independent contractors for the foreseeable future.[44]
For that reason, scholars have begun to explore
alternatives to employee classification as avenues for protecting
workers or regulating platform firms, such as consumer protection law.[45]
Part III of this Note builds on that work by identifying
specific algorithmic harms that may be inflicted on platform
workers and exploring possible causes of action that a worker
might have against the operators of a platform algorithm.
III. ALGORITHMIC DESIGN AND THE POTENTIAL FOR HARMS TO
WORKERS
As big data and machine learning algorithms increasingly permeate modern life, their use poses a growing threat to individual rights and values.[46] Courts and legal scholars are beginning to grapple with the implications of algorithmic decision-making in contexts as varied as credit scoring,[47] medical malpractice,[48] predictive policing,[49] and hiring.[50]
42. Id. at 13.
43. One study found that 99.8% of households in Los Angeles were able to access the services of ridehailing companies, including neighborhoods with limited car ownership, representing a significant increase in equitable access to mobility. See generally Anne E. Brown, Ridehail Revolution: Ridehail Travel and Equity in Los Angeles, UCLA ELECTRONIC THESES AND DISSERTATIONS (2018), https://escholarship.org/uc/item/4r22m57k [https://perma.cc/RM63-TC7P].
44. In September of 2019, the state of California passed legislation aiming to reclassify platform economy workers as employees, rather than independent contractors, requiring that businesses provide them with labor protections such as a minimum wage and paid parental leave. See AB-5 Worker Status: Employees and Independent Contractors, Cal. Assembl. (Cal. 2019); see also Alexia F. Campbell, California Just Passed a Landmark Law to Regulate Uber and Lyft, VOX (Sep. 18, 2019), https://www.vox.com/2019/9/11/20850878/california-passes-ab5-bill-uber-lyft [https://perma.cc/YL3H-DARR].
45. See generally Calo & Rosenblat, supra note 6.
46. See EXEC. OFFICE OF THE PRESIDENT, BIG DATA: SEIZING OPPORTUNITIES, PRESERVING VALUES 1–3 (May 2014), https://obamawhitehouse.archives.gov/sites/default/files/docs/big_data_privacy_report_may_1_2014.pdf [https://perma.cc/8H6H-2P69]. “Big Data” has no single definition but generally refers to the practice of using algorithms to analyze massive digital data sets, identifying patterns and generating insights made possible by vast computing resources. Id. at 2.
This emerging body of
scholarship explores the harms that can arise from the use of ma-
chine learning algorithms to make decisions and also adds to the
growing debate about how to address these harms and who
should be responsible for doing so.[51]
In the employment context, legal scholars have generally fo-
cused on bias and discrimination in hiring decisions.[52]
To sup-
plement traditional practices, such as interviewing candidates
and considering their education and experience, machine learn-
ing algorithms process thousands of data points about an indi-
vidual, often gathered by third parties.[53]
While proponents argue that algorithms hold the promise of removing “human bias” from hiring decisions, skeptics note that data sets themselves are often not neutral, and the uncritical use of workplace analytics may actually exacerbate or introduce new forms of bias along lines of class, race, and gender.[54]
47. See Danielle Keats Citron & Frank Pasquale, The Scored Society: Due Process for Automated Predictions, 89 WASH. L. REV. 1, 17–18 (2014).
48. See Shailin Thomas, Artificial Intelligence, Medical Malpractice, and the End of Defensive Medicine, PETRIE-FLOM CTR.: BILL OF HEALTH (Jan. 26, 2017), http://blog.petrieflom.law.harvard.edu/2017/01/26/artificial-intelligence-medical-malpractice-and-the-end-of-defensive-medicine [https://perma.cc/8835-429Q].
49. See generally Andrew Guthrie Ferguson, Big Data and Predictive Reasonable Suspicion, 163 U. PA. L. REV. 327 (2015).
50. See Pauline T. Kim, Data-Driven Discrimination at Work, 58 WM. & MARY L. REV. 857, 884–92 (2017). Predictive algorithms designed to be neutral with respect to protected characteristics, such as race and gender, nonetheless deliver results that are biased along these lines due to the nature of machine learning and design choices that fail to account for proxy variables or other correlations that deliver biased results. Id.
51. Citron & Pasquale, supra note 47, at 14. Naturally, issues that implicate constitutional rights have attracted the most criticism. Disparate and adverse impacts of algorithmic decisions on protected classes have been documented in a range of circumstances; for example, credit rating algorithms have been found to deliver biased results against women and racial minorities, leading to diminished access to credit for these groups and entrenching inequality. Id. Similarly, algorithms meant to predict recidivism have delivered troubling outcomes that raise questions about equal protection. For example, in State v. Loomis, the Wisconsin Supreme Court held that the use of an algorithmic recidivism risk assessment tool to aid sentencing did not violate the defendant’s due process rights, despite the inclusion of his gender as a factor of consideration. 881 N.W.2d 749, 767 (Wis. 2016).
52. See generally Kim, supra note 50.
53. Data that is “neutral” along lines of protected class may nonetheless serve as a proxy for protected characteristics. For example, a firm may seek to improve employee retention by favoring candidates who live closer to the workplace. This “neutral” choice may, however, reinforce a legacy of racially discriminatory housing policies that have led to segregation in many cities. Id. at 861, 863.
Most scholarship on algorithmic harms in the platform econ-
omy has also focused on racial discrimination and bias. Evidence
from multiple studies and experiments suggests that platforms
that incorporate a reputational or rating component will often
reflect the racial biases of their service providers: for instance, a
study on Airbnb revealed that guests with African-American-sounding names are sixteen percent less likely to have their reservation requests accepted,[55] while another study focusing on Ub-
er showed that female and African-American passengers suffered
from various forms of discrimination by drivers, including longer
wait times, increased cancellations, and, particularly in the case
of female riders, more circuitous routes taken by drivers.[56]
While these problems deserve attention, there is reason to
think that platform workers are vulnerable to substantial eco-
nomic harms that are less easy to identify and detect. A number
of scholars, notably Alex Rosenblat and Ryan Calo, have drawn
attention to the incentives and opportunities for abuse inherent
in the platform model.[57] The information asymmetries that define platform marketplaces allow firms to leverage information in ways that exploit the divergent interests of workers and firms and undermine the joint-profit premise of the platform economy.[58]
It is worth noting here that algorithmic management of plat-
form workers is different from other potentially problematic algo-
rithmic-decision situations (such as a hiring algorithm unfairly
denying an applicant) because the potential harms are not one-
off; algorithmically-determined, labor-assignment transactions
are recurrent.[59] This is crucial because even marginal harms that are individually insignificant will accumulate over time. A five percent inefficiency on a given route may be negligible, perhaps costing a driver only a few dollars or cents, but an algorithm that results in consistent five percent losses for a driver could mean thousands of dollars of lost income over the course of a year.[60]
54. See id. at 865. Systematic errors, such as errors in the data or data collection that reflects human bias, may inadvertently deliver biased results despite the intentions of algorithm designers. See id.
55. See Benjamin Edelman et al., Racial Discrimination in the Sharing Economy: Evidence from a Field Experiment, 9 AM. ECON. J. 1, 2 (2017).
56. Yanbo Ge et al., Racial and Gender Discrimination in Transportation Network Companies 18–19 (Nat’l Bureau of Econ. Research, Working Paper No. 22776, 2016).
57. See generally Calo & Rosenblat, supra note 6.
58. See id.
59. See generally Lee et al., supra note 1.
This observation is intimately linked with the problem of
calculating damages.[61] Algorithmic outputs that cause marginal
losses to workers have the potential to be substantial over recur-
rent transactions and could amount to vast sums when consider-
ing millions of drivers as a class. This Part addresses this more subtle problem: how to provide recourse to platform workers who are victims of algorithmic design choices that are opaque and marginal, yet in the aggregate cause substantial loss.
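To make the arithmetic concrete, a minimal sketch follows; the trip volume, per-trip payout, and class size are hypothetical figures chosen for illustration, not Uber data.

```python
# Illustrative arithmetic only: all figures below are hypothetical assumptions.
trips_per_year = 4000        # assumed full-time driver trip volume
avg_payout_per_trip = 10.00  # assumed average net payout per trip, in dollars
loss_rate = 0.05             # the five percent marginal inefficiency discussed above

per_trip_loss = avg_payout_per_trip * loss_rate
annual_loss_per_driver = per_trip_loss * trips_per_year
print(f"Per-trip loss: ${per_trip_loss:.2f}")                       # $0.50
print(f"Annual loss per driver: ${annual_loss_per_driver:,.2f}")    # $2,000.00

# Scaled across a hypothetical class of one million drivers:
class_size = 1_000_000
print(f"Aggregate annual loss: ${annual_loss_per_driver * class_size:,.0f}")  # $2,000,000,000
```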
A. SPECIFIC HARMS FROM THE DESIGN AND USE OF MACHINE
LEARNING ALGORITHMS
Before addressing the use of machine learning algorithms in
the platform economy and the impacts on workers, it is helpful to
examine how these algorithms operate and the extent to which
they are designed and controlled by their operators. Closely ex-
amining the definitions, inputs, parameters, adjustments, and
developers’ ability to even understand how machine learning al-
gorithms make decisions is crucial to evaluating the potential
liabilities faced by firms for any harms resulting from an algo-
rithm’s use.
Machine learning algorithms have myriad designs and appli-
cations, but they share certain basic characteristics. Broadly de-
fined, machine learning refers to an automated process for identifying relationships between variables in a data set and making predictions based on those relationships.[62] Those relationships accumulate into a “model,” or algorithm, which can then be used to make predictions or decisions based on new data.[63]
At a fundamental level, an algorithm’s job is to discriminate; data mining is itself a form of rational discrimination, and often one with legitimate ends and means.[64] The problems arise when this discrimination takes place along lines that are legally or ethically impermissible and place individuals at a systematic disadvantage.[65]
60. Jacob Bogage, How Much Uber Drivers Actually Make Per Hour, WASH. POST (June 27, 2016), https://washingtonpost.com/news/the-switch/wp/2016/06/27/how-much-uber-drivers-actually-make-per-hour [perma.cc/J88P-LKNK].
61. See infra Part IV.C.5.
62. KEVIN P. MURPHY, MACHINE LEARNING: A PROBABILISTIC PERSPECTIVE 1 (2012).
63. MICHAEL BERRY & GORDON LINOFF, DATA MINING TECHNIQUES FOR MARKETING, SALES, AND CUSTOMER RELATIONSHIP MANAGEMENT 8–11 (2d ed. 2004).
64. See Solon Barocas & Andrew D. Selbst, Big Data’s Disparate Impact, 104 CAL. L. REV. 671, 677 (2016).
These problems are further complicated by the extent to which algorithms are in a “black box,” opaque not only to those affected by algorithmic decisions, but also to the very designers and operators of the algorithms themselves.[66]
A close examina-
tion of the human role in various stages of algorithmic design and
operation is thus essential to understanding potential liability.
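As a concrete illustration of the pattern this Part describes, the following sketch shows machine learning in miniature: a model is fit to historical data and then used to decide about new data. The borrower-default framing, the variables, and the data are hypothetical, and the library (scikit-learn) is simply one common implementation.

```python
# A minimal sketch: a model "learns" relationships from historical data,
# then makes predictions about new data. All data here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical observations: [income, existing_debt] -> defaulted? (1 = yes)
X_train = np.array([[60_000, 5_000], [32_000, 18_000], [85_000, 2_000], [28_000, 22_000]])
y_train = np.array([0, 1, 0, 1])

# The learned relationships accumulate into a "model."
model = LogisticRegression().fit(X_train, y_train)

# "Running the model": a decision about a new, unseen applicant.
new_applicant = np.array([[45_000, 12_000]])
print(model.predict(new_applicant))        # predicted class (0 or 1)
print(model.predict_proba(new_applicant))  # predicted probabilities
```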
The process of developing and deploying machine learning al-
gorithms can be broken down into eight steps, each with varying
degrees of human input and transparency.[67]
David Lehr and
Paul Ohm have divided steps one through seven, which they
characterize as “playing with the data,” or developing, training,
and refining the algorithm, from the last step, which they call
“running the model,” i.e., deploying the algorithm to process new
data and make decisions in the field.[68]
The first three of Lehr and Ohm’s steps involve setting the
basic parameters of the algorithm: problem setting, or defining
abstract goals (i.e., “predicting whether a borrower is likely to
default”), assigning specific variables and measurements to these
goals, and choosing which variables will be included in the “training data” that the algorithm will use to build its model of relationships.[69]
At this stage, the selection of parameters is entirely
in the hands of the algorithm’s designers and necessarily involves
deliberate normative choices. While the training data itself is
“objective,” it may contain errors or reflect pre-existing human
biases.[70]
Developers may choose to “clean” this data, either by
substituting estimates for missing variables or deleting subjects
with incomplete data; in any event, this too involves choices on
the part of the developers.[71]
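A short sketch may make the “cleaning” step concrete; the data set is hypothetical, and each branch below is one of the human choices the text describes.

```python
# A sketch of the data-"cleaning" choices described above, using pandas.
# The data set and fields are hypothetical; each option below is a
# normative decision made by a human developer, not by the algorithm.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "income": [60_000, np.nan, 85_000, 28_000],
    "existing_debt": [5_000, 18_000, np.nan, 22_000],
})

# Choice 1: substitute an estimate (median imputation) for missing values.
imputed = df.fillna(df.median(numeric_only=True))

# Choice 2: instead, delete any subject with incomplete data.
dropped = df.dropna()

print(len(imputed), len(dropped))  # 4 subjects retained vs. 2
```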
65. See id.
66. See Andrew D. Selbst & Solon Barocas, The Intuitive Appeal of Explainable Machines, 87 FORDHAM L. REV. 1085, 1094 (2018).
67. David Lehr & Paul Ohm, Playing with the Data: What Legal Scholars Should Learn About Machine Learning, 51 U.C. DAVIS L. REV. 653, 655 (2017).
68. Id.
69. Barocas & Selbst, supra note 64, at 677–92. Training data consists of a pre-selected set of input data where the target outcome is already known. See id. at 680. Designers use training inputs to evaluate the performance of an algorithm’s results against the known empirical results, and thus refine and improve their performance. See Tom Dietterich, Overfitting and Undercomputing in Machine Learning, 27 ACM COMPUTING SURVEYS 326, 326 (1995).
70. Lehr & Ohm, supra note 67, at 665.
71. Id. at 681–82.
Having selected variables and assembled training data, devel-
opers will review summary statistics of each input and output
variable to correct for common errors.[72]
Developers review sum-
mary statistics such as the mean, median, and standard devia-
tion of each variable and identify outliers that might distort a
given model.[73]
In addition, they seek to identify and remove
“false positive” relationships and correlations — a problem known
as “overfitting.”[74]
Machine learning algorithms will often identify
correlations between variables that have no plausible connection,
for example, correlating high intelligence with a preference for
curly fries.[75]
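The review step described above can be sketched briefly; the fare figures are invented, and the cutoff used to flag the outlier is itself an arbitrary, human-chosen threshold.

```python
# A sketch of the review step described above: summary statistics and a
# simple outlier check. The fare data is hypothetical.
import pandas as pd

fares = pd.Series([7.50, 9.25, 8.10, 6.80, 9.90, 184.00])  # one suspect entry

print(fares.describe())  # count, mean, std, min, quartiles, max

# Flag entries far outside the interquartile range -- a common, and
# human-chosen, rule of thumb for spotting values that might distort a model.
q1, q3 = fares.quantile(0.25), fares.quantile(0.75)
fence = q3 + 1.5 * (q3 - q1)
print(fares[fares > fence])  # flags the 184.00 entry for human review
```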
Having reviewed the data, developers must then select a model: generally speaking, selecting output variables to optimize according to selected criteria.[76]
For instance, in a credit scoring
algorithm, the goal of the model may be minimizing the risk of
default, or, for a taxi dispatch algorithm, maximizing the fares
collected. Once a model has been selected, developers “train” the algorithm by running the data set through the model, allowing the algorithm to “learn” the rules to make decisions in accordance with the developers’ goals.[77]
This occurs over multiple iterations,
as developers assess the performance of the algorithm and make
adjustments to the model.[78]
It is the “learning” stage that is most
opaque: the actual source code that results from the training and
learning process may be unintelligible even to its developers if it
was not specifically designed to be explainable.[79]
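The iterative train-and-adjust loop, and the overfitting problem noted earlier, can be sketched in a few lines; the data, the model type, and the complexity settings are all hypothetical choices of the kind a developer would make.

```python
# A sketch of iterative training: fit a model, compare its performance on
# the training data against held-out data, and adjust. A large gap between
# the two scores is the "overfitting" problem noted above. Hypothetical data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + 0.5 * rng.normal(size=200) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for depth in (2, 5, None):  # developers adjust model complexity across iterations
    tree = DecisionTreeClassifier(max_depth=depth, random_state=0).fit(X_train, y_train)
    print(depth, tree.score(X_train, y_train), tree.score(X_test, y_test))
# Unlimited depth scores perfectly on training data but worse on new data.
```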
Only after this extensive development and training process is the algorithm deployed in the real world to process new data. “Running algorithms” will often dynamically adjust to incorporate new data, regularly and automatically “retraining” to improve performance.[80] Developers can continue to monitor, adjust, and retrain algorithms as they operate in the real world.
72. Id. at 683–85.
73. Id.
74. Id.
75. Michal Kosinski et al., Private Traits and Attributes Are Predictable from Digital Records of Human Behavior, 110 PROC. NAT’L ACAD. SCI. 5802, 5804 (2013).
76. Lehr & Ohm, supra note 67, at 687–95.
77. Id. at 695–97.
78. Id. at 698.
79. Joshua A. Kroll et al., Accountable Algorithms, 165 U. PA. L. REV. 633, 640–41 (2017). Merely disclosing the source code or program audit logs is insufficient to allow for an explanation of “why” an algorithm made a decision. For full transparency and accountability, algorithm designers need to deliberately build in tools that can communicate partial information about program processes in a way that is intelligible. Id. at 647–50.
80. Lehr & Ohm, supra note 67, at 702.
Thus,
while much of the commentary on the harmful effects of algo-
rithmic decisions assumes that the machine learning process is
almost fully automated, and thus somehow objective, nonintuitive
or inscrutable, the reality is that deliberate, normative choices
made by humans are involved throughout the process of building
and deploying machine learning algorithms.[81]
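A minimal sketch of the dynamic retraining just described follows, using incremental updates; the data stream is simulated and the model choice is an assumption.

```python
# A sketch of a deployed, dynamically "retraining" model: each new batch of
# field data nudges the model's parameters, so the running model drifts away
# from the one its developers originally reviewed. Hypothetical data.
import numpy as np
from sklearn.linear_model import SGDClassifier

model = SGDClassifier(random_state=0)
rng = np.random.default_rng(1)

# Initial training on historical data.
X0 = rng.normal(size=(100, 3))
y0 = (X0[:, 0] > 0).astype(int)
model.partial_fit(X0, y0, classes=[0, 1])

# In the field: incremental updates as new observations arrive.
for _ in range(5):
    X_new = rng.normal(size=(20, 3))
    y_new = (X_new[:, 0] > 0).astype(int)
    model.partial_fit(X_new, y_new)
```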
B. DIVERGENT INTERESTS AND PROBLEMATIC INCENTIVES IN
THE TWO-SIDED MARKET PLATFORM MODEL
There is a fundamental problem with the two-sided platform
market that is glossed over by sunny proclamations about the
virtues of the platform economy: a persistent oversupply of work-
ers benefits both customers and the operators of the platforms
themselves, while driving down wages and work opportunities for
workers.[82] This problem is compounded by the fact that the
mechanisms of platform markets are opaque and entirely in the
hands of the firms that operate them.[83] For all the talk about
platform work “liberating” workers or giving them increased flexibility, the fact remains that the vast amounts of data and the means to leverage it are in the hands of billion-dollar privately-held corporations, a paradox that gives them even more power than in a traditional marketplace.[84]
In a traditional firm, it is taken for granted that low-level employees need not be involved in high-level managerial decisions, but in the context of platform work, opacity and power asymmetry are complicated by a relationship promising workers flexibility and autonomy.[85]
81. Selbst & Barocas, supra note 66, at 1109–15.
82. See Noam Scheiber, How Uber Uses Psychological Tricks to Push Drivers’ Buttons, N.Y. TIMES (Apr. 2, 2017), https://www.nytimes.com/interactive/2017/04/02/technology/uber-drivers-psychological-tricks.html [perma.cc/WY8C-7RTR].
83. Neil Richards & Jonathan King, Three Paradoxes of Big Data, 66 STAN. L. REV. 41, 42–43 (2013). The platform economy illustrates what Professors Neil Richards and Jonathan King have identified as two paradoxes of machine learning, specifically the “Transparency Paradox” and the “Power Paradox,” both of which raise salient issues in the context of work. The “Transparency Paradox” is that firms are able to collect ever-growing volumes of data about workers and their performance, yet the collection of this data is almost imperceptible and its uses opaque. The “Power Paradox” is the fact that big data analytics give large, powerful actors, such as governments and corporations, unprecedented insights and ability to leverage them over individuals. Id. at 44–45.
84. Id. at 44.
The incentives and potential for harms or
abuse in this emerging area merit more consideration, especially
when the owners and operators of the algorithms in question dis-
claim a formal employment relationship with the users who de-
pend on their platforms for income. Workers who are led to misunderstand their relationship to a platform firm are more vulnerable to manipulation and abuse.
Uber calls its drivers “driver-partners,” suggesting a joint-
profit-maximizing enterprise.[86] But the partnership is not equal.
As the all-seeing intermediary, Uber enjoys near-total control in
determining not just individual offers of driving assignments, but
the overall strategy and goals of the firm.[87] Most crucially, Uber
has sole knowledge of, and discretion over, the parameters, data
inputs, and goals of its dispatch algorithm.
This tension is problematic because the interests of a venture-
financed technology firm playing to capture and keep a multi-
billion-dollar market are simply not the same as those of a work-
er who drives so that she can pay the bills at the end of the
month. Uber is valuable, but it is not profitable.[88] The privately-
held company was estimated to be worth $76 billion in August of
2018 (up from $48 billion in February of the same year) despite
losing money every year since its inception.[89] Many tech startup
firms seek to leverage network effects to claim a winner-take-all
market position; Facebook, for example, thrives only because eve-
rybody uses it, making it exceedingly difficult for challengers to
enter the market and compete.[90] The dominant strategy for any platform seeking long-term profitability is to rapidly grow its user base while maintaining operating losses that are sustainable only in the short term.
85. Alex Rosenblat & Luke Stark, Algorithmic Labor and Information Asymmetries: A Case Study of Uber’s Drivers, 10 INT’L J. OF COMM. 3758, 3759 (2016).
86. See generally Jonathan Hall & Alan Krueger, An Analysis of the Labor Market for Uber’s Driver-Partners in the United States, 71 INDUS. & LAB. REL. REV. 705 (2016).
87. Rosenblat & Stark, supra note 85, at 3758.
88. See Breaking Down Uber’s Valuation: An Interactive Analysis, FORBES (Feb. 22, 2018), https://forbes.com/sites/greatspeculations/2018/02/22/breaking-down-ubers-valuation-an-interactive-analysis [perma.cc/MNA4-JSD5].
89. Alex Barinka & Eric Newcomer, Uber Valued at $120 Billion in an IPO? Maybe, BLOOMBERG (Oct. 16, 2018), https://www.bloomberg.com/news/articles/2018-10-16/uber-valued-at-120-billion-in-an-ipo-maybe [perma.cc/W4YS-XQJ5].
90. Gigi Levy Weiss, Network Effects Are Becoming Even More Important on Emerging Platforms, FORBES (Mar. 18, 2018), https://www.forbes.com/sites/startupnationcentral/2018/03/18/why-a-network-effect-is-the-only-way-your-startup-can-win/#77f96cfc7527 [perma.cc/AEM4-3FD3]; see also supra Part II.A (describing the economics of the platform model).
Uber’s valuation (and ability to continue
to attract investment) depends on its continued growth of month-
ly active users.[91] Uber claims to view its drivers as an asset, but
high turnover among them has done little to dampen the firm’s
value.[92]
This disconnect points to a fundamental problem for platform
workers inherent in the model of the two-sided market for rides: a
persistent oversupply of drivers benefits both riders and the firm
that operates the platform.[93] A surplus of drivers on the platform
results in a shorter average wait time for riders, as there is more
likely to be a driver nearby. Furthermore, because the surge
pricing algorithm determines fares by matching the supply of
available drivers against the demand of prospective riders, a reg-
ular overbalance of supply results in consistently lower fares: great news for riders, but harmful to the earnings of drivers, who
do not earn a guaranteed hourly wage and are solely responsible
for the costs of maintaining their vehicles.[94] Uber, like other plat-
form operators, has effectively outsourced its costs of production.
It costs the firm nothing (and indeed, benefits it tremendously) to
have its drivers idling, waiting for a fare, and so its profits are
limited only by its ability to satisfy rider demand.
In sum, Uber leads drivers to believe that the interests of rid-
ers, drivers, and the firm are aligned, when in fact they are di-
vergent and often opposed. Uber enjoys total control of its algo-
rithm and faces strong incentives to design it in such a way as to
maximize its own growth and earnings at drivers’ expense. The problem is that the novel types of harms that drivers may incur as a result are often individually small yet significant in the aggregate, and invisible to workers and regulators.
91. Breaking Down Uber’s Valuation: An Interactive Analysis, supra note 88.
92. See Amir Efrati, How Uber Will Combat Rising Driver Churn, INFORMATION (Apr. 20, 2017), https://www.theinformation.com/articles/how-uber-will-combat-rising-driver-churn [perma.cc/Z2DF-WDSP]. One report showed that only twenty-five percent of drivers who partner with Uber are still using the platform a year later. See id. Turnover remains high despite Uber’s 2015 redesign of the driver-facing app, which was prompted by the firm’s realization that the platform catered almost exclusively to the rider side of the market. See id.; see also Jessi Hempel, Inside Uber’s Mission to Give Its Drivers the Ultimate App, WIRED (Oct. 13, 2015), https://www.wired.com/2015/10/uberredesign [perma.cc/K39F-M7Y8].
93. See Scheiber, supra note 82.
94. See id.
C. IDENTIFYING POTENTIAL ALGORITHMIC HARMS TO UBER
DRIVERS
It bears emphasis that Uber and other platform firms closely
guard their intellectual property, and these firms carefully limit
publicly-available information about how their algorithms work.
There is, however, a growing body of evidence from journalists,[95] researchers,[96] drivers,[97] and representatives of the firm itself that implies a variety of possible design choices and algorithmic inputs.[98]
This evidence, in conjunction with the widely-understood
workings of machine learning algorithms, allows for strong infer-
ences about the Uber algorithm design and the data inputs it us-
es to calculate fares, assign rides, and deliver messages and in-
centives to drivers. To illustrate these problems, the following
subparts consider a set of more concrete hypothetical practices
and design choices that are within the firm’s capability to both
execute and conceal from its users.
1. The Uber Interface
To explain the nature of the economic harms to which Uber
drivers are vulnerable, it is necessary to examine the application
in detail. Because Uber is the paradigmatic platform market
firm, it provides a useful illustration of specific potential algo-
rithmic harms to workers.[99] A rider types a destination into the Uber app on her smartphone, and Uber quotes her the cost of the ride as determined by Uber’s surge pricing algorithm.[100]
95. See Scheiber, supra note 82 (reporting on a variety of nudging techniques employed by Uber to alter drivers’ behavior, including arbitrary earnings targets, push notifications to encourage more time on the road, and virtual badges as rewards for meeting benchmarks related to driving time or customer service).
96. See generally Calo & Rosenblat, supra note 6.
97. Online driver forums, such as RideSharing Forum, offer a window into the experiences of drivers and the issues that they deal with. One striking feature of the conversations in these spaces is the lack of information that Uber makes available to drivers about the workings of the application, and the mistrust of Uber exhibited by forum contributors. See RIDESHARING FORUM, www.ridesharingforum.com [https://perma.cc/2WKU-F7WB] (last visited Oct. 25, 2019).
98. Shankar Vedantam & Maggie Penman, This Is Your Brain on Uber, NAT’L PUB. RADIO (May 17, 2016), https://www.npr.org/2016/05/17/478266839/this-is-your-brain-on-uber [perma.cc/F65G-25CV].
99. This focus is mostly due to the prominence of Uber, the amount of litigation it has faced, and the volume of research that has been conducted on its drivers and its application. While it is one platform market among many, “Uber for X” has become an effective shorthand for new entrants to the platform economy, and the structure of its relationship to its users and workers is comparable to other firms in this space. See Alexis C. Madrigal, The Servant Economy, ATLANTIC (Mar. 6, 2019), https://www.theatlantic.com/technology/archive/2019/03/what-happened-uber-x-companies/584236 [https://perma.cc/8YVJ-BCZE].
The
app also estimates the time it will take for a driver to reach the
rider and for the rider to reach her destination, showing a sample
route and icons to indicate the presence of nearby drivers.[101]
When the rider accepts, Uber offers the ride as a commission to a
nearby driver, whose smartphone shows the name, customer rat-
ing, location and destination of the rider, and how much the driv-
er will earn for the ride, again determined by Uber’s surge pricing
algorithm.[102] The driver has a few moments to accept; if she does
not, Uber offers the ride commission to another driver nearby.
Once a driver accepts the commission, the rider sees the name
and rating of her driver, along with an updated estimated time of
arrival, and a real-time animation of the car’s location on a map.
The driver picks up the rider, takes her to her destination, and
Uber collects the fare by charging the rider’s credit card and re-
mits a portion of the charge to the driver’s account. Afterward,
rider and driver assign each other a rating out of five stars to in-
dicate their satisfaction. This data is used to give feedback to
drivers and riders, and, in some cases, remove them from the
platform for bad behavior or performance.
Crucially, not all trips are worth the same to drivers, and ex-
perienced drivers will accept or decline fares strategically.[103] Ub-
er pays its drivers a constant percentage of the fare as commis-
sion, and the value of a given ride is determined by the distance
and time it takes to complete on top of a minimum fare. The re-
sult is that longer rides in lower traffic are, by definition, the
most profitable for drivers.[104] The percentage of a driver’s time spent actively transporting a passenger effectively determines her hourly earnings. Longer trips mean less downtime and are thus more profitable for drivers.
100. Lee et al., supra note 1, at 1.
101. Uber has stated that these do not necessarily correspond to the actual locations of nearby drivers, but merely represent to riders that some drivers are nearby. See Calo & Rosenblat, supra note 6, at 1630.
102. Lee et al., supra note 1, at 1605. The “surge” algorithm dynamically adjusts prices in real time, so that the fare that a rider or driver is offered for the same route may change from minute to minute. See id. at 1607.
103. Harry Campbell, How to Make More Money with Uber & Lyft (Maximize Earnings), RIDESHARE GUY (Oct. 1, 2019), https://therideshareguy.com/how-to-make-more-money-with-uber [https://perma.cc/L3AT-J7CY].
104. Nat, The Complete Guide to Long-Distance Uber Rides (for Passengers and Drivers), RIDESTER (July 24, 2018), https://www.ridester.com/long-distance-uber [https://perma.cc/Z3J9-ERPV].
Similarly, certain destinations,
such as a city’s central business district, are likely to result in a
shorter post-drop-off wait time for a new fare than others. These
differences are crucial because work assignments are not perfect-
ly interchangeable, and any bias or inefficiency in the algorithm
that distributes rides will lead to disparate earnings between
drivers over the long term.
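A small sketch illustrates the utilization point; the per-mile and per-minute rates, speed, and utilization figures are invented and do not reflect Uber’s actual fare schedule.

```python
# A sketch of the utilization point above: effective hourly earnings depend
# on the share of a driver's hour spent actually carrying a passenger.
# All rates and figures are hypothetical.
def effective_hourly(per_mile, per_minute, mph, utilization):
    """Gross hourly earnings given fare rates and fraction of time on-trip."""
    minutes_on_trip = 60 * utilization
    miles_on_trip = mph * utilization
    return per_mile * miles_on_trip + per_minute * minutes_on_trip

# Same rates, different utilization: downtime directly erodes hourly pay.
print(effective_hourly(per_mile=0.80, per_minute=0.20, mph=25, utilization=0.9))  # 28.8
print(effective_hourly(per_mile=0.80, per_minute=0.20, mph=25, utilization=0.5))  # 16.0
```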
2. The Uber Algorithm
Platform economy firms like Uber deploy a specific model
known as a “supplier pick matching algorithm” to pair providers
and consumers of a given service.[105] “Supplier pick” refers to the fact that suppliers, in this case drivers, ultimately determine whether to complete an offered transaction. Uber asserts that
the distance from rider to prospective driver is the key input var-
iable, and that it seeks to optimize both rider and driver experi-
ence by minimizing rider wait time and maximizing frequency of
trips for drivers (seemingly in that order).[106] These goals are
countervailing because an oversupply of drivers reduces wait times for riders but increases wait times for drivers, while also decreasing both rider costs and driver earnings per ride.[107]
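A toy “supplier pick” dispatch might look like the following sketch; the nearest-first ordering and the accept/decline flags are illustrative assumptions, not Uber’s actual matching logic.

```python
# A sketch in the spirit of the "supplier pick" model described above:
# the platform offers the job to the nearest driver, but the driver
# decides whether to accept. Coordinates and decline logic are invented.
import math

def nearest_first(drivers, rider):
    """Order candidate drivers by straight-line distance to the rider."""
    return sorted(drivers, key=lambda d: math.dist(d["loc"], rider))

def dispatch(drivers, rider):
    for driver in nearest_first(drivers, rider):
        if driver["accepts"]:   # supplier pick: the driver has the last word
            return driver["id"]
    return None                 # all nearby drivers declined

drivers = [
    {"id": "A", "loc": (0.0, 1.0), "accepts": False},  # closest, but declines
    {"id": "B", "loc": (2.0, 2.0), "accepts": True},
]
print(dispatch(drivers, rider=(0.0, 0.0)))  # "B"
```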
A crucial and controversial element of the platform is Uber’s
surge pricing algorithm, which the company uses to adjust pric-
ing in real time. Like a supply-and-demand graph come to life,
Uber adjusts the price of a ride by computing information about
the number of riders and drivers within a certain distance of each
other. The “surge” algorithm is intended to create an equilibrium
between supply and demand; an increasing price should motivate
more drivers to become active, while reducing ride requests from
price-sensitive riders.108 Uber owns a patent for its dynamic pric-
ing system, which describes a mechanism for adjusting pricing
“based, at least in part, on the determined amount of requesters
105. How the Matching Algorithm Works in the On-Demand Economy, JUNGLEWORKS, https://jungleworks.com/matching-algorithm-works-demand-economy-part-three-user-journey-series [https://perma.cc/LBL7-KM5L] (last visited Oct. 11, 2019).
106. Bradley Voytek, Optimizing a Dispatch System Using an AI Simulation Framework, UBER: NEWSROOM (Aug. 11, 2014), https://www.uber.com/newsroom/semi-automated-science-using-an-ai-simulation-framework [https://perma.cc/HKP4-HSTB].
107. See Scheiber, supra note 82.
108. Ben Popper, Uber Surge Pricing: Sound Economic Theory, Bad Business Practice, VERGE (Dec. 18, 2013), https://www.theverge.com/2013/12/18/5221428/uber-surge-pricing-vs-price-gouging-law [https://perma.cc/NW9F-7REE].
and the determined amount of available service providers.”109
The application collects undefined “requester data” and “provider
data” from participants’ smartphones, and then feeds that data
into the algorithm that in turn determines prices and offers
rides.110
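One plausible, purely illustrative reduction of that patent language to code is a fare multiplier keyed to the ratio of requesters to available providers within a zone. The sensitivity constant and cap below are invented for the example and are not drawn from Uber’s actual system.

```python
def surge_multiplier(requesters: int, providers: int,
                     sensitivity: float = 0.5, cap: float = 3.0) -> float:
    """Toy dynamic-pricing rule: raise the fare multiplier as demand
    outstrips supply in a zone. All constants are illustrative."""
    if providers == 0:
        return cap
    ratio = requesters / providers
    # No surge until demand exceeds supply; then scale linearly, capped.
    return min(cap, max(1.0, 1.0 + sensitivity * (ratio - 1.0)))

print(surge_multiplier(30, 10))  # 2.0: riders pay double; more drivers log on
print(surge_multiplier(5, 10))   # 1.0: no surge
```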
Uber does not disclose all of the information that makes up
the “Requester Data” and the “Provider Data” transmitted to the
device interface, nor does it reveal precisely how the algorithm
uses that data. The volume and richness of the data, however, is
potentially vast. In addition to a GPS chip, smartphones contain
gyroscopes, accelerometers, and, of course, the torrents of person-
al data that users input.111
Uber has, however, revealed that it measures inputs beyond
location. An Uber researcher revealed in an interview that the
firm had evidence that low battery in a phone correlated with the
user’s willingness to pay a higher surge price (though, the com-
pany later clarified, that information was “absolutely not” used to
charge higher prices to riders).112 Researchers who have studied
the surge algorithm’s workings by placing a network of Uber-
enabled phones across grids in Manhattan and San Francisco
have noticed inconsistencies in surge pricing, though Uber
claimed that these were simply “bugs.”113
109. System and Method for Dynamically Adjusting Prices for Services, U.S. Patent Application No. 13/828,481, Publication No. 2013/0246207 A1 (filed Mar. 14, 2013) (published Sept. 19, 2013) (Mark Novak & Travis Kalanick, applicants) (emphasis added). The patent application does not contain any details on the extent to which information other than the numbers of available riders and drivers influences the price of a given ride. See id.
110. See Figure 1, id.
111. David Nield, All the Sensors in Your Smartphone, and How They Work, GIZMODO (July 23, 2017), https://gizmodo.com/all-the-sensors-in-your-smartphone-and-how-they-work-1797121002 [https://perma.cc/Q5EQ-GJCF]. Modern smartphones contain an array of sophisticated sensors. For instance, the accelerometer measures the phone’s movement, and allows a phone to measure the number and rate of steps taken by a person carrying it in her pocket. See id. The gyroscope assists the accelerometer in determining the position and orientation of the phone. See id. Most smartphones contain other instruments, such as barometers, proximity sensors, and ambient light sensors, all of which provide data that is accessible to app developers. See id.
112. Adam Withnall, Uber Knows When Your Phone is Running Out of Battery, INDEPENDENT (May 22, 2016), https://www.independent.co.uk/life-style/gadgets-and-tech/news/uber-knows-when-your-phone-is-about-to-run-out-of-battery-a7042416.html [https://perma.cc/3KKV-85MC]. Left unanswered is the question of how Uber determined riders’ willingness to pay higher prices when their batteries were low without actually presenting similarly-situated riders with varying prices.
113. Le Chen et al., Peeking Beneath the Hood of Uber, in IMC ’15 PROCEEDINGS OF THE 2015 INTERNET MEASUREMENT CONF. 495–508 (ACM New York, Oct. 28, 2015).
It is not clear whether Uber creates rider or driver profiles in
the way that Facebook or Google might, but the means and incen-
tives for it to do so surely exist. Uber’s value depends on its rid-
ership, and so the firm has every incentive to increase rider satis-
faction and loyalty. Each trip is incredibly data-rich: Uber knows
the origin and destination, the time of day, the route taken, the
level of traffic, the speed and smoothness of the ride, and the rid-
er’s satisfaction (as measured by driver rating). It clearly stores
and analyzes this data over the long term, as evidenced by the
research that Uber and its academic collaborators have re-
leased.114 Uber already collects all of the data it needs to identify
a rider’s preferences and a driver’s tendencies; why would it not
leverage these to improve its product, or enhance its profitability?
Finally, Uber employs a team of PhD economists who have ac-
cess to what is essentially the world’s largest and most detailed
real-time behavioral economics experiment.115 This provides
Uber with the human capital to maximize the profitability of its
platform by leveraging economic insights and perfecting the
“nudges” it sends to drivers and riders.116 Uber has also increasingly
partnered with unpaid academic researchers to publish studies
using its unrivaled data sets, which critics speculate is a way for
Uber to gain the academic credibility needed to influence the pub-
lic policy discourse surrounding the platform economy and favor-
ably influence future legislation.117
3. Specific Worker Harms Enabled by the Uber Platform Model
What follows is a framework for thinking about harms to plat-
form workers, along with examples of design choices that could
lead to such harms, using the Uber model as illustration. At one
end are purely incidental
harms resulting from legitimate or well-intentioned algorithm
design choices. These bad outcomes are unintentional and possi-
bly unforeseeable, and resemble analogous issues in other set-
tings where algorithms make decisions.118 At the other end of the
114. See, e.g., Hall & Krueger, supra note 86, at 705–06.
115. See Griswold, supra note 5.
116. See Scheiber, supra note 82.
117. See Griswold, supra note 5.
118. Vasant Dhar, When To Trust Robots With Decisions, and When Not To, HARV. BUS. REV. (May 17, 2016), https://hbr.org/2016/05/when-to-trust-robots-with-decisions-and-when-not-to [https://perma.cc/5N9Y-C3L6].
spectrum are practices or design choices that would exploit the
platform’s inherent lack of transparency to deliberately deceive or
exploit users for the benefit of the firm. In most cases, these
would be extremely difficult to detect in practice, but there is evi-
dence that Uber has at least experimented with some of them.119
The following subparts examine each of these in turn: inciden-
tal algorithmic harms, abusive practices, and a range of design
choices that fall somewhere in between, classified here as “diver-
gent-interest” harms because they undermine the premise of a
mutually-beneficial partnership between drivers and the firm.
a. Incidental Algorithmic Harms
Incidental harms are likely to result from unintended conse-
quences of decisions made at various stages of algorithm devel-
opment, including data selection, problem setting, variable as-
signment, and tuning.120
The overfitting problem is endemic to
machine learning, and well-intentioned choices in the algorithmic
design process often lead to confounding or harmful results.121
It is easy to imagine real-time adjustments (either automated or
developer-initiated) to Uber’s matching algorithm having unfore-
seen consequences that harm drivers, for instance, by nudging
them to areas that are not actually price surging, sending them
on inefficient routes, or failing to match them with the closest
available rider.
As discussed above, there has been evidence of racial discrim-
ination on peer-to-peer service platforms, though the evidence
suggests that this has primarily harmed consumers of platform
services rather than the workers themselves.122 However, de-
pending on the variable inputs chosen by developers, such as
driver profiles, algorithms have the potential to reinforce bias or
discrimination along lines of race, gender, or other protected clas-
ses.
119. See generally Calo & Rosenblat, supra note 6 (describing practices such as “false surges” and “phantom cars”); see generally Shankar & Penman, supra note 98 (describing behavior experiments revealing user irrationality).
120. See generally Lehr & Ohm, supra note 67.
121. David Lazer et al., The Parable of Google Flu: Traps in Big Data Analysis, 343 SCI. 1203 (Mar. 14, 2014), https://gking.harvard.edu/files/gking/files/0314policyforumff.pdf [https://perma.cc/7P5P-VU6M]; see also Lehr & Ohm, supra note 67, at 683–85.
122. See Edelman, supra note 55, at 2.
One interesting example affecting drivers involves gender dis-
parity. Uber’s own research has shown a pay gap for female driv-
ers, which researchers attributed to the fact that female drivers
drive more slowly on average.123 While this explanation is plau-
sible, it may not tell the whole story. Imagine that Uber sets a
seemingly innocent goal for its algorithm: match drivers with rid-
ers who are likely to give them a high rating. Such a parameter
is not obviously problematic until we consider that Uber’s algo-
rithm may be collecting and using data in ways that could run
afoul of equal protection. What if the “high rating match” in-
struction, in a reflection of rider bias, began pairing riders and
drivers based on race, class, or gender? As previously noted, not
all routes are equally profitable for drivers; rides to and from the
airport, for instance, represent not just high-value fares, but also
tend to be longer and thus reduce driver down-time, thereby in-
creasing earnings-per-hour.124 People headed to the airport, on
balance, may be in more of a hurry than others, and more likely
to assign a low driver rating to a slower, more cautious driver.
It is conceivable, and even likely, that a machine learning al-
gorithm could, by finding linkages between driver rating, gender,
and high-urgency routes, systematically deprive female drivers of
more profitable fares in response to design parameters intended
to simply improve overall rider satisfaction. Such a scenario is
one example of what is likely to be a larger set of harmful out-
comes that may be unintentional and unforeseen by an algo-
rithm’s designers, but which could nonetheless create liability.
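A toy simulation, with every number invented, can make the mechanism concrete. Assume (hypothetically) that hurried riders rate faster drivers higher and that the dispatcher simply assigns each high-value fare to the candidate with the highest predicted rating; a small average speed difference between groups is then enough to skew the allocation of profitable trips.

```python
import random

random.seed(0)

# Hypothetical driver population: average speed differs slightly by
# group, mirroring the speed difference documented in the pay-gap study.
drivers = (
    [{"group": "F", "speed": random.gauss(27, 3)} for _ in range(500)]
    + [{"group": "M", "speed": random.gauss(29, 3)} for _ in range(500)]
)

def predicted_rating(driver, rider_in_hurry):
    # Toy model: hurried riders rate faster drivers slightly higher.
    base = 4.6
    return base + (0.02 * (driver["speed"] - 28) if rider_in_hurry else 0.0)

# "Maximize rating" dispatch: each high-value airport run (hurried
# rider) goes to the candidate with the highest predicted rating.
airport_fares = {"F": 0, "M": 0}
for _ in range(2000):
    candidates = random.sample(drivers, 10)
    best = max(candidates, key=lambda d: predicted_rating(d, True))
    airport_fares[best["group"]] += 1

print(airport_fares)  # the faster group captures most high-value fares
```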
123. Cody Cook et al., The Gender Earnings Gap in the Gig Economy: Evidence from Over a Million Rideshare Drivers 33 (Mar. 8, 2019) (unpublished working paper) (on file with Colum. J.L. & Soc. Probs.), https://web.stanford.edu/~diamondr/UberPayGap.pdf [https://perma.cc/WB6S-TG58].
124. See UBERPEOPLE.NET, https://uberpeople.net [https://perma.cc/5NXW-X5ZG] (last visited Oct. 22, 2019). Driver web forums often contain advice from experienced drivers in response to questions and complaints from novices. See, e.g., id. See also RIDEGURU, https://ride.guru/lounge/p/when-driving-for-uber-which-trips-are-more-profitable-longer-trips-or-shorter-trips [https://perma.cc/3PAC-AQDK] (last visited Nov. 13, 2019). While there is disagreement among drivers posting to forums as to whether long trips are more profitable, there is general agreement that drivers seek to minimize downtime. See Scheiber, supra note 82.
b. Abusive Practices
Abusive practices sit at the other extreme of algorithmic
harms. These harms are more straightforward and could take
many forms. It would not be hard for designers of an algorithm
to deliberately deceive drivers, incentivizing behavior that works
against their economic interests but improves the firm’s profita-
bility.125 Similarly, firms have the ability and potentially the in-
centive to deliberately use information in impermissible ways.126
These practices would amount to clear ethical breaches, and
there is preliminary evidence that Uber has at least experiment-
ed with some of these.127
Such practices would be both difficult for users to detect and
simple for the firm to employ given the information asymmetries
it enjoys. For example, Uber’s app shows a price to riders upfront
by estimating the time the ride will take, but it calculates the
driver’s fare according to the distance and time it ends up actually
taking. Journalists have documented cases where the Uber app
has shown passengers and drivers drastically different fare
amounts for the same ride, with the driver having earned sub-
stantially less than the customer paid (setting aside Uber’s com-
mission) even when the ride took approximately the time and dis-
tance estimated.128 Such deceptive market manipulation, if prov-
en, is surely actionable as a breach of the duty of good faith and
fair dealing.
Driver forums are replete with advice from experienced driv-
ers to new ones, and a common refrain is “don’t chase the
surge.”129 Because of the opacity and dynamic nature of surge
pricing, it would be easy for Uber to manipulate surge pricing as
a tool to shift riders to areas where it deemed them most valuable
(for reasons described previously, such as market establishment),
125. See Calo & Rosenblat, supra note 6, at 1654.
126. Id.
127. Id. (describing a number of practices reported by drivers and users, such as false or misleading information displayed on the application interface about the availability of drivers or rides and other evidence of unfair manipulation of the platform market).
128. Alison Griswold, Uber Drivers Are Using This Trick To Make Sure The Company Doesn’t Underpay Them, QUARTZ (Apr. 13, 2017), https://qz.com/956139/uber-drivers-are-comparing-fares-with-riders-to-check-their-pay-from-the-company/ [https://perma.cc/9WQR-PMXD].
129. See Calo & Rosenblat, supra note 6, at 1656.
even if these were not necessarily areas of the highest demand or
potential for driver profit.130
Every startup seeking to rely on network effects has the
chicken-and-egg problem: users benefit from the presence of other
users and suffer from their absence. Rides cannot be offered
without drivers, but drivers have no reason to get on the road
without any prospect of finding customers. So how does Uber
move into a new market?
Uber already defines “surge zones” within cities as geographic
segments that are effectively localized markets.131 Suppose Uber
wanted to increase ridership in the West Village neighborhood of
New York City, and imagine, further, that Uber has market re-
search data that shows it takes an average of four minutes for a
passenger to hail a traditional yellow cab there. Uber’s engineers
could set specific goals for its machine learning algorithm: “max-
imize driver presence in the West Village” or “reduce passenger
wait times in the West Village below three minutes,” and then
train the algorithm to accomplish these goals. Even without ne-
farious intent or methods, these instructions could deliver inci-
dentally harmful or inefficient results: drivers being routed out of
their way to pass through the neighborhood, for example, or un-
duly prioritizing riders whose destinations were in the West Vil-
lage.132 More perniciously, Uber engineers could permit its algo-
rithm to systematically misrepresent a surge to drivers within
that area to encourage greater driver saturation, a possible ex-
planation for the phenomenon known as the “false surge” that
has drawn driver complaints.133
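A sketch of how such a market-establishment goal might be encoded, with all names and constants hypothetical. Note how little separates the legitimate version from the abusive one: displaying a surge multiplier that the pricing rule never actually produced is a one-line change, invisible to anyone outside the firm.

```python
TARGET_ZONE = "west_village"  # hypothetical zone being "established"

def reward(zone_stats):
    """Hypothetical objective for a dispatch policy: reward completed
    fares, but heavily penalize rider wait time in the target zone."""
    r = zone_stats["fares_completed"]
    if zone_stats["zone"] == TARGET_ZONE:
        # Push average rider waits below the three-minute goal.
        r -= 5.0 * max(0.0, zone_stats["avg_wait_min"] - 3.0)
    return r

def displayed_surge(true_surge, zone):
    # Legitimate behavior: show drivers the real multiplier. An abusive
    # "false surge" variant would need only one changed line, e.g.:
    #     return max(true_surge, 1.5) if zone == TARGET_ZONE else true_surge
    return true_surge

print(reward({"zone": "west_village", "fares_completed": 40, "avg_wait_min": 5}))
print(displayed_surge(1.0, "west_village"))
```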
Uber also has the ability to leverage user information to ex-
ploit willingness-to-pay, and, presumably, willingness-to-work.
As discussed above, the firm let slip its finding that riders with a
low phone battery tolerated higher prices for rides.134 Given the
reams of data that smartphones collect, there is no shortage of
levers that a platform firm could experiment with to manipulate
130. Id. at 1662–63.
131. See Chen et al., supra note 113.
132. While decisions about where to operate or expand are questions of legitimate business strategy, they could also have the unintended effect of systematically denying or limiting service availability to members of protected classes. See supra Part III.C.3.c (discussing “gray area” harms that reflect divergent interests among riders, drivers, and the firm).
133. Rosenblat & Stark, supra note 87, at 3766.
134. See Withnall, supra note 112.
its users’ behavior. Uber could effectively wage-discriminate
against its drivers for a variety of purposes, for example, by offer-
ing only low-value fares to drivers who demonstrated a willing-
ness to accept them, or reserving high-value fares for drivers
whom it determined were on the verge of ending their shift or
leaving the platform entirely. In short, the sheer volume of data
available to Uber and the opportunity to deliberately yet imper-
ceptibly manipulate the behavior of drivers and riders invites
more scrutiny.
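A minimal sketch of the kind of per-driver fare filtering described above, assuming, hypothetically, that the platform scores each driver’s historical willingness to accept low-value work and propensity to quit. The rule is invented for illustration; nothing here is drawn from Uber’s code.

```python
def offers_for(driver, fares):
    """Hypothetical fare-routing rule: drivers who have shown they will
    accept low-value work see only low-value work, while scarce
    high-value fares are held back for drivers judged likely to quit."""
    if driver["accept_rate_low_fares"] > 0.9:
        return [f for f in fares if f["value"] < 10]
    if driver["about_to_churn"]:
        return sorted(fares, key=lambda f: -f["value"])[:3]  # retention bait
    return fares

fares = [{"value": v} for v in (6, 8, 15, 22, 40)]
print(offers_for({"accept_rate_low_fares": 0.95, "about_to_churn": False}, fares))
print(offers_for({"accept_rate_low_fares": 0.40, "about_to_churn": True}, fares))
```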
c. Divergent Interests in the Short Term: The Gray Area
In between the extremes, there exists an intermediate zone of
harms resulting from algorithmic parameters or design choices
that undermine the joint-profit premise. Uber’s drivers are es-
sentially participating in a vast behavioral economics experi-
ment.135 This is problematic because a platform firm is likely to
sacrifice short-term profitability in the interest of long-run mar-
ket capture, and there is little to protect workers from high-level
decisions that sacrifice their individual earnings in order to in-
crease a firm’s market share or the performance of its algorithm.
The benefits of these improvements accrue mostly to the owner of
the algorithm, while the costs of errors and inefficiencies are
borne by the workers.
Take, for instance, the use of A/B testing and “multi-arm ban-
dit algorithms,” which systematically test a range of options ran-
domly on a population of users, gather data on the effectiveness
of alternatives, and then adjust accordingly.136 Applied to surge-
135. See Griswold, supra note 5.
136. See Calo & Rosenblat, supra note 6, at 1669. An A/B test is a controlled experiment where a user is randomly presented with one of two different options, and the outcomes are recorded and used to influence future presentations. See Shaw Lu, Beyond A/B Testing: Multi-armed Bandit Experiments, TOWARDS DATA SCI. (Apr. 3, 2019), https://towardsdatascience.com/beyond-a-b-testing-multi-armed-bandit-experiments-1493f709f804 [https://perma.cc/V9NG-VUK3]. For instance, a clothing website might show different users two different photographs on its homepage and measure which group (those who saw photo “A” or photo “B”) is more likely to make a purchase. Multi-arm bandit algorithms function essentially the same way, except that they are able to adapt in real-time to optimize the presentation of a wider range of options. Id. In the above example, a clothing website deploying a multi-arm bandit algorithm might show users one of a dozen photographs, while tracking who eventually made a purchase; as some photographs begin performing better (i.e., leading to more purchases) than others, the algorithm will begin automatically showing the high-performing photographs to more users, and, over time, provide the operators of the algorithm with information about the characteristics of photographs that tend to be more successful. Id.
area assignments, adjusted “boost” incentives (offering a higher
rate of compensation) or alternative routes between destinations,
this practice would necessarily involve sending a certain percent-
age of drivers to areas that are likely to increase their wait times
between rides, and thus reduce their earnings. Uber has
acknowledged that it uses A/B testing to improve its algo-
rithms.137
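Footnote 136 describes the mechanics; a minimal epsilon-greedy bandit over hypothetical “boost” offers makes the worker-cost point concrete. By construction, the exploration step keeps routing some fraction of driver-hours to arms already estimated to pay worse. Every payout figure below is invented.

```python
import random

random.seed(1)

# Hypothetical "arms": three boost offers with different true average
# payouts per driver-hour, unknown to the platform in advance.
true_payout = {"boost_A": 18.0, "boost_B": 21.0, "boost_C": 15.0}

estimates = {arm: 0.0 for arm in true_payout}
counts = {arm: 0 for arm in true_payout}
EPSILON = 0.1  # fraction of driver-hours spent exploring

def assign_arm():
    if random.random() < EPSILON or not all(counts.values()):
        return random.choice(list(true_payout))   # explore: may be a known loser
    return max(estimates, key=estimates.get)      # exploit the best estimate

for _ in range(10_000):  # each iteration is one driver-hour of labor
    arm = assign_arm()
    earned = random.gauss(true_payout[arm], 4.0)
    counts[arm] += 1
    estimates[arm] += (earned - estimates[arm]) / counts[arm]  # running mean

print(counts)  # drivers assigned to the worse arms bore the cost of learning
```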
Uber has an interest in learning as much as it can about driv-
er behavior and an unlimited ability to adjust its algorithm to
help it answer specific questions. For instance, Uber could find
out the longest amount of time a driver is willing to wait for a
ride before logging off the app; the farthest a driver would go to
pick up a rider; and how far from home a driver is willing to
range in the course of a day’s work.
In addition to studying driver behavior generally, Uber could
also track the behavior of individuals. Would the firm adjust
drivers’ ride opportunities based on individual behavior profiles
as a way to extract maximum value? Platform firms have the
incentives and opportunities to engage in these behaviors, and
because their algorithms are protected as intellectual property,
their design choices are not subject to public scrutiny.
As these examples illustrate, the overall model raises ethical
questions about experimentation and control in a market where
people’s livelihoods are at stake. In running a machine learning
algorithm, Uber is building up its intellectual property by con-
ducting a vast field experiment where drivers have not been in-
formed or given meaningful consent and do not understand they
are participating. Uber’s terms of service are constantly shifting,
and often a driver will have no meaningful alternative to simply
agreeing to an update when she opens the app to begin work.138
Drivers are being paid for giving rides, but they are not seeing
any profit (and may, in fact, incur losses) from the development of
these algorithms and the insights about transportation that Uber
is able to gain. This would be less problematic if drivers were
employees of the company; nobody would question whether a
firm had the right to leverage data on employees to improve
137. Jeremy Hermann & Mike Del Balso, Meet Michelangelo: Uber’s Machine Learning Platform, UBER: ENGR (Sept. 5, 2017), https://eng.uber.com/michelangelo [https://perma.cc/5JA6-ECAJ].
138. David Horton, The Shadow Terms: Contract Procedure and Unilateral Amendments, 57 UCLA L. REV. 605, 649–50 (2010).
productivity, or experiment with different work assignments.
The more difficult question is what responsibility Uber has to its
“partners,” and whether the firm is meeting it.
IV. PROPOSALS FOR LEGAL PROTECTION OF UBER DRIVER-
PARTNERS AND OTHER PLATFORM WORKERS
A growing body of scholarship is addressing the problem of
protecting workers in the platform economy. Much of this schol-
arship has already examined the role of worker classification
laws and has described proposals for revising such laws. Part
IV.A of this Note identifies challenges facing platform workers
who occupy a gap in existing labor protections. Part IV.B surveys
various proposals and concludes that these proposals are ulti-
mately insufficient to address the novel harms enabled by algo-
rithmically-mediated platform work. Part IV.C proposes specific
causes of action sounding in tort and contract jurisprudence as a
means of redress for workers who have been harmed by the prac-
tices and design choices previously identified in Part III.
A. OBSTACLES FACING PLATFORM WORKERS
There are numerous structural factors that may make it diffi-
cult for platform workers to assert or vindicate their interests
against platform firms. First and foremost is their independent
contractor status. Because the law treats the drivers, maids, and
masseuses of the platform economy as small-business owners, a
host of statutory worker protection schemes, such as the Fair La-
bor Standards Act (FLSA), are not available to them.139 Uber’s
standard contract with its “driver-partners” contains provisions
limiting Uber’s liability in a variety of circumstances.140 Drivers
139. See, e.g., Kerce v. W. Telemarketing Corp., 575 F. Supp. 2d 1354, 1359 (S.D. Ga. 2008) (holding that only workers classified as employees can bring actions under FLSA); Padjuran v. Aventura Limousine & Transp. Serv., Inc., 500 F. Supp. 2d 1359, 1361–62 (S.D. Fla. 2007) (same).
140. See, e.g., Rasier LLC Technology Services Agreement §§ 2.2–2.3 (last updated Dec. 11, 2015), https://s3.amazonaws.com/uber-regulatory-documents/country/united_states/RASIER%20Technology%20Services%20Agreement%20Decmeber%2010%202015.pdf [https://perma.cc/RVV2-QG7K] (recent version of the agreement between Uber’s “driver-partners” and Rasier LLC, a wholly-owned subsidiary of Uber). See also Mohamed v. Uber Techs., Inc., 109 F. Supp. 3d 1185, 1190 (N.D. Cal. 2015), aff’d in part, rev’d in part and remanded, 836 F.3d 1102 (9th Cir. 2016), and aff’d in part, rev’d in part and remanded, 848 F.3d 1201 (9th Cir. 2016).
must agree to terms stating that they are independent contrac-
tors who receive transportation services from Uber, and agree to
a disclaimer of a formal employment relationship or the right to
pursue claims outside of arbitration, though the enforceability of
these provisions is in doubt.141
Independent contractors do not receive employee protections
under statutes such as the Employee Retirement Income Se-
curity Act (ERISA) or FLSA because they are thought to be more
self-sufficient than employees, having greater bargaining power
due to their ability to potentially contract with multiple parties.
As independent contractors, Uber drivers are forbidden from or-
ganizing or bargaining collectively under antitrust law.142
Accordingly, the existing employment law landscape is poorly
suited to address the novel challenges of platform work. None-
theless, scholars have proposed various potential responses to
these obstacles, which will be examined in the following subpart.
While there are benefits and drawbacks to each, they signal an
emerging recognition that platform workers need additional legal
protections.
B. PROPOSALS FOR STATUTORY AND REGULATORY PROTECTION
OF PLATFORM WORKERS
There is widespread agreement that the existing worker clas-
sification scheme is poorly suited to work relationships in the
platform economy. In the words of Judge Vincent Chhabria, a
jury asked to determine a TNC driver’s employment status will
be “handed a square peg and asked to choose between two round
holes.”143 Platform companies using similar employment-
mediation models are very likely to encounter the same difficulty.
Economist Joseph V. Kennedy has identified three potential
programs for updating labor law to better address the platform
141. See, e.g., Cotter v. Lyft, Inc., 176 F. Supp. 3d 930, 943 (N.D. Cal. 2016) (“Even beyond the possibility that Lyft has waived the right to force the class members to arbitration, there is at least some authority suggesting the arbitration provision is unenforceable entirely, because it violates the National Labor Relations Act.”).
142. Under section 6 of the Clayton Act, organized labor activities were specifically exempted from antitrust prohibitions on anti-competitive behavior. See 15 U.S.C. § 17. However, later Supreme Court decisions narrowed the exemption by prohibiting independent contractors from organizing. See Columbia River Packers Ass’n v. Hinton, 315 U.S. 143 (1942); L.A. Meat & Provision Drivers Union, Local 626 v. United States, 371 U.S. 94 (1962).
143. See Cotter, 176 F. Supp. 3d at 1081.
economy.144 Broadly stated: regulators and legislators could cre-
ate a new, third category of worker; Congress could revise each of
the country’s major labor laws (FLSA, ERISA, etc.) to ensure
that they continue to achieve their goals; or legislators could
draft carve-outs to existing labor laws that ensure that workers,
customers, and platforms all benefit.145
In 2015, economists Seth Harris and Alan Krueger published
a detailed proposal for the option of a third class of “independent
workers.”146 According to their research, at that time there were
600,000 workers, or 0.4% of the U.S. workforce, who used a plat-
form intermediary to secure work, a number which was then
growing rapidly.147 They identify challenges in regulating this
sector, including the immeasurability of “hours worked,” which is
tied to eligibility for programs such as the Affordable Care Act;
the ability to collectively bargain; and the absence of civil rights
protection afforded to employees.148 The class of workers they
propose designating would receive some employee protections,
including the right to organize and employer contributions to So-
cial Security and Medicare payroll taxes, but would not receive
benefits such as overtime.149
While this proposal is appealingly pragmatic, further analysis
exposes some weaknesses. Other industrialized nations have
classifications similar to what Harris and Krueger propose, and
their experience suggests that adding a third category to the ex-
isting scheme will increase opportunities for manipulation and
only marginally protect the intended workers, while increasing
the volume of litigation required to enforce the more complex
classification scheme.150
The experiences of Canada, Italy, and
Spain, each of which has some version of an intermediate catego-
ry sharing some characteristics with the “Independent Worker”
144. Joseph V. Kennedy, Three Paths to Update Labor Law for the Gig Economy, INFO. TECH. & INNOVATION FOUND. 1, 2 (Apr. 2016), http://www2.itif.org/2016-labor-law-gig-economy.pdf [https://perma.cc/H278-PPHV].
145. Id.
146. See Seth Harris & Alan Krueger, A Proposal for Modernizing Labor Laws for Twenty-First-Century Work: The “Independent Worker,” BROOKINGS: HAMILTON PROJECT 2 (2015), https://www.hamiltonproject.org/assets/files/modernizing_labor_laws_for_twenty_first_century_work_krueger_harris.pdf [https://perma.cc/6RPP-L4LE].
147. Id.
148. Id. at 14–18.
149. Id. at 27.
150. See generally Miriam Cherry & Antonio Aloisi, “Dependent Contractors” in the Gig Economy: A Comparative Approach, 66 AM. U. L. REV. 635 (2017).
classification proposed by Harris and Krueger, are instructive.
Canada’s “dependent contractor” had the most success at expand-
ing worker protections by essentially expanding the definition of
“employee”; at the other end of the spectrum, Italy’s framework
allowed businesses to use the third category as a less tax-
burdened alternative to traditional employment classification
with the same essential features, resulting in a series of “emer-
gency” interventions by the Italian legislature and rampant con-
fusion and abuse.151
Other scholars have been more willing to accept platform
firms’ characterizations of themselves as providers of platform
market services, but have called for more oversight to ensure that
these markets operate fairly. In 2016, the Federal Trade Com-
mission hosted a workshop on issues in the “sharing economy”
where participants adopted much of the terminology and narra-
tives favored by platform firms, and expressed a range of con-
cerns including “protectionism of incumbent [taxi or hotel] sup-
pliers” as a consumer harm and “balancing its regulatory goals
and encouraging innovation.”152
This approach disappointed many observers. Professors Ryan
Calo, Orly Lobel, Kenneth Bamberger and others have highlight-
ed the need for closer scrutiny of the practices of platform firms,
and for regulators to demand more granular information about
the workings of their algorithms.153 Similarly, Professors Mark
Anderson and Max Huffman have proposed an antitrust analysis
of Uber’s business model, characterizing Uber’s structure as re-
sembling a cartel.154 These critics and others charge that
the conduct of Uber and its peers is more serious than regulators
currently understand.
151. Id. at 676. By expanding the ambit of employee protections to include more independent contractors, the plan was able to provide more benefits and coverage than reforms in other countries. See id.
152. FED. TRADE COMM’N, THE “SHARING” ECONOMY: ISSUES FACING PLATFORMS, PARTICIPANTS AND REGULATORS 53 (Nov. 2016), https://www.ftc.gov/system/files/documents/reports/sharing-economy-issues-facing-platforms-participants-regulators-federal-trade-commission-staff/p151200_ftc_staff_report_on_the_sharing_economy.pdf [https://perma.cc/M4B2-TNCZ].
153. See Bamberger & Lobel, supra note 24, at 1051; see also Calo & Rosenblat, supra note 6, at 1633.
154. See generally Mark Anderson & Max Huffman, The Sharing Economy Meets the Sherman Act: Is Uber a Firm, a Cartel, or Something in Between?, 2017 COLUM. BUS. L. REV. 859 (2017).
Transparency in this context is both desirable and problemat-
ic, due to the nature of machine learning algorithms and firms’
interests in intellectual property.155 Professor Paul Ohm has
called for requirements that firms that deploy consumer-facing
algorithms be not just transparent, but also “forthright,” advocat-
ing for an affirmative obligation to warn consumers about their
services.156
While many of these proposals are measured and thoughtful,
they fail to address the core challenges of algorithmically-
mediated work. The harms that are enabled by the platform
model are subtle and difficult to detect, yet potentially vast when
considered at scale. Although comprehensive regulation may
eventually offer a solution, the law should first address new, dis-
crete harms as they arise and build from common law principles
to determine how best to regulate an emerging and sophisticated
model of labor relations.
C. CAUSES OF ACTION FOR ALGORITHMIC HARMS TO PLATFORM
WORKERS
This Part of the Note proposes common law causes of action
sounding in contract and tort jurisprudence as a potential re-
sponse to algorithmic harms to platform workers. The law is slow
to adapt to structural economic change. The modern worker pro-
tection regimes were not drafted overnight, but rather came in
the wake of decades of litigation between workers and employers,
with courts laying down the principles that would animate these
laws through case-by-case adjudication. In our present moment,
where the reach and import of machine learning algorithms con-
tinues to expand dramatically while regulations and laws per-
taining to them remain scant, the courts might play a similar role
in extending the law from common principles.
However, there are some caveats to this approach. At present,
there is little direct proof of the abusive or otherwise harmful
practices outlined in Part III.157 As a procedural matter, Uber
driver-partners might struggle to overcome dismissals of their
complaints under Federal Rule of Civil Procedure 12(b)(6), given
that evidence of specific and concrete algorithmic harms is lim-
155. See supra Part III.A.
156. See generally Paul Ohm, Forthright Code, 56 HOUS. L. REV. 471 (2018).
157. See supra Part III.
ited by the very nature of their operation and the monopoly of
control over them that firms enjoy.158 However, given the incen-
tives faced by start-up platform firms and the control they have
over their algorithms, it seems highly plausible that some plat-
form operator sooner or later will commit some version of these
abuses and get caught. A plaintiff who was able to overcome
dismissal of the complaint would have a great deal of leverage to
settle, given the jealousy with which firms guard their trade se-
crets and intellectual property.
The following subparts apply a tort framework to the three
categories of harms (abusive, incidental, and divergent-interest)
and analyze how different tort theories may apply to specific
examples of algorithmic harms.159 There is presently no court
that has ruled on an issue pertaining specifically to pecuniary
harms to platform workers. The body of case law dealing with
algorithmic harms generally is limited, but provides useful con-
ceptual analogies to develop legal theories that could be used by
Uber driver-partners harmed by the firm’s conduct.
1. The Tort of Misrepresentation and its Potential Application to
Uber
The tort of misrepresentation resulting in pecuniary loss holds
promise for a worker who enters into a contract with a platform
firm and is harmed by the result.160 Misrepresentation torts in
the employment context have, to this point, typically been litigat-
ed based on representations made at hiring. So-called “truth-in-
hiring” claims arise when employees accept job offers or positions
158. In Bell Atl. Corp. v. Twombly, the Supreme Court clarified that a complaint “requires . . . enough factual matter (taken as true) to suggest that” the alleged conduct actually occurred, effectively heightening the requirements of the well-pleaded complaint rule. 550 U.S. 544, 556 (2007). In Ashcroft v. Iqbal, the Court reinforced this requirement in finding the plaintiff’s claim of racial discrimination lacked enough factual basis to cross “the line from conceivable to plausible.” 556 U.S. 662, 680 (2009). Driver plaintiffs who seek redress for practices described in Part III, supra, could find themselves facing similar obstacles in the face of a motion to dismiss. As drivers have no access to the data that Uber uses to assign routes and calculate fares, they would have difficulty making a factual showing at the pleading stage. However, there are scholars who suggest that the workings of Uber’s algorithms may be discoverable through field experiments that reverse-engineer the platform. See generally Chen et al., supra note 113. Furthermore, at least one court has sustained claims of harm resulting from a defective algorithm design where plaintiff provided expert testimony. See infra Part IV.C.3 (discussing Wickersham v. Ford).
159. See supra Part III.C.3.
160. See Restatement (Second) of Torts ch. 22, §§ 525–557 (Am. Law Inst. 1977).
in reliance on false statements or promises the employer made to
entice the worker to accept the position.161 The case of platform
work is analogous, but different: each transaction effectively
amounts to an offer of a new contract.162 A platform worker
would be harmed by misrepresentations as a course of conduct,
rather than a single misrepresentation made at the initial point
of engagement: that is, a pattern of reduced fares or diminished
opportunities. As emphasized before, while any one individual
algorithmic decision may cause a relatively small pecuniary loss
to a driver, these accumulate over the course of continued trans-
actions.
Misrepresentation may be fraudulent,163 negligent,164 or inno-
cent.165 The analysis of this Note focuses on the first two; the so-
phistication and comprehensiveness of Uber’s data mining give
the firm a “god’s eye” view of drivers and riders in the field, which
is to say that its developers either know or could know everything
about the effects of algorithm design choices on the earnings of
drivers.166 Therefore, “innocent” or unknowing misrepresentation
is not applicable.
Section 525 of the Second Restatement of Torts imposes liabil-
ity on one who “fraudulently makes a misrepresentation of fact,
opinion, intention or law for the purpose of inducing another to
act or to refrain from acting in reliance upon it” for pecuniary
losses that result.167 Section 552 of the Second Restatement of
Torts imposes liability for negligently doing the same. Professor
Frank Cavico identifies seven specific situations where a worker
can bring a successful misrepresentation claim against an em-
ployer, four of which are particularly relevant in the context of
161. See Richard P. Perna, Deceitful Employers: Common Law Fraud as a Mechanism to Remedy Intentional Employer Misrepresentation in Hiring, 41 WILLAMETTE L. REV. 233, 234 (2005). See, e.g., Johnson v. George J. Ball, Inc., 617 N.E.2d 1355 (Ill. App. Ct. 1993) (upholding a claim for fraudulent misrepresentation against an employer for misleading descriptions of the position at issue).
162. See Calo & Rosenblat, supra note 6, at 1660–61.
163. Restatement (Second) of Torts ch. 22, § 525 (Am. Law Inst. 1977).
164. Id. § 552(1).
165. Id. § 552C.
166. Chris Welch, Uber Will Pay $20,000 Fine in Settlement Over “God View” Tracking, VERGE (Jan. 6, 2016), https://www.theverge.com/2016/1/6/10726004/uber-god-mode-settlement-fine [https://perma.cc/8XBR-UQMV].
167. Restatement (Second) of Torts ch. 22, § 525 (Am. Law Inst. 1977).
platform work.168 These occur where an employer misrepresents
the terms or conditions of employment;169 misrepresents the em-
ployer’s financial condition, profitability, or the employee’s in-
come potential;170 makes false statements regarding the legality,
propriety or fairness of employment practices;171 or misrepresents
salary, commissions, insurance or other benefits.172 Many or all
of these claims could be employed against platform operators en-
gaging in abusive or misleading practices.
2. Uber’s Duty to its Driver-Partners
Uber’s driver-partner contract seeks to exculpate the firm
from liability broadly, including claims of misrepresentation. It
contains specific provisions disclaiming guarantees of service
provision, error-free service, or of the app providing any requests
for transportation whatsoever.173 Some scholars doubt the en-
forceability of these types of contracts due to their status as “con-
tracts of adhesion,” the inability of consumers to carefully scruti-
nize complex terms on the fly, or a “fleeting unconscionability”
that results when a driver-partner is forced to agree to new terms
before logging into the app.174
As a general matter, a party cannot contract around its liabil-
ity for fraud; it can, however, contract away its liability for negli-
gence. In most American jurisdictions, a contractual relationship
implies a covenant of good faith; fraud is, by definition, a breach
of the duty of good faith.175 Negligent misrepresentation arises
168. Frank J. Cavico, Fraudulent, Negligent, and Innocent Misrepresentation in the Employment Context: The Deceitful, Careless, and Thoughtless Employer, 20 CAMPBELL L. REV. 1, 4 (1997).
169. See Hamlen v. Fairchild Indus., Inc., 413 So.2d 800, 801–02 (Fla. Dist. Ct. App. 1982); Bemmes v. Pub. Emps. Ret. Sys. of Ohio, 658 N.E.2d 31, 35–36 (Ohio Ct. App. 1995).
170. See Berger v. Sec. Pac. Info. Sys., Inc., 795 P.2d 1380, 1383–84 (Colo. Ct. App. 1990); Clement-Rowe v. Mich. Health Care Corp., 538 N.W.2d 20, 23–24 (Mich. Ct. App. 1995).
171. See Russ v. TRW, Inc., 570 N.E.2d 1076, 1083–84 (Ohio 1991).
172. See Sandler v. New York Times Inc., 721 F. Supp. 506, 512 (S.D.N.Y. 1989); Duck Head Apparel Co., Inc. v. Hoots, 659 So.2d 897, 904–05 (Ala. 1995).
173. Rasier LLC Technology Services Agreement, supra note 140, cl. 11.
174. See OREN BAR-GILL, SEDUCTION BY CONTRACT: LAW, ECONOMICS, AND PSYCHOLOGY IN CONSUMER MARKETS 141–45 (2012); see also Horton, supra note 138, at 649–50.
175. Steven J. Burton, Breach of Contract and the Common Law Duty to Perform in Good Faith, 94 HARV. L. REV. 369, 370 (1980).
from the defendant’s failure to exercise reasonable care and com-
petence in determining underlying facts or information or in
communicating that information to the worker.176 With a claim
of negligent misrepresentation, however, the plaintiff may have
difficulty convincing the court that the defendant owed her a
duty. Some
courts look for “special circumstances” before imposing tort
duties on one party to a contractual relationship; for in-
stance, the Fourth Circuit has held that a contract must involve
“special circumstances,” such as a doctor-patient relationship or
lawyer-client relationship, to impose tort duties in addition to
contractual duties.177
Uber has none of the special professional duties of a doctor or
lawyer. The independent-contractor model presumes two com-
mercial parties on equal footing, and contract law typically takes
the position that these parties should protect themselves (or not)
purely through the terms of their contract. However, the extreme
information asymmetry and unilateral control Uber enjoys over
the dispatch algorithm counsel in favor of Uber owing drivers a
duty of care in making representations to driver-partners, wheth-
er in its “partnership” joint-profit-maximizing representations to
drivers at the outset of the relationship, its representations
about the neutrality of the dispatch algorithm, or in providing
granular information to drivers about the locations of “surge
zones.” The ease with which Uber could undermine its own rep-
resentations to further its own interests at its drivers’ expense,
coupled with the near-impossibility for driver-partners to police
the terms of the bargain, militates in favor of holding Uber to a
special standard of care.
3. Design Choices that May Amount to a Breach of Duty
Assuming that Uber has a duty to its driver-partners in mak-
ing good-faith or non-negligent representations about the func-
tionality of the work platform, drivers’ potential earnings overall
or in a given surge area, or the basic nature of the “partnership,”
the next question is what types of algorithm design choices
amount to a breach of that duty. As described previously, these
176. Cavico, supra note 168, at 56.
177. McNierney v. McGraw-Hill, Inc., 919 F. Supp. 853, 862 (D. Md. 1995) (citing Martin Marietta v. Int’l Telecomm. Satellite Org., 991 F.2d 94, 98 (4th Cir. 1992)).
design choices and resultant harms can be broadly categorized as
abusive, divergent-interest, or incidental.178
The breach element of the tort is fairly straightforward for the
cases of abusive and divergent-interest design choices, as these
will amount to breaches of the duty of good faith. Abusive prac-
tices are the most easily dispatched. At a basic level, if Uber rep-
resents to driver-partners that its algorithm is designed to help
maximize their earnings, any design choices not geared to that
result arguably represent a breach. The most flagrant examples
would be deliberately showing a false surge to drivers or wage-
discriminating by offering drivers only the lowest-value rides that
they were likely to accept.179
Divergent-interest practices are not quite as clear cut but
would still breach the duty of good faith. Nowhere in its market-
ing materials or terms of service does Uber advertise to drivers
that they are part of a vast real-world behavioral economics ex-
periment meant to improve and enhance Uber’s intellectual prop-
erty. Yet, if it deploys a bandit algorithm that routinely sends
drivers to test areas or alternative routes with the aim of gather-
ing data rather than generating fares for drivers, it breaches its
duty of good faith. Similarly, if the algorithm sends off-line driv-
ers a “nudge” message to get on the road at a certain time due to
“high demand,”180 whether measured or anticipated, it is essen-
tially representing to drivers that they can expect higher-than-
average earnings. However, supply may quickly overwhelm de-
mand; the algorithm (or its operators) may have the reduction of
rider wait times as the actual goal, with oversupplying drivers as
the means to that end.
Incidental harms resulting from design choices are trickier.
Pecuniary losses to driver-partners as a result of “overfitting” of-
fer perhaps the least blameworthy case; as discussed in Part III.A
of this Note, overfitting is endemic to machine-learning algo-
rithms and requires active measures by developers to over-
come.181 Some courts have held algorithm proprietors liable for
178. See supra Part III.C.3.
179. See Calo & Rosenblat, supra note 6, at 1766.
180. See Scheiber, supra note 93 (“Many companies in the gig economy simply do not have enough workers, or rich enough data about their workers’ behavior, to navigate busy periods using nudges and the like. To avoid chronic understaffing, they have switched to an employee model that allows them to compel workers to log in when the companies most need them.”).
181. See Lehr & Ohm, supra note 67, at 683.
defects in the design of their algorithms resulting in economic
harm to plaintiffs. Specifically, in the case of Gambles v. Sterling
Infosystems, the plaintiff, Gambles, brought suit against a pro-
vider of background checks that used algorithms to provide rec-
ommendations to prospective employers.182 Gambles alleged that
the firm’s algorithm “(1) contained information about addresses
where he had not lived in more than seven years; (2) incorrectly,
inconsistently, or duplicatively reported the dates that Gambles
had lived at various addresses; and (3) used false and derogatory
terms to describe certain addresses . . . [and that] these state-
ments depicted him as itinerant, unstable, and unattractive” and
harmed his employment prospects.183
The court sustained plaintiff’s allegations that the algorithm’s
design (specifically, developers’ failure to clean the data to a
reasonable degree)184 did not meet the Fair Credit Reporting Act’s
requirement that “reasonable procedures be used to assure the
maximum possible accuracy of credit reports.”185 While there is
no statute currently requiring platform firms to use reasonable
procedures to ensure the accuracy or fairness of their algorithms’
decisions, a misleading representation of economic opportunity
coupled with a failure to reasonably design the algorithm to en-
sure the represented outcome could meet a common-law negli-
gence standard.
Product liability torts offer another conceptual frame for deal-
ing with this issue, in the form of design defects. In the case of
Wickersham v. Ford, the plaintiff’s estate was able to sustain a
claim against a carmaker that an algorithm designed to deploy
an airbag in the event of a crash was designed defectively.186 The
plaintiff’s expert witness, an automotive engineer, testified that
the algorithm in question had been negligently designed. The
algorithm was allegedly “trained” using crash test data that did
not account for the type of crash that occurred in the instant case.
Thus, the court held that it was a question of fact whether Ford
was negligent in not conducting more thorough testing.187
182. Gambles v. Sterling Infosystems, Inc., 234 F. Supp. 3d 510, 515 (S.D.N.Y. 2017).
183. Id. at 513.
184. See Lehr & Ohm, supra note 67, at 656.
185. Gambles, 234 F. Supp. 3d at 513.
186. Wickersham v. Ford Motor Co., 194 F. Supp. 3d 434, 438 (D.S.C. 2016).
187. Id. at 438.
The same logic could apply to the hypothetical case where the
dispatch algorithm “unintentionally” or “carelessly” discriminates
against certain drivers or classes of drivers.188 If Uber represents
to drivers that they can expect consistent earnings per hour
worked in a given area, but then fails to adequately test or train
the algorithm to ensure this outcome, the firm could be liable for
negligence in making that representation.
4. Causation Requirements
The causation element of misrepresentation torts presents
perhaps the most difficult challenge to would-be driver-partner
plaintiffs. To show causation in any tort context, a plaintiff must
show that the defendant’s actions (or lack thereof) were the “but-
for cause” of the harm or a substantial factor in causing the
harm.189 To prevail on a claim of fraudulent misrepresentation, a
plaintiff must show both that the defendant intended to deceive
her and intended to induce her to rely on the misrepresentation
and act on that reliance.190 Therefore, a driver-partner would
need to show that a misrepresentation made by Uber (whether in
its general representations of how the algorithm works, in the na-
ture of the partner relationship, or as communicated through the
app) was a substantial factor in causing her economic harm.
The plaintiff would also need to show actual reliance on that mis-
representation that was reasonable or justifiable under the cir-
cumstances.191
A driver-partner should be able to show justifiable reliance
with little difficulty; by simply supplying a vehicle and using the
app to make income, she is acting based on Uber’s representa-
tions about its algorithm working to efficiently match riders with
drivers and earn drivers money.192 To the extent that a plaintiff
accepted and acted on the justifiable belief that the algorithm
was designed to maximize (or even reasonably promote) driver
188. See supra Part III.C.3 (discussing the possibility that Uber’s documented lower pay for female drivers is the result of unintended consequences of choices made in the design of the dispatch algorithm).
189. See Cavico, supra note 168, at 41.
190. See id. at 34, 39. In the case of negligent misrepresentation, the intention element is displaced by a lack of reasonable care. Id.
191. Id. at 44.
192. See Hall & Krueger, supra note 86; see also Voytek, supra note 106 (describing Uber engineers’ efforts to “optimize” the dispatch algorithm).
profits, her actions are substantially caused by those representa-
tions.
The key difficulty for driver-partner plaintiffs would be mak-
ing a showing of what algorithm design choices were made, why
developers made those choices, how the results of those choices
differed from representations made by the firm, and then show-
ing that the choice was a substantial factor in an economic loss.
Uber voluntarily shares the broad outlines of how its dispatch
algorithm works but is also highly secretive about its specific pa-
rameters.193 To sustain a claim, a plaintiff might need a disgrun-
tled whistleblower inside the firm or a leak of internal documents
describing abusive or divergent-interest algorithm design choices
and an awareness on the part of developers that the results of
these decisions would undermine or directly contravene represen-
tations to driver-partners.194
5. Calculating Damages for Economic Harms to Uber Driver-
Partners
A plaintiff who seeks to recover damages based on an employ-
er’s misrepresentation must show a legally recognizable injury or
loss as a result of the misrepresentation.195 Assuming the de-
fendant’s misrepresentation legally causes the plaintiff to suffer
actual damages, courts typically permit juries to award compen-
satory monetary damages based on a “general” measure of “di-
rect” damages, in addition to “indirect” damages, which can com-
prise “benefit of the bargain” damages (placing the plaintiff
where she would have been if the representation had been accu-
rate) or the recovery of out-of-pocket losses.196
For Uber driver-partners, “benefit of the bargain” damages
might amount to the difference between actual earnings and the
amount they would have earned in the absence of abusive, diver-
gent-interest, or incidental design choices not made in service of
the fundamental joint-profit partnership premise.197 Out-of-
pocket losses might be measured by the costs of maintaining and
operating a vehicle, or perhaps the opportunity cost to drivers
193. See supra Part III.C.2.
194. See supra note 158 for a more detailed discussion of this point.
195. Cavico, supra note 168, at 52.
196. Id.
197. See supra Part III.B.
who spend hours idling or “chasing surges” that might have been
spent in more gainful employment.
In fact, calculating these damages is an even more formidable
task than simply identifying and presenting evidence of the
harmful design choices that give rise to these losses. With unin-
hibited access to Uber’s data and algorithm, and with the assis-
tance of sophisticated data scientists, a plaintiff might be able to
“run the model”198 to simulate the effects of different designs on
driver earnings. Uber is unlikely to share this information unless
compelled to in discovery.
Other market enforcement contexts offer clues as to how such losses might be calculated, however. The New York Stock Exchange (NYSE), in the course of investigating electronic stock trading fraud, developed and deployed algorithms to identify stock transactions in which “specialist firms” exploited delays in public stock orders to gain an unfair market advantage.199 The algorithm developed by the NYSE (and later used by the Department of Justice in its criminal prosecution of specialist firms) was able to identify not only the specific transactions that violated trading rules, but also the number of disadvantaged shares and the dollar amounts by which they were disadvantaged.200
While the NYSE in that case had the advantage of full access to the data in question, the case at least demonstrates the principle that highly complex market-based transactions can be tracked, evaluated, and compared against but-for outcomes to measure damages. One previously noted difficulty in a hypothetical lawsuit against Uber is that the individual economic harm suffered by an Uber driver-partner (in the form of a single ride commission denied or an extra ten minutes spent idling) might be very small; over time, however, the damages could add up substantially. The NYSE case shows that the practical obstacles to Uber driver plaintiffs achieving a similar measurement are various and formidable, but not impossible to overcome.
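An expert pursuing the NYSE’s transaction-level approach might, in principle, compare each trip’s actual payout against a modeled but-for payout and aggregate the differences. The sketch below is again purely illustrative: the record format, the payout figures, and the existence of any but-for model are assumptions, not descriptions of an actual audit tool.

```python
# Illustrative but-for accounting: each record pairs a trip's actual payout
# with a modeled but-for payout. All data are invented for exposition.
trips = [
    {"trip_id": 1, "actual": 11.80, "but_for": 12.10},
    {"trip_id": 2, "actual": 23.50, "but_for": 23.50},
    {"trip_id": 3, "actual": 7.25,  "but_for": 8.40},
    # ...thousands more rows in a real analysis
]

# Flag each disadvantaged trip and the dollar amount of the shortfall,
# mirroring the NYSE's per-transaction identification of harm.
flagged = [(t["trip_id"], t["but_for"] - t["actual"])
           for t in trips if t["but_for"] > t["actual"]]
total_harm = sum(delta for _, delta in flagged)

print(f"{len(flagged)} disadvantaged trips; aggregate shortfall ${total_harm:.2f}")
```

The point of such an accounting is that harms too small to notice on any single trip become legible, and compensable, once they are identified and summed across thousands of transactions.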
In short, claims brought by driver-partners against Uber would face a number of headwinds, but they are firmly grounded in established common law principles. While proposed regulatory responses and worker classification reforms are steps in the right direction, they will have limited impact as long as the workings of Uber and other platforms remain opaque. Lawsuits brought by driver-partners may be especially useful in revealing the actual contours of the relationship between Uber and its users and in forcing the legal system to evaluate those contours against established law.

198. See supra Part III.A.
199. In re NYSE Specialists Sec. Litig., 260 F.R.D. 55, 66 (S.D.N.Y. 2009).
200. Id.
V. CONCLUSION
Black-box algorithms may prove impervious to misrepresentation torts: plaintiffs who have been economically harmed by abusive, divergent-interest, or incidental algorithm design choices simply do not have access to the facts they would need to meet pleading requirements and survive a motion to dismiss. An information or security breach at a firm like Uber, however, could well reveal actionable discrepancies between the representations it makes to its driver-partners and the goals it sets for its algorithm, if there is in fact anything to hide.
As the modern economy continues to change, it is clear that algorithms will play a growing role in mediating employment relationships. As in other areas where algorithms are entrusted to make consequential decisions, regulators, legal scholars, and, especially, the operators of work platform algorithms have a responsibility to consider the impacts of their use. As yet, there is little legislation or regulation of the use of algorithms to manage workers. Courts are only beginning to encounter claims of algorithmic harms. Partly due to the complexity and opacity of machine learning algorithms, the legal academy has been slow to articulate the potential harms of automated decision-making and to propose and develop policy solutions.201
As such, common law principles applied to new situations can point the way towards a fair and efficient regime for regulating platform work. The promise of platform labor markets should not be discounted, and concerns about “stifling innovation,” while sometimes strategic and overblown, are still worth considering. Eventually, statutes tailored to the unique problems presented by platform work may be necessary or desirable. But before legislators attempt to draft comprehensive regulation, it may make sense to first address individual abuses as they arise and allow the law to develop gradually from existing principles.
201. See Lehr & Ohm, supra note 67, at 655.