The Algorithm Made Me Do It and Other Bad Excuses


By: Rebecca J. Krystosek, Volume 101 Staff Member

As the outputs of algorithms increasingly pervade our everyday lives—from wayfinding apps and search engine autofill results to investment advice and self-driving cars—we must also come to terms with who should be held accountable when those algorithms cause harm, and how.

In the lives of many people, algorithms are harmless, even helpful. They help us to quickly find answers to our questions by bringing to the fore the most relevant results of our online searches. Algorithms allow Netflix, Hulu, and Amazon to provide personalized suggestions of binge-worthy TV shows based on our viewing history.[1] Google employs them to recommend the quickest route to get wherever we are going.[2] And although their efficacy is widely questioned in this context, algorithms may also come into play in our dating lives.[3]

To the extent that algorithms underlying these technologies may sometimes lead us astray—literally or figuratively—some among us may be prepared to accept that tradeoff because we generally find them more helpful than harmful. Yet, not all algorithms are necessarily beneficial, nor are they necessarily benign. Imagine a person whose reputation or career, or a business whose bottom line, is significantly damaged by misleading Google Autocomplete suggestions.[4] Similarly, consider that an online search of a person’s name might be accompanied by ads for arrest records because Google’s machine-learning mechanisms are “inadvertently racist” and “link[] names more commonly given to black people to ads relating to arrest records.”[5] Contemplate the person whose prison sentence is based on a “risk assessment” algorithm the sentencing judge doesn’t even understand,[6] resulting in disparate treatment of whites and blacks and inaccurate predictions along racial lines.[7] And what of the person who dies or is seriously injured because of a design flaw or failure of a self-driving car?[8]

In the context of widely used algorithms, harm is, of course, inevitable. But especially in the United States, so are lawsuits. A great deal has already been written about what specific doctrinal changes might be necessary in order for the U.S. legal system to accommodate liability for algorithm-spawned harm in various contexts.[9] This Post does not attempt to suggest particular legal remedies or doctrinal changes in the law. Nor does it take up the moral dilemma of how algorithms should make decisions in zero-sum games or whom to save in a potential car crash.

Rather, this Post argues that however else the law might shift to accommodate the proliferation of algorithms, legal liability should not be avoidable merely because an algorithm caused the harm, rather than a person. Carving out special exceptions in longstanding legal doctrines or fashioning new laws that allow algorithms to become a vehicle for liability avoidance would be misguided and a perversion of justice. Algorithms are not forces outside the control of their creators, like Frankenstein’s monster. Rather, they are designed and proliferated by people and for people, often in pursuit of efficiency and profit. Moreover, algorithms can be altered and refined indefinitely. Business and government entities, organizations, and individuals must be responsible, proactive stewards of their institutional policies, products, and decisions in order to minimize their legal liability. So, too, these entities must also be responsible stewards of the algorithms they employ and proliferate.


Thousands of times each day, each of us makes decisions based on the information available to us. Many of those decisions are subconscious, while others are consciously made. Whatever difficulty we might have in discerning their effects, all of these choices bear some sort of consequence, ranging from imperceptibly insignificant to life-altering.

Naturally, humans must base their decisions on imperfect information. It’s never actually possible to perceive or take into account all the potentially relevant information for any given decision. As a result—and likely also as a result of flaws in our rationality—the choices we make are sometimes also flawed.[10]

Algorithms are not all that different from human-made decisions in this way. They simply produce certain outputs based on the inputs provided.[11] Algorithms do what they are designed to do—no more, and no less. Simply put, “[w]hen the algorithm errs, humans are to blame. When it evolves, it’s because a bunch of humans read a bunch of spreadsheets, held a bunch of meetings, ran a bunch of tests, and decided to make it better.”[12] Even when an algorithm’s creators fail to understand how it arrives at a certain result,[13] they nevertheless retain the ability to revisit and iteratively revise the underlying logic, whether in the interests of clarity or more desirable outputs.

Like the inputs for human decisions, the inputs for algorithms with real-life applications are almost always incomplete or flawed somehow.[14] Why? For one, in the real world, there is typically no such thing as perfect, or perfectly complete, information.[15] Some algorithmic outputs will also be flawed because humans design algorithms and write the logic on which algorithmic decisions are made.[16] As we all know, humans make mistakes—even the really smart ones—and possess all sorts of biases—even the well-intentioned ones.

The litany of harms caused by the application of algorithms is well-documented, both at the individual level and in aggregate.[17] They range from discriminatory lending practices to flawed and racist recidivism models used in many jurisdictions to inform sentencing decisions. Some of these harms are the result of what can only be described as irrational biases.[18] For instance, “in Florida, adults with clean driving records and poor credit scores paid an average of $1552 more [for car insurance] than the same drivers with excellent credit and a drunk driving conviction.”[19] Whether algorithm-caused harms stem from inadvertent design flaws or are collateral consequences of a biased design working as intended, the distinction is of little importance to those whose lives are affected.

Problematically, algorithms are generally “completely opaque and unassailable. People often have no recourse when the algorithm makes a mistake.”[20] What is more, many algorithms “create feedback loops that perpetuate injustice. Recidivism models and predictive policing algorithms—programs that send officers to patrol certain locations based on crime data—are rife with the potential for harmful feedback loops.”[21]
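The feedback-loop mechanism described above can be illustrated with a toy simulation. All numbers, and the patrol-allocation rule itself, are hypothetical and not drawn from any real predictive-policing system: two neighborhoods with identical true crime rates steadily diverge in recorded crime once patrols are allocated according to past records and detection tracks patrol presence.

```python
# Toy model of a predictive-policing feedback loop. All quantities are
# hypothetical; this is an illustration of the mechanism, not any real system.

def simulate_feedback(initial_records, true_rate=10.0, rounds=20):
    """Each round, patrols are allocated in proportion to recorded crime,
    and crime is recorded only where officers are present to observe it.
    True crime (true_rate) is identical in every neighborhood."""
    records = list(initial_records)
    for _ in range(rounds):
        total = sum(records)
        patrol_share = [r / total for r in records]
        # Detection follows patrols, so past records drive future records.
        records = [r + true_rate * s for r, s in zip(records, patrol_share)]
    return records

# Two neighborhoods with equal true crime but a slightly skewed history.
final = simulate_feedback([55.0, 45.0])
gap_before, gap_after = 55.0 - 45.0, final[0] - final[1]
# The recorded-crime gap widens every round, though true crime never differed.
```

Under this allocation rule the absolute gap in recorded crime grows each round, so the neighborhood with the skewed history keeps attracting more patrols and generating more records: a self-reinforcing loop in which the data appear to confirm the very assumption that produced them.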


Given the now ubiquitous nature of technology, the continued proliferation of algorithmic applications seems inevitable, with all the attendant benefits and harms. Regulatory frameworks and legal doctrines will necessarily shift over time to accommodate these changes. But as the law changes, it should retain the underlying principles that aim, albeit imperfectly, to deter unreasonable harm and hold persons and entities accountable for the avoidable harms they inflict. The liability determination should be indifferent to the human-versus-machine distinction. The law should not afford any special “out” for algorithmic harms, nor create special regimes of liability avoidance. To create such loopholes would create perverse incentives, thereby enabling and encouraging entities to accomplish by algorithm what they could not otherwise legally do.

Even the rationale that algorithms do more good than harm, and therefore the collateral consequences should be excused, should generally have no place in the determination of legal liability. After all, individuals do not get to avoid civil or criminal liability by virtue of having led otherwise helpful, harmless lives. However, in very particular applications in which the algorithms in question are both heavily regulated and present a clear and compelling public safety benefit, many have suggested it may make sense to create an alternate compensation regime.[22] Such a regime would be comparable to the National Childhood Vaccine Injury Act of 1986 (NCVIA), which compensates those injured by vaccines and insulates vaccine manufacturers from liability.[23] Such a system might be appropriate in the context of medical algorithms[24] or driverless cars.[25]

This caveat—that carefully regulated algorithms with a clear public health benefit may warrant a different liability framework—is consistent with the premise of this Post. The NCVIA represents a rare instance in which Congress acted to preempt state and common law claims in the interest of public health. If and when a particular application of algorithms presents a similarly compelling potential public health benefit which is scientifically verifiable, establishment of an alternate regime might be warranted. However, such a regime should be a rare exception, rather than the rule.

When an entity chooses to create and proliferate an algorithm in furtherance of its own objectives, it also necessarily makes a value judgment about what matters and what does not. Choices about whether and how to employ algorithms are a business decision like any other. Values and choices are embedded in the design of the algorithm, just as they are reflected in a company’s policy manual, board room, and standard operating practices. And like any decision, the choice to employ an algorithm—whether in pursuit of profits or efficiency or any other goal—entails the possibility of unknown consequences, both risks and rewards. Those consequences ought to be borne by those who create algorithms, employ them, and stand to profit from their use. If the circumstances would support a finding of legal liability for a sentient being, the law should also support such a finding for an algorithm and assign liability to its owner.

  1. See Scott Collins, TV Seems to Know What You Want to See; Algorithms at Work, L.A. Times (Nov. 21, 2014).
  2. See Michael Byrne, The Simple, Elegant Algorithm That Makes Google Maps Possible, Vice (Mar. 22, 2015).
  3. See, e.g., Caitlin Dewey, The One Thing About “Matching” Algorithms That Dating Sites Don’t Want You to Know, Wash. Post (Nov. 11, 2015) (“Research suggests that so-called ‘matching algorithms’ are only negligibly better at matching people than random chance.”); Benjamin Winterhalter, Don’t Fall in Love on OkCupid, JSTOR Daily (Feb. 10, 2016). But see Kevin Poulsen, How a Math Genius Hacked OkCupid to Find Love, Wired (Jan. 21, 2014).
  4. See Seema Ghatnekar, Injury by Algorithm: A Look into Google’s Liability for Defamatory Autocompleted Search Suggestions, 33 Loy. L.A. Ent. L. Rev. 171, 174–74 (2013).
  5. Luke Dormehl, Algorithms Are Great and All, But They Can Also Ruin Lives, Wired (Nov. 19, 2014).
  6. See, e.g., Wisconsin Supreme Court Requires Warning Before Use of Algorithmic Risk Assessments in Sentencing.—State v. Loomis, 881 N.W.2d 749 (Wis. 2016), 130 Harv. L. Rev. 1530, 1530–35 (2017); see also Algorithms in the Criminal Justice System, Electronic Privacy Info. Ctr. (last visited May 10, 2017) (“‘Risk assessment’ tools are algorithms that use socioeconomic status, family background, neighborhood crime, employment status, and other factors to reach a supposed prediction of an individual’s criminal risk, either on a scale from ‘low’ to ‘high’ or with specific percentages.”).
  7. See Algorithms in the Criminal Justice System, supra note 6 (stating the results of one study which found that while 23.5% of white offenders labeled “higher risk” did not reoffend, 44.9% of African Americans fell into that category, and observing further that one of the most widely used “risk assessment” algorithms was “particularly likely to flag black defendants as future criminals, labeling them as such at almost twice the rate as white defendants.”).
  8. See, e.g., Bill Vlasic & Neal E. Boudette, Self-Driving Tesla Was Involved in Fatal Crash, U.S. Says, N.Y. Times (June 30, 2016) (noting that although a federal investigation is ongoing and the question of fault has not yet been determined, Tesla stated in a news release, “Neither autopilot nor the driver noticed the white side of the tractor-trailer against a brightly lit sky, so the brake was not applied.”).
  9. See, e.g., Damien A. Riehl, Car Minus Driver: Autonomous Vehicles Driving Regulation, Liability, and Policy, 73 Bench & B. Minn. 25, 26–29 (2016).
  10. Cf. Daniel L. McFadden, The New Science of Pleasure 26 (Nat’l Bureau of Econ. Research, Working Paper No. 18687) (“There are now extensive experiments and insights from cognitive psychology that contradict a narrowly defined neoclassical model of rational choice . . . [and which] suggest that preferences are malleable and context-dependent, that memory and perceptions are often biased and statistically flawed, and decision tasks are often neglected or misunderstood.”).
  11. Will Oremus, Who Controls Your Facebook Feed, Slate (Jan. 3, 2016) (“Facebook’s algorithm . . . isn’t flawed because of some glitch in the system. It’s flawed because, unlike the perfectly realized, sentient algorithms of our sci-fi fever dreams, the intelligence behind Facebook’s software is fundamentally human. Humans decide what data goes into it, what it can do with that data, and what they want to come out the other end.”).
  12. Id.
  13. See Adrienne LaFrance, Not Even the People Who Write Algorithms Really Know How They Work, Atlantic (Sept. 18, 2015) (“Even the engineers who develop algorithms can’t tell you exactly how they work.”).
  14. See Dormehl, supra note 5 (stating that “a computer algorithm might be unbiased in its execution, but . . . this does not mean that there is not bias encoded within it.”).
  15. See, e.g., David Gilbert, Poker Pros vs. the Machines: Watch the World’s Best Poker Players Get Crushed by A.I., Vice (Jan. 30, 2017) (noting “very few real world situations are perfect information games, while most real world situations are imperfect information games”).
  16. See Dormehl, supra note 5.
  17. See generally Cathy O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2016) (extensively documenting these harms); Dormehl, supra note 5 (same). 
  18. See Dormehl, supra note 5 (observing that human trust in algorithms is misplaced because “humans craft those algorithms and can embed in them all sorts of biases and perspectives.”).
  19. O’Neil, supra note 17, at 165.
  20. Evelyn Lamb, Review: Weapons of Math Destruction, Sci. Am. (Aug. 31, 2016).
  21. Id.
  22. See, e.g., Shailin Thomas, Artificial Intelligence and Medical Liability (Part II), Bill of Health (Feb. 10, 2017) (considering an alternate compensation regime similar to the NCVIA in the medical algorithm context); Riehl, supra note 9, at 29 (discussing an alternate compensation regime similar to the NCVIA in the driverless car context).
  23. 42 U.S.C. §§ 300aa-1–300aa-34 (2012); see also Bruesewitz v. Wyeth LLC, 562 U.S. 223, 243 (2011) (holding that the NCVIA “preempts all design-defect claims against vaccine manufacturers brought by plaintiffs who seek compensation for injury or death caused by vaccine side effects.”).
  24. Thomas, supra note 22 (“Vaccines share many of the characteristics that make [medical] AI algorithms unfit for traditional buckets of liability designed to make patients whole after suffering adverse consequences — they are important for disease prevention, but they are also inherently imperfect and have many well-documented side effects.”).
  25. See Riehl, supra note 9, at 29 (“Like the vaccine act, which sought to reduce the possibility of lawsuit-besieged manufacturers scaling back vaccine production, a no-fault auto-insurance act could strike a similar balance. Such a policy might encourage the development of life-saving technology, while minimizing market forces that might encourage technological stagnation.”).