TWIST IT, PULL IT, BOT IT: DEVUMI, BOTS, AND THE END OF THE FTC’S POLITICAL NEUTRALITY
By: Lee Silberberg, Vol. 105 Staffer
The FTC has broad authority under § 5(a) of the Federal Trade Commission Act to protect consumers from "unfair or deceptive acts or practices in or affecting commerce[.]" Under this grant of power, the FTC may enforce against deceptive acts in a variety of fora—like advertising and media marketing. In pursuing deception offenses under § 5(a), the FTC, as an independent federal agency, has a general policy of non-interference in political matters. This post will first examine the FTC's landmark social media bot enforcement—FTC v. Devumi. It will then explain that the FTC's reasoning in Devumi, in the context of contemporary bot usage, casts doubt on the future of the FTC's policy of political neutrality.
I. Twist It—How the FTC’s Enforcement in Devumi Started It All
In Devumi, the FTC brought an enforcement action for violation of § 5(a) against a company whose sole business was the sale of social media "bots." Devumi's sales spanned five social media platforms and included buyers such as athletes, musicians, and other influencers. Unlike in other online marketing and advertising cases, the FTC did not argue that Devumi had failed to disclose a material connection to the good being sold or that Devumi had marketed a good or service deceptively. Rather than targeting the act or practice of selling, the FTC stated that Devumi had violated § 5(a) because the product itself was deceptive.
The FTC's argument as to why bots are themselves deceptive came in two interwoven parts. First, the sale of bots was deceptive because bots "allow[ed] those users to exaggerate and misrepresent their social media influence." Bots act as a shadowy force that distorts the accuracy of online metrics—and this can be a boon for influencers. By selling bots, Devumi "provided customers with the means and instrumentalities to commit deceptive acts or practices . . . ." In effect, selling bots was deceptive because of the harmful results that bot usage causes.
But the FTC took its reasoning a step further. The FTC argued that the mere injection of bots into the influencer marketplace could “undermine the influencer economy and consumer trust in the information that influencers provide.” By this view, the sale of bots was deceptive because the mere possibility of bots being used in a marketplace undermines trust.
In sum, the FTC’s justification for Devumi’s violation of § 5(a) was not about how Devumi sold its product. Instead, Devumi’s acts were deceptive because (1) bots are deceptive as an instrument and (2) bots’ mere existence in a market is deceptive. Taking the FTC’s points together: if bots are both instrumentally deceptive and existentially deceptive, then bots are deceptive in and of themselves.
II. Pull It—Devumi’s Reasoning Endangers the FTC’s Political Neutrality
There are two critical takeaways from Devumi. First, the FTC's reasoning in Devumi is an evolution of its past enumerations of the deception standard, one that meets the challenges of contemporary deceptive practices. This evolution likely reflects the FTC's growing distrust of social media companies' ability—and incentive—to self-police. As the first (and only) bot enforcement case, Devumi can be read to stand for the proposition that the FTC both recognizes the need to evolve its deception standard and is willing to enforce that evolving standard.
Second, while Devumi was an enforcement action against a company that sold bots, the FTC did not elaborate on who else could face enforcement. The FTC's central argument in Devumi is, effectively, that bots are inherently deceptive. But if bots themselves are inherently deceptive, why draw the line at bot sellers? Why not extend enforcement to those who buy and use bots for their own economic gain? Indeed, when Devumi was decided, the FTC recognized that the influencers and marketing firms buying bots from Devumi were using them for exactly those purposes. By 2019, the prevalence of bots among social media influencers was well known—as was bot usage in political fora. Though the FTC addressed the facial issue of bot sellers, Devumi did not clarify the deeper issue of how the FTC would choose its enforcement targets once bots' economic deception melded into a larger, already partisan issue.
With regard to political neutrality, the state of bot use in 2020 makes it all the thornier for the FTC to enforce against bots' deceptive bolstering of YouTube views and Twitter follower counts without meddling in political issues. The use of bots to spread false narratives, manufacture a false sense of popularity around ideas, and sow distrust of otherwise trustworthy media is no longer separable from the use of bots for economic gain. The line between purely economic influencing with bots and political influencing with bots has disappeared.
If bots are inherently deceptive, then bot usage—not just bot sales—will likely result in consumer injury. At least some of these cases will likely involve political offenders. If the FTC holds to its theory that bots are inherently deceptive, then the companion conclusion is that politicians' and politico-influencers' use of bots to bolster their views, sell out their events, and hawk their goods should be subject to that same scrutiny.
Bot usage, data, and the promulgation of information are now thoroughly interwoven with commerce. As these mixed cases increase, distinguishing enforcement against bot users for their deception from enforcement that uses deception as a convenient ruse for targeting political enemies will be difficult. The fanfare that will likely surround these enforcements is unlikely to help clear the air. In short, the FTC will be forced to choose between abdicating its enforcement duty and bringing enforcements that will at least appear highly political. And this will pit the FTC's goal of consumer protection against its desire to retain an air of political neutrality.
III. Bot It—The Future of the FTC’s Political Neutrality Appears Grim
In Devumi, the FTC brought an enforcement action under § 5(a) against a bot seller on the theory that bots are inherently deceptive. As influencer marketing bends toward the political and political influencing bends toward the economic, the line between FTC enforcement against political influencers for their bot-based deception and enforcement against them for their politics will blur. The image of one political group's favorite bot user being punished, when many others abuse bots similarly, will likely appear starkly political—even if not intended to be. The question remains: can the FTC's policy of political neutrality survive after Devumi? It seems unlikely.
 15 U.S.C. § 45(a)(1).
 The FTC has enumerated a three-factor standard to determine when an "act or practice" is "deceptive": (1) the existence of a representation, omission, or practice likely to mislead a consumer, (2) the practice is viewed from the perspective of a reasonable consumer, and (3) the representation, omission, or practice must be "material[.]" See James C. Miller III, FTC Policy Statement on Deception, Fed. Trade Comm'n (1983), https://www.ftc.gov/system/files/documents/public_statements/410531/831014deceptionstmt.pdf [https://perma.cc/NDW4-C8CN].
 See William E. Kovacic & Marc Winerman, The Federal Trade Commission as an Independent Agency: Autonomy, Legitimacy, and Effectiveness, 100 Iowa L. Rev. 2085, 2086–87 (2015) (discussing the origins of the FTC as an independent federal agency). This post assumes that the FTC as an independent agency seeks to be non-partisan and seeks to refrain from repetitive, overtly politically fraught enforcement.
 See Rohit Chopra, Statement of Commissioner Rohit Chopra Regarding the Report to Congress on Social Media Bots and Deceptive Advertising 4 n.15 (2020) [hereinafter Chopra Report].
 This post does not mean to state that all bots are the same. They are not. Nor does this post construe the FTC’s argument to mean this. “Bot” in this article is used as shorthand for “social media bots” and only applies to bot use as described in this article.
 See generally Complaint, FTC v. Devumi, LLC, No. 9:19-cv-81419 (S.D. Fla. Oct. 18, 2019) [hereinafter Complaint].
 Complaint, supra note 6, ¶ 9. See also Bot, Merriam-Webster Dictionary, https://www.merriam-webster.com/dictionary/bot [https://perma.cc/MR9V-QQKT] (last visited Nov. 10, 2020) (defining bot as "a computer program or character (as in a game) designed to mimic the actions of a person[.]").
 Complaint, supra note 6, ¶¶ 11–13.
 See, e.g., FTC v. Teami, LLC, No. 8:20-cv-518 (M.D. Fla. Mar. 5, 2020).
 Complaint, supra note 6, ¶ 18.
 See generally Nicholas Confessore, Gabriel J.X. Dance, Richard Harris, & Mark Hansen, The Follower Factory, N.Y. Times (Jan. 27, 2018), https://www.nytimes.com/interactive/2018/01/27/technology/social-media-bots.html [https://perma.cc/4P5B-UCSA] (speaking expansively to Devumi’s trade practices as well as the social influencer economy’s use of social media bots).
 Leslie Fair, The Great American Fake Off? FTC Cases Challenge Bogus Influencer Metrics and Fake Reviews, Fed. Trade Comm'n: Bus. Blog (Oct. 21, 2019, 2:06 PM), https://www.ftc.gov/news-events/blogs/business-blog/2019/10/great-american-fake-ftc-cases-challenge-bogus-influencer [https://perma.cc/URK8-DQKX].
 Complaint, supra note 6, ¶ 10.
 See Ari Lazarus, Fake Followers: A Social Media Hoax, Fed. Trade Comm'n: Consumer Info. (Oct. 21, 2019), https://www.consumer.ftc.gov/blog/2019/10/fake-followers-social-media-hoax [https://perma.cc/Y4D2-83MQ] ("So how can you be sure that the person or company you're interested in has real followers? Truth is, you can't be sure.").
 Complaint, supra note 6, ¶ 18.
 A material misrepresentation has been defined as one that “the reasonable person would regard as important in deciding how to act, or one which the maker knows that the recipient . . . is likely to consider important.” See Fed. Trade Comm’n, Enforcement Policy Statement on Deceptively Formatted Advertisements 14 n.65.
 See Chopra Report, supra note 4, at 3–4.
 Complaint, supra note 6, ¶¶ 10–13.
 Matt Weeks, Connecting the Bots: Researchers Uncover Invisible Influence on Social Media, Science Daily (May 30, 2017), https://www.sciencedaily.com/releases/2017/05/170530095910.htm [https://perma.cc/HMU4-JVNM].
 See Philip N. Howard, Bharath Ganesh, & Dimitra Liotsiou, The IRA, Social Media and Political Polarization in the United States, 2012–2018, U. Oxford 3, 12, 39, https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2018/12/IRA-Report-2018.pdf [https://perma.cc/E7DP-NJE2] (last visited Nov. 11, 2020) (indicating bot usage as a mainstay in global disinformation campaigns).
 E.g., David Klepper, Facebook Removes Fake Accounts Linked to Conservative Group, AP News (Oct. 8, 2020), https://apnews.com/article/virus-outbreak-donald-trump-business-arizona-d568f23c03faa70a753928433b283f3a [https://perma.cc/HQ7K-B5V7] (“Facebook has removed 276 accounts that used fake profiles to pose as right-leaning Americans and comment on news articles[.]”).
 Complaint, supra note 6, ¶¶ 11–13.
 See Douglas Rushkoff, David Pescovitz, & Jake Dunagan, The Biology of Disinformation, Inst. for the Future 5 (2018), https://www.iftf.org/fileadmin/user_upload/images/ourwork/digintel/IFTF_biology_of_disinformation_062718.pdf [https://perma.cc/W7FN-WNTE].
 See Stephanie McNeal, The Era of Influencers Being Apolitical Online Is Over, BuzzFeed: News (Sept. 30, 2020, 9:25 AM), https://www.buzzfeednews.com/article/stephaniemcneal/the-era-of-influencers-being-apolitical-online-is-over [https://perma.cc/WH6C-3X89].
 See Siddharth Venkataramakrishnan, Inside the Rise of the Political Micro-Influencer, Fin. Times: Soc. Media (Oct. 23, 2020), https://www.ft.com/content/e414d42a-c49b-4f43-86a1-06395a849fac [https://perma.cc/C84M-8A2D] (examining the growing market of political micro-influencers that both advertise economic items and political agendas).
 See generally Anastasia Goodwin, Katie Joseff, & Samuel C. Woolley, Social Media Influencers and the 2020 U.S. Election: Paying Regular People for Digital Campaign Communication, U. Tx. at Austin: Ctr. for Media Engagement 1–3 (2020), https://mediaengagement.org/wp-content/uploads/2020/10/Social-Media-Influencers-and-the-2020-U.S.-Election-1.pdf [https://perma.cc/FW5J-GWTK] (expanding on how micro-influencers have become "highly organized surrogates of political campaigns . . . .").
 See generally Jonathan Marciano, What Is Bot Traffic?, Cheq (June 09, 2020), https://www.cheq.ai/blog/what-is-bot-traffic [https://perma.cc/T57E-EL59] (quantifying the costs of malicious social media bot use).
 See, e.g., Yuyu Chen, How Wannabe Instagram Influencers Use Bots to Appear Popular, Digiday (Aug. 1, 2017), https://digiday.com/marketing/wannabe-instagram-influencers-use-bots-appear-popular/ [https://perma.cc/4KUT-TYNY] (“But this cohort [micro-influencers] . . . are mostly likely to turn to bots to inflate their authenticity.”).
 See Goodwin et al., supra note 26, at 1–3 (describing micro-influencers as "surrogates of political campaigns . . . .").
 See Marciano, supra note 27.
 Jasmine Enberg, Politicians Are Turning to Influencers, Just Like Brands Are, eMarketer (Feb. 26, 2020), https://www.emarketer.com/content/politicians-are-turning-to-influencers-just-like-the-brands [https://perma.cc/Q3WN-MY3T] (explaining the context of Mayor Bloomberg’s “Meme 2020” political strategy).
 See John Harrington, Game of Influence: How New-Wave Political Influencers Became ‘Lightning Rods’ of Debate, PRWeek (Jan. 20, 2020), https://www.prweek.com/article/1671227/game-influence-new-wave-political-influencers-became-lightning-rods-debate [https://perma.cc/2VHX-48SU] (“Twitter and . . . Instagram have opened the door to a new generation of political influencers[.]”).