
The Algorithmic Explainability “Bait and Switch”

By Boris Babic and I. Glenn Cohen.

Explainability in artificial intelligence and machine learning (AI/ML) is emerging as a leading area of academic research and a topic of significant regulatory concern. Increasingly, academics, governments, and civil society groups are moving toward a consensus that AI/ML must be explainable. In this Article, we challenge this prevailing trend. We argue that for explainability to be a moral requirement—and even more so for it to be a legal requirement—it should satisfy certain desiderata which it often currently does not, and possibly cannot. In particular, this Article argues that the currently prevailing approaches to explainable AI/ML are often (1) incapable of guiding our action and planning, (2) incapable of making transparent the actual reasons underlying an automated decision, and (3) incapable of underwriting normative (moral and legal) judgments, such as blame and resentment. This stems from the post hoc nature of the explanations offered by prevailing explainability algorithms. As the Article explains, these algorithms are “insincere-by-design,” so to speak. This often renders them of very little value to legislators or policymakers who are interested in (the laudable goal of) transparency in automated decision-making. There is, however, an alternative—interpretable AI/ML—which the Article will distinguish from explainable AI/ML. Interpretable AI/ML can be useful where it is appropriate, but it presents real trade-offs in algorithmic performance, and in some instances (in medicine and elsewhere) adopting an interpretable AI/ML model may mean adopting a less accurate one. This Article argues that it is better to face those trade-offs head on, rather than embrace the fool’s gold of explainable AI/ML.
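For readers less familiar with the technical distinction the abstract draws on, the following Python sketch (not drawn from the Article) illustrates what a post hoc explanation typically is: a separate surrogate model fit to a black box's outputs after the fact, in the style of LIME-like local approximations. The names black_box, surrogate, and x0, the synthetic data, and the perturbation scheme are all hypothetical; the point is only that the surrogate's coefficients describe an approximation of the model's behavior near one decision, not the reasons the model actually used.

```python
# Minimal, hypothetical sketch of a post hoc, LIME-style local surrogate
# "explanation." A black-box model is trained, then a separate linear model
# is fit to the black box's outputs near one instance. The coefficients
# describe the surrogate's local fit, not the black box's internal reasoning.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# 1. Train an opaque "black box" model on synthetic data.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# 2. Pick one decision to "explain" and perturb that instance locally.
x0 = X[0]
perturbations = x0 + rng.normal(scale=0.5, size=(1000, X.shape[1]))
bb_probs = black_box.predict_proba(perturbations)[:, 1]

# 3. Weight perturbations by proximity to x0 and fit a post hoc linear surrogate.
distances = np.linalg.norm(perturbations - x0, axis=1)
weights = np.exp(-(distances ** 2))
surrogate = Ridge(alpha=1.0).fit(perturbations, bb_probs, sample_weight=weights)

# The coefficients are an after-the-fact approximation of the black box's
# behavior around x0; they are not the reasons the black box actually used.
print("surrogate 'explanation' (local coefficients):", surrogate.coef_)
```

An interpretable model, by contrast, would be one (such as a sparse linear or rule-based model) whose own parameters are the decision procedure, which is the trade-off between interpretability and accuracy the Article asks readers to face directly.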