Disparate (Algorithmic) Advantage

Abstract

When a hiring manager’s decisions produce a disparity, the causes are opaque, the data are noisy, and isolating the source of the disparity from “holistic judgment” is nearly impossible. When an algorithm produces a disparity, practices are specified, outputs are reproducible, and biases can be measured under controlled conditions. Yet scholars argue that algorithms make civil rights enforcement harder.

This consensus has it backwards. Disparate impact law is better suited to algorithms than to the humans it was designed to police. Algorithms are consistent: they apply the same criteria to every input, producing the cleanest statistical evidence the law has ever seen. They are replicable, making the business necessity defense specific, the less-exclusionary-alternative analysis tractable, and the algorithm identifiable as the particular practice that has eluded the doctrine since Wal-Mart v. Dukes. Algorithms are flat: a single process rather than a web of cognition, culture, and bias, so the source of a disparity is easier to isolate.

Society should not fear algorithms. It should welcome them—not because they are fair, but because they make disparities easier to expose. If this advantage has not produced successful litigation, it is because the law is starved of information. Plaintiffs cannot challenge what they cannot see. The civil rights community has misdiagnosed information asymmetry as legal inadequacy, proposing new law when we really need new infrastructure—algorithmic testing, outcome reporting, and audit rights—to let the law see what it was built to expose.

Publication
79 Stanford Law Review Online __
Yunsieg P. Kim
Associate Professor of Law