Discrimination in the Age of Algorithms

(Submitted on 11 Feb 2019)

Abstract: The law forbids discrimination. But the ambiguity of human decision-making
often makes it extraordinarily hard for the legal system to know whether anyone
has actually discriminated. To understand how algorithms affect discrimination,
we must therefore also understand how they affect the problem of detecting
discrimination. By one measure, algorithms are fundamentally opaque, not just
cognitively but even mathematically. Yet for the task of proving
discrimination, processes involving algorithms can provide crucial forms of
transparency that are otherwise unavailable. These benefits do not happen
automatically. But with appropriate requirements in place, the use of
algorithms makes it possible to examine and interrogate the entire decision
process, thereby making it far easier to know whether
discrimination has occurred. By forcing a new level of specificity, the use of
algorithms also highlights, and makes transparent, central tradeoffs among
competing values. Algorithms are not only a threat to be regulated; with the
right safeguards in place, they have the potential to be a positive force for
equity.

Subjects: Computers and Society (cs.CY); Artificial Intelligence (cs.AI); Machine Learning (cs.LG)
Cite as: arXiv:1902.03731 [cs.CY] (or arXiv:1902.03731v1 [cs.CY] for this version)

Submission history

From: Jon Kleinberg
[v1] Mon, 11 Feb 2019 04:58:11 UTC (858 KB)


