The Discursive Bases of Data Violence, or: How the pursuit of fair and inclusive tech can undermine the pursuit of justice
Values of fairness, antidiscrimination, and inclusion occupy a central place in the emerging ethics of data and algorithms. Their importance is underscored by the reality that data-intensive, algorithmically mediated decision systems—as represented by artificial intelligence and machine learning (AI/ML)—can exacerbate existing injustices (or generate new ones), worsening already problematic distributions of rights, opportunities, and wealth. At the same time, critics of certain “fair” or “inclusive” approaches to the design and implementation of these systems have illustrated their limits, pointing to problems with reductive or overly technical definitions of fairness and a general inability to appropriately address representational or dignitary harms.
In this talk, Anna Lauren Hoffmann extends these critiques by focusing on problems of cultural and discursive violence. She begins by discussing trends in AI/ML fairness and inclusion discussions that mirror problematic tendencies in legal antidiscrimination discourses. From there, she introduces “data violence” as a response to these trends. In particular, she lays out the discursive bases of data-based violence—that is, the discursive forms by which certain competing voices and “fair” or “inclusive” solutions become legible while others are marginalized or ignored. In doing so, she undermines any neat or easy distinction between the presence of violence and its absence—rather, our sense of fair or inclusive conditions contains and feeds the possibility of violent and unjust ones.