
Schrödinger’s Robot: Privacy in Uncertain States

  • Ian Kerr
Published/Copyright: March 16, 2019

Abstract

Can robots or AIs operating independently of human intervention or oversight diminish our privacy? There are two equal and opposite reactions to this question. On the robot side, machines are starting to outperform human experts in a growing array of narrow tasks, including driving, surgery, and medical diagnostics. This is fueling a growing optimism that robots and AIs will exceed humans more generally and spectacularly; some think to the point where we will have to consider their moral and legal status. On the privacy side, one sees the very opposite: robots and AIs are, in a legal sense, nothing.

The received view is that since robots and AIs are neither sentient nor capable of human-level cognition, they are of no consequence to privacy law. This article argues that robots and AIs operating independently of human intervention can and, in some cases, already do diminish our privacy. Epistemic privacy offers a useful analytic framework for understanding the kind of cognizance that gives rise to diminished privacy. Because machines can actuate on the basis of the beliefs they form in ways that affect people’s life chances and opportunities, I argue that they demonstrate the kind of cognizance that definitively implicates privacy. An increasing number of machines possess these epistemic qualities, and I conclude that legal theory and doctrine will therefore have to expand their understanding of privacy relationships to include robots and AIs that meet these epistemic conditions.


∗ Canada Research Chair in Ethics, Law and Technology, University of Ottawa, Faculty of Law, iankerr@uottawa.ca. I would like to thank the Social Sciences and Humanities Research Council and the Canada Research Chairs program for their generous support. Special thanks to Carys Craig for teaching me about and inspiring me to undertake a relational account of privacy. Thank you to David Matheson for being my epistemological guardian angel; and to Joelle Pineau, Laurel Riek, Bill Smart, and Jodi Forlizzi for lending precision to some of my technical assertions. I am also grateful to the participants of The Problem of Theorizing Privacy conference, organized by Michael Birnhack, Julie Cohen and Mireille Hildebrandt. This event generated very thoughtful commentary from Eldar Haber, and useful feedback from Tal Zarsky, Mireille Hildebrandt, Helen Nissenbaum, Eran Toch, Ruth Gavison, Anita Allen, Michael Bar-Sinai, Alon Jasper, Lisa Austin, and Mauricio Figueroa Torres. This article also benefitted from a second presentation at the University of Surrey’s Workshop on the Regulation of AI organized by Ryan Abbott and Alex Sarch, with excellent commentary from Steven Bero. Saving the best for last, my extreme gratitude goes out to Ida Mahmoudi for the outstanding research assistance that she so regularly and reliably provides and to Katie Szilagyi — engineer, lawyer, doctoral candidate par excellence and proud owner of these fine footnotes — for grace under pressure, her tireless enthusiasm, her ability to find anything under the sun, her insatiable intellectual curiosity, and her deep-seated disposition for arête … which she has not only cultivated for herself but, through collaboration, inspires in others. Cite as: Ian Kerr, Schrödinger’s Robot: Privacy in Uncertain States, 20 THEORETICAL INQUIRIES L. 123 (2019).



© 2019 by Theoretical Inquiries in Law
