Neil M. Richards & William Smart, How Should the Law Think About Robots? (2013), available at SSRN.

The article may seem dated for a review here; there are newer pieces on the subject, such as Ryan Calo’s “Robotics and the Lessons of Cyberlaw” (2014). But the Richards & Smart article sticks in my mind, perhaps because, while both are premature (I will come to that in a moment), it makes a, or rather the, fundamental point about law and politics in the face of changing technologies in a very simple and clear way.

“Premature” used to be the comment we would receive from the European Commission when, in the heyday of European cyber regulation, we members of the Legal Advisory Board, an independent expert group long since abolished, would suggest a new initiative outside the Commission’s own agenda. Some readers may have encountered this word when presenting new ideas as legal counsel. I have never taken it as a derogatory term. “Premature” signifies a quality, if not an obligation, of proactive legal comment and advice. In that sense, dealing with robotics and law is premature, and so are, by the way, the “We Robot” conferences (established in 2012) which give context to this article, a conference series in which, disclosure being due, our Editor-in-Chief has been prominently involved.

The fundamental point is slow in coming: Richards & Smart start with a definition of a robot: a “non-biological autonomous agent,” i.e. “a constructed system that displays both physical and mental agency but is not alive in the biological sense.” We are all familiar, as the authors point out, with all sorts of robots. We know them from science fiction and the movies. There is already the small round disk that cleans our sitting rooms. There has been the automated assembly of cars by industrial robots. And lately those cars drive themselves, as robots guided by Google. Robots, the authors argue, will become increasingly multipurpose, gain more autonomy, and turn from lab exhibits into everyday devices communicating with each of us at any time. Law? There is a reference to Nevada’s 2011 regulation of those car robots. But otherwise the article mentions legal implications only in a very general way; there is no discussion; there is not even a listing of possible legal problems.

And yet it is exactly this lack that makes the article so special and brings us to that central point. The authors make a notable, indeed important, pause. Before going into legal details, they insist, we should be aware of how law and society deal with technology in general, and they take Cyberlaw as the example of what may happen to robotics and law: essential for technology law is the way in which law perceives technology. It does so by analogy to a metaphor already in use, in order to relate the “new” to something law already knows. The example Richards & Smart present from Cyberlaw is the evolving interpretation of the Fourth Amendment with regard to wiretapping: the metaphor chosen determines the political and legal path the issues will take.

While the importance of metaphor is not new to discussions about law, and about Cyberlaw in particular (see, for example, Julie Cohen’s analysis of cyberspace as space, 107 Colum. L. Rev. 210 (2007)), the authors consciously register the moment of the critical turn before it is taken. Heed the warning, they say: “Beware of Metaphors.” They exemplify their premonitions about the way in which politics and law may perceive robots with what they call the “Android Fallacy”: the more robots look and behave like human beings, the more inclined we may be to ascribe free will to them, and the more responsibility will be taken off the shoulders of their designers.

In essence, what this article asks us (and this may be the real reason it sticks in my mind) is: to what fallacies of Cyberlaw have we contributed with our writings, making way for what kinds of legal policies, legislation, and jurisprudence, and with what consequences, even when we were acting with proactive intent? Shouldn’t we have allowed more time to discuss the implications of our metaphors before surfing the technological tide?

(Michael Froomkin took no part in the editing of this essay.)

Cite as: Herbert Burkert, About Fallacies, JOTWELL (October 3, 2014) (reviewing Neil M. Richards & William Smart, How Should the Law Think About Robots? (2013), available at SSRN), https://cyber.jotwell.com/about-fallacies/.