Research Paper – 2019
As unique and intriguing as it may sound, some individuals have reported having sexual intercourse with robots. In situations involving sexual intercourse, the question of whether consent has been freely given could arise. Before being able to address such an issue, common ground must be established regarding the robots involved, as well as their legal status.
As a large variety of machines could be considered robots, it is necessary to start with a broad definition: “A robot is a constructed system that displays both physical and mental agency but is not alive in the biological sense.” Society commonly envisions a humanoid robot resembling a human being and equipped with Artificial Intelligence.
A robot possessing Artificial Intelligence is said to have five attributes: the ability to communicate with others, internal knowledge, knowledge of the external world, some degree of intentionality, and some degree of creativity. This description highlights the idea of an autonomous machine. The European Parliament's Committee on Legal Affairs proposed what could become a standard definition of smart autonomous robots in the EU. Similar to the definition of a robot possessing artificial intelligence, a smart autonomous robot “acquires autonomy through sensors and/or by exchanging data with its environment (inter-connectivity), processes and analyses data, is self-learning (optional criterion), and has a physical form.”
Robots, whatever their shape, are machines, and their number in our societies is increasing. They have arrived as helpers, companions for vulnerable individuals, or autonomous tools. Unfortunately, robots can also cause significant damage, sometimes resulting in death. To deter such harm and compensate victims, it must be possible to hold those responsible for the damage liable. Incurring liability generally presupposes some form of personhood or legal personality, as is the case for companies. One purpose of personhood is to regulate human conduct in an organized society by enabling liability and allocating the costs of victims' injuries. Under the current legal framework, liability cannot always be established. This gap has given rise to the idea of granting robots personhood, specifically electronic personhood, but the concept is highly controversial.
After reviewing the common grounds for robot liability, this essay will present existing alternatives to the current liability regime and will discuss the possibility of granting robots electronic personhood.
Current Liability Regime for Accidents Caused by Robots
Robots may be programmed to follow basic commands, but also to obey primary principles, such as treating all humans in a humane way. In some ways, animals and robots serve similar roles: both are programmed or trained by humans to fulfill a purpose, as guide dogs and rescue dogs are trained to protect human life. Animals exhibit social behaviors and relationships with their peers. When an animal causes damage, its owner is typically held liable. The same principle can be applied to robots.
Product liability is often invoked in cases involving commercial robots. The main liability regime today is based on negligence: failure to warn, failure to take proper care, or design flaws. Negligence allows multiple parties to share liability to varying degrees and considers the entire causal chain. Several parties are usually involved in the manufacturing of a robot. If, for example, a robot throws a ball and injures someone, who is responsible? The fault could lie with the programmer of the object-recognition software, the camera manufacturer, a defective battery, or incorrect instructions given by the owner.
The problem lies in determining what constitutes proper care, foreseeability, and reasonableness. Robots can now learn new ways of interacting with people in complex environments. Robots themselves cannot be held liable for acts or omissions that cause damage to third parties. As for non-contractual liability, Council Directive 85/374/EEC of 25 July 1985 covers only damage caused by manufacturing defects—provided the injured party can prove actual damage, the defect, and a causal link (strict liability or liability without fault). This legal framework does not cover a broad range of modern scenarios and is inadequate for the next generation of robots.
Alternatives to the Current Liability Regime for Robots
Punishment is generally understood as corrective: the wrongdoer repays their debt to society, either financially or through loss of liberty. We cannot apply criminal punishment directly to robots; however, there is criminal law for non-humans—namely, corporations. Corporations have some rights: they can own property, sign contracts, be held liable for negligence, and, in certain cases, be punished for criminal acts or environmental harm. Yet, they cannot be imprisoned.
Corporations are human-created entities recognized by law as artificial persons with attributes of personhood conferred through incorporation by a state agency. Their legal status was designed primarily to facilitate economic transactions and their interactions with the world. In case of an accident, it is not necessary to identify the specific author of the wrongdoing—victims are compensated by the corporation, increasing the likelihood of recovery. Although this regime could be adapted for robots, it does not address the question of consent in the context of sex robots.
Electronic Personhood as a Rational Legal Response
Personhood has been granted to non-human entities—for example, the Whanganui River, certain chimpanzees, or idols in some religions.
People tend to treat humanoid robots more like humans than machines. We may assign greater blame to the creator of a robot that appears human than to one that does not. Humans interact differently with robots that resemble them. Legal personhood includes rights and duties, but it is not equivalent to natural personhood. A juridical person need not be a human being. Legal personhood exists primarily to facilitate economic relationships and judicial proceedings. The physical appearance of a robot should not influence this debate.
Nonetheless, the ideas of consciousness and consent lie at the heart of the controversy. Since it is impossible to confirm the consciousness or awareness of a robot, granting it legal personhood equivalent to a human being cannot be morally or ethically justified.
As previously mentioned, the European Parliament introduced the idea of electronic personhood for robots, which was strongly rejected in an open letter by several AI experts. In April 2018, the EU Commission released a document on AI without mentioning electronic personhood. Despite resistance to granting this status, the growing presence of sex robots and potential advocacy for marriage between humans and robots will demand new forms of legal regulation.
Resources
- Lily Frank, Sven Nyholm, “Robot sex and consent: Is consent to have sex between a robot and a human conceivable, possible and desirable?” [2017], Artificial Intelligence and Law, Vol. 25, Issue 3, pp. 305-323 https://link.springer.com/article/10.1007/s10506-017-9212-y#aboutcontent accessed 16 December 2018
- Neil M. Richards & William D. Smart, “How Should the Law Think About Robots?” [2012], We Robot Conference http://robots.law.miami.edu/wpcontent/uploads/2012/03/RichardsSmart_HowShouldTheLawThink.pdf accessed 16 December 2018
- Sheikh Solaiman, “Legal personality of robots, corporations, idols and chimpanzees: a quest for legitimacy” [2017], Artificial Intelligence and Law, Vol. 25, Issue 2, p. 171
- Committee on Legal Affairs, Draft Report with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))
- Susanne Beck, “Intelligent agents and criminal law – Negligence, diffusion of liability and electronic personhood” [2016], Robotics and Autonomous Systems, Vol. 86, pp. 138-143 https://doi.org/10.1016/j.robot.2016.08.028 accessed 9 December 2018
- Alex Hern, “Give robots ‘personhood’ status, EU committee argues”, [January 12, 2017], The Guardian, https://www.theguardian.com/technology/2017/jan/12/give-robots-personhood-status-eu-committee-argues accessed 16 December 2018
- Hutan Ashrafian, “Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights” [2015], Science and Engineering Ethics, Vol. 21, Issue 2, pp. 317-326 https://doi.org/10.1007/s11948-014-9541-0 accessed 17 December 2018
- Peter M. Asaro, “Robots and responsibility from a legal perspective” [2007], presented during the Workshop on Roboethics, IEEE International Conference Robotics and Automation (ICRA) http://www.roboethics.org/icra2007/contributions/ASARO%20Legal%20Perspective.pdf
- Bartosz Brozek, Marek Jakubiec, “On the legal responsibility of autonomous machines” [2017], Artificial Intelligence and Law, Vol. 25, Issue 3, pp. 293-304 https://doi-org.vu-nl.idm.oclc.org/10.1007/s10506-017-9207-8 accessed 17 December 2018
- Peter H. Kahn, Jr. et al., “Do people hold a humanoid robot morally accountable for the harm it causes?” [2012], Proceedings of the 7th ACM/IEEE International Conference on Human-Robot Interaction, pp. 33-40 https://ieeexplore.ieee.org/document/6249577 accessed 16 December 2018
- Robert van den Hoven van Genderen, “Do we need new legal personhood in the Age of Robots and AI?” [2018], Robotics, AI and the Future of Law, pp. 15-55 https://doi-org.vu-nl.idm.oclc.org/10.1007/978-981-13-2874-9_2 accessed 17 December 2018
