DOI: 10.5553/NJLP/221307132022051002002

Netherlands Journal of Legal Philosophy

Book review

Artificial intelligence, ethics, law: a view on the Italian and American debate (and on their differences)

Keywords: artificial intelligence, AI law, ethics, punishment, new technologies
Recommended citation for this article:
Alice Giannini, "Artificial intelligence, ethics, law: a view on the Italian and American debate (and on their differences)", Netherlands Journal of Legal Philosophy 2 (2022): 248-263

    In the past ten years the scientific discourse on artificial intelligence (AI) has thrived. What are the challenges that AI poses to the law? If something goes wrong, who should be blamed? In the pursuit of answers to these questions, legal scholars – such as the authors of the reviewed books – jumped on the AI bandwagon, joining philosophers, ethicists, and computer scientists.
    The essay highlights recurring traits of this discussion on AI and law. Its purpose is to present two paradigmatic examples of the common law versus civil law approach to solving conflicts, such as the one represented by the impact of AI technologies on our society.


      Ugo Ruffolo, ed., Intelligenza Artificiale. Il diritto, i diritti, l’etica (Milano: Giuffré, 2020)
      Ryan Abbott, The Reasonable Robot. Artificial Intelligence and the Law (Cambridge: Cambridge University Press, 2020)

    • 1. Introduction. Legal scholars invading the pitch

      In the past ten years the scientific discourse on artificial intelligence (AI)1x There is no generally accepted definition of AI. This essay will not account for all the different definitions of artificial intelligence theorised in the past fifty years. We will limit ourselves to acknowledging that there is debate on the (legal) definition of AI and that the term AI can be used to refer both to a set of technologies and to a specific scientific discipline, which branches from computer science. For a systematic analysis of the issue of defining ‘artificial intelligence’ see inter alia: Pei Wang, ‘On Defining Artificial Intelligence’, Journal of Artificial General Intelligence 10 no. 2 (2019): 1-37. For a thorough overview of existing AI definitions, see the research conducted on 55 documents by Sofia Samoili et al., AI Watch. Defining Artificial Intelligence. Towards an operational definition and taxonomy of artificial intelligence, EUR 30117 EN (Luxembourg: Publications Office of the European Union, 2020). has thrived.2x For a more in-depth analysis of the subject, see ex multis: Luciano Floridi, ‘AI and Its New Winter: from Myths to Realities’, Philosophy & Technology 33 (2020): 1-3; Michael Haenlein and Andreas Kaplan, ‘A Brief History of Artificial Intelligence: On the Past, Present and Future of Artificial Intelligence’, California Management Review 61 no. 4 (2019): 5-14; Youjung Shin, ‘The Spring of Artificial Intelligence in Its Global Winter’, IEEE Annals of the History of Computing 41 no. 4 (2019); Stuart J. Russell and Peter Norvig, Artificial Intelligence. A Modern Approach (London: Pearson, 2003), 16-27. Legal scholars, not immune to this trend, jumped on the AI bandwagon, joining philosophers, ethicists, and computer scientists.3x Amongst the most relevant books published on the subject in English, see Woodrow Barfield, ed., The Cambridge Handbook of the Law of Algorithms (Cambridge: Cambridge University Press, 2021); Matt Hervey and Matthew Lavy, The Law of Artificial Intelligence (Mytholmroyd: Sweet & Maxwell, 2021); Thomas Wischmeyer and Timo Rademacher, eds., Regulating Artificial Intelligence (Berlin: Springer, 2020); Martin Ebers and Susana Navas, eds., Algorithms and Law (Cambridge: Cambridge University Press, 2020); Christoph Busch and Alberto De Franceschi, eds., Algorithmic Regulation and Personalized Law. A Handbook (Baden-Baden: CH Beck-Hart-Nomos, 2020); Ugo Pagallo and Woodrow Barfield, eds., Research Handbook on the Law of Artificial Intelligence (Cheltenham: Edward Elgar Publishing, 2018); Ugo Pagallo, The Laws of Robots. Crimes, Contracts, and Torts (Berlin: Springer, 2013). In Italian legal doctrine, see: Giancarlo Taddei Elmi and Alfonso Contaldo, Intelligenza artificiale-Algoritmi giuridici: Ius condendum o fantadiritto? (Pisa: Pacini, 2020); Paolo Moro and Claudio Sarra, eds., Tecnodiritto. Temi e informatica e robotica giuridica (Milano: Franco Angeli, 2017). This is not surprising: not only can AI systems beat us at almost any board game,4x AlphaGo, an AI system developed by Google’s DeepMind, beat the Go world champion Lee Sedol in 2016. but they can also diagnose diseases and drive cars. They are capable of autonomous and unpredictable action. Legal scholars, then, invaded the computer science pitch, encroaching on new territories of research, driven by one fundamental question: what are the challenges that AI poses to the law? More specifically, if something goes wrong, who should be blamed?
As with other instances of scientific progress, legal systems will have to strike a balance between the need for effective tools for compensation and punishment, on the one hand, and the risk of a chilling effect on innovation on the other. In the pursuit of answers to such questions, the players in the legal arena, including the authors of the books reviewed in this essay, started testing whether AI could fit into law-as-we-know-it or if the rise of AI demands the creation of new rules and legal concepts.5x Throughout this essay, I will use the term ‘AI law’ to refer to this new field of research, that is, to hard law, soft law, and legal scholarship dealing with AI. Consequently, by AI law we mean both (adopted and proposals of) new regulations of AI and inquiries into how to adapt existing regulation to the specificities of AI.

      Notwithstanding its vastness, the legal discussion on regulating AI presents recurrent traits. AI technology, on the one hand, forces legal scholars to step outside their comfort zone and to become familiar with technical concepts pertaining to the realm of computer science, such as machine learning, artificial neural networks, and deep learning.6x One can think of these concepts as Russian nesting dolls: machine learning is a subfield of artificial intelligence; deep learning is a subfield of machine learning and artificial neural networks are the building blocks of deep learning. See Eda Kavlakoglu, ‘AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?’, 27 May 2020, https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks.
      An algorithm based on machine learning (ML) techniques teaches itself rules by learning from training data through statistical analysis, detecting patterns in large amounts of information. Deep learning (DL) is a subset of ML where the system consists of layers of artificial neural networks (ANNs). The network analyses data and identifies relevant features by itself. ANNs are made up of multiple layers of artificial neurons encoded in software, each of which can be connected to neurons in the adjacent layers. One neuron receives an ‘input’ (for example, information on a pixel in a picture) and another neuron produces an ‘output’ (for example, the classification of the picture). This technique is inspired by the functioning of the human brain. See Harry Surden, ‘Artificial Intelligence and Law: An Overview’, Georgia State University Law Review 35 no. 4 (2019). For a visual and approachable explanation of the functioning of deep learning, see Meor Amer, A Visual Introduction to Deep Learning (kDimensions, 2021).
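      To make this description concrete, the following minimal sketch – our own illustration, not drawn from either book – trains a tiny two-layer neural network in Python (using only the numpy library, on invented toy data). Each ‘neuron’ is just a weighted sum passed through a non-linear function, and ‘learning’ consists of repeatedly adjusting the weights to reduce the prediction error:

          import numpy as np

          rng = np.random.default_rng(0)

          def sigmoid(x):
              # Squashes any number into (0, 1): the neuron's 'activation'.
              return 1.0 / (1.0 + np.exp(-x))

          # Toy training data: four 'pictures' of three pixels each, labelled 1
          # when the first pixel is lit. Real systems use millions of examples.
          X = np.array([[0., 0., 1.], [0., 1., 1.], [1., 0., 1.], [1., 1., 1.]])
          y = np.array([[0.], [0.], [1.], [1.]])

          # Weights connecting the input layer to a hidden layer of four neurons,
          # and the hidden layer to one output neuron. 'Learning' means adjusting
          # these numbers; no rule is ever written by hand.
          W1 = rng.normal(size=(3, 4))
          W2 = rng.normal(size=(4, 1))

          for step in range(5000):
              # Forward pass: each layer transforms the previous layer's output.
              hidden = sigmoid(X @ W1)
              output = sigmoid(hidden @ W2)
              # Backward pass: nudge the weights to shrink the prediction error.
              error = y - output
              d_output = error * output * (1 - output)
              d_hidden = (d_output @ W2.T) * hidden * (1 - hidden)
              W2 += hidden.T @ d_output
              W1 += X.T @ d_hidden

          print(output.round(2))  # after training: close to the labels in y

      The point for the lawyer is that the resulting ‘rule’ lives in the numerical weights W1 and W2, not in any human-readable instruction – which is precisely what makes questions of foreseeability and explainability so pressing.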
      On the other hand, AI technology prompts multi-disciplinary discussions amongst scholars working in different fields of law. Most of the literature on the relationship between AI and the law published over the past five years has been the result of joint effort, i.e., of analyses that focus concurrently on the interaction between AI and public law, private law, criminal law, and more abstract legal philosophy and theory. This cross-hybridisation, not only between different areas of law but also between legal and non-legal disciplines, becomes especially visible in two recent publications: Ugo Ruffolo’s edited volume Intelligenza Artificiale. Il diritto, i diritti, l’etica (2020)7x Ugo Ruffolo, ed., Intelligenza Artificiale. Il diritto, i diritti, l’etica (Milano: Giuffré, 2020). and Ryan Abbott’s The Reasonable Robot. Artificial Intelligence and the Law (2020).8x Ryan Abbott, The Reasonable Robot. Artificial Intelligence and the Law (Cambridge: Cambridge University Press, 2020).

      It is true that many other academic writings have surfaced in the time elapsed between the books’ publication and the drafting of this review. Yet, there has not been a scientific breakthrough in the field of AI which could call for decisive changes in the legal doctrine contained in the books. Simply put, we are still dealing with AI systems playing chess, rather than destroying the world.9x The metaphor recalls two leitmotivs of the academic and media discourse on the advancement of AI, which is characterised by the polarisation between techno-optimists and techno-pessimists. Admittedly, as argued by Danaher, ‘much of the academic debate about the impacts of technology on society has a pessimistic angle to it, highlighting the ethical harms and unanticipated effects of technology on the environment, social norms and personal well-being … Indeed, many academics see techno-optimism as irrational and superstitious – a faith-based initiative with little grounding in reality’. According to the author, techno-pessimism ‘may have deeper roots in intellectual temperament. Some have pointed out that pessimistic views are de rigueur among intellectuals, particularly in the post-Enlightenment era (Harris, 2002; Prescott, 2012); optimistic views are, by contrast, “not regarded as intellectually respectable”’. See John Danaher, ‘Techno-optimism: an Analysis, an Evaluation and a Modest Defence’, Philosophy & Technology 35 (2022): 54. Indeed, even if AI systems are being used in sectors where the risk of harm to individuals is high, such as transportation or the military domain, the technology behind these systems still displays so-called narrow intelligence, i.e., the systems are capable of matching or outperforming humans only on specific tasks.10x See Harry Surden, ‘Artificial Intelligence and Law: An Overview’, 1309. ‘General Artificial Intelligence’ (GAI), which is supposed to ‘match higher-order human abilities, such as abstract reasoning, concept comprehension, flexible understanding, general problem-solving skills, and the broad spectrum of other functions that are associated with human intelligence’, does not yet exist, and as of today there is no agreement on whether it will ever be achieved. This is not to say that AI, as a science, is not advancing. Rather, it entails that it has not advanced enough to make the arguments put forth in the books outdated. The legal reasoning behind the contributions in the books is still relevant for state-of-the-art AI. The authors offer insights that could potentially lead the next decade’s debate on AI law. In particular, Ruffolo’s volume comprises reflections on quite specialised but important topics of legal research, while Abbott’s study outlines a complete and coherent theory which is then tested by the author in four areas of law. It is also in this light that the two books, and consequently this review, acquire value.

      Yet, these books certainly present a fundamental difference. In point of fact, they are written in different languages by authors with different legal-cultural backgrounds: one encompasses essays written by continental Italian legal scholars, whereas the other is written by an American legal scholar, Ryan Abbott. This book review essay will first analyse Ruffolo’s edited volume and then proceed with Abbott’s book. The order is not arbitrary: the first volume equips the reader with a set of notions which are then – probably without Abbott being aware of this Italian collection of essays – explored in more depth in the second book through the lens of Abbott’s theory, i.e., the principle of AI legal neutrality. The expectation is that by the end of this review prospective readers will be able to grasp the synergy between a book published in Italian and one written in English.

      How should the reader approach the books, then? Neither book is meant for a ‘law beginner’, as they both require a basic understanding of general legal constructs. At the same time, they are not directed only at those who have already studied AI law. Moreover, while the average (legal) reader would go straight to the chapter on the topic which represents his or her comfort zone, i.e., the field in which he/she is specialised, we advise the reader to experiment. The different chapters present reflections that could prove useful in other domains and legal systems. Admittedly, these books are emblematic of what could be deemed the credo of all the scholars interested in the newborn realm of AI law: No Law is an Island.11x The expression is borrowed from a passage of the famous 1624 Meditation XVII by John Donne, ‘Meditation XVII. Nunc Lento Sonitu Dicunt, Morieris’, in Devotions Upon Emergent Occasions, ed. Anthony Raspa (Oxford: Oxford University Press, 1987).

    • 2. AI law at 360 degrees

      When analysing an edited volume, one must consider distinct aspects. Reading the book ought to be like listening to a symphony played by a fine-tuned orchestra, with the editor as its conductor. Features such as coherence and harmony amongst the different chapters acquire particular importance. What is more, reviewing an edited book brings about certain limitations. Specifically, it might be unfeasible, and counter-productive, to analyse each and every contribution embodied in the publication. For these reasons, the reviewer is compelled to cherry-pick the chapters that she deems most archetypal and relevant.

      Having made these preliminary observations, it is possible to proceed with the analysis. Intelligenza artificiale. Il diritto, i diritti, l’etica appears as a 360-degree analysis of an emerging area of research, i.e., AI law.12x See above footnote 5 for a definition of AI law. It differs in its structure from a subsequent publication, also edited by Ugo Ruffolo,13x Ugo Ruffolo, ed., XXVI lezioni di diritto dell’intelligenza artificiale (Torino: Giappichelli, 2020). which is instead built as 26 ideal lessons on AI law. Intelligenza artificiale. Il diritto, i diritti, l’etica, by contrast, embodies a balanced ecosystem of ideas. It has a twofold purpose. Firstly, it can work as a study tool for scholars who have no background knowledge of the legal regulation of AI, as it brings forth a broad overview of the legal issues raised by AI. The extensive bibliographies provided by the authors at the end of each chapter, which include both Italian and non-Italian literature, represent a useful tool for researchers approaching the field. Secondly, it embodies thought-provoking reading for those who are already familiar with the issues and are looking for inspiration for future research itineraries.

      The title Il diritto, i diritti, l’etica reflects the multifaceted nature of the book. Notwithstanding the play of assonance between the words il diritto (law) and i diritti (rights), the juxtaposition of the terms implies that the book covers both topics relating to law and its sub-fields, e.g., the chapter on the European approach to AI governance;14x Andrea Amidei, ‘La governance dell’Intelligenza Artificiale: profili e prospettive di diritto dell’Unione Europea’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), Section VI, ch. 7, 571. and topics relating to rights, e.g., the chapter on AI, human enhancement and the rights of individuals15x Ugo Ruffolo and Andrea Amidei, ‘Intelligenza Artificiale, human enhancement e diritti della persona’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), Part II, Section II, ch. 4, 179. or the one on E-Personhood.16x Ugo Ruffolo, ‘La “personalità elettronica”’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), Part II, Section II, ch. 5, 213. On closer inspection, the book is divided into seven sections. The first section focuses on AI ethics and comprises four sub-chapters. Sections two to five focus on both doctrinal and non-doctrinal legal aspects; these sections comprise 23 sub-chapters which cover subjects pertaining to almost all legal domains. Finally, section seven tackles AI applied to real-world scenarios, such as 5G technology, healthcare, insurance, and advertising.

      Il diritto, i diritti, l’etica does not include an introduction. The introductory function is taken up by the forewords of Guido Alpa and Augusto Barbera, two of the most prominent Italian legal scholars in civil and constitutional law respectively. These forewords represent, then, a sui generis introduction to the book. Notably, Alpa underlines how the book is an instance of the broader phenomenon of juridification, i.e., the tendency of jurists to translate real-world phenomena into general and abstract formulas.17x Guido Alpa, ‘Preface’, in Intelligenza artificiale. Il diritto, i diritti, l’etica, ed. Ugo Ruffolo (Milano: Giuffré, 2020), XVII. Barbera defines the book as a ‘rich goldmine’ from which it is possible to ‘extract precious materials’ for constructing new legal categories.18x Augusto Barbera, ‘Preface’, in Intelligenza artificiale. Il diritto, i diritti, l’etica, ed. Ugo Ruffolo (Milano: Giuffré, 2020), XX.

      As mentioned above, Ruffolo’s selection of topics is highly interdisciplinary. It includes writings not only by (doctrinal legal) scholars but also by judges, decision-makers in corporations, legal philosophers, and authors who do not have a legal background – such as clinicians. Moreover, the volume is consistent in the quality that it delivers to its readers: the essays do not appear superficial or redundant. In fact, the length of the book (648 pages) fits its intended use, which is to deliver a handbook that adequately covers the many levels of intersection between AI and law.

      Let us now turn to the contents of the first part of the book, which deals with ethics. This section is the by-product of an old discussion on the relationship between law and morality, and on which comes first.19x See Stefano Rodotà, ‘Etica e Diritto (dialogo tra alcuni studenti e Stefano Rodotà) con una Presentazione di Gaetano Azzariti’, Costituzionalismo.it 1 (2019): 25. It is tempting, at times, to confuse rules of law and rules of morality, especially when it comes to criminal law. The topic has been addressed by a conspicuous literature over the past decades,20x Think for example of the famous Hart-Devlin debate on the criminalisation of immoral conduct. For a reconstruction and a revisitation of the debate, see James Allan, ‘Revisiting the Hart-Devlin Debate: At the Periphery and By the Numbers’, San Diego L. Rev. 54 (2017): 423. but it found renewed importance with the emergence of the discussion on how to regulate AI. Experiments such as MIT’s Moral Machine,21x The platform is available at https://www.moralmachine.net. See Edmond Awad et al., ‘The Moral Machine experiment’, Nature 563 (2018): 59-64. an online platform where users can explore moral dilemmas which could be faced by autonomous cars (for example, deciding between killing pedestrians who cross legally versus those who jaywalk), attracted the attention of criminal legal scholars and legal philosophers,22x See Francesca Lagioia and Giovanni Sartor, ‘AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective’, Philosophy & Technology 33 (2020): 433-465; Sabine Gleß, Emily Silverman and Thomas Weigend, ‘If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability’, New Criminal Law Review 19 no. 3 (2016); Sabine Gleß and Thomas Weigend, ‘Intelligente Agenten und das Strafrecht’, ZSTW 126 no. 3 (2014): 561-591; Peter M. Asaro, ‘A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics’, in Robot Ethics: The Ethical and Social Implications of Robotics, ed. Patrick Lin, Keith Abney and George A. Bekey (Cambridge, Massachusetts: MIT Press, 2011), 169-186; Samir Chopra and Laurence F. White, A Legal Theory for Autonomous Artificial Agents (Ann Arbor: Univ. of Michigan Press, 2011); Pagallo, The Laws of Robots, 76. who started discussing whether we could speak of AI as moral agents capable of manifesting mens rea (guilty mind).23x Mens rea is a Latin expression which literally translates to ‘guilty mind’. The term is used in criminal legal doctrine to refer to the subjective element of a crime, i.e., ‘the necessary link between a person’s conduct in violation of a criminal prohibition (actus reus) and the person’s mind’. See Thomas Weigend, ‘Subjective Elements of Criminal Liability’, in The Oxford Handbook of Criminal Law, ed. Markus D. Dubber and Tatjana Hörnle (Oxford: Oxford University Press, 2019), 491.

      How, then, do the worlds of ethics and law communicate in the field of AI? In fact, governments and international organisations have so far focused extensively on drafting principles of so-called ‘ethical’ or ‘trustworthy’ AI, rather than on hard-law regulation. Think for example of the Ethics Guidelines for Trustworthy AI developed by the European Commission’s AI High-Level Expert Group (AI HLEG),24x High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy Artificial Intelligence (AI), 8 April 2019, https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html. the OECD AI Principles,25x OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449. and the UNESCO Recommendation on the Ethics of Artificial Intelligence.26x UNESCO, Recommendation on the Ethics of Artificial Intelligence, SHS/BIO/REC-AIETHICS/2021, https://unesdoc.unesco.org/ark:/48223/pf0000380455. Authors such as Rességuier and Rodrigues have argued that this use of ethics is problematic, as it would amount to nothing more than a display of a ‘law conception of ethics’,27x G.E.M. Anscombe, ‘Modern moral philosophy’, Philosophy 33 no. 124 (1958): 1-19. i.e., ‘a view on the ethics endeavour that makes it a sort of replica of law’.28x Anaïs Rességuier and Rowena Rodrigues, ‘AI ethics should not remain toothless! A call to bring back the teeth of ethics’, Big Data & Society (2020): 2. Is ethics then ‘toothless’ when it comes to regulating AI, or is it just being used for the wrong purposes?29x ‘[…] the issue is not that ethics is asked to do something for which it is too weak, or too soft. It is rather that it is asked to do something that it is not designed to do. Blaming ethics for having no teeth to ensure compliance with whatever it calls for is like blaming the fork for not cutting meat properly: this is not what it is designed to do. The objective of ethics itself is not to impose particular behaviours and to ensure these are complied with. The problem arises when it is used to do so. This is particularly evident in AI ethics, where ethical principles, norms or requirements are called for to regulate AI and ensure that it does not harm individuals and the society at large (e.g. AI HLEG)’, Rességuier and Rodrigues, ‘AI ethics should not remain toothless! A call to bring back the teeth of ethics’, 2. The chapters in this section of Ruffolo’s book provide insightful reflections on this debate.

      For example, Lorenzo d’Avack’s chapter works as a primer for a reader interested in understanding the relationship between AI, law, and ethics.30x Lorenzo d’Avack, ‘La rivoluzione tecnologica e la nuova era digitale: problemi etici’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 3. D’Avack starts his reflection with a strong premise: science is power. The more AI acquires importance as a scientific discipline, the more laymen struggle to grasp the magnitude of the change that is happening. The idea of an indissoluble union between scientificità31x Scientificity. and eticità32x Ethicality. permeates the whole chapter. In order to provide the readers with a better understanding of the phenomenon, d’Avack focuses on the impact of AI on four areas: the labour market, robotics, big data, and algorithms. He then turns his attention to the European efforts in the field of AI ethics, which he considers more of a theoretical attempt than one which could lead to concrete effects on the protection of human rights. Finally, he concludes by affirming that the reflections of national and international ethics committees in the field of AI ethics can be used as guidelines to develop a future global governance of AI. This objective can only be reached, though, by ensuring that said ethics committees have a mixed composition, i.e., that they include both scientists and ethicists.

      Another example is the chapter by Ugo Pagallo, in which he first analyses the most recent initiatives adopted by national and international policymakers on AI ethics and then focuses on the challenges that AI poses to the law.33x Ugo Pagallo, ‘Etica e diritto dell’Intelligenza Artificiale nella governance del digitale: il Middle-Out Approach’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 29. Pagallo contends that regulating AI requires a ‘middle-out approach’, i.e., a form of regulation that stands between hard law and auto-regulation. In addition, Paolo Moro in his chapter focuses on the nature and limits of robotic personhood.34x Paolo Moro, ‘Macchine come noi. Natura e limiti della soggettività robotica’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 45. He articulates his reflections along five routes: machines like us; intelligent machines; moral machines; emotional machines; and unconscious machines.

      Moving on to the following sections, we will draw attention to two parts of the book: the section on AI and civil liability, which comprises three chapters written by Ugo Ruffolo35x Ugo Ruffolo, ‘La responsabilità da artificial intelligence, algoritmo e smart product: per i fondamenti di un diritto dell’intelligenza artificiale self-learning’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 93; Ugo Ruffolo, Intelligenza Artificiale ed automotive: le responsabilità da veicoli self-driving e driverless, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 153. and Andrea Amidei;36x Andrea Amidei, Intelligenza Artificiale e responsabilità da prodotto, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 125. and the section on AI and criminal liability, which comprises two chapters written by Vittorio Manes37x Vittorio Manes, ‘L’oracolo algoritmico e la giustizia penale: al bivio tra tecnologia e tecnocrazia’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 547. and Paola Severino.38x Paola Severino, ‘Intelligenza artificiale e diritto penale’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 531.

      The chapters written by Ruffolo and Amidei offer an analysis of the interplay between AI and civil liability. The first chapter, authored by Ruffolo, scrutinises whether the articles of the Italian Civil Code on liability39x Articles 2049 to 2054 of the Italian Civil Code. are suited to address AI damage.40x It is possible to define AI damage as any ‘adverse impact affecting the life, health, physical integrity of a natural person, the property of a natural or legal person or causing significant immaterial harm that results in a verifiable economic loss’ which can be causally linked to an AI system. See the definition of ‘harm or damage’ provided in the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), art. 3, (i). Here the author argues that we must not assume that AI, as a new phenomenon, demands new laws. We must look at how to interpret what is already in place, especially in civil law systems.41x Ugo Ruffolo, ‘La responsabilità da artificial intelligence, algoritmo e smart product’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 94. Ruffolo agrees with Jeremy Levy’s claim that there is ‘no need to reinvent the wheel’.42x Jeremy Levy, ‘No Need to Reinvent the Wheel: Why Existing Liability Law Does Not Need to Be Preemptively Altered to Cope with the Debut of the Driverless Car’, J. Bus. Entrepreneurship & L. 9 (2016). This claim is also similar to Abbott’s thought, since he argues in favour of having better law rather than more law.43x Abbott, The Reasonable Robot, 3. See also infra, s. 3. For example, one could think of applying Article 2050 of the Italian Civil Code – which regulates liability arising from dangerous activities – to the production of certain AI goods.

      The second chapter, written by Amidei, zooms in on the interplay between AI systems and the European legislation on defective product liability.44x Council Directive 85/374/CEE of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L210. The third chapter, also by Ruffolo, builds upon the previous two chapters and provides a case study on the regulation of liability stemming from autonomous cars. Here, the author maintains that the introduction of autonomous vehicles, specifically those qualified by full driving automation,45x In a vehicle characterised by full automation, the AI system performs all driving tasks under any condition. The human occupant of the vehicle is never asked to intervene. Such vehicles are currently not on the market. will likely happen in the next decade and might lead to a change in the paradigm of liability for road traffic injuries.46x Technology might be evolving even faster than that. In this regard, the American National Highway Traffic Safety Administration has recently amended its vehicle safety standards to account for vehicles that do not contain ‘traditional manual controls associated with a human driver because they are equipped with Automated Driving Systems (ADS)’. See: Department of Transportation, National Highway Traffic Safety Administration, Occupant Protection for Vehicles With Automated Driving Systems, 49 CFR Part 571 Docket No. NHTSA-2021-0003 RIN 2127-AM06. Arguably, artificial pilots are much better drivers than humans.47x AI systems cannot be distracted by their phones while driving, nor will they drive the vehicle while intoxicated. Thus, one could argue that they pose different risks than the ‘traditional’ ones linked to human behaviour. Ruffolo makes this point in ‘La responsabilità da veicoli self-driving e driverless’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 155. For example, they could fail to identify a person jaywalking as a pedestrian and consequently cause a collision. This was the case in the (in)famous fatal Uber crash in Tempe, Arizona. See National Transportation Safety Board, Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018 (Washington DC: Highway Accident Report NTSB/HAR-19/03), 39. This is in line with the ethical principle of beneficence, according to which AI systems should be developed to ‘do good’.48x D’Avack mentions this principle in his chapter. See Lorenzo d’Avack, ‘La rivoluzione tecnologica e la nuova era digitale: problemi etici’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 21.

      Hence, quid novi? Nothing much for now, according to Ruffolo. In the foreseeable future, drivers of semi-autonomous cars,49x This term is used to refer to cars displaying level 3 and 4 automation according to the most popular classification of autonomous vehicles, developed by the Society of Automotive Engineers International (SAE J3016). Level 3 is defined as ‘conditional driving automation’, i.e., the system performs all dynamic driving tasks (such as accelerating and braking) but, if the system requests it, or stops working properly, the human in the driver’s seat must intervene and take over. He/she always has to be alert. Level 4 is defined as ‘high driving automation’, i.e., the system performs all dynamic tasks and will not require the human passenger to take over driving. Nevertheless, levels 3 and 4, unlike level 5 (full automation), are not able to operate under all conditions. For example, they might not be able to drive under dangerous weather conditions. as already happens with ‘average cars’, will still be liable according to Article 2054 of the Italian Civil Code, which establishes the liability of the driver of a vehicle for the damage caused to persons or to property by the operation of the vehicle.50x Unless the driver proves that he/she did all that was possible to avoid the damage. Ruffolo contends that the driver’s share of liability will progressively diminish: liability will gradually shift to those behind the code of the AI systems, i.e., the producers of the vehicle, in parallel with the technological shift to full automation.51x A joint liability scheme could combine product liability and Article 2050 of the Italian Civil Code. In other words, it is only when fully automated cars populate our roads that it will be feasible to regard the human in the car as a mere subject being transported. Nevertheless, Ruffolo argues that this shift will not lead to a full de-responsibilisation of the ‘transported’ human: he/she might still be liable, for example, for his/her negligent behaviour, i.e., for (not) performing the only acts which will be technically feasible in said vehicles, such as turning the system off.52x Ugo Ruffolo, ‘Intelligenza Artificiale ed automotive: le responsabilità da veicoli self-driving e driverless’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 168. It would have been interesting to see exactly how such a negligence standard would be constructed in this chapter. How much care can be demanded of the very same driver (and potential ‘supervisor’ of the car’s activity) who is told to ‘sit back and relax’?

      Moving on to criminal law, Manes and Severino divide their contributions into two parts: a first part regarding substantive law and a second regarding criminal procedure. The authors address the challenges that AI poses to the criminal justice system as a whole. These contributions reflect a general trend in the approach of Italian criminal legal doctrine to the topic, which is characterised by two features. First, differently from its international counterparts, the Italian front of the debate has been defined by overarching analyses rather than by the development of general theories.53x See for example, Fabio Basile, ‘Intelligenza artificiale e diritto penale: qualche aggiornamento e qualche nuova riflessione’, in Il sistema penale ai confini delle hard sciences, eds. Fabio Basile, Mario Caterini and Sabato Romano (Pisa: Pacini Giuridica, 2020); Silvio Riondato, ‘Robot: talune implicazioni di diritto penale’, in Tecnodiritto. Temi e informatica e robotica giuridica, eds. Paolo Moro and Claudio Sarra (Milano: Franco Angeli, 2017). Second, Italian authors, while writing in Italian, regularly refer to sources written by authors in different languages and from different legal backgrounds.54x Severino and Manes, for example, cite the work of Sabine Gleß, Emily Silverman, Thomas Weigend, Eric Hilgendorf, Susanne Beck and Gabriel Hallevy. This ‘import’ approach to literature does not seem to be reciprocated in the writings of German authors on the same topic, nor in those of common law scholars. The questions raised in the two chapters regarding criminal law are noteworthy.

      Manes, for example, asks how criminal law should regulate situations in which the act is the result of conduct shared between human and artificial agents. He contends that it would be possible to identify a duty to act incumbent upon the driver of a semi-autonomous car.55x Specifically, he mentions level 3 of the SAE J3016 standards, which is also referred to as ‘hands and feet free but not “mind free” driving’. See V.A. Banks et al., ‘Subsystems on the road to full vehicle automation: hands and feet free but not “mind” free driving’, Safety Science 62 (2014). Let us focus on this idea for a moment.

      According to the author, the duty would be ‘activated’ when the driving system requires the driver to regain control of the car. The failure to comply with such a request and, consequently, with the duty to act, would give rise to criminal liability in the form of ‘commission-by-omission’.56x Notwithstanding that most criminal offenses punish active conduct, criminal law might also be extended to punish failures to act, even when the criminal offense is formulated only in active terms (i.e., requiring an active conduct which causes a result). For example, a babysitter could be punished for murder because he/she did not prevent the death by suffocation of the child he/she was babysitting. This type of offense requires a legal duty to act. As of today, there is no explicit legal duty to prevent an AI system, specifically a semi-autonomous vehicle, from causing harm. In fact, neither European, international, nor domestic legislation contains express provisions in this regard. The question then becomes whether it would be possible to subsume AI systems within the applicative sphere of already existing legal duties by way of interpretation – similarly to what Ruffolo asks himself in his chapters.

      I do not aim to answer this question in this essay. Undoubtedly, opening up to commission-by-omission would entail confronting the enormous difficulties in ascertaining causation, which are innately tied to omission cases. Moreover, when dealing with the actions of AI systems one is confronted with the simultaneous presence of a myriad of alternative causal factors, both human and non-human. This makes it impractical, on the one hand, to identify the single factor that has not been activated to prevent or interrupt the causal process that has already begun and, on the other hand, to exclude alternative causal factors with the certainty required by modern (criminal) legal systems. These, and many others, are the thought-provoking issues raised by the authors.

    • 3. The principle of AI legal neutrality

      Let us turn now to the US side of the debate. The Reasonable Robot. Artificial Intelligence and the Law is a pocket-sized book (143 pages) containing not-so-pocket-sized ideas. Ryan Abbott follows a unique fil rouge, namely the concept of AI legal neutrality, meaning that the law should not discriminate between AI and humans when they display the same behaviour. In other words, the law ought to be neutral when regulating AI-related phenomena, as this would lead to social advantages. It follows that, as this technology advances and gradually takes the place of humans in certain roles, ‘AI will need to be treated more like people, and sometimes people will need to be treated more like AI’.57x Abbott, The Reasonable Robot, 4.

      The concept of AI legal neutrality is used by Abbott as a lens through which to analyse how regulators should address AI in four areas of the law (tax, tort, intellectual property, and criminal law). Abbott’s book is divided into seven chapters. In the first chapter the author briefly describes the history of the development of AI and its current applications. In the second chapter Abbott answers the question of whether AI should pay taxes, while in the third chapter he considers the application of tort liability and negligence following AI-related harm. Chapters 4 and 5 deal with AI inventions and intellectual property. Chapter 6 elaborates on AI and criminal liability. Finally, chapter 7 addresses alternative perspectives on AI legal neutrality.

      Chapters 3 and 6 represent the perfect specimens for illustrating Abbott’s principle of AI legal neutrality. Chapter 3, entitled ‘Reasonable Robots’, addresses AI harm from a tort perspective. In the American legal system, a tort is any harmful civil act other than a breach of contract. It gives rise to the right of the injured party to redress. Legal systems resort to negligence standards to impose (civil) liability in the lion’s share of cases where injury occurs. This entails that the law asks the decision-maker to establish whether the defendant breached a duty of care, i.e., whether she acted unreasonably considering foreseeable risks. The required duty of care is assessed against the hypothetical behaviour of a reasonable model agent. When it comes to injuries caused by defective products, instead, liability tends to be pinned upon the defendant without any kind of fault requirement, through so-called strict liability constructs.

      Throughout these chapters, Abbott maintains that it is vital to recognise that ‘what is needed is not necessarily more or less law, but the right law’.58x Abbott, The Reasonable Robot, 3. His stance resembles Ruffolo’s point of view, see supra s. 2. The key to obtaining the ‘right’ law might lie in levelling the playing field between humans and algorithms.59x Abbott, The Reasonable Robot, 3. Yet, he does not advocate for AI rights or legal personhood. He acknowledges that, since AI lacks humanlike consciousness and interests, it does not morally deserve rights, and therefore treating AI as if it did could be justified only insofar as it benefits the community. The rationale is the same as for corporations: their rights and duties exist only to improve the efficiency of human activities. Indeed, like corporations, AI systems do not morally deserve rights, as they are not members of our ‘moral community’ but only of our legal community.

      These reflections represent one of the links which allow us to refer back to Il diritto, i diritti, l’etica, specifically to Paolo Moro’s chapter. Moro and Abbott start their reflections from the same point – it is now possible to build machines which possess certain ‘traditional’ human traits – and reach the same conclusion – rejecting the claim that AI systems can be moral agents. The Reasonable Robot, therefore, cannot be deemed morally culpable for its actions. Yet, Abbott differs in the sense that he abandons humanocentrism for a more even playing field, where not only are AI systems compared to humans, but also vice versa. In other words, he does not focus only on a ‘machines like us’ perspective, but also on a ‘we, like machines’ one.

      What does Abbott mean, then, by Reasonable Robot? Abbott hypothesises that there will (soon) be a time when it will be practical for AI automation to substitute humans. For example, in a manner similar to Ruffolo, he asserts that it is reasonable to assume that autonomous vehicles will soon become safer drivers than humans. This entails that AI systems will represent the new standard of care, i.e., the Reasonable Robot standard. His argument can be deconstructed into two elements. The first part of the argument builds upon the fact that the law treats AI as a product and hence applies a strict liability standard to AI-generated torts, whereas it applies a negligence standard to human-generated torts. Abbott argues that this differentiation discriminates against AI as compared to humans. Therefore, AI-generated torts60x Abbott defines AI-generated torts as cases in which an ‘AI engages in activity that a person could engage in’ (such as analyzing an X-ray to identify the presence of a ruptured bone) and ‘acts in a manner that would be negligent for a human tortfeasor’ (such as providing the wrong diagnosis). Abbott, The Reasonable Robot, 61. should be judged against the new Reasonable Robot negligence standard. In other words, ‘AI manufacturers would be financially liable when their AI causes accidents a person would have avoided’.61x Abbott, The Reasonable Robot, 61. Abbott believes that this would lead to multiple benefits: most importantly, it would boost innovation and automation, while increasing safety. The second part of the argument takes this reasoning even further. According to Abbott, when automation substitutes human agents, we will also face AI-generated torts committed by AI tortfeasors. Accordingly, he contends that the law should hold said AI systems liable based on the Reasonable Robot negligence standard. This is AI legal neutrality at its core.

      Turning now to chapter 6, this represents an adapted version of the article ‘Punishing Artificial Intelligence: Legal Fiction or Science Fiction’,62x Ryan Abbott and Alexander Sarch, ‘Punishing Artificial Intelligence: Legal Fiction or Science Fiction’, UC Davis Law Review 53 (2019). a previous work co-authored with Alexander Sarch. The article, as of today, contains some of the most significant contributions to the discussion on the criminal liability of AI systems. To begin with, Abbott introduces two key concepts: irreducibility and Hard AI Crime(s).63x The term AI Crime (AIC) appears also in Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo and Luciano Floridi, ‘Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions’, Science and Engineering Ethics 26 (2020): 89-120. According to the author, the combination of four features connected to AI behaviour might lead to irreducibility, i.e., the impossibility of tracing a crime back to a liable person. These features are: unpredictability, unexplainability, autonomy, and complexity. Drawing on this assumption, Abbott presents the term ‘Hard AI Crime’ to refer to instances in which harmful AI conduct cannot be traced back to the wrongful act of a person, either for practical reasons (because of the difficulty of identifying how individuals singularly contributed to the design of the system) or because the human misconduct does not meet the threshold required to activate the criminal sanction. Thus, Hard AI Crimes seem to make ‘the strongest case for punishing artificial intelligence’.64x Abbott, The Reasonable Robot, 112.

      In this chapter, therefore, Abbott considers whether the doctrinal and theoretical commitments of criminal law can be reconciled with imposing criminal liability on AI. Starting from Hart’s ‘mainstream’ definition of punishment65x H.L.A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford: Oxford University Press, 2018) in Abbott, The Reasonable Robot, 115. and adopting a very pragmatic approach based on a cost-benefit evaluation – which reflects the traditional utilitarian thinking of most common law scholars – the author delivers an analysis of the foundations of criminal punishment for AI systems. According to Abbott, direct punishment of AI could achieve general deterrence of developers, owners, and users of AI systems66x In doing so, Abbott directly addresses Peter Asaro and claims that he failed to recognise the difference between general and special deterrence. See Peter M. Asaro, A Body to Kick, but Still No Soul to Damn, 181. and could have certain expressive benefits for the victims of harmful AI behaviour. For example, it would convey a message of official condemnation. Moreover, it could prove fruitful from a retributivist point of view. Let us discuss this last claim.

      Retribution (or desert) entails that people (and AI agents?) ‘should be punished (i.e., suffer some harm or setback to interest) because they deserve to be punished’.67x John Danaher, ‘Robots, law and the retribution gap’, Ethics and Information Technology 18 (2016): 302. Some may argue that AI systems cannot be punished since they cannot experience ‘pain or other consequences’.68x Hart, Punishment and Responsibility: Essays in the Philosophy of Law, 4 in Abbott, The Reasonable Robot, 123. Abbott identifies and discusses three challenges which could be brought against AI punishment from a retributivist point of view, namely: the eligibility challenge, the reducibility challenge, and the spillover objection. When discussing these challenges, the author introduces reflections (such as the ones on Bratman’s Belief Desire Intention Model or the Random Darknet Shopper case study) which would later be developed by other authors (e.g., by Lagioia and Sartor). The challenge discussed here is part of the eligibility challenge and is referred to by Abbott as the ‘True Punishment’ challenge. See Abbott, The Reasonable Robot, 123. Abbott rebuts this claim, pointing out that in certain cases criminal law disregards the offender’s ‘personal’ experience of suffering and unpleasantness when establishing his/her liability. Indeed, offenders may be sentenced and punished even when they have a medical condition which makes them ‘incapable of experiencing pain or distress’,69x Abbott, The Reasonable Robot, 123. in light of the fact that certain sanctions, such as being deprived of liberty, are objectively regarded as unpleasant. Moreover, he also believes that one should distinguish between ‘conviction’, i.e., the application of criminal law, and ‘punishment’, i.e., the ‘sentence to which the convicted party is subject’.70x Abbott, The Reasonable Robot, 124. Consequently, even if ‘punishing AI may not be conceptually possible, applying criminal law to AI so that it can be convicted of offenses is’71x Abbott, The Reasonable Robot, 124. and ‘it could still have good consequences to call it punishment when AI is convicted’.72x Abbott, The Reasonable Robot, 124. Abbott does not specify what these ‘good consequences’ would be. The question one asks oneself when reading this passage is: if we strip criminal law of punishment, can we still call it criminal law? In the eyes of a criminal lawyer, the bond between crime and punishment appears unbreakable.

      Ultimately, Abbott believes that it would be possible to build a coherent theoretical case for punishing AI in compliance with the principles of criminal law. Such a system would also conform to the principle of AI legal neutrality. Nevertheless, this operation is not justified, since less ‘disruptive’ alternatives, i.e., options that could offer the same benefits, exist.73x The alternatives proposed by Abbott include the creation of a responsible person regime. Abbott argues that creating a mandatory requirement for anyone who creates or operates an AI capable of causing harm to register ex ante a person responsible for the AI crime (similarly to the offense of driving without a driving license, it would be a crime not to designate a responsible person) would not be the preferable option. Rather, the responsible person should be identified by default (for example, it could be the AI’s manufacturer or developer). The responsible person could then be punished directly for AI-generated crimes, either because of a negligent failure to comply with newly defined duties of supervision and care over the algorithm or via the creation of new strict liability offenses. Punishing AI, then, is simply a bad idea. Certainly, the approach adopted by Abbott in this chapter is praiseworthy: it breaks away from the belief that criminal law should work as a panacea for all evil.

      A brief mention should now be made of chapters 4 and 5 on intellectual property. Ryan Abbott, who is also a licensed attorney, was part of the team of patent attorneys that filed the first patent application worldwide to claim AI-generated inventions.74x As of August 2021, the team had managed to obtain two patents, in Australia and South Africa respectively. To find out more, see the webpage of the Artificial Inventor Project: https://artificialinventor.com/first-patent-granted-to-the-artificial-inventor-project/. As a consequence, these chapters prove interesting both for practitioners and academics, demonstrating Abbott’s valuable practical approach to legal issues. Traditionally, intellectual property law provides that an inventor should be human, hence an AI system cannot be recognised as such. Moreover, there seems to be a gap with regard to whether AI-generated inventions can be protected by patents. Abbott argues that the law should permit patents for inventions generated by AI systems and should recognise AI as an inventor whenever it fulfils the relevant criteria. This would discourage negative practices, such as free-riding, while encouraging innovation and pushing businesses to use AI to invent.

      In conclusion, while some might argue that Abbott’s predictions are nothing but science fiction, it is clear that his book represents a staging post for all those interested in this field of research. Even though his analysis is rooted in the American legal system, it does not suffer from parochialism. As a matter of fact, the author himself refers, for example, to European policing initiatives.75x Abbott, The Reasonable Robot, 127. Moreover, Abbott puts forward general reflections which can be transposed by the readers into different legal systems.

    • 4. Concluding remarks

      In a 2020 study on the scope of legal literature on AI,76x Constanta Rosca et al., ‘Return of the AI: An Analysis of Legal Research on Artificial Intelligence Using Topic Modelling’, Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop (2020). which was conducted with the aid of a machine learning technique called topic modelling,77x Topic modelling is a method for classifying collections of documents. The authors adopted Latent Dirichlet Allocation (LDA) topic modelling. They used the tool to identify recurring topics in 3,931 journal articles on AI legal research. researchers found that scholarly output boomed in the so-called ‘deep learning era’.78x The expression refers to the period from the early 2000s until today. Catalina Goanta et al., ‘Back to the Future: Waves of Legal Scholarship on Artificial Intelligence’, in Time, Law and Change, ed. Sofia Ranchordás and Yaniv Roznai (Oxford: Hart Publishing, 2020), 331. As the authors argue, ‘with over 2500 publications already by the year 2015 referring to “artificial intelligence” … it may no longer be realistic to assume that researchers can keep up with legal research on AI, or the number of publications in general’.79x Rosca et al., ‘Return of the AI: An Analysis of Legal Research on Artificial Intelligence Using Topic Modelling’, 1. Indeed, the books discussed in this review situate themselves in this ‘ocean’ of scholarly literature.
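      For readers curious about the method behind footnote 77, a minimal sketch of LDA topic modelling follows. It is our own illustration, written in Python with the scikit-learn library; the six-document corpus and the three-topic setting are invented stand-ins for the 3,931 articles analysed in the cited study:

          from sklearn.decomposition import LatentDirichletAllocation
          from sklearn.feature_extraction.text import CountVectorizer

          # Stand-in corpus: each string plays the role of one journal article.
          docs = [
              "liability negligence tort autonomous vehicle damage compensation",
              "patent inventor intellectual property invention innovation",
              "punishment mens rea criminal liability culpability deterrence",
              "negligence duty of care reasonable standard tort liability",
              "patent application inventor innovation intellectual property",
              "criminal punishment retribution deterrence culpability",
          ]

          # Turn the documents into a matrix of word counts ...
          vectorizer = CountVectorizer(stop_words="english")
          counts = vectorizer.fit_transform(docs)

          # ... and let LDA infer three latent 'topics', i.e. probability
          # distributions over words that tend to co-occur across documents.
          lda = LatentDirichletAllocation(n_components=3, random_state=0)
          doc_topics = lda.fit_transform(counts)  # per-document topic weights

          # Show the most characteristic words of each inferred topic.
          words = vectorizer.get_feature_names_out()
          for k, topic in enumerate(lda.components_):
              top = [words[i] for i in topic.argsort()[-4:][::-1]]
              print(f"topic {k}: {', '.join(top)}")

      Applied to a real corpus and combined with publication dates, this kind of analysis is what allows researchers to chart waves of scholarship over time.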

      What could be the role of a new legal scholar in this mare magnum? We are optimists. In fact, expecting new legal scholarship to account for each and every publication on AI law would mean imposing a cumbersome burden. Rather, the AI legal scholar of the future could focus on specific questions (e.g., negligence, causality) and/or on specific sectors (e.g., driving automation, healthcare). This type of research would exploit the existing literature on AI law to its fullest, as it already contains a systematisation of the main directions of inquiry and of the questions which should be asked (and answered). As such, it works as the general framework inside which future (more specific) reflections can situate themselves.

      Ultimately, The Reasonable Robot and Intelligenza artificiale. Il diritto, i diritti, l’etica complement each other. Through Ruffolo’s edited volume the reader can gain an extensive overview of the legal issues which surround the advancement of AI technologies, while through Abbott’s book one can grasp possible solutions to those issues by means of AI legal neutrality. Read together, they serve as examples of the difference between common law and civil law scholars in approaching conflict, which in the present case is represented by the disruptive impact of AI technologies on our society.

      Indeed, where the ‘common law mind’80x A.W.B. Simpson, Legal Theory and Legal History: Essays on the Common Law (London: The Hambledon Press, 1987), 394. tends to find a convincing pragmatic solution, the civil law mind tries to solve the conflict beforehand ‘through hierarchic organized norms’.81x Susanne Beck, ‘Mediating the Different Concepts of Corporate Criminal Liability in England and Germany’, German L.J. 11 (2010): 1105. In other words, ‘[t]he instinct of the civilian is to systematize. The working rule of the common lawyer is solvitur ambulando’.82x Thomas Mackay Cooper, ‘The Common and the Civil Law – A Scot’s View’, Harvard Law Review 63 (1950): 471. Ideally, the two lawyers can learn from each other. For example, the flexible approach of the ‘common lawyer’ to the resolution of conflicts could prove handy for the ‘civil lawyer’ in keeping up with the development of AI.83x For an interesting analysis of the common law vs. statutory law approach to technological development, see Lyria Bennett Moses, ‘Adapting the Law to Technological Change: A Comparison of Common Law and Legislation’, UNSW Law Journal 26 no. 2 (2003).

    Noten

    • 1 There is no generally accepted definition of AI. This essay will not account for all the different definitions of artificial intelligence theorised in the past fifty years. We will limit ourselves to acknowledging that there is debate on the (legal) definition of AI and that the term AI can be used to refer both to a set of technologies and to a specific scientific discipline, which branches from computer science. For a systematic analysis of the issue of defining ‘artificial intelligence’ see inter alia: Pei Wang, ‘On Defining Artificial Intelligence’, Journal of Artificial General Intelligence 10 no. 2 (2019): 1-37. For a thorough overview of existing AI definitions, see the research conducted on 55 documents by Sofia Samoili et al., AI Watch. Defining Artificial Intelligence. Towards an operational definition and taxonomy of artificial intelligence, EUR 30117 EN (Luxembourg: Publications Office of the European Union, 2020).

    • 2 For a more in-depth analysis of the subject, see ex multis: Luciano Floridi, ‘AI and Its New Winter: from Myths to Realities’, Philosophy & Technology 33 (2020): 1-3; Michaela Haenlein and Andreas Kaplan, ‘A Brief History of Artificial Intelligence: On the Past, Present and Future of Artificial Intelligence’, California Management Review 61 no. 4 (2019): 5-14; Youjung Shin, ‘The Spring of Artificial Intelligence in Its Global Winter’, IEEE Annals of the History of Computing 41 no. 4 (2019); Stuart J. Russell and Peter Norvig, Artificial Intelligence. A Modern Approach (London: Pearson, 2003), 16-27.

    • 3 Amongst the most relevant books which have been published on the subject in English, see Woodrow Barfield, ed., The Cambridge Handbook of the Law of Algorithms (Cambridge: Cambridge University Press, 2021); Matt Hervey and Matthew Lavy, The Law of Artificial Intelligence (Mytholmroyd: Sweet & Maxwell, 2021); Thomas Wischmeyer and Timo Rademacher, eds., Regulating Artificial Intelligence (Berlin: Springer, 2020); Martin Ebers and Susana Navas, eds., Algorithms and Law (Cambridge: Cambridge University Press, 2020); Christoph Busch and Alberto De Franceschi, eds., Algorithmic Regulation and Personalized Law. A Handbook (Baden-Baden: CH Beck-Hart-Nomos, 2020); Ugo Pagallo and Woodrow Barfield, eds., Research Handbook on the Law of Artificial Intelligence (Cheltenham: Edward Elgar Publishing, 2018); Ugo Pagallo, The Laws of Robots. Crimes, Contracts, and Torts (Berlin: Springer, 2013). In Italian legal doctrine, see: Giancarlo Taddei Elmi and Alfonso Contaldo, Intelligenza artificiale-Algoritmi giuridici: Ius condendum o fantadiritto? (Pisa: Pacini, 2020); Paolo Moro and Claudio Sarra, eds., Tecnodiritto. Temi e problemi di informatica e robotica giuridica (Milano: Franco Angeli, 2017).

    • 4 AlphaGo, an AI system developed by Google’s DeepMind, beat the world Go champion Lee Sedol in 2016.

    • 5 Throughout this essay, I will use the term ‘AI law’ to refer to this new field of research, that is, to hard law, soft law, and legal scholarship dealing with AI. Consequently, by AI law we mean both new regulations of AI (whether adopted or merely proposed) and inquiries into how to adapt existing regulation to the specificities of AI.

    • 6 One can think of these concepts as Russian nesting dolls: machine learning is a subfield of artificial intelligence; deep learning is a subfield of machine learning and artificial neural networks are the building blocks of deep learning. See Eda Kavlakoglu, ‘AI vs. Machine Learning vs. Deep Learning vs. Neural Networks: What’s the Difference?’, 27 May 2020, https://www.ibm.com/cloud/blog/ai-vs-machine-learning-vs-deep-learning-vs-neural-networks.
      An algorithm based on machine learning (ML) techniques teaches itself rules by learning from the training data through statistical analysis, detecting patterns in large amounts of information. Deep learning (DL) is a sub-set of ML where the system consists of layers of artificial neural networks (ANNs). The network analyses data and identifies relevant features by itself. ANNs are made from multiple layers of artificial neurons encoded in software. Each neuron can be connected to others in the layers above. One neuron receives an ‘input’ (for example, information on a pixel in a picture) and another neuron produces an ‘output’ (for example, the classification of the picture). This technique is inspired by the functioning of the human brain. See Harry Surden, ‘Artificial Intelligence and Law: An Overview’, Georgia State University Law Review 35 no. 4 (2019). For a visual and approachable explanation of the functioning of deep learning, see Meor Amer, A Visual Introduction to Deep Learning (kDimensions, 2021).
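      By way of illustration only, the layered structure just described can be rendered as a minimal code sketch. Everything below (layer sizes, weights, pixel values) is an invented toy example, not drawn from the reviewed books:

```python
# Toy feedforward network: input pixels -> hidden layer -> output classification.
# All sizes and values are arbitrary illustrations of the description above.
import numpy as np

def relu(x):
    # A common activation function: passes positive signals, blocks negative ones.
    return np.maximum(0, x)

rng = np.random.default_rng(seed=0)

# 'Input' neurons: e.g., brightness values of four pixels in a picture.
pixels = np.array([0.1, 0.8, 0.3, 0.5])

# Each layer of artificial neurons is connected to the layer before it
# through a matrix of weights (here initialised randomly, i.e., untrained).
w_hidden = rng.normal(size=(3, 4))   # hidden layer: 3 neurons, 4 inputs each
w_output = rng.normal(size=(2, 3))   # output layer: 2 candidate classes

hidden = relu(w_hidden @ pixels)     # each hidden neuron weighs all the pixels
scores = w_output @ hidden           # each output neuron weighs the hidden layer

# 'Output': the classification of the picture (whichever class scores highest).
print("predicted class:", int(np.argmax(scores)))
```

      In a real deep learning system the weights are not random but are adjusted during training, which is how the network identifies relevant features by itself.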

    • 7 Ugo Ruffolo, ed., Intelligenza Artificiale. Il diritto, i diritti, l’etica (Milano: Giuffré, 2020).

    • 8 Ryan Abbott, The Reasonable Robot. Artificial Intelligence and the Law (Cambridge: Cambridge University Press, 2020).

    • 9 The metaphor recalls two leitmotivs of the academic and media discourse on the advancement of AI, which is characterised by the polarisation between techno-optimists and techno-pessimists. Admittedly, as argued by Danaher, ‘much of the academic debate about the impacts of technology on society has a pessimistic angle to it, highlighting the ethical harms and unanticipated effects of technology on the environment, social norms and personal well-being … Indeed, many academics see techno-optimism as irrational and superstitious – a faith-based initiative with little grounding in reality’. According to the author, techno-pessimism ‘may have deeper roots in intellectual temperament. Some have pointed out that pessimistic views are de rigueur among intellectuals, particularly in the post-Enlightenment era (Harris, 2002; Prescott, 2012); optimistic views are, by contrast, “not regarded as intellectually respectable”’. See John Danaher, ‘Techno-optimism: an Analysis, an Evaluation and a Modest Defence’, Philosophy & Technology 35 (2022): 54.

    • 10 See Harry Surden, ‘Artificial Intelligence and Law: An Overview’, 1309. As of today, there is no agreement on whether GAI will ever be achieved.

    • 11 The expression is borrowed from a passage of the famous 1624 Meditation XVII by John Donne, ‘Meditation XVII. Nunc Lento Sonitu Dicunt, Morieris’, in Devotions Upon Emergent Occasions, ed. Anthony Raspa (Oxford: Oxford University Press, 1987).

    • 12 See above footnote 5 for a definition of AI law.

    • 13 Ugo Ruffolo, ed., XXVI lezioni di diritto dell’intelligenza artificiale (Torino: Giappichelli, 2020).

    • 14 Andrea Amidei, ‘La governance dell’Intelligenza Artificiale: profili e prospettive di diritto dell’Unione Europea’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), Section VI, ch. 7, 571.

    • 15 Ugo Ruffolo and Andrea Amidei, ‘Intelligenza Artificiale, human enhancement e diritti della persona’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), Part II, Section II, ch. 4, 179.

    • 16 Ugo Ruffolo, ‘La “personalità elettronica”’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), Part II, Section II, ch. 5, 213.

    • 17 Guido Alpa, ‘Preface’, in Intelligenza artificiale. Il diritto, i diritti, l’etica, ed. Ugo Ruffolo (Milano: Giuffré, 2020), XVII.

    • 18 Augusto Barbera, ‘Preface’, in Intelligenza artificiale. Il diritto, i diritti, l’etica, ed. Ugo Ruffolo (Milano: Giuffré, 2020), XX.

    • 19 See Stefano Rodotà, ‘Etica e Diritto (dialogo tra alcuni studenti e Stefano Rodotà) con una Presentazione di Gaetano Azzariti’, Costituzionalismo.it 1 (2019): 25.

    • 20 Think for example of the famous Hart-Devlin debate on the criminalisation of immoral conduct. For a reconstruction and a revisitation of the debate, see James Allan, ‘Revisiting the Hart-Devlin Debate: At the Periphery and By the Numbers’, San Diego L. Rev. 54 (2017): 423.

    • 21 The platform is available at https://www.moralmachine.net. See Edmond Awad et al., ‘The Moral Machine experiment’, Nature 563 (2018): 59-64.

    • 22 See Francesca Lagioia and Giovanni Sartor, ‘AI Systems Under Criminal Law: a Legal Analysis and a Regulatory Perspective’, Philosophy & Technology 33 (2020): 433-465; Sabine Gleß, Emily Silverman and Thomas Weigend, ‘If Robots Cause Harm, Who Is to Blame? Self-Driving Cars and Criminal Liability’, New Criminal Law Review 19 no. 3 (2016); Sabine Gleß and Thomas Weigend, ‘Intelligente Agenten und das Strafrecht’, ZSTW 126 no. 3 (2014): 561-591; Peter M. Asaro, ‘A Body to Kick, but Still No Soul to Damn: Legal Perspectives on Robotics’, in Robot Ethics: The Ethical and Social Implications of Robotics, ed. Patrick Lin, Keith Abney and George A. Bekey (Cambridge, Massachusetts: MIT Press, 2011), 169-186; Samir Chopra and Laurence F. White, A Legal Theory for Autonomous Artificial Agents (Ann Arbor: Univ. of Michigan Press, 2011); Pagallo, The Laws of Robots, 76.

    • 23 Mens rea is a Latin expression which literally translates to ‘guilty mind’. The term is used in criminal legal doctrine to refer to the subjective element of a crime, i.e., ‘the necessary link between a person’s conduct in violation of a criminal prohibition (actus reus) and the person’s mind’. See Thomas Weigend, ‘Subjective Elements of Criminal Liability’, in The Oxford Handbook of Criminal Law, ed. Markus D. Dubber and Tatjana Hörnle (Oxford: Oxford University Press, 2019), 491.

    • 24 High-Level Expert Group on Artificial Intelligence, Ethics Guidelines for Trustworthy Artificial Intelligence (AI), 8 April 2019, https://ec.europa.eu/futurium/en/ai-alliance-consultation.1.html.

    • 25 OECD, Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449, https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449.

    • 26 UNESCO, Recommendation on the Ethics of Artificial Intelligence, SHS/BIO/REC-AIETHICS/2021, https://unesdoc.unesco.org/ark:/48223/pf0000380455.

    • 27 G.E.M. Anscombe, ‘Modern Moral Philosophy’, Philosophy 33 no. 124 (1958): 1-19.

    • 28 Anaïs Rességuier and Rowena Rodrigues, ‘AI ethics should not remain toothless! A call to bring back the teeth of ethics’, Big Data & Society (2020): 2.

    • 29 ‘[…] the issue is not that ethics is asked to do something for which it is too weak, or too soft. It is rather that it is asked to do something that it is not designed to do. Blaming ethics for having no teeth to ensure compliance with whatever it calls for is like blaming the fork for not cutting meat properly: this is not what it is designed to do. The objective of ethics itself is not to impose particular behaviours and to ensure these are complied with. The problem arises when it is used to do so. This is particularly evident in AI ethics, where ethical principles, norms or requirements are called for to regulate AI and ensure that it does not harm individuals and the society at large (e.g. AI HLEG)’, Rességuier and Rodrigues, ‘AI ethics should not remain toothless! A call to bring back the teeth of ethics’, 2.

    • 30 Lorenzo d’Avack, ‘La rivoluzione tecnologica e la nuova era digitale: problemi etici’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 3.

    • 31 Scientificity.

    • 32 Ethicality.

    • 33 Ugo Pagallo, ‘Etica e diritto dell’Intelligenza Artificiale nella governance del digitale: il Middle-Out Approach’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 29.

    • 34 Paolo Moro, ‘Macchine come noi. Natura e limiti della soggettività robotica’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 45.

    • 35 Ugo Ruffolo, ‘La responsabilità da artificial intelligence, algoritmo e smart product: per i fondamenti di un diritto dell’intelligenza artificiale self-learning’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 93; Ugo Ruffolo, ‘Intelligenza Artificiale ed automotive: le responsabilità da veicoli self-driving e driverless’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 153.

    • 36 Andrea Amidei, ‘Intelligenza Artificiale e responsabilità da prodotto’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 125.

    • 37 Vittorio Manes, ‘L’oracolo algoritmico e la giustizia penale: al bivio tra tecnologia e tecnocrazia’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 547.

    • 38 Paola Severino, ‘Intelligenza artificiale e diritto penale’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 531.

    • 39 Articles 2049 to 2054 of the Italian Civil Code.

    • 40 It is possible to define AI damage as any ‘adverse impact affecting the life, health, physical integrity of a natural person, the property of a natural or legal person or causing significant immaterial harm that results in a verifiable economic loss’ which can be causally linked to an AI system. See the definition of ‘harm or damage’ provided in the European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence (2020/2014(INL)), art. 3, (i).

    • 41 Ugo Ruffolo, ‘La responsabilità da artificial intelligence, algoritmo e smart product’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 94.

    • 42 Jeremy Levy, ‘No Need to Reinvent the Wheel: Why Existing Liability Law Does Not Need to Be Preemptively Altered to Cope with the Debut of the Driverless Car’, J. Bus. Entrepreneurship & L. 9 (2016).

    • 43 Abbott, The Reasonable Robot, 3. See also infra, s. 3.

    • 44 Council Directive 85/374/EEC of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L210.

    • 45 In a vehicle characterised by full automation, the AI system performs all driving tasks under any condition. The human occupant of the vehicle is never asked to intervene. Such vehicles are currently not on the market.

    • 46 Technology might be evolving even faster than that. In this regard, the American National Highway Traffic Safety Administration has recently amended its vehicle safety standards to account for vehicles that do not contain ‘traditional manual controls associated with a human driver because they are equipped with Automated Driving Systems (ADS)’. See: Department of Transportation, National Highway Traffic Safety Administration, Occupant Protection for Vehicles With Automated Driving Systems, 49 CFR Part 571, Docket No. NHTSA-2021-0003, RIN 2127-AM06.

    • 47 AI systems cannot be distracted by their phones while driving, nor will they drive the vehicle while intoxicated. Thus, one could argue that they pose different risks than the ‘traditional’ ones linked to human behaviour. Ruffolo makes this point in ‘La responsabilità da veicoli self-driving e driverless’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 155. For example, they could fail to identify a person jaywalking as a pedestrian and consequently cause a collision. This was the case in the (in)famous fatal Uber crash in Tempe, Arizona. See National Transportation Safety Board, Collision Between Vehicle Controlled by Developmental Automated Driving System and Pedestrian, Tempe, Arizona, March 18, 2018 (Washington DC: Highway Accident Report NTSB/HAR-19/03), 39.

    • 48 D’Avack mentions this principle in his chapter. See Lorenzo d’Avack, ‘La rivoluzione tecnologica e la nuova era digitale: problemi etici’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 21.

    • 49 This term is used to refer to cars displaying level 3 and 4 automation according to the most popular classification of autonomous vehicles, developed by the Society of Automotive Engineers International (SAE J3016). Level 3 is defined as ‘conditional driving automation’, i.e., the system performs all dynamic driving tasks (such as accelerating and braking) but, if the system requests it or stops working properly, the human in the driver’s seat must intervene and take over. He/she always has to remain alert. Level 4 is defined as ‘high driving automation’, i.e., the system performs all dynamic tasks and will not require the human passenger to take over driving. Nevertheless, levels 3 and 4, unlike level 5 (full automation), are not able to operate under all conditions. For example, they might not be able to drive in dangerous weather conditions.

    • 50 Unless the driver proves that he/she did all that was possible to avoid the damage.

    • 51 A joint liability scheme which could combine product liability and article 2050 of the Italian Civil Code.

    • 52 Ugo Ruffolo, ‘Intelligenza Artificiale ed automotive: le responsabilità da veicoli self-driving e driverless’, in Intelligenza artificiale, ed. Ugo Ruffolo (Milano: Giuffré, 2020), 168.

    • 53 See for example, Fabio Basile, ‘Intelligenza artificiale e diritto penale: qualche aggiornamento e qualche nuova riflessione’, in Il sistema penale ai confini delle hard sciences, eds. Fabio Basile, Mario Caterini and Sabato Romano (Pisa: Pacini Giuridica, 2020); Silvio Riondato, ‘Robot: talune implicazioni di diritto penale’, in Tecnodiritto. Temi e problemi di informatica e robotica giuridica, eds. Paolo Moro and Claudio Sarra (Milano: Franco Angeli, 2017).

    • 54 Severino and Manes, for example, cite the work of Sabine Gleß, Emily Silverman, Thomas Weigend, Eric Hilgendorf, Susanne Beck and Gabriel Hallevy. This ‘import’ approach to literature does not seem to be reciprocated in the writings of German authors on the same topic, nor in those of common law scholars.

    • 55 Specifically, he mentions level 3 of the SAE J3016 standards, which is also referred to as ‘hands and feet free but not “mind free” driving’. See V.A. Banks et al., ‘Subsystems on the road to full vehicle automation: hands and feet free but not “mind” free driving’, Safety Science 62 (2014).

    • 56 Although most criminal offenses punish active conduct, criminal law might also be extended to punish failures to act, even when the criminal offense is formulated only in active terms (i.e., requiring an active conduct which causes a result). For example, a babysitter could be punished for murder because he/she did not prevent the death by suffocation of the child he/she was babysitting.

    • 57 Abbott, The Reasonable Robot, 4.

    • 58 Abbott, The Reasonable Robot, 3. His stance resembles Ruffolo’s point of view, see supra s. 2.

    • 59 Abbott, The Reasonable Robot, 3.

    • 60 Abbott defines AI-generated torts as cases in which an ‘AI engages in activity that a person could engage in’ (such as analyzing an X-ray to identify the presence of a ruptured bone) and ‘acts in a manner that would be negligent for a human tortfeasor’ (such as providing the wrong diagnosis). Abbott, The Reasonable Robot, 61.

    • 61 Abbott, The Reasonable Robot, 61.

    • 62 Ryan Abbott and Alexander Sarch, ‘Punishing Artificial Intelligence: Legal Fiction or Science Fiction’, UC Davis Law Review 53 (2019).

    • 63 The term AI Crime (AIC) appears also in Thomas C. King, Nikita Aggarwal, Mariarosaria Taddeo and Luciano Floridi, ‘Artificial Intelligence Crime: An Interdisciplinary Analysis of Foreseeable Threats and Solutions’, Science and Engineering Ethics 26 (2020): 89-120.

    • 64 Abbott, The Reasonable Robot, 112.

    • 65 H.L.A. Hart, Punishment and Responsibility: Essays in the Philosophy of Law (Oxford: Oxford University Press, 2018), cited in Abbott, The Reasonable Robot, 115.

    • 66 When doing so, Abbott directly addresses Peter Asaro and claims that he failed to recognise the difference between general and special deterrence. See Peter M. Asaro, ‘A Body to Kick, but Still No Soul to Damn’, 181.

    • 67 John Danaher, ‘Robots, law and the retribution gap’, Ethics and Information Technology 18 (2016): 302.

    • 68 Hart, Punishment and Responsibility: Essays in the Philosophy of Law, 4, cited in Abbott, The Reasonable Robot, 123. Abbott identifies and discusses three challenges which could be brought against AI punishment from a retributivist point of view, namely: the eligibility challenge, the reducibility challenge, and the spillover objection. When discussing these challenges, the author introduces reflections (such as the ones on Bratman’s Belief Desire Intention Model or the Random Darknet Shopper case study) which would later be developed by other authors (e.g., by Lagioia and Sartor). The challenge discussed here is part of the eligibility challenge and is referred to by Abbott as the ‘True Punishment’ challenge. See Abbott, The Reasonable Robot, 123.

    • 69 Abbott, The Reasonable Robot, 123.

    • 70 Abbott, The Reasonable Robot, 124.

    • 71 Abbott, The Reasonable Robot, 124.

    • 72 Abbott, The Reasonable Robot, 124.

    • 73 The alternatives proposed by Abbott include the creation of a responsible person regime. Abbott argues that a mandatory requirement for anyone creating or operating an AI capable of causing harm to register ex ante a responsible person for AI crime (analogously to the offense of driving without a driving license, it would be a crime not to designate a responsible person) would not be the preferable option. Rather, the responsible person should be identified by default (for example, it could be the AI’s manufacturer or developer). The responsible person could then be punished directly for AI-generated crimes, either for a negligent failure to comply with newly defined duties of supervision and care over the algorithm or via the creation of new strict liability offenses.

    • 74 As of August 2021, the team managed to obtain two patents, respectively in Australia and South Africa. To find out more, see the webpage of the Artificial Inventor Project: https://artificialinventor.com/first-patent-granted-to-the-artificial-inventor-project/.

    • 75 Abbott, The Reasonable Robot, 127.

    • 76 Constanta Rosca et al., ‘Return of the AI: An Analysis of Legal Research on Artificial Intelligence Using Topic Modelling’, Proceedings of the 2020 Natural Legal Language Processing (NLLP) Workshop (2020).

    • 77 Topic modelling is a method for automatically identifying recurring themes (‘topics’) in collections of documents. The authors adopted Latent Dirichlet Allocation (LDA) topic modelling and used the tool to identify recurring topics in 3,931 journal articles on AI legal research.
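      By way of illustration only, the following minimal Python sketch shows what LDA topic modelling looks like in practice. It is not the authors’ actual pipeline; the three-document corpus, the choice of the scikit-learn library, and the number of topics are all assumptions made for the example:

```python
# Hypothetical miniature version of an LDA topic-modelling pipeline;
# the study cited above analysed 3,931 journal articles, not this toy corpus.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

documents = [
    "artificial intelligence and tort liability for autonomous systems",
    "criminal liability and punishment of artificial intelligence",
    "machine learning models in judicial decision making",
]

# Represent each document as a bag-of-words count vector.
vectorizer = CountVectorizer(stop_words="english")
term_matrix = vectorizer.fit_transform(documents)

# Fit an LDA model that posits two latent topics in the corpus.
lda = LatentDirichletAllocation(n_components=2, random_state=0)
lda.fit(term_matrix)

# Inspect the highest-weighted words for each recovered topic.
words = vectorizer.get_feature_names_out()
for i, topic in enumerate(lda.components_):
    top_words = [words[j] for j in topic.argsort()[-4:][::-1]]
    print(f"Topic {i}: {', '.join(top_words)}")
```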

    • 78 The expression refers to the period from the early 2000s to the present. Catalina Goanta et al., ‘Back to the Future: Waves of Legal Scholarship on Artificial Intelligence’, in Time, Law and Change, ed. Sofia Ranchordás and Yaniv Roznai (Oxford: Hart Publishing, 2020), 331.

    • 79 Rosca et al., ‘Return of the AI: An Analysis of Legal Research on Artificial Intelligence Using Topic Modelling’, 1.

    • 80 A.W.B. Simpson, Legal Theory and Legal History: Essays on the Common Law (London: The Hambledon Press, 1987), 394.

    • 81 Susanne Beck, ‘Mediating the Different Concepts of Corporate Criminal Liability in England and Germany’, German L.J. 11 (2010): 1105.

    • 82 Thomas Mackay Cooper, ‘The Common and the Civil Law – A Scot’s View’, Harvard Law Review 63 (1950): 471.

    • 83 For an interesting analysis of the common law vs. statutory law approach to technological development, see Lyria Bennett Moses, ‘Adapting the Law to Technological Change: A Comparison of Common Law and Legislation’, UNSW Law Journal 26 no. 2 (2003).

