Criminal liability models in the digital environment: a modern challenge between liability models and man-machine criminal participation

  • Carlo Piparo, PhD Student - University of Udine
Keywords: Artificial Intelligence, liability, actus reus, criminal law, algorithm, mens rea

Abstract


The rapid progression and widespread integration of Information and Communication Technology (ICT) have ushered in a new era of sweeping social and legal transformations. Among these groundbreaking advancements, Artificial Intelligence has emerged as a pivotal force, permeating nearly every facet of daily life. From commerce and industry to healthcare, transportation, and entertainment, Artificial Intelligence technologies have become indispensable tools shaping the way we interact, work, and navigate the world around us. With its remarkable capabilities and ever-expanding reach, Artificial Intelligence stands as a testament to humanity's relentless pursuit of innovation and the boundless potential of technology to revolutionize society. While carrying out the tasks they are programmed for, Artificial Intelligence systems can perform actions that would constitute crimes if committed by humans. Yet crimes are subject to the reserve of law, so such conduct can be difficult to criminalize in the absence of written provisions. Nevertheless, in modern legal systems the structure of a crime requires not only the commission of a typical fact (actus reus) but also the determination to commit it (mens rea).

In this scenario, since Artificial Intelligence is a non-human entity, the reconstruction of criminal responsibility is particularly difficult to theorize. This is mainly due to the peculiar nature of the environment in which the machine operates: the digital environment is a digital reality, and many of its actors (for example, algorithms, protocols, and programs) are not human and can exist only within that reality. This means that, in this environment, machines can act, determine their own conduct, and possibly commit crimes with or without a human user.

This scenario makes it necessary to analyze Artificial Intelligence crimes in light of ordinary crimes, applying the general rules of criminal law. Such an analysis allows practitioners and scholars (lawyers, judges, and academics) to rely on three traditional liability models: perpetration-via-another, natural probable consequence, and direct liability. Through these models, it becomes possible to assess whether the machine has committed a crime.

Nevertheless, the three liability models mentioned above open the door to an entirely modern scenario: man-machine concurrence (the criminal participation between a human and an Artificial Intelligence algorithm). In fact, if theorizing the liability of the machine is already challenging, it is even more complicated to reconcile the concurrence between the living and the digital with modern Constitutions. Indeed, it is necessary to assess whether a machine can commit crimes (or is merely an instrument), to determine the condiciones sine quibus non under which the machine can concur with a human, and to establish how much responsibility can be attributed to it.

This paper aims to analyze the peculiarities of Artificial Intelligence, to deconstruct three possible Artificial Intelligence liability models and, finally, to theorize man-machine criminal participation.

Published
2024/03/19
Section
Original Scientific Paper