Dra. Maria Beatriz Garcia/ELPIS v-LAW Review N.º7/2023 - Teresa Machete, 140120122
ELPIS v-LAW Review N.º7/2023, "The new world of Artificial Intelligence and the law"
Dear Professor,
Dear colleagues,
I hope this post finds you well.
Following the challenge launched by the professor in class to write a commentary for the ELPIS v-Law Review, I present my commentary on the vlog by Professor Maria Beatriz Rebelo Garcia ("Public Liability when the Public Administration uses AI in the Decision-Making Process and Causes Damages to Particular Citizens"). A very current topic, and one well worth reflecting on!
Kind regards,
Teresa Vasconcelos Machete (n.º 140120122)
I. Artificial Intelligence and the need for regulation
Artificial Intelligence (AI) is now a reality and it is everywhere: from our phones suggesting songs or restaurants we might like to travelling in a self-driving vehicle. But what is AI? AI refers to systems that display intelligent behavior by analyzing their environment and taking actions – with some degree of autonomy – to achieve specific goals.
Like the steam engine or electricity in the past, AI is transforming our world, our society, and our industry. Growth in computing power, availability of data and progress in algorithms have turned AI into one of the most strategic technologies of the 21st century. Nowadays, AI systems promise to improve all sectors, from energy to education, from financial services to construction, or even agriculture, allowing for new approaches to problem-solving, creating the potential for better decision-making.
As with any transformative technology, some AI applications may raise new ethical and legal questions, for example related to liability or potentially biased decision-making. The EU must therefore ensure that AI is developed and applied in an appropriate framework which promotes innovation and respects the Union's values and fundamental rights as well as ethical principles such as accountability and transparency.
The emergence of AI, in particular the complex enabling ecosystem and the feature of autonomous decision-making, requires a reflection about the suitability of some established rules on safety and civil law questions on liability. Therefore, AI regulation ensures that Europeans can trust what AI has to offer. While most AI systems pose limited to no risk and can contribute to solving many societal challenges, certain AI systems create risks that we must address to avoid undesirable outcomes. For example, it is often not possible to find out why an AI system has made a decision or prediction and taken a particular action. So, it may become difficult to assess whether someone has been unfairly disadvantaged, such as in a hiring decision or in an application for a public benefit scheme.
For these reasons, the way we approach AI will define the world we live in. The EU must therefore take a coordinated approach to make the most of the opportunities and benefits that AI offers people and society, and to address the new challenges it brings.
II. Public Administration and AI
The introduction of this technology into administrative decision-making has brought about a real revolution in the way the Administration acts, especially regarding the practice of administrative acts. Through algorithms that allow the mass production of decisions, public administrations across the world have been benefitting from AI's advantages, such as efficiency, cost reduction, and faster procedures. When AI makes a decision, we are witnessing the practice of a true administrative act, the result of applying the rules of the algorithm to a specific case.
However, this new reality requires a response from the legal system to the challenges to come, and one of the main challenges is the civil liability of the State and other public entities. Indeed, given the increased risk that the use of a technology simulating human intelligence brings, damage can obviously be caused by the use and unpredictability of the system and by the way it escapes human control to some degree. The question we need to answer is: who is liable when AI fails to perform in the Public Administration? This question raises two problems:
1. The attribution of fault, the subjective element of liability
2. The causal link
The emergent behavior of these systems introduces an enormous degree of unpredictability, raising the question of what the role of the human hand in the decision-making process is, or should be, when AI is involved.
The legal personality of AI must be rejected; it is always the human behind the machine that must be held liable, provided, of course, that all the conditions are met. First and foremost, attributing personality to these entities would mean recognizing them as almost a person in the eyes of the law, which is ab initio degrading to us human beings, with our capacity to think and to make our own judgments. It is plainly not possible to hold a thing responsible: the AI system lacks the capacity for free decision-making and for autonomy with moral responsibility; it cannot distinguish right from wrong, fair from unfair, moral from immoral, the way a human can, and all these aspects must be considered. Secondly, the robot's actions are already predetermined through the algorithm, so it is not possible to speak of true free will or of fully independent autonomous behavior; there is always a precondition, namely an algorithm that has been created and formatted for the AI to perform as it should. Even though the system can reach a remarkable degree of autonomy, it always acts in an automatic way, making it impossible for us to speak of true behavior. Having said that, we can only agree that it is the Public Administration, its organs, and its agents that should be held responsible when AI causes damage to citizens.
Nevertheless, it is important to note that there are many degrees and levels of intensity of AI, and many risks of different levels to be considered. This leads us to the conclusion that there cannot be a single unified solution for the whole universe of AI: depending on the type of risk each system presents, the legal conduct required from the Public Administration will have to change and be configured in line with the level of risk.
The level of risk can also be unknown, for example when we are dealing with a disruptive new technology. If the risk is unknown, it means the Administration could not know, and had no means of knowing, of the risk's existence, and therefore should not be liable. This is called the risk of civilization: if we want to grow, expand, and evolve as a planet, we have to accept that some things are going to happen and that some risks will have to be taken. If, on the other hand, the risk is known and intolerable because it is very high and may collide with fundamental rights, and the Administration nevertheless uses the system, then in the event of damage we believe we may be facing a violation of the principles of proportionality, justice, and reasonableness, as well as of prevention and precaution, all principles accepted for public administration at the European and global level. Intolerability means that there is no respect for proportionality and no balancing of the benefits of the AI against the damage caused: when the possibility of damage to fundamental rights exists and is high, the use of AI must be refused by the Administration, and if it still chooses to use a given system, it should be held liable under a fault-based (subjective) solution.
Knowing that AI is a technology that involves risks, the Public Administration must actively prevent the damage that may result from it. This means that public entities must comply with technical standards and objective duties of care; otherwise, an unlawful and culpable conduct will be found. This is a liability for the omission of a certain behavior, namely the duty of care or vigilance, which is in line with the idea of the human in the loop, a concept much debated in the doctrine: the idea that a human must always be part of the decision-making process, that a decision should not be left to the machine alone, and that the human must be able to intervene in the process and correct aspects of it if necessary, which implies the responsibility of that subject.
III. Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence (COM (2022) 496, 28 September 2022)
When it comes to AI liability as applicable to the public sector, the most recent development is the Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to AI, the AI Liability Directive of 28 September 2022. The European legislator stands for different rules for different risk levels; indeed, the universe of AI is vast. Nevertheless, it defends the need for a duty of care, understood as a mandatory standard of conduct established by national or Union law in order to avoid harm to legal interests recognised at national or Union level, including life, property, and the protection of fundamental rights (Article 2).
The most interesting feature of this Proposal is the establishment of a presumption of causality; as we said at the beginning, one of the problems is precisely the causal link. In a system marked by great opacity, it can be very hard to identify which concrete behavior of the human behind the machine led to the malfunctioning of the system. With this Proposal, the European legislator hopes to overcome the difficulties that the features of AI systems create for the proof required of the injured party in order to obtain compensation for the damage suffered. The rules seek to ensure that damage caused by AI systems receives protection equivalent to that afforded to harm unrelated to AI, given that fault is the general criterion for attributing liability in several national legal systems.
To that end, the establishment of this presumption of a causal link between the defendant's fault and the output (or lack thereof) produced by the AI system applies only under the specific conditions set out in Article 4. For the presumption to apply, three conditions need to be met: (1) the claimant proves the defendant's fault; (2) it is reasonably likely that the fault influenced the AI system's output or failure to produce an output; and (3) the claimant proves that the AI system's output or failure gave rise to the damage. For AI systems that are not high-risk under the proposed AI Act, the presumption of causality applies only if a national court considers it excessively difficult for the claimant to prove the causal link.
The proposal will address risks specifically created by AI applications; propose a list of high-risk applications; set clear requirements for high-risk AI systems; define specific obligations for users and providers of high-risk applications; propose a conformity assessment before the AI system is put into service or placed on the market; provide for enforcement after such an AI system is placed on the market; and propose a governance structure at European and national level. The regulatory framework defines four levels of risk in AI: unacceptable risk, high risk, limited risk, and minimal or no risk.
IV. EU AI Act: the first regulation on artificial intelligence
In April 2021, the European Commission proposed the first EU regulatory framework for AI. Under it, AI systems that can be used in different applications are analysed and classified according to the risk they pose to users, with the different risk levels entailing more or less regulation. Once approved, these will be the world's first rules on AI. Parliament's priority is to make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly.
The new rules establish obligations for providers and users depending on the level of risk from artificial intelligence:
- Minimal risk: The vast majority of AI systems fall into the category of minimal risk. Minimal-risk applications such as AI-enabled recommender systems or spam filters will benefit from a free pass and an absence of obligations, as these systems present only minimal or no risk for citizens' rights or safety. On a voluntary basis, companies may nevertheless commit to additional codes of conduct for these AI systems.
- High Risk: AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high quality of data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. Examples: medical devices; systems to determine access to educational institutions or for recruiting people; or certain systems used in the fields of law enforcement, border control, administration of justice and democratic processes. Moreover, biometric identification, categorization and emotion recognition systems are also considered high-risk.
- Unacceptable risk: AI systems considered a clear threat to the fundamental rights of people will be banned. This includes AI systems or applications that manipulate human behavior to circumvent users' free will, such as toys using voice assistance that encourage dangerous behavior by minors, systems that allow 'social scoring' by governments or companies, and certain applications of predictive policing. In addition, some uses of biometric systems will be prohibited, for example emotion recognition systems used in the workplace, some systems for categorizing people, and real-time remote biometric identification for law enforcement purposes in publicly accessible spaces (with narrow exceptions).
- Specific transparency risk: When employing AI systems such as chatbots, users should be aware that they are interacting with a machine. Deep fakes and other AI-generated content will have to be labelled as such, and users need to be informed when biometric categorisation or emotion recognition systems are being used. In addition, providers will have to design systems so that synthetic audio, video, text and image content is marked in a machine-readable format and detectable as artificially generated or manipulated.
“The EU's AI Act is the first-ever comprehensive legal framework on Artificial Intelligence worldwide. So, this is a historic moment”,
Ursula von der Leyen, President of the European Commission, 09/12/2023.
Teresa Vasconcelos Machete
N. º140120122
Bibliography:
"Public Liability when the Public Administration uses AI in the Decision-Making Process and Causes Damages to Particular Citizens", presente no volume 7 da revista ELPIS https://www.youtube.com/watch?v=dNm83qFMD3g
Proposal for an AI Liability Directive, COM/2022/496 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX%3A52022PC0496
Communication "Artificial Intelligence for Europe", COM(2018) 237 final, https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=COM:2018:237:FIN
European Commission press release on the political agreement on the AI Act, https://ec.europa.eu/commission/presscorner/detail/en/IP_23_6473