
about the use of AI and its functions, and is not limited strictly to malfunctions in the narrow sense.

Who is Responsible?
The wide variety of parties involved in AI technology (designers, manufacturers, traders) makes it impossible to give a certain and definitive answer to this question, because any of them may be called into court for damages caused by an AI product. The conditions for being involved in a trial differ depending on each country's legal order, especially regarding the burden of proof (it is sometimes extremely difficult for the plaintiff to prove the defect and the relationship between the damage and the defect); nevertheless, each party that has taken part in the AI implementation process could be held responsible for its own contribution.
The second scenario is more complex: the (negative or harmful) consequences produced by the AI were not foreseen by the manufacturers or by the parties considered liable by law. Under the general legal principle that rejects liability without culpable conduct (here, negligence), it is hard to argue that responsibility for damages caused by the unforeseen behavior of an AI should fall on manufacturers (and the others connected with or involved in it) who were, through no fault of their own, entirely unaware of the activities the AI would prove able to perform.
The problem will grow as AI acquires enhanced, embedded self-learning skills intended to let it perceive and reason as human beings do, especially since it will become difficult to distinguish whether damage was caused by product defects that manufacturers (designers, and so on) could have known or detected with due diligence, or whether the same damage originated from the AI's autonomous behavior.

Who Will be Responsible for Something that is
Not Predictable?
Legal experts are debating the possibility of assigning AI a status, a personhood, so that liability could, if necessary, be placed directly on it. As far as civil law is concerned, liability and its consequences (typically compensation for damages) would be borne by the manufacturers, or by the AI directly through insurance, for example, or other remedies apt to ensure reparation of the loss.
Furthermore, the question becomes more delicate when it concerns criminal liability, which requires the legal capacity to act (infants, for instance, are not considered liable). This is the main reason why the matter calls for a new legal framework.
Finally, AI is not recognized as a legal entity in the field of patents. Recently, the European Patent Office (EPO) rejected a patent application in which an AI was named as the inventor on the grounds that it had designed the product claiming patent protection. The EPO refuses to grant and register a patent to a machine, because the inventor can only be a human being or a legal entity that is ultimately traceable to humans. The case was heard by the EPO in November 2019, and the reasons for its decision were published on its website (www.EPO.org) in January 2020. They rest on the premise that an inventor can only be a natural person, to the exclusion of any other case. As the EPO website puts it: "In its decisions, the EPO considered that the interpretation of the legal framework of the European patent system leads to the conclusion that the inventor designated in a European patent must be a natural person." Even though it may seem obvious that a machine cannot be a rights holder, this scenario, given the growing abilities of AI, is bound to change in the future [4].

The European Robolaw Project
The challenge of a new legal framework for the AI field was taken up by the EU Commission, which dedicated a special project to it (the Robolaw project, concluded in 2014) with the aim of providing clear rules that allow victims to obtain compensation or claim damages while, at the same time, not discouraging designers from advancing their research for fear of being condemned in case of failures or defects related to AI.
In keeping with the need for strong protection of fundamental rights, the EU Commission stated that the prospect of regulating robotics took as its points of reference the two requirements of ethical acceptability and orientation towards societal needs, which form the pillars of the concept of responsible research and innovation (RRI), so as to ensure that the most relevant elements are considered and evaluated. The outcome of the project is the document D.6.2 Guidelines on Regulating Robotics [5]. It covers only some of the possible applications of AI, owing to the limited time granted to the team for case studies and the absence of experts in every field touched by AI (for instance, robots in military activity). The team appointed by the EU analyzed each selected area of AI (self-driving cars, surgical equipment, prostheses) from the technological, ethical, and legal points of view, outlining regulatory problems concerning AI and providing recommendations to EU member states to encourage the correct (legal and legislative) approach to the field.
More recently, a select group of experts appointed by the EU Commission released another document, the Ethics Guidelines for Trustworthy AI [6], aimed at strengthening the framework in which AI must be embedded. In the absence of a unified and homogeneous special legal framework, relying instead on existing regulations interpreted in light of the new approach to AI, this guide focuses especially on ethical issues. It recognizes that AI systems need to be "human-centric, resting on a commitment to their use in the service of humanity and the common good, with the goal of improving human welfare and freedom," and that "while offering great opportunities, AI systems also give rise to certain risks that must be handled appropriately and proportionately." The document also lays out requirements that must be taken into account when designing and producing AI so that fundamental human rights are respected:
◗ Human agency and oversight, including fundamental rights, human agency and human oversight;
◗ Technical robustness and safety, including resilience to attack and security, fallback plan and general safety, accuracy, reliability and reproducibility;
