Instrumentation & Measurement Magazine 23-3 - 28

What are the Limits and Risks Related to AI?
From this perspective, the concept of AI must be reconsidered in
light of the principle that every human being is the most relevant element in the hierarchy of things to be preserved; the
fundamental rights accorded to people must therefore be protected, even
if this delays technological progress in the field.
Obviously, the first aspect is the protection of human
life (and of life in all its forms), and this must represent the most important milestone. Many readers are probably
persuaded that this point needs no debate, because it seems
so evident and natural (that is, rooted in human nature) that
nobody would purposely design AI to endanger life, except
in war-related applications.
This is not completely true: consider driverless cars and
their underlying philosophy. They are built so that the human
driver can spend the journey doing something other than
driving, while the vehicle handles obstacles on a mathematical and
rational basis. What happens when the vehicle is traveling at a
high (but permitted) speed and pedestrians are present on the road, so that the car cannot stop without
injuring someone? What do you think the car's decision
should be? What do you expect it would be? If you
believe those two questions have the same
answer, you will not be satisfied at all, because the outcome depends on
the technology behind the AI assembled in the vehicle. If the vehicle is designed to protect the driver
and passengers inside it, it will not care about obstacles, even human ones. If, on the other hand,
the vehicle is endowed with something like human emotions or feelings, it
could try to avoid the obstacle, especially a human
one, while also protecting its passengers (as a man or
woman would do).
But what happens if the vehicle is designed as a complete,
conscious AI, trained to learn and oriented toward preserving itself
above all, disregarding everyone else?
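The dependence of the outcome on the design objective can be illustrated with a toy decision rule. This is a hypothetical sketch, not any manufacturer's actual algorithm: the maneuvers, risk estimates, and weights are invented for illustration only.

```python
# Toy illustration: the same situation yields different maneuvers
# depending on how the designer weights occupant vs. pedestrian risk.
# All names and numbers here are hypothetical.

def choose_maneuver(occupant_weight, pedestrian_weight):
    # Estimated harm probabilities for each candidate maneuver (invented values).
    maneuvers = {
        "brake_straight": {"occupant": 0.1, "pedestrian": 0.8},
        "swerve":         {"occupant": 0.6, "pedestrian": 0.1},
    }

    def cost(risks):
        # Weighted sum of expected harm to occupants and pedestrians.
        return (occupant_weight * risks["occupant"]
                + pedestrian_weight * risks["pedestrian"])

    # Pick the maneuver with the lowest weighted cost.
    return min(maneuvers, key=lambda m: cost(maneuvers[m]))

# An occupant-first design keeps braking straight; a design that
# values pedestrians equally chooses to swerve.
print(choose_maneuver(occupant_weight=1.0, pedestrian_weight=0.1))  # brake_straight
print(choose_maneuver(occupant_weight=1.0, pedestrian_weight=1.0))  # swerve
```

The point is not the arithmetic but that the "decision" is entirely determined by weights a designer chose long before the situation arose.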
I know this example may seem a paradox, but AI
could become an individual entity, especially as its
ability to work and think autonomously improves; its
priorities could then differ considerably from human ones, or from our expectations. From the
ethical perspective, the task required of producers, designers, and scientists
is hard and tricky, because these (AI) products are assembled from
many different technological elements that do not always interact in the same way, or in the expected way.
Another consideration concerns other fundamental
rights granted to people, such as privacy. Some devices normally used at home to play music, find
a recipe, stream movies, or give advice record events
in the house (conversations, etc.) and transfer these data
without the owner's express permission, like a spy that
observes every movement and is ready at any time to communicate this confidential information. The stated purpose is to
help users quickly find the information that best suits
their interests and habits. However, in this case, users must be
notified of this function, so that they can make an informed choice to disable the device and decide if and when their privacy
may be limited or reduced. Normally, such instructions
come from the manufacturer; designers who are aware of
the situation should consider the possibility that these devices violate privacy and should consequently adopt
suitable precautions to avoid this effect, or at least ensure that users
are properly informed.
Another point worth considering in more depth is
cognitive bias. Bias might seem an almost unknown
concept for AI tools, which are programmed on a rational basis precisely to
prevent them from being influenced by the cultural factors and origins that affect human beings. Although
AI systems are designed by humans, with the possible
effect of introducing typical human biases into them, technological progress will probably mitigate this risk
by improving AI's ability to consider information
objectively and to take decisions on a rational basis, according to mathematical and logical parameters.
Even so, this aspect needs to be analyzed and evaluated, like
the others mentioned above (preservation of human life,
safety, privacy), because AI sometimes fails to perform
only rational operations and can thus lead to embarrassing consequences. In 2018, as reported by the AI Ethics Lab
[2], a bias problem arose in Google image search
results: searches for professors or CEOs returned
images of white men only, so that, allegedly, the image search results present an extreme bias against
representing women and people of color.
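The mechanism behind such outcomes is often a skewed training set rather than any deliberate design choice. The following is a minimal sketch, with invented labels and counts (not the actual Google data), of how class imbalance in training data propagates directly into a model's output:

```python
from collections import Counter

# Hypothetical, invented training labels for an image tagger: if 90% of
# the "CEO" training images depict one demographic group, a model that
# simply reproduces the data distribution will almost always return
# that group, with no irrational step anywhere in the pipeline.
training_labels = ["group_a"] * 90 + ["group_b"] * 10

def most_likely_label(labels):
    # A trivial "model" that predicts the most frequent label it has seen.
    return Counter(labels).most_common(1)[0][0]

print(most_likely_label(training_labels))  # group_a: the imbalance decides
```

In other words, a perfectly "rational" statistical procedure can still produce biased results, because the bias lives in the data it was given.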
Nowadays, affirming that biases do not belong to the AI
galaxy is not totally correct. Much work remains for
designers in this field, to improve not only the technology but also
the way the related ethical aspects are managed, and the way new ones that may arise are tackled,
always considering human beings first.

Liability
As already outlined in a previous 2015 article [3], the legal implications deriving from AI are still debated, because of the lack
of regulation and because of the numerous elements, including ethical ones, that these technological products involve. From the legal point
of view, different questions can be evaluated:
◗◗ liability in case of AI failures, and
◗◗ liability in case of unexpected effects or consequences
deriving from AI.
The first case is clearly simpler than the second: it
is quite obvious that if the AI does not meet its declared requirements, or presents defects such that damage occurs
to someone (or something), liability falls on
those who are responsible for the AI, who are consequently
obliged to compensate under tort law or a non-contractual liability scheme.
Frequently, claims for compensation are brought on
the grounds of negligence by the people involved in the AI's production and trade, including the absence of complete information

IEEE Instrumentation & Measurement Magazine, May 2020


