human in a monitoring or supervisory role who has the ability to interfere with and override the robot's decision if the robot should fail or if there is an error [3]. HOTL RAS can also be fully autonomous if the human supervisor allows them to carry out a task completely on their own. Keeping the human on the loop adds a necessary layer of human-robot interaction and a human-machine interface. The degree of autonomy is determined by the RAS's relationship to the human supervisor. HOTL RAS receive, evaluate, decide, and begin executing an operation, but the human supervisor can veto or stop it when necessary. For example, HOTL RAS weapons are systems that use autonomy to select and engage targets. Although the human supervisor does not choose the targets to be engaged, the human can monitor the RAS weapon system's intention and performance and can interfere to stop its operations if required.
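This decide-then-veto pattern can be made concrete with a minimal sketch in Python. All class and method names here are hypothetical; the only point is that the system announces its intent and holds execution for a bounded window during which the supervisor may object.

import threading

class HOTLController:
    def __init__(self, veto_window_s=2.0):
        self.veto_window_s = veto_window_s  # time the supervisor has to object
        self._veto = threading.Event()      # set by the supervisor to abort

    def veto(self):
        # Called by the human supervisor to reject the pending action.
        self._veto.set()

    def run(self, action):
        # The RAS has already sensed, evaluated, and decided; announce intent.
        print(f"intent: {action}")
        self._veto.clear()
        # Hold briefly so the human on the loop can intervene.
        if self._veto.wait(timeout=self.veto_window_s):
            print(f"vetoed by supervisor: {action}")
            return False
        print(f"executing: {action}")
        return True

A supervisor interface would call veto() from another thread; if the window elapses without an objection, the action proceeds, matching the veto-or-stop relationship described above.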
Currently, HOTL weapon systems are increasingly being used for defense applications, including air and missile defense systems. The Phalanx is a defensive, close-in naval weapon system (a fast-fire, computer-controlled, radar-guided gun system) created to shoot down antiship missiles and surface threats. Once activated, it searches for, detects, and evaluates threats and then tracks and engages them. An abort button is available for the human supervisor to reject the system's decision. The main question is whether these HOTL weapon systems are trustworthy. That is, can they distinguish between blue and red teams on the battlefield? And are they ethical? For HOTL RAS to be usable and useful, trusted autonomy is essential.
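The Phalanx sequence just described can be read as a linear pipeline with a supervisor abort available between stages. The sketch below is purely illustrative, not a description of the actual Phalanx software: the stage names paraphrase the text, and the abort_requested callable stands in for the abort button.

STAGES = ["search", "detect", "evaluate", "track", "engage"]

def run_engagement(threat, abort_requested):
    # abort_requested: zero-argument callable polled before each stage,
    # standing in for the human supervisor's abort button.
    for stage in STAGES:
        if abort_requested():
            print(f"aborted before {stage}: {threat}")
            return False
        print(f"{stage}: {threat}")
    return True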
Trusted Autonomy
Trust is a firm belief in the reliability, truth, or ability of someone or something. An autonomous system requires both reliable technology and human trust in that technology. With RAS, trust is defined as the level of confidence a human has in an autonomous system based on the person's observations, perceptions, and expectations of the system's performance and on other information regarded as evidence of competence [11]. Trust in HOTL RAS is also defined in terms of the ability of HOTL RAS to successfully perform an activity at a specific time and under conditions characterized by vulnerability and uncertainty [12].
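One simple way to operationalize the definition in [11], where observed outcomes serve as evidence of competence, is to treat trust as an estimate of the system's success probability that is updated after each task. The Beta-Bernoulli model below is our illustrative choice, not something prescribed by [11] or [12].

class TrustEstimator:
    def __init__(self, prior_success=1.0, prior_failure=1.0):
        # Beta(1, 1) prior: no evidence yet, so the trust estimate starts at 0.5.
        self.success = prior_success
        self.failure = prior_failure

    def observe(self, task_succeeded):
        # Each observed outcome is evidence of competence (or its absence).
        if task_succeeded:
            self.success += 1.0
        else:
            self.failure += 1.0

    @property
    def trust(self):
        # Posterior mean of the system's success probability.
        return self.success / (self.success + self.failure)

Under this reading, trust rises with a record of successful performance and falls after failures, mirroring how a supervisor's confidence would evolve.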
If the actions of HOTL RAS lead to harmful consequences for humans or property (e.g., from an unmanned aircraft or a driverless vehicle), the human supervisor needs to reestablish and maintain trust in the operations of HOTL RAS [13]. Humans need to be confident that HOTL RAS will perceive conditions properly in all situations, make the right decisions, and perform their tasks accurately and efficiently. To build trust in HOTL RAS, the systems have to be certified. Certification is a formal means by which a regulator confirms the expected efficiency and performance of the various components of an autonomous system.
A general HOTL RAS includes a number of essential units [14]: 1) a sensing and perception unit, which involves the abilities to sense, interpret, detect, and evaluate objects in different environments; 2) a control and decision-making unit, which involves the ability to make accurate decisions in an uncertain and unpredictable environment; and 3) an execution unit, which involves the ability to perform tasks provided by the control and decision-making unit (see the sketch below).
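To make the division of labor concrete, here is a skeletal rendering of the three units from [14]. The interfaces, the confidence field, and the threshold are all hypothetical; a real system would be far richer.

from dataclasses import dataclass

@dataclass
class Percept:
    objects: list      # detected and evaluated objects
    confidence: float  # how certain the perception is

class SensingUnit:
    def perceive(self, raw_readings):
        # Sense, interpret, detect, and evaluate objects in the environment.
        return Percept(objects=list(raw_readings), confidence=0.9)

class DecisionUnit:
    def decide(self, percept):
        # Decide under uncertainty; hold rather than act when confidence is low.
        return "act" if percept.confidence > 0.8 else "hold"

class ExecutionUnit:
    def execute(self, command):
        # Perform the task provided by the control and decision-making unit.
        print(f"executing: {command}")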
Intelligent systems play the main roles in the control and decision-making unit, which include learning, adaptation, and cognition. Learning is the acquisition of knowledge, skills, or abilities through experience, as observed by the attainment of increasing success (enriched behavior). Adaptation is a change or modification in behavior when the environment changes. Cognition includes learning, development, adaptation, and natural interaction through intelligent behavior in response to complex objectives in a complex environment.
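The difference between learning and adaptation as defined here can be illustrated with a toy agent: its action-value estimates improve with experience (learning), and, because it uses a constant step size, it re-tunes those estimates when the environment's payoffs shift (adaptation). This example is entirely ours and not drawn from [14].

import random

class AdaptiveAgent:
    def __init__(self, n_actions=2, epsilon=0.1, step=0.2):
        self.values = [0.0] * n_actions  # learned value estimate per action
        self.epsilon = epsilon           # exploration rate
        self.step = step                 # constant step size enables adaptation

    def act(self):
        # Mostly exploit what has been learned; occasionally explore.
        if random.random() < self.epsilon:
            return random.randrange(len(self.values))
        return max(range(len(self.values)), key=self.values.__getitem__)

    def learn(self, action, reward):
        # Nudge the estimate toward the observed reward; recent experience
        # dominates, so the agent tracks a changing environment.
        self.values[action] += self.step * (reward - self.values[action])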
Cognitive mechanisms are required for decision making in RAS. John Boyd distilled the decision-making process into an observe, orient, decide, and act cycle, known as the OODA loop [15]-[17]. Boyd recognized that military pilots must make decisions faster and more accurately than their opponents, and he applied the OODA loop concept to combat and military operations [18]. The structure of the OODA loop is shown in Figure 1.
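In code, the OODA loop is simply a cycle whose output feeds back into its next observation. The skeleton below takes one caller-supplied callable per stage; the parameter names are ours, chosen only to mirror Boyd's four stages.

def ooda_loop(observe, orient, decide, act, cycles=10):
    # Run the observe-orient-decide-act cycle a fixed number of times.
    # Each argument is a callable implementing one OODA stage.
    model = None
    for _ in range(cycles):
        observation = observe()             # observe: gather raw data
        model = orient(model, observation)  # orient: fuse data with context
        action = decide(model)              # decide: select a course of action
        act(action)                         # act: carry it out; its effects
                                            # shape the next observation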
Trusted autonomy is the greatest technical barrier that needs to be overcome in HOTL RAS. As technology evolves, HOTL RAS can become intelligent in learning, adaptation, and decision making without direct human engagement. To measure trust, a number of quantitative factors related to system behavior must be checked: 1) performance, which includes competence, accuracy, reliability, and robustness; 2) transparency of the control and decision-making unit; and 3) security vulnerabilities [19]. To quantify the degree of trust, these factors need to be measured in both certain and uncertain environments (one possible aggregation is sketched below).
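As a sketch of how such a quantification might look, the functions below fold the three factor groups from [19] into a single score. The [0, 1] scales, the simple averaging of the performance components, and the weights are all our assumptions, not values given in the text.

def performance_score(competence, accuracy, reliability, robustness):
    # Simple average of the four performance components named above,
    # each assumed to be normalized to [0, 1].
    return (competence + accuracy + reliability + robustness) / 4.0

def trust_score(performance, transparency, security, weights=(0.5, 0.3, 0.2)):
    # Weighted aggregate of the three factor groups; the weights are
    # illustrative and would need calibration per system and environment.
    w_p, w_t, w_s = weights
    return w_p * performance + w_t * transparency + w_s * security

Measuring the same factors in certain and in uncertain environments and comparing the resulting scores would then give a first-order picture of how trust degrades outside nominal conditions.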
In addition, efficient human-robot interaction positively affects humans' trust in HOTL RAS. Trust can be established through an efficient way of communicating and disseminating information. The human supervisor needs correct information from the system to establish suitable reliance and to prevent system misuse and malfunction. Understanding the reasons for system failures can increase trust. Moreover, providing essential information and making the autonomous system a team player increases trust.
A Framework for Formulating Trusted Autonomy
It is argued that there are three main factors in fostering trusted autonomy between humans and RAS: 1) the humans who interact with RAS; 2) the RAS themselves; and 3) the environment in which the RAS are intended to function, as shown in Figure 2.
The Humans
Since HOTL RAS are designed to be managed and supervised by humans who aim to benefit from them, humans and their desires are the most important factor. Different humans hold different viewpoints on RAS, according to their roles. A human's perception is based on