the workpiece are first obtained. Surface normal estimation, surface segmentation, and geometric modeling are then performed on the acquired data. Finally, DL algorithms are used to estimate the pose of the workpiece, and this information is sent to the control unit to guide the robot in grasping it. Although the processing pipeline of a 3D vision system is complex, it enables robots to perform complex industrial tasks and is becoming more common with advances in 3D DL algorithms and 3D imaging methods (e.g., stereo vision, time-of-flight, and structured light imaging). Whether the vision system is 3D or 2D, the DL algorithm remains the key part of a robot vision system. To give the reader an idea of future trends in robot vision, we introduce DL techniques in the next section.
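
To make the front end of such a pipeline concrete, the following Python sketch (our own illustration using the open-source Open3D library, not code from any specific system described above) downsamples a workpiece point cloud and estimates surface normals, which would then feed the segmentation and geometric-modeling steps; the file name and parameter values are placeholders.

    import numpy as np
    import open3d as o3d

    # Load a 3D scan of the workpiece (file name is a placeholder).
    pcd = o3d.io.read_point_cloud("workpiece_scan.ply")

    # Downsample to reduce noise and computation before further processing.
    pcd = pcd.voxel_down_sample(voxel_size=0.002)  # 2 mm voxels

    # Estimate surface normals from local neighborhoods; these normals
    # support the surface segmentation and geometric modeling stages.
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=30)
    )

    points = np.asarray(pcd.points)    # (N, 3) coordinates
    normals = np.asarray(pcd.normals)  # (N, 3) unit normals
    print(points.shape, normals.shape)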
Deep Learning Techniques
Deep learning, a branch of machine learning, is a set of algorithms
based on artificial neural networks used for learning
data representations. In contrast to traditional methods that use hand-crafted features, DL requires little manual feature engineering because the network can automatically extract features of objects. DL can be used
to learn a diversity of scenes and complex objects in robot vision
by extracting multi-dimensional features from the given
dataset. In terms of application, DL techniques in robot vision
can be divided into object detection and categorization, scene
segmentation, and object tracking.
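
As a minimal sketch of this automatic feature extraction (our own PyTorch example with arbitrary layer sizes, not a network from the literature cited below), the convolutional filters here are learned from data instead of being hand-crafted:

    import torch
    import torch.nn as nn

    class TinyFeatureExtractor(nn.Module):
        """A small CNN whose filters are learned from data, replacing
        hand-crafted features such as edge or corner descriptors."""
        def __init__(self, num_classes=10):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(32, num_classes)

        def forward(self, x):
            f = self.features(x)       # learned, multi-dimensional feature maps
            f = f.mean(dim=(2, 3))     # global average pooling
            return self.classifier(f)  # class scores

    scores = TinyFeatureExtractor()(torch.randn(1, 3, 64, 64))
    print(scores.shape)  # torch.Size([1, 10])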
Object Detection and Categorization
Object detection and classification are fundamental problems in computer vision (CV) that underpin the development of robots that perceptually interact with their environment.
The common network structures (e.g., VGG, AlexNet, GoogLeNet, and ResNet) and object detection methods (e.g., YOLO, SSD, Faster RCNN, CornerNet, and CenterNet) mentioned below are described in detail in [5]. DL techniques have been widely applied to CV since 2012, when AlexNet reduced the top-5 error rate by roughly 10 percentage points at the ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The subsequently developed classical VGG and GoogLeNet further improved classification accuracy through deeper network structures. ResNet addresses the vanishing-gradient problem in deep networks by introducing residual mappings between earlier and later layers, and it is one of the most commonly used network structures at present. These networks work well on classification tasks but cannot determine the location and number of objects, so researchers have developed more advanced object detection algorithms. Faster RCNN is a classical detection algorithm, but it runs slowly and requires substantial computational resources. YOLO improves real-time performance by predicting bounding boxes in a single pass, yet it struggles to detect small objects. To overcome this shortcoming, SSD uses multi-scale feature maps to improve detection accuracy for small targets [5]. More recent detection algorithms, such as CornerNet and CenterNet, balance real-time performance with accuracy and are used in various areas of robot vision. Typical applications of object detection and categorization include defect detection and pose estimation.
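
To illustrate the residual mapping mentioned above, the PyTorch block below (a sketch of the idea, not the exact ResNet implementation) adds its input back to the output of two convolutions; the identity shortcut gives gradients a direct path through deep stacks of such blocks.

    import torch
    import torch.nn as nn

    class ResidualBlock(nn.Module):
        """Basic residual block: output = F(x) + x (identity shortcut)."""
        def __init__(self, channels):
            super().__init__()
            self.body = nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, channels, 3, padding=1, bias=False),
                nn.BatchNorm2d(channels),
            )
            self.relu = nn.ReLU(inplace=True)

        def forward(self, x):
            # The shortcut lets gradients bypass the convolutions,
            # mitigating vanishing gradients in very deep networks.
            return self.relu(self.body(x) + x)

    y = ResidualBlock(64)(torch.randn(1, 64, 56, 56))
    print(y.shape)  # torch.Size([1, 64, 56, 56])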
Scene Segmentation
Scene segmentation is the task of splitting a scene into its various
object components. The results of scene segmentation can
provide richer information about details of the image than those
of image classification and object detection. Thus, scene segmentation
algorithms are more beneficial for image analysis
and the understanding of scenes by robots. Some of the well-known methods (e.g., U-Net, RefineNet, DeepLab, FCN, and HRNet) mentioned below are described in detail in [3], so we give only a brief introduction here. The Fully Convolutional Network (FCN) is the most commonly used framework for scene segmentation and has yielded an accuracy of up to 85.2% on the SIFT Flow dataset. U-Net and RefineNet further improve segmentation accuracy by extracting information from multiscale feature maps. The DeepLab series of algorithms uses dilated convolutions in place of standard convolutions to maintain higher-resolution semantic feature maps, thus preserving more detailed information and improving segmentation accuracy. Most subsequent segmentation algorithms seek higher accuracy by increasing the number of parameters and the complexity of the models; one example is HRNet, which uses multiscale feature fusion and complex network structures to improve accuracy on multiple segmentation tasks. For engineering applications, however, algorithms should be deployed with both segmentation accuracy and the required computational power in mind. The computing power of the embedded computing devices on industrial robots is limited, so segmentation algorithms with large numbers of parameters are difficult to deploy. Image segmentation technology therefore still has considerable room for improvement in terms of lightweight design.
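
The dilated-convolution idea can be seen in a few lines of PyTorch (an illustrative sketch with arbitrary channel counts, not code from the DeepLab authors): increasing the dilation rate enlarges the receptive field while the spatial resolution of the feature map is preserved.

    import torch
    import torch.nn as nn

    x = torch.randn(1, 64, 128, 128)  # feature map from an encoder

    # Standard 3x3 convolution: 3x3 receptive field.
    standard = nn.Conv2d(64, 64, kernel_size=3, padding=1)

    # Dilated 3x3 convolution (dilation=2): 5x5 receptive field,
    # yet the output resolution is unchanged, preserving fine detail
    # for dense prediction as in the DeepLab family.
    dilated = nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2)

    print(standard(x).shape)  # torch.Size([1, 64, 128, 128])
    print(dilated(x).shape)   # torch.Size([1, 64, 128, 128])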
Object Tracking
Object tracking locates a target, specified in the initial frame of an image sequence, in all subsequent frames. It is vital
for industrial robots. For example, the accurate tracking of
weld seams by welding robots directly determines the success
of welding processes. The principles of most object tracking
algorithms (e.g., HCF, MDNet, ECO, ATOM, and DiMP)
have been well described in [6], and a brief introduction is presented
here. Some classical deep tracking algorithms, such as
HCF, MDNet, SiamFC, and ECO, explore the potential of DL
to significantly improve tracking performance. Discriminative model-based trackers have received considerable attention since 2019; ATOM and DiMP, for example, were inspired by correlation filters and learn filter kernels that differentiate between the foreground and background. Because this class of methods exploits background information, it has excellent discriminative power against distractors. In 2021, several deep tracking algorithms based on the Transformer architecture were also proposed.
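
As a greatly simplified illustration of the correlation-filter principle behind such trackers (not the actual ATOM or DiMP implementation, and with random placeholder features), the sketch below cross-correlates a template-derived filter with the features of a search region and takes the peak of the response map as the new target location.

    import torch
    import torch.nn.functional as F

    # Assume a backbone has produced feature maps (random placeholders here):
    # a small template around the target and a larger search region.
    template_feat = torch.randn(1, 32, 8, 8)    # target template features
    search_feat = torch.randn(1, 32, 32, 32)    # search-region features

    # Use the template as a correlation filter over the search region.
    # High responses mark locations that resemble the target (foreground).
    response = F.conv2d(search_feat, template_feat, padding=4)  # (1, 1, 33, 33)

    # The peak of the response map gives the estimated target position.
    peak = torch.argmax(response)
    row, col = divmod(peak.item(), response.shape[-1])
    print("estimated target position in response map:", row, col)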