
predictors, two self-supervised learning methods are proposed to pre-train the architecture embedding part of neural predictors to generate a meaningful representation of neural architectures. The first method designs a graph neural network-based model with two independent branches and utilizes the graph edit distance of two different neural architectures as supervision to force the model to generate meaningful architecture representations. Inspired by contrastive learning, the second method presents a new contrastive learning algorithm that utilizes a central feature vector as a proxy to contrast positive pairs against negative pairs. Experimental results demonstrate that the pre-trained neural predictors can achieve comparable or superior performance to their supervised counterparts using only half of the training samples. The effectiveness of the proposed methods is further validated by integrating the pre-trained neural predictors into a neural predictor guided evolutionary neural architecture search (NPENAS) algorithm, which achieves state-of-the-art performance on the NASBench-101, NASBench-201, and DARTS benchmarks.
I. Introduction
Neural architecture search (NAS) refers to the use of certain search strategies to find the best-performing neural architecture in a pre-defined search space with minimal search cost [1]. The search strategies sample potentially promising neural architectures from the search space, and the performance metrics of the sampled architectures, obtained from time-consuming training and validation procedures, are used to optimize the search strategies. To alleviate the time cost of training and validation, some recently proposed NAS search strategies employ neural predictors to accelerate the performance estimation of the sampled architectures [2]-[6]. The capability of neural predictors to accurately predict the performance of sampled architectures is critical to downstream search strategies [2], [5]-[8]. Because of the significant time cost of obtaining labeled training samples, acquiring accurate neural predictors from fewer training samples is a key issue in NAS methods employing neural predictors.
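
To make the role of a neural predictor concrete, the following minimal Python sketch illustrates the generic predictor-assisted search loop described above. It is an illustration only: the architecture encodings, the hidden scoring function, and the ridge-regression surrogate are placeholders standing in for a real search space, the expensive training/validation step, and an actual neural predictor.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_architectures(n, dim=16):
    # Stand-in for sampling encoded architectures from a search space.
    return rng.random((n, dim))

def train_and_validate(archs):
    # Stand-in for the expensive training/validation step; a fixed
    # hidden linear map plays the role of the "true" accuracy.
    w = np.linspace(0.1, 1.0, archs.shape[1])
    return archs @ w + 0.01 * rng.standard_normal(len(archs))

def fit_predictor(X, y, lam=1e-3):
    # Ridge regression as a minimal stand-in for a neural predictor.
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ y)

# Predictor-assisted loop: evaluate a few architectures for real, fit
# the predictor, then use it to cheaply rank a much larger pool.
labeled = sample_architectures(20)
scores = train_and_validate(labeled)       # expensive step
w_hat = fit_predictor(labeled, scores)     # cheap surrogate
pool = sample_architectures(1000)
predicted = pool @ w_hat                   # near-free estimation
top_k = pool[np.argsort(predicted)[-5:]]   # only these get trained
print(train_and_validate(top_k).max())
```

The fewer labeled samples such a surrogate needs to rank the pool reliably, the cheaper the whole search becomes, which is exactly the issue the pre-training methods below address.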
Self-supervised representation learning, a type of unsupervised representation learning, has been successfully applied in areas such as image classification [9], [10] and natural language processing [11]. A model that is pre-trained by self-supervised representation learning and then fine-tuned by supervised learning on a small amount of labeled training data is highly likely to outperform its purely supervised counterparts [9], [10], [12]. In this paper, self-supervised representation learning is investigated and applied to the NAS domain to enhance the performance of neural predictors built from graph neural networks [13] and employed in a downstream evolutionary search strategy.
Effective unsupervised representation learning falls into one of two categories: generative or discriminative [9]. Existing unsupervised representation learning methods for NAS [8], [14] belong to the generative category. Their learning objective is to compel the neural predictor to correctly reconstruct the input neural architecture, which has limited relevance to NAS. This may result in the trained neural predictor producing a less effective representation of the input neural architecture. Discriminative unsupervised representation learning, also known as self-supervised learning, requires designing a pretext task [15], [16] from an unlabeled dataset and using it as the supervision to learn meaningful feature representations. Inspired by previous findings that "close by" architectures tend to have similar performance metrics [17], [18], this paper uses the graph edit distance (GED) as supervision for self-supervised learning, because GED reflects the distance between different neural architectures in the search space. The commonly used GED is computed from the graph encodings of two different neural architectures (adjacency matrices and node operations) [17], but this scheme cannot identify isomorphic graphs. Path-based encoding [3] is another commonly used neural architecture encoding scheme, but it cannot recognize the position of each operation in the neural architecture, e.g., two different operations in a neural architecture may have the same path-encoding vectors. To overcome the above drawbacks, this paper proposes a new neural architecture encoding scheme, a position-aware path-based encoding, which can recognize the positions of different operations in the neural architecture and efficiently identify unique neural architectures.
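
To illustrate the drawback of path-based encoding and the idea behind the position-aware variant, the sketch below (a simplification; the paper's exact construction may differ) encodes a cell given as an adjacency matrix and a list of node operations. Plain path-based encoding keeps only the operation sequence along each input-to-output path, so swapping the operations of two symmetric nodes leaves it unchanged; the hypothetical position-aware version additionally records each operation's node index, which separates the two architectures.

```python
import numpy as np

def enumerate_paths(adj, src, dst):
    # All simple paths from src to dst in a DAG given by an adjacency matrix.
    paths, stack = [], [[src]]
    while stack:
        path = stack.pop()
        if path[-1] == dst:
            paths.append(path)
            continue
        for nxt in np.flatnonzero(adj[path[-1]]):
            stack.append(path + [int(nxt)])
    return paths

def path_encoding(adj, ops):
    # Plain path-based encoding: the multiset of operation sequences
    # along input->output paths (node positions are discarded).
    return sorted(tuple(ops[n] for n in p)
                  for p in enumerate_paths(adj, 0, len(ops) - 1))

def position_aware_path_encoding(adj, ops):
    # Hypothetical position-aware variant: keep the node index with each
    # operation, so identical op sequences at different positions differ.
    return sorted(tuple((n, ops[n]) for n in p)
                  for p in enumerate_paths(adj, 0, len(ops) - 1))

adj = np.array([[0, 1, 1, 0],
                [0, 0, 0, 1],
                [0, 0, 0, 1],
                [0, 0, 0, 0]])   # node 0 = input, node 3 = output
ops_a = ["input", "conv3x3", "conv1x1", "output"]
ops_b = ["input", "conv1x1", "conv3x3", "output"]  # nodes 1 and 2 swapped

print(path_encoding(adj, ops_a) == path_encoding(adj, ops_b))   # True
print(position_aware_path_encoding(adj, ops_a) ==
      position_aware_path_encoding(adj, ops_b))                 # False
```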
Since different pretext tasks may lead to different feature representations, two self-supervised learning methods are proposed from two different perspectives to improve the feature representation of neural architectures, and to investigate the effect of different pretext tasks on the predictive performance of neural predictors. The first method utilizes a handcrafted pretext task, while the second learns feature representations by contrasting positive pairs against negative pairs.
The pretext task of the first self-supervised learning method is to predict the normalized GED of two different neural architectures in the search space. A graph neural network-based model with two independent, identical branches is devised, and the concatenation of the output features from both branches is used to predict the normalized GED. After the self-supervised pre-training, only one branch of the model is adopted to build the neural predictor. This method is termed self-supervised regression learning.
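
The sketch below outlines this two-branch regression model in PyTorch. It is a minimal sketch under stated assumptions: a small MLP stands in for each graph neural network branch, the input encodings and normalized GED targets are random placeholders, and the sigmoid output head simply assumes the GED has been normalized to [0, 1].

```python
import torch
import torch.nn as nn

class Branch(nn.Module):
    # Stand-in encoder; the paper uses a graph neural network here.
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(),
                                 nn.Linear(64, emb_dim))
    def forward(self, x):
        return self.net(x)

class GEDRegressor(nn.Module):
    # Two independent, identically structured branches; their concatenated
    # embeddings predict the normalized GED of the two input architectures.
    def __init__(self, in_dim, emb_dim=32):
        super().__init__()
        self.branch_a = Branch(in_dim, emb_dim)
        self.branch_b = Branch(in_dim, emb_dim)
        self.head = nn.Sequential(nn.Linear(2 * emb_dim, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())
    def forward(self, arch_a, arch_b):
        z = torch.cat([self.branch_a(arch_a), self.branch_b(arch_b)], dim=-1)
        return self.head(z).squeeze(-1)

# One pre-training step on a random batch (encodings and normalized GED
# targets are placeholders for real data).
model = GEDRegressor(in_dim=16)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
a, b = torch.rand(8, 16), torch.rand(8, 16)
ged = torch.rand(8)   # normalized GED targets in [0, 1]
loss = nn.functional.mse_loss(model(a, b), ged)
opt.zero_grad()
loss.backward()
opt.step()
# After pre-training, only model.branch_a would be kept to embed
# architectures for the downstream neural predictor.
```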
The second self-supervised learning method is inspired by the contrastive learning methods prevalent in image classification [9], [10], [12], which maximize the agreement between differently augmented views of the same image via a contrastive loss in latent space [9]. Since there is no guarantee that a neural architecture and its transformed form will have the same performance metrics, it is not reasonable to directly apply contrastive learning to NAS. This paper proposes a new contrastive learning algorithm, termed self-supervised central contrastive learning, that uses the feature vector of a neural architecture and the feature vectors of its nearby neural architectures (those with small GEDs) to build a central feature vector. The contrastive loss is then utilized to tightly aggregate the feature vectors of the architecture and its nearby architectures onto the central feature vector and to push the feature vectors of other neural architectures away from it.
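
The following PyTorch sketch gives one plausible form of such a central contrastive loss; the exact loss in the paper may differ. Here the central feature vector is taken as the mean of the normalized embeddings of an architecture and its small-GED neighbours (the positives), and an InfoNCE-style term pulls the positives toward the centre while pushing the embeddings of other architectures (the negatives) away.

```python
import torch
import torch.nn.functional as F

def central_contrastive_loss(pos, neg, temperature=0.1):
    # pos: (P, D) embeddings of an architecture and its small-GED neighbours.
    # neg: (N, D) embeddings of other architectures.
    pos = F.normalize(pos, dim=-1)
    neg = F.normalize(neg, dim=-1)
    center = F.normalize(pos.mean(dim=0), dim=-1)   # central feature vector
    sim_pos = pos @ center / temperature            # (P,) similarity to centre
    sim_neg = neg @ center / temperature            # (N,)
    # For each positive i: -log( e^{s_i} / (e^{s_i} + sum_j e^{s_neg_j}) ),
    # which aggregates positives onto the centre and pushes negatives away.
    logits = torch.cat([sim_pos.unsqueeze(1),
                        sim_neg.expand(len(sim_pos), -1)], dim=1)  # (P, 1 + N)
    return (torch.logsumexp(logits, dim=1) - sim_pos).mean()

# Toy usage with random embeddings standing in for GNN outputs.
loss = central_contrastive_loss(torch.randn(4, 32), torch.randn(16, 32))
print(float(loss))
```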
