
required for the model to converge during training.
For each epoch, a new random set of nodes is "dropped." The model is therefore consistently trained with fewer active nodes, requiring more iterations to converge. For example, a dropout rate of q = 0.5 roughly doubles the number of iterations required to converge (Krizhevsky 2012): the model trains roughly half the nodes for roughly twice the number of iterations, so the additional computing time comes mainly from the per-iteration overhead. Thus, a dropout rate of 0.5 increases computing time, but by less than a factor of two. Along with adjusting the number of iterations during training, the weights must be rescaled during testing. For example, a dropout rate of 0.5 during training requires the weights to be multiplied by 0.5 during testing.
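To make this concrete, the following is a minimal NumPy sketch (not taken from the article) of a single hidden layer under standard dropout. The layer and variable names are illustrative; the point is that training zeroes out a random subset of nodes each iteration, while testing keeps every node and multiplies the activations (equivalently, the outgoing weights) by the keep probability, which is 0.5 when the dropout rate is 0.5.

import numpy as np

rng = np.random.default_rng(0)
q = 0.5                    # dropout rate; the keep probability is 1 - q
keep_prob = 1.0 - q

def hidden_layer_train(x, W):
    # Training: drop each node with probability q this iteration.
    h = np.maximum(0.0, x @ W)              # ReLU activations
    mask = rng.random(h.shape) < keep_prob  # random set of nodes to keep
    return h * mask                         # dropped nodes output 0

def hidden_layer_test(x, W):
    # Testing: keep all nodes, scale activations by the keep probability (0.5 here).
    h = np.maximum(0.0, x @ W)
    return h * keep_prob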
The value of q = 0.5 is often used for dropout. AlexNet, the network that achieved a step up in performance on the ImageNet classification challenge, used a dropout rate of 0.5 (Krizhevsky 2012). In a thorough review of dropout on very large problems, Srivastava commented that a dropout rate of 0.5 seemed to be close to optimal (Srivastava 2014). However, a similar image classification system based on a deep neural network trained in MATLAB did not find 0.5 to be the optimal dropout rate (Boddy 2017). For linear networks, a dropout rate of 0.5 provides the strongest regularization (Baldi 2013); most neural networks, however, are not applied to linear relationships. A contribution of this study is to demonstrate that the optimal dropout rate varies widely from one dataset to another and changes when a dataset's size is artificially reduced during training. Indeed, the optimal dropout rate could fall anywhere within the valid range of the parameter (0 to just short of 1).


ReLU: Rectified linear activation function
The rectified linear (ReLU) activation function is the most popular activation function for deep neural networks. Its output ranges from 0 to infinity: it outputs 0 for x ≤ 0 and x for x > 0 (linear output). It reduces computation time relative to the softplus and sigmoid functions (Krizhevsky 2012).
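As a small illustration (not from the article), the three activation functions mentioned above can be written in a few lines of NumPy; ReLU avoids the exponentials that softplus and sigmoid require, which is where its computational advantage comes from.

import numpy as np

def relu(x):
    return np.maximum(0.0, x)        # 0 for x <= 0, x for x > 0

def softplus(x):
    return np.log1p(np.exp(x))       # smooth approximation to ReLU

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # output bounded in (0, 1)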
Producing a probability prediction with a neural network: The softmax function
The softmax function is used as the final layer of a neural network to produce probability outputs instead of a hard classification. It exponentiates the values of the output layer and normalizes them so that they sum to 1. In the case of predicting the likelihood of credit card default, the softmax function would produce two values (default and non-default probabilities) that add up to 1. Because the softmax function is differentiable, it can be used during training through backpropagation (Krizhevsky 2012).
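A minimal sketch (not from the article) of this two-class case is shown below: two raw output values (logits) for default and non-default are exponentiated and normalized so the resulting probabilities sum to 1. The logit values are made up for illustration.

import numpy as np

def softmax(logits):
    z = logits - np.max(logits)   # shift by the max for numerical stability
    exp_z = np.exp(z)
    return exp_z / exp_z.sum()

logits = np.array([1.2, -0.4])    # hypothetical outputs: [default, non-default]
probs = softmax(logits)           # roughly [0.83, 0.17]; the two values sum to 1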

EXPERIMENTS
Python was used to develop the deep neural network models. Specifically, the Python library TensorFlow was used to facilitate model training and testing. Other libraries used include Pandas and Matplotlib.
The TensorFlow estimators DNNLinearCombinedClassifier and DNNLinearCombinedRegressor are used to train the models. Two hidden layers are used, the first consisting of 100 nodes and the second of 50. Ten epochs are used to train each model. In addition, a variable representing the dropout rate is passed as an argument to the TensorFlow estimator.
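A sketch of how this setup might look with the TensorFlow 1.x tf.estimator API is given below. It is not the authors' code: the feature columns, file name, and label column are illustrative assumptions, while the hidden layer sizes, the dnn_dropout argument, and the ten training epochs correspond to the settings described above.

import pandas as pd
import tensorflow as tf  # TensorFlow 1.x estimator API

dropout_rate = 0.5  # varied across the experiments

# Hypothetical numeric features for the credit card default dataset.
feature_columns = [
    tf.feature_column.numeric_column("credit_limit"),
    tf.feature_column.numeric_column("age"),
    tf.feature_column.numeric_column("bill_amount"),
]

model = tf.estimator.DNNLinearCombinedClassifier(
    linear_feature_columns=feature_columns,  # wide (linear) part
    dnn_feature_columns=feature_columns,     # deep part
    dnn_hidden_units=[100, 50],              # two hidden layers: 100 and 50 nodes
    dnn_dropout=dropout_rate,                # dropout rate passed as an argument
    n_classes=2,                             # default vs. non-default
)

# df is assumed to be a Pandas DataFrame holding the features and a "default" label.
df = pd.read_csv("credit_default.csv")       # hypothetical file name
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    x=df[["credit_limit", "age", "bill_amount"]],
    y=df["default"],
    num_epochs=10,                           # ten epochs, as described above
    shuffle=True,
)
model.train(input_fn=train_input_fn)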


