| chapter | page | content | file_type |
|---|---|---|---|
ch13 | page_001.txt | Chapter 13
Neural Nets and Deep
Learning
In Sections 12.2 and 12.3 we discussed the design of single “neurons” (percep-
trons). These take a collection of inputs and, based on weights associated with
those inputs, compute a number that, compared with a threshold, determines
whether to output “yes” or “no.” These method... | text |
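The perceptron computation described here is a one-liner; a minimal sketch (the weights, threshold, and inputs below are illustrative values, not from the text):

```python
import numpy as np

def perceptron(x, w, threshold):
    """Output "yes" (1) if the weighted sum of the inputs reaches the
    threshold, and "no" (0) otherwise."""
    return 1 if np.dot(w, x) >= threshold else 0

# Illustrative example: equal weights make this a majority vote.
w = np.array([1.0, 1.0, 1.0])
print(perceptron(np.array([1, 0, 1]), w, 2.0))  # prints 1: two of three inputs are on
```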
ch13 | page_002.txt | 510
CHAPTER 13. NEURAL NETS AND DEEP LEARNING
13.1
Introduction to Neural Nets
We begin the discussion of neural nets with an extended example. After that,
we introduce the general plan of a neural net and some important terminology.
Example 13.1 : The problem we discuss is to learn the concept that “good”
bit-vectors ... | text |
ch13 | page_003.txt | 13.1. INTRODUCTION TO NEURAL NETS
511
recognize are those that end with 11 but do not have 11 elsewhere. These are
0011 and 1011.
Fortunately, the second node in the first layer, with weights [0, 0, 1, 1] and
threshold 1.5, gives output 1 whenever x3 = x4 = 1, and not otherwise. This
node thus recognizes the inputs 0011 ... | text |
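The claim about this node can be checked by brute force. A small sketch (assuming 0/1 components x1, ..., x4 in that order) enumerates all 16 bit-vectors and confirms that the node with weights [0, 0, 1, 1] and threshold 1.5 fires exactly on those with x3 = x4 = 1:

```python
import numpy as np
from itertools import product

w = np.array([0, 0, 1, 1])   # weights from the text
threshold = 1.5

# Bit-vectors for which the node outputs 1:
fires = [x for x in product([0, 1], repeat=4) if np.dot(w, x) > threshold]
# Exactly the four vectors with x3 = x4 = 1: 0011, 0111, 1011, 1111.
```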
ch13 | page_004.txt |
important part of the design process for neural nets. Especially, note that the
output layer can have many nodes. For instance, the neural net could classify
inputs into many different classes, with one output node for each class.
x
Hidden layers
Outp... | text |
ch13 | page_005.txt |
13.1.2
Interconnections Among Nodes
Neural nets can differ in how the nodes at one layer are connected to nodes
at the layer to its right. The most general case is when each node receives as
inputs the outputs of every node of the previous layer. A layer that receives
all outputs fr... | text |
ch13 | page_006.txt |
particular angle, e.g., if the upper left corner is light and the other eight pixels
dark. Moreover, the algorithm for recognizing an edge of a certain type is the
same, regardless of where in the field of vision this little square appears. That
observation justifies the CNN ... | text |
ch13 | page_007.txt | 13.2. DENSE FEEDFORWARD NETWORKS
515
! Exercise 13.1.2: Consider the general problem of identifying bit-vectors of
length n having two consecutive 1’s. Assume a single hidden layer with some
number of gates. What is the smallest number of gates you can have in the
hidden layer if (a) n = 5 (b) n = 6?
! Exercise 13.1.3:... | text |
ch13 | page_008.txt |
Why Use Linear Algebra?
Notational brevity is one reason to use linear algebra notation for neu-
ral networks. Another is performance. It turns out that graphics pro-
cessing units (GPU’s) have circuitry that allows for highly parallelized
linear-algebra operations. Multiply... | text |
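As an illustration of this point, a dense layer applied to a whole batch of inputs is a single matrix multiplication (the shapes below are arbitrary example values):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 3))   # weights: 4 outputs, 3 inputs
b = rng.standard_normal(4)        # one bias per output
X = rng.standard_normal((8, 3))   # a batch of 8 input vectors

# The whole batch passes through the layer in one matrix product --
# exactly the kind of operation GPUs parallelize well.
out = X @ W.T + b                 # shape (8, 4)
```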
ch13 | page_009.txt |
class. This arrangement results in an output vector y = [y1, y2, . . . , yn], where
n is the number of classes. The simple network from the prior section had a boolean
output, corresponding to two output classes (true and false), so we could have
modeled the output equally well as a 2-v... | text |
ch13 | page_010.txt |
13.2.3
The Sigmoid
Given that we cannot use the step function, we look for alternatives in the
class of sigmoid functions – so called because of the S-shaped curve that these
functions exhibit. The most commonly used sigmoid function is the logistic
sigmoid:
σ(x) = 1/(1 + e^−x)... | text |
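A direct transcription of the logistic sigmoid (a minimal NumPy sketch):

```python
import numpy as np

def sigmoid(x):
    """Logistic sigmoid: sigma(x) = 1 / (1 + e^(-x))."""
    return 1.0 / (1.0 + np.exp(-x))

# sigma(0) = 0.5, and outputs always lie strictly between 0 and 1.
```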
ch13 | page_011.txt |
Figure 13.4: The logistic sigmoid (a) and hyperbolic tangent (b) functions
Figure 13.4 shows the logistic sigmoid and hyperbolic tangent functions.
Note the difference in scale along the x-axis between the two charts. It is ea... | text |
ch13 | page_012.txt |
Accuracy of Softmax Calculation
The denominator of the softmax function involves computing a sum of the
form Σj e^xj. When the xj's take a wide range of values, their exponents
e^xj take on an even wider range of values – some tiny and some very large.
Adding very large and ... | text |
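The standard remedy (assumed here; the truncated text may describe the same trick) is to subtract max_j xj from every component before exponentiating. This cancels in the ratio, so the result is unchanged, but every exponent is at most 0 and overflow is impossible:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: shifting by max(x) leaves the
    result unchanged but prevents overflow in exp."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

p = softmax(np.array([1000.0, 1001.0, 1002.0]))  # naive exp(1002) would overflow
```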
ch13 | page_013.txt |
Figure 13.5: The ReLU (a) and the ELU with α = 1 (b)
node’s output get “stuck” at 0 through the rest of the training. This is called
the dying ReLU problem.
The Leaky ReLU attempts to fix this problem by defining the activation
fu... | text |
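The activation functions discussed here are one-liners; a sketch (α and the leak slope a are hyperparameters, and the values below are just common defaults):

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def leaky_relu(x, a=0.01):
    """A small slope a for x < 0 keeps the gradient nonzero --
    the fix for the dying-ReLU problem described above."""
    return np.where(x > 0, x, a * x)

def elu(x, alpha=1.0):
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```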
ch13 | page_014.txt |
the observed output corresponding to input x is ŷ and the predicted output is
y. Then a loss function L(y, ŷ) quantifies the prediction error for this single
input. Typically, we consider the loss over a large set of observations, such
as the entire training set. In that c... | text |
ch13 | page_015.txt |
Figure 13.6: Huber Loss (solid line, δ = 1) and Squared Error (dotted line) as
functions of z = y − ŷ
13.2.9
Classification Loss
Consider a multiclass classification problem with target classes C1, C2, . . . , Cn.
Suppose each point in the training se... | text |
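The Huber loss plotted in Figure 13.6 can be sketched with the usual parameterization (quadratic within δ of zero, linear outside, with value and slope matched at the crossover); the exact scaling convention is an assumption here:

```python
import numpy as np

def huber(z, delta=1.0):
    """Quadratic for |z| <= delta, linear beyond it."""
    return np.where(np.abs(z) <= delta,
                    0.5 * z**2,
                    delta * (np.abs(z) - 0.5 * delta))
```

Unlike squared error, the penalty grows only linearly for large residuals, so outliers pull on the fit far less.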
ch13 | page_016.txt |
pi. Then a key result from information theory is that if we encode messages
using an optimal binary code, the average number of bits per symbol needed to
encode messages is H(p).
Suppose we did not know the symbol probability distribution p when we
designed the coding scheme.... | text |
ch13 | page_017.txt | 13.3. BACKPROPAGATION AND GRADIENT DESCENT
525
Exercise 13.2.3: Show that tanh(x) = 2σ(2x) − 1.
Exercise 13.2.4: Show that σ(x) = 1 − σ(−x).
Exercise 13.2.5: Show that for any vector [v1, v2, . . . , vk], Σ_{i=1}^{k} µ(vi) = 1.
Figure 13.... | text |
ch13 | page_018.txt |
parameters, it is possible to find parameters that yield low training loss but
nevertheless perform poorly in the real world. This phenomenon is called over-
fitting, a problem we have mentioned several times, starting in Section 9.4.4.
For the moment, we assume that our goal... | text |
ch13 | page_019.txt |
u = Wx
v = u + b
y = σ(v)
L = MSE(y, ŷ)
Each of these steps corresponds to one of the four nodes in the middle row,
in order from the left. The first step corresponds to the node with operand
u and operator ×. Here is an example where it must be understood that the
node la... | text |
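The four compute-graph steps can be run directly; this sketch uses small made-up shapes and the logistic sigmoid:

```python
import numpy as np

def forward(W, b, x, y_hat):
    """The four compute-graph steps, in order from the left."""
    u = W @ x                        # u = Wx
    v = u + b                        # v = u + b
    y = 1.0 / (1.0 + np.exp(-v))     # y = sigma(v)
    L = np.mean((y - y_hat) ** 2)    # L = MSE(y, y_hat)
    return u, v, y, L

u, v, y, L = forward(np.zeros((2, 3)), np.zeros(2),
                     np.ones(3), np.array([0.5, 0.5]))
# With zero weights and biases, y = sigma(0) = [0.5, 0.5], so the loss is 0.
```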
ch13 | page_020.txt |
Jx(y) = [ ∂y1/∂x1  . . .  ∂yn/∂x1 ]
        [    ...    ...      ...  ]
        [ ∂y1/∂xm  . . .  ∂yn/∂xm ]

We shall make use of the chain rule for derivatives from calculus. If y = g(x)
and z = f(y) = f(g(x)), then the chain rule says:

dz/dx = (dz/dy)(dy/dx)
Also, if z = f(u, v) where u = g(x) and v = h(x... | text |
ch13 | page_021.txt |
Since we shall need to compute these gradients several times, once for each
iteration of gradient descent, we can avoid repeated computation by adding
additional nodes to the compute graph for backpropagation: one node for each
gradient computation.
In general, the Jacobia... | text |
ch13 | page_022.txt |
Suppose s = [s1, s2, . . . , sn] is a vector defined by si = yi(1 −yi) (i.e., the
diagonal of the Jacobian matrix). We can express g(v) simply as
g(v) = s ◦g(y)
where a ◦b is the vector resulting from the element-wise product of a and b.5
Now that we have g(v), we can comput... | text |
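Because the sigmoid's Jacobian is diagonal, the matrix product collapses to an element-wise product; a sketch of this step (example values are made up):

```python
import numpy as np

def sigmoid(v):
    return 1.0 / (1.0 + np.exp(-v))

def grad_v(y, grad_y):
    """g(v) = s o g(y), where s_i = y_i (1 - y_i) is the diagonal
    of the sigmoid's Jacobian."""
    s = y * (1.0 - y)
    return s * grad_y

v = np.array([0.3, -1.2])
g = grad_v(sigmoid(v), np.array([1.0, 1.0]))  # equals dy_i/dv_i componentwise
```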
ch13 | page_023.txt |
Therefore, noting that Σi pi = 1, we have:

l = H(p, q)
  = −Σi pi log qi
  = −Σi pi (yi − log Σj e^yj)
  = −Σi pi yi + (log Σj e^yj) Σi pi
  = −Σi pi yi + log Σj e^yj

Differentiating, we get:

∂l/∂yk = −pk + e^yk / Σj e^yj
       = −pk + µ(yk)
       = qk − pk
Therefore, we end up with the ra... | text |
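The final result of the derivation, ∂l/∂y = q − p, is easy to sanity-check by finite differences (the vectors below are arbitrary examples; p must sum to 1):

```python
import numpy as np

def softmax(y):
    e = np.exp(y - y.max())
    return e / e.sum()

def cross_entropy(p, y):
    """l = -sum_i p_i log q_i, with q = softmax(y)."""
    return -np.sum(p * np.log(softmax(y)))

y = np.array([0.2, -1.0, 0.7])
p = np.array([0.1, 0.3, 0.6])   # sums to 1
analytic = softmax(y) - p       # the gradient q - p derived above
```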
ch13 | page_024.txt |
lead to convergence. Usually picking the right learning rate is a matter of trial
and error. It is also possible and common to vary the learning rate. Start with
an initial learning rate η0. Then, at each iteration, multiply the learning rate
by a factor β (0 < β < 1) until... | text |
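The decaying-schedule idea in one line (η0 and β below are illustrative values):

```python
def learning_rate(t, eta0=0.1, beta=0.95):
    """Multiply the initial rate eta0 by beta (0 < beta < 1) once per
    iteration: eta_t = eta0 * beta**t."""
    return eta0 * beta ** t
```

Early iterations take large steps toward a minimum; later iterations take ever-smaller steps that help convergence.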
ch13 | page_025.txt |
vectors. Just as we regard an m × n matrix as a set of m n-vectors, we can re-
gard a 3-dimensional tensor of dimensionality l×m×n as a set of lm n-vectors,
and similarly for tensors of higher dimension.
Example 13.7 : This example is based on the MNIST dataset.6 This data... | text |
ch13 | page_026.txt |
13.3.6
Exercises for Section 13.3
Exercise 13.3.1: This exercise uses the neural net from Fig. 13.7. However,
assume that the weights on all inputs are variables rather than the constants
shown. Note, however, that some inputs do not feed one of the nodes in the
first layer,... | text |
ch13 | page_027.txt | 13.4. CONVOLUTIONAL NEURAL NETWORKS
535
13.4.1
Convolutional Layers
Convolutional layers make use of the fact that image features often are described
by small contiguous areas in the image. For example, at the first convolutional
layer, we might recognize small sections of edges in the image. At later layers,
more compl... | text |
ch13 | page_028.txt |
weights wij of the filter are determined, the filter will recognize some feature of
the image, and the activation map tells whether (or to what degree) this feature
is present at each position of the image.
Example 13.8 : In Fig. 13.10(b), we see a 2 × 2 filter, which is to be ... | text |
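A sliding-filter computation like the one in this example can be sketched directly (stride 1, no padding, one channel; the sample input and filter below are made up, not the ones from Fig. 13.10):

```python
import numpy as np

def conv2d(X, F, b=0.0):
    """Activation map Z: each entry is the dot product of the filter F
    with the input window it covers, plus a bias b."""
    m, n = X.shape
    f, g = F.shape
    Z = np.empty((m - f + 1, n - g + 1))
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            Z[i, j] = np.sum(X[i:i + f, j:j + g] * F) + b
    return Z

X = np.arange(9.0).reshape(3, 3)   # 3x3 input
Z = conv2d(X, np.ones((2, 2)))     # 2x2 filter of all 1's -> 2x2 activation map
```

Note the output is one smaller than the input in each dimension, matching the discussion of activation-map size below.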
ch13 | page_029.txt |
commonly used. Note that the filter size specifies only the width and height of
the filter; the number of channels of the filter always matches the number of
channels of the input.
The activation map in our example is slightly smaller than the input. In
many cases, it is convenient t... | text |
ch13 | page_030.txt |
13.4.2
Convolution and Cross-Correlation
This subsection is a short detour to explain why Convolutional Neural Networks
are so named. It is not a prerequisite for any of the other material in this
chapter.
The convolutional layer is so named because of the resemblance it bear... | text |
ch13 | page_031.txt |
13.4.3
Pooling Layers
A pooling layer takes as input the output of a convolutional layer and produces
an output with smaller spatial extent. The size reduction is accomplished by
using a pooling function to compute aggregates over small contiguous regions
of the input. For exampl... | text |
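Max pooling over non-overlapping k × k blocks can be written as a reshape plus a max reduction; a sketch (assuming the input sides are divisible by k):

```python
import numpy as np

def max_pool(X, k=2):
    """Partition X into k x k blocks and keep the maximum of each block."""
    m, n = X.shape
    return X.reshape(m // k, k, n // k, k).max(axis=(1, 3))

X = np.array([[1, 2, 5, 6],
              [3, 4, 7, 8],
              [0, 0, 0, 0],
              [0, 9, 0, 1]])
# max_pool(X) reduces the 4x4 input to a 2x2 output, one value per block.
```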
ch13 | page_032.txt |
VGGnet.9 This simple network strictly alternates convolutional and pooling
layers. In practice, high-performing networks are finely tuned to their task,
and may stack convolutional layers directly on top of one another with the
occasional pooling layer in between. Moreover, ... | text |
ch13 | page_033.txt |
How Many Nodes in a Convolutional Layer?
We have referred to a node of a convolutional layer as a “filter.” That
filter may be a single node, or as in Example 13.11, a set of several nodes,
one for each channel. When we train the CNN, we determine weights for
each filter, so there a... | text |
ch13 | page_034.txt |
the filter F and the corresponding region in the input into vectors. Consider
the convolution of an m×m×1 tensor X (i.e., X is actually an m×m matrix)
with an f × f filter F and bias b, to produce as output the n × n matrix Z.
We now explain how to implement the convolution o... | text |
ch13 | page_035.txt | 13.5. RECURRENT NEURAL NETWORKS
543
(c) Suppose we do not do any zero padding. If the output of one layer is
input to the next layer, after how many layers will there be no output at
all?
Exercise 13.4.2: Repeat Exercise 13.4.1(a) and (c) for the case when there is
a stride of three.
Exercise 13.4.3: Suppose we have th... | text |
ch13 | page_036.txt |
(a) The basic unit of an RNN.   (b) The unrolled RNN of length n.
Figure 13.12: RNN architecture
These considerations lead naturally to a recurrent network model, where we
perform the same operatio... | text |
ch13 | page_037.txt |
Here f is a nonlinear activation function such as tanh or sigmoid. U and W are
matrices of weights, and b is a vector of biases. We define s0 to be a vector of
all zeros. The output at time t is a function of the hidden state at time t, after
being transformed by a parameter matrix V ... | text |
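Unrolling the recurrence st = f(U xt + W st−1 + b), with outputs read off through V, is a short loop. This sketch uses tanh for f and leaves the output activation as the identity; all shapes and values are illustrative:

```python
import numpy as np

def rnn_forward(xs, U, W, V, b, d):
    """Same U, W, V, b, d reused at every time step; s_0 is all zeros."""
    s = np.zeros(W.shape[0])
    ys = []
    for x in xs:                        # one step per input in the sequence
        s = np.tanh(U @ x + W @ s + b)  # state update
        ys.append(V @ s + d)            # output from the hidden state
    return np.array(ys), s

rng = np.random.default_rng(1)
U, W, V = rng.normal(size=(3, 2)), rng.normal(size=(3, 3)), rng.normal(size=(1, 3))
b, d = np.zeros(3), np.zeros(1)
ys, s_final = rnn_forward([np.ones(2)] * 5, U, W, V, b, d)
```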
ch13 | page_038.txt |
Moreover, suppose W is a matrix, and w is the vector obtained by concatenating
the rows of W. Then:
dz/dW = dz/dw
dy/dW = dy/dw
These conventions also extend naturally to partial derivatives.
We use backpropagation to compute the gradients of the error with respect
to the n... | text |
ch13 | page_039.txt |
as an exercise for the reader. Now, noting that dst−1/dW is just Rt−1, we have
the recurrence:

Rt = A(B + W^T Rt−1)

Setting Pt = AB and Qt = AW^T, we end up with:

Rt = Pt + Qt Rt−1    (13.7)

We can use this recurrence to set up an iterative evaluation of Rt, and thence
de/dW. We initiali...
ch13 | page_040.txt |
very little contribution from much earlier time steps. This phenomenon is called
the problem of vanishing gradients.
Equation 13.9 results in vanishing gradients because we used the tanh ac-
tivation function for state update. If instead we use other activation functions
su... | text |
ch13 | page_041.txt |
s; each of the gate’s entries is between 0 and 1. The Hadamard product11 s ◦g
allows us to selectively pass through certain parts of the state while filtering out
others. Usually, a gate vector is created by a linear combination of the hidden
state and the current input. We then apply... | text |
ch13 | page_042.txt |
Here, we use subscript o to indicate another pair of weight matrices and a bias
vector that must be learned.
Finally, the output at time t is computed in exactly the same manner as the
RNN output:
yt = g(V st + d)    (13.16)
where g is an activation function, V is a weight ma... | text |
ch13 | page_043.txt | 13.6. REGULARIZATION
551
13.6
Regularization
Thus far, we have presented our goal as one of minimizing loss (i.e., prediction
error) on the training set. Gradient descent and stochastic gradient descent
help us achieve this objective.
In practice, the real objective of training is
to minimize the loss on new and hither... | text |
ch13 | page_044.txt |
The loss function L penalizes large weight values. Here α is a hyperparameter
that trades off between minimizing the original loss function L0 and the penalty
associated with the L2-norm of the weights. Instead of the L2-norm, we could
penalize the L1-norm of the weights:
L =... | text |
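The two penalties can be sketched as follows (α is the hyperparameter from the text; whether the L2 penalty uses the norm or its square varies by convention, and the squared form is assumed here):

```python
import numpy as np

def l2_loss(L0, w, alpha):
    """L = L0 + alpha * sum_i w_i^2 : discourages large weights."""
    return L0 + alpha * np.sum(w ** 2)

def l1_loss(L0, w, alpha):
    """L = L0 + alpha * sum_i |w_i| : tends to drive weights to exactly 0."""
    return L0 + alpha * np.sum(np.abs(w))

w = np.array([3.0, -4.0])
```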
ch13 | page_045.txt | 13.7. SUMMARY OF CHAPTER 13
553
training set (the training loss) decreases through the training process, the loss
on the test set (the test loss) often behaves differently. The test loss falls
during the initial part of the training, and then may hit a minimum and actually
increase after a large number of training it... | text |
ch13 | page_046.txt |
are pooled, meaning that the nodes of the previous layer are partitioned,
and each node of this layer takes as input only the nodes of one block
of the partition. Convolutional layers are also used, especially in image
processing applications.
✦Convolutional Layers: Convolu... | text |
ch13 | page_047.txt | 13.8. REFERENCES FOR CHAPTER 13
555
the same weights, so the training process therefore needs to deal with a
relatively small number of weights.
✦Long Short-Term Memory Networks: These improve on RNN’s by adding
a second state vector – the cell state – to enable some information about
the sequence to be retained, while... | text |
ch13 | page_048.txt |
7. LeCun, Y. and Y. Bengio, “Convolutional networks for images, speech,
and time series,” The Handbook of Brain Theory and Neural Networks
(M. Arbib, ed.) 3361:10 (1995).
8. LeCun, Y., B. Boser, J.S. Denker, D. Henderson, R.E. Howard, and
W. Hubbard, “Backpropagation applie... | text |