Chapter 1. Introduction. ...et al., 2012). At the same time that the scale and accuracy of deep networks has increased, so has the complexity of the tasks that they can solve. Goodfellow et al. (2014d) showed that neural networks could learn to output an entire sequence of characters transcribed from an image, rather th...
Source: /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf (book: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville; chunks from pages 40 onward; chunk_index 0).
...et al.) that learn to read from memory cells and write arbitrary content to memory cells. Such neural networks can learn simple programs from examples of desired behavior. For example, they can learn to sort lists of numbers given examples of scrambled and sorted sequences. This self-programming technology is in its i...
...many top technology companies including Google, Microsoft, Facebook, IBM, Baidu, Apple, Adobe, Netflix, NVIDIA and NEC. Advances in deep learning have also depended heavily on advances in software infrastructure. Software libraries such as Theano (Bergstra et al., 2010; Bastien et al., 2012), Pylearn2 (Goodfe...
...that neuroscientists can study (DiCarlo, 2013). Deep learning also provides useful tools for processing massive amounts of data and making useful predictions in scientific fields. It has been successfully used to predict how molecules will interact in order to help pharmaceutical companies desig...
[Figure 1.11 plot residue: x-axis 1950 to 2056, y-axis number of neurons (logarithmic scale, roughly 10^-2 to 10^11), with comparison points for sponge, roundworm, leech, ant, bee, frog, octopus and human.] Figure 1.11: Since the introduction of hidden units, artificial neura...
...field sigmoid belief network (Saul et al., 1996)
8. LeNet-5 (LeCun et al., 1998b)
9. Echo state network (Jaeger and Haas, 2004)
10. Deep belief network (Hinton et al., 2006)
11. GPU-accelerated convolutional network (Chellapilla et al., 2006)
12. Deep Boltzmann machine (Salakhutdinov and Hinton, 2009a)
1...
...(Krizhevsky et al., 2012)
19. COTS HPC unsupervised convolutional network (Coates et al., 2013)
20. GoogLeNet (Szegedy et al., 2014a)
[Figure 1.12 plot residue: x-axis 2010 to 2015, y-axis ILSVRC classification error rate from 0.00 to 0.30.] Figure 1.12: Since deep networks reached the scale necessary to compete in the ImageNet Large Scale Visual Recognition Challenge, they have consistently won the competition every year, and y...
Part I: Applied Math and Machine Learning Basics
This part of the book introduces the basic mathematical concepts needed to understand deep learning. We begin with general ideas from applied math that allow us to define functions of many variables, find the highest and lowest points on these functions, and quantify degrees of belief. Next, we describe the fundamental go...
Chapter 2: Linear Algebra. Linear algebra is a branch of mathematics that is widely used throughout science and engineering. However, because linear algebra is a form of continuous rather than discrete mathematics, many computer scientists have little experience with it. A good understanding of linear algebra is essentia...
2.1 Scalars, Vectors, Matrices and Tensors. The study of linear algebra involves several types of mathematical objects: • Scalars: A scalar is just a single number, in contrast to most of the other objects studied in linear algebra, which are usually arrays of multiple numbers. We write scalars in italics. We usually...
For example, we might say "let s ∈ R be the slope of the line," while defining a real-valued scalar, or "let n ∈ N be the number of units," while defining a natural number scalar. • Vectors: A vector is an array of numbers. The numbers are arranged in order. We can identify each individual numb...
To explicitly identify the elements of a vector, we write them as a column enclosed in square brackets:

x = [x_1, x_2, ..., x_n]^T.  (2.1)

We can think of vectors as identifying points in space, with each element giving the coordinate along a different axis. Sometimes we need to index a set of elements of a vector. In this case, w...
...indexed by two indices instead of just one. We usually give matrices upper-case variable names with bold typeface, such as A. If a real-valued matrix A has a height of m and a width of n, then we say that A ∈ R^{m×n}. We usually identify the elements of a matrix using its name in italic but not bold font, and the indice...
A = [A_{1,1}, A_{1,2}; A_{2,1}, A_{2,2}; A_{3,1}, A_{3,2}]  ⇒  A^T = [A_{1,1}, A_{2,1}, A_{3,1}; A_{1,2}, A_{2,2}, A_{3,2}].

Figure 2.1: The transpose of the matrix can be thought of as a mirror image across the main diagonal. A_{:,i} denotes the i-th column of A. When we need to explicitly identify the elements of a matrix, we write them as an arr...
...than two axes. In the general case, an array of numbers arranged on a regular grid with a variable number of axes is known as a tensor. We denote a tensor named "A" with this typeface: A. We identify the element of A at coordinates (i, j, k) by writing A_{i,j,k}. One important operation on matrices is the transpose...
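The transpose and the scalar-as-matrix conventions above can be checked directly in NumPy; a minimal sketch with illustrative values (the specific arrays are not from the text):

```python
import numpy as np

# Transpose: (A^T)_{i,j} = A_{j,i}, a mirror image across the main diagonal.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])   # A in R^{3x2}
At = A.T                     # A^T in R^{2x3}
assert At[0, 2] == A[2, 0]   # element check straight from the definition

# A vector written inline as a row, transposed back to a column vector.
x = np.array([[1.0, 2.0, 3.0]]).T   # shape (3, 1)

# A scalar is its own transpose: a = a^T.
a = np.array(7.0)
assert (a.T == a).all()
```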
...define a vector by writing out its elements in the text inline as a row matrix, then using the transpose operator to turn it into a standard column vector, e.g., x = [x_1, x_2, x_3]^T. A scalar can be thought of as a matrix with only a single entry. From this, we can see that a scalar is its own ...
...the addition of a matrix and a vector, yielding another matrix: C = A + b, where C_{i,j} = A_{i,j} + b_j. In other words, the vector b is added to each row of the matrix. This shorthand eliminates the need to define a matrix with b copied into each row before doing the addition. This implicit copying of b to many locations is...
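The implicit-copying shorthand described above is exactly NumPy broadcasting; a minimal sketch with made-up values:

```python
import numpy as np

# Broadcasting: C = A + b adds the vector b to each row of A,
# i.e. C_{i,j} = A_{i,j} + b_j, without materializing copies of b.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])
b = np.array([10.0, 20.0, 30.0])

C = A + b                           # b implicitly "copied" to each row
explicit = A + np.tile(b, (2, 1))   # the same result, copies made explicit
assert (C == explicit).all()
```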
...C_{i,j} = Σ_k A_{i,k} B_{k,j}.  (2.5)  Note that the standard product of two matrices is not just a matrix containing the product of the individual elements. Such an operation exists and is called the element-wise product or Hadamard product, and is denoted A ⊙ B. The dot product between two vectors x and y of the same dimens...
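The distinction between the standard matrix product, the element-wise (Hadamard) product, and the dot product can be made concrete in NumPy (illustrative values only):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
B = np.array([[5.0, 6.0],
              [7.0, 8.0]])

# Standard matrix product: C_{i,j} = sum_k A_{i,k} B_{k,j}.
C = A @ B

# Element-wise (Hadamard) product: a different operation entirely.
H = A * B

# Dot product of two vectors of the same dimensionality.
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
d = x @ y                      # 1*4 + 2*5 + 3*6 = 32
assert d == 32.0
```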
Matrix product operations have many useful properties that make mathematical analysis of matrices more convenient. For example, matrix multiplication is distributive: A(B + C) = AB + AC.  (2.6)  It is also associative: A(BC) = (AB)C.  (2.7)  Matrix multiplication is not commutative (...
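The distributive and associative properties, and the failure of commutativity, are easy to verify numerically on random matrices; a sketch:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3))
C = rng.standard_normal((3, 3))

# Distributive: A(B + C) = AB + AC.
assert np.allclose(A @ (B + C), A @ B + A @ C)

# Associative: A(BC) = (AB)C.
assert np.allclose(A @ (B @ C), (A @ B) @ C)

# Matrix multiplication is NOT commutative in general.
not_commutative = not np.allclose(A @ B, B @ A)

# But the dot product between two vectors is: x^T y = y^T x.
x, y = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(x @ y, y @ x)
```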
...attempt to develop a comprehensive list of useful properties of the matrix product here, but the reader should be aware that many more exist. We now know enough linear algebra notation to write down a system of linear equations: Ax = b  (2.11)  where A ∈ R^{m×n} is a known matrix, b ∈ R^m is a known vector, and x ∈ R^n is ...
Figure 2.2: Example identity matrix: this is I_3 = [1, 0, 0; 0, 1, 0; 0, 0, 1].

A_{2,1} x_1 + A_{2,2} x_2 + ··· + A_{2,n} x_n = b_2  (2.17)
...  (2.18)
A_{m,1} x_1 + A_{m,2} x_2 + ··· + A_{m,n} x_n = b_m.  (2.19)

Matrix-vector product notation provides a more compact representation for equations of this form. 2.3 ...
...n×n, and for all x ∈ R^n, I_n x = x.  (2.20)  The structure of the identity matrix is simple: all of the entries along the main diagonal are 1, while all of the other entries are zero. See figure 2.2 for an example. The matrix inverse of A is denoted as A^{-1}, and it is defined as the matrix such that A^{-1} A = I_n.  (2.21)  W...
x = A^{-1} b.  (2.25)  Of course, this process depends on it being possible to find A^{-1}. We discuss the conditions for the existence of A^{-1} in the following section. When A^{-1} exists, several different algorithms exist for finding it in closed form. In theory, the same inverse matrix can then be used t...
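Solving Ax = b via the inverse can be sketched as follows; in practice a solver routine is preferred over forming A^{-1} explicitly (the 2×2 system here is made up for illustration):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x_inv = np.linalg.inv(A) @ b      # textbook route: x = A^{-1} b
x_sol = np.linalg.solve(A, b)     # numerically preferable route

assert np.allclose(x_inv, x_sol)
assert np.allclose(A @ x_sol, b)  # x really solves the system
```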
It is not possible to have more than one but less than infinitely many solutions for a particular b; if both x and y are solutions, then z = αx + (1 − α)y  (2.26)  is also a solution for any real α. To analyze how many solutions the equation has, we can think of the columns of A as specifying different directions we can...
...multiplying each vector v^{(i)} by a corresponding scalar c_i and adding the results: Σ_i c_i v^{(i)}.  (2.28)  The span of a set of vectors is the set of all points obtainable by linear combination of the original vectors.
Determining whether Ax = b has a solution thus amounts to testing whether b is in the span of the columns of A. This particular span is known as the column space or the range of A. In order for the system Ax = b to have a solution for all values of b ∈ R^m, we therefore require that the column s...
...only if b lies on that plane. Having n ≥ m is only a necessary condition for every point to have a solution. It is not a sufficient condition, because it is possible for some of the columns to be redundant. Consider a 2×2 matrix where both of the columns are identical. This has the same column space as a 2×1 matrix conta...
...set of m linearly independent columns. This condition is both necessary and sufficient for equation 2.11 to have a solution for every value of b. Note that the requirement is for a set to have exactly m linearly independent columns, not at least m. No set of m-dimensional vectors can have more than m mutually linearly inde...
...solution. So far we have discussed matrix inverses as being multiplied on the left. It is also possible to define an inverse that is multiplied on the right: A A^{-1} = I.  (2.29)  For square matrices, the left inverse and right inverse are equal. 2.5 Norms. Sometimes we need to measure the size ...
• f(x + y) ≤ f(x) + f(y)  (the triangle inequality)
• ∀α ∈ R, f(αx) = |α| f(x)

The L^2 norm, with p = 2, is known as the Euclidean norm. It is simply the Euclidean distance from the origin to the point identified by x. The L^2 norm is used so frequently in machine learning that it is often denoted simply as ||...
...applications, it is important to discriminate between elements that are exactly zero and elements that are small but nonzero. In these cases, we turn to a function that grows at the same rate in all locations, but retains mathematical simplicity: the L^1 norm. The L^1 norm may be simplified to ||...
...often used as a substitute for the number of nonzero entries. One other norm that commonly arises in machine learning is the L^∞ norm, also known as the max norm. This norm simplifies to the absolute value of the element with the largest magnitude in the vector, ||x||_∞ = max_i |x_i|.  (2.32)  Sometimes we may also ...
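The L^1, L^2 and L^∞ norms described above, computed on an illustrative vector:

```python
import numpy as np

x = np.array([3.0, -4.0, 0.0])

l1   = np.linalg.norm(x, 1)       # sum of absolute values
l2   = np.linalg.norm(x)          # Euclidean norm (default p = 2)
linf = np.linalg.norm(x, np.inf)  # max absolute value

assert l1 == 7.0
assert l2 == 5.0
assert linf == 4.0

# The squared L2 norm is simply x^T x.
assert np.isclose(l2 ** 2, x @ x)
```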
...Matrices and Vectors. Some special kinds of matrices and vectors are particularly useful. Diagonal matrices consist mostly of zeros and have non-zero entries only along the main diagonal. Formally, a matrix D is diagonal if and only if D_{i,j} = 0 for
...all i ≠ j. We have already seen one example of a diagonal matrix: the identity matrix, where all of the diagonal entries are 1. We write diag(v) to denote a square diagonal matrix whose diagonal entries are given by the entries of the vector v. Diagonal matrices are of interest in part bec...
...to be diagonal. Not all diagonal matrices need be square. It is possible to construct a rectangular diagonal matrix. Non-square diagonal matrices do not have inverses but it is still possible to multiply by them cheaply. For a non-square diagonal matrix D, the product Dx will involve scaling each element of x, and ...
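A sketch of why multiplying by diag(v) is cheap: it reduces to element-wise scaling, and its inverse (which exists only when every diagonal entry is nonzero) is just the reciprocal diagonal. The values here are illustrative:

```python
import numpy as np

# Multiplying by diag(v) just scales each element: diag(v) x = v * x.
v = np.array([2.0, 3.0, 4.0])
x = np.array([1.0, 1.0, 2.0])

D = np.diag(v)
assert np.allclose(D @ x, v * x)          # O(n) instead of O(n^2)

# Inverting a diagonal matrix is just reciprocating the diagonal entries.
D_inv = np.diag(1.0 / v)
assert np.allclose(D_inv, np.linalg.inv(D))
```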
...norm: ||x||_2 = 1.  (2.36)  A vector x and a vector y are orthogonal to each other if x^T y = 0. If both vectors have nonzero norm, this means that they are at a 90 degree angle to each other. In R^n, at most n vectors may be mutually orthogonal with nonzero norm. If the vectors are not only orthogonal but also have un...
This implies that A^{-1} = A^T,  (2.38)  so orthogonal matrices are of interest because their inverse is very cheap to compute. Pay careful attention to the definition of orthogonal matrices. Counterintuitively, their rows are not merely orthogonal but fully orthonormal. There is no special term fo...
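A rotation matrix gives a concrete orthogonal matrix; its inverse is just its transpose. The angle below is illustrative:

```python
import numpy as np

# A 2-D rotation matrix is orthogonal: its rows (and columns) are
# mutually orthonormal, so Q^T Q = Q Q^T = I and Q^{-1} = Q^T.
theta = 0.3
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Q @ Q.T, np.eye(2))
assert np.allclose(np.linalg.inv(Q), Q.T)   # inverse is just the transpose
```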
...multiple of 12 will be divisible by 3. Much as we can discover something about the true nature of an integer by decomposing it into prime factors, we can also decompose matrices in ways that show us information about their functional properties that is not obvious from the representation of the matrix as an array of el...
...eigenvector of A, then so is any rescaled vector sv for s ∈ R, s ≠ 0. Moreover, sv still has the same eigenvalue. For this reason, we usually only look for unit eigenvectors. Suppose that a matrix A has n linearly independent eigenvectors, {v^{(1)}, ..., v^{(n)}}, with corresponding eigenvalues {λ_1, ..., λ_n}. We may conc...
Figure 2.3: An example of the effect of eigenvectors and eigenvalues. Here, we have a matrix A with two orthonormal eigenvectors, v^{(1)} with eigenvalue λ_1 and v^{(2)} with eigenvalue λ_2. (Left) We plot the set of all unit vectors u ∈ R^2 as a unit circle. (Right) We plot the set of all po...
...composition A = V diag(λ) V^{-1}.  (2.40)  We have seen that constructing matrices with specific eigenvalues and eigenvectors allows us to stretch space in desired directions. However, we often want to decompose matrices into their eigenvalues and eigenvectors. Doing so can help us to analyze certain properties...
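For a real symmetric matrix, the decomposition into orthonormal eigenvectors can be sketched with NumPy's `eigh` routine (the 2×2 matrix here is illustrative):

```python
import numpy as np

# Every real symmetric matrix has A = Q diag(lambda) Q^T with Q orthogonal.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

lam, Q = np.linalg.eigh(A)        # eigh: for symmetric/Hermitian matrices

# Reconstruct A from its eigendecomposition.
assert np.allclose(Q @ np.diag(lam) @ Q.T, A)

# Each column of Q is an eigenvector: A v = lambda v.
for i in range(2):
    assert np.allclose(A @ Q[:, i], lam[i] * Q[:, i])
```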
...cases, the decomposition exists, but may involve complex rather than real numbers. Fortunately, in this book, we usually need to decompose only a specific class of matrices that have a simple decomposition. Specifically, every real symmetric matrix can be decomposed into an expression using only...
...position may not be unique. If any two or more eigenvectors share the same eigenvalue, then any set of orthogonal vectors lying in their span are also eigenvectors with that eigenvalue, and we could equivalently choose a Q using those eigenvectors instead. By convention, we usually sort the entries of Λ in descending...
...maximum value of f within the constraint region is the maximum eigenvalue and its minimum value within the constraint region is the minimum eigenvalue. A matrix whose eigenvalues are all positive is called positive definite. A matrix whose eigenvalues are all positive or zero-valued is called positive semidefinite. L...
The SVD is more generally applicable. Every real matrix has a singular value decomposition, but the same is not true of the eigenvalue decomposition. For example, if a matrix is not square, the eigendecomposition is not defined, and we must use a singular value decomposition instead. Recall tha...
...defined to have a special structure. The matrices U and V are both defined to be orthogonal matrices. The matrix D is defined to be a diagonal matrix. Note that D is not necessarily square. The elements along the diagonal of D are known as the singular values of the matrix A. The columns of U are known as the left-singul...
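A sketch of the SVD of a non-square matrix, where the diagonal factor D must be rebuilt as a rectangular diagonal to reconstruct A (the matrix values are illustrative):

```python
import numpy as np

# SVD exists for every real matrix, square or not: A = U D V^T.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])        # 2 x 3: no eigendecomposition here

U, s, Vt = np.linalg.svd(A)            # s holds the singular values

# Rebuild the (generally non-square) diagonal D and reconstruct A.
D = np.zeros(A.shape)
D[:len(s), :len(s)] = np.diag(s)
assert np.allclose(U @ D @ Vt, A)

# Singular values are non-negative and sorted in descending order.
assert (s >= 0).all() and (np.diff(s) <= 0).all()
```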
...section. 2.9 The Moore-Penrose Pseudoinverse. Matrix inversion is not defined for matrices that are not square. Suppose we want to make a left-inverse B of a matrix A, so that we can solve a linear equation Ax = y  (2.44)
...by left-multiplying each side to obtain x = By.  (2.45)  Depending on the structure of the problem, it may not be possible to design a unique mapping from A to B. If A is taller than it is wide, then it is possible for this equation to have no solution. If A is wider than it is tall, then t...
...the reciprocal of its non-zero elements then taking the transpose of the resulting matrix. When A has more columns than rows, then solving a linear equation using the pseudoinverse provides one of the many possible solutions. Specifically, it provides the solution x = A^+ y with minimal Euclidean norm ||x||_2 amon...
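The minimal-norm property of x = A^+ y for a wide system can be sketched as follows (the matrix, right-hand side and null-space vector are made up for illustration):

```python
import numpy as np

# Wide system (more columns than rows): infinitely many solutions.
# x = A^+ y picks the solution with the smallest Euclidean norm.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
y = np.array([1.0, 2.0])

x = np.linalg.pinv(A) @ y
assert np.allclose(A @ x, y)           # it really is a solution

# Any other solution (here: x perturbed along the null space of A)
# has a strictly larger norm.
null = np.array([1.0, 1.0, -1.0])      # A @ null == 0
other = x + 0.5 * null
assert np.allclose(A @ other, y)
assert np.linalg.norm(x) < np.linalg.norm(other)
```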
...matrix products and the trace operator. For example, the trace operator provides an alternative way of writing the Frobenius norm of a matrix: ||A||_F = √(tr(A A^T)).  (2.49)  Writing an expression in terms of the trace operator opens up opportunities to manipulate the expression using many...
...mutation holds even if the resulting product has a different shape. For example, for A ∈ R^{m×n} and B ∈ R^{n×m}, we have tr(AB) = tr(BA)  (2.53)  even though AB ∈ R^{m×m} and BA ∈ R^{n×n}. Another useful fact to keep in mind is that a scalar is its own trace: a = tr(a). 2.11 The Determinant. The determinant of a squa...
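Both trace identities above, the cyclic property and the Frobenius-norm formula, are easy to check numerically; a sketch on random matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 2))

# tr(AB) = tr(BA) even though AB is 2x2 and BA is 3x3.
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Frobenius norm via the trace: ||A||_F = sqrt(tr(A A^T)).
fro = np.sqrt(np.trace(A @ A.T))
assert np.isclose(fro, np.linalg.norm(A))
```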
2.12 Example: Principal Components Analysis. One simple machine learning algorithm, principal components analysis or PCA, can be derived using only knowledge of basic linear algebra. Suppose we have a collection of m points {x^{(1)}, ..., x^{(m)}} in R^n. Suppose we would like to apply lossy c...
...its code, x ≈ g(f(x)). PCA is defined by our choice of the decoding function. Specifically, to make the decoder very simple, we choose to use matrix multiplication to map the code back into R^n. Let g(c) = Dc, where D ∈ R^{n×l} is the matrix defining the decoding. Computing the optimal code for this decoder could be ...
The first thing we need to do is figure out how to generate the optimal code point c* for each input point x. One way to do this is to minimize the distance between the input point x and its reconstruction, g(c*). We can measure this distance using a norm. In the principal components algorithm, we use the L^2 norm: c*...
...monotonically increasing for non-negative arguments.

c* = arg min_c ||x − g(c)||_2^2.  (2.55)

The function being minimized simplifies to

(x − g(c))^T (x − g(c))  (2.56)
(by the definition of the L^2 norm, equation 2.30)
= x^T x − x^T g(c) − g(c)^T x + g(c)^T g(c)  (2.57)...
...= −2 x^T g(c) + g(c)^T g(c).  (2.59)

To make further progress, we must substitute in the definition of g(c):

c* = arg min_c −2 x^T D c + c^T D^T D c  (2.60)
= arg min_c −2 x^T D c + c^T I_l c  (2.61)
(by the orthogonality and unit norm constraints on D)
= arg min_c −2 x^T D c + c^T c.  (2.62)

We can solve this optimization problem u...
This makes the algorithm efficient: we can optimally encode x just using a matrix-vector operation. To encode a vector, we apply the encoder function f(x) = D^T x.  (2.66)  Using a further matrix multiplication, we can also define the PCA reconstruction operation: r(x) = g(f(x)) = D D^T x...
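The encoder f(x) = D^T x and reconstruction r(x) = D D^T x can be sketched for an orthonormal D; here D is obtained from a QR factorization of a random matrix, purely for illustration:

```python
import numpy as np

# With orthonormal columns in D, PCA encoding and reconstruction are
# just matrix-vector products: f(x) = D^T x and r(x) = D D^T x.
n, l = 3, 2
D = np.linalg.qr(np.random.default_rng(2).standard_normal((n, l)))[0]
assert np.allclose(D.T @ D, np.eye(l))      # D^T D = I_l

x = np.array([1.0, 2.0, 3.0])
c = D.T @ x                                 # code, in R^l
r = D @ c                                   # reconstruction D D^T x, in R^n

# The reconstruction is the orthogonal projection of x onto span(D):
# the residual is orthogonal to every column of D.
assert np.allclose(D.T @ (x - r), 0.0)
```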
(2.68)  To derive the algorithm for finding d*, we will start by considering the case where l = 1. In this case, D is just a single vector, d. Substituting equation 2.67 into equation 2.68 and simplifying, the problem reduces to

d* = arg min_d Σ_i ||x^{(i)} − d d^T x^{(i)}||_2^2  subject to  ||d||_2 = 1.  (2...
...||d||_2 = 1,  (2.70)  or, exploiting the fact that a scalar is its own transpose, as

d* = arg min_d Σ_i ||x^{(i)} − x^{(i)T} d d||_2^2  subject to  ||d||_2 = 1.  (2.71)

The reader should aim to become familiar with such cosmetic rearrangements.
At this point, it can be helpful to rewrite the problem in terms of a single design matrix of examples, rather than as a sum over separate example vectors. This will allow us to use more compact notation. Let X ∈ R^{m×n} be the matrix defined by stacking all of the vectors describing the points, ...
= \arg\min_d tr( (X - X d d^\top)^\top (X - X d d^\top) )   (2.75)

= \arg\min_d tr(X^\top X) - tr(X^\top X d d^\top) - tr(d d^\top X^\top X) + tr(d d^\top X^\top X d d^\top)   (2.76)

= \arg\min_d -tr(X^\top X d d^\top) - tr(d d^\top X^\top X) + tr(d d^\top X^\top X d d^\top)   (2.77)
(because terms not involving d do not affect the arg min)

= \arg\min_d -2\, tr(X^\top X d d^\top) + tr(d d^\top X^\top X d d^\top)   (2.78)
(because we can cycle the order of the matrices inside a trace)
…
= \arg\max_d tr(X^\top X d d^\top)   subject to d^\top d = 1   (2.83)

= \arg\max_d tr(d^\top X^\top X d)   subject to d^\top d = 1.   (2.84)

This optimization problem may be solved using eigendecomposition. Specifically, the optimal d is given by the eigenvector of X^\top X corresponding to the largest eigenvalue. This derivation is specific to the case of l = 1 and recovers only the first principal component. …
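The claim that the optimal d is the top eigenvector of X^\top X can be checked numerically. The setup below (random data, random competing unit vectors) is our own illustration, not the book's:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 4))

# Top eigenvector of X^T X (np.linalg.eigh returns eigenvalues ascending).
w, V = np.linalg.eigh(X.T @ X)
d_star = V[:, -1]

def recon_error(d):
    # sum_i || x^(i) - d d^T x^(i) ||^2 for a unit vector d
    R = X @ np.outer(d, d)
    return np.sum((X - R) ** 2)

err_star = recon_error(d_star)

# Compare against random unit vectors: none should do better.
errs = [recon_error(u / np.linalg.norm(u))
        for u in rng.standard_normal((50, 4))]
```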
Chapter 3

Probability and Information Theory

In this chapter, we describe probability theory and information theory. Probability theory is a mathematical framework for representing uncertain statements. It provides a means of quantifying uncertainty and axioms for deriving new uncertain statements. In artificial intelligence…
…all of this chapter except for section 3.14, which describes the graphs we use to describe structured probabilistic models for machine learning. If you have absolutely no prior experience with these subjects, this chapter should be sufficient to successfully carry out deep learning research projects, but we do suggest that…
3.1 Why Probability?

Many branches of computer science deal mostly with entities that are entirely deterministic and certain. A programmer can usually safely assume that a CPU will execute each machine instruction flawlessly. Errors in hardware do occur, but are rare enough…
…true by definition, it is difficult to think of any proposition that is absolutely true or any event that is absolutely guaranteed to occur.

There are three possible sources of uncertainty:

1. Inherent stochasticity in the system being modeled. For example, most interpretations of quantum mechanics describe the dynamics of…
…of view, the outcome is uncertain.

3. Incomplete modeling. When we use a model that must discard some of the information we have observed, the discarded information results in uncertainty in the model's predictions. For example, suppose we build a robot that can exactly observe the location of every object around it. …
…robot discretizes space when predicting the future location of these objects, then the discretization makes the robot immediately become uncertain about the precise position of objects: each object could be anywhere within the discrete cell that it was observed to occupy. …
…failure. While it should be clear that we need a means of representing and reasoning about uncertainty, it is not immediately obvious that probability theory can provide all of the tools we want for artificial intelligence applications. Probability theory was originally developed to analyze the frequencies of events. It…
…patient, nor is there any reason to believe that different replicas of the patient would present with the same symptoms yet have varying underlying conditions. In the case of the doctor diagnosing the patient, we use probability to represent a degree of belief, with 1 indicating absolute certainty that the patient has the…
…has certain symptoms. For more details about why a small set of common sense assumptions implies that the same axioms must control both kinds of probability, see Ramsey (1926).

Probability can be seen as the extension of logic to deal with uncertainty. Logic provides a set…
…On its own, a random variable is just a description of the states that are possible; it must be coupled with a probability distribution that specifies how likely each of these states are. Random variables may be discrete or continuous. A discrete random variable is one that has a finite or countably infinite number of…
…mass function, and the reader must infer which probability mass function to use based on the identity of the random variable, rather than the name of the function: P(x) is usually not the same as P(y). The probability mass function maps from a state of a random variable…
P(x = x, y = y) denotes the probability that x = x and y = y simultaneously. We may also write P(x, y) for brevity.

To be a probability mass function on a random variable x, a function P must satisfy the following properties:

• The domain of P must be the set of all possible states of x.
• ∀x ∈ x, 0 ≤ P(x) ≤ 1. An…
…setting its probability mass function to

P(x = x_i) = 1/k   (3.1)

for all i. We can see that this fits the requirements for a probability mass function. The value 1/k is positive because k is a positive integer. We also see that

\sum_i P(x = x_i) = \sum_i 1/k = k/k = 1,   (3.2)

so the distribution is properly normalized. …
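Equations 3.1 and 3.2 can be verified exactly with rational arithmetic. This tiny sketch is ours, not the book's:

```python
from fractions import Fraction

# Uniform distribution over k discrete states: P(x = x_i) = 1/k (equation 3.1).
k = 7
pmf = [Fraction(1, k) for _ in range(k)]

# Equation 3.2: the probabilities sum to k * (1/k) = 1 exactly.
total = sum(pmf)
```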
3.3.2 Continuous Variables and Probability Density Functions

When working with continuous random variables, we describe probability distributions using a probability density function (PDF) rather than a probability mass function. To be a probability density function,…
…of p(x) over that set. In the univariate example, the probability that x lies in the interval [a, b] is given by \int_{[a,b]} p(x)\, dx. For an example of a probability density function corresponding to a specific probability density over a continuous random variable, consider a uniform distribution on an interval…
…1/(b - a). We can see that this is nonnegative everywhere. Additionally, it integrates to 1. We often denote that x follows the uniform distribution on [a, b] by writing x ~ U(a, b).

3.4 Marginal Probability

Sometimes we know the probability distribution over a set of variables and we want to know the probability distribution over just a subset of them. …
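A minimal sketch of the uniform density u(x; a, b) = 1/(b - a), with a numerical check that it is nonnegative and integrates to 1. The function name and the crude grid-based integration are our own choices:

```python
import numpy as np

def uniform_pdf(x, a, b):
    # u(x; a, b) = 1/(b - a) on [a, b], and 0 elsewhere
    x = np.asarray(x, dtype=float)
    return np.where((x >= a) & (x <= b), 1.0 / (b - a), 0.0)

a, b = 2.0, 5.0
xs = np.linspace(a - 1.0, b + 1.0, 200001)
dx = xs[1] - xs[0]
integral = np.sum(uniform_pdf(xs, a, b)) * dx   # crude Riemann sum
```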
The name "marginal probability" comes from the process of computing marginal probabilities on paper. When the values of P(x, y) are written in a grid with different values of x in rows and different values of y in columns, it is natural to sum across a row of the grid, then write…
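The sum-across-a-row procedure described above maps directly onto array operations. The joint table below is invented purely for illustration:

```python
import numpy as np

# A made-up joint distribution P(x, y): 3 states of x (rows), 2 of y (columns).
P = np.array([[0.10, 0.15],
              [0.20, 0.25],
              [0.05, 0.25]])

P_x = P.sum(axis=1)   # marginal P(x): sum across each row
P_y = P.sum(axis=0)   # marginal P(y): sum down each column
```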
P(y = y | x = x) = P(y = y, x = x) / P(x = x).   (3.5)

The conditional probability is only defined when P(x = x) > 0. We cannot compute the conditional probability conditioned on an event that never happens. It is important not to confuse conditional probability with computing what would happen if some action were undertaken. The conditional…
P(x^{(1)}, …, x^{(n)}) = P(x^{(1)}) \prod_{i=2}^{n} P(x^{(i)} | x^{(1)}, …, x^{(i-1)}).   (3.6)

This observation is known as the chain rule or product rule of probability. It follows immediately from the definition of conditional probability in equation 3.5.
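The chain rule can be checked numerically on a small joint table: below, the factorization P(a, b, c) = P(a | b, c) P(b | c) P(c) is recomposed from conditionals computed off an arbitrary random joint. The setup is our own illustration:

```python
import numpy as np

# An arbitrary strictly positive joint P(a, b, c), normalized to sum to 1.
rng = np.random.default_rng(2)
P = rng.random((2, 3, 4)) + 0.1
P /= P.sum()

P_bc = P.sum(axis=0)            # P(b, c)
P_c = P_bc.sum(axis=0)          # P(c)

P_a_given_bc = P / P_bc         # P(a | b, c), via equation 3.5
P_b_given_c = P_bc / P_c        # P(b | c)

# Chain rule: recompose the joint from the conditionals.
recomposed = P_a_given_bc * P_b_given_c * P_c
```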
For example, applying the definition twice, we get

P(a, b, c) = P(a | b, c) P(b, c)
P(b, c) = P(b | c) P(c)
P(a, b, c) = P(a | b, c) P(b | c) P(c).

3.7 Independence and Conditional Independence

Two random variables x and y are independent if their probability distribution can be expressed as a product of two factors, one involving only x and one involving only y: …
∀x ∈ x, y ∈ y, z ∈ z,  p(x = x, y = y | z = z) = p(x = x | z = z)\, p(y = y | z = z).   (3.8)

We can denote independence and conditional independence with compact notation: x ⊥ y means that x and y are independent, while x ⊥ y | z means that x and y are conditionally independent given z.

3.8 Expectation, Variance and Covariance
…
When the identity of the distribution is clear from the context, we may simply write the name of the random variable that the expectation is over, as in E_x[f(x)]. If it is clear which random variable the expectation is over, we may omit the subscript entirely, as in E[f(x)]. …
Var(f(x)) = E[ (f(x) - E[f(x)])^2 ].   (3.12)

When the variance is low, the values of f(x) cluster near their expected value. The square root of the variance is known as the standard deviation.

The covariance gives some sense of how much two values are linearly related to each other, as well as the scale of these…
…on a relatively high value at the times that the other takes on a relatively low value, and vice versa. Other measures such as correlation normalize the contribution of each variable in order to measure only how much the variables are related, rather than also being affected by the scale of the separate variables. The notions of covariance and dependence are related, but are in fact distinct concepts. …
…s: with probability 1/2, we choose the value of s to be 1. Otherwise, we choose the value of s to be -1. We can then generate a random variable y by assigning y = sx. Clearly, x and y are not independent, because x completely determines the magnitude of y. However, Cov(x, y) = 0. …
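A quick simulation of this example, with x uniform on [-1, 1] and s = ±1 with equal probability as in the text. The sample covariance comes out near zero even though y's magnitude is completely determined by x (the sample size and seed are our own choices):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200_000
x = rng.uniform(-1.0, 1.0, n)          # x ~ U(-1, 1)
s = rng.choice([-1.0, 1.0], n)         # s = +/-1 with probability 1/2 each
y = s * x                              # y's magnitude is fully determined by x

# Sample covariance E[xy] - E[x]E[y]: should be near zero.
cov_xy = np.mean(x * y) - np.mean(x) * np.mean(y)

# Yet |y| = |x| always, so x and y are certainly not independent.
magnitudes_match = bool(np.all(np.abs(y) == np.abs(x)))
```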
…parameter φ ∈ [0, 1], which gives the probability of the random variable being equal to 1. It has the following properties:

P(x = 1) = φ   (3.16)
P(x = 0) = 1 - φ   (3.17)
P(x = x) = φ^x (1 - φ)^{1-x}   (3.18)
E_x[x] = φ   (3.19)
Var_x(x) = φ(1 - φ)   (3.20)

3.9.2 Multinoulli Distribution

The multinoulli…
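The Bernoulli properties in equations 3.16–3.20 can be verified exactly with rational arithmetic. This is a sketch of ours, not the book's code:

```python
from fractions import Fraction

phi = Fraction(3, 10)

def bernoulli_pmf(x, phi):
    # P(x = x) = phi^x * (1 - phi)^(1 - x)   (equation 3.18)
    return phi**x * (1 - phi)**(1 - x)

mean = sum(x * bernoulli_pmf(x, phi) for x in (0, 1))             # equation 3.19
var = sum((x - mean)**2 * bernoulli_pmf(x, phi) for x in (0, 1))  # equation 3.20
```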
…over vectors in {0, …, n}^k representing how many times each of the k categories is visited when n samples are drawn from a multinoulli distribution. Many texts use the term "multinomial" to refer to multinoulli distributions without clarifying that they refer only to the case n = 1.
…parametrized by a vector p ∈ [0, 1]^{k-1}, where p_i gives the probability of the i-th state. The final, k-th state's probability is given by 1 - 1^\top p. Note that we must constrain 1^\top p ≤ 1. Multinoulli distributions are often used to refer to distributions over categories of objects…
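A minimal sketch of this parametrization for k = 4 (the numbers are arbitrary): only the first k - 1 probabilities are stored, and the last is recovered as 1 - 1^\top p.

```python
import numpy as np

# Store the first k - 1 probabilities; the k-th is 1 - 1^T p.
p = np.array([0.1, 0.3, 0.2])            # p_1 .. p_{k-1}, with 1^T p <= 1
assert p.sum() <= 1.0                    # the constraint noted above

full_pmf = np.append(p, 1.0 - p.sum())   # probabilities of all k = 4 states
```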
…so any distribution described by a small number of parameters must impose strict limits on the distribution.

3.9.3 Gaussian Distribution

The most commonly used distribution over real numbers is the normal distribution, also known as the Gaussian distribution:

N(x; µ, σ^2) = \sqrt{1/(2πσ^2)}\, \exp( -(x - µ)^2 / (2σ^2) ).   (3.21)
…
…the distribution is to use a parameter β ∈ (0, ∞) to control the precision, or inverse variance, of the distribution:

N(x; µ, β^{-1}) = \sqrt{β/(2π)}\, \exp( -(β/2)(x - µ)^2 ).   (3.22)

Normal distributions are a sensible choice for many applications. In the absence of prior knowledge about what form a distribution over the real…
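The two parametrizations in equations 3.21 and 3.22 should agree when β = 1/σ². A small numerical check (function names are ours):

```python
import math

def normal_pdf(x, mu, sigma2):
    # equation 3.21: N(x; mu, sigma^2)
    return math.sqrt(1.0 / (2.0 * math.pi * sigma2)) \
        * math.exp(-((x - mu) ** 2) / (2.0 * sigma2))

def normal_pdf_precision(x, mu, beta):
    # equation 3.22: N(x; mu, beta^{-1}), beta = precision = 1/sigma^2
    return math.sqrt(beta / (2.0 * math.pi)) \
        * math.exp(-(beta / 2.0) * (x - mu) ** 2)

mu, sigma2 = 1.0, 4.0
pairs = [(normal_pdf(x, mu, sigma2), normal_pdf_precision(x, mu, 1.0 / sigma2))
         for x in (-2.0, 0.0, 1.0, 3.5)]
```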
[Figure 3.1: The normal distribution. The normal distribution N(x; µ, σ^2) exhibits a classic "bell curve" shape, with the maximum of p(x) at x = µ and inflection points at x = µ ± σ.]
…successfully as normally distributed noise, even if the system can be decomposed into parts with more structured behavior. Second, out of all possible probability distributions with the same variance, the normal distribution encodes the maximum amount of uncertainty over the real numbers. We can thus think of the normal…
…The parameter µ still gives the mean of the distribution, though now it is vector-valued. The parameter Σ gives the covariance matrix of the distribution. As in the univariate case, when we wish to evaluate the PDF several times for many different values of the parameters, the…
…often want to have a probability distribution with a sharp point at x = 0. To accomplish this, we can use the exponential distribution:

p(x; λ) = λ 1_{x≥0} \exp(-λx).   (3.25)

The exponential distribution uses the indicator function 1_{x≥0} to assign probability zero to all negative values of x. A closely related probability…
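A sketch of the density in equation 3.25, checking numerically that it integrates to 1 and puts no mass on negative values. The function name and grid integration are our own:

```python
import numpy as np

lam = 2.0

def exponential_pdf(x, lam):
    # p(x; lambda) = lambda * 1_{x >= 0} * exp(-lambda * x)   (equation 3.25)
    x = np.asarray(x, dtype=float)
    # clip keeps exp() well-behaved for very negative x; where() zeroes them out
    return np.where(x >= 0.0, lam * np.exp(-lam * np.clip(x, 0.0, None)), 0.0)

xs = np.linspace(-2.0, 30.0, 400001)
dx = xs[1] - xs[0]
pdf = exponential_pdf(xs, lam)
integral = np.sum(pdf) * dx     # crude Riemann sum; should be close to 1
```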
…function is defined such that it is zero-valued everywhere except 0, yet integrates to 1. The Dirac delta function is not an ordinary function that associates each value x with a real-valued output; instead it is a different kind of
mathematical object called a generalized function that is defined in terms of its properties when integrated. We can think of the Dirac delta function as being the limit point of a series of functions that put less and less mass on all points other than zero. By defining p(…