...distribution can be conceptualized as a multinoulli distribution, with a probability associated with each possible input value that is simply equal to the empirical frequency of that value in the training set. We can view the empirical distribution formed from a dataset of training examples as specifying the distribution... [Source: Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville, p. 81]
Chapter 3. Probability and Information Theory

The mixture model is one simple strategy for combining probability distributions to create a richer distribution. In chapter 16, we explore the art of building complex probability distributions from simple ones in more detail. The mixture model allows us to briefly glimpse a...
A common type of mixture model is the Gaussian mixture model, in which the components p(x | c = i) are Gaussians. Each component has a separately parametrized mean μ^(i) and covariance Σ^(i). Some mixtures can have more constraints. For example, the covariances could be shared across components via the constraint ...
A Gaussian mixture model is a universal approximator of densities, in the sense that any smooth density can be approximated with any specific, nonzero amount of error by a Gaussian mixture model with enough components. Figure 3.2 shows samples from a Gaussian mixture model.

3.10 Useful Properties of Common Functions ...
Figure 3.2 (axes x1, x2): Samples from a Gaussian mixture model. In this example, there are three components. From left to right, the first component has an isotropic covariance matrix, meaning it has the same amount of variance in each direction. The second has a diagonal covaria...
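The sampling process behind such a figure can be sketched numerically. The weights, means, and covariances below are hypothetical illustrations, not the parameters behind the book's figure 3.2:

```python
import numpy as np

rng = np.random.default_rng(0)

# Mixture weights P(c = i) and per-component parameters (illustrative values).
weights = np.array([0.3, 0.4, 0.3])
means = np.array([[-4.0, 0.0], [0.0, 0.0], [4.0, 0.0]])
covs = np.array([
    np.eye(2),                 # isotropic covariance
    np.diag([2.0, 0.5]),       # diagonal covariance
    [[1.0, 0.8], [0.8, 1.0]],  # full covariance
])

def sample_gmm(n):
    """Ancestral sampling: draw the component c first, then x | c."""
    comps = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[c], covs[c]) for c in comps])

samples = sample_gmm(1000)
print(samples.shape)  # (1000, 2)
```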
changes in its input. Another commonly encountered function is the softplus function (Dugas et al., 2001):

ζ(x) = log(1 + exp(x)).  (3.31)

The softplus function can be useful for producing the β or σ parameter of a normal distribution because its range is (0, ∞). It also arises commonly when manipulatin...
Figure 3.3: The logistic sigmoid function, σ(x). Figure 3.4: The softplus function, ζ(x).
σ(x) = exp(x) / (exp(x) + exp(0))  (3.33)
d/dx σ(x) = σ(x)(1 − σ(x))  (3.34)
1 − σ(x) = σ(−x)  (3.35)
log σ(x) = −ζ(−x)  (3.36)
d/dx ζ(x) = σ(x)  (3.37)
∀x ∈ (0, 1), σ⁻¹(x) = log(x / (1 − x))  (3.38)
∀x > 0, ζ⁻¹(x) = l...
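These identities are easy to check numerically. A quick sketch with NumPy (the function names here are ours, not the book's):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softplus(x):
    return np.log1p(np.exp(x))

x = np.linspace(-5, 5, 101)

# 1 - sigma(x) = sigma(-x)              (eq. 3.35)
assert np.allclose(1 - sigmoid(x), sigmoid(-x))
# log sigma(x) = -softplus(-x)          (eq. 3.36)
assert np.allclose(np.log(sigmoid(x)), -softplus(-x))
# sigma is the derivative of softplus   (eq. 3.37), via central differences
h = 1e-6
assert np.allclose((softplus(x + h) - softplus(x - h)) / (2 * h), sigmoid(x), atol=1e-5)
# the inverse of sigma is the logit     (eq. 3.38)
p = np.linspace(0.01, 0.99, 99)
assert np.allclose(sigmoid(np.log(p / (1 - p))), p)
print("all identities hold numerically")
```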
the logit in statistics, but this term is more rarely used in machine learning. Equation 3.41 provides extra justification for the name "softplus." The softplus function is intended as a smoothed version of the positive part function, x⁺ = max{0, x}. The positive part function is the counterpart of the negative p...
(x), we can compute the desired quantity using Bayes' rule:

p(x | y) = p(x) p(y | x) / p(y).  (3.42)

Note that while p(y) appears in the formula, it is usually feasible to compute p(y) = Σₓ p(y | x) p(x), so we do not need to begin with knowledge of p(y).
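As a concrete sketch of equation 3.42, here is Bayes' rule applied to a small hypothetical discrete example, with the marginal p(y) computed by the sum just described (all numbers are invented for illustration):

```python
import numpy as np

# Hypothetical example: x is a disease indicator, y a test result.
p_x = np.array([0.01, 0.99])            # P(x): [diseased, healthy]
p_y_given_x = np.array([[0.95, 0.05],   # P(y | x=diseased): [positive, negative]
                        [0.10, 0.90]])  # P(y | x=healthy)

# Marginal P(y) = sum_x P(y | x) P(x), as noted in the text.
p_y = p_x @ p_y_given_x

# Bayes' rule (eq. 3.42): P(x | y) = P(x) P(y | x) / P(y)
p_x_given_pos = p_x * p_y_given_x[:, 0] / p_y[0]
print(p_x_given_pos)  # posterior over x after observing a positive test
```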
Bayes' rule is straightforward to derive from the definition of conditional probability, but it is useful to know the name of this formula since many texts refer to it by name. It is named after the Reverend Thomas Bayes, who first discovered a special case of the formula. T...
sets S1 and S2 such that p(x ∈ S1) + p(x ∈ S2) > 1 but S1 ∩ S2 = ∅. These sets are generally constructed making very heavy use of the infinite precision of real numbers, for example by making fractal-shaped sets or sets that are defined by transforming the set of rational numbers.² One of the key contributions of m...
this textbook. For our purposes, it is sufficient to understand the intuition that a set of measure zero occupies no volume in the space we are measuring. For example, within ℝ², a line has measure zero, while a filled polygon has positive measure. Likewise, an individual point has measure zero. Any union of countably many s...
measure zero. Because the exceptions occupy a negligible amount of space, they can be safely ignored for many applications. Some important results in probability theory hold for all discrete values but only hold "almost everywhere" for continuous values. Another technical...
be on this interval. This means

∫ p_y(y) dy = 1/2,  (3.43)

which violates the definition of a probability distribution. This is a common mistake. The problem with this approach is that it fails to account for the distortion of space introduced by the function g. Recall that the probability of x lying in an infinitesima...
...ly

p_x(x) = p_y(g(x)) |∂g(x)/∂x|.  (3.46)

In higher dimensions, the derivative generalizes to the determinant of the Jacobian matrix, the matrix with J_{i,j} = ∂x_i/∂y_j. Thus, for real-valued vectors x and y,

p_x(x) = p_y(g(x)) |det(∂g(x)/∂x)|.  (3.47)
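The change-of-variables correction can be checked with a quick Monte Carlo sketch of the running example (x uniform on (0, 1), y = x/2). The Jacobian factor |dx/dy| = 2 is exactly what restores a valid density:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, size=200_000)
y = x / 2  # y = g(x) with g(x) = x/2

# Histogram-based density estimate of y on (0, 1/2).
hist, edges = np.histogram(y, bins=50, range=(0, 0.5), density=True)

# Correct density: p_y(y) = p_x(2y) * |dx/dy| = 1 * 2 = 2 on (0, 1/2).
# The naive answer p_y(y) = 1 would integrate to only 1/2 (eq. 3.43).
print(hist.min(), hist.max())  # every bin should be close to 2
```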
3.13 Information Theory

Information theory is a branch of applied mathematics that revolves around quantifying how much information is present in a signal. It was originally invented to study sending messages from discrete alphabets over a noisy channel, such as communicat...
occurred. A message saying "the sun rose this morning" is so uninformative as to be unnecessary to send, but a message saying "there was a solar eclipse this morning" is very informative. We would like to quantify information in a way that formalizes this intuition. Specifically:

• Likely events should have low info...
information gained by observing an event of probability 1/e. Other texts use base-2 logarithms and units called bits or shannons; information measured in bits is just a rescaling of information measured in nats. When x is continuous, we use the same definition of informat...
logarithm is base 2, otherwise the units are different) needed on average to encode symbols drawn from a distribution p. Distributions that are nearly deterministic (where the outcome is nearly certain) have low entropy; distributions that are closer to uniform have high entropy. See figure 3.5 for a demonstration. When ...
and the natural logarithm) needed to send a message containing symbols drawn from probability distribution p, when we use a code that was designed to minimize the length of messages drawn from probability distribution q. The KL divergence has many useful properties, most notably that it is non-negative. The KL diver...
Figure 3.5 (horizontal axis: probability from 0 to 1; vertical axis: Shannon entropy in nats, 0 to 0.7): This plot shows how distributions that are closer to deterministic have low Shannon entropy while distributions that are close to uniform have high Shannon entropy. O...
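The shape shown in figure 3.5 can be reproduced with a few lines. This sketch computes the entropy of a Bernoulli distribution in nats (our own helper, using the convention 0 log 0 = 0):

```python
import numpy as np

def bernoulli_entropy(p):
    """Shannon entropy in nats of a Bernoulli(p) distribution,
    with the convention 0 log 0 = 0."""
    p = np.asarray(p, dtype=float)
    q = 1 - p
    out = np.zeros_like(p)
    mask = (p > 0) & (p < 1)
    out[mask] = -(p[mask] * np.log(p[mask]) + q[mask] * np.log(q[mask]))
    return out

probs = np.array([0.0, 0.1, 0.5, 0.9, 1.0])
print(bernoulli_entropy(probs))
# Entropy is 0 for deterministic outcomes and maximal (log 2 nats) at p = 0.5.
```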
the choice of whether to use D_KL(p‖q) or D_KL(q‖p). See figure 3.6 for more detail. A quantity that is closely related to the KL divergence is the cross-entropy H(p, q) = H(p) + D_KL(p‖q), which is similar to the KL divergence but lacking the term on the left:

H(p, q) = −E_{x∼p} log q(x).  (3.51)

M...
Figure 3.6 (two panels of probability density vs. x, each showing p(x) and q*(x); left, q* = argmin_q D_KL(p‖q); right, q* = argmin_q D_KL(q‖p)): The KL divergence is asymmetric. Suppose we have a distribution p(x) and wish to approximate it with another distribution q(x). W...
choice of the direction of the KL divergence reflects which of these considerations takes priority for each application. (Left) The effect of minimizing D_KL(p‖q). In this case, we select a q that has high probability where p has high probability. When p has multiple modes, q chooses to blur the modes together, in or...
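Both the asymmetry of the KL divergence and the cross-entropy relation H(p, q) = H(p) + D_KL(p‖q) are easy to verify on a small discrete example (the distributions below are arbitrary illustrations):

```python
import numpy as np

def kl(p, q):
    """D_KL(p || q) for discrete distributions with full support, in nats."""
    return float(np.sum(p * np.log(p / q)))

p = np.array([0.5, 0.4, 0.1])
q = np.array([0.3, 0.3, 0.4])

# Asymmetry: the two directions generally disagree.
print(kl(p, q), kl(q, p))
assert not np.isclose(kl(p, q), kl(q, p))

# Cross-entropy H(p, q) = H(p) + D_KL(p || q)
H_p = -float(np.sum(p * np.log(p)))
H_pq = -float(np.sum(p * np.log(q)))
assert np.isclose(H_pq, H_p + kl(p, q))
```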
describe the entire joint probability distribution can be very inefficient (both computationally and statistically). Instead of using a single function to represent a probability distribution, we can split a probability distribution into many factors that we multiply together. ...
...nd a factorization into distributions over fewer variables. We can describe these kinds of factorizations using graphs. Here we use the word "graph" in the sense of graph theory: a set of vertices that may be connected to each other with edges. When we represent the factorization of a probability distribution with...
x_i, denoted Pa_G(x_i):

p(x) = ∏_i p(x_i | Pa_G(x_i)).  (3.53)

See figure 3.7 for an example of a directed graph and the factorization of probability distributions it represents. Undirected models use graphs with undirected edges, and they represent factorizations into a set of functions; unlike in the directed...
Figure 3.7: A directed graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as

p(a, b, c, d, e) = p(a) p(b | a) p(c | a, b) p(d | b) p(e | c).  (3.54)

This graph allo...
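A quick sketch can confirm that the factorization in equation 3.54 defines a valid joint distribution over binary variables; the conditional probability tables below are randomly generated placeholders:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

def random_cpt(n_parents):
    """A hypothetical conditional probability table for a binary variable:
    P(x=1 | parents) for each of the 2**n_parents parent configurations."""
    return rng.uniform(size=(2,) * n_parents)

p_a1 = 0.6
p_b1_a = random_cpt(1)   # P(b=1 | a)
p_c1_ab = random_cpt(2)  # P(c=1 | a, b)
p_d1_b = random_cpt(1)   # P(d=1 | b)
p_e1_c = random_cpt(1)   # P(e=1 | c)

def bern(p1, v):
    return p1 if v == 1 else 1 - p1

def joint(a, b, c, d, e):
    """Eq. 3.54: p(a,b,c,d,e) = p(a) p(b|a) p(c|a,b) p(d|b) p(e|c)."""
    return (bern(p_a1, a) * bern(p_b1_a[a], b) * bern(p_c1_ab[a, b], c)
            * bern(p_d1_b[b], d) * bern(p_e1_c[c], e))

total = sum(joint(*vals) for vals in itertools.product([0, 1], repeat=5))
print(total)  # 1.0: the factorization defines a valid joint distribution
```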
there is no constraint that the factor must sum or integrate to 1 like a probability distribution. The probability of a configuration of random variables is proportional to the product of all of these factors; assignments that result in larger factor values are more likely. Of course, there is no guarantee that this pr...
Figure 3.8: An undirected graphical model over random variables a, b, c, d and e. This graph corresponds to probability distributions that can be factored as

p(a, b, c, d, e) = (1/Z) φ⁽¹⁾(a, b, c) φ⁽²⁾(b, d) φ⁽³⁾(c, e).  (3.56)

This graph allo...
Chapter 4. Numerical Computation

Machine learning algorithms usually require a high amount of numerical computation. This typically refers to algorithms that solve mathematical problems by methods that update estimates of the solution via an iterative process, rather than analytically deriving a formula providing a s...
is problematic, especially when it compounds across many operations, and can cause algorithms that work in theory to fail in practice if they are not designed to minimize the accumulation of rounding error. One form of rounding error that is particularly devastating is underflow. Underflow occurs when numbers near zero a...
software environments will raise exceptions when this occurs, others will return a result with a placeholder not-a-number value) or taking the logarithm of zero (this is usually treated as −∞, which then becomes not-a-number if it is used for many further arithmetic operations...
x_i are equal to some constant c. Analytically, we can see that all of the outputs should be equal to 1/n. Numerically, this may not occur when c has large magnitude. If c is very negative, then exp(c) will underflow. This means the denominator of the softmax will become 0, so the final result is undefined. When c is ve...
value of 1, which rules out the possibility of underflow in the denominator leading to a division by zero. There is still one small problem. Underflow in the numerator can still cause the expression as a whole to evaluate to zero. This means that if we implement log softmax(x) by first running the softmax subroutine th...
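The stabilization described here, evaluating softmax(z) with z = x − max_i x_i, together with a log softmax computed directly rather than as log(softmax(x)), can be sketched as follows:

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax: subtracting max(x) leaves the result
    unchanged analytically but bounds the largest exponent argument at 0."""
    z = x - np.max(x)
    e = np.exp(z)
    return e / e.sum()

def log_softmax(x):
    """Stable log softmax, computed directly so that underflow in the
    numerator cannot spuriously produce log(0) = -inf."""
    z = x - np.max(x)
    return z - np.log(np.sum(np.exp(z)))

# Naive softmax fails for large-magnitude constants; the stable one does not.
c = 1000.0
x = np.full(4, c)
naive = np.exp(x) / np.exp(x).sum()  # overflow: inf / inf = nan
print(naive)                          # all nan, with a runtime warning
print(softmax(x))                     # [0.25 0.25 0.25 0.25]
```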
stabilized. Theano (Bergstra et al., 2010; Bastien et al., 2012) is an example of a software package that automatically detects and stabilizes many common numerically unstable expressions that arise in the context of deep learning.

4.2 Poor Conditioning

Conditioning refers to how ra...
result of rounding error during matrix inversion. Poorly conditioned matrices amplify pre-existing errors when we multiply by the true matrix inverse. In practice, the error will be compounded further by numerical errors in the inversion process itself.

4.3 Gradient-Based Optimization

Most deep learning algorithms...
Figure annotations (axes x from −2.0 to 2.0 and f(x)): global minimum at x = 0; since f′(x) = 0, gradient descent halts here. For x < 0, we have f′(x) < 0, so we can decrease f by moving rightward. For x > 0, we have f′(x) > 0, so w...
. Suppose we have a function y = f(x), where both x and y are real numbers. The derivative of this function is denoted as f′(x) or as dy/dx. The derivative f′(x) gives the slope of f(x) at the point x. In other words, it specifies how to scale a small change in the input in order to obtain the corresponding ch...
4.1 for an example of this technique. When f′(x) = 0, the derivative provides no information about which direction to move. Points where f′(x) = 0 are known as critical points or stationary points. A local minimum is a point where f(x) is lower than at all neighboring points, so it is no longer possible to decrease f(x...
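A minimal sketch of gradient descent in one dimension, using f(x) = x²/2 so the derivative is simply x (the learning rate and step count are arbitrary choices):

```python
# Gradient descent on f(x) = x^2 / 2, whose derivative is f'(x) = x.
def gradient_descent(x0, lr=0.1, steps=100):
    x = x0
    for _ in range(steps):
        grad = x           # f'(x) = x for f(x) = x^2 / 2
        x = x - lr * grad  # step opposite the sign of the derivative
    return x

x_min = gradient_descent(2.0)
print(x_min)  # converges toward the critical point x = 0
```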
Figure 4.2 (panels: minimum, maximum, saddle point): Examples of each of the three types of critical points in 1-D. A critical point is a point with zero slope. Such a point can either be a local minimum, which is lower than the neighboring points, a local maximum, which is higher than the neig...
we optimize functions that may have many local minima that are not optimal, and many saddle points surrounded by very flat regions. All of this makes optimization very difficult, especially when the input to the function is multidimensional. We therefore usually settle for finding a value of f that is very low, but not neces...
Figure 4.3 (axes x and f(x); annotations: "ideally, we would like to arrive at the global minimum, but this might not be possible"; "this local minimum performs nearly as well as the global one, so it is an acceptable halting point"; "this local minimum performs poorly and should be avoided"): Optimization algo...
(x + αu) with respect to α evaluates to u⊤∇ₓf(x) when α = 0. To minimize f, we would like to find the direction in which f decreases the fastest. We can do this using the directional derivative:

min_{u, u⊤u=1} u⊤∇ₓf(x)  (4.3)
= min_{u, u⊤u=1} ‖u‖₂ ‖∇ₓf(x)‖₂ cos θ,  (4.4)

where θ is the angle between u and the ...
where ε is the learning rate, a positive scalar determining the size of the step. We can choose ε in several different ways. A popular approach is to set ε to a small constant. Sometimes, we can solve for the step size that makes the directional derivative vanish. Another approach is to evaluate ...
an objective function of discrete parameters is called hill climbing (Russell and Norvig, 2003).

4.3.1 Beyond the Gradient: Jacobian and Hessian Matrices

Sometimes we need to find all of the partial derivatives of a function whose input and output are both vectors. The matrix containing all such partial derivatives ...
us how the first derivative will change as we vary the input. This is important because it tells us whether a gradient step will cause as much of an improvement as we would expect based on the gradient alone. We can think of the second derivative as measuring curvature. Suppose we have a quadratic function (many functi...
Figure 4.4 (panels: negative curvature, no curvature, positive curvature; axes x and f(x)): The second derivative determines the curvature of a function. Here we show quadratic functions with various curvature. The dashed line indicates the value of the cost function we would expect ...
Hessian matrix H(f)(x) is defined such that

H(f)(x)_{i,j} = ∂²f(x) / ∂x_i ∂x_j.  (4.6)

Equivalently, the Hessian is the Jacobian of the gradient. Anywhere that the second partial derivatives are continuous, the differential operators are commutative, i.e. their order can be swapped:

∂²f(x) / ∂x_i ∂x_j = ∂²f(x) / ∂x_j ∂x_i...
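The symmetry of the Hessian, and its agreement with analytic second partials, can be checked with finite differences; the test function below is our own illustration, not one from the book:

```python
import numpy as np

def f(x):
    return x[0] ** 2 * x[1] + np.sin(x[0] * x[1])

def hessian_fd(f, x, h=1e-5):
    """Hessian of a scalar function by central finite differences
    (an illustrative sketch, not a production differentiator)."""
    n = len(x)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei = np.zeros(n); ei[i] = h
            ej = np.zeros(n); ej[j] = h
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * h * h)
    return H

x = np.array([0.7, -0.3])
H = hessian_fd(f, x)

# Analytic second partials of f(x1, x2) = x1^2 x2 + sin(x1 x2):
s, c = np.sin(x[0] * x[1]), np.cos(x[0] * x[1])
H_exact = np.array([
    [2 * x[1] - x[1] ** 2 * s, 2 * x[0] + c - x[0] * x[1] * s],
    [2 * x[0] + c - x[0] * x[1] * s, -x[0] ** 2 * s],
])
print(np.allclose(H, H_exact, atol=1e-4))  # True; note H is symmetric
```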
eigenvectors. The second derivative in a specific direction represented by a unit vector d is given by d⊤Hd. When d is an eigenvector of H, the second derivative in that direction is given by the corresponding eigenvalue. For other directions of d, the directional second derivative is a w...
)⊤H(x − x⁽⁰⁾),  (4.8)

where g is the gradient and H is the Hessian at x⁽⁰⁾. If we use a learning rate of ε, then the new point x will be given by x⁽⁰⁾ − εg. Substituting this into our approximation, we obtain

f(x⁽⁰⁾ − εg) ≈ f(x⁽⁰⁾) − εg⊤g + ½ε²g⊤Hg.  (4.9)

There are three terms here: the original ...
g⊤Hg is positive, solving for the optimal step size that decreases the Taylor series approximation of the function the most yields

ε* = g⊤g / (g⊤Hg).  (4.10)

In the worst case, when g aligns with the eigenvector of H corresponding to the maximal eigenvalue λ_max, this optimal step size is given by 1/λ_max. To the extent ...
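Equation 4.10 can be verified on a small quadratic; the sketch below compares ε* against a brute-force search over step sizes (the Hessian and starting point are arbitrary illustrations):

```python
import numpy as np

# Quadratic f(x) = 0.5 x^T H x with a diagonal Hessian of condition number 5.
H = np.diag([1.0, 5.0])
x = np.array([10.0, 2.0])
g = H @ x  # gradient of f at x

# Optimal step size from eq. 4.10: eps* = (g^T g) / (g^T H g)
eps_star = (g @ g) / (g @ H @ g)

def f_after(eps):
    x_new = x - eps * g
    return 0.5 * x_new @ H @ x_new

# eps* should match a dense grid search over step sizes along -g.
candidates = np.linspace(0.0, 1.0, 1001)
best = candidates[np.argmin([f_after(e) for e in candidates])]
print(eps_star, best)  # nearly equal
```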
f′(x − ε) < 0 and f′(x + ε) > 0 for small enough ε. In other words, as we move right, the slope begins to point uphill to the right, and as we move left, the slope begins to point uphill to the left. Thus, when f′(x) = 0 and f″(x) > 0, we can conclude that x is a local minimum. Simila...
, where ∇ₓf(x) = 0, we can examine the eigenvalues of the Hessian to determine whether the critical point is a local maximum, local minimum, or saddle point. When the Hessian is positive definite (all its eigenvalues are positive), the point is a local minimum. This can be seen by observing that the directional sec...
...dimensional second derivative test can be inconclusive, just like the univariate version (see figure 4.5). The test is inconclusive whenever all of the non-zero eigenvalues have the same sign but at least one eigenvalue is zero. This is because the univariate second derivative test is inconclusive in the cross section correspond...
must be small enough to avoid overshooting the minimum and going uphill in directions with strong positive curvature. This usually means that the step size is too small to make significant progress in other directions with less curvature. See figure 4.6 for an example. This issue can be resolved by using information fro...
Figure 4.5: A saddle point containing both positive and negative curvature. The function in this example is f(x) = x₁² − x₂². Along the axis corresponding to x₁, the function curves upward. This axis is an eigenvector of the Hessian and has a positive eigenvalue. Along the axis co...
Figure 4.6 (axes x₁ and x₂, each from −30 to 20): Gradient descent fails to exploit the curvature information contained in the Hessian matrix. Here we use gradient descent to minimize a quadratic function f(x) whose Hessian matrix has condition number 5. This means that t...
the Hessian corresponding to the eigenvector pointed in this direction indicates that this directional derivative is rapidly increasing, so an optimization algorithm based on the Hessian could predict that the steepest direction is not actually a promising search direction in this context.
the search. the simplest method for doing so is known as newton's method. newton's method is based on using a second-order taylor series expansion to approximate f(x) near some point x^(0): f(x) ≈ f(x^(0)) + (x − x^(0))^⊤ ∇_x f(x^(0)) + (1/2)(x − x^(0))^⊤ H(f)(x^(0))(x − x^(0))... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 107 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
quadratic but can be locally approximated as a positive definite quadratic, newton's method consists of applying equation 4.12 multiple times. iteratively updating the approximation and jumping to the minimum of the approximation can reach the critical point much faster than gradient descent would. this is a useful p... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 107 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
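To make the contrast with gradient descent concrete, here is a minimal sketch of one newton update x ← x − H⁻¹∇f(x) on a positive definite quadratic; H and b are made-up illustrative values. Because the function is exactly quadratic, a single step lands on the minimum:

```python
import numpy as np

# Newton's method on a positive definite quadratic f(x) = 1/2 x^T H x - b^T x.
# For a true quadratic, one step x <- x - H^{-1} grad f(x) jumps straight to
# the minimum; gradient descent would need many steps. (Illustrative sketch;
# H and b are made-up values, not from the text.)

H = np.array([[5.0, 0.0],
              [0.0, 1.0]])   # Hessian with condition number 5
b = np.array([1.0, 1.0])

def grad(x):
    return H @ x - b          # gradient of the quadratic

x = np.array([10.0, -3.0])                    # arbitrary starting point
x_newton = x - np.linalg.solve(H, grad(x))    # single Newton update
```

After this one step, `grad(x_newton)` is zero, i.e. `x_newton` is the exact minimizer H⁻¹b.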
a wide variety of functions, but come with almost no guarantees. deep learning algorithms tend to lack guarantees because the family of functions used in deep learning is quite complicated. in many other fields, the dominant approach to optimization is to design optimization algorithms for a limited family of functions.... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 107 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
a small change in the output. lipschitz continuity is also a fairly weak constraint, and many optimization problems in deep learning can be made lipschitz continuous with relatively minor modifications. perhaps the most successful field of specialized optimization is convex optimizatio... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 108 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
see boyd and vandenberghe (2004) or rockafellar (1997). 4.4 constrained optimization sometimes we wish not to maximize or minimize a function f(x) over all possible values of x. instead we may wish to find the maximal or minimal value of f(x) for values of x in some set s. this is known as constrained op... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 108 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
the line back into the constraint region. when possible, this method can be made more efficient by projecting the gradient into the tangent space of the feasible region before taking the step or beginning the line search (rosen, 1960). a more sophisticated approach is to design a different, unconstrained optimization prob... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 108 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
x ∈ r^2 with x constrained to have exactly unit l^2 norm, we can instead minimize g(θ) = f([cos θ, sin θ]^⊤) with respect to θ, then return [cos θ, sin θ] as the solution to the original problem. this approach requires creativity; the transformation between optimization problems m... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 109 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
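The unit-circle reparametrization described above can be sketched directly; the objective f below is a made-up example, and the coarse grid search over θ stands in for any unconstrained optimizer:

```python
import numpy as np

# Reparametrization sketch: to minimize f(x) over x in R^2 subject to
# ||x||_2 = 1, minimize g(theta) = f([cos theta, sin theta]) over the
# unconstrained scalar theta. The objective f is a made-up example; its
# constrained minimum is known analytically to be -sqrt(5).

def f(x):
    return x[0] + 2.0 * x[1]   # linear objective on the unit circle

def g(theta):
    return f(np.array([np.cos(theta), np.sin(theta)]))

# crude unconstrained search over theta (a stand-in for a real optimizer)
thetas = np.linspace(0.0, 2.0 * np.pi, 100001)
values = np.array([g(t) for t in thetas])
theta_star = thetas[np.argmin(values)]
x_star = np.array([np.cos(theta_star), np.sin(theta_star)])
```

Every candidate produced this way satisfies the constraint by construction, which is the whole point of the transformation.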
0 and ∀j, h^(j)(x) ≤ 0}. the equations involving g^(i) are called the equality constraints and the inequalities involving h^(j) are called the inequality constraints. we introduce new variables λ_i and α_j for each constraint; these are called the kkt multipliers. the generalized lagrangian is then defined as L(x, λ, α) = f(x) + Σ_i λ_i g^(i)(x) + Σ_j α_j h^(j)(x)... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 109 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
min_{x∈s} f(x). (4.16) this follows because any time the constraints are satisfied, max_λ max_{α, α≥0} L(x, λ, α) = f(x), (4.17) while any time a constraint is violated, max_λ max_{α, α≥0} L(x, λ, α) = ∞. (4.18) ^1 the kkt approach generalizes the method of lagrange multipliers, which allows equality constr... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 109 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
these properties guarantee that no infeasible point can be optimal, and that the optimum within the feasible points is unchanged. to perform constrained maximization, we can construct the generalized lagrange function of −f(x), which leads to this optimization problem: min_x max_λ max_{α, α≥0} −f(x) + ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 110 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
particularly interesting. we say that a constraint h^(i)(x) is active if h^(i)(x*) = 0. if a constraint is not active, then the solution to the problem found using that constraint would remain at least a local solution if that constraint were removed. it is possible that an inactive constraint excludes other ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 110 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
. in other words, for all i, we know that at least one of the constraints α_i ≥ 0 and h^(i)(x) ≤ 0 must be active at the solution. to gain some intuition for this idea, we can say that either the solution is on the boundary imposed by the inequality and we must use its kkt multiplier to influence the solution to x, or... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 110 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
• the inequality constraints exhibit "complementary slackness": α ⊙ h(x) = 0. for more information about the kkt approach, see nocedal and wright (2006). 4.5 example: linear least squares suppose we want to find the value of x that minimizes f(x) = (1/2) ||Ax − b||_2^2. (4... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 111 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
using gradient descent, starting from an arbitrary value of x. set the step size (ε) and tolerance (δ) to small, positive numbers. while ||A^⊤Ax − A^⊤b||_2 > δ do x ← x − ε(A^⊤Ax − A^⊤b) end while one can also solve this problem using newton's method. in this case, because the true function is quadratic, the quadratic a... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 111 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
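The pseudocode above translates almost line for line into NumPy; the step size and tolerance below are illustrative choices, not values from the text:

```python
import numpy as np

# Gradient descent for f(x) = 1/2 ||Ax - b||^2. The gradient is
# A^T A x - A^T b, matching the while-loop condition and update in the
# pseudocode. eps (step size) and delta (tolerance) are illustrative.

def least_squares_gd(A, b, eps=0.01, delta=1e-8):
    x = np.zeros(A.shape[1])        # arbitrary starting point
    g = A.T @ (A @ x - b)           # gradient A^T A x - A^T b
    while np.linalg.norm(g) > delta:
        x = x - eps * g             # x <- x - eps * (A^T A x - A^T b)
        g = A.T @ (A @ x - b)
    return x

A = np.array([[2.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x_gd = least_squares_gd(A, b)
```

For a small well-conditioned A this converges in a few hundred iterations; for ill-conditioned problems the step size must shrink accordingly, which is exactly the weakness the surrounding text attributes to gradient descent.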
the smallest-norm solution to the unconstrained least squares problem may be found using the moore-penrose pseudoinverse: x = A^+ b. if this point is feasible, then it is the solution to the constrained problem. otherwise, we must find a solution where the constraint is active. by ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 112 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
lagrangian with respect to λ, we increase λ. because the coefficient on the x^⊤x penalty has increased, solving the linear equation for x will now yield a solution with smaller norm. the process of solving the linear equation and adjusting λ continues until x has the correct norm and the derivative on λ is 0. this concludes th... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 112 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
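A rough sketch of this alternating procedure for norm-constrained least squares, assuming the constraint x^⊤x ≤ 1 and plain gradient ascent on the multiplier λ (the step size, iteration count, and problem data are all illustrative choices, not from the text):

```python
import numpy as np

# Minimize 1/2 ||Ax - b||^2 subject to x^T x <= 1 by alternating:
#   1. solve the linear system for the Lagrangian's minimizer in x,
#      x = (A^T A + 2*lam*I)^{-1} A^T b;
#   2. gradient ascent on lam, whose derivative is x^T x - 1.
# lam grows while the norm is too large, shrinking the next solution.

def constrained_lsq(A, b, lr=0.1, steps=2000):
    lam = 0.0
    AtA, Atb = A.T @ A, A.T @ b
    n = A.shape[1]
    for _ in range(steps):
        x = np.linalg.solve(AtA + 2.0 * lam * np.eye(n), Atb)
        lam = max(0.0, lam + lr * (x @ x - 1.0))   # dL/dlam = x^T x - 1
    return x, lam

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
b = np.array([4.0, 3.0])      # unconstrained solution has norm > 1
x_c, lam_c = constrained_lsq(A, b)
```

Because the unconstrained minimizer here lies outside the unit ball, the constraint is active at the solution: λ ends up positive and x lands on the boundary x^⊤x = 1, illustrating the complementary slackness condition above.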
chapter 5 machine learning basics deep learning is a specific kind of machine learning. in order to understand deep learning well, one must have a solid understanding of the basic principles of machine learning. this chapter provides a brief course in the most important general principles that will be applied throughout... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 113 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
called hyperparameters that must be determined external to the learning algorithm itself ; we discuss how to set these using additional data. machine learning is essentially a form of applied statistics with increased emphasis on the use of computers to statistically estimate complicated functions and a decreased empha... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 113 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
an optimization algorithm, a cost function, a model, and a dataset to build a machine learning algorithm. finally, in section 5.11, we describe some of the factors that have limited the ability of traditional machine learning to generalize. these challenges have motivated the develop... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 114 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
descriptions and examples of the different kinds of tasks, performance measures and experiences that can be used to construct machine learning algorithms. 5.1.1 the task, t machine learning allows us to tackle tasks that are too difficult to solve with fixed programs written and designed by human beings. from a scientific and ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 114 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
features that have been quantitatively measured from some object or event that we want the machine learning system to process. we typically represent an example as a vector x ∈rn where each entry xi of the vector is another feature. for example, the features of an image are usually the values of the pixels in the image... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 114 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
many kinds of tasks can be solved with machine learning. some of the most common machine learning tasks include the following : • classification : in this type of task, the computer program is asked to specify which of k categories some input belongs to. to solve this task, the learnin... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 115 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
different kinds of drinks and deliver them to people on command (goodfellow et al., 2010). modern object recognition is best accomplished with deep learning (krizhevsky et al., 2012; ioffe and szegedy, 2015). object recognition is the same basic technology that allows computers to recognize faces (taigman et ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 115 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
set of functions, each corresponding to classifying x with a different subset of its inputs missing. this kind of situation arises frequently in medical diagnosis, because many kinds of medical tests are expensive or invasive. one way to efficiently define such a large set of functions is to learn a probability distribution over all of the relevant variables, then solve ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 115 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
• regression : in this type of task, the computer program is asked to predict a numerical value given some input. to solve this task, the learning algorithm is asked to output a function f : r^n → r. this type of task is similar to classification, except that the format of output is differen... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 116 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
unicode format). google street view uses deep learning to process address numbers in this way (goodfellow et al., 2014d). another example is speech recognition, where the computer program is provided an audio waveform and emits a sequence of characters or word id codes describing the words that were spoken in the au... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 116 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
(or other data structure containing multiple values) with important relationships between the different elements. this is a broad category, and subsumes the transcription and translation tasks described above, as well as many other tasks. one example is parsing — mapping a natural language sentence into a tree that describ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 116 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
example, deep learning can be used to annotate the locations of roads in aerial photographs (mnih and hinton, 2010). the form of the output need not mirror the structure of the input as closely as in these annotation-style tasks. for example, in image captioning, the computer pr... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 117 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
objects, and flags some of them as being unusual or atypical. an example of an anomaly detection task is credit card fraud detection. by modeling your purchasing habits, a credit card company can detect misuse of your cards. if a thief steals your credit card or credit card information, the thief ’ s purchases will ofte... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 117 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
or landscapes, rather than requiring an artist to manually label each pixel (luo et al., 2013). in some cases, we want the sampling or synthesis procedure to generate some specific kind of output given the input. for example, in a speech synthesis task, we provide a written sentence and ask the program to emit an audi... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 117 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
• denoising : in this type of task, the machine learning algorithm is given as input a corrupted example x̃ ∈ r^n obtained by an unknown corruption process from a clean example x ∈ r^n. the learner must predict the clean example x from its corrupted version x̃, or more generally pre... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 118 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
above require the learning algorithm to at least implicitly capture the structure of the probability distribution. density estimation allows us to explicitly capture that distribution. in principle, we can then perform computations on that distribution in order to solve the other tasks as well. for example, if we have ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 118 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
5.1.2 the performance measure, p in order to evaluate the abilities of a machine learning algorithm, we must design a quantitative measure of its performance. usually this performance measure p is specific to the task t being carried out by the system. for tasks such as classification, classification with missing inputs, an... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 118 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
also obtain equivalent information by measuring the error rate, the proportion of examples for which the model produces an incorrect output. we often refer to the error rate as the expected 0-1 loss. the 0-1 loss on a particular example is 0 if it is correctly classified and 1 if i... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 119 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
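Accuracy, error rate, and the average 0-1 loss relate exactly as described; a minimal sketch with made-up labels:

```python
import numpy as np

# Accuracy is the proportion of correct outputs, the error rate is the
# proportion of incorrect ones, and the error rate equals the average
# 0-1 loss. The labels below are made-up illustrative data.

y_true = np.array([0, 1, 1, 2, 0, 2, 1, 0])
y_pred = np.array([0, 1, 2, 2, 0, 1, 1, 0])

zero_one_loss = (y_true != y_pred).astype(float)  # 1 per mistake, else 0
error_rate = zero_one_loss.mean()                  # average 0-1 loss
accuracy = 1.0 - error_rate
```

Two of the eight predictions are wrong here, so the error rate is 0.25 and accuracy is 0.75; the two measures always sum to 1.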
measure may seem straightforward and objective, but it is often difficult to choose a performance measure that corresponds well to the desired behavior of the system. in some cases, this is because it is difficult to decide what should be measured. for example, when performing a transcription task, should we measure the accura... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 119 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
models is intractable. in these cases, one must design an alternative criterion that still corresponds to the design objectives, or design a good approximation to the desired criterion. 5. 1. 3 the experience, e machine learning algorithms can be broadly categorized as unsupervised or supervised by what kind of experie... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 119 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
defined in section 5.1.1. sometimes we also call examples data points. one of the oldest datasets studied by statisticians and machine learning researchers is the iris dataset (fisher, 1936). it is a collection of measurements of different parts of 150 iris plants. each individ... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 120 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
unsupervised learning algorithms perform other roles, like clustering, which consists of dividing the dataset into clusters of similar examples. supervised learning algorithms experience a dataset containing features, but each example is also associated with a label or target. for example, the iris dataset is annotated with... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 120 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
in unsupervised learning, there is no instructor or teacher, and the algorithm must learn to make sense of the data without this guide. unsupervised learning and supervised learning are not formally defined terms. the lines between them are often blurred. many machine learning technologies can be used to perform both ta... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 120 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
can solve the supervised learning problem of learning p(y | x) by using traditional unsupervised learning technologies to learn the joint distribution p(x, y) and inferring p(y | x) = p(x, y) / Σ_{y′} p(x, y′). (5.2) though unsupervised learning and supervised learning are no... | /home/ricoiban/GEMMA/mnlp_chatsplaining/RAG/Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org).pdf | 121 | Deep Learning by Ian Goodfellow, Yoshua Bengio, Aaron Courville (z-lib.org) | 0 |
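Equation 5.2 in miniature: for discrete variables, the conditional follows from the joint by dividing each row of the joint table by its marginal. The probability table below is made up for illustration:

```python
import numpy as np

# p(y | x) = p(x, y) / sum_{y'} p(x, y') on a small discrete joint table.
# Rows index x, columns index y; the values are made-up probabilities.

p_xy = np.array([[0.10, 0.30],
                 [0.40, 0.20]])
assert np.isclose(p_xy.sum(), 1.0)        # valid joint distribution

p_x = p_xy.sum(axis=1, keepdims=True)     # marginal p(x) = sum_y p(x, y)
p_y_given_x = p_xy / p_x                  # each row is now p(y | x)
```

Each row of `p_y_given_x` sums to 1, as a conditional distribution over y must.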