Dataset columns: text (string, lengths 0–65.5k) · source (string, 21 distinct values)
This variation is due to the fact that we are using features that were not present in the query. This implies that sometimes the words in the query itself are not enough information to decide whether a document is relevant or not.

4. We are no longer computing the probability that the document is relevant. Instead, we are computing a quantity that is rank-equivalent to this probability. As a result of some of the simplifications that we made (e.g. dropping the k′/k term), it is possible to get a relevance score that is greater than 1.
https://www.cs.cornell.edu/courses/cs6740/2010sp/guides/lec05.pdf?utm_source=chatgpt.com
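To make the rank-equivalence point concrete, here is a minimal worked note. The constant c below is my shorthand for any document-independent positive factor (such as the dropped k′/k term); it is not notation from the cited notes:

\[
\operatorname{score}(d) = c \cdot P(R \mid d), \qquad c > 0 \text{ the same for every document } d,
\]
\[
P(R \mid d_1) \ge P(R \mid d_2) \iff \operatorname{score}(d_1) \ge \operatorname{score}(d_2).
\]

So dropping (or multiplying by) a document-independent positive factor leaves the ranking unchanged, but whenever c > 1 / P(R | d) the resulting score exceeds 1 and can no longer be read as a probability.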
5. This assumption seems reasonable for many real-world situations (in fact, it is built into the VSM). When users search for documents, they usually want documents that contain at least one of the terms they searched for. You could implement a more efficient retrieval system by precomputing an index that tells you which documents have each feature. Then you could use this index to obtain all of the documents that have at least one feature in common with the query and estimate the relevance of only the documents in this subset.

6 References
1. Stephen Robertson and Karen Spärck Jones. Relevance weighting of search terms. Journal of the American Society for Information Science 27(3): 129-46 (1976). The probabilistic argument is presented in the appendix.
2. William S. Cooper. Some inconsistencies and misidentified modeling assumptions in probabilistic information retrieval. ACM Transactions on Information Systems (TOIS), pp. 100-111, 1...
https://www.cs.cornell.edu/courses/cs6740/2010sp/guides/lec05.pdf?utm_source=chatgpt.com
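The precomputed index described in note 5 is an inverted index over features. A minimal sketch of the idea, assuming documents and queries are given as iterables of term features; the names build_inverted_index and candidate_docs are illustrative, not taken from the notes:

from collections import defaultdict

def build_inverted_index(docs):
    # Map each feature (term) to the set of document ids that contain it.
    index = defaultdict(set)
    for doc_id, doc in enumerate(docs):
        for term in set(doc):
            index[term].add(doc_id)
    return index

def candidate_docs(index, query):
    # Documents sharing at least one feature with the query.
    candidates = set()
    for term in set(query):
        candidates |= index.get(term, set())
    return candidates

Only the documents returned by candidate_docs then need a relevance estimate, which is what makes this cheaper than scoring the entire collection.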
If only_terms_in_query is True, then equation 16 is used, otherwise equation 17 is used."""
    # estimate Pr(F[j])
    word_priors = word_probs(docs)
    doc_query_prob = defaultdict(lambda: defaultdict(float))
    for qi, query in enumerate(queries):
        rel_docs = [docs[dj] for qj, dj in query_doc_relevant if qj == qi]
        word_given_rel = ...
https://www.cs.cornell.edu/courses/cs6740/2010sp/guides/lec05.pdf?utm_source=chatgpt.com
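The excerpt calls a word_probs helper to estimate the prior Pr(F[j]) over features, but its body is not included. A plausible minimal version, under the assumption that Pr(F[j]) is estimated as the fraction of documents containing term j (that estimator is my assumption, not taken from the notes):

from collections import defaultdict

def word_probs(docs):
    # Assumed estimator: Pr(F[j]) = fraction of documents that contain term j.
    counts = defaultdict(int)
    for doc in docs:
        for term in set(doc):
            counts[term] += 1
    n = len(docs)
    return {term: count / n for term, count in counts.items()}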
CS229 Lecture Notes, Andrew Ng and Tengyu Ma, June 11, 2023. Contents:

Part I: Supervised learning
  1 Linear regression
    1.1 LMS algorithm
    1.2 The normal equations
      1.2.1 Matrix derivatives
      1.2.2 Least squares revisited
    1.3 Probabilistic interpretation
    1.4 Locally weighted linear regression (optional reading)
  2 Classification and logistic regression
    2.1 Logistic regression
    2.2 Digression: the perceptron learning algorithm
    2.3 Multi-class classification
    2.4 Another algorithm for maximizing ℓ(θ)
  3 Generalized linear models
    3.1 The exponential family
    3.2 Constructing GLMs
      3.2.1 Ordinary least squares
      3.2.2 Logistic regression
  4 Generative learning algorithms
    4.1 Gaussian discriminant analysis
      4.1.1 The multivariate normal distribution
      4.1.2 The Gaussian discriminant analysis model
      4.1.3 Discussion: GDA and logistic regression
    4.2 Naive Bayes (optional reading)
      4.2.1 Laplace smoothing
      4.2.2 Event models for text classification
  5 Kernel methods
    5.1 Feature maps
    5.2 LMS (least mean squares) with features
    5.3 LMS with the kernel trick
    5.4 Properties of kernels
  6 Support vector machines
    6.1 Margins: intuition
    6.2 Notation (optional reading)
    6.3 Functional and geometric margins (optional reading)
    6.4 The optimal margin classifier (optional reading)
    6.5 Lagrange duality (optional reading)
    6.6 Optimal margin classifiers: the dual form (optional reading)
    6.7 Regularization and the non-separable case (optional reading)
    6.8 The SMO algorithm (optional reading)
      6.8.1 Coordinate ascent
      6.8.2 SMO
Part II: Deep learning
  7 Deep learning
    7.1 Supervised learning with non-linear models
    7.2 Neural networks
    7.3 Modules in modern neural networks
    7.4 Backpropagation
      7.4.1 Preliminaries on partial derivatives
      7.4.2 General strategy of backpropagation
      7.4.3 Backward functions for basic modules
      7.4.4 Back-propagation for MLPs
    7.5 Vectorization over training examples
Part III: Generalization and regularization
  8 Generalization
    8.1 Bias-variance tradeoff
      8.1.1 A mathematical decomposition (for regression)
    8.2 The double descent phenomenon
    8.3 Sample complexity bounds (optional reading)
      8.3.1 Preliminaries
      8.3.2 The case of finite H
      8.3.3 The case of infinite H
  9 Regularization and model selection
    9.1 Regularization
    9.2 Implicit regularization effect (optional reading)
    9.3 Model selection via cross validation
    9.4 Bayesian statistics and regularization
Part IV: Unsupervised learning
  10 Clustering and the k-means algorithm
  11 EM algorithms
    11.1 EM for mixture of Gaussians
    11.2 Jensen's inequality
    11.3 General EM algorithms
      11.3.1 Other interpretation of ELBO
    11.4 Mixture of Gaussians revisited
    11.5 Variational inference and variational auto-encoder (optional reading)
  12 Principal components analysis
  13 Independent components analysis
    13.1 ICA ambiguities
    13.2 Densities and linear transformations
    13.3 ICA algorithm ...
https://cs229.stanford.edu/main_notes.pdf