Neural network Gaussian process : A Neural Network Gaussian Process (NNGP) is a Gaussian process (GP) obtained as the limit of a certain type of sequence of neural networks. Specifically, a wide variety of network architectures converges to a GP in the infinitely wide limit, in the sense of distribution. The concept co...
Neural network Gaussian process : Bayesian networks are a modeling tool for assigning probabilities to events, and thereby characterizing the uncertainty in a model's predictions. Deep learning and artificial neural networks are approaches used in machine learning to build computational models which learn from training...
Neural network Gaussian process : Neural Tangents is a free and open-source Python library used for computing and doing inference with the NNGP and neural tangent kernel corresponding to various common ANN architectures. == References ==
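Neural network Gaussian process : A minimal sketch of what such a computation looks like with Neural Tangents (the architecture, layer widths, and random inputs below are illustrative assumptions): the library returns a kernel function whose 'nngp' output is the covariance of the Gaussian process that the infinitely wide network converges to.

import numpy as np
from neural_tangents import stax

# An infinitely wide two-layer fully connected architecture.
init_fn, apply_fn, kernel_fn = stax.serial(
    stax.Dense(512), stax.Relu(),
    stax.Dense(1)
)

x1 = np.random.randn(4, 8)     # 4 inputs of dimension 8
x2 = np.random.randn(3, 8)

k = kernel_fn(x1, x2, 'nngp')  # NNGP covariance between the two input batches
print(k.shape)                 # (4, 3)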
Cache language model : A cache language model is a type of statistical language model. These occur in the natural language processing subfield of computer science and assign probabilities to given sequences of words by means of a probability distribution. Statistical language models are key components of speech recogni...
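Cache language model : A minimal sketch of the idea behind a cache language model (the helper name, the interpolation weight, and the toy history are illustrative assumptions): the probability of a word interpolates a static base model with a "cache" distribution estimated from recently observed text, so words that have just occurred become more likely.

from collections import Counter

def cache_lm_prob(word, recent_words, static_prob, lam=0.8):
    # Interpolate a static base model with a unigram cache built from recent words.
    cache = Counter(recent_words)
    cache_prob = cache[word] / len(recent_words) if recent_words else 0.0
    return lam * static_prob + (1.0 - lam) * cache_prob

history = "the reactor core temperature rose as the reactor".split()
print(cache_lm_prob("reactor", history, static_prob=1e-4))   # boosted well above 1e-4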
Cache language model : Artificial intelligence History of natural language processing History of machine translation Speech recognition Statistical machine translation
Cache language model : Jelinek, Frederick (1997). Statistical Methods for Speech Recognition. The MIT Press. ISBN 0-262-10066-5. Archived from the original on 2011-08-05. Retrieved 2011-09-24.
GPTeens : GPTeens (short for GPT for Teens) is an AI-based chatbot developed by the South Korean company ACROSSPACE. It is built on the Generative pre-trained transformer (GPT) model and incorporates a pipeline structure with additional models to enhance its functionality. The chatbot is expanded using supervised fine-...
GPTeens : The development of GPTeens began in response to the growing demand for AI-based educational tools in South Korea. In May 2023, a prototype version of the chatbot was introduced to schools for limited testing by teachers and students. The first public version of GPTeens was officially launched in October 2024,...
GPTeens : Interactive Format: Utilizes Natural language processing technology to support conversational interactions with learners. Age-Appropriate Design: Designed to deliver responses suitable for teenage users. Curriculum Integration: Trained on educational materials aligned with the South Korean national curriculum...
GPTeens : GPTeens has been noted as an example of an AI-powered educational resource that integrates curriculum-based content for learners.
GPTeens : Natural Language Processing
GPTeens : South Korean Ministry of Education Official Website
Empowerment (artificial intelligence) : Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment-maximising policy acts to maximise future options (typically up...
Empowerment (artificial intelligence) : Empowerment (E) is defined as the channel capacity (C) of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the fu...
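Empowerment (artificial intelligence) : Because empowerment is a channel capacity, it can be computed for a small discrete actuation channel with the Blahut-Arimoto algorithm. The following is a minimal sketch of that calculation; the function name, the two-action channel matrix, and the iteration count are illustrative assumptions rather than details from the article.

import numpy as np

def channel_capacity(p_s_given_a, iters=500, eps=1e-12):
    # Blahut-Arimoto estimate of channel capacity in bits.
    # p_s_given_a[a, s] = probability that action a leads to sensor state s.
    n_actions = p_s_given_a.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)           # start from a uniform action distribution
    for _ in range(iters):
        q_s = p_a @ p_s_given_a                          # resulting distribution over effects
        # divergence of each action's effect distribution from the marginal, in bits
        d = (p_s_given_a * np.log2((p_s_given_a + eps) / (q_s + eps))).sum(axis=1)
        p_a = p_a * np.exp2(d)                           # reweight actions toward informative ones
        p_a /= p_a.sum()
    q_s = p_a @ p_s_given_a
    d = (p_s_given_a * np.log2((p_s_given_a + eps) / (q_s + eps))).sum(axis=1)
    return float(p_a @ d)

# Two actions with fully distinguishable effects: empowerment is 1 bit.
channel = np.array([[1.0, 0.0],
                    [0.0, 1.0]])
print(channel_capacity(channel))                         # ~1.0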
Empowerment (artificial intelligence) : Empowerment maximisation can be used as a pseudo-utility function to enable agents to exhibit intelligent behaviour without requiring the definition of external goals, for example balancing a pole in a cart-pole balancing scenario where no indication of the task is provided to th...
Granular computing : Granular computing is an emerging computing paradigm of information processing that concerns the processing of complex information entities called "information granules", which arise in the process of data abstraction and derivation of knowledge from information or data. Generally speaking, informa...
Granular computing : As mentioned above, granular computing is not an algorithm or process; there is no particular method that is called "granular computing". It is rather an approach to looking at data that recognizes how different and interesting regularities in the data can appear at different levels of granularity,...
Granular computing : Granular computing can be conceived as a framework of theories, methodologies, techniques, and tools that make use of information granules in the process of problem solving. In this sense, granular computing is used as an umbrella term to cover topics that have been studied in various fields in iso...
Granular computing : Rough Sets, Discretization Type-2 Fuzzy Sets and Systems == References ==
SUPS : In computational neuroscience, SUPS (for Synaptic Updates Per Second), formerly CUPS (Connections Updates Per Second), is a measure of neuronal network performance, useful in the fields of neuroscience, cognitive science, artificial intelligence, and computer science.
SUPS : For a processor or computer designed to simulate a neural network, SUPS is measured as the product of the simulated neurons N and the average connectivity c (synapses) per neuron per second: SUPS = c × N. Depending on the type of simulation it is usually equal to the total number of synapses simulated. In an "asynch...
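SUPS : As a rough worked example of the formula above, the figures below (one million neurons, one thousand synapses each, ten full update sweeps per second) are purely hypothetical and only serve to illustrate the arithmetic.

# Illustrative SUPS calculation with assumed numbers.
neurons = 1_000_000                    # N: simulated neurons
synapses_per_neuron = 1_000            # synapses per neuron
sweeps_per_second = 10                 # complete synaptic update sweeps per second

c = synapses_per_neuron * sweeps_per_second   # connectivity updates per neuron per second
sups = c * neurons                            # SUPS = c x N
print(f"{sups:.2e} SUPS")                     # 1.00e+10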
SUPS : Developed in the 1980s, Adaptive Solutions' CNAPS-1064 Digital Parallel Processor chip is a full neural network (NNW). It was designed as a coprocessor to a host and has 64 sub-processors arranged in a 1D array and operating in a SIMD mode. Each sub-processor can emulate one or more neurons, and multiple chips can...
SUPS : FLOP SPECint SPECfp Multiply–accumulate operation Orders of magnitude (computing) SyNAPSE == References ==
Embedding (machine learning) : Embedding in machine learning refers to a representation learning technique that maps complex, high-dimensional data into a lower-dimensional vector space of numerical vectors. It also denotes the resulting representation, where meaningful patterns or relationships are preserved. As a tec...
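Embedding (machine learning) : A minimal sketch of the idea, using a PyTorch embedding table as one common realization (the vocabulary size, dimensionality, and token IDs below are illustrative assumptions): discrete items are mapped to trainable low-dimensional vectors.

import torch

# An embedding table mapping a vocabulary of 10,000 items to 64-dimensional vectors.
embedding = torch.nn.Embedding(num_embeddings=10_000, embedding_dim=64)

token_ids = torch.tensor([3, 4017, 9999])   # three discrete items (illustrative IDs)
vectors = embedding(token_ids)              # shape (3, 64); trained so related items end up nearby
print(vectors.shape)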
Embedding (machine learning) : Feature extraction Dimensionality reduction Word embedding Neural network Reinforcement learning == References ==
Formal concept analysis : In information science, formal concept analysis (FCA) is a principled way of deriving a concept hierarchy or formal ontology from a collection of objects and their properties. Each concept in the hierarchy represents the objects sharing some set of properties; and each sub-concept in the hiera...
Formal concept analysis : The original motivation of formal concept analysis was the search for real-world meaning of mathematical order theory. One such possibility of very general nature is that data tables can be transformed into algebraic structures called complete lattices, and that these can be utilized for data ...
Formal concept analysis : In his article "Restructuring Lattice Theory" (1982), initiating formal concept analysis as a mathematical discipline, Wille starts from a discontent with the current lattice theory and pure mathematics in general: The production of theoretical results—often achieved by "elaborate mental gymna...
Formal concept analysis : The data in the example is taken from a semantic field study, where different kinds of bodies of water were systematically categorized by their attributes. For the purpose here it has been simplified. The data table represents a formal context, the line diagram next to it shows its concept lat...
Formal concept analysis : A formal context is a triple K = (G, M, I), where G is a set of objects, M is a set of attributes, and I ⊆ G × M is a binary relation called incidence that expresses which objects have which attributes. For subsets A ⊆ G of objects and subsets B ⊆ M of attributes, one defines two derivation op...
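Formal concept analysis : The two derivation operators are straightforward to implement. The sketch below uses a toy context loosely inspired by the bodies-of-water example mentioned earlier; the specific objects and attributes are invented for illustration, and a pair (A, B) is a formal concept exactly when each set is the derivation of the other.

def extent(attrs, context):
    # A' for a set of attributes: all objects having every attribute in attrs
    return {g for g, m in context.items() if attrs <= m}

def intent(objs, context):
    # B' for a set of objects: all attributes shared by every object in objs
    all_attrs = set().union(*context.values())
    return all_attrs.intersection(*(context[g] for g in objs))

# Toy formal context: objects mapped to their attribute sets.
context = {
    "pond":  {"stagnant", "natural", "small"},
    "lake":  {"stagnant", "natural", "large"},
    "canal": {"running", "artificial", "large"},
}

A = {"pond", "lake"}
B = intent(A, context)               # {"stagnant", "natural"}
print(B, extent(B, context) == A)    # (A, B) is a formal concept: True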
Formal concept analysis : The concepts (Ai, Bi) of a context K can be (partially) ordered by the inclusion of extents, or, equivalently, by the dual inclusion of intents. An order ≤ on the concepts is defined as follows: for any two concepts (A1, B1) and (A2, B2) of K, we say that (A1, B1) ≤ (A2, B2) precisely when A1 ...
Formal concept analysis : Real-world data is often given in the form of an object-attribute table, where the attributes have "values". Formal concept analysis handles such data by transforming them into the basic type of a ("one-valued") formal context. The method is called conceptual scaling. The negation of an attrib...
Formal concept analysis : An implication A → B relates two sets A and B of attributes and expresses that every object possessing each attribute from A also has each attribute from B. When (G,M,I) is a formal context and A, B are subsets of the set M of attributes (i.e., A,B ⊆ M), then the implication A → B is valid if ...
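Formal concept analysis : Validity of an implication A → B can be checked directly from the context: A → B holds exactly when every object possessing all attributes in A also possesses all attributes in B. The short sketch below reuses the same invented toy context as above.

def implication_valid(A, B, context):
    # A -> B holds iff the set of objects having all of A is contained
    # in the set of objects having all of B.
    has_A = {g for g, m in context.items() if A <= m}
    has_B = {g for g, m in context.items() if B <= m}
    return has_A <= has_B

context = {
    "pond":  {"stagnant", "natural", "small"},
    "lake":  {"stagnant", "natural", "large"},
    "canal": {"running", "artificial", "large"},
}

print(implication_valid({"stagnant"}, {"natural"}, context))   # True: every stagnant water here is natural
print(implication_valid({"large"}, {"natural"}, context))      # False: the canal is large but artificial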
Formal concept analysis : Formal concept analysis has elaborate mathematical foundations, making the field versatile. As a basic example we mention the arrow relations, which are simple and easy to compute, but very useful. They are defined as follows: For g ∈ G and m ∈ M, let g ↗ m ⇔ (g, m) ∉ I and, if m′ ⊆ n′ and m′ ≠ n′ ...
Formal concept analysis : Triadic concept analysis replaces the binary incidence relation between objects and attributes by a ternary relation between objects, attributes, and conditions. An incidence (g, m, c) then expresses that the object g has the attribute m under the condition c. Although triadic concept...
Formal concept analysis : There are a number of simple and fast algorithms for generating formal concepts and for constructing and navigating concept lattices. For a survey, see Kuznetsov and Obiedkov or the book by Ganter and Obiedkov, where also some pseudo-code can be found. Since the number of formal concepts may b...
Formal concept analysis : Formal concept analysis can be used as a qualitative method for data analysis. Since the beginnings of FCA in the early 1980s, the FCA research group at TU Darmstadt has gained experience from more than 200 projects using FCA (as of 2005), including the fields of medicine and ce...
Formal concept analysis : A Formal Concept Analysis Homepage Demo Formal Concept Analysis. ICFCA International Conference Proceedings doi:10.1007/978-3-540-70901-5 2007 5th doi:10.1007/978-3-540-78137-0 2008 6th doi:10.1007/978-3-642-01815-2 2009 7th doi:10.1007/978-3-642-11928-6 2010 8th doi:10.1007/978-3-642-20514-9 ...
Domain adaptation : Domain adaptation is a field associated with machine learning and transfer learning. It addresses the challenge of training a model on one data distribution (the source domain) and applying it to a related but different data distribution (the target domain). A common example is spam filtering, where...
Domain adaptation : Domain adaptation setups are classified in two different ways: according to the distribution shift between the domains, and according to the available data from the target domain.
Domain adaptation : Let X be the input space (or description space) and let Y be the output space (or label space). The objective of a machine learning algorithm is to learn a mathematical model (a hypothesis) h : X → Y able to attach a label from Y to an example from X. This model is learned from a learning samp...
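Domain adaptation : One common family of techniques (not singled out by the article, and shown here only as an illustrative sketch on synthetic data) handles a shift in the input distribution by reweighting labelled source examples so that they resemble the unlabelled target domain before fitting the predictor.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X_source = rng.normal(0.0, 1.0, size=(500, 2))          # labelled source domain
y_source = (X_source[:, 0] > 0).astype(int)
X_target = rng.normal(0.7, 1.0, size=(500, 2))          # shifted, unlabelled target domain

# Train a domain classifier to tell source (0) from target (1) inputs.
X_dom = np.vstack([X_source, X_target])
d = np.concatenate([np.zeros(len(X_source)), np.ones(len(X_target))])
dom_clf = LogisticRegression().fit(X_dom, d)

# Importance weights w(x) proportional to P_target(x) / P_source(x) from the classifier odds.
p = dom_clf.predict_proba(X_source)[:, 1]
weights = p / (1.0 - p)

# Fit the actual predictor h: X -> Y on the reweighted source sample.
h = LogisticRegression().fit(X_source, y_source, sample_weight=weights)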
Domain adaptation : Several compilations of domain adaptation and transfer learning algorithms have been implemented over the past decades: SKADA (Python) ADAPT (Python) TLlib (Python) Domain-Adaptation-Toolbox (MATLAB) == References ==
Google Clips : Google Clips is a discontinued miniature clip-on camera device developed by Google.
Google Clips : It was announced during Google's "Made By Google" event on October 4, 2017. It was released for sale on January 27, 2018. With a flashing light emitting diode (LED) that indicates it is recording, Google Clips automatically captures video clips at moments its machine learning algorithms determine to be i...
Google Clips : The Independent wrote that Google Clips is "an impressive little device, but one that also has the potential to feel very creepy." According to The Verge's review, "it didn't capture anything special" over a couple of weeks' worth of testing. This, combined with the steep price, made Google Clips a tough sell. ...
Language resource : In linguistics and language technology, a language resource is a "[composition] of linguistic material used in the construction, improvement and/or evaluation of language processing applications, (...) in language and language-mediated research studies and applications." According to Bird & Simons (...
Language resource : As of May 2020, no widely used standard typology of language resources has been established (current proposals include the LREMap, METASHARE, and, for data, the LLOD classification). Important classes of language resources include data lexical resources, e.g., machine-readable dictionaries, linguist...
Language resource : A major concern of the language resource community has been to develop infrastructures and platforms to present, discuss and disseminate language resources. Selected contributions in this regard include: a series of International Conferences on Language Resources and Evaluation (LREC), the European ...
T5 (language model) : T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI introduced in 2019. Like the original Transformer model, T5 models are encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text. T5 models a...
T5 (language model) : The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities. T5 models can then be fine-tuned on specific downstre...
T5 (language model) : The T5 series encompasses several models with varying sizes and capabilities, all encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text. These models are often distinguished by their parameter count, which indicates the complexity and p...
T5 (language model) : Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them. This section attempts to collect the main ones. An exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X. Some models are trained from scratch w...
T5 (language model) : The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following. The encoder encodes the instruction, and the decoder autoregressively generates the reply. The T5 encoder can be used as a text encoder, much like BERT. It encodes a text into a sequence of real-numb...
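T5 (language model) : A minimal usage sketch with the Hugging Face transformers library and the publicly released t5-small checkpoint (the toolkit and the translation prompt are choices assumed here for illustration, not prescribed by the article): the encoder reads the prefixed input and the decoder generates the output text autoregressively.

from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

# T5 frames every task as text-to-text.
inputs = tokenizer("translate English to German: The house is wonderful.",
                   return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))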
T5 (language model) : "T5 release - a google Collection". huggingface.co. 2024-07-31. Retrieved 2024-10-16. == Notes ==
Admissible heuristic : In computer science, specifically in algorithms related to pathfinding, a heuristic function is said to be admissible if it never overestimates the cost of reaching the goal, i.e. the cost it estimates to reach the goal is not higher than the lowest possible cost from the current point in the pat...
Admissible heuristic : An admissible heuristic is used to estimate the cost of reaching the goal state in an informed search algorithm. In order for a heuristic to be admissible to the search problem, the estimated cost must always be lower than or equal to the actual cost of reaching the goal state. The search algorit...
Admissible heuristic : n is a node; h is a heuristic; h(n) is the cost indicated by h to reach a goal from n; h*(n) is the optimal cost to reach a goal from n. The heuristic h(n) is admissible if, for all n, h(n) ≤ h*(n).
Admissible heuristic : An admissible heuristic can be derived from a relaxed version of the problem, or by information from pattern databases that store exact solutions to subproblems of the problem, or by using inductive learning methods.
Admissible heuristic : Two different examples of admissible heuristics apply to the fifteen puzzle problem: Hamming distance Manhattan distance The Hamming distance is the total number of misplaced tiles. It is clear that this heuristic is admissible since the total number of moves to order the tiles correctly is at le...
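Admissible heuristic : A short sketch of both heuristics for the fifteen puzzle (the state encoding as a flat tuple with 0 for the blank is an illustrative choice). Each move slides a single tile by one cell, so neither count can exceed the true number of remaining moves, which is what makes the two heuristics admissible.

GOAL = tuple(range(1, 16)) + (0,)          # solved board: 1..15 then the blank

def hamming(state):
    # Number of misplaced tiles; the blank is not counted.
    return sum(1 for i, t in enumerate(state) if t != 0 and t != GOAL[i])

def manhattan(state):
    # Sum over tiles of horizontal plus vertical distance to the goal cell.
    total = 0
    for i, t in enumerate(state):
        if t == 0:
            continue
        goal_i = t - 1                     # tile t belongs at index t - 1
        total += abs(i // 4 - goal_i // 4) + abs(i % 4 - goal_i % 4)
    return total

state = (1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 11, 13, 14, 15, 0)
print(hamming(state), manhattan(state))    # 2 2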
Admissible heuristic : If an admissible heuristic is used in an algorithm that, per iteration, progresses only the path of lowest evaluation (current cost + heuristic) of several candidate paths, terminates the moment its exploration reaches the goal and, crucially, never closes all optimal paths before terminating (so...
Admissible heuristic : Consistent heuristic Heuristic function Search algorithm == References ==
Programming by example : In computer science, programming by example (PbE), also termed programming by demonstration or more generally as demonstrational programming, is an end-user development technique for teaching a computer new behavior by demonstrating actions on concrete examples. The system records user actions ...
Programming by example : Query by Example Automated machine learning Example-based machine translation Inductive programming Lapis (text editor), which allows simultaneous editing of similar items in multiple selections created by example Programming by demonstration Test-driven development
Programming by example : Henry Lieberman's page on Programming by Example Online copy of Watch What I Do, Allen Cypher's book on Programming by Demonstration Online copy of Your Wish is My Command, Henry Lieberman's sequel to Watch What I Do A Visual Language for Data Mapping, John Carlson's description of an Integrate...
Tensor (machine learning) : In machine learning, the term tensor informally refers to two different concepts: (i) a way of organizing data and (ii) a multilinear (tensor) transformation. Data may be organized in a multidimensional array (M-way array), informally referred to as a "data tensor"; however, in the strict mat...
Tensor (machine learning) : A tensor is by definition a multilinear map. In mathematics, this may express a multilinear relationship between sets of algebraic objects. In physics, tensor fields, considered as tensors at each point in space, are useful in expressing mechanics such as stress or elasticity. In machine lea...
Tensor (machine learning) : Let F be a field such as the real numbers R or the complex numbers C. A tensor T ∈ F^(I_0 × I_1 × ⋯ × I_C) is a multilinear transformation from a set of domain vector spaces to a range vector space: T : F^(I_1) × F^(I_2) × ⋯ × F^(I_C) ↦ F^(I_0) ...
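Tensor (machine learning) : A small NumPy sketch contrasting the two senses (all arrays here are random placeholders): a data tensor is simply a multi-way array, while a tensor in the multilinear-map sense is evaluated by contracting it against one vector per mode.

import numpy as np

# Sense (i): a "data tensor" is an M-way array, e.g. 10 RGB images of size 32x32.
images = np.random.rand(10, 3, 32, 32)

# Sense (ii): the array T represents a multilinear map taking a pair of vectors to a scalar,
# linear in each argument separately.
T = np.random.rand(2, 3)
u = np.random.rand(2)
v = np.random.rand(3)
value = np.einsum('ij,i,j->', T, u, v)     # evaluate T(u, v)

# Multilinearity check: scaling one argument scales the output by the same factor.
assert np.isclose(np.einsum('ij,i,j->', T, 2.0 * u, v), 2.0 * value)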
Tensor (machine learning) : Tensors provide a unified way to train neural networks for more complex data sets. However, training is expensive to compute on classical CPU hardware. In 2014, Nvidia developed cuDNN, CUDA Deep Neural Network, a library for a set of optimized primitives written in the parallel CUDA language...
Mode collapse : In machine learning, mode collapse is a failure mode observed in generative models, originally noted in Generative Adversarial Networks (GANs). It occurs when the model produces outputs that are less diverse than expected, effectively "collapsing" to generate only a few modes of the data distribution wh...
Mode collapse : Mode collapse is distinct from overfitting, where a model learns detailed patterns in the training data that do not generalize to the test data, and from underfitting, where it fails to learn patterns at all. Memorization occurs when a model learns to reproduce examples from the training data. Memorization is often conf...
Mode collapse : Training-time mode collapse was originally noted and studied in GANs, where it arises primarily due to imbalances in the training dynamics between the generator and the discriminator. In the original GAN paper, it was also called the "Helvetica scenario". Common causes include: If the discriminator ...
Mode collapse : Large language models are usually trained in two steps. In the first step ("pretraining"), the model is trained to simply generate text sampled from a large dataset. In the second step ("finetuning"), the model is trained to perform specific tasks by training it on a small dataset containing just th...
Mode collapse : Variational autoencoder Generative model Generative artificial intelligence Generative pre-trained transformer Overfitting == References ==
ReRites : ReRites (also known as RERITES, ReadingRites, Big Data Poetry) is a literary work of "Human + A.I. poetry" by David Jhave Johnston that used neural network models trained to generate poetry which the author then edited. ReRites won the Robert Coover Award for a Work of Electronic Literature in 2022.
ReRites : The ReRites project began as a daily rite of writing with a neural network, expanded into a series of performances from which video documentation has been published online, and concluded with a set of 12 books and an accompanying book of essays published by Anteism Books in 2019. In Electronic Literature, Sco...
ReRites : ReRites is described by John Cayley as "one of the most thorough and beautiful" poetic responses to machine learning. The work's influence on the field of electronic literature was acknowledged in 2022, when the work won the Electronic Literature Organization's Robert Coover Award for a Work of Electronic Lit...
OpenAI o3 : OpenAI o3 is a reflective generative pre-trained transformer (GPT) model developed by OpenAI as a successor to OpenAI o1. It is designed to devote additional deliberation time when addressing questions that require step-by-step logical reasoning. OpenAI released a smaller model, o3-mini, on January 31st, 20...
OpenAI o3 : The OpenAI o3 model was announced on December 20, 2024, with the designation "o3" chosen to avoid a trademark conflict with the mobile carrier brand O2. OpenAI invited safety and security researchers to apply for early access to these models until January 10, 2025. Similarly to o1, there are two differe...
OpenAI o3 : Reinforcement learning was used to teach o3 to "think" before generating answers, using what OpenAI refers to as a "private chain of thought". This approach enables the model to plan ahead and reason through tasks, performing a series of intermediate reasoning steps to assist in solving the problem, at the ...
Jeph Acheampong : Jeph Acheampong is a Ghanaian-American businessman. He is an Expo Live Global Innovator, a World Economic Forum Global Shaper, and was named a Future of Ghana Pioneer.
Jeph Acheampong : Acheampong was born in Accra, Ghana, and grew up in New York. He graduated from Forest Hills High School (New York) in 2012. Acheampong went on to study economics at New York University. Later, he pursued a master's degree at Harvard University.
Jeph Acheampong : While at New York University, Acheampong founded Anansi Global, a fashion company. In 2018, he founded Blossom Academy, Ghana's first data science academy. He also cofounded Blossom Corporate Training to upskill working professionals. In 2016, he was awarded the President Service Award. In September 2...
Jeph Acheampong : He is a 2018 Princeton in Africa Fellow, a 2023 Acumen Fellow, and a 2024 Cheng Fellow at the Harvard Kennedy School. == References ==
Overfitting : In mathematical modeling, overfitting is "the production of an analysis that corresponds too closely or exactly to a particular set of data, and may therefore fail to fit to additional data or predict future observations reliably". An overfitted model is a mathematical model that contains more parameters ...
Overfitting : In statistics, an inference is drawn from a statistical model, which has been selected via some procedure. Burnham & Anderson, in their much-cited text on model selection, argue that to avoid overfitting, we should adhere to the "Principle of Parsimony". The authors also state the following (pp. 32–33): Overfi...
Overfitting : Usually, a learning algorithm is trained using some set of "training data": exemplary situations for which the desired output is known. The goal is that the algorithm will also perform well on predicting the output when fed "validation data" that was not encountered during its training. Overfitting is the...
Overfitting : Underfitting is the inverse of overfitting, meaning that the statistical model or machine learning algorithm is too simplistic to accurately capture the patterns in the data. A sign of underfitting is high bias and low variance in the current model or algorithm used (the inverse o...
Overfitting : Benign overfitting describes the phenomenon of a statistical model that seems to generalize well to unseen data, even when it has been fit perfectly on noisy training data (i.e., obtains perfect predictive accuracy on the training set). The phenomenon is of particular interest in deep neural networks, but...
Overfitting : Leinweber, D. J. (2007). "Stupid data miner tricks". The Journal of Investing. 16: 15–22. doi:10.3905/joi.2007.681820. S2CID 108627390. Tetko, I. V.; Livingstone, D. J.; Luik, A. I. (1995). "Neural network studies. 1. Comparison of Overfitting and Overtraining" (PDF). Journal of Chemical Information and M...
Overfitting : Christian, Brian; Griffiths, Tom (April 2017), "Chapter 7: Overfitting", Algorithms To Live By: The computer science of human decisions, William Collins, pp. 149–168, ISBN 978-0-00-754799-9
Overfitting : The Problem of Overfitting Data – Stony Brook University What is "overfitting," exactly? – Andrew Gelman blog CSE546: Linear Regression Bias / Variance Tradeoff – University of Washington What is Underfitting – IBM
Aporia (company) : Aporia is a machine learning observability platform based in Tel Aviv, Israel. The company has a US office located in San Jose, California. Aporia has developed software for monitoring and controlling undetected defects and failures, which is used by other companies to detect and report anomalies, and warn in ...
Aporia (company) : Aporia was founded in 2019 by Liran Hason and Alon Gubkin. In April 2021, the company raised a $5 million seed round for its monitoring platform for ML models. In February 2022, the company closed a Series A round of $25 million for its ML observability platform. Aporia was named by Forbes as the Nex...
Aporia (company) : In 2022, Aporia faced significant challenges when a cybersecurity breach exposed sensitive client data stored within its machine learning observability platform. The breach was traced to a vulnerability in Aporia’s Direct Data Connectors (DDC), which allowed unauthorized access to integrated data sou...