| [ |
| { |
| "chunk_id": "49bcffac-dd8d-42e8-8da6-71c7c748ec93", |
| "text": "Shrihari Vasudevan, Arman Melkumyan and Steven Scheding Australian Centre for Field Robotics, The University of Sydney, NSW 2006, Australia\nEmail: shrihari.vasudevan@ieee.org | a.melkumyan@acfr.usyd.edu.au | s.scheding@acfr.usyd.edu.au This paper evaluates heterogeneous information fusion using multi-task Gaussian processes in the context of geological resource modeling. Specifically, it empirically demonstrates that information integration across heterogeneous\ninformation sources leads to superior estimates of all the quantities being modeled, compared to modeling them indi-2013\nvidually. Multi-task Gaussian processes provide a powerful approach for simultaneous modeling of multiple quantities\nof interest while taking correlations between these quantities into consideration. Experiments are performed on large\nscale real sensor data.Sep\nKeywords - Gaussian process, Information fusion, Geological resource modeling In applications such as space-exploration, mining or agriculture automation, modeling the underlying[stat.ML] resource is a fundamental problem. For such applications, an efficient, flexible and high-fidelity representation of the geology is critical. The key challenges in realizing this are that of dealing with the\nproblems of uncertainty and incompleteness. Uncertainty and incompleteness are virtually ubiquitous\nin any sensor based application as sensor capabilities are limited. The problem is magnified in a field automation scenario due to sheer scale of the application. Incompleteness is a major problem in any large\nscale resource modeling endeavor as sensors have limited range and applicability. A more significant contributor to this issue is that of cost - sampling and collecting such data is expensive. Geological data is\ntypically collected through various sensors/processes of widely differing characteristics and consequently\nlead to different kinds of information. Often the resource is characterized by numerous quantities (for\nexample, soil composition in terms of numerous elements).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 1, |
| "total_chunks": 48, |
| "char_count": 2035, |
| "word_count": 265, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5c4765e2-702b-487c-81e0-b90e8d185685", |
| "text": "These quantities often are correlated. Given these issues, large scale geological resource modeling needs a representation that can handle\nprocess (GP) representation of resource data similar to that described in [1]. GPs are ideally suited\nto handling spatially correlated data. This paper further uses an extension of the basic Gaussian\nprocess model, the multi-task Gaussian process (MTGP), to simultaneously model multiple quantities\nof interest. The proposed model not only captures spatial correlations between individual quantities\nwith themselves (at different locations) but also that between totally different quantities that together\nquantify the resource. That the quantities modeled in this paper exhibit strong correlation is known\nfrom geological sciences. This paper presents an empirical evaluation to understand (1) if simultaneous\nmodeling of multiple quantities of interest (i.e. modeling and using the correlations between them\nand hence performing data fusion) is better than modeling these quantities independently and (2)\nif the nonstationary kernels are more effective than stationary kernels at modeling geological data. Experiments are performed on large scale real sensor data. Gaussian processes (GPs) [2] are powerful non-parametric Bayesian learning techniques that can handle\ncorrelated, uncertain and incomplete data. They have been used in a range of fields, the Gaussian process\nweb-site1 lists several examples. GPs produce a scalable multi-resolution model of the entity under\nconsideration. They yield a continuous domain representation of the data and hence can be sampled\nat any desired resolution. GPs incorporate and handle uncertainty in a statistically sound manner\nand represent spatially correlated data appropriately. They model and use the spatial correlation of\nthe given data to estimate the values for other unknown points of interest. GPs basically perform a\nstandard interpolation technique known as Kriging [3].", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 2, |
| "total_chunks": 48, |
| "char_count": 1965, |
| "word_count": 280, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3cfb1263-aa6d-4cfd-bb01-ae6a7af35d6b", |
| "text": "The work [1], modeled large scale terrain modeling using GPs. It proposed the use of non-stationary\nkernels (neural network) to model large scale discontinuous spatial data. A performance comparison\nbetween GPs based on stationary (squared exponential) and non-stationary (neural network) kernels as\nwell as several other standard interpolation methods applicable to alternative representations of terrain\ndata, was reported. The non-stationary neural network kernel was found to be superior to the stationary\nsquared exponential kernel and at least as good as most standard interpolation techniques for a range of\nterrain (in terms of sparsity/complexity/discontinuities).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 3, |
| "total_chunks": 48, |
| "char_count": 673, |
| "word_count": 93, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5be798a1-dc37-48fd-b265-9df73cd6a56f", |
| "text": "The work presented in this paper builds on this\nGP representation. However, it addresses the problem of simultaneous modeling multiple heterogeneous\nquantities of interest, in the context of geological resource modeling. This requires the modeling and\nusage of the correlations between these quantities towards improving predictions of each of them - an\ninstance of data fusion using Gaussian processes. Data fusion in the context of Gaussian processes is necessitated by the presence of multiple, multisensor, multi-attribute, incomplete and/or uncertain data sets of the entity being modeled. Two preliminary attempts towards addressing this problem include [4] and [5].", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 4, |
| "total_chunks": 48, |
| "char_count": 672, |
| "word_count": 97, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f78e3b6c-0452-4f04-8be1-73ada452623d", |
| "text": "The former bears a \"hierarchical\nlearning\" flavor to it in that it demonstrates how a GP can be used to model an expensive process by (a)\nmodeling a GP on an approximate or cheap process and (b) using the many input-output data from the\napproximate process and the few samples available of the expensive process together in order to learn a\nGP for the latter. The work [5] attempts to generalize arbitrary transformations on GP priors through\nlinear transformations. It hints at how this framework could be used to introduce heteroscedasticity\n(random variables with non-constant variance) and how information from different sources could be\nfused. However, specifics on how the fusion can actually be performed are beyond the scope of the work.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 5, |
| "total_chunks": 48, |
| "char_count": 745, |
| "word_count": 122, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a4ed9917-c898-46af-963d-ee5fa56ec5c4", |
| "text": "Girolami in [6] integrated heterogeneous feature types within a Gaussian process classification setting,\nin a protein fold recognition application domain. Each feature representation is represented by a separate\nGP. The fusion uses the idea that individual feature representations are considered independent and\nhence a composite covariance function would be defined in terms of a linear sum of Gaussian process\npriors. A recent work by Reece et al. [7] integrated \"hard\" data obtained from sensors with \"soft\"\ninformation obtained from human sources within a Gaussian process classification framework. This\nproblem/approach is different from the work presented here.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 6, |
| "total_chunks": 48, |
| "char_count": 667, |
| "word_count": 96, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "91474748-56a7-4e6a-952d-fe5332c700e4", |
| "text": "It uses heterogeneous information domains\n(i.e. kinds of information) as mutually independent sources of information that are transformed into the\nkernel representation (a kernel for each kind of information) and combined using a product rule (a linear\nsum in Girolami's work). The focus thus, is on encoding or representing different kinds of information\nin a common mathematical framework using kernels. This paper is concerned with a \"higher level\" data\nfusion problem of heterogeneous-source information integration after it has been represented using kernel\nmethods. The experiments of this paper demonstrate the case when information from each source is itself\nfrom a homogeneous domain - e.g. the heterogeneous input data are all real numbers. The approach\npresented in this paper improves the estimate of several different quantities being simultaneously modeled\nby explicitly modeling the correlation between multiple heterogeneous information sources. If this is\nnot the case (e.g. input data is made up of qualitative and quantitative data dimensions), each of\nheterogeneous information types can be represented by separate kernels and these can be combined 1http://www.gaussianprocess.org/ using a sum or product as has been done in [6, 7]. Simpler data fusion approaches, based on GPs,\nheteroscedastic GPs and their variants (see [8]), may be applied. However, the application of the\napproach presented in this paper, based on multi-output or multi-task GPs, will require a non-trivial\nderivation of auto and cross covariances for kernels applied on heterogeneous information types. Examples of related works that use multiple sources of the same kind of information within a single GP\nrepresentation framework include [9] and [10]. Whereas the former uses single output GPs to incorporate\nin-situ surface spectra information and remotely sensed spectra information into a kilometer scale map\nof the environment, the latter uses a GP implicit surface representation of an object that has to be\ngrasped and manipulated. The representation incorporates visual, haptic and laser data into a single\nrepresentation of the object. Data from each of these sensor modalities conditions the GP prior based\non the implied surface at that point (on/outside/inside the object). Two recent approaches demonstrating data fusion with Gaussian processes in the context of large scale\nterrain modeling were based on heteroscedastic GPs [11] and dependent GPs [12, 13].", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 7, |
| "total_chunks": 48, |
| "char_count": 2464, |
| "word_count": 368, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "99a9df23-81f1-4760-9dda-bbd5bc993f8a", |
| "text": "These address the\nproblem of fusing multiple, multi-sensor data sets of a single quantity of interest. This paper describes\nthe framework for extending this concept to multiple heterogeneous quantities of interest. The work\n[11] treated the data-fusion problem as one of combining different noisy samples of a common entity\n(terrain) being modeled. In the Machine Learning community, this idea is referred to as heteroscedastic\nGPs [14, 15, 16, 17]. The works [12] and [13] treated the data fusion problem as one of improving\nGP regression through modeling the spatial correlations (auto and cross covariances) between several\ndependent GPs representing the respective data sets. This idea has been inspired by recent machine\nlearning contributions in multi-task or multi-output or dependent GP modeling including [18] and [19],\nthe latter being based on [20]. In Kriging terminology, this idea is akin to Co-kriging [21]. The work [8]\nperformed a model complexity analysis of multiple approaches to data fusion using GPs, applied in the\ncontext of large scale terrain modeling. The work presented in this paper, focuses on the most generic of\nthese approaches in the context of geological resource modeling. The significantly stronger evaluation,\nthe discussion of \"big-picture\" issues relating to the application of the approach in practical problems,\nthe fusion of heterogeneous data, the use of more kernels and the tying together of different prior works\nthat have studied this approach [12, 13, 22] are enhancements presented in this work. The work [22] provided preliminary findings to geological resource modeling using various combinations of stationary kernel including the squared exponential (SQEXP), Matern 3/2 and a sparse\ncovariance function [23]. For a geological resource modeling data set taken from a mine, it found the\nMatern 3/2 - Matern 3/2 - SQEXP kernel combination provided best performance in terms of the prediction error. This paper reports a detailed multi-metric benchmarking experiment, using cross validation\nmethods, performed on a multi-task GP, an equivalent set of GPs and a set of independently optimized\nGPs, to provide for an exact and an independent comparison between them. The objective is to quantify\nthe benefit (if any) of simultaneous modeling of the multiple quantities by modeling and using the correlations between them as against modeling each of these quantities separately. This paper also compares\ndata fusion using multiple stationary and nonstationary kernels in the context of modeling geological\ndata.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 8, |
| "total_chunks": 48, |
| "char_count": 2558, |
| "word_count": 390, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8c76f40a-ab6b-472e-bd17-959e0f4b5753", |
| "text": "An extensive review of kernel methods applied in modeling vector valued functions was presented in\na recent survey paper [24]. The paper discusses different approaches to develop kernels for multi-task\napplications and draws parallels between regularization perspective of this problem and a Bayesian one. The latter perspective is discussed through Gaussian processes. The work presented in this paper focuses\non one of the approaches reviewed in [24]; specifically, it addresses modeling and information fusion of\nmulti-task geological data using Gaussian processes developed using the process convolution approach. The paper presents a detailed empirical study of the approach applied to a large scale real world problem\nin order to evaluate its efficacy for information fusion, to understand the modeling capabilities of different\nkernels (chosen apriori) with such data and to understand broader approach-related questions from an\napplication perspective. The paper also ties together past works of the authors within the process\nconvolution theme. 3.1 Gaussian processes Gaussian processes [2] (GPs) are stochastic processes wherein any finite subset of random variables\nis jointly Gaussian distributed. They may be thought of as a Gaussian probability distribution in\nfunction space. They are characterized by a mean function m(x) and the covariance function k(x, x′)\nthat together specify a distribution over functions. In the context of geological resource modeling, each\nx ≡ (east, north, depth) (3D coordinates) and f(x) ≡ z, the concentration of the quantity beingmodeled. Although not necessary, the mean function m(x) may be assumed to be zero by scaling/shifting\nthe data appropriately such that it has an empirical mean of zero. The covariance function or kernel models the relationship between the random variables corresponding\nto the given data. It can take numerous forms [2, chap. 4].", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 9, |
| "total_chunks": 48, |
| "char_count": 1905, |
| "word_count": 284, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b8e44915-ce6a-4ecc-866e-4d73cac90218", |
| "text": "The stationary squared exponential (or\nGaussian) kernel (SQEXP) is given by , (1) kSQEXP(x, x′, Σ) = σ2f . exp −12(x −x′)TΣ(x −x′)\nwhere k is the covariance function or kernel; Σ = diag[ least , lnorth , ldepth ]−2 is a d x d diagonal\nlength-scale matrix (d = dimensionality of input = 3 in this case), a measure of how quickly the modeled\nfunction changes in the east, north and depth directions; σ2f is the signal variance. The set of parameters\n{ least , lnorth , ldepth , σf } are referred to as the kernel hyperparameters. The non-stationary neural network (NN) kernel [25, 26, 27] takes the form\n2 2˜xTΣ˜x′\nkNN(x, x′, Σ) = σ2f . arcsin p , (2)\nπ (1 + 2˜xTΣ˜x)(1 + 2˜x′TΣ˜x′) where ˜x and ˜x′ are augmented input vectors (each point is augmented with a 1), Σ is a (d + 1) x (d + 1)\ndiagonal length-scale matrix given by Σ = diag[ β , least , lnorth , ldepth ]−2, β being a bias factor and\nd being the dimensionality of the input data. The variables { β , least , lnorth , ldepth , σf } constitutethe kernel hyperparameters. The NN kernel represents the covariance function of a neural network\nwith a single hidden layer between the input and output, infinitely many hidden nodes and using a\nSigmoidal transfer function [26] for the hidden nodes. Hornik, in [28], showed that such neural networks\nare universal approximators and Neal, in [25], observed that the functions produced by such a network\nwould tend to a Gaussian process. Prior work in [1] found the NN kernel to be more effective than the\nSQEXP kernel at modeling discontinuous data. The Matern 3/2 kernel is another stationary kernel differing from the SQEXP kernel in that the latter\nis infinitely differentiable and consequently tends to have a strong smoothing nature, which is argued as\nbeing detrimental to modeling physical processes [2]. It takes the form specified in Equation 3. Y √ 3rk √ 3rk\nkMATERN3(x, x′, Σ) = σ2f . (1 +\nlk ) exp − lk (3)\n1≤k≤d\nwhere k ǫ 1 . . . d is the dimension of the input data (d = dimensionality of input = 3 in this case),\nΣ = [ least , lnorth , ldepth ] is a 1 x d length-scale matrix, a measure of how quickly the modeled function\nchanges in the east, north and depth directions; σ2f is the signal variance. The set of parameters\n{ least , lnorth , ldepth , σf } is referred to as the kernel hyperparameters. Regression using GPs uses the fact that any finite set of training (evaluation) data and test data of\na GP are jointly Gaussian distributed. Assuming noise free data, this idea is shown in Expression 4\n(hereafter referred to as Equation 4).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 10, |
| "total_chunks": 48, |
| "char_count": 2556, |
| "word_count": 477, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c80e7978-d3cf-4184-9b76-622184bd5dfd", |
| "text": "This leads to the standard GP regression equations yielding an\nestimate (the mean value, given by Equation 5) and its uncertainty (Equation 6).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 11, |
| "total_chunks": 48, |
| "char_count": 143, |
| "word_count": 23, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "87e2539e-222d-4523-b4be-b32ea5ebee1c", |
| "text": "0 , K(X, X) K(X, X∗) (4) ∼N X) f∗ K(X∗, K(X∗, X∗) X) K(X, X)−1 z (5) ¯f∗= K(X∗,\ncov(f∗) = K(X∗, X∗) −K(X∗, X)K(X, X)−1K(X, X∗) (6)\nFor n training points (X, z) = (xi, zi)i=1...n and n∗test points (X∗, f∗), K(X, X∗) denotes the n × n∗\nmatrix of covariances evaluated at all pairs of training and test points. The terms K(X, X), K(X∗, X∗)\nand K(X∗, X) are defined likewise. In the event that the data being modeled is noisy, a noise hyperparameter (σ) is also learnt with the other GP hyperparameters and the covariance matrix of the training\ndata K(X, X) is replaced by [K(X, X) + σ2I] in Equations 4, 5 and 6. GP hyperparameters may be\nlearnt using various techniques such as cross validation based approaches [2] and maximum-a-posteriori\napproaches using Markov Chain Monte Carlo techniques [2, 27] and maximizing the marginal likelihood\nof the observed training data [2, 1]. This paper adopts the latter most approach based on the intuition\nthat it may be more suited for large data sets. The marginal likelihood to be maximized is described in\nEquation 7. X)−1z log(2π) (7) log p(z|X, θ) = −12zTK(X, −12 log |K(X, X)| −n2 3.2 Multi-task Gaussian processes (MTGPs)", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 12, |
| "total_chunks": 48, |
| "char_count": 1166, |
| "word_count": 207, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "77a83db0-a5f9-4417-af1c-d018e1e450a0", |
| "text": "The problem being addressed in this paper can be described as follows. The objective is to model multiple\nheterogeneous quantities (e.g. concentrations of various elements) of the entity in consideration (e.g.\nland mass). The data fusion aspect of this problem is the improved estimation of each one of these\nquantities by integration or use of all other quantities of interest. If each quantity is modeled using a\nseparate GP, the objective is to improve one GPs prediction estimates given all other GP models. Multi-task Gaussian processes (MTGPs or multi-output GPs or Dependent GPs) extend Gaussian\nprocesses to handle multiple correlated outputs simultaneously. The main advantage of this technique\nis that the model exploits not only the spatial correlation of data corresponding to one output but also\nthose of the other outputs. This improves GP regression/prediction of an output given the others, thus\nperforming data fusion. Figure 1 shows a simulated example of this concept. Let the number of outputs/tasks that need to be simultaneously modeled be denoted by nt. Equations\n4, 5 and 6 represent respectively the MTGP data fusion model, the regression estimates and their\nuncertainties, subject to the following modifications to the basic notation. z = [ z1 , z2 , z3 , ... , znt ]′ represents the output values of the selected training data from the individual nt tasks that need to be\nsimultaneously modeled. X = [ X1 , X2 , X3 , ... , Xnt ]", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 13, |
| "total_chunks": 48, |
| "char_count": 1455, |
| "word_count": 242, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e9b9bdca-f012-42ff-90f4-961854b69696", |
| "text": "denotes the input location values (east, north, depth) of the selected training data from the individual\ndata sets. Any kernel [2] may be used and even different kernel could be used for different data sets\nusing the technique demonstrated in [22] (for stationary kernel) or the convolution process technique\ndemonstrated in [20, 19, 12, 13] and in this paper (for both stationary and nonstationary kernel). The\ncovariance matrix of the training data is given by\n \nKY11 KY12 . . . KY1 nt\nKY21 . . . . . . ... K(X, X) ≡ ... ... ... ...", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 14, |
| "total_chunks": 48, |
| "char_count": 546, |
| "word_count": 101, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5a2847b1-44c2-4f00-9f14-38a509e4a70e", |
| "text": "Figure 1: A simple demonstration of the MTGP/DGP concept demonstrating data fusion. Two sine waves (black) are\nto be modeled. One is an inverted function of the other. Noisy samples are available all over one of them (red) whereas\nthe other one has noisy samples only in one part of it (green). Merely using these few green samples would result in a\npoor prediction of the sine wave in the areas devoid of samples. Using the spatial correlation with the red sampled sine\nwave enables the MTGP approach to improve the prediction of the green sampled sine wave. The figure above shows the\npredictions of the GPs given the other GP (red/blue circles) and that of the second GP taken alone (green plus marks). The figure below shows the uncertainty in predictions (error bars of two standard deviations about mean) of the second GP\ntaken alone (green) and that when taken together with the first GP (blue) - a clear reduction in uncertainty is observed.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 15, |
| "total_chunks": 48, |
| "char_count": 949, |
| "word_count": 165, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fed45f27-23ff-4e97-aa03-19d0372d013e", |
| "text": "KYii = KUii (Xi, Xi) + σ2i I\nKYij = KUij(Xi, Xj) . Here, KYii represents the auto-covariance of the ith data set with itself and KYij represents the cross\ncovariance between the ith and jth data sets. These terms model the covariance between the noisy\nobserved data points (z values). Thus, they also take the noise components of the individual data sets\n/ GPs into consideration. The corresponding noise free terms are respectively given by KUii and KUij. These are derived by using the process convolution approach to formulating Gaussian processes; details\nof this follow in the next subsection. The covariance matrix between the test points and training points\nis given by i K(X∗, X) = [KUi1(X∗, X1), KUi2(X∗, X2), . . . , KU nt(X∗, Xnt)] ,\nwhere i ǫ {1 . . . nt} is the GP that is being evaluated given all other GPs. The matrix K(X, X∗) isdefined likewise. Finally, the covariance of the test points is given by ii i I , K(X∗, X∗) = KU (X∗, X∗) + σ2\nassuming the ith GP needs to be evaluated for the particular test point. The mean and variance of\nthe concentration estimate can thus be obtained by applying Equations 5 and 6, after incorporating\nmultiple outputs/tasks, multiple GP/noise hyperparameters and deriving appropriate auto and cross\ncovariances functions that model the spatial correlation between the individual data sets. Data fusion is\nthus achieved in the MTGP approach by correlating individual heterogeneous outputs/tasks and using\nthis correlation information to improve the prediction estimates of each of them.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 16, |
| "total_chunks": 48, |
| "char_count": 1537, |
| "word_count": 260, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e3b54432-2e31-4a75-b7e2-6dde7975f53a", |
| "text": "3.3 Derivation of the auto and cross covariance terms The main challenge in the use of multi-task GPs is the derivation of closed form cross (and auto) covariance functions. The process convolution approach to modeling GPs, proposed in [20], can address this\nproblem. The cited paper (1) modeled a GP as the convolution of a \"smoothing kernel\" and a Gaussian\nwhite noise process, (2) expressed a relationship between the \"smoothing kernel\" and the corresponding covariance function through the Fourier transform, (3) noted that for stationary isotropic kernels,\nthere existed a one-to-one relationship between the covariance function and its smoothing kernel and\nthat for non-isotropic and/or non-stationary kernels, there was no unique solution to the smoothing\nkernel and (4) hinted at how this approach may be used to develop GP models with complex properties\n(e.g. nonstationarity). As a consequence of this approach, modeling the GP amounted to modeling the\nhyperparameters of the smoothing kernel. For the second point above, the paper suggested that the\nsmoothing kernel for a covariance function could be obtained as the Inverse Fourier Transform of the\nsquare root of the spectrum (Fourier transform) of the covariance function. The process convolution\napproach to MTGPs has been used with the stationary SQEXP kernel in [19, 29, 12] and the nonstationary NN kernel in [13, 8]. Once the smoothing kernel is identified for a covariance function, the\ncross-covariance between two covariance functions can be derived as a kernel correlation between the\nrespective smoothing kernels [19]. The following mathematical formalism is based on [20] and [19]. Yi(s) = Ui(s) + Wi(s) (8)\nUi(s) = ki(s, λ) ⋆X(λ) dλ (9) KUij(sa, sb) = E {Ui(sa)Z Uj(sb)} Z\n= E ki(sa, α).X(α)dα kj(sb, β).X(β)dβ\n= ki(sa, α) kj(sb, α) dα (10)\nKUii (sa, sb) = ki(sa, α) ki(sb, α) dα (11) Mathematically, if Yi(s) represents the observed data in Equation 8, it is expressed as a combination\nof a noise-free GP Ui(s) and Gaussian white noise process Wi(s). The GP Ui(s) is further modeled\nas a convolution of a smoothing kernel ki(s, λ) and a Gaussian white noise process X(λ), as shown in\nEquation 9. A stationary and/or isotropic smoothing kernel would take the form ki(s −λ) as it wouldbe a function of the distance between the input points. If two covariance functions (corresponding to\ntwo GPs Ui(s) and Uj(s)) have smoothing kernels ki(sa, λ) and kj(sb, λ) respectively, then the cross\ncovariance between them can be derived as shown in Equation 10.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 17, |
| "total_chunks": 48, |
| "char_count": 2527, |
| "word_count": 410, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6f6a6659-c03f-4f5e-8571-a3f4a76ae8b6", |
| "text": "The auto covariance can be deduced\nfrom the cross covariance expression and take the form shown in Equation 11.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 18, |
| "total_chunks": 48, |
| "char_count": 111, |
| "word_count": 19, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2db8f02d-c27b-41cd-98db-f2988936dccb", |
| "text": "The smoothing kernel ki R\nand kj need to be finite energy kernels i.e. | ki(xa, α) |2 dα < ∞. This can be intrinsically true ofsome kernel (e.g. squared exponential kernel) or can be true subject to the bounded application of the\nkernel (e.g. neural network kernel). The work [22] suggested that if a covariance function could be written as a convolution of its \"basis\nfunctions\" (the form specified in Equation 11), then a cross-covariance between two covariance functions\ncould be derived as a kernel correlation of their respective basis functions (the form specified in Equation\n10). The paper proved that the resulting cross-covariance would be positive definite. In order to find the\nbasis function for a particular covariance function, the paper derived an expression in terms of its Fourier\ntransform. This relationship is identical to that suggested by [20] and valid for stationary kernels only. The paper also derives closed form cross-covariance functions for different combinations of stationary\nkernels including the squared exponential, Matern 3/2 and a sparse covariance function developed by\nthe authors in [23]. This paper argues that both of these methods using the \"smoothing kernel\" [20] and the \"basis\nfunctions\" [22] are actually equivalent with the former providing a sound basis to explain the latter\nas well as a powerful framework to develop other complex GP models such as space-time models and\nnonstationary GPs. The key insight obtained here is in the methodology of identifying the smoothing\nkernel for the process convolution approach. If the covariance function is a stationary kernel, there is\nan exact one-to-one relationship between the covariance function and the smoothing kernel as pointed\nout in [20] and whose expression is derived in [22]. If the covariance function is nonstationary, several\npossible smoothing kernels may lead to the same covariance function, as pointed out in [20]. However,\nattempting to express the kernel in a separable form (e.g. as the correlation of two identically formed\nbasis functions) and thereby identifying the smoothing kernel would be one possible approach, if the\nform of the kernel form allowed for such separation. Needless to say, this idea would be applicable only in\na restricted class of covariance functions and finding a universal approach to identifying the smoothing\nkernel for other nonstationary kernel remains an open question. Given the smoothing kernel of the\ncovariance functions in consideration, the cross-covariance terms can be derived as a kernel correlation\nas demonstrated in [20, 19, 13, 8, 22]. Assume two GPs N(0, ki) and N(0, kj), with with length scale matrices Σi and Σj.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 19, |
| "total_chunks": 48, |
| "char_count": 2678, |
| "word_count": 424, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c4337c45-edb6-40f1-9909-7869e9fafb28", |
| "text": "Based on [19],\nthe cross and auto covariances for the stationary SQEXP kernel are given by Equations 12 and 13\nrespectively. The corresponding expressions for the nonstationary NN kernel are derived in [13, 8] and\ngiven in Equations 14 and 15 respectively. For the Matern 3/2 kernel, the expressions for the cross\ncovariance and auto covariance are derived in [22] and given in Equations 16 and 17 respectively. Also\nbased on [22], the cross covariance function between an SQEXP and a Matern 3/2 kernel is given by (2π) 2\nKUij(x, x′) = Kf(i, j) 1 exp (12) + 2 −12(x −x′)TΣij(x −x′) |Σi Σj|\nwhere\nΣij = Σi(Σi + Σj)−1Σj = Σj(Σi + Σj)−1Σi (π) 2\nKUii (x, x′) = Kf(i, i) 1 exp (13) 2 −14(x −x′)TΣi(x −x′) |Σi|\n1 1\nd+1 4\n2 KUij(x, x′) = Kf(i, j) 2 |Σi| 4|Σj| 1 kNN(x, x′, Σij) (14)\n|Σi + Σj| 2\nwhere\nΣij = 2 Σi (Σi + Σj)−1 Σj\nKUii (x, x′) = Kf(i, i) kNN(x, x′, Σi) (15) 1 1\nrk √ 3 rk Y 2l ikl2 jk2 √ 3 lik − ljk (16) KUij(x, x′) = Kf(i, j) like− l2ik jk −ljke 1≤k≤d −l2\nwhere k ǫ 1 . . . d is the dimension of the input data, li and lj are the length scales for the two Matern\n3/2 kernel based GPs i and j, lik and ljk are the kth length scales (corresponding to the kth dimensions)\nof these GPs and rk = |xk −x′k| is the distance in the kth dimension between the input data. KUii (x, x′) = Kf(i, i) kMATERN3(x, x′, Σi) (17) \" !\nY p π 1/4 √ 3rk\nk KUij(x, x′) = Kf(i, j) λk eλ2 2 cosh 2 lMk −\n1≤k≤d #\n√ 3rk rk √ 3rk\ne lMk erf λk + − lMk erf λk (18) lSEk −e −rklSEk\n√ 3 lSEk 2 R x e−t2dt, k ǫ 1 . . . d is the dimension of the input data, lSE and lM where λk = 2 lMk , erf (x) = √π 0\nare the respective length scales for the SQEXP and Matern 3/2 kernel based GPs i and j, lSEk and\nlMk are the kth length scales (corresponding to the kth dimensions) of these GPs and rk = |xk −x′k| is the distance in the kth dimension between the input data. In Equations 14 and 15, the term, kNN(x, x′, Σij), is the NN kernel for two data x, x′ and length scale\nmatrix Σij. It is given by Equation 2, excluding the signal variance term (σ2f). Likewise, in Equation\n17, kMATERN(x, x′, Σi) refers to the Matern 3/2 kernel for two data x, x′ and length scale matrix Σij,\ngiven by Equation 3 (excluding the σ2f term). The Kf terms in Equations 12, 13, 14 and 15 are inspired\nby [18].", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 20, |
| "total_chunks": 48, |
| "char_count": 2255, |
| "word_count": 479, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1cdc8118-9b79-4ae5-b806-0405d0116583", |
| "text": "This term models the task similarity between individual tasks. Incorporating it in the auto\nand cross covariances provides additional flexibility to the multi-task GP modeling process. It is a\nsymmetric matrix of size nt x nt and is learnt along with the other GP hyperparameters. Thus, the\nhyperparameters of the system that need to be learnt include (nt.(nt+1))/2 task similarity values, nt . 2\nor nt . 3 length scale values respectively for the individual SQEXP/MATERN3 or NN kernels and nt\nnoise values corresponding to the noise in the observed data sets. Learning these hyperparameters by\nadapting the GP learning procedure described before (Equation 7) for multiple outputs/tasks [12, 13].", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 21, |
| "total_chunks": 48, |
| "char_count": 696, |
| "word_count": 110, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7d0d6369-8a06-47da-9568-4d7ec7a0b2c0", |
| "text": "Experiments were conducted on a large scale geological resource data set made up of real sensor data. The data consists of 63,667 measurements from a 3478.4 m x 1764.6 m x 345.9 m region in Australia\nthat has undergone drilling and chemical assays to determine its composition. The holes are generally\n25-100m apart and tens to hundreds of meters deep. Within each hole, data is collected at an interval of\n2m. The measurements include the (east, north, depth) position data along with the concentrations of\nthree elements, Element-1, Element-2 and Element-3, hereafter denoted as E1, E2 and E3 respectively.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 22, |
| "total_chunks": 48, |
| "char_count": 608, |
| "word_count": 99, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b36b56bf-17f9-485b-9b13-0ffac9762da7", |
| "text": "These three quantities are known to be correlated and hence the objective is to use each of their GP\nmodels to improve the others' prediction estimates by capturing the correlation between these quantities. The data set is shown in Figure 2. The methodology of testing is described in Section 4.1. Multiple\nmetrics have been used to evaluate the methods, these are described in Section 4.2. Results obtained\nare then presented and discussed in Section 4.3. Outputs of the data fusion process provided by the\nbest performing model as suggested by the evaluation are also presented. 4.1 Testing procedure", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 23, |
| "total_chunks": 48, |
| "char_count": 602, |
| "word_count": 98, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1e9b17ef-51e5-41d8-a37d-f38dd0588434", |
| "text": "The objective of the experiment was to compare the multi-task GP approach with a conventional GP\napproach and quantify if the data fusion in the MTGP actually improves estimation. A second objective\nof the experiments was to compare the nonstationary NN kernel with the stationary SQEXP kernel, the\nMatern 3/2 kernel and a combination of them that proved effective in prior testing [22]. Towards these\naims, a ten fold cross validation experiment was performed on the data set, with each of the kernels. This was motivated by the work [30], which suggests a ten fold stratified (similar number of samples\nin each fold) cross validation as the best way of testing the estimation accuracy of machine learning\nmethods on real world data sets. The MTGP and simple GP approaches each require an optimization step for model learning. The\noptimization step in each method can result in different local minima in each trial (and with each\nkernel).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 24, |
| "total_chunks": 48, |
| "char_count": 939, |
| "word_count": 156, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fedb9ec4-3ca7-4170-919a-5f123414605b", |
| "text": "Thus, to do a one-on-one comparison between the two approaches and quantify their relative\nperformances, an exact comparison is required. The benchmarking experiment presented in this paper\nprovides an exact comparison between the MTGP and GP approaches. To do this,\n• The best available MTGP parameters were found for each kernel. From this, appropriate subsets of the parameters were chosen for the GP approach.\n• The approaches were compared on identical test points and identical training/evaluation points selected for each of the test points.\n• It is also necessary that the covariance function for the simple GP approach must be identical to the auto-covariance function of the DGP approach. For this reason, the auto-covariance function\n(for both kernels) is used as the covariance function for the GP approach to data fusion. In addition to this, three independent GPs (denoted as GPI here after) were optimized for E1, E2 and\nE3 and their estimates for the same set of test points were also compared. Thus the effect of information\nintegration in the context of the geological resource modeling can be seen in terms of both an exact\ncomparison (MTGP vs GP) and an independent comparison (MTGP vs GPI).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 25, |
| "total_chunks": 48, |
| "char_count": 1211, |
| "word_count": 197, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ba74feb6-d16c-4ea4-a6f0-231c9e46ba41", |
| "text": "For the cross validation, a \"block\" sampling technique (see Figure 3) was used, a 3D version of the\n\"patch\" sampling method used in [1]. The idea was that rather than selecting test points uniformly,\nblocks of data test the robustness of the approach better as the support points to the query point are situated farther away than in uniform point selection. The data set is gridded into blocks of different sizes. Collections of blocks represent individual folds. In each cross validation test, one fold was designated as\na test fold and points from it were used exclusively for testing. All other folds together constituted the\nevaluation data, a small subset of which were labeled as the training data.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 26, |
| "total_chunks": 48, |
| "char_count": 704, |
| "word_count": 118, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e1d467ce-ec50-449c-a808-e396b7f43e31", |
| "text": "Note that this technique of\ntesting will naturally lead to larger errors. For the test fold, the E1, E2 and E3 concentrations (and error (a) Element-1 (E1) concentration (b) Element-2 (E2) concentration (c) Element-3 (E3) concentration Figure 2: The geological resource data set. Figures 2(a), 2(b) and 2(c) respectively show the concentrations of three\nelements over the region of interest. The central region of points is surrounded by sparse sets of points which are not\npre-filtered when applying the proposed algorithm. metrics defined in the following section) are estimated first using the MTGP approach, then with the GP\napproach using parameters from the optimized MTGP parameters and finally, with an independently\noptimized GP for each of the three quantities.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 27, |
| "total_chunks": 48, |
| "char_count": 771, |
| "word_count": 119, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b47bbcd9-2da3-4745-a909-6fc305e75c0f", |
| "text": "The result of a 10 fold cross validation test is a 63,667\npoint evaluation in tougher test conditions than what would be attainable with uniform sampling (e.g.\nevery tenth point) of test points. Block sizes were chosen empirically, in proportion (arbitrarily rounded up or down) to the dimensions\nof the whole data set and with a view of performing a stratified cross validation test. The block sizes\nchosen and the resulting implications on the cross validation testing are shown in Table 1. The smaller\nblock size of 22m x 11m x 2m results in each fold having a similar number of points (i.e. numbers\nof points in folds with min/max test points are similar) and thus results in the most stratified cross\nvalidation test. With increasing block size, prediction error increases (support data is farther away),\nstratification is reduced and hence, variance in prediction error also increases. Uniform sampling of test\npoints may be considered as a limiting case of block sampling with the smallest block size possible. Figure 3: Example of 3D block sampling of a geological resource data set. Blocks may be sampled of different sizes. The\nred and yellow blocks represent blocks from two of the ten folds used in cross validation testing. Test points within these\nblocks have \"support\" data away from them, outside the blocks.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 28, |
| "total_chunks": 48, |
| "char_count": 1324, |
| "word_count": 220, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e9cf7709-353f-4932-95c7-eec773f90c26", |
| "text": "This sampling method is therefore a stronger test of\nthe robustness of an approach to estimating the quantity of interest, as compared to uniformly sampling test points. The\nestimation errors however, will be higher than that obtained for a uniformly sampled set of points. Table 1: 10 fold cross validation with block sampling; 63667 points in data set spread over 3478.4 m x 1764.6 m x 345.9\nm; block sizes tested vs relative implications on results\nBlock size Number of points Number of points Comments on\n(m) in fold with MIN in fold with MAX cross validation test\ntest points test points\n22 x 11 x 2 6209 6454 Most stratified cross validation\nLeast prediction error\n44 x 22 x 4 6183 6456 stratification ↓prediction error ↑\n87 x 45 x 9 5807 6739 stratification ↓prediction error ↑\n174 x 89 x 18 5133 7549 stratification ↓prediction error ↑\n348 x 177 x 35 4976 9662 stratification ↓prediction error ↑\n696 x 353 x 70 1204 10371 Least Stratified cross validation\nHighest prediction error Multiple metrics have been used to understand the various methods being tested. They are briefly\ndescribed below.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 29, |
| "total_chunks": 48, |
| "char_count": 1102, |
| "word_count": 191, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3f6e6df1-1755-455d-9e91-f4366a902f43", |
| "text": "These are evaluated for each test point in each fold of the cross validation test. The\nresult would then be represented by the mean and standard deviations of all values across all folds. Squared Error (SE): This represents the squared difference between the predicted concentration\nand the known concentrations for the set of test points. The mean over the set of all test points\n(Mean Squared Error or MSE) is the most popular metric for the context of this paper. Referring\nEquations 5 and 6, for the ith test point, SE(i) = ( ¯f∗(i) −zi)2", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 30, |
| "total_chunks": 48, |
| "char_count": 542, |
| "word_count": 95, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "66467d9c-faab-4b50-9abf-52427923f56c", |
| "text": "Variance (VAR): This represents the variance (uncertainty) in the predicted concentrations for the\nset of test points. a lower VAR is a good outcome, only if the SE is also low. A model that has high\nSE and low VAR would be a poor model as this result would suggest that the model is confident\nof its inaccurate estimates. A better outcome would be a model with high SE and correspondingly\nhigh VAR i.e. a model that has inaccurate predictions but is also uncertain about these predictions. Negative log probability / Log loss (NLP): Inspired by [2] (see page 23), this is a measure of the\nextent to which the model (including the GP model, kernel, parameters and evaluation data) explain\nthe current test point. The lower the value of this metric, the better the model. For the ith test\npoint,\n1 ( NLP(i) = + ¯f∗(i) −zi)2 2log(2πσ2∗) 2σ∗(i)2 Figures 4, 5 and 6 show the predicted concentrations of E1, E2 and E3 over the entire region of interest\nas well as 2D section views of this output and the uncertainty of the predictions that constitute it;\nthese were produced using multi-task GPs using the Neural Network kernel. Tables 2, 3 and 4 show the\nresults of the cross validation testing on the geological resource data set with the Neural Network (NN),\nMatern 3/2 (MM), Squared Exponential (SQEXP) and Matern 3/2 - Matern 3/2 - Squared exponential\n(MS) kernels.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 31, |
| "total_chunks": 48, |
| "char_count": 1365, |
| "word_count": 240, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7d970f23-8a3b-4235-a873-7097b6c0366a", |
| "text": "The three tables are visualized through numerous graphs that summarize the main trends\nobserved; these are located in the appendix. Figures 7 through 15 depict the main results of Table 2\n(element E1), Figures 16 through 24 depict the main results of Table 3 (element E2) and Figures 25\nthrough 33 depict the main results of Table 4 (element E3). The following observations were made from\nthe results obtained. Prediction error (SE) increases with increase in test block size.\n• See Figures 7, 10, 12, 14 for E1, Figures 16, 19, 21, 23 for E2 and Figures 25, 28, 30, 32 for E3.\n• This behavior is expected. It happens because the support training data required for regressing at a test point is situated farther away. Increasing the test block size also results in reduced\nstratification as one fold of the cross validation may have e.g. 10,000 test points whereas another\nmay have only 1000 points. This results in increased standard deviation of prediction error. A ten fold stratified cross validation is generally considered to be the most representative\nof performance measure [30], however testing multiple larger block sizes provides a better\nunderstanding of the model's behavior and robustness. NN kernel based MTGP/GP models trained faster than other kernels\n• Further optimization of each of the MTGP/GP models could yield better results. The re- sults shown are the result of a reasonable amount of optimization applied to each kernel and", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 32, |
| "total_chunks": 48, |
| "char_count": 1450, |
| "word_count": 240, |
| "chunking_strategy": "semantic" |
| }, |
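The spatial test blocks described above might be formed as in this hypothetical sketch (`block_folds` is an illustrative helper, not the authors' code); it also makes clear why folds shrink or grow with the local data density, i.e. the reduced stratification noted above.

```python
import numpy as np

def block_folds(coords, block_size):
    """Assign each sample to the spatial test block containing it; each block
    then serves as one held-out fold of the cross validation.

    coords     : (n, 3) array of sample locations in metres
    block_size : (bx, by, bz) block dimensions, e.g. (22, 11, 2)
    """
    idx = np.floor(coords / np.asarray(block_size, dtype=float)).astype(int)
    _, fold = np.unique(idx, axis=0, return_inverse=True)
    return fold  # fold[i] is the block index of sample i
```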
| { |
| "chunk_id": "390a7def-734f-4e34-8ad8-1afc70e4ef73", |
| "text": "3.38 3.47 3.64 3.80 4.00 5.66 4.23 3.30 3.26 3.46 3.42 3.71 3.81 4.64 4.23 4.55 3.94 (1.58) (1.21) (1.04) (1.01) (1.12) NLP mean (std) 3.18 (1.14) (1.27) (0.62) (1.27) (1.48) (0.74) (1.47) (1.63) (0.85) (2.09) (0.95) (3.08) (4.46)derived 3/2\nvs Matern 97.42 40.86 (2.50) 329.34 41.95 (2.21) 976.69 39.05 (2.41) 104.53 kernel VAR (squ) mean (std) 33.48 (1.48) 36.98 (5.59) 76.44 (5.23) 34.57 (1.71) 40.75 (10.77) 81.80 (8.02) 36.90 (2.07) 57.17 (33.92) (15.97) 121.09 (26.43) 151.49 (45.31) 180.05 (49.34) a (106.83) (376.16) (915.85)\nand(MTGP)\nSE 45.26 58.74 86.81 83.79 43.49 39.49 57.55 52.13 91.75 (squ) mean (std) 34.08 124.93 155.36 290.14 189.21 243.80 460.23 282.28 154.34 (76.16) (95.17) (96.79) (87.09)GP (541.45) (327.56) (775.98) (428.51) (124.37) (124.07) (109.11) (190.74) (175.15) (166.79) (319.96) (240.35) (257.28) (380.23) (squ). (SQEXP)\nunits 3.45 3.53 3.67 3.82 4.01 4.23 16.84 12.58 45.37 50.71 45.35 28.36 NLP mean (std) 31.63 (1.11) (95.72) (59.58) (43.56) (39.08) (0.61) 28.69 (68.94) (0.72) 21.35 (49.50) (0.85) 19.27 (41.43) (0.96) 24.18 (42.85) (1.03) 37.05 (58.21) (76.20) (103.28) (113.15)Multi-task exponential squared kernelsizes; 2.89 1.54 2.45 0.51 0.48 1.07 0.85 8.06 in VAR (squ) mean (std) 0.39 (0.10) (0.74) 86.80 (4.74) (0.20) (3.92) 89.95 (6.61) (0.53) (0.98) 124.02 (94.50) 155.24 (44.34) (0.84) 185.80 (93.03) 183.92 (49.45) (25.34) 101.22 (13.42) (0.78) 34.69 (59.68) 122.29 (23.44)\nblock Squared SQEXP expressed SE 52.96 65.30 91.63 52.96 44.43 24.29 76.89 30.08 214.11 128.86 114.47 190.92 283.43 701.62 871.72various (55.62) (57.54) (331.84) (430.84) (122.12) (107.62) (235.51) (132.02) (70.98) 265.25 (853.00) (179.65) (122.09) (244.59) (357.84) (MM), (squ) mean (std) 23.14 (214.94) 1091.93 (2801.29) (2481.90) (1787.63)of are\n3/2\n5.77 3.38 2.19 5.05 3.47 3.04 4.44 3.64 7.29 4.23 3.80 4.25 4.00 4.43 4.23 (2.51) (2.29) (1.12) metrics NLP mean (std) 2.35 (5.01) (9.97) (0.62) (4.53) (7.27) (0.74) (6.07) (4.35) (0.85) (3.05) (1.04) 33.83 (54.10) (15.78) (0.95) 19.76 (32.27)sampling Matern errorblock kernel 0.96 5.96 1.17 1.65 2.24 2.74 2.97 97.42 (0.15) (5.50) 76.44 (5.23) 81.80 (8.02) (NN), The VAR (squ) mean (std) (0.64) 153.09 (71.73) 151.49 (45.31) (0.52) 192.63 (64.46) 180.05 (49.34) (0.24) 13.91 (14.52) (0.47) 44.66 (37.46) (15.97) (0.61) 95.10 (56.46) 121.09 (26.43)\nusing MM data. 
SE Network 2.79 7.50 45.26 65.00 58.74 86.81 32.05 test 124.93 112.99 215.40 189.21 206.18 301.60 282.28 (squ) mean (std) 2.66 (8.66) 41.42 (96.79) (96.79) (10.51) (373.05) (327.56) (349.73) (452.75) (428.51) (149.17) (124.07) (28.75) 114.05 (242.32) (175.15) (96.26) 156.56 (306.22) (240.35) (206.86)results\nNeural\n3.56 3.34 1.74 3.60 3.45 1.91 3.71 3.63 2.24 3.84 3.79 3.00 4.06 3.98 3.61 4.31 4.19 (1.39) (0.90) (1.56) (1.19) (0.80) identical NLP mean (std) 1.68 (2.75) (3.25) (0.60) (2.66) (2.72) (0.71) (2.56) (2.00) (0.81) (2.70) (1.60) (0.88) (2.09)validation using on 1.81 3.24 81.28 10.97 64.36cross (GPI) kernel VAR (squ) mean (std) 1.47 (0.28) 14.93 (8.23) 73.23 (7.30) 199.86 173.67 362.43 562.94 (0.59) 26.55 (17.09) (11.10) (5.06) 58.60 (39.70) 100.04 (21.34) (27.66) 113.03 (88.51) 129.20 (45.85) (86.94) 249.34 (232.37) (120.66) (154.95) (523.54) (271.48)\nfold GP NN10 combination\n1.86 3.38 41.28 55.81 85.88 85.53 14.60 73.42 (9.49) 52.75 124.52 189.14 180.64 291.72 204.57 325.98 (squ) mean (std) 1.59 (7.25) 36.52 (86.50) (90.35) (22.92) (124.18) (119.69) (187.61) (171.85) (88.59) 128.39 (261.10) (235.22) (213.89) (387.09) (335.76) (368.05) (546.05) (465.37) kernel SE optimizedestimation; (MS)\nGP GPI GP GPI GP GPI GP GPI GP GPI GP GPI Method MTGP MTGP MTGP MTGP MTGP MTGP 2 4 9 18 35 70 Independently\nx x x x x x vs Exponential sizeconcentration (m) 11 22 45 89 177 353\nE1 x x x x x x (GP) Block 22 44 84\n2: Squared 174 348 696 - 3.31 3.47 3.56 3.69 3.91 4.23 5.58 4.59 3.28 3.46 3.44 3.72 3.79 3.88 4.54 4.06 4.27 (1.70) (1.06) (1.59) (1.12) (1.24) (1.18) (0.96) (1.15) NLP mean (std) 3.20 (1.27) (1.36) (0.72) (1.40) (1.58) (0.88) (1.64) (2.23) (3.23) (4.37)derived 3/2\nvs Matern 49.71 (4.07) 45.49 (3.73) 125.06 48.08 (4.15) 414.05 kernel VAR (squ) mean (std) 39.09 (2.48) 43.34 (6.65) 97.26 (5.97) 40.31 (2.88) 47.76 (12.97) 102.80 (8.82) 42.90 (3.41) 67.37 (42.25) 119.32 (17.19) 145.39 (28.92) 182.88 (51.20) 218.58 (58.44) 1309.98 (134.64) (500.88) a (1309.73) MS\nand(MTGP)\nSE 46.60 53.61 71.21 42.82 64.50 56.72 90.32 (squ) mean (std) 36.09 105.69 170.93 314.92 273.23 493.28 105.27 147.83 167.10 216.33 317.37GP (102.25) (134.94) (151.69) (117.40) (186.52) (201.30) (146.64) (288.37) (279.95) (211.07) (420.58) (351.45) (320.09) (689.64) (443.91) (440.21) (924.16) (519.71) (squ). 
(SQEXP)\nunits 3.48 3.57 3.74 3.92 4.11 4.32 (1.13) NLP mean (std) 650.87 668.93 (0.86) 657.09 694.20 (1.04) 461.55 532.52 (1.22) 228.66 312.01 (1.25) 134.53 164.10 (1.17) 94.13 105.40Multi-task (1848.02) (1880.52) (1840.03) (1901.88) (1290.67) (1420.66) (752.97) (914.26) (599.11) (633.41) (458.35) (514.02) exponential squaredsizes; kernel\n0.05 0.10 0.26 0.56 4.72 1.92 4.61 6.14 VAR (squ) mean (std) 0.04 (0.10) (0.30) 85.95 (5.68) (0.30) (2.48) 89.38 (9.01) (1.15) (2.90) 114.38 (2.62) 189.13 194.54 (59.86) 236.80 (60.61) (22.50) 103.63 (23.63) (2.06) 25.86 (59.06) 133.30 (41.47) (102.39) (106.28)block in Squared SQEXP expressedvarious (MM), SE 75.67 60.25 88.25 77.86 are (squ) mean (std) 52.32of 181.27 211.05 115.96 402.24 164.86 419.72 234.24 420.95 331.32 3510.60 8397.84 5910.35 (211.96) (301.71) (376.69) (459.31) (531.79) (231.51) (869.23) (161.56) (352.70) (1385.73) (753.57) 1140.18 (8156.99) (1144.48) (1196.06) (1016.76) (14425.39) (28045.42) (23857.18) 3/2\nmetricssampling 4.05 3.72 4.07 3.88 4.17 4.06 4.37 4.27 4.37 3.47 4.19 3.56 2.81 4.85 2.29 Matern NLP mean (std) 2.23 (4.74) (7.41) (0.72) (4.43) (5.58) (0.88) (5.02) (3.56) (1.06) (8.77) (2.70) (2.19) (1.77) (1.15) (1.12) 10.28 (16.06) (1.18) 17.23 (24.68) error\nblock (NN), The kernel 8.45 2.53 3.69 5.09 6.26 6.86 VAR (squ) mean (std) 2.02 (0.36) (6.63) 97.26 (5.97) (0.58) 18.19 (17.10) 102.80 (8.82) (1.12) 54.56 (43.24) 119.32 (17.19) (1.44) 112.93 (64.77) 145.39 (28.92) (1.52) 178.16 (81.64) 182.88 (51.20) (1.26) 223.12 (73.21) 218.58 (58.44)using data. MM Network testresults SE 4.72 40.79 53.61 64.20 71.21 10.97 37.85 (squ) mean (std) 3.88 105.27 165.30 147.83 119.86 232.89 216.33 227.42 331.10 317.37 (18.47) (22.80) (129.77) (151.69) (192.62) (201.30) (44.78) 116.98 (318.33) (279.95) (117.51) (394.82) (351.45) (236.67) (475.45) (443.91) (370.75) (534.42) (519.71) Neural identical\n3.34 3.44 2.20 3.49 3.55 2.37 3.71 3.72 2.66 3.88 3.88 3.23 4.09 4.05 3.75 4.34 4.24validation using on NLP mean (std) 2.10 (2.87) (3.10) (0.71) (2.87) (2.70) (0.86) (2.71) (2.27) (1.03) (2.61) (1.88) (1.09) (2.04) (1.71) (1.09) (1.48) (1.33) (0.91)\ncross (GPI)\n3.80 6.22 15.88 72.71fold GP kernel VAR (squ) mean (std) 3.13 (0.53) 19.44 (8.89) 93.47 (7.40) 213.16 189.18 594.95 365.16 (1.06) 32.14 (18.07) 101.11 (10.86) (6.00) 66.23 (41.41) 119.02 (20.10) (29.16) 123.42 (91.33) 146.71 (41.34) (90.66) 265.84 (239.30) (107.76) (162.78) (544.29) (250.80)10 combination\nkernel SE 4.79 8.04 optimized NN 36.86 49.79 55.51 68.69 95.98 21.49 82.02 (squ) mean (std) 3.85 105.20 142.62 148.71 219.37 213.44 196.72 340.01 314.51 (17.59) (22.21) (40.48) (118.99) (143.90) (174.24) (195.85) (274.95) (277.85) (102.60) (356.16) (347.79) (233.18) (484.76) (442.90) (379.37) (603.14)estimation; (MS) (529.84)\nGP GPI GP GPI GP GPI GP GPI GP GPI GP GPI Method MTGP MTGP MTGP MTGP MTGP MTGP Independently vs Exponentialconcentration size\nE2 (GP) (m) 22x11x2 44x22x4 84x45x93: Squared Block 174x89x18 348x177x35 696x353x70\nTable MTGP 3/2 2.98 2.70 3.03 2.79 3.13 2.94 3.27 3.09 3.46 3.25 3.58 3.47 3.03 3.13 3.26 3.45 3.57 (0.63) (0.91) (0.72) (1.05) (0.88) (0.87) (1.17) (1.10) (1.08) (1.24) (1.32) (1.30) (1.32) (1.27) (1.34)derived 3/2 NLP mean (std) 2.98 (0.63) (0.72) (1.35)\nvs Matern 39.36 a 40.95 (5.93) 43.80 (11.83) (10.98) kernel VAR (squ) mean (std) 36.32 (0.17) 36.47 (0.19) 18.46 (1.15) 36.35 (0.20) 36.51 (0.23) 19.41 (1.73) 36.43 (0.31) 36.61 (0.36) 22.73 (3.76) 36.66 (0.54) 36.89 (0.66) 28.35 (6.17) 37.62 (1.59) 38.13 (2.22) 34.15 (10.70)\nMS and(MTGP)\nSE 19.18 
12.06 22.53 14.92 29.82 21.21 39.96 29.50 54.46 43.24 66.54 61.26 22.56 29.82 39.81 54.31 67.03 (squ) mean (std) 19.15GP (45.75) (33.46) (52.27) (40.78) (63.83) (63.64) (54.63) (80.47) (79.82) (72.11) (99.06) (98.51) (45.80) (52.33) (93.69) (109.25) (109.94) (108.52) (squ). (SQEXP)\nunits 8.21 2.70 6.49 2.79 5.46 2.94 5.51 3.09 6.50 3.25 8.98 3.47 9.56 9.73 7.34 5.49 4.85 NLP mean (std) 6.50 (0.91) (1.05) (1.17) (1.24) (8.96) (1.34) (1.35) (21.19) (23.82) (12.14) (21.78) (10.69) (14.06) (10.74) (10.17) (13.24) (16.50) (16.04)Multi-task\nkernel exponential squaredsizes; 2.58 3.00 0.75 0.73 1.09 1.73 1.07 3.73 34.15 39.36 in VAR (squ) mean (std) 0.64 (0.10) (0.40) 18.46 (1.15) (0.19) (1.45) 19.41 (1.73) (0.48) (7.16) 22.73 (3.76) 28.35 (6.17) (0.70) 12.06 (14.90) (0.89) 33.92 (22.17) (10.70) (0.77) 47.21 (21.02) (10.98)\nblock Squared SQEXP\nSE expressed (20.47) (29.99) (33.46) (21.94) (45.59) (40.78) (24.11) (99.80) (54.63) (34.46) (189.74) (72.11) (55.29) (228.76) (93.69) (81.94) (205.91) (108.52)various (squ) mean (std) 7.30 10.47 12.06 7.99 15.59 14.92 9.19 35.53 21.21 14.56 73.85 29.50 27.36 96.50 43.24 46.94 93.45 61.26 (MM),of are\n3/2\n5.81 8.55 3.57 2.66 1.62 2.76 1.99 2.92 3.07 3.07 3.25 3.48 3.50 3.33 3.28 3.39 3.58 NLP mean (std) 1.57 (2.84) (6.46) (0.94) (2.92) (5.34) (1.08) (3.53) (3.63) (1.19) (4.93) (2.58) (1.27) (8.42) (2.25) (1.40) (2.05) (12.36) metricssampling (1.39) Matern\nerror kernel 2.62 2.86 2.36 1.19 1.65 2.19 4.52 32.36 38.04block VAR (squ) mean (std) 0.98 (0.14) (1.44) 16.90 (1.16) (0.23) (3.53) 18.14 (1.79) (0.42) 12.01 (8.52) 21.61 (3.60) 26.65 (6.01) (0.54) 23.31 (12.64) (0.57) 34.95 (15.85) (10.28) (0.48) 43.42 (14.04) (10.97) (NN), The MM\nusing\nSE 9.65 1.50 3.08 8.67 11.19 14.16 24.26 20.50 32.57 28.87 25.66 47.08 42.96 43.57 64.20 61.34 (squ) mean (std) 1.29 (5.42) Network data. (27.41) (31.47) (7.41) 14.47 (40.66) (39.23) (12.59) (64.08) (53.15) (24.92) (79.36) (69.91) (50.80) (92.63) (77.06) (100.71) (112.93) (108.31) testresults\nNeural 2.64 3.10 2.83 2.58 1.71 2.69 1.88 3.01 2.86 2.17 3.13 3.01 3.32 3.20 3.57 3.41 2.91 NLP mean (std) 1.64 (2.49) (3.67) (0.90) (2.17) (3.17) (1.00) (2.06) (2.45) (1.08) (2.24) (2.01) (1.10) (1.91) (1.82) (1.15) (1.43) (1.65) (1.03) identicalvalidation using on\n3.84 1.68 6.38 2.79 6.00 29.28 23.57 55.21 46.90 81.32 83.71 kernel VAR (squ) mean (std) 1.29 (0.35) (1.76) 14.64 (3.24) (0.66) (3.66) 16.79 (4.65) (1.97) 13.22 (8.54) 21.69 (8.26) (8.73) 25.03 (19.65) (16.01) (36.17) (53.40) (36.64) (72.39)cross (GPI) (99.31) 129.87 (126.46)\nNNfold GP10 combination\nSE 9.09 9.69 1.90 2.87 6.63 12.88 19.56 28.27 22.76 43.19 50.93 65.75 20.35 29.91 48.12 73.71 (squ) mean (std) 1.60 (6.82) (26.21) (27.69) (8.29) 12.80 (37.03) (36.55) (13.66) (55.84) (51.95) (35.07) (81.55) (69.54) (76.19) (97.09) (113.43) (123.14) (146.92) (122.28) kernel optimizedestimation; (MS) GP GPI GP GPI GP GPI GP GPI GP GPI GP GPI Method MTGP MTGP MTGP MTGP MTGP MTGP 2 4 9 18 35 70\nx x x x x x Independently size vs Exponentialconcentration (m) 11 22 45 89 177 353\nx x x x x xE3 Block 22 44 84 (GP) 174 348 696\n4: Squared", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 33, |
| "total_chunks": 48, |
| "char_count": 11285, |
| "word_count": 1715, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b251749a-e8ec-4313-8bea-eb57008abbb8", |
| "text": "Typically, multiple attempts were performed and the best results obtained were\npursued/used. One iteration consisted of a stochastic optimization step (simulated annealing)\nand/or a gradient based optimization step (Quasi Newton optimization with BFGS Hessian\nupdate) with 10,000 training data chosen uniformly from the data. This work uses a \"blocklearning\" approximation [12] which approximates the total marginal likelihood as a sum of a\nsequence of marginal likelihoods computed over blocks of points comprising the training data. The size of the block is defined by the computational resources available. The stochastic optimization step was the most time consuming part; each attempt was started with completely\nrandom parameters.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 34, |
| "total_chunks": 48, |
| "char_count": 736, |
| "word_count": 105, |
| "chunking_strategy": "semantic" |
| }, |
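A minimal sketch of the block-learning approximation [12] described above, assuming a generic `kernel(X1, X2)` covariance function and i.i.d. Gaussian observation noise; the per-block term is the standard GP log marginal likelihood and the approximation simply sums it over consecutive blocks of training data.

```python
import numpy as np

def gp_lml(X, y, kernel, noise):
    """Exact GP log marginal likelihood for one block of training data."""
    K = kernel(X, X) + noise * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (-0.5 * y @ alpha - np.log(np.diag(L)).sum()
            - 0.5 * len(X) * np.log(2.0 * np.pi))

def block_learning_lml(X, y, kernel, noise, block=1000):
    """Approximate the total marginal likelihood as a sum of per-block
    marginal likelihoods, in the spirit of the block-learning scheme of [12]."""
    return sum(gp_lml(X[i:i + block], y[i:i + block], kernel, noise)
               for i in range(0, len(X), block))
```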
| { |
| "chunk_id": "e236aaa3-245b-40c6-b3e9-fb65c86992d9", |
| "text": "The code was unoptimized MATLAB code running typically on an 8-core\nprocessor based machine. Most times, not all the cores were used for the same process; multiple\nprocesses also shared the same system. Note that the experiments in this paper do not use\nanalytical gradients for the optimization of the hyperparameters; this was a design choice made\nin the interest of stability and comparability of the optimization results across kernels. The use\nof analytical gradients can significantly reduce the total training time. Training time may also\nbe reduced significantly by various other ways including other approximations, intelligently\nsetting initial parameters, scaling the data etc.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 35, |
| "total_chunks": 48, |
| "char_count": 688, |
| "word_count": 104, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "61d5412f-f027-4b27-b73e-8e3f56450a38", |
| "text": "Model Kernel Number of training attempts, iterations\nTotal training time for successful attempt\nNN 2 attempts, 3 iterations, total training time = 78.89 hours\nMM 3 attempts, 2 iterations, total training time = 222.15 hours\nMTGP\nSQEXP 4 attempts, 1 iteration, total training time = 92.52 hours\nMS 2 attempts, 3.5 iterations, 3 iterations took 113.69 hours\nNN 3 attempts, 2 iterations, total training time = 41.07 hours\nMM 2 attempts, 1 iteration, total training time = 48.91 hours\nGPI\nSQEXP 2 attempts, 1 iteration, total training time = 47.67 hours Rather than the individual training times, the relative amount of training (under similar conditions, with different kernel) required to produce a reasonable set of parameters is of more\ninterest. Experience suggests that the NN kernel based MTGP/GP models converged faster\nand better as compared to other kernels. MTGP models based on the NN kernel outperform other kernels tested.\n• See Figures 7, 8, 9 for E1, Figures 16, 17, 18 for E2 and Figures 25, 26, 27 for E3.\n• The NN kernel is the best performing kernel of the four tested, across all block sizes tested. The MTGP based on the NN kernel produces lower SE (better estimate) and reduced NLP\n(better model) estimates than other kernels tested.\n• For small block sizes, both the NN and MM kernel are competitive; in case of E3, the MM even marginally outperforms the NN kernel for the two smallest block sizes tested. Note however\nthat considering all test sizes and all three elements, the observation is that the MM kernel\nproduces lower VAR for a higher SE, meaning that it is more confident of its SE values which\nare worse/higher than those of the NN kernel. This makes its NLP higher and the model\npoorer than an MTGP based on the NN kernel. Note also that as the test block size increases,\nthe advantage in performance of the MTGP based on the NN kernel over that based on the\nMM kernel becomes more distinctive. Not only are the SE values smaller for the NN kernel,\nthe NLP values remain in the same range whereas those of the MM kernel rise significantly. This proves that the MTGP-NN is better performing and more robust than the MTGP-MM. The latter property suggests that the MTGP-NN will be able to cope better with incomplete\ndata sets.\n• Both the MS and SQEXP kernels are not competitive with respect to the NN or MM kernels considering both the SE and NLP metrics. These kernels are discussed individually in the\nfollowing paragraphs.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 36, |
| "total_chunks": 48, |
| "char_count": 2456, |
| "word_count": 426, |
| "chunking_strategy": "semantic" |
| }, |
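For reference, the Neural Network kernel discussed above is, in its standard form (Rasmussen and Williams, 2006, eq. 4.29), the arcsine kernel; the sketch below assumes an isotropic diagonal Sigma over augmented inputs, whereas the paper's actual hyperparameterization may differ.

```python
import numpy as np

def nn_kernel(X1, X2, sigma0=1.0, sigma=1.0):
    """Neural Network (arcsine) kernel with
    Sigma = diag(sigma0^2, sigma^2, ..., sigma^2) over augmented inputs (1, x)."""
    X1a = np.hstack([np.ones((len(X1), 1)), X1])   # augment with a bias column
    X2a = np.hstack([np.ones((len(X2), 1)), X2])
    S = np.diag([sigma0 ** 2] + [sigma ** 2] * X1.shape[1])
    num = 2.0 * X1a @ S @ X2a.T
    d1 = 1.0 + 2.0 * np.einsum('ij,jk,ik->i', X1a, S, X1a)  # row-wise quad. forms
    d2 = 1.0 + 2.0 * np.einsum('ij,jk,ik->i', X2a, S, X2a)
    arg = np.clip(num / np.sqrt(np.outer(d1, d2)), -1.0, 1.0)  # numerical safety
    return (2.0 / np.pi) * np.arcsin(arg)
```

Its sigmoidal correlation profile, unlike the distance-decaying profile of the stationary kernels, is what the later discussion credits for robustness at large test block sizes.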
| { |
| "chunk_id": "f4159cc9-9ece-4192-ad63-3a2a9083b005", |
| "text": "MTGP models perform significantly better than three separate GPs (using the MTGP parameters)\nor three independently optimized GPs as information fusion improves estimation.\n• See Figures 10 and 11 for E1, Figures 19 and 20 for E2 and Figures 28 and 29 for E3.\n• For the NN kernel, the MTGP metrics are always lower than the corresponding derived GP (GP) or independent GP (GPI) metrics - lower SE (better estimate) with lower NLP (better\nmodel). This clearly demonstrates the benefits of information fusion across heterogeneous\ninformation sources so as to improve individual predictions using the MTGP model.\n• From Tables 2, 3 and 4, the average reduction in error (i.e. improvement in performance) of MTGP models over GP/GPI models for the smallest, intermediate and largest test block sizes\nare - – E1\n∗22 x 11 x 2 - 95.6% over GP, 96.2% over GPI\n∗84 x 45 x 9 - 96.1% over GP, 96.0% over GPI\n∗696 x 353 x 70 - 44.6% over GP, 38.1% over GPI\n– E2\n∗22 x 11 x 2 - 89.6% over GP, 92.3% over GPI\n∗84 x 45 x 9 - 91.6% over GP, 92.4% over GPI\n∗696 x 353 x 70 - 42.1% over GP, 37.5% over GPI\n– E3\n∗22 x 11 x 2 - 82.4% over GP, 83.5% over GPI\n∗84 x 45 x 9 - 85.9% over GP, 85.3% over GPI\n∗696 x 353 x 70 - 30.9% over GP, 22.5% over GPI\nThese numbers demonstrate significant improvements in performance, even in very large test block\nsizes, when using the MTGP-NN model for correlated data. The MS kernel was uncompetitive\n• See Figures 14 and 15 for E1, Figures 23 and 24 for E2 and Figures 32 and 33 for E3.\n• The MS kernel is not competitive with respect to the NN and MM kernels as discussed earlier. However, the MTGP using this kernel combination proves to be better than a derived GP and\nan independently optimized GP with respect to the SE metric. From the NLP perspective,\nthe MTGP-MS model is more competitive than the other GP models for small block sizes. For larger block sizes, using an independently optimized GP proves to be a more trust worthy\nmodeling option as the increase in error is met with a corresponding increase in uncertainty\n(hence low NLP) for the independent GP models. The exception to this behavior is seen in\nthe results for E3, the MTGP model is poor in this case. This is attributed to do with inferior\nparameters relevant to the element E3 obtained from the optimization process.\n• The MS kernel performs better than the SQEXP with respect to the NLP metric and hence can be trusted more (prediction error compensated by prediction uncertainty), but in two of\nthe three elements (E1 and E3), its SE was inferior to that of the SQEXP. The SQEXP kernel was uncompetitive and unreliable\n• See Tables 2, 3 and 4; see Figures 7, 8 9, 12, 13, 14 and 15 for E1, Figures 16, 17, 18, 21, 22, 23 and 24 for E2 and Figures 25, 26, 27, 30, 31, 32 and 33 for E3.\n• The MTGP-SQEXP model performs poorly in comparison with the equivalent models using the NN/MM kernels, with respect to both SE and NLP. • For elements E1 and E3, the MTGP-SQEXP has a better SE than the corresponding model based on the MS kernel; it has an SE better than the corresponding derived/independent GP\nmodels but an inferior (overconfident or low uncertainty) VAR and a fluctuating NLP trend. For element E2, the MTGP-SQEXP is worse offthan both the equivalent model based on the\nMS kernel as well as its corresponding GP models.\n• Considering the results for E2, the NLP is directly proportional to the SE and inversely to the prediction variance. 
At the smallest block size, the MTGP-SQEXP produces relatively high\nSE (with respect to e.g. MTGP-NN) but very low prediction variance. This basically suggests\nthat the model is confident of its poor estimates - a bad outcome.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 37, |
| "total_chunks": 48, |
| "char_count": 3667, |
| "word_count": 677, |
| "chunking_strategy": "semantic" |
| }, |
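The quoted reductions are presumably relative decreases in mean squared error of the MTGP with respect to each baseline; the exact computation is not stated in the paper, so the following one-liner is an assumption.

```python
def error_reduction(se_mtgp, se_baseline):
    """Relative reduction (in %) of the MTGP's mean squared error
    with respect to a GP or GPI baseline (assumed form of the quoted figures)."""
    return 100.0 * (se_baseline - se_mtgp) / se_baseline
```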
| { |
| "chunk_id": "bbced888-7c85-47aa-984a-747c21a8b75a", |
| "text": "This results in a high NLP\nand poor model. As the block size increases, the prediction variance increases more relative to\nthe prediction error resulting in the decreasing NLP trend. For elements E1 and E3, the largest\nblock size results in a stronger increase in prediction error than the variance in the prediction\nresulting in an increase in NLP. Overall, the MTGP-SQEXP model is poor.\n• The SQEXP kernel is a limiting case of the MM kernel; both are stationary kernels. Considering the behavior of the GPI model using the SQEXP kernel and its competitive results with respect\nto those of the GPI-MM kernel, it is possible that the poor performance of the MTGP-SQEXP\n(as compared to the MTGP-MM) is due to poor optimization output (a bad local minima). Further investigation on this result is ongoing but the findings are not expected to change the\nconclusions of this paper.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 38, |
| "total_chunks": 48, |
| "char_count": 878, |
| "word_count": 149, |
| "chunking_strategy": "semantic" |
| }, |
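For reference, the standard forms of the two stationary kernels compared above are written out below (with input distance r and length-scale l); the limiting-case relationship is that the squared exponential arises from the Matern family as the smoothness parameter tends to infinity.

```latex
% Matern 3/2 (MM) and squared exponential (SQEXP) kernels, standard forms.
% The SQEXP is the \nu \to \infty limit of the Matern class.
k_{\mathrm{MM}}(r) = \sigma_f^2 \left(1 + \frac{\sqrt{3}\,r}{\ell}\right)
                     \exp\!\left(-\frac{\sqrt{3}\,r}{\ell}\right),
\qquad
k_{\mathrm{SQEXP}}(r) = \sigma_f^2 \exp\!\left(-\frac{r^2}{2\ell^2}\right)
```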
| { |
| "chunk_id": "98a548c1-2fae-4eb7-a9dc-2b210d466392", |
| "text": "In general, the stationary kernels tested seemed to have an inadequate increase in prediction uncertainty with increasing test block size and worsening predictions. This leads a higher NLP metric\nand a poor model that is overly confident of its worsening predictions. This behavior can be attributed to the correlation profile of the stationary kernels tested - they all share the \"correlation\ndecreases with increasing distance of support data from point of interest\" trend. This results in\nstationary kernels not being able to cope with large test block sizes as the support data is farther\naway (i.e. less correlated and not of much use). In contrast, the nonstationary NN kernel has a\nsigmoidal profile that can handle this issue across a range of test block sizes. The SE metric taken alone can be misleading. The experiments have reinforced the need for a multimetric analysis.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 39, |
| "total_chunks": 48, |
| "char_count": 883, |
| "word_count": 143, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "75e1fbc7-4a35-49f2-b18e-ddf5eb78cb22", |
| "text": "The SE metric only provides information on the prediction error but it does not\ndescribe the prediction uncertainty which is very important in understanding if a model is reliable\nor otherwise. The VAR and NLP metrics provided key insights on the difference in performance\nbetween different models and kernels. A model that is very confident of its poor predictions is\nunreliable (as was the case for the SQEXP kernel). Worsening predictions (due to increasing test\nblock size) is itself not a bad outcome, provided it is met with an equivalent increase in prediction\nuncertainty.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 40, |
| "total_chunks": 48, |
| "char_count": 580, |
| "word_count": 94, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5e1b74a8-8498-4b4a-9550-c4d4bce54ea8", |
| "text": "(a) Predicted E1 concentrations over entire region superimposed with input data (b) 2D section view of the predicted E1 concentrations (c) Uncertainty in E1 predictions constituting the 2D section view Figure 4: Figures 4(a), 4(b) and 4(c) respectively show the predicted E1 concentrations (over the entire region) superimposed with the input data, a 2D section-view of the output data and the uncertainty in the predicted concentrations\nfor the 2D view. Expectedly, the uncertainty is low around regions where input/given data exist and rapidly rises for\npredictions away from such areas - typically, the fringe areas.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 41, |
| "total_chunks": 48, |
| "char_count": 619, |
| "word_count": 95, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c8c3985f-8d7b-44f6-a67e-da8d4b088721", |
| "text": "The 2D section view shows two red regions corresponding\ntwo regions of high E1 concentration. The corresponding regions in Figures 5 and 6 show low E2 and E3 concentrations\nrespectively. (a) Predicted E2 concentrations over entire region superimposed with input data (b) 2D section view of the predicted E2 concentrations (c) Uncertainty in E2 predictions constituting the 2D section view Figure 5: Figures 5(a), 5(b) and 5(c) respectively show the predicted E2 concentrations (over the entire region) superimposed with the input data, a 2D section-view of the output data and the uncertainty in the predicted concentrations\nfor the 2D view. Expectedly, the uncertainty is low around regions where input/given data exist and rapidly rises for\npredictions away from such areas - typically, the fringe areas.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 42, |
| "total_chunks": 48, |
| "char_count": 806, |
| "word_count": 125, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "419fa71a-8404-4e39-8cf2-0882ca568060", |
| "text": "The 2D section view shows two violet regions corresponding\ntwo regions of low E2 concentration. The corresponding regions in Figure 4 show high E1 concentration and those from\nFigure 6 show low E3 concentration. (a) Predicted E3 concentrations over entire region superimposed with input data (b) 2D section view of the predicted E3 concentrations (c) Uncertainty in E3 predictions constituting the 2D section view Figure 6: Figures 6(a), 6(b) and 6(c) respectively show the predicted E3 concentrations (over the entire region) superimposed with the input data, a 2D section-view of the output data and the uncertainty in the predicted concentrations\nfor the 2D view. Expectedly, the uncertainty is low around regions where input/given data exist and rapidly rises for\npredictions away from such areas - typically, the fringe areas. The 2D section view shows two violet regions corresponding\ntwo regions of low E3 concentration. The corresponding regions in Figure 4 show high E1 concentration and those from\nFigure 5 show low E2 concentration. On the basis of this study, an attempt is made in answering two fundamental questions - (1) How can\nI know if I have a good MTGP model ? and (2) Which GP model or kernel should I use ? By no\nmeans is this intended to be a ready-made prescription, universal formula or short-cut to be used as\na substitute for context specific and statistically apt decisions in developing Gaussian process models. Rather, this is a reflection of the authors' experiences based on the scope of this and past work in other\ndomains such as terrain modeling. Note that there are numerous very sophisticated GP techniques\n(kernels, approximation etc.) which are beyond the scope of this work and which may change some of\nthese inferences.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 43, |
| "total_chunks": 48, |
| "char_count": 1760, |
| "word_count": 289, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9095df78-131c-4c2a-912e-721bce8e44cc", |
| "text": "5.1 How can I know if I have a good MTGP model ? To effectively develop and validate MTGP models, the experiments are suggestive of the following - The use of multiple kernel from the same family would provide a good method for validating the\ngeneral behavior/trends of the model in question. For instance when developing an MTGP model\nbased on the SQEXP kernel, developing a Matern 3/2 kernel based MTGP model could provide a\nmeans to validate the behavior of the MTGP-SQEXP model. The model hyperparameter optimization performed in this paper is based on maximizing the\nmarginal likelihood. Typically, error metrics such as the SE being sufficiently low is suggestive\nof the model being good. A cross validation test could also be performed to ensure that this is\nindeed the case. However, it is also important to check if the model in question is under/over\nconfident (high/low uncertainty) for a given level of error. This can be done, not as a standalone\ntest, but in comparison with alternative models or test cases. When developing a MTGP model, it is a good idea to compare with an equivalent derived GP model\nand an independently optimized GP model. The availability of more information and the effective\nuse of this information through the MTGP model should ideally result in significantly lower error\nmetrics (e.g. SE) with a significant improvement in confidence (i.e. decrease in prediction variance,\nVAR) and a net reduction in the Negative Log Loss (NLP) metric.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 44, |
| "total_chunks": 48, |
| "char_count": 1477, |
| "word_count": 246, |
| "chunking_strategy": "semantic" |
| }, |
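The comparison advocated above can be mechanized as a simple sanity check; a minimal sketch, with an illustrative function name and made-up numbers (not values from the paper's tables).

```python
def compare_mtgp(mtgp, gp, gpi):
    """Check the expectation above: the MTGP should show lower SE, VAR and NLP
    than both the derived GP and the independently optimized GP (GPI).

    Each argument is a dict of cross-validation means for 'SE', 'VAR', 'NLP'.
    """
    for name, base in (("derived GP", gp), ("independent GP", gpi)):
        for m in ("SE", "VAR", "NLP"):
            verdict = "improved" if mtgp[m] < base[m] else "NOT improved"
            print(f"{m} vs {name}: {mtgp[m]:.2f} vs {base[m]:.2f} -> {verdict}")

# Illustrative numbers only:
compare_mtgp({"SE": 2.0, "VAR": 1.5, "NLP": 1.7},
             {"SE": 40.0, "VAR": 15.0, "NLP": 3.3},
             {"SE": 55.0, "VAR": 70.0, "NLP": 3.6})
```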
| { |
| "chunk_id": "68ce2333-c586-4a95-9e7a-d2c35cf5e178", |
| "text": "It may be useful to design a variety of different test cases (e.g. different test block sizes) and check if\nthe performance metrics behave as expected. Such a test would also be indicative of the robustness\nof the model. It may be useful to optimize independent GP models for each task and use these hyperparameters\nas the initial parameters for the MTGP model. 5.2 Which GP model or kernel should I use ?", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 45, |
| "total_chunks": 48, |
| "char_count": 405, |
| "word_count": 73, |
| "chunking_strategy": "semantic" |
| }, |
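The warm-start suggestion above might look as follows; a minimal sketch using a single-task squared exponential GP and numerical-gradient BFGS (echoing the paper's choice of not using analytical gradients), with toy data standing in for the three elements. The subsequent MTGP optimization step is not shown.

```python
import numpy as np
from scipy.optimize import minimize

def neg_lml(log_theta, X, y):
    """Negative log marginal likelihood of a single-task GP with a squared
    exponential kernel; log_theta = log of (signal var., length-scale, noise var.)."""
    sf2, ell, sn2 = np.exp(log_theta)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise sq. distances
    K = sf2 * np.exp(-0.5 * d2 / ell ** 2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    a = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return 0.5 * y @ a + np.log(np.diag(L)).sum() + 0.5 * len(X) * np.log(2 * np.pi)

# Toy stand-in for the three elements' data sets.
rng = np.random.default_rng(0)
tasks = [(rng.uniform(0, 10, (50, 3)), rng.normal(size=50)) for _ in range(3)]

# One independent GP per task; the concatenated optima seed the MTGP search.
theta_init = np.concatenate(
    [minimize(neg_lml, np.zeros(3), args=(X, y), method="BFGS").x
     for X, y in tasks])
```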
| { |
| "chunk_id": "eee0631d-179a-4f8f-a270-216bdd18faab", |
| "text": "This obviously depends on the data set at hand and the constraints of the modeling problem. The\nfollowing are purely indicative, based on our experiences in multiple problem domains [1, 8, 22] and may\nchange considering alternative kernels, other novel ways of treating the modeling problem or approximation methods. Time, complexity, computational resources are a premium. I need a method that just works: Independently optimized GP models using the Neural Network kernel or the Matern 3/2 kernel would\nbe a competitive solution. Note that the outcome will only be as good as the data being modeled\nand other information sources cannot be leveraged.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 46, |
| "total_chunks": 48, |
| "char_count": 650, |
| "word_count": 103, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "26a4a74b-f451-4c4c-aa82-0a2606ef3b30", |
| "text": "I need the best possible model over a range of test sizes and I do not know much about my data:\nMulti-task GP models using the Neural Network kernel would be a competitive solution. I need the best possible model over a range of test sizes and I know how my data changes: Multi-task\nGP models with a kernel representative of the variation of the data e.g. a uniform variation (no\nsudden changes in trend) can be effectively modeled using the Matern 3/2 or Squared Exponential\nkernels. I need a model that can cope with sparse data and/or incomplete data sets: Neural network kernel\nbased GP or MTGP models depending on the computational complexity constraints and model\naccuracy requirements. I have \"good\" multi-attribute data. I need to model this well and fast: Independent GP models for\neach of the attributes, using either a Neural Network kernel or some other kernel more suited to the\ndata, would provide a competitive solution. The use of independent GP models will result in the\nability to parallelize the modeling process and significantly reduce the possibility of poor models\n(poor local minima) as a consequence of a reduction in number of model parameters. Note that\n\"good\" here is application dependent but would certainly require being well sampled, not noisy\nand reasonably complete (no large gaps where other information modalities can be leveraged).", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 47, |
| "total_chunks": 48, |
| "char_count": 1368, |
| "word_count": 227, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "540752cd-23d4-4e05-9c51-723b31059521", |
| "text": "This paper studied the problem of geological resource modeling using multi-task Gaussian processes\n(MTGPs). The concentrations of three elements were modeled and predicted over a region of interest\nusing the MTGP as well as individual Gaussian processes (GPs) for each of these quantities. The\npaper demonstrates that MTGPs perform significantly better than individual GPs at the modeling\nproblem as they effectively integrate heterogeneous sources of information (concentrations of individual\nelements) to improve the individual predictions of each of them. The benefits of information integration\nusing the MTGP as against independent GPs for the task of geological resource modeling have been\nquantified by a multi-metric and multi-test-size cross validation study that performed both an exact\nand an independent comparison between MTGPs and GPs. Multi-task Gaussian process models based\non the Neural Network kernel was shown to be a competitive and robust option across a range of test\nblock sizes. This work has been funded by the Rio Tinto Centre for Mine Automation.", |
| "paper_id": "1210.1928", |
| "title": "Information fusion in multi-task Gaussian processes", |
| "authors": [ |
| "Shrihari Vasudevan", |
| "Arman Melkumyan", |
| "Steven Scheding" |
| ], |
| "published_date": "2012-10-06", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1210.1928v3", |
| "chunk_index": 48, |
| "total_chunks": 48, |
| "char_count": 1074, |
| "word_count": 161, |
| "chunking_strategy": "semantic" |
| } |
| ] |