[
{
"chunk_id": "5ae1a403-b778-4b7d-8e83-804edef192c6",
"text": "Meta-learning: searching in the model space. Włodzisław Duch and Karol Grudziński\nDepartment of Computer Methods, Nicholas Copernicus University,\nGrudziądzka 5, 87-100 Toruń, Poland. WWW: http://www.phys.uni.torun.pl/kmk\n\nAbstract. There is no free lunch, no single learning algorithm that will outperform other algorithms on all data. In practice different approaches are tried and the best algorithm is selected. An alternative solution is to build new algorithms on demand by creating a framework that accommodates many algorithms. The best combination of parameters and procedures is searched for here in the space of all possible models belonging to the framework of Similarity-Based Methods (SBMs). Such a meta-learning approach gives a chance to find the best method in all cases. Issues related to meta-learning and first tests of this approach are presented.\n\n1 Introduction\n\nThe 'no free lunch' theorem [1] states that there is no single learning algorithm that is inherently superior to all the others. The flip side of this theorem is that there are always some data on which an algorithm under evaluation will give superior results. Neural and other computational intelligence methods are usually tried on a few selected datasets on which they work well.\n\nThe meta-learning approach described here involves a search for the best model in the space of all models that may be generated within the SBM framework. The simplest models are created at the beginning, and new types of parameters and procedures are added, allowing more complex models to be explored. Constructive neural networks, as well as genetic procedures for the selection of neural architectures, increase model complexity by adding the same type of parameters, generating different models within a single method. Meta-learning requires the creation of models based on different methods, introducing new types of parameters and procedures.\n\nIn the next section we briefly introduce the SBM framework, explain which methods may be generated within SBM, present the meta-learning approach used to select the best method and describe our preliminary experience analyzing a few datasets. Conclusions and plans for future developments close this paper. Although the meta-learning approach is quite general, this paper is focused on classification methods.\n\n2 A framework for meta-learning",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 0,
"total_chunks": 16,
"char_count": 2304,
"word_count": 328,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8b524c0a-9025-4a44-b61c-bc52a6a83d9b",
"text": "A review of many approaches to classification has been done in the StatLog European Community project [2]. The accuracy of 24 neural-based, pattern recognition and statistical classification systems has been compared on 11 large datasets by Rohwer and Morciniec [3]. No consistent trends have been observed in the results of these large-scale studies.\n\nA model is an instance of a method with specific values of parameters. A framework for meta-learning should allow for the generation and testing of models derived from different methods.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 1,
"total_chunks": 16,
"char_count": 463,
"word_count": 74,
"chunking_strategy": "semantic"
},
{
"chunk_id": "34ce59aa-44c0-4ff2-906f-d244da3290c7",
"text": "Frequently simple methods, such as the nearest neighbor methods or n-tuple methods, outperform more sophisticated approaches [3].\n\nIn real-world applications a good strategy is to find the best algorithm that works for a given dataset by trying many different approaches. This may not be easy.\n\nIt should be sufficiently rich to accommodate standard methods. The SBM framework introduced recently [4, 5] seems to be most suitable for the purpose of meta-learning. It covers all methods based on computing similarity between the new case and cases in the training library. It includes such well-known methods as the k-Nearest Neighbor (k-NN) algorithm and its extensions, originating mainly from the machine learning and pattern recognition fields.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 2,
"total_chunks": 16,
"char_count": 761,
"word_count": 118,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f3d43a2d-f03c-42dd-8947-29b540a00433",
"text": "First, not all algorithms are easily available; for example, there is no research or commercial software for some of the best algorithms. Second, each program usually requires a different data format. Third, programs have many parameters and it is not easy to master them all. Our strategy is to use the framework for Similarity-Based Methods (SBM) introduced recently [4, 5]. Instead of focusing on improving a single method, a search for the best method belonging to the SBM framework should select an optimal combination of parameters and procedures for a given problem.\n\nThe framework also covers neural methods such as the popular multilayer perceptron networks (MLP) and networks based on radial-basis functions (RBF). Methods that belong to the SBM are based on a specific parameterization of the p(Ci|X;M) posterior classification probability, where the model M involves various procedures, parameters and optimization methods.\n\nBelow N is the number of features, K is the number of classes, vectors are in bold face while vector components are in italics:\n∆j(Xj;Yj) calculates the similarity of the features Xj, Yj, j = 1..N;\nD(X,Y) = D({∆j(Xj;Yj)}) is a function that combines similarities of features to compute similarities of vectors; if the similarity function selected has metric properties the SBM may be called the minimal distance (MD) method;\nk is the number of reference vectors taken into account in the neighborhood of X;\nG(D) = G(D(X,R)) is the weighting function estimating the contribution of the reference vector R to the classification probability of X;\n{R} is a set of reference vectors created from the set of training vectors T = {Xp} by some selection and optimization procedure;\npi(R), i = 1..K is a set of class probabilities for each reference vector;\nE[T;M] or E[V;M] is a total cost function that is minimized at the training stage; it may include a misclassification risk matrix R(Ci,Cj), i,j = 1..K;\nK(·) is a kernel function, scaling the influence of the error, for a given training example, on the total cost function;\nS(·) is a function (or a matrix) evaluating the similarity (or more frequently dissimilarity) of the classes; if class labels are soft or are given by a vector of probabilities pi(X), the classification task is in fact a mapping; the S(Ci,Cj) function allows inclusion of a large number of classes, \"softening\" the labeling of objects that are given for classification.\n\nThe following steps may be distinguished in the supervised classification problem based on similarity estimations:\n1) Given a set of objects (cases) {Op}, p = 1..n and their symbolic labels C(Op), define useful numerical features Xjp = Xj(Op), j = 1..N characterizing these objects. This preprocessing step involves computing various characteristics of images, spatio-temporal patterns, replacing symbolic features by numerical values etc.\n2) Find a measure suitable for evaluation of the similarity or dissimilarity of objects represented by vectors in the feature space, D(X,Y).\n3) Create reference (or prototype) vectors R in the feature space using the similarity measure and the training set T = {Xp} (a subset of all cases given for classification).\n4) Define a function or a procedure to estimate the probability p(Ci|X;M), i = 1..K of assigning vector X to class Ci. The set of reference vectors, the similarity measure, the feature space and the procedures employed to compute probability define the classification model M.\n5) Define a cost function E[T;M] measuring the performance accuracy of the system on a training set T of vectors; a validation set V, composed of cases that are not used directly to optimize the model M, may also be defined and the performance E[V;M] measuring the generalization abilities of the model assessed.\n6) Optimize the model Ma until the cost function E[T;Ma] reaches a minimum on the set T or on the validation set, E[V;Ma].\n7) If the model produced so far is not sufficiently accurate, add new procedures/parameters, creating a more complex model Ma+1.\n8) If a single model is not sufficient, create several local models M(l)a and use an interpolation procedure to select the best model or combine results, creating ensembles of models.\nAll these steps are mutually dependent and involve many choices, described below in some detail.\n\nVarious choices of parameters and procedures in the context of network computations lead to a large number of similarity-based classification methods. Some of these models are well known and some have not yet been used. We have explored so far only a few aspects of this framework, describing various procedures of feature selection, parameterization of similarity functions for objects and single features, selection and weighting of reference vectors, creation of ensembles of models and estimation of classification probability using ensembles, definitions of cost functions, choice of optimization methods, and various network realizations of the methods that may be created by combination of all these procedures [5, 6, 7, 8]. A few methods that may be generated within the SBM framework are mentioned below.\n\nThe k-NN model p(Ci|X;M) is parameterized by p(Ci|X; k, D(·), {X}), i.e. the whole training dataset is used as the reference set.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 3,
"total_chunks": 16,
"char_count": 5246,
"word_count": 845,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2b012fc7-c976-44fc-9a08-9a3e7b2d452a",
"text": "The whole training dataset is used as the reference set, the k nearest prototypes are included with the same weight, and a typical distance function, such as the Euclidean or the Manhattan distance, is used. Probabilities are p(Ci|X;M) = Ni/k, where Ni is the number of neighboring vectors belonging to the class Ci. The most probable class is selected as the winner.\n\nThe final classification model M is built by selecting a combination of all available elements and procedures. A general similarity-based classification model may include all or some of the following elements:",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 4,
"total_chunks": 16,
"char_count": 557,
"word_count": 91,
"chunking_strategy": "semantic"
},
{
"chunk_id": "63a98c08-1737-4065-9b2c-23a4c29c9c81",
"text": "M = {X(O), ∆(·,·), D(·,·), k, G(D), {R}, {pi(R)}, E[·], K(·), S(·)}, where X(O) is the mapping defining the feature space and selecting the relevant features.\n\nMany variants of this basic model may be created. Instead of enforcing exactly k neighbors, the radius r may be used as an adaptive parameter. Using several radial parameters and hard-sphere weighting functions, the Restricted Coulomb Energy (RCE) algorithm is obtained [9]. Selection of the prototypes and optimization of their positions leads to old and new variants of the LVQ algorithms [10]. Gaussian classifiers, fuzzy systems and RBF networks are the result of soft weighting and optimization of the reference vectors. Neural-like network realizations of the RBF and MLP types are also special cases of this framework [6]. The SBM framework is so rich that, instead of exploring all the methods that may be generated within it, an automatic search for the best method and model is needed: a meta-learning level.\n\nThe evaluation function Ca(Ml) returns the classification accuracy of the model Ml on a validation set; this accuracy refers to a single model or to an ensemble of models selected so far. Let N denote the initial number of models from which the selection is made and K the number of models that should be selected. The model sequence selection algorithm proceeds as follows:",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 5,
"total_chunks": 16,
"char_count": 1217,
"word_count": 192,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ea0912ce-05b8-4860-b53d-2f1c3002935d",
"text": "3 Meta-learning issues\n\nParameters of each model are optimized and a search is made in the space of all models Ma for the simplest and most accurate model that accounts for the data. Optimization should be done using validation sets (for example in crossvalidation tests) to improve generalization. Starting from the simplest model, such as the nearest neighbor model, a qualitatively new \"optimization channel\" is opened by adding the most promising new extension, a set of parameters or a procedure that leads to the greatest improvement. Once the new model is established and optimized, all extensions of the model are created and tested, and another, better model is selected. The model may be more or less complex than the previous one (for example, feature selection or selection of reference vectors may simplify the model). The search in the space of all SBM models is stopped when no significant improvements are achieved by new extensions.\n\nIn the case of the standard k-NN, the classifier is used with different values of k on a training partition using the leave-one-out algorithm and applied to the test partition. The predicted class is computed on the majority basis. To increase the classification accuracy one may first optimize k (k1 ≤ k ≤ k2) and select m ≤ k2 − k1 best classifiers for an ensemble model. In the case of weighted k-NN either k is optimized first and then the best models are created by optimizing all weights, or the best models are selected after optimization for a number of k values (a more accurate, but costly procedure).\n\nSelecting a subset of the best models that should be included in an ensemble is not an easy task, since the number of possibilities grows combinatorially and obviously not all subsets may be checked. A variant of the best-first search (BFS) algorithm has been used for this selection. We have already used the BFS technique for the optimization of weights and for the selection of the best attributes [7, 8].\n\n1. Initialize:\n(a) Create a pool of M initial models, M = {Ml}, l = 1...M.\n(b) Evaluate all initial models on the validation set and arrange them in decreasing order of accuracy, Ca(Mi) ≥ Ca(Mj) for i < j.\n(c) Select the best model M1 from the M pool as the reference.\n(d) Remove it from the pool of models.\n2. Repeat until the pool of models is empty:\n(a) For each model Ml in the pool evaluate its performance starting from the current reference model.\n(b) Select the reference + Ml model with the highest performance; if several models have similar performance, select the one with the lowest complexity.\n(c) If there is no significant improvement, stop and return the current reference model; otherwise accept the current reference model + Ml as the new reference.\n(d) Remove the Ml model from the pool of available models.\n\nAt each step at most M − L sequences consisting of L = M − 1...1 models are evaluated. Frequently the gain in performance may not justify the additional complexity of adding a new model to the final sequence. The result of this algorithm is a sequence of models of increasing complexity, without re-optimization of previously created models. This \"best-first\" algorithm finds a sequence of models that gives the highest classification accuracy on the validation partition. In the case of k-NN-like models, calculations may be done on the training partition in the leave-one-out mode instead of on the validation partition.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 6,
"total_chunks": 16,
"char_count": 3508,
"word_count": 587,
"chunking_strategy": "semantic"
},
{
"chunk_id": "bd186255-b5ee-4a48-9b80-e471454879c0",
"text": "The BFS algorithm can be used for majority voting of models derived from the weighted-NN method based on minimization, or based on the standard k-NN with different k, or for the selection of an optimal sequence of any models. The algorithm described above is prone to local minima, as is any \"best-first\" or gradient-based algorithm. The beam search algorithm for selection of the best sequence of models is more computationally expensive, but it has a better chance to find a good sequence of models.\n\nThe scheme allows many parameters and procedures to be added; new models may also be created on demand if adding the models created so far does not improve results. Some model optimizations, such as the minimization of the weights of features in the distance function, may be relatively expensive. Re-optimization of models in the pool may be desirable, but it would increase the computational costs significantly. Therefore we will investigate only the simplest \"best-first\" sequence selection algorithm, as described above.\n\n4 Numerical experiments\n\nWe have performed preliminary numerical tests on several datasets. The models taken into account include optimization of k, optimization of the distance function, feature selection and optimization of the weights si scaling the distance function:\n\nD(X,Y)^α = Σ_{i=1..n} s_i |X_i − Y_i|^α   (1)\n\nAt present only α = 1, 2 is considered (Euclidean and Manhattan weighted functions), and two other distance functions, Chebyschev and Camberra [6], but full optimization of α should soon be added. Various methods of optimization may be used, but we have implemented only the simplex method, which may lead to weighted models with relatively large variance.\n\nTable 1: Results for the Monk-1 problem with k-NN as the reference model.\nMethod | Acc. Train % | Acc. Test %\nref = k-NN, k=1, Euclidean | 76.6 | 85.9\nref + k=3 | 82.3 | 80.6\nref + Camberra distance | 79.8 | 88.4\nref + feature selection 1, 2, 5 | 96.8 | 100.0\nref + feature weights | 99.2 | 100.0\nref = k-NN, Euclid, weights | 99.2 | 100.0\nref + Camberra distance | 100.0 | 100.0\n\nIn the Monk 2 problem the best combination sequence of models was k-NN with the Camberra distance function, giving a training accuracy of 89.9% and a test set accuracy of 90.7%. In the Monk 3 case a weighted distance with just 2 non-zero coefficients gave a training accuracy of 93.4% and a test result of 97.2%.\n\n4.2 Hepatobiliary disorders\n\nThe data contain four types of hepatobiliary disorders found in 536 patients of a university-affiliated Tokyo-based hospital. Each case is described by 9 biochemical tests and the sex of the patient. The same 163 cases as in [13] were used as the test data.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 7,
"total_chunks": 16,
"char_count": 2608,
"word_count": 426,
"chunking_strategy": "semantic"
},
{
"chunk_id": "111a5415-5c7e-44ce-831f-16ba8868622d",
"text": "The goal of further search for the best model should therefore include not only accuracy but also reduction of variance, i.e. stabilization of the classifier.\n\n4.1 Monk problems\n\nThe artificial dataset Monk-1 [11] is designed for rule-based symbolic machine learning algorithms (the data was taken from the UCI repository [12]). 6 symbolic features are given as input, 124 cases are given for training and 432 cases for testing. We are interested here in the performance of the model selection procedures.\n\nThe meta-learning algorithm starts from the reference model, a standard k-NN with k = 1 and the Euclidean function. The leave-one-out training accuracy is 76.6% (85.9% on the test set). At the first level the choice is: optimization of k, optimization of the type of similarity function, selection of features and weighting of features.\n\nThe class distribution in the training partition is 34.0%, 23.9%, 22.3% and 19.8%. This dataset has strongly overlapping classes and is rather difficult. With 49 crisp logic rules only about 63% accuracy on the test set was achieved [14], and over 100 fuzzy rules based on Gaussian or triangular membership functions give about 75-76% accuracy. The nearest neighbor algorithms usually do not work well in such cases. The reference k-NN model with k=1 and the Euclidean distance function gave 72.7% in the leave-one-out run on the training set (77.9% on the test set). Although only the training set results are used in the model search, results on the test set are given here to show whether there is any correlation between the training and the test results. The search for the best model proceeded as follows:\n\nFirst level",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 8,
"total_chunks": 16,
"char_count": 1606,
"word_count": 259,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b9820b17-d62b-40ce-a6c8-fe48f42c35d9",
"text": "Results are summarized in the Table below. Feature weighting (1, 1, 0.1, 0, 0.9, 0), implemented here using a search procedure with a 0.1 quantization step, already at the first level of search for the best extension of the reference model achieves 100% accuracy on the test set and 99.2%, or just a single error, in the leave-one-out estimations on the training set.\n\n1. Optimization of k finds the best result with k=1, accuracy 72.7% on training (test 77.9%).\n2. Optimization of the distance function gives a training accuracy of 79.1% with the Manhattan function (test 77.9%).",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 9,
"total_chunks": 16,
"char_count": 575,
"word_count": 95,
"chunking_strategy": "semantic"
},
{
"chunk_id": "def8b1b2-8893-49fe-8301-4fba233c2e08",
"text": "Additional complexity may not justify further search. Selection of the optimal distance for the weighted k-NN reference model achieves 100% on both the training and the test set, therefore the search procedure is stopped.\n\n3. Selection of features removed the feature Creatinine level, giving 74.3% on the training set (test 79.1%).\n4. Weighting of features in the Euclidean distance function gives 78.0% on training (test 78.5%). Final weights are [1.0, 1.0, 0.7, 1.0, 0.2, 0.3, 0.8, 0.8, 0.0].\n\nThe best training result, 79.1% (although 77.9% is not the best test result), is obtained by selecting the Manhattan function, therefore at the second level this becomes the reference model:\n1. Optimization of k finds the best result with k=1, accuracy 72.7% on training (test 77.9%).\n2. Selection of features did not remove anything, leaving 79.1% on the training set (test 77.9%).\n3. Weighting of features in the Manhattan distance function gives 80.1% on training (test 80.4%); final weights are [1.0, 0.8, 1.0, 0.9, 0.4, 1.0, 1.0, 1.0, 1.0].\n\nAt the third level the weighted Manhattan distance, giving 80.1% on training (test 80.4%), becomes the reference model, and since neither optimization of k nor selection of features improves the training (or test) result, this becomes the final model.\n\nSince the classes strongly overlap, the best one can do in such cases is to identify the cases that can be reliably classified and assign the remaining cases to pairs of classes [16].\n\n4.3 Ionosphere data\n\nThe ionosphere data was taken from the UCI repository [12]. It has 200 vectors in the training set and 150 in the test set. Each data vector is described by 34 continuous features and belongs to one of two classes. This is a difficult dataset for many classifiers, such as decision trees: it is rather small, the number of features is quite large, and the distribution of vectors among the two classes in the training set is equal, but in the test set only 18% of the vectors are from the first class and 82% from the second. Probably for this dataset discovery of an appropriate bias on the training set is not possible.\n\nThe reference k-NN model with k=1 and the Euclidean distance function gave 86.0% (92.0% on the test set). The search for the best model proceeded as follows:\n\nFirst level",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 10,
"total_chunks": 16,
"char_count": 2258,
"word_count": 380,
"chunking_strategy": "semantic"
},
{
"chunk_id": "44a4da80-f060-4fc5-a3ae-acd011fe1f7f",
"text": "For comparison, results of several other systems (our calculations or [15]) on this dataset are given below.\n\nTable 2: Results for the hepatobiliary disorders. Accuracy on the training and test sets.\nMethod | Training set | Test set\nModel optimization | 80.1 | 80.4\nFSM, Gaussian functions | 93 | 75.6\nFSM, 60 triangular functions | 93 | 75.8\nIB1c (instance-based) | – | 76.7\nC4.5 decision tree | 94.4 | 75.5\nCascade Correlation | – | 71.0\nMLP with RPROP | – | 68.0\nBest fuzzy MLP model | 75.5 | 66.3\nLDA (statistical) | 68.4 | 65.0\nFOIL (inductive logic) | 99 | 60.1\n1R (rules) | 58.4 | 50.3\nNaive Bayes | – | 46.6\nIB2-IB4 | 81.2-85.5 | 43.6-44.6\n\n1. Optimization of k finds the best result with k=1, accuracy 86.0% on training (test 92.0%).\n2. Optimization of the distance function gives a training accuracy of 87.5% with the Manhattan function (test 96.0%).\n3. The selection procedure leaves 10 features and gives 92.5% on the training set (test 92.7%).\n4. Weighting of features in the Euclidean distance function gives 94.0% on training (test 87.3%); only 6 non-zero weights are left.\n\nThe best training result, 94.0%, is obtained from feature weighting. Unfortunately this seems to be sufficient to overfit the data: due to the lack of balance between the training and the test set, overtraining may be quite easy. The second level search starts from:\n1. Optimization of k does not improve the training result.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 11,
"total_chunks": 16,
"char_count": 1346,
"word_count": 219,
"chunking_strategy": "semantic"
},
{
"chunk_id": "84fff4a1-388d-4dfb-8c46-c7ebbefe9fb3",
"text": "2. Optimization of the distance function gives 95.0% with the Manhattan function (test 88.0%).\n3. Selection of features did not change the training result.\n\nAll further combinations of models reduce the training set accuracy. It is clear that there is no correlation between the results on the training and on the test set in this case.\n\nThe confusion matrix is:\n\n| 25  3  2  3 |\n|  5 40  4  2 |\n|  4  1 26  4 |\n|  2  0  2 40 |   (2)",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 12,
"total_chunks": 16,
"char_count": 409,
"word_count": 83,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b72b6f2a-0afb-4250-a3bb-8019e17f8aa6",
"text": "Meta-learning combined with the framework of similarity-based methods leads to a search in the space of models derived from algorithms that are created by combining different parameters and procedures defining the building blocks of learning algorithms. Although in this paper only classification problems were considered, the SBM framework is also useful for associative memory algorithms, pattern completion, missing values [5], approximation and other computational intelligence problems.\n\nIn this paper first meta-learning results have been presented. Preliminary results with only a few extensions to the reference k-NN model, illustrated on the Monk problems, the hepatobiliary disorders and the ionosphere data, show how the search in the model space automatically leads to more accurate solutions. Even for quite difficult data it may be possible to find classification models that achieve 100% accuracy on the test set.\n\n[3] R. Rohwer, M. Morciniec, A Theoretical and Experimental Account of n-tuple Classifier Performance. Neural Computation 8 (1996) 657–670\n[4] W. Duch, A framework for similarity-based classification methods. In: Intelligent Information Systems VII, Malbork, Poland (1998) 288–291\n[5] W. Duch, G.H.F. Diercksen, Classification, Association and Pattern Completion using Neural Similarity Based Methods. Applied Mathematics and Computer Science 10 (2000) 101–120\n[6] W. Duch, Similarity-Based Methods. Control and Cybernetics 4 (2000) xxx-yyy\n[7] W. Duch, K. Grudziński, Weighting and selection of features in Similarity-Based Methods. In: Intelligent Information Systems VIII, Ustroń, Poland (1999) 32–36\n[8] W. Duch, K. Grudziński, Search and global minimization in similarity-based methods.",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 13,
"total_chunks": 16,
"char_count": 1654,
"word_count": 228,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9bfe258f-e6e3-43eb-8dc5-7c7a6f96f8f4",
"text": "For the hepatobiliary disorders a model with the highest accuracy for real medical data has been found automatically. For some datasets, such as the ionosphere, there seems to be no correlation between the results on the training and on the test set.\n\nJoint Conference on Neural Networks (IJCNN), Washington (1999), paper no. 742\n[9] D.L. Reilly, L.N. Cooper, C. Elbaum, A neural model for category learning. Biological Cybernetics 45 (1982) 35–",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 14,
"total_chunks": 16,
"char_count": 426,
"word_count": 69,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b4058f04-752b-404e-8a6b-3d79076f7418",
"text": "Although the use of a validation set (or the use of crossvalidation partitions) to guide the search process for the new models should prevent them from overfitting the data, at the same time enabling them to discover the best bias for the data, other ways of model selection, such as the minimum description length (cf. [1]), should be investigated.\n\nSimilarity Based Learner (SBL) is a software system developed in our laboratory that systematically adds various procedures belonging to the SBM framework. Methods implemented so far provide many similarity functions with different parameters, include several methods of feature selection, methods that weight attributes (based on minimization of the cost function or based on searching in the quantized weight space), methods of selection of interesting prototypes in batch and on-line versions, and methods implementing partial memory of the evolving system. Many optimization channels have not yet been programmed in our software, network models are still missing, but even at this preliminary stage results are very encouraging.\n\nAcknowledgments: Support by the Polish Committee for Scientific Research, grant no. 8 T11C 006 19, is gratefully acknowledged.\n\nReferences\n[1] R.O. Duda, P.E. Hart, D.G. Stork, Pattern classification. 2nd ed., John Wiley and Sons, New York (2001)\n[2] D. Michie, D.J. Spiegelhalter, C.C. Taylor, Machine learning, neural and statistical classification.\n[10] T. Kohonen, Self-organizing maps. Springer-Verlag, Berlin Heidelberg New York (1995)\n[11] S.B. Thrun et al., The MONK's problems: a performance comparison of different learning algorithms. Carnegie Mellon University, Technical Report CMU-CS-91-197 (1991)\n[12] C.J. Merz, P.M. Murphy, UCI repository of machine learning datasets, http://www.ics.uci.edu/AI/ML/MLDBRepository.html\n[13] Y. Yoshida, Fuzzy neural expert system and its application to medical diagnosis. In: 8th International Congress on Cybernetics and Systems, New York City (1990) 54–61\n[14] W. Duch, R. Adamczak, K. Grabczewski, A new methodology of extraction, optimization and application of crisp and fuzzy logical rules. IEEE Transactions on Neural Networks 12, March 2001\n[15] S. Pal, Knowledge based fuzzy MLP for classification and rule generation. IEEE Transactions on Neural Networks 8 (1997) 1338–1350\n[16] W. Hayashi, Neural eliminators and classifiers. In: 7th International Conference on Neural Information Processing (ICONIP-2000), Dae-jong, Korea, Nov. 2000, ed. by Soo-Young Lee, 1029–1034",
"paper_id": "1806.06207",
"title": "Meta-learning: searching in the model space",
"authors": [
"Włodzisław Duch",
"Karol Grudziński"
],
"published_date": "2018-06-16",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.06207v1",
"chunk_index": 15,
"total_chunks": 16,
"char_count": 2454,
"word_count": 356,
"chunking_strategy": "semantic"
}
]