| [ |
| { |
| "chunk_id": "ac3f4ca0-b6f7-4086-b51a-1be786dbe302", |
| "text": "AgileNet: Lightweight Dictionary-based Few-shot Learning\nMohammad Ghasemzadeh (UC San Diego), Fang Lin (UC San Diego & SDSU), Bita Darvish Rouhani (UC San Diego), Farinaz Koushanfar (UC San Diego), Ke Huang (SDSU)\nmghasemzadeh@ucsd.edu, fanglin@ucsd.edu, bita@ucsd.edu, farinaz@ucsd.edu, khuang@sdsu.edu\nMay 21, 2018\nAbstract", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 0, |
| "total_chunks": 23, |
| "char_count": 305, |
| "word_count": 39, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "43336fe0-e6cf-4b3b-b90f-8cd02e827b89", |
| "text": "The success of deep learning models is heavily tied to the use of massive amounts\nof labeled data and excessively long training times. With the emergence of intelligent edge applications that use these models, the critical challenge is to obtain\nthe same inference capability on a resource-constrained device while providing\nadaptability to cope with dynamic changes in the data. We propose AgileNet, a novel lightweight dictionary-based few-shot learning methodology which provides a\nreduced-complexity deep neural network for efficient execution at the edge while\nenabling low-cost updates to capture the dynamics of the new data. Evaluations on\nstate-of-the-art few-shot learning benchmarks demonstrate the superior accuracy of\nAgileNet compared to prior art. Additionally, AgileNet is the first few-shot learning approach that prevents model updates from eliminating the knowledge obtained\nfrom the primary training.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 1, |
| "total_chunks": 23, |
| "char_count": 926, |
| "word_count": 132, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1e377066-f205-412d-ab4f-dde0b0ba1024", |
| "text": "This property is ensured through the dictionaries learned\nby our novel end-to-end structured decomposition, which also reduces the memory\nfootprint and computation complexity to match the edge device constraints. Deep Neural Networks (DNNs) have achieved remarkable success in several critical application\ndomains including computer vision, speech recognition, and natural language processing. Efficiency and compactness are of growing concern since many of the applications relying on deep\nlearning models are ultimately aimed at providing intelligence on resource-constrained devices at the\nedge. The conventional cloud outsourcing approach fails to address latency, privacy, and availability\nconcerns Howard et al. [2017], Abadi et al. [2016]. This has been the catalyst for a large number\nof works building efficient DNN inference accelerators such as Lane et al. [2015], Sharma et al.\n[2016]. The training phase of DNNs incurs a larger memory footprint and computation complexity\nthan inference. Assuming training is a one-time task, after which the model can be\ndeployed on the inference accelerator platform at the edge, the major trend has been to train on the\ncloud Lane et al. [2016]. However, providing adaptability at the edge is necessary to maintain the\ndesired accuracy in dynamic environment settings. To address the above requirements, two key challenges must be tackled so that DNNs can effectively\nfit within edge devices: (i) how to reduce the memory and computation cost of the DNN model on\nthe cloud server without compromising the application performance and accuracy; (ii) how to extend\nthe space of model parameters to learn new tasks on-device without forgetting the previously learned knowledge. Learning new tasks should be performed using few data instances over few iterations to\ncomply with the stringent physical performance requirements at the edge. Conventional supervised deep learning is dependent on the availability of a massive amount of labeled\ndata; the trained models generally perform poorly when labeled data is limited. The problem of\nrapidly learning new tasks with a limited amount of labeled data is referred to as \"few-shot learning\",\nwhich has received considerable attention from the research community in recent years Li et al. [2006],\nLake et al. [2015], Hariharan and Girshick [2017]. However, many of the recent approaches solely\nconsider the model's performance on the new task and thus discard the primary\nknowledge of the older tasks. This is in contrast with the goal of providing adaptable intelligence\nat the edge, where adding to the capabilities of the model is desired without forgetting the previous\nknowledge. Neglecting the physical constraints of the edge device in terms of memory, compute\npower, and energy consumption is another drawback of many state-of-the-art few-shot learning\napproaches.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 2, |
| "total_chunks": 23, |
| "char_count": 2891, |
| "word_count": 440, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c0c7f0f9-2e78-4aca-a103-33731e5dc61f", |
| "text": "A practical few-shot learning methodology should extend the capabilities of the model\nnot only using the few available new data instances but also through lightweight updates to the model. This work proposes AgileNet, the first lightweight few-shot learning scheme that enables efficient\nand adaptable edge-device realization of DNNs. To enable AgileNet, we create a novel end-to-end\nstructured decomposition methodology for DNNs which allows low-cost model updates to capture\nthe dynamics of the new data. AgileNet not only performs lightweight and effective few-shot learning\nbut also shrinks the storage requirement and computational cost of the model to match the edge\ndevice constraints. In summary, the contributions of this work are as follows: • Proposing AgileNet, a novel dictionary-based few-shot learning approach to enable adaptability at the edge while complying with stringent resource constraints.\n• Developing a new end-to-end structured decomposition methodology which reduces the memory\nfootprint and computational complexity of the model to match edge constraints.\n• Innovating a lightweight model-updating mechanism that captures the dynamics of the new\ndata with only a few instances by leveraging the properties of the learned dictionaries.\n• Demonstrating the superior accuracy of AgileNet on standard few-shot learning benchmarks compared with state-of-the-art approaches. AgileNet is shown to preserve accuracy on old and new classes while reducing the amount\nof storage and computation. The rest of the paper is structured as follows. Section 2 provides a review of the related literature and\ndiscusses the drawbacks of the prior art.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 3, |
| "total_chunks": 23, |
| "char_count": 1681, |
| "word_count": 247, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "82ed1328-142a-4051-acc4-a968a618cbc6", |
| "text": "The global flow of AgileNet is described in Section 3. Section 4\npresents the details of the structured decomposition methodology. The few-shot learning technique is\nexplained in Section 5. Section 6 provides the experimental setting and benchmark evaluations, and is\nfollowed by conclusions in Section 7.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 4, |
| "total_chunks": 23, |
| "char_count": 298, |
| "word_count": 45, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c3ef743d-d898-4d4c-b922-41005fa18c58", |
| "text": "The key challenge of few-shot learning is to use the primary knowledge obtained through the original training\ndata to make predictions about unseen classes of data with a limited number of available samples. Following the long history of research on few-shot learning, the first work to leverage\nmodern machine learning for one-shot learning was proposed by Li et al. [2006]. In recent years,\nthe work in Lake et al. [2015] and Koch et al. [2015] established two standard benchmarks,\nOmniglot and Mini-ImageNet respectively, to compare few-shot learning approaches in terms of\naccuracy. Lake et al. [2015] leverages a Bayesian model, while the authors of Koch et al. [2015]\nutilized a Siamese network which learns pairwise similarity metrics to generalize the predictive power\nof the model to new classes. These works were followed by other pairwise similarity-based few-shot\nlearning approaches in Vinyals et al. [2016], Snell et al. [2017], Mehrotra and Dukkipati [2017]. From a different perspective, few-shot learning through combining graph-based analytics with\ndeep learning has been proposed in Garcia and Bruna [2017]. In a separate line of work, meta-learners Ravi and Larochelle [2016], Munkhdalai and Yu [2017], Mishra et al. [2017b] have been developed\nto generalize the DNN model to new related tasks. The aforementioned works have incrementally\nincreased the accuracy on few-shot learning benchmarks. However, all of these works neglect the\nmodel accuracy on old classes and can therefore degrade the predictive power of the model on previously learned tasks. Additionally, many of the aforementioned approaches incur a high computation\ncost to adapt the model and thus are not amenable to resource-constrained settings.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 5, |
| "total_chunks": 23, |
| "char_count": 1719, |
| "word_count": 261, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "dbfea1e2-d73d-44ed-bd26-d36ba392696a", |
| "text": "AgileNet preserves\nthe prior knowledge of the model on old data while outperforming all state-of-the-art approaches in\nterms of few-shot learning accuracy. Additionally, the lightweight model updates of AgileNet comply\nwith the stringent limitations of edge devices. 3 Global Flow of AgileNet Figure 1 presents the global flow of AgileNet, which involves three stages: the primary training stage,\nthe dictionary learning stage, and the few-shot learning stage. The first two stages are performed on the\ncloud, and the last stage is executed on the edge device with limited resources.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 6, |
| "total_chunks": 23, |
| "char_count": 730, |
| "word_count": 103, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5e19f3b6-58e5-446c-b6ff-8985e56a8fb1", |
| "text": "Figure 1: Global flow of AgileNet. Primary Training Stage: At this stage, the original model with the mainstream architecture is\ntrained using conventional training methodologies. Dictionary Learning Stage: The trained model and the edge constraints in terms of memory and\ncomputation resources are taken into account for transforming the model using the end-to-end structured\ndecomposition discussed in Section 4. At this stage, the trade-off between the memory/computation cost\nand the final accuracy of the model is leveraged to match the edge constraints. Few-shot Learning Stage: Finally, the AgileNet model is deployed on the edge device. Beyond its\nmemory and computational benefits, structured decomposition enables adaptability in dynamic\nsettings. The AgileNet model provides the expected inference accuracy on the desired task under\ntight resource constraints. At the same time, when encountering new classes of data, low-cost updates\non the edge device are sufficient to learn new capabilities.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 7, |
| "total_chunks": 23, |
| "char_count": 1026, |
| "word_count": 149, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0ee8db09-e053-4ee4-a7e0-c3c6b5108854", |
| "text": "4 End-to-End Structured Decomposition AgileNet performs structured decomposition on all layers using an adaptive subspace projection\nmethod built on the foundation of the column subset selection proposed in Tropp [2009], Boutsidis et al.\n[2009]. We emphasize that AgileNet is the first to leverage this technique to perform an end-to-end\ntransformation of a DNN model; however, the work in Rouhani et al. [2016] used a similar approach\nto project the input data of a DNN model into lower dimensions. 4.1 Adaptive Subspace Projection Assume an arbitrary matrix W_{m×n}. The goal of the subspace projection technique is to represent\nW_{m×n} with a coefficient matrix C_{l×n} and a basis dictionary matrix D_{m×l} such that l << n and\n|W − DC| < β, where l is the dimensionality of the ambient space after projection and β is the absolute\ntolerable error threshold for the projection. This decomposition allows us to represent a matrix W_{m×n}\nwith correlated columns using the coefficients C_{l×n} and the dictionary D_{m×l} with negligible error. To build the coefficient and dictionary matrices, adaptive subspace projection adds, at each iteration, the particular column\nof W_{m×n} that minimizes the projection error to the dictionary matrix. According\nto the desired error threshold, this technique grows the dictionary by increasing l, the number of columns of the dictionary D, until it finds a suitable lower-dimensional subspace for the data\nprojection. The dictionary can be adaptively updated as the dynamics of the original W matrix\nchange by appending new columns to it. 4.2 Layer-wise Dictionary Learning Neural network computations are dominated by matrix multiplications. At the dictionary learning stage,\nthe trained DNN weight matrix of each layer is decomposed into a dictionary matrix and a coefficient\nmatrix according to an error threshold β, which can be adjusted per layer. Next, we explain this\nstructured decomposition for fully-connected (fc) and convolution (conv) layers.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 8, |
| "total_chunks": 23, |
| "char_count": 1951, |
| "word_count": 300, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "5b3c76ad-d1c3-4852-9e15-abcc4eea1766", |
| "text": "Fully-connected layer: In a conventional fully-connected layer, the following matrix-vector multiplication is performed:\ny_{m×1} = W_{m×n} x_{n×1}, (1) where x and y are the input and output vectors, respectively. In our scheme, the weight matrix W is\ntransformed into a dictionary matrix D and a coefficient matrix C. Substituting this into Equation 1\nresults in:\ny_{m×1} = D_{m×l} C_{l×n} x_{n×1}. (2) In AgileNet, the above equation is performed by two subsequent layers. In particular, a conventional\nfully-connected layer is replaced by a tiny fully-connected layer (with weight matrix C) followed by\na transformation layer (with weight matrix D), as shown in Figure 2.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 9, |
| "total_chunks": 23, |
| "char_count": 718, |
| "word_count": 109, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "81efc61e-9db1-4c15-b297-1ec3ddc36404", |
| "text": "Figure 2: Transformation of fully-connected layer in AgileNet. Convolution Layer: For a convolution layer, we first matricize the weight tensor W. After subspace\nprojection, the dictionary D remains intact while the coefficient matrix C is reshaped into a three-dimensional tensor. The reason for this decision is to comply with a universal format for the dictionaries\nin all layers. Similar to a fully-connected layer, substituting the weight tensor W of a convolution\nlayer with the dictionary matrix D and the coefficients tensor C transforms a conventional convolution layer\n(with m output channels) into a tiny convolution layer (with l << m output channels) followed by\na transformation layer, as shown in Figure 3. For any row of D, each element is multiplied by all\nelements of the corresponding channel (of the tiny conv layer output) and the resulting channels are\nsummed up element-wise to generate one output channel. As such, the transformation layer takes an\nl-channel input and transforms it into an m-channel output using a linear combination. Figure 3: Transformation of convolution layer in AgileNet. 4.3 End-to-End Dictionary Learning At the dictionary learning stage, the weights of the trained model are initially decomposed into a dictionary\nmatrix and a coefficient matrix to comply with the edge constraints. The transformed model has the\nsame architecture of layers as the original model, but the fully-connected and convolution layers are\nreplaced by their corresponding transformation layer and tiny layer as discussed above. To compensate\nfor possible loss of accuracy as a result of the structured decomposition, the transformed model is\nfine-tuned. Note that a very tight memory/compute budget at the edge might result in a transformed\nmodel that is not inherently capable of achieving the desired accuracy. Additionally, we empirically\nobserved that the last few fully-connected layers in a DNN architecture contribute more to the final\nmodel accuracy and therefore require a smaller decomposition error threshold.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 10, |
| "total_chunks": 23, |
| "char_count": 2069, |
| "word_count": 311, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "eee0c171-93c1-4a80-84ba-c1a2ab76162f", |
| "text": "At the end of this stage, the transformed AgileNet model\ncan be readily deployed on the edge device. 5 Few-shot Learning on the Edge Device The stringent memory and energy constraints at the edge are the major challenges to on-device training of neural networks. This limitation is due to the power-hungry computations\nof the training phase as well as the excessive memory requirements of the large models used in real-world\napplications. The prohibitive memory cost of the primary training data precludes storing it on the edge\ndevice for model updating. The new data is also available in only a few instances. Moreover,\nadapting the model to new data should not degrade its performance on old classes.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 12, |
| "total_chunks": 23, |
| "char_count": 708, |
| "word_count": 117, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4b61f8c5-1dc3-4462-902b-40e07dbc988a", |
| "text": "Structured decomposition generates dictionaries that preserve the structure of the weights in each layer\nand are built such that they capture the space of the weight parameters. We leverage this property for\nupdating the model in few-shot learning scenarios: AgileNet keeps the dictionary of every layer intact\nand only fine-tunes the coefficients. A minute update to the coefficients of AgileNet should be enough\nto expand the capability of the model for inference on new data. This means that the model can be\ntuned for new data through only a small number of iterations. Additionally, since the coefficient\nmatrix (tensor) is considerably smaller than the original weight matrix (tensor), a smaller number\nof parameters need to be updated for AgileNet. In particular, the number of trainable parameters\nfor few-shot learning tasks is reduced by approximately a factor of m/l for both fully-connected and\nconvolution layers, where l << m is the number of rows (channels) in the coefficient matrix (tensor)\nand m is the number of rows (channels) in the original weight matrix (tensor). Note that, as we show\nin Section 6, our approach to few-shot learning also preserves the predictive power of the model\non the original classes. To enable on-device training under stricter compute/energy budgets, we\nintroduce an ultra-light mode which reduces parameter updates even further. Ultra-Light Few-shot Learning: This mode is designed to further limit the cost of model adaptation\nat the edge, though it might also limit the maximum achievable accuracy on the new data. In the ultra-light few-shot learning mode, all layers except the last are frozen; not only the dictionaries\nbut also the coefficients of all layers except the last remain intact. Furthermore, the coefficients of the\nlast fully-connected layer, as well as the rows of its dictionary matrix that correspond to old data classes,\nare fixed. The only parameters that are updated belong to the few rows of the dictionary matrix that\ncorrespond to the new data categories. This mode, which is depicted in Figure 4, has significantly\nfewer parameters to fine-tune and thus converges in a much smaller number of iterations.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 13, |
| "total_chunks": 23, |
| "char_count": 2163, |
| "word_count": 345, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "d71b549e-143f-46ae-9410-8a4398de85a1", |
| "text": "Figure 4: Ultra-light few-shot learning mode of AgileNet. For this example, only two rows of the\nnew dictionary matrix (for the last layer, corresponding to the new data classes) are being updated.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 14, |
| "total_chunks": 23, |
| "char_count": 318, |
| "word_count": 49, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "18f5a099-c5df-4db1-adae-00c093b4dff4", |
| "text": "6 Experiments and Evaluation Our evaluations are performed on three benchmark datasets: MNIST LeCun et al. [2010], Omniglot Lake et al. [2015], and Mini-Imagenet Vinyals et al. [2016]. An X-way Y-shot learning\nexperiment is performed as follows: we randomly sample X classes from the test dataset. From each\nselected class, we randomly choose Y data instances. We feed the corresponding X × Y labeled\nexamples to the model during the few-shot learning stage. The trained model is then tested on\ndata from the same X classes, excluding the Y examples used for few-shot learning. The top-1 average\ntest accuracy is reported over different random new classes and different data instances within each\nnew class. Note that for all experiments, we followed all three steps of the global flow of AgileNet.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 15, |
| "total_chunks": 23, |
| "char_count": 793, |
| "word_count": 131, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a5e5a4ec-49aa-470e-917f-394f82074e7f", |
| "text": "MNIST: This dataset of handwritten digits 0 to 9 consists of 60,000 examples in the training set and\n10,000 examples in the test set, each of size 28 × 28. For the few-shot learning experiments, we randomly\nchose 9 digits for primary training, and the remaining digit was used as the new class in the few-shot\nsetting. We used the LeNet architecture, which has two convolution layers with a kernel size of 5 × 5,\nfollowed by a dropout layer and two fully-connected layers. To validate AgileNet in few-shot\nlearning scenarios, we randomly chose five samples from all ten classes and created a new training\nset for the few-shot learning stage. Since this data contains five data instances from the new class, the\nfew-shot task is 1-way 5-shot learning. We note that adding samples from old classes to the training\ndata for the few-shot learning stage serves to preserve model accuracy on old classes and prevent over-fitting\non the new class. However, we only need to store 5 samples of each old class on the edge device for\nthis purpose, which does not add significant memory overhead to this stage. Figure 5 shows the classification accuracy on the test set after the few-shot learning stage. The green\nand red lines represent AgileNet accuracy on the new and old classes, respectively. Our approach\nachieves a reasonable accuracy of 97% after only 20 iterations while preserving 98% accuracy on\nthe original 9 classes. In contrast, conventional training sacrifices the accuracy on the old classes to obtain an\naccuracy comparable to AgileNet on the new class. There are two key factors behind the success of AgileNet\nin preserving the knowledge on old classes: (i) the learned dictionaries preserve the structure of\nthe weights at each layer, and minute coefficient updates do not degrade the accuracy on the old\nclasses; (ii) our training data covers samples from both old and new classes to prevent over-fitting.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 16, |
| "total_chunks": 23, |
| "char_count": 1971, |
| "word_count": 326, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "af150e02-d5bc-4245-9926-5cc279b9b5b6", |
| "text": "Figure 5: Classification accuracy on the test set during the few-shot learning stage (1-way 5-shot) for\nAgileNet and the conventional training method. Omniglot: This benchmark dataset for few-shot learning tasks has 50 different alphabets comprising\n1,623 character classes in total. Each character class has only 20 samples. The dataset is split into the\nfirst 1,200 classes for training and the remaining 423 classes for testing, as in Vinyals et al. [2016],\nGarcia and Bruna [2017]. Images are resized to 28 × 28. To compensate for the low number of\ntraining examples per class, we appended three rotated copies (by 90, 180, and 270 degrees) of each\noriginal image to the training data. For this dataset, we used the CNN architecture proposed by Vinyals et al. [2016], in which each block\nhas a convolution layer with 64 filters of size 3 × 3, a batch-normalization layer, a 2 × 2 max-pooling\nlayer, and a leaky ReLU. Four of these blocks are stacked and followed by a final fully-connected layer. We experimented with 5-way 1-shot, 5-way 5-shot, 20-way 1-shot, and 20-way 5-shot\nscenarios. Table 1 compares the final accuracy of AgileNet with prior work on these tasks. Model | 5-Way 1-shot | 5-Way 5-shot | 20-Way 1-shot | 20-Way 5-shot\nMatching Networks Vinyals et al. [2016] | 98.1% | 98.9% | 93.8% | 98.5%\nStatistic Networks Edwards and Storkey [2016] | 98.1% | 99.5% | 93.2% | 98.1%\nRes. Pair-Wise Mehrotra and Dukkipati [2017] | - | - | 94.8% | -\nPrototypical Networks Snell et al. [2017] | 97.4% | 99.3% | 95.4% | 98.8%\nConvNet with Memory Kaiser et al. [2017] | 98.4% | 99.6% | 95.0% | 98.6%\nAgnostic Meta-learner Finn et al. [2017] | 98.7% | 99.9% | 95.8% | 98.9%\nMeta Networks Munkhdalai and Yu [2017] | 98.9% | - | 97.0% | -\nTCML Mishra et al. [2017a] | 98.96% | 99.75% | 97.64% | 99.36%\nGNN Garcia and Bruna [2017] | 99.2% | 99.7% | 97.4% | 99.0%\nAgileNet (Ours) | 99.5% | 99.9% | 94.95% | 98.9%\nTable 1: Comparison of classification accuracy after few-shot learning on the Omniglot dataset with 95%\nconfidence intervals. The best results of each scenario are marked in bold.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 17, |
| "total_chunks": 23, |
| "char_count": 2037, |
| "word_count": 339, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "614aca4c-8d02-4cb8-a059-4bcde9122c17", |
| "text": "experiments, AgileNet outperforms all prior work and for 20 −way tasks, it achieves a comparable\naccuracy.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 18, |
| "total_chunks": 23, |
| "char_count": 106, |
| "word_count": 16, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "05cfe169-5964-46a6-9ab6-a29cbbf02b8a", |
| "text": "Note that for these experiments, we are only comparing the accuracy on the new classes as\nall of the prior works in Table 1 have used this metric for comparison. As such, the training data for\nfew-shot learning stage consists of only samples from the new class. Mini-Imagenet: A more challenging benchmark for few-shot learning experiments was proposed\nby Vinyals et al. [2016] which is extracted from the original Imagenet dataset. Mini-Imagenet consists\nof 60,000 images of size 84 × 84 belonging to 100 classes. We used first 64 classes for training, 16\nclasses for validation and last 20 for test similar to Ravi and Larochelle [2016]. The CNN architecture\nused in this experiment consists of 4 convolution layers. Each convolution layer has a different\nnumber of filters (64, 96, 128, 256) with the kernel size of 3 × 3 followed by a batch normalization\nlayer, a max pooling layer, and a leaky-relu. The last two convolution layers are also followed by a\ndropout layer to avoid over-fitting. This architecture has a fully-connected layer at the end. In order to explore the space of decomposition error threshold β for different layers which determines\nthe dictionary size (and in turn, memory footprint and computation cost) as well as the model accuracy,\nwe conducted a comprehensive analysis of AgileNet for Mini-Imagenet dataset. Figure 6 presents the\ntrade-off between memory footprint, computation cost and final accuracy after few-shot learning stage\ncorresponding to different decomposition error thresholds β uniformly set for all layers. Memory\nand computation costs are compared with those of the original model in the primary training stage\nand the few-shot learning accuracies denote the absolute test accuracy on new classes. Notice that\nmemory footprint and computation cost decrease significantly as β increases.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 19, |
| "total_chunks": 23, |
| "char_count": 1833, |
| "word_count": 291, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e708a3f1-2d21-4119-9312-65373608e98a", |
| "text": "The drop in accuracy\nof AgileNet is negligible until β reaches 0.95. Then, the model accuracy drops to 40.1%. These\nresults demonstrate that the trade-off between model accuracy and memory/computation cost can be\nleveraged by adjusting the decomposition error threshold. This flexibility allows AgileNet to match\nthe edge device physical constraints while enabling the desired degree of adaptability to new data.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 20, |
| "total_chunks": 23, |
| "char_count": 412, |
| "word_count": 61, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6c4af176-1602-4bc0-85eb-32b1b72c37a9", |
| "text": "β_conv = 0.3, β_fc = 0.3 β_conv = 0.5, β_fc = 0.5 β_conv = 0.7, β_fc = 0.7 β_conv = 0.9, β_fc = 0.9 β_conv = 0.95, β_fc = 0.95\n100 % 96.9 100\n80 77.9 75.4 73.8 72.9 71.3 69.2\n50 47.9 46.5 40.1\n20 17 16.8\n8.6 8.5 10\nMemory Footprint Computation Cost Few-shot Accuracy Figure 6: Comparison of memory footprint, computation cost and final few-shot learning accuracy\n(5-way 5-shot task) of AgileNet with different decomposition error thresholds. To further understand the impact of decomposition error threshold on convolution layers and fullyconnected layers, we varied β for the fully-connected layer in this DNN architecture from 0.1 to 0.95 while keeping β for all convolution layers intact as shown in Figure 7. Changing β for the\nfully-connected layer mainly impacts the memory footprint of the model while the computation\ncost is dominated by the convolution layers. Similar to the previous experiment, memory footprint\ndecreases as β increases for the fully-connected layer. The drop in accuracy of AgileNet is negligible\nuntil β reaches 0.95. Then, the model accuracy drops to 52.6%. These results show that layer-wise\nexploration of the decomposition error threshold is necessary to maximize the memory/computation\nbenefits of AgileNet while achieving a desired accuracy for few-shot learning tasks.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 21, |
| "total_chunks": 23, |
| "char_count": 1305, |
| "word_count": 210, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4dba76b1-07d4-4306-9438-cf5871b3ef2a", |
| "text": "β_conv = 0.9, β_fc = 0.1 β_conv = 0.9, β_fc = 0.3 β_conv = 0.9, β_fc = 0.7\nβ_conv = 0.9, β_fc = 0.9 β_conv = 0.9, β_fc = 0.95 100 %\n80 73.9 71.9 71.4 69.2\n60 55 51.1 52.6\n40 31.8\n20 17 12.6 17.1 17 16.9 16.8 16.7\nMemory Footprint Computation Cost Few-shot Accuracy Figure 7: Comparison of memory footprint, computation cost and few-shot learning accuracy (5-way\n5-shot task) with a varying decomposition error threshold only for the last layer. The light and\nblue bars reduce memory footprint by 3.1× and 5.8×, respectively. Computation cost for all these\nconfigurations is approximately 6× less than the original model. To compare AgileNet with prior work, we used two configurations for decomposition error thresholds\nfor different layers as shown in Table 2. In both 1-shot and 5-shot scenarios, AgileNet achieves a\nhigher accuracy than all prior arts. Similar to the Omniglot benchmark, these results only consider\nthe accuracy on new classes. We emphasize that AgileNet outperforms prior works in terms of\naccuracy while reducing memory footprint by 5.8× and computation cost by 6× as shown in Figure 7. Therefore, AgileNet not only helps amenability of large DNN models to resource-constrained devices\nbut also it achieves a superior accuracy in few-shot learning scenarios compared to the state-of-the-art. 5-Way\nModel\n1-shot 5-shot\nMatching Networks Vinyals et al. [2016] 43.6% 55.3%\nPrototypical Networks Snell et al. [2017] 46.61% ± 0.78% 65.77% ± 0.70%\nModel Agnostic Meta-learner Finn et al. [2017] 48.70% ± 1.84% 63.10% ± 0.92%\nMeta Networks Munkhdalai and Yu [2017] 49.21% ± 0.96% -\nM. Optimization Ravi and Larochelle [2016] 43.40% ± 0.77% 60.20% ± 0.71%\nTCML Mishra et al. 
[2017a] 55.71% ± 0.99% 68.88% ± 0.92%\nGNN Garcia and Bruna [2017] 50.33% ± 0.36% 66.41% ± 0.63%\nAgileNet (βconv = 0.9, βfc = 0.9) 48.38% ± 0.90% 69.21% ± 0.25%\nAgileNet (βconv = 0.9, βfc = 0.7) 58.23% ± 0.10% 71.39% ± 0.10% Table 2: Comparison of classification accuracy after few-shot learning on Mini-Imagenet dataset with\n95% confidence intervals. The best results of each scenario are marked in bold.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 22, |
| "total_chunks": 23, |
| "char_count": 2093, |
| "word_count": 349, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8dc5edb6-73a4-411d-b96e-2cb12fb5510d", |
| "text": "This work presents the first lightweight few-shot learning approach that beats the accuracy of state-ofthe-art approaches on standard benchmarks through only a small number of parameter updates. The\nkey enabler of AgileNet is our novel end-to-end structured decomposition methodology that replaces\nevery convolution and fully-connected layer by its tiny counterpart such that memory footprint and\ncomputational complexity of the transformed model matches the edge constraints. Our experiments\ncorroborated that the learned dictionaries of AgileNet preserve the structure of the model, enabling\nlow-cost and effective few-shot learning without degrading the model accuracy on old data classes.", |
| "paper_id": "1805.08311", |
| "title": "AgileNet: Lightweight Dictionary-based Few-shot Learning", |
| "authors": [ |
| "Mohammad Ghasemzadeh", |
| "Fang Lin", |
| "Bita Darvish Rouhani", |
| "Farinaz Koushanfar", |
| "Ke Huang" |
| ], |
| "published_date": "2018-05-21", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08311v1", |
| "chunk_index": 23, |
| "total_chunks": 23, |
| "char_count": 692, |
| "word_count": 95, |
| "chunking_strategy": "semantic" |
| } |
| ] |