| [ |
| { |
| "chunk_id": "e32289f9-e8f8-4d21-b686-e87426380491", |
| "text": "Neel Guha (CMU, nguha@cmu.edu) and Virginia Smith (CMU, smithv@cmu.edu). June 6, 2019.\nAbstract: In many applications, the training data for a machine learning task is partitioned across multiple nodes, and aggregating this data may be infeasible due to communication, privacy, or storage constraints. Existing distributed optimization methods for learning global models in these settings typically aggregate local updates from each node in an iterative fashion. However, these approaches require many rounds of communication between nodes, and assume that updates can be synchronously shared across a connected network. In this work, we present Good-Enough Model Spaces (GEMS), a novel framework for learning a global model by carefully intersecting the sets of \"good-enough\" models across each node. Our approach utilizes minimal communication and does not require sharing of data between nodes.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 1, |
| "total_chunks": 53, |
| "char_count": 894, |
| "word_count": 129, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3c3bebb6-8ff5-4863-b035-dbb742eec0b6", |
| "text": "We present methods for learning both convex models and neural networks within this framework and discuss how small samples of held-out data can be used for post-learning fine-tuning. In experiments on image and medical datasets, our approach on average improves upon other baseline aggregation techniques such as ensembling or model averaging by as much as 15 accuracy points. There has been significant work in designing distributed optimization methods in response to challenges arising from a wide range of large-scale learning applications. These methods typically aim to train a global model by performing numerous communication rounds between distributed nodes. However, most approaches treat communication reduction as an objective, not a constraint, and seek to minimize the number of communication rounds while maintaining model performance. Few-shot aggregation methods can instead be beneficial when any of the following conditions hold:\n• Limited network infrastructure: Distributed optimization methods typically require a connected network to support the synchronized collection of numerous learning updates (i.e., gradients). Such a network can be difficult to set up and maintain, especially in settings where nodes may represent different organizational entities (e.g., a network of different hospitals) [3]. (Code available at https://github.com/neelguha/gems.)", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 2, |
| "total_chunks": 53, |
| "char_count": 1368, |
| "word_count": 190, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "1d21b7cb-d92a-47a5-9586-56ab74c1f5dd", |
| "text": "• Privacy and data ephemerality: Privacy policies or regulations like GDPR may require nodes to periodically delete the raw local data [10]. Few-shot methods enable learning an aggregate model in ephemeral settings, where a node may lose access to its raw data. Additionally, as fewer messages are sent between nodes, these methods have the potential to offer increased privacy benefits.\n• Extreme asynchrony: Even in settings where privacy is not a concern, messages from distributed nodes may be unevenly spaced and sporadically communicated over days, weeks, or even months (e.g., in the case of remote sensor networks). Few-shot methods drastically limit communication and thus reduce the wall-clock time required to learn an aggregate model.\nThroughout this paper, we reference a simple motivating example.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 3, |
| "total_chunks": 53, |
| "char_count": 811, |
| "word_count": 123, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fdf64f47-f4eb-4001-b039-ed2346752170", |
| "text": "Consider two hospitals, A and B, which each maintain private (unshareable) patient data pertinent to some disease. As A and B are geographically distant, the patients they serve sometimes exhibit different symptoms. Without sharing the raw training data, A and B would like to jointly learn a single model capable of generalizing to a wide range of patients. The prevalent learning paradigm in this setting—distributed or federated optimization—dictates that A and B share iterative model updates (e.g., gradient information) over a network. This scenario can face many of the challenges described above, as hospitals operate under strict privacy practices, and may face legal, administrative, or ethical constraints prohibiting a shared connected network for training [3, 16, 23, 28].", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 4, |
| "total_chunks": 53, |
| "char_count": 785, |
| "word_count": 118, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "468d736f-2d9a-4642-b49c-641759f2ad92", |
| "text": "As a promising alternative, we present Good-Enough Model Spaces (GEMS), a framework for learning an aggregate model over distributed nodes within a small number of communication rounds. Our framework draws inspiration from work in version space learning, an approach for characterizing the set of logical hypotheses consistent with available data [19]. Intuitively, the key idea in GEMS is to leverage the fact that many possible hypotheses may yield 'good enough' performance for a node's learning task on local data, and that considering the intersection between these sets of hypotheses (across nodes) can allow us to compute a global model quickly and easily. Despite the simplicity of the proposed approach, we find that it can significantly outperform other baselines such as ensembling or model averaging.\nWe make the following contributions in this work. First, we present a general formulation of the GEMS framework. The framework itself is highly flexible, applying generally to black-box models and heterogeneous partitions of data. Second, we offer methods for calculating the 'good-enough' model space on each node. Our methods are simple and interpretable in that each node only communicates its locally optimal model and a small amount of metadata corresponding to local performance. In the context of the above example, GEMS serves to reduce the coordination costs for A and B, enabling them to easily learn an aggregate model in a regime that does not require communicating incremental gradient updates. Finally, we empirically validate GEMS on both standard image benchmarks (MNIST and CIFAR-10) as well as a domain-specific health dataset. We consider learning convex classifiers and neural networks in standard distributed settings, as well as scenarios in which some small global held-out data may be used for fine-tuning. On average, we find that GEMS increases the accuracy of local baselines by 11 points and performs 43% as well as a non-distributed model. With additional fine-tuning, GEMS increases the accuracy of local baselines by 43 points and performs 88% as well as the non-distributed model. Our approach consistently outperforms other model aggregation baselines such as model averaging, and can either match or exceed the performance of ensemble methods with many fewer parameters.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 5, |
| "total_chunks": 53, |
| "char_count": 2314, |
| "word_count": 354, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "abae511f-2c62-4cb7-a42a-c1464e8511f9", |
| "text": "Distributed Learning. Current distributed and federated learning approaches typically rely on iterative optimization techniques to learn a global model, continually communicating updates between nodes until convergence is reached. To improve the overall runtime, a key goal in most distributed learning methods is to minimize communication for some fixed model performance. To this end, numerous methods have been proposed for communication-efficient and asynchronous distributed optimization [e.g., 6, 7, 15, 18, 21, 22, 25]. In this work, our goal is instead to maximize performance for a fixed communication budget (e.g., only one or possibly a few rounds of communication).\nOne-shot/Few-shot Methods. While simple one-shot distributed communication schemes, such as model averaging, have been explored in convex settings [1, 17, 22, 30, 31], guarantees typically rely on data being partitioned in an IID manner and over a small number of nodes relative to the total number of samples. Averaging can also perform arbitrarily poorly in non-convex settings, particularly when the local models converge to differing local optima [18, 27]. Other one-shot schemes leverage ensemble methods, where an ensemble is constructed from models trained on distinct partitions of the data [4, 17, 27]. While these ensembles can often yield good performance in terms of accuracy, a concern is that the resulting ensemble size can become quite large. In Section 4, we compare against these one-shot baselines empirically, and find that GEMS can outperform both simple averaging and ensemble methods while requiring significantly fewer parameters.\nMeta-learning, multi-task learning, and transfer learning. The goals of meta-learning, multi-task learning, and transfer learning are seemingly related, as these works aim to transfer knowledge from one learning process to others. However, in the case of transfer learning, methods are typically concerned with one-way transfer—i.e., optimizing the performance of a single target model, not jointly aggregating knowledge between multiple models [20]. In meta-learning and multi-task learning, such joint optimization is performed, but similar to traditional distributed optimization methods, it is assumed that these models can be updated in an iterative fashion, with potentially numerous rounds of communication performed throughout the training process [8, 9].", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 6, |
| "total_chunks": 53, |
| "char_count": 2394, |
| "word_count": 344, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2740c386-f34a-4664-b428-942340666054", |
| "text": "In developing GEMS, we draw inspiration from work in version space learning, an approach for characterizing the set of logical hypotheses consistent with available data [19]. Similar to [2], we observe that if each node communicates its version space to the central server, the server can return a consistent hypothesis in the intersection of all node version spaces. However, [2, 19] assume that the hypotheses of interest are consistent with the observed data, i.e., they perfectly predict the correct outcomes. Our approach significantly generalizes this notion to explore imperfect, noisy hypothesis spaces as more commonly observed in practice.\n3 Good-Enough Model Spaces (GEMS)", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 7, |
| "total_chunks": 53, |
| "char_count": 683, |
| "word_count": 103, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "bb6a060f-1186-47cf-81af-0f80314d0997", |
| "text": "As in traditional distributed learning, we assume a training set S = {(x_i, y_i)}_{i=1}^m drawn from D_{X×Y} is divided amongst K nodes in a potentially non-IID fashion. (We use 'node' to refer abstractly to distributed entities such as devices, machines, organizations, etc.) We define S_k := {(x_{k,1}, y_{k,1}), ...} as the subset of training examples belonging to node k, such that Σ_{k=1}^K |S_k| = m, and assume that a single node (e.g., a central server) can aggregate updates communicated in the network.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 8, |
| "total_chunks": 53, |
| "char_count": 489, |
| "word_count": 82, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ce90e37e-9a57-49ac-bc4d-f97737dd6ae8", |
| "text": "(Figure 1: Depiction of GEMS.) Given a model class H, our goal is to learn an aggregate model h_G ∈ H that approximates the performance of the optimal model h* ∈ H over S while limiting communication to one (or possibly a few) rounds of communication. In developing a method for model aggregation, our intuition is that the aggregate model should be at least good-enough over each node's local data, i.e., it should achieve some minimum performance for the task at hand. Thus, we can compute h_G by having each node compute and communicate a set of locally good-enough models to a central server, which learns h_G from the intersection of these sets. Formally, let Q : (H, {(x_i, y_i)}^d) → {−1, 1} denote a model evaluation function, which determines whether a given model h is good-enough over a sample of d data points {(x_i, y_i)}^d ⊆ S. In this work, we define \"good-enough\" in terms of the accuracy of the model h and a threshold ϵ:", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 9, |
| "total_chunks": 53, |
| "char_count": 904, |
| "word_count": 158, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "af8b8746-08d9-4cbc-94d3-e59dca7641ea", |
| "text": "Q(h, {(x_i, y_i)}^d) = 1 if (1/d) Σ_{i=1}^d I{h(x_i) = y_i} ≥ ϵ, and −1 otherwise. (1)\nUsing this model evaluation function, we formalize the proposed approach for model aggregation, GEMS, in Algorithm 1. In GEMS, each node k = 1, ..., K computes the set of models H_k = {h_1, ..., h_n | h_i ∈ H, Q_k(h_i, S_k) = 1} and sends this set to the central node. After collecting H_1, ..., H_K, the central node selects h_G from the intersection of the sets, ∩_k H_k. When granted access to a small sample of public data, the server can additionally use this auxiliary data to further fine-tune the selected h ∈ ∩_k H_k, an approach we discuss in Section 3.3.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 10, |
| "total_chunks": 53, |
| "char_count": 608, |
| "word_count": 113, |
| "chunking_strategy": "semantic" |
| }, |
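The accuracy-threshold evaluation function in Eq. (1) is only a few lines of code. Below is a minimal sketch, not the authors' implementation; the constant classifier `h` and the data are made up for illustration:

```python
import numpy as np

def Q(h, X, y, eps):
    """Eq. (1): return 1 if classifier h reaches accuracy >= eps
    on the given d sample points, and -1 otherwise."""
    acc = np.mean(h(X) == y)
    return 1 if acc >= eps else -1

# Hypothetical constant classifier for illustration.
X = np.zeros((4, 2))
y = np.array([1, 1, 1, 0])
h = lambda X: np.ones(len(X), dtype=int)  # always predicts class 1
print(Q(h, X, y, eps=0.5))  # accuracy 0.75 >= 0.5 -> prints 1
print(Q(h, X, y, eps=0.9))  # accuracy 0.75 < 0.9 -> prints -1
```

The same signature works for the per-node variant Q_k by passing that node's S_k as (X, y).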
| { |
| "chunk_id": "e26257e8-98f4-4f8f-9117-ed1ac8b36451", |
| "text": "Figure 1 visualizes this approach for a model class with only two weights (w_1 and w_2) and two nodes (\"red\" and \"blue\"). The 'good-enough' model space, H_k, for each node is a set of regions over the weight space (the blue regions correspond to one node and the red regions correspond to the second node). The final aggregate model, h_G, is selected from the area in which the spaces intersect. Section 4.6 empirically analyzes when intersections may not exist. In practice, finding an intersection depends on (1) how ϵ is set for each node, and (2) the complexity of the task with respect to the model. For a fixed hypothesis class H, applying Algorithm 1 requires two components: (i) a mechanism for computing H_k on every node, and (ii) a mechanism for identifying the aggregate model, h_G ∈ ∩_k H_k. In this work, we present such methods for two types of models: convex models (Section 3.1) and neural networks (Section 3.2). For convex models, we find that H_k can be approximated as an R^d-ball in the parameter space, requiring only a single round of communication between nodes to learn h_G. For neural networks, we apply Algorithm 1 to each layer in a step-wise fashion, and compute H_k as a set of independent R^d-balls corresponding to every neuron in the layer. This requires one round of communication per layer (a few rounds for the entire network).", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 11, |
| "total_chunks": 53, |
| "char_count": 1344, |
| "word_count": 231, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fb824ebb-91a6-4dc5-93c4-4fd1647ce41b", |
| "text": "Algorithm 1 GEMS Meta-Algorithm\n1: Input: S = {(x_i, y_i)}_{i=1}^m\n2: for k = 1, ..., K in parallel do\n3: Node k computes its good-enough model space, H_k, according to (1)\n4: end for\n5: Return intersection h_G ∈ ∩_k H_k\n3.1 Convex Classifiers\nConsider a convex classifier f_w(·) parameterized by a weight vector w ∈ R^d. Below, we describe the two key steps of GEMS (Algorithm 1) for such classifiers, including construction of the good-enough model spaces, H_k, and computation of the intersection, h_G. For each node k, we can compute the set of good-enough models, H_k, as an R^d-ball in the parameter space, represented as a tuple (c_k ∈ R^d, r_k ∈ R) corresponding to its center and radius. Formally, H_k = {w ∈ R^d : ||c_k − w||_2 ≤ r_k}. While this is a simple notion of a model space, we find that it works well in practice (Section 4), and has the added benefit of reducing communication costs, as each node only needs to share its center c_k and radius r_k. Fixing ϵ as our minimum acceptable performance, we want to compute H_k according to (1) such that ∀w ∈ H_k, Q(w, S_k) = 1. In other words, every model contained within the d-ball should have an accuracy greater than or equal to ϵ.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 12, |
| "total_chunks": 53, |
| "char_count": 1149, |
| "word_count": 208, |
| "chunking_strategy": "semantic" |
| }, |
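Because H_k is just a center-radius pair (c_k, r_k), membership checks and the node's entire message to the server are tiny. A minimal sketch with made-up numbers (`in_ball`, `c_k`, and `r_k` are illustrative names, not from the paper):

```python
import numpy as np

def in_ball(w, center, radius):
    """Membership in H_k = {w : ||c_k - w||_2 <= r_k}."""
    return bool(np.linalg.norm(center - w) <= radius)

# A node communicates only (c_k, r_k): its locally optimal weights
# plus one scalar, instead of raw data or streams of gradients.
c_k = np.array([1.0, -2.0, 0.5])  # stands in for w*_k
r_k = 0.3
print(in_ball(c_k + 0.1, c_k, r_k))  # ||delta|| ~ 0.17 <= 0.3 -> True
print(in_ball(c_k + 1.0, c_k, r_k))  # ||delta|| ~ 1.73 >  0.3 -> False
```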
| { |
| "chunk_id": "f5612c12-d586-458c-9ece-c0bfae500516", |
| "text": "Algorithm 2 presents the H_k construction algorithm for node k and data S_k, where w*_k = arg min_w (1/|S_k|) Σ_{i=1}^{|S_k|} ℓ(f_w(x_i), y_i), ϵ is a fixed hyperparameter, and Q(·) is a minimum accuracy threshold defined according to (1). In Algorithm 2, R_max and ∆ define the scope and stopping criteria for the binary search. Intuitively, Algorithm 2 approximates the maximum radius for an R^d-ball centered at w*_k such that all points contained within the ball correspond to 'good-enough' models (defined by Q(·)). After constructing the sets of good-enough models, H_k, GEMS computes an aggregate model in the intersection h_G ∈ ∩_k H_k. Given K nodes with individual model spaces H_k = (c_k, r_k), we can pick a point in this intersection by solving: h_G = arg min_w Σ_{k=1}^K max(0, ||c_k − w||_2 − r_k), (2) which takes a minimum value of 0 when w ∈ ∩_k H_k. In practice, we solve (2) via gradient descent. As we discuss in Section 3.3, this w can be improved by fine-tuning on a limited sample of publicly available data. In practice, we observe that this simple R^d-ball construction can also be extended to ellipsoids by letting the radius vary across different dimensions. The minor modifications required for this are discussed in Appendix A.\nAlgorithm 2 ConstructBall\n1: Input: k, f_w(·), Q(·), w*_k, S_k = {(x_{k,1}, y_{k,1}), ...}, R_max, ∆\n2: Set c_k to w*_k\n3: Initialize R_lower = 0, R_upper = R_max\n4: while R_upper − R_lower > ∆ do\n5: Set R = Avg(R_upper, R_lower)\n6: Sample w_1, ..., w_p from the surface of B_R(c_k)\n7: if Q(f_{w'}, S_k) = 1 for all w' ∈ {w_1, ..., w_p} then\n8: Set R_lower = R\n9: else\n10: Set R_upper = R\n11: end if\n12: end while\n13: Return H_k\nWe additionally consider GEMS with a simple class of neural networks, multi-layer perceptrons (MLPs).", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 13, |
| "total_chunks": 53, |
| "char_count": 1692, |
| "word_count": 306, |
| "chunking_strategy": "semantic" |
| }, |
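Equation (2) can be minimized with plain (sub)gradient descent, as the text notes. The following is a minimal sketch under that reading; `intersect_balls`, its hyperparameters, and the toy two-ball instance are illustrative, not the authors' code:

```python
import numpy as np

def intersect_balls(centers, radii, lr=0.05, steps=2000):
    """Minimize Eq. (2), sum_k max(0, ||c_k - w||_2 - r_k), by
    subgradient descent; the objective hits 0 iff w is in every ball."""
    w = np.mean(centers, axis=0)  # start from the centroid of the centers
    for _ in range(steps):
        grad = np.zeros_like(w)
        for c, r in zip(centers, radii):
            dist = np.linalg.norm(w - c)
            if dist > r:  # only violated balls contribute a subgradient
                grad += (w - c) / (dist + 1e-12)
        if not grad.any():
            return w  # already inside the intersection
        w -= lr * grad
    return w

# Two overlapping 2-D balls: any returned point is within 0.8 of both centers.
centers = np.array([[0.0, 0.0], [1.0, 0.0]])
radii = [0.8, 0.8]
w_g = intersect_balls(centers, radii)
```

If the loop exits with a nonzero objective, the balls do not intersect, which is the failure case analyzed empirically in Section 4.6.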
| { |
| "chunk_id": "b299b7aa-f704-426a-8946-8efe7ad0ae58", |
| "text": "First, we observe that the final layer of an MLP is a linear model. Hence, we can apply the method\nabove with no modification.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 14, |
| "total_chunks": 53, |
| "char_count": 126, |
| "word_count": 24, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "95ed8437-1612-4894-8d36-f60ff9e3df8f", |
| "text": "However, the input to this final layer is a set of stacked, non-linear transformations which extract features from the data. For these layers, the approach presented above faces two challenges:\n• Node-specific features: When the distribution of data is non-IID across nodes, different nodes may learn different feature extractors in lower layers.\n• Model isomorphisms: It is well known that MLPs are highly sensitive to weight initialization [11]. Two models trained on the same set of samples (with different initializations) may have equivalent behavior despite learning different weights. In particular, reordering a model's hidden neurons (within the same layer) may not alter the model's predictions, but corresponds to a different weight vector w.\nIn order to construct H_k for hidden layers, we modify the approach presented in Section 3.1, applying it to individual hidden neurons. Formally, let the ordered set [f_{w_1}(·)]_j, ..., [f_{w_L}(·)]_j correspond to the set of L hidden neurons in layer j. Here, [f_{w_l}(·)]_j = g(w_l^T z^{j−1}) denotes the function computed by the l-th neuron over the output from the previous layer, z^{j−1}, with g(·) corresponding to some non-linearity. Fixing an indexed ordering over d data points, let z^j_l = [(z^j_l)_1, ..., (z^j_l)_d] denote the vector of activations produced by [f_{w_l}(·)]_j. Similar to the model evaluation function Q in (1), we can define an alternative Q over a neuron in terms of z^{j−1} and z^j_l (the neuron's input and output):\nQ_neuron(w', {((z^{j−1})_i, (z^j_l)_i)}^d) = 1 if sqrt((1/d) Σ_{i=1}^d (f_{w'}((z^{j−1})_i) − (z^j_l)_i)^2) ≤ ϵ_j, and −1 otherwise. (3)\n(Figure 2: In GEMS, the aggregate hidden layer is composed of intersections between different neurons from node-local models.)\nBroadly, Q_neuron returns 1 if the output of f_{w'} over z^{j−1} is within ϵ_j of z^j_l, and −1 otherwise. Using this neuron-specific evaluation function, we can now apply Algorithm 2 to each neuron:\n1. Each node k learns a locally optimal model m_k, with optimal neuron weights w*_{j,l} over all j, l.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 15, |
| "total_chunks": 53, |
| "char_count": 1953, |
| "word_count": 323, |
| "chunking_strategy": "semantic" |
| }, |
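The neuron-level check in Eq. (3) compares a candidate weight vector's activations against the recorded ones. A minimal sketch, assuming g(·) is a ReLU for concreteness (the paper leaves the non-linearity generic; all names and the random data here are illustrative):

```python
import numpy as np

def q_neuron(w_cand, Z_prev, z_target, eps_j):
    """Eq. (3): accept w_cand if the RMSE between its activations over
    the previous layer's outputs Z_prev and the recorded activations
    z_target is at most eps_j. g(.) is taken to be a ReLU here."""
    z = np.maximum(Z_prev @ w_cand, 0.0)
    rmse = np.sqrt(np.mean((z - z_target) ** 2))
    return 1 if rmse <= eps_j else -1

rng = np.random.default_rng(0)
Z_prev = rng.normal(size=(16, 4))  # d=16 data points, 4 inputs to the neuron
w_star = rng.normal(size=4)        # the neuron's own (optimal) weights
z_star = np.maximum(Z_prev @ w_star, 0.0)
print(q_neuron(w_star, Z_prev, z_star, eps_j=0.1))  # RMSE is 0 -> prints 1
```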
| { |
| "chunk_id": "ea4d0f0e-14c6-4280-bb02-4c92e64fbc6b", |
| "text": "2. Fix hidden layer j = 1. Apply Algorithm 2 to each hidden neuron [f_{w_1}(·)]_j, ..., [f_{w_L}(·)]_j, with Q(·) according to (3) with hyperparameter ϵ_j. Denote the R^d-ball constructed for neuron l as H^k_{j,l}.\n3. Each node communicates its set H^k_{j,·} = [H^k_{j,1}, ..., H^k_{j,L}] to the central server, which constructs the aggregate hidden layer f^G_{j,·} such that ∀i, k, ∃i' : f^G_{j,i'} ∈ H^k_{j,i}. This is achieved by greedily applying (2) to tuples in the cartesian product H^1_{j,·} × ... × H^K_{j,·}. Neurons for which no intersection exists are included in f^G_{j,·}, thus trivially ensuring the condition above.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 16, |
| "total_chunks": 53, |
| "char_count": 568, |
| "word_count": 97, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "eb9cd588-e8e3-4e7a-8387-cf60e0c8a39e", |
| "text": "4. The server sends h^G_{j,·} to each node, which inserts h^G_{j,·} at layer j in its local model and retrains the layers above j.\nSteps (1)-(4) can be repeated until no hidden layers remain. We note that step (3) can be expensive when there are a large number of hidden neurons L and nodes K, as |H^1_{j,·} × ... × H^K_{j,·}| increases exponentially. A simplifying assumption is that if H^k_{j,i} and H^k_{j,l} are 'far' apart, then the likelihood of intersection is low. Using this intuition, we can make the method more scalable by performing k-means clustering over all neurons. In step (3), we then only look for intersections between tuples of neurons in the same cluster. Neurons for which no intersection exists are included in f^G_{j,·}. We explore this procedure empirically in Section 4.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 17, |
| "total_chunks": 53, |
| "char_count": 761, |
| "word_count": 132, |
| "chunking_strategy": "semantic" |
| }, |
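The clustering shortcut described above only tests neurons that land in the same cluster for intersection. A minimal sketch of the grouping step using a tiny hand-rolled k-means (illustrative only; the paper does not specify the implementation):

```python
import numpy as np

def cluster_neurons(centers, k, iters=25, seed=0):
    """Tiny k-means over neuron-ball centers: only neurons sharing a
    cluster label are later tested for intersection, avoiding the
    exponential cartesian product over all nodes' neurons."""
    rng = np.random.default_rng(seed)
    means = centers[rng.choice(len(centers), size=k, replace=False)].astype(float)
    labels = np.zeros(len(centers), dtype=int)
    for _ in range(iters):
        # assign each center to its nearest mean, then recompute the means
        d2 = ((centers[:, None, :] - means[None, :, :]) ** 2).sum(-1)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                means[j] = centers[labels == j].mean(axis=0)
    return labels

# Two well-separated groups of neuron centers end up in distinct clusters.
centers = np.vstack([np.zeros((3, 2)), np.full((3, 2), 10.0)])
labels = cluster_neurons(centers, k=2)
```

With K nodes of L neurons each, the candidate tuples shrink from L^K to the product of per-cluster counts, which is the scalability gain motivating this step.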
| { |
| "chunk_id": "957a83f9-d156-497c-b5d1-78d7c2cdeee0", |
| "text": "For notational clarity, we denote the number of clusters with which k-means is run as m_ϵ, in order to distinguish it from the node index k. Figure 2 displays this intersection procedure: neurons with intersecting H^k_j are denoted by the same color.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 18, |
| "total_chunks": 53, |
| "char_count": 243, |
| "word_count": 41, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "14ee13a2-8d45-4c65-a4e7-b4cf98499c4e", |
| "text": "In many contexts, a small sample of public data, S_public, may be available to the central server. For example, this may correspond to a public research dataset, or to nodes that have waived their privacy rights for some data. In these scenarios, the coordinating server can fine-tune h_G on S_public by updating the weights for a small number of epochs. In Sections 4.2 and 4.3, we find that fine-tuning is particularly useful for improving the quality of the GEMS model, h_G, compared to other baselines.\nWe now present empirical results to validate the effectiveness of GEMS. In Section 4.1, we briefly describe our experimental setup. In Section 4.2 we apply GEMS to logistic regression models, and in Section 4.3 we apply GEMS to two-layer neural networks. In Section 4.4, we demonstrate that fine-tuning is particularly beneficial for the GEMS model compared to other baselines. Finally, we discuss the effect of ϵ on model size in Section 4.5.\n4.1 Experimental Setup\nWe explore results on three datasets: MNIST [14], CIFAR-10 [13], and HAM10000 [29], a medical imaging dataset. HAM10000 (HAM) consists of images of skin lesions, and our model is tasked with distinguishing between 7 types of lesions.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 19, |
| "total_chunks": 53, |
| "char_count": 1206, |
| "word_count": 199, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "32fd765c-a61b-40e4-a4eb-ef19bba654d7", |
| "text": "Full details on all datasets can be found in Appendix B.1. We split all three datasets into disjoint train, test, and validation splits. Train and validation sets were then further partitioned across different nodes. All results are reported on the test set. To demonstrate the effectiveness of our approach in difficult heterogeneous settings, we partitioned data by label, such that all train/validation images corresponding to a particular label would be assigned to the same node. Full details on partitioning can be found in Appendix B.2. In Sections 4.2 and 4.3, we evaluate against the following single-model baselines:\n• Global: A model trained on data aggregated across all nodes. This is an unachievable ideal, as it requires nodes to share data.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 20, |
| "total_chunks": 53, |
| "char_count": 755, |
| "word_count": 120, |
| "chunking_strategy": "semantic" |
| }, |
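The label-based heterogeneous split described above can be sketched as follows (illustrative; the paper's exact assignment is given in Appendix B.2, which is not reproduced here):

```python
import numpy as np

def partition_by_label(y, num_nodes):
    """Assign all examples of each label to a single node (labels are
    dealt round-robin), yielding the non-IID split described above."""
    shards = {k: [] for k in range(num_nodes)}
    for i, lab in enumerate(np.unique(y)):
        shards[i % num_nodes].extend(np.where(y == lab)[0].tolist())
    return shards

y = np.array([0, 0, 1, 1, 2, 2, 3, 3])
shards = partition_by_label(y, num_nodes=2)
print(shards)  # node 0 holds labels {0, 2}; node 1 holds labels {1, 3}
```

No label's examples are split across nodes, which is what makes naive averaging struggle in this setting.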
| { |
| "chunk_id": "a921a02a-d0ac-4001-b4c0-18574332fb85", |
| "text": "• Local: A model trained locally on a node, with only data belonging to that node. When reporting the global performance, we average the performance of each node's local model.\n• Naive Average: A model produced by averaging the parameters of each local model.\nIn Section 4.5, we evaluate against an ensemble baseline. We create a majority-vote ensemble from node-local models, with random selection for ties.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 21, |
| "total_chunks": 53, |
| "char_count": 408, |
| "word_count": 67, |
| "chunking_strategy": "semantic" |
| }, |
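The majority-vote ensemble baseline with random tie-breaking can be sketched as (illustrative, not the authors' code):

```python
import numpy as np

def majority_vote(preds, seed=0):
    """Majority-vote ensemble over node-local models' predictions, with
    ties broken at random. preds: (n_models, n_samples) integer labels."""
    rng = np.random.default_rng(seed)
    out = []
    for col in preds.T:  # one column of votes per test example
        vals, counts = np.unique(col, return_counts=True)
        winners = vals[counts == counts.max()]
        out.append(rng.choice(winners))  # random selection among ties
    return np.array(out)

preds = np.array([[0, 1, 2],
                  [0, 1, 0],
                  [0, 2, 2]])
print(majority_vote(preds))  # clear majorities in every column: [0 1 2]
```

Note the ensemble must store all node models at prediction time, which is the parameter-count disadvantage GEMS avoids.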
| { |
| "chunk_id": "be17028e-2581-419d-abef-90a01d934b25", |
| "text": "We find that GEMS is more parameter efficient, delivering better performance with fewer parameters. For GEMS, we present two results. First, we report the accuracy of a model learned by applying either the convex or non-convex variant of Algorithm 2. Next, we also report the accuracy after fine-tuning this model on a small sample of public data (described in more detail below). Each node computes Hk over its local validation split. We report the average accuracy (and standard deviation) of all results over 5 trials. Randomness across trials stems from weight initialization; i.e., each node's local model is initialized with different weights in each trial.\n\n4.2 Convex Classifiers\n\nWe evaluate the convex variant of GEMS on logistic regression classifiers. The results for all three datasets over 5 nodes are presented in Table 1. Fine-tuning consists of updating the weights of the GEMS model for 5 epochs over a random sample of 1000 images from the aggregated validation data.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 22, |
| "total_chunks": 53, |
| "char_count": 987, |
| "word_count": 158, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "b04e5e16-27f9-4ee4-9159-dca504a6a21b", |
| "text": "Training details are provided in Appendix B.3. Results in Table 1 correspond to the ellipsoidal variant of GEMS, where the relative radii of the ellipsoid axes are set according to each parameter's Fisher information (Appendix A). GEMS comprehensively outperforms both local models and naive averaging. Fine-tuned GEMS significantly outperforms all baselines and comes relatively close to the global model. We use ϵ = 0.40 for MNIST, ϵ = 0.20 for HAM, and ϵ = 0.20 for CIFAR-10 (over 5 nodes).", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 23, |
| "total_chunks": 53, |
| "char_count": 496, |
| "word_count": 80, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "cc57c3f6-a098-4093-8df0-aad1344d20b5", |
| "text": "Complete results over additional agent partitions are described in Appendix C.1.\n\nTable 1: Convex Results (K = 5)\nDataset  Global  Local  Averaged  GEMS  GEMS Tuned\nMNIST  0.926 (0.001)  0.198 (0.010)  0.444 (0.028)  0.456 (0.020)  0.877 (0.005)\nCIFAR-10  0.597 (0.007)  0.178 (0.010)  0.154 (0.024)  0.210 (0.020)  0.499 (0.016)\nHAM  0.559 (0.002)  0.237 (0.050)  0.185 (0.002)  0.348 (0.048)  0.530 (0.009)\n\nWe evaluate the non-convex variant of GEMS on simple two-layer feedforward neural networks (Table 2).", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 24, |
| "total_chunks": 53, |
| "char_count": 489, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "13398e16-01da-4f04-8348-4e6497b60ffe", |
| "text": "The precise network configuration and training details are outlined in Appendix B.4. Fine-tuning\nconsists of updating the last layer's weights of the GEMS model for 5 epochs over a random sample of\n1000 images from the aggregated validation data. In the majority of cases, the untuned GEMS model\noutperforms the local/average baselines. Fine-tuning has a significant impact, and fine-tuned GEMS\noutperforms the local/average baselines.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 25, |
| "total_chunks": 53, |
| "char_count": 435, |
| "word_count": 64, |
| "chunking_strategy": "semantic" |
| }, |
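The last-layer fine-tuning step described above can be sketched as follows. This is a simplified illustration under stated assumptions: the paper trains with Adam, whereas this sketch uses plain mini-batch gradient descent on a softmax last layer over frozen hidden activations; `fine_tune_last_layer` and all parameter names are hypothetical.

```python
import numpy as np

def fine_tune_last_layer(features, labels, W, epochs=5, lr=0.5, batch=32):
    """Update only the final-layer weights W on a small public sample.
    features: frozen hidden activations, shape (n, h).
    labels:   integer class labels, shape (n,).
    W:        last-layer weight matrix, shape (h, k), updated in place.
    Simplified sketch: softmax regression trained by mini-batch SGD
    (the paper uses Adam with lr 0.001 for 5 epochs on ~1000 samples)."""
    n = features.shape[0]
    k = W.shape[1]
    onehot = np.eye(k)[labels]
    for _ in range(epochs):
        idx = np.random.permutation(n)
        for s in range(0, n, batch):
            b = idx[s:s + batch]
            logits = features[b] @ W
            logits -= logits.max(axis=1, keepdims=True)  # numerical stability
            p = np.exp(logits)
            p /= p.sum(axis=1, keepdims=True)
            # softmax cross-entropy gradient w.r.t. W
            W -= lr * features[b].T @ (p - onehot[b]) / len(b)
    return W
```

Freezing all layers but the last keeps the tuned model close to the aggregate GEMS solution while still adapting its decision boundary to the small public sample.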
| { |
| "chunk_id": "24faff07-7f10-4661-bc9a-8c8ae01a27be", |
| "text": "Additional results are provided in Appendix C.2.\n\nTable 2: NN Results (K = 5)\nDataset  Global  Local  Averaged  GEMS  GEMS Tuned\nMNIST  0.965 (0.001)  0.199 (0.010)  0.259 (0.039)  0.439 (0.044)  0.886 (0.007)\nCIFAR-10  0.651 (0.004)  0.183 (0.009)  0.128 (0.023)  0.223 (0.011)  0.502 (0.011)\nHAM  0.606 (0.006)  0.250 (0.048)  0.153 (0.018)  0.190 (0.002)  0.544 (0.017)\n\nFigure 3: Comparative effects of fine-tuning for GEMS vs. baselines (convex).\nFigure 4: Comparative effects of fine-tuning for GEMS vs. baselines (neural network).\n\nThe results in Tables 1 and 2 suggest that fine-tuning can have a significant effect on the GEMS model. In this section, we demonstrate that the benefit of fine-tuning is disproportionate: fine-tuned GEMS outperforms the fine-tuned baselines. We apply the same fine-tuning technique to each node's local model (fine-tuned local) and to the parameter average of all node-local models (fine-tuned average). In addition, we compare to a model trained solely on the public sample used for fine-tuning (raw). We evaluate the effect of fine-tuning as the number of public data samples (the size of the tuning set) changes. In convex settings (Figure 3), fine-tuned GEMS performs comparably to the fine-tuned average model, and both outperform the other fine-tuned baselines. In non-convex settings (Figure 4), fine-tuned GEMS consistently outperforms the fine-tuned baselines, regardless of the sample size.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 26, |
| "total_chunks": 53, |
| "char_count": 1414, |
| "word_count": 214, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "107d033d-f3b0-4450-840f-87c35ed5a6a5", |
| "text": "This suggests that the GEMS model is learning weights that are more amenable to fine-tuning, and is perhaps able to capture better representations for the overall task. Though this advantage diminishes as the tuning sample size increases, the advantage of GEMS is especially pronounced for smaller samples. With just 100 public data samples, GEMS achieves an average improvement (accuracy) of 22.6 points over raw training, 15.1 points over the fine-tuned average model, and 26.8 points over the fine-tuned local models.\n\nIn non-convex settings, GEMS provides a modular framework to trade off between model size and performance, via the hyperparameters mϵ (the number of clusters created when identifying intersections) and ϵj (the maximum output deviation allowed for hidden neurons). Intuitively, both parameters control the number of hidden neurons in the aggregate model hG. Table 3 compares settings of ϵj and mϵ on CIFAR-10 for 5 nodes against an ensemble of local node models. We observe that GEMS performance correlates with the number of hidden neurons, and that GEMS outperforms the ensemble method at all settings (despite having fewer parameters). We observe a similar trend for HAM and MNIST, presented in more detail in Appendix C.3.\n\nTable 3: Model Size Results (CIFAR-10, K = 5)\nMethod  Accuracy  # hidden neurons\nTuned GEMS (mϵ = 150, ϵj = 0.7)  0.454 (0.018)  163.40 (1.20)\nTuned GEMS (mϵ = 150, ϵj = 0.5)  0.492 (0.012)  246.20 (8.93)\nTuned GEMS (mϵ = 200, ϵj = 0.3)  0.502 (0.011)  379.60 (6.68)\nTuned GEMS (mϵ = 100, ϵj = 0.3)  0.501 (0.011)  386.00 (18.76)\nEnsemble  0.194 (0.005)  500.00 (0.0)", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 27, |
| "total_chunks": 53, |
| "char_count": 1618, |
| "word_count": 262, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "64669cbb-9305-49dd-9a07-cf6dbd929b20", |
| "text": "A natural question to ask in GEMS is how sensitive the results are to the hyperparameter ϵ. In certain cases, GEMS may fail to find an intersection between nodes, e.g., when the task is too complex for the model or ϵ is set too high. In practice, we observe that finding an intersection requires being conservative (i.e., choosing low values) when setting ϵ for each node.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 29, |
| "total_chunks": 53, |
| "char_count": 364, |
| "word_count": 64, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "162fc9d0-7b1e-4654-8d38-919a8093deab", |
| "text": "We explain this by our choice to represent Hk as an Rd ball. Although Rd balls are easy to compute and intersect, they may be fairly coarse approximations of the actual good-enough model space. This suggests that the current results may be improved even further by considering methods for computing and intersecting more complex model spaces. To illustrate node behavior at different settings of ϵ, we refer the reader to the experiments in Appendix C.4.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 30, |
| "total_chunks": 53, |
| "char_count": 455, |
| "word_count": 73, |
| "chunking_strategy": "semantic" |
| }, |
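The role of ϵ above can be made concrete with a small sketch: approximating each node's Hk as an Rd ball BR(w∗k), two balls can only share a point if the distance between their centers is at most the sum of their radii. This is an illustrative helper, not the paper's Algorithm 2, and for K > 2 pairwise overlap is only a necessary condition for a common intersection point, not a sufficient one.

```python
import numpy as np

def balls_intersect_pairwise(centers, radii):
    """Necessary condition for the R^d balls B_{R_k}(w*_k) to share a point:
    every pair of balls must overlap, i.e. the distance between their
    centers is at most the sum of their radii. Smaller radii (tighter,
    more 'conservative' good-enough spaces) make this harder to satisfy."""
    K = len(centers)
    for i in range(K):
        for j in range(i + 1, K):
            if np.linalg.norm(centers[i] - centers[j]) > radii[i] + radii[j]:
                return False
    return True
```

This also illustrates why coarse ball approximations can fail: the true good-enough sets of two nodes may intersect even when the fitted balls do not, and vice versa.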
| { |
| "chunk_id": "6a8c7b88-0224-4fa5-b719-a34ce02c34a9", |
| "text": "In summary, we introduce good-enough model spaces (GEMS), an intuitive framework for learning an aggregate model across distributed nodes within one round of communication (for convex models) or a few rounds of communication (for neural networks). Our method is both simple and effective, requiring that nodes communicate only their locally optimal models and corresponding good-enough model spaces. We achieve promising results with relatively simple approximations of the good-enough model space, on average achieving 88% of the accuracy of the ideal non-distributed model. In future work, we intend to explore more complex representations of the good-enough model space, which we believe could significantly improve performance.\n\nWe thank Tian Li, Michael Kuchnik, Anit Kumar Sahu, Otilia Stretcu, and Yoram Singer for their helpful comments. This work was supported in part by National Science Foundation grant IIS1838017, a Google Faculty Award, a Carnegie Bosch Institute Research Award, and the CONIX Research Center. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation or any other funding agency.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 31, |
| "total_chunks": 53, |
| "char_count": 1229, |
| "word_count": 180, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "059bb16d-b693-45f4-8784-c361368ae285", |
| "text": "Communication complexity of distributed convex learning and optimization. In Neural Information Processing Systems, pages 1756–1764, 2015.\nDistributed learning, communication complexity and privacy. In Conference on Learning Theory, pages 26–1, 2012.\nDistributed deep learning networks among institutions for medical imaging. Journal of the American Medical Informatics Association, 25(8):945–954, 2018.\nLearning ensembles from bites: A scalable and accurate approach. Journal of Machine Learning Research, 5(Apr):421–451, 2004.\nKeras. https://keras.io, 2015.\nLarge scale distributed deep networks. In Neural Information Processing Systems, pages 1223–1231, 2012.\nOptimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165–202, 2012.\nRegularized multi-task learning. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 109–117.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 32, |
| "total_chunks": 53, |
| "char_count": 929, |
| "word_count": 113, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "14bf7836-2c5c-4341-8ff2-24c61939bb3b", |
| "text": "Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning, pages 1126–1135, 2017.\n[10] GDPR: Right to be forgotten. https://gdpr-info.eu/issues/right-to-be-forgotten/.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 33, |
| "total_chunks": 53, |
| "char_count": 224, |
| "word_count": 24, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e7e3423d-b10a-413b-87fa-d1bb9a8a43b6", |
| "text": "Understanding the difficulty of training deep feedforward neural networks. In International Conference on Artificial Intelligence and Statistics, pages 249–256, 2010.\nOvercoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13):3521–3526, 2017.\nLearning multiple layers of features from tiny images. Technical report, Citeseer, 2009.\nGradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.\nCommunication efficient distributed machine learning with the parameter server. In Neural Information Processing Systems, pages 19–27, 2014.\nDistributed weight consolidation: A brain segmentation case study. In Neural Information Processing Systems, pages 4093–4103, 2018.\nEfficient large-scale distributed training of conditional maximum entropy models. In Neural Information Processing Systems, pages 1231–1239, 2009.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 34, |
| "total_chunks": 53, |
| "char_count": 936, |
| "word_count": 112, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "9cc91646-ffa6-4cf6-af31-50117d31b508", |
| "text": "Communication-efficient learning of deep networks from decentralized data. In International Conference on Artificial Intelligence and Statistics, 2017.\nVersion spaces: An approach to concept learning. Technical report, Stanford University, Department of Computer Science, 1978.\nA survey on transfer learning. IEEE Transactions on Knowledge and Data Engineering, 22(10):1345–1359, 2009.\nHogwild: A lock-free approach to parallelizing stochastic gradient descent. In Neural Information Processing Systems, pages 693–701, 2011.\nCommunication-efficient distributed optimization using an approximate Newton-type method. In International Conference on Machine Learning, pages 1000–1008, 2014.\nMulti-institutional deep learning modeling without sharing patient data: A feasibility study on brain tumor segmentation. In International MICCAI Brainlesion Workshop, pages 92–104.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 35, |
| "total_chunks": 53, |
| "char_count": 868, |
| "word_count": 103, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "ce42c7fa-2800-4cef-afe4-820679b603e5", |
| "text": "Very deep convolutional networks for large-scale image recognition.\nCoCoA: A general framework for communication-efficient distributed optimization. Journal of Machine Learning Research, 18:1–47, 2018.\nDropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15(1):1929–1958, 2014.\nEnsemble-compression: A new method for parallel training of deep neural networks. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 187–202.\nGoing digital: A survey on digitalization and large-scale data analytics in healthcare. Proceedings of the IEEE, 104(11):2180–2206, 2016.\nThe HAM10000 dataset: A large collection of multi-source dermatoscopic images of common pigmented skin lesions. CoRR, abs/1803.10417, 2018.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 36, |
| "total_chunks": 53, |
| "char_count": 778, |
| "word_count": 100, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f7b6500d-2756-4702-bb4c-2109e0a0c58c", |
| "text": "Communication-efficient algorithms for statistical optimization. In Neural Information Processing Systems, pages 1502–1510, 2012.\nParallelized stochastic gradient descent. In Neural Information Processing Systems, pages 2595–2603, 2010.\n\nA Non-Uniform Rd Balls", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 37, |
| "total_chunks": 53, |
| "char_count": 258, |
| "word_count": 28, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7fac59e5-1fcb-4cc5-a3bc-a671ce48d1a2", |
| "text": "Algorithm 2 approximates the good-enough model space on node k as BR(w∗k), where w∗k corresponds to the locally optimal weights on node k and R is estimated on local validation data. An Rd ball is a coarse approximation of the good-enough model space, and assumes that the space is equally sensitive to perturbations in every parameter. A reasonable alternative is to model the good-enough model space as an ellipsoid, which allows different sensitivities across parameters. Fixing axis radii r1, . . . , rd, the good-enough model space is given by:\n\nHk = { w ∈ Rd : Σi=1..d (wi − w∗ki)2 / ri2 ≤ 1 }  (4)\n\nThe larger the radius for a particular parameter, the more variation is allowed (while still remaining in Hk). Thus, ri should implicitly capture the effect of wi on the model's output (i.e., the sensitivity of wi). In practice, this can be computed from the inverse Fisher information [12]. For computed Fisher information values F1, . . . , Fd over w∗k1, . . . , w∗kd:\n\nri = max((minj Fj / Fi) · R, c · R)  (5)\n\nThis forces cR ≤ ri ≤ R, guaranteeing that the radius for the most sensitive parameter (i.e., the one with the highest Fisher information) scales by a constant factor c relative to the radius of the least sensitive parameter. In practice, Algorithm 2 can now be applied to approximate R, where in line (6) models are sampled from the surface of the ellipsoid with radii r1, . . . , rd. When all parameters wi are equally sensitive, i.e. F1 = · · · = Fd, Hk reduces to an Rd ball with radius R. For the convex results in Section 4.2, we find that Fisher-information-based ellipsoids outperform fixed-radius Rd balls, and we report ellipsoidal results. We describe preprocessing/featurization steps for our empirical results.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 38, |
| "total_chunks": 53, |
| "char_count": 1710, |
| "word_count": 299, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "03c40d0e-a2dc-487c-a13b-21eff82459dc", |
| "text": "We used the standard MNIST dataset. We featurize CIFAR-10 (train, test, and validation sets) using a pretrained ImageNet\nVGG-16 model [24] from Keras. All models are learned on these featurized images. The HAM dataset consists of 10015 images of skin lesions. Lesions are classified\nas one of seven potential types: actinic keratoses and intraepithelial carcinoma (akiec), basal cell\ncarcinoma (bcc), benign keratosis (bkl), dermatofibroma (df), melanoma (mel), melanocytic nevi\n(nv), and vascular lesions (vasc).", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 39, |
| "total_chunks": 53, |
| "char_count": 513, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f3382857-3e44-4857-b435-7676937d7148", |
| "text": "As Figure 5 shows, the original dataset is highly skewed, with almost 66% of images belonging to one class. In order to balance the dataset, we augment each class by performing a series of random transformations (rotations, width shifts, height shifts, vertical flips, and horizontal flips) via Keras [5]. We sample 2000 images from each class. We initially experimented with extracting ImageNet features (similar to our procedure for CIFAR-10). However, training a model on these extractions resulted in poor performance. We constructed our own feature extractor by training a simple convolutional network on 66% of the data and trimming the final 2 dense layers. This network contained 3 convolutional layers (32, 64, and 128 filters with 3 × 3 kernels) interspersed with 2 × 2 MaxPool layers, followed by a single hidden layer with 512 neurons.\n\nFigure 5: Distribution of classes for HAM\n\nB.2 Data Partitioning\n\nGiven K nodes, we partitioned each dataset to ensure that all images corresponding to the same class belonged to the same node. Table 4 provides an explicit breakdown of the label partitions for each of the three datasets across the different values of K we experimented with. Where possible, we partition data such that each node receives all images associated with a unique subset of the labels. An exception is made for HAM over 5 nodes, where labels 0–4 are each assigned to unique nodes, and labels 5 and 6 are uniformly divided among the 5 nodes. Each node's images are always unique: no image is replicated across multiple nodes.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 40, |
| "total_chunks": 53, |
| "char_count": 1574, |
| "word_count": 261, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "662ebcee-ed16-46ea-a4ea-5a3d3eafcc7b", |
| "text": "We divided each dataset into train, validation, and test splits. All training occurs exclusively on the\ntrain split and all results are reported for performance on the test split.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 41, |
| "total_chunks": 53, |
| "char_count": 179, |
| "word_count": 29, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "037783d7-bcc1-4d3e-b4e1-c871b5feb817", |
| "text": "We use the validation split\nto construct each node's good-enough model space. We use a train/val/test split of 50000/5000/5000\nfor MNIST and CIFAR-10. For HAM, we use a 80/10/10 percentage split (since no conventional\ntrain/test partitioning exists).", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 42, |
| "total_chunks": 53, |
| "char_count": 250, |
| "word_count": 37, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f2eca7f6-55b7-41be-ab8e-2895805a2ec5", |
| "text": "B.3 Convex Model Training\n\nOur convex model consists of a simple logistic regression classifier. We train with Adam, a learning rate of 0.001, and a batch size of 32. We terminate training when training accuracy converges.\n\nTable 4: Label Partitions\nDataset  Number of nodes (K)  Label Division\nMNIST  2  [{0, 1, 2, 3, 4}, {0, 1, 2, 3, 4}]\nMNIST  3  [{0, 1, 2}, {3, 4, 5}, {6, 7, 8, 9}]\nMNIST  5  [{0, 1}, {2, 3}, {4, 5}, {6, 7}, {8, 9}]\nCIFAR10  2  [{0, 1, 2, 3, 4}, {0, 1, 2, 3, 4}]\nCIFAR10  3  [{0, 1, 2}, {3, 4, 5}, {6, 7, 8, 9}]\nCIFAR10  5  [{0, 1}, {2, 3}, {4, 5}, {6, 7}, {8, 9}]\nHAM  2  [{0, 1, 2, 3}, {4, 5, 6}]\nHAM  3  [{0, 1}, {2, 3}, {4, 5, 6}]\nHAM  5  [{0, 5, 6}, {1, 5, 6}, {2, 5, 6}, {3, 5, 6}, {4, 5, 6}]", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 43, |
| "total_chunks": 53, |
| "char_count": 698, |
| "word_count": 154, |
| "chunking_strategy": "semantic" |
| }, |
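The by-label partitioning of Table 4 can be sketched as below. `partition_by_label` is a hypothetical helper; when label sets overlap across nodes (as for HAM with K = 5, where labels 5 and 6 are shared), this simplified version assigns each image to the first matching node rather than dividing the shared labels uniformly, as the paper does.

```python
def partition_by_label(labels, node_label_sets):
    """Assign example indices to nodes so that each node holds all images
    whose label falls in its label set (cf. Table 4).
    labels:          iterable of integer class labels, one per example.
    node_label_sets: one set of labels per node, e.g. [{0, 1}, {2, 3}].
    Returns a list of index lists, one per node; no index is duplicated."""
    nodes = [[] for _ in node_label_sets]
    for idx, y in enumerate(labels):
        for k, label_set in enumerate(node_label_sets):
            if y in label_set:
                nodes[k].append(idx)
                break  # each image goes to exactly one node
    return nodes
```

Partitioning by label in this way produces the pathological heterogeneity the experiments target: each node sees only a slice of the label space, so local models generalize poorly on their own.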
| { |
| "chunk_id": "2a5cbb0c-1a93-4224-8eb7-c360e32bcbf0", |
| "text": "B.4 Non-Convex Model Training Our non-convex model consists of a simple two layer feedforward neural network. For MNIST and\nHAM, we fix the hidden layer size to 50 neurons. For CIFAR-10, we fix the hidden layer size to\n100 neurons. We apply dropout [26] with a rate of 0.5 to the hidden layer. We train with Adam,\na learning rate of 0.001, and a batch size of 32. We terminate training when training accuracy\nconverges.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 44, |
| "total_chunks": 53, |
| "char_count": 419, |
| "word_count": 75, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3dbfae4d-d2fc-400a-9854-735a453a1112", |
| "text": "Table 5 presents results for convex logistic regression over different values of K. We use ϵ = 0.40 for MNIST, ϵ = 0.20 for HAM, and ϵ = 0.20 for CIFAR-10 (for K = 5 nodes). At each K, we compared the performance of modeling the good-enough model space as an ellipsoid or an Rd ball, and report the result corresponding to the best method. We found that using Rd balls as the good-enough model space (Hk) resulted in aggregate models almost exactly equivalent to the parameter average of all local models. In general, we observe that the performance of GEMS and the baselines decreases as K increases. However, fine-tuned GEMS performance stays relatively constant.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 45, |
| "total_chunks": 53, |
| "char_count": 667, |
| "word_count": 116, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "659390f2-2698-4145-8bdb-c40f0954b096", |
| "text": "C.2 Non-Convex Results\n\nTable 6 presents the non-convex results (two-layer neural network) for MNIST. We use ϵ = 0.7 for the final layer, and let ϵj denote the deviation allowed for hidden neurons (specified in Eq. 3).\n\nTable 5: Convex Results\nDataset  K  Global  Local  Averaged  GEMS  GEMS Tuned\nMNIST  2  0.926 (0.001)  0.481 (0.027)  0.780 (0.015)  0.780 (0.015)  0.889 (0.003)\nMNIST  3  0.926 (0.001)  0.325 (0.042)  0.705 (0.013)  0.647 (0.033)  0.878 (0.004)\nMNIST  5  0.926 (0.001)  0.198 (0.010)  0.444 (0.028)  0.456 (0.020)  0.877 (0.005)\nCIFAR-10  2  0.597 (0.006)  0.385 (0.025)  0.253 (0.027)  0.234 (0.017)  0.494 (0.009)\nCIFAR-10  3  0.597 (0.006)  0.272 (0.064)  0.203 (0.022)  0.300 (0.040)  0.495 (0.011)\nCIFAR-10  5  0.597 (0.006)  0.178 (0.010)  0.154 (0.024)  0.210 (0.020)  0.499 (0.016)\nHAM  2  0.559 (0.002)  0.344 (0.018)  0.400 (0.020)  0.353 (0.011)  0.491 (0.006)\nHAM  3  0.559 (0.002)  0.269 (0.057)  0.252 (0.043)  0.359 (0.045)  0.523 (0.009)\nHAM  5  0.559 (0.002)  0.237 (0.050)  0.185 (0.002)  0.348 (0.048)  0.530 (0.009)\n\nTable 6: MNIST Results (Neural Network)\nK  ϵj  mϵ  Global  Local  Averaged  GEMS  GEMS Tuned", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 46, |
| "total_chunks": 53, |
| "char_count": 1084, |
| "word_count": 173, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fbb4ef90-1397-409f-b1a4-fbb085dbef04", |
| "text": "2  0.01  1  0.965 (0.001)  0.492 (0.024)  0.641 (0.058)  0.766 (0.083)  0.888 (0.004)\n3  1.0  100  0.965 (0.001)  0.329 (0.043)  0.422 (0.038)  0.754 (0.024)  0.926 (0.006)\n5  1.0  100  0.965 (0.001)  0.199 (0.010)  0.259 (0.039)  0.439 (0.044)  0.886 (0.007)\n\nTable 7 presents the non-convex results for CIFAR-10. We use ϵ = 0.2 for the final layer.\n\nTable 7: CIFAR-10 Results (Neural Network)\nK  ϵj  mϵ  Global  Local  Averaged  GEMS  GEMS Tuned\n2  0.1  1.0  0.651 (0.004)  0.405 (0.019)  0.192 (0.026)  0.335 (0.041)  0.568 (0.007)\n3  0.3  150  0.651 (0.004)  0.284 (0.061)  0.163 (0.029)  0.333 (0.059)  0.538 (0.009)\n5  0.3  200  0.651 (0.004)  0.183 (0.009)  0.128 (0.023)  0.223 (0.011)  0.502 (0.011)\n\nTable 8 presents the non-convex results for HAM. We use ϵ = 0.25 for the final layer. In general, we observe that the baselines and GEMS degrade as K increases. Fine-tuning delivers a significant improvement, but is less consistent as K varies. We present full results for a comparison between ensemble methods and GEMS.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 47, |
| "total_chunks": 53, |
| "char_count": 983, |
| "word_count": 164, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2cd9ae6a-6315-4dee-a600-bf900614de1c", |
| "text": "These results illustrate the modularity of GEMS: by adjusting ϵj and mϵ, the operator can trade off performance and model size.\n\nTable 8: HAM Results (Neural Network)\nK  ϵj  mϵ  Global  Local  Averaged  GEMS  GEMS Tuned\n2  0.01  1.0  0.606 (0.002)  0.354 (0.022)  0.273 (0.032)  0.399 (0.039)  0.539 (0.008)\n3  0.07  100  0.606 (0.002)  0.271 (0.061)  0.195 (0.042)  0.269 (0.089)  0.525 (0.014)\n5  0.07  100  0.606 (0.002)  0.250 (0.048)  0.153 (0.018)  0.190 (0.002)  0.544 (0.017)", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 48, |
| "total_chunks": 53, |
| "char_count": 446, |
| "word_count": 72, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "987dc028-41fd-4bd4-b71d-63ac0b8a3b58", |
| "text": "They also demonstrate that GEMS is able to outperform ensembles, despite requiring far fewer\nparameters. For ease of clarity, we describe the model size in terms of the number of hidden neurons. For ensembles, we sum the hidden neurons across all ensemble members. All results are averaged\nover 5 trials, with standard deviations reported. Table 9: Model Size Results (MNIST, K = 5) Method Accuracy # hidden neurons Tuned GEMS (mϵ = 75, ϵj = 0.5) 0.872 (0.007) 74.00 (0.00) Tuned GEMS (mϵ = 100, ϵj = 1.0) 0.886 (0.007) 99.00 (0.00) Tuned GEMS (mϵ = 50, ϵj = 1.0) 0.862 (0.009) 49.0 (0.00) Tuned GEMS (mϵ = 75, ϵj = 1.0) 0.867 (0.008) 79.00 (0.00) Ensemble 0.210 (0.006) 250.00 (0.0) Table 10: Model Size Results (CIFAR-10, K = 5) Method Accuracy # hidden neurons Tuned GEMS (mϵ = 150, ϵj = 0.7) 0.454 (0.018) 163.40 (1.20) Tuned GEMS (mϵ = 150, ϵj = 0.5) 0.492 (0.012) 246.20 (8.93)", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 49, |
| "total_chunks": 53, |
| "char_count": 883, |
| "word_count": 159, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "390ada70-8f9f-4dd9-af20-da347808e265", |
| "text": "Tuned GEMS (mϵ = 200, ϵj = 0.3) 0.502 (0.011) 379.60 (6.68) Tuned GEMS (mϵ = 100, ϵj = 0.3) 0.501 (0.011) 386.00 (18.76) Ensemble 0.194 (0.005) 500.00 (0.0) Both mϵ and ϵj loosely control the size of the hidden layer. mϵ effectively lower bounds the number\nof hidden units in the GEMS model, and increase mϵ increases the size of the model3. Similarly, ϵj\ncontrols the amount of loss tolerated when constructing the good-enough space for each neuron. Increasing ϵj increases the likelihood of finding an intersection, thereby reducing the size of the\nhidden layer.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 50, |
| "total_chunks": 53, |
| "char_count": 564, |
| "word_count": 96, |
| "chunking_strategy": "semantic" |
| }, |
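The chunk above (and footnote 3 below) attributes the merged hidden-layer size to a k-means step whose empty clusters are dropped. As a rough, self-contained sketch of that behavior — not the paper's actual algorithm, and with all names our own — the following pools hidden-neuron weight vectors from several local models, clusters them, and keeps one centroid per non-empty cluster:

```python
import numpy as np

def kmeans_merge_neurons(neuron_weights, k, iters=50, seed=0):
    # Cluster hidden-neuron weight vectors pooled from several local models
    # and keep one centroid per non-empty cluster; the surviving centroids
    # form the hidden layer of the merged model. Empty clusters are dropped,
    # so the final layer can end up with fewer than k units.
    rng = np.random.default_rng(seed)
    W = np.asarray(neuron_weights, dtype=float)
    centers = W[rng.choice(len(W), size=k, replace=False)]
    for _ in range(iters):
        # assign each neuron to its nearest centroid
        dists = np.linalg.norm(W[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(k):
            members = W[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
    return centers[np.unique(labels)]

# two local models with four hidden neurons each, nearly aligned pairwise
neurons = np.vstack([np.eye(4), np.eye(4) + 0.05])
merged = kmeans_merge_neurons(neurons, k=3)
print(merged.shape)
```

This mirrors how mϵ (the requested cluster count) only lower-bounds the merged layer loosely: the number of units actually kept is the number of non-empty clusters.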
| { |
| "chunk_id": "57e76943-32ef-43d2-ac5b-1798f934d1ba", |
| "text": "3In some cases, k−means produces an empty cluster which contains no neurons Table 11: Model Size Results (HAM, K = 5) Method Accuracy Num hidden Tuned GEMS (mϵ = 75, ϵj = 0.07) 0.543 (0.015) 103.80 (8.18) Tuned GEMS (mϵ = 100, ϵj = 0.07) 0.544 (0.017) 161.60 (7.23) Tuned GEMS (mϵ = 50, ϵj = 0.07) 0.529 (0.013) 90.50 (3.77) Ensemble 0.245 (0.010) 250 C.4 Intersection Analysis We notice that in order for GEMS to find an intersection, we have to set ϵ conservatively.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 51, |
| "total_chunks": 53, |
| "char_count": 468, |
| "word_count": 85, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "460b45d5-c4c3-463c-b9d8-752c24f7201e", |
| "text": "We consider\nthe convex MNIST case (K = 2) with Rd ball good-enough model spaces, and perform a grid\nsearch over different values of ϵ for each node. We illustrate these results in Figure 6. The X-axis\ncorresponds to an ϵ value for node 1, and the Y-axis corresponds to an ϵ value for node 2. Markers\ndenoted by a red cross indicate that no intersection was found at the corresponding ϵ settings. Markers denoted by a filled-inn circle indicate that a model in the intersection hG was identified,\nand the shade of the circle denotes the accuracy of hG on the global test data (with no tuning). We\nidentify several trends. First, setting epsilon aggressively (i.e. at a higher value) for both nodes results in no intersection.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 52, |
| "total_chunks": 53, |
| "char_count": 724, |
| "word_count": 128, |
| "chunking_strategy": "semantic" |
| }, |
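The grid search described above can be sketched in a few lines. This is a toy stand-in, not the paper's experiment: the two ball centers and the `radius` mapping (higher, more aggressive ϵ shrinking the inscribed ball, as the next chunk explains) are illustrative assumptions, not fit to any data:

```python
import numpy as np

# Hypothetical stand-in for Algorithm 2's output: a more aggressive (higher)
# threshold ϵ yields a smaller ball inscribed in the node's good-enough space.
def radius(eps):
    return max(0.0, 1.0 - eps)

c1, c2 = np.array([0.0, 0.0]), np.array([1.0, 0.0])  # nodes' local optima
gap = np.linalg.norm(c2 - c1)

# grid over (ϵ for node 1, ϵ for node 2): two balls intersect
# exactly when the center gap is at most the sum of the radii
grid = [0.1, 0.3, 0.5, 0.7, 0.9]
intersects = {(e1, e2): bool(gap <= radius(e1) + radius(e2))
              for e1 in grid for e2 in grid}

print(intersects[(0.1, 0.1)], intersects[(0.9, 0.9)])
```

In this toy setting, the conservative-conservative corner of the grid finds an intersection while the aggressive-aggressive corner does not, matching the red-cross region of Figure 6.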
| { |
| "chunk_id": "dbb6a6dd-673a-4ed5-9972-11d2ec718134", |
| "text": "This is most likely explained by the coarseness of Rd balls as an approximation to the good-enough\nmodel space. Because Algorithm 2 approximates the largest Rd ball inscribed in the good-enough\nspace, higher values of ϵ will produce smaller Rd balls, reducing the likelihood of an intersection. We believe the lack of intersections at higher values of ϵ can be addressed with more sophisticated\nrepresentations of the good-enough model space, which better capture the topology of each node's\nloss surface. Second, setting epsilon aggressively for one node (and conservatively for the other) does result in\nGEMS identifying a model, albeit with poorer performance. In these cases, Algorithm 2 will learn a\n'tight' ball (i.e. small radius) for the node with the higher ϵ and a 'loose' ball for the node with the\nsmaller ϵ. Thus, the model identified by GEMS will be much closer to the local model of the node\nwith the higher value of ϵ. As a result, the performance of the GEMS model significantly degrades,\nand more closely resembles the performance of one of the local models.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 53, |
| "total_chunks": 53, |
| "char_count": 1076, |
| "word_count": 180, |
| "chunking_strategy": "semantic" |
| }, |
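The tight-ball pull described above is easy to see geometrically. The sketch below picks a point in the intersection of two Euclidean balls as a stand-in for the consensus model hG; it is an illustration of the two-ball geometry only, not the paper's Algorithm 2, and the centers and radii are invented:

```python
import numpy as np

def ball_intersection_model(c1, r1, c2, r2):
    # Return a point in the intersection of two Euclidean balls
    # (a toy stand-in for the consensus model h_G), or None when
    # the balls are disjoint and no good-enough model exists.
    c1, c2 = np.asarray(c1, float), np.asarray(c2, float)
    gap = np.linalg.norm(c2 - c1)
    if gap > r1 + r2:
        return None
    if gap == 0.0:
        return c1.copy()
    # points c1 + t*(c2 - c1) lie in both balls for t in [t_lo, t_hi];
    # take the midpoint of that feasible interval
    t_lo = max(0.0, (gap - r2) / gap)
    t_hi = min(1.0, r1 / gap)
    t = 0.5 * (t_lo + t_hi)
    return c1 + t * (c2 - c1)

# node 1 aggressive (tight ball), node 2 conservative (loose ball):
h_g = ball_intersection_model([0.0, 0.0], 0.2, [1.0, 0.0], 0.9)
print(h_g)  # lands near node 1's local model
```

With a tight ball of radius 0.2 around node 1, any point of the intersection sits within 0.2 of node 1's center, so the consensus model necessarily hugs the higher-ϵ node's local model, which is exactly the degradation the chunk describes.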
| { |
| "chunk_id": "724b3567-4df2-4881-820c-ded079683836", |
| "text": "We observe the best performance when each node's ϵ is approximately equivalent, and set relatively\nconservatively. This ensures that 1) an intersection is identified, and 2) the model learned via GEMS\ndoes not strongly favor one node. Interestingly, the performance of the GEMS model can often exceed\nϵ. This suggests inefficiencies in our approximation of the good-enough model space that could\neasily be improved with more sophisticated representations. Figure 6: The x-axis corresponds to different settings of ϵ for the first node, and the y-axis corresponds\nto different settings of ϵ for the second node. Red crosses denote values where GEMS failed to find an\nintersection. The color of the circular markers denotes the accuracy of the intersected model.", |
| "paper_id": "1805.07782", |
| "title": "Model Aggregation via Good-Enough Model Spaces", |
| "authors": [ |
| "Neel Guha", |
| "Virginia Smith" |
| ], |
| "published_date": "2018-05-20", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.07782v3", |
| "chunk_index": 54, |
| "total_chunks": 53, |
| "char_count": 760, |
| "word_count": 119, |
| "chunking_strategy": "semantic" |
| } |
| ] |