| [ |
| { |
| "chunk_id": "518c26d1-e9d7-4527-a093-1bc8762f1681", |
| "text": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication. Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, and Wojciech Samek", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 0, |
| "total_chunks": 27, |
| "char_count": 163, |
| "word_count": 19, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "67c2f58a-1b4d-432e-b95d-746bbc8635f0", |
| "text": "Currently, progressively larger deep neural networks are trained on ever-growing data corpora. As this trend is only going to intensify in the future, distributed training schemes are becoming increasingly relevant. A major issue in distributed training is the limited communication bandwidth between contributing nodes or prohibitive communication cost in general. These challenges become even more pressing as the number of computation nodes increases.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 1, |
| "total_chunks": 27, |
| "char_count": 454, |
| "word_count": 64, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "fa88a21c-a497-4b22-91f1-7c6d01e34243", |
| "text": "To counteract this development we propose sparse binary compression (SBC), a compression framework that allows for a drastic reduction of communication cost for distributed training. SBC combines existing techniques of communication delay and gradient sparsification with a novel binarization method and optimal weight update encoding to push compression gains to new limits. By doing so, our method also allows us to smoothly trade off gradient sparsity and temporal sparsity to adapt to the requirements of the learning task. Our experiments show that SBC can reduce the upstream communication on a variety of convolutional and recurrent neural network architectures by more than four orders of magnitude without significantly harming the convergence speed in terms of forward-backward passes. For instance, we can train ResNet50 on ImageNet in the same number of iterations to the baseline accuracy, using ×3531 fewer bits, or train it to a 1% lower accuracy using ×37208 fewer bits. In the latter case, the total upstream communication required is cut from 125 terabytes to 3.35 gigabytes for every participating client.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 2, |
| "total_chunks": 27, |
| "char_count": 1131, |
| "word_count": 170, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "79024c92-9e18-4464-b67c-faa79a25f41d", |
| "text": "Distributed Stochastic Gradient Descent (DSGD) is a training setting in which a number of clients jointly trains a deep learning model using stochastic gradient descent [1][2][3]. Every client holds an individual subset of the training data, used to improve the current master model. The improvement is obtained by investing computational resources to perform iterations of stochastic gradient descent (SGD).", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 3, |
| "total_chunks": 27, |
| "char_count": 416, |
| "word_count": 59, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e95adb2a-6bcd-4297-a3cb-0456aeca7ccc", |
| "text": "This local training produces a weight update ∆W in every participating client, which in regular or irregular intervals (communication rounds) is exchanged to produce a new master model. This exchange of weight-updates can be performed indirectly via a centralized server or directly in an all-reduce operation. In both cases, all clients share the same master model after every communication round (Fig. 1). In vanilla DSGD the clients have to communicate a full gradient update during every iteration. Every such update is of the same size as the full model, which can be in the range of gigabytes for modern architectures with millions of parameters [4][5]. Over the course of several hundred thousand training iterations on big datasets, the total communication for every client can easily grow to more than a petabyte. Consequently, if communication bandwidth is limited, or communication is costly, distributed deep learning can become unproductive or even infeasible.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 4, |
| "total_chunks": 27, |
| "char_count": 977, |
| "word_count": 151, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2877e60f-0305-45ec-945d-37377aac67bc", |
| "text": "DSGD is a very popular training setting with many applications. On one end of the spectrum, DSGD can be used to greatly reduce the training time of large-scale deep learning models by introducing device-level data parallelism [6][7][8][9], making use of the fact that the computation of a mini-batch gradient is perfectly parallelizable. In this setting, the clients are usually embodied by hardwired high-performance computation units (i.e. GPUs in a cluster) and every client performs one iteration of SGD per communication round. Since communication is frequent in this setting, bandwidth can be a significant bottleneck. On the other end of the spectrum, DSGD can be used for privacy-preserving collaborative learning [10][11]: since clients only ever share weight-updates, DSGD makes it possible to train a model from the combined data of all clients without any individual client having to reveal their local training data to a centralized server. In this setting the clients typically are embedded or mobile devices with low network bandwidth, intermittent network connections, and an expensive mobile data plan. In both scenarios, the communication cost between the individual training nodes is a limiting factor for the performance of the whole learning system. As a result of this, substantial research has gone into the effort of reducing the amount of communication necessary between the clients via lossy compression schemes [9][12][13][14][15][16][17][18][11][19]. For the synchronous distributed training scheme described above, the total amount of bits communicated by every client during training is given by\n\nb_total ∈ O( N_iter f × |∆W_{≠0}| (b̄_pos + b̄_val) × K ), (1)\n\nwhere the first factor N_iter f counts the communication rounds, the middle factor |∆W_{≠0}| (b̄_pos + b̄_val) counts the bits per communication, and K counts the receiving nodes.\n\n1F. Sattler is with the Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany (e-mail: felix.sattler@hhi.fraunhofer.de). 2S. Wiedemann is with the Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany (e-mail: simon.wiedemann@hhi.fraunhofer.de). 3K.-R. Müller is with the Technische Universität Berlin, 10587 Berlin, Germany, with the Max Planck Institute for Informatics, 66123 Saarbrücken, Germany, and with the Department of Brain and Cognitive Engineering, Korea University, Seoul 136-713, Korea (e-mail: klaus-robert.mueller@tu-berlin.de). 4W. Samek is with the Fraunhofer Heinrich Hertz Institute, 10587 Berlin, Germany (e-mail: wojciech.samek@hhi.fraunhofer.de).", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 5, |
| "total_chunks": 27, |
| "char_count": 2494, |
| "word_count": 363, |
| "chunking_strategy": "semantic" |
| }, |
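The multiplicative structure of equation (1) above can be sanity-checked numerically. A minimal sketch; the function name and all example numbers here are illustrative assumptions, not values from the paper:

```python
def total_bits(n_iter, f, nnz, b_pos, b_val, k):
    """Estimate in the spirit of eq. (1): bits uploaded by one client.

    n_iter * f            -> number of communication rounds
    nnz * (b_pos + b_val) -> bits per communication
    k                     -> number of receiving nodes
    """
    return n_iter * f * nnz * (b_pos + b_val) * k

# Dense baseline (assumed numbers): all 25e6 parameters at 32 value bits,
# communicated every iteration.
dense = total_bits(n_iter=900_000, f=1, nnz=25_000_000, b_pos=0, b_val=32, k=1)

# Delayed, sparsified, binarized: every 100th iteration, 0.1% non-zeros,
# ~8.4 position bits per entry and 0 value bits.
sparse = total_bits(n_iter=900_000, f=1 / 100, nnz=25_000, b_pos=8.4, b_val=0, k=1)

print(f"compression factor: x{dense / sparse:,.0f}")
```

Reducing any single factor helps, but because the factors multiply, the gains of delay, sparsification, binarization, and encoding compound.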
| { |
| "chunk_id": "527b6578-9da6-45a3-950b-ca0860580a70", |
| "text": "Fig. 1: One communication round of DSGD: a) Clients synchronize with the server. b) Clients compute a weight-update independently based on their local data. c) Clients upload their local weight-updates to the server, where they are averaged to produce the new master model. In equation (1), N_iter is the total number of training iterations (forward-backward passes) every client performs, f is the communication frequency, |∆W_{≠0}| is the sparsity of the weight-update, b̄_pos and b̄_val are the average number of bits required to communicate the position and the value of the non-zero elements respectively, and K is the number of receiving nodes (if ∆W is dense, the positions of all weights are predetermined and no position bits are required).", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 6, |
| "total_chunks": 27, |
| "char_count": 788, |
| "word_count": 128, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4a00cf9c-368d-4c1e-8b48-5adc3567b1fd", |
| "text": "Existing compression schemes only focus on reducing one or two of the multiplicative components that contribute to b_total. Using the structure of equation (1), we can group these prior approaches into three different groups: Sparsification methods restrict weight-updates to modifying only a small subset of the parameters, thus reducing |∆W_{≠0}|. Strom [16] presents an approach in which only gradients with a magnitude greater than a certain predefined threshold are sent to the server. All other gradients are aggregated into a residual. This method achieves compression rates of up to 3 orders of magnitude on an acoustic modeling task. In practice, however, it is hard to choose appropriate values for the threshold, as it may vary a lot for different architectures and even different layers. Instead of using a fixed threshold to decide which gradient entries to send, Aji et al. [17] use a fixed sparsity rate. They only communicate the fraction p of gradient entries with the biggest magnitude, while also collecting all other gradients in a residual. At a sparsity rate of p = 0.001 their method slightly degrades the convergence speed and final accuracy of the trained model. Lin et al. [18] present modifications to the work of Aji et al. which close this performance gap. These modifications include using a curriculum to slowly increase the amount of sparsity in the first few communication rounds and applying momentum factor masking to overcome the problem of gradient staleness. They report extensive results for many modern convolutional and recurrent neural network architectures on big datasets. Using a naive encoding of the sparse weight-updates, they achieve compression rates ranging from ×270 to ×600 on different architectures, without any slowdown in convergence speed or degradation of final accuracy.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 7, |
| "total_chunks": 27, |
| "char_count": 1828, |
| "word_count": 285, |
| "chunking_strategy": "semantic" |
| }, |
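The fixed-sparsity-rate scheme with a residual described above is straightforward to sketch. A toy NumPy implementation under assumed conventions (the function name is mine; the paper's own method additionally binarizes the surviving entries):

```python
import numpy as np

def sparsify_topp(update, residual, p=0.001):
    """Error-feedback sparsification: keep only the fraction p of entries
    with the largest magnitude; fold everything dropped into the residual."""
    acc = residual + update                              # residual feedback
    k = max(1, int(p * acc.size))
    thresh = np.partition(np.abs(acc).ravel(), -k)[-k]   # k-th largest |entry|
    sparse = np.where(np.abs(acc) >= thresh, acc, 0.0)
    return sparse, acc - sparse                          # (sent update, new residual)

rng = np.random.default_rng(0)
grad = rng.normal(size=1000)
sent, res = sparsify_topp(grad, np.zeros_like(grad), p=0.01)
print(np.count_nonzero(sent))   # k = 10 entries survive at p = 0.01
```

Note that `sent + res` reproduces the original gradient exactly, which is why dropped information is only delayed, never lost.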
| { |
| "chunk_id": "75486d85-4e09-4d89-a5ea-498e9273d4d3", |
| "text": "Communication delay methods try to reduce the communication frequency f. McMahan et al. [11] propose Federated Averaging to reduce the communication frequency: instead of communicating after every iteration, every client performs multiple iterations of SGD to compute a weight-update. The authors observe that this delay of communication does not significantly harm the convergence speed in terms of local iterations and report a reduction in the number of necessary communication rounds by a factor of ×10 - ×100 on different convolutional and recurrent neural network architectures. In a follow-up work, Konečný et al. [19] combine this communication delay with random sparsification and probabilistic quantization. They restrict the clients to learn random sparse weight-updates or force random sparsity on them afterwards (\"structured\" vs. \"sketched\" updates) and combine this sparsification with probabilistic quantization. While their method also combines communication delay with (random) sparsification and quantization, and achieves good compression gains for one particular CNN and LSTM model, it also causes a major drop in convergence speed and final accuracy. Dense quantization methods try to reduce the amount of value bits b̄_val. Wen et al. propose TernGrad [12], a method to stochastically quantize gradients to ternary values. This achieves a moderate compression rate of ×16, while accuracy drops noticeably on big modern architectures.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 8, |
| "total_chunks": 27, |
| "char_count": 1476, |
| "word_count": 210, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "be51deab-109b-49ba-82fd-98cb7721349b", |
| "text": "The authors prove the convergence of their method under the assumption of bounded gradients. Alistarh et al. [13] explore the trade-off between model accuracy and gradient precision. They prove information-theoretic bounds on the compression rate achievable by dense quantization and propose QSGD, a family of compression schemes with convergence guarantees. Other authors experiment with 1-bit quantization schemes: Seide et al. [14] show empirically that it is possible to quantize the weight-updates to 1 bit without harming convergence speed, if the quantization errors are accumulated. Bernstein et al. [15] propose signSGD, a distributed training scheme in which every client quantizes the gradients to binary signs and the server aggregates the gradients by means of a majority vote. In general, of course, dense quantization can only achieve a maximum compression rate of ×32.\n\nSPARSE BINARY COMPRESSION\n\nWe propose Sparse Binary Compression (cf. Figure 2) to drastically reduce the number of communicated bits in distributed training. SBC makes use of multiple techniques simultaneously1 to reduce all multiplicative components of equation (1). 1To clarify, we have put our contributions in emphasis.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 9, |
| "total_chunks": 27, |
| "char_count": 1208, |
| "word_count": 178, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0c5733ab-3d97-4427-907e-38bc91cf3db1", |
| "text": "[Fig. 2 pipeline: weight-update ∆W → delay communication (×100 - ×1000) → sparsify (×100 - ×1000) → binarize (×3) → accumulate error → encode (×1.3 - ×1.9) → binary message.] Fig. 2: Step-by-step explanation of techniques used in Sparse Binary Compression: (a) Illustrated is the traversal of the parameter space with regular DSGD (left) and Federated Averaging (right).", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 10, |
| "total_chunks": 27, |
| "char_count": 469, |
| "word_count": 74, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "80b2031c-5d77-4e4c-8bb8-720ef010f556", |
| "text": "With this form of communication delay, a bigger region of the loss surface can be traversed in the same number of communication rounds. That way, compression gains of up to ×1000 are possible. After a number of iterations, the clients communicate their locally computed weight-updates. (b) Before communication, the weight-update is first sparsified by dropping all but the fraction p of weight-updates with the highest magnitude. This achieves up to ×1000 compression gain. (c) Then the sparse weight-update is binarized for an additional compression gain of approximately ×3. (d) The compression error is accumulated in a residual. (e) Finally, we optimally encode the positions of the non-zero elements using Golomb encoding. This reduces the bit size of the compressed weight-update by up to another ×2 compared to naive encoding.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 11, |
| "total_chunks": 27, |
| "char_count": 776, |
| "word_count": 118, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "461db137-b588-4c6c-82d7-7f0af61d3c4d", |
| "text": "In the following W will refer to the entirety of neural network parameters, while W ∈ W will refer to one specific tensor of weights. Arithmetic operations on W are to be understood componentwise. Communication Delay, Fig. 2 (a): We use communication delay, proposed by [11], to introduce temporal sparsity into DSGD. Instead of communicating gradients after every local iteration, we allow the clients to compute more informative updates by performing multiple iterations of SGD. These generalized weight-updates are given by\n\n∆W_i = SGD_n(W_i, D_i) − W_i\n\nwhere SGD_n(W_i, D_i) refers to the set of weights obtained by performing n iterations of stochastic gradient descent on W_i, while sampling mini-batches from the i-th client's training data D_i. Empirical analysis by [11] suggests that communication can be delayed drastically, with only marginal degradation of accuracy.\n\nAlgorithm 1: Synchronous Distributed Stochastic Gradient Descent (DSGD)\n1 input: initial parameters W\n2 output: improved parameters W\n3 init: all clients C_i are initialized with the same parameters W_i ← W; the initial global weight-update and the residuals are set to zero: ∆W, R_i ← 0\n4 for t = 1, .., T do\n5   for i ∈ I_t ⊆ {1, .., M} in parallel do\n6     Client C_i does:\n7     • msg ← download_{S→C_i}(msg)\n8     • ∆W ← decode(msg)\n9     • W_i ← W_i + ∆W\n10    • ∆W_i ← R_i + SGD_n(W_i, D_i) − W_i\n11    • ∆W*_i ← compress(∆W_i)\n12    • R_i ← ∆W_i − ∆W*_i\n13    • msg_i ← encode(∆W*_i)\n14    • upload_{C_i→S}(msg_i)\n15  end\n16  Server S does:\n17  • gather_{C_i→S}(∆W*_i), i ∈ I_t\n18  • ∆W ← (1/|I_t|) Σ_{i∈I_t} ∆W*_i\n19  • W ← W + ∆W\n20  • broadcast_{S→C_i}(∆W), i = 1, .., M\n21 end\n22 return W\n\nAlgorithm 2: Sparse Binary Compression\n1 input: tensor ∆W, sparsity p\n2 output: sparse tensor ∆W*\n3 • val+ ← top_p%(∆W); val− ← top_p%(−∆W)\n4 • µ+ ← mean(val+); µ− ← mean(val−)\n5 if µ+ ≥ µ− then\n6   return ∆W* ← µ+ (∆W ≥ min(val+))\n7 else\n8   return ∆W* ← −µ− (∆W ≤ −min(val−))\n9 end\n\nAlgorithm 3: Golomb Position Encoding\n1 input: sparse tensor ∆W*, sparsity p\n2 output: binary message msg\n3 • I ← ∆W*[:] ≠ 0\n4 • b* ← 1 + ⌊log2(log(φ−1)/log(1−p))⌋\n5 for i = 1, .., |I| do\n6   • d ← I_i − I_{i−1}\n7   • q ← (d − 1) div 2^{b*}\n8   • r ← (d − 1) mod 2^{b*}\n9   • msg.add(1, .., 1 (q times), 0, binary_{b*}(r))\n10 end\n11 return msg", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 12, |
| "total_chunks": 27, |
| "char_count": 2143, |
| "word_count": 389, |
| "chunking_strategy": "semantic" |
| }, |
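The synchronous round of Algorithm 1 can be condensed into a few lines. A toy sketch under assumed names: `local_sgd` stands in for the n local SGD iterations on the client's data, and an identity `compress` is used for illustration (Algorithm 2 would go in its place):

```python
import numpy as np

def dsgd_round(W, clients, compress):
    """One synchronous communication round in the style of Algorithm 1."""
    updates = []
    for c in clients:
        c["W"] = W.copy()                          # synchronize with the server
        dW = c["residual"] + c["local_sgd"](c["W"]) - c["W"]
        dW_star = compress(dW)                     # lossy compression
        c["residual"] = dW - dW_star               # keep what was dropped
        updates.append(dW_star)
    return W + np.mean(updates, axis=0)            # global averaging

W = np.zeros(4)
clients = [
    {"residual": np.zeros(4), "local_sgd": lambda w: w + 0.1},
    {"residual": np.zeros(4), "local_sgd": lambda w: w + 0.2},
]
W = dsgd_round(W, clients, compress=lambda d: d)   # identity "compression"
print(W)   # the clients proposed +0.1 and +0.2, averaged to +0.15
```

With a real compressor, the per-client `residual` carries the dropped portion of the update into the next round, exactly as lines 10-12 of Algorithm 1 prescribe.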
| { |
| "chunk_id": "be28dc52-b453-4886-873f-126942d5e455", |
| "text": "For n = 1 we obtain regular DSGD. Sparse Binarization, Fig. 2 (b), (c): Following the works of [18][16][10][17], we use the magnitude of an individual weight within a weight-update as a heuristic for its importance. First, we set all but the fraction p biggest and fraction p smallest weight-updates to zero. Next, we compute the mean of all remaining positive and all remaining negative weight-updates independently. If the positive mean µ+ is bigger than the absolute negative mean µ−, we set all negative values to zero and all positive values to the positive mean, and vice versa.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 13, |
| "total_chunks": 27, |
| "char_count": 583, |
| "word_count": 98, |
| "chunking_strategy": "semantic" |
| }, |
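This sparse binarization step (Algorithm 2 in the paper) translates almost directly into NumPy; a sketch, where the example array and sparsity value are my own:

```python
import numpy as np

def sparse_binary_compress(dW, p):
    """Keep the fraction-p largest and fraction-p smallest entries, then
    replace the dominant side by its mean and zero out the other side."""
    k = max(1, int(p * dW.size))
    val_pos = np.sort(dW.ravel())[-k:]       # p largest entries of dW
    val_neg = np.sort(-dW.ravel())[-k:]      # p largest entries of -dW
    mu_pos, mu_neg = val_pos.mean(), val_neg.mean()
    if mu_pos >= mu_neg:
        return np.where(dW >= val_pos.min(), mu_pos, 0.0)
    return np.where(dW <= -val_neg.min(), -mu_neg, 0.0)

dW = np.array([0.5, -0.1, 0.3, -0.05, 0.02, -0.4])
out = sparse_binary_compress(dW, p=0.34)     # k = 2
print(out)   # entries 0.5 and 0.3 survive, both set to their mean 0.4
```

The result needs only the positions of its non-zeros plus a single mean value per tensor, which is what makes the zero-value-bit encoding of the next steps possible.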
| { |
| "chunk_id": "1558f521-5328-4a5d-9b63-1b53b2a96647", |
| "text": "The method is illustrated in figure 2 and formalized in algorithm 2. Finding the fraction p smallest and biggest values in a vector W requires O(|W| log(2p|W|)) operations, where |W| refers to the number of elements in W. As suggested in [18], we can reduce the computational cost of this sorting operation by randomly subsampling from W. However, this comes at the cost of introducing (unbiased) noise in the amount of sparsity. Luckily, in our approach communication rounds (and thus compressions) are relatively infrequent, which helps to marginalize the overhead of the sparsification. Quantizing the non-zero elements of the sparsified weight-update to the mean reduces the required value bits b̄_val from 32 to 0. This translates to a reduction in communication cost by a factor of around ×3. We can get away with averaging out the non-zero weight-updates because they are relatively homogeneous in value and because we accumulate our compression errors as described in the next paragraph. Although other methods, like TernGrad [12], also combine sparsification and quantization of the weight-updates, none of these methods work with sparsity rates as high as ours (see Table I). Residual Accumulation, Fig. 2 (d): It is well established (see [18][16][17][14]) that the convergence in sparsified DSGD can be greatly accelerated by accumulating the error that arises from only sending sparse approximations of the weight-updates. After every communication round, the residual is updated via\n\nR_τ = Σ_{t=1}^{τ} (∆W_t − ∆W*_t) = R_{τ−1} + ∆W_τ − ∆W*_τ. (2)", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 14, |
| "total_chunks": 27, |
| "char_count": 1541, |
| "word_count": 244, |
| "chunking_strategy": "semantic" |
| }, |
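The "no gradient information is lost" property of the residual update (2) can be checked directly: the sum of everything sent plus the final residual telescopes back to the sum of all raw updates. A small sketch with an assumed stand-in compressor:

```python
import numpy as np

def compress(d):
    """Stand-in compressor: keep only the single largest-magnitude entry."""
    out = np.zeros_like(d)
    i = int(np.argmax(np.abs(d)))
    out[i] = d[i]
    return out

rng = np.random.default_rng(2)
R = np.zeros(5)                      # residual, as in eq. (2)
raw_sum = np.zeros(5)                # sum of uncompressed updates
sent_sum = np.zeros(5)               # sum of transmitted updates
for _ in range(20):
    g = rng.normal(size=5)           # this round's raw weight-update
    raw_sum += g
    dW = R + g                       # fold the residual into the update
    dW_star = compress(dW)
    sent_sum += dW_star
    R = dW - dW_star                 # eq. (2): R <- R + dW - dW*
print(np.allclose(sent_sum + R, raw_sum))   # True: nothing is lost
```

Dropped entries merely arrive later (possibly "stale"), which is exactly the batch-size interpretation of [18] quoted above.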
| { |
| "chunk_id": "4a058604-80d6-4005-935b-351394128a69", |
| "text": "Error accumulation has the great benefit that no gradient information is lost (it may only become outdated or \"stale\"). In the context of pure sparsification, residual accumulation can be interpreted to be equivalent to increasing the batch size for individual parameters [18]. Moreover, we can show: Let ∆W_1, .., ∆W_T ∈ R^n be (flattened) weight-updates, computed by one client in the first T communication rounds. Let ∆W*_1, .., ∆W*_{T−1} ∈ S be the actual weight-updates transferred in the previous rounds (restricted to some subspace S) and R_τ be the content of the residual at time τ as in (2). Then the orthogonal projection\n\nv = Proj_S(R_{T−1} + ∆W_T) (3)\n\nuniquely minimizes the accumulated error\n\nerr(∆W*_T) = ‖ Σ_{t=1}^{T} (∆W_t − ∆W*_t) ‖ (4)\n\nin S. (Proof in Supplement.) That means that the residual accumulation keeps the compressed optimization path as close as possible to the optimization path taken with non-compressed weight-updates. Optimal Position Encoding, Fig. 2 (e): To communicate a set of sparse binary tensors produced by SBC, we only need to transfer the positions of the non-zero elements in the flattened tensors, along with one mean value (µ+ or µ−) per tensor. Instead of communicating the absolute non-zero positions, it is favorable to only communicate the distances between all non-zero elements. Under the simplifying assumption that the sparsity pattern is random for every weight-update, it is easy to show that these distances are geometrically distributed with success probability p equal to the sparsity rate. Therefore, as previously done by [16], we can optimally encode the distances using the Golomb code [20]. Golomb encoding reduces the average number of position bits to", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 15, |
| "total_chunks": 27, |
| "char_count": 1698, |
| "word_count": 280, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "39168273-22a1-4a7e-b2e9-fe6e5c589270", |
| "text": "b̄_pos = b* + 1 / (1 − (1 − p)^{2^{b*}}), (5)\n\nwith b* = 1 + ⌊log2(log(φ−1)/log(1−p))⌋ and φ = (√5 + 1)/2 being the golden ratio. For a sparsity rate of p = 0.01, we get b̄_pos = 8.38, which translates to ×1.9 compression compared to a naive distance encoding with 16 fixed bits. While the overhead for encoding and decoding makes it unproductive to use Golomb encoding in the situation of [16], this overhead becomes negligible in our situation due to the infrequency of weight-update exchange resulting from communication delay.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 16, |
| "total_chunks": 27, |
| "char_count": 513, |
| "word_count": 88, |
| "chunking_strategy": "semantic" |
| }, |
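The parameter choice of equation (5) and the encoding loop of Algorithm 3 translate almost line for line into code. A sketch (the helper names are mine; the expected-bits value this formula yields may differ slightly from the rounded figure quoted in the text):

```python
import math

def golomb_params(p):
    """Rice parameter b* and expected position bits per non-zero, per eq. (5)."""
    phi = (math.sqrt(5) + 1) / 2                 # golden ratio
    b_star = 1 + math.floor(math.log2(math.log(phi - 1) / math.log(1 - p)))
    b_pos = b_star + 1 / (1 - (1 - p) ** (2 ** b_star))
    return b_star, b_pos

def rice_encode(d, b_star):
    """Unary quotient + b*-bit remainder for one inter-position distance d >= 1,
    as in lines 6-9 of Algorithm 3."""
    q, r = divmod(d - 1, 2 ** b_star)
    return "1" * q + "0" + format(r, f"0{b_star}b")

b_star, b_pos = golomb_params(0.01)
print(b_star)                  # 6 for a 1% sparsity rate
print(rice_encode(5, b_star))  # "0" flag then the 6-bit remainder 000100
```

Short distances (the common case under geometric spacing) get short codewords, which is where the ~×1.9 saving over fixed 16-bit distances comes from.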
| { |
| "chunk_id": "ada404a0-a3ec-4210-91b3-7f870566a811", |
| "text": "The encoding scheme is given in algorithm 3, while the decoding scheme can be found in the supplement. Momentum Correction, Warm-up Training and Momentum Masking: Lin et al. [18] introduce multiple minor modifications to the vanilla Gradient Dropping method to improve the convergence speed. We adopt momentum masking, while momentum correction is implicit to our approach. For more details on this we refer to the supplement. Our proposed method is described in Algorithms 1, 2 and 3. Algorithm 1 describes how compression and residual accumulation can be introduced into DSGD. Algorithm 2 describes our compression method. Algorithm 3 describes the Golomb encoding. Table I compares theoretical asymptotic compression rates of different popular compression methods.\n\nTABLE I: Theoretical asymptotic compression rates for different compression methods, broken down into components. Only SBC reduces all multiplicative components of the total bit size (cf. eq. 1).\n\nMethod | Temporal Sparsity | Gradient Sparsity | Value Bits | Position Bits | Compression Rate\nBaseline | 100% | 100% | 32 | 0 | ×1\nSignSGD [15], TernGrad [12], QSGD [13], 1-bitSGD [14] | 100% | 100% | 1 - 8 | 0 | ×4 - ×32\nGradient Dropping [17], DGC [18] | 100% | 0.1% | 32 | 16 | ×666\nFederated Averaging [11] | 0.1% - 10% | 100% | 32 | 0 | ×10 - ×1000\nSparse Binary Compression | 0.1% - 10% | 0.1% - 10% | 0 | 8 - 14 | up to ×40000", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 17, |
| "total_chunks": 27, |
| "char_count": 1344, |
| "word_count": 216, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "33f74517-47f1-49b3-9cdc-106c50b69e8c", |
| "text": "TEMPORAL VS GRADIENT SPARSITY\n\nCommunication constraints can vary heavily between learning tasks and may not even be consistent throughout one distributed training session. Take, for example, a set of mobile devices jointly training a model with privacy-preserving DSGD [11][10]. Part of the day, the devices might be connected to wifi, enabling them to frequently exchange weight-updates (still with as small of a bit size as possible), while at other times an expensive or limited mobile plan might force the devices to delay their communication. Practical methods should be able to adapt to these fluctuations in communication constraints. We take a holistic view towards communication-efficient distributed deep learning by observing that communication delay and weight-update compression can be viewed as two types of sparsity that both affect the total number of parameters updated throughout training in a multiplicative way (cf. fig. 3). While compression techniques sparsify individual gradients, communication delay sparsifies the gradient information in time. Our method is unique in the sense that it allows us to smoothly trade off these two types of sparsity against one another.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 18, |
| "total_chunks": 27, |
| "char_count": 1196, |
| "word_count": 179, |
| "chunking_strategy": "semantic" |
| }, |
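The multiplicative interaction of communication delay and gradient sparsity described above can be made concrete with a small accounting helper. The parameter count and the per-entry bit costs below are illustrative assumptions, not figures from the paper.

```python
def upstream_bits(num_params, num_iters, delay, grad_sparsity,
                  value_bits, position_bits):
    """Total upstream bits one client sends during training.

    Communication delay (send every `delay` iterations) and gradient
    sparsity both scale the communicated parameter count
    multiplicatively, mirroring the total-bitsize decomposition of
    eq. 1. All concrete numbers here are illustrative assumptions.
    """
    rounds = num_iters // delay                  # temporal sparsity
    sent_per_round = num_params * grad_sparsity  # gradient sparsity
    return rounds * sent_per_round * (value_bits + position_bits)

# Two schemes with the same product of temporal and gradient sparsity
# communicate the same total number of bits:
dense_rounds  = upstream_bits(10**6, 60000, delay=1,   grad_sparsity=0.001,
                              value_bits=0, position_bits=12)
delayed       = upstream_bits(10**6, 60000, delay=100, grad_sparsity=0.1,
                              value_bits=0, position_bits=12)
assert dense_rounds == delayed
```

This equality is exactly the constant-error off-diagonal behavior that fig. 3 reports empirically: only the product of the two sparsities enters the bit count.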
| { |
| "chunk_id": "443ab186-4997-4077-ae53-4fb63722be33", |
| "text": "Figure 3 shows validation errors for ResNet32 trained on CIFAR (model specification in section IV-A) for 60000 iterations at different levels of temporal and gradient sparsity (each axis ranges from ×1 up to ×10000 sparsity; the error is color-coded in the figure). Along the off-diagonals of the matrix, the total sparsity, defined as the product of temporal and gradient sparsity, remains constant. We observe multiple things: 1.) The validation error remains more or less constant along the off-diagonals of the matrix. 2.) Federated Averaging (purple) and Gradient Dropping/DGC (yellow) are just lines in the two-dimensional space of possible compression methods. 3.) There exists a roughly triangular area of approximately constant error; optimal compression methods lie along the hypotenuse of this triangle. We find this behavior consistently across different model architectures; more examples can be found in the supplement. These results indicate that there exists a fixed communication budget in DSGD, necessary to achieve a certain accuracy.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 19, |
| "total_chunks": 27, |
| "char_count": 1143, |
| "word_count": 179, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "0d449c9a-2d87-4e34-bc81-415eac6e9810", |
| "text": "Figure 4 shows validation errors for the same ResNet32 model trained on CIFAR at different levels of total sparsity and different numbers of training iterations.\nFig. 3: Validation error for ResNet32 trained on CIFAR at different levels of temporal and gradient sparsity (the error is color-coded, brighter means lower error).\nWe observe two distinct phases during training: In the beginning (iterations 0 - 30000), when training is performed using a high learning rate, sparsified methods consistently achieve the lowest error, and temporally sparsified DSGD tends to outperform purely gradient sparsified DSGD at all sparsity levels. After the learning rate is decreased by a factor of 10 in iteration 30000, this behavior is reversed and gradient sparsification methods start to perform better than temporally sparsified methods. These results highlight that an optimal compression strategy needs to adapt temporal and gradient sparsity not only to the learning task, but also to the current learning stage. Such adaptive sparsity can be integrated seamlessly into our SBC framework.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 20, |
| "total_chunks": 27, |
| "char_count": 1126, |
| "word_count": 170, |
| "chunking_strategy": "semantic" |
| }, |
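The phase behavior described above suggests a schedule that favors temporal sparsity while the learning rate is high and shifts to gradient sparsity after decay. The threshold and the two configurations below are hypothetical illustrations of such an adaptive policy, not settings from the paper.

```python
def choose_sparsity(lr, high_lr_threshold=0.1):
    """Toy adaptive-sparsity schedule following the two training phases
    reported for ResNet32 in fig. 4: temporal sparsity early (high
    learning rate), gradient sparsity late (decayed learning rate).
    The threshold and both configurations are illustrative assumptions.
    """
    if lr >= high_lr_threshold:
        # early phase: delay communication, moderate gradient sparsity
        return {"delay": 100, "grad_sparsity": 0.01}
    # late phase: communicate every iteration, aggressive gradient sparsity
    return {"delay": 1, "grad_sparsity": 0.001}
```

Both configurations share the same total sparsity (×100000 fewer communicated parameters), so switching between them changes only which kind of sparsity carries the compression at each stage.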
| { |
| "chunk_id": "030b73ff-69c9-4f40-abe6-eea82928dd9b", |
| "text": "Fig. 4: Classification error at different levels of total sparsity, shown after 10000, 30000, 40000 and 60000 training iterations. Purple dots represent purely temporally sparsified SGD, while yellow dots represent purely gradient sparsified SGD. For hybrid methods, the color code interpolates between purple and yellow.\n\nNetworks and Datasets: We evaluate our method on commonly used convolutional and recurrent neural networks with millions of parameters, which we train on well-studied datasets that contain up to multiple millions of samples. Throughout all of our experiments, we fix the number of clients to 4 and split the training data among the clients in a balanced way (the number of training samples and their distribution is homogeneous among the clients). Image Classification: We run experiments for LeNet5-Caffe2 on MNIST [22], ResNet32 [4] on CIFAR-10 [23] and ResNet50 on ILSVRC2012 (ImageNet) [24].", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 21, |
| "total_chunks": 27, |
| "char_count": 1131, |
| "word_count": 192, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "a55bf6a4-8e64-48d3-9c8e-bba7a6cdd8f5", |
| "text": "We split the training data randomly into 4 shards of equal size, and assign one shard to every one of the 4 clients. The MNIST model is trained using the Adam optimizer [25], while the other models are trained using momentum SGD. Learning rate, weight initialization and data augmentation are as in the respective papers. Language Modeling: We experiment with multilayer sequence-to-sequence LSTM models as described in [26] on the Penn Treebank corpus (PTB) [27] and the Shakespeare dataset for next-word and next-character prediction. The PTB dataset consists of a sequence of 923000 training and 82000 validation words. We restrict the vocabulary to the 10000 most common words, plus an additional token for all words that are less frequent, and train a two-layer LSTM model with 650 hidden units (\"WordLSTM\"). The Shakespeare dataset consists of the complete works of William Shakespeare [28] concatenated to a sequence of 5072443 training and 105675 test characters. The number of different characters in the dataset is 98. We train the two-layer \"CharLSTM\" with 200 hidden units. For both datasets, we split the sequence of training symbols into four subsequences of equal length and assign every client one of these subsequences. While the models we use in our experiments do not fully achieve state-of-the-art results on the respective tasks and datasets, they are still sufficient for the purpose of evaluating our compression method, and they demonstrate that our method works well with common regularization techniques such as batch normalization [29] and dropout [30]. A complete description of models and hyperparameters can be found in the supplement. We experiment with three configurations of our method: SBC (1) uses no communication delay and a gradient sparsity of 0.1%; SBC (2) uses 10 iterations of communication delay and 1% gradient sparsity; and SBC (3) uses 100 iterations of communication delay and 1% gradient sparsity. Our choice of these points on the 2D grid of possible configurations is somewhat arbitrary.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 22, |
| "total_chunks": 27, |
| "char_count": 2036, |
| "word_count": 319, |
| "chunking_strategy": "semantic" |
| }, |
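The SBC configurations above combine residual accumulation, magnitude-based sparsification, and binarization of the surviving values (the "0-value-bit" quantization compared against Gradient Dropping in the next chunk). The following is a simplified sketch of one such compression step for a flattened gradient, under the assumption that only the dominant sign group is kept and transmitted as positions plus a single mean magnitude; it is a reading of the paper's Algorithm 2, not its reference implementation.

```python
import numpy as np

def sbc_compress(grad, residual, sparsity=0.01):
    """Sketch of a sparse binary compression step for one flat gradient.

    Keeps the largest-magnitude fraction `sparsity` of (gradient +
    accumulated residual), then replaces the surviving values of the
    dominant sign group by that group's mean magnitude, so only sparse
    positions plus one float need to be sent. Everything not sent is
    carried over in the residual. A simplified reading of Algorithm 2.
    """
    acc = grad + residual
    k = max(1, int(sparsity * acc.size))
    idx = np.argsort(np.abs(acc))[-k:]       # top-k entries by magnitude
    mask = np.zeros(acc.shape, dtype=bool)
    mask[idx] = True
    pos = acc[mask & (acc > 0)]
    neg = acc[mask & (acc < 0)]
    # keep only the sign group with larger total mass, binarized to its mean
    if pos.sum() >= -neg.sum():
        keep = mask & (acc > 0)
        value = pos.mean() if pos.size else 0.0
    else:
        keep = mask & (acc < 0)
        value = -np.abs(neg).mean()
    update = np.where(keep, value, 0.0)
    new_residual = acc - update              # residual accumulation
    return update, new_residual
```

Because the surviving entries share one value, zero value bits per entry are needed on the wire; only their positions (e.g. Golomb-coded) and the single magnitude are communicated.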
| { |
| "chunk_id": "f3efcccc-9b87-484f-a152-0575010502cb", |
| "text": "The experiments with SBC (1) serve the purpose of enabling us to directly compare our 0-value-bit quantization to the 32-value-bit Gradient Dropping (with momentum correction and momentum factor masking [18]).\n\nTABLE II: Final accuracy/perplexity and compression rate for different compression schemes.\nMethod | Baseline | Gradient Dropping [17] | Federated Averaging [11] | SBC (1) | SBC (2) | SBC (3)\nLeNet5-Caffe@MNIST Accuracy | 0.9946 | 0.994 | 0.994 | 0.994 | 0.994 | 0.991\nLeNet5-Caffe@MNIST Compression | ×1 | ×634 | ×500 | ×2071 | ×3491 | ×24935\nResNet32@CIFAR Accuracy | 0.926 | 0.927 | 0.919 | 0.923 | 0.919 | 0.922\nResNet32@CIFAR Compression | ×1 | ×604 | ×1000 | ×1530 | ×3430 | ×32300\nResNet50@ImageNet Accuracy | 0.737 | 0.739 | 0.724 | 0.735 | 0.737 | 0.728\nResNet50@ImageNet Compression | ×1 | ×601 | ×1000 | ×2569 | ×3531 | ×37208\nWordLSTM@PTB Perplexity | 89.16 | 89.39 | 88.59 | 89.32 | 88.47 | 89.31\nWordLSTM@PTB Compression | ×1 | ×665 | ×1000 | ×2521 | ×3460 | ×34905\nCharLSTM@Shakespeare Perplexity | 3.635 | 3.639 | 3.904 | 3.671 | 3.782 | 4.072\nCharLSTM@Shakespeare Compression | ×1 | ×660 | ×1000 | ×2572 | ×3958 | ×35201\n\nFootnote 2: A modified version of LeNet5 from [21] (see supplement). Table II lists compression rates and final validation accuracies achieved by different compression methods when applied to the training of neural networks on 5 different datasets.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 23, |
| "total_chunks": 27, |
| "char_count": 1172, |
| "word_count": 170, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "7ddb00b3-cdd9-4ad3-ad8e-9a7d3345be11", |
| "text": "The number of iterations (forward-backward passes) is held constant for all methods. On all benchmarks, our methods perform comparably to the baseline, while communicating significantly fewer bits. Figure 5 shows convergence speed in terms of iterations (left) and communicated bits (right), respectively, for ResNet50 trained on ImageNet. The convergence speed is only marginally affected by the different compression methods. In the first 30 epochs SBC (3) even achieves the highest accuracy, using about ×37000 less bits than the baseline. In total, SBC (3) reduces the upstream communication on this benchmark from 125 terabytes to 3.35 gigabytes for every participating client. After the learning rate is lowered in epochs 30 and 60, progress slows down for SBC (3) relative to the methods which do not use communication delay. In direct comparison, SBC (1) performs very similarly to Gradient Dropping, while using about ×4 less bits (that is, ×2569 less bits than the baseline).\n\nFig. 5: Left: Top-1 validation accuracy vs number of epochs. Right: Top-1 validation error vs number of transferred bits (log-log). Curves are shown for the Baseline, Federated Averaging, Gradient Dropping and Sparse Binary Compression (1)-(3).", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 24, |
| "total_chunks": 27, |
| "char_count": 1615, |
| "word_count": 258, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "49332f9a-4927-45ea-bd68-5ea7297008c0", |
| "text": "Epochs 30 and 60, at which the learning rate is reduced, are marked in the plot (ResNet50 trained on ImageNet). Figure 6 shows convergence speed in terms of iterations (left) and communicated bits (right), respectively, for the WordLSTM trained on PTB. While Federated Averaging and SBC (3) initially slow down convergence in terms of iterations, all models converge to approximately the same perplexity after around 60 epochs. Throughout all experiments, SBC (2) performs very similarly to SBC (1) in terms of convergence speed and final accuracy, while maintaining a compression edge of about ×1.3 - ×2.2.\n\nFig. 6: Perplexity vs number of epochs and number of transferred bits (log-log), for the Baseline, Federated Averaging, Gradient Dropping and Sparse Binary Compression (1)-(3).\n\nThe gradient information for training deep neural networks with SGD is highly redundant (see e.g. [18]). We exploit this fact to the extreme by combining 3 powerful compression strategies and are able to achieve compression gains of up to four orders of magnitude with only a slight decrease in accuracy. We show through experiments that communication delay and gradient sparsity can be viewed as two independent types of sparsity that have similar effects on the convergence speed when introduced into distributed SGD. We would like to highlight that in no case did we modify the hyperparameters of the respective baseline models to accommodate our method. This demonstrates that our method is easily applicable. Note, however, that an extensive hyperparameter search could further improve the results. Furthermore, our findings in section III indicate that", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 25, |
| "total_chunks": 27, |
| "char_count": 2011, |
| "word_count": 313, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "aa5dc00b-97b0-484d-abf5-69f7f0e4d1a7", |
| "text": "even higher compression rates are possible if we adapt both the kind of sparsity and the sparsity rate to the current training phase. It remains an interesting direction of further research to identify heuristics and theoretical insights that can exploit these fluctuations in the training statistics to guide sparsity towards optimality.\n\nThis work was supported by the Fraunhofer Society through the MPI-FhG collaboration project \"Theory & Practice for Reduced Learning Machines\". This work was also supported by the German Ministry for Education and Research as Berlin Big Data Center under Grant 01IS14013A.", |
| "paper_id": "1805.08768", |
| "title": "Sparse Binary Compression: Towards Distributed Deep Learning with minimal Communication", |
| "authors": [ |
| "Felix Sattler", |
| "Simon Wiedemann", |
| "Klaus-Robert Müller", |
| "Wojciech Samek" |
| ], |
| "published_date": "2018-05-22", |
| "primary_category": "cs.LG", |
| "arxiv_url": "http://arxiv.org/abs/1805.08768v1", |
| "chunk_index": 26, |
| "total_chunks": 27, |
| "char_count": 610, |
| "word_count": 92, |
| "chunking_strategy": "semantic" |
| } |
| ] |