Per-column statistics for this dataset:

| column | dtype | min | max |
| --- | --- | --- | --- |
| html_url | string (length) | 51 | 51 |
| comments | string (length) | 67 | 24.7k |
| title | string (length) | 6 | 280 |
| body | string (length) | 51 | 36.2k |
| comment_length | int64 (value) | 16 | 1.45k |
| text | string (length) | 190 | 38.3k |
| embeddings | list | | |
https://github.com/huggingface/datasets/issues/1675
Hi folks, thanks to some awesome work by @lhoestq and @albertvillanova you can now stream the Pile as follows: ```python # Install master branch of `datasets` pip install git+https://github.com/huggingface/datasets.git#egg=datasets[streaming] pip install zstandard from datasets import load_dataset dset = lo...
Add the 800GB Pile dataset?
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
92
Add the 800GB Pile dataset? ## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi...
[ -0.3223574161529541, -0.043692976236343384, -0.14693981409072876, 0.041198067367076874, 0.20533114671707153, 0.05679848790168762, 0.11991950869560242, 0.263314813375473, -0.04956874996423721, 0.10100370645523071, -0.2534809112548828, 0.141516774892807, -0.4532611668109894, 0.19550719857215...
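The streaming recipe in the comment above boils down to decoding one JSON-lines record at a time from a compressed shard instead of materializing the whole corpus in memory. Here is a stdlib-only sketch of that idea (gzip stands in for the zstandard compression the Pile actually ships with, and the tiny corpus written here is purely illustrative):

```python
import gzip
import json
import tempfile

def stream_jsonl_gz(path):
    """Yield one JSON record at a time from a compressed JSON-lines file."""
    with gzip.open(path, "rt", encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Build a tiny stand-in corpus to demonstrate the pattern
with tempfile.NamedTemporaryFile(suffix=".jsonl.gz", delete=False) as tmp:
    path = tmp.name
with gzip.open(path, "wt", encoding="utf-8") as f:
    for i in range(3):
        f.write(json.dumps({"text": f"document {i}"}) + "\n")

# Only the first record is ever decoded; memory stays O(1) in corpus size
first = next(stream_jsonl_gz(path))
print(first["text"])  # document 0
```

With `datasets`, `load_dataset(..., streaming=True)` wraps roughly this pattern behind an `IterableDataset`, so the same lazy, record-at-a-time behavior applies to the Pile's shards.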
https://github.com/huggingface/datasets/issues/1675
> Hi folks! Just wanted to follow up on this -- would be really nice to get the Pile on HF Datasets... unclear if it would be easy to also add partitions of the Pile subject to the original 22 datasets used, but that would be nice too! Hi @siddk thanks to a tip from @richarddwang it seems we can access some of the p...
Add the 800GB Pile dataset?
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
199
Add the 800GB Pile dataset? ## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi...
[ -0.22825710475444794, 0.18814131617546082, -0.06500166654586792, 0.12541267275810242, 0.01260141097009182, 0.25866323709487915, 0.3119368553161621, 0.3322313129901886, 0.13761363923549652, 0.027133040130138397, -0.4386696219444275, 0.09038718044757843, -0.47661828994750977, 0.3780530691146...
https://github.com/huggingface/datasets/issues/1675
Ah I just saw that @lhoestq is already thinking about specifying one or more subsets in [this PR](https://github.com/huggingface/datasets/pull/2817#issuecomment-901874049) :)
Add the 800GB Pile dataset?
## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement - **Paper:*...
21
Add the 800GB Pile dataset? ## Adding a Dataset - **Name:** The Pile - **Description:** The Pile is an 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twi...
[ -0.31402504444122314, 0.15608808398246765, -0.1577559858560562, 0.18204891681671143, -0.02771182172000408, 0.2084931582212448, 0.15764115750789642, 0.19064512848854065, 0.012680329382419586, -0.04636351764202118, -0.16548803448677063, 0.19022031128406525, -0.4964599311351776, 0.24471287429...
https://github.com/huggingface/datasets/issues/1674
Hi @koenvandenberge and @alighofrani95! The datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the library. Meanwhile, you can still load the datasets using one of the techniques described in...
dutch_social can't be loaded
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
59
dutch_social can't be loaded Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` ...
[ -0.14401307702064514, -0.12778620421886444, -0.15607580542564392, 0.29339689016342163, 0.21679945290088654, -0.1348886340856552, -0.01205778494477272, 0.09374085068702698, 0.3958030939102173, -0.08629682660102844, -0.2941346764564514, -0.00783548504114151, 0.0196758434176445, -0.0421470701...
https://github.com/huggingface/datasets/issues/1674
I just did the release :) To load it you can just update `datasets`:

```
pip install --upgrade datasets
```

and then you can load `dutch_social` with:

```python
from datasets import load_dataset

dataset = load_dataset("dutch_social")
```
dutch_social can't be loaded
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
36
dutch_social can't be loaded Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` ...
[ -0.14401307702064514, -0.12778620421886444, -0.15607580542564392, 0.29339689016342163, 0.21679945290088654, -0.1348886340856552, -0.01205778494477272, 0.09374085068702698, 0.3958030939102173, -0.08629682660102844, -0.2941346764564514, -0.00783548504114151, 0.0196758434176445, -0.0421470701...
https://github.com/huggingface/datasets/issues/1674
@lhoestq could you also shed light on the Hindi Wikipedia dataset from issue #1673? Will it also be available in the new release you made recently?
dutch_social can't be loaded
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
28
dutch_social can't be loaded Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` ...
[ -0.14401307702064514, -0.12778620421886444, -0.15607580542564392, 0.29339689016342163, 0.21679945290088654, -0.1348886340856552, -0.01205778494477272, 0.09374085068702698, 0.3958030939102173, -0.08629682660102844, -0.2941346764564514, -0.00783548504114151, 0.0196758434176445, -0.0421470701...
https://github.com/huggingface/datasets/issues/1674
Okay. Could you comment on the #1673 thread? Actually, @thomwolf had commented that if I use the datasets library from source, it would allow me to download the Hindi Wikipedia dataset, but even version 1.1.3 gave me the same issue. The details are in the #1673 thread.
dutch_social can't be loaded
Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` (base) Koens-MacBook-Pro:~ koe...
49
dutch_social can't be loaded Hi all, I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social). However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links. ``` ...
[ -0.14401307702064514, -0.12778620421886444, -0.15607580542564392, 0.29339689016342163, 0.21679945290088654, -0.1348886340856552, -0.01205778494477272, 0.09374085068702698, 0.3958030939102173, -0.08629682660102844, -0.2941346764564514, -0.00783548504114151, 0.0196758434176445, -0.0421470701...
https://github.com/huggingface/datasets/issues/1673
Currently this dataset is only available when the library is installed from source, since it was added after the last release. We pin the dataset version to the library version so that people get a reproducible dataset and processing when they pin the library. We'll see if we can provide access to newer data...
Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso...
72
Unable to Download Hindi Wikipedia Dataset I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b...
[ -0.1846335232257843, 0.03730139136314392, -0.06111946702003479, 0.2292182594537735, 0.05320924147963524, 0.11538349092006683, -0.021660998463630676, 0.34670382738113403, 0.22220812737941742, 0.02464551478624344, 0.398732990026474, 0.07446528971195221, 0.046689506620168686, 0.21727935969829...
https://github.com/huggingface/datasets/issues/1673
So for now, should I try installing the library from source and then try out the same piece of code? Will it work then, considering both versions will match?
Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso...
32
Unable to Download Hindi Wikipedia Dataset I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b...
[ -0.11161887645721436, 0.13646331429481506, -0.0870576798915863, 0.2553955316543579, 0.000856790691614151, 0.05895282328128815, -0.0200848076492548, 0.34757786989212036, 0.19664306938648224, -0.031186118721961975, 0.41823217272758484, 0.0535380020737648, 0.07384111732244492, 0.2070579528808...
https://github.com/huggingface/datasets/issues/1673
Hey, so I tried installing the library from source using the commands **git clone https://github.com/huggingface/datasets**, **cd datasets** and then **pip3 install -e .**. But I am still facing the same error that the file is not found. Please advise. The Datasets library version now is 1.1.3 by installing from sour...
Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso...
71
Unable to Download Hindi Wikipedia Dataset I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b...
[ -0.1465502232313156, 0.06893238425254822, -0.06859824061393738, 0.2491660714149475, 0.0940220057964325, 0.09816227853298187, -0.010803814977407455, 0.3002651035785675, 0.20563152432441711, 0.010552842170000076, 0.39934036135673523, 0.12490293383598328, 0.09441269934177399, 0.17127460241317...
https://github.com/huggingface/datasets/issues/1673
Looks like the Hindi wikipedia dump from 05/05/2020 is not available anymore. You can try to load a more recent version of wikipedia:

```python
from datasets import load_dataset

d = load_dataset("wikipedia", language="hi", date="20210101", split="train", beam_runner="DirectRunner")
```
Unable to Download Hindi Wikipedia Dataset
I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code both. Please help me to understand how to reso...
40
Unable to Download Hindi Wikipedia Dataset I used the Dataset Library in Python to load the wikipedia dataset with the Hindi Config 20200501.hi along with something called beam_runner='DirectRunner' and it keeps giving me the error that the file is not found. I have attached the screenshot of the error and the code b...
[ -0.11424566805362701, 0.01129157841205597, -0.08272895216941833, 0.18819206953048706, 0.05032946169376373, 0.20311057567596436, -0.02170984074473381, 0.40279945731163025, 0.2084721326828003, -0.005865968763828278, 0.29710060358047485, -0.010638866573572159, 0.10074175149202347, 0.112816467...
https://github.com/huggingface/datasets/issues/1672
Having the same issue with `datasets` 1.1.3 or 1.5.0 (both tracebacks look the same) and `kilt_wikipedia`, Ubuntu 20.04 ```py In [1]: from datasets import load_dataset ...
load_dataset hang on file_lock
I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab. Transformers: 3.3.1 Datasets: 1.0.2 Windows 10 (also tested in WSL) ``` datasets.logging.set_verbosity_debug() datasets. train_dataset = load_dataset('squad', split='train') valid_dataset = load_dataset('squad', split='validat...
234
load_dataset hang on file_lock I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab. Transformers: 3.3.1 Datasets: 1.0.2 Windows 10 (also tested in WSL) ``` datasets.logging.set_verbosity_debug() datasets. train_dataset = load_dataset('squad', split='train') valid_dataset = loa...
[ -0.3021153509616852, -0.02152104675769806, -0.08799506723880768, 0.17893624305725098, 0.5179890990257263, 0.14992579817771912, 0.5325868725776672, -0.038846369832754135, 0.01999574899673462, 0.004317812621593475, -0.2535484731197357, 0.2555573880672455, -0.010385269299149513, -0.1183215156...
https://github.com/huggingface/datasets/issues/1671
Also, a major issue for me is the format issue: even if I go through changing the whole code to use load_from_disk, then if I do d = datasets.load_from_disk("imdb") d = d["train"][:10] => the format of this is no longer in datasets format; this is different from when you call load_datasets("train[10]") could you tell m...
connection issue
Hi I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this. If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder the datasets library r...
64
connection issue Hi I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this. If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder th...
[ -0.4165681302547455, 0.23384061455726624, 0.0117214135825634, 0.3550441861152649, 0.43614062666893005, -0.18401706218719482, 0.17185881733894348, 0.2030179798603058, -0.19610527157783508, 0.011228885501623154, -0.29301130771636963, 0.1609504371881485, 0.14485138654708862, 0.295597255229949...
https://github.com/huggingface/datasets/issues/1671
> ` requests.exceptions.ConnectTimeout: HTTPSConnectionPool(host='s3.amazonaws.com', port=443): Max retries exceeded with url: /datasets.huggingface.co/datasets/datasets/glue/glue.py (Caused by ConnectTimeoutError(<urllib3.connection.HTTPSConnection object at 0x7ff6d6c60a20>, 'Connection to s3.amazonaws.com timed out....
connection issue
Hi I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this. If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder the datasets library r...
210
connection issue Hi I am getting this connection issue, resulting in large failure on cloud, @lhoestq I appreciate your help on this. If I want to keep the codes the same, so not using save_to_disk, load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder th...
[ -0.4165681302547455, 0.23384061455726624, 0.0117214135825634, 0.3550441861152649, 0.43614062666893005, -0.18401706218719482, 0.17185881733894348, 0.2030179798603058, -0.19610527157783508, 0.011228885501623154, -0.29301130771636963, 0.1609504371881485, 0.14485138654708862, 0.295597255229949...
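Timeouts like the `ConnectTimeoutError` quoted above are usually transient, so a common mitigation is to retry the download with exponential backoff. A minimal stdlib sketch of that pattern (the `flaky_fetch` function is a made-up stand-in for the real download call, not part of `datasets`):

```python
import time

def retry(fn, attempts=3, base_delay=0.01):
    """Call fn(), retrying with exponential backoff on any exception."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the last error
            time.sleep(base_delay * 2 ** i)

# Simulate a download that times out twice, then succeeds
calls = {"n": 0}
def flaky_fetch():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("timed out")
    return "glue.py contents"

result = retry(flaky_fetch)
print(result)  # glue.py contents
```

In a real pipeline the backoff delays would be much longer (seconds, not milliseconds), and you would catch only the connection-related exception types rather than bare `Exception`.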
https://github.com/huggingface/datasets/issues/1670
Hi ! And thanks for the tips :) Indeed currently `wiki_dpr` takes some time to be processed. Multiprocessing for dataset generation is definitely going to speed things up. Regarding the index, note that for the default configurations, the index is downloaded instead of being built, which avoids spending time on c...
wiki_dpr pre-processing performance
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
129
wiki_dpr pre-processing performance I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won...
[ -0.21966466307640076, -0.18326731026172638, -0.11372315138578415, 0.08813253790140152, -0.1135558933019638, -0.08153702318668365, 0.022308753803372383, 0.3311194181442261, 0.18954718112945557, 0.07095284014940262, 0.020521201193332672, -0.1025923490524292, 0.32460546493530273, 0.1452310383...
https://github.com/huggingface/datasets/issues/1670
I'd be happy to contribute something when I get the time, probably adding multiprocessing and / or cython support to wiki_dpr. I've written cythonized apache beam code before as well. For sharded index building, I used the FAISS example code for indexing 1 billion vectors as a start. I'm sure you're aware that the d...
wiki_dpr pre-processing performance
I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won't repeat the concerns around multipro...
66
wiki_dpr pre-processing performance I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h). I won...
[ -0.2264903038740158, -0.15619057416915894, -0.12017613649368286, 0.06685460358858109, -0.13705939054489136, -0.08907405287027359, 0.025616904720664024, 0.3342381417751312, 0.19152727723121643, 0.0689786747097969, 0.03954358398914337, -0.09713628888130188, 0.30243098735809326, 0.14719098806...
https://github.com/huggingface/datasets/issues/1662
Hi ! The arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is 20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB If you want to reduce the size you can consider using quantization for example, or maybe using dimension reduction te...
Arrow file is too large when saving vector data
I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file?
59
Arrow file is too large when saving vector data I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the ar...
[ 0.11372213065624237, -0.3316560387611389, -0.05896609649062157, 0.4534534215927124, 0.13431185483932495, -0.1161569356918335, -0.17864467203617096, 0.4608658254146576, -0.4080989956855774, 0.33289843797683716, 0.19823601841926575, -0.08847446739673615, -0.12217746675014496, -0.187673121690...
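The size estimate in the reply is easy to verify, and the same arithmetic shows what halving the precision buys. A quick back-of-the-envelope script using only the figures quoted above:

```python
# Verify the quoted size: 20M vectors x 768 dims x 4 bytes (float32)
num_vectors = 20_000_000
dims = 768

bytes_fp32 = num_vectors * dims * 4
print(f"float32: {bytes_fp32 / 1024**3:.1f} GiB")  # 57.2 GiB, i.e. roughly the ~59 GB arrow file

# Halving precision to float16 halves the footprint
bytes_fp16 = num_vectors * dims * 2
print(f"float16: {bytes_fp16 / 1024**3:.1f} GiB")  # 28.6 GiB
```

So the embeddings alone account for essentially the entire file; quantization or dimensionality reduction, as suggested, are the levers that actually change this number.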
https://github.com/huggingface/datasets/issues/1662
Thanks for your reply @lhoestq. I want to save original embedding for these sentences for subsequent calculations. So does arrow have a way to save in a compressed format to reduce the size of the file?
Arrow file is too large when saving vector data
I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file?
36
Arrow file is too large when saving vector data I computed the sentence embedding of each sentence of bookcorpus data using bert base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the ar...
[ 0.06726427376270294, -0.28253647685050964, -0.06414580345153809, 0.40237438678741455, 0.0808352530002594, -0.053776368498802185, -0.2692089080810547, 0.4793400764465332, -0.5607511401176453, 0.3315005302429199, 0.10898420214653015, 0.134074866771698, -0.11804645508527756, -0.21160961687564...
https://github.com/huggingface/datasets/issues/1647
Hi @eric-mitchell, I think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`. For now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip: `pip install git+...
NarrativeQA fails to load with `load_dataset`
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://r...
55
NarrativeQA fails to load with `load_dataset` When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/na...
[ -0.28219395875930786, 0.10543832182884216, 0.032636746764183044, 0.24390925467014313, 0.190006285905838, 0.17991910874843597, 0.1333400011062622, 0.04128921777009964, -0.1444731056690216, 0.044152915477752686, -0.014136204496026039, -0.07569552212953568, -0.06875331699848175, 0.38530969619...
https://github.com/huggingface/datasets/issues/1647
Update: HuggingFace did an intermediate release yesterday just before the v2.0. To load it you can just update `datasets` `pip install --upgrade datasets`
NarrativeQA fails to load with `load_dataset`
When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at https://r...
23
NarrativeQA fails to load with `load_dataset` When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with FileNotFoundError: Couldn't find file locally at narrativeqa/na...
[ -0.21995526552200317, 0.04800909757614136, 0.06081925332546234, 0.3124229311943054, 0.1940942406654358, 0.1517980396747589, 0.14009970426559448, 0.021162360906600952, -0.11976595222949982, 0.07513272017240524, 0.006081521511077881, -0.10550081729888916, -0.02847040630877018, 0.328784108161...
https://github.com/huggingface/datasets/issues/1644
Hover was added recently, that's why it wasn't available yet. To load it you can just update `datasets`:

```
pip install --upgrade datasets
```

and then you can load `hover` with:

```python
from datasets import load_dataset

dataset = load_dataset("hover")
```
HoVeR dataset fails to load
Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library. Steps to reproduce the error: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("hover") Traceback (most recent call last): ...
40
HoVeR dataset fails to load Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library. Steps to reproduce the error: ```python >>> from datasets import load_dataset >>> dataset = load_dataset("hover") Tracebac...
[ -0.22146961092948914, 0.058831214904785156, 0.017389535903930664, 0.2946339249610901, 0.28719592094421387, 0.10362924635410309, 0.2778082489967346, 0.21131360530853271, 0.05538675934076309, 0.044101692736148834, -0.16387611627578735, -0.025890741497278214, 0.006625927984714508, -0.17508718...
https://github.com/huggingface/datasets/issues/1641
I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached, passing the path is (for now) the only way to load the dataset. ```python from datas...
muchocine dataset cannot be downloaded
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ...
88
muchocine dataset cannot be downloaded ```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do...
[ -0.36579954624176025, -0.14910836517810822, -0.053581513464450836, 0.33292150497436523, 0.4308770000934601, 0.12308900058269501, 0.35996246337890625, 0.3173547387123108, 0.3176315426826477, 0.06259815394878387, -0.23802706599235535, 0.0012005828320980072, -0.08891038596630096, 0.0610441491...
https://github.com/huggingface/datasets/issues/1641
Hi @mrm8488 and @amoux! The datasets you are trying to load have been added to the library during the community sprint for v2 last month. They will be available with the v2 release! For now, there are still a couple of solutions to load the datasets: 1. As suggested by @amoux, you can clone the git repo and pass th...
muchocine dataset cannot be downloaded
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ...
81
muchocine dataset cannot be downloaded ```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do...
[ -0.36579954624176025, -0.14910836517810822, -0.053581513464450836, 0.33292150497436523, 0.4308770000934601, 0.12308900058269501, 0.35996246337890625, 0.3173547387123108, 0.3176315426826477, 0.06259815394878387, -0.23802706599235535, 0.0012005828320980072, -0.08891038596630096, 0.0610441491...
https://github.com/huggingface/datasets/issues/1641
If you don't want to clone entire `datasets` repo, just download the `muchocine` directory and pass the local path to the directory. Cheers!
muchocine dataset cannot be downloaded
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ...
23
muchocine dataset cannot be downloaded ```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do...
[ -0.36579954624176025, -0.14910836517810822, -0.053581513464450836, 0.33292150497436523, 0.4308770000934601, 0.12308900058269501, 0.35996246337890625, 0.3173547387123108, 0.3176315426826477, 0.06259815394878387, -0.23802706599235535, 0.0012005828320980072, -0.08891038596630096, 0.0610441491...
https://github.com/huggingface/datasets/issues/1641
Muchocine was added recently, that's why it wasn't available yet. To load it you can just update `datasets`:

```
pip install --upgrade datasets
```

and then you can load `muchocine` with:

```python
from datasets import load_dataset

dataset = load_dataset("muchocine", split="train")
```
muchocine dataset cannot be downloaded
```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ...
41
muchocine dataset cannot be downloaded ```python --------------------------------------------------------------------------- FileNotFoundError Traceback (most recent call last) /usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, do...
[ -0.36579954624176025, -0.14910836517810822, -0.053581513464450836, 0.33292150497436523, 0.4308770000934601, 0.12308900058269501, 0.35996246337890625, 0.3173547387123108, 0.3176315426826477, 0.06259815394878387, -0.23802706599235535, 0.0012005828320980072, -0.08891038596630096, 0.0610441491...
https://github.com/huggingface/datasets/issues/1639
Maybe you can use nltk's treebank detokenizer ? ```python from nltk.tokenize.treebank import TreebankWordDetokenizer TreebankWordDetokenizer().detokenize("it 's a charming and often affecting journey . ".split()) # "it's a charming and often affecting journey." ```
bug with sst2 in glue
Hi I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below. Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on ...
32
bug with sst2 in glue Hi I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below. Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure t...
[ 0.11419376730918884, -0.1911148577928543, 0.053529128432273865, 0.15712475776672363, 0.15624065697193146, -0.3605673015117645, 0.11344189941883087, 0.43313291668891907, -0.09326629340648651, 0.02302677184343338, -0.12253907322883606, 0.13916955888271332, -0.09749691188335419, 0.05867539346...
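The comment above suggests nltk's `TreebankWordDetokenizer` for undoing sst2's tokenization. As a rough standard-library-only sketch of the same idea (an approximation, not a replacement for nltk's detokenizer), the two most visible artifacts — detached punctuation and detached clitics — can be reversed with a couple of regexes:

```python
import re

def detokenize(tokens):
    """Rough approximation of Treebank detokenization using only the
    standard library (a sketch; nltk's TreebankWordDetokenizer handles
    many more cases, e.g. quotes and brackets)."""
    text = " ".join(tokens)
    # Re-attach punctuation to the preceding word.
    text = re.sub(r" ([.,!?;:])", r"\1", text)
    # Re-join clitics such as n't, 's, 're.
    text = re.sub(r" (n't|'s|'re|'ve|'ll|'d|'m)", r"\1", text)
    return text

print(detokenize("it 's a charming and often affecting journey . ".split()))
# it's a charming and often affecting journey.
```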
https://github.com/huggingface/datasets/issues/1639
I don't know if there exists a detokenized version somewhere. Even the version on kaggle is tokenized
bug with sst2 in glue
Hi I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below. Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure to report some results on ...
17
bug with sst2 in glue Hi I am getting very low accuracy on SST2 I investigate this and observe that for this dataset sentences are tokenized, while this is correct for the other datasets in GLUE, please see below. Is there any alternatives I could get untokenized sentences? I am unfortunately under time pressure t...
[ 0.1053769439458847, -0.19167070090770721, 0.05356002599000931, 0.08271041512489319, 0.1684061735868454, -0.3281559944152832, 0.18542969226837158, 0.4321674704551697, -0.09194067120552063, 0.030973974615335464, -0.09434950351715088, 0.14627349376678467, -0.10577061772346497, 0.1380160003900...
https://github.com/huggingface/datasets/issues/1636
I have same issue for other datasets (`myanmar_news` in my case). A version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at ``` https://raw.githubusercontent.com/huggingface/datasets/master/datasets/myanmar_news/myanmar_news.py ``` Meanwhile, other version r...
winogrande cannot be downloaded
Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", ...
90
winogrande cannot be downloaded Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks]...
[ -0.36869674921035767, 0.10485097765922546, -0.06603816896677017, 0.17274348437786102, 0.2931744456291199, 0.06464017182588577, 0.6664782762527466, 0.07428372651338577, 0.3206225037574768, 0.03763069957494736, -0.12091892957687378, 0.08111290633678436, -0.02743987739086151, 0.30855450034141...
https://github.com/huggingface/datasets/issues/1636
It looks like they're two different issues ---------- First for `myanmar_news`: It must come from the way you installed `datasets`. If you install `datasets` from source, then the `myanmar_news` script will be loaded from `master`. However if you install from `pip` it will get it using the version of the li...
winogrande cannot be dowloaded
Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks] File "./finetune_trainer.py", ...
141
winogrande cannot be downloaded Hi, I am getting this error when trying to run the codes on the cloud. Thank you for any suggestion and help on this @lhoestq ``` File "./finetune_trainer.py", line 318, in <module> main() File "./finetune_trainer.py", line 148, in main for task in data_args.tasks]...
[ -0.36869674921035767, 0.10485097765922546, -0.06603816896677017, 0.17274348437786102, 0.2931744456291199, 0.06464017182588577, 0.6664782762527466, 0.07428372651338577, 0.3206225037574768, 0.03763069957494736, -0.12091892957687378, 0.08111290633678436, -0.02743987739086151, 0.30855450034141...
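The maintainer's answer above hinges on which `datasets` release is installed: a pip install loads dataset scripts pinned to that release, while a source install reads `master`. A quick way to check what is actually installed is `importlib.metadata` (this helper is an illustration, not part of the library):

```python
from importlib.metadata import version, PackageNotFoundError

def installed_version(package):
    """Return the installed version of a distribution, or None if it is
    not installed. Handy for checking whether the pip-installed
    `datasets` release is recent enough to ship a given dataset script."""
    try:
        return version(package)
    except PackageNotFoundError:
        return None

# e.g. installed_version("datasets") -> a version string, or None
print(installed_version("definitely-not-an-installed-package"))  # None
```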
https://github.com/huggingface/datasets/issues/1634
That's interesting, can you tell me what you think would be useful to access to inspect a dataset? You can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it?
Inspecting datasets per category
Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
36
Inspecting datasets per category Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq That's interesting, can you tell me what you thin...
[ -0.2455318570137024, -0.15293343365192413, -0.22945678234100342, 0.39996546506881714, 0.04595118761062622, 0.2248050570487976, 0.07310567051172256, 0.47074487805366516, -0.003227919340133667, -0.14973464608192444, -0.5485338568687439, -0.11188462376594543, -0.10111643373966217, 0.325876176...
https://github.com/huggingface/datasets/issues/1634
Hi @thomwolf thank you, I was not aware of this, I was looking into the data viewer linked into readme page. This is exactly what I was looking for, but this does not work currently, please see the attached I am selecting to see all nli datasets in english and it retrieves none. thanks ![5tarDHn9CP6ngeM](ht...
Inspecting datasets per category
Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
55
Inspecting datasets per category Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq Hi @thomwolf thank you, I was not aware of this...
[ -0.3020195960998535, 0.09379728138446808, -0.17149358987808228, 0.4588537812232971, 0.06718777120113373, 0.20000842213630676, 0.14744442701339722, 0.4989735186100006, -0.12374405562877655, -0.07217000424861908, -0.6115687489509583, -0.08187460899353027, -0.03799252584576607, 0.236066102981...
https://github.com/huggingface/datasets/issues/1634
I see 4 results for NLI in English but indeed some are not tagged yet and missing (GLUE), we will focus on that in January (cc @yjernite): https://huggingface.co/datasets?filter=task_ids:natural-language-inference,languages:en
Inspecting datasets per category
Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
28
Inspecting datasets per category Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq I see 4 results for NLI in English but indeed som...
[ -0.11279886960983276, 0.01568920910358429, -0.21974018216133118, 0.41256532073020935, -0.007914667017757893, 0.02699187397956848, 0.08438271284103394, 0.47353944182395935, -0.06211021542549133, -0.1604093611240387, -0.5733581781387329, -0.12270770967006683, 0.05213506892323494, 0.342680037...
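The filter URL quoted in the comment above can be composed programmatically. A minimal sketch with `urllib.parse` — assuming the `filter` query-parameter format shown in that URL, which may change on the Hub side:

```python
from urllib.parse import urlencode

def hub_filter_url(*filters):
    """Build a datasets-hub search URL from tag filters, e.g.
    'task_ids:natural-language-inference'. The 'filter' parameter
    format is taken from the URL in the comment above."""
    query = urlencode({"filter": ",".join(filters)})
    return f"https://huggingface.co/datasets?{query}"

print(hub_filter_url("task_ids:natural-language-inference", "languages:en"))
```

Note that `urlencode` percent-encodes the colons and commas; the Hub accepts both the encoded and the raw forms.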
https://github.com/huggingface/datasets/issues/1634
Hi! You can use `huggingface_hub`'s `list_datasets` for that now: ```python import huggingface_hub # pip install huggingface_hub huggingface_hub.list_datasets(filter="task_categories:question-answering") # or huggingface_hub.list_datasets(filter=("task_categories:natural-language-inference", "languages:"en")) ```
Inspecting datasets per category
Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq
22
Inspecting datasets per category Hi Is there a way I could get all NLI datasets/all QA datasets to get some understanding of available datasets per category? this is hard for me to inspect the datasets one by one in the webpage, thanks for the suggestions @lhoestq Hi! You can use `huggingface_hub`'s `list_dataset...
[ -0.2519457936286926, -0.38847917318344116, -0.24103426933288574, 0.552700936794281, 0.09451419115066528, 0.10464227944612503, 0.004489571321755648, 0.46458420157432556, 0.14036297798156738, -0.10026109218597412, -0.6388822793960571, -0.020044811069965363, -0.1644306480884552, 0.42333945631...
https://github.com/huggingface/datasets/issues/1633
@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file
social_i_qa wrong format of labels
Hi, there is extra "\n" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent. so label is 'label': '1\n', not '1' thanks ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ... 'social_i_qa') cahce dir /jul...
17
social_i_qa wrong format of labels Hi, there is extra "\n" in labels of social_i_qa datasets, no big deal, but I was wondering if you could remove it to make it consistent. so label is 'label': '1\n', not '1' thanks ``` >>> import datasets >>> from datasets import load_dataset >>> dataset = load_dataset( ....
[ 0.013278074562549591, -0.2219311147928238, -0.07822687923908234, 0.3724219799041748, 0.1299583911895752, -0.1916469931602478, 0.06086032837629318, 0.22497916221618652, -0.2229049950838089, 0.3125814199447632, -0.16639083623886108, -0.34459608793258667, -0.07857699692249298, 0.5373510718345...
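Until the trailing `'\n'` reported above is fixed in the dataset script, the labels can be normalized on the user side. A minimal sketch, shaped like a function you could pass to `datasets.Dataset.map` but shown here on a plain dict to stay self-contained:

```python
def clean_label(example):
    """Strip stray whitespace (e.g. the trailing '\\n' in social_i_qa
    labels) from the 'label' field of one example."""
    example["label"] = example["label"].strip()
    return example

# With the real dataset you would write: dataset = dataset.map(clean_label)
print(clean_label({"label": "1\n"}))  # {'label': '1'}
```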