| html_url (string, 48-51 chars) | title (string, 5-155 chars) | comments (string, 63-15.7k chars) | body (string, 0-17.7k chars) | comment_length (int64, 16-949) | text (string, 164-23.7k chars) |
|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/769 | How to choose proper download_mode in function load_dataset? | It's no big deal, but since it can be confusing to users I think it's worth renaming it, and deprecating `GenerateMode` until `datasets` 2.0 at least. IMO it's confusing to have `download_mode=GenerateMode.something` | Hi, I am a beginner with datasets and I am trying to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",4
"Emerges as something rare , an issue movie that 's so hones... | 32 | How to choose proper download_mode in function load_dataset?
Hi, I am a beginner with datasets and I am trying to use datasets to load my csv file.
my csv file looks like this
```
text,label
"Effective but too-tepid biopic",3
"If you sometimes like to go to the movies to have fun , Wasabi is a good place to start .",... |
https://github.com/huggingface/datasets/issues/768 | Add a `lazy_map` method to `Dataset` and `DatasetDict` | This is cool! I think some aspects to think about and decide in terms of API are:
- do we allow several methods (chained i guess)
- how do we inspect the currently set method(s)
- how do we control/reset them | The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function on a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random function and get different outputs at each epoch (like dat... | 41 | Add a `lazy_map` method to `Dataset` and `DatasetDict`
The library is great, but it would be even more awesome with a `lazy_map` method implemented on `Dataset` and `DatasetDict`. This would apply a function on a given item, but only when the item is requested. Two use cases:
1. load image on the fly
2. apply a random f... |
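Not the proposed `lazy_map` itself, but a rough sketch of the requested on-the-fly behavior, written against the `set_transform` API that later versions of `datasets` expose; the file names and the fake decoding step are placeholders:

```python
from datasets import Dataset

ds = Dataset.from_dict({"path": ["img_0.png", "img_1.png"]})  # placeholder file names

def load_images(batch):
    # A real pipeline would decode the files with PIL/NumPy here;
    # a fake string stands in so the sketch stays self-contained.
    batch["image"] = [f"decoded:{p}" for p in batch["path"]]
    return batch

# Unlike `map`, the transform is not materialized: it runs each time items are accessed.
ds.set_transform(load_images)
print(ds[0])
```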
https://github.com/huggingface/datasets/issues/767 | Add option for named splits when using ds.train_test_split | Yes definitely we should give more flexibility to control the name of the splits outputted by `train_test_split`.
Related is the very interesting feedback from @bramvanroy on how we should improve this method: https://discuss.huggingface.co/t/how-to-split-main-dataset-into-train-dev-test-as-datasetdict/1090/5
And... | ### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Therefore, it's kinda useless to get a `test` split back from `tra... | 58 | Add option for named splits when using ds.train_test_split
### Feature Request 🚀
Can we add a way to name your splits when using the `.train_test_split` function?
In almost every use case I've come across, I have a `train` and a `test` split in my `DatasetDict`, and I want to create a `validation` split. Ther... |
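Until such an option exists, a common workaround is to split the train set and reassemble the `DatasetDict` under the names you want. A minimal sketch with toy data (all names and sizes below are illustrative):

```python
from datasets import Dataset, DatasetDict

# Toy data standing in for a real corpus.
base = Dataset.from_dict({"text": [f"example {i}" for i in range(100)],
                          "label": [i % 2 for i in range(100)]})
ds = DatasetDict({"train": base, "test": base.select(range(10))})

# train_test_split always names its outputs "train" and "test",
# so rebuild the DatasetDict with the split names you actually want.
split = ds["train"].train_test_split(test_size=0.1, seed=42)
ds = DatasetDict({
    "train": split["train"],
    "validation": split["test"],  # the freshly carved-out chunk becomes "validation"
    "test": ds["test"],
})
print({name: len(d) for name, d in ds.items()})
```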
https://github.com/huggingface/datasets/issues/761 | Downloaded datasets are not usable offline | Yes currently you need an internet connection because the lib tries to check for the etag of the dataset script online to see if you don't have it locally already.
If we add a way to store the etag/hash locally after the first download, it would allow users to first download the dataset with an internet connection, ... | I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach the online dataset.
Is this the intended behavior ?
(Sorry, I wrote the first version of this issue while still on nlp 0.3.0). | 75 | Downloaded datasets are not usable offline
I've been trying to use the IMDB dataset offline, but after downloading it and turning off the internet it still raises an error from the ```requests``` library trying to reach the online dataset.
Is this the intended behavior ?
(Sorry, I wrote the first version o... |
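The workaround that usually gets suggested for this, as a hedged sketch (the local path is a placeholder): prepare the dataset once while online, persist it, and reload it from disk when offline.

```python
from datasets import load_dataset, load_from_disk

# With an internet connection: download and prepare the dataset, then persist it.
imdb = load_dataset("imdb")
imdb.save_to_disk("./imdb_local")  # placeholder path

# Later, without a connection: reload the prepared Arrow files, no network needed.
imdb = load_from_disk("./imdb_local")
print(imdb)
```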
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Does this HEAD request return 200 on your machine ?
```python
import requests
requests.head("https://raw.githubuserc... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“train”)
And I got the following errors.
Traceback (most recent call last):
File “test.py”, line 7, in
test_dataset = load_da... | 28 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
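The connectivity check being asked about, written out in full (the URL is the one from the error in the title; a 200 status means the dataset script is reachable from the machine):

```python
import requests

url = "https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py"
response = requests.head(url)
print(response.status_code)  # 200 means the script is reachable from this machine
```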
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Thank you very much for your response.
When I run
```
import requests
requests.head("https://raw.githubuserconten... | Hey, I want to load the cnn-dailymail dataset for fine-tune.
I write the code like this
from datasets import load_dataset
test_dataset = load_dataset(“cnn_dailymail”, “3.0.0”, split=“train”)
And I got the following errors.
Traceback (most recent call last):
File “test.py”, line 7, in
test_dataset = load_da... | 272 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | I can browse the Google Drive through Google Chrome. It's weird. I can download the dataset through Google Drive manually. | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 20 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Is it possible to download the dataset manually from Google Drive and use it for further testing ? How can I do this ? I want to reproduce the model in this link https://huggingface.co/patrickvonplaten/bert2bert-cnn_dailymail-fp16. But I can't download the dataset through the load_dataset method. I have tried many times and ... | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 56 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | The head request should definitely work, not sure what's going on on your side.
If you find a way to make it work, please post it here since other users might encounter the same issue.
If you don't manage to fix it you can use `load_dataset` on Google Colab and then save it using `dataset.save_to_disk("path/to/data... | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 75 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi
I want to know if this problem has been solved because I encountered a similar issue. Thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py` | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 26 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi @smile0925 ! Do you have an internet connection ? Are you using some kind of proxy that may block the access to this file ?
Otherwise you can try to update `datasets` since we introduced retries for http requests in the 1.2.0 version
```
pip install --upgrade datasets
```
Let me know if that helps. | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 56 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi @lhoestq
Oh, maybe you are right. I find that my server uses some kind of proxy that blocks access to this file.

 | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 26 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | > Hi @lhoestq
> Oh, maybe you are right. I find that my server uses some kind of proxy that blocks access to this file.
> 
I have the same problem, have you solved it? Many thanks | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 40 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
https://github.com/huggingface/datasets/issues/759 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py | Hi @ZhengxiangShi
You can first check whether your network can access these files. I need to use a VPN to access them, so I download the files that cannot be accessed locally in advance, and then use them in the code. Like this,
`train_data = datasets.load_dataset("xsum.py", split="train")` | Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_dailymail", "3.0.0", split="train")
And I got the following errors.
Traceback (most recent call last):
File "test.py", line 7, in
test_dataset = load_da... | 49 | (Load dataset failure) ConnectionError: Couldn’t reach https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/cnn_dailymail/cnn_dailymail.py
Hey, I want to load the cnn-dailymail dataset for fine-tuning.
I wrote the code like this:
from datasets import load_dataset
test_dataset = load_dataset("cnn_da... |
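A sketch of that workaround, under the assumption that only the raw.githubusercontent.com script URL is blocked; note the data files the script itself points to must still be downloadable from your network:

```python
import requests
from datasets import load_dataset

# Fetch the dataset script once from a network that can reach it (e.g. through a VPN)...
script_url = "https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/xsum/xsum.py"
with open("xsum.py", "wb") as f:
    f.write(requests.get(script_url).content)

# ...then point load_dataset at the local copy instead of the remote script.
train_data = load_dataset("xsum.py", split="train")
```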
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Hi ! Thanks for reporting.
Is the text length of your data evenly distributed across your dataset ? I mean, could it be because the examples in the first part of your dataset are slower to process ?
Also, how many CPUs can you use for multiprocessing ?
```python
import multiprocessing
print(mu... | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 62 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... |
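For reference, a sketch of the kind of setup being discussed, with the CPU check mentioned above spelled out; the tokenizer name and file path are placeholders:

```python
import multiprocessing
from datasets import load_dataset
from transformers import AutoTokenizer

print(multiprocessing.cpu_count())  # how many worker processes can realistically help

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # placeholder model name
dataset = load_dataset("text", data_files=["corpus.txt"], split="train")  # placeholder file

# num_proc shards the dataset and tokenizes the shards in parallel processes.
dataset = dataset.map(
    lambda ex: tokenizer(ex["text"], truncation=True, max_length=512),
    batched=True,
    num_proc=4,
)
```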
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Using a pre-trained HF tokenizer. The result is the same with tokenizer multiprocessing off and on.
I have (absolutely) no idea about the distribution, but since this issue occurs on all of my datasets (regardless of files), I don't think distribution is the problem.
I can use up to 16 cores. | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 50 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Ok weird, I don't manage to reproduce this issue on my side.
Does it happen even with `num_proc=2` for example ?
Also could you provide more details about your OS and the versions of tokenizers/datasets/multiprocess that you're using ? | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 39 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | Yes, I can confirm it also happens with ```num_proc=2```.
```
tokenizers 0.9.2
datasets 1.1.2
multiprocess 0.70.10
```
```
Linux nipa2020-0629 4.4.0-178-generic #208-Ubuntu SMP Sun Apr 5 23:45:10 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
``` | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 34 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... |
https://github.com/huggingface/datasets/issues/758 | Process 0 very slow when using num_procs with map to tokenizer | I can't reproduce on my side unfortunately with the same versions.
Do you have issues when doing multiprocessing with python ?
```python
from tqdm.auto import tqdm
from multiprocess import Pool, RLock
def process_data(shard):
# implement
num_proc = 8
shards = [] # implement, this must be a list of siz... | <img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
dataset = dataset.map(lambda ex: tokenizer(ex["text"], add_speci... | 73 | Process 0 very slow when using num_procs with map to tokenizer
<img width="721" alt="image" src="https://user-images.githubusercontent.com/17930170/97066109-776d0d00-15ed-11eb-8bba-bb4d2e0fcc33.png">
The code I am using is
```
dataset = load_dataset("text", data_files=[file_path], split='train')
... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | ```python
tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')
def tokenize(batch):
return tokenizer(batch['text'], padding='max_length', truncation=True,max_length=512)
dataset = load_dataset("bookcorpus",split='train[:1000]').shuffle()
dataset = dataset.map(tokenize, batched=True, batc... | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 64 | CUDA out of memory
With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
```python
tokenizer = FunnelTokenizer.from_pretrained('funnel-transformer/small')
def tokenize(... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | `RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 14.35 GiB already allocated; 753.75 MiB free; 14.39 GiB reserved in total by PyTorch)
Exception raised from malloc at /pytorch/c10/cuda/CUDACachingAllocator.cpp:272 (most recent call first):`
part of the error output | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 44 | CUDA out of memory
With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
`RuntimeError: CUDA out of memory. Tried to allocate 954.00 MiB (GPU 0; 15.90 GiB total capacity; 1... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | from funnel model to bert model: error still happened
from your dataset to LineByLineTextDataset: error disappeared | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 18 | CUDA out of memory
With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
from funnel model to bert model: error still happened
from your dataset to LineByLineTextDatase... |
https://github.com/huggingface/datasets/issues/757 | CUDA out of memory | Since you're using a data collator you don't need to tokenize the dataset using `map`. Could you try not to use `map` and only the data collator instead ? The data collator is supposed to pad to the longest sequence in each batch afaik, instead of padding to 512.
Also cc @sgugger | With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
| 52 | CUDA out of memory
With your dataset, CUDA runs out of memory as soon as the trainer begins:
however, without changing any other element/parameter, just switching the dataset to `LineByLineTextDataset` makes everything OK.
Since you're using a data collator you don't need to tokenize the dataset using `map`. Could you t... |
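One way to read that advice, sketched under the assumption of an MLM setup like the one `LineByLineTextDataset` implies (the model name and train slice are taken from the snippet above; everything else is illustrative): tokenize without fixed-length padding and let the collator pad each batch to its longest member.

```python
from datasets import load_dataset
from transformers import AutoTokenizer, DataCollatorForLanguageModeling

tokenizer = AutoTokenizer.from_pretrained("funnel-transformer/small")

def tokenize(batch):
    # No padding="max_length" here: the collator pads each batch dynamically instead.
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = load_dataset("bookcorpus", split="train[:1000]").map(tokenize, batched=True)
data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True, mlm_probability=0.15)
# Pass data_collator=data_collator (and the tokenized dataset) to the Trainer.
```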
https://github.com/huggingface/datasets/issues/751 | Error loading ms_marco v2.1 using load_dataset() | There was a similar issue in #294
Clearing the cache and downloading the dataset again did the job. Could you try to clear your cache and download the dataset again ? | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a data... | 31 | Error loading ms_marco v2.1 using load_dataset()
Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
... |
https://github.com/huggingface/datasets/issues/751 | Error loading ms_marco v2.1 using load_dataset() | I was able to load the dataset successfully, I'm pretty sure it's just a cache issue that you have.
Let me know if clearing your cache fixes the problem | Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
9
10 # Downloading and loading a data... | 29 | Error loading ms_marco v2.1 using load_dataset()
Code:
`dataset = load_dataset('ms_marco', 'v2.1')`
Error:
```
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
<ipython-input-16-34378c057212> in <module>()
... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Small poll @thomwolf @yjernite @lhoestq @JetRunner @qiweizhen .
As stated in the XGLUE paper (https://arxiv.org/pdf/2004.01401.pdf), for each of the 11 downstream tasks training data is only available in English, whereas development and test data are available in multiple different languages, *cf.* here.
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 105 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Small poll @thomwolf @yjernite @lh... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | In this case we should have named splits, so config `ner` has splits `train`, `validation`, `test-en`, `test-ar`, `test-bg`, etc...
This is more in the spirit of the task afaiu, and will avoid making users do the filtering step themselves when testing different models or different configurations of the same model. | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 50 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
In this case we should have named s... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | I see your point!
I think this would be quite feasible to do and makes sense to me as well! In the paper results are reported per language, so it seems more natural to do it this way.
Good for me @yjernite ! What do the others think? @lhoestq
 | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 49 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
I see your point!
I think this ... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Okay, actually it's not that easy to add things like `test-de` to `datasets` => this would be the first dataset to have this.
See: https://github.com/huggingface/datasets/pull/802 | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 24 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 24 | [XGLUE] Adding new dataset
Okay, actually it's not that easy to add ... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | IMO we should have one config per language. That's what we're doing for xnli, xtreme etc.
Having split names that depend on the language seems wrong. We should try to avoid split names that are not train/val/test.
Sorry for the late response on this one | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 44 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
IMO we should have one config per l... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | @lhoestq agreed on having one config per language, but we also need to be able to have different split names and people are going to want to use hyphens, so we should at the very least warn them why it's failing :) E.g. for ANLI with different stages of data (currently using underscores) or https://www.tau-nlp.org/comm... | XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 61 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
@lhoestq agreed on having one confi... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Really cool dataset 👍 btw. does Transformers support all 11 tasks 🤔 would be awesome to have a xglue script (like the "normal" glue one) | XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 25 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Really cool dataset 👍 btw. does Tr... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Just to make sure this is what we want here. If we add one config per language,
this means that this dataset ends up with well over 100 different configs most of which will have the same `train` split. The train split is always in English. Also, I'm not sure whether it's better for the user to be honest.
I thin... | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 107 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Just to make sure this is what we w... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | Oh yes, right, I didn't notice the train set was always in English, sorry.
Moreover it seems that the way this dataset is used is to pick a pretrained multilingual model, fine-tune it on the English train set and then evaluate on each test set (one per language).
So to better fit the usual usage of this dataset, I agree... | XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 122 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE-like dataset proposed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
Oh yes right I didn't notice the tr... |
https://github.com/huggingface/datasets/issues/749 | [XGLUE] Adding new dataset | According to the table in https://huggingface.co/datasets/xglue, Urdu only exists for POS and XNLI in XGLUE - not for summarization | XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance | 19 | [XGLUE] Adding new dataset
XGLUE is a multilingual GLUE like dataset propesed in this [paper](https://arxiv.org/pdf/2004.01401.pdf).
I'm planning on adding the dataset to the library myself in a couple of weeks.
Also tagging @JetRunner @qiweizhen in case I need some guidance
According to the table in https://h... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Thank you !
Could you provide a csv file that reproduces the error ?
It doesn't have to be one of your datasets, as long as it reproduces the error.
That would help a lot ! | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 36 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | I think another good example is the following:
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sts-dev.csv"], delimiter="\t", column_names=["one", "two", "three", "four", "score", "sentence1", "sentence2"], script_version="master")`
`
Displayed error `CSV parse error: Expe... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 72 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi, it seems I also can't read a csv file. I was trying with a dummy csv with only three rows.
```
text,label
I hate google,negative
I love Microsoft,positive
I don't like you,negative
```
I was using the HuggingFace image in Paperspace Gradient (datasets==1.1.3). The following code doesn't work:
```
from datas... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 141 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | This is because load_dataset without `split=` returns a dictionary mapping split names (train/validation/test) to datasets.
You can do
```python
from datasets import load_dataset
dataset = load_dataset('csv', script_version="master", data_files=['test_data.csv'], delimiter=",")
print(dataset["train"][0])
```
Or if y... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 55 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
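Spelled out, the two access patterns that comment describes (assuming a local `test_data.csv` like the one in the snippet):

```python
from datasets import load_dataset

# Without split=...: a DatasetDict with a single "train" split comes back.
dataset_dict = load_dataset("csv", data_files=["test_data.csv"], delimiter=",")
print(dataset_dict["train"][0])

# With split="train": the Dataset itself comes back directly.
dataset = load_dataset("csv", data_files=["test_data.csv"], delimiter=",", split="train")
print(dataset[0])
```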
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Good point
Design question for us, though: should `load_dataset` when no split is specified and only one split is present in the dataset (common use case with CSV/text/JSON datasets) return a `Dataset` instead of a `DatasetDict`? I feel like it's often what the user is expecting. It breaks a bit the paradigm of a uniqu... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 89 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.
I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.
For the other datasets on the other hand... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 73 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Thanks for your quick response! I'm fine with specifying the split as @lhoestq suggested. My only concern is when I'm loading from python dict or pandas, the library returns a dataset instead of a dictionary of datasets when no split is specified. I know that they use a different function `Dataset.from_dict` or `Datase... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 78 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | ```
from datasets import load_dataset
dataset = load_dataset('csv', data_files='./amazon_data/Video_Games_5.csv', delimiter=",", split=['train', 'test'])
```
I was running the above line, but got this error.
```ValueError: Unknown split "test". Should be one of ['train'].```
The data is amazon product data. I... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 78 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi ! the `split` argument in `load_dataset` is used to select the splits you want among the available splits.
However when loading a csv with a single file as you did, only a `train` split is available by default.
Indeed since `data_files='./amazon_data/Video_Games_5.csv'` is equivalent to `data_files={"train": './... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 123 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
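A sketch of how to make `split="test"` valid in that situation: pass one file per split so both splits exist up front (the file names are placeholders):

```python
from datasets import load_dataset

# One file per split key; split="test" is now a valid selection.
dataset = load_dataset(
    "csv",
    data_files={"train": "train.csv", "test": "test.csv"},  # placeholder paths
    delimiter=",",
)
print(dataset)          # DatasetDict with "train" and "test"
test = dataset["test"]  # equivalently: load_dataset(..., split="test")
```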
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | > In this case the user expects to get only one dataset object instead of the dictionary of datasets since only one csv file was specified without any split specifications.
> I'm ok with returning the dataset object if no split specifications are given for text/json/csv/pandas.
>
> For the other datasets on the ot... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 107 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | > Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
>
> `from datasets import load_dataset`
> `dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")`
>
> Displayed error:
> `... ArrowI... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 319 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi @kauvinlucas
You can use the latest versions of `datasets` to do this.
To do so, just `pip install datasets` instead of `nlp` (the library was renamed) and then
```python
from datasets import load_dataset
dataset = load_dataset('csv', data_files='sample_data.csv') | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 38 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi
I'm having a different problem with loading local csv.
```Python
from datasets import load_dataset
dataset = load_dataset('csv', data_files='sample.csv')
```
gives `ValueError: Specified named and prefix; you can only specify one.` error
versions:
- datasets: 1.1.3
- python: 3.9.6
- py... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 42 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Oh.. I figured it out. According to issue #[42387](https://github.com/pandas-dev/pandas/issues/42387) from pandas, this new version does not accept None for both parameters (which was being done by the repo I'm testing). Downgrading to Pandas==1.0.4 and Python==3.8 worked | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 35 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi,
I got an `OSError: Cannot find data file. ` when I tried to use load_dataset with tsv files. I have checked the paths, and they are correct.
versions
- python: 3.7.9
- datasets: 1.1.3
- pyarrow: 2.0.0
- transformers: 4.2.2
~~~
data_files = {"train": "train.tsv", "test",: "test.tsv"}
datasets = load_... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 229 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi ! It looks like the error stacktrace doesn't match with your code snippet.
What error do you get when running this ?
```
data_files = {"train": "train.tsv", "test",: "test.tsv"}
datasets = load_dataset("csv", data_files=data_files, delimiter="\t")
```
can you check that both tsv files are in the same folder ... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 57 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi @lhoestq, Below is the entire error message after I moved both tsv files to the same directory. It's the same as what I got before.
```
/projectnb2/media-framing/env-trans4/lib/python3.7/site-packages/torch/cuda/__init__.py:52: UserWarning: CUDA initialization: Found no NVIDIA driver on your system. Please check that ... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 311 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi !
Can you try running this in a python shell directly ?
```python
import os
from datasets import load_dataset
data_files = {"train": "train.tsv", "test": "test.tsv"}
assert all(os.path.isfile(data_file) for data_file in data_files.values()), "Couldn't find files"
datasets = load_dataset("csv", data_fil... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 56 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi @lhoestq,
Below is what I got from the terminal after I copied and ran your code. I think the files themselves are good since there is no assertion error.
```
Using custom data configuration default-df627c23ac0e98ec
Downloading and preparing dataset csv/default (download: Unknown size, generated: Unknown size,... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 160 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.
By default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ?
You can also try to change the cache directory by passing `cach... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 58 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
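The suggested fix, sketched (the cache path is a placeholder; any directory you have write permissions for works):

```python
from datasets import load_dataset

data_files = {"train": "train.tsv", "test": "test.tsv"}
datasets_dict = load_dataset(
    "csv",
    data_files=data_files,
    delimiter="\t",
    cache_dir="/scratch/my_user/hf_cache",  # placeholder: a writable cache location
)
```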
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | Thank you!! @lhoestq
For some reason, I don't have the default path for datasets to cache, maybe because I work from a remote system. The issue was solved after I passed the `cache_dir` argument to the function. Thank you very much!! | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 40 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/743 | load_dataset for CSV files not working | > Hi, could this be a permission error ? I think it fails to close the arrow file that contains the data from your CSVs in the cache.
>
> By default datasets are cached in `~/.cache/huggingface/datasets`, could you check that you have the right permissions ? You can also try to change the cache directory by passing ... | Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
`
Displayed error:
`
...
ArrowInva... | 135 | load_dataset for CSV files not working
Similar to #622, I've noticed there is a problem when trying to load a CSV file with datasets.
`
from datasets import load_dataset
`
`
dataset = load_dataset("csv", data_files=["./sample_data.csv"], delimiter="\t", column_names=["title", "text"], script_version="master")
... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Thanks for reporting.
In theory since the dataset script is just made to yield examples to write them into an arrow file, it's not supposed to create memory issues.
Could you please try to run this exact same loop in a separate script to see if it's not an issue with `PIL` ?
You can just copy paste what's inside `... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 96 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Here's an equivalent loading code:
```python
images_path = "PHOENIX-2014-T-release-v3/PHOENIX-2014-T/features/fullFrame-210x260px/train"
for dir_path in tqdm(os.listdir(images_path)):
frames_path = os.path.join(images_path, dir_path)
np_frames = []
for frame_name in os.listdir(frames_path):
... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 75 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I've had similar issues with Arrow once. I'll investigate...
For now maybe we can simply use the images paths in the dataset you want to add. I don't expect to fix this memory issue until 1-2 weeks unfortunately. Then we can just update the dataset with the images. What do you think ? | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 53 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | If it's just 1-2 weeks, I think it's best if we wait. I don't think it is very urgent to add it, and it will be much more useful with the images loaded rather than not (the images are low resolution, and thus papers using this dataset actually fit the entire video into memory anyway)
I'll keep working on other datas... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 65 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Ok found the issue. This is because the batch size used by the writer is set to 10 000 elements by default so it would load your full dataset in memory (the writer has a buffer that flushes only after each batch). Moreover to write in Apache Arrow we have to use python objects so what's stored inside the ArrowWriter's ... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 97 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
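A rough back-of-envelope check (my own arithmetic, using only the figures quoted in this thread: 260x210x3 uint8 frames, up to 400 frames per example, int32 values in the writer's buffer, and the default flush size of 10 000 examples) shows why the buffer could never be filled on a normal machine:

```python
# Each value is an assumption taken from the numbers mentioned in the thread.
frame_bytes = 260 * 210 * 3                  # ~164 KB per uint8 frame
example_bytes = frame_bytes * 400            # ~66 MB per example at max length
buffered_bytes = example_bytes * 4 * 10_000  # int32 buffer, default batch of 10 000
print(f"~{example_bytes / 1e6:.0f} MB per example, "
      f"~{buffered_bytes / 1e12:.1f} TB before the first flush")
```

So even a small fraction of the default batch exhausts RAM, which is consistent with a smaller writer batch size fixing the problem.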
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Thanks, that's awesome you managed to find the problem.
About the 32 bits - really? there isn't a way to serialize the numpy array somehow? 32 bits would take 4 times the memory / disk space needed to store these videos.
Please let me know when the batch size is customizable and I'll try again! | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 55 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | The 32 bit integers are only used in the writer's buffer because Arrow doesn't take numpy arrays correctly as input. On disk it's stored as uint8 in arrow format ;) | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 30 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | > I don't expect to fix this memory issue until 1-2 weeks unfortunately.
Hi @lhoestq
not to rush of course, but I was wondering if you have a new timeline so I know how to plan my work around this :) | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 41 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Alright it should be good now.
You just have to specify `_writer_batch_size = 10` for example as a class attribute of the dataset builder class. | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 25 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I added it, but still it consumes as much memory
https://github.com/huggingface/datasets/pull/722/files#diff-2e0d865dd4a60dedd1861d6f8c5ed281ded71508467908e1e0b1dbe7d2d420b1R66
Did I not do it correctly? | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 17 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Yes you did it right.
Did you rebase to include the changes of #828 ?
EDIT: looks like you merged from master in the PR. Not sure why you still have an issue then, I will investigate | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 37 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Sorry for the delay, I was busy with the dataset sprint and the incredible amount of contributions to the library ^^'
What you can try to do to find what's wrong is check at which frequency the arrow writer writes all the examples from its in-memory buffer on disk. This happens [here](https://github.com/huggingface/... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 128 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | I had the same issue. It works for me by setting `DEFAULT_WRITER_BATCH_SIZE = 10` of my dataset builder class. (And not `_writer_batch_size` as previously mentioned). I guess this is because `_writer_batch_size` is overwritten in `__init__` (see [here](https://github.com/huggingface/datasets/blob/0e2563e5d5c2fc193ea27... | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 37 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
https://github.com/huggingface/datasets/issues/741 | Creating dataset consumes too much memory | Yes the class attribute you can change is `DEFAULT_WRITER_BATCH_SIZE`.
Otherwise in `load_dataset` you can specify `writer_batch_size=` | Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, base_path, split):
""" Yields examp... | 16 | Creating dataset consumes too much memory
Moving this issue from https://github.com/huggingface/datasets/pull/722 here, because it seems like a general issue.
Given the following dataset example, where each example saves a sequence of 260x210x3 images (max length 400):
```python
def _generate_examples(self, ... |
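Putting the two options above together, here is a minimal sketch of a builder that flushes every 10 examples. The class name and method bodies are placeholders; only the `DEFAULT_WRITER_BATCH_SIZE` attribute and the `writer_batch_size=` argument come from the comments in this thread.

```python
import datasets


class MyVideoDataset(datasets.GeneratorBasedBuilder):
    """Hypothetical builder, shown only to illustrate where the attribute goes."""

    # Flush the Arrow writer every 10 examples instead of the default 10 000,
    # so large examples never accumulate in memory.
    DEFAULT_WRITER_BATCH_SIZE = 10

    def _info(self):
        ...

    def _split_generators(self, dl_manager):
        ...

    def _generate_examples(self, base_path, split):
        ...


# Alternatively, without touching the builder, the batch size can be overridden
# at load time: load_dataset("path/to/script.py", writer_batch_size=10)
```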
https://github.com/huggingface/datasets/issues/737 | Trec Dataset Connection Error | Thanks for reporting.
That's because the download url has changed. The old url now redirects to the new one but we don't support redirection for downloads.
I'm opening a PR to update the url | **Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn't reach http://cogcomp.org/Data/... | 34 | Trec Dataset Connection Error
**Datasets Version:**
1.1.2
**Python Version:**
3.6/3.7
**Code:**
```python
from datasets import load_dataset
load_dataset("trec")
```
**Expected behavior:**
Download Trec dataset and load Dataset object
**Current Behavior:**
Get a connection error saying it couldn'... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | Thanks for reporting. That's a bug indeed.
Apparently only the `data_files` parameter is taken into account right now in `DatasetBuilder._create_builder_config` but it should also be the case for `config_kwargs` (or at least the instantiated `builder_config`) | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 35 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
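Until `config_kwargs` such as `encoding` are part of the cache key, one possible workaround (my suggestion, not taken from this thread) is to force the dataset to be regenerated so the second call doesn't silently reuse the first call's cache:

```python
import datasets

# Force re-preparation so the new encoding actually takes effect; this bypasses
# the cache and is therefore slower. Recent versions also accept the
# DownloadMode enum instead of the plain string.
dataset = datasets.load_dataset(
    "text",
    data_files=["test1.txt"],
    split="train",
    encoding="utf-8",
    download_mode="force_redownload",
)
print(dataset[0])
```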
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | Hi, has this bug been fixed? When I load JSON files, I get the same errors with the command
`!python3 run.py --do_train --task qa --dataset squad-retrain-data/train-v2.0.json --output_dir ./re_trained_model/`
I changed the dataset loading to JSON by referring to https://huggingface.co/docs/datasets/loading.html
`dataset = ... | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 63 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | ```ds = load_dataset("csv", data_files={'train': 'train.csv', 'test': 'test.csv'})```
Gives the output
```Using custom data configuration default-5c8ae7c208631aca```
and the code hangs there. | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 20 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | > `ds = load_dataset("csv", data_files={'train': 'train.csv', 'test': 'test.csv'})`
>
> Gives the output `Using custom data configuration default-5c8ae7c208631aca`
>
> and the code hangs there.
Have you solved it? I met this problem too! | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 34 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | Can you Ctrl+C to kill the process and share the stacktrace here ? It should show at which location in the code it was hanging | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 25 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | I had the same issue and solved it by downgrading the datasets version from 2.7.0 -> 2.6.1
pip install -q datasets==2.6.1 | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 21 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
https://github.com/huggingface/datasets/issues/730 | Possible caching bug | > I had the same issue and solved it by downgrading the datasets version from 2.7.0 -> 2.6.1 pip install -q datasets==2.6.1
Thanks, it works for me | The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(dataset[0])
```
produc... | 27 | Possible caching bug
The following code with `test1.txt` containing just "🤗🤗🤗":
```
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="latin_1")
print(dataset[0])
dataset = datasets.load_dataset('text', data_files=['test1.txt'], split="train", encoding="utf-8")
print(da... |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi try, to provide more information please.
Example code in a colab to reproduce the error, details on what you are trying to do and what you were expected and details on your environment (OS, PyPi packages version). | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 38 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... |
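For what it's worth, a checksum mismatch usually means the hosted file or the dataset script changed, so a hedged first step (not from this thread) is to upgrade `datasets` and then retry with a clean re-download, so no stale cached archive is verified against the new checksums:

```python
from datasets import load_dataset

# Re-download everything from scratch; assumes the upgraded library already
# ships the corrected URL/checksums for the dataset.
dataset = load_dataset("openwebtext", download_mode="force_redownload")
```

Older releases also expose an `ignore_verifications=True` flag, but that skips the safety check entirely, so it should be a last resort.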
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | > Hi try, to provide more information please.
>
> Example code in a colab to reproduce the error, details on what you are trying to do and what you expected, and details on your environment (OS, PyPi packages version).
I have update the description, sorry for the incomplete issue by mistake. | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 53 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, I have manually downloaded the compressed dataset `openwebtext.tar.xz' and use the following command to preprocess the examples:
```
>>> dataset = load_dataset('/home/admin/workspace/datasets/datasets-master/datasets-master/datasets/openwebtext', data_dir='/home/admin/workspace/datasets')
Using custom data confi... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 87 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | NonMatchingChecksumError: Checksums didn't match for dataset source files:
I got this issue when I tried to work on my own dataset. Kindly tell me where I can get the checksums of the train and dev files in my GitHub repo.
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 39 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Hi, I got the similar issue for xnli dataset while working on colab with python3.7.
`nlp.load_dataset(path = 'xnli')`
The above command resulted in following issue :
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://www.nyu.edu/projects/bowman/xnli/XNLI-1.0.zip']
```... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 44 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... |
https://github.com/huggingface/datasets/issues/726 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset | Says fixed but I'm still getting it.
command:
dataset = load_dataset("ted_talks_iwslt", language_pair=("en", "es"), year="2014",download_mode="force_redownload")
got:
Using custom data configuration en_es_2014-35a2d3350a0f9823
Downloading and preparing dataset ted_talks_iwslt/en_es_2014 (download: 2.15 K... | Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, post-processed: Unknown size, total: 49.03 GiB) to /home/admin/.cache/huggingface/datasets/op... | 52 | "Checksums didn't match for dataset source files" error while loading openwebtext dataset
Hi,
I have encountered this problem during loading the openwebtext dataset:
```
>>> dataset = load_dataset('openwebtext')
Downloading and preparing dataset openwebtext/plain_text (download: 12.00 GiB, generated: 37.04 GiB, p... |
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | Should be fixed now:

Not sure I understand what you mean by the second part?
| It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 16 | need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked... |
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | Thank you!
> Not sure I understand what you mean by the second part?
Compare the 2:
* https://huggingface.co/datasets/wikihow
* https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
Can you see the difference? 2nd has formatting, 1st doesn't.
| It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 31 | need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked... |
https://github.com/huggingface/datasets/issues/724 | need to redirect /nlp to /datasets and remove outdated info | For context, those are two different pages (not an old vs new one): one is from the dataset viewer (you can browse data inside the datasets), while the other is just a basic reference page displaying some metadata about the dataset.
For the second one, we'll move to markdown parsing soon, so it'll be formatted better. | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | 56 | need to redirect /nlp to /datasets and remove outdated info
It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked... |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | Nice ! :)
It's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.
Could you add details on what they could be used for ?
| I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 36 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | A new configuration for those datasets should do the job then.
Note that until now datasets like xsum only had one configuration. It means that users didn't have to specify the configuration name when loading the dataset. If we add new configs, users that update the lib will have to update their code to specify the de... | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 65 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | Oh yes why not. I'm more in favor of this actually since pseudo labels are things that users (not dataset authors in general) can compute by themselves and share with the community | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 32 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | 
I assume I should (for example) rename the xsum dir, change the URL, and put the modified dir somewhere in S3? | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 22 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... |
https://github.com/huggingface/datasets/issues/723 | Adding pseudo-labels to datasets | You can use the `datasets-cli` to upload the folder with your version of xsum with the pseudo labels.
```
datasets-cli upload_dataset path/to/xsum
``` | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | 23 | Adding pseudo-labels to datasets
I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generatio... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | We only support http by default for downloading.
If you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the dataset... | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 120 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | Also maybe it could be interesting to have direct support for ftp inside the `datasets` library. Do you know any good libraries that we might consider adding as an (optional?) dependency?
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 34 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | Downloading an `ftp` file is as simple as:
```python
import urllib.request
urllib.request.urlretrieve('ftp://server/path/to/file', 'file')
```
I believe this should be supported by the library, as it doesn't use any extra dependency and is a trivial amount of code.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 35 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | I know it's unorthodox, but I added `ftp` download support to `file_utils` in the same PR https://github.com/huggingface/datasets/pull/722
So it's possible to understand the interaction of the download component with the ftp download ability
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 33 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | @hoanganhpham1006 yes.
See pull request https://github.com/huggingface/datasets/pull/722 , it has a loader for this dataset, mostly ready.
There's one issue that delays it being merged - https://github.com/huggingface/datasets/issues/741 - regarding memory consumption. | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 30 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | The problem I have now is that this dataset does not seem to allow downloading. Can you share it with me please?
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 23 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |
https://github.com/huggingface/datasets/issues/721 | feat(dl_manager): add support for ftp downloads | The dataset loader is not yet ready, because of that issue.
If you want to download the dataset the old-fashioned way, just go to: https://www-i6.informatik.rwth-aachen.de/ftp/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz (the ftp link is now broken, and it's available over https)
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | 37 | feat(dl_manager): add support for ftp downloads
I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoen... |