| number | title | body | state | created_at | updated_at | closed_at | url | author | comments_count | labels |
|---|---|---|---|---|---|---|---|---|---|---|
166 | Add a method to shuffle a dataset | Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.
Also, we could maybe have a clear indication of which methods modify in-place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in-pl... | CLOSED | 2020-05-19T10:08:46 | 2020-06-23T15:07:33 | 2020-06-23T15:07:32 | https://github.com/huggingface/datasets/issues/166 | thomwolf | 4 | [
"generic discussion"
] |
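The proposed `dataset.shuffle(generator=None, seed=None)` signature can be sketched in pure Python; this is a hypothetical illustration of the non-in-place semantics being discussed, not the library's actual implementation:

```python
import random

def shuffle(rows, seed=None, generator=None):
    # Return a NEW shuffled list; the input is left untouched,
    # matching the proposed non-in-place (no underscore suffix) behaviour.
    rng = generator if generator is not None else random.Random(seed)
    shuffled = list(rows)  # copy so the original order is preserved
    rng.shuffle(shuffled)
    return shuffled

data = list(range(10))
a = shuffle(data, seed=42)
b = shuffle(data, seed=42)
assert a == b                    # same seed -> same permutation
assert data == list(range(10))   # original order preserved
```

Passing either a `seed` or a pre-built `generator` gives reproducibility while keeping the original dataset intact.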
165 | ANLI | Can I recommend the following:
For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not
to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.".
Indeed, the paper cited under what is currently called anli says in the abstract "We int... | CLOSED | 2020-05-19T07:50:57 | 2020-05-20T12:23:07 | 2020-05-20T12:23:07 | https://github.com/huggingface/datasets/issues/165 | douwekiela | 0 | [] |
164 | Add Spanish POS and NER Datasets | Hi guys,
To improve multilingual support, a small step could be adding the standard datasets used for Spanish NER and POS tasks.
I can provide it in raw and preprocessed formats. | CLOSED | 2020-05-18T22:18:21 | 2020-05-25T16:28:45 | 2020-05-25T16:28:45 | https://github.com/huggingface/datasets/issues/164 | mrm8488 | 2 | [
"dataset request"
] |
163 | [Feature request] Add cos-e v1.0 | I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](ht... | CLOSED | 2020-05-18T22:05:26 | 2020-06-16T23:15:25 | 2020-06-16T18:52:06 | https://github.com/huggingface/datasets/issues/163 | sarahwie | 10 | [
"dataset request"
] |
161 | Discussion on version identifier & MockDataLoaderManager for test data | Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers ... | OPEN | 2020-05-18T20:31:30 | 2020-05-24T18:10:03 | null | https://github.com/huggingface/datasets/issues/161 | EntilZha | 12 | [
"generic discussion"
] |
160 | caching in map causes same result to be returned for train, validation and test | hello,
I am working on a program that uses the `nlp` library with the `SST2` dataset.
The rough outline of the program is:
```
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.... | CLOSED | 2020-05-18T19:22:03 | 2020-05-18T21:36:20 | 2020-05-18T21:36:20 | https://github.com/huggingface/datasets/issues/160 | dpressel | 7 | [
"dataset bug"
] |
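The caching bug above boils down to a fingerprinting problem: if the cache key does not distinguish splits (and the mapped function), the result computed for `train` is silently reused for `validation` and `test`. A minimal sketch of the fix, with hypothetical names (`cache_fingerprint` is not the library's API):

```python
import hashlib

def cache_fingerprint(dataset_name, split, fn_id):
    # The fingerprint must cover the split (and an identifier for the
    # mapped function); otherwise all three splits collide on one
    # cache entry -- the bug reported in this issue.
    payload = f"{dataset_name}/{split}/{fn_id}".encode()
    return hashlib.sha256(payload).hexdigest()

keys = {cache_fingerprint("sst2", split, "tokenize_fn")
        for split in ("train", "validation", "test")}
assert len(keys) == 3  # one distinct cache entry per split
```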
159 | How can we add more datasets to nlp library? | CLOSED | 2020-05-18T18:35:31 | 2020-05-18T18:37:08 | 2020-05-18T18:37:07 | https://github.com/huggingface/datasets/issues/159 | Tahsin-Mayeesha | 1 | [] | |
157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | I'm trying to load datasets from nlp but there seems to be an error saying
"TypeError: list_() takes exactly one argument (2 given)"
A gist can be found here:
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | CLOSED | 2020-05-18T16:46:38 | 2020-06-05T08:08:58 | 2020-06-05T08:08:58 | https://github.com/huggingface/datasets/issues/157 | saahiluppal | 11 | [] |
156 | SyntaxError with WMT datasets | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | CLOSED | 2020-05-18T14:38:18 | 2020-07-23T16:41:55 | 2020-07-23T16:41:55 | https://github.com/huggingface/datasets/issues/156 | tomhosking | 7 | [] |
153 | Meta-datasets (GLUE/XTREME/...) - Special care to attributions and citations | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta-dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | OPEN | 2020-05-18T07:24:22 | 2020-05-18T21:18:16 | null | https://github.com/huggingface/datasets/issues/153 | thomwolf | 4 | [
"generic discussion"
] |
149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | CLOSED | 2020-05-17T15:42:39 | 2020-05-18T17:01:46 | 2020-05-18T17:01:46 | https://github.com/huggingface/datasets/issues/149 | danth | 1 | [
"dataset request"
] |
148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
import nlp
dataset = nlp.load_dataset('wikipedia')
```
get
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/w... | CLOSED | 2020-05-17T01:48:53 | 2020-05-18T07:38:33 | 2020-05-18T07:38:33 | https://github.com/huggingface/datasets/issues/148 | richarddwang | 2 | [
"dataset bug"
] |
147 | Error with sklearn train_test_split | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)... | CLOSED | 2020-05-17T00:28:24 | 2020-06-18T16:23:23 | 2020-06-18T16:23:23 | https://github.com/huggingface/datasets/issues/147 | ClonedOne | 2 | [
"enhancement"
] |
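The enhancement asks for sklearn-style splitting of dataset objects. The essential semantics can be sketched in pure Python; this is an illustrative stand-in for `sklearn.model_selection.train_test_split`, not the fix the library eventually shipped:

```python
import random

def train_test_split(rows, test_size=0.5, seed=None):
    # Split a list of examples by shuffled indices, mimicking
    # sklearn's train_test_split for list-like data.
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    n_test = int(len(rows) * test_size)
    test = [rows[i] for i in idx[:n_test]]
    train = [rows[i] for i in idx[n_test:]]
    return train, test

rows = [{"text": f"review {i}"} for i in range(100)]
train, test = train_test_split(rows, test_size=0.5, seed=42)
assert len(train) == len(test) == 50
```

The sklearn version fails on `data['train']` because the dataset object is not a plain sequence sklearn can index; splitting by explicit indices sidesteps that.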
143 | ArrowTypeError in squad metrics | `squad_metric.compute` is giving following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is how my predictions and references lo... | CLOSED | 2020-05-16T12:06:37 | 2020-05-22T13:38:52 | 2020-05-22T13:36:48 | https://github.com/huggingface/datasets/issues/143 | patil-suraj | 1 | [
"metric bug"
] |
138 | Consider renaming to nld | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | CLOSED | 2020-05-15T20:23:27 | 2022-09-16T05:18:22 | 2020-09-28T00:08:10 | https://github.com/huggingface/datasets/issues/138 | honnibal | 13 | [
"generic discussion"
] |
133 | [Question] Using/adding a local dataset | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | CLOSED | 2020-05-15T16:26:06 | 2020-07-23T16:44:09 | 2020-07-23T16:44:09 | https://github.com/huggingface/datasets/issues/133 | zphang | 5 | [] |
132 | [Feature Request] Add the OpenWebText dataset | The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra).
More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/). | CLOSED | 2020-05-15T15:57:29 | 2020-10-07T14:22:48 | 2020-10-07T14:22:48 | https://github.com/huggingface/datasets/issues/132 | LysandreJik | 2 | [
"dataset request"
] |
131 | [Feature request] Add Toronto BookCorpus dataset | I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT. | CLOSED | 2020-05-15T15:50:44 | 2020-06-28T21:27:31 | 2020-06-28T21:27:31 | https://github.com/huggingface/datasets/issues/131 | jarednielsen | 2 | [
"dataset request"
] |
130 | Loading GLUE dataset loads CoLA by default | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the... | CLOSED | 2020-05-15T14:55:50 | 2020-05-27T22:08:15 | 2020-05-27T22:08:15 | https://github.com/huggingface/datasets/issues/130 | zphang | 3 | [
"dataset bug"
] |
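The behaviour the issue asks for is a guard that refuses to fall back silently to a default config, the way `load_metric("glue")` already raises. A hypothetical sketch (`load_glue` and its return value are stand-ins, not the library's API):

```python
GLUE_TASKS = {"cola", "sst2", "mrpc", "qqp", "stsb",
              "mnli", "qnli", "rte", "wnli"}

def load_glue(task=None):
    # Raise instead of silently defaulting to CoLA when no task
    # name is supplied.
    if task not in GLUE_TASKS:
        raise ValueError(
            f"GLUE needs a task name; pick one of {sorted(GLUE_TASKS)}"
        )
    return f"glue/{task}"  # stand-in for the real loading logic

assert load_glue("sst2") == "glue/sst2"
```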
129 | [Feature request] Add Google Natural Question dataset | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | CLOSED | 2020-05-15T14:14:20 | 2020-07-23T13:21:29 | 2020-07-23T13:21:29 | https://github.com/huggingface/datasets/issues/129 | elyase | 7 | [
"dataset request"
] |
128 | Some error inside nlp.load_dataset() | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is connected with some inner code, I think:
`... | CLOSED | 2020-05-15T13:01:29 | 2020-05-15T13:10:40 | 2020-05-15T13:10:40 | https://github.com/huggingface/datasets/issues/128 | polkaYK | 2 | [] |
120 | 🐛 `map` not working | I'm trying to run a basic example (mapping a function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def test(sample):
samp... | CLOSED | 2020-05-15T06:43:08 | 2020-05-15T07:02:38 | 2020-05-15T07:02:38 | https://github.com/huggingface/datasets/issues/120 | astariul | 1 | [] |
119 | 🐛 Colab: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array' | I'm trying to load the CNN/DM dataset on Colab.
[Colab notebook](https://colab.research.google.com/drive/11Mf7iNhIyt6GpgA1dBEtg3cyMHmMhtZS?usp=sharing)
But I get this error:
> AttributeError: type object 'pyarrow.lib.RecordBatch' has no attribute 'from_struct_array'
| CLOSED | 2020-05-15T02:27:26 | 2020-05-15T05:11:22 | 2020-05-15T02:45:28 | https://github.com/huggingface/datasets/issues/119 | astariul | 2 | [] |
118 | ❓ How to apply a map to all subsets? | I'm working with the CNN/DM dataset, where I have 3 subsets: `train`, `test`, `validation`.
Should I apply my map function to the subsets one by one?
```python
import nlp
cnn_dm = nlp.load_dataset('cnn_dailymail')
for corpus in ['train', 'test', 'validation']:
cnn_dm[corpus] = cnn_dm[corpus].map(my_f... | CLOSED | 2020-05-15T01:58:52 | 2020-05-15T07:05:49 | 2020-05-15T07:04:25 | https://github.com/huggingface/datasets/issues/118 | astariul | 1 | [] |
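The loop in the snippet above works because a multi-split dataset behaves like a mapping of split name to dataset, so applying a function per split is just a dict traversal. A pure-Python sketch of that semantics (the data and `add_prefix` are made up for illustration):

```python
def add_prefix(example):
    # Hypothetical per-example function, like the my_f... in the issue.
    return {**example, "article": "summarize: " + example["article"]}

# A loaded multi-split dataset behaves like a mapping of
# split name -> dataset, so mapping each split is a plain loop.
cnn_dm = {
    "train": [{"article": "a"}, {"article": "b"}],
    "validation": [{"article": "c"}],
    "test": [{"article": "d"}],
}
cnn_dm = {split: [add_prefix(ex) for ex in examples]
          for split, examples in cnn_dm.items()}
assert cnn_dm["train"][0]["article"] == "summarize: a"
```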
117 | ❓ How to remove specific rows of a dataset? | I saw in the [example notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb#scrollTo=efFhDWhlvSVC) how to remove a specific column:
```python
dataset.drop('id')
```
But I didn't find how to remove a specific row.
**For example, how can I remove all sample w... | CLOSED | 2020-05-15T01:25:06 | 2022-07-15T08:36:44 | 2020-05-15T07:04:32 | https://github.com/huggingface/datasets/issues/117 | astariul | 4 | [] |
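Removing rows by a condition is a filtering operation; the semantics a `filter`-style method would expose can be sketched in pure Python (`filter_rows` is a hypothetical name, not the library's API):

```python
def filter_rows(rows, predicate):
    # Keep only the rows for which predicate(row) is True -- the
    # row-level counterpart of dropping a column.
    return [row for row in rows if predicate(row)]

rows = [{"id": i, "label": i % 2} for i in range(6)]
kept = filter_rows(rows, lambda r: r["label"] == 1)
assert [r["id"] for r in kept] == [1, 3, 5]
```

To "remove all samples with label X", the predicate is simply the negation: `lambda r: r["label"] != X`.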
116 | 🐛 Trying to use ROUGE metric: pyarrow.lib.ArrowInvalid: Column 1 named references expected length 534 but got length 323 | I'm trying to use the ROUGE metric.
I have two files: `test.pred.tokenized` and `test.gold.tokenized`, with each line containing a sentence.
I tried :
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("test.pred.tokenized") as p, open("test.gold.tokenized") as g:
for lp, lg in zip(p, g):
... | CLOSED | 2020-05-15T01:12:06 | 2020-05-28T23:43:07 | 2020-05-28T23:43:07 | https://github.com/huggingface/datasets/issues/116 | astariul | 5 | [
"metric bug"
] |
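The Arrow error above surfaces when the accumulated predictions and references end up with different lengths. A metric could reject that up front with a clearer message; a hedged sketch (the function body is a placeholder, not the real ROUGE implementation):

```python
def compute_rouge(predictions, references):
    # Metrics expect aligned lists; reject mismatched lengths early
    # instead of failing deep inside Arrow with a confusing error.
    if len(predictions) != len(references):
        raise ValueError(
            f"predictions and references must have the same length, "
            f"got {len(predictions)} and {len(references)}"
        )
    return {"rouge1": 0.0}  # placeholder; real scoring omitted

raised = False
try:
    compute_rouge(["a"] * 323, ["b"] * 534)  # the lengths from the traceback
except ValueError:
    raised = True
assert raised
```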
115 | AttributeError: 'dict' object has no attribute 'info' | I'm trying to access the information of the CNN/DM dataset:
```python
cnn_dm = nlp.load_dataset('cnn_dailymail')
print(cnn_dm.info)
```
returns :
> AttributeError: 'dict' object has no attribute 'info' | CLOSED | 2020-05-15T00:29:47 | 2020-05-17T13:11:00 | 2020-05-17T13:11:00 | https://github.com/huggingface/datasets/issues/115 | astariul | 2 | [] |
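The error above arises because loading without a `split` argument returns a plain mapping of split name to dataset, so `.info` lives on each split object, not on the mapping. A pure-Python sketch of that shape (`FakeDataset` is an illustrative stand-in):

```python
class FakeDataset:
    # Stand-in for a single-split dataset object carrying metadata.
    def __init__(self, info):
        self.info = info

# Loading without split= returns a plain mapping of split name ->
# dataset, so .info must be read per split: cnn_dm["train"].info.
cnn_dm = {"train": FakeDataset("train metadata"),
          "test": FakeDataset("test metadata")}
assert not hasattr(cnn_dm, "info")        # the dict itself has no .info
assert cnn_dm["train"].info == "train metadata"
```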
114 | Couldn't reach CNN/DM dataset | I can't get the CNN/DailyMail dataset.
```python
import nlp
assert "cnn_dailymail" in [dataset.id for dataset in nlp.list_datasets()]
cnn_dm = nlp.load_dataset('cnn_dailymail')
```
[Colab notebook](https://colab.research.google.com/drive/1zQ3bYAVzm1h0mw0yWPqKAg_4EUlSx5Ex?usp=sharing)
gives following error ... | CLOSED | 2020-05-15T00:16:17 | 2020-05-15T00:19:52 | 2020-05-15T00:19:51 | https://github.com/huggingface/datasets/issues/114 | astariul | 1 | [] |
38 | [Checksums] Error for some datasets | The checksums command works very nicely for `squad`. But for `crime_and_punish` and `xnli`,
the same bug happens:
When running:
```
python nlp-cli test xnli --save_checksums
```
leads to:
```
File "nlp-cli", line 33, in <module>
service.run()
File "/home/patrick/python_bin/nlp/commands... | CLOSED | 2020-05-04T08:00:16 | 2020-05-04T09:48:20 | 2020-05-04T09:48:20 | https://github.com/huggingface/datasets/issues/38 | patrickvonplaten | 3 | [] |
6 | Error when citation is not given in the DatasetInfo | The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
citation_pprint = _indent('"""{}"""'.format(self.... | CLOSED | 2020-04-15T14:14:54 | 2020-04-29T09:23:22 | 2020-04-29T09:23:22 | https://github.com/huggingface/datasets/issues/6 | jplu | 3 | [] |
5 | ValueError when a split is empty | When a split (TRAIN, VALIDATION, or TEST) is empty, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/data... | CLOSED | 2020-04-15T13:25:13 | 2020-04-29T09:23:05 | 2020-04-29T09:23:05 | https://github.com/huggingface/datasets/issues/5 | jplu | 3 | [] |
4 | [Feature] Keep the list of labels of a dataset as metadata | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | CLOSED | 2020-04-15T10:17:10 | 2020-07-08T16:59:46 | 2020-05-04T06:11:57 | https://github.com/huggingface/datasets/issues/4 | jplu | 6 | [] |
3 | [Feature] More dataset outputs | Add the following dataset outputs:
- Spark
- Pandas | CLOSED | 2020-04-15T10:08:14 | 2020-05-04T06:12:27 | 2020-05-04T06:12:27 | https://github.com/huggingface/datasets/issues/3 | jplu | 3 | [] |
2 | Issue to read a local dataset | Hello,
As proposed by @thomwolf, I'm opening an issue to explain what I'm trying to do without success. I want to create and load a local dataset; the script I wrote is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwarg... | CLOSED | 2020-04-14T18:18:51 | 2020-05-11T18:55:23 | 2020-05-11T18:55:22 | https://github.com/huggingface/datasets/issues/2 | jplu | 5 | [] |