| id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
609,064,987 | 24 | Add checksums | ### Checksums files
They are stored next to the dataset script in urls_checksums/checksums.txt.
They are used to check the integrity of the datasets' downloaded files.
I kept the same format as tensorflow-datasets.
There is one checksums file for all configs.
### Load a dataset
When you do `load("squad")`, i... | closed | https://github.com/huggingface/datasets/pull/24 | 2020-04-29T13:37:29 | 2020-04-30T19:52:50 | 2020-04-30T19:52:49 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
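PR #24 above stores one checksums file per dataset and verifies downloaded files against it. A minimal sketch of how such verification could work, using only `hashlib` (the helper names are mine, not the PR's actual code):

```python
import hashlib


def sha256_of_file(path, chunk_size=1 << 20):
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so large dataset files never need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


def verify(path, expected_sha256):
    """Raise if the downloaded file does not match its recorded checksum."""
    actual = sha256_of_file(path)
    if actual != expected_sha256:
        raise ValueError(
            f"Checksum mismatch for {path}: {actual} != {expected_sha256}"
        )
```

The real checksums file also records file sizes; this sketch checks only the hash.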
608,508,706 | 23 | Add metrics | This PR is a draft for adding metrics (sacrebleu and seqeval are added)
use case examples:
`import nlp`
**sacrebleu:**
```
refs = [['The dog bit the man.', 'It was not unexpected.', 'The man bit him first.'],
['The dog had bit the man.', 'No one was surprised.', 'The man had bitten the dog.']]
sys = ['... | closed | https://github.com/huggingface/datasets/pull/23 | 2020-04-28T18:02:05 | 2022-10-04T09:31:56 | 2020-05-11T08:19:38 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
608,298,586 | 22 | adding bleu score code | This PR adds the BLEU score metric to the lib. It can be tested by running the following code.
` from nlp.metrics import bleu
hyp1 = "It is a guide to action which ensures that the military always obeys the commands of the party"
ref1a = "It is a guide to action that ensures that the military forces always being... | closed | https://github.com/huggingface/datasets/pull/22 | 2020-04-28T13:00:50 | 2020-04-28T17:48:20 | 2020-04-28T17:48:08 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |
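PR #22 above adds a full BLEU implementation; the core idea it builds on is clipped n-gram precision, sketched here for unigrams in plain Python (this is an illustration of the metric, not the PR's code):

```python
from collections import Counter


def modified_unigram_precision(hypothesis, reference):
    """Clipped unigram precision, the building block of BLEU: each
    hypothesis word is credited at most as many times as it appears
    in the reference, so repeating a word cannot inflate the score."""
    hyp_counts = Counter(hypothesis.split())
    ref_counts = Counter(reference.split())
    clipped = sum(min(count, ref_counts[word]) for word, count in hyp_counts.items())
    return clipped / max(1, sum(hyp_counts.values()))
```

For example, the degenerate hypothesis "the the the" against reference "the cat" scores only 1/3, because "the" is clipped to its single reference occurrence.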
607,914,185 | 21 | Cleanup Features - Updating convert command - Fix Download manager | This PR makes a number of changes:
# Updating `Features`
Features are a complex mechanism provided in `tfds` to be able to modify a dataset on-the-fly when serializing to disk and when loading from disk.
We don't really need this because (1) it hides too much from the user and (2) our datatype can be directly ... | closed | https://github.com/huggingface/datasets/pull/21 | 2020-04-27T23:16:55 | 2020-05-01T09:29:47 | 2020-05-01T09:29:46 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
607,313,557 | 20 | remove boto3 and promise dependencies | With the new download manager, we don't need `promise` anymore.
I also removed `boto3` as in [this pr](https://github.com/huggingface/transformers/pull/3968) | closed | https://github.com/huggingface/datasets/pull/20 | 2020-04-27T07:39:45 | 2020-04-27T16:04:17 | 2020-04-27T14:15:45 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
606,400,645 | 19 | Replace tf.constant for TF | Replace the simple tf.constant type of Tensor with tf.ragged.constant, which allows examples of different sizes in a tf.data.Dataset.
Now the training works with TF. Here is the same example as the PyTorch one in Colab:
```python
import tensorflow as tf
import nlp
from transformers import BertTokenizerFast, TFBertF... | closed | https://github.com/huggingface/datasets/pull/19 | 2020-04-24T15:32:06 | 2020-04-29T09:27:08 | 2020-04-25T21:18:45 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] |
606,109,196 | 18 | Updating caching mechanism - Allow dependency in dataset processing scripts - Fix style and quality in the repo | This PR has a lot of content (might be hard to review, sorry, in particular because I fixed the style in the repo at the same time).
# Style & quality:
You can now install the style and quality tools with `pip install -e .[quality]`. This will install black, the compatible version of isort, and flake8.
You can then ... | closed | https://github.com/huggingface/datasets/pull/18 | 2020-04-24T07:39:48 | 2020-04-29T15:27:28 | 2020-04-28T16:06:28 | {
"login": "thomwolf",
"id": 7353373,
"type": "User"
} | [] | true | [] |
605,753,027 | 17 | Add Pandas as format type | As detailed in the title ^^ | closed | https://github.com/huggingface/datasets/pull/17 | 2020-04-23T18:20:14 | 2020-04-27T18:07:50 | 2020-04-27T18:07:48 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] |
605,661,462 | 16 | create our own DownloadManager | I tried to create our own - and way simpler - download manager, by replacing all the complicated stuff with our own `cached_path` solution.
With this implementation, I tried `dataset = nlp.load('squad')` and it seems to work fine.
For the implementation, what I did exactly:
- I copied the old download manager
- I... | closed | https://github.com/huggingface/datasets/pull/16 | 2020-04-23T16:08:07 | 2021-05-05T18:25:24 | 2020-04-25T21:25:10 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
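PR #16 above replaces the old download manager with a `cached_path`-style solution. A common way such caching is keyed is by hashing the URL into a stable local filename, sketched below (the helper name and layout are assumptions, not the PR's implementation):

```python
import hashlib
from pathlib import Path


def url_to_cache_path(url, cache_dir):
    """Derive a stable local filename for a URL by hashing it, so a
    second request for the same URL resolves to the same cached file."""
    name = hashlib.sha256(url.encode("utf-8")).hexdigest()
    return Path(cache_dir) / name
```

With this scheme, checking whether a download is cached is just a file-existence test on the derived path.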
604,906,708 | 15 | [Tests] General Test Design for all dataset scripts | The general idea is similar to how testing is done in `transformers`. There is one general `test_dataset_common.py` file which has a `DatasetTesterMixin` class. This class implements all of the logic that can be used in a generic way for all dataset classes. The idea is to keep each individual dataset test file as mini... | closed | https://github.com/huggingface/datasets/pull/15 | 2020-04-22T16:46:01 | 2022-10-04T09:31:54 | 2020-04-27T14:48:02 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
604,761,315 | 14 | [Download] Only create dir if not already exist | This was quite annoying to find out :D.
Some datasets are saved in the same directory, so we should only create a new directory if it doesn't already exist. | closed | https://github.com/huggingface/datasets/pull/14 | 2020-04-22T13:32:51 | 2022-10-04T09:31:50 | 2020-04-23T08:27:33 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
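PR #14 above makes directory creation idempotent. In Python's standard library this is a one-liner via the `exist_ok` flag of `os.makedirs` (a sketch of the pattern, not necessarily the PR's exact change):

```python
import os


def ensure_dir(path):
    """Create the directory (and parents) only if it does not already
    exist; a second call on the same path is a no-op instead of
    raising FileExistsError."""
    os.makedirs(path, exist_ok=True)
```

This matters when two datasets share a download directory: whichever dataset runs second must not crash on the directory the first one created.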
604,547,951 | 13 | [Make style] | Added Makefile and applied make style to all.
make style runs the following code:
```
style:
black --line-length 119 --target-version py35 src
isort --recursive src
```
It's the same code that is run in `transformers`. | closed | https://github.com/huggingface/datasets/pull/13 | 2020-04-22T08:10:06 | 2024-11-20T13:42:58 | 2020-04-23T13:02:22 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
604,518,583 | 12 | [Map Function] add assert statement if map function does not return dict or None | IMO, if the provided function is neither a side-effect-only function like a print statement (-> returns `None`) nor a function that updates the dataset (-> returns a `dict`), then a `TypeError` should be raised.
Not sure whether you had cases in mind where the user should do something else @thomwolf , but I think ... | closed | https://github.com/huggingface/datasets/pull/12 | 2020-04-22T07:21:24 | 2022-10-04T09:31:53 | 2020-04-24T06:29:03 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
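The check PR #12 proposes can be sketched in a few lines (the function name is mine; the PR's actual code may differ):

```python
def check_map_output(output):
    """Validate the return value of a user-supplied map function: None
    (side effects only, e.g. printing) and dict (updated columns) are
    accepted; anything else raises a TypeError immediately, rather than
    failing later with a confusing error deep in the writing code."""
    if output is not None and not isinstance(output, dict):
        raise TypeError(
            f"Map function must return a dict or None, got {type(output).__name__}"
        )
    return output
```

Failing fast here gives the user an actionable message at the call site instead of a serialization error later.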
603,921,624 | 11 | [Convert TFDS to HFDS] Extend script to also allow just converting a single file | Adds another argument to be able to convert only a single file | closed | https://github.com/huggingface/datasets/pull/11 | 2020-04-21T11:25:33 | 2022-10-04T09:31:46 | 2020-04-21T20:47:00 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
603,909,327 | 10 | Name json file "squad.json" instead of "squad.py.json" | closed | https://github.com/huggingface/datasets/pull/10 | 2020-04-21T11:04:28 | 2022-10-04T09:31:44 | 2020-04-21T20:48:06 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] | |
603,894,874 | 9 | [Clean up] Datasets | Clean up `nlp/datasets` folder.
As I understand it, eventually the `nlp/datasets` folder will no longer exist at all.
The folder `nlp/datasets/nlp` is kept for the moment, but won't be needed in the future, since it will live on S3 (actually it already does) at: `https://s3.console.aws.amazon.com/s3/buckets/datasets.h... | closed | https://github.com/huggingface/datasets/pull/9 | 2020-04-21T10:39:56 | 2022-10-04T09:31:42 | 2020-04-21T20:49:58 | {
"login": "patrickvonplaten",
"id": 23423619,
"type": "User"
} | [] | true | [] |
601,783,243 | 8 | Fix issue 6: error when the citation is missing in the DatasetInfo | closed | https://github.com/huggingface/datasets/pull/8 | 2020-04-17T08:04:26 | 2020-04-29T09:27:11 | 2020-04-20T13:24:12 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] | |
601,780,534 | 7 | Fix issue 5: allow empty datasets | closed | https://github.com/huggingface/datasets/pull/7 | 2020-04-17T07:59:56 | 2020-04-29T09:27:13 | 2020-04-20T13:23:48 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | true | [] | |
600,330,836 | 6 | Error when citation is not given in the DatasetInfo | The following error is raised when the `citation` parameter is missing when we instantiate a `DatasetInfo`:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/info.py", line 338, in __repr__
citation_pprint = _indent('"""{}"""'.format(self.... | closed | https://github.com/huggingface/datasets/issues/6 | 2020-04-15T14:14:54 | 2020-04-29T09:23:22 | 2020-04-29T09:23:22 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
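The traceback in issue #6 above points at `__repr__` formatting the citation; a minimal sketch of the kind of guard PR #8 could apply, falling back to an empty string when the field is missing (helper names are assumptions based on the traceback, not the repo's actual code):

```python
def _indent(text):
    """Indent every line of a block of text by four spaces, as done
    when pretty-printing multi-line fields in a repr."""
    return "\n".join("    " + line for line in text.split("\n"))


def format_citation(citation):
    """Format the citation for display, tolerating a missing value:
    `citation or ""` substitutes an empty string instead of letting
    None reach the string formatting below."""
    return _indent('"""{}"""'.format(citation or ""))
```

With the guard in place, instantiating a `DatasetInfo` without a citation renders an empty triple-quoted block instead of raising.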
600,295,889 | 5 | ValueError when a split is empty | When a split (either TEST, VALIDATION, or TRAIN) is empty, I get the following error:
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/jplu/dev/jplu/datasets/src/nlp/load.py", line 295, in load
ds = dbuilder.as_dataset(**as_dataset_kwargs)
File "/home/jplu/dev/jplu/data... | closed | https://github.com/huggingface/datasets/issues/5 | 2020-04-15T13:25:13 | 2020-04-29T09:23:05 | 2020-04-29T09:23:05 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
600,185,417 | 4 | [Feature] Keep the list of labels of a dataset as metadata | It would be useful to keep the list of the labels of a dataset as metadata. Either directly in the `DatasetInfo` or in the Arrow metadata. | closed | https://github.com/huggingface/datasets/issues/4 | 2020-04-15T10:17:10 | 2020-07-08T16:59:46 | 2020-05-04T06:11:57 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
600,180,050 | 3 | [Feature] More dataset outputs | Add the following dataset outputs:
- Spark
- Pandas | closed | https://github.com/huggingface/datasets/issues/3 | 2020-04-15T10:08:14 | 2020-05-04T06:12:27 | 2020-05-04T06:12:27 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
599,767,671 | 2 | Issue to read a local dataset | Hello,
As proposed by @thomwolf, I open an issue to explain what I'm trying to do without success. What I want to do is to create and load a local dataset, the script I have done is the following:
```python
import os
import csv
import nlp
class BbcConfig(nlp.BuilderConfig):
def __init__(self, **kwarg... | closed | https://github.com/huggingface/datasets/issues/2 | 2020-04-14T18:18:51 | 2020-05-11T18:55:23 | 2020-05-11T18:55:22 | {
"login": "jplu",
"id": 959590,
"type": "User"
} | [] | false | [] |
599,457,467 | 1 | changing nlp.bool to nlp.bool_ | closed | https://github.com/huggingface/datasets/pull/1 | 2020-04-14T10:18:02 | 2022-10-04T09:31:40 | 2020-04-14T12:01:40 | {
"login": "mariamabarham",
"id": 38249783,
"type": "User"
} | [] | true | [] |