url stringlengths 58 61 | repository_url stringclasses 1 value | labels_url stringlengths 72 75 | comments_url stringlengths 67 70 | events_url stringlengths 65 68 | html_url stringlengths 46 51 | id int64 599M 1B | node_id stringlengths 18 32 | number int64 1 2.96k | title stringlengths 1 268 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone dict | comments list | created_at int64 1,587B 1,632B | updated_at int64 1,587B 1,632B | closed_at int64 1,587B 1,632B ⌀ | author_association stringclasses 4 values | active_lock_reason null | pull_request dict | body stringlengths 0 228k ⌀ | timeline_url stringlengths 67 70 | performed_via_github_app null | is_pull_request bool 2 classes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/219/comments | https://api.github.com/repos/huggingface/datasets/issues/219/events | https://github.com/huggingface/datasets/pull/219 | 627,235,893 | MDExOlB1bGxSZXF1ZXN0NDI1MDc2NjQx | 219 | force mwparserfromhell as third party | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,755,597,000 | 1,590,759,013,000 | 1,590,759,012,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/219",
"html_url": "https://github.com/huggingface/datasets/pull/219",
"diff_url": "https://github.com/huggingface/datasets/pull/219.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/219.patch"
} | This should fix your env because you had `mwparserfromhell ` as a first party for `isort` @patrickvonplaten | https://api.github.com/repos/huggingface/datasets/issues/219/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/218/comments | https://api.github.com/repos/huggingface/datasets/issues/218/events | https://github.com/huggingface/datasets/pull/218 | 627,173,407 | MDExOlB1bGxSZXF1ZXN0NDI1MDI2NzEz | 218 | Add Natural Questions and C4 scripts | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,748,830,000 | 1,590,755,461,000 | 1,590,755,460,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/218",
"html_url": "https://github.com/huggingface/datasets/pull/218",
"diff_url": "https://github.com/huggingface/datasets/pull/218.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/218.patch"
} | Scripts are ready!
However, they are not yet processed or directly available from GCP. | https://api.github.com/repos/huggingface/datasets/issues/218/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/217/comments | https://api.github.com/repos/huggingface/datasets/issues/217/events | https://github.com/huggingface/datasets/issues/217 | 627,128,403 | MDU6SXNzdWU2MjcxMjg0MDM= | 217 | Multi-task dataset mixing | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6... | open | false | null | [] | null | [
"I like this feature! I think the first question we should decide on is how to convert all datasets into the same format. In T5, the authors decided to format every dataset into a text-to-text format. If the dataset had \"multiple\" inputs like MNLI, the inputs were concatenated. So in MNLI the input:\r\n\r\n> - **... | 1,590,744,146,000 | 1,603,701,993,000 | null | CONTRIBUTOR | null | null | It seems like many of the best performing models on the GLUE benchmark make some use of multitask learning (simultaneous training on multiple tasks).
The [T5 paper](https://arxiv.org/pdf/1910.10683.pdf) highlights multiple ways of mixing the tasks together during finetuning:
- **Examples-proportional mixing** - sam... | https://api.github.com/repos/huggingface/datasets/issues/217/timeline | null | false |
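The record above describes the T5 mixing strategies in prose. Below is a minimal sketch of examples-proportional mixing in plain Python; the task sizes and the cap `K` are illustrative assumptions, not values from the issue:

```python
import random

# Illustrative per-task training-set sizes (not from the issue).
task_sizes = {"mnli": 392_702, "mrpc": 3_668, "cola": 8_551}
K = 65_536  # artificial size cap so very large tasks don't dominate

# Examples-proportional mixing: each task is sampled with weight min(size, K).
weights = {task: min(size, K) for task, size in task_sizes.items()}

def sample_task() -> str:
    """Pick the task to draw the next training example from."""
    tasks, w = zip(*weights.items())
    return random.choices(tasks, weights=w, k=1)[0]

print(sample_task())
```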
https://api.github.com/repos/huggingface/datasets/issues/216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/216/comments | https://api.github.com/repos/huggingface/datasets/issues/216/events | https://github.com/huggingface/datasets/issues/216 | 626,896,890 | MDU6SXNzdWU2MjY4OTY4OTA= | 216 | ❓ How to get ROUGE-2 with the ROUGE metric ? | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/ast... | [] | closed | false | null | [] | null | [
"ROUGE-1 and ROUGE-L shouldn't return the same thing. This is weird",
"For the rouge2 metric you can do\r\n\r\n```python\r\nrouge = nlp.load_metric('rouge')\r\nwith open(\"pred.txt\") as p, open(\"ref.txt\") as g:\r\n for lp, lg in zip(p, g):\r\n rouge.add(lp, lg)\r\nscore = rouge.compute(rouge_types=[\... | 1,590,709,652,000 | 1,590,969,875,000 | 1,590,969,875,000 | NONE | null | null | I'm trying to use ROUGE metric, but I don't know how to get the ROUGE-2 metric.
---
I compute scores with:
```python
import nlp
rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
for lp, lg in zip(p, g):
rouge.add([lp], [lg])
score = rouge.compute()
```
... | https://api.github.com/repos/huggingface/datasets/issues/216/timeline | null | false |
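Putting the answer from the comment above into one complete snippet, as a sketch: it assumes `pred.txt` and `ref.txt` hold one prediction/reference per line, and that the truncated `compute` call lists `"rouge2"` in `rouge_types`:

```python
import nlp

rouge = nlp.load_metric('rouge')
with open("pred.txt") as p, open("ref.txt") as g:
    for lp, lg in zip(p, g):
        rouge.add(lp, lg)  # one prediction/reference pair at a time
# Restrict the computation to the ROUGE-2 score.
score = rouge.compute(rouge_types=["rouge2"])
```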
https://api.github.com/repos/huggingface/datasets/issues/215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/215/comments | https://api.github.com/repos/huggingface/datasets/issues/215/events | https://github.com/huggingface/datasets/issues/215 | 626,867,879 | MDU6SXNzdWU2MjY4Njc4Nzk= | 215 | NonMatchingSplitsSizesError when loading blog_authorship_corpus | {
"login": "cedricconol",
"id": 52105365,
"node_id": "MDQ6VXNlcjUyMTA1MzY1",
"avatar_url": "https://avatars.githubusercontent.com/u/52105365?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cedricconol",
"html_url": "https://github.com/cedricconol",
"followers_url": "https://api.github.com/... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | open | false | null | [] | null | [
"I just ran it on colab and got this\r\n```\r\n[{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812,\r\ndataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train',\r\nnum_bytes=611607465, num_examples=533285, dataset_name='blog_authorship_corpus')},\r\n{'expected': SplitInf... | 1,590,706,519,000 | 1,609,969,023,000 | null | NONE | null | null | Getting this error when i run `nlp.load_dataset('blog_authorship_corpus')`.
```
raise NonMatchingSplitsSizesError(str(bad_splits))
nlp.utils.info_utils.NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train',
num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'),
'recorded... | https://api.github.com/repos/huggingface/datasets/issues/215/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/214/comments | https://api.github.com/repos/huggingface/datasets/issues/214/events | https://github.com/huggingface/datasets/pull/214 | 626,641,549 | MDExOlB1bGxSZXF1ZXN0NDI0NTk1NjIx | 214 | [arrow_dataset.py] add new filter function | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"I agree that a `.filter` method would be VERY useful and appreciated. I'm not a big fan of using `flatten_nested` as it completely breaks down the structure of the example and it may create bugs. Right now I think it may not work for nested structures. Maybe there's a simpler way that we've not figured out yet.",
... | 1,590,682,900,000 | 1,590,752,609,000 | 1,590,751,940,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/214",
"html_url": "https://github.com/huggingface/datasets/pull/214",
"diff_url": "https://github.com/huggingface/datasets/pull/214.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/214.patch"
} | The `.map()` function is super useful, but IMO it can be a bit tedious when filtering certain examples.
I think filtering out examples is also a very common operation people would like to perform on datasets.
This PR is a proposal to add a `.filter()` function in the same spirit as the `.map()` function.
Here is a ... | https://api.github.com/repos/huggingface/datasets/issues/214/timeline | null | true |
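A sketch of how the proposed `.filter()` might be used once merged; the dataset and the predicate are illustrative, and the exact signature is whatever the PR settles on:

```python
import nlp

dataset = nlp.load_dataset('boolq', split='train')
# Like .map(), but the function returns a boolean deciding whether
# each example is kept.
short = dataset.filter(lambda x: len(x['question'].split()) < 10)
print(len(dataset), len(short))
```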
https://api.github.com/repos/huggingface/datasets/issues/213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/213/comments | https://api.github.com/repos/huggingface/datasets/issues/213/events | https://github.com/huggingface/datasets/pull/213 | 626,587,995 | MDExOlB1bGxSZXF1ZXN0NDI0NTUxODE3 | 213 | better message if missing beam options | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,678,417,000 | 1,590,745,877,000 | 1,590,745,876,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/213",
"html_url": "https://github.com/huggingface/datasets/pull/213",
"diff_url": "https://github.com/huggingface/datasets/pull/213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/213.patch"
} | WDYT @yjernite ?
For example:
```python
dataset = nlp.load_dataset('wikipedia', '20200501.aa')
```
Raises:
```
MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to ru... | https://api.github.com/repos/huggingface/datasets/issues/213/timeline | null | true |
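The fix the `MissingBeamOptions` error asks for, as a sketch: it assumes the small `20200501.aa` config can be processed locally with Beam's `DirectRunner` (large configs are meant for engines like Dataflow, Spark, or Flink):

```python
import nlp

# Passing an explicit Beam runner satisfies the check; DirectRunner
# executes the Apache Beam pipeline on the local machine.
dataset = nlp.load_dataset('wikipedia', '20200501.aa', beam_runner='DirectRunner')
```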
https://api.github.com/repos/huggingface/datasets/issues/212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/212/comments | https://api.github.com/repos/huggingface/datasets/issues/212/events | https://github.com/huggingface/datasets/pull/212 | 626,580,198 | MDExOlB1bGxSZXF1ZXN0NDI0NTQ1NjAy | 212 | have 'add' and 'add_batch' for metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,677,807,000 | 1,590,748,865,000 | 1,590,748,864,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/212",
"html_url": "https://github.com/huggingface/datasets/pull/212",
"diff_url": "https://github.com/huggingface/datasets/pull/212.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/212.patch"
} | This should fix #116
Previously the `.add` method of metrics expected a batch of examples.
Now `.add` expects one prediction/reference and `.add_batch` expects a batch.
I think it is more coherent with the way the ArrowWriter works. | https://api.github.com/repos/huggingface/datasets/issues/212/timeline | null | true |
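A sketch of the resulting API split; the metric and the label values are illustrative, and the positional-argument form follows the `rouge.add(lp, lg)` usage quoted elsewhere in this log:

```python
import nlp

metric = nlp.load_metric('xnli')
# .add now takes a single prediction/reference pair...
metric.add(0, 0)
# ...while .add_batch takes one batch of each, mirroring the
# write/write_batch split in the ArrowWriter.
metric.add_batch([0, 1, 1], [0, 1, 0])
score = metric.compute()
```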
https://api.github.com/repos/huggingface/datasets/issues/211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/211/comments | https://api.github.com/repos/huggingface/datasets/issues/211/events | https://github.com/huggingface/datasets/issues/211 | 626,565,994 | MDU6SXNzdWU2MjY1NjU5OTQ= | 211 | [Arrow writer, Trivia_qa] Could not convert TagMe with type str: converting to null type | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [
{
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.gi... | null | [
"Here the full error trace:\r\n\r\n```\r\nArrowInvalid Traceback (most recent call last)\r\n<ipython-input-1-7aaf3f011358> in <module>\r\n 1 import nlp\r\n 2 ds = nlp.load_dataset(\"trivia_qa\", \"rc\", split=\"validation[:1%]\") # this might take 2.3 min to download but it's... | 1,590,676,694,000 | 1,595,499,316,000 | 1,595,499,316,000 | MEMBER | null | null | Running the following code
```
import nlp
ds = nlp.load_dataset("trivia_qa", "rc", split="validation[:1%]") # this might take 2.3 min to download but it's cached afterwards...
ds.map(lambda x: x, load_from_cache_file=False)
```
triggers an `ArrowInvalid: Could not convert TagMe with type str: converting to n... | https://api.github.com/repos/huggingface/datasets/issues/211/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/210/comments | https://api.github.com/repos/huggingface/datasets/issues/210/events | https://github.com/huggingface/datasets/pull/210 | 626,504,243 | MDExOlB1bGxSZXF1ZXN0NDI0NDgyNDgz | 210 | fix xnli metric kwargs description | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,672,104,000 | 1,590,672,131,000 | 1,590,672,130,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/210",
"html_url": "https://github.com/huggingface/datasets/pull/210",
"diff_url": "https://github.com/huggingface/datasets/pull/210.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/210.patch"
} | The text was wrong as noticed in #202 | https://api.github.com/repos/huggingface/datasets/issues/210/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/209/comments | https://api.github.com/repos/huggingface/datasets/issues/209/events | https://github.com/huggingface/datasets/pull/209 | 626,405,849 | MDExOlB1bGxSZXF1ZXN0NDI0NDAwOTc4 | 209 | Add a Google Drive exception for small files | {
"login": "airKlizz",
"id": 25703835,
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airKlizz",
"html_url": "https://github.com/airKlizz",
"followers_url": "https://api.github.com/users/air... | [] | closed | false | null | [] | null | [
"Can you run the style formatting tools to pass the code quality test?\r\n\r\nYou can find all the details in CONTRIBUTING.md: https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp",
"Nice ! ",
"``make style`` done! Thanks for the approvals."
] | 1,590,662,417,000 | 1,590,678,904,000 | 1,590,678,904,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/209",
"html_url": "https://github.com/huggingface/datasets/pull/209",
"diff_url": "https://github.com/huggingface/datasets/pull/209.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/209.patch"
} | I tried to use the ``nlp`` library to load personal datasets. I mainly copy-pasted the code for the ``multi-news`` dataset because my files are stored on Google Drive.
One of my datasets is small (< 25 MB) so it can be verified by Drive without asking the user for authorization. This makes the download start directly... | https://api.github.com/repos/huggingface/datasets/issues/209/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/208/comments | https://api.github.com/repos/huggingface/datasets/issues/208/events | https://github.com/huggingface/datasets/pull/208 | 626,398,519 | MDExOlB1bGxSZXF1ZXN0NDI0Mzk0ODIx | 208 | [Dummy data] insert config name instead of config | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,590,661,699,000 | 1,590,670,081,000 | 1,590,670,080,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/208",
"html_url": "https://github.com/huggingface/datasets/pull/208",
"diff_url": "https://github.com/huggingface/datasets/pull/208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/208.patch"
} | Thanks @yjernite for letting me know. In the dummy data command, the config name should be passed to the dataset builder, not the config itself.
Also, @lhoestq fixed a small import bug introduced by the beam command, I think. | https://api.github.com/repos/huggingface/datasets/issues/208/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/207/comments | https://api.github.com/repos/huggingface/datasets/issues/207/events | https://github.com/huggingface/datasets/issues/207 | 625,932,200 | MDU6SXNzdWU2MjU5MzIyMDA= | 207 | Remove test set from NLP viewer | {
"login": "chrisdonahue",
"id": 748399,
"node_id": "MDQ6VXNlcjc0ODM5OQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/748399?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisdonahue",
"html_url": "https://github.com/chrisdonahue",
"followers_url": "https://api.github.com/u... | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"~is the viewer also open source?~\r\n[is a streamlit app!](https://docs.streamlit.io/en/latest/getting_started.html)",
"Appears that [two thirds of those polled on Twitter](https://twitter.com/srush_nlp/status/1265734497632477185) are in favor of _some_ mechanism for averting eyeballs from the test data.",
"We... | 1,590,604,327,000 | 1,591,198,147,000 | null | NONE | null | null | While the new [NLP viewer](https://huggingface.co/nlp/viewer/) is a great tool, I think it would be best to outright remove the option of looking at the test sets. At the very least, a warning should be displayed to users before showing the test set. Newcomers to the field might not be aware of best practices, and smal... | https://api.github.com/repos/huggingface/datasets/issues/207/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/206/comments | https://api.github.com/repos/huggingface/datasets/issues/206/events | https://github.com/huggingface/datasets/issues/206 | 625,842,989 | MDU6SXNzdWU2MjU4NDI5ODk= | 206 | [Question] Combine 2 datasets which have the same columns | {
"login": "airKlizz",
"id": 25703835,
"node_id": "MDQ6VXNlcjI1NzAzODM1",
"avatar_url": "https://avatars.githubusercontent.com/u/25703835?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/airKlizz",
"html_url": "https://github.com/airKlizz",
"followers_url": "https://api.github.com/users/air... | [] | closed | false | null | [] | null | [
"We are thinking about ways to combine datasets for T5 in #217, feel free to share your thoughts about this.",
"Ok great! I will look at it. Thanks"
] | 1,590,596,752,000 | 1,591,780,274,000 | 1,591,780,274,000 | CONTRIBUTOR | null | null | Hi,
I am using ``nlp`` to load personal datasets. I created summarization datasets in multiple languages based on Wikinews. I have one dataset for English and one for German (French is almost ready as well). I want to keep these datasets independent because they need different pre-processing (add different task-... | https://api.github.com/repos/huggingface/datasets/issues/206/timeline | null | false |
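For reference, later versions of the library expose a `concatenate_datasets` helper that does exactly this for datasets sharing the same columns. A sketch with illustrative dataset names; whether the helper exists in the `nlp` version discussed here is an assumption:

```python
import nlp

# Hypothetical per-language datasets with identical columns.
en = nlp.load_dataset('wikinews_summaries', 'english', split='train')
de = nlp.load_dataset('wikinews_summaries', 'german', split='train')

# Each one keeps its own pre-processing; they are only merged afterwards.
combined = nlp.concatenate_datasets([en, de])
```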
https://api.github.com/repos/huggingface/datasets/issues/205 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/205/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/205/comments | https://api.github.com/repos/huggingface/datasets/issues/205/events | https://github.com/huggingface/datasets/pull/205 | 625,839,335 | MDExOlB1bGxSZXF1ZXN0NDIzOTY2ODE1 | 205 | Better arrow dataset iter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,596,421,000 | 1,590,597,598,000 | 1,590,597,596,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/205",
"html_url": "https://github.com/huggingface/datasets/pull/205",
"diff_url": "https://github.com/huggingface/datasets/pull/205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/205.patch"
} | I tried to play around with `tf.data.Dataset.from_generator` and I found out that the `__iter__` that we have for `nlp.arrow_dataset.Dataset` ignores the format that has been set (torch or tensorflow).
With these changes I should be able to come up with a `tf.data.Dataset` that uses lazy loading, as asked in #193. | https://api.github.com/repos/huggingface/datasets/issues/205/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/204 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/204/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/204/comments | https://api.github.com/repos/huggingface/datasets/issues/204/events | https://github.com/huggingface/datasets/pull/204 | 625,655,849 | MDExOlB1bGxSZXF1ZXN0NDIzODE5MTQw | 204 | Add Dataflow support + Wikipedia + Wiki40b | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,582,769,000 | 1,590,653,435,000 | 1,590,653,434,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/204",
"html_url": "https://github.com/huggingface/datasets/pull/204",
"diff_url": "https://github.com/huggingface/datasets/pull/204.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/204.patch"
} | # Add Dataflow support + Wikipedia + Wiki40b
## Support datasets processing with Apache Beam
Some datasets are too big to be processed on a single machine, for example: wikipedia, wiki40b, etc. Apache Beam allows processing datasets on many execution engines like Dataflow, Spark, Flink, etc.
To process such da... | https://api.github.com/repos/huggingface/datasets/issues/204/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/203/comments | https://api.github.com/repos/huggingface/datasets/issues/203/events | https://github.com/huggingface/datasets/pull/203 | 625,515,488 | MDExOlB1bGxSZXF1ZXN0NDIzNzEyMTQ3 | 203 | Raise an error if no config name for datasets like glue | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,590,570,238,000 | 1,590,597,639,000 | 1,590,597,638,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/203",
"html_url": "https://github.com/huggingface/datasets/pull/203",
"diff_url": "https://github.com/huggingface/datasets/pull/203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/203.patch"
} | Some datasets like glue (see #130) and scientific_papers (see #197) have many configs.
For example, for glue there are cola, sst2, mrpc, etc.
Currently if a user does `load_dataset('glue')`, then Cola is loaded by default and it can be confusing. Instead, we should raise an error to let the user know that he has to p... | https://api.github.com/repos/huggingface/datasets/issues/203/timeline | null | true |
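With this change the call pattern becomes explicit. For instance, using the configs listed above:

```python
import nlp

# dataset = nlp.load_dataset('glue')        # now raises: a config name is required
dataset = nlp.load_dataset('glue', 'mrpc')  # pick one config explicitly
```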
https://api.github.com/repos/huggingface/datasets/issues/202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/202/comments | https://api.github.com/repos/huggingface/datasets/issues/202/events | https://github.com/huggingface/datasets/issues/202 | 625,493,983 | MDU6SXNzdWU2MjU0OTM5ODM= | 202 | Mistaken `_KWARGS_DESCRIPTION` for XNLI metric | {
"login": "phiyodr",
"id": 33572125,
"node_id": "MDQ6VXNlcjMzNTcyMTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/33572125?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/phiyodr",
"html_url": "https://github.com/phiyodr",
"followers_url": "https://api.github.com/users/phiyod... | [] | closed | false | null | [] | null | [
"Indeed, good catch ! thanks\r\nFixing it right now"
] | 1,590,568,482,000 | 1,590,672,156,000 | 1,590,672,156,000 | NONE | null | null | Hi!
The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric:
... | https://api.github.com/repos/huggingface/datasets/issues/202/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/201/comments | https://api.github.com/repos/huggingface/datasets/issues/201/events | https://github.com/huggingface/datasets/pull/201 | 625,235,430 | MDExOlB1bGxSZXF1ZXN0NDIzNDkzNTMw | 201 | Fix typo in README | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"Amazing, @LysandreJik!",
"Really did my best!"
] | 1,590,531,501,000 | 1,590,536,431,000 | 1,590,534,056,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/201",
"html_url": "https://github.com/huggingface/datasets/pull/201",
"diff_url": "https://github.com/huggingface/datasets/pull/201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/201.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/201/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/200/comments | https://api.github.com/repos/huggingface/datasets/issues/200/events | https://github.com/huggingface/datasets/pull/200 | 625,226,638 | MDExOlB1bGxSZXF1ZXN0NDIzNDg2NTM0 | 200 | [ArrowWriter] Set schema at first write example | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Good point!\r\n\r\nI guess we could add this to `write_batch` as well (before using `self._schema` in the first line of this method)?"
] | 1,590,530,388,000 | 1,590,570,474,000 | 1,590,570,473,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/200",
"html_url": "https://github.com/huggingface/datasets/pull/200",
"diff_url": "https://github.com/huggingface/datasets/pull/200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/200.patch"
} | Right now if the schema was not specified when instantiating `ArrowWriter`, then it could be set with the first `write_table` for example (it calls `self._build_writer()` to do so).
I noticed that it was not done if the first example is added via `.write`, so I added it for coherence. | https://api.github.com/repos/huggingface/datasets/issues/200/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/199/comments | https://api.github.com/repos/huggingface/datasets/issues/199/events | https://github.com/huggingface/datasets/pull/199 | 625,217,440 | MDExOlB1bGxSZXF1ZXN0NDIzNDc4ODIx | 199 | Fix GermEval 2014 dataset infos | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hopefully. this also fixes the dataset view on https://huggingface.co/nlp/viewer/ :)",
"Oh good catch ! This should fix it indeed"
] | 1,590,529,304,000 | 1,590,529,824,000 | 1,590,529,824,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/199",
"html_url": "https://github.com/huggingface/datasets/pull/199",
"diff_url": "https://github.com/huggingface/datasets/pull/199.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/199.patch"
} | Hi,
this PR just removes the `dataset_info.json` file and adds a newly generated `dataset_infos.json` file. | https://api.github.com/repos/huggingface/datasets/issues/199/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/198/comments | https://api.github.com/repos/huggingface/datasets/issues/198/events | https://github.com/huggingface/datasets/issues/198 | 625,200,627 | MDU6SXNzdWU2MjUyMDA2Mjc= | 198 | Index outside of table length | {
"login": "casajarm",
"id": 305717,
"node_id": "MDQ6VXNlcjMwNTcxNw==",
"avatar_url": "https://avatars.githubusercontent.com/u/305717?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/casajarm",
"html_url": "https://github.com/casajarm",
"followers_url": "https://api.github.com/users/casajar... | [] | closed | false | null | [] | null | [
"Sounds like something related to the nlp viewer @srush ",
"Fixed. "
] | 1,590,527,380,000 | 1,590,533,029,000 | 1,590,533,029,000 | NONE | null | null | The offset input box warns of numbers larger than a limit (like 2000) but then the errors start at a smaller value than that limit (like 1955).
> ValueError: Index (2000) outside of table length (2000).
> Traceback:
> File "/home/sasha/.local/lib/python3.7/site-packages/streamlit/ScriptRunner.py", line 322, in _ru... | https://api.github.com/repos/huggingface/datasets/issues/198/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/197/comments | https://api.github.com/repos/huggingface/datasets/issues/197/events | https://github.com/huggingface/datasets/issues/197 | 624,966,904 | MDU6SXNzdWU2MjQ5NjY5MDQ= | 197 | Scientific Papers only downloading Pubmed | {
"login": "antmarakis",
"id": 17463361,
"node_id": "MDQ6VXNlcjE3NDYzMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/17463361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antmarakis",
"html_url": "https://github.com/antmarakis",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Hi so there are indeed two configurations in the datasets as you can see [here](https://github.com/huggingface/nlp/blob/master/datasets/scientific_papers/scientific_papers.py#L81-L82).\r\n\r\nYou can load either one with:\r\n```python\r\ndataset = nlp.load_dataset('scientific_papers', 'pubmed')\r\ndataset = nlp.lo... | 1,590,506,327,000 | 1,590,653,968,000 | 1,590,653,968,000 | NONE | null | null | Hi!
I have been playing around with this module, and I am a bit confused about the `scientific_papers` dataset. I thought that it would download two separate datasets, arxiv and pubmed. But when I run the following:
```
dataset = nlp.load_dataset('scientific_papers', data_dir='.', cache_dir='.')
Downloading: 10... | https://api.github.com/repos/huggingface/datasets/issues/197/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/196/comments | https://api.github.com/repos/huggingface/datasets/issues/196/events | https://github.com/huggingface/datasets/pull/196 | 624,901,266 | MDExOlB1bGxSZXF1ZXN0NDIzMjIwMjIw | 196 | Check invalid config name | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"I think that's not related to the config name but the filenames in the dummy data. Mostly it occurs with files downloaded from drive. In that case the dummy file name is extracted from the google drive link and it corresponds to what comes after `https://drive.google.com/`\r\n\r\n",
"> I think that's not related... | 1,590,501,171,000 | 1,590,527,096,000 | 1,590,527,095,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/196",
"html_url": "https://github.com/huggingface/datasets/pull/196",
"diff_url": "https://github.com/huggingface/datasets/pull/196.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/196.patch"
} | As said in #194, we should raise an error if the config name has bad characters.
Bad characters are those that are not allowed in directory names on Windows.
https://api.github.com/repos/huggingface/datasets/issues/195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/195/comments | https://api.github.com/repos/huggingface/datasets/issues/195/events | https://github.com/huggingface/datasets/pull/195 | 624,858,686 | MDExOlB1bGxSZXF1ZXN0NDIzMTg1NTAy | 195 | [Dummy data command] add new case to command | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"@lhoestq - tiny change in the dummy data command, should be good to merge."
] | 1,590,497,447,000 | 1,590,503,908,000 | 1,590,503,907,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/195",
"html_url": "https://github.com/huggingface/datasets/pull/195",
"diff_url": "https://github.com/huggingface/datasets/pull/195.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/195.patch"
} | Qanta: #194 introduces a case that was not noticed before. This change in code helps community users to have an easier time creating the dummy data. | https://api.github.com/repos/huggingface/datasets/issues/195/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/194/comments | https://api.github.com/repos/huggingface/datasets/issues/194/events | https://github.com/huggingface/datasets/pull/194 | 624,854,897 | MDExOlB1bGxSZXF1ZXN0NDIzMTgyNDM5 | 194 | Add Dataset: Qanta | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"@lhoestq - the config name is rather special here: *E.g.* `mode=first,char_skip=25`. It includes `=` and `,` - will that be a problem for windows folders, you think? \r\n\r\nApart from that good to merge for me.",
"It's ok to have `=` and `,`.\r\nWindows doesn't like things like `?`, `:`, `/` etc.\r\n\r\nI'll ad... | 1,590,497,075,000 | 1,590,512,297,000 | 1,590,498,980,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/194",
"html_url": "https://github.com/huggingface/datasets/pull/194",
"diff_url": "https://github.com/huggingface/datasets/pull/194.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/194.patch"
} | Fixes dummy data for #169 @EntilZha | https://api.github.com/repos/huggingface/datasets/issues/194/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/193/comments | https://api.github.com/repos/huggingface/datasets/issues/193/events | https://github.com/huggingface/datasets/issues/193 | 624,655,558 | MDU6SXNzdWU2MjQ2NTU1NTg= | 193 | [Tensorflow] Use something else than `from_tensor_slices()` | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/ast... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"I guess we can use `tf.data.Dataset.from_generator` instead. I'll give it a try.",
"Is `tf.data.Dataset.from_generator` working on TPU ?",
"`from_generator` is not working on TPU, I met the following error :\r\n\r\n```\r\nFile \"/usr/local/lib/python3.6/contextlib.py\", line 88, in __exit__\r\n next(self.ge... | 1,590,477,554,000 | 1,603,812,491,000 | 1,603,812,491,000 | NONE | null | null | In the example notebook, the TF Dataset is built using `from_tensor_slices()` :
```python
columns = ['input_ids', 'token_type_ids', 'attention_mask', 'start_positions', 'end_positions']
train_tf_dataset.set_format(type='tensorflow', columns=columns)
features = {x: train_tf_dataset[x] for x in columns[:3]}
label... | https://api.github.com/repos/huggingface/datasets/issues/193/timeline | null | false |
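A sketch of the `tf.data.Dataset.from_generator` alternative mentioned in the comments, reusing `train_tf_dataset` and the column names from the snippet above; the dtypes and shapes are assumptions:

```python
import tensorflow as tf

columns = ['input_ids', 'token_type_ids', 'attention_mask']

def gen():
    # Yields one example at a time instead of materializing every
    # column up front the way from_tensor_slices() does.
    for ex in train_tf_dataset:
        yield {k: ex[k] for k in columns}

tf_dataset = tf.data.Dataset.from_generator(
    gen,
    output_types={k: tf.int32 for k in columns},
    output_shapes={k: tf.TensorShape([None]) for k in columns},
)
```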
https://api.github.com/repos/huggingface/datasets/issues/192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/192/comments | https://api.github.com/repos/huggingface/datasets/issues/192/events | https://github.com/huggingface/datasets/issues/192 | 624,397,592 | MDU6SXNzdWU2MjQzOTc1OTI= | 192 | [Question] Create Apache Arrow dataset from raw text file | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/... | [] | closed | false | null | [] | null | [
"We store every dataset in the Arrow format. This is convenient as it supports nested types and memory mapping. If you are curious feel free to check the [pyarrow documentation](https://arrow.apache.org/docs/python/)\r\n\r\nYou can use this library to load your covid papers by creating a dataset script. You can fin... | 1,590,424,967,000 | 1,603,812,022,000 | 1,603,812,022,000 | NONE | null | null | Hi guys, I have gathered and preprocessed about 2GB of COVID papers from CORD dataset @ Kggle. I have seen you have a text dataset as "Crime and punishment" in Apache arrow format. Do you have any script to do it from a raw txt file (preprocessed as for BERT like) or any guide?
Is the worth of send it to you and add i... | https://api.github.com/repos/huggingface/datasets/issues/192/timeline | null | false |
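A skeleton of the dataset-script route suggested in the reply above, as a sketch only: the class name and file path are illustrative, while the builder hooks (`_info`, `_split_generators`, `_generate_examples`) follow the library's documented pattern:

```python
import nlp

class CovidPapers(nlp.GeneratorBasedBuilder):
    """Illustrative loading script for a preprocessed raw-text corpus."""

    def _info(self):
        return nlp.DatasetInfo(
            features=nlp.Features({"text": nlp.Value("string")}),
        )

    def _split_generators(self, dl_manager):
        # A local file instead of a download; the path is an assumption.
        return [
            nlp.SplitGenerator(
                name=nlp.Split.TRAIN,
                gen_kwargs={"filepath": "covid_papers.txt"},
            )
        ]

    def _generate_examples(self, filepath):
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```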
https://api.github.com/repos/huggingface/datasets/issues/191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/191/comments | https://api.github.com/repos/huggingface/datasets/issues/191/events | https://github.com/huggingface/datasets/pull/191 | 624,394,936 | MDExOlB1bGxSZXF1ZXN0NDIyODI3MDMy | 191 | [Squad es] add dataset_infos | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,590,424,552,000 | 1,590,424,799,000 | 1,590,424,798,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/191",
"html_url": "https://github.com/huggingface/datasets/pull/191",
"diff_url": "https://github.com/huggingface/datasets/pull/191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/191.patch"
} | @mariamabarham - was still about to upload this. Should have waited with my comment a bit more :D | https://api.github.com/repos/huggingface/datasets/issues/191/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/190/comments | https://api.github.com/repos/huggingface/datasets/issues/190/events | https://github.com/huggingface/datasets/pull/190 | 624,124,600 | MDExOlB1bGxSZXF1ZXN0NDIyNjA4NzAw | 190 | add squad Spanish v1 and v2 | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [
"Nice ! :) \r\nCan we group them into one dataset with two versions, instead of having two datasets ?",
"Yes sure, I can use the version as config name",
"@lhoestq can you check? I grouped them",
"Awesome :) feel free to merge after fixing the test in the CI",
"@mariamabarham - feel free to merge when you'r... | 1,590,394,120,000 | 1,590,424,126,000 | 1,590,424,125,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/190",
"html_url": "https://github.com/huggingface/datasets/pull/190",
"diff_url": "https://github.com/huggingface/datasets/pull/190.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/190.patch"
} | This PR adds the Spanish SQuAD versions 1 and 2 datasets.
Fixes #164 | https://api.github.com/repos/huggingface/datasets/issues/190/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/189/comments | https://api.github.com/repos/huggingface/datasets/issues/189/events | https://github.com/huggingface/datasets/issues/189 | 624,048,881 | MDU6SXNzdWU2MjQwNDg4ODE= | 189 | [Question] BERT-style multiple choice formatting | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarah... | [] | closed | false | null | [] | null | [
"Hi @sarahwie, can you details this a little more?\r\n\r\nI'm not sure I understand what you refer to and what you mean when you say \"Previously, this was done by passing a list of InputFeatures to the dataloader instead of a list of InputFeature\"",
"I think I've resolved it. For others' reference: to convert f... | 1,590,383,465,000 | 1,590,431,908,000 | 1,590,431,908,000 | NONE | null | null | Hello, I am wondering what the equivalent formatting of a dataset should be to allow for multiple-choice answering prediction, BERT-style. Previously, this was done by passing a list of `InputFeatures` to the dataloader instead of a list of `InputFeature`, where `InputFeatures` contained lists of length equal to the nu... | https://api.github.com/repos/huggingface/datasets/issues/189/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/188/comments | https://api.github.com/repos/huggingface/datasets/issues/188/events | https://github.com/huggingface/datasets/issues/188 | 623,890,430 | MDU6SXNzdWU2MjM4OTA0MzA= | 188 | When will the remaining math_dataset modules be added as dataset objects | {
"login": "tylerroost",
"id": 31251196,
"node_id": "MDQ6VXNlcjMxMjUxMTk2",
"avatar_url": "https://avatars.githubusercontent.com/u/31251196?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tylerroost",
"html_url": "https://github.com/tylerroost",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"On a similar note it would be nice to differentiate between train-easy, train-medium, and train-hard",
"Hi @tylerroost, we don't have a timeline for this at the moment.\r\nIf you want to give it a look we would be happy to review a PR on it.\r\nAlso, the library is one week old so everything is quite barebones, ... | 1,590,335,212,000 | 1,590,346,428,000 | 1,590,346,428,000 | NONE | null | null | Currently only the algebra_linear_1d is supported. Is there a timeline for making the other modules supported. If no timeline is established, how can I help? | https://api.github.com/repos/huggingface/datasets/issues/188/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/187/comments | https://api.github.com/repos/huggingface/datasets/issues/187/events | https://github.com/huggingface/datasets/issues/187 | 623,627,800 | MDU6SXNzdWU2MjM2Mjc4MDA= | 187 | [Question] How to load wikipedia ? Beam runner ? | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"I have seen that somebody is hard working on easierly loadable wikipedia. #129 \r\nMaybe I should wait a few days for that version ?",
"Yes we (well @lhoestq) are very actively working on this."
] | 1,590,229,132,000 | 1,590,365,522,000 | 1,590,365,522,000 | CONTRIBUTOR | null | null | When calling `nlp.load_dataset('wikipedia')`, I got
* `WARNING:nlp.builder:Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided. Please pass a nlp.DownloadConfig(beam_runner=...) object to the builder.download_and_prepare(download_config=...) method. Default values will be ... | https://api.github.com/repos/huggingface/datasets/issues/187/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/186/comments | https://api.github.com/repos/huggingface/datasets/issues/186/events | https://github.com/huggingface/datasets/issues/186 | 623,595,180 | MDU6SXNzdWU2MjM1OTUxODA= | 186 | Weird-ish: Not creating unique caches for different phases | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/foll... | [] | closed | false | null | [] | null | [
"Looks like a duplicate of #120.\r\nThis is already fixed on master. We'll do a new release on pypi soon",
"Good catch, it looks fixed.\r\n"
] | 1,590,216,058,000 | 1,590,265,338,000 | 1,590,265,337,000 | NONE | null | null | Sample code:
```python
import nlp
dataset = nlp.load_dataset('boolq')
def func1(x):
return x
def func2(x):
return None
train_output = dataset["train"].map(func1)
valid_output = dataset["validation"].map(func1)
print()
print(len(train_output), len(valid_output))
# Output: 9427 9427
```
Th... | https://api.github.com/repos/huggingface/datasets/issues/186/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/185/comments | https://api.github.com/repos/huggingface/datasets/issues/185/events | https://github.com/huggingface/datasets/pull/185 | 623,172,484 | MDExOlB1bGxSZXF1ZXN0NDIxODkxNjY2 | 185 | [Commands] In-detail instructions to create dummy data folder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"awesome !"
] | 1,590,150,385,000 | 1,590,156,395,000 | 1,590,156,394,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/185",
"html_url": "https://github.com/huggingface/datasets/pull/185",
"diff_url": "https://github.com/huggingface/datasets/pull/185.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/185.patch"
} | ### Dummy data command
This PR adds a new command `python nlp-cli dummy_data <path_to_dataset_folder>` that gives in-detail instructions on how to add the dummy data files.
It would be great if you can try it out by moving the current dummy_data folder of any dataset in `./datasets` with `mv datasets/<dataset_s... | https://api.github.com/repos/huggingface/datasets/issues/185/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/184/comments | https://api.github.com/repos/huggingface/datasets/issues/184/events | https://github.com/huggingface/datasets/pull/184 | 623,120,929 | MDExOlB1bGxSZXF1ZXN0NDIxODQ5MTQ3 | 184 | Use IndexError instead of ValueError when index out of range | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [] | 1,590,144,222,000 | 1,590,654,678,000 | 1,590,654,678,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/184",
"html_url": "https://github.com/huggingface/datasets/pull/184",
"diff_url": "https://github.com/huggingface/datasets/pull/184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/184.patch"
} | **`default __iter__ needs IndexError`**.
When I wanted to create a wrapper of an arrow dataset to adapt it to fastai,
I didn't know how to initialize it, so instead of inheritance I used object composition.
I wrote something like this.
```
class HF_dataset():
    def __init__(self, arrow_dataset):
        self.dset = arrow_datas... | https://api.github.com/repos/huggingface/datasets/issues/184/timeline | null | true |
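The rationale behind the requested change is Python's legacy iteration protocol: a `for` loop calls `__getitem__` with 0, 1, 2, ... and stops cleanly only when `IndexError` is raised. A self-contained sketch (the wrapper name comes from the issue; the body is illustrative):

```python
class HF_dataset:
    """Composition-based wrapper; iteration relies on __getitem__/IndexError."""

    def __init__(self, items):
        self.items = items

    def __getitem__(self, i):
        if i >= len(self.items):
            raise IndexError  # a ValueError here would crash the for-loop instead
        return self.items[i]

    def __len__(self):
        return len(self.items)


for x in HF_dataset([1, 2, 3]):
    print(x)  # prints 1, 2, 3 and terminates cleanly
```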
https://api.github.com/repos/huggingface/datasets/issues/183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/183/comments | https://api.github.com/repos/huggingface/datasets/issues/183/events | https://github.com/huggingface/datasets/issues/183 | 623,054,270 | MDU6SXNzdWU2MjMwNTQyNzA= | 183 | [Bug] labels of glue/ax are all -1 | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"This is the test set given by the Glue benchmark. The labels are not provided, and therefore set to -1.",
"Ah, yeah. Why it didn’t occur to me. 😂\nThank you for your comment."
] | 1,590,137,016,000 | 1,590,185,645,000 | 1,590,185,645,000 | CONTRIBUTOR | null | null | ```
ax = nlp.load_dataset('glue', 'ax')
for i in range(30): print(ax['test'][i]['label'], end=', ')
```
```
-1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1, -1,
``` | https://api.github.com/repos/huggingface/datasets/issues/183/timeline | null | false |
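A quick way to confirm the explanation in the comment above is to inspect the label feature; a sketch (the exact class names printed are an assumption about the `glue/ax` config):

```python
import nlp

ax = nlp.load_dataset('glue', 'ax')
# ClassLabel describing the three NLI classes; -1 is the conventional
# placeholder for hidden test-set labels.
print(ax['test'].features['label'])
```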
https://api.github.com/repos/huggingface/datasets/issues/182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/182/comments | https://api.github.com/repos/huggingface/datasets/issues/182/events | https://github.com/huggingface/datasets/pull/182 | 622,646,770 | MDExOlB1bGxSZXF1ZXN0NDIxNDcxMjg4 | 182 | Update newsroom.py | {
"login": "yoavartzi",
"id": 3289873,
"node_id": "MDQ6VXNlcjMyODk4NzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3289873?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yoavartzi",
"html_url": "https://github.com/yoavartzi",
"followers_url": "https://api.github.com/users/yo... | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [] | 1,590,080,863,000 | 1,590,165,503,000 | 1,590,165,503,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/182",
"html_url": "https://github.com/huggingface/datasets/pull/182",
"diff_url": "https://github.com/huggingface/datasets/pull/182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/182.patch"
} | Updated the URL for Newsroom download so it's more robust to future changes. | https://api.github.com/repos/huggingface/datasets/issues/182/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/181/comments | https://api.github.com/repos/huggingface/datasets/issues/181/events | https://github.com/huggingface/datasets/issues/181 | 622,634,420 | MDU6SXNzdWU2MjI2MzQ0MjA= | 181 | Cannot upload my own dataset | {
"login": "korakot",
"id": 3155646,
"node_id": "MDQ6VXNlcjMxNTU2NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3155646?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/korakot",
"html_url": "https://github.com/korakot",
"followers_url": "https://api.github.com/users/korakot/... | [] | closed | false | null | [] | null | [
"It's my misunderstanding. I cannot just upload a csv. I need to write a dataset loading script too.",
"I now try with the sample `datasets/csv` folder. \r\n\r\n nlp-cli upload csv\r\n\r\nThe error is still the same\r\n\r\n```\r\n2020-05-21 17:20:56.394659: I tensorflow/stream_executor/platform/default/dso_loa... | 1,590,079,552,000 | 1,592,518,482,000 | 1,592,518,482,000 | NONE | null | null | I look into `nlp-cli` and `user.py` to learn how to upload my own data.
It is supposed to work like this:
- Register to get username, password at huggingface.co
- `nlp-cli login` and type username, password
- I have a single file to upload at `./ttc/ttc_freq_extra.csv`
- `nlp-cli upload ttc/ttc_freq_extra.csv`
... | https://api.github.com/repos/huggingface/datasets/issues/181/timeline | null | false |
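As the first comment notes, a plain CSV needs a loading script; a hedged alternative sketch using the generic `csv` script shipped with `nlp` (the `data_files` argument is assumed to follow the 0.x API, and the path comes from the issue):

```python
import nlp

# Load a local CSV without writing a dedicated dataset script.
dataset = nlp.load_dataset('csv', data_files={'train': './ttc/ttc_freq_extra.csv'})
```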
https://api.github.com/repos/huggingface/datasets/issues/180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/180/comments | https://api.github.com/repos/huggingface/datasets/issues/180/events | https://github.com/huggingface/datasets/pull/180 | 622,556,861 | MDExOlB1bGxSZXF1ZXN0NDIxMzk5Nzg2 | 180 | Add hall of fame | {
"login": "clmnt",
"id": 821155,
"node_id": "MDQ6VXNlcjgyMTE1NQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/821155?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clmnt",
"html_url": "https://github.com/clmnt",
"followers_url": "https://api.github.com/users/clmnt/followers"... | [] | closed | false | null | [] | null | [] | 1,590,072,828,000 | 1,590,165,316,000 | 1,590,165,314,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/180",
"html_url": "https://github.com/huggingface/datasets/pull/180",
"diff_url": "https://github.com/huggingface/datasets/pull/180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/180.patch"
} | powered by https://github.com/sourcerer-io/hall-of-fame | https://api.github.com/repos/huggingface/datasets/issues/180/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/179/comments | https://api.github.com/repos/huggingface/datasets/issues/179/events | https://github.com/huggingface/datasets/issues/179 | 622,525,410 | MDU6SXNzdWU2MjI1MjU0MTA= | 179 | [Feature request] separate split name and split instructions | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yje... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"If your dataset is a collection of sub-datasets, you should probably consider having one config per sub-dataset. For example for Glue, we have sst2, mnli etc.\r\nIf you want to have multiple train sets (for example one per stage). The easiest solution would be to name them `nlp.Split(\"train_stage1\")`, `nlp.Split... | 1,590,070,251,000 | 1,590,154,268,000 | 1,590,154,267,000 | MEMBER | null | null | Currently, the name of an nlp.NamedSplit is parsed in arrow_reader.py and used as the instruction.
This makes it impossible to have several training sets, which can occur when:
- A dataset corresponds to a collection of sub-datasets
- A dataset was built in stages, adding new examples at each stage
Would it be ... | https://api.github.com/repos/huggingface/datasets/issues/179/timeline | null | false |
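Following the suggestion in the comment above (`nlp.Split("train_stage1")`), a sketch of how several training sets could be declared; this would be a method of a builder class, and the `gen_kwargs` keys are illustrative:

```python
import nlp

# Sketch of a _split_generators method on a nlp dataset builder:
# one SplitGenerator per stage, each with its own custom split name.
def _split_generators(self, dl_manager):
    return [
        nlp.SplitGenerator(name=nlp.Split("train_stage1"), gen_kwargs={"stage": 1}),
        nlp.SplitGenerator(name=nlp.Split("train_stage2"), gen_kwargs={"stage": 2}),
    ]
```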
https://api.github.com/repos/huggingface/datasets/issues/178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/178/comments | https://api.github.com/repos/huggingface/datasets/issues/178/events | https://github.com/huggingface/datasets/pull/178 | 621,979,849 | MDExOlB1bGxSZXF1ZXN0NDIwOTMyMDI5 | 178 | [Manual data] improve error message for manual data in general | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,998,245,000 | 1,589,998,732,000 | 1,589,998,730,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/178",
"html_url": "https://github.com/huggingface/datasets/pull/178",
"diff_url": "https://github.com/huggingface/datasets/pull/178.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/178.patch"
} | `nlp.load("xsum")` now leads to the following error message:

I guess the manual download instructions for `xsum` can also be improved. | https://api.github.com/repos/huggingface/datasets/issues/178/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/177/comments | https://api.github.com/repos/huggingface/datasets/issues/177/events | https://github.com/huggingface/datasets/pull/177 | 621,975,368 | MDExOlB1bGxSZXF1ZXN0NDIwOTI4MzE0 | 177 | Xsum manual download instruction | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 1,589,997,761,000 | 1,589,998,610,000 | 1,589,998,609,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/177",
"html_url": "https://github.com/huggingface/datasets/pull/177",
"diff_url": "https://github.com/huggingface/datasets/pull/177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/177.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/177/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/176/comments | https://api.github.com/repos/huggingface/datasets/issues/176/events | https://github.com/huggingface/datasets/pull/176 | 621,934,638 | MDExOlB1bGxSZXF1ZXN0NDIwODkzNDky | 176 | [Tests] Refactor MockDownloadManager | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,994,456,000 | 1,589,998,639,000 | 1,589,998,638,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/176",
"html_url": "https://github.com/huggingface/datasets/pull/176",
"diff_url": "https://github.com/huggingface/datasets/pull/176.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/176.patch"
} | Clean mock download manager class.
The print function was not of much help I think.
We should think about adding a command that creates the dummy folder structure for the user. | https://api.github.com/repos/huggingface/datasets/issues/176/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/175/comments | https://api.github.com/repos/huggingface/datasets/issues/175/events | https://github.com/huggingface/datasets/issues/175 | 621,929,428 | MDU6SXNzdWU2MjE5Mjk0Mjg= | 175 | [Manual data dir] Error message: nlp.load_dataset('xsum') -> TypeError | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/ss... | [] | closed | false | null | [] | null | [] | 1,589,994,032,000 | 1,589,998,730,000 | 1,589,998,730,000 | MEMBER | null | null | v 0.1.0 from pip
```python
import nlp
xsum = nlp.load_dataset('xsum')
```
Issue is `dl_manager.manual_dir` is `None`
```python
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-42-8a32f06... | https://api.github.com/repos/huggingface/datasets/issues/175/timeline | null | false |
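The `TypeError` comes from `dl_manager.manual_dir` being `None`; a sketch of supplying the manually downloaded data, assuming the `data_dir` argument of `load_dataset` (the local path is illustrative):

```python
import nlp

# Point the builder at a manually downloaded and extracted copy of XSum,
# so dl_manager.manual_dir is a real path instead of None.
xsum = nlp.load_dataset('xsum', data_dir='/path/to/extracted/xsum')
```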
https://api.github.com/repos/huggingface/datasets/issues/174 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/174/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/174/comments | https://api.github.com/repos/huggingface/datasets/issues/174/events | https://github.com/huggingface/datasets/issues/174 | 621,928,403 | MDU6SXNzdWU2MjE5Mjg0MDM= | 174 | nlp.load_dataset('xsum') -> TypeError | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/ss... | [] | closed | false | null | [] | null | [] | 1,589,993,949,000 | 1,589,996,626,000 | 1,589,996,626,000 | MEMBER | null | null | https://api.github.com/repos/huggingface/datasets/issues/174/timeline | null | false | |
https://api.github.com/repos/huggingface/datasets/issues/173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/173/comments | https://api.github.com/repos/huggingface/datasets/issues/173/events | https://github.com/huggingface/datasets/pull/173 | 621,764,932 | MDExOlB1bGxSZXF1ZXN0NDIwNzUyNzQy | 173 | Rm extracted test dirs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Thanks for cleaning up the extracted dummy data folders! Instead of changing the file_utils we could also just put these folders under `.gitignore` (or maybe already done?).",
"Awesome! I guess you might have to add the changes for the MockDLManager now in a different file though because of my last PR - sorry!"
... | 1,589,981,448,000 | 1,590,165,276,000 | 1,590,165,275,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/173",
"html_url": "https://github.com/huggingface/datasets/pull/173",
"diff_url": "https://github.com/huggingface/datasets/pull/173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/173.patch"
} | All the dummy data used for tests were duplicated. For each dataset, we had one zip file but also its extracted directory. I removed all these directories
Furthermore, instead of extracting next to the dummy_data.zip file, we extract in the temp `cached_dir` used for tests, so that all the extracted directories get r... | https://api.github.com/repos/huggingface/datasets/issues/173/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/172/comments | https://api.github.com/repos/huggingface/datasets/issues/172/events | https://github.com/huggingface/datasets/issues/172 | 621,377,386 | MDU6SXNzdWU2MjEzNzczODY= | 172 | Clone not working on Windows environment | {
"login": "codehunk628",
"id": 51091425,
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codehunk628",
"html_url": "https://github.com/codehunk628",
"followers_url": "https://api.github.com/... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Should be fixed on master now :)",
"Thanks @lhoestq 👍 Now I can uninstall WSL and get back to work with windows.🙂"
] | 1,589,935,514,000 | 1,590,238,153,000 | 1,590,233,272,000 | CONTRIBUTOR | null | null | Cloning in a Windows environment does not work because of the special character '?' in a folder name.
Please consider changing the folder name.
Reference to folder -
nlp/datasets/cnn_dailymail/dummy/3.0.0/3.0.0/dummy_data-zip-extracted/dummy_data/uc?export=download&id=0BwmD_VLjROrfM1BxdkxVaTY2bWs/dailymail/s... | https://api.github.com/repos/huggingface/datasets/issues/172/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/171/comments | https://api.github.com/repos/huggingface/datasets/issues/171/events | https://github.com/huggingface/datasets/pull/171 | 621,199,128 | MDExOlB1bGxSZXF1ZXN0NDIwMjk0ODM0 | 171 | fix squad metric format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"One thing for SQuAD is that I wanted to be able to use the SQuAD dataset directly in the metrics and I'm not sure it will be possible with this format.\r\n\r\n(maybe it's not really possible in general though)",
"This is kinda related to one thing I had in mind which is that we may want to be able to dump our mo... | 1,589,913,456,000 | 1,590,154,610,000 | 1,590,154,608,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/171",
"html_url": "https://github.com/huggingface/datasets/pull/171",
"diff_url": "https://github.com/huggingface/datasets/pull/171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/171.patch"
} | The format of the squad metric was wrong.
This should fix #143
I tested with
```python3
predictions = [
    {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
    {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
``` | https://api.github.com/repos/huggingface/datasets/issues/171/timeline | null | true |
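A sketch of exercising the fixed format end to end, assuming the `nlp.load_metric` API of the 0.x releases:

```python
import nlp

squad_metric = nlp.load_metric('squad')
predictions = [
    {'id': '56be4db0acb8001400a502ec', 'prediction_text': 'Denver Broncos'}
]
references = [
    {'answers': [{'text': 'Denver Broncos'}], 'id': '56be4db0acb8001400a502ec'}
]
# Returns a dict with exact-match and F1 scores for the predictions.
score = squad_metric.compute(predictions, references)
print(score)
```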
https://api.github.com/repos/huggingface/datasets/issues/170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/170/comments | https://api.github.com/repos/huggingface/datasets/issues/170/events | https://github.com/huggingface/datasets/pull/170 | 621,119,747 | MDExOlB1bGxSZXF1ZXN0NDIwMjMwMDIx | 170 | Rename anli dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,589,905,617,000 | 1,589,977,389,000 | 1,589,977,388,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/170",
"html_url": "https://github.com/huggingface/datasets/pull/170",
"diff_url": "https://github.com/huggingface/datasets/pull/170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/170.patch"
} | What we have now as the `anli` dataset is actually the αNLI dataset from the ART challenge. This name is confusing because `anli` is also the name of adversarial NLI (see [https://github.com/facebookresearch/anli](https://github.com/facebookresearch/anli)).
I renamed the current `anli` dataset to `art`. | https://api.github.com/repos/huggingface/datasets/issues/170/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/169/comments | https://api.github.com/repos/huggingface/datasets/issues/169/events | https://github.com/huggingface/datasets/pull/169 | 621,099,682 | MDExOlB1bGxSZXF1ZXN0NDIwMjE1NDkw | 169 | Adding Qanta (Quizbowl) Dataset | {
"login": "EntilZha",
"id": 1382460,
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EntilZha",
"html_url": "https://github.com/EntilZha",
"followers_url": "https://api.github.com/users/Entil... | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [
"Hi @EntilZha - sorry for waiting so long until taking action here. We created a new command and a new recipe of how to add dummy_data. Can you maybe rebase to `master` as explained in 7. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-contribute-to-nlp and check that your dummy data is cor... | 1,589,904,181,000 | 1,590,497,551,000 | 1,590,497,551,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/169",
"html_url": "https://github.com/huggingface/datasets/pull/169",
"diff_url": "https://github.com/huggingface/datasets/pull/169.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/169.patch"
} | This PR adds the qanta question answering datasets from [Quizbowl: The Case for Incremental Question Answering](https://arxiv.org/abs/1904.04792) and [Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples](https://www.aclweb.org/anthology/Q19-1029/) (adversarial fold)
This part... | https://api.github.com/repos/huggingface/datasets/issues/169/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/168/comments | https://api.github.com/repos/huggingface/datasets/issues/168/events | https://github.com/huggingface/datasets/issues/168 | 620,959,819 | MDU6SXNzdWU2MjA5NTk4MTk= | 168 | Loading 'wikitext' dataset fails | {
"login": "itay1itzhak",
"id": 25987633,
"node_id": "MDQ6VXNlcjI1OTg3NjMz",
"avatar_url": "https://avatars.githubusercontent.com/u/25987633?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/itay1itzhak",
"html_url": "https://github.com/itay1itzhak",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"Hi, make sure you have a recent version of pyarrow.\r\n\r\nAre you using it in Google Colab? In this case, this error is probably the same as #128",
"Thanks!\r\n\r\nYes I'm using Google Colab, it seems like a duplicate then.",
"Closing as it is a duplicate",
"Hi,\r\nThe squad bug seems to be fixed, but the l... | 1,589,893,469,000 | 1,590,529,612,000 | 1,590,529,612,000 | NONE | null | null | Loading the 'wikitext' dataset fails with Attribute error:
Code to reproduce (from the example notebook):
import nlp
wikitext_dataset = nlp.load_dataset('wikitext')
Error:
---------------------------------------------------------------------------
AttributeError Traceback (most rece... | https://api.github.com/repos/huggingface/datasets/issues/168/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/167/comments | https://api.github.com/repos/huggingface/datasets/issues/167/events | https://github.com/huggingface/datasets/pull/167 | 620,908,786 | MDExOlB1bGxSZXF1ZXN0NDIwMDY0NDMw | 167 | [Tests] refactor tests | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"Nice !"
] | 1,589,888,612,000 | 1,589,905,032,000 | 1,589,905,030,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/167",
"html_url": "https://github.com/huggingface/datasets/pull/167",
"diff_url": "https://github.com/huggingface/datasets/pull/167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/167.patch"
} | This PR separates AWS and Local tests to remove these ugly statements in the script:
```python
if "/" not in dataset_name:
logging.info("Skip {} because it is a canonical dataset")
return
```
To run an `aws` test, one should now run the following command:
```python
pytest -s... | https://api.github.com/repos/huggingface/datasets/issues/167/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/166/comments | https://api.github.com/repos/huggingface/datasets/issues/166/events | https://github.com/huggingface/datasets/issues/166 | 620,850,218 | MDU6SXNzdWU2MjA4NTAyMTg= | 166 | Add a method to shuffle a dataset | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | null | [] | null | [
"+1 for the naming convention\r\n\r\nAbout the `shuffle` method, from my understanding it should be done in `Dataloader` (better separation between dataset processing - usage)",
"+1 for shuffle in `Dataloader`. \r\nSome `Dataloader` just store idxs of dataset and just shuffle those idxs, which might(?) be faster ... | 1,589,882,926,000 | 1,592,924,853,000 | 1,592,924,852,000 | MEMBER | null | null | Could maybe be a `dataset.shuffle(generator=None, seed=None)` signature method.
Also, we could maybe have a clear indication of which methods modify in-place and which methods return/cache a modified dataset. I kinda like the torch convention of having an underscore suffix for all the methods which modify a dataset in-pl... | https://api.github.com/repos/huggingface/datasets/issues/166/timeline | null | false |
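A sketch of the `DataLoader`-level shuffling proposed in the comments, assuming `set_format` from the 0.x API and a PyTorch data pipeline (the dataset and column names are illustrative):

```python
import nlp
import torch

dataset = nlp.load_dataset('glue', 'sst2', split='train')
# Shuffle at the DataLoader level instead of mutating the dataset in place.
dataset.set_format(type='torch', columns=['label'])  # columns are illustrative
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)
```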
https://api.github.com/repos/huggingface/datasets/issues/165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/165/comments | https://api.github.com/repos/huggingface/datasets/issues/165/events | https://github.com/huggingface/datasets/issues/165 | 620,758,221 | MDU6SXNzdWU2MjA3NTgyMjE= | 165 | ANLI | {
"login": "douwekiela",
"id": 6024930,
"node_id": "MDQ6VXNlcjYwMjQ5MzA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6024930?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/douwekiela",
"html_url": "https://github.com/douwekiela",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [] | 1,589,874,657,000 | 1,589,977,387,000 | 1,589,977,387,000 | NONE | null | null | Can I recommend the following:
For ANLI, use https://github.com/facebookresearch/anli. As that paper says, "Our dataset is not
to be confused with abductive NLI (Bhagavatula et al., 2019), which calls itself αNLI, or ART.".
Indeed, the paper cited under what is currently called anli says in the abstract "We int... | https://api.github.com/repos/huggingface/datasets/issues/165/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/164/comments | https://api.github.com/repos/huggingface/datasets/issues/164/events | https://github.com/huggingface/datasets/issues/164 | 620,540,250 | MDU6SXNzdWU2MjA1NDAyNTA= | 164 | Add Spanish POR and NER Datasets | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Hello @mrm8488, are these datasets official datasets published in an NLP/CL/ML venue?",
"What about this one: https://github.com/ccasimiro88/TranslateAlignRetrieve?"
] | 1,589,840,301,000 | 1,590,424,125,000 | 1,590,424,125,000 | NONE | null | null | Hi guys,
A small step toward covering multilingual support could be adding the standard datasets used for Spanish NER and POS tasks.
I can provide them in raw and preprocessed formats. | https://api.github.com/repos/huggingface/datasets/issues/164/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/163/comments | https://api.github.com/repos/huggingface/datasets/issues/163/events | https://github.com/huggingface/datasets/issues/163 | 620,534,307 | MDU6SXNzdWU2MjA1MzQzMDc= | 163 | [Feature request] Add cos-e v1.0 | {
"login": "sarahwie",
"id": 8027676,
"node_id": "MDQ6VXNlcjgwMjc2NzY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8027676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sarahwie",
"html_url": "https://github.com/sarahwie",
"followers_url": "https://api.github.com/users/sarah... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Sounds good, @mariamabarham do you want to give a look?\r\nI think we should have two configurations so we can allow either version of the dataset to be loaded with the `1.0` version being the default maybe.\r\n\r\nCc some authors of the great cos-e: @nazneenrajani @bmccann",
"cos_e v1.0 is related to CQA v1.0 b... | 1,589,839,526,000 | 1,592,349,325,000 | 1,592,333,526,000 | NONE | null | null | I noticed the second release of cos-e (v1.11) is included in this repo. I wanted to request inclusion of v1.0, since this is the version on which results are reported on in [the paper](https://www.aclweb.org/anthology/P19-1487/), and v1.11 has noted [annotation](https://github.com/salesforce/cos-e/issues/2) [issues](ht... | https://api.github.com/repos/huggingface/datasets/issues/163/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/162/comments | https://api.github.com/repos/huggingface/datasets/issues/162/events | https://github.com/huggingface/datasets/pull/162 | 620,513,554 | MDExOlB1bGxSZXF1ZXN0NDE5NzQ4Mzky | 162 | fix prev files hash in map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Awesome! ",
"Hi, yes, this seems to fix #160 -- I cloned the branch locally and verified",
"Perfect then :)"
] | 1,589,836,851,000 | 1,589,837,781,000 | 1,589,837,780,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/162",
"html_url": "https://github.com/huggingface/datasets/pull/162",
"diff_url": "https://github.com/huggingface/datasets/pull/162.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/162.patch"
} | Fix the `.map` issue in #160.
This makes sure the previous files are taken into account when computing the hash. | https://api.github.com/repos/huggingface/datasets/issues/162/timeline | null | true |
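An illustrative sketch of the idea behind this fix (the function and names are hypothetical, not the library's internals): the fingerprint of a `.map` call folds in the previous cache files, so the same function applied to different splits produces different hashes:

```python
import hashlib

def cache_hash(function_bytes, previous_files):
    # Hash the mapped function together with the files it builds on.
    h = hashlib.md5(function_bytes)
    for path in sorted(previous_files):
        h.update(path.encode('utf-8'))
    return h.hexdigest()

# Same function, different upstream files -> different cache hashes.
print(cache_hash(b"func1", ["train.arrow"]))
print(cache_hash(b"func1", ["validation.arrow"]))
```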
https://api.github.com/repos/huggingface/datasets/issues/161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/161/comments | https://api.github.com/repos/huggingface/datasets/issues/161/events | https://github.com/huggingface/datasets/issues/161 | 620,487,535 | MDU6SXNzdWU2MjA0ODc1MzU= | 161 | Discussion on version identifier & MockDataLoaderManager for test data | {
"login": "EntilZha",
"id": 1382460,
"node_id": "MDQ6VXNlcjEzODI0NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/1382460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EntilZha",
"html_url": "https://github.com/EntilZha",
"followers_url": "https://api.github.com/users/Entil... | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [
"usually you can replace `download` in your dataset script with `download_and_prepare()` - could you share the code for your dataset here? :-) ",
"I have an initial version here: https://github.com/EntilZha/nlp/tree/master/datasets/qanta Thats pretty close to what I'll do as a PR, but still want to do some more s... | 1,589,833,890,000 | 1,590,343,803,000 | null | CONTRIBUTOR | null | null | Hi, I'm working on adding a dataset and ran into an error due to `download` not being defined on `MockDataLoaderManager`, but being defined in `nlp/utils/download_manager.py`. The readme step running this: `RUN_SLOW=1 pytest tests/test_dataset_common.py::DatasetTest::test_load_real_dataset_localmydatasetname` triggers ... | https://api.github.com/repos/huggingface/datasets/issues/161/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/160/comments | https://api.github.com/repos/huggingface/datasets/issues/160/events | https://github.com/huggingface/datasets/issues/160 | 620,448,236 | MDU6SXNzdWU2MjA0NDgyMzY= | 160 | caching in map causes same result to be returned for train, validation and test | {
"login": "dpressel",
"id": 247881,
"node_id": "MDQ6VXNlcjI0Nzg4MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/247881?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dpressel",
"html_url": "https://github.com/dpressel",
"followers_url": "https://api.github.com/users/dpresse... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi @dpressel, \r\n\r\nthanks for posting your issue! Can you maybe add a complete code snippet that we can copy paste to reproduce the error? For example, I'm not sure where the variable `train_set` comes from in your code and it seems like you are loading multiple datasets at once? ",
"Hi, the full example was... | 1,589,829,723,000 | 1,589,837,780,000 | 1,589,837,780,000 | NONE | null | null | hello,
I am working on a program that uses the `nlp` library with the `SST2` dataset.
The rough outline of the program is:
```
import nlp as nlp_datasets
...
parser.add_argument('--dataset', help='HuggingFace Datasets id', default=['glue', 'sst2'], nargs='+')
...
dataset = nlp_datasets.load_dataset(*args.... | https://api.github.com/repos/huggingface/datasets/issues/160/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/159/comments | https://api.github.com/repos/huggingface/datasets/issues/159/events | https://github.com/huggingface/datasets/issues/159 | 620,420,700 | MDU6SXNzdWU2MjA0MjA3MDA= | 159 | How can we add more datasets to nlp library? | {
"login": "Tahsin-Mayeesha",
"id": 17886829,
"node_id": "MDQ6VXNlcjE3ODg2ODI5",
"avatar_url": "https://avatars.githubusercontent.com/u/17886829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Tahsin-Mayeesha",
"html_url": "https://github.com/Tahsin-Mayeesha",
"followers_url": "https://api... | [] | closed | false | null | [] | null | [
"Found it. https://github.com/huggingface/nlp/tree/master/datasets"
] | 1,589,826,931,000 | 1,589,827,028,000 | 1,589,827,027,000 | NONE | null | null | https://api.github.com/repos/huggingface/datasets/issues/159/timeline | null | false | |
https://api.github.com/repos/huggingface/datasets/issues/158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/158/comments | https://api.github.com/repos/huggingface/datasets/issues/158/events | https://github.com/huggingface/datasets/pull/158 | 620,396,658 | MDExOlB1bGxSZXF1ZXN0NDE5NjUyNTQy | 158 | add Toronto Books Corpus | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 1,589,824,485,000 | 1,591,861,755,000 | 1,589,873,696,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/158",
"html_url": "https://github.com/huggingface/datasets/pull/158",
"diff_url": "https://github.com/huggingface/datasets/pull/158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/158.patch"
} | This PR adds the Toronto Books Corpus.
It only considers TMX and plain text files (Moses) defined in the table **Statistics and TMX/Moses Downloads** [here](http://opus.nlpl.eu/Books.php) | https://api.github.com/repos/huggingface/datasets/issues/158/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/157/comments | https://api.github.com/repos/huggingface/datasets/issues/157/events | https://github.com/huggingface/datasets/issues/157 | 620,356,542 | MDU6SXNzdWU2MjAzNTY1NDI= | 157 | nlp.load_dataset() gives "TypeError: list_() takes exactly one argument (2 given)" | {
"login": "saahiluppal",
"id": 47444392,
"node_id": "MDQ6VXNlcjQ3NDQ0Mzky",
"avatar_url": "https://avatars.githubusercontent.com/u/47444392?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/saahiluppal",
"html_url": "https://github.com/saahiluppal",
"followers_url": "https://api.github.com/... | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [
"You can just run: \r\n`val = nlp.load_dataset('squad')` \r\n\r\nif you want to have just the validation script you can also do:\r\n\r\n`val = nlp.load_dataset('squad', split=\"validation\")`",
"If you want to load a local dataset, make sure you include a `./` before the folder name. ",
"This happens by just do... | 1,589,820,398,000 | 1,591,344,538,000 | 1,591,344,538,000 | NONE | null | null | I'm trying to load datasets from nlp but there seems to have error saying
"TypeError: list_() takes exactly one argument (2 given)"
A gist can be found here:
https://gist.github.com/saahiluppal/c4b878f330b10b9ab9762bc0776c0a6a | https://api.github.com/repos/huggingface/datasets/issues/157/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/156/comments | https://api.github.com/repos/huggingface/datasets/issues/156/events | https://github.com/huggingface/datasets/issues/156 | 620,263,687 | MDU6SXNzdWU2MjAyNjM2ODc= | 156 | SyntaxError with WMT datasets | {
"login": "tomhosking",
"id": 9419158,
"node_id": "MDQ6VXNlcjk0MTkxNTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/9419158?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tomhosking",
"html_url": "https://github.com/tomhosking",
"followers_url": "https://api.github.com/users... | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [
"Jeez - don't know what happened there :D Should be fixed now! \r\n\r\nThanks a lot for reporting this @tomhosking !",
"Hi @patrickvonplaten!\r\n\r\nI'm now getting the below error:\r\n\r\n```\r\n---------------------------------------------------------------------------\r\nTypeError ... | 1,589,812,698,000 | 1,595,522,515,000 | 1,595,522,515,000 | NONE | null | null | The following snippet produces a syntax error:
```
import nlp
dataset = nlp.load_dataset('wmt14')
print(dataset['train'][0])
```
```
Traceback (most recent call last):
File "/home/tom/.local/lib/python3.6/site-packages/IPython/core/interactiveshell.py", line 3326, in run_code
exec(code_obj, self.... | https://api.github.com/repos/huggingface/datasets/issues/156/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/155/comments | https://api.github.com/repos/huggingface/datasets/issues/155/events | https://github.com/huggingface/datasets/pull/155 | 620,067,946 | MDExOlB1bGxSZXF1ZXN0NDE5Mzg1ODM0 | 155 | Include more links in README, fix typos | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"I fixed a conflict :) thanks !"
] | 1,589,795,228,000 | 1,590,654,717,000 | 1,590,654,717,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/155",
"html_url": "https://github.com/huggingface/datasets/pull/155",
"diff_url": "https://github.com/huggingface/datasets/pull/155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/155.patch"
} | Include more links and fix typos in README | https://api.github.com/repos/huggingface/datasets/issues/155/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/154/comments | https://api.github.com/repos/huggingface/datasets/issues/154/events | https://github.com/huggingface/datasets/pull/154 | 620,059,066 | MDExOlB1bGxSZXF1ZXN0NDE5Mzc4Mzgw | 154 | add Ubuntu Dialogs Corpus datasets | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 1,589,794,488,000 | 1,589,796,748,000 | 1,589,796,747,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/154",
"html_url": "https://github.com/huggingface/datasets/pull/154",
"diff_url": "https://github.com/huggingface/datasets/pull/154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/154.patch"
} | This PR adds the Ubuntu Dialog Corpus datasets version 2.0. | https://api.github.com/repos/huggingface/datasets/issues/154/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/153/comments | https://api.github.com/repos/huggingface/datasets/issues/153/events | https://github.com/huggingface/datasets/issues/153 | 619,972,246 | MDU6SXNzdWU2MTk5NzIyNDY= | 153 | Meta-datasets (GLUE/XTREME/...) – Special care to attributions and citations | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | open | false | null | [] | null | [
"As @yoavgo suggested, there should be the possibility to call a function like nlp.bib that outputs all bibtex ref from the datasets and models actually used and eventually nlp.bib.forreadme that would output the same info + versions numbers so they can be included in a readme.md file.",
"Actually, double checki... | 1,589,786,662,000 | 1,589,836,696,000 | null | MEMBER | null | null | Meta-datasets are interesting in terms of standardized benchmarks but they also have specific behaviors, in particular in terms of attribution and authorship. It's very important that each specific dataset inside a meta dataset is properly referenced and the citation/specific homepage/etc are very visible and accessibl... | https://api.github.com/repos/huggingface/datasets/issues/153/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/152/comments | https://api.github.com/repos/huggingface/datasets/issues/152/events | https://github.com/huggingface/datasets/pull/152 | 619,971,900 | MDExOlB1bGxSZXF1ZXN0NDE5MzA4OTE2 | 152 | Add GLUE config name check | {
"login": "Bharat123rox",
"id": 13381361,
"node_id": "MDQ6VXNlcjEzMzgxMzYx",
"avatar_url": "https://avatars.githubusercontent.com/u/13381361?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Bharat123rox",
"html_url": "https://github.com/Bharat123rox",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"If tests are being added, any guidance on where to add tests would be helpful!\r\n\r\nTagging @thomwolf for review",
"Looks good to me. Is this compatible with the way we are doing tests right now @patrickvonplaten ?",
"If the tests pass it should be fine :-) \r\n\r\n@Bharat123rox could you check whether the t... | 1,589,786,623,000 | 1,590,617,352,000 | 1,590,617,352,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/152",
"html_url": "https://github.com/huggingface/datasets/pull/152",
"diff_url": "https://github.com/huggingface/datasets/pull/152.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/152.patch"
} | Fixes #130 by adding a name check to the Glue class | https://api.github.com/repos/huggingface/datasets/issues/152/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/151/comments | https://api.github.com/repos/huggingface/datasets/issues/151/events | https://github.com/huggingface/datasets/pull/151 | 619,968,480 | MDExOlB1bGxSZXF1ZXN0NDE5MzA2MTYz | 151 | Fix JSON tests. | {
"login": "jplu",
"id": 959590,
"node_id": "MDQ6VXNlcjk1OTU5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/959590?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jplu",
"html_url": "https://github.com/jplu",
"followers_url": "https://api.github.com/users/jplu/followers",
... | [] | closed | false | null | [] | null | [] | 1,589,786,258,000 | 1,589,786,512,000 | 1,589,786,511,000 | COLLABORATOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/151",
"html_url": "https://github.com/huggingface/datasets/pull/151",
"diff_url": "https://github.com/huggingface/datasets/pull/151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/151.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/151/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/150/comments | https://api.github.com/repos/huggingface/datasets/issues/150/events | https://github.com/huggingface/datasets/pull/150 | 619,809,645 | MDExOlB1bGxSZXF1ZXN0NDE5MTgyODU4 | 150 | Add WNUT 17 NER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"The PR looks awesome! \r\nSince you have already added a dataset I imagine the tests as described in 5. of https://github.com/huggingface/nlp/blob/master/CONTRIBUTING.md#how-to-add-a-dataset all pass, right @stefan-it ?\r\n\r\nI think we are then good to merge this :-) @lhoestq ",
"Nice !\r\n\r\nOne thing though... | 1,589,753,944,000 | 1,590,525,479,000 | 1,590,525,479,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/150",
"html_url": "https://github.com/huggingface/datasets/pull/150",
"diff_url": "https://github.com/huggingface/datasets/pull/150.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/150.patch"
} | Hi,
this PR adds the WNUT 17 dataset to `nlp`.
> Emerging and Rare entity recognition
> This shared task focuses on identifying unusual, previously-unseen entities in the context of emerging discussions. Named entities form the basis of many modern approaches to other tasks (like event clustering and summarisati... | https://api.github.com/repos/huggingface/datasets/issues/150/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/149/comments | https://api.github.com/repos/huggingface/datasets/issues/149/events | https://github.com/huggingface/datasets/issues/149 | 619,735,739 | MDU6SXNzdWU2MTk3MzU3Mzk= | 149 | [Feature request] Add Ubuntu Dialogue Corpus dataset | {
"login": "danth",
"id": 28959268,
"node_id": "MDQ6VXNlcjI4OTU5MjY4",
"avatar_url": "https://avatars.githubusercontent.com/u/28959268?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danth",
"html_url": "https://github.com/danth",
"followers_url": "https://api.github.com/users/danth/follow... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"@AlphaMycelium the Ubuntu Dialogue Corpus [version 2]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator) is added. Note that it requires a manual download by following the download instructions in the [repos]( https://github.com/rkadlec/ubuntu-ranking-dataset-creator).\r\nMaybe we can close this issue for... | 1,589,730,159,000 | 1,589,821,306,000 | 1,589,821,306,000 | NONE | null | null | https://github.com/rkadlec/ubuntu-ranking-dataset-creator or http://dataset.cs.mcgill.ca/ubuntu-corpus-1.0/ | https://api.github.com/repos/huggingface/datasets/issues/149/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/148/comments | https://api.github.com/repos/huggingface/datasets/issues/148/events | https://github.com/huggingface/datasets/issues/148 | 619,590,555 | MDU6SXNzdWU2MTk1OTA1NTU= | 148 | _download_and_prepare() got an unexpected keyword argument 'verify_infos' | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Same error for dataset 'wiki40b'",
"Should be fixed on master :)"
] | 1,589,680,133,000 | 1,589,787,513,000 | 1,589,787,513,000 | CONTRIBUTOR | null | null | # Reproduce
In Colab,
```
%pip install -q nlp
%pip install -q apache_beam mwparserfromhell
import nlp
dataset = nlp.load_dataset('wikipedia')
```
and get:
```
Downloading and preparing dataset wikipedia/20200501.aa (download: Unknown size, generated: Unknown size, total: Unknown size) to /root/.cache/huggingface/datasets/w... | https://api.github.com/repos/huggingface/datasets/issues/148/timeline | null | false |
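The "Should be fixed on master" comment above implies picking the fix up before a release; a minimal sketch, assuming the usual pip-from-git workflow rather than any command stated in the thread:
```
# Colab cell: install the library from source to pick up fixes that are
# already on master but not yet in a PyPI release (assumed workflow).
%pip install -q git+https://github.com/huggingface/nlp.git
```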
https://api.github.com/repos/huggingface/datasets/issues/147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/147/comments | https://api.github.com/repos/huggingface/datasets/issues/147/events | https://github.com/huggingface/datasets/issues/147 | 619,581,907 | MDU6SXNzdWU2MTk1ODE5MDc= | 147 | Error with sklearn train_test_split | {
"login": "ClonedOne",
"id": 6853743,
"node_id": "MDQ6VXNlcjY4NTM3NDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/6853743?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ClonedOne",
"html_url": "https://github.com/ClonedOne",
"followers_url": "https://api.github.com/users/Cl... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Indeed. Probably we will want to have a similar method directly in the library",
"Related: #166 "
] | 1,589,675,304,000 | 1,592,497,403,000 | 1,592,497,403,000 | NONE | null | null | It would be nice if we could use sklearn `train_test_split` to quickly generate subsets from the dataset objects returned by `nlp.load_dataset`. At the moment the code:
```python
data = nlp.load_dataset('imdb', cache_dir=data_cache)
f_half, s_half = train_test_split(data['train'], test_size=0.5, random_state=seed)... | https://api.github.com/repos/huggingface/datasets/issues/147/timeline | null | false |
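Until a dedicated split helper lands (the comments above point to #166), a minimal workaround sketch that assumes only `len()` and integer indexing on the dataset object:
```python
import nlp
from sklearn.model_selection import train_test_split

data = nlp.load_dataset('imdb')

# Split index lists with sklearn, then pull example dicts out by index;
# the variable names mirror the snippet quoted above.
indices = list(range(len(data['train'])))
f_idx, s_idx = train_test_split(indices, test_size=0.5, random_state=42)
f_half = [data['train'][i] for i in f_idx]
s_half = [data['train'][i] for i in s_idx]
```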
https://api.github.com/repos/huggingface/datasets/issues/146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/146/comments | https://api.github.com/repos/huggingface/datasets/issues/146/events | https://github.com/huggingface/datasets/pull/146 | 619,564,653 | MDExOlB1bGxSZXF1ZXN0NDE5MDI5MjUx | 146 | Add BERTScore to metrics | {
"login": "felixgwu",
"id": 7753366,
"node_id": "MDQ6VXNlcjc3NTMzNjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/7753366?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/felixgwu",
"html_url": "https://github.com/felixgwu",
"followers_url": "https://api.github.com/users/felix... | [] | closed | false | null | [] | null | [] | 1,589,666,979,000 | 1,589,754,130,000 | 1,589,754,129,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/146",
"html_url": "https://github.com/huggingface/datasets/pull/146",
"diff_url": "https://github.com/huggingface/datasets/pull/146.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/146.patch"
} | This PR adds [BERTScore](https://arxiv.org/abs/1904.09675) to metrics.
Here is an example of how to use it.
```python
import nlp
bertscore = nlp.load_metric('metrics/bertscore') # or simply nlp.load_metric('bertscore') after this is added to huggingface's s3 bucket
predictions = ['example', 'fruit']
references = [[... | https://api.github.com/repos/huggingface/datasets/issues/146/timeline | null | true |
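The usage snippet above is cut off by the viewer; a hedged completion sketch (the `compute` keyword arguments are assumptions based on the general metric API, not taken from this PR):
```python
import nlp

bertscore = nlp.load_metric('metrics/bertscore')
predictions = ['example', 'fruit']
# One list of acceptable references per prediction.
references = [['this is an example.', 'here is an example.'], ['apple']]
results = bertscore.compute(predictions=predictions, references=references, lang='en')
```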
https://api.github.com/repos/huggingface/datasets/issues/145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/145/comments | https://api.github.com/repos/huggingface/datasets/issues/145/events | https://github.com/huggingface/datasets/pull/145 | 619,480,549 | MDExOlB1bGxSZXF1ZXN0NDE4OTcxMjg0 | 145 | [AWS Tests] Follow-up PR from #144 | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,637,226,000 | 1,589,637,263,000 | 1,589,637,262,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/145",
"html_url": "https://github.com/huggingface/datasets/pull/145",
"diff_url": "https://github.com/huggingface/datasets/pull/145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/145.patch"
} | I forgot to add this line in PR #144. | https://api.github.com/repos/huggingface/datasets/issues/145/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/144/comments | https://api.github.com/repos/huggingface/datasets/issues/144/events | https://github.com/huggingface/datasets/pull/144 | 619,477,367 | MDExOlB1bGxSZXF1ZXN0NDE4OTY5NjA1 | 144 | [AWS tests] AWS test should not run for canonical datasets | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,636,370,000 | 1,589,636,674,000 | 1,589,636,673,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/144",
"html_url": "https://github.com/huggingface/datasets/pull/144",
"diff_url": "https://github.com/huggingface/datasets/pull/144.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/144.patch"
} | AWS tests should in general not run for canonical datasets. Only local tests will run in this case. This way a PR is able to pass when adding a new dataset.
This PR changes the logic to the following:
1) All datasets that are present in `nlp/datasets` are tested only locally. This way when one adds a canonical da... | https://api.github.com/repos/huggingface/datasets/issues/144/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/143 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/143/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/143/comments | https://api.github.com/repos/huggingface/datasets/issues/143/events | https://github.com/huggingface/datasets/issues/143 | 619,457,641 | MDU6SXNzdWU2MTk0NTc2NDE= | 143 | ArrowTypeError in squad metrics | {
"login": "patil-suraj",
"id": 27137566,
"node_id": "MDQ6VXNlcjI3MTM3NTY2",
"avatar_url": "https://avatars.githubusercontent.com/u/27137566?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patil-suraj",
"html_url": "https://github.com/patil-suraj",
"followers_url": "https://api.github.com/... | [
{
"id": 2067393914,
"node_id": "MDU6TGFiZWwyMDY3MzkzOTE0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug",
"name": "metric bug",
"color": "25b21e",
"default": false,
"description": "A bug in a metric script"
}
] | closed | false | null | [] | null | [
"There was an issue in the format, thanks.\r\nNow you can do\r\n```python3\r\nsquad_dset = nlp.load_dataset(\"squad\")\r\nsquad_metric = nlp.load_metric(\"/Users/quentinlhoest/Desktop/hf/nlp-bis/metrics/squad\")\r\npredictions = [\r\n {\"id\": v[\"id\"], \"prediction_text\": v[\"answers\"][\"text\"][0]} # take ... | 1,589,630,797,000 | 1,590,154,732,000 | 1,590,154,608,000 | MEMBER | null | null | `squad_metric.compute` is giving following error
```
ArrowTypeError: Could not convert [{'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}, {'text': 'Denver Broncos'}] with type list: was not a dict, tuple, or recognized null value for conversion to struct type
```
This is how my predictions and references lo... | https://api.github.com/repos/huggingface/datasets/issues/143/timeline | null | false |
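The maintainer's comment quoted above gives the accepted format; a minimal sketch with an illustrative id and answer:
```python
import nlp

squad_metric = nlp.load_metric('squad')

# Predictions are flat strings; references keep the squad-style
# "answers" struct. The id and answer_start below are illustrative.
predictions = [{'id': 'example-id', 'prediction_text': 'Denver Broncos'}]
references = [{'id': 'example-id',
               'answers': {'text': ['Denver Broncos'], 'answer_start': [177]}}]
score = squad_metric.compute(predictions=predictions, references=references)
```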
https://api.github.com/repos/huggingface/datasets/issues/142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/142/comments | https://api.github.com/repos/huggingface/datasets/issues/142/events | https://github.com/huggingface/datasets/pull/142 | 619,450,068 | MDExOlB1bGxSZXF1ZXN0NDE4OTU0OTc1 | 142 | [WMT] Add all wmt | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,628,526,000 | 1,589,717,901,000 | 1,589,717,900,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/142",
"html_url": "https://github.com/huggingface/datasets/pull/142",
"diff_url": "https://github.com/huggingface/datasets/pull/142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/142.patch"
} | This PR adds all wmt dataset scripts. At the moment the script is **not** functional for the language pairs "cs-en", "ru-en", "hi-en" because apparently it takes up to a week to get the manual data for these datasets: see http://ufal.mff.cuni.cz/czeng.
The datasets are fully functional though for the "big" languag... | https://api.github.com/repos/huggingface/datasets/issues/142/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/141/comments | https://api.github.com/repos/huggingface/datasets/issues/141/events | https://github.com/huggingface/datasets/pull/141 | 619,447,090 | MDExOlB1bGxSZXF1ZXN0NDE4OTUzMzQw | 141 | [Clean up] remove bogus folder | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"Same for the dataset_infos.json at the project root no ?",
"Sorry guys, I haven't noticed. Thank you for mentioning it."
] | 1,589,627,622,000 | 1,589,635,467,000 | 1,589,635,466,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/141",
"html_url": "https://github.com/huggingface/datasets/pull/141",
"diff_url": "https://github.com/huggingface/datasets/pull/141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/141.patch"
} | @mariamabarham - I think you accidentally placed it there. | https://api.github.com/repos/huggingface/datasets/issues/141/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/140/comments | https://api.github.com/repos/huggingface/datasets/issues/140/events | https://github.com/huggingface/datasets/pull/140 | 619,443,613 | MDExOlB1bGxSZXF1ZXN0NDE4OTUxMzg4 | 140 | [Tests] run local tests as default | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"You are right and I think those are usual best practice :) I'm 100% fine with this^^",
"Merging this for now to unblock other PRs."
] | 1,589,626,566,000 | 1,589,635,304,000 | 1,589,635,303,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/140",
"html_url": "https://github.com/huggingface/datasets/pull/140",
"diff_url": "https://github.com/huggingface/datasets/pull/140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/140.patch"
} | This PR also enables local tests by default
I think it's safer for now to enable both local and aws tests for every commit. The problem currently is that when we do a PR to add a dataset, the dataset is not yet on AWS and therefore not tested on the PR itself. Thus the PR will always be green even if the datasets are... | https://api.github.com/repos/huggingface/datasets/issues/140/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/139 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/139/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/139/comments | https://api.github.com/repos/huggingface/datasets/issues/139/events | https://github.com/huggingface/datasets/pull/139 | 619,327,409 | MDExOlB1bGxSZXF1ZXN0NDE4ODc4NzMy | 139 | Add GermEval 2014 NER dataset | {
"login": "stefan-it",
"id": 20651387,
"node_id": "MDQ6VXNlcjIwNjUxMzg3",
"avatar_url": "https://avatars.githubusercontent.com/u/20651387?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stefan-it",
"html_url": "https://github.com/stefan-it",
"followers_url": "https://api.github.com/users/... | [] | closed | false | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"follo... | null | [
"Had really fun playing around with this new library :heart: ",
"That's awesome - thanks @stefan-it :-) \r\n\r\nCould you maybe rebase to master and check if all dummy data tests are fine. I should have included the local tests directly in the test suite so that all PRs are fully checked: #140 - sorry :D ",
"@p... | 1,589,586,129,000 | 1,589,637,397,000 | 1,589,637,382,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/139",
"html_url": "https://github.com/huggingface/datasets/pull/139",
"diff_url": "https://github.com/huggingface/datasets/pull/139.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/139.patch"
} | Hi,
this PR adds the GermEval 2014 NER dataset 😃
> The GermEval 2014 NER Shared Task builds on a new dataset with German Named Entity annotation [1] with the following properties:
> - The data was sampled from German Wikipedia and News Corpora as a collection of citations.
> - The dataset covers over 31,000... | https://api.github.com/repos/huggingface/datasets/issues/139/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/138 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/138/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/138/comments | https://api.github.com/repos/huggingface/datasets/issues/138/events | https://github.com/huggingface/datasets/issues/138 | 619,225,191 | MDU6SXNzdWU2MTkyMjUxOTE= | 138 | Consider renaming to nld | {
"login": "honnibal",
"id": 8059750,
"node_id": "MDQ6VXNlcjgwNTk3NTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8059750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/honnibal",
"html_url": "https://github.com/honnibal",
"followers_url": "https://api.github.com/users/honni... | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
}
] | closed | false | null | [] | null | [
"I would suggest `nlds`. NLP is a very general, broad and ambiguous term, the library is not about NLP (as in processing) per se, it is about accessing Natural Language related datasets. So the name should reflect its purpose.\r\n",
"Chiming in to second everything @honnibal said, and to add that I think the curr... | 1,589,574,207,000 | 1,608,238,591,000 | 1,601,251,690,000 | NONE | null | null | Hey :)
Just making a thread here recording what I said on Twitter, as it's impossible to follow discussion there. It's also just really not a good way to talk about this sort of thing.
The issue is that modules go into the global namespace, so you shouldn't use variable names that conflict with module names. This... | https://api.github.com/repos/huggingface/datasets/issues/138/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/137 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/137/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/137/comments | https://api.github.com/repos/huggingface/datasets/issues/137/events | https://github.com/huggingface/datasets/issues/137 | 619,214,645 | MDU6SXNzdWU2MTkyMTQ2NDU= | 137 | Tokenized BLEU considered harmful - Discussion on community-based process | {
"login": "kpu",
"id": 247512,
"node_id": "MDQ6VXNlcjI0NzUxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/247512?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kpu",
"html_url": "https://github.com/kpu",
"followers_url": "https://api.github.com/users/kpu/followers",
"fol... | [
{
"id": 2067400324,
"node_id": "MDU6TGFiZWwyMDY3NDAwMzI0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/generic%20discussion",
"name": "generic discussion",
"color": "c5def5",
"default": false,
"description": "Generic discussion on the library"
},
{
"id": 2067... | open | false | null | [] | null | null | 1,589,573,314,000 | 1,610,016,088,000 | null | NONE | null | null | https://github.com/huggingface/nlp/blob/7d1526dfeeb29248d832f1073192dbf03ad642da/metrics/bleu/bleu.py#L76 assumes the inputs are tokenized by the user. This is bad practice because the user's tokenizer is usually not the same as the one used by `mteval-v13a.pl`, the closest thing we have to a standard. Moreover, toke... | https://api.github.com/repos/huggingface/datasets/issues/137/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/136 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/136/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/136/comments | https://api.github.com/repos/huggingface/datasets/issues/136/events | https://github.com/huggingface/datasets/pull/136 | 619,211,018 | MDExOlB1bGxSZXF1ZXN0NDE4NzgxNzI4 | 136 | Update README.md | {
"login": "renaud",
"id": 75369,
"node_id": "MDQ6VXNlcjc1MzY5",
"avatar_url": "https://avatars.githubusercontent.com/u/75369?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/renaud",
"html_url": "https://github.com/renaud",
"followers_url": "https://api.github.com/users/renaud/followers",
... | [] | closed | false | null | [] | null | [
"Thanks, this was fixed with #135 :)"
] | 1,589,572,867,000 | 1,589,717,848,000 | 1,589,717,848,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/136",
"html_url": "https://github.com/huggingface/datasets/pull/136",
"diff_url": "https://github.com/huggingface/datasets/pull/136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/136.patch"
} | small typo | https://api.github.com/repos/huggingface/datasets/issues/136/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/135 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/135/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/135/comments | https://api.github.com/repos/huggingface/datasets/issues/135/events | https://github.com/huggingface/datasets/pull/135 | 619,206,708 | MDExOlB1bGxSZXF1ZXN0NDE4Nzc4MTMw | 135 | Fix print statement in READ.md | {
"login": "codehunk628",
"id": 51091425,
"node_id": "MDQ6VXNlcjUxMDkxNDI1",
"avatar_url": "https://avatars.githubusercontent.com/u/51091425?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/codehunk628",
"html_url": "https://github.com/codehunk628",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"Indeed, thanks!"
] | 1,589,572,343,000 | 1,589,717,646,000 | 1,589,717,645,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/135",
"html_url": "https://github.com/huggingface/datasets/pull/135",
"diff_url": "https://github.com/huggingface/datasets/pull/135.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/135.patch"
} | The print statement was printing a generator object instead of the names of the available datasets/metrics | https://api.github.com/repos/huggingface/datasets/issues/135/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/134 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/134/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/134/comments | https://api.github.com/repos/huggingface/datasets/issues/134/events | https://github.com/huggingface/datasets/pull/134 | 619,112,641 | MDExOlB1bGxSZXF1ZXN0NDE4Njk5OTYz | 134 | Update README.md | {
"login": "pranv",
"id": 8753078,
"node_id": "MDQ6VXNlcjg3NTMwNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8753078?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pranv",
"html_url": "https://github.com/pranv",
"followers_url": "https://api.github.com/users/pranv/follower... | [] | closed | false | null | [] | null | [
"the readme got removed, closing this one"
] | 1,589,561,774,000 | 1,590,654,109,000 | 1,590,654,109,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/134",
"html_url": "https://github.com/huggingface/datasets/pull/134",
"diff_url": "https://github.com/huggingface/datasets/pull/134.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/134.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/134/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/133 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/133/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/133/comments | https://api.github.com/repos/huggingface/datasets/issues/133/events | https://github.com/huggingface/datasets/issues/133 | 619,094,954 | MDU6SXNzdWU2MTkwOTQ5NTQ= | 133 | [Question] Using/adding a local dataset | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/foll... | [] | closed | false | null | [] | null | [
"Hi @zphang,\r\n\r\nSo you can just give the local path to a dataset script file and it should work.\r\n\r\nHere is an example:\r\n- you can download one of the scripts in the `datasets` folder of the present repo (or clone the repo)\r\n- then you can load it with `load_dataset('PATH/TO/YOUR/LOCAL/SCRIPT.py')`\r\n\... | 1,589,559,966,000 | 1,595,522,649,000 | 1,595,522,649,000 | NONE | null | null | Users may want to either create/modify a local copy of a dataset, or use a custom-built dataset with the same `Dataset` API as externally downloaded datasets.
It appears to be possible to point to a local dataset path rather than downloading the external ones, but I'm not exactly sure how to go about doing this.
... | https://api.github.com/repos/huggingface/datasets/issues/133/timeline | null | false |
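The first comment on this issue answers the question; a minimal sketch (the script path is a placeholder, not a path from the thread):
```python
import nlp

# Point load_dataset at a local dataset script instead of a canonical name.
dataset = nlp.load_dataset('./path/to/my_dataset_script.py')
```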
https://api.github.com/repos/huggingface/datasets/issues/132 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/132/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/132/comments | https://api.github.com/repos/huggingface/datasets/issues/132/events | https://github.com/huggingface/datasets/issues/132 | 619,077,851 | MDU6SXNzdWU2MTkwNzc4NTE= | 132 | [Feature Request] Add the OpenWebText dataset | {
"login": "LysandreJik",
"id": 30755778,
"node_id": "MDQ6VXNlcjMwNzU1Nzc4",
"avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/LysandreJik",
"html_url": "https://github.com/LysandreJik",
"followers_url": "https://api.github.com/... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"We're experimenting with hosting the OpenWebText corpus on Zenodo for easier downloading. https://zenodo.org/record/3834942#.Xs1w8i-z2J8",
"Closing since it's been added in #660 "
] | 1,589,558,249,000 | 1,602,080,568,000 | 1,602,080,568,000 | MEMBER | null | null | The OpenWebText dataset is an open clone of OpenAI's WebText dataset. It can be used to train ELECTRA as is specified in the [README](https://www.github.com/google-research/electra).
More information and the download link are available [here](https://skylion007.github.io/OpenWebTextCorpus/). | https://api.github.com/repos/huggingface/datasets/issues/132/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/131 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/131/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/131/comments | https://api.github.com/repos/huggingface/datasets/issues/131/events | https://github.com/huggingface/datasets/issues/131 | 619,073,731 | MDU6SXNzdWU2MTkwNzM3MzE= | 131 | [Feature request] Add Toronto BookCorpus dataset | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"As far as I understand, `wikitext` is refer to `WikiText-103` and `WikiText-2` that created by researchers in Salesforce, and mostly used in traditional language modeling.\r\n\r\nYou might want to say `wikipedia`, a dump from wikimedia foundation.\r\n\r\nAlso I would like to have Toronto BookCorpus too ! Though it... | 1,589,557,844,000 | 1,593,379,651,000 | 1,593,379,651,000 | CONTRIBUTOR | null | null | I know the copyright/distribution of this one is complex, but it would be great to have! That, combined with the existing `wikitext`, would provide a complete dataset for pretraining models like BERT. | https://api.github.com/repos/huggingface/datasets/issues/131/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/130/comments | https://api.github.com/repos/huggingface/datasets/issues/130/events | https://github.com/huggingface/datasets/issues/130 | 619,035,440 | MDU6SXNzdWU2MTkwMzU0NDA= | 130 | Loading GLUE dataset loads CoLA by default | {
"login": "zphang",
"id": 1668462,
"node_id": "MDQ6VXNlcjE2Njg0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1668462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zphang",
"html_url": "https://github.com/zphang",
"followers_url": "https://api.github.com/users/zphang/foll... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"As a follow-up to this: It looks like the actual GLUE task name is supplied as the `name` argument. Is there a way to check what `name`s/sub-datasets are available under a grouping like GLUE? That information doesn't seem to be readily available in info from `nlp.list_datasets()`.\r\n\r\nEdit: I found the info und... | 1,589,554,550,000 | 1,590,617,295,000 | 1,590,617,295,000 | NONE | null | null | If I run:
```python
dataset = nlp.load_dataset('glue')
```
The resultant dataset seems to be CoLA by default, without throwing any error. This is in contrast to calling:
```python
metric = nlp.load_metric("glue")
```
which throws an error telling the user that they need to specify a task in GLUE. Should the... | https://api.github.com/repos/huggingface/datasets/issues/130/timeline | null | false |
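Following the comment above about the `name` argument, a sketch of selecting a GLUE sub-task explicitly so neither call falls back to a default (the task name is illustrative):
```python
import nlp

# Passing the config name avoids the silent CoLA default described above.
dataset = nlp.load_dataset('glue', 'mrpc')
metric = nlp.load_metric('glue', 'mrpc')
```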
https://api.github.com/repos/huggingface/datasets/issues/129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/129/comments | https://api.github.com/repos/huggingface/datasets/issues/129/events | https://github.com/huggingface/datasets/issues/129 | 618,997,725 | MDU6SXNzdWU2MTg5OTc3MjU= | 129 | [Feature request] Add Google Natural Question dataset | {
"login": "elyase",
"id": 1175888,
"node_id": "MDQ6VXNlcjExNzU4ODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1175888?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elyase",
"html_url": "https://github.com/elyase",
"followers_url": "https://api.github.com/users/elyase/foll... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Indeed, I think this one is almost ready cc @lhoestq ",
"I'm doing the latest adjustments to make the processing of the dataset run on Dataflow",
"Is there an update to this? It will be very beneficial for the QA community!",
"Still work in progress :)\r\nThe idea is to have the dataset already processed som... | 1,589,552,060,000 | 1,595,510,489,000 | 1,595,510,489,000 | NONE | null | null | Would be great to have https://github.com/google-research-datasets/natural-questions as an alternative to SQuAD. | https://api.github.com/repos/huggingface/datasets/issues/129/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/128 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/128/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/128/comments | https://api.github.com/repos/huggingface/datasets/issues/128/events | https://github.com/huggingface/datasets/issues/128 | 618,951,117 | MDU6SXNzdWU2MTg5NTExMTc= | 128 | Some error inside nlp.load_dataset() | {
"login": "polkaYK",
"id": 18486287,
"node_id": "MDQ6VXNlcjE4NDg2Mjg3",
"avatar_url": "https://avatars.githubusercontent.com/u/18486287?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polkaYK",
"html_url": "https://github.com/polkaYK",
"followers_url": "https://api.github.com/users/polkaY... | [] | closed | false | null | [] | null | [
"Google colab has an old version of Apache Arrow built-in.\r\nBe sure you execute the \"pip install\" cell and restart the notebook environment if the colab asks for it.",
"Thanks for reply, worked fine!\r\n"
] | 1,589,547,689,000 | 1,589,548,240,000 | 1,589,548,240,000 | NONE | null | null | First of all, nice work!
I am going through [this overview notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb)
In the simple step `dataset = nlp.load_dataset('squad', split='validation[:10%]')`
I get an error, which is connected with some inner code, I think:
`... | https://api.github.com/repos/huggingface/datasets/issues/128/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/127 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/127/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/127/comments | https://api.github.com/repos/huggingface/datasets/issues/127/events | https://github.com/huggingface/datasets/pull/127 | 618,909,042 | MDExOlB1bGxSZXF1ZXN0NDE4NTQ1MDcy | 127 | Update Overview.ipynb | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,543,208,000 | 1,589,543,247,000 | 1,589,543,245,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/127",
"html_url": "https://github.com/huggingface/datasets/pull/127",
"diff_url": "https://github.com/huggingface/datasets/pull/127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/127.patch"
} | update notebook | https://api.github.com/repos/huggingface/datasets/issues/127/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/126 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/126/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/126/comments | https://api.github.com/repos/huggingface/datasets/issues/126/events | https://github.com/huggingface/datasets/pull/126 | 618,897,499 | MDExOlB1bGxSZXF1ZXN0NDE4NTM1Mzc5 | 126 | remove webis | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,541,920,000 | 1,589,542,284,000 | 1,589,542,226,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/126",
"html_url": "https://github.com/huggingface/datasets/pull/126",
"diff_url": "https://github.com/huggingface/datasets/pull/126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/126.patch"
} | Remove webis from the dataset folder.
Our first dataset script that only lives on AWS :-) https://s3.console.aws.amazon.com/s3/buckets/datasets.huggingface.co/nlp/datasets/webis/tl_dr/?region=us-east-1 @julien-c @jplu | https://api.github.com/repos/huggingface/datasets/issues/126/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/125 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/125/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/125/comments | https://api.github.com/repos/huggingface/datasets/issues/125/events | https://github.com/huggingface/datasets/pull/125 | 618,869,048 | MDExOlB1bGxSZXF1ZXN0NDE4NTExNDE0 | 125 | [Newsroom] add newsroom | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,538,874,000 | 1,589,539,027,000 | 1,589,539,022,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/125",
"html_url": "https://github.com/huggingface/datasets/pull/125",
"diff_url": "https://github.com/huggingface/datasets/pull/125.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/125.patch"
} | I checked it with the data link of the mail you forwarded @thomwolf => works well! | https://api.github.com/repos/huggingface/datasets/issues/125/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/124 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/124/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/124/comments | https://api.github.com/repos/huggingface/datasets/issues/124/events | https://github.com/huggingface/datasets/pull/124 | 618,864,284 | MDExOlB1bGxSZXF1ZXN0NDE4NTA3NDUx | 124 | Xsum, require manual download of some files | {
"login": "mariamabarham",
"id": 38249783,
"node_id": "MDQ6VXNlcjM4MjQ5Nzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/38249783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariamabarham",
"html_url": "https://github.com/mariamabarham",
"followers_url": "https://api.githu... | [] | closed | false | null | [] | null | [] | 1,589,538,373,000 | 1,589,540,688,000 | 1,589,540,686,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/124",
"html_url": "https://github.com/huggingface/datasets/pull/124",
"diff_url": "https://github.com/huggingface/datasets/pull/124.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/124.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/124/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/123 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/123/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/123/comments | https://api.github.com/repos/huggingface/datasets/issues/123/events | https://github.com/huggingface/datasets/pull/123 | 618,820,140 | MDExOlB1bGxSZXF1ZXN0NDE4NDcxODU5 | 123 | [Tests] Local => aws | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"For each dataset, If there exist a `dataset_info.json`, then the command `nlp-cli test path/to/my/dataset --al_configs` is successful only if the `dataset_infos.json` is correct. The infos are correct if the size and checksums of the downloaded file are correct, and if the number of examples in each split are corr... | 1,589,533,945,000 | 1,589,537,172,000 | 1,589,537,006,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/123",
"html_url": "https://github.com/huggingface/datasets/pull/123",
"diff_url": "https://github.com/huggingface/datasets/pull/123.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/123.patch"
} | ## Change default Test from local => aws
By default we set `aws=True`, `local=False`, `slow=False`
### 1. RUN_AWS=1 (default)
This runs 4 tests per dataset script.
a) Does the dataset script have a valid etag / Can it be reached on AWS?
b) Can we load its `builder_class`?
c) Can we load **all** dataset c... | https://api.github.com/repos/huggingface/datasets/issues/123/timeline | null | true |
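A sketch of how these toggles would be combined on the command line (the pytest invocation is an assumption, not quoted from the PR):
```
RUN_AWS=1 RUN_LOCAL=0 RUN_SLOW=0 python -m pytest -sv tests/
```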
https://api.github.com/repos/huggingface/datasets/issues/122 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/122/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/122/comments | https://api.github.com/repos/huggingface/datasets/issues/122/events | https://github.com/huggingface/datasets/pull/122 | 618,813,182 | MDExOlB1bGxSZXF1ZXN0NDE4NDY2Mzc3 | 122 | Final cleanup of readme and metrics | {
"login": "thomwolf",
"id": 7353373,
"node_id": "MDQ6VXNlcjczNTMzNzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7353373?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/thomwolf",
"html_url": "https://github.com/thomwolf",
"followers_url": "https://api.github.com/users/thomw... | [] | closed | false | null | [] | null | [] | 1,589,533,252,000 | 1,630,698,009,000 | 1,589,533,342,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/122",
"html_url": "https://github.com/huggingface/datasets/pull/122",
"diff_url": "https://github.com/huggingface/datasets/pull/122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/122.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/122/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/121 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/121/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/121/comments | https://api.github.com/repos/huggingface/datasets/issues/121/events | https://github.com/huggingface/datasets/pull/121 | 618,790,040 | MDExOlB1bGxSZXF1ZXN0NDE4NDQ4MTkx | 121 | make style | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [] | 1,589,531,016,000 | 1,589,531,139,000 | 1,589,531,138,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/121",
"html_url": "https://github.com/huggingface/datasets/pull/121",
"diff_url": "https://github.com/huggingface/datasets/pull/121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/121.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/121/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/120 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/120/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/120/comments | https://api.github.com/repos/huggingface/datasets/issues/120/events | https://github.com/huggingface/datasets/issues/120 | 618,737,783 | MDU6SXNzdWU2MTg3Mzc3ODM= | 120 | 🐛 `map` not working | {
"login": "astariul",
"id": 43774355,
"node_id": "MDQ6VXNlcjQzNzc0MzU1",
"avatar_url": "https://avatars.githubusercontent.com/u/43774355?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/astariul",
"html_url": "https://github.com/astariul",
"followers_url": "https://api.github.com/users/ast... | [] | closed | false | null | [] | null | [
"I didn't assign the output 🤦♂️\r\n\r\n```python\r\ndataset.map(test)\r\n```\r\n\r\nshould be :\r\n\r\n```python\r\ndataset = dataset.map(test)\r\n```"
] | 1,589,524,988,000 | 1,589,526,158,000 | 1,589,526,158,000 | NONE | null | null | I'm trying to run a basic example (mapping function to add a prefix).
[Here is the colab notebook I'm using.](https://colab.research.google.com/drive/1YH4JCAy0R1MMSc-k_Vlik_s1LEzP_t1h?usp=sharing)
```python
import nlp
dataset = nlp.load_dataset('squad', split='validation[:10%]')
def test(sample):
samp... | https://api.github.com/repos/huggingface/datasets/issues/120/timeline | null | false |
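The resolving comment above explains the fix; a runnable version of the example with the missing assignment added:
```python
import nlp

dataset = nlp.load_dataset('squad', split='validation[:10%]')

def test(sample):
    sample['title'] = 'Prefix: ' + sample['title']  # illustrative edit
    return sample

# map() returns a new dataset rather than mutating in place, so the
# result must be assigned back (the fix from the comment above).
dataset = dataset.map(test)
```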