| url (string, 58-61 chars) | repository_url (string, 1 class) | labels_url (string, 72-75 chars) | comments_url (string, 67-70 chars) | events_url (string, 65-68 chars) | html_url (string, 46-51 chars) | id (int64, 599M-1.83B) | node_id (string, 18-32 chars) | number (int64, 1-6.09k) | title (string, 1-290 chars) | labels (list) | state (string, 2 classes) | locked (bool, 1 class) | milestone (dict) | comments (int64, 0-54) | created_at (string, 20 chars) | updated_at (string, 20 chars) | closed_at (string, 20 chars, nullable) | active_lock_reason (null) | body (string, 0-228k chars, nullable) | reactions (dict) | timeline_url (string, 67-70 chars) | performed_via_github_app (null) | state_reason (string, 3 classes) | draft (bool, 2 classes) | pull_request (dict) | is_pull_request (bool, 2 classes) | comments_text (list) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/2771 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2771/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2771/comments | https://api.github.com/repos/huggingface/datasets/issues/2771/events | https://github.com/huggingface/datasets/pull/2771 | 963,257,036 | MDExOlB1bGxSZXF1ZXN0NzA1OTExMDMw | 2,771 | [WIP][Common Voice 7] Add common voice 7.0 | [] | closed | false | null | 2 | 2021-08-07T16:01:10Z | 2021-12-06T23:24:02Z | 2021-12-06T23:24:02Z | null | This PR allows loading the new Common Voice dataset manually, as explained when doing:
```python
from datasets import load_dataset
ds = load_dataset("./datasets/datasets/common_voice_7", "ab")
```
=>
```
Please follow the manual download instructions:
You need t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2771/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2771/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2771.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2771",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2771.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2771"
} | true | [
"Hi ! I think the name `common_voice_7` is fine :)\r\nMoreover if the dataset_infos.json is missing I'm pretty sure you don't need to specify `ignore_verifications=True`",
"Hi, how about to add a new parameter \"version\" in the function load_dataset, something like: \r\n`load_dataset(\"common_voice\", \"lg\", ve... |
https://api.github.com/repos/huggingface/datasets/issues/5603 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5603/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5603/comments | https://api.github.com/repos/huggingface/datasets/issues/5603/events | https://github.com/huggingface/datasets/pull/5603 | 1,607,143,509 | PR_kwDODunzps5LJZzG | 5,603 | Don't compute checksums if not necessary in `datasets-cli test` | [] | closed | false | null | 3 | 2023-03-02T16:42:39Z | 2023-03-03T15:45:32Z | 2023-03-03T15:38:28Z | null | we only need them if there exists a `dataset_infos.json` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5603/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5603/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5603.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5603",
"merged_at": "2023-03-03T15:38:28Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5603.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/5111 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5111/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5111/comments | https://api.github.com/repos/huggingface/datasets/issues/5111/events | https://github.com/huggingface/datasets/issues/5111 | 1,408,143,170 | I_kwDODunzps5T7o9C | 5,111 | map and filter not working properly in multiprocessing with the new release 2.6.0 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 14 | 2022-10-13T17:00:55Z | 2022-10-17T08:26:59Z | 2022-10-14T14:59:59Z | null | ## Describe the bug
When mapping is used on a dataset with more than one process, there is a weird behavior when trying to use `filter`: it's as if only the samples from one worker are retrieved, and one needs to specify the same `num_proc` in `filter` for it to work properly. This doesn't happen with `datasets` version 2.5... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 1,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5111/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5111/timeline | null | completed | null | null | false | [
"Same bug exists with `num_proc=1` on colab. `3.7.14 (default, Sep 8 2022, 00:06:44) [GCC 7.5.0]` ",
"Thanks for reporting, @loubnabnl and for the additional information, @PartiallyTyped.\r\n\r\nHowever, I'm not able to reproduce this issue, neither locally nor on Colab:\r\n```\r\nDataset({\r\n features: ['re... |
https://api.github.com/repos/huggingface/datasets/issues/2344 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2344/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2344/comments | https://api.github.com/repos/huggingface/datasets/issues/2344/events | https://github.com/huggingface/datasets/issues/2344 | 885,331,505 | MDU6SXNzdWU4ODUzMzE1MDU= | 2,344 | Is there a way to join multiple datasets in one? | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | null | 2 | 2021-05-10T23:16:10Z | 2022-10-05T17:27:05Z | null | null | **Is your feature request related to a problem? Please describe.**
I need to join 2 datasets, one that is in the hub and another I've created from my files. Is there an easy way to join these 2?
**Describe the solution you'd like**
I'd like to join them with a merge or join method, just like pandas DataFrames.
**Add... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2344/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2344/timeline | null | null | null | null | false | [
"Hi ! We don't have `join`/`merge` on a certain column as in pandas.\r\nMaybe you can just use the [concatenate_datasets](https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate#datasets.concatenate_datasets) function.\r\n",
"Hi! You can use `datasets_sql` for that now. As o... |
https://api.github.com/repos/huggingface/datasets/issues/3848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3848/comments | https://api.github.com/repos/huggingface/datasets/issues/3848/events | https://github.com/huggingface/datasets/issues/3848 | 1,162,076,902 | I_kwDODunzps5FQ-Lm | 3,848 | NonMatchingChecksumError when checksum is None | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2022-03-08T00:24:12Z | 2022-03-15T14:37:26Z | 2022-03-15T12:28:23Z | null | I ran into the following error when adding a new dataset:
```bash
expected_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': None, 'num_bytes': 40662}}
recorded_checksums = {'https://adversarialglue.github.io/dataset/dev.zip': {'checksum': 'efb4cbd3aa4a87bfaffc310ae951981cc0a36c6c71c64... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3848/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3848/timeline | null | completed | null | null | false | [
"Hi @jxmorris12, thanks for reporting.\r\n\r\nThe objective of `verify_checksums` is to check that both checksums are equal. Therefore if one is None and the other is non-None, they are not equal, and the function accordingly raises a NonMatchingChecksumError. That behavior is expected.\r\n\r\nThe question is: how ... |
https://api.github.com/repos/huggingface/datasets/issues/3716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3716/comments | https://api.github.com/repos/huggingface/datasets/issues/3716/events | https://github.com/huggingface/datasets/issues/3716 | 1,136,831,092 | I_kwDODunzps5Dwqp0 | 3,716 | `FaissIndex` to support multiple GPU and `custom_index` | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 2 | 2022-02-14T06:21:43Z | 2022-03-07T16:28:56Z | 2022-03-07T16:28:56Z | null | **Is your feature request related to a problem? Please describe.**
Currently, because `device` is of the type `int | None`, to leverage `faiss-gpu`'s multi-gpu support, you need to create a `custom_index`. However, if using a `custom_index` created by e.g. `faiss.index_cpu_to_all_gpus`, then `FaissIndex.save` does not ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3716/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3716/timeline | null | completed | null | null | false | [
"Hi @rentruewang, thansk for reporting and for your PR!!! We should definitely support this. ",
"@albertvillanova Great! :)"
] |
https://api.github.com/repos/huggingface/datasets/issues/5958 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5958/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5958/comments | https://api.github.com/repos/huggingface/datasets/issues/5958/events | https://github.com/huggingface/datasets/pull/5958 | 1,757,265,971 | PR_kwDODunzps5TA3__ | 5,958 | set dev version | [] | closed | false | null | 3 | 2023-06-14T16:26:34Z | 2023-06-14T16:34:55Z | 2023-06-14T16:26:51Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5958/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5958/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5958.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5958",
"merged_at": "2023-06-14T16:26:51Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5958.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5958). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/4845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4845/comments | https://api.github.com/repos/huggingface/datasets/issues/4845/events | https://github.com/huggingface/datasets/pull/4845 | 1,337,928,283 | PR_kwDODunzps49IOjf | 4,845 | Mark CI tests as xfail if Hub HTTP error | [] | closed | false | null | 1 | 2022-08-13T10:45:11Z | 2022-08-23T04:57:12Z | 2022-08-23T04:42:26Z | null | In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors.
This PR:
- marks tests as xfailed only if the Hub raises a 500 error for:
- test_upstream_hub
- makes pytest report the xfailed/xpa... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4845/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4845/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4845.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4845",
"merged_at": "2022-08-23T04:42:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4845.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/241/comments | https://api.github.com/repos/huggingface/datasets/issues/241/events | https://github.com/huggingface/datasets/pull/241 | 631,703,079 | MDExOlB1bGxSZXF1ZXN0NDI4NTQwMDM0 | 241 | Fix empty cache dir | [] | closed | false | null | 2 | 2020-06-05T15:45:22Z | 2020-06-08T08:35:33Z | 2020-06-08T08:35:31Z | null | If the cache dir of a dataset is empty, the dataset fails to load and throws a FileNotFoundError. We could end up with an empty cache dir because there was a line in the code that created the cache dir without using a temp dir. Using a temp dir is useful as it gets renamed to the real cache dir only if the full process is... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/241/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/241/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/241.diff",
"html_url": "https://github.com/huggingface/datasets/pull/241",
"merged_at": "2020-06-08T08:35:31Z",
"patch_url": "https://github.com/huggingface/datasets/pull/241.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/241... | true | [
"Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think",
"> Looks great! Will this change force all cached datasets to be redownloaded? But even if it does, it shoud not be a big problem, I think\r\n\r\nNo it shouldn't force to redo... |
https://api.github.com/repos/huggingface/datasets/issues/681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/681/comments | https://api.github.com/repos/huggingface/datasets/issues/681/events | https://github.com/huggingface/datasets/pull/681 | 710,075,721 | MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz | 681 | Adding missing @property (+2 small flake8 fixes). | [] | closed | false | null | 0 | 2020-09-28T08:53:53Z | 2020-09-28T10:26:13Z | 2020-09-28T10:26:09Z | null | Fixes #678 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/681/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/681/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/681.diff",
"html_url": "https://github.com/huggingface/datasets/pull/681",
"merged_at": "2020-09-28T10:26:09Z",
"patch_url": "https://github.com/huggingface/datasets/pull/681.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/681... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/412/comments | https://api.github.com/repos/huggingface/datasets/issues/412/events | https://github.com/huggingface/datasets/issues/412 | 660,047,139 | MDU6SXNzdWU2NjAwNDcxMzk= | 412 | Unable to load XTREME dataset from disk | [] | closed | false | null | 3 | 2020-07-18T09:55:00Z | 2020-07-21T08:15:44Z | 2020-07-21T08:15:44Z | null | Hi 🤗 team!
## Description of the problem
Following the [docs](https://huggingface.co/nlp/loading_datasets.html?highlight=xtreme#manually-downloading-files) I'm trying to load the `PAN-X.fr` dataset from the [XTREME](https://github.com/google-research/xtreme) benchmark.
I have manually downloaded the `AmazonPho... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/412/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/412/timeline | null | completed | null | null | false | [
"Hi @lewtun, you have to provide the full path to the downloaded file for example `/home/lewtum/..`",
"I was able to repro. Opening a PR to fix that.\r\nThanks for reporting this issue !",
"Thanks for the rapid fix @lhoestq!"
] |
https://api.github.com/repos/huggingface/datasets/issues/4528 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4528/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4528/comments | https://api.github.com/repos/huggingface/datasets/issues/4528/events | https://github.com/huggingface/datasets/issues/4528 | 1,276,679,155 | I_kwDODunzps5MGJPz | 4,528 | Memory leak when iterating a Dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2022-06-20T10:03:14Z | 2022-09-12T08:51:39Z | 2022-09-12T08:51:39Z | null | ## Describe the bug
It seems that memory never gets freed after iterating a `Dataset` (using `.map()` or a simple `for` loop)
## Steps to reproduce the bug
```python
import gc
import logging
import time
import pyarrow
from datasets import load_dataset
from tqdm import trange
import os, psutil
logging.ba... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4528/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4528/timeline | null | completed | null | null | false | [
"Is someone assigned to this issue?",
"The same issue is being debugged here: https://github.com/huggingface/datasets/issues/4883\r\n",
"Here is a modified repro example that makes it easier to see the leak:\r\n\r\n```\r\n$ cat ds2.py\r\nimport gc, sys\r\nimport time\r\nfrom datasets import load_dataset\r\nimpo... |
https://api.github.com/repos/huggingface/datasets/issues/2442 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2442/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2442/comments | https://api.github.com/repos/huggingface/datasets/issues/2442/events | https://github.com/huggingface/datasets/pull/2442 | 909,677,029 | MDExOlB1bGxSZXF1ZXN0NjYwMjE1ODY1 | 2,442 | add english language tags for ~100 datasets | [] | closed | false | null | 1 | 2021-06-02T16:24:56Z | 2021-06-04T09:51:40Z | 2021-06-04T09:51:39Z | null | As discussed on Slack, I have manually checked for ~100 datasets that they have at least one subset in English. This information was missing so adding into the READMEs.
Note that I didn't check all the subsets so it's possible that some of the datasets have subsets in other languages than English... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2442/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2442/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2442.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2442",
"merged_at": "2021-06-04T09:51:39Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2442.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Fixing the tags of all the datasets is out of scope for this PR so I'm merging even though the CI fails because of the missing tags"
] |
https://api.github.com/repos/huggingface/datasets/issues/2505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2505/comments | https://api.github.com/repos/huggingface/datasets/issues/2505/events | https://github.com/huggingface/datasets/pull/2505 | 921,234,797 | MDExOlB1bGxSZXF1ZXN0NjcwMjY2NjQy | 2,505 | Make numpy arrow extractor faster | [] | closed | false | null | 5 | 2021-06-15T10:11:32Z | 2021-06-28T09:53:39Z | 2021-06-28T09:53:38Z | null | I changed the NumpyArrowExtractor to call directly to_numpy and see if it can lead to speed-ups as discussed in https://github.com/huggingface/datasets/issues/2498
This could make the numpy/torch/tf/jax formatting faster | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 2,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2505/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2505/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2505.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2505",
"merged_at": "2021-06-28T09:53:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2505.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Looks like we have a nice speed up in some benchmarks. For example:\r\n- `read_formatted numpy 5000`: 4.584777 sec -> 0.487113 sec\r\n- `read_formatted torch 5000`: 4.565676 sec -> 1.289514 sec",
"Can we convert this draft to PR @lhoestq ?",
"Ready for review ! cc @vblagoje",
"@lhoestq I tried the branch a... |
https://api.github.com/repos/huggingface/datasets/issues/4374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4374/comments | https://api.github.com/repos/huggingface/datasets/issues/4374/events | https://github.com/huggingface/datasets/issues/4374 | 1,241,860,535 | I_kwDODunzps5KBUm3 | 4,374 | extremely slow processing when using a custom dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
},
{
"color": "d876e3",
"default": true,
"descript... | closed | false | null | 2 | 2022-05-19T14:18:05Z | 2023-07-25T15:07:17Z | 2023-07-25T15:07:16Z | null | ## processing a custom dataset loaded as .txt file is extremely slow, compared to a dataset of similar volume from the hub
I have a large .txt file of 22 GB which I load into an HF dataset
`lang_dataset = datasets.load_dataset("text", data_files="hi.txt")`
further i use a pre-processing function to clean the d... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4374/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4374/timeline | null | completed | null | null | false | [
"Hi !\r\n\r\nMy guess is that some examples in your dataset are bigger than your RAM, and therefore loading them in RAM to pass them to `remove_non_indic_sentences` takes forever because it might use SWAP memory.\r\n\r\nMaybe several examples in your dataset are grouped together, can you check `len(lang_dataset[\"t... |
https://api.github.com/repos/huggingface/datasets/issues/1280 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1280/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1280/comments | https://api.github.com/repos/huggingface/datasets/issues/1280/events | https://github.com/huggingface/datasets/pull/1280 | 759,151,028 | MDExOlB1bGxSZXF1ZXN0NTM0MTk2MDc0 | 1,280 | disaster response messages dataset | [] | closed | false | null | 2 | 2020-12-08T07:27:16Z | 2020-12-09T16:21:57Z | 2020-12-09T16:21:57Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1280/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1280/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1280.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1280",
"merged_at": "2020-12-09T16:21:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1280.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"I have added the Readme.md as well, the PR is ready for review. \r\n\r\nThank you ",
"Hi @lhoestq I have updated the code and files. Please if you could check once.\r\n\r\nThank you"
] | |
https://api.github.com/repos/huggingface/datasets/issues/1758 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1758/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1758/comments | https://api.github.com/repos/huggingface/datasets/issues/1758/events | https://github.com/huggingface/datasets/issues/1758 | 790,626,116 | MDU6SXNzdWU3OTA2MjYxMTY= | 1,758 | dataset.search() (elastic) cannot reliably retrieve search results | [] | closed | false | null | 2 | 2021-01-21T02:26:37Z | 2021-01-22T00:25:50Z | 2021-01-22T00:25:50Z | null | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1758/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1758/timeline | null | completed | null | null | false | [
"Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?",
"Thanks for the feedback! I added a 30 second \"sleep\" and that seemed to work well!"
] |
https://api.github.com/repos/huggingface/datasets/issues/922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/922/comments | https://api.github.com/repos/huggingface/datasets/issues/922/events | https://github.com/huggingface/datasets/pull/922 | 753,559,130 | MDExOlB1bGxSZXF1ZXN0NTI5NjEzOTA4 | 922 | Add XOR QA Dataset | [] | closed | false | null | 4 | 2020-11-30T15:10:54Z | 2020-12-02T03:12:21Z | 2020-12-02T03:12:21Z | null | Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/922/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/922/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/922.diff",
"html_url": "https://github.com/huggingface/datasets/pull/922",
"merged_at": "2020-12-02T03:12:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/922.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/922... | true | [
"Hi @sumanthd17 \r\n\r\nLooks like a good start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)",
"I followed the instructions mentioned there but my datas... |
https://api.github.com/repos/huggingface/datasets/issues/1252 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1252/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1252/comments | https://api.github.com/repos/huggingface/datasets/issues/1252/events | https://github.com/huggingface/datasets/pull/1252 | 758,511,388 | MDExOlB1bGxSZXF1ZXN0NTMzNjczMDcx | 1,252 | Add Naver sentiment movie corpus | [] | closed | false | null | 0 | 2020-12-07T13:33:45Z | 2020-12-08T14:32:33Z | 2020-12-08T14:21:37Z | null | Supersedes #1168
> This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/ant... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1252/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1252/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1252.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1252",
"merged_at": "2020-12-08T14:21:37Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1252.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/2560 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2560/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2560/comments | https://api.github.com/repos/huggingface/datasets/issues/2560/events | https://github.com/huggingface/datasets/pull/2560 | 932,143,634 | MDExOlB1bGxSZXF1ZXN0Njc5NTMyODk4 | 2,560 | fix Dataset.map when num_procs > num rows | [] | closed | false | null | 3 | 2021-06-29T02:24:11Z | 2021-06-29T15:00:18Z | 2021-06-29T14:53:31Z | null | closes #2470
## Testing notes
To run updated tests:
```sh
pytest tests/test_arrow_dataset.py -k "BaseDatasetTest and test_map_multiprocessing" -s
```
With Python code (to view warning):
```python
from datasets import Dataset
dataset = Dataset.from_dict({"x": ["sample"]})
print(len(dataset))
dataset.map... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2560/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2560/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2560.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2560",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2560.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2560"
} | true | [
"Hi ! Thanks for fixing this :)\r\n\r\nLooks like you have tons of changes due to code formatting.\r\nWe're using `black` for this, with a custom line length. To run our code formatting, you just need to run\r\n```\r\nmake style\r\n```\r\n\r\nThen for the windows error in the CI, I'm looking into it. It's probably ... |
https://api.github.com/repos/huggingface/datasets/issues/622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/622/comments | https://api.github.com/repos/huggingface/datasets/issues/622/events | https://github.com/huggingface/datasets/issues/622 | 700,225,826 | MDU6SXNzdWU3MDAyMjU4MjY= | 622 | load_dataset for text files not working | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 41 | 2020-09-12T12:49:28Z | 2020-10-28T11:07:31Z | 2020-10-28T11:07:30Z | null | Trying the following snippet, I get different problems on Linux and Windows.
```python
dataset = load_dataset("text", data_files="data.txt")
# or
dataset = load_dataset("text", data_files=["data.txt"])
```
(ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ... | {
"+1": 3,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 3,
"url": "https://api.github.com/repos/huggingface/datasets/issues/622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/622/timeline | null | completed | null | null | false | [
"Can you give us more information on your os and pip environments (pip list)?",
"@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2... |
https://api.github.com/repos/huggingface/datasets/issues/4547 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4547/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4547/comments | https://api.github.com/repos/huggingface/datasets/issues/4547/events | https://github.com/huggingface/datasets/pull/4547 | 1,282,160,517 | PR_kwDODunzps46Ot5u | 4,547 | [CI] Fix some warnings | [] | closed | false | null | 4 | 2022-06-23T10:10:49Z | 2022-06-28T14:10:57Z | 2022-06-28T13:59:54Z | null | There are some warnings in the CI that are annoying, I tried to remove most of them | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4547/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4547/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4547.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4547",
"merged_at": "2022-06-28T13:59:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4547.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"There is a CI failure only related to the missing content of the universal_dependencies dataset card, we can ignore this failure in this PR",
"good catch, I thought I resolved them all sorry",
"Alright it should be good now"
] |
https://api.github.com/repos/huggingface/datasets/issues/1282 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1282/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1282/comments | https://api.github.com/repos/huggingface/datasets/issues/1282/events | https://github.com/huggingface/datasets/pull/1282 | 759,208,335 | MDExOlB1bGxSZXF1ZXN0NTM0MjQ4NzI5 | 1,282 | add thaiqa_squad | [] | closed | false | null | 0 | 2020-12-08T08:14:38Z | 2020-12-08T18:36:18Z | 2020-12-08T18:36:18Z | null | Example format is a little different from SQuAD since `thaiqa` always has one answer per question, so I added a check to convert answers to lists if they are not already one, to future-proof additional questions that might have multiple answers.
`thaiqa_squad` is an open-domain, extractive question answering dataset ... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1282/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1282/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1282.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1282",
"merged_at": "2020-12-08T18:36:18Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1282.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5365/comments | https://api.github.com/repos/huggingface/datasets/issues/5365/events | https://github.com/huggingface/datasets/pull/5365 | 1,498,422,466 | PR_kwDODunzps5Fi6ZD | 5,365 | fix: image array should support other formats than uint8 | [] | closed | false | null | 4 | 2022-12-15T13:17:50Z | 2023-01-26T18:46:45Z | 2023-01-26T18:39:36Z | null | Currently images that are provided as ndarrays, but not in `uint8` format, are going to lose data. For example, in a depth image where the data is in float32 format, the type-casting to uint8 will basically make the whole image blank.
`PIL.Image.fromarray` [does support mode `F`](https://pillow.readthedocs.io/e... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5365/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5365/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5365.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5365",
"merged_at": "2023-01-26T18:39:36Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5365.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Hi, thanks for working on this! \r\n\r\nI agree that the current type-casting (always cast to `np.uint8` as Tensorflow Datasets does) is a bit too harsh. However, not all dtypes are supported in `Image.fromarray` (e.g. np.int64), so ... |
https://api.github.com/repos/huggingface/datasets/issues/4185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4185/comments | https://api.github.com/repos/huggingface/datasets/issues/4185/events | https://github.com/huggingface/datasets/issues/4185 | 1,209,429,743 | I_kwDODunzps5IFm7v | 4,185 | Librispeech documentation, clarification on format | [] | open | false | null | 8 | 2022-04-20T09:35:55Z | 2022-04-21T11:00:53Z | null | null | https://github.com/huggingface/datasets/blob/cd3ce34ab1604118351e1978d26402de57188901/datasets/librispeech_asr/librispeech_asr.py#L53
> Note that in order to limit the required storage for preparing this dataset, the audio
> is stored in the .flac format and is not converted to a float32 array. To convert, the audi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4185/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4185/timeline | null | null | null | null | false | [
"(@patrickvonplaten )",
"Also cc @lhoestq here",
"The documentation in the code is definitely outdated - thanks for letting me know, I'll remove it in https://github.com/huggingface/datasets/pull/4184 .\r\n\r\nYou're exactly right `audio` `array` already decodes the audio file to the correct waveform. This is d... |
https://api.github.com/repos/huggingface/datasets/issues/3754 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3754/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3754/comments | https://api.github.com/repos/huggingface/datasets/issues/3754/events | https://github.com/huggingface/datasets/issues/3754 | 1,142,886,536 | I_kwDODunzps5EHxCI | 3,754 | Overflowing indices in `select` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2022-02-18T11:30:52Z | 2022-02-18T11:38:23Z | 2022-02-18T11:38:23Z | null | ## Describe the bug
The `Dataset.select` function seems to accept indices that are larger than the dataset size and seems to effectively use `index % len(ds)`.
## Steps to reproduce the bug
```python
from datasets import Dataset
ds = Dataset.from_dict({"test": [1,2,3]})
ds = ds.select(range(5))
print(ds)
p... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3754/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3754/timeline | null | completed | null | null | false | [
"Fixed on master (see https://github.com/huggingface/datasets/pull/3719).",
"Awesome, I did not find that one! Thanks."
] |
https://api.github.com/repos/huggingface/datasets/issues/4045 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4045/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4045/comments | https://api.github.com/repos/huggingface/datasets/issues/4045/events | https://github.com/huggingface/datasets/pull/4045 | 1,183,661,091 | PR_kwDODunzps41KtfV | 4,045 | Fix CLI dummy data generation | [] | closed | false | null | 1 | 2022-03-28T16:09:15Z | 2022-03-31T15:04:12Z | 2022-03-31T14:59:06Z | null | PR:
- #3868
broke the CLI dummy data generation.
Fix #4044. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4045/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4045/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4045.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4045",
"merged_at": "2022-03-31T14:59:06Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4045.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5130 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5130/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5130/comments | https://api.github.com/repos/huggingface/datasets/issues/5130/events | https://github.com/huggingface/datasets/pull/5130 | 1,413,435,000 | PR_kwDODunzps5BBxXX | 5,130 | Avoid extra cast in `class_encode_column` | [] | closed | false | null | 1 | 2022-10-18T15:31:24Z | 2022-10-19T11:53:02Z | 2022-10-19T11:50:46Z | null | Pass the updated features to `map` to avoid the `cast` in `class_encode_column`. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5130/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5130/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5130.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5130",
"merged_at": "2022-10-19T11:50:46Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5130.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2027 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2027/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2027/comments | https://api.github.com/repos/huggingface/datasets/issues/2027/events | https://github.com/huggingface/datasets/pull/2027 | 828,490,444 | MDExOlB1bGxSZXF1ZXN0NTkwMjkzNDA1 | 2,027 | Update format columns in Dataset.rename_columns | [] | closed | false | null | 0 | 2021-03-10T23:50:59Z | 2021-03-11T14:38:40Z | 2021-03-11T14:38:40Z | null | Fixes #2026 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2027/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2027/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2027.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2027",
"merged_at": "2021-03-11T14:38:40Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2027.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3104 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3104/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3104/comments | https://api.github.com/repos/huggingface/datasets/issues/3104/events | https://github.com/huggingface/datasets/issues/3104 | 1,029,080,412 | I_kwDODunzps49VoVc | 3,104 | Missing Zenodo 1.13.3 release | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 1 | 2021-10-18T12:57:18Z | 2021-10-22T13:22:25Z | 2021-10-22T13:22:24Z | null | After `datasets` 1.13.3 release, this does not appear in Zenodo releases: https://zenodo.org/record/5570305
TODO:
- [x] Contact Zenodo support
- [x] Check it is fixed | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3104/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3104/timeline | null | completed | null | null | false | [
"Zenodo has fixed on their side the 1.13.3 release: https://zenodo.org/record/5589150"
] |
https://api.github.com/repos/huggingface/datasets/issues/455 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/455/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/455/comments | https://api.github.com/repos/huggingface/datasets/issues/455/events | https://github.com/huggingface/datasets/pull/455 | 668,037,965 | MDExOlB1bGxSZXF1ZXN0NDU4NTk4NTUw | 455 | Add bleurt | [] | closed | false | null | 4 | 2020-07-29T18:08:32Z | 2020-07-31T13:56:14Z | 2020-07-31T13:56:14Z | null | This PR adds the BLEURT metric to the library.
The BLEURT `Metric` downloads a TF checkpoint corresponding to its `config_name` at creation (in the `_info` function). Default is set to `bleurt-base-128`.
Note that the default in the original package is `bleurt-tiny-128`, but they throw a warning and recommend usi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/455/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/455/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/455.diff",
"html_url": "https://github.com/huggingface/datasets/pull/455",
"merged_at": "2020-07-31T13:56:14Z",
"patch_url": "https://github.com/huggingface/datasets/pull/455.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/455... | true | [
"Sorry one nit: Could we use named arguments for the call to BLEURT?\r\n\r\ni.e. \r\n scores = self.scorer.score(references=references, candidates=predictions)\r\n\r\n(i.e. so it is less bug prone)\r\n",
"Following up on Ankur's comment---we are going to drop support for\npositional (not named) arguments i... |
https://api.github.com/repos/huggingface/datasets/issues/4668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4668/comments | https://api.github.com/repos/huggingface/datasets/issues/4668/events | https://github.com/huggingface/datasets/issues/4668 | 1,299,735,893 | I_kwDODunzps5NeGVV | 4,668 | Dataset Viewer issue for hungnm/multilingual-amazon-review-sentiment-processed | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-07-09T18:04:13Z | 2022-07-11T07:47:47Z | 2022-07-11T07:47:47Z | null | ### Link
https://huggingface.co/hungnm/multilingual-amazon-review-sentiment
### Description
_No response_
### Owner
Yes | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4668/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4668/timeline | null | completed | null | null | false | [
"It seems like a private dataset. The viewer is currently not supported on private datasets."
] |
https://api.github.com/repos/huggingface/datasets/issues/3622 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3622/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3622/comments | https://api.github.com/repos/huggingface/datasets/issues/3622/events | https://github.com/huggingface/datasets/issues/3622 | 1,112,831,661 | I_kwDODunzps5CVHat | 3,622 | Extend support for streaming datasets that use os.path.relpath | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2022-01-24T15:58:23Z | 2022-02-04T14:03:54Z | 2022-02-04T14:03:54Z | null | Extend support for streaming datasets that use `os.path.relpath`.
This feature will also be useful to yield the relative path of audio or image files.
| {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3622/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3622/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1413 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1413/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1413/comments | https://api.github.com/repos/huggingface/datasets/issues/1413/events | https://github.com/huggingface/datasets/pull/1413 | 760,615,090 | MDExOlB1bGxSZXF1ZXN0NTM1NDE4MDY2 | 1,413 | Add OffComBR | [] | closed | false | null | 3 | 2020-12-09T19:38:08Z | 2020-12-14T18:06:45Z | 2020-12-14T16:51:10Z | null | Add [OffComBR](https://github.com/rogersdepelle/OffComBR) from [Offensive Comments in the Brazilian Web: a dataset and baseline results](https://sol.sbc.org.br/index.php/brasnam/article/view/3260/3222) paper.
But I'm having a hard time generating dummy data since the original dataset extension is `.arff` and the [_crea... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1413/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1413/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1413.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1413",
"merged_at": "2020-12-14T16:51:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1413.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Hello @hugoabonizio, thanks for the contribution.\r\nRegarding the fake data, you can generate it manually.\r\nRunning the `python datasets-cli dummy_data datasets/offcombr` should give you instructions on how to manually create the dummy data.\r\nFor reference, here is a spec for `.arff` files : https://www.cs.wa... |
https://api.github.com/repos/huggingface/datasets/issues/4100 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4100/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4100/comments | https://api.github.com/repos/huggingface/datasets/issues/4100/events | https://github.com/huggingface/datasets/pull/4100 | 1,193,393,959 | PR_kwDODunzps41q4ce | 4,100 | Improve RedCaps dataset card | [] | closed | false | null | 2 | 2022-04-05T15:57:14Z | 2022-04-13T14:08:54Z | 2022-04-13T14:02:26Z | null | This PR modifies the RedCaps card to:
* fix the formatting of the Point of Contact fields on the Hub
* speed up the image fetching logic (aligns it with the [img2dataset](https://github.com/rom1504/img2dataset) tool) and make it more robust (return None if **any** exception is thrown) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4100/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4100/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4100.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4100",
"merged_at": "2022-04-13T14:02:26Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4100.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I find this preprocessing a bit too specific to add it as a method to `datasets` as it's only useful in the context of CV (and we support multiple modalities). However, I agree it would be great to move this code to another lib to av... |
https://api.github.com/repos/huggingface/datasets/issues/1722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1722/comments | https://api.github.com/repos/huggingface/datasets/issues/1722/events | https://github.com/huggingface/datasets/pull/1722 | 783,921,679 | MDExOlB1bGxSZXF1ZXN0NTUzMTk3MTg4 | 1,722 | Added unfiltered versions of the Wiki-Auto training data for the GEM simplification task. | [] | closed | false | null | 1 | 2021-01-12T05:26:04Z | 2021-01-12T18:14:53Z | 2021-01-12T17:35:57Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1722/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1722/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1722.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1722",
"merged_at": "2021-01-12T17:35:57Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1722.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"The current version of the Wiki-Auto dataset contains a filtered version of the aligned dataset. The commit adds unfiltered versions of the data that can be useful to the GEM task participants."
] | |
https://api.github.com/repos/huggingface/datasets/issues/3580 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3580/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3580/comments | https://api.github.com/repos/huggingface/datasets/issues/3580/events | https://github.com/huggingface/datasets/issues/3580 | 1,104,663,242 | I_kwDODunzps5B19LK | 3,580 | Bug in wiki bio load | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 4 | 2022-01-15T10:04:33Z | 2022-01-31T08:38:09Z | 2022-01-31T08:38:09Z | null |
wiki_bio is failing to load because of a failing drive link. Can someone fix this?

\r\nDownloading: 7.58kB [00:00, 4.42MB/s]\r\nDownloading: 2.71kB [00:00, 1.30MB/s]\r\nUsing custom data configuration default\r\nDownloading and preparing dataset wiki_bio/default (download: 318... |
https://api.github.com/repos/huggingface/datasets/issues/4181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4181/comments | https://api.github.com/repos/huggingface/datasets/issues/4181/events | https://github.com/huggingface/datasets/issues/4181 | 1,208,194,805 | I_kwDODunzps5IA5b1 | 4,181 | Support streaming FLEURS dataset | [
{
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library",
"id": 2067388877,
"name": "dataset bug",
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug"
}
] | closed | false | null | 9 | 2022-04-19T11:09:56Z | 2022-07-25T11:44:02Z | 2022-07-25T11:44:02Z | null | ## Dataset viewer issue for '*name of the dataset*'
https://huggingface.co/datasets/google/fleurs
```
Status code: 400
Exception: NotImplementedError
Message: Extraction protocol for TAR archives like 'https://storage.googleapis.com/xtreme_translations/FLEURS/af_za.tar.gz' is not implemented in str... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4181/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4181/timeline | null | completed | null | null | false | [
"Yes, you just have to use `dl_manager.iter_archive` instead of `dl_manager.download_and_extract`.\r\n\r\nThat's because `download_and_extract` doesn't support TAR archives in streaming mode.",
"Tried to make it streamable, but I don't think it's really possible. @lhoestq @polinaeterna maybe you guys can check: \... |
https://api.github.com/repos/huggingface/datasets/issues/5963 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5963/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5963/comments | https://api.github.com/repos/huggingface/datasets/issues/5963/events | https://github.com/huggingface/datasets/issues/5963 | 1,762,774,457 | I_kwDODunzps5pEc25 | 5,963 | Got an error _pickle.PicklingError use Dataset.from_spark. | [] | closed | false | null | 5 | 2023-06-19T05:30:35Z | 2023-07-24T11:55:46Z | 2023-07-24T11:55:46Z | null | python 3.9.2
Got a `_pickle.PicklingError` error when using `Dataset.from_spark`.
The dataset import loads data from a Spark dataframe using a multi-node Spark cluster:
df = spark.read.parquet(args.input_data).repartition(50)
ds = Dataset.from_spark(df, keep_in_memory=True,
cache_dir="... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5963/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5963/timeline | null | completed | null | null | false | [
"I got the error using the method from_spark when using a multi-node Spark cluster. It seems \"from_spark\" can only be used locally?",
"@lhoestq ",
"cc @maddiedawson it looks like there's an issue with `_validate_cache_dir`?\r\n\r\nIt looks like the function passed to mapPartitions has a reference to the Spark dataset build... |
https://api.github.com/repos/huggingface/datasets/issues/5129 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5129/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5129/comments | https://api.github.com/repos/huggingface/datasets/issues/5129/events | https://github.com/huggingface/datasets/issues/5129 | 1,413,031,664 | I_kwDODunzps5UOSbw | 5,129 | unexpected `cast` or `class_encode_column` result after `rename_column` | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 4 | 2022-10-18T11:15:24Z | 2022-10-19T03:02:26Z | 2022-10-19T03:02:26Z | null | ## Describe the bug
When invoke `cast` or `class_encode_column` to a colunm renamed by `rename_column` , it will convert all the variables in this column into one variable. I also run this script in version 2.5.2, this bug does not appear. So I switched to the older version.
## Steps to reproduce the bug
```python... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5129/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5129/timeline | null | completed | null | null | false | [
"Hi! Unfortunately, I can't reproduce this issue locally (in Python 3.7/3.10) or in Colab. I would assume this is due to a bug we fixed in the latest release, but your version is up-to-date, so I'm not sure if there is something we can do to help...",
"Hi, 方子东. I tried running the code with exact the same configu... |
https://api.github.com/repos/huggingface/datasets/issues/4736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4736/comments | https://api.github.com/repos/huggingface/datasets/issues/4736/events | https://github.com/huggingface/datasets/issues/4736 | 1,314,931,996 | I_kwDODunzps5OYEUc | 4,736 | Dataset Viewer issue for deepklarity/huggingface-spaces-dataset | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 1 | 2022-07-22T12:14:18Z | 2022-07-22T13:46:38Z | 2022-07-22T13:46:38Z | null | ### Link
https://huggingface.co/datasets/deepklarity/huggingface-spaces-dataset/viewer/deepklarity--huggingface-spaces-dataset/train
### Description
Hi Team,
I'm getting the following error on a uploaded dataset. I'm getting the same status for a couple of hours now. The dataset size is `<1MB` and the format is cs... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4736/timeline | null | completed | null | null | false | [
"Thanks for reporting. You're right, workers were under-provisioned due to a manual error, and the job queue was full. It's fixed now."
] |
https://api.github.com/repos/huggingface/datasets/issues/3738 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3738/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3738/comments | https://api.github.com/repos/huggingface/datasets/issues/3738/events | https://github.com/huggingface/datasets/issues/3738 | 1,140,164,253 | I_kwDODunzps5D9Yad | 3,738 | For data-only datasets, streaming and non-streaming don't behave the same | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | open | false | null | 9 | 2022-02-16T15:20:57Z | 2022-02-21T14:24:55Z | null | null | See https://huggingface.co/datasets/huggingface/transformers-metadata: it only contains two JSON files.
In streaming mode, the files are concatenated, and thus the rows might be dictionaries with different keys:
```python
import datasets as ds
iterable_dataset = ds.load_dataset("huggingface/transformers-metadat... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3738/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3738/timeline | null | null | null | null | false | [
"Note that we might change the heuristic and create a different config per file, at least in that case.",
"Hi @severo, thanks for reporting.\r\n\r\nYes, this happens because when non-streaming, a cast of all data is done in order to \"concatenate\" it all into a single dataset (thus the error), while this casting... |
https://api.github.com/repos/huggingface/datasets/issues/1611 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1611/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1611/comments | https://api.github.com/repos/huggingface/datasets/issues/1611/events | https://github.com/huggingface/datasets/issues/1611 | 771,486,456 | MDU6SXNzdWU3NzE0ODY0NTY= | 1,611 | shuffle with torch generator | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 8 | 2020-12-20T00:57:14Z | 2022-06-01T15:30:13Z | 2022-06-01T15:30:13Z | null | Hi
I need to shuffle mutliple large datasets with `generator = torch.Generator()` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1611/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1611/timeline | null | completed | null | null | false | [
"Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks ",
"@lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large dataset... |
https://api.github.com/repos/huggingface/datasets/issues/4055 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4055/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4055/comments | https://api.github.com/repos/huggingface/datasets/issues/4055/events | https://github.com/huggingface/datasets/pull/4055 | 1,184,976,292 | PR_kwDODunzps41PGF1 | 4,055 | [DO NOT MERGE] Test doc-builder | [] | closed | false | null | 2 | 2022-03-29T14:39:02Z | 2022-03-30T12:31:14Z | 2022-03-30T12:25:52Z | null | This is a test PR to ensure the changes in https://github.com/huggingface/doc-builder/pull/164 don't break anything in `datasets` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4055/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4055/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/4055.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4055",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/4055.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4055"
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Docs built successfully, so closing this."
] |
https://api.github.com/repos/huggingface/datasets/issues/1537 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1537/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1537/comments | https://api.github.com/repos/huggingface/datasets/issues/1537/events | https://github.com/huggingface/datasets/pull/1537 | 765,095,210 | MDExOlB1bGxSZXF1ZXN0NTM4ODY1NzIz | 1,537 | added ohsumed | [] | closed | false | null | 0 | 2020-12-13T06:58:23Z | 2020-12-17T18:28:16Z | 2020-12-17T18:28:16Z | null | UPDATE2: PR passed all tests. Now waiting for review.
UPDATE: pushed a new version. cross fingers that it should complete all the tests! :)
If it passes all tests then it's not a draft version.
This is a draft version | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1537/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1537/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1537.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1537",
"merged_at": "2020-12-17T18:28:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1537.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4371/comments | https://api.github.com/repos/huggingface/datasets/issues/4371/events | https://github.com/huggingface/datasets/pull/4371 | 1,241,500,906 | PR_kwDODunzps44GzSZ | 4,371 | Add missing language tags for udhr dataset | [] | closed | false | null | 1 | 2022-05-19T09:34:10Z | 2022-06-08T12:03:24Z | 2022-05-20T09:43:10Z | null | Related to #4362. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4371/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4371/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4371.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4371",
"merged_at": "2022-05-20T09:43:10Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4371.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/3317 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3317/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3317/comments | https://api.github.com/repos/huggingface/datasets/issues/3317/events | https://github.com/huggingface/datasets/issues/3317 | 1,062,284,447 | I_kwDODunzps4_USyf | 3,317 | Add desc parameter to Dataset filter method | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 4 | 2021-11-24T11:01:36Z | 2022-01-05T18:31:24Z | 2022-01-05T18:31:24Z | null | **Is your feature request related to a problem? Please describe.**
As I was filtering very large datasets I noticed the filter method doesn't have the desc parameter which is available in the map method. Why don't we add a desc parameter to the filter method both for consistency and it's nice to give some feedback to ... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3317/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3317/timeline | null | completed | null | null | false | [
"Hi,\r\n\r\n`Dataset.map` allows more generic transforms compared to `Dataset.filter`, which purpose is very specific (to filter examples based on a condition). That's why I don't think we need the `desc` parameter there for consistency. #3196 has added descriptions to the `Dataset` methods that call `.map` intern... |
https://api.github.com/repos/huggingface/datasets/issues/342 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/342/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/342/comments | https://api.github.com/repos/huggingface/datasets/issues/342/events | https://github.com/huggingface/datasets/issues/342 | 651,333,194 | MDU6SXNzdWU2NTEzMzMxOTQ= | 342 | Features should be updated when `map()` changes schema | [] | closed | false | null | 1 | 2020-07-06T08:03:23Z | 2020-07-23T10:15:16Z | 2020-07-23T10:15:16Z | null | `dataset.map()` can change the schema and column names.
We should update the features in this case (with what is possible to infer). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/342/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/342/timeline | null | completed | null | null | false | [
"`dataset.column_names` are being updated but `dataset.features` aren't indeed..."
] |
https://api.github.com/repos/huggingface/datasets/issues/1871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1871/comments | https://api.github.com/repos/huggingface/datasets/issues/1871/events | https://github.com/huggingface/datasets/pull/1871 | 807,697,671 | MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz | 1,871 | Add newspop dataset | [] | closed | false | null | 1 | 2021-02-13T07:31:23Z | 2021-03-08T10:12:45Z | 2021-03-08T10:12:45Z | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1871/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1871/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1871.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1871",
"merged_at": "2021-03-08T10:12:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1871.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Thanks for the changes :)\r\nmerging"
] | |
https://api.github.com/repos/huggingface/datasets/issues/4386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4386/comments | https://api.github.com/repos/huggingface/datasets/issues/4386/events | https://github.com/huggingface/datasets/issues/4386 | 1,243,965,532 | I_kwDODunzps5KJWhc | 4,386 | Bug for wiki_auto_asset_turk from GEM | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 7 | 2022-05-21T12:31:30Z | 2022-05-24T05:55:52Z | 2022-05-23T10:29:55Z | null | ## Describe the bug
The script of wiki_auto_asset_turk for GEM may be out of date.
## Steps to reproduce the bug
```python
import datasets
datasets.load_dataset('gem', 'wiki_auto_asset_turk')
```
## Actual results
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4386/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4386/timeline | null | completed | null | null | false | [
"Thanks for reporting, @StevenTang1998.\r\n\r\nI'm looking into it. ",
"Hi @StevenTang1998,\r\n\r\nWe have fixed the issue:\r\n- #4389\r\n\r\nThe fix will be available in our next `datasets` library release. In the meantime, you can incorporate that fix by installing `datasets` from our GitHub repo:\r\n```\r\npip... |
https://api.github.com/repos/huggingface/datasets/issues/2259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2259/comments | https://api.github.com/repos/huggingface/datasets/issues/2259/events | https://github.com/huggingface/datasets/pull/2259 | 866,880,092 | MDExOlB1bGxSZXF1ZXN0NjIyNjc2ODA0 | 2,259 | Add support for Split.ALL | [] | closed | false | null | 1 | 2021-04-25T01:45:42Z | 2021-06-28T08:21:27Z | 2021-06-28T08:21:27Z | null | The title says it all. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2259/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2259",
"merged_at": "2021-06-28T08:21:27Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Honestly, I think we should fix some other issues in Split API before this change. E. g. currently the following will not work, even though it should:\r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"sst\", split=datasets.Split.TRAIN+datasets.Split.TEST) # AssertionError\r\n```\r\n\r\nEDIT:\r\nActually,... |
https://api.github.com/repos/huggingface/datasets/issues/330 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/330/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/330/comments | https://api.github.com/repos/huggingface/datasets/issues/330/events | https://github.com/huggingface/datasets/pull/330 | 648,525,720 | MDExOlB1bGxSZXF1ZXN0NDQyMzIxMjEw | 330 | Doc red | [] | closed | false | null | 0 | 2020-06-30T22:05:31Z | 2020-07-06T12:10:39Z | 2020-07-05T12:27:29Z | null | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/330/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/330/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/330.diff",
"html_url": "https://github.com/huggingface/datasets/pull/330",
"merged_at": "2020-07-05T12:27:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/330.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/330... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4281 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4281/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4281/comments | https://api.github.com/repos/huggingface/datasets/issues/4281/events | https://github.com/huggingface/datasets/pull/4281 | 1,225,556,939 | PR_kwDODunzps43TNBm | 4,281 | Remove a copy-paste sentence in dataset cards | [] | closed | false | null | 2 | 2022-05-04T15:41:55Z | 2022-05-06T08:38:03Z | 2022-05-04T18:33:16Z | null | Remove the following copy-paste sentence from dataset cards:
```
We show detailed information for up to 5 configurations of the dataset.
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4281/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4281/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4281.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4281",
"merged_at": "2022-05-04T18:33:16Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4281.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The non-passing tests have nothing to do with this PR."
] |
https://api.github.com/repos/huggingface/datasets/issues/654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/654/comments | https://api.github.com/repos/huggingface/datasets/issues/654/events | https://github.com/huggingface/datasets/pull/654 | 705,511,058 | MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3 | 654 | Allow empty inputs in metrics | [] | closed | false | null | 0 | 2020-09-21T11:26:36Z | 2020-10-06T03:51:48Z | 2020-09-21T16:13:38Z | null | There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/654/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/654/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/654.diff",
"html_url": "https://github.com/huggingface/datasets/pull/654",
"merged_at": "2020-09-21T16:13:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/654.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/654... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/977 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/977/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/977/comments | https://api.github.com/repos/huggingface/datasets/issues/977/events | https://github.com/huggingface/datasets/pull/977 | 754,839,594 | MDExOlB1bGxSZXF1ZXN0NTMwNjY2ODg3 | 977 | Add ROPES dataset | [] | closed | false | null | 0 | 2020-12-02T00:52:10Z | 2020-12-02T10:58:36Z | 2020-12-02T10:58:35Z | null | ROPES dataset
Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed into a reading comprehension task following squad-style extractive qa.
One thing to note: labels of the test set are hidden (leaderboard submiss... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/977/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/977/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/977.diff",
"html_url": "https://github.com/huggingface/datasets/pull/977",
"merged_at": "2020-12-02T10:58:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/977.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/977... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3613 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3613/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3613/comments | https://api.github.com/repos/huggingface/datasets/issues/3613/events | https://github.com/huggingface/datasets/issues/3613 | 1,110,684,015 | I_kwDODunzps5CM7Fv | 3,613 | Files not updating in dataset viewer | [
{
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co",
"id": 3470211881,
"name": "dataset-viewer",
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer"
}
] | closed | false | null | 2 | 2022-01-21T16:47:20Z | 2022-01-22T08:13:13Z | 2022-01-22T08:13:13Z | null | ## Dataset viewer issue for '*name of the dataset*'
**Link:**
Some examples:
* https://huggingface.co/datasets/abidlabs/crowdsourced-speech4
* https://huggingface.co/datasets/abidlabs/test-audio-13
*short description of the issue*
It seems that the dataset viewer is reading a cached version of the dataset and... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3613/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3613/timeline | null | completed | null | null | false | [
"Yes. The jobs queue is full right now, following an upgrade... Back to normality in the next hours hopefully. I'll look at your datasets to be sure the dataset viewer works as expected on them.",
"Should have been fixed now."
] |
https://api.github.com/repos/huggingface/datasets/issues/6022 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6022/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6022/comments | https://api.github.com/repos/huggingface/datasets/issues/6022/events | https://github.com/huggingface/datasets/issues/6022 | 1,800,092,589 | I_kwDODunzps5rSzut | 6,022 | Batch map raises TypeError: '>=' not supported between instances of 'NoneType' and 'int' | [] | closed | false | null | 1 | 2023-07-12T03:20:17Z | 2023-07-12T16:18:06Z | 2023-07-12T16:18:05Z | null | ### Describe the bug
When mapping some datasets with `batched=True`, datasets may raise an exeception:
```python
Traceback (most recent call last):
File "/Users/codingl2k1/Work/datasets/venv/lib/python3.11/site-packages/multiprocess/pool.py", line 125, in worker
result = (True, func(*args, **kwds))
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/6022/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/6022/timeline | null | completed | null | null | false | [
"Thanks for reporting! I've opened a PR with a fix."
] |
https://api.github.com/repos/huggingface/datasets/issues/1928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1928/comments | https://api.github.com/repos/huggingface/datasets/issues/1928/events | https://github.com/huggingface/datasets/pull/1928 | 813,793,434 | MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4 | 1,928 | Updating old cards | [] | closed | false | null | 0 | 2021-02-22T19:26:04Z | 2021-02-23T18:19:25Z | 2021-02-23T18:19:25Z | null | Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli)... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1928/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1928/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1928.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1928",
"merged_at": "2021-02-23T18:19:25Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1928.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/202/comments | https://api.github.com/repos/huggingface/datasets/issues/202/events | https://github.com/huggingface/datasets/issues/202 | 625,493,983 | MDU6SXNzdWU2MjU0OTM5ODM= | 202 | Mistaken `_KWARGS_DESCRIPTION` for XNLI metric | [] | closed | false | null | 1 | 2020-05-27T08:34:42Z | 2020-05-28T13:22:36Z | 2020-05-28T13:22:36Z | null | Hi!
The [`_KWARGS_DESCRIPTION`](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/xnli/xnli.py#L45) for the XNLI metric uses `Args` and `Returns` text from [BLEU](https://github.com/huggingface/nlp/blob/7d0fa58641f3f462fb2861dcdd6ce7f0da3f6a56/metrics/bleu/bleu.py#L58) metric:
... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/202/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/202/timeline | null | completed | null | null | false | [
"Indeed, good catch ! thanks\r\nFixing it right now"
] |
https://api.github.com/repos/huggingface/datasets/issues/2193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2193/comments | https://api.github.com/repos/huggingface/datasets/issues/2193/events | https://github.com/huggingface/datasets/issues/2193 | 853,725,707 | MDU6SXNzdWU4NTM3MjU3MDc= | 2,193 | Filtering/mapping on one column is very slow | [
{
"color": "d876e3",
"default": true,
"description": "Further information is requested",
"id": 1935892912,
"name": "question",
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question"
}
] | closed | false | null | 12 | 2021-04-08T18:16:14Z | 2021-04-26T16:13:59Z | 2021-04-26T16:13:59Z | null | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2193/timeline | null | completed | null | null | false | [
"Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoi... |
https://api.github.com/repos/huggingface/datasets/issues/2249 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2249/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2249/comments | https://api.github.com/repos/huggingface/datasets/issues/2249/events | https://github.com/huggingface/datasets/pull/2249 | 865,257,826 | MDExOlB1bGxSZXF1ZXN0NjIxMzU1MzE3 | 2,249 | Allow downloading/processing/caching only specific splits | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | open | false | {
"closed_at": null,
"closed_issues": 2,
"created_at": "2021-07-21T15:34:56Z",
"creator": {
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"followers_url": "https://api.github.com/users/albertvillano... | 2 | 2021-04-22T17:51:44Z | 2022-07-06T15:19:48Z | null | null | Allow downloading/processing/caching only specific splits without downloading/processing/caching the other splits.
This PR implements two steps to handle only specific splits:
- it allows processing/caching only specific splits into Arrow files
- for some simple cases, it allows downloading only specific splits (w... | {
"+1": 1,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 1,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2249/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2249/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2249.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2249",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/2249.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2249"
} | true | [
"> If you pass a dictionary like this:\r\n> \r\n> ```\r\n> {\"main_metadata\": url_to_main_data,\r\n> \"secondary_metadata\": url_to_sec_data,\r\n> \"train\": url_train_data,\r\n> \"test\": url_test_data}\r\n> ```\r\n> \r\n> then only the train or test keys will be kept, which I feel not intuitive.\r\n> \r\n> For e... |
https://api.github.com/repos/huggingface/datasets/issues/2913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2913/comments | https://api.github.com/repos/huggingface/datasets/issues/2913/events | https://github.com/huggingface/datasets/issues/2913 | 996,436,368 | I_kwDODunzps47ZGmQ | 2,913 | timit_asr dataset only includes one text phrase | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-09-14T21:06:07Z | 2021-09-15T08:05:19Z | 2021-09-15T08:05:18Z | null | ## Describe the bug
The 'timit_asr' dataset includes only one text phrase: the transcription "Would such an act of refusal be useful?" is repeated throughout rather than varying across examples.
## Steps to reproduce the bug
Note: I am following the tutorial https://huggingface.co/blog/fine-tune-wav2vec2-englis... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2913/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2913/timeline | null | completed | null | null | false | [
"Hi @margotwagner, \r\nThis bug was fixed in #1995. Upgrading the datasets should work (min v1.8.0 ideally)",
"Hi @margotwagner,\r\n\r\nYes, as @bhavitvyamalik has commented, this bug was fixed in `datasets` version 1.5.0. You need to update it, as your current version is 1.4.1:\r\n> Environment info\r\n> - `data... |
https://api.github.com/repos/huggingface/datasets/issues/1214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1214/comments | https://api.github.com/repos/huggingface/datasets/issues/1214/events | https://github.com/huggingface/datasets/pull/1214 | 758,002,786 | MDExOlB1bGxSZXF1ZXN0NTMzMjUyNTgx | 1,214 | adding medical-questions-pairs dataset | [] | closed | false | null | 0 | 2020-12-06T19:30:12Z | 2020-12-09T14:42:53Z | 2020-12-09T14:42:53Z | null | This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors.
Dataset : https://github.com/curai/medical-question-pair-dataset
Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1214/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1214/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1214.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1214",
"merged_at": "2020-12-09T14:42:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1214.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4158/comments | https://api.github.com/repos/huggingface/datasets/issues/4158/events | https://github.com/huggingface/datasets/pull/4158 | 1,202,376,843 | PR_kwDODunzps42ITg3 | 4,158 | Add AUC ROC Metric | [] | closed | false | null | 1 | 2022-04-12T20:53:28Z | 2022-04-26T19:41:50Z | 2022-04-26T19:35:22Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4158/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4158/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4158.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4158",
"merged_at": "2022-04-26T19:35:22Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4158.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/101 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/101/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/101/comments | https://api.github.com/repos/huggingface/datasets/issues/101/events | https://github.com/huggingface/datasets/pull/101 | 618,111,651 | MDExOlB1bGxSZXF1ZXN0NDE3ODk5OTQ2 | 101 | [Reddit] add reddit | [] | closed | false | null | 0 | 2020-05-14T10:25:02Z | 2020-05-14T10:27:25Z | 2020-05-14T10:27:24Z | null | - Everything worked fine @mariamabarham. Made my computer nearly crash, but all seems to be working :-) | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/101/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/101/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/101.diff",
"html_url": "https://github.com/huggingface/datasets/pull/101",
"merged_at": "2020-05-14T10:27:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/101.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/101... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1736 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1736/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1736/comments | https://api.github.com/repos/huggingface/datasets/issues/1736/events | https://github.com/huggingface/datasets/pull/1736 | 785,433,854 | MDExOlB1bGxSZXF1ZXN0NTU0NDYyNjYw | 1,736 | Adjust BrWaC dataset features name | [] | closed | false | null | 0 | 2021-01-13T20:39:04Z | 2021-01-14T10:29:38Z | 2021-01-14T10:29:38Z | null | I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.
Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1736/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1736/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1736.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1736",
"merged_at": "2021-01-14T10:29:38Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1736.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1952 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1952/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1952/comments | https://api.github.com/repos/huggingface/datasets/issues/1952/events | https://github.com/huggingface/datasets/pull/1952 | 817,428,160 | MDExOlB1bGxSZXF1ZXN0NTgwOTIyNjQw | 1,952 | Handle timeouts | [] | closed | false | null | 4 | 2021-02-26T15:02:07Z | 2021-03-01T14:29:24Z | 2021-03-01T14:29:24Z | null | As noticed in https://github.com/huggingface/datasets/issues/1939, timeouts were not properly handled when loading a dataset.
This caused the connection to hang indefinitely when working in a firewalled environment. cc @stas00
I added a default timeout, and included an option to our offline environment for tests to... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1952/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1952/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1952.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1952",
"merged_at": "2021-03-01T14:29:24Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1952.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"I never said the calls were hanging indefinitely, what we need is quite different - in the firewalled env with a network, there should be no network calls or they should fail instantly.\r\n\r\nTo make this work I suppose on top of this PR we need:\r\n1. `DATASETS_OFFLINE` env var to force set timeout to 0 globally... |
https://api.github.com/repos/huggingface/datasets/issues/1289 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1289/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1289/comments | https://api.github.com/repos/huggingface/datasets/issues/1289/events | https://github.com/huggingface/datasets/pull/1289 | 759,333,684 | MDExOlB1bGxSZXF1ZXN0NTM0MzU2ODUw | 1,289 | Jigsaw toxicity classification dataset added | [] | closed | false | null | 0 | 2020-12-08T10:38:51Z | 2020-12-08T15:17:48Z | 2020-12-08T15:17:48Z | null | The dataset requires manually downloading data from Kaggle. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1289/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1289/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1289.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1289",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1289.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1289"
} | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5869/comments | https://api.github.com/repos/huggingface/datasets/issues/5869/events | https://github.com/huggingface/datasets/issues/5869 | 1,711,990,003 | I_kwDODunzps5mCuTz | 5,869 | Image Encoding Issue when submitting a Parquet Dataset | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 16 | 2023-05-16T09:42:58Z | 2023-06-16T12:48:38Z | 2023-06-16T09:30:48Z | null | ### Describe the bug
Hello,
I'd like to report an issue related to pushing a dataset represented as a Parquet file to a dataset repository using Dask. Here are the details:
We attempted to load an example dataset in Parquet format from the Hugging Face (HF) filesystem using Dask with the following code snippet... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5869/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5869/timeline | null | completed | null | null | false | [
"Hi @PhilippeMoussalli thanks for opening a detailed issue. It seems the issue is more related to the `datasets` library so I'll ping @lhoestq @mariosasko on this one :) \n\n(edit: also can one of you move the issue to the datasets repo? Thanks in advance 🙏)",
"Hi ! The `Image()` info is stored in the **schema m... |
https://api.github.com/repos/huggingface/datasets/issues/4262 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4262/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4262/comments | https://api.github.com/repos/huggingface/datasets/issues/4262/events | https://github.com/huggingface/datasets/pull/4262 | 1,222,130,749 | PR_kwDODunzps43IOye | 4,262 | Add YAML tags to Dataset Card rotten tomatoes | [] | closed | false | null | 1 | 2022-05-01T11:59:08Z | 2022-05-03T14:27:33Z | 2022-05-03T14:20:35Z | null | The dataset card for the rotten tomatoes / MR movie review dataset had some missing YAML tags. Hopefully, this also improves the visibility of this dataset now that paperswithcode and huggingface link to eachother. | {
"+1": 2,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4262/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4262/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4262.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4262",
"merged_at": "2022-05-03T14:20:35Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4262.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/5981 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5981/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5981/comments | https://api.github.com/repos/huggingface/datasets/issues/5981/events | https://github.com/huggingface/datasets/issues/5981 | 1,770,310,087 | I_kwDODunzps5phMnH | 5,981 | Only two cores are getting used in sagemaker with pytorch 3.10 kernel | [] | closed | false | null | 3 | 2023-06-22T19:57:31Z | 2023-07-24T11:54:52Z | 2023-07-24T11:54:52Z | null | ### Describe the bug
When using the newer pytorch 3.10 kernel, only 2 cores are being used by huggingface filter and map functions. The Pytorch 3.9 kernel would use as many cores as specified in the num_proc field.
We have solved this in our own code by placing the following snippet in the code that is called insi... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5981/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5981/timeline | null | completed | null | null | false | [
"I think it's more likely that this issue is related to PyTorch than Datasets, as PyTorch (on import) registers functions to execute when forking a process. Maybe this is the culprit: https://github.com/pytorch/pytorch/issues/99625",
"From reading that ticket, it may be down in mkl? Is it worth hotfixing in the ... |
https://api.github.com/repos/huggingface/datasets/issues/2790 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2790/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2790/comments | https://api.github.com/repos/huggingface/datasets/issues/2790/events | https://github.com/huggingface/datasets/pull/2790 | 967,772,181 | MDExOlB1bGxSZXF1ZXN0NzA5OTI3NjM2 | 2,790 | Fix typo in test_dataset_common | [] | closed | false | null | 0 | 2021-08-12T01:10:29Z | 2021-08-12T11:31:29Z | 2021-08-12T11:31:29Z | null | null | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2790/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2790/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2790.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2790",
"merged_at": "2021-08-12T11:31:29Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2790.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/5322 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5322/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5322/comments | https://api.github.com/repos/huggingface/datasets/issues/5322/events | https://github.com/huggingface/datasets/pull/5322 | 1,471,502,162 | PR_kwDODunzps5EEeQP | 5,322 | Raise error for `.tar` archives in the same way as for `.tar.gz` and `.tgz` in `_get_extraction_protocol` | [] | closed | false | null | 1 | 2022-12-01T15:19:28Z | 2022-12-14T16:37:16Z | 2022-12-14T16:33:30Z | null | Currently `download_and_extract` doesn't throw an error when it is used with files with `.tar` extension in streaming mode because `_get_extraction_protocol` doesn't do it (like it does for `tar.gz` and `tgz`). `_get_extraction_protocol` returns formatted url as if we support tar protocol but we don't.
That means tha... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5322/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5322/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5322.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5322",
"merged_at": "2022-12-14T16:33:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5322.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/2968 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2968/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2968/comments | https://api.github.com/repos/huggingface/datasets/issues/2968/events | https://github.com/huggingface/datasets/issues/2968 | 1,007,209,488 | I_kwDODunzps48CMwQ | 2,968 | `DatasetDict` cannot be exported to parquet if the splits have different features | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 9 | 2021-09-25T22:18:39Z | 2021-10-07T22:47:42Z | 2021-10-07T22:47:26Z | null | ## Describe the bug
I'm trying to use parquet as a means of serialization for both `Dataset` and `DatasetDict` objects. Using `to_parquet` alongside `from_parquet` or `load_dataset` for a `Dataset` works perfectly.
For `DatasetDict`, I use `to_parquet` on each split to save the parquet files in individual folder... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2968/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2968/timeline | null | completed | null | null | false | [
"This is because you have to specify which split corresponds to what file:\r\n```python\r\ndata_files = {\"train\": \"train/split.parquet\", \"validation\": \"validation/split.parquet\"}\r\nbrand_new_dataset_2 = load_dataset(\"ds\", data_files=data_files)\r\n```\r\n\r\nOtherwise it tries to concatenate the two spli... |
https://api.github.com/repos/huggingface/datasets/issues/3813 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3813/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3813/comments | https://api.github.com/repos/huggingface/datasets/issues/3813/events | https://github.com/huggingface/datasets/issues/3813 | 1,158,474,859 | I_kwDODunzps5FDOxr | 3,813 | Add MetaShift dataset | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
},
{
"color": "bfdadc",... | closed | false | null | 7 | 2022-03-03T14:26:45Z | 2022-04-10T13:39:59Z | 2022-04-10T13:39:59Z | null | ## Adding a Dataset
- **Name:** MetaShift
- **Description:** collection of 12,868 sets of natural images across 410 classes
- **Paper:** https://arxiv.org/abs/2202.06523v1
- **Data:** https://github.com/weixin-liang/metashift
Instructions to add a new dataset can be found [here](https://github.com/huggingface/... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3813/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3813/timeline | null | completed | null | null | false | [
"I would like to take this up and give it a shot. Any image specific - dataset guidelines to keep in mind ? Thank you.",
"#self-assign",
"I've started working on adding this dataset. I require some inputs on the following : \r\n\r\nRef for the initial draft [here](https://github.com/dnaveenr/datasets/blob/add_m... |
https://api.github.com/repos/huggingface/datasets/issues/5258 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5258/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5258/comments | https://api.github.com/repos/huggingface/datasets/issues/5258/events | https://github.com/huggingface/datasets/issues/5258 | 1,453,516,636 | I_kwDODunzps5Woudc | 5,258 | Restore order of split names in dataset_info for canonical datasets | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 3 | 2022-11-17T15:13:15Z | 2023-02-16T09:49:05Z | 2022-11-19T06:51:37Z | null | After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example:
- https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c
Note that this order is the one appearing in the preview of the... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5258/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5258/timeline | null | completed | null | null | false | [
"The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1",
"TODO: Add \"dataset_info\" YAML metadata to:\r\n- [x] \"chr_en\" has no metadata JSON fil... |
https://api.github.com/repos/huggingface/datasets/issues/1006 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1006/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1006/comments | https://api.github.com/repos/huggingface/datasets/issues/1006/events | https://github.com/huggingface/datasets/pull/1006 | 755,362,766 | MDExOlB1bGxSZXF1ZXN0NTMxMDg3NTIy | 1,006 | add yahoo_answers_topics | [] | closed | false | null | 1 | 2020-12-02T15:16:13Z | 2020-12-03T16:44:38Z | 2020-12-02T18:01:32Z | null | This PR adds yahoo answers topic classification dataset.
More info:
https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset
cc @joeddav, @yjernite | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1006/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1006/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1006.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1006",
"merged_at": "2020-12-02T18:01:32Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1006.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"feel free to merge/ping me to merge if there're no more changes to do"
] |
https://api.github.com/repos/huggingface/datasets/issues/2421 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2421/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2421/comments | https://api.github.com/repos/huggingface/datasets/issues/2421/events | https://github.com/huggingface/datasets/pull/2421 | 905,549,756 | MDExOlB1bGxSZXF1ZXN0NjU2NjIwMTM3 | 2,421 | doc: fix typo HF_MAX_IN_MEMORY_DATASET_SIZE_IN_BYTES | [] | closed | false | null | 0 | 2021-05-28T14:52:10Z | 2021-06-04T09:52:45Z | 2021-06-04T09:52:45Z | null | MAX_MEMORY_DATASET_SIZE_IN_BYTES should be HF_MAX_MEMORY_DATASET_SIZE_IN_BYTES | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2421/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2421/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2421.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2421",
"merged_at": "2021-06-04T09:52:45Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2421.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4889/comments | https://api.github.com/repos/huggingface/datasets/issues/4889/events | https://github.com/huggingface/datasets/issues/4889 | 1,349,758,525 | I_kwDODunzps5Qc649 | 4,889 | torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3 | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 5 | 2022-08-24T16:54:43Z | 2023-03-02T15:33:05Z | 2023-03-02T15:33:04Z | null | ## Describe the bug
When loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and `torc... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4889/timeline | null | completed | null | null | false | [
"Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.",
"torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 deco... |
https://api.github.com/repos/huggingface/datasets/issues/4259 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4259/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4259/comments | https://api.github.com/repos/huggingface/datasets/issues/4259/events | https://github.com/huggingface/datasets/pull/4259 | 1,221,768,025 | PR_kwDODunzps43HHGc | 4,259 | Fix bug in choices labels in openbookqa dataset | [] | closed | false | null | 1 | 2022-04-30T07:41:39Z | 2022-05-04T06:31:31Z | 2022-05-03T15:14:21Z | null | This PR fixes the Bug in the openbookqa dataset as mentioned in this issue #3550.
Fix #3550.
cc. @lhoestq @mariosasko | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4259/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4259/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4259.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4259",
"merged_at": "2022-05-03T15:14:21Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4259.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._"
] |
https://api.github.com/repos/huggingface/datasets/issues/109 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/109/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/109/comments | https://api.github.com/repos/huggingface/datasets/issues/109/events | https://github.com/huggingface/datasets/pull/109 | 618,508,359 | MDExOlB1bGxSZXF1ZXN0NDE4MjI0MDYw | 109 | [Reclor] fix reclor | [] | closed | false | null | 0 | 2020-05-14T20:16:26Z | 2020-05-14T20:19:09Z | 2020-05-14T20:19:08Z | null | - That's probably one me. Could have made the manual data test more flexible. @mariamabarham | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/109/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/109/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/109.diff",
"html_url": "https://github.com/huggingface/datasets/pull/109",
"merged_at": "2020-05-14T20:19:08Z",
"patch_url": "https://github.com/huggingface/datasets/pull/109.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/109... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/1564 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1564/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1564/comments | https://api.github.com/repos/huggingface/datasets/issues/1564/events | https://github.com/huggingface/datasets/pull/1564 | 766,266,609 | MDExOlB1bGxSZXF1ZXN0NTM5MzQzMjAy | 1,564 | added saudinewsnet | [] | closed | false | null | 9 | 2020-12-14T10:35:09Z | 2020-12-22T09:51:04Z | 2020-12-22T09:51:04Z | null | I'm having issues in creating the dummy data. I'm still investigating how to fix it. I'll close the PR if I couldn't find a solution | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1564/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1564/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1564.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1564",
"merged_at": "2020-12-22T09:51:04Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1564.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Hi @abdulelahsm - This is an interesting dataset! But there are multiple issues with the PR. Some of them are listed below: \r\n- default builder config is not defined. There should be atleast one builder config \r\n- URL is incorrectly constructed so the data files are not being downloaded \r\n- dataset_info.jso... |
https://api.github.com/repos/huggingface/datasets/issues/1026 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1026/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1026/comments | https://api.github.com/repos/huggingface/datasets/issues/1026/events | https://github.com/huggingface/datasets/issues/1026 | 755,689,195 | MDU6SXNzdWU3NTU2ODkxOTU= | 1,026 | Lío o | [] | closed | false | null | 0 | 2020-12-02T23:32:25Z | 2020-12-03T16:42:47Z | 2020-12-03T16:42:47Z | null | ````l`````````
```
O
```
`````
Ño
```
````
``` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1026/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1026/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/5666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5666/comments | https://api.github.com/repos/huggingface/datasets/issues/5666/events | https://github.com/huggingface/datasets/issues/5666 | 1,637,675,062 | I_kwDODunzps5hnPA2 | 5,666 | Support tensorflow 2.12.0 in CI | [
{
"color": "a2eeef",
"default": true,
"description": "New feature or request",
"id": 1935892871,
"name": "enhancement",
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement"
}
] | closed | false | null | 0 | 2023-03-23T14:37:51Z | 2023-03-23T16:14:54Z | 2023-03-23T16:14:54Z | null | Once we find out the root cause of:
- #5663
we should revert the temporary pin on tensorflow introduced by:
- #5664 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5666/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5666/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/1831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1831/comments | https://api.github.com/repos/huggingface/datasets/issues/1831/events | https://github.com/huggingface/datasets/issues/1831 | 802,868,854 | MDU6SXNzdWU4MDI4Njg4NTQ= | 1,831 | Some question about raw dataset download info in the project . | [] | closed | false | null | 4 | 2021-02-07T05:33:36Z | 2021-02-25T14:10:18Z | 2021-02-25T14:10:18Z | null | Hi , i review the code in
https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py
in the _split_generators function is the truly logic of download raw datasets with dl_manager
and use Conll2003 cls by use import_main_class in load_dataset function
My question is that , with this logic i... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1831/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1831/timeline | null | completed | null | null | false | [
"Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so ... |
https://api.github.com/repos/huggingface/datasets/issues/1635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1635/comments | https://api.github.com/repos/huggingface/datasets/issues/1635/events | https://github.com/huggingface/datasets/issues/1635 | 774,524,492 | MDU6SXNzdWU3NzQ1MjQ0OTI= | 1,635 | Persian Abstractive/Extractive Text Summarization | [
{
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset",
"id": 2067376369,
"name": "dataset request",
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request"
}
] | closed | false | null | 0 | 2020-12-24T17:47:12Z | 2021-01-04T15:11:04Z | 2021-01-04T15:11:04Z | null | Assembling datasets tailored to different tasks and languages is a precious target. This would be great to have this dataset included.
## Adding a Dataset
- **Name:** *pn-summary*
- **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abs... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1635/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1635/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/2942 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2942/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2942/comments | https://api.github.com/repos/huggingface/datasets/issues/2942/events | https://github.com/huggingface/datasets/pull/2942 | 1,000,309,765 | PR_kwDODunzps4r7tY6 | 2,942 | Add SEDE dataset | [] | closed | false | null | 4 | 2021-09-19T13:11:24Z | 2021-09-24T10:39:55Z | 2021-09-24T10:39:54Z | null | This PR adds the SEDE dataset for the task of realistic Text-to-SQL, following the instructions of how to add a database and a dataset card.
Please see our paper for more details: https://arxiv.org/abs/2106.05006 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2942/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2942/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2942.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2942",
"merged_at": "2021-09-24T10:39:54Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2942.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Thanks @albertvillanova for your great suggestions! I just pushed a new commit with the necessary fixes. For some reason, the test `test_metric_common` failed for `meteor` metric, which doesn't have any connection to this PR, so I'm trying to rebase and see if it helps.",
"Hi @Hazoom,\r\n\r\nYou were right: the ... |
https://api.github.com/repos/huggingface/datasets/issues/1521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1521/comments | https://api.github.com/repos/huggingface/datasets/issues/1521/events | https://github.com/huggingface/datasets/pull/1521 | 764,320,841 | MDExOlB1bGxSZXF1ZXN0NTM4NDQzOTgz | 1,521 | Atomic | [] | closed | false | null | 1 | 2020-12-12T20:18:08Z | 2020-12-12T22:56:48Z | 2020-12-12T22:56:48Z | null | This is the ATOMIC common sense dataset. More info can be found here:
* README.md still to be created. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1521/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1521/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1521.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1521",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/1521.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1521"
} | true | [
"I had to create a new PR to fix git errors. See: https://github.com/huggingface/datasets/pull/1525\r\n\r\nI'm closing this PR. "
] |
https://api.github.com/repos/huggingface/datasets/issues/542 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/542/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/542/comments | https://api.github.com/repos/huggingface/datasets/issues/542/events | https://github.com/huggingface/datasets/pull/542 | 688,555,036 | MDExOlB1bGxSZXF1ZXN0NDc1NzkyNTY0 | 542 | Add TensorFlow example | [] | closed | false | null | 0 | 2020-08-29T15:39:27Z | 2020-08-31T09:49:20Z | 2020-08-31T09:49:19Z | null | Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/542/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/542/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/542.diff",
"html_url": "https://github.com/huggingface/datasets/pull/542",
"merged_at": "2020-08-31T09:49:19Z",
"patch_url": "https://github.com/huggingface/datasets/pull/542.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/542... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/4385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4385/comments | https://api.github.com/repos/huggingface/datasets/issues/4385/events | https://github.com/huggingface/datasets/pull/4385 | 1,243,921,287 | PR_kwDODunzps44OwXF | 4,385 | Test dill | [] | closed | false | null | 4 | 2022-05-21T08:57:43Z | 2022-05-25T08:30:13Z | 2022-05-25T08:21:48Z | null | Regression test for future releases of `dill`.
Related to #4379. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4385/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4385/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4385.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4385",
"merged_at": "2022-05-25T08:21:48Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4385.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I should point out that the hash will be the same if computed twice with the same code on the same version of dill (after adding huggingface's code that removes line numbers and file names, and sorts globals.) My changes in dill 0.3.... |
https://api.github.com/repos/huggingface/datasets/issues/1436 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1436/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1436/comments | https://api.github.com/repos/huggingface/datasets/issues/1436/events | https://github.com/huggingface/datasets/pull/1436 | 760,873,132 | MDExOlB1bGxSZXF1ZXN0NTM1NjI1MDM0 | 1,436 | add ALT | [] | closed | false | null | 1 | 2020-12-10T04:17:21Z | 2020-12-13T16:14:18Z | 2020-12-11T15:52:41Z | null | ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/ | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 2,
"total_count": 2,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1436/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1436/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1436.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1436",
"merged_at": "2020-12-11T15:52:41Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1436.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"The errors in de CI are fixed on master so it's fine"
] |
https://api.github.com/repos/huggingface/datasets/issues/2690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2690/comments | https://api.github.com/repos/huggingface/datasets/issues/2690/events | https://github.com/huggingface/datasets/pull/2690 | 949,574,500 | MDExOlB1bGxSZXF1ZXN0Njk0MjU5MDc1 | 2,690 | Docs details | [] | closed | false | null | 1 | 2021-07-21T10:43:14Z | 2021-07-27T18:40:54Z | 2021-07-27T18:40:54Z | null | Some comments here:
- the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch 'datasets[streaming]'` (maybe just link to https://huggingface.co/docs/datasets/installation.html + ... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2690/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2690/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/2690.diff",
"html_url": "https://github.com/huggingface/datasets/pull/2690",
"merged_at": "2021-07-27T18:40:53Z",
"patch_url": "https://github.com/huggingface/datasets/pull/2690.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Thanks for all the comments and for the corrections in the docs !\r\n\r\nAbout all the points you mentioned:\r\n\r\n> * the code samples assume the expected libraries have already been installed. Maybe add a section at start, or add it to every code sample. Something like `pip install datasets transformers torch ... |
https://api.github.com/repos/huggingface/datasets/issues/2727 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/2727/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/2727/comments | https://api.github.com/repos/huggingface/datasets/issues/2727/events | https://github.com/huggingface/datasets/issues/2727 | 955,812,149 | MDU6SXNzdWU5NTU4MTIxNDk= | 2,727 | Error in loading the Arabic Billion Words Corpus | [
{
"color": "d73a4a",
"default": true,
"description": "Something isn't working",
"id": 1935892857,
"name": "bug",
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug"
}
] | closed | false | null | 2 | 2021-07-29T12:53:09Z | 2021-07-30T13:03:55Z | 2021-07-30T13:03:55Z | null | ## Describe the bug
I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset.
## Steps to reproduce the bug
```python
load_dataset("arabic_billion_words", "Techreen")
load_dataset("arabic_billion_words", "Almustaqbal")
```
## Expected results
Th... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/2727/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/2727/timeline | null | completed | null | null | false | [
"I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this:\r\nFor the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like:\r\n```\r\n<Techreen>\... |
https://api.github.com/repos/huggingface/datasets/issues/4503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/4503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/4503/comments | https://api.github.com/repos/huggingface/datasets/issues/4503/events | https://github.com/huggingface/datasets/pull/4503 | 1,272,367,055 | PR_kwDODunzps45twLR | 4,503 | Refactor and add metadata to fever dataset | [] | closed | false | null | 5 | 2022-06-15T14:59:47Z | 2022-07-06T11:54:15Z | 2022-07-06T11:41:30Z | null | Related to: #4452 and #3792. | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/4503/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/4503/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/4503.diff",
"html_url": "https://github.com/huggingface/datasets/pull/4503",
"merged_at": "2022-07-06T11:41:30Z",
"patch_url": "https://github.com/huggingface/datasets/pull/4503.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"But this is somehow fever v3 dataset (see this link https://fever.ai/ under the dropdown menu called Datasets). Our fever dataset already contains v1 and v2 configs. Then, I added this as if v3 config (but named feverous instead of v... |
https://api.github.com/repos/huggingface/datasets/issues/1183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1183/comments | https://api.github.com/repos/huggingface/datasets/issues/1183/events | https://github.com/huggingface/datasets/pull/1183 | 757,806,570 | MDExOlB1bGxSZXF1ZXN0NTMzMTEwOTY4 | 1,183 | add mkb dataset | [] | closed | false | null | 3 | 2020-12-05T23:44:33Z | 2020-12-09T09:38:50Z | 2020-12-09T09:38:50Z | null | This PR will add Mann Ki Baat dataset (parallel data for Indian languages). | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/1183/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/1183/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/1183.diff",
"html_url": "https://github.com/huggingface/datasets/pull/1183",
"merged_at": "2020-12-09T09:38:50Z",
"patch_url": "https://github.com/huggingface/datasets/pull/1183.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"Could you update the languages tags before we merge @VasudevGupta7 ?",
"done.",
"thanks !"
] |
https://api.github.com/repos/huggingface/datasets/issues/777 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/777/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/777/comments | https://api.github.com/repos/huggingface/datasets/issues/777/events | https://github.com/huggingface/datasets/pull/777 | 732,376,648 | MDExOlB1bGxSZXF1ZXN0NTEyMzI2ODM2 | 777 | Better error message for uninitialized metric | [] | closed | false | null | 0 | 2020-10-29T14:42:50Z | 2020-10-29T15:18:26Z | 2020-10-29T15:18:24Z | null | When calling `metric.compute()` without having called `metric.add` or `metric.add_batch` at least once, the error was quite cryptic. I added a better error message
Fix #729 | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/777/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/777/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/777.diff",
"html_url": "https://github.com/huggingface/datasets/pull/777",
"merged_at": "2020-10-29T15:18:23Z",
"patch_url": "https://github.com/huggingface/datasets/pull/777.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/777... | true | [] |
https://api.github.com/repos/huggingface/datasets/issues/3595 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/3595/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/3595/comments | https://api.github.com/repos/huggingface/datasets/issues/3595/events | https://github.com/huggingface/datasets/pull/3595 | 1,107,260,527 | PR_kwDODunzps4xOIxH | 3,595 | Add ImageNet toy datasets from fastai | [
{
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script",
"id": 4564477500,
"name": "dataset contribution",
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution"
}
] | closed | false | null | 1 | 2022-01-18T19:03:35Z | 2022-09-30T14:39:35Z | 2022-09-30T14:39:35Z | null | Adds the ImageNet toy datasets from FastAI: Imagenette, Imagewoof and Imagewang.
TODOs:
* [ ] add dummy data
* [ ] add dataset card
* [ ] generate `dataset_info.json` | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/3595/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/3595/timeline | null | null | true | {
"diff_url": "https://github.com/huggingface/datasets/pull/3595.diff",
"html_url": "https://github.com/huggingface/datasets/pull/3595",
"merged_at": null,
"patch_url": "https://github.com/huggingface/datasets/pull/3595.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/3595"
} | true | [
"Thanks for your contribution, @mariosasko. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest you create this dataset there. Please, feel free to tell us i... |
https://api.github.com/repos/huggingface/datasets/issues/5305 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5305/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5305/comments | https://api.github.com/repos/huggingface/datasets/issues/5305/events | https://github.com/huggingface/datasets/issues/5305 | 1,465,627,826 | I_kwDODunzps5XW7Sy | 5,305 | Dataset joelito/mc4_legal does not work with multiple files | [] | closed | false | null | 2 | 2022-11-28T00:16:16Z | 2022-11-28T07:22:42Z | 2022-11-28T07:22:42Z | null | ### Describe the bug
The dataset https://huggingface.co/datasets/joelito/mc4_legal works for languages like bg with a single data file, but not for languages with multiple files like de. It shows zero rows for the de dataset.
joelniklaus@Joels-MacBook-Pro ~/N/P/C/L/p/m/mc4_legal (main) [1]> python test_mc4_legal.... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5305/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5305/timeline | null | completed | null | null | false | [
"Thanks for reporting @JoelNiklaus.\r\n\r\nPlease note that since we moved all dataset loading scripts to the Hub, the issues and pull requests relative to specific datasets are directly handled on the Hub, in their Community tab. I'm transferring this issue there: https://huggingface.co/datasets/joelito/mc4_legal/... |
https://api.github.com/repos/huggingface/datasets/issues/5030 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/5030/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/5030/comments | https://api.github.com/repos/huggingface/datasets/issues/5030/events | https://github.com/huggingface/datasets/pull/5030 | 1,388,061,340 | PR_kwDODunzps4_tfBO | 5,030 | Fast dataset iter | [] | closed | false | null | 2 | 2022-09-27T16:44:51Z | 2022-09-29T15:50:44Z | 2022-09-29T15:48:17Z | null | Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}`
TODO:
* [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster)
* [x] check if iterating over bigger chunks + slicing to fe... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/5030/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/5030/timeline | null | null | false | {
"diff_url": "https://github.com/huggingface/datasets/pull/5030.diff",
"html_url": "https://github.com/huggingface/datasets/pull/5030",
"merged_at": "2022-09-29T15:48:17Z",
"patch_url": "https://github.com/huggingface/datasets/pull/5030.patch",
"url": "https://api.github.com/repos/huggingface/datasets/pulls/... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran some benchmarks (focused on the data fetching part of `__iter__`) and it seems like the combination `table.to_reader(batch_size)` + `RecordBatch.slice` performs the best ([script](https://gist.github.com/mariosasko/0248288a2e3a... |
https://api.github.com/repos/huggingface/datasets/issues/227 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/227/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/227/comments | https://api.github.com/repos/huggingface/datasets/issues/227/events | https://github.com/huggingface/datasets/issues/227 | 629,845,704 | MDU6SXNzdWU2Mjk4NDU3MDQ= | 227 | Should we still have to force to install apache_beam to download wikipedia ? | [] | closed | false | null | 3 | 2020-06-03T09:33:20Z | 2020-06-03T15:25:41Z | 2020-06-03T15:25:41Z | null | Hi, first thanks to @lhoestq 's revolutionary work, I successfully downloaded processed wikipedia according to the doc. 😍😍😍
But at the first try, it tell me to install `apache_beam` and `mwparserfromhell`, which I thought wouldn't be used according to #204 , it was kind of confusing me at that time.
Maybe we s... | {
"+1": 0,
"-1": 0,
"confused": 0,
"eyes": 0,
"heart": 0,
"hooray": 0,
"laugh": 0,
"rocket": 0,
"total_count": 0,
"url": "https://api.github.com/repos/huggingface/datasets/issues/227/reactions"
} | https://api.github.com/repos/huggingface/datasets/issues/227/timeline | null | completed | null | null | false | [
"Thanks for your message 😊 \r\nIndeed users shouldn't have to install those dependencies",
"Got it, feel free to close this issue when you think it’s resolved.",
"It should be good now :)"
] |