Datasets:
url string | repository_url string | labels_url string | comments_url string | events_url string | html_url string | id int64 | node_id string | number int64 | title string | user dict | labels list | state string | locked bool | assignee dict | assignees list | milestone dict | comments int64 | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association string | type null | sub_issues_summary dict | active_lock_reason null | draft bool | pull_request dict | body string | closed_by dict | reactions dict | timeline_url string | performed_via_github_app null | state_reason string | is_pull_request bool |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/7523 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7523/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7523/comments | https://api.github.com/repos/huggingface/datasets/issues/7523/events | https://github.com/huggingface/datasets/pull/7523 | 2,999,616,692 | PR_kwDODunzps6S1r8w | 7,523 | mention av in video docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2025-04-16T13:11:12 | 2025-04-16T13:13:45 | 2025-04-16T13:11:42 | MEMBER | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7523",
"html_url": "https://github.com/huggingface/datasets/pull/7523",
"diff_url": "https://github.com/huggingface/datasets/pull/7523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7523.patch",
"merged_at": "2025-04-16T13:11... | null | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7523/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7523/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7522 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7522/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7522/comments | https://api.github.com/repos/huggingface/datasets/issues/7522/events | https://github.com/huggingface/datasets/pull/7522 | 2,998,169,017 | PR_kwDODunzps6SwwXW | 7,522 | Preserve formatting in concatenated IterableDataset | {
"login": "francescorubbo",
"id": 5140987,
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescorubbo",
"html_url": "https://github.com/francescorubbo",
"followers_url": "https://api.gith... | [] | open | false | null | [] | null | 0 | 2025-04-16T02:37:33 | 2025-04-16T02:37:33 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7522",
"html_url": "https://github.com/huggingface/datasets/pull/7522",
"diff_url": "https://github.com/huggingface/datasets/pull/7522.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7522.patch",
"merged_at": null
} | Fixes #7515 | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7522/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7522/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7521 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7521/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7521/comments | https://api.github.com/repos/huggingface/datasets/issues/7521/events | https://github.com/huggingface/datasets/pull/7521 | 2,997,666,366 | PR_kwDODunzps6SvEZp | 7,521 | fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517) | {
"login": "giraffacarp",
"id": 73196164,
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giraffacarp",
"html_url": "https://github.com/giraffacarp",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 1 | 2025-04-15T21:23:58 | 2025-04-16T06:57:22 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7521",
"html_url": "https://github.com/huggingface/datasets/pull/7521",
"diff_url": "https://github.com/huggingface/datasets/pull/7521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7521.patch",
"merged_at": null
} | ## Task
Support bytes-like objects (bytes and bytearray) in Features classes
### Description
The `Features` classes only accept `bytes` objects for binary data, but not `bytearray`. This leads to errors when using `IterableDataset.from_spark()` with Spark DataFrames, as they contain `bytearray` objects, even though... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7521/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7521/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7520 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7520/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7520/comments | https://api.github.com/repos/huggingface/datasets/issues/7520/events | https://github.com/huggingface/datasets/issues/7520 | 2,997,422,044 | I_kwDODunzps6yqQfc | 7,520 | Update items in the dataset without `map` | {
"login": "mashdragon",
"id": 122402293,
"node_id": "U_kgDOB0u19Q",
"avatar_url": "https://avatars.githubusercontent.com/u/122402293?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mashdragon",
"html_url": "https://github.com/mashdragon",
"followers_url": "https://api.github.com/users/mas... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2025-04-15T19:39:01 | 2025-04-15T19:39:01 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Feature request
I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue.
If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7520/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7520/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7519 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7519/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7519/comments | https://api.github.com/repos/huggingface/datasets/issues/7519/events | https://github.com/huggingface/datasets/pull/7519 | 2,996,458,961 | PR_kwDODunzps6Sq76Z | 7,519 | pdf docs fixes | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 1 | 2025-04-15T13:35:56 | 2025-04-15T13:38:31 | 2025-04-15T13:36:03 | MEMBER | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7519",
"html_url": "https://github.com/huggingface/datasets/pull/7519",
"diff_url": "https://github.com/huggingface/datasets/pull/7519.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7519.patch",
"merged_at": "2025-04-15T13:36... | close https://github.com/huggingface/datasets/issues/7494 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7519/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7519/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7518 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7518/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7518/comments | https://api.github.com/repos/huggingface/datasets/issues/7518/events | https://github.com/huggingface/datasets/issues/7518 | 2,996,141,825 | I_kwDODunzps6ylX8B | 7,518 | num_proc parallelization works only for first ~10s. | {
"login": "pshishodiaa",
"id": 33901783,
"node_id": "MDQ6VXNlcjMzOTAxNzgz",
"avatar_url": "https://avatars.githubusercontent.com/u/33901783?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pshishodiaa",
"html_url": "https://github.com/pshishodiaa",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 2 | 2025-04-15T11:44:03 | 2025-04-15T13:12:13 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds, achieving 30-40K samples/s with 100% utilization on all cores, but it soon drops to <= 1000 with almost 0% utilization on most cores.
### Steps to reproduce the bug
```
// do... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7518/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7518/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7517 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7517/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7517/comments | https://api.github.com/repos/huggingface/datasets/issues/7517/events | https://github.com/huggingface/datasets/issues/7517 | 2,996,106,077 | I_kwDODunzps6ylPNd | 7,517 | Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames | {
"login": "giraffacarp",
"id": 73196164,
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giraffacarp",
"html_url": "https://github.com/giraffacarp",
"followers_url": "https://api.github.com/... | [] | open | false | {
"login": "giraffacarp",
"id": 73196164,
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giraffacarp",
"html_url": "https://github.com/giraffacarp",
"followers_url": "https://api.github.com/... | [
{
"login": "giraffacarp",
"id": 73196164,
"node_id": "MDQ6VXNlcjczMTk2MTY0",
"avatar_url": "https://avatars.githubusercontent.com/u/73196164?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/giraffacarp",
"html_url": "https://github.com/giraffacarp",
"followers_url": "htt... | null | 4 | 2025-04-15T11:29:17 | 2025-04-15T15:57:15 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a col... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7517/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7517/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7516 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7516/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7516/comments | https://api.github.com/repos/huggingface/datasets/issues/7516/events | https://github.com/huggingface/datasets/issues/7516 | 2,995,780,283 | I_kwDODunzps6yj_q7 | 7,516 | unsloth/DeepSeek-R1-Distill-Qwen-32B server error | {
"login": "Editor-1",
"id": 164353862,
"node_id": "U_kgDOCcvXRg",
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Editor-1",
"html_url": "https://github.com/Editor-1",
"followers_url": "https://api.github.com/users/Editor-1/... | [] | closed | false | null | [] | null | 0 | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ... | {
"login": "Editor-1",
"id": 164353862,
"node_id": "U_kgDOCcvXRg",
"avatar_url": "https://avatars.githubusercontent.com/u/164353862?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Editor-1",
"html_url": "https://github.com/Editor-1",
"followers_url": "https://api.github.com/users/Editor-1/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7516/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7516/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7515 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7515/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7515/comments | https://api.github.com/repos/huggingface/datasets/issues/7515/events | https://github.com/huggingface/datasets/issues/7515 | 2,995,082,418 | I_kwDODunzps6yhVSy | 7,515 | `concatenate_datasets` does not preserve Pytorch format for IterableDataset | {
"login": "francescorubbo",
"id": 5140987,
"node_id": "MDQ6VXNlcjUxNDA5ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/5140987?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francescorubbo",
"html_url": "https://github.com/francescorubbo",
"followers_url": "https://api.gith... | [] | open | false | null | [] | null | 2 | 2025-04-15T04:36:34 | 2025-04-16T02:39:16 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7515/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7515/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7514 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7514/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7514/comments | https://api.github.com/repos/huggingface/datasets/issues/7514/events | https://github.com/huggingface/datasets/pull/7514 | 2,994,714,923 | PR_kwDODunzps6Sk7Et | 7,514 | Do not hash `generator` in `BuilderConfig.create_config_id` | {
"login": "simonreise",
"id": 43753582,
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonreise",
"html_url": "https://github.com/simonreise",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 0 | 2025-04-15T01:26:43 | 2025-04-15T16:27:51 | 2025-04-15T16:27:51 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7514",
"html_url": "https://github.com/huggingface/datasets/pull/7514",
"diff_url": "https://github.com/huggingface/datasets/pull/7514.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7514.patch",
"merged_at": null
} | `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, and hashing a `generator` can take a large amount of time or even cause MemoryError if the dataset processed in a ... | {
"login": "simonreise",
"id": 43753582,
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonreise",
"html_url": "https://github.com/simonreise",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7514/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7514/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7513 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7513/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7513/comments | https://api.github.com/repos/huggingface/datasets/issues/7513/events | https://github.com/huggingface/datasets/issues/7513 | 2,994,678,437 | I_kwDODunzps6yfyql | 7,513 | MemoryError while creating dataset from generator | {
"login": "simonreise",
"id": 43753582,
"node_id": "MDQ6VXNlcjQzNzUzNTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/43753582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/simonreise",
"html_url": "https://github.com/simonreise",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 3 | 2025-04-15T01:02:02 | 2025-04-15T17:49:39 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
# TL;DR
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7513/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7513/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7512 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7512/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7512/comments | https://api.github.com/repos/huggingface/datasets/issues/7512/events | https://github.com/huggingface/datasets/issues/7512 | 2,994,043,544 | I_kwDODunzps6ydXqY | 7,512 | .map() fails if function uses pyvista | {
"login": "el-hult",
"id": 11832922,
"node_id": "MDQ6VXNlcjExODMyOTIy",
"avatar_url": "https://avatars.githubusercontent.com/u/11832922?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/el-hult",
"html_url": "https://github.com/el-hult",
"followers_url": "https://api.github.com/users/el-hul... | [] | open | false | null | [] | null | 1 | 2025-04-14T19:43:02 | 2025-04-14T20:01:53 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7512/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7512/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7510 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7510/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7510/comments | https://api.github.com/repos/huggingface/datasets/issues/7510/events | https://github.com/huggingface/datasets/issues/7510 | 2,992,131,117 | I_kwDODunzps6yWEwt | 7,510 | Incompatibile dill version (0.3.9) in datasets 2.18.0 - 3.5.0 | {
"login": "JGrel",
"id": 98061329,
"node_id": "U_kgDOBdhMEQ",
"avatar_url": "https://avatars.githubusercontent.com/u/98061329?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JGrel",
"html_url": "https://github.com/JGrel",
"followers_url": "https://api.github.com/users/JGrel/followers",
... | [] | open | false | null | [] | null | 3 | 2025-04-14T07:22:44 | 2025-04-16T09:30:16 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9.
Could you please look into it and make it compatible?
### Steps to reproduce the bug
1. Install datasets >= 2.18.0
2. Install dill >=0.3.9
3. Run pip check
4. Output:
ERROR: pip's dependenc... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7510/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7510/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7509 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7509/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7509/comments | https://api.github.com/repos/huggingface/datasets/issues/7509/events | https://github.com/huggingface/datasets/issues/7509 | 2,991,484,542 | I_kwDODunzps6yTm5- | 7,509 | Dataset uses excessive memory when loading files | {
"login": "avishaiElmakies",
"id": 36810152,
"node_id": "MDQ6VXNlcjM2ODEwMTUy",
"avatar_url": "https://avatars.githubusercontent.com/u/36810152?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/avishaiElmakies",
"html_url": "https://github.com/avishaiElmakies",
"followers_url": "https://api... | [] | open | false | null | [] | null | 9 | 2025-04-13T21:09:49 | 2025-04-16T16:49:10 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
Hi
I am having an issue when loading a dataset.
I have about 200 JSON files, each about 1 GB (about 215 GB in total). Each row has a few features, each of which is a list of ints.
I am trying to load the dataset using `load_dataset`.
The dataset is about 1.5M samples
I use `num_proc=32` and a node with 378GB of... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7509/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7509/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7508 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7508/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7508/comments | https://api.github.com/repos/huggingface/datasets/issues/7508/events | https://github.com/huggingface/datasets/issues/7508 | 2,986,612,934 | I_kwDODunzps6yBBjG | 7,508 | Iterating over Image feature columns is extremely slow | {
"login": "sohamparikh",
"id": 11831521,
"node_id": "MDQ6VXNlcjExODMxNTIx",
"avatar_url": "https://avatars.githubusercontent.com/u/11831521?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sohamparikh",
"html_url": "https://github.com/sohamparikh",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 2 | 2025-04-10T19:00:54 | 2025-04-15T17:57:08 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow.
What I have found:
1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7508/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7508/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7507 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7507/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7507/comments | https://api.github.com/repos/huggingface/datasets/issues/7507/events | https://github.com/huggingface/datasets/issues/7507 | 2,984,309,806 | I_kwDODunzps6x4PQu | 7,507 | Front-end statistical data quantity deviation | {
"login": "rangehow",
"id": 88258534,
"node_id": "MDQ6VXNlcjg4MjU4NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/88258534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rangehow",
"html_url": "https://github.com/rangehow",
"followers_url": "https://api.github.com/users/ran... | [] | open | false | null | [] | null | 1 | 2025-04-10T02:51:38 | 2025-04-15T12:54:51 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7507/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7507/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7506 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7506/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7506/comments | https://api.github.com/repos/huggingface/datasets/issues/7506/events | https://github.com/huggingface/datasets/issues/7506 | 2,981,687,450 | I_kwDODunzps6xuPCa | 7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | {
"login": "calvintanama",
"id": 66202555,
"node_id": "MDQ6VXNlcjY2MjAyNTU1",
"avatar_url": "https://avatars.githubusercontent.com/u/66202555?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/calvintanama",
"html_url": "https://github.com/calvintanama",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 1 | 2025-04-09T06:32:04 | 2025-04-15T13:04:31 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7506/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7506/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7505 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7505/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7505/comments | https://api.github.com/repos/huggingface/datasets/issues/7505/events | https://github.com/huggingface/datasets/issues/7505 | 2,979,926,156 | I_kwDODunzps6xnhCM | 7,505 | HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy | {
"login": "hissain",
"id": 1412262,
"node_id": "MDQ6VXNlcjE0MTIyNjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/1412262?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hissain",
"html_url": "https://github.com/hissain",
"followers_url": "https://api.github.com/users/hissain/... | [] | open | false | null | [] | null | 0 | 2025-04-08T14:08:40 | 2025-04-08T14:08:40 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | I have already logged in to Hugging Face using the CLI with my valid token. Now I am trying to download the datasets using the following code:
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
from datasets import load_dataset, Data... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7505/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7505/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7504 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7504/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7504/comments | https://api.github.com/repos/huggingface/datasets/issues/7504/events | https://github.com/huggingface/datasets/issues/7504 | 2,979,410,641 | I_kwDODunzps6xljLR | 7,504 | BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key. | {
"login": "tteguayco",
"id": 20015750,
"node_id": "MDQ6VXNlcjIwMDE1NzUw",
"avatar_url": "https://avatars.githubusercontent.com/u/20015750?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tteguayco",
"html_url": "https://github.com/tteguayco",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 2 | 2025-04-08T10:55:03 | 2025-04-15T12:36:28 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
Trying to run the following fine-tuning script (based on this page [here](https://github.com/huggingface/instruction-tuned-sd)):
```
! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \
--pretrained_model_name_or_path=${MODEL_ID} \
--dataset_name=${DATASET_NAME... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7504/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7504/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7503 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7503/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7503/comments | https://api.github.com/repos/huggingface/datasets/issues/7503/events | https://github.com/huggingface/datasets/issues/7503 | 2,978,512,625 | I_kwDODunzps6xiH7x | 7,503 | Inconsistency between load_dataset and load_from_disk functionality | {
"login": "zzzzzec",
"id": 60975422,
"node_id": "MDQ6VXNlcjYwOTc1NDIy",
"avatar_url": "https://avatars.githubusercontent.com/u/60975422?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zzzzzec",
"html_url": "https://github.com/zzzzzec",
"followers_url": "https://api.github.com/users/zzzzze... | [] | open | false | null | [] | null | 1 | 2025-04-08T03:46:22 | 2025-04-15T12:39:53 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ## Issue Description
I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path:
```python
import datasets
ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
```
output:
```t... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7503/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7503/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7502 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7502/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7502/comments | https://api.github.com/repos/huggingface/datasets/issues/7502/events | https://github.com/huggingface/datasets/issues/7502 | 2,977,453,814 | I_kwDODunzps6xeFb2 | 7,502 | `load_dataset` of size 40GB creates a cache of >720GB | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2025-04-07T16:52:34 | 2025-04-15T15:22:12 | 2025-04-15T15:22:11 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
... | {
"login": "pietrolesci",
"id": 61748653,
"node_id": "MDQ6VXNlcjYxNzQ4NjUz",
"avatar_url": "https://avatars.githubusercontent.com/u/61748653?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pietrolesci",
"html_url": "https://github.com/pietrolesci",
"followers_url": "https://api.github.com/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7502/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7502/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7501 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7501/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7501/comments | https://api.github.com/repos/huggingface/datasets/issues/7501/events | https://github.com/huggingface/datasets/issues/7501 | 2,976,721,014 | I_kwDODunzps6xbSh2 | 7,501 | Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct | {
"login": "yaner-here",
"id": 26623948,
"node_id": "MDQ6VXNlcjI2NjIzOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaner-here",
"html_url": "https://github.com/yaner-here",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2025-04-07T12:35:39 | 2025-04-07T12:43:04 | 2025-04-07T12:43:03 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
`datasets.Features` seems to be unable to handle a JSON file that contains fields of `list[dict]`.
### Steps to reproduce the bug
```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```
```python
import json
from datasets i... | {
"login": "yaner-here",
"id": 26623948,
"node_id": "MDQ6VXNlcjI2NjIzOTQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/26623948?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yaner-here",
"html_url": "https://github.com/yaner-here",
"followers_url": "https://api.github.com/use... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7501/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7501/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7500 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7500/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7500/comments | https://api.github.com/repos/huggingface/datasets/issues/7500/events | https://github.com/huggingface/datasets/issues/7500 | 2,974,841,921 | I_kwDODunzps6xUHxB | 7,500 | Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class | {
"login": "benglewis",
"id": 3817460,
"node_id": "MDQ6VXNlcjM4MTc0NjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/3817460?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/benglewis",
"html_url": "https://github.com/benglewis",
"followers_url": "https://api.github.com/users/be... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2025-04-06T09:56:09 | 2025-04-15T12:57:39 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `DataLoader`, since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be g...
"url": "https://api.github.com/repos/huggingface/datasets/issues/7500/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7500/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7499 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7499/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7499/comments | https://api.github.com/repos/huggingface/datasets/issues/7499/events | https://github.com/huggingface/datasets/pull/7499 | 2,973,489,126 | PR_kwDODunzps6Rd4Zp | 7,499 | Added cache dirs to load and file_utils | {
"login": "gmongaras",
"id": 43501738,
"node_id": "MDQ6VXNlcjQzNTAxNzM4",
"avatar_url": "https://avatars.githubusercontent.com/u/43501738?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/gmongaras",
"html_url": "https://github.com/gmongaras",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | 4 | 2025-04-04T22:36:04 | 2025-04-15T13:19:15 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7499",
"html_url": "https://github.com/huggingface/datasets/pull/7499",
"diff_url": "https://github.com/huggingface/datasets/pull/7499.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7499.patch",
"merged_at": null
} | When adding "cache_dir" to datasets.load_dataset, the cache_dir gets lost in the function calls, changing the cache dir to the default path. This fixes a few of these instances. | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7499/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7499/timeline | null | null | true |
https://api.github.com/repos/huggingface/datasets/issues/7498 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7498/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7498/comments | https://api.github.com/repos/huggingface/datasets/issues/7498/events | https://github.com/huggingface/datasets/issues/7498 | 2,969,218,273 | I_kwDODunzps6w-qzh | 7,498 | Extreme memory bandwidth. | {
"login": "J0SZ",
"id": 185079645,
"node_id": "U_kgDOCwgXXQ",
"avatar_url": "https://avatars.githubusercontent.com/u/185079645?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/J0SZ",
"html_url": "https://github.com/J0SZ",
"followers_url": "https://api.github.com/users/J0SZ/followers",
"f... | [] | open | false | null | [] | null | 0 | 2025-04-03T11:09:08 | 2025-04-03T11:11:22 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
When I use HF datasets on 4 GPUs with 40 workers, I get extreme memory bandwidth of a constant ~3 GB/s.
However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker).
It seems like the workers don't share memory and b... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7498/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7498/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7497 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7497/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7497/comments | https://api.github.com/repos/huggingface/datasets/issues/7497/events | https://github.com/huggingface/datasets/issues/7497 | 2,968,553,693 | I_kwDODunzps6w8Ijd | 7,497 | How to convert videos to images? | {
"login": "tongvibe",
"id": 171649931,
"node_id": "U_kgDOCjsriw",
"avatar_url": "https://avatars.githubusercontent.com/u/171649931?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tongvibe",
"html_url": "https://github.com/tongvibe",
"followers_url": "https://api.github.com/users/tongvibe/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2025-04-03T07:08:39 | 2025-04-15T12:35:15 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Feature request
Does someone know how to extract images (frames) from videos?
### Motivation
I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset (V2.0 and V2.1). I find that although the dataset is v2.0, they are different. It seems like Lerobot V2.0 has two versi...
"url": "https://api.github.com/repos/huggingface/datasets/issues/7497/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7497/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7496 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7496/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7496/comments | https://api.github.com/repos/huggingface/datasets/issues/7496/events | https://github.com/huggingface/datasets/issues/7496 | 2,967,345,522 | I_kwDODunzps6w3hly | 7,496 | Json builder: Allow features to override problematic Arrow types | {
"login": "edmcman",
"id": 1017189,
"node_id": "MDQ6VXNlcjEwMTcxODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1017189?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/edmcman",
"html_url": "https://github.com/edmcman",
"followers_url": "https://api.github.com/users/edmcman/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 1 | 2025-04-02T19:27:16 | 2025-04-15T13:06:09 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Feature request
In the JSON builder, use explicitly requested feature types before or while converting to Arrow.
### Motivation
Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work around these problems by explicitly setting problematic colum...
"url": "https://api.github.com/repos/huggingface/datasets/issues/7496/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7496/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7495 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7495/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7495/comments | https://api.github.com/repos/huggingface/datasets/issues/7495/events | https://github.com/huggingface/datasets/issues/7495 | 2,967,034,060 | I_kwDODunzps6w2VjM | 7,495 | Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0 | {
"login": "bruno-hays",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bruno-hays",
"html_url": "https://github.com/bruno-hays",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 0 | 2025-04-02T17:01:11 | 2025-04-03T09:54:22 | null | CONTRIBUTOR | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since version 3.4.0, every column except audio is missing when I load the dataset.
Interestingly, the dataset viewer still shows the correct columns
### Steps to reproduce ... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7495/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7495/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7494 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7494/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7494/comments | https://api.github.com/repos/huggingface/datasets/issues/7494/events | https://github.com/huggingface/datasets/issues/7494 | 2,965,347,685 | I_kwDODunzps6wv51l | 7,494 | Broken links in pdf loading documentation | {
"login": "VyoJ",
"id": 75789232,
"node_id": "MDQ6VXNlcjc1Nzg5MjMy",
"avatar_url": "https://avatars.githubusercontent.com/u/75789232?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/VyoJ",
"html_url": "https://github.com/VyoJ",
"followers_url": "https://api.github.com/users/VyoJ/followers"... | [] | closed | false | null | [] | null | 1 | 2025-04-02T06:45:22 | 2025-04-15T13:36:25 | 2025-04-15T13:36:04 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):
1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.... | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7494/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7494/timeline | null | completed | false |
https://api.github.com/repos/huggingface/datasets/issues/7493 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7493/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7493/comments | https://api.github.com/repos/huggingface/datasets/issues/7493/events | https://github.com/huggingface/datasets/issues/7493 | 2,964,025,179 | I_kwDODunzps6wq29b | 7,493 | push_to_hub does not upload videos | {
"login": "DominikVincent",
"id": 9339403,
"node_id": "MDQ6VXNlcjkzMzk0MDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9339403?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DominikVincent",
"html_url": "https://github.com/DominikVincent",
"followers_url": "https://api.gith... | [] | open | false | null | [] | null | 1 | 2025-04-01T17:00:20 | 2025-04-15T12:34:23 | null | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | null | null | ### Describe the bug
Hello,
I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows.
I created a dataset locally; it references the videos, and the video readers can read them correctly. I u... | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7493/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7493/timeline | null | null | false |
https://api.github.com/repos/huggingface/datasets/issues/7492 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/7492/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/7492/comments | https://api.github.com/repos/huggingface/datasets/issues/7492/events | https://github.com/huggingface/datasets/pull/7492 | 2,959,088,568 | PR_kwDODunzps6QtCQM | 7,492 | Closes #7457 | {
"login": "Harry-Yang0518",
"id": 129883215,
"node_id": "U_kgDOB73cTw",
"avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harry-Yang0518",
"html_url": "https://github.com/Harry-Yang0518",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2025-03-30T20:41:20 | 2025-04-13T22:05:07 | 2025-04-13T22:05:07 | NONE | null | {
"total": 0,
"completed": 0,
"percent_completed": 0
} | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/7492",
"html_url": "https://github.com/huggingface/datasets/pull/7492",
"diff_url": "https://github.com/huggingface/datasets/pull/7492.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/7492.patch",
"merged_at": null
} | This PR updates the documentation to include the HF_DATASETS_CACHE environment variable, which allows users to customize the cache location for datasets—similar to HF_HUB_CACHE for models. | {
"login": "Harry-Yang0518",
"id": 129883215,
"node_id": "U_kgDOB73cTw",
"avatar_url": "https://avatars.githubusercontent.com/u/129883215?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Harry-Yang0518",
"html_url": "https://github.com/Harry-Yang0518",
"followers_url": "https://api.github.c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/7492/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/7492/timeline | null | null | true |
YAML Metadata Warning: The task_ids "token-classification-other-acronym-identification" is not in the official list.
Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
Dataset Details
Dataset Description
- Curated by: [More Information Needed]
- Funded by [optional]: [More Information Needed]
- Shared by [optional]: [More Information Needed]
- Language(s) (NLP): [More Information Needed]
- License: [More Information Needed]
Dataset Sources [optional]
- Repository: [More Information Needed]
- Paper [optional]: [More Information Needed]
- Demo [optional]: [More Information Needed]
Uses
Direct Use
[More Information Needed]
Out-of-Scope Use
[More Information Needed]
Dataset Structure
[More Information Needed]
Dataset Creation
Curation Rationale
[More Information Needed]
Source Data
Data Collection and Processing
[More Information Needed]
Who are the source data producers?
[More Information Needed]
Annotations [optional]
Annotation process
[More Information Needed]
Who are the annotators?
[More Information Needed]
Personal and Sensitive Information
[More Information Needed]
Bias, Risks, and Limitations
[More Information Needed]
Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Citation [optional]
BibTeX:
[More Information Needed]
APA:
[More Information Needed]
Glossary [optional]
[More Information Needed]
More Information [optional]
[More Information Needed]
Dataset Card Authors [optional]
[More Information Needed]
Dataset Card Contact
[More Information Needed]