id (int64) | number (int64) | title (string) | body (string) | state (string) | html_url (string) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | user (dict) | labels (list) | is_pull_request (bool) | comments (list)
|---|---|---|---|---|---|---|---|---|---|---|---|---|
3,123,962,709 | 7,597 | Download datasets from a private hub in 2025 | ### Feature request
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then l... | closed | https://github.com/huggingface/datasets/issues/7597 | 2025-06-06T07:55:19 | 2025-06-13T13:46:00 | 2025-06-13T13:46:00 | {
"login": "DanielSchuhmacher",
"id": 178552926,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
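For context, pointing the library at a non-public hub is commonly done through the `HF_ENDPOINT` environment variable; a minimal sketch, assuming a hypothetical private endpoint URL:

```python
import os

# Hypothetical private hub URL -- replace with your deployment's endpoint.
# It must be set before importing `datasets`/`huggingface_hub`, which read
# the endpoint at import time.
os.environ["HF_ENDPOINT"] = "https://hub.example.internal"

# With the endpoint set, load_dataset would resolve repos against the
# private hub instead of huggingface.co, e.g.:
# from datasets import load_dataset
# ds = load_dataset("my-org/my-private-dataset", token=True)
endpoint = os.environ["HF_ENDPOINT"]
```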
3,122,595,042 | 7,596 | Add albumentations to use dataset | 1. Fixed broken link to the list of transforms in torchvision.
2. Extended section about video image augmentations with an example from Albumentations. | closed | https://github.com/huggingface/datasets/pull/7596 | 2025-06-05T20:39:46 | 2025-06-17T18:38:08 | 2025-06-17T14:44:30 | {
"login": "ternaus",
"id": 5481618,
"type": "User"
} | [] | true | [] |
3,121,689,436 | 7,595 | Add `IterableDataset.push_to_hub()` | Basic implementation, which writes one shard per input dataset shard.
This is to be improved later.
Close https://github.com/huggingface/datasets/issues/5665
PS: for image/audio datasets structured as actual image/audio files (not parquet), you can sometimes speed it up with `ds.decode(num_threads=...).push_to_h... | closed | https://github.com/huggingface/datasets/pull/7595 | 2025-06-05T15:29:32 | 2025-06-06T16:12:37 | 2025-06-06T16:12:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,120,799,626 | 7,594 | Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format) | ### Feature request
Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl).
### Motivation
I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my ... | open | https://github.com/huggingface/datasets/issues/7594 | 2025-06-05T11:12:45 | 2025-06-28T09:03:00 | null | {
"login": "avishaiElmakies",
"id": 36810152,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
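One workaround for an unclean column, pending native support, is to strip the offending keys from the JSON-lines file before loading; a stdlib-only sketch (the file paths and column name are illustrative):

```python
import json
import os
import tempfile

def drop_keys(src_path, dst_path, keys_to_drop):
    """Copy a JSON-lines file, removing the given keys from every record."""
    with open(src_path) as src, open(dst_path, "w") as dst:
        for line in src:
            record = json.loads(line)
            for key in keys_to_drop:
                record.pop(key, None)  # tolerate records missing the key
            dst.write(json.dumps(record) + "\n")

# Demo: "messy" holds a different type in each row, which breaks type inference.
rows = [{"text": "a", "messy": 1}, {"text": "b", "messy": [2]}]
tmpdir = tempfile.mkdtemp()
src = os.path.join(tmpdir, "data.jsonl")
dst = os.path.join(tmpdir, "clean.jsonl")
with open(src, "w") as f:
    for r in rows:
        f.write(json.dumps(r) + "\n")

drop_keys(src, dst, keys_to_drop=["messy"])
with open(dst) as f:
    cleaned = [json.loads(line) for line in f]
```

The cleaned file can then be passed to `load_dataset("json", data_files=dst)`.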
3,118,812,368 | 7,593 | Fix broken link to albumentations | A few months back I rewrote all docs at [https://albumentations.ai/docs](https://albumentations.ai/docs), and some pages changed their links.
This PR fixes the link to point to the most recent Albumentations doc about bounding boxes and their format.
It also fixes a few typos in the doc. | closed | https://github.com/huggingface/datasets/pull/7593 | 2025-06-04T19:00:13 | 2025-06-05T16:37:02 | 2025-06-05T16:36:32 | {
"login": "ternaus",
"id": 5481618,
"type": "User"
} | [] | true | [] |
3,118,203,880 | 7,592 | Remove scripts altogether | TODO:
- [x] replace script-based fixtures with no-script fixtures
- [x] windaube | closed | https://github.com/huggingface/datasets/pull/7592 | 2025-06-04T15:14:11 | 2025-07-16T18:59:07 | 2025-06-09T16:45:27 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,117,816,388 | 7,591 | Add num_proc parameter to push_to_hub | ### Feature request
A number of processes parameter to the dataset.push_to_hub method
### Motivation
Shards are currently uploaded serially, which makes it slow when there are many shards; uploading them in parallel would be much faster
| open | https://github.com/huggingface/datasets/issues/7591 | 2025-06-04T13:19:15 | 2025-06-27T06:13:54 | null | {
"login": "SwayStar123",
"id": 46050679,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
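The requested behavior can be sketched with a thread pool; `upload_shard` below is a hypothetical stand-in for the real per-shard upload call:

```python
from concurrent.futures import ThreadPoolExecutor

def upload_shard(shard_id):
    # Hypothetical stand-in for uploading one shard (e.g. an HTTP PUT).
    return f"shard-{shard_id:05d}-uploaded"

shard_ids = list(range(8))
# num_proc-style parallelism: upload up to 4 shards at a time instead of
# one after another; pool.map preserves the shard order in its results.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(upload_shard, shard_ids))
```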
3,101,654,892 | 7,590 | `Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema. | ### Description
When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error:
```
ArrowNotImplementedError: Unsupported cast from list<item: st... | closed | https://github.com/huggingface/datasets/issues/7590 | 2025-05-29T22:53:36 | 2025-07-19T22:45:08 | 2025-07-19T22:45:08 | {
"login": "AHS-uni",
"id": 183279820,
"type": "User"
} | [] | false | [] |
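A likely source of this mismatch: `Sequence` applied to a dict of sub-features describes a struct of lists (one list per field), not a list of structs. A plain-Python sketch of that transposition (the field names are illustrative):

```python
def as_struct_of_lists(records):
    """Transpose a list of dicts into a dict of lists -- the shape that
    Sequence({...}) describes, as opposed to a list of structs."""
    keys = records[0].keys()
    return {key: [record[key] for record in records] for key in keys}

list_of_structs = [{"x": 1, "y": "a"}, {"x": 2, "y": "b"}]
struct_of_lists = as_struct_of_lists(list_of_structs)
```

Declaring the field with list notation in `Features` instead (e.g. `{"field": [<sub-features>]}`) is commonly suggested as the way to get a true `list<struct<...>>`.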
3,101,119,704 | 7,589 | feat: use content defined chunking | WIP:
- [x] set the parameters in `io.parquet.ParquetDatasetReader`
- [x] set the parameters in `arrow_writer.ParquetWriter`
It requires a new pyarrow pin ">=21.0.0" which is not yet released. | open | https://github.com/huggingface/datasets/pull/7589 | 2025-05-29T18:19:41 | 2025-06-17T15:04:07 | null | {
"login": "kszucs",
"id": 961747,
"type": "User"
} | [] | true | [] |
3,094,012,025 | 7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | ### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that I've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
Now I changed a few hyperparameters to increase the number of tokens for the model,... | closed | https://github.com/huggingface/datasets/issues/7588 | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 | {
"login": "wkambale",
"id": 43061081,
"type": "User"
} | [] | false | [] |
3,091,834,987 | 7,587 | load_dataset splits typing | close https://github.com/huggingface/datasets/issues/7583 | closed | https://github.com/huggingface/datasets/pull/7587 | 2025-05-26T18:28:40 | 2025-05-26T18:31:10 | 2025-05-26T18:29:57 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,091,320,431 | 7,586 | help is appreciated | ### Feature request
https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main
### Motivation
AI model development and audio
### Your contribution
AI model development and audio | open | https://github.com/huggingface/datasets/issues/7586 | 2025-05-26T14:00:42 | 2025-05-26T18:21:57 | null | {
"login": "rajasekarnp1",
"id": 54931785,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
3,091,227,921 | 7,585 | Avoid multiple default config names | Fix duplicating default config names.
Currently, when calling `push_to_hub(set_default=True)` with 2 different config names, both are set as default.
Moreover, this will generate an error next time we try to push another default config name, raised by `MetadataConfigs.get_default_config_name`:
https://github.com... | closed | https://github.com/huggingface/datasets/pull/7585 | 2025-05-26T13:27:59 | 2025-06-05T12:41:54 | 2025-06-05T12:41:52 | {
"login": "albertvillanova",
"id": 8515462,
"type": "User"
} | [] | true | [] |
3,090,255,023 | 7,584 | Add LMDB format support | ### Feature request
Add LMDB format support for large memory-mapping files
### Motivation
Add LMDB format support for large memory-mapping files
### Your contribution
I'm trying to add it | open | https://github.com/huggingface/datasets/issues/7584 | 2025-05-26T07:10:13 | 2025-05-26T18:23:37 | null | {
"login": "trotsky1997",
"id": 30512160,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
3,088,987,757 | 7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | ### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type che... | closed | https://github.com/huggingface/datasets/issues/7583 | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 | {
"login": "hierr",
"id": 25069969,
"type": "User"
} | [] | false | [] |
3,083,515,643 | 7,582 | fix: Add embed_storage in Pdf feature | Add missing `embed_storage` method in Pdf feature (Same as in Audio and Image) | closed | https://github.com/huggingface/datasets/pull/7582 | 2025-05-22T14:06:29 | 2025-05-22T14:17:38 | 2025-05-22T14:17:36 | {
"login": "AndreaFrancis",
"id": 5564745,
"type": "User"
} | [] | true | [] |
3,083,080,413 | 7,581 | Add missing property on `RepeatExamplesIterable` | Fixes #7561 | closed | https://github.com/huggingface/datasets/pull/7581 | 2025-05-22T11:41:07 | 2025-06-05T12:41:30 | 2025-06-05T12:41:29 | {
"login": "SilvanCodes",
"id": 42788329,
"type": "User"
} | [] | true | [] |
3,082,993,027 | 7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | ### Describe the bug
When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.
This behavior leads to unnecessary band... | open | https://github.com/huggingface/datasets/issues/7580 | 2025-05-22T11:08:16 | 2025-05-26T18:40:31 | null | {
"login": "s3pi",
"id": 48768216,
"type": "User"
} | [] | false | [] |
3,081,849,022 | 7,579 | Fix typos in PDF and Video documentation | null | closed | https://github.com/huggingface/datasets/pull/7579 | 2025-05-22T02:27:40 | 2025-05-22T12:53:49 | 2025-05-22T12:53:47 | {
"login": "AndreaFrancis",
"id": 5564745,
"type": "User"
} | [] | true | [] |
3,080,833,740 | 7,577 | arrow_schema is not compatible with list | ### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
... | closed | https://github.com/huggingface/datasets/issues/7577 | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 | {
"login": "jonathanshen-upwork",
"id": 164412025,
"type": "User"
} | [] | false | [] |
3,080,450,538 | 7,576 | Fix regex library warnings | # PR Summary
This small PR resolves the regex library warnings that appear starting with Python 3.11:
```python
DeprecationWarning: 'count' is passed as positional argument
``` | closed | https://github.com/huggingface/datasets/pull/7576 | 2025-05-21T14:31:58 | 2025-06-05T13:35:16 | 2025-06-05T12:37:55 | {
"login": "emmanuel-ferdman",
"id": 35470921,
"type": "User"
} | [] | true | [] |
3,080,228,718 | 7,575 | [MINOR:TYPO] Update save_to_disk docstring | r/hub/filesystem in save_to_disk | closed | https://github.com/huggingface/datasets/pull/7575 | 2025-05-21T13:22:24 | 2025-06-05T12:39:13 | 2025-06-05T12:39:13 | {
"login": "cakiki",
"id": 3664563,
"type": "User"
} | [] | true | [] |
3,079,641,072 | 7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | ### Describe the bug
Hi,
Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the ... | open | https://github.com/huggingface/datasets/issues/7574 | 2025-05-21T09:53:17 | 2025-05-26T18:36:38 | null | {
"login": "andy-joy-25",
"id": 79297451,
"type": "User"
} | [] | false | [] |
3,076,415,382 | 7,573 | No Samsum dataset | ### Describe the bug
https://huggingface.co/datasets/Samsung/samsum dataset not found error 404
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
go to website https://huggingface.co/datasets/Samsung/samsum
see the error
also downloading it with python throws
`... | closed | https://github.com/huggingface/datasets/issues/7573 | 2025-05-20T09:54:35 | 2025-07-21T18:34:34 | 2025-06-18T12:52:23 | {
"login": "IgorKasianenko",
"id": 17688220,
"type": "User"
} | [] | false | [] |
3,074,529,251 | 7,572 | Fixed typos | More info: [comment](https://github.com/huggingface/datasets/pull/7564#issuecomment-2863391781). | closed | https://github.com/huggingface/datasets/pull/7572 | 2025-05-19T17:16:59 | 2025-06-05T12:25:42 | 2025-06-05T12:25:41 | {
"login": "TopCoder2K",
"id": 47208659,
"type": "User"
} | [] | true | [] |
3,074,116,942 | 7,571 | fix string_to_dict test | null | closed | https://github.com/huggingface/datasets/pull/7571 | 2025-05-19T14:49:23 | 2025-05-19T14:52:24 | 2025-05-19T14:49:28 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,065,966,529 | 7,570 | Dataset lib seems broken after fsspec lib update | ### Describe the bug
Since today I am facing an issue where HF's datasets lib is acting weird and in some instances fails to recognise a valid dataset entirely. I think it is happening due to a recent change in the `fsspec` lib, as using this command fixed it for me one time: `!pip install -U datasets huggingface_hub fsspec`... | closed | https://github.com/huggingface/datasets/issues/7570 | 2025-05-15T11:45:06 | 2025-06-13T00:44:27 | 2025-06-13T00:44:27 | {
"login": "sleepingcat4",
"id": 81933585,
"type": "User"
} | [] | false | [] |
3,061,234,054 | 7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | ### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Running this code:
```python
from datasets import Dataset, Features,... | open | https://github.com/huggingface/datasets/issues/7569 | 2025-05-13T21:06:45 | 2025-05-20T19:25:15 | null | {
"login": "TimSchneider42",
"id": 25732590,
"type": "User"
} | [] | false | [] |
3,060,515,257 | 7,568 | `IterableDatasetDict.map()` call removes `column_names` (in fact info.features) | When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relie... | open | https://github.com/huggingface/datasets/issues/7568 | 2025-05-13T15:45:42 | 2025-06-30T09:33:47 | null | {
"login": "mombip",
"id": 7893763,
"type": "User"
} | [] | false | [] |
3,058,308,538 | 7,567 | interleave_datasets seed with multiple workers | ### Describe the bug
Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.
Should the seed be modulated with the worker id?
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
- `datasets` ve... | open | https://github.com/huggingface/datasets/issues/7567 | 2025-05-12T22:38:27 | 2025-06-29T06:53:59 | null | {
"login": "jonathanasdf",
"id": 511073,
"type": "User"
} | [] | false | [] |
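The suggestion in the report, modulating the seed with the worker id, can be sketched with the stdlib (the derivation `base_seed + worker_id` is one possible scheme, not the library's):

```python
import random

def worker_rng(base_seed, worker_id):
    # Fold the worker id into the seed so each worker samples in a
    # different order while runs stay reproducible overall.
    return random.Random(base_seed + worker_id)

base_seed = 42
orders = []
for worker_id in range(2):
    items = list(range(10))
    worker_rng(base_seed, worker_id).shuffle(items)
    orders.append(items)
# Same base seed, different worker ids -> different sampling orders;
# re-running with the same (base_seed, worker_id) reproduces each order.
```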
3,055,279,344 | 7,566 | terminate called without an active exception; Aborted (core dumped) | ### Describe the bug
I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with an abort.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```
$ cat main.py
#!/usr/bin/env python3
from datasets import load_dataset
dataset = load_dataset('HuggingFaceFW/fineweb', spl... | open | https://github.com/huggingface/datasets/issues/7566 | 2025-05-11T23:05:54 | 2025-06-23T17:56:02 | null | {
"login": "alexey-milovidov",
"id": 18581488,
"type": "User"
} | [] | false | [] |
3,051,731,207 | 7,565 | add check if repo exists for dataset uploading | Currently, I'm reuploading datasets for [`MTEB`](https://github.com/embeddings-benchmark/mteb/). Some of them have many splits (more than 20), and I'm encountering the error:
`Too many requests for https://huggingface.co/datasets/repo/create`.
It seems that this issue occurs because the dataset tries to recreate it... | open | https://github.com/huggingface/datasets/pull/7565 | 2025-05-09T10:27:00 | 2025-06-09T14:39:23 | null | {
"login": "Samoed",
"id": 36135455,
"type": "User"
} | [] | true | [] |
3,049,275,226 | 7,564 | Implementation of iteration over values of a column in an IterableDataset object | Refers to [this issue](https://github.com/huggingface/datasets/issues/7381). | closed | https://github.com/huggingface/datasets/pull/7564 | 2025-05-08T14:59:22 | 2025-05-19T12:15:02 | 2025-05-19T12:15:02 | {
"login": "TopCoder2K",
"id": 47208659,
"type": "User"
} | [] | true | [] |
3,046,351,253 | 7,563 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7563 | 2025-05-07T15:18:29 | 2025-05-07T15:21:05 | 2025-05-07T15:18:36 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,046,339,430 | 7,562 | release: 3.6.0 | null | closed | https://github.com/huggingface/datasets/pull/7562 | 2025-05-07T15:15:13 | 2025-05-07T15:17:46 | 2025-05-07T15:15:21 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,046,302,653 | 7,561 | NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet | ### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than ... | closed | https://github.com/huggingface/datasets/issues/7561 | 2025-05-07T15:05:42 | 2025-06-05T12:41:30 | 2025-06-05T12:41:30 | {
"login": "cyanic-selkie",
"id": 32219669,
"type": "User"
} | [] | false | [] |
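The "trivial fix" alluded to in the linked thread is a delegating property; a minimal sketch with toy classes (not the real `datasets` internals):

```python
class FakeShardedIterable:
    """Toy stand-in for a sharded examples iterable."""
    num_shards = 4

    def __iter__(self):
        return iter(["ex-0", "ex-1"])

class RepeatExamplesIterable:
    """Toy repeat wrapper that forwards num_shards to its inner iterable --
    the kind of property the NotImplementedError says is missing."""
    def __init__(self, inner, num_times):
        self.inner = inner
        self.num_times = num_times

    @property
    def num_shards(self):
        return self.inner.num_shards

    def __iter__(self):
        for _ in range(self.num_times):
            yield from self.inner

repeated = RepeatExamplesIterable(FakeShardedIterable(), num_times=3)
```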
3,046,265,500 | 7,560 | fix decoding tests | null | closed | https://github.com/huggingface/datasets/pull/7560 | 2025-05-07T14:56:14 | 2025-05-07T14:59:02 | 2025-05-07T14:56:20 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,046,177,078 | 7,559 | fix aiohttp import | null | closed | https://github.com/huggingface/datasets/pull/7559 | 2025-05-07T14:31:32 | 2025-05-07T14:34:34 | 2025-05-07T14:31:38 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,046,066,628 | 7,558 | fix regression | reported in https://github.com/huggingface/datasets/pull/7557 (I just reorganized the condition)
I wanted to apply this change to the original PR but GitHub didn't let me apply it directly, so I'm merging this one instead | closed | https://github.com/huggingface/datasets/pull/7558 | 2025-05-07T13:56:03 | 2025-05-07T13:58:52 | 2025-05-07T13:56:18 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,045,962,076 | 7,557 | check for empty _formatting | Fixes a regression from #7553 breaking shuffling of iterable datasets
<img width="884" alt="Screenshot 2025-05-07 at 9 16 52 AM" src="https://github.com/user-attachments/assets/d2f43c5f-4092-4efe-ac31-a32cbd025fe3" />
| closed | https://github.com/huggingface/datasets/pull/7557 | 2025-05-07T13:22:37 | 2025-05-07T13:57:12 | 2025-05-07T13:57:12 | {
"login": "winglian",
"id": 381258,
"type": "User"
} | [] | true | [] |
3,043,615,210 | 7,556 | Add `--merge-pull-request` option for `convert_to_parquet` | Closes #7527
Note that this implementation **will only merge the last PR in the case that they get split up by `push_to_hub`**. See https://github.com/huggingface/datasets/discussions/7555 for more details. | closed | https://github.com/huggingface/datasets/pull/7556 | 2025-05-06T18:05:05 | 2025-07-18T19:09:10 | 2025-07-18T19:09:10 | {
"login": "klamike",
"id": 17013474,
"type": "User"
} | [] | true | [] |
3,043,089,844 | 7,554 | datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script) | ### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actual... | closed | https://github.com/huggingface/datasets/issues/7554 | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 | {
"login": "sei-eschwartz",
"id": 50171988,
"type": "User"
} | [] | false | [] |
3,042,953,907 | 7,553 | Rebatch arrow iterables before formatted iterable | close https://github.com/huggingface/datasets/issues/7538 and https://github.com/huggingface/datasets/issues/7475 | closed | https://github.com/huggingface/datasets/pull/7553 | 2025-05-06T13:59:58 | 2025-05-07T13:17:41 | 2025-05-06T14:03:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,040,258,084 | 7,552 | Enable xet in push to hub | follows https://github.com/huggingface/huggingface_hub/pull/3035
related to https://github.com/huggingface/datasets/issues/7526 | closed | https://github.com/huggingface/datasets/pull/7552 | 2025-05-05T17:02:09 | 2025-05-06T12:42:51 | 2025-05-06T12:42:48 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,038,114,928 | 7,551 | Issue with offline mode and partial dataset cached | ### Describe the bug
Hi,
an issue related to #4760: after loading a single file from a dataset, it cannot be accessed in offline mode afterwards
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/... | open | https://github.com/huggingface/datasets/issues/7551 | 2025-05-04T16:49:37 | 2025-05-13T03:18:43 | null | {
"login": "nrv",
"id": 353245,
"type": "User"
} | [] | false | [] |
3,037,017,367 | 7,550 | disable aiohttp depend for python 3.13t free-threading compat | null | closed | https://github.com/huggingface/datasets/pull/7550 | 2025-05-03T00:28:18 | 2025-05-03T00:28:24 | 2025-05-03T00:28:24 | {
"login": "Qubitium",
"id": 417764,
"type": "User"
} | [] | true | [] |
3,036,272,015 | 7,549 | TypeError: Couldn't cast array of type string to null on webdataset format dataset | ### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarro... | open | https://github.com/huggingface/datasets/issues/7549 | 2025-05-02T15:18:07 | 2025-05-02T15:37:05 | null | {
"login": "narugo1992",
"id": 117186571,
"type": "User"
} | [] | false | [] |
3,035,568,851 | 7,548 | Python 3.13t (free threads) Compat | ### Describe the bug
Cannot install `datasets` under `python 3.13t` due to the dependency on `aiohttp`, which cannot be built for free-threading Python.
The `free threading` support issue in `aiohttp` has been open since August 2024! Ouch.
https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784
`pip install... | open | https://github.com/huggingface/datasets/issues/7548 | 2025-05-02T09:20:09 | 2025-05-12T15:11:32 | null | {
"login": "Qubitium",
"id": 417764,
"type": "User"
} | [] | false | [] |
3,034,830,291 | 7,547 | Avoid global umask for setting file mode. | This PR updates the method for setting the permissions on `cache_path` after calling `shutil.move`. The call to `shutil.move` may not preserve permissions if the source and destination are on different filesystems. Reading and resetting umask can cause race conditions, so directly read what permissions were set for the... | closed | https://github.com/huggingface/datasets/pull/7547 | 2025-05-01T22:24:24 | 2025-05-06T13:05:00 | 2025-05-06T13:05:00 | {
"login": "ryan-clancy",
"id": 1282383,
"type": "User"
} | [] | true | [] |
3,034,018,298 | 7,546 | Large memory use when loading large datasets to a ZFS pool | ### Describe the bug
When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train... | closed | https://github.com/huggingface/datasets/issues/7546 | 2025-05-01T14:43:47 | 2025-05-13T13:30:09 | 2025-05-13T13:29:53 | {
"login": "FredHaa",
"id": 6875946,
"type": "User"
} | [] | false | [] |
3,031,617,547 | 7,545 | Networked Pull Through Cache | ### Feature request
Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
### Motivation
- Dis... | open | https://github.com/huggingface/datasets/issues/7545 | 2025-04-30T15:16:33 | 2025-04-30T15:16:33 | null | {
"login": "wrmedford",
"id": 8764173,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
3,027,024,285 | 7,544 | Add try_original_type to DatasetDict.map | This PR resolves #7472 for DatasetDict
The previously merged PR #7483 added `try_original_type` to ArrowDataset, but `DatasetDict` was still missing `try_original_type`
Cc: @lhoestq | closed | https://github.com/huggingface/datasets/pull/7544 | 2025-04-29T04:39:44 | 2025-05-05T14:42:49 | 2025-05-05T14:42:49 | {
"login": "yoshitomo-matsubara",
"id": 11156001,
"type": "User"
} | [] | true | [] |
3,026,867,706 | 7,543 | The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.) | ### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_... | closed | https://github.com/huggingface/datasets/issues/7543 | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 | {
"login": "jxma20",
"id": 76415358,
"type": "User"
} | [] | false | [] |
3,025,054,630 | 7,542 | set dev version | null | closed | https://github.com/huggingface/datasets/pull/7542 | 2025-04-28T14:03:48 | 2025-04-28T14:08:37 | 2025-04-28T14:04:00 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,025,045,919 | 7,541 | release: 3.5.1 | null | closed | https://github.com/huggingface/datasets/pull/7541 | 2025-04-28T14:00:59 | 2025-04-28T14:03:38 | 2025-04-28T14:01:54 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,024,862,966 | 7,540 | support pyarrow 20 | fix
```
TypeError: ArrayExtensionArray.to_pylist() got an unexpected keyword argument 'maps_as_pydicts'
``` | closed | https://github.com/huggingface/datasets/pull/7540 | 2025-04-28T13:01:11 | 2025-04-28T13:23:53 | 2025-04-28T13:23:52 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
3,023,311,163 | 7,539 | Fix IterableDataset state_dict shard_example_idx counting | # Fix IterableDataset's state_dict shard_example_idx reporting
## Description
This PR fixes issue #7475 where the `shard_example_idx` value in `IterableDataset`'s `state_dict()` always equals the number of samples in a shard, even if only a few examples have been consumed.
The issue is in the `_iter_arrow` met... | closed | https://github.com/huggingface/datasets/pull/7539 | 2025-04-27T20:41:18 | 2025-05-06T14:24:25 | 2025-05-06T14:24:24 | {
"login": "Harry-Yang0518",
"id": 129883215,
"type": "User"
} | [] | true | [] |
3,023,280,056 | 7,538 | `IterableDataset` drops samples when resuming from a checkpoint | When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one ... | closed | https://github.com/huggingface/datasets/issues/7538 | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 | {
"login": "mariosasko",
"id": 47462742,
"type": "User"
} | [
{
"name": "bug",
"color": "d73a4a"
}
] | false | [] |
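The underlying resumption problem can be pictured with a stdlib sketch: checkpoint the index of the next example within the shard and skip exactly that many on resume (a simplification of the real state_dict mechanism):

```python
def iter_with_state(examples, state=None):
    """Yield (state, example) pairs; `state` records how many examples
    have been consumed so a resume can skip them without dropping any."""
    start = state["example_idx"] if state else 0
    for idx, example in enumerate(examples):
        if idx < start:
            continue
        yield {"example_idx": idx + 1}, example

examples = ["a", "b", "c", "d"]
seen = []
stream = iter_with_state(examples)
for _ in range(2):  # consume two examples, then "crash"
    state, example = next(stream)
    seen.append(example)
for state, example in iter_with_state(examples, state):  # resume
    seen.append(example)
# seen == ["a", "b", "c", "d"]: nothing dropped, nothing repeated.
```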
3,018,792,966 | 7,537 | `datasets.map(..., num_proc=4)` multi-processing fails | The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.ru... | open | https://github.com/huggingface/datasets/issues/7537 | 2025-04-25T01:53:47 | 2025-05-06T13:12:08 | null | {
"login": "faaany",
"id": 24477841,
"type": "User"
} | [] | false | [] |
3,018,425,549 | 7,536 | [Errno 13] Permission denied: on `.incomplete` file | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet... | closed | https://github.com/huggingface/datasets/issues/7536 | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 | {
"login": "ryan-clancy",
"id": 1282383,
"type": "User"
} | [] | false | [] |
3,018,289,872 | 7,535 | Change dill version in requirements | Change dill version to >=0.3.9,<0.4.5 and check for errors | open | https://github.com/huggingface/datasets/pull/7535 | 2025-04-24T19:44:28 | 2025-05-19T14:51:29 | null | {
"login": "JGrel",
"id": 98061329,
"type": "User"
} | [] | true | [] |
3,017,259,407 | 7,534 | TensorFlow RaggedTensor Support (batch-level) | ### Feature request
Hi,
Currently datasets does not support RaggedTensor output on batch-level.
When building a Object Detection Dataset (with TensorFlow) I need to enable RaggedTensors as that's how BBoxes & classes are expected from the Keras Model POV.
Currently there's an error thrown saying that "Nested Data is ... | open | https://github.com/huggingface/datasets/issues/7534 | 2025-04-24T13:14:52 | 2025-06-30T17:03:39 | null | {
"login": "Lundez",
"id": 7490199,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
3,015,075,086 | 7,533 | Add custom fingerprint support to `from_generator` | This PR adds `dataset_id_suffix` parameter to 'Dataset.from_generator' function.
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount ... | open | https://github.com/huggingface/datasets/pull/7533 | 2025-04-23T19:31:35 | 2025-07-10T09:29:35 | null | {
"login": "simonreise",
"id": 43753582,
"type": "User"
} | [] | true | [] |
3,009,546,204 | 7,532 | Document the HF_DATASETS_CACHE environment variable in the datasets cache documentation |
This pull request updates the Datasets documentation to include the `HF_DATASETS_CACHE` environment variable. While the current documentation only mentions `HF_HOME` for overriding the default cache directory, `HF_DATASETS_CACHE` is also a supported and useful option for specifying a custom cache location for dataset... | closed | https://github.com/huggingface/datasets/pull/7532 | 2025-04-22T00:23:13 | 2025-05-06T15:54:38 | 2025-05-06T15:54:38 | {
"login": "Harry-Yang0518",
"id": 129883215,
"type": "User"
} | [] | true | [] |
3,008,914,887 | 7,531 | Deepspeed reward training hangs at end of training with Dataset.from_list | There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a s... | open | https://github.com/huggingface/datasets/issues/7531 | 2025-04-21T17:29:20 | 2025-06-29T06:20:45 | null | {
"login": "Matt00n",
"id": 60710414,
"type": "User"
} | [] | false | [] |
3,007,452,499 | 7,530 | How to solve "Spaces stuck in Building" problems | ### Describe the bug
Public spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401... | closed | https://github.com/huggingface/datasets/issues/7530 | 2025-04-21T03:08:38 | 2025-04-22T07:49:52 | 2025-04-22T07:49:52 | {
"login": "ghost",
"id": 10137,
"type": "User"
} | [] | false | [] |
3,007,118,969 | 7,529 | audio folder builder cannot detect custom split name | ### Describe the bug
when using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test
### Steps to reproduce the bug
i have the following folder structure
```
my_dataset/
├── train/
│ ├── lorem.wav
│ ├── …
│ └── met... | open | https://github.com/huggingface/datasets/issues/7529 | 2025-04-20T16:53:21 | 2025-04-20T16:53:21 | null | {
"login": "phineas-pta",
"id": 37548991,
"type": "User"
} | [] | false | [] |
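The split-detection limitation reported in issue 7529 above can be sketched in plain Python. This is a hypothetical helper, not the actual `audiofolder` builder internals; it only illustrates the reported behavior where non-default split folders are silently dropped:

```python
import tempfile
from pathlib import Path

DEFAULT_SPLITS = {"train", "validation", "test"}

def detect_splits(root: Path) -> list[str]:
    # Sketch of the reported behavior: only directories whose names match
    # the default split names are recognized; a custom folder such as
    # "met" is silently ignored instead of becoming its own split.
    return sorted(
        d.name for d in root.iterdir()
        if d.is_dir() and d.name in DEFAULT_SPLITS
    )

with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    for name in ("train", "met"):   # "met" is the custom split from the report
        (root / name).mkdir()
    print(detect_splits(root))      # → ['train']
```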
3,006,433,485 | 7,528 | Data Studio Error: Convert JSONL incorrectly | ### Describe the bug
Hi there,
I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" value for the whole dataset. Therefore, anyone who downloaded the dataset via the API would get the wrong "bboxes" value in the data file.
Could ... | open | https://github.com/huggingface/datasets/issues/7528 | 2025-04-19T13:21:44 | 2025-05-06T13:18:38 | null | {
"login": "zxccade",
"id": 144962041,
"type": "User"
} | [] | false | [] |
3,005,242,422 | 7,527 | Auto-merge option for `convert-to-parquet` | ### Feature request
Add a command-line option, e.g. `--auto-merge-pull-request`, that enables automatic merging of the commits created by the `convert-to-parquet` tool.
### Motivation
Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.
... | closed | https://github.com/huggingface/datasets/issues/7527 | 2025-04-18T16:03:22 | 2025-07-18T19:09:03 | 2025-07-18T19:09:03 | {
"login": "klamike",
"id": 17013474,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
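The feature requested in issue 7527 above amounts to one extra CLI flag. A hedged argparse sketch follows; the flag name comes from the request itself, and the real `convert-to-parquet` CLI wiring may differ:

```python
import argparse

# Hypothetical parser mirroring the requested flag; only for illustration.
parser = argparse.ArgumentParser(prog="datasets-cli convert-to-parquet")
parser.add_argument(
    "--auto-merge-pull-request",
    action="store_true",
    help="automatically merge the PR created for each split-up commit",
)

args = parser.parse_args(["--auto-merge-pull-request"])
print(args.auto_merge_pull_request)  # → True
```

With `action="store_true"` the flag defaults to `False`, so existing invocations keep their current manual-merge behavior.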
3,005,107,536 | 7,526 | Faster downloads/uploads with Xet storage | 
## Xet is out !
Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface... | open | https://github.com/huggingface/datasets/issues/7526 | 2025-04-18T14:46:42 | 2025-05-12T12:09:09 | null | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | false | [] |
3,003,032,248 | 7,525 | Fix indexing in split commit messages | When a large commit is split up, it seems the commit index in the message is zero-based while the total number is one-based. I came across this running `convert-to-parquet` and was wondering why there was no `6-of-6` commit. This PR fixes that by adding one to the commit index, so both are one-based.
Current behavio... | closed | https://github.com/huggingface/datasets/pull/7525 | 2025-04-17T17:06:26 | 2025-04-28T14:26:27 | 2025-04-28T14:26:27 | {
"login": "klamike",
"id": 17013474,
"type": "User"
} | [] | true | [] |
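The off-by-one described in PR 7525 above comes down to mixing a zero-based internal index with a one-based total. A minimal sketch of the fix (the exact message wording is an assumption):

```python
def commit_message(index: int, total: int) -> str:
    # `index` is zero-based internally; render it one-based so the final
    # commit of a 6-way split reads "6-of-6" rather than "5-of-6".
    return f"Commit {index + 1}-of-{total}"

messages = [commit_message(i, 6) for i in range(6)]
print(messages[-1])  # → Commit 6-of-6
```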
3,002,067,826 | 7,524 | correct use with polars example | null | closed | https://github.com/huggingface/datasets/pull/7524 | 2025-04-17T10:19:19 | 2025-04-28T13:48:34 | 2025-04-28T13:48:33 | {
"login": "SiQube",
"id": 43832476,
"type": "User"
} | [] | true | [] |
2,999,616,692 | 7,523 | mention av in video docs | null | closed | https://github.com/huggingface/datasets/pull/7523 | 2025-04-16T13:11:12 | 2025-04-16T13:13:45 | 2025-04-16T13:11:42 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,998,169,017 | 7,522 | Preserve formatting in concatenated IterableDataset | Fixes #7515 | closed | https://github.com/huggingface/datasets/pull/7522 | 2025-04-16T02:37:33 | 2025-05-19T15:07:38 | 2025-05-19T15:07:37 | {
"login": "francescorubbo",
"id": 5140987,
"type": "User"
} | [] | true | [] |
2,997,666,366 | 7,521 | fix: Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames (#7517) | ## Task
Support bytes-like objects (bytes and bytearray) in Features classes
### Description
The `Features` classes only accept `bytes` objects for binary data, but not `bytearray`. This leads to errors when using `IterableDataset.from_spark()` with Spark DataFrames as they contain `bytearray` objects, even though... | closed | https://github.com/huggingface/datasets/pull/7521 | 2025-04-15T21:23:58 | 2025-05-07T14:17:29 | 2025-05-07T14:17:29 | {
"login": "giraffacarp",
"id": 73196164,
"type": "User"
} | [] | true | [] |
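The bytes-vs-bytearray fix described in PR 7521 above can be illustrated with a small normalization helper. This is a sketch of the idea, not the library's actual code:

```python
def normalize_binary(value):
    # Accept any bytes-like object (bytes, bytearray, memoryview) and
    # normalize it to immutable `bytes` before further processing, so
    # bytearray values coming from Spark DataFrames no longer error out.
    if isinstance(value, (bytes, bytearray, memoryview)):
        return bytes(value)
    raise TypeError(f"expected a bytes-like object, got {type(value).__name__}")

png_header = bytearray(b"\x89PNG\r\n")
print(normalize_binary(png_header) == b"\x89PNG\r\n")  # → True
```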
2,997,422,044 | 7,520 | Update items in the dataset without `map` | ### Feature request
I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue.
If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th... | open | https://github.com/huggingface/datasets/issues/7520 | 2025-04-15T19:39:01 | 2025-04-19T18:47:46 | null | {
"login": "mashdragon",
"id": 122402293,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,996,458,961 | 7,519 | pdf docs fixes | close https://github.com/huggingface/datasets/issues/7494 | closed | https://github.com/huggingface/datasets/pull/7519 | 2025-04-15T13:35:56 | 2025-04-15T13:38:31 | 2025-04-15T13:36:03 | {
"login": "lhoestq",
"id": 42851186,
"type": "User"
} | [] | true | [] |
2,996,141,825 | 7,518 | num_proc parallelization works only for first ~10s. | ### Describe the bug
When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds, achieving 30-40K samples/s and 100% utilization for all cores, but it soon drops to <= 1000 with almost 0% utilization for most cores.
### Steps to reproduce the bug
```
// do... | open | https://github.com/huggingface/datasets/issues/7518 | 2025-04-15T11:44:03 | 2025-04-15T13:12:13 | null | {
"login": "pshishodiaa",
"id": 33901783,
"type": "User"
} | [] | false | [] |
2,996,106,077 | 7,517 | Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames | ### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a col... | closed | https://github.com/huggingface/datasets/issues/7517 | 2025-04-15T11:29:17 | 2025-05-07T14:17:30 | 2025-05-07T14:17:30 | {
"login": "giraffacarp",
"id": 73196164,
"type": "User"
} | [] | false | [] |
2,995,780,283 | 7,516 | unsloth/DeepSeek-R1-Distill-Qwen-32B server error | ### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ... | closed | https://github.com/huggingface/datasets/issues/7516 | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 | {
"login": "Editor-1",
"id": 164353862,
"type": "User"
} | [] | false | [] |
2,995,082,418 | 7,515 | `concatenate_datasets` does not preserve Pytorch format for IterableDataset | ### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con... | closed | https://github.com/huggingface/datasets/issues/7515 | 2025-04-15T04:36:34 | 2025-05-19T15:07:38 | 2025-05-19T15:07:38 | {
"login": "francescorubbo",
"id": 5140987,
"type": "User"
} | [] | false | [] |
2,994,714,923 | 7,514 | Do not hash `generator` in `BuilderConfig.create_config_id` | `Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including generator function itself. `BuilderConfig.create_config_id` function tries to hash all the args, and hashing a `generator` can take a large amount of time or even cause MemoryError if the dataset processed in a ... | closed | https://github.com/huggingface/datasets/pull/7514 | 2025-04-15T01:26:43 | 2025-04-23T11:55:55 | 2025-04-15T16:27:51 | {
"login": "simonreise",
"id": 43753582,
"type": "User"
} | [] | true | [] |
2,994,678,437 | 7,513 | MemoryError while creating dataset from generator | ### Describe the bug
# TL:DR
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr... | open | https://github.com/huggingface/datasets/issues/7513 | 2025-04-15T01:02:02 | 2025-04-23T19:37:08 | null | {
"login": "simonreise",
"id": 43753582,
"type": "User"
} | [] | false | [] |
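The hashing cost behind issues 7513/7514 above can be sketched as follows: when the caller supplies an explicit fingerprint, there is no need to pickle and hash the generator at all. Function and parameter names here are hypothetical, illustrating the proposal rather than the library's real `create_config_id`:

```python
import hashlib

def create_config_id(config_kwargs: dict) -> str:
    # Hypothetical sketch: an explicit suffix short-circuits hashing.
    # Otherwise every kwarg -- including the generator function and any
    # large objects its closure drags in -- would need to be serialized,
    # which is what makes the real-world case slow or run out of memory.
    suffix = config_kwargs.pop("dataset_id_suffix", None)
    if suffix is not None:
        return f"default-{suffix}"
    payload = repr(sorted(config_kwargs.items())).encode()
    return "default-" + hashlib.sha256(payload).hexdigest()[:16]

print(create_config_id({"dataset_id_suffix": "v1"}))  # → default-v1
```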
2,994,043,544 | 7,512 | .map() fails if function uses pyvista | ### Describe the bug
Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ... | open | https://github.com/huggingface/datasets/issues/7512 | 2025-04-14T19:43:02 | 2025-04-14T20:01:53 | null | {
"login": "el-hult",
"id": 11832922,
"type": "User"
} | [] | false | [] |
2,992,131,117 | 7,510 | Incompatibile dill version (0.3.9) in datasets 2.18.0 - 3.5.0 | ### Describe the bug
Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9.
Could you please look into it and make it compatible?
### Steps to reproduce the bug
1. Install datasets >= 2.18.0
2. Install dill >=0.3.9
3. Run pip check
4. Output:
ERROR: pip's dependenc... | open | https://github.com/huggingface/datasets/issues/7510 | 2025-04-14T07:22:44 | 2025-05-19T14:54:04 | null | {
"login": "JGrel",
"id": 98061329,
"type": "User"
} | [] | false | [] |
2,991,484,542 | 7,509 | Dataset uses excessive memory when loading files | ### Describe the bug
Hi
I am having an issue when loading a dataset.
I have about 200 json files each about 1GB (total about 215GB). each row has a few features which are a list of ints.
I am trying to load the dataset using `load_dataset`.
The dataset is about 1.5M samples
I use `num_proc=32` and a node with 378GB of... | open | https://github.com/huggingface/datasets/issues/7509 | 2025-04-13T21:09:49 | 2025-04-28T15:18:55 | null | {
"login": "avishaiElmakies",
"id": 36810152,
"type": "User"
} | [] | false | [] |
2,986,612,934 | 7,508 | Iterating over Image feature columns is extremely slow | We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow.
What I have found:
1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe... | open | https://github.com/huggingface/datasets/issues/7508 | 2025-04-10T19:00:54 | 2025-04-15T17:57:08 | null | {
"login": "sohamparikh",
"id": 11831521,
"type": "User"
} | [] | false | [] |
2,984,309,806 | 7,507 | Front-end statistical data quantity deviation | ### Describe the bug
While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne... | open | https://github.com/huggingface/datasets/issues/7507 | 2025-04-10T02:51:38 | 2025-04-15T12:54:51 | null | {
"login": "rangehow",
"id": 88258534,
"type": "User"
} | [] | false | [] |
2,981,687,450 | 7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | open | https://github.com/huggingface/datasets/issues/7506 | 2025-04-09T06:32:04 | 2025-06-29T06:04:59 | null | {
"login": "calvintanama",
"id": 66202555,
"type": "User"
} | [] | false | [] |
2,979,926,156 | 7,505 | HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy | I have already logged in Huggingface using CLI with my valid token. Now trying to download the datasets using following code:
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
from datasets import load_dataset, Data... | open | https://github.com/huggingface/datasets/issues/7505 | 2025-04-08T14:08:40 | 2025-04-08T14:08:40 | null | {
"login": "hissain",
"id": 1412262,
"type": "User"
} | [] | false | [] |
2,979,410,641 | 7,504 | BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key. | ### Describe the bug
Trying to run the following fine-tuning script (based on this page [here](https://github.com/huggingface/instruction-tuned-sd)):
```
! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \
--pretrained_model_name_or_path=${MODEL_ID} \
--dataset_name=${DATASET_NAME... | open | https://github.com/huggingface/datasets/issues/7504 | 2025-04-08T10:55:03 | 2025-06-28T09:18:09 | null | {
"login": "tteguayco",
"id": 20015750,
"type": "User"
} | [] | false | [] |
2,978,512,625 | 7,503 | Inconsistency between load_dataset and load_from_disk functionality | ## Issue Description
I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path:
```python
import datasets
ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
```
output:
```t... | open | https://github.com/huggingface/datasets/issues/7503 | 2025-04-08T03:46:22 | 2025-06-28T08:51:16 | null | {
"login": "zzzzzec",
"id": 60975422,
"type": "User"
} | [] | false | [] |
2,977,453,814 | 7,502 | `load_dataset` of size 40GB creates a cache of >720GB | Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
... | closed | https://github.com/huggingface/datasets/issues/7502 | 2025-04-07T16:52:34 | 2025-04-15T15:22:12 | 2025-04-15T15:22:11 | {
"login": "pietrolesci",
"id": 61748653,
"type": "User"
} | [] | false | [] |
2,976,721,014 | 7,501 | Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct | ### Describe the bug
`datasets.Features` seems to be unable to handle json file that contains fields of `list[dict]`.
### Steps to reproduce the bug
```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```
```python
import json
from datasets i... | closed | https://github.com/huggingface/datasets/issues/7501 | 2025-04-07T12:35:39 | 2025-04-07T12:43:04 | 2025-04-07T12:43:03 | {
"login": "yaner-here",
"id": 26623948,
"type": "User"
} | [] | false | [] |
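The data shape from issue 7501 above is plain JSON Lines with a list-of-structs column. In pure Python it parses fine, which is consistent with the failure pointing at the Arrow struct cast rather than at the data itself. A sketch using only the snippet quoted in the report:

```python
import json

# The two JSON-lines rows quoted in the issue; "b" is a list of structs
# with integer fields "c" and "d".
sample = (
    '{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}\n'
    '{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}\n'
)
rows = [json.loads(line) for line in sample.splitlines()]
print(rows[1]["b"][0]["c"])  # → 7
```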
2,974,841,921 | 7,500 | Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class | ### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `Dataloader` since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be g... | open | https://github.com/huggingface/datasets/issues/7500 | 2025-04-06T09:56:09 | 2025-04-15T12:57:39 | null | {
"login": "benglewis",
"id": 3817460,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,973,489,126 | 7,499 | Added cache dirs to load and file_utils | When adding "cache_dir" to datasets.load_dataset, the cache_dir gets lost in the function calls, changing the cache dir to the default path. This fixes a few of these instances. | closed | https://github.com/huggingface/datasets/pull/7499 | 2025-04-04T22:36:04 | 2025-05-07T14:07:34 | 2025-05-07T14:07:34 | {
"login": "gmongaras",
"id": 43501738,
"type": "User"
} | [] | true | [] |
2,969,218,273 | 7,498 | Extreme memory bandwidth. | ### Describe the bug
When I use HF datasets on 4 GPUs with 40 workers, I get extreme memory bandwidth of a constant ~3GB/s.
However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker).
It seems like the workers don't share memory and b... | open | https://github.com/huggingface/datasets/issues/7498 | 2025-04-03T11:09:08 | 2025-04-03T11:11:22 | null | {
"login": "J0SZ",
"id": 185079645,
"type": "User"
} | [] | false | [] |
2,968,553,693 | 7,497 | How to convert videos to images? | ### Feature request
Does anyone know how to extract images (frames) from videos?
### Motivation
I am trying to use openpi (https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset (V2.0 and V2.1). I find that although the dataset is v2.0, they are different. It seems like Lerobot V2.0 has two versi... | open | https://github.com/huggingface/datasets/issues/7497 | 2025-04-03T07:08:39 | 2025-04-15T12:35:15 | null | {
"login": "Loki-Lu",
"id": 171649931,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,967,345,522 | 7,496 | Json builder: Allow features to override problematic Arrow types | ### Feature request
In the JSON builder, use explicitly requested feature types before or while converting to Arrow.
### Motivation
Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work-around these problems by explicitly setting problematic colum... | open | https://github.com/huggingface/datasets/issues/7496 | 2025-04-02T19:27:16 | 2025-04-15T13:06:09 | null | {
"login": "edmcman",
"id": 1017189,
"type": "User"
} | [
{
"name": "enhancement",
"color": "a2eeef"
}
] | false | [] |
2,967,034,060 | 7,495 | Columns in the dataset obtained though load_dataset do not correspond to the one in the dataset viewer since 3.4.0 | ### Describe the bug
I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since version 3.4.0, every column except audio is missing when I load the dataset.
Interestingly, the dataset viewer still shows the correct columns
### Steps to reproduce ... | closed | https://github.com/huggingface/datasets/issues/7495 | 2025-04-02T17:01:11 | 2025-07-02T23:24:57 | 2025-07-02T23:24:57 | {
"login": "bruno-hays",
"id": 48770768,
"type": "User"
} | [] | false | [] |