| number | title | body | state | created_at | updated_at | closed_at | url | author | comments_count | labels |
|---|---|---|---|---|---|---|---|---|---|---|
7,675 | common_voice_11_0.py failure in dataset library | ### Describe the bug
I tried to download dataset but have got this error:
from datasets import load_dataset
load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True)
---------------------------------------------------------------------------
RuntimeError Tr... | OPEN | 2025-07-09T17:47:59 | 2025-07-22T09:35:42 | null | https://github.com/huggingface/datasets/issues/7675 | egegurel | 5 | [] |
7,671 | Mapping function not working if the first example is returned as None | ### Describe the bug
https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37
Here we can see the writer is initialized on `i==0`. However, there can be cases where in the user mapping function, the first example is filtered out (length cons... | CLOSED | 2025-07-08T17:07:47 | 2025-07-09T12:30:32 | 2025-07-09T12:30:32 | https://github.com/huggingface/datasets/issues/7671 | dnaihao | 2 | [] |
7,669 | How can I add my custom data to huggingface datasets | I want to add my custom dataset to the huggingface datasets library. Please guide me on how to achieve that. | OPEN | 2025-07-04T19:19:54 | 2025-07-05T18:19:37 | null | https://github.com/huggingface/datasets/issues/7669 | xiagod | 1 | [] |
7,668 | Broken EXIF crash the whole program | ### Describe the bug
When parsing this image in the ImageNet1K dataset, `datasets` crashes the whole training process just because it is unable to parse an invalid EXIF tag.

### Steps to reproduce the bug
Use the `datasets.Image.decod... | OPEN | 2025-07-03T11:24:15 | 2025-07-03T12:27:16 | null | https://github.com/huggingface/datasets/issues/7668 | Seas0 | 1 | [] |
7,665 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action:... | CLOSED | 2025-07-01T17:14:53 | 2025-07-01T17:17:48 | 2025-07-01T17:17:48 | https://github.com/huggingface/datasets/issues/7665 | zdzichukowalski | 1 | [] |
7,664 | Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files | ### Describe the bug
When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema.
In my case there is a field `body:` with a string value
```
"### Describe the bug (...) ,action:... | OPEN | 2025-07-01T17:14:32 | 2025-07-09T13:14:11 | null | https://github.com/huggingface/datasets/issues/7664 | zdzichukowalski | 6 | [] |
7,662 | Applying map after transform with multiprocessing will cause OOM | ### Describe the bug
I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I f... | OPEN | 2025-07-01T05:45:57 | 2025-07-10T06:17:40 | null | https://github.com/huggingface/datasets/issues/7662 | JunjieLl | 5 | [] |
7,660 | AttributeError: type object 'tqdm' has no attribute '_lock' | ### Describe the bug
`AttributeError: type object 'tqdm' has no attribute '_lock'`
It occurs when I'm trying to load datasets in thread pool.
Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to f... | OPEN | 2025-06-30T15:57:16 | 2025-07-03T15:14:27 | null | https://github.com/huggingface/datasets/issues/7660 | Hypothesis-Z | 2 | [] |
7,650 | `load_dataset` defaults to json file format for datasets with 1 shard | ### Describe the bug
I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation pair is small enough to fit into a single shard and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for st... | OPEN | 2025-06-27T12:54:25 | 2025-06-27T12:54:25 | null | https://github.com/huggingface/datasets/issues/7650 | iPieter | 0 | [] |
7,647 | loading mozilla-foundation--common_voice_11_0 fails | ### Describe the bug
Hello everyone,
I am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer:
```
import datasets
datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True)
```
and it fails with
```
File ~/opt/envs/.../lib/py... | OPEN | 2025-06-26T12:23:48 | 2025-07-10T14:49:30 | null | https://github.com/huggingface/datasets/issues/7647 | pavel-esir | 2 | [] |
7,637 | Introduce subset_name as an alias of config_name | ### Feature request
Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata).
### Motivation
The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call... | OPEN | 2025-06-24T12:49:01 | 2025-07-01T16:08:33 | null | https://github.com/huggingface/datasets/issues/7637 | albertvillanova | 4 | [
"enhancement"
] |
7,636 | "open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable" | When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable"
```python
print("open" in globals()["__builtins__"])
```
Traceback (most recent call last):
File "./main.py", line 2, in <module>
print("open" in globals()["__builtins__"])
^^^^^^^^^^^^^^^^^^^^^^
TypeE... | OPEN | 2025-06-24T08:09:39 | 2025-07-10T04:13:16 | null | https://github.com/huggingface/datasets/issues/7636 | kuanyan9527 | 4 | [] |
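A minimal sketch of the portable fix for the check reported in issue 7636: depending on how the code is executed, `globals()["__builtins__"]` is either the `builtins` module (in `__main__`) or its dict (in imported modules), so an `in` test against it can raise the `TypeError` above. Importing `builtins` directly gives a stable object to probe:

```python
import builtins

# globals()["__builtins__"] is a module in __main__ but a dict in imported
# modules, so `"open" in globals()["__builtins__"]` is unreliable. The
# builtins module itself is always the same object:
print(hasattr(builtins, "open"))  # True on standard CPython
```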
7,633 | Proposal: Small Tamil Discourse Coherence Dataset. | I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages.
- Size: 50 samples
- Format: CSV with columns (text1, text2, label)
- Use case: Training NLP models for coherence
I’ll use GitHub’s web edit... | OPEN | 2025-06-23T14:24:40 | 2025-06-23T14:24:40 | null | https://github.com/huggingface/datasets/issues/7633 | bikkiNitSrinagar | 0 | [] |
7,632 | Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets | ### Feature request
Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples a... | OPEN | 2025-06-23T13:49:24 | 2025-07-08T06:52:53 | null | https://github.com/huggingface/datasets/issues/7632 | ganiket19 | 2 | [
"enhancement"
] |
7,630 | [bug] resume from ckpt skips samples if .map is applied | ### Describe the bug
resume from ckpt skips samples if .map is applied
Maybe related: https://github.com/huggingface/datasets/issues/7538
### Steps to reproduce the bug
```python
from datasets import Dataset
from datasets.distributed import split_dataset_by_node
# Create dataset with map transformation
def create... | OPEN | 2025-06-21T01:50:03 | 2025-06-29T07:51:32 | null | https://github.com/huggingface/datasets/issues/7630 | felipemello1 | 2 | [] |
7,627 | Creating a HF Dataset from lakeFS with S3 storage takes too much time! | Hi,
I’m new to HF dataset and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_
Here I’m using ±30,000 PIL images from the MNIST data; however, it is taking around 12 min to execute, which is a lot!
From what I understand, it is loading the images into cache then buil... | CLOSED | 2025-06-19T14:28:41 | 2025-06-23T12:39:10 | 2025-06-23T12:39:10 | https://github.com/huggingface/datasets/issues/7627 | Thunderhead-exe | 1 | [] |
7,624 | #Dataset Make "image" column appear first in dataset preview UI | Hi!
#Dataset
I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub.
However, at the moment, the `"im... | CLOSED | 2025-06-18T09:25:19 | 2025-06-20T07:46:43 | 2025-06-20T07:46:43 | https://github.com/huggingface/datasets/issues/7624 | jcerveto | 2 | [] |
7,619 | `from_list` fails while `from_generator` works for large datasets | ### Describe the bug
I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`.
### Steps to reproduce the bug
#### Snip... | OPEN | 2025-06-17T10:58:55 | 2025-06-29T16:34:44 | null | https://github.com/huggingface/datasets/issues/7619 | abdulfatir | 4 | [] |
7,617 | Unwanted column padding in nested lists of dicts | ```python
from datasets import Dataset
dataset = Dataset.from_dict({
"messages": [
[
{"a": "...",},
{"b": "...",},
],
]
})
print(dataset[0])
```
What I get:
```
{'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]}
```
What I want:
```
{'messages': [{'a': '... | CLOSED | 2025-06-15T22:06:17 | 2025-06-16T13:43:31 | 2025-06-16T13:43:31 | https://github.com/huggingface/datasets/issues/7617 | qgallouedec | 1 | [] |
7,612 | Provide an option of robust dataset iterator with error handling | ### Feature request
Adding an option to skip corrupted data samples. Currently the datasets behavior is throwing errors if the data sample if corrupted and let user aware and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again.... | OPEN | 2025-06-13T00:40:48 | 2025-06-24T16:52:30 | null | https://github.com/huggingface/datasets/issues/7612 | wwwjn | 2 | [
"enhancement"
] |
7,611 | Code example for dataset.add_column() does not reflect correct way to use function | https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10
The example seems to suggest that dataset.add_column() can add column inplace, however, this is wrong -- it cannot. It returns a new dataset with the column added to it. | CLOSED | 2025-06-12T19:42:29 | 2025-07-17T13:14:18 | 2025-07-17T13:14:18 | https://github.com/huggingface/datasets/issues/7611 | shaily99 | 2 | [] |
7,610 | i cant confirm email | ### Describe the bug
This is difficult: I can't confirm my email because I never receive any email!
I can't post on the forum because I can't confirm my email!
I can't contact the help desk because... it doesn't exist on the web page.
paragraph 44
### Steps to reproduce the bug
rthjrtrt
### Expected behavior
ewtgfwetgf
### Environment info
sdgfswdegfwe | OPEN | 2025-06-12T18:58:49 | 2025-06-27T14:36:47 | null | https://github.com/huggingface/datasets/issues/7610 | lykamspam | 2 | [] |
7,607 | Video and audio decoding with torchcodec | ### Feature request
Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video.
### Motivation
My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extr... | CLOSED | 2025-06-11T07:02:30 | 2025-06-19T18:25:49 | 2025-06-19T18:25:49 | https://github.com/huggingface/datasets/issues/7607 | TyTodd | 16 | [
"enhancement"
] |
7,600 | `push_to_hub` is not concurrency safe (dataset schema corruption) | ### Describe the bug
Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable.
Consider this scenario:
- we have an Arrow dataset
- there are `N` configs of the dataset
- there are `N` independent processes operating on each of the individual configs (... | CLOSED | 2025-06-07T17:28:56 | 2025-07-31T10:00:50 | 2025-07-31T10:00:50 | https://github.com/huggingface/datasets/issues/7600 | sharvil | 4 | [] |
7,599 | My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl | ### Describe the bug
Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without modifying anything in the dataset repository now the Dataset viewer is not rendering the metadata.jsonl annotations, neither it is being d... | CLOSED | 2025-06-06T18:59:00 | 2025-06-16T15:18:00 | 2025-06-16T15:18:00 | https://github.com/huggingface/datasets/issues/7599 | JuanCarlosMartinezSevilla | 3 | [] |
7,597 | Download datasets from a private hub in 2025 | ### Feature request
In the context of a private hub deployment, customers would like to use load_dataset() to load datasets from their hub, not from the public hub. This doesn't seem to be configurable at the moment and it would be nice to add this feature.
The obvious workaround is to clone the repo first and then l... | CLOSED | 2025-06-06T07:55:19 | 2025-06-13T13:46:00 | 2025-06-13T13:46:00 | https://github.com/huggingface/datasets/issues/7597 | DanielSchuhmacher | 2 | [
"enhancement"
] |
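For the private-hub request in issue 7597, a configuration sketch: `huggingface_hub` honours the `HF_ENDPOINT` environment variable, so pointing it at a private deployment makes `load_dataset()` resolve repositories there instead of huggingface.co. The host and repo names below are hypothetical placeholders.

```shell
# Assumption: a private Hub deployment reachable at hub.example.com.
export HF_ENDPOINT="https://hub.example.com"
export HF_TOKEN="hf_xxx"  # token issued by the private hub
python -c "from datasets import load_dataset; load_dataset('my-org/my-dataset')"
```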
7,594 | Add option to ignore keys/columns when loading a dataset from jsonl(or any other data format) | ### Feature request
Hi, I would like the option to ignore keys/columns when loading a dataset from files (e.g. jsonl).
### Motivation
I am working on a dataset which is built on jsonl. It seems the dataset is unclean and a column has different types in each row. I can't clean this or remove the column (It is not my ... | OPEN | 2025-06-05T11:12:45 | 2025-10-23T14:54:47 | null | https://github.com/huggingface/datasets/issues/7594 | avishaiElmakies | 10 | [
"enhancement"
] |
7,591 | Add num_proc parameter to push_to_hub | ### Feature request
Add a number-of-processes parameter to the `dataset.push_to_hub` method.
### Motivation
Shards are currently uploaded serially, which makes pushing many shards slow; uploading them in parallel would be much faster.
| CLOSED | 2025-06-04T13:19:15 | 2025-09-04T10:43:33 | 2025-09-04T10:43:33 | https://github.com/huggingface/datasets/issues/7591 | SwayStar123 | 4 | [
"enhancement"
] |
7,590 | `Sequence(Features(...))` causes PyArrow cast error in `load_dataset` despite correct schema. | ### Description
When loading a dataset with a field declared as a list of structs using `Sequence(Features(...))`, `load_dataset` incorrectly infers the field as a plain `struct<...>` instead of a `list<struct<...>>`. This leads to the following error:
```
ArrowNotImplementedError: Unsupported cast from list<item: st... | CLOSED | 2025-05-29T22:53:36 | 2025-07-19T22:45:08 | 2025-07-19T22:45:08 | https://github.com/huggingface/datasets/issues/7590 | AHS-uni | 6 | [] |
7,588 | ValueError: Invalid pattern: '**' can only be an entire path component [Colab] | ### Describe the bug
I have a dataset on HF [here](https://huggingface.co/datasets/kambale/luganda-english-parallel-corpus) that i've previously used to train a translation model [here](https://huggingface.co/kambale/pearl-11m-translate).
now i changed a few hyperparameters to increase number of tokens for the model,... | CLOSED | 2025-05-27T13:46:05 | 2025-05-30T13:22:52 | 2025-05-30T01:26:30 | https://github.com/huggingface/datasets/issues/7588 | wkambale | 5 | [] |
7,586 | help is appreciated | ### Feature request
https://github.com/rajasekarnp1/neural-audio-upscaler/tree/main
### Motivation
AI model development and audio
### Your contribution
AI model development and audio
"enhancement"
] |
7,584 | Add LMDB format support | ### Feature request
Add LMDB format support for large memory-mapped files
### Motivation
Add LMDB format support for large memory-mapped files
### Your contribution
I'm trying to add it | OPEN | 2025-05-26T07:10:13 | 2025-05-26T18:23:37 | null | https://github.com/huggingface/datasets/issues/7584 | trotsky1997 | 1 | [
"enhancement"
] |
7,583 | load_dataset type stubs reject List[str] for split parameter, but runtime supports it | ### Describe the bug
The [load_dataset](https://huggingface.co/docs/datasets/v3.6.0/en/package_reference/loading_methods#datasets.load_dataset) method accepts a `List[str]` as the split parameter at runtime, however, the current type stubs restrict the split parameter to `Union[str, Split, None]`. This causes type che... | CLOSED | 2025-05-25T02:33:18 | 2025-05-26T18:29:58 | 2025-05-26T18:29:58 | https://github.com/huggingface/datasets/issues/7583 | hierr | 0 | [] |
7,580 | Requesting a specific split (eg: test) still downloads all (train, test, val) data when streaming=False. | ### Describe the bug
When using load_dataset() from the datasets library (in load.py), specifying a particular split (e.g., split="train") still results in downloading data for all splits when streaming=False. This happens during the builder_instance.download_and_prepare() call.
This behavior leads to unnecessary band... | OPEN | 2025-05-22T11:08:16 | 2025-11-05T16:25:53 | null | https://github.com/huggingface/datasets/issues/7580 | s3pi | 2 | [] |
7,577 | arrow_schema is not compatible with list | ### Describe the bug
```
import datasets
f = datasets.Features({'x': list[datasets.Value(dtype='int32')]})
f.arrow_schema
Traceback (most recent call last):
File "datasets/features/features.py", line 1826, in arrow_schema
return pa.schema(self.type).with_metadata({"huggingface": json.dumps(hf_metadata)})
... | CLOSED | 2025-05-21T16:37:01 | 2025-05-26T18:49:51 | 2025-05-26T18:32:55 | https://github.com/huggingface/datasets/issues/7577 | jonathanshen-upwork | 3 | [] |
7,574 | Missing multilingual directions in IWSLT2017 dataset's processing script | ### Describe the bug
Hi,
Upon using `iwslt2017.py` in `IWSLT/iwslt2017` on the Hub for loading the datasets, I am unable to obtain the datasets for the language pairs `de-it`, `de-ro`, `de-nl`, `it-de`, `nl-de`, and `ro-de` using it. These 6 pairs do not show up when using `get_dataset_config_names()` to obtain the ... | OPEN | 2025-05-21T09:53:17 | 2025-05-26T18:36:38 | null | https://github.com/huggingface/datasets/issues/7574 | andy-joy-25 | 2 | [] |
7,573 | No Samsum dataset | ### Describe the bug
https://huggingface.co/datasets/Samsung/samsum dataset not found error 404
Originated from https://github.com/meta-llama/llama-cookbook/issues/948
### Steps to reproduce the bug
go to website https://huggingface.co/datasets/Samsung/samsum
see the error
also downloading it with python throws
`... | CLOSED | 2025-05-20T09:54:35 | 2025-07-21T18:34:34 | 2025-06-18T12:52:23 | https://github.com/huggingface/datasets/issues/7573 | IgorKasianenko | 4 | [] |
7,570 | Dataset lib seems to be broken after fsspec lib update | ### Describe the bug
Since today I am facing an issue where HF's dataset is acting weird and in some instances fails to recognise a valid dataset entirely. I think it is happening due to a recent change in the `fsspec` lib, as using this command fixed it for me one time: `!pip install -U datasets huggingface_hub fsspec`... | CLOSED | 2025-05-15T11:45:06 | 2025-06-13T00:44:27 | 2025-06-13T00:44:27 | https://github.com/huggingface/datasets/issues/7570 | sleepingcat4 | 3 | [] |
7,569 | Dataset creation is broken if nesting a dict inside a dict inside a list | ### Describe the bug
Hey,
I noticed that the creation of datasets with `Dataset.from_generator` is broken if dicts and lists are nested in a certain way and a schema is being passed. See below for details.
Best,
Tim
### Steps to reproduce the bug
Runing this code:
```python
from datasets import Dataset, Features,... | OPEN | 2025-05-13T21:06:45 | 2025-05-20T19:25:15 | null | https://github.com/huggingface/datasets/issues/7569 | TimSchneider42 | 2 | [] |
7,568 | `IterableDatasetDict.map()` call removes `column_names` (in fact info.features) | When calling `IterableDatasetDict.map()`, each split’s `IterableDataset.map()` is invoked without a `features` argument. While omitting the argument isn’t itself incorrect, the implementation then sets `info.features = features`, which destroys the original `features` content. Since `IterableDataset.column_names` relie... | OPEN | 2025-05-13T15:45:42 | 2025-06-30T09:33:47 | null | https://github.com/huggingface/datasets/issues/7568 | mombip | 6 | [] |
7,567 | interleave_datasets seed with multiple workers | ### Describe the bug
Using interleave_datasets with multiple dataloader workers and a seed set causes the same dataset sampling order across all workers.
Should the seed be modulated with the worker id?
### Steps to reproduce the bug
See above
### Expected behavior
See above
### Environment info
- `datasets` ve... | CLOSED | 2025-05-12T22:38:27 | 2025-10-24T14:04:37 | 2025-10-24T14:04:37 | https://github.com/huggingface/datasets/issues/7567 | jonathanasdf | 7 | [] |
7,566 | terminate called without an active exception; Aborted (core dumped) | ### Describe the bug
I use it as in the tutorial here: https://huggingface.co/docs/datasets/stream, and it ends up with an abort.
### Steps to reproduce the bug
1. `pip install datasets`
2.
```
$ cat main.py
#!/usr/bin/env python3
from datasets import load_dataset
dataset = load_dataset('HuggingFaceFW/fineweb', spl... | OPEN | 2025-05-11T23:05:54 | 2025-06-23T17:56:02 | null | https://github.com/huggingface/datasets/issues/7566 | alexey-milovidov | 4 | [] |
7,561 | NotImplementedError: <class 'datasets.iterable_dataset.RepeatExamplesIterable'> doesn't implement num_shards yet | ### Describe the bug
When using `.repeat()` on an `IterableDataset`, this error gets thrown. There is [this thread](https://discuss.huggingface.co/t/making-an-infinite-iterabledataset/146192/5) that seems to imply the fix is trivial, but I don't know anything about this codebase, so I'm opening this issue rather than ... | CLOSED | 2025-05-07T15:05:42 | 2025-06-05T12:41:30 | 2025-06-05T12:41:30 | https://github.com/huggingface/datasets/issues/7561 | cyanic-selkie | 0 | [] |
7,554 | datasets downloads and generates all splits, even though a single split is requested (for dataset with loading script) | ### Describe the bug
`datasets` downloads and generates all splits, even though a single split is requested. [This](https://huggingface.co/datasets/jordiae/exebench) is the dataset in question. It uses a loading script. I am not 100% sure that this is a bug, because maybe with loading scripts `datasets` must actual... | CLOSED | 2025-05-06T14:43:38 | 2025-05-07T14:53:45 | 2025-05-07T14:53:44 | https://github.com/huggingface/datasets/issues/7554 | sei-eschwartz | 2 | [] |
7,551 | Issue with offline mode and partial dataset cached | ### Describe the bug
Hi,
An issue related to #4760: when loading a single file from a dataset, it cannot be accessed in offline mode afterwards.
### Steps to reproduce the bug
```python
import os
# os.environ["HF_HUB_OFFLINE"] = "1"
os.environ["HF_TOKEN"] = "xxxxxxxxxxxxxx"
import datasets
dataset_name = "uonlp/... | OPEN | 2025-05-04T16:49:37 | 2025-05-13T03:18:43 | null | https://github.com/huggingface/datasets/issues/7551 | nrv | 4 | [] |
7,549 | TypeError: Couldn't cast array of type string to null on webdataset format dataset | ### Describe the bug
```python
from datasets import load_dataset
dataset = load_dataset("animetimm/danbooru-wdtagger-v4-w640-ws-30k")
```
got
```
File "/home/ubuntu/miniconda3/lib/python3.10/site-packages/datasets/arrow_writer.py", line 626, in write_batch
arrays.append(pa.array(typed_sequence))
File "pyarro... | OPEN | 2025-05-02T15:18:07 | 2025-05-02T15:37:05 | null | https://github.com/huggingface/datasets/issues/7549 | narugo1992 | 1 | [] |
7,548 | Python 3.13t (free threads) Compat | ### Describe the bug
Cannot install `datasets` under `python 3.13t` due to the dependency on `aiohttp`, which cannot be built for free-threaded Python.
The free-threading support issue in `aiohttp` has been active since August 2024! Ouch.
https://github.com/aio-libs/aiohttp/issues/8796#issue-2475941784
`pip install... | OPEN | 2025-05-02T09:20:09 | 2025-05-12T15:11:32 | null | https://github.com/huggingface/datasets/issues/7548 | Qubitium | 7 | [] |
7,546 | Large memory use when loading large datasets to a ZFS pool | ### Describe the bug
When I load large parquet based datasets from the hub like `MLCommons/peoples_speech` using `load_dataset`, all my memory (500GB) is used and isn't released after loading, meaning that the process is terminated by the kernel if I try to load an additional dataset. This makes it impossible to train... | CLOSED | 2025-05-01T14:43:47 | 2025-05-13T13:30:09 | 2025-05-13T13:29:53 | https://github.com/huggingface/datasets/issues/7546 | FredHaa | 4 | [] |
7,545 | Networked Pull Through Cache | ### Feature request
Introduce a HF_DATASET_CACHE_NETWORK_LOCATION configuration (e.g. an environment variable) together with a companion network cache service.
Enable a three-tier cache lookup for datasets:
1. Local on-disk cache
2. Configurable network cache proxy
3. Official Hugging Face Hub
### Motivation
- Dis... | OPEN | 2025-04-30T15:16:33 | 2025-04-30T15:16:33 | null | https://github.com/huggingface/datasets/issues/7545 | wrmedford | 0 | [
"enhancement"
] |
7,543 | The memory-disk mapping failure issue of the map function(resolved, but there are some suggestions.) | ### Describe the bug
## bug
When the map function processes a large dataset, it temporarily stores the data in a cache file on the disk. After the data is stored, the memory occupied by it is released. Therefore, when using the map function to process a large-scale dataset, only a dataset space of the size of `writer_... | CLOSED | 2025-04-29T03:04:59 | 2025-04-30T02:22:17 | 2025-04-30T02:22:17 | https://github.com/huggingface/datasets/issues/7543 | jxma20 | 0 | [] |
7,538 | `IterableDataset` drops samples when resuming from a checkpoint | When resuming from a checkpoint, `IterableDataset` will drop samples if `num_shards % world_size == 0` and the underlying example supports `iter_arrow` and needs to be formatted.
In that case, the `FormattedExamplesIterable` fetches a batch of samples from the child iterable's `iter_arrow` and yields them one by one ... | CLOSED | 2025-04-27T19:34:49 | 2025-05-06T14:04:05 | 2025-05-06T14:03:42 | https://github.com/huggingface/datasets/issues/7538 | mariosasko | 1 | [
"bug"
] |
7,537 | `datasets.map(..., num_proc=4)` multi-processing fails | The following code fails in python 3.11+
```python
tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"])
```
Error log:
```bash
Traceback (most recent call last):
File "/usr/local/lib/python3.12/dist-packages/multiprocess/process.py", line 315, in _bootstrap
self.ru... | OPEN | 2025-04-25T01:53:47 | 2025-05-06T13:12:08 | null | https://github.com/huggingface/datasets/issues/7537 | faaany | 1 | [] |
7,536 | [Errno 13] Permission denied: on `.incomplete` file | ### Describe the bug
When downloading a dataset, we frequently hit the below Permission Denied error. This looks to happen (at least) across datasets in HF, S3, and GCS.
It looks like the `temp_file` being passed [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_utils.py#L412) can somet... | CLOSED | 2025-04-24T20:52:45 | 2025-05-06T13:05:01 | 2025-05-06T13:05:01 | https://github.com/huggingface/datasets/issues/7536 | ryan-clancy | 4 | [] |
7,534 | TensorFlow RaggedTensor Support (batch-level) | ### Feature request
Hi,
Currently datasets does not support RaggedTensor output at batch level.
When building an Object Detection dataset (with TensorFlow) I need to enable RaggedTensors, as that's how BBoxes & classes are expected from the Keras Model POV.
Currently there's an error thrown saying that "Nested Data is ... | OPEN | 2025-04-24T13:14:52 | 2025-06-30T17:03:39 | null | https://github.com/huggingface/datasets/issues/7534 | Lundez | 4 | [
"enhancement"
] |
7,531 | Deepspeed reward training hangs at end of training with Dataset.from_list | There seems to be a weird interaction between Deepspeed, the Dataset.from_list method and trl's RewardTrainer. On a multi-GPU setup (10 A100s), training always hangs at the very end of training until it times out. The training itself works fine until the end of training and running the same script with Deepspeed on a s... | OPEN | 2025-04-21T17:29:20 | 2025-06-29T06:20:45 | null | https://github.com/huggingface/datasets/issues/7531 | Matt00n | 2 | [] |
7,530 | How to solve "Spaces stuck in Building" problems | ### Describe the bug
Public spaces may get stuck in Building after restarting; the error log is as follows:
build error
Unexpected job error
ERROR: failed to push spaces-registry.huggingface.tech/spaces/*:cpu-*-*: unexpected status from HEAD request to https://spaces-registry.huggingface.tech/v2/spaces/*/manifests/cpu-*-*: 401... | CLOSED | 2025-04-21T03:08:38 | 2025-11-11T00:57:14 | 2025-04-22T07:49:52 | https://github.com/huggingface/datasets/issues/7530 | null | 4 | [] |
7,529 | audio folder builder cannot detect custom split name | ### Describe the bug
when using the audio folder builder (`load_dataset("audiofolder", data_dir="/path/to/folder")`), it cannot detect custom split names other than train/validation/test
### Steps to reproduce the bug
i have the following folder structure
```
my_dataset/
├── train/
│ ├── lorem.wav
│ ├── …
│ └── met... | OPEN | 2025-04-20T16:53:21 | 2025-04-20T16:53:21 | null | https://github.com/huggingface/datasets/issues/7529 | phineas-pta | 0 | [] |
7,528 | Data Studio Error: Convert JSONL incorrectly | ### Describe the bug
Hi there,
I uploaded a dataset here https://huggingface.co/datasets/V-STaR-Bench/V-STaR, but I found that Data Studio incorrectly converts the "bboxes" value for the whole dataset. Therefore, anyone who downloaded the dataset via the API would get the wrong "bboxes" value in the data file.
Could ... | OPEN | 2025-04-19T13:21:44 | 2025-05-06T13:18:38 | null | https://github.com/huggingface/datasets/issues/7528 | zxccade | 1 | [] |
7,527 | Auto-merge option for `convert-to-parquet` | ### Feature request
Add a command-line option, e.g. `--auto-merge-pull-request` that enables automatic merging of the commits created by the `convert-to-parquet` tool.
### Motivation
Large datasets may result in dozens of PRs due to the splitting mechanism. Each of these has to be manually accepted via the website.
... | CLOSED | 2025-04-18T16:03:22 | 2025-07-18T19:09:03 | 2025-07-18T19:09:03 | https://github.com/huggingface/datasets/issues/7527 | klamike | 4 | [
"enhancement"
] |
7,526 | Faster downloads/uploads with Xet storage | 
## Xet is out !
Over the past few weeks, Hugging Face’s [Xet Team](https://huggingface.co/xet-team) took a major step forward by [migrating the first Model and Dataset repositories off LFS and to Xet storage](https://huggingface... | OPEN | 2025-04-18T14:46:42 | 2025-05-12T12:09:09 | null | https://github.com/huggingface/datasets/issues/7526 | lhoestq | 0 | [] |
7,520 | Update items in the dataset without `map` | ### Feature request
I would like to be able to update items in my dataset without affecting all rows. At least if there was a range option, I would be able to process those items, save the dataset, and then continue.
If I am supposed to split the dataset first, that is not clear, since the docs suggest that any of th... | OPEN | 2025-04-15T19:39:01 | 2025-04-19T18:47:46 | null | https://github.com/huggingface/datasets/issues/7520 | mashdragon | 1 | [
"enhancement"
] |
7,518 | num_proc parallelization works only for first ~10s. | ### Describe the bug
When I try to load an already downloaded dataset with num_proc=64, the speed is very high for the first 10-20 seconds, achieving 30-40K samples/s with 100% utilization on all cores, but it soon drops to <= 1000 with almost 0% utilization on most cores.
### Steps to reproduce the bug
```
// do... | OPEN | 2025-04-15T11:44:03 | 2025-04-15T13:12:13 | null | https://github.com/huggingface/datasets/issues/7518 | pshishodiaa | 2 | [] |
7,517 | Image Feature in Datasets Library Fails to Handle bytearray Objects from Spark DataFrames | ### Describe the bug
When using `IterableDataset.from_spark()` with a Spark DataFrame containing image data, the `Image` feature class fails to properly process this data type, causing an `AttributeError: 'bytearray' object has no attribute 'get'`
### Steps to reproduce the bug
1. Create a Spark DataFrame with a col... | CLOSED | 2025-04-15T11:29:17 | 2025-05-07T14:17:30 | 2025-05-07T14:17:30 | https://github.com/huggingface/datasets/issues/7517 | giraffacarp | 4 | [] |
7,516 | unsloth/DeepSeek-R1-Distill-Qwen-32B server error | ### Describe the bug
hfhubhttperror: 500 server error: internal server error for url: https://huggingface.co/api/models/unsloth/deepseek-r1-distill-qwen-32b-bnb-4bit/commits/main (request id: root=1-67fe23fa-3a2150eb444c2a823c388579;de3aed68-c397-4da5-94d4-6565efd3b919) internal error - we're working hard to fix this ... | CLOSED | 2025-04-15T09:26:53 | 2025-04-15T09:57:26 | 2025-04-15T09:57:26 | https://github.com/huggingface/datasets/issues/7516 | Editor-1 | 0 | [] |
7,515 | `concatenate_datasets` does not preserve PyTorch format for IterableDataset | ### Describe the bug
When concatenating datasets with `concatenate_datasets`, I would expect the resulting combined dataset to be in the same format as the inputs (assuming it's consistent). This is indeed the behavior when combining `Dataset`, but not when combining `IterableDataset`. Specifically, when applying `con... | CLOSED | 2025-04-15T04:36:34 | 2025-05-19T15:07:38 | 2025-05-19T15:07:38 | https://github.com/huggingface/datasets/issues/7515 | francescorubbo | 2 | [] |
7,513 | MemoryError while creating dataset from generator | ### Describe the bug
# TL;DR
`Dataset.from_generator` function passes all of its arguments to `BuilderConfig.create_config_id`, including `generator` function itself. `BuilderConfig.create_config_id` function tries to hash all the args, which can take a large amount of time or even cause MemoryError if the dataset pr... | CLOSED | 2025-04-15T01:02:02 | 2025-10-23T22:55:10 | 2025-10-23T22:55:10 | https://github.com/huggingface/datasets/issues/7513 | simonreise | 4 | [] |
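One way the cost described here could be avoided is to fingerprint the generator by its compiled bytecode instead of pickling everything it references. The sketch below is an illustration only — `cheap_function_fingerprint` is hypothetical, not the library's fingerprinting — and such a hash ignores closure values and varies across Python versions:

```python
import hashlib

def cheap_function_fingerprint(fn) -> str:
    """Deterministic hash of a function's compiled code, avoiding the
    need to pickle large objects the function may close over."""
    code = fn.__code__
    h = hashlib.sha256()
    h.update(code.co_code)                   # compiled bytecode
    h.update(repr(code.co_consts).encode())  # embedded constants
    h.update(repr(code.co_names).encode())   # referenced global names
    return h.hexdigest()
```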
7,512 | .map() fails if function uses pyvista | ### Describe the bug
Using PyVista inside a .map() produces a crash with `objc[78796]: +[NSResponder initialize] may have been in progress in another thread when fork() was called. We cannot safely call it or ignore it in the fork() child process. Crashing instead. Set a breakpoint on objc_initializeAfterForkError to ... | OPEN | 2025-04-14T19:43:02 | 2025-04-14T20:01:53 | null | https://github.com/huggingface/datasets/issues/7512 | el-hult | 1 | [] |
7,510 | Incompatible dill version (0.3.9) in datasets 2.18.0 - 3.5.0 | ### Describe the bug
Datasets 2.18.0 - 3.5.0 has a dependency on dill < 0.3.9. This causes errors with dill >= 0.3.9.
Could you please take a look into it and make it compatible?
### Steps to reproduce the bug
1. Install datasets >= 2.18.0
2. Install dill >=0.3.9
3. Run pip check
4. Output:
ERROR: pip's dependenc... | CLOSED | 2025-04-14T07:22:44 | 2025-09-15T08:37:49 | 2025-09-15T08:37:49 | https://github.com/huggingface/datasets/issues/7510 | JGrel | 6 | [] |
7,509 | Dataset uses excessive memory when loading files | ### Describe the bug
Hi
I am having an issue when loading a dataset.
I have about 200 JSON files, each about 1GB (total about 215GB). Each row has a few features, which are lists of ints.
I am trying to load the dataset using `load_dataset`.
The dataset is about 1.5M samples
I use `num_proc=32` and a node with 378GB of... | OPEN | 2025-04-13T21:09:49 | 2025-04-28T15:18:55 | null | https://github.com/huggingface/datasets/issues/7509 | avishaiElmakies | 12 | [] |
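As a point of comparison, streaming records lazily keeps memory bounded regardless of total file size. A minimal stdlib sketch, assuming newline-delimited JSON files — `iter_jsonl` is a hypothetical helper, not part of the library:

```python
import json
from typing import Iterator

def iter_jsonl(paths: list, chunk_size: int = 1000) -> Iterator:
    """Yield lists of at most `chunk_size` parsed rows at a time,
    so only one chunk is ever materialized in memory."""
    chunk = []
    for path in paths:
        with open(path, encoding="utf-8") as f:
            for line in f:
                line = line.strip()
                if not line:
                    continue
                chunk.append(json.loads(line))
                if len(chunk) >= chunk_size:
                    yield chunk
                    chunk = []
    if chunk:
        yield chunk
```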
7,508 | Iterating over Image feature columns is extremely slow | We are trying to load datasets where the image column stores `PIL.PngImagePlugin.PngImageFile` images. However, iterating over these datasets is extremely slow.
What I have found:
1. It is the presence of the image column that causes the slowdown. Removing the column from the dataset results in blazingly fast (as expe... | OPEN | 2025-04-10T19:00:54 | 2025-04-15T17:57:08 | null | https://github.com/huggingface/datasets/issues/7508 | sohamparikh | 2 | [] |
7,507 | Front-end statistical data quantity deviation | ### Describe the bug
While browsing the dataset at https://huggingface.co/datasets/NeuML/wikipedia-20250123, I noticed that a dataset with nearly 7M entries was estimated to be only 4M in size—almost half the actual amount. According to the post-download loading and the dataset_info (https://huggingface.co/datasets/Ne... | OPEN | 2025-04-10T02:51:38 | 2025-04-15T12:54:51 | null | https://github.com/huggingface/datasets/issues/7507 | rangehow | 1 | [] |
7,506 | HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access Fineweb-10BT on 4A100 GPUs using SLURM | ### Describe the bug
I am trying to run some finetunings on 4 A100 GPUs using SLURM using axolotl training framework which in turn uses Huggingface's Trainer and Accelerate on [Fineweb-10BT](https://huggingface.co/datasets/HuggingFaceFW/fineweb), but I end up running into 429 Client Error: Too Many Requests for URL er... | OPEN | 2025-04-09T06:32:04 | 2025-06-29T06:04:59 | null | https://github.com/huggingface/datasets/issues/7506 | calvintanama | 2 | [] |
7,505 | HfHubHTTPError: 403 Forbidden: None. Cannot access content at: https://hf.co/api/s3proxy | I have already logged in Huggingface using CLI with my valid token. Now trying to download the datasets using following code:
from transformers import WhisperProcessor, WhisperForConditionalGeneration, WhisperTokenizer, Trainer, TrainingArguments, DataCollatorForSeq2Seq
from datasets import load_dataset, Data... | OPEN | 2025-04-08T14:08:40 | 2025-04-08T14:08:40 | null | https://github.com/huggingface/datasets/issues/7505 | hissain | 0 | [] |
7,504 | BuilderConfig ParquetConfig(...) doesn't have a 'use_auth_token' key. | ### Describe the bug
Trying to run the following fine-tuning script (based on this page [here](https://github.com/huggingface/instruction-tuned-sd)):
```
! accelerate launch /content/instruction-tuned-sd/finetune_instruct_pix2pix.py \
--pretrained_model_name_or_path=${MODEL_ID} \
--dataset_name=${DATASET_NAME... | OPEN | 2025-04-08T10:55:03 | 2025-06-28T09:18:09 | null | https://github.com/huggingface/datasets/issues/7504 | tteguayco | 3 | [] |
7,503 | Inconsistency between load_dataset and load_from_disk functionality | ## Issue Description
I've encountered confusion when using `load_dataset` and `load_from_disk` in the datasets library. Specifically, when working offline with the gsm8k dataset, I can load it using a local path:
```python
import datasets
ds = datasets.load_dataset('/root/xxx/datasets/gsm8k', 'main')
```
output:
```t... | OPEN | 2025-04-08T03:46:22 | 2025-06-28T08:51:16 | null | https://github.com/huggingface/datasets/issues/7503 | zzzzzec | 2 | [] |
7,502 | `load_dataset` of size 40GB creates a cache of >720GB | Hi there,
I am trying to load a dataset from the Hugging Face Hub and split it into train and validation splits. Somehow, when I try to do it with `load_dataset`, it exhausts my disk quota. So, I tried manually downloading the parquet files from the hub and loading them as follows:
```python
ds = DatasetDict(
... | CLOSED | 2025-04-07T16:52:34 | 2025-04-15T15:22:12 | 2025-04-15T15:22:11 | https://github.com/huggingface/datasets/issues/7502 | pietrolesci | 2 | [] |
7,501 | Nested Feature raises ArrowNotImplementedError: Unsupported cast using function cast_struct | ### Describe the bug
`datasets.Features` seems to be unable to handle JSON files that contain fields of `list[dict]`.
### Steps to reproduce the bug
```json
// test.json
{"a": 1, "b": [{"c": 2, "d": 3}, {"c": 4, "d": 5}]}
{"a": 5, "b": [{"c": 7, "d": 8}, {"c": 9, "d": 10}]}
```
```python
import json
from datasets i... | CLOSED | 2025-04-07T12:35:39 | 2025-04-07T12:43:04 | 2025-04-07T12:43:03 | https://github.com/huggingface/datasets/issues/7501 | yaner-here | 1 | [] |
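A possible preprocessing workaround, not a fix in the library: serialize nested list-of-dict fields to JSON strings so the builder only sees flat string columns. `flatten_nested` is a hypothetical helper:

```python
import json

def flatten_nested(rows: list) -> list:
    """Replace any list-of-dict field with its JSON-string encoding,
    leaving all other fields untouched."""
    out = []
    for row in rows:
        flat = {}
        for key, value in row.items():
            if isinstance(value, list) and value and isinstance(value[0], dict):
                flat[key] = json.dumps(value)  # kept as a plain string column
            else:
                flat[key] = value
        out.append(flat)
    return out
```

The nested structure can be recovered later with `json.loads` on the string column.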
7,500 | Make `with_format` correctly indicate that a `Dataset` is compatible with PyTorch's `Dataset` class | ### Feature request
Currently `datasets` does not correctly indicate to the Python type-checker (e.g. `pyright` / `Pylance`) that the output of `with_format` is compatible with PyTorch's `Dataloader` since it does not indicate that the HuggingFace `Dataset` is compatible with the PyTorch `Dataset` class. It would be g... | OPEN | 2025-04-06T09:56:09 | 2025-04-15T12:57:39 | null | https://github.com/huggingface/datasets/issues/7500 | benglewis | 1 | [
"enhancement"
] |
7,498 | Extreme memory bandwidth. | ### Describe the bug
When I use hf datasets on 4 GPU with 40 workers I get some extreme memory bandwidth of constant ~3GB/s.
However, if I wrap the dataset in `IterableDataset`, this issue is gone and the data also loads way faster (4x faster training on 1 worker).
It seems like the workers don't share memory and b... | OPEN | 2025-04-03T11:09:08 | 2025-04-03T11:11:22 | null | https://github.com/huggingface/datasets/issues/7498 | J0SZ | 0 | [] |
7,497 | How to convert videos to images? | ### Feature request
Does anyone know how to extract the images (frames) from videos?
### Motivation
I am trying to use openpi(https://github.com/Physical-Intelligence/openpi) to finetune my Lerobot dataset(V2.0 and V2.1). I find that although the codedaset is v2.0, they are different. It seems like Lerobot V2.0 has two versi... | OPEN | 2025-04-03T07:08:39 | 2025-04-15T12:35:15 | null | https://github.com/huggingface/datasets/issues/7497 | Loki-Lu | 1 | [
"enhancement"
] |
7,496 | Json builder: Allow features to override problematic Arrow types | ### Feature request
In the JSON builder, use explicitly requested feature types before or while converting to Arrow.
### Motivation
Working with JSON datasets is really hard because of Arrow. At the very least, it seems like it should be possible to work-around these problems by explicitly setting problematic colum... | OPEN | 2025-04-02T19:27:16 | 2025-04-15T13:06:09 | null | https://github.com/huggingface/datasets/issues/7496 | edmcman | 1 | [
"enhancement"
] |
7,495 | Columns in the dataset obtained through load_dataset do not correspond to the ones in the dataset viewer since 3.4.0 | ### Describe the bug
I have noticed that on my dataset named [BrunoHays/Accueil_UBS](https://huggingface.co/datasets/BrunoHays/Accueil_UBS), since the version 3.4.0, every column except audio is missing when I load the dataset.
Interestingly, the dataset viewer still shows the correct columns
### Steps to reproduce ... | CLOSED | 2025-04-02T17:01:11 | 2025-07-02T23:24:57 | 2025-07-02T23:24:57 | https://github.com/huggingface/datasets/issues/7495 | bruno-hays | 3 | [] |
7,494 | Broken links in pdf loading documentation | ### Describe the bug
Hi, just a couple of small issues I ran into while reading the docs for [loading pdf data](https://huggingface.co/docs/datasets/main/en/document_load):
1. The link for the [`Create a pdf dataset`](https://huggingface.co/docs/datasets/main/en/document_load#pdffolder) points to https://huggingface.... | CLOSED | 2025-04-02T06:45:22 | 2025-04-15T13:36:25 | 2025-04-15T13:36:04 | https://github.com/huggingface/datasets/issues/7494 | VyoJ | 1 | [] |
7,493 | push_to_hub does not upload videos | ### Describe the bug
Hello,
I would like to upload a video dataset (some .mp4 files and some segments within them), i.e. rows correspond to subsequences from videos. Videos might be referenced by several rows.
I created a dataset locally and it references the videos and the video readers can read them correctly. I u... | OPEN | 2025-04-01T17:00:20 | 2025-09-02T10:32:36 | null | https://github.com/huggingface/datasets/issues/7493 | DominikVincent | 3 | [] |
7,486 | `shared_datadir` fixture is missing | ### Describe the bug
Running the tests for the latest release fails due to missing `shared_datadir` fixture.
### Steps to reproduce the bug
Running `pytest` while building a package for Arch Linux leads to these errors:
```
==================================== ERRORS ====================================
_________ E... | CLOSED | 2025-03-27T18:17:12 | 2025-03-27T19:49:11 | 2025-03-27T19:49:10 | https://github.com/huggingface/datasets/issues/7486 | lahwaacz | 1 | [] |
7,481 | deal with python `10_000` legal number in slice syntax | ### Feature request
```
In [6]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1000]")
In [7]: ds = datasets.load_dataset("HuggingFaceH4/ultrachat_200k", split="train_sft[:1_000]")
[dozens of frames skipped]
File /usr/local/lib/python3.10/dist-packages/datasets/arrow_reader.py:444, in _s... | CLOSED | 2025-03-26T20:10:54 | 2025-03-28T16:20:44 | 2025-03-28T16:20:44 | https://github.com/huggingface/datasets/issues/7481 | sfc-gh-sbekman | 1 | [
"enhancement"
] |
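The request is straightforward to satisfy in principle, since Python's own `int()` already accepts underscore separators. A hedged sketch of a slice parser that tolerates them — `parse_split_slice` is a hypothetical helper, not the actual `arrow_reader` code:

```python
import re

# split name, then an optional [start:stop] slice with digits and underscores
_SLICE_RE = re.compile(r"^(?P<split>[\w-]+)\[(?P<start>[-\d_]*):(?P<stop>[-\d_]*)\]$")

def parse_split_slice(spec: str):
    """Parse e.g. 'train_sft[:1_000]' into (name, start, stop)."""
    m = _SLICE_RE.match(spec)
    if m is None:
        return spec, None, None  # plain split name, no slice
    def to_int(s):
        return int(s) if s else None  # int() itself accepts "1_000"
    return m.group("split"), to_int(m.group("start")), to_int(m.group("stop"))
```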
7,480 | HF_DATASETS_CACHE ignored? | ### Describe the bug
I'm struggling to get things to respect HF_DATASETS_CACHE.
Rationale: I'm on a system that uses NFS for homedir, so downloading to NFS is expensive, slow, and wastes valuable quota compared to local disk. Instead, it seems to rely mostly on HF_HUB_CACHE.
Current version: 3.2.1dev. In the process... | OPEN | 2025-03-26T17:19:34 | 2025-10-23T15:59:18 | null | https://github.com/huggingface/datasets/issues/7480 | stephenroller | 8 | [] |
7,479 | Features.from_arrow_schema is destructive | ### Describe the bug
I came across this, perhaps niche, bug where `Features` does not/cannot account for pyarrow's `nullable=False` option in Fields. Interestingly, I found that in regular "flat" fields this does not necessarily lead to conflicts, but when a non-nullable field is in a struct, an incompatibility arises... | OPEN | 2025-03-26T16:46:43 | 2025-03-26T16:46:58 | null | https://github.com/huggingface/datasets/issues/7479 | BramVanroy | 0 | [] |
7,477 | What is the canonical way to compress a Dataset? | Given that Arrow is the preferred backend for a Dataset, what is a user supposed to do if they want concurrent reads, concurrent writes AND on-disk compression for a larger dataset?
Parquet would be the obvious answer except that there is no native support for writing sharded, parquet datasets concurrently [[1](https:... | OPEN | 2025-03-25T16:47:51 | 2025-04-03T09:13:11 | null | https://github.com/huggingface/datasets/issues/7477 | eric-czech | 4 | [] |
7,475 | IterableDataset's state_dict shard_example_idx is always equal to the number of samples in a shard | ### Describe the bug
I've noticed strange behaviour with IterableDataset's state_dict: the value of shard_example_idx is always equal to the number of samples in a shard.
### Steps to reproduce the bug
I am reusing the example from the doc
```python
from datasets import Dataset
ds = Dataset.from_dict({"a": range(6)}).to_... | CLOSED | 2025-03-25T13:58:07 | 2025-12-12T16:15:37 | 2025-05-06T14:05:07 | https://github.com/huggingface/datasets/issues/7475 | bruno-hays | 10 | [] |
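For reference, the semantics the reporter expects can be shown with a minimal resumable iterable, where `shard_example_idx` tracks the position inside the current shard. This is an illustration only, not the datasets implementation:

```python
class ResumableShards:
    """Iterate over a list of shards, checkpointing the current shard
    and the example index within it."""

    def __init__(self, shards):
        self.shards = shards  # list of lists of examples
        self.shard_idx = 0
        self.example_idx = 0

    def __iter__(self):
        while self.shard_idx < len(self.shards):
            shard = self.shards[self.shard_idx]
            while self.example_idx < len(shard):
                item = shard[self.example_idx]
                self.example_idx += 1  # advance before yielding resumes
                yield item
            self.shard_idx += 1
            self.example_idx = 0

    def state_dict(self):
        return {"shard_idx": self.shard_idx,
                "shard_example_idx": self.example_idx}

    def load_state_dict(self, state):
        self.shard_idx = state["shard_idx"]
        self.example_idx = state["shard_example_idx"]
```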
7,473 | Webdataset data format problem | ### Describe the bug
Please see https://huggingface.co/datasets/ejschwartz/idioms/discussions/1
Error code: FileFormatMismatchBetweenSplitsError
All three splits, train, test, and validation, use webdataset. But only the train split has more than one file. How can I force the other two splits to also be interpreted ... | CLOSED | 2025-03-21T17:23:52 | 2025-03-21T19:19:58 | 2025-03-21T19:19:58 | https://github.com/huggingface/datasets/issues/7473 | edmcman | 1 | [] |
7,472 | Label casting during `map` process is canceled after the `map` process | ### Describe the bug
When preprocessing a multi-label dataset, I introduced a step to convert int labels to float labels as [BCEWithLogitsLoss](https://pytorch.org/docs/stable/generated/torch.nn.BCEWithLogitsLoss.html) expects float labels and forward function of models in transformers package internally use `BCEWithL... | CLOSED | 2025-03-21T07:56:22 | 2025-04-10T05:11:15 | 2025-04-10T05:11:14 | https://github.com/huggingface/datasets/issues/7472 | yoshitomo-matsubara | 6 | [] |
7,471 | Adding argument to `_get_data_files_patterns` | ### Feature request
How about adding an argument for the case where the user already knows the pattern?
https://github.com/huggingface/datasets/blob/a256b85cbc67aa3f0e75d32d6586afc507cf535b/src/datasets/data_files.py#L252
### Motivation
When using load_dataset, people might have 10M images as local files.
However, due to sear... | CLOSED | 2025-03-21T07:17:53 | 2025-03-27T12:30:52 | 2025-03-26T07:26:27 | https://github.com/huggingface/datasets/issues/7471 | SangbumChoi | 3 | [
"enhancement"
] |
7,470 | Is it possible to shard a single-sharded IterableDataset? | I thought https://github.com/huggingface/datasets/pull/7252 might be applicable but looking at it maybe not.
Say we have a process, eg. a database query, that can return data in slightly different order each time. So, the initial query needs to be run by a single thread (not to mention running multiple times incurs mo... | CLOSED | 2025-03-21T04:33:37 | 2025-11-22T07:55:43 | 2025-03-26T06:49:28 | https://github.com/huggingface/datasets/issues/7470 | jonathanasdf | 6 | [] |
7,469 | Custom split name with the web interface | ### Describe the bug
According to the doc here: https://huggingface.co/docs/hub/datasets-file-names-and-splits#custom-split-name
it should infer the split name from the subdirectory of data or the beginning of the names of the files in data.
When doing this manually through web upload it does not work. it uses "train" as a unique spl... | CLOSED | 2025-03-20T20:45:59 | 2025-03-21T07:20:37 | 2025-03-21T07:20:37 | https://github.com/huggingface/datasets/issues/7469 | vince62s | 0 | [] |
7,468 | function `load_dataset` can't resolve folder paths with regex characters like "[]" | ### Describe the bug
When using the `load_dataset` function with a folder path containing regex special characters (such as "[]"), the issue occurs due to how the path is handled in the `resolve_pattern` function. This function passes the unprocessed path directly to `AbstractFileSystem.glob`, which supports regular e... | OPEN | 2025-03-20T05:21:59 | 2025-03-25T10:18:12 | null | https://github.com/huggingface/datasets/issues/7468 | Hpeox | 1 | [] |
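A user-side workaround sketch: escape glob metacharacters in the directory part with the stdlib's `glob.escape` before any pattern resolution. `safe_pattern` is a hypothetical helper, not a datasets API:

```python
import glob
import os

def safe_pattern(data_dir: str, pattern: str = "*") -> str:
    """Build a glob pattern whose directory part is taken literally,
    even if it contains [, ], * or ?."""
    return os.path.join(glob.escape(data_dir), pattern)
```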
7,467 | load_dataset with streaming hangs on parquet datasets | ### Describe the bug
When I try to load a dataset with parquet files (e.g. "bigcode/the-stack"), the dataset loads, but the Python interpreter can't exit and hangs
### Steps to reproduce the bug
```python3
import datasets
print('Start')
dataset = datasets.load_dataset("bigcode/the-stack", data_dir="data/yaml", streaming... | OPEN | 2025-03-18T23:33:54 | 2025-03-25T10:28:04 | null | https://github.com/huggingface/datasets/issues/7467 | The0nix | 1 | [] |
7,461 | List of images behave differently on IterableDataset and Dataset | ### Describe the bug
This code:
```python
def train_iterable_gen():
images = np.array(load_image("https://cdn.britannica.com/61/93061-050-99147DCE/Statue-of-Liberty-Island-New-York-Bay.jpg").resize((128, 128)))
yield {
"images": np.expand_dims(images, axis=0),
"messages": [
... | CLOSED | 2025-03-17T15:59:23 | 2025-03-18T08:57:17 | 2025-03-18T08:57:16 | https://github.com/huggingface/datasets/issues/7461 | FredrikNoren | 2 | [] |
7,458 | Loading the `laion/filtered-wit` dataset in streaming mode fails on v3.4.0 | ### Describe the bug
Loading https://huggingface.co/datasets/laion/filtered-wit in streaming mode fails after update to `datasets==3.4.0`. The dataset loads fine on v3.3.2.
### Steps to reproduce the bug
Steps to reproduce:
```
pip install datasets==3.4.0
python -c "from datasets import load_dataset; load_dataset('l... | CLOSED | 2025-03-17T14:54:02 | 2025-03-17T16:02:04 | 2025-03-17T15:25:55 | https://github.com/huggingface/datasets/issues/7458 | nikita-savelyevv | 1 | [] |
7,457 | Document the HF_DATASETS_CACHE env variable | ### Feature request
Hello,
I have a use case where my team is sharing models and datasets in a shared directory to avoid duplication.
I noticed that the [cache documentation for datasets](https://huggingface.co/docs/datasets/main/en/cache) only mentions the `HF_HOME` environment variable but never the `HF_DATASETS_CACHE`... | CLOSED | 2025-03-17T12:24:50 | 2025-05-06T15:54:39 | 2025-05-06T15:54:39 | https://github.com/huggingface/datasets/issues/7457 | LSerranoPEReN | 4 | [
"enhancement"
] |
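My understanding of the precedence, sketched below, is that `HF_DATASETS_CACHE` wins over `HF_HOME/datasets`, which wins over the default path; treat this ordering as an assumption to verify against the official docs. `resolve_datasets_cache` is a hypothetical helper:

```python
import os

def resolve_datasets_cache(env: dict) -> str:
    """Resolve the datasets cache dir from an environment mapping,
    mirroring the assumed precedence of the HF env variables."""
    if env.get("HF_DATASETS_CACHE"):
        return env["HF_DATASETS_CACHE"]
    if env.get("HF_HOME"):
        return os.path.join(env["HF_HOME"], "datasets")
    return os.path.join(os.path.expanduser("~"),
                        ".cache", "huggingface", "datasets")
```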