Dataset columns (type, with per-column minimum and maximum values):

column           type          min                  max
id               int64         599M                 3.26B
number           int64         1                    7.7k
title            string        1 char               290 chars
body             string        0 chars              228k chars
state            string        2 classes
html_url         string        46 chars             51 chars
created_at       timestamp[s]  2020-04-14 10:18:02  2025-07-23 08:04:53
updated_at       timestamp[s]  2020-04-27 16:04:17  2025-07-23 18:53:44
closed_at        timestamp[s]  2020-04-14 12:01:40  2025-07-23 16:44:42
user             dict
labels           list          0 items              4 items
is_pull_request  bool          2 classes
comments         list          0 items              0 items
3,255,350,916
7,698
NotImplementedError when using streaming=True in Google Colab environment
### Describe the bug When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after...
open
https://github.com/huggingface/datasets/issues/7698
2025-07-23T08:04:53
2025-07-23T15:06:23
null
{ "login": "Aniket17200", "id": 100470741, "type": "User" }
[]
false
[]
3,254,526,399
7,697
How to solve "Spaces stuck in Building" problems
### Describe the bug Reopen #7530 My problem spaces are: https://huggingface.co/spaces/Genius-Society/url_shortner https://huggingface.co/spaces/Genius-Society/translator Please help troubleshoot the problem ### Steps to reproduce the bug <img width="303" height="266" alt="Image" src="https://github.com/user-atta...
open
https://github.com/huggingface/datasets/issues/7697
2025-07-23T01:30:32
2025-07-23T01:30:32
null
{ "login": "kakamond", "id": 44517413, "type": "User" }
[]
false
[]
3,253,433,350
7,696
load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility
### Describe the bug In datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, this breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below). ### Steps to reproduce the bug ```python from dat...
open
https://github.com/huggingface/datasets/issues/7696
2025-07-22T17:02:17
2025-07-22T17:03:24
null
{ "login": "Manalelaidouni", "id": 25346345, "type": "User" }
[]
false
[]
3,251,904,843
7,695
Support downloading specific splits in load_dataset
This PR builds on #6832 by @mariosasko. May close - #4101, #2538 Discussion - https://github.com/huggingface/datasets/pull/7648#issuecomment-3084050130 --- ### Note - This PR is under work and frequent changes will be pushed.
open
https://github.com/huggingface/datasets/pull/7695
2025-07-22T09:33:54
2025-07-23T18:53:44
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,247,600,408
7,694
Dataset.to_json consumes excessive memory, appears to not be a streaming operation
### Describe the bug When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation. This behavior ...
open
https://github.com/huggingface/datasets/issues/7694
2025-07-21T07:51:25
2025-07-21T07:51:25
null
{ "login": "ycq0125", "id": 49603999, "type": "User" }
[]
false
[]
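The constant-memory behavior this report asks for can be illustrated with a stdlib-only sketch (not the `datasets` implementation): rows are serialized and flushed in fixed-size batches, so peak memory tracks the batch size rather than the whole dataset.

```python
import io
import json

def write_jsonl_streaming(rows, fp, batch_size=1000):
    # Serialize and flush in small batches so peak memory is proportional
    # to the batch, not the full dataset (the behavior the issue requests).
    batch = []
    for row in rows:
        batch.append(json.dumps(row, ensure_ascii=False))
        if len(batch) >= batch_size:
            fp.write("\n".join(batch) + "\n")
            batch.clear()
    if batch:
        fp.write("\n".join(batch) + "\n")

buf = io.StringIO()
write_jsonl_streaming(({"i": i} for i in range(5)), buf, batch_size=2)
print(buf.getvalue().count("\n"))  # 5 lines written
```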
3,246,369,678
7,693
Dataset scripts are no longer supported, but found superb.py
### Describe the bug Hello, I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions. I then get the error : ``` -------------------------------------------------------------------------- ...
open
https://github.com/huggingface/datasets/issues/7693
2025-07-20T13:48:06
2025-07-22T17:11:00
null
{ "login": "edwinzajac", "id": 114297534, "type": "User" }
[]
false
[]
3,246,268,635
7,692
xopen: invalid start byte for streaming dataset with trust_remote_code=True
### Describe the bug I am trying to load YODAS2 dataset with datasets==3.6.0 ``` from datasets import load_dataset next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True))) ``` And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid ...
open
https://github.com/huggingface/datasets/issues/7692
2025-07-20T11:08:20
2025-07-20T11:08:20
null
{ "login": "sedol1339", "id": 5188731, "type": "User" }
[]
false
[]
3,245,547,170
7,691
Large WebDataset: pyarrow.lib.ArrowCapacityError on load() even with streaming
### Describe the bug I am creating a large WebDataset-format dataset for sign language processing research, and a number of the videos are over 2GB. The instant I hit one of the shards with one of those videos, I get a ArrowCapacityError, even with streaming. I made a config for the dataset that specifically inclu...
open
https://github.com/huggingface/datasets/issues/7691
2025-07-19T18:40:27
2025-07-21T19:17:33
null
{ "login": "cleong110", "id": 122366389, "type": "User" }
[]
false
[]
3,244,380,691
7,690
HDF5 support
This PR adds support for tabular HDF5 file(s) by converting each row to an Arrow table. It supports columns with the usual dtypes, including up to 5-dimensional arrays, as well as complex/compound types by splitting them into several columns. All datasets within the HDF5 file should have rows on the first dim...
open
https://github.com/huggingface/datasets/pull/7690
2025-07-18T21:09:41
2025-07-23T02:54:11
null
{ "login": "klamike", "id": 17013474, "type": "User" }
[]
true
[]
3,242,580,301
7,689
BadRequestError for loading dataset?
### Describe the bug Up until a couple days ago I was having no issues loading `Helsinki-NLP/europarl` and `Helsinki-NLP/un_pc`, but now suddenly I get the following error: ``` huggingface_hub.errors.BadRequestError: (Request ID: ...) Bad request: * Invalid input: expected array, received string * at paths * Invalid...
closed
https://github.com/huggingface/datasets/issues/7689
2025-07-18T09:30:04
2025-07-18T11:59:51
2025-07-18T11:52:29
{ "login": "WPoelman", "id": 45011687, "type": "User" }
[]
false
[]
3,238,851,443
7,688
No module named "distributed"
### Describe the bug hello, when I run the command "from datasets.distributed import split_dataset_by_node", I always met the bug "No module named 'datasets.distributed'" in different versions like 4.0.0, 2.21.0 and so on. How can I solve this? ### Steps to reproduce the bug 1. pip install datasets 2. from datasets.di...
open
https://github.com/huggingface/datasets/issues/7688
2025-07-17T09:32:35
2025-07-21T13:50:27
null
{ "login": "yingtongxiong", "id": 45058324, "type": "User" }
[]
false
[]
3,238,760,301
7,687
Datasets keeps rebuilding the dataset every time i call the python script
### Describe the bug Every time the script runs, the number of samples somehow increases. This can cause a 12 MB dataset to accumulate additional built versions of 400 MB+ <img width="363" height="481" alt="Image" src="https://github.com/user-attachments/assets/766ce958-bd2b-41bc-b950-86710259bfdc" /> ### Steps to reproduce the bug `from datasets
open
https://github.com/huggingface/datasets/issues/7687
2025-07-17T09:03:38
2025-07-17T09:03:38
null
{ "login": "CALEB789", "id": 58883113, "type": "User" }
[]
false
[]
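One common cause of repeated rebuilds is a cache fingerprint that is not deterministic across runs. A stdlib-only sketch of fingerprint-based caching (hypothetical; not the `datasets` internals): identical inputs hash to the same key, so a rerun reuses the previous build instead of creating a new version.

```python
import hashlib
import json

_cache = {}

def build_dataset(raw_rows):
    # Cache on a deterministic fingerprint of the inputs (sorted keys,
    # stable serialization) so re-running the script reuses the build.
    key = hashlib.sha256(
        json.dumps(raw_rows, sort_keys=True).encode()
    ).hexdigest()
    if key not in _cache:
        _cache[key] = [dict(r, built=True) for r in raw_rows]
    return _cache[key]

a = build_dataset([{"x": 1}])
b = build_dataset([{"x": 1}])
print(a is b)  # True: identical inputs reuse the cached build
```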
3,237,201,090
7,686
load_dataset does not check .no_exist files in the hub cache
### Describe the bug I'm not entirely sure if this should be submitted as a bug in the `datasets` library or the `huggingface_hub` library, given it could be fixed at different levels of the stack. The fundamental issue is that the `load_datasets` api doesn't use the `.no_exist` files in the hub cache unlike other wr...
open
https://github.com/huggingface/datasets/issues/7686
2025-07-16T20:04:00
2025-07-16T20:04:00
null
{ "login": "jmaccarl", "id": 3627235, "type": "User" }
[]
false
[]
3,236,979,340
7,685
Inconsistent range request behavior for parquet REST api
### Describe the bug First off, I do apologize if this is not the correct repo for submitting this issue. Please direct me to another one if it's more appropriate elsewhere. The datasets rest api is inconsistently giving `416 Range Not Satisfiable` when using a range request to get portions of the parquet files. Mor...
open
https://github.com/huggingface/datasets/issues/7685
2025-07-16T18:39:44
2025-07-16T18:41:53
null
{ "login": "universalmind303", "id": 21327470, "type": "User" }
[]
false
[]
3,231,680,474
7,684
fix audio cast storage from array + sampling_rate
fix https://github.com/huggingface/datasets/issues/7682
closed
https://github.com/huggingface/datasets/pull/7684
2025-07-15T10:13:42
2025-07-15T10:24:08
2025-07-15T10:24:07
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,231,553,161
7,683
Convert to string when needed + faster .zstd
for https://huggingface.co/datasets/allenai/olmo-mix-1124
closed
https://github.com/huggingface/datasets/pull/7683
2025-07-15T09:37:44
2025-07-15T10:13:58
2025-07-15T10:13:56
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,229,687,253
7,682
Fail to cast Audio feature for numpy arrays in datasets 4.0.0
### Describe the bug Casting features with Audio for numpy arrays - done here with `ds.map(gen_sine, features=features)` fails in version 4.0.0 but not in version 3.6.0 ### Steps to reproduce the bug The following `uv script` should be able to reproduce the bug in version 4.0.0 and pass in version 3.6.0 on a macOS ...
closed
https://github.com/huggingface/datasets/issues/7682
2025-07-14T18:41:02
2025-07-15T12:10:39
2025-07-15T10:24:08
{ "login": "luatil-cloud", "id": 163345686, "type": "User" }
[]
false
[]
3,227,112,736
7,681
Probabilistic High Memory Usage and Freeze on Python 3.10
### Describe the bug A probabilistic issue encountered when processing datasets containing PIL.Image columns using the huggingface/datasets library on Python 3.10. The process occasionally experiences a sudden and significant memory spike, reaching 100% utilization, leading to a complete freeze. During this freeze, th...
open
https://github.com/huggingface/datasets/issues/7681
2025-07-14T01:57:16
2025-07-14T01:57:16
null
{ "login": "ryan-minato", "id": 82735346, "type": "User" }
[]
false
[]
3,224,824,151
7,680
Question about iterable dataset and streaming
In the doc, I found the following example: https://github.com/huggingface/datasets/blob/611f5a592359ebac6f858f515c776aa7d99838b2/docs/source/stream.mdx?plain=1#L65-L78 I am confused, 1. If we have already loaded the dataset, why doing `to_iterable_dataset`? Does it go through the dataset faster than map-style datase...
open
https://github.com/huggingface/datasets/issues/7680
2025-07-12T04:48:30
2025-07-15T13:39:38
null
{ "login": "Tavish9", "id": 73541181, "type": "User" }
[]
false
[]
3,220,787,371
7,679
metric glue breaks with 4.0.0
### Describe the bug worked fine with 3.6.0, and with 4.0.0 `eval_metric = metric.compute()` in HF Accelerate breaks. The code that fails is: https://huggingface.co/spaces/evaluate-metric/glue/blob/v0.4.0/glue.py#L84 ``` def simple_accuracy(preds, labels): print(preds, labels) print(f"{preds==labels}") r...
closed
https://github.com/huggingface/datasets/issues/7679
2025-07-10T21:39:50
2025-07-11T17:42:01
2025-07-11T17:42:01
{ "login": "stas00", "id": 10676103, "type": "User" }
[]
false
[]
3,218,625,544
7,678
To support decoding audio data, please install 'torchcodec'.
In the latest version of datasets==4.0.0, I cannot print the audio data in a Colab notebook, but it works in version 3.6.0. !pip install -q -U datasets huggingface_hub fsspec from datasets import load_dataset downloaded_dataset = load_dataset("ymoslem/MediaSpeech", "tr", split="train") print(downloaded_datase...
closed
https://github.com/huggingface/datasets/issues/7678
2025-07-10T09:43:13
2025-07-22T03:46:52
2025-07-11T05:05:42
{ "login": "alpcansoydas", "id": 48163702, "type": "User" }
[]
false
[]
3,218,044,656
7,677
Toxicity fails with datasets 4.0.0
### Describe the bug With the latest 4.0.0 release, huggingface toxicity evaluation module fails with error: `ValueError: text input must be of type `str` (single example), `List[str]` (batch or single pretokenized example) or `List[List[str]]` (batch of pretokenized examples).` ### Steps to reproduce the bug Repro:...
closed
https://github.com/huggingface/datasets/issues/7677
2025-07-10T06:15:22
2025-07-11T04:40:59
2025-07-11T04:40:59
{ "login": "serena-ruan", "id": 82044803, "type": "User" }
[]
false
[]
3,216,857,559
7,676
Many things broken since the new 4.0.0 release
### Describe the bug The new changes in 4.0.0 are breaking many datasets, including those from lm-evaluation-harness. I am trying to revert back to older versions, like 3.6.0 to make the eval work but I keep getting: ``` Python File /venv/main/lib/python3.12/site-packages/datasets/features/features.py:1474, in genera...
open
https://github.com/huggingface/datasets/issues/7676
2025-07-09T18:59:50
2025-07-21T10:38:01
null
{ "login": "mobicham", "id": 37179323, "type": "User" }
[]
false
[]
3,216,699,094
7,675
common_voice_11_0.py failure in dataset library
### Describe the bug I tried to download dataset but have got this error: from datasets import load_dataset load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True) --------------------------------------------------------------------------- RuntimeError Tr...
open
https://github.com/huggingface/datasets/issues/7675
2025-07-09T17:47:59
2025-07-22T09:35:42
null
{ "login": "egegurel", "id": 98793855, "type": "User" }
[]
false
[]
3,216,251,069
7,674
set dev version
null
closed
https://github.com/huggingface/datasets/pull/7674
2025-07-09T15:01:25
2025-07-09T15:04:01
2025-07-09T15:01:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,216,075,633
7,673
Release: 4.0.0
null
closed
https://github.com/huggingface/datasets/pull/7673
2025-07-09T14:03:16
2025-07-09T14:36:19
2025-07-09T14:36:18
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,215,287,164
7,672
Fix double sequence
```python >>> Features({"a": Sequence(Sequence({"c": Value("int64")}))}) {'a': List({'c': List(Value('int64'))})} ``` instead of `{'a': {'c': List(List(Value('int64')))}}`
closed
https://github.com/huggingface/datasets/pull/7672
2025-07-09T09:53:39
2025-07-09T09:56:29
2025-07-09T09:56:28
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,213,223,886
7,671
Mapping function not working if the first example is returned as None
### Describe the bug https://github.com/huggingface/datasets/blob/8a19de052e3d79f79cea26821454bbcf0e9dcd68/src/datasets/arrow_dataset.py#L3652C29-L3652C37 Here we can see the writer is initialized on `i==0`. However, there can be cases where in the user mapping function, the first example is filtered out (length cons...
closed
https://github.com/huggingface/datasets/issues/7671
2025-07-08T17:07:47
2025-07-09T12:30:32
2025-07-09T12:30:32
{ "login": "dnaihao", "id": 46325823, "type": "User" }
[]
false
[]
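The fix the issue points at is to initialize the writer on the first example that survives the mapping function, not on index 0. A minimal stdlib sketch of that pattern (hypothetical names, not the `arrow_dataset.py` code):

```python
def filtered_map(examples, fn):
    # Initialize the output lazily on the first surviving example rather
    # than keying initialization to `i == 0` (the bug described above).
    out = None
    for ex in examples:
        result = fn(ex)
        if result is None:
            continue  # the first example may be filtered out
        if out is None:
            out = []  # stand-in for creating the writer
        out.append(result)
    return out or []

print(filtered_map([1, 2, 3], lambda x: None if x == 1 else x * 10))  # [20, 30]
```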
3,208,962,372
7,670
Fix audio bytes
null
closed
https://github.com/huggingface/datasets/pull/7670
2025-07-07T13:05:15
2025-07-07T13:07:47
2025-07-07T13:05:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,203,541,091
7,669
How can I add my custom data to huggingface datasets
I want to add my custom dataset to Hugging Face Datasets. Please guide me on how to achieve that.
open
https://github.com/huggingface/datasets/issues/7669
2025-07-04T19:19:54
2025-07-05T18:19:37
null
{ "login": "xiagod", "id": 219205504, "type": "User" }
[]
false
[]
3,199,039,322
7,668
Broken EXIF crash the whole program
### Describe the bug When parsing this image in the ImageNet1K dataset, the `datasets` library crashes the whole training process just because it is unable to parse an invalid EXIF tag. ![Image](https://github.com/user-attachments/assets/3c840203-ac8c-41a0-9cf7-45f64488037d) ### Steps to reproduce the bug Use the `datasets.Image.decod...
open
https://github.com/huggingface/datasets/issues/7668
2025-07-03T11:24:15
2025-07-03T12:27:16
null
{ "login": "Seas0", "id": 30485844, "type": "User" }
[]
false
[]
3,196,251,707
7,667
Fix infer list of images
cc @kashif
closed
https://github.com/huggingface/datasets/pull/7667
2025-07-02T15:07:58
2025-07-02T15:10:28
2025-07-02T15:08:03
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,196,220,722
7,666
Backward compat list feature
cc @kashif
closed
https://github.com/huggingface/datasets/pull/7666
2025-07-02T14:58:00
2025-07-02T15:00:37
2025-07-02T14:59:40
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,193,239,955
7,665
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action:...
closed
https://github.com/huggingface/datasets/issues/7665
2025-07-01T17:14:53
2025-07-01T17:17:48
2025-07-01T17:17:48
{ "login": "zdzichukowalski", "id": 1151198, "type": "User" }
[]
false
[]
3,193,239,035
7,664
Function load_dataset() misinterprets string field content as part of dataset schema when dealing with `.jsonl` files
### Describe the bug When loading a `.jsonl` file using `load_dataset("json", data_files="data.jsonl", split="train")`, the function misinterprets the content of a string field as if it were part of the dataset schema. In my case there is a field `body:` with a string value ``` "### Describe the bug (...) ,action:...
open
https://github.com/huggingface/datasets/issues/7664
2025-07-01T17:14:32
2025-07-09T13:14:11
null
{ "login": "zdzichukowalski", "id": 1151198, "type": "User" }
[]
false
[]
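For comparison, parsing the same kind of file line by line with stdlib `json` keeps a string field a string regardless of its content; each line is an independent JSON document, so a field's value is never promoted into the schema:

```python
import io
import json

# A string field whose content looks like markup must stay a string.
jsonl = io.StringIO(
    '{"body": "### Describe the bug", "n": 1}\n'
    '{"body": "plain text", "n": 2}\n'
)
rows = [json.loads(line) for line in jsonl]
print(all(isinstance(r["body"], str) for r in rows))  # True
```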
3,192,582,371
7,663
Custom metadata filenames
example: https://huggingface.co/datasets/lhoestq/overlapping-subsets-imagefolder/tree/main To make multiple subsets for an imagefolder (one metadata file per subset), e.g. ```yaml configs: - config_name: default metadata_filenames: - metadata.csv - config_name: other metadata_filenames: ...
closed
https://github.com/huggingface/datasets/pull/7663
2025-07-01T13:50:36
2025-07-01T13:58:41
2025-07-01T13:58:39
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,190,805,531
7,662
Applying map after transform with multiprocessing will cause OOM
### Describe the bug I have a 30TB dataset. When I perform add_column and cast_column operations on it and then execute a multiprocessing map, it results in an OOM (Out of Memory) error. However, if I skip the add_column and cast_column steps and directly run the map, there is no OOM. After debugging step by step, I f...
open
https://github.com/huggingface/datasets/issues/7662
2025-07-01T05:45:57
2025-07-10T06:17:40
null
{ "login": "JunjieLl", "id": 26482910, "type": "User" }
[]
false
[]
3,190,408,237
7,661
fix del tqdm lock error
fixes https://github.com/huggingface/datasets/issues/7660
open
https://github.com/huggingface/datasets/pull/7661
2025-07-01T02:04:02
2025-07-08T01:38:46
null
{ "login": "Hypothesis-Z", "id": 44766273, "type": "User" }
[]
true
[]
3,189,028,251
7,660
AttributeError: type object 'tqdm' has no attribute '_lock'
### Describe the bug `AttributeError: type object 'tqdm' has no attribute '_lock'` It occurs when I'm trying to load datasets in thread pool. Issue https://github.com/huggingface/datasets/issues/6066 and PR https://github.com/huggingface/datasets/pull/6067 https://github.com/huggingface/datasets/pull/6068 tried to f...
open
https://github.com/huggingface/datasets/issues/7660
2025-06-30T15:57:16
2025-07-03T15:14:27
null
{ "login": "Hypothesis-Z", "id": 44766273, "type": "User" }
[]
false
[]
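The race the issue describes (one thread tearing down the class-level lock while another still expects the attribute) can be avoided with a guarded, lazily created class lock. This is an illustrative stdlib pattern, not tqdm's actual code:

```python
import threading

class ProgressLike:
    # Class-level lock created lazily; _lock_guard serializes creation
    # and reset so no thread ever observes a missing attribute.
    _lock = None
    _lock_guard = threading.Lock()

    @classmethod
    def get_lock(cls):
        with cls._lock_guard:
            if cls._lock is None:
                cls._lock = threading.RLock()
            return cls._lock

    @classmethod
    def reset_lock(cls):
        # Resetting to None (instead of `del cls._lock`) avoids the
        # AttributeError reported above.
        with cls._lock_guard:
            cls._lock = None
```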
3,187,882,217
7,659
Update the beans dataset link in Preprocess
In the Preprocess tutorial, the link to "the beans dataset" is incorrect. Fixed.
closed
https://github.com/huggingface/datasets/pull/7659
2025-06-30T09:58:44
2025-07-07T08:38:19
2025-07-01T14:01:42
{ "login": "HJassar", "id": 5434867, "type": "User" }
[]
true
[]
3,187,800,504
7,658
Fix: Prevent loss of info.features and column_names in IterableDatasetDict.map when features is None
This PR fixes a bug where calling `IterableDatasetDict.map()` or `IterableDataset.map()` with the default `features=None` argument would overwrite the existing `info.features` attribute with `None`. This, in turn, caused the resulting dataset to lose its schema, breaking downstream usage of attributes like `column_name...
closed
https://github.com/huggingface/datasets/pull/7658
2025-06-30T09:31:12
2025-07-01T16:26:30
2025-07-01T16:26:12
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,186,036,016
7,657
feat: add subset_name as alias for name in load_dataset
fixes #7637 This PR introduces subset_name as a user-facing alias for the name (previously `config_name`) argument in load_dataset. It aligns terminology with the Hugging Face Hub UI (which shows “Subset”), reducing confusion for new users. Supports `subset_name` in `load_dataset()` Adds `.subset_name` propert...
open
https://github.com/huggingface/datasets/pull/7657
2025-06-29T10:39:00
2025-07-18T17:45:41
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,185,865,686
7,656
fix(iterable): ensure MappedExamplesIterable supports state_dict for resume
Fixes #7630 ### Problem When calling `.map()` on an `IterableDataset`, resuming from a checkpoint skips a large number of samples. This is because `MappedExamplesIterable` did not implement `state_dict()` or `load_state_dict()`, so checkpointing was not properly delegated to the underlying iterable. ### What Thi...
open
https://github.com/huggingface/datasets/pull/7656
2025-06-29T07:50:13
2025-06-29T07:50:13
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,185,382,105
7,655
Added specific use cases in Improve Performace
Fixes #2494
open
https://github.com/huggingface/datasets/pull/7655
2025-06-28T19:00:32
2025-06-28T19:00:32
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,184,770,992
7,654
fix(load): strip deprecated use_auth_token from config_kwargs
Fixes #7504 This PR resolves a compatibility error when loading datasets via `load_dataset()` using outdated arguments like `use_auth_token`. **What was happening:** Users passing `use_auth_token` in `load_dataset(..., use_auth_token=...)` encountered a `ValueError`: BuilderConfig ParquetConfig(...) doesn't have...
open
https://github.com/huggingface/datasets/pull/7654
2025-06-28T09:20:21
2025-06-28T09:20:21
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,184,746,093
7,653
feat(load): fallback to `load_from_disk()` when loading a saved dataset directory
### Related Issue Fixes #7503 Partially addresses #5044 by allowing `load_dataset()` to auto-detect and gracefully delegate to `load_from_disk()` for locally saved datasets. --- ### What does this PR do? This PR introduces a minimal fallback mechanism in `load_dataset()` that detects when the provided `p...
open
https://github.com/huggingface/datasets/pull/7653
2025-06-28T08:47:36
2025-06-28T08:47:36
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,183,372,055
7,652
Add columns support to JSON loader for selective key filtering
Fixes #7594 This PR adds support for filtering specific columns when loading datasets from .json or .jsonl files — similar to how the columns=... argument works for Parquet. As suggested, support for the `columns=...` argument (previously available for Parquet) has now been extended to **JSON and JSONL** loading v...
open
https://github.com/huggingface/datasets/pull/7652
2025-06-27T16:18:42
2025-07-14T10:41:53
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
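The `columns=...` behavior being added can be sketched for JSONL with the stdlib alone (a hypothetical helper, not the PR's implementation): keep only the requested keys from each record while streaming through the file.

```python
import io
import json

def load_jsonl_columns(fp, columns):
    # Parse each line independently and project onto the requested
    # columns, mirroring the selective key filtering described above.
    for line in fp:
        record = json.loads(line)
        yield {k: record[k] for k in columns if k in record}

fp = io.StringIO('{"a": 1, "b": 2, "c": 3}\n{"a": 4, "b": 5}\n')
print(list(load_jsonl_columns(fp, ["a", "c"])))
# → [{'a': 1, 'c': 3}, {'a': 4}]
```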
3,182,792,775
7,651
fix: Extended metadata file names for folder_based_builder
Fixes #7650. The metadata files generated by the `DatasetDict.save_to_file` function are not included in the folder_based_builder's metadata list, causing issues when only 1 actual data file is present, as described in issue #7650. This PR adds these filenames to the builder, allowing correct loading.
open
https://github.com/huggingface/datasets/pull/7651
2025-06-27T13:12:11
2025-06-30T08:19:37
null
{ "login": "iPieter", "id": 6965756, "type": "User" }
[]
true
[]
3,182,745,315
7,650
`load_dataset` defaults to json file format for datasets with 1 shard
### Describe the bug I currently have multiple datasets (train+validation) saved as 50MB shards. For one dataset the validation pair is small enough to fit into a single shard and this apparently causes problems when loading the dataset. I created the datasets using a DatasetDict, saved them as 50MB arrow files for st...
open
https://github.com/huggingface/datasets/issues/7650
2025-06-27T12:54:25
2025-06-27T12:54:25
null
{ "login": "iPieter", "id": 6965756, "type": "User" }
[]
false
[]
3,181,481,444
7,649
Enable parallel shard upload in push_to_hub() using num_proc
Fixes #7591 ### Add num_proc support to `push_to_hub()` for parallel shard upload This PR adds support for parallel upload of dataset shards via the `num_proc` argument in `Dataset.push_to_hub()`. 📌 While the `num_proc` parameter was already present in the `push_to_hub()` signature and correctly passed to `_p...
closed
https://github.com/huggingface/datasets/pull/7649
2025-06-27T05:59:03
2025-07-07T18:13:53
2025-07-07T18:13:52
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
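The parallel-upload idea can be sketched with `concurrent.futures` (illustrative only; `upload_one` is a hypothetical stand-in for the per-shard upload call). `ThreadPoolExecutor.map` preserves shard order regardless of completion order.

```python
from concurrent.futures import ThreadPoolExecutor

def upload_shards(shards, upload_one, num_proc=1):
    # Upload shards concurrently when num_proc > 1; results come back
    # in shard order even if uploads finish out of order.
    if num_proc <= 1:
        return [upload_one(s) for s in shards]
    with ThreadPoolExecutor(max_workers=num_proc) as pool:
        return list(pool.map(upload_one, shards))

print(upload_shards(["s0", "s1", "s2"], lambda s: s.upper(), num_proc=2))
# → ['S0', 'S1', 'S2']
```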
3,181,409,736
7,648
Fix misleading add_column() usage example in docstring
Fixes #7611 This PR fixes the usage example in the Dataset.add_column() docstring, which previously implied that add_column() modifies the dataset in-place. Why: The method returns a new dataset with the additional column, and users must assign the result to a variable to preserve the change. This should make...
closed
https://github.com/huggingface/datasets/pull/7648
2025-06-27T05:27:04
2025-07-20T16:07:49
2025-07-17T13:14:17
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
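The corrected semantics (a new dataset is returned; the original is untouched) can be shown with a tiny stdlib analog of `add_column()` (illustrative, not the `datasets` implementation):

```python
def add_column(rows, name, values):
    # Returns a NEW list of rows; the input is left unmodified, matching
    # the docstring fix described above.
    return [dict(r, **{name: v}) for r, v in zip(rows, values)]

rows = [{"a": 1}, {"a": 2}]
new_rows = add_column(rows, "b", [10, 20])
print(rows)      # unchanged: [{'a': 1}, {'a': 2}]
print(new_rows)  # [{'a': 1, 'b': 10}, {'a': 2, 'b': 20}]
```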
3,178,952,517
7,647
loading mozilla-foundation--common_voice_11_0 fails
### Describe the bug Hello everyone, i am trying to load `mozilla-foundation--common_voice_11_0` and it fails. Reproducer ``` import datasets datasets.load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", streaming=True, trust_remote_code=True) ``` and it fails with ``` File ~/opt/envs/.../lib/py...
open
https://github.com/huggingface/datasets/issues/7647
2025-06-26T12:23:48
2025-07-10T14:49:30
null
{ "login": "pavel-esir", "id": 5703039, "type": "User" }
[]
false
[]
3,178,036,854
7,646
Introduces automatic subset-level grouping for folder-based dataset builders #7066
Fixes #7066 This PR introduces automatic **subset-level grouping** for folder-based dataset builders by: 1. Adding a utility function `group_files_by_subset()` that clusters files by root name (ignoring digits and shard suffixes). 2. Integrating this logic into `FolderBasedBuilder._split_generators()` to yield one...
open
https://github.com/huggingface/datasets/pull/7646
2025-06-26T07:01:37
2025-07-14T10:42:56
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
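The grouping rule described in the PR (cluster files by root name, ignoring digits and shard suffixes) can be approximated with a small regex sketch; the exact suffix patterns here are assumptions, not the PR's actual logic:

```python
import re
from collections import defaultdict

def group_files_by_subset(files):
    # Strip shard suffixes like "-00000-of-00002" plus the extension,
    # then trailing digits, and cluster by the remaining root name.
    groups = defaultdict(list)
    for f in files:
        root = re.sub(r"(-\d+(-of-\d+)?)?\.\w+$", "", f)
        root = re.sub(r"\d+$", "", root)
        groups[root].append(f)
    return dict(groups)

groups = group_files_by_subset(
    ["train-00000-of-00002.parquet", "train-00001-of-00002.parquet", "test.parquet"]
)
print(sorted(groups))  # ['test', 'train']
```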
3,176,810,164
7,645
`ClassLabel` docs: Correct value for unknown labels
This small change fixes the documentation to be compliant with what happens in `encode_example`. https://github.com/huggingface/datasets/blob/e71b0b19d79c7531f9b9bea7c09916b5f6157f42/src/datasets/features/features.py#L1126-L1129
open
https://github.com/huggingface/datasets/pull/7645
2025-06-25T20:01:35
2025-06-25T20:01:35
null
{ "login": "l-uuz", "id": 56924246, "type": "User" }
[]
true
[]
3,176,363,492
7,644
fix sequence ci
fix error from https://github.com/huggingface/datasets/pull/7643
closed
https://github.com/huggingface/datasets/pull/7644
2025-06-25T17:07:55
2025-06-25T17:10:30
2025-06-25T17:08:01
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,176,354,431
7,643
Backward compat sequence instance
useful to still get `isinstance(Sequence(Value("int64")), Sequence)` for downstream libs like evaluate
closed
https://github.com/huggingface/datasets/pull/7643
2025-06-25T17:05:09
2025-06-25T17:07:40
2025-06-25T17:05:44
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,176,025,890
7,642
fix length for ci
null
closed
https://github.com/huggingface/datasets/pull/7642
2025-06-25T15:10:38
2025-06-25T15:11:53
2025-06-25T15:11:51
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,175,953,405
7,641
update docs and docstrings
null
closed
https://github.com/huggingface/datasets/pull/7641
2025-06-25T14:48:58
2025-06-25T14:51:46
2025-06-25T14:49:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,175,914,924
7,640
better features repr
following the addition of List in #7634 before: ```python In [3]: ds.features Out[3]: {'json': {'id': Value(dtype='string', id=None), 'metadata:transcript': [{'end': Value(dtype='float64', id=None), 'start': Value(dtype='float64', id=None), 'transcript': Value(dtype='string', id=None), 'wor...
closed
https://github.com/huggingface/datasets/pull/7640
2025-06-25T14:37:32
2025-06-25T14:46:47
2025-06-25T14:46:45
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,175,616,169
7,639
fix save_infos
null
closed
https://github.com/huggingface/datasets/pull/7639
2025-06-25T13:16:26
2025-06-25T13:19:33
2025-06-25T13:16:33
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,172,645,391
7,638
Add ignore_decode_errors option to Image feature for robust decoding #7612
This PR implements support for robust image decoding in the `Image` feature, as discussed in issue #7612. ## 🔧 What was added - A new boolean field: `ignore_decode_errors` (default: `False`) - If set to `True`, any exceptions during decoding will be caught, and `None` will be returned instead of raising an error ...
open
https://github.com/huggingface/datasets/pull/7638
2025-06-24T16:47:51
2025-07-04T07:07:30
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
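The flag's behavior can be sketched generically (here with `json.loads` standing in for image decoding; the wrapper is illustrative, not the PR's code): failures return `None` instead of aborting a long-running job.

```python
import json

def robust_decode(decode_fn, raw, ignore_decode_errors=False):
    # Mirrors the proposed `ignore_decode_errors` flag: swallow decoding
    # failures and return None instead of crashing the whole pipeline.
    try:
        return decode_fn(raw)
    except Exception:
        if ignore_decode_errors:
            return None
        raise

print(robust_decode(json.loads, "not json", ignore_decode_errors=True))  # None
```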
3,171,883,522
7,637
Introduce subset_name as an alias of config_name
### Feature request Add support for `subset_name` as an alias for `config_name` in the datasets library and related tools (such as loading scripts, documentation, and metadata). ### Motivation The Hugging Face Hub dataset viewer displays a column named **"Subset"**, which refers to what is currently technically call...
open
https://github.com/huggingface/datasets/issues/7637
2025-06-24T12:49:01
2025-07-01T16:08:33
null
{ "login": "albertvillanova", "id": 8515462, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
3,170,878,167
7,636
"open" in globals()["__builtins__"], an error occurs: "TypeError: argument of type 'module' is not iterable"
When I run the following code, an error occurs: "TypeError: argument of type 'module' is not iterable" ```python print("open" in globals()["__builtins__"]) ``` Traceback (most recent call last): File "./main.py", line 2, in <module> print("open" in globals()["__builtins__"]) ^^^^^^^^^^^^^^^^^^^^^^ TypeE...
open
https://github.com/huggingface/datasets/issues/7636
2025-06-24T08:09:39
2025-07-10T04:13:16
null
{ "login": "kuanyan9527", "id": 51187979, "type": "User" }
[]
false
[]
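The error is expected CPython behavior: `__builtins__` is the `builtins` module in `__main__` but a plain dict inside imported modules, so membership tests against it are unreliable. A portable check goes through the `builtins` module directly:

```python
import builtins

def has_builtin(name: str) -> bool:
    # `__builtins__` is an implementation detail (module or dict depending
    # on context); checking the `builtins` module works everywhere.
    return hasattr(builtins, name)

print(has_builtin("open"))  # True
print(has_builtin("definitely_not_a_builtin"))  # False
```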
3,170,486,408
7,635
Fix: Preserve float columns in JSON loader when values are integer-like (e.g. 0.0, 1.0)
This PR fixes a bug in the JSON loader where columns containing float values like `[0.0, 1.0, 2.0]` were being implicitly coerced to `int`, due to pandas or Arrow type inference. This caused issues downstream in statistics computation (e.g., dataset-viewer) where such columns were incorrectly labeled as `"int"` inst...
open
https://github.com/huggingface/datasets/pull/7635
2025-06-24T06:16:48
2025-06-24T06:16:48
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
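The intent of the #7635 fix can be illustrated with a small type-inference sketch (illustrative only; the real loader works on pandas/Arrow types, not Python lists):

```python
def infer_numeric_kind(values):
    # A column should only be labeled "int" when every value is an
    # actual int; integer-like floats such as 0.0 or 1.0 must keep
    # the column as "float" instead of being coerced.
    if all(isinstance(v, int) and not isinstance(v, bool) for v in values):
        return "int"
    if all(isinstance(v, (int, float)) and not isinstance(v, bool) for v in values):
        return "float"
    return "other"
```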
3,169,389,653
7,634
Replace Sequence by List
Sequence is just a utility that we need to keep for backward compatibility. And `[ ]` was used instead but doesn't allow passing the length of the list. This PR removes most mentions of Sequence and usage of `[ ]` and defines a proper List type instead. before: `Sequence(Value("int64"))` or `[Value("int64")]` no...
closed
https://github.com/huggingface/datasets/pull/7634
2025-06-23T20:35:48
2025-06-25T13:59:13
2025-06-25T13:59:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,168,399,637
7,633
Proposal: Small Tamil Discourse Coherence Dataset.
I’m a beginner from NIT Srinagar proposing a dataset of 50 Tamil text pairs for discourse coherence (coherent/incoherent labels) to support NLP research in low-resource languages. - Size: 50 samples - Format: CSV with columns (text1, text2, label) - Use case: Training NLP models for coherence I’ll use GitHub’s web edit...
open
https://github.com/huggingface/datasets/issues/7633
2025-06-23T14:24:40
2025-06-23T14:24:40
null
{ "login": "bikkiNitSrinagar", "id": 66418501, "type": "User" }
[]
false
[]
3,168,283,589
7,632
Graceful Error Handling for cast_column("image", Image(decode=True)) in Hugging Face Datasets
### Feature request Currently, when using dataset.cast_column("image", Image(decode=True)), the pipeline throws an error and halts if any image in the dataset is invalid or corrupted (e.g., truncated files, incorrect formats, unreachable URLs). This behavior disrupts large-scale processing where a few faulty samples a...
open
https://github.com/huggingface/datasets/issues/7632
2025-06-23T13:49:24
2025-07-08T06:52:53
null
{ "login": "ganiket19", "id": 37377515, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
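The behavior requested in #7632 amounts to wrapping decoding in a try/except that yields `None` for bad samples. A generic sketch (the helper names and the `decode_fn` callback are illustrative, not a datasets API):

```python
def safe_decode(decode_fn, value):
    """Return decode_fn(value), or None when decoding fails."""
    try:
        return decode_fn(value)
    except Exception:
        # e.g. truncated file, wrong format, unreachable URL
        return None


def decode_batch(decode_fn, values):
    return [safe_decode(decode_fn, v) for v in values]
```

Rows whose decoded value is `None` can then be dropped with a filter step instead of crashing the whole pipeline.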
3,165,127,657
7,631
Pass user-agent from DownloadConfig into fsspec storage_options
Fixes part of issue #6046 ### Problem The `user-agent` defined in `DownloadConfig` was not passed down to fsspec-based filesystems like `HfFileSystem`, which prevents proper identification/tracking of client requests. ### Solution Added support for injecting the `user-agent` into `storage_options["headers"]` wi...
open
https://github.com/huggingface/datasets/pull/7631
2025-06-21T14:22:25
2025-06-21T14:25:28
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
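Conceptually, the #7631 change merges the header into the options dict before fsspec sees it; a minimal sketch (the function name is illustrative, not the PR's actual code):

```python
def with_user_agent(storage_options, user_agent):
    # Copy so the caller's dict is not mutated, then merge the
    # user-agent into any existing headers without overwriting one
    # the user set explicitly.
    opts = dict(storage_options or {})
    headers = dict(opts.get("headers", {}))
    headers.setdefault("user-agent", user_agent)
    opts["headers"] = headers
    return opts
```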
3,164,650,900
7,630
[bug] resume from ckpt skips samples if .map is applied
### Describe the bug resume from ckpt skips samples if .map is applied Maybe related: https://github.com/huggingface/datasets/issues/7538 ### Steps to reproduce the bug ```python from datasets import Dataset from datasets.distributed import split_dataset_by_node # Create dataset with map transformation def create...
open
https://github.com/huggingface/datasets/issues/7630
2025-06-21T01:50:03
2025-06-29T07:51:32
null
{ "login": "felipemello1", "id": 23004953, "type": "User" }
[]
false
[]
3,161,169,782
7,629
Add test for `as_iterable_dataset()` method in DatasetBuilder
This PR adds a test for the new `as_iterable_dataset()` method introduced in PR #7628. The test: - Loads a builder using `load_dataset_builder("c4", "en")` - Runs `download_and_prepare()` - Streams examples using `builder.as_iterable_dataset(split="train[:100]")` - Verifies streamed examples contain the "text" f...
open
https://github.com/huggingface/datasets/pull/7629
2025-06-19T19:23:55
2025-06-19T19:23:55
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,161,156,461
7,628
Add `as_iterable_dataset()` method to DatasetBuilder for streaming from cached Arrow files
This PR implements `builder.as_iterable_dataset(split=...)` as discussed in #5481. It allows users to load an `IterableDataset` directly from cached Arrow files (using ArrowReader and ArrowExamplesIterable), without loading the full dataset into memory. This is useful for large-scale training scenarios where memo...
open
https://github.com/huggingface/datasets/pull/7628
2025-06-19T19:15:41
2025-06-19T19:15:41
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,160,544,390
7,627
Creating a HF Dataset from lakeFS with S3 storage takes too much time!
Hi, I’m new to HF datasets and I tried to create datasets based on data versioned in **lakeFS** _(**MinIO** S3 bucket as storage backend)_ Here I’m using ~30,000 PIL images from the MNIST data, however it is taking around 12 min to execute, which is a lot! From what I understand, it is loading the images into cache then buil...
closed
https://github.com/huggingface/datasets/issues/7627
2025-06-19T14:28:41
2025-06-23T12:39:10
2025-06-23T12:39:10
{ "login": "Thunderhead-exe", "id": 118734142, "type": "User" }
[]
false
[]
3,159,322,138
7,626
feat(map): reuse unchanged columns when input_columns specified to reduce disk usage (#6013)
## Summary This PR addresses [#6013](https://github.com/huggingface/datasets/issues/6013) by reusing unchanged columns from the original dataset in the `map()` method when `input_columns` is specified. ## What’s Implemented - Injected logic at the end of `Dataset.map()` to: - Identify untouched columns not ...
open
https://github.com/huggingface/datasets/pull/7626
2025-06-19T07:41:45
2025-07-18T17:36:35
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,159,016,001
7,625
feat: Add h5folder dataset loader for HDF5 support
### Related Issue Closes #3113 ### What does this PR do? This PR introduces a new dataset loader module called **`h5folder`** to support loading datasets stored in **HDF5 (.h5)** format. It allows users to do: ```python from datasets import load_dataset dataset = load_dataset("h5folder", data_dir="path/t...
open
https://github.com/huggingface/datasets/pull/7625
2025-06-19T05:39:10
2025-06-26T05:44:26
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,156,136,624
7,624
#Dataset Make "image" column appear first in dataset preview UI
Hi! #Dataset I’m currently uploading a dataset that includes an `"image"` column (PNG files), along with some metadata columns. The dataset is loaded from a .jsonl file. My goal is to have the "image" column appear as the first column in the dataset card preview UI on the :hugs: Hub. However, at the moment, the `"im...
closed
https://github.com/huggingface/datasets/issues/7624
2025-06-18T09:25:19
2025-06-20T07:46:43
2025-06-20T07:46:43
{ "login": "jcerveto", "id": 98875217, "type": "User" }
[]
false
[]
3,154,519,684
7,623
fix: raise error in FolderBasedBuilder when data_dir and data_files are missing
### Related Issues/PRs Fixes #6152 --- ### What changes are proposed in this pull request? This PR adds a dedicated validation check in the `_info()` method of the `FolderBasedBuilder` class to ensure that users provide either `data_dir` or `data_files` when loading folder-based datasets (such as `audiofold...
closed
https://github.com/huggingface/datasets/pull/7623
2025-06-17T19:16:34
2025-06-18T14:18:41
2025-06-18T14:18:41
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
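The validation described in #7623 boils down to a guard clause; a standalone sketch of the idea (not the actual `_info()` code):

```python
def check_data_source(data_dir=None, data_files=None):
    # Folder-based builders need at least one explicit data source;
    # silently falling back to the current directory is surprising.
    if data_dir is None and data_files is None:
        raise ValueError(
            "At least one of 'data_dir' or 'data_files' must be specified."
        )
```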
3,154,398,557
7,622
Guard against duplicate builder_kwargs/config_kwargs in load_dataset_…
…builder (#4910) ### What does this PR do? Fixes an edge case in `load_dataset_builder` by raising a `TypeError` if the same key exists in both `builder_kwargs` and `config_kwargs`. ### Implementation details - Added a guard clause in `load_dataset_builder` to detect duplicate keys between `builder_kwargs` an...
closed
https://github.com/huggingface/datasets/pull/7622
2025-06-17T18:28:35
2025-07-23T14:06:20
2025-07-23T14:06:20
{ "login": "Shohail-Ismail", "id": 149825575, "type": "User" }
[]
true
[]
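The guard clause from #7622 can be sketched as a small merge helper (the name `merge_builder_kwargs` is hypothetical):

```python
def merge_builder_kwargs(builder_kwargs, config_kwargs):
    # Reject ambiguous input up front instead of letting one dict
    # silently win during the merge.
    duplicates = set(builder_kwargs) & set(config_kwargs)
    if duplicates:
        raise TypeError(
            "Keys passed in both builder_kwargs and config_kwargs: "
            f"{sorted(duplicates)}"
        )
    return {**builder_kwargs, **config_kwargs}
```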
3,153,780,963
7,621
minor docs data aug
null
closed
https://github.com/huggingface/datasets/pull/7621
2025-06-17T14:46:57
2025-06-17T14:50:28
2025-06-17T14:47:11
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,153,565,183
7,620
Fixes in docs
before release 4.0 (I also did minor improvements to `features` to not show their `id=None` in their `__repr__()`)
closed
https://github.com/huggingface/datasets/pull/7620
2025-06-17T13:41:54
2025-06-17T13:58:26
2025-06-17T13:58:24
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,153,058,517
7,619
`from_list` fails while `from_generator` works for large datasets
### Describe the bug I am constructing a large time series dataset and observed that first constructing a list of entries and then using `Dataset.from_list` led to a crash as the number of items became large. However, this is not a problem when using `Dataset.from_generator`. ### Steps to reproduce the bug #### Snip...
open
https://github.com/huggingface/datasets/issues/7619
2025-06-17T10:58:55
2025-06-29T16:34:44
null
{ "login": "abdulfatir", "id": 4028948, "type": "User" }
[]
false
[]
3,148,912,897
7,618
fix: raise error when folder-based datasets are loaded without data_dir or data_files
### Related Issues/PRs <!-- Uncomment 'Resolve' if this PR can close the linked items. --> <!-- Resolve --> #6152 --- ### What changes are proposed in this pull request? This PR adds an early validation step for folder-based datasets (like `audiofolder`) to prevent silent fallback behavior. **Before t...
open
https://github.com/huggingface/datasets/pull/7618
2025-06-16T07:43:59
2025-06-16T12:13:26
null
{ "login": "ArjunJagdale", "id": 142811259, "type": "User" }
[]
true
[]
3,148,102,085
7,617
Unwanted column padding in nested lists of dicts
```python from datasets import Dataset dataset = Dataset.from_dict({ "messages": [ [ {"a": "...",}, {"b": "...",}, ], ] }) print(dataset[0]) ``` What I get: ``` {'messages': [{'a': '...', 'b': None}, {'a': None, 'b': '...'}]} ``` What I want: ``` {'messages': [{'a': '...
closed
https://github.com/huggingface/datasets/issues/7617
2025-06-15T22:06:17
2025-06-16T13:43:31
2025-06-16T13:43:31
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
false
[]
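The padding in #7617 comes from Arrow inferring a single struct type for the whole column: the union of all keys, with missing fields set to `None`. The effect can be reproduced in plain Python:

```python
def pad_structs(dicts):
    # Collect the union of keys in first-seen order, then fill
    # missing fields with None -- mirroring Arrow's struct inference.
    keys = []
    for d in dicts:
        for k in d:
            if k not in keys:
                keys.append(k)
    return [{k: d.get(k) for k in keys} for d in dicts]
```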
3,144,506,665
7,616
Torchcodec decoding
Closes #7607 ## New signatures ### Audio ```python Audio(sampling_rate: Optional[int] = None, mono: bool = True, decode: bool = True, stream_index: Optional[int] = None) Audio.encode_example(self, value: Union[str, bytes, bytearray, dict, "AudioDecoder"]) -> dict Audio.decode_example(self, value: dict, token_...
closed
https://github.com/huggingface/datasets/pull/7616
2025-06-13T19:06:07
2025-06-19T18:25:49
2025-06-19T18:25:49
{ "login": "TyTodd", "id": 49127578, "type": "User" }
[]
true
[]
3,143,443,498
7,615
remove unused code
null
closed
https://github.com/huggingface/datasets/pull/7615
2025-06-13T12:37:30
2025-06-13T12:39:59
2025-06-13T12:37:40
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,143,381,638
7,614
Lazy column
Same as https://github.com/huggingface/datasets/pull/7564 but for `Dataset`, cc @TopCoder2K FYI e.g. `ds[col]` now returns a lazy Column instead of a list. This way calling `ds[col][idx]` only loads the required data in memory (bonus: also supports subfields access with `ds[col][subcol][idx]`) the breaking c...
closed
https://github.com/huggingface/datasets/pull/7614
2025-06-13T12:12:57
2025-06-17T13:08:51
2025-06-17T13:08:49
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,142,819,991
7,613
fix parallel push_to_hub in dataset_dict
null
closed
https://github.com/huggingface/datasets/pull/7613
2025-06-13T09:02:24
2025-06-13T12:30:23
2025-06-13T12:30:22
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,141,905,049
7,612
Provide an option of robust dataset iterator with error handling
### Feature request Adding an option to skip corrupted data samples. Currently the datasets behavior is throwing errors if the data sample if corrupted and let user aware and handle the data corruption. When I tried to try-catch the error at user level, the iterator will raise StopIteration when I called next() again....
open
https://github.com/huggingface/datasets/issues/7612
2025-06-13T00:40:48
2025-06-24T16:52:30
null
{ "login": "wwwjn", "id": 40016222, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
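Until the option requested in #7612 lands, a user-level workaround is to put the try/except around the per-sample decode step rather than around `next()` (once a generator has raised, it is exhausted). A generic sketch:

```python
def robust_samples(raw_items, decode, on_error=None):
    # Decoding inside the loop body means a failure skips only that
    # sample; wrapping next() in try/except would end iteration
    # entirely, because a generator that raised is exhausted afterwards.
    for raw in raw_items:
        try:
            yield decode(raw)
        except Exception as exc:
            if on_error is not None:
                on_error(raw, exc)
            continue  # skip the corrupted sample
```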
3,141,383,940
7,611
Code example for dataset.add_column() does not reflect correct way to use function
https://github.com/huggingface/datasets/blame/38d4d0e11e22fdbc4acf373d2421d25abeb43439/src/datasets/arrow_dataset.py#L5925C10-L5925C10 The example seems to suggest that dataset.add_column() can add a column in place; however, this is wrong -- it cannot. It returns a new dataset with the column added to it.
closed
https://github.com/huggingface/datasets/issues/7611
2025-06-12T19:42:29
2025-07-17T13:14:18
2025-07-17T13:14:18
{ "login": "shaily99", "id": 31388649, "type": "User" }
[]
false
[]
3,141,281,560
7,610
i cant confirm email
### Describe the bug This is difficult: I can't confirm my email because I'm not getting any email! I can't post on the forum because I can't confirm my email! I can't contact the help desk because... it doesn't exist on the web page. paragraph 44 ### Steps to reproduce the bug rthjrtrt ### Expected behavior ewtgfwetgf ### Environment info sdgfswdegfwe
open
https://github.com/huggingface/datasets/issues/7610
2025-06-12T18:58:49
2025-06-27T14:36:47
null
{ "login": "lykamspam", "id": 187984415, "type": "User" }
[]
false
[]
3,140,373,128
7,609
Update `_dill.py` to use `co_linetable` for Python 3.10+ in place of `co_lnotab`
Not 100% about this one, but it seems to be recommended. ``` /fsx/qgallouedec/miniconda3/envs/trl/lib/python3.12/site-packages/datasets/utils/_dill.py:385: DeprecationWarning: co_lnotab is deprecated, use co_lines instead. ``` Tests pass locally. And the warning is gone with this change. https://peps.python.or...
closed
https://github.com/huggingface/datasets/pull/7609
2025-06-12T13:47:01
2025-06-16T12:14:10
2025-06-16T12:14:08
{ "login": "qgallouedec", "id": 45557362, "type": "User" }
[]
true
[]
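The version gate behind #7609 can be sketched like this (illustrative; the actual PR touches `datasets/utils/_dill.py`):

```python
import sys


def line_table_attr() -> str:
    # co_lnotab is deprecated since Python 3.10; code objects expose
    # co_linetable (and the co_lines() iterator) instead.
    return "co_linetable" if sys.version_info >= (3, 10) else "co_lnotab"


def line_table(func) -> bytes:
    # Both attributes hold the line-number table as raw bytes.
    return getattr(func.__code__, line_table_attr())
```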
3,137,564,259
7,608
Tests typing and fixes for push_to_hub
todo: - [x] fix TestPushToHub.test_push_dataset_dict_to_hub_iterable_num_proc
closed
https://github.com/huggingface/datasets/pull/7608
2025-06-11T17:13:52
2025-06-12T21:15:23
2025-06-12T21:15:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,135,722,560
7,607
Video and audio decoding with torchcodec
### Feature request Pytorch is migrating video processing to torchcodec and it's pretty cool. It would be nice to migrate both the audio and video features to use torchcodec instead of torchaudio/video. ### Motivation My use case is I'm working on a multimodal AV model, and what's nice about torchcodec is I can extr...
closed
https://github.com/huggingface/datasets/issues/7607
2025-06-11T07:02:30
2025-06-19T18:25:49
2025-06-19T18:25:49
{ "login": "TyTodd", "id": 49127578, "type": "User" }
[ { "name": "enhancement", "color": "a2eeef" } ]
false
[]
3,133,848,546
7,606
Add `num_proc=` to `.push_to_hub()` (Dataset and IterableDataset)
null
closed
https://github.com/huggingface/datasets/pull/7606
2025-06-10T14:35:10
2025-06-11T16:47:28
2025-06-11T16:47:25
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,131,636,882
7,605
Make `push_to_hub` atomic (#7600)
null
closed
https://github.com/huggingface/datasets/pull/7605
2025-06-09T22:29:38
2025-06-23T19:32:08
2025-06-23T19:32:08
{ "login": "sharvil", "id": 391004, "type": "User" }
[]
true
[]
3,130,837,169
7,604
Docs and more methods for IterableDataset: push_to_hub, to_parquet...
to_csv, to_json, to_sql, to_pandas, to_polars, to_dict, to_list
closed
https://github.com/huggingface/datasets/pull/7604
2025-06-09T16:44:40
2025-06-10T13:15:23
2025-06-10T13:15:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,130,394,563
7,603
No TF in win tests
null
closed
https://github.com/huggingface/datasets/pull/7603
2025-06-09T13:56:34
2025-06-09T15:33:31
2025-06-09T15:33:30
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]
3,128,758,924
7,602
Enhance error handling and input validation across multiple modules
This PR improves the robustness and user experience by: 1. **Audio Module**: - Added clear error messages when required fields ('path' or 'bytes') are missing in audio encoding 2. **DatasetDict**: - Enhanced key access error messages to show available splits when an invalid key is accessed 3. **NonMuta...
open
https://github.com/huggingface/datasets/pull/7602
2025-06-08T23:01:06
2025-06-08T23:01:06
null
{ "login": "mohiuddin-khan-shiam", "id": 147746955, "type": "User" }
[]
true
[]
3,127,296,182
7,600
`push_to_hub` is not concurrency safe (dataset schema corruption)
### Describe the bug Concurrent processes modifying and pushing a dataset can overwrite each others' dataset card, leaving the dataset unusable. Consider this scenario: - we have an Arrow dataset - there are `N` configs of the dataset - there are `N` independent processes operating on each of the individual configs (...
closed
https://github.com/huggingface/datasets/issues/7600
2025-06-07T17:28:56
2025-06-23T19:36:37
2025-06-23T19:36:37
{ "login": "sharvil", "id": 391004, "type": "User" }
[]
false
[]
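While #7605 makes the commit atomic server-side, client code hitting #7600 can additionally retry on conflicts with jittered exponential backoff. A generic sketch (`ConflictError` is a stand-in exception, not a real huggingface_hub class):

```python
import random
import time


class ConflictError(Exception):
    """Stand-in for a concurrent-modification error from the Hub."""


def push_with_retry(push_fn, max_retries=5, base_delay=1.0):
    # Re-run the whole read-modify-push cycle on conflict, so each
    # attempt starts from the latest dataset card instead of a stale one.
    for attempt in range(max_retries):
        try:
            return push_fn()
        except ConflictError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) * random.random())
```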
3,125,620,119
7,599
My already working dataset (when uploaded few months ago) now is ignoring metadata.jsonl
### Describe the bug Hi everyone, I uploaded my dataset https://huggingface.co/datasets/PRAIG/SMB a few months ago while I was waiting for a conference acceptance response. Without my modifying anything in the dataset repository, the Dataset viewer is now not rendering the metadata.jsonl annotations, neither is it being d...
closed
https://github.com/huggingface/datasets/issues/7599
2025-06-06T18:59:00
2025-06-16T15:18:00
2025-06-16T15:18:00
{ "login": "JuanCarlosMartinezSevilla", "id": 97530443, "type": "User" }
[]
false
[]
3,125,184,457
7,598
fix string_to_dict usage for windows
null
closed
https://github.com/huggingface/datasets/pull/7598
2025-06-06T15:54:29
2025-06-06T16:12:22
2025-06-06T16:12:21
{ "login": "lhoestq", "id": 42851186, "type": "User" }
[]
true
[]