| number | title | body | state | created_at | updated_at | closed_at | url | author | comments_count | labels |
|---|---|---|---|---|---|---|---|---|---|---|
7,905 | Unbounded network usage when opening Data Studio | ### Describe the bug
Opening the Data Studio tab on a dataset page triggers continuous and unbounded network traffic. This issue occurs across multiple browsers and continues even without user interaction.
### Steps to reproduce the bug
https://huggingface.co/datasets/slone/nllb-200-10M-sample/viewer
### Expected... | OPEN | 2025-12-16T10:45:02 | 2025-12-16T10:45:02 | null | https://github.com/huggingface/datasets/issues/7905 | alizaredornica-sys | 0 | [] |
7,904 | Request: Review pending neuroimaging PRs (#7886 BIDS loader, #7887 lazy loading) | ## Summary
I'm building production neuroimaging pipelines that depend on `datasets` and would benefit greatly from two pending PRs being reviewed/merged.
## Pending PRs
| PR | Description | Status | Open Since |
|----|-------------|--------|------------|
| [#7886](https://github.com/huggingface/datasets/pull/7886) |... | OPEN | 2025-12-14T20:34:31 | 2025-12-15T11:25:29 | null | https://github.com/huggingface/datasets/issues/7904 | The-Obstacle-Is-The-Way | 1 | [] |
7,902 | The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`. | ### Feature request
The child process retrieves the dataset directly from the main process instead of executing `memory_mapped_arrow_table_from_file`.
### Motivation
Because my local disk space is insufficient, I can only store a dataset on a remote Ceph server and process it using datasets.
I used the data-juicer[h... | OPEN | 2025-12-12T12:37:44 | 2025-12-15T11:48:16 | null | https://github.com/huggingface/datasets/issues/7902 | HQF2017 | 1 | [
"enhancement"
] |
7,901 | ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint | ### Describe the bug
ShuffledDataSourcesArrowExamplesIterable cannot properly resume from checkpoint
### Steps to reproduce the bug
1. The reproducible code is as follows:
```python
from datasets import Dataset, concatenate_datasets, interleave_datasets
ds = Dataset.from_dict({"a": range(12)}).to_iterable_dataset(num_sha... | OPEN | 2025-12-12T06:57:32 | 2025-12-16T19:34:46 | null | https://github.com/huggingface/datasets/issues/7901 | howitry | 3 | [] |
7,900 | `Permission denied` when sharing cache between users | ### Describe the bug
We want to use `datasets` and `transformers` on a shared machine. Right now, each user has a separate HF_HOME in their home directory. To reduce duplicates of the datasets, we want to share that cache. While experimenting, we are running into `Permission denied` errors.
It looks like this was sup... | OPEN | 2025-12-09T16:41:47 | 2025-12-16T15:39:06 | null | https://github.com/huggingface/datasets/issues/7900 | qthequartermasterman | 2 | [] |
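One common mitigation (a sketch under the assumption that all users share a POSIX group; the path is illustrative, and this is not an officially documented fix) is to make the cache group-writable with setgid directories so new files inherit the group:

```shell
HF_SHARED="$PWD/shared_hf_home"   # stand-in for the real shared location
mkdir -p "$HF_SHARED"
chmod -R g+rwX "$HF_SHARED"                      # group read/write; dirs traversable
find "$HF_SHARED" -type d -exec chmod g+s {} +   # new entries inherit the dir's group
umask 002                                        # files created now stay group-writable
export HF_HOME="$HF_SHARED"
```

With setgid set on the directories, files written by any user land in the shared group; the `umask` still has to be set per-user (e.g. in a login script) or newly created files stay group-read-only.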
7,894 | embed_table_storage crashes (SIGKILL) on sharded datasets with Sequence() nested types | ## Summary
`embed_table_storage` crashes with SIGKILL (exit code 137) when processing sharded datasets containing `Sequence()` nested types like `Sequence(Nifti())`. Likely affects `Sequence(Image())` and `Sequence(Audio())` as well.
The crash occurs at the C++ level with no Python traceback.
### Related Issues
- #... | OPEN | 2025-12-03T04:20:06 | 2025-12-06T13:10:34 | null | https://github.com/huggingface/datasets/issues/7894 | The-Obstacle-Is-The-Way | 3 | [] |
7,893 | push_to_hub OOM: _push_parquet_shards_to_hub accumulates all shard bytes in memory | ## Summary
Large dataset uploads crash or hang due to memory exhaustion. This appears to be the root cause of several long-standing issues.
### Related Issues
This is the root cause of:
- #5990 - Pushing a large dataset on the hub consistently hangs (46 comments, open since 2023)
- #7400 - 504 Gateway Timeout when u... | CLOSED | 2025-12-03T04:19:34 | 2025-12-05T22:45:59 | 2025-12-05T22:44:16 | https://github.com/huggingface/datasets/issues/7893 | The-Obstacle-Is-The-Way | 2 | [] |
7,883 | Data.to_csv() cannot be recognized by pylance | ### Describe the bug
Hi, everyone! I am a beginner with datasets.
I am testing reading multiple CSV files from a zip archive. The result of reading the dataset shows success, and it can ultimately be correctly saved to CSV.
Intermediate results:
```
Generating train split: 62973 examples [00:00, 175939.01 examples/... | CLOSED | 2025-11-26T16:16:56 | 2025-12-08T12:06:58 | 2025-12-08T12:06:58 | https://github.com/huggingface/datasets/issues/7883 | xi4ngxin | 0 | [] |
7,882 | Inconsistent loading of LFS-hosted files in epfml/FineWeb-HQ dataset | ### Describe the bug
Some files in the `epfml/FineWeb-HQ` dataset fail to load via the Hugging Face `datasets` library.
- xet-hosted files load fine
- LFS-hosted files sometimes fail
Example:
- Fails: https://huggingface.co/datasets/epfml/FineWeb-HQ/blob/main/data/CC-MAIN-2024-26/000_00003.parquet
- Works: ht... | OPEN | 2025-11-26T14:06:02 | 2025-12-15T18:20:50 | null | https://github.com/huggingface/datasets/issues/7882 | Oligou | 1 | [] |
7,880 | Spurious label column created when audiofolder/imagefolder directories match split names | ## Describe the bug
When using `audiofolder` or `imagefolder` with directories for **splits** (train/test) rather than class labels, a spurious `label` column is incorrectly created.
**Example:** https://huggingface.co/datasets/datasets-examples/doc-audio-4
```python
from datasets import load_dataset
ds = load_dataset("dat... | OPEN | 2025-11-26T13:36:24 | 2025-11-26T13:36:24 | null | https://github.com/huggingface/datasets/issues/7880 | neha222222 | 0 | [] |
7,879 | python core dump when downloading dataset | ### Describe the bug
When downloading a dataset in streamed mode and exiting the program before the download completes, the python program core dumps when exiting:
```
terminate called without an active exception
Aborted (core dumped)
```
Tested with python 3.12.3, python 3.9.21
### Steps to reproduce the bug
Cr... | OPEN | 2025-11-24T06:22:53 | 2025-11-25T20:45:55 | null | https://github.com/huggingface/datasets/issues/7879 | hansewetz | 10 | [] |
7,877 | work around `tempfile` silently ignoring `TMPDIR` if the dir doesn't exist | This should help a lot of users running into `No space left on device` while using `datasets`. Normally the issue is is that `/tmp` is too small and the user needs to use another path, which they would normally set as `export TMPDIR=/some/big/storage`
However, the `tempfile` facility that `datasets` and `pyarrow` use ... | CLOSED | 2025-11-21T19:51:48 | 2025-12-16T14:20:48 | 2025-12-16T14:20:48 | https://github.com/huggingface/datasets/issues/7877 | stas00 | 1 | [] |
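The workaround the title suggests can also be sketched in user code (names are illustrative; `datasets` itself may implement it differently): create the directory before `tempfile` settles on its default, and reset `tempfile`'s cached choice so it re-evaluates the candidates.

```python
import os
import tempfile

def ensure_tmpdir() -> str:
    """Make sure the directory TMPDIR points at exists *before* tempfile
    picks its default, since tempfile silently skips unusable candidates
    and falls back to /tmp."""
    tmpdir = os.environ.get("TMPDIR")
    if tmpdir:
        os.makedirs(tmpdir, exist_ok=True)
        tempfile.tempdir = None  # drop tempfile's cached choice so it re-evaluates
    return tempfile.gettempdir()
```

Calling `ensure_tmpdir()` early (before any library touches `tempfile`) makes `export TMPDIR=/some/big/storage` behave the way users expect.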
7,872 | IterableDataset does not use features information in to_pandas | ### Describe the bug
`IterableDataset` created from generator with explicit `features=` parameter seems to ignore provided features description for certain operations, e.g. `.to_pandas(...)` when data coming from the generator has missing values.
### Steps to reproduce the bug
```python
import datasets
from datasets... | OPEN | 2025-11-19T17:12:59 | 2025-11-19T18:52:14 | null | https://github.com/huggingface/datasets/issues/7872 | bonext | 2 | [] |
7,871 | Reqwest Error: HTTP status client error (429 Too Many Requests) | ### Describe the bug
full error message:
```
Traceback (most recent call last):
File "/home/yanan/miniconda3/bin/hf", line 7, in <module>
sys.exit(main())
~~~~^^
File "/home/yanan/miniconda3/lib/python3.13/site-packages/huggingface_hub/cli/hf.py", line 56, in main
app()
~~~^^
File "/ho... | CLOSED | 2025-11-19T16:52:24 | 2025-11-30T13:38:32 | 2025-11-30T13:38:32 | https://github.com/huggingface/datasets/issues/7871 | yanan1116 | 2 | [] |
7,870 | Visualization for Medical Imaging Datasets | This is a followup to: https://github.com/huggingface/datasets/pull/7815.
I checked the possibilities to visualize the nifti (and potentially dicom), and here's what I found:
- https://github.com/aces/brainbrowser, AGPL3 license, last commit 3 months ago, latest (github) release from 2017. It's available on jsdelivr... | CLOSED | 2025-11-19T11:05:39 | 2025-11-21T12:31:19 | 2025-11-21T12:31:19 | https://github.com/huggingface/datasets/issues/7870 | CloseChoice | 1 | [] |
7,869 | Why does dataset merge fail when tools have different parameters? | Hi, I have a question about SFT (Supervised Fine-tuning) for an agent model.
Suppose I want to fine-tune an agent model that may receive two different tools: tool1 and tool2. These tools have different parameters and types in their schema definitions.
When I try to merge datasets containing different tool definitions... | OPEN | 2025-11-18T08:33:04 | 2025-11-30T03:52:07 | null | https://github.com/huggingface/datasets/issues/7869 | hitszxs | 1 | [] |
7,868 | Data duplication with `split_dataset_by_node` and `interleaved_dataset` | ### Describe the bug
Data duplication across different ranks when processing an IterableDataset with `split_dataset_by_node` first and then `interleave_datasets`
### Steps to reproduce the bug
I have provided a minimal script:
```python
import os
from datasets import interleave_datasets, load_dataset
from datasets.distribu... | OPEN | 2025-11-17T09:15:24 | 2025-12-15T11:52:32 | null | https://github.com/huggingface/datasets/issues/7868 | ValMystletainn | 3 | [] |
7,867 | NonMatchingSplitsSizesError when loading partial dataset files | ### Describe the bug
When loading only a subset of dataset files while the dataset's README.md contains split metadata, the system throws a NonMatchingSplitsSizesError . This prevents users from loading partial datasets for quick validation in cases of poor network conditions or very large datasets.
### Steps to repr... | OPEN | 2025-11-13T12:03:23 | 2025-11-16T15:39:23 | null | https://github.com/huggingface/datasets/issues/7867 | QingGo | 2 | [] |
7,864 | add_column and add_item erroneously(?) require new_fingerprint parameter | ### Describe the bug
Contradicting their documentation (which doesn't mention the parameter at all), both Dataset.add_column and Dataset.add_item require a new_fingerprint string. This parameter is passed directly to the dataset constructor, which has the fingerprint parameter listed as optional; is there any reason i... | OPEN | 2025-11-13T02:56:49 | 2025-12-07T14:41:40 | null | https://github.com/huggingface/datasets/issues/7864 | echthesia | 2 | [] |
7,863 | Support hosting lance / vortex / iceberg / zarr datasets on huggingface hub | ### Feature request
Huggingface datasets has great support for large tabular datasets in parquet with large partitions. I would love to see two things in the future:
- equivalent support for `lance`, `vortex`, `iceberg`, `zarr` (in that order) in a way that I can stream them using the datasets library
- more fine-gr... | OPEN | 2025-11-13T00:51:07 | 2025-11-26T14:10:29 | null | https://github.com/huggingface/datasets/issues/7863 | pavanramkumar | 13 | [
"enhancement"
] |
7,861 | Performance Issue: save_to_disk() 200-1200% slower due to unconditional flatten_indices() | ## 🐛 Bug Description
The `save_to_disk()` method unconditionally calls `flatten_indices()` when `_indices` is not None, causing severe performance degradation for datasets processed with filtering, shuffling, or multiprocessed mapping operations.
**Root cause**: This line rebuilds the entire dataset unnecessarily:
`... | OPEN | 2025-11-11T11:05:38 | 2025-11-11T11:05:38 | null | https://github.com/huggingface/datasets/issues/7861 | KCKawalkar | 0 | [] |
7,856 | Missing transcript column when loading a local dataset with "audiofolder" | ### Describe the bug
My local dataset is not properly loaded when using `load_dataset("audiofolder", data_dir="my_dataset")` with a `jsonl` metadata file.
Only the `audio` column is read while the `transcript` column is not.
The last tested `datasets` version where the behavior was still correct is 2.18.0.
### Steps... | CLOSED | 2025-11-08T16:27:58 | 2025-11-09T12:13:38 | 2025-11-09T12:13:38 | https://github.com/huggingface/datasets/issues/7856 | gweltou | 2 | [] |
7,852 | Problems with NifTI | ### Describe the bug
There are currently 2 problems with the new NifTI feature:
1. dealing with zipped files, this is mentioned and explained [here](https://github.com/huggingface/datasets/pull/7815#issuecomment-3496199503)
2. when uploading via the `niftifolder` feature, the resulting parquet only contains relative p... | CLOSED | 2025-11-06T11:46:33 | 2025-11-06T16:20:38 | 2025-11-06T16:20:38 | https://github.com/huggingface/datasets/issues/7852 | CloseChoice | 2 | [] |
7,842 | Transform with columns parameter triggers on non-specified column access | ### Describe the bug
Iterating over a [`Column`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arrow_dataset.py#L633-L692) iterates through the parent [`Dataset`](https://github.com/huggingface/datasets/blob/8b1bd4ec1cc9e9ce022f749abb6485ef984ae7c0/src/datasets/arro... | CLOSED | 2025-11-03T13:55:27 | 2025-11-03T14:34:13 | 2025-11-03T14:34:13 | https://github.com/huggingface/datasets/issues/7842 | mr-brobot | 0 | [] |
7,841 | DOC: `mode` parameter on pdf and video features unused | Following up on https://github.com/huggingface/datasets/pull/7840 I asked claude code to check for undocumented parameters for other features and it found:
- mode parameter on video is documented but unused: https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py#L48-L49
- the same goes for the... | CLOSED | 2025-11-02T12:37:47 | 2025-11-05T14:04:04 | 2025-11-05T14:04:04 | https://github.com/huggingface/datasets/issues/7841 | CloseChoice | 1 | [] |
7,839 | datasets doesn't work with python 3.14 | ### Describe the bug
Seems that `datasets` doesn't work with python==3.14. The root cause seems to be a `dill` API that was changed.
```
TypeError: Pickler._batch_setitems() takes 2 positional arguments but 3 were given
```
### Steps to reproduce the bug
(on a new folder)
uv init
uv python pin 3.14
uv... | CLOSED | 2025-11-02T09:09:06 | 2025-11-04T14:02:25 | 2025-11-04T14:02:25 | https://github.com/huggingface/datasets/issues/7839 | zachmoshe | 4 | [] |
7,837 | mono parameter to the Audio feature is missing | According to the docs, there is a "mono" parameter to the Audio feature, which turns any stereo into mono. In practice the signal is not touched and the mono parameter, even though documented, does not exist.
https://github.com/huggingface/datasets/blob/41c05299348a499807432ab476e1cdc4143c8772/src/datasets/features/a... | CLOSED | 2025-10-31T15:41:39 | 2025-11-03T15:59:18 | 2025-11-03T14:24:12 | https://github.com/huggingface/datasets/issues/7837 | ernestum | 2 | [] |
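Until the flag is restored, one stopgap (a sketch, not part of the library) is to downmix in user code, e.g. inside a `.map()` call, by averaging the channels:

```python
def to_mono(channels):
    """Downmix by averaging channels.

    channels: list of per-channel sample sequences, all the same length
    (e.g. [[left...], [right...]] for stereo). A mono input passes through.
    """
    n = len(channels)
    return [sum(ch[i] for ch in channels) / n for i in range(len(channels[0]))]
```

The same averaging can be done with `numpy.mean(array, axis=0)` on a decoded `(channels, samples)` array.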
7,834 | Audio.cast_column() or Audio.decode_example() causes Colab kernel crash (std::bad_alloc) | ### Describe the bug
When using the huggingface datasets.Audio feature to decode a local or remote (public HF dataset) audio file inside Google Colab, the notebook kernel crashes with std::bad_alloc (C++ memory allocation failure).
The crash happens even with a minimal code example and valid .wav file that can be read... | OPEN | 2025-10-27T22:02:00 | 2025-11-15T16:28:04 | null | https://github.com/huggingface/datasets/issues/7834 | rachidio | 8 | [] |
7,832 | [DOCS][minor] TIPS paragraph not compiled in docs/stream | In the client documentation, the markdown 'TIP' paragraph for paragraph in docs/stream#shuffle is not well executed — not as the other in the same page / while markdown is correctly considering it.
Documentation:
https://huggingface.co/docs/datasets/v4.3.0/en/stream#shuffle:~:text=%5B!TIP%5D%5BIterableDataset.shuffle(... | CLOSED | 2025-10-27T10:03:22 | 2025-10-27T10:10:54 | 2025-10-27T10:10:54 | https://github.com/huggingface/datasets/issues/7832 | art-test-stack | 0 | [] |
7,829 | Memory leak / Large memory usage with num_workers = 0 and numerous dataset within DatasetDict | ### Describe the bug
Hi team, first off, I love the datasets library! 🥰
I'm encountering a potential memory leak / increasing memory usage when training a model on a very large DatasetDict.
Setup: I have a DatasetDict containing 362 distinct datasets, which sum up to ~2.8 billion rows.
Training Task: I'm performin... | OPEN | 2025-10-24T09:51:38 | 2025-11-06T13:31:26 | null | https://github.com/huggingface/datasets/issues/7829 | raphaelsty | 4 | [] |
7,821 | Building a dataset with large variable size arrays results in error ArrowInvalid: Value X too large to fit in C integer type | ### Describe the bug
I used `map` to store raw audio waveforms of variable lengths in a column of a dataset; the `map` call fails with `ArrowInvalid: Value X too large to fit in C integer type`.
```
Traceback (most recent call last):
Traceback (most recent call last):
File "...lib/python3.12/site-packages/multiprocess/p... | OPEN | 2025-10-16T08:45:17 | 2025-10-20T13:42:05 | null | https://github.com/huggingface/datasets/issues/7821 | kkoutini | 1 | [] |
7,819 | Cannot download opus dataset | When I tried to download opus_books using:
from datasets import load_dataset
dataset = load_dataset("Helsinki-NLP/opus_books")
I got the following errors:
FileNotFoundError: Couldn't find any data file at /workspace/Helsinki-NLP/opus_books. Couldn't find 'Helsinki-NLP/opus_books' on the Hugging Face Hub either: Local... | OPEN | 2025-10-15T09:06:19 | 2025-10-20T13:45:16 | null | https://github.com/huggingface/datasets/issues/7819 | liamsun2019 | 1 | [] |
7,818 | train_test_split and stratify breaks with Numpy 2.0 | ### Describe the bug
As stated in the title, since NumPy changed the `copy` semantics in version >= 2.0, the `stratify_by_column` parameter breaks.
e.g. `all_dataset.train_test_split(test_size=0.2,stratify_by_column="label")` returns a Numpy error.
It works if you downgrade Numpy to a version lower than 2.0.
### Steps to reproduce the bug
... | CLOSED | 2025-10-15T00:01:19 | 2025-10-28T16:10:44 | 2025-10-28T16:10:44 | https://github.com/huggingface/datasets/issues/7818 | davebulaval | 3 | [] |
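The NumPy 2.0 change in question is the `copy` keyword: `np.array(x, copy=False)` used to mean "copy only if necessary" and now raises `ValueError` when a copy is unavoidable. A small sketch of the new spelling (this illustrates the general NumPy migration, not `datasets` internals):

```python
import numpy as np

a = np.arange(4)

# NumPy >= 2.0: np.array(x, copy=False) raises ValueError if a copy is
# unavoidable; the old "copy only if needed" behavior is np.asarray(x).
same = np.asarray(a)  # here no copy is needed, so this returns `a` itself
```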
7,816 | disable_progress_bar() not working as expected | ### Describe the bug
Hi,
I'm trying to load a dataset on Kaggle TPU image. There is some known compat issue with progress bar on Kaggle, so I'm trying to disable the progress bar globally. This does not work as you can see in [here](https://www.kaggle.com/code/windmaple/hf-datasets-issue).
In contrast, disabling pro... | CLOSED | 2025-10-14T03:25:39 | 2025-10-14T23:49:26 | 2025-10-14T23:49:26 | https://github.com/huggingface/datasets/issues/7816 | windmaple | 2 | [] |
7,813 | Caching does not work when using python3.14 | ### Describe the bug
Traceback (most recent call last):
File "/workspace/ctn.py", line 8, in <module>
ds = load_dataset(f"naver-clova-ix/synthdog-{lang}")  # or "synthdog-zh" for Chinese
File "/workspace/.venv/lib/python3.14/site-packages/datasets/load.py", line 1397, in load_dataset
builder_instance =... | CLOSED | 2025-10-10T15:36:46 | 2025-10-27T17:08:26 | 2025-10-27T17:08:26 | https://github.com/huggingface/datasets/issues/7813 | intexcor | 2 | [] |
7,811 | SIGSEGV when Python exits due to near null deref | ### Describe the bug
When I run the following python script using datasets I get a segfault.
```python
from datasets import load_dataset
from tqdm import tqdm
progress_bar = tqdm(total=(1000), unit='cols', desc='cols ')
progress_bar.update(1)
```
```
% lldb -- python3 crashmin.py
(lldb) target create "python3"
Cur... | OPEN | 2025-10-09T22:00:11 | 2025-10-10T22:09:24 | null | https://github.com/huggingface/datasets/issues/7811 | iankronquist | 4 | [] |
7,804 | Support scientific data formats | List of formats and libraries we can use to load the data in `datasets`:
- [ ] DICOMs: pydicom
- [x] NIfTIs: nibabel
- [ ] WFDB: wfdb
cc @zaRizk7 for viz
Feel free to comment / suggest other formats and libs you'd like to see or to share your interest in one of the mentioned format | OPEN | 2025-10-09T10:18:24 | 2025-11-26T16:09:43 | null | https://github.com/huggingface/datasets/issues/7804 | lhoestq | 18 | [] |
7,802 | [Docs] Missing documentation for `Dataset.from_dict` | Documentation link: https://huggingface.co/docs/datasets/en/package_reference/main_classes
Link to method (docstring present): https://github.com/huggingface/datasets/blob/6f2502c5a026caa89839713f6f7c8b958e5e83eb/src/datasets/arrow_dataset.py#L1029
The docstring is present for the function, but seems missing from the... | OPEN | 2025-10-09T02:54:41 | 2025-10-19T16:09:33 | null | https://github.com/huggingface/datasets/issues/7802 | aaronshenhao | 2 | [] |
7,798 | Audio dataset is not decoding on 4.1.1 | ### Describe the bug
The audio column remains as non-decoded objects even when accessed.
```python
dataset = load_dataset("MrDragonFox/Elise", split = "train")
dataset[0] # see that it doesn't show 'array' etc...
```
Works fine with `datasets==3.6.0`
Followed the docs in
- https://huggingface.co/docs/dataset... | OPEN | 2025-10-05T06:37:50 | 2025-10-06T14:07:55 | null | https://github.com/huggingface/datasets/issues/7798 | thewh1teagle | 3 | [] |
7,793 | Cannot load dataset, fails with nested data conversions not implemented for chunked array outputs | ### Describe the bug
Hi! When I load this dataset, it fails with a pyarrow error. I'm using datasets 4.1.1, though I also see this with datasets 4.1.2
To reproduce:
```python
import datasets
ds = datasets.load_dataset(path="metr-evals/malt-public", name="irrelevant_detail")
```
Error:
```
Traceback (most recent call las... | OPEN | 2025-09-27T01:03:12 | 2025-09-27T21:35:31 | null | https://github.com/huggingface/datasets/issues/7793 | neevparikh | 1 | [] |
7,792 | Concatenate IterableDataset instances and distribute underlying shards in a RoundRobin manner | ### Feature request
I would like to be able to concatenate multiple `IterableDataset` with possibly different features. I would like to then be able to stream the results in parallel (both using DDP and multiple workers in the pytorch DataLoader). I want the merge of datasets to be well balanced between the different ... | CLOSED | 2025-09-26T10:05:19 | 2025-10-15T18:05:23 | 2025-10-15T18:05:23 | https://github.com/huggingface/datasets/issues/7792 | LTMeyer | 17 | [
"enhancement"
] |
7,788 | `Dataset.to_sql` doesn't utilize `num_proc` | The underlying `SqlDatasetWriter` has `num_proc` as an available argument [here](https://github.com/huggingface/datasets/blob/5dc1a179783dff868b0547c8486268cfaea1ea1f/src/datasets/io/sql.py#L63) , but `Dataset.to_sql()` does not accept it, therefore it is always using one process for the SQL conversion. | OPEN | 2025-09-24T20:34:47 | 2025-09-24T20:35:01 | null | https://github.com/huggingface/datasets/issues/7788 | tcsmaster | 0 | [] |
7,780 | BIGPATENT dataset inaccessible (deprecated script loader) | dataset: https://huggingface.co/datasets/NortheasternUniversity/big_patent
When I try to load it with the datasets library, it fails with:
RuntimeError: Dataset scripts are no longer supported, but found big_patent.py
Could you please publish a Parquet/Arrow export of BIGPATENT on the Hugging Face so that it can be... | CLOSED | 2025-09-18T08:25:34 | 2025-09-25T14:36:13 | 2025-09-25T14:36:13 | https://github.com/huggingface/datasets/issues/7780 | ishmaifan | 2 | [] |
7,777 | push_to_hub not overwriting but stuck in a loop when there are existing commits | ### Describe the bug
`get_deletions_and_dataset_card` gets stuck in a loop on an "a commit has happened since" error (HTTP 412) when pushing to the Hub with version 4.1.0. The error does not exist in 4.0.0.
### Steps to reproduce the bug
Write code that calls `push_to_hub` and run it twice, each time with different content in the `datasets.Dataset`.
The... | CLOSED | 2025-09-17T03:15:35 | 2025-09-17T19:31:14 | 2025-09-17T19:31:14 | https://github.com/huggingface/datasets/issues/7777 | Darejkal | 4 | [] |
7,772 | Error processing scalar columns using tensorflow. | `datasets==4.0.0`
```python
columns_to_return = ['input_ids','attention_mask', 'start_positions', 'end_positions']
train_ds.set_format(type='tf', columns=columns_to_return)
```
`train_ds`:
```
train_ds type: <class 'datasets.arrow_dataset.Dataset'>, shape: (1000, 9)
columns: ['question', 'sentences', 'answer', 'str_idx', 'en... | OPEN | 2025-09-15T10:36:31 | 2025-09-27T08:22:44 | null | https://github.com/huggingface/datasets/issues/7772 | khteh | 2 | [] |
7,767 | Custom `dl_manager` in `load_dataset` | ### Feature request
https://github.com/huggingface/datasets/blob/4.0.0/src/datasets/load.py#L1411-L1418
```
def load_dataset(
...
dl_manager: Optional[DownloadManager] = None, # add this new argument
**config_kwargs,
) -> Union[DatasetDict, Dataset, IterableDatasetDict, IterableDataset]:
...
# ... | OPEN | 2025-09-12T19:06:23 | 2025-09-12T19:07:52 | null | https://github.com/huggingface/datasets/issues/7767 | ain-soph | 0 | [
"enhancement"
] |
7,766 | cast columns to Image/Audio/Video with `storage_options` | ### Feature request
Allow `storage_options` to be passed in
1. `cast` related operations (e.g., `cast_columns, cast`)
2. `info` related reading (e.g., `from_dict, from_pandas, from_polars`) together with `info.features`
```python3
import datasets
image_path = "s3://bucket/sample.png"
dataset = datasets.Dataset.from_d... | OPEN | 2025-09-12T18:51:01 | 2025-09-27T08:14:47 | null | https://github.com/huggingface/datasets/issues/7766 | ain-soph | 5 | [
"enhancement"
] |
7,765 | polars dataset cannot cast column to Image/Audio/Video | ### Describe the bug
A dataset created with `from_polars` cannot cast a column to Image/Audio/Video, while the same cast works with `from_pandas` and `from_dict`
### Steps to reproduce the bug
```python3
import datasets
import pandas as pd
import polars as pl
image_path = "./sample.png"
# polars
df = pl.DataFrame({"image_path": [image_path]})
... | CLOSED | 2025-09-12T18:32:49 | 2025-10-13T14:39:48 | 2025-10-13T14:39:48 | https://github.com/huggingface/datasets/issues/7765 | ain-soph | 2 | [] |
7,760 | Hugging Face Hub Dataset Upload CAS Error | ### Describe the bug
Experiencing persistent 401 Unauthorized errors when attempting to upload datasets to Hugging Face Hub using the `datasets` library. The error occurs specifically with the CAS (Content Addressable Storage) service during the upload process. Tried using HF_HUB_DISABLE_XET=1. It seems to work for sm... | OPEN | 2025-09-10T10:01:19 | 2025-09-16T20:01:36 | null | https://github.com/huggingface/datasets/issues/7760 | n-bkoe | 4 | [] |
7,759 | Comment/feature request: Huggingface 502s from GHA | This is no longer a pressing issue, but for completeness I am reporting that in August 26th, GET requests to `https://datasets-server.huggingface.co/info\?dataset\=livebench/math` were returning 502s when invoked from [github actions](https://github.com/UKGovernmentBEIS/inspect_evals/actions/runs/17241892475/job/489211... | OPEN | 2025-09-09T11:59:20 | 2025-09-09T13:02:28 | null | https://github.com/huggingface/datasets/issues/7759 | Scott-Simmons | 0 | [] |
7,758 | Option for Anonymous Dataset link | ### Feature request
Allow for anonymized viewing of datasets. For instance, something similar to [Anonymous GitHub](https://anonymous.4open.science/).
### Motivation
We generally publish our data through Hugging Face. This has worked out very well as it's both our repository and archive (thanks to the DOI feature!).... | OPEN | 2025-09-08T20:20:10 | 2025-09-08T20:20:10 | null | https://github.com/huggingface/datasets/issues/7758 | egrace479 | 0 | [
"enhancement"
] |
7,757 | Add support for `.conll` file format in datasets | ### Feature request
I’d like to request native support in the Hugging Face datasets library for reading .conll files (CoNLL format). This format is widely used in NLP tasks, especially for Named Entity Recognition (NER), POS tagging, and other token classification problems.
Right now `.conll` datasets need to be manu... | OPEN | 2025-09-06T07:25:39 | 2025-09-10T14:22:48 | null | https://github.com/huggingface/datasets/issues/7757 | namesarnav | 1 | [
"enhancement"
] |
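Until native support lands, here is a minimal sketch of the parsing such a loader might wrap (the column layout is an assumption: first column is the token, last column the tag, blank lines separate sentences); its output could feed `Dataset.from_generator`:

```python
def read_conll(lines):
    """Yield one dict per sentence from CoNLL-style lines.

    Assumed layout: blank line = sentence boundary; each non-blank line is
    whitespace-separated with the token first and the tag last.
    """
    tokens, tags = [], []
    for line in lines:
        line = line.strip()
        if not line:
            if tokens:
                yield {"tokens": tokens, "ner_tags": tags}
                tokens, tags = [], []
            continue
        parts = line.split()
        tokens.append(parts[0])
        tags.append(parts[-1])
    if tokens:  # flush the last sentence if the file lacks a trailing blank line
        yield {"tokens": tokens, "ner_tags": tags}
```

Real CoNLL dialects vary (comment lines, document markers, multiple tag columns), so a built-in loader would need configuration for those; this only shows the core shape.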
7,756 | datasets.map(f, num_proc=N) hangs with N>1 when run on import | ### Describe the bug
If you `import` a module that runs `datasets.map(f, num_proc=N)` at the top-level, Python hangs.
### Steps to reproduce the bug
1. Create a file that runs datasets.map at the top-level:
```bash
cat <<EOF > import_me.py
import datasets
the_dataset = datasets.load_dataset("openai/openai_humanev... | OPEN | 2025-09-05T10:32:01 | 2025-09-05T10:32:01 | null | https://github.com/huggingface/datasets/issues/7756 | arjunguha | 0 | [] |
7,753 | datasets massively slows data reads, even in memory | ### Describe the bug
Loading image data in a huggingface dataset results in very slow read speeds, approximately 1000 times longer than reading the same data from a pytorch dataset. This applies even when the dataset is loaded into RAM using a `keep_in_memory=True` flag.
The following script reproduces the result wit... | OPEN | 2025-09-04T01:45:24 | 2025-09-18T22:08:51 | null | https://github.com/huggingface/datasets/issues/7753 | lrast | 2 | [] |
7,751 | Dill version update | ### Describe the bug
Why is `datasets` not updating its `dill` dependency?
I just want to know: if I update the `dill` version, what will be the repercussions?
For now, in multiple places I have to update the library because other packages require dill 0.4.0, so why not `datasets`?
I am adding a PR too.
### Steps to reproduce the bug
.
###... | OPEN | 2025-08-27T07:38:30 | 2025-09-10T14:24:02 | null | https://github.com/huggingface/datasets/issues/7751 | Navanit-git | 2 | [] |
7,746 | Fix: Canonical 'multi_news' dataset is broken and should be updated to a Parquet version | Hi,
The canonical `multi_news` dataset is currently broken and fails to load. This is because it points to the [alexfabbri/multi_news](https://huggingface.co/datasets/alexfabbri/multi_news) repository, which contains a legacy loading script (`multi_news.py`) that requires the now-removed `trust_remote_code` parameter....
7,745 | Audio mono argument no longer supported, despite class documentation | ### Describe the bug
Either update the documentation, or re-introduce the flag (and corresponding logic to convert the audio to mono)
### Steps to reproduce the bug
Audio(sampling_rate=16000, mono=True) raises the error
TypeError: Audio.__init__() got an unexpected keyword argument 'mono'
However, in the class doc... | OPEN | 2025-08-22T12:15:41 | 2025-08-24T18:22:41 | null | https://github.com/huggingface/datasets/issues/7745 | jheitz | 1 | [] |
7,744 | dtype: ClassLabel is not parsed correctly in `features.py` | `dtype: ClassLabel` in the README.md yaml metadata is parsed incorrectly and causes the data viewer to fail.
This yaml in my metadata ([source](https://huggingface.co/datasets/BrentLab/yeast_genome_resources/blob/main/README.md)), though I changed `ClassLabel` to `string` to use a different dtype in order to avoid the ... | CLOSED | 2025-08-21T23:28:50 | 2025-09-10T15:23:41 | 2025-09-10T15:23:41 | https://github.com/huggingface/datasets/issues/7744 | cmatKhan | 3 | [] |
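For comparison, the spelling the Hub's `dataset_info` metadata expects is lowercase `class_label` with explicit names; a sketch based on the usual dataset-card layout (the column name and labels are illustrative):

```yaml
dataset_info:
  features:
    - name: label
      dtype:
        class_label:
          names:
            '0': negative
            '1': positive
```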
7,742 | module 'pyarrow' has no attribute 'PyExtensionType' | ### Describe the bug
When importing certain libraries, users will encounter the following error which can be traced back to the datasets library.
module 'pyarrow' has no attribute 'PyExtensionType'.
Example issue: https://github.com/explodinggradients/ragas/issues/2170
The issue occurs due to the following. I will ... | OPEN | 2025-08-20T06:14:33 | 2025-09-09T02:51:46 | null | https://github.com/huggingface/datasets/issues/7742 | mnedelko | 2 | [] |
7,741 | Preserve tree structure when loading HDF5 | ### Feature request
https://github.com/huggingface/datasets/pull/7740#discussion_r2285605374
### Motivation
`datasets` has the `Features` class for representing nested features. HDF5 files have groups of datasets which are nested, though in #7690 the keys are flattened. We should preserve that structure for the user... | CLOSED | 2025-08-19T15:42:05 | 2025-08-26T15:28:06 | 2025-08-26T15:28:06 | https://github.com/huggingface/datasets/issues/7741 | klamike | 0 | [
"enhancement"
] |
7,739 | Replacement of "Sequence" feature with "List" breaks backward compatibility | PR #7634 replaced the Sequence feature with List in 4.0.0, so datasets saved with version 4.0.0 with that feature cannot be loaded by earlier versions. There is no clear option in 4.0.0 to use the legacy feature type to preserve backward compatibility.
Why is this a problem? I have a complex preprocessing and training... | OPEN | 2025-08-18T17:28:38 | 2025-09-10T14:17:50 | null | https://github.com/huggingface/datasets/issues/7739 | evmaki | 1 | [] |
7,738 | Allow saving multi-dimensional ndarray with dynamic shapes | ### Feature request
I propose adding a dedicated feature to the datasets library that allows for the efficient storage and retrieval of multi-dimensional ndarray with dynamic shapes. Similar to how Image columns handle variable-sized images, this feature would provide a structured way to store array data where the dim... | OPEN | 2025-08-18T02:23:51 | 2025-08-26T15:25:02 | null | https://github.com/huggingface/datasets/issues/7738 | ryan-minato | 2 | [
"enhancement"
] |
7,733 | Dataset Repo Paths to Locally Stored Images Not Being Appended to Image Path | ### Describe the bug
I’m not sure if this is a bug or a feature and I just don’t fully understand how dataset loading is to work, but it appears there may be a bug with how locally stored Image() are being accessed. I’ve uploaded a new dataset to hugging face (rmdig/rocky_mountain_snowpack) but I’ve come into a ton of... | CLOSED | 2025-08-08T19:10:58 | 2025-10-07T04:47:36 | 2025-10-07T04:32:48 | https://github.com/huggingface/datasets/issues/7733 | dennys246 | 2 | [] |
7,732 | webdataset: key errors when `field_name` has upper case characters | ### Describe the bug
When using a webdataset each sample can be a collection of different "fields"
like this:
```
images17/image194.left.jpg
images17/image194.right.jpg
images17/image194.json
images17/image12.left.jpg
images17/image12.right.jpg
images17/image12.json
```
if the field_name contains upper case characte... | OPEN | 2025-08-08T16:56:42 | 2025-08-08T16:56:42 | null | https://github.com/huggingface/datasets/issues/7732 | YassineYousfi | 0 | [] |
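A client-side workaround, assuming the shards can be rebuilt, is to lowercase everything after the first dot of each member name so field names match what the loader expects. The helper below is a sketch, not part of the webdataset loader:

```python
def normalize_field_names(member_name: str) -> str:
    """Lowercase everything after the first dot in a WebDataset member
    name, leaving the sample key (directory and basename) untouched.

    'images17/image194.LEFT.JPG' -> 'images17/image194.left.jpg'
    """
    dirname, _, filename = member_name.rpartition("/")
    base, dot, fields = filename.partition(".")
    normalized = base + dot + fields.lower()
    return f"{dirname}/{normalized}" if dirname else normalized
```

Applying this when writing the tar members keeps `image194.left.jpg` and `image194.LEFT.JPG` from producing different (or missing) fields at load time.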
7,731 | Add the possibility of a backend for audio decoding | ### Feature request
Add the possibility of a backend for audio decoding. Before version 4.0.0, soundfile was used, and now torchcodec is used, but the problem is that torchcodec requires ffmpeg, which is problematic to install on the same colab. Therefore, I suggest adding a decoder selection when loading the dataset.... | OPEN | 2025-08-08T11:08:56 | 2025-08-20T16:29:33 | null | https://github.com/huggingface/datasets/issues/7731 | intexcor | 2 | [
"enhancement"
] |
7,729 | OSError: libcudart.so.11.0: cannot open shared object file: No such file or directory | > Hi is there any solution for that eror i try to install this one
pip install torch==1.12.1+cpu torchaudio==0.12.1+cpu -f https://download.pytorch.org/whl/torch_stable.html
This works fine, but how do I install a PyTorch version that is built for GPU? | OPEN | 2025-08-07T14:07:23 | 2025-09-24T02:17:15 | null | https://github.com/huggingface/datasets/issues/7729 | SaleemMalikAI | 1 | []
7,728 | NonMatchingSplitsSizesError and ExpectedMoreSplitsError | ### Describe the bug
When loading dataset, the info specified by `data_files` did not overwrite the original info.
### Steps to reproduce the bug
```python
from datasets import load_dataset
traindata = load_dataset(
"allenai/c4",
"en",
data_files={"train": "en/c4-train.00000-of-... | OPEN | 2025-08-07T04:04:50 | 2025-10-06T21:08:39 | null | https://github.com/huggingface/datasets/issues/7728 | efsotr | 3 | [] |
7,727 | config paths that start with ./ are not valid as hf:// accessed repos, but are valid when accessed locally | ### Describe the bug
```
- config_name: some_config
data_files:
- split: train
path:
- images/xyz/*.jpg
```
will correctly download but
```
- config_name: some_config
data_files:
- split: train
path:
- ./images/xyz/*.jpg
```
will error with `FileNotFoundError` due to improper url joining. `loa... | OPEN | 2025-08-06T08:21:37 | 2025-08-06T08:21:37 | null | https://github.com/huggingface/datasets/issues/7727 | doctorpangloss | 0 | [] |
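Until the URL joining is fixed, one client-side sketch is to strip the leading `./` from patterns before passing them as `data_files`, so the same config resolves both locally and over `hf://` (hypothetical helper, not library behavior):

```python
def normalize_data_file_pattern(pattern: str) -> str:
    """Strip any leading './' segments so a data_files pattern resolves
    the same way locally and when joined onto an hf:// repo URL.

    Hypothetical client-side helper, not the library's own resolution.
    """
    while pattern.startswith("./"):
        pattern = pattern[2:]
    return pattern
```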
7,724 | Can not stepinto load_dataset.py? | I set a breakpoint in "load_dataset.py" and tried to debug my data loading code, but it does not stop at any breakpoint. Can "load_dataset.py" not be stepped into?
| OPEN | 2025-08-05T09:28:51 | 2025-08-05T09:28:51 | null | https://github.com/huggingface/datasets/issues/7724 | micklexqg | 0 | []
7,723 | Don't remove `trust_remote_code` arg!!! | ### Feature request
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
Add `trust_remote_code` arg back please!
### Motivation
Defaulting it to False is a nice balance, but we need to manually set it to True in certain scenarios!
### Your contribution
defaulting it to Fals... | OPEN | 2025-08-04T15:42:07 | 2025-08-04T15:42:07 | null | https://github.com/huggingface/datasets/issues/7723 | autosquid | 0 | [
"enhancement"
] |
7,722 | Out of memory even though using load_dataset(..., streaming=True) | ### Describe the bug
I am iterating over a large dataset that I load using streaming=True to avoid running out of memory. Unfortunately, I am observing that memory usage increases over time until I finally run into an OOM.
### Steps to reproduce the bug
```
ds = load_dataset("openslr/librispeech_asr", split="tra... | OPEN | 2025-08-04T14:41:55 | 2025-08-04T14:41:55 | null | https://github.com/huggingface/datasets/issues/7722 | padmalcom | 0 | [] |
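This does not rule out a library-side leak, but the usual pattern for bounded memory is to keep only one small batch of examples alive at a time and drop references to previous ones. A minimal sketch over a plain iterator (standing in for the `IterableDataset`):

```python
from itertools import islice

def iter_batches(stream, batch_size):
    """Yield lists of up to batch_size items from an iterator without
    materializing the whole stream or keeping old batches alive."""
    iterator = iter(stream)
    while True:
        batch = list(islice(iterator, batch_size))
        if not batch:
            return
        yield batch

# Simulated sample stream standing in for an IterableDataset.
stream = ({"id": i} for i in range(10))
sizes = [len(batch) for batch in iter_batches(stream, 4)]  # [4, 4, 2]
```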
7,721 | Bad split error message when using percentages | ### Describe the bug
Hi, I'm trying to download a dataset. To avoid loading the entire dataset into memory, I split it as described [here](https://huggingface.co/docs/datasets/v4.0.0/loading#slice-splits) in 10% steps.
When doing so, the library returns this error:
raise ValueError(f"Bad split: {split}. Available splits... | OPEN | 2025-08-04T13:20:25 | 2025-08-14T14:42:24 | null | https://github.com/huggingface/datasets/issues/7721 | padmalcom | 2 | [] |
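For reference, the slice syntax from the linked docs page builds split expressions like `train[0%:10%]`; the error above suggests the base split name itself may not exist for that dataset. A small sketch that generates the 10% slices:

```python
def percent_slices(split: str, step: int) -> list:
    """Build slice expressions like 'train[0%:10%]' ... 'train[90%:100%]'
    for passing one at a time as `split=` to load_dataset."""
    return [f"{split}[{lo}%:{lo + step}%]" for lo in range(0, 100, step)]

slices = percent_slices("train", 10)  # ten expressions covering the split
```

Each expression is only valid if `split` matches one of the dataset's actual split names, which is worth checking first.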
7,720 | Datasets 4.0 map function causing column not found | ### Describe the bug
Column returned after mapping is not found in new instance of the dataset.
### Steps to reproduce the bug
Code for reproduction. After running get_total_audio_length, it is errored out due to `data` not having `duration`
```
def compute_duration(x):
return {"duration": len(x["audio"]["array"... | OPEN | 2025-08-03T12:52:34 | 2025-08-07T19:23:34 | null | https://github.com/huggingface/datasets/issues/7720 | Darejkal | 3 | [] |
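For context, `map` is expected to merge the returned dict into each example as new columns. The toy model below (plain Python, no `datasets` involved, with a synthetic one-second audio dict) shows the intended behavior the reporter relied on:

```python
def map_rows(rows, fn):
    """Plain-Python model of Dataset.map without batching: the dict
    returned by fn is merged into each example as extra columns."""
    return [{**row, **fn(row)} for row in rows]

# Synthetic one-second clip at 16 kHz, mimicking the issue's audio column.
data = [{"audio": {"array": [0.0] * 16000, "sampling_rate": 16000}}]

def compute_duration(x):
    return {"duration": len(x["audio"]["array"]) / x["audio"]["sampling_rate"]}

mapped = map_rows(data, compute_duration)  # mapped[0]["duration"] == 1.0
```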
7,719 | Specify dataset columns types in typehint | ### Feature request
Make Dataset optionally generic for use with type annotations, like it was done in `torch.DataLoader` https://github.com/pytorch/pytorch/blob/134179474539648ba7dee1317959529fbd0e7f89/torch/utils/data/dataloader.py#L131
### Motivation
In MTEB we're using a lot of datasets objects, but they... | OPEN | 2025-08-02T13:22:31 | 2025-08-02T13:22:31 | null | https://github.com/huggingface/datasets/issues/7719 | Samoed | 0 | [
"enhancement"
] |
7,717 | Cached dataset is not used when explicitly passing the cache_dir parameter | ### Describe the bug
Hi, we are pre-downloading a dataset using snapshot_download(). When loading this exact dataset with load_dataset() the cached snapshot is not used. In both calls, I provide the cache_dir parameter.
### Steps to reproduce the bug
```
from datasets import load_dataset, concatenate_datasets
from h... | OPEN | 2025-08-01T07:12:41 | 2025-08-05T19:19:36 | null | https://github.com/huggingface/datasets/issues/7717 | padmalcom | 1 | [] |
7,709 | Release 4.0.0 breaks usage patterns of with_format | ### Describe the bug
Previously it was possible to access a whole column that was e.g. in numpy format via `with_format` by indexing the column. Now this possibility seems to be gone with the new Column() class. As far as I see, this makes working on a whole column (in-memory) more complex, i.e. normalizing an in-memo... | CLOSED | 2025-07-30T11:34:53 | 2025-08-07T08:27:18 | 2025-08-07T08:27:18 | https://github.com/huggingface/datasets/issues/7709 | wittenator | 2 | [] |
7,707 | load_dataset() in 4.0.0 failed when decoding audio | ### Describe the bug
Cannot decode audio data.
### Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("hf-internal-testing/librispeech_asr_demo", "clean", split="validation")
print(dataset[0]["audio"]["array"])
```
1st round run, got
```
File "/usr/local/lib/python3.1... | CLOSED | 2025-07-29T03:25:03 | 2025-10-05T06:41:38 | 2025-08-01T05:15:45 | https://github.com/huggingface/datasets/issues/7707 | jiqing-feng | 16 | [] |
7,705 | Can Not read installed dataset in dataset.load(.) | Hi, folks, I'm newbie in huggingface dataset api.
As title, i'm facing the issue that the dataset.load api can not connect to the installed dataset.
code snippet :
<img width="572" height="253" alt="Image" src="https://github.com/user-attachments/assets/10f48aaf-d6ca-4239-b1cf-145d74f125d1" />
data path :
"/xxx/jose... | OPEN | 2025-07-28T09:43:54 | 2025-08-05T01:24:32 | null | https://github.com/huggingface/datasets/issues/7705 | HuangChiEn | 3 | [] |
7,703 | [Docs] map() example uses undefined `tokenizer` — causes NameError | ## Description
The current documentation example for `datasets.Dataset.map()` demonstrates batched processing but uses a `tokenizer` object without defining or importing it. This causes an error every time it's copied.
Here is the problematic line:
```python
# process a batch of examples
>>> ds = ds.map(lambda examp... | OPEN | 2025-07-26T13:35:11 | 2025-07-27T09:44:35 | null | https://github.com/huggingface/datasets/issues/7703 | Sanjaykumar030 | 1 | [] |
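The fix for the doc is simply to define `tokenizer` before calling `map`, e.g. with `transformers.AutoTokenizer.from_pretrained(...)`. To keep the illustration runnable without downloads, the sketch below uses a toy whitespace tokenizer and a plain-Python model of `batched=True` mapping; both are stand-ins, not the library's implementation:

```python
def tokenizer(texts, padding=None):
    """Toy whitespace tokenizer standing in for a transformers tokenizer;
    replace with AutoTokenizer.from_pretrained(...) in real use."""
    ids = [text.split() for text in texts]
    if padding == "max_length":
        width = max((len(x) for x in ids), default=0)
        ids = [x + ["[PAD]"] * (width - len(x)) for x in ids]
    return {"input_ids": ids}

def map_batched(rows, fn, batch_size=2):
    """Minimal model of Dataset.map(batched=True): fn receives a dict of
    column lists and returns new column lists, merged back row by row."""
    out = []
    for start in range(0, len(rows), batch_size):
        batch = rows[start:start + batch_size]
        columns = {k: [row[k] for row in batch] for k in batch[0]}
        new_columns = fn(columns)
        for i, row in enumerate(batch):
            out.append({**row, **{k: v[i] for k, v in new_columns.items()}})
    return out

rows = [{"text": "hello world"}, {"text": "hi"}, {"text": "a b c"}]
mapped = map_batched(rows, lambda ex: tokenizer(ex["text"], padding="max_length"))
```

Note the toy pads per batch, mirroring how per-batch padding behaves with a real tokenizer.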
7,700 | [doc] map.num_proc needs clarification | https://huggingface.co/docs/datasets/v4.0.0/en/package_reference/main_classes#datasets.Dataset.map.num_proc
```
num_proc (int, optional, defaults to None) — Max number of processes when generating cache. Already cached
shards are loaded sequentially.
```
for batch:
```
num_proc (int, optional, defaults to None): The n... | OPEN | 2025-07-25T17:35:09 | 2025-07-25T17:39:36 | null | https://github.com/huggingface/datasets/issues/7700 | sfc-gh-sbekman | 0 | [] |
7,699 | Broken link in documentation for "Create a video dataset" | The link to "the [WebDataset documentation](https://webdataset.github.io/webdataset)." is broken.
https://huggingface.co/docs/datasets/main/en/video_dataset#webdataset
<img width="2048" height="264" alt="Image" src="https://github.com/user-attachments/assets/975dd10c-aad8-42fc-9fbc-de0e2747a326" /> | OPEN | 2025-07-24T19:46:28 | 2025-07-25T15:27:47 | null | https://github.com/huggingface/datasets/issues/7699 | cleong110 | 1 | [] |
7,698 | NotImplementedError when using streaming=True in Google Colab environment | ### Describe the bug
When attempting to load a large dataset (like tiiuae/falcon-refinedweb or allenai/c4) using streaming=True in a standard Google Colab notebook, the process fails with a NotImplementedError: Loading a streaming dataset cached in a LocalFileSystem is not supported yet. This issue persists even after... | OPEN | 2025-07-23T08:04:53 | 2025-07-23T15:06:23 | null | https://github.com/huggingface/datasets/issues/7698 | Aniket17200 | 2 | [] |
7,697 | - | - | CLOSED | 2025-07-23T01:30:32 | 2025-07-25T15:21:39 | 2025-07-25T15:21:39 | https://github.com/huggingface/datasets/issues/7697 | null | 0 | [] |
7,696 | load_dataset() in 4.0.0 returns different audio samples compared to earlier versions breaking reproducibility | ### Describe the bug
In datasets 4.0.0 release, `load_dataset()` returns different audio samples compared to earlier versions, this breaks integration tests that depend on consistent sample data across different environments (first and second envs specified below).
### Steps to reproduce the bug
```python
from dat... | CLOSED | 2025-07-22T17:02:17 | 2025-07-30T14:22:21 | 2025-07-30T14:22:21 | https://github.com/huggingface/datasets/issues/7696 | Manalelaidouni | 2 | [] |
7,694 | Dataset.to_json consumes excessive memory, appears to not be a streaming operation | ### Describe the bug
When exporting a Dataset object to a JSON Lines file using the .to_json(lines=True) method, the process consumes a very large amount of memory. The memory usage is proportional to the size of the entire Dataset object being saved, rather than being a low, constant memory operation.
This behavior ... | OPEN | 2025-07-21T07:51:25 | 2025-07-25T14:42:21 | null | https://github.com/huggingface/datasets/issues/7694 | ycq0125 | 1 | [] |
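It may be worth trying `to_json`'s `batch_size` argument if the installed version exposes it. As a manual fallback, the hedged sketch below streams rows to JSON Lines while buffering only a fixed-size chunk (plain Python, independent of `datasets`):

```python
import json
import os
import tempfile

def write_jsonl_in_chunks(rows, path, chunk_size=1000):
    """Stream rows to a JSON Lines file, buffering at most chunk_size
    serialized lines at a time instead of materializing everything."""
    with open(path, "w", encoding="utf-8") as f:
        buffer = []
        for row in rows:
            buffer.append(json.dumps(row, ensure_ascii=False))
            if len(buffer) >= chunk_size:
                f.write("\n".join(buffer) + "\n")
                buffer.clear()
        if buffer:
            f.write("\n".join(buffer) + "\n")

# Demo: write five rows two at a time from a generator, so nothing is
# accumulated beyond one chunk.
out_path = os.path.join(tempfile.mkdtemp(), "sample.jsonl")
write_jsonl_in_chunks(({"i": i} for i in range(5)), out_path, chunk_size=2)
```

Feeding it a lazy iterator over the dataset (rather than a materialized list) keeps peak memory proportional to the chunk size.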
7,693 | Dataset scripts are no longer supported, but found superb.py | ### Describe the bug
Hello,
I'm trying to follow the [Hugging Face Pipelines tutorial](https://huggingface.co/docs/transformers/main_classes/pipelines) but the tutorial seems to work only on old datasets versions.
I then get the error:
```
--------------------------------------------------------------------------
... | OPEN | 2025-07-20T13:48:06 | 2025-12-02T05:34:39 | null | https://github.com/huggingface/datasets/issues/7693 | edwinzajac | 19 | [] |
7,692 | xopen: invalid start byte for streaming dataset with trust_remote_code=True | ### Describe the bug
I am trying to load YODAS2 dataset with datasets==3.6.0
```
from datasets import load_dataset
next(iter(load_dataset('espnet/yodas2', name='ru000', split='train', streaming=True, trust_remote_code=True)))
```
And get `UnicodeDecodeError: 'utf-8' codec can't decode byte 0xa8 in position 1: invalid ... | OPEN | 2025-07-20T11:08:20 | 2025-07-25T14:38:54 | null | https://github.com/huggingface/datasets/issues/7692 | sedol1339 | 1 | [] |