Dataset schema (column · type · min … max):

number          int64             2 … 7.91k
title           string, lengths   1 … 290
body            string, lengths   0 … 228k
state           string, 2 classes
created_at      timestamp[s]      2020-04-14 18:18:51 … 2025-12-16 10:45:02
updated_at      timestamp[s]      2020-04-29 09:23:05 … 2025-12-16 19:34:46
closed_at       timestamp[s]      2020-04-29 09:23:05 … 2025-12-16 14:20:48
url             string, lengths   48 … 51
author          string, lengths   3 … 26
comments_count  int64             0 … 70
labels          list, lengths     0 … 4
7,456
.add_faiss_index and .add_elasticsearch_index return ImportError on Google Colab
### Describe the bug On Google Colab, ```!pip install faiss-cpu``` works and ```import faiss``` raises no error, but ```embeddings_dataset.add_faiss_index(column='embeddings')``` returns ``` [/usr/local/lib/python3.11/dist-packages/datasets/search.py](https://localhost:8080/#) in init(self, device, string_factory, metric_type, cus...
OPEN
2025-03-16T00:51:49
2025-03-17T15:57:19
null
https://github.com/huggingface/datasets/issues/7456
MapleBloom
6
[]
7,455
Problems with local dataset after upgrade from 3.3.2 to 3.4.0
### Describe the bug After yesterday's upgrade from datasets 3.3.2 to 3.4.0, I was no longer able to open a locally saved dataset that was created with an older datasets version. The traceback is ``` Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/datasets/packaged_modules/arrow/...
OPEN
2025-03-15T09:22:50
2025-03-17T16:20:43
null
https://github.com/huggingface/datasets/issues/7455
andjoer
1
[]
7,449
Cannot load data with different schemas from different parquet files
### Describe the bug Cannot load samples with optional fields from different files. The schema cannot be correctly derived. ### Steps to reproduce the bug When I place two samples with an optional field `some_extra_field` within a single parquet file, it can be loaded via `load_dataset`. ```python import pandas as ...
CLOSED
2025-03-13T08:14:49
2025-03-17T07:27:48
2025-03-17T07:27:46
https://github.com/huggingface/datasets/issues/7449
li-plus
2
[]
7,448
`datasets.disable_caching` doesn't work
When I use `Dataset.from_generator(my_gen)` to load my dataset, it simply skips my changes to the generator function. I tried `datasets.disable_caching`, but it doesn't work!
OPEN
2025-03-13T06:40:12
2025-03-22T04:37:07
null
https://github.com/huggingface/datasets/issues/7448
UCC-team
2
[]
7,447
Epochs shortened after resuming mid-epoch with Iterable dataset+StatefulDataloader(persistent_workers=True)
### Describe the bug When `torchdata.stateful_dataloader.StatefulDataloader(persistent_workers=True)` is used, the epochs after resuming only iterate through the examples that were left in the epoch when the training was interrupted. For example, in the script below training is interrupted at step 124 (epoch 1) when 3 batches ...
CLOSED
2025-03-12T21:41:05
2025-07-09T23:04:57
2025-03-14T10:50:10
https://github.com/huggingface/datasets/issues/7447
dhruvdcoder
6
[]
7,446
pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int'
### Describe the bug The top-level dict keys are all str, but I get the following error ```python test_data=[{'input_ids':[1,2,3],'labels':[[Counter({2:1})]]}] dataset = datasets.Dataset.from_list(test_data) ``` ```bash pyarrow.lib.ArrowTypeError: Expected dict key of type str or bytes, got 'int' ``` ### Steps to reproduce the...
CLOSED
2025-03-12T07:48:37
2025-07-04T05:14:45
2025-07-04T05:14:45
https://github.com/huggingface/datasets/issues/7446
rangehow
2
[]
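The error in issue 7446 comes from the nested `Counter`, whose keys are ints, while Arrow maps require str or bytes keys. A minimal workaround sketch (the `stringify_keys` helper is hypothetical, not part of `datasets`) converts nested dict keys to strings before calling `Dataset.from_list`:

```python
from collections import Counter

def stringify_keys(obj):
    # Recursively convert dict keys to str so Arrow can encode the value;
    # Counter is a dict subclass, so it is handled by the dict branch.
    if isinstance(obj, dict):
        return {str(k): stringify_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [stringify_keys(v) for v in obj]
    return obj

test_data = [{"input_ids": [1, 2, 3], "labels": [[Counter({2: 1})]]}]
clean = [stringify_keys(row) for row in test_data]
# `clean` can now be passed to datasets.Dataset.from_list without the ArrowTypeError
```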
7,444
Excessive warnings when resuming an IterableDataset+buffered shuffle+DDP.
### Describe the bug I have a large dataset that I sharded into 1024 shards and saved on disk during pre-processing. During training, I load the dataset using load_from_disk(), convert it into an iterable dataset, shuffle it, and split the shards across different DDP nodes using the recommended method. However, when ...
OPEN
2025-03-11T16:34:39
2025-11-22T06:45:25
null
https://github.com/huggingface/datasets/issues/7444
dhruvdcoder
2
[]
7,443
index error when num_shards > len(dataset)
In `ds.push_to_hub()` and `ds.save_to_disk()`, `num_shards` must be smaller than or equal to the number of rows in the dataset, but currently this is not checked anywhere inside these functions. Attempting to invoke these functions with `num_shards > len(dataset)` should raise an informative `ValueError`. I frequently...
OPEN
2025-03-10T22:40:59
2025-03-10T23:43:08
null
https://github.com/huggingface/datasets/issues/7443
eminorhan
1
[]
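The validation issue 7443 asks for can be sketched as a small guard run before `push_to_hub`/`save_to_disk` (a hypothetical helper, not the library's own check):

```python
def check_num_shards(num_rows: int, num_shards: int) -> int:
    # Each shard must hold at least one row, so num_shards cannot
    # exceed the number of rows in the dataset.
    if num_shards > num_rows:
        raise ValueError(
            f"num_shards ({num_shards}) must be <= the number of rows ({num_rows})"
        )
    return num_shards
```

For example, `check_num_shards(len(ds), num_shards)` could be called before `ds.push_to_hub(..., num_shards=num_shards)`.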
7,442
Flexible Loader
### Feature request Can we have a utility function that will use `load_from_disk` when given the local path and `load_dataset` if given an HF dataset? It can be something as simple as this one: ``` def load_hf_dataset(path_or_name): if os.path.exists(path_or_name): return load_from_disk(path_or_name) ...
OPEN
2025-03-09T16:55:03
2025-03-27T23:58:17
null
https://github.com/huggingface/datasets/issues/7442
dipta007
3
[ "enhancement" ]
7,441
`drop_last_batch` does not drop the last batch using IterableDataset + interleave_datasets + multi_worker
### Describe the bug See the script below `drop_last_batch=True` is defined using map() for each dataset. The last batch for each dataset is expected to be dropped, id 21-25. The code behaves as expected when num_workers=0 or 1. When using num_workers>1, 'a-11', 'b-11', 'a-12', 'b-12' are gone and instead 21 and 22 a...
OPEN
2025-03-08T10:28:44
2025-10-09T10:14:24
null
https://github.com/huggingface/datasets/issues/7441
memray
3
[]
7,440
IterableDataset raises FileNotFoundError instead of retrying
### Describe the bug In https://github.com/huggingface/datasets/issues/6843 it was noted that the streaming feature of `datasets` is highly susceptible to outages and doesn't back off for long (or even *at all*). I was training a model while streaming SlimPajama and training crashed with a `FileNotFoundError`. I can ...
OPEN
2025-03-07T19:14:18
2025-07-22T08:15:44
null
https://github.com/huggingface/datasets/issues/7440
bauwenst
7
[]
7,433
`Dataset.map` ignores existing caches and remaps when ran with different `num_proc`
### Describe the bug If you `map` a dataset and save it to a specific `cache_file_name` with a specific `num_proc`, and then call map again with that same existing `cache_file_name` but a different `num_proc`, the dataset will be re-mapped. ### Steps to reproduce the bug 1. Download a dataset ```python import datase...
CLOSED
2025-03-03T05:51:26
2025-05-12T15:14:09
2025-05-12T15:14:09
https://github.com/huggingface/datasets/issues/7433
ringohoffman
2
[]
7,431
Issues with large Datasets
### Describe the bug If the COCO annotation file is too large, the dataset will not be able to load it. I am not entirely sure where the issue is, but I am guessing the code tries to load it all as one line into a dataframe. This was for object detection. My current workaround is the following code, but would ...
OPEN
2025-02-28T14:05:22
2025-03-04T15:02:26
null
https://github.com/huggingface/datasets/issues/7431
nikitabelooussovbtis
4
[]
7,430
Error in code "Time to slice and dice" from course "NLP Course"
### Describe the bug When we execute code ``` frequencies = ( train_df["condition"] .value_counts() .to_frame() .reset_index() .rename(columns={"index": "condition", "condition": "frequency"}) ) frequencies.head() ``` the expected answer is a table of condition | frequency rows, e.g. birth control | 27655, dep...
CLOSED
2025-02-28T11:36:10
2025-03-05T11:32:47
2025-03-03T17:52:15
https://github.com/huggingface/datasets/issues/7430
Yurkmez
2
[]
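The failure in issue 7430 is a pandas API change: since pandas 2.0, `value_counts().to_frame().reset_index()` already names the columns `condition` and `count`, so the course's rename mapping no longer applies. A sketch of the corrected snippet, with a tiny made-up DataFrame in place of the course's `train_df`:

```python
import pandas as pd

train_df = pd.DataFrame(
    {"condition": ["birth control"] * 3 + ["depression"] * 2}
)

# pandas >= 2.0: reset_index() restores the original column name and the
# counts arrive in a column called "count", so only "count" needs renaming.
frequencies = (
    train_df["condition"]
    .value_counts()
    .to_frame()
    .reset_index()
    .rename(columns={"count": "frequency"})
)
```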
7,427
Error splitting the input into NAL units.
### Describe the bug I am trying to finetune qwen2.5-vl on 16 * 80G GPUs, and I use `LLaMA-Factory` with `preprocessing_num_workers=16`. However, I met the following error and the program seems to crash. The error appears to come from the `datasets` library. The error log looks like the following: ```text Convertin...
OPEN
2025-02-28T02:30:15
2025-03-04T01:40:28
null
https://github.com/huggingface/datasets/issues/7427
MengHao666
2
[]
7,425
load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") TypeError: 'NoneType' object is not callable
### Describe the bug from datasets import load_dataset lcb_codegen = load_dataset("livecodebench/code_generation_lite", version_tag="release_v2") or configs = get_dataset_config_names("livecodebench/code_generation_lite", trust_remote_code=True) both error: Traceback (most recent call last): File "", line 1, in File...
OPEN
2025-02-27T07:36:02
2025-03-27T05:05:33
null
https://github.com/huggingface/datasets/issues/7425
dshwei
10
[]
7,423
Row indexing a dataset with numpy integers
### Feature request Allow indexing datasets with a scalar numpy integer type. ### Motivation Indexing a dataset with a scalar numpy.int* object raises a TypeError. This is due to the test in `datasets/formatting/formatting.py:key_to_query_type` ``` python def key_to_query_type(key: Union[int, slice, range, str, Ite...
CLOSED
2025-02-25T18:44:45
2025-07-28T02:23:17
2025-07-28T02:23:17
https://github.com/huggingface/datasets/issues/7423
DavidRConnell
1
[ "enhancement" ]
7,421
DVC integration broken
### Describe the bug The DVC integration seems to be broken. Followed this guide: https://dvc.org/doc/user-guide/integrations/huggingface ### Steps to reproduce the bug #### Script to reproduce ~~~python from datasets import load_dataset dataset = load_dataset( "csv", data_files="dvc://workshop/satellite-d...
OPEN
2025-02-25T13:14:31
2025-03-03T17:42:02
null
https://github.com/huggingface/datasets/issues/7421
maxstrobel
1
[]
7,420
better correspondence between cached and saved datasets created using from_generator
### Feature request At the moment `.from_generator` can only create a dataset that lives in the cache. The cached dataset cannot be loaded with `load_from_disk` because the cache folder is missing `state.json`. So the only way to convert this cached dataset to a regular one is to use `save_to_disk`, which needs to create a...
OPEN
2025-02-24T22:14:37
2025-02-26T03:10:22
null
https://github.com/huggingface/datasets/issues/7420
vttrifonov
0
[ "enhancement" ]
7,419
Import order crashes script execution
### Describe the bug Hello, I'm trying to convert an HF dataset into a TFRecord, so I'm importing `tensorflow` and `datasets` to do so. Depending on the order in which I import those libraries, my code hangs forever and is unkillable (CTRL+C doesn't work; I need to kill my shell entirely). Thank you for your help 🙏 ...
OPEN
2025-02-24T17:03:43
2025-02-24T17:03:43
null
https://github.com/huggingface/datasets/issues/7419
DamienMatias
0
[]
7,418
pyarrow.lib.ArrowInvalid: cannot mix list and non-list, non-null values with map function
### Describe the bug I encounter a pyarrow.lib.ArrowInvalid error with the map function on some examples when loading the dataset ### Steps to reproduce the bug ``` from datasets import load_dataset from PIL import Image, PngImagePlugin dataset = load_dataset("leonardPKU/GEOQA_R1V_Train_8K") system_prompt="You are a helpful...
OPEN
2025-02-21T10:58:06
2025-07-11T13:06:10
null
https://github.com/huggingface/datasets/issues/7418
alexxchen
5
[]
7,415
Shard Dataset at specific indices
I have a dataset of sequences, where each example in the sequence is a separate row in the dataset (similar to LeRobotDataset). When running `Dataset.save_to_disk`, how can I provide indices at which to shard the dataset, such that no episode spans more than 1 shard? Consequently, when I run `Dataset.load_from...
OPEN
2025-02-20T10:43:10
2025-02-24T11:06:45
null
https://github.com/huggingface/datasets/issues/7415
nikonikolov
3
[]
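One way to approach issue 7415 is to compute shard boundaries that only fall on episode ends and then save each span separately. A greedy sketch of the boundary computation (a hypothetical helper; `episode_ends` is assumed to be a sorted list of exclusive end indices, the last one equal to the dataset length):

```python
def episode_shards(episode_ends, num_shards):
    # Greedily close a shard at the first episode end that reaches the
    # ideal shard size; the final episode always closes the last shard.
    total = episode_ends[-1]
    target = total / num_shards
    shards, start, next_cut = [], 0, target
    for i, end in enumerate(episode_ends):
        if end >= next_cut or i == len(episode_ends) - 1:
            shards.append((start, end))
            start = end
            next_cut += target
    return shards

# each (start, end) span can then be saved via dataset.select(range(start, end))
```

Note the greedy pass may return fewer shards than requested when episodes are long relative to the target shard size.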
7,413
Documentation on multiple media files of the same type with WebDataset
The [current documentation](https://huggingface.co/docs/datasets/en/video_dataset) on creating a video dataset includes only examples with one media file and one json. It would be useful to have examples where multiple files of the same type are included. For example, in a sign language dataset, you may have a base v...
OPEN
2025-02-18T16:13:20
2025-02-20T14:17:54
null
https://github.com/huggingface/datasets/issues/7413
DCNemesis
1
[]
7,412
Index Error: Invalid Key is out of bounds for size 0 for code-search-net/code_search_net dataset
### Describe the bug I am trying to do model pruning on sentence-transformers/all-mini-L6-v2 for the code-search-net/code_search_net dataset using the INCTrainer class. However, I am getting the error below ``` raise IndexError(f"Invalid key: {key} is out of bounds for size {size}") IndexError: Invalid key: 1840208 is out of b...
OPEN
2025-02-18T05:58:33
2025-02-18T06:42:07
null
https://github.com/huggingface/datasets/issues/7412
harshakhmk
0
[]
7,406
Adding Core Maintainer List to CONTRIBUTING.md
### Feature request I propose adding a core maintainer list to the `CONTRIBUTING.md` file. ### Motivation The Transformers and Liger-Kernel projects maintain lists of core maintainers for each module. However, the Datasets project doesn't have such a list. ### Your contribution I have nothing to add here.
CLOSED
2025-02-17T00:32:40
2025-03-24T10:57:54
2025-03-24T10:57:54
https://github.com/huggingface/datasets/issues/7406
jp1924
3
[ "enhancement" ]
7,405
Lazy loading of environment variables
### Describe the bug Loading a `.env` file after an `import datasets` call does not correctly use the environment variables. This is due to the fact that environment variables are read at import time: https://github.com/huggingface/datasets/blob/de062f0552a810c52077543c1169c38c1f0c53fc/src/datasets/config.py#L155C1-L15...
OPEN
2025-02-16T22:31:41
2025-02-17T15:17:18
null
https://github.com/huggingface/datasets/issues/7405
nikvaessen
1
[]
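Until the config values are read lazily (what issue 7405 asks for), the practical workaround is to populate the environment before the first `import datasets`. A sketch, with example values only:

```python
import os

# Populate HF-related variables *before* the first `import datasets`:
# datasets.config snapshots the environment at import time, so a .env file
# loaded afterwards (e.g. via python-dotenv) is ignored.
os.environ["HF_HUB_OFFLINE"] = "1"           # example setting
os.environ["HF_DATASETS_CACHE"] = "/tmp/hf"  # hypothetical path

# only now:
# import datasets
```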
7,404
Performance regression in `dataset.filter`
### Describe the bug We're filtering a dataset of ~1M (small-ish) records. At some point in the code we do `dataset.filter`; before (up to and including 3.2.0) it took a couple of seconds, and now it takes 4 hours. We use 16 threads/workers, and their stack traces look as follows: ``` Traceback (most recent call last): Fi...
CLOSED
2025-02-16T22:19:14
2025-02-17T17:46:06
2025-02-17T14:28:48
https://github.com/huggingface/datasets/issues/7404
ttim
3
[]
7,399
Synchronize parameters for various datasets
### Describe the bug [IterableDatasetDict](https://huggingface.co/docs/datasets/v3.2.0/en/package_reference/main_classes#datasets.IterableDatasetDict.map) map function is missing the `desc` parameter. You can see the equivalent map function for [Dataset here](https://huggingface.co/docs/datasets/v3.2.0/en/package_refe...
OPEN
2025-02-14T09:15:11
2025-02-19T11:50:29
null
https://github.com/huggingface/datasets/issues/7399
grofte
2
[]
7,400
504 Gateway Timeout when uploading large dataset to Hugging Face Hub
### Description I encountered consistent 504 Gateway Timeout errors while attempting to upload a large dataset (approximately 500GB) to the Hugging Face Hub. The upload fails during the process with a Gateway Timeout error. I will continue trying to upload. While it might succeed in future attempts, I wanted to report...
OPEN
2025-02-14T02:18:35
2025-02-14T23:48:36
null
https://github.com/huggingface/datasets/issues/7400
hotchpotch
4
[]
7,394
Using load_dataset with data_files and split arguments yields an error
### Describe the bug It seems the list of valid splits recorded by the package becomes incorrectly overwritten when using the `data_files` argument. If I run ```python from datasets import load_dataset load_dataset("allenai/super", split="all_examples", data_files="tasks/expert.jsonl") ``` then I get the error ``` Va...
OPEN
2025-02-12T04:50:11
2025-11-21T14:05:23
null
https://github.com/huggingface/datasets/issues/7394
devon-research
1
[]
7,392
push_to_hub payload too large error when using large ClassLabel feature
### Describe the bug When using `datasets.DatasetDict.push_to_hub` an `HfHubHTTPError: 413 Client Error: Payload Too Large for url` is raised if the dataset contains a large `ClassLabel` feature. Even if the total size of the dataset is small. ### Steps to reproduce the bug ``` python import random import sys impor...
OPEN
2025-02-11T17:51:34
2025-02-11T18:01:31
null
https://github.com/huggingface/datasets/issues/7392
DavidRConnell
1
[]
7,391
AttributeError: module 'pyarrow.lib' has no attribute 'ListViewType'
Tried several versions of pyarrow; none of them work.
OPEN
2025-02-11T12:02:26
2025-02-11T12:02:26
null
https://github.com/huggingface/datasets/issues/7391
LinXin04
0
[]
7,390
Re-add py.typed
### Feature request The motivation for removing py.typed no longer seems to apply. Would a solution like [this one](https://github.com/huggingface/huggingface_hub/pull/2752) work here? ### Motivation MyPy support is broken. As more type checkers come out, such as RedKnot, these may also be broken. It would be goo...
OPEN
2025-02-10T22:12:52
2025-08-10T00:51:17
null
https://github.com/huggingface/datasets/issues/7390
NeilGirdhar
1
[ "enhancement" ]
7,389
Getting statistics about filtered examples
@lhoestq wondering if the team has thought about this and if there are any recommendations? Currently when processing datasets some examples are bound to get filtered out, whether it's due to bad format, or length is too long, or any other custom filters that might be getting applied. Let's just focus on the filter by...
CLOSED
2025-02-10T20:48:29
2025-02-11T20:44:15
2025-02-11T20:44:13
https://github.com/huggingface/datasets/issues/7389
jonathanasdf
2
[]
7,388
OSError: [Errno 22] Invalid argument forbidden character
### Describe the bug I'm on Windows and I'm trying to load a dataset, but I'm getting the error in the title because files in the repository are named with characters like < and >, which are forbidden in Windows filenames. Would it be possible to load this dataset while removing those characters? ### Steps to reproduce the bug load_dataset("CAT...
CLOSED
2025-02-10T17:46:31
2025-02-11T13:42:32
2025-02-11T13:42:30
https://github.com/huggingface/datasets/issues/7388
langflogit
2
[]
7,387
Dynamic adjusting dataloader sampling weight
Hi, Thanks for your wonderful work! I'm wondering whether there is a way to dynamically adjust the sampling weight of each example in the dataset during training? Looking forward to your reply, thanks again.
OPEN
2025-02-10T03:18:47
2025-03-07T14:06:54
null
https://github.com/huggingface/datasets/issues/7387
whc688
3
[]
7,386
Add bookfolder Dataset Builder for Digital Book Formats
### Feature request This feature proposes adding a new dataset builder called bookfolder to the datasets library. This builder would allow users to easily load datasets consisting of various digital book formats, including: AZW, AZW3, CB7, CBR, CBT, CBZ, EPUB, MOBI, and PDF. ### Motivation Currently, loading dataset...
CLOSED
2025-02-08T14:27:55
2025-02-08T14:30:10
2025-02-08T14:30:09
https://github.com/huggingface/datasets/issues/7386
shikanime
1
[ "enhancement" ]
7,381
Iterating over values of a column in the IterableDataset
### Feature request I would like to be able to iterate (and re-iterate if needed) over a column of an `IterableDataset` instance. The following example shows the supposed API: ```python def gen(): yield {"text": "Good", "label": 0} yield {"text": "Bad", "label": 1} ds = IterableDataset.from_generator(gen) tex...
CLOSED
2025-01-28T13:17:36
2025-05-22T18:00:04
2025-05-22T18:00:04
https://github.com/huggingface/datasets/issues/7381
TopCoder2K
11
[ "enhancement" ]
7,378
Allow pushing config version to hub
### Feature request Currently, when datasets are created, they can be versioned by passing the `version` argument to `load_dataset(...)`. For example creating `outcomes.csv` on the command line ``` echo "id,value\n1,0\n2,0\n3,1\n4,1\n" > outcomes.csv ``` and creating it ``` import datasets dataset = datasets.load_dat...
OPEN
2025-01-21T22:35:07
2025-01-30T13:56:56
null
https://github.com/huggingface/datasets/issues/7378
momeara
1
[ "enhancement" ]
7,377
Support for sparse arrays with the Arrow Sparse Tensor format?
### Feature request AI in biology is becoming a big thing. One thing that would be a huge benefit to the field that Huggingface Datasets doesn't currently have is native support for **sparse arrays**. Arrow has support for sparse tensors. https://arrow.apache.org/docs/format/Other.html#sparse-tensor It would be ...
OPEN
2025-01-21T20:14:35
2025-01-30T14:06:45
null
https://github.com/huggingface/datasets/issues/7377
JulesGM
1
[ "enhancement" ]
7,375
Error with vllm batch inference
### Describe the bug ![Image](https://github.com/user-attachments/assets/3d958e43-28dc-4467-9333-5990c7af3b3f) ### Steps to reproduce the bug ![Image](https://github.com/user-attachments/assets/3067eeca-a54d-4956-b0fd-3fc5ea93dabb) ### Expected behavior ![Image](https://github.com/user-attachments/assets/77d32936-...
OPEN
2025-01-21T03:22:23
2025-01-30T14:02:40
null
https://github.com/huggingface/datasets/issues/7375
YuShengzuishuai
1
[]
7,373
Excessive RAM Usage After Dataset Concatenation concatenate_datasets
### Describe the bug When loading a dataset from disk, concatenating it, and starting the training process, the RAM usage progressively increases until the kernel terminates the process due to excessive memory consumption. https://github.com/huggingface/datasets/issues/2276 ### Steps to reproduce the bug ```python ...
OPEN
2025-01-16T16:33:10
2025-03-27T17:40:59
null
https://github.com/huggingface/datasets/issues/7373
sam-hey
3
[]
7,372
Inconsistent Behavior Between `load_dataset` and `load_from_disk` When Loading Sharded Datasets
### Description I encountered an inconsistency in behavior between `load_dataset` and `load_from_disk` when loading sharded datasets. Here is a minimal example to reproduce the issue: #### Code 1: Using `load_dataset` ```python from datasets import Dataset, load_dataset # First save with max_shard_size=10 Dataset.fr...
OPEN
2025-01-16T05:47:20
2025-01-16T05:47:20
null
https://github.com/huggingface/datasets/issues/7372
gaohongkui
0
[]
7,371
500 Server error with pushing a dataset
### Describe the bug Suddenly, I started getting this error message saying it was an internal error. `Error creating/pushing dataset: 500 Server Error: Internal Server Error for url: https://huggingface.co/api/datasets/ll4ma-lab/grasp-dataset/commit/main (Request ID: Root=1-6787f0b7-66d5bd45413e481c4c2fb22d;670d04ff-...
OPEN
2025-01-15T18:23:02
2025-01-15T20:06:05
null
https://github.com/huggingface/datasets/issues/7371
martinmatak
1
[]
7,369
Importing dataset gives unhelpful error message when filenames in metadata.csv are not found in the directory
### Describe the bug While importing an audiofolder dataset where the names of the audio files don't correspond to the filenames in the metadata.csv, we get an unclear error message that is not helpful for debugging, i.e. ``` ValueError: Instruction "train" corresponds to no data! ``` ### Steps to reproduce the ...
OPEN
2025-01-14T13:53:21
2025-01-14T15:05:51
null
https://github.com/huggingface/datasets/issues/7369
svencornetsdegroot
1
[]
7,366
Dataset.from_dict() can't handle large dict
### Describe the bug I have 26,000,000 3-tuples. When I use Dataset.from_dict() to load them, neither a .py script nor a Jupyter notebook can run successfully. This is my code: ``` # len(example_data) is 26,000,000, 'diff' is a text diff1_list = [example_data[i].texts[0] for i in range(len(example_data))] diff2_list =...
OPEN
2025-01-11T02:05:21
2025-01-11T02:05:21
null
https://github.com/huggingface/datasets/issues/7366
CSU-OSS
0
[]
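For inputs the size of issue 7366 (26M tuples), building giant Python lists for `Dataset.from_dict` keeps everything in RAM at once; streaming through `Dataset.from_generator` avoids that. A sketch, with made-up field names:

```python
def triples_to_examples(triples):
    # Yield one example dict at a time instead of materialising three
    # 26M-element lists; in real use, pass this to
    # Dataset.from_generator(lambda: triples_to_examples(example_data)).
    for diff1, diff2, label in triples:
        yield {"diff1": diff1, "diff2": diff2, "label": label}
```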
7,365
A parameter is specified but not used in datasets.arrow_dataset.Dataset.from_pandas()
### Describe the bug I am interested in creating train, test and eval splits from a pandas DataFrame, so I was looking at the options available. I noticed the split parameter and hoped to use it to generate the 3 splits at once; however, while trying to understand the code, I noticed that it ha...
OPEN
2025-01-10T13:39:33
2025-01-10T13:39:33
null
https://github.com/huggingface/datasets/issues/7365
NourOM02
0
[]
7,364
API endpoints for gated dataset access requests
### Feature request I would like a programmatic way of requesting access to gated datasets. The current solution to gain access forces me to visit a website and physically click an "agreement" button (as per the [documentation](https://huggingface.co/docs/hub/en/datasets-gated#access-gated-datasets-as-a-user)). An i...
CLOSED
2025-01-09T06:21:20
2025-01-09T11:17:40
2025-01-09T11:17:20
https://github.com/huggingface/datasets/issues/7364
jerome-white
3
[ "enhancement" ]
7,363
ImportError: To support decoding images, please install 'Pillow'.
### Describe the bug Following this tutorial locally using a MacBook and VSCode: https://huggingface.co/docs/diffusers/en/tutorials/basic_training This line of code: for i, image in enumerate(dataset[:4]["image"]): throws: ImportError: To support decoding images, please install 'Pillow'. Pillow is installed. ###...
OPEN
2025-01-08T02:22:57
2025-05-28T14:56:53
null
https://github.com/huggingface/datasets/issues/7363
jamessdixon
4
[]
7,362
HuggingFace CLI dataset download raises error
### Describe the bug Trying to download Hugging Face datasets using Hugging Face CLI raises error. This error only started after December 27th, 2024. For example: ``` huggingface-cli download --repo-type dataset gboleda/wikicorpus Traceback (most recent call last): File "/home/ubuntu/test_venv/bin/huggingface...
CLOSED
2025-01-07T21:03:30
2025-01-08T15:00:37
2025-01-08T14:35:52
https://github.com/huggingface/datasets/issues/7362
ajayvohra2005
3
[]
7,360
error when loading dataset in Hugging Face: NoneType error is not callable
### Describe the bug I hit an error when running a notebook provided by Hugging Face. ``` --------------------------------------------------------------------------- TypeError Traceback (most recent call last) Cell In[2], line 5 3 # Load the enhancers dat...
OPEN
2025-01-07T02:11:36
2025-02-24T13:32:52
null
https://github.com/huggingface/datasets/issues/7360
nanu23333
5
[]
7,359
There are multiple 'mteb/arguana' configurations in the cache: default, corpus, queries with HF_HUB_OFFLINE=1
### Describe the bug Hey folks, I am trying to run this code - ```python from datasets import load_dataset, get_dataset_config_names ds = load_dataset("mteb/arguana") ``` with HF_HUB_OFFLINE=1 But I get the following error - ```python Using the latest cached version of the dataset since mteb/arguana...
OPEN
2025-01-06T17:42:49
2025-01-06T17:43:31
null
https://github.com/huggingface/datasets/issues/7359
Bhavya6187
1
[]
7,357
Python process aborded with GIL issue when using image dataset
### Describe the bug The issue is visible only with the latest `datasets==3.2.0`. When using image dataset the Python process gets aborted right before the exit with the following error: ``` Fatal Python error: PyGILState_Release: thread state 0x7fa1f409ade0 must be current when releasing Python runtime state: f...
OPEN
2025-01-06T11:29:30
2025-09-30T23:01:53
null
https://github.com/huggingface/datasets/issues/7357
AlexKoff88
4
[]
7,356
How about adding a feature to pass the key when performing map on DatasetDict?
### Feature request Add a feature to pass the key of the DatasetDict when performing map ### Motivation I often preprocess using map on DatasetDict. Sometimes, I need to preprocess train and valid data differently depending on the task. So, I thought it would be nice to pass the key (like train, valid) when perf...
CLOSED
2025-01-06T08:13:52
2025-03-24T10:57:47
2025-03-24T10:57:47
https://github.com/huggingface/datasets/issues/7356
jp1924
6
[ "enhancement" ]
7,355
Not available datasets[audio] on python 3.13
### Describe the bug This is the error I got; it seems the numba package does not support Python 3.13 PS C:\Users\sergi\Documents> pip install datasets[audio] Defaulting to user installation because normal site-packages is not writeable Collecting datasets[audio] Using cached datasets-3.2.0-py3-none-any.whl.metada...
OPEN
2025-01-04T18:37:08
2025-06-28T00:26:19
null
https://github.com/huggingface/datasets/issues/7355
sergiosinlimites
3
[]
7,354
A module that was compiled using NumPy 1.x cannot be run in NumPy 2.0.2 as it may crash. To support both 1.x and 2.x versions of NumPy, modules must be compiled with NumPy 2.0. Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.
### Describe the bug Following this tutorial: https://huggingface.co/docs/diffusers/en/tutorials/basic_training and running it locally using VSCode on my MacBook. The first line in the tutorial fails: from datasets import load_dataset dataset = load_dataset('huggan/smithsonian_butterflies_subset', split="train"). w...
CLOSED
2025-01-04T18:30:17
2025-01-08T02:20:58
2025-01-08T02:20:58
https://github.com/huggingface/datasets/issues/7354
jamessdixon
1
[]
7,347
Converting Arrow to WebDataset TAR Format for Offline Use
### Feature request Hi, I've downloaded an Arrow-formatted dataset offline using Hugging Face's datasets library by: ``` import json from datasets import load_dataset dataset = load_dataset("pixparse/cc3m-wds") dataset.save_to_disk("./cc3m_1") ``` now I need to convert it to WebDataset's TAR form...
CLOSED
2024-12-27T01:40:44
2024-12-31T17:38:00
2024-12-28T15:38:03
https://github.com/huggingface/datasets/issues/7347
katie312
4
[ "enhancement" ]
7,346
OSError: Invalid flatbuffers message.
### Describe the bug When loading large 2D arrays (1000 × 1152) in large numbers (2,000 arrays in this case) with `load_dataset`, the error message `OSError: Invalid flatbuffers message` is reported. When only 300 arrays of this size (1000 × 1152) are stored, they can be loaded correctly. When 2,00...
CLOSED
2024-12-25T11:38:52
2025-01-09T14:25:29
2025-01-09T14:25:05
https://github.com/huggingface/datasets/issues/7346
antecede
3
[]
7,345
Different behaviour of IterableDataset.map vs Dataset.map with remove_columns
### Describe the bug The following code ```python import datasets as hf ds1 = hf.Dataset.from_list([{'i': i} for i in [0,1]]) #ds1 = ds1.to_iterable_dataset() ds2 = ds1.map( lambda i: {'i': i+1}, input_columns = ['i'], remove_columns = ['i'] ) list(ds2) ``` produces ```python [{'i': ...
CLOSED
2024-12-25T07:36:48
2025-01-07T11:56:42
2025-01-07T11:56:42
https://github.com/huggingface/datasets/issues/7345
vttrifonov
1
[]
7,344
HfHubHTTPError: 429 Client Error: Too Many Requests for URL when trying to access SlimPajama-627B or c4 on TPUs
### Describe the bug I am trying to run some trainings on Google's TPUs using Huggingface's DataLoader on [SlimPajama-627B](https://huggingface.co/datasets/cerebras/SlimPajama-627B) and [c4](https://huggingface.co/datasets/allenai/c4), but I end up running into `429 Client Error: Too Many Requests for URL` error when ...
CLOSED
2024-12-22T16:30:07
2025-01-15T05:32:00
2025-01-15T05:31:58
https://github.com/huggingface/datasets/issues/7344
clankur
2
[]
7,343
[Bug] Inconsistent behavior of data_files and data_dir in load_dataset method.
### Describe the bug Inconsistent operation of data_files and data_dir in load_dataset method. ### Steps to reproduce the bug # First I have three files, named 'train.json', 'val.json', 'test.json'. Each one has a simple dict `{text:'aaa'}`. Their path are `/data/train.json`, `/data/val.json`, `/data/test.jso...
CLOSED
2024-12-19T14:31:27
2025-01-03T15:54:09
2025-01-03T15:54:09
https://github.com/huggingface/datasets/issues/7343
JasonCZH4
4
[]
7,337
One or several metadata.jsonl were found, but not in the same directory or in a parent directory of
### Describe the bug ImageFolder with metadata.jsonl error. I downloaded liuhaotian/LLaVA-CC3M-Pretrain-595K locally from Hugging Face. Following the tutorial in https://huggingface.co/docs/datasets/image_dataset#image-captioning, I only put images.zip and the metadata.jsonl containing the information in the same folder. How...
OPEN
2024-12-17T12:58:43
2025-01-03T15:28:13
null
https://github.com/huggingface/datasets/issues/7337
mst272
1
[]
7,336
Clarify documentation or Create DatasetCard
### Feature request I noticed that you can use a Model Card instead of a Dataset Card when pushing a dataset to the Hub, but this isn’t clearly mentioned in [the docs.](https://huggingface.co/docs/datasets/dataset_card) - Update the docs to clarify that a Model Card can work for datasets too. - It might be worth c...
OPEN
2024-12-17T12:01:00
2024-12-17T12:01:00
null
https://github.com/huggingface/datasets/issues/7336
August-murr
0
[ "enhancement" ]
7,335
Too many open files: '/root/.cache/huggingface/token'
### Describe the bug I ran this code: ``` from datasets import load_dataset dataset = load_dataset("common-canvas/commoncatalog-cc-by", cache_dir="/datadrive/datasets/cc", num_proc=1000) ``` And got this error. Before it was some other file though (like something...incomplete) running ``` ulimit -n 8192 ...
OPEN
2024-12-16T21:30:24
2024-12-16T21:30:24
null
https://github.com/huggingface/datasets/issues/7335
kopyl
0
[]
7,334
TypeError: Value.__init__() missing 1 required positional argument: 'dtype'
### Describe the bug ds = load_dataset( "./xxx.py", name="default", split="train", ) The datasets library does not support debugging locally anymore... ### Steps to reproduce the bug ``` from datasets import load_dataset ds = load_dataset( "./repo.py", name="default", split="train", ) ...
OPEN
2024-12-15T04:08:46
2025-10-30T09:05:53
null
https://github.com/huggingface/datasets/issues/7334
null
3
[]
7,327
.map() is not caching and RAM goes OOM
### Describe the bug I'm trying to run a fairly simple map that converts a dataset into numpy arrays. However, it just piles up in memory and doesn't write to disk. I've tried multiple cache techniques such as specifying the cache dir, setting max mem, +++ but none seem to work. What am I missing here? ### Steps to...
OPEN
2024-12-13T14:22:56
2025-02-10T10:42:38
null
https://github.com/huggingface/datasets/issues/7327
simeneide
1
[]
7,326
Remove upper bound for fsspec
### Describe the bug As also raised by @cyyever in https://github.com/huggingface/datasets/pull/7296 and @NeilGirdhar in https://github.com/huggingface/datasets/commit/d5468836fe94e8be1ae093397dd43d4a2503b926#commitcomment-140952162 , `datasets` has a problematic version constraint on `fsspec`. In our case this c...
OPEN
2024-12-13T11:35:12
2025-01-03T15:34:37
null
https://github.com/huggingface/datasets/issues/7326
fellhorn
1
[]
7,323
Unexpected cache behaviour using load_dataset
### Describe the bug Following the [Cache management](https://huggingface.co/docs/datasets/en/cache) documentation and previous behaviour from datasets version 2.18.0, one is able to change the cache directory. Previously, all downloaded/extracted/etc. files were found in this folder. As I have recently updated to the latest v...
CLOSED
2024-12-12T14:03:00
2025-01-31T11:34:24
2025-01-31T11:34:24
https://github.com/huggingface/datasets/issues/7323
Moritz-Wirth
1
[]
7,322
ArrowInvalid: JSON parse error: Column() changed from object to array in row 0
### Describe the bug Encountering an error while loading the ```liuhaotian/LLaVA-Instruct-150K dataset```. ### Steps to reproduce the bug ``` from datasets import load_dataset fw =load_dataset("liuhaotian/LLaVA-Instruct-150K") ``` Error: ``` ArrowInvalid Traceback (most recen...
OPEN
2024-12-11T08:41:39
2025-07-15T13:06:55
null
https://github.com/huggingface/datasets/issues/7322
Polarisamoon
4
[]
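The `ArrowInvalid: JSON parse error: Column() changed from object to array` failure above usually means a JSON field holds a single object in some rows and an array of objects in others, so Arrow cannot infer one column type. A minimal stdlib sketch of one pre-processing workaround (field name and row shapes are hypothetical, not taken from the actual dataset):

```python
def normalize_column(rows, key):
    """Coerce a field that is sometimes a dict and sometimes a list of
    dicts into a consistent list-of-dicts shape, so Arrow can infer a
    single type for the column before loading."""
    for row in rows:
        value = row.get(key)
        if value is not None and not isinstance(value, list):
            row[key] = [value]
    return rows

# Example: rows whose "conversations" field mixes both shapes.
rows = [
    {"id": 1, "conversations": {"from": "human", "value": "hi"}},
    {"id": 2, "conversations": [{"from": "gpt", "value": "hello"}]},
]
rows = normalize_column(rows, "conversations")
```

After normalizing and re-serializing the JSON, every row presents the column as a list, which sidesteps the mixed-type inference error.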
7,321
ImportError: cannot import name 'set_caching_enabled' from 'datasets'
### Describe the bug Traceback (most recent call last): File "/usr/local/lib/python3.10/runpy.py", line 187, in _run_module_as_main mod_name, mod_spec, code = _get_module_details(mod_name, _Error) File "/usr/local/lib/python3.10/runpy.py", line 110, in _get_module_details __import__(pkg_name) File "...
OPEN
2024-12-11T01:58:46
2024-12-11T13:32:15
null
https://github.com/huggingface/datasets/issues/7321
sankexin
2
[]
7,320
ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label']
### Describe the bug I am trying to create a PEFT model from DISTILBERT model, and run a training loop. However, the trainer.train() is giving me this error: ValueError: You should supply an encoding or a list of encodings to this method that includes input_ids, but you provided ['label'] Here is my code: ### St...
CLOSED
2024-12-10T20:23:11
2024-12-10T23:22:23
2024-12-10T23:22:23
https://github.com/huggingface/datasets/issues/7320
atrompeterog
1
[]
7,318
Introduce support for PDFs
### Feature request The idea (discussed in the Discord server with @lhoestq ) is to have a Pdf type like Image/Audio/Video. For example [Video](https://github.com/huggingface/datasets/blob/main/src/datasets/features/video.py) was recently added and contains how to decode a video file encoded in a dictionary like {"pat...
OPEN
2024-12-10T16:59:48
2024-12-12T18:38:13
null
https://github.com/huggingface/datasets/issues/7318
yabramuvdi
6
[ "enhancement" ]
7,313
Cannot create a dataset with relative audio path
### Describe the bug Hello! I want to create a dataset of parquet files, with audios stored as separate .mp3 files. However, it says "No such file or directory" (see the reproducing code). ### Steps to reproduce the bug Creating a dataset ``` from pathlib import Path from datasets import Dataset, load_datas...
OPEN
2024-12-09T07:34:20
2025-04-19T07:13:08
null
https://github.com/huggingface/datasets/issues/7313
sedol1339
4
[]
7,311
How to get the original dataset name with username?
### Feature request The issue is related to Ray Data https://github.com/ray-project/ray/issues/49008, which requires checking whether the dataset is the original one right after `load_dataset`, with the parquet files already available on the HF hub. The solution used now is to get the dataset name, config and split, then `...
OPEN
2024-12-08T07:18:14
2025-01-09T10:48:02
null
https://github.com/huggingface/datasets/issues/7311
npuichigo
2
[ "enhancement" ]
7,310
Enable the Audio Feature to decode / read with an offset + duration
### Feature request For most large speech datasets, we do not wish to generate hundreds of millions of small audio samples. Instead, it is quite common to provide larger audio files with a frame offset (soundfile start and stop arguments). We should be able to pass these arguments to Audio() (column ID corresponding in t...
OPEN
2024-12-07T22:01:44
2024-12-09T21:09:46
null
https://github.com/huggingface/datasets/issues/7310
TParcollet
2
[ "enhancement" ]
7,315
Allow manual configuration of Dataset Viewer for datasets not created with the `datasets` library
#### **Problem Description** Currently, the Hugging Face Dataset Viewer automatically interprets dataset fields for datasets created with the `datasets` library. However, for datasets pushed directly via `git`, the Viewer: - Defaults to generic columns like `label` with `null` values if no explicit mapping is provide...
OPEN
2024-12-07T16:37:12
2024-12-11T11:05:22
null
https://github.com/huggingface/datasets/issues/7315
diarray-hub
13
[]
7,306
Creating new dataset from list loses information. (Audio Information Lost - either Datatype or Values).
### Describe the bug When creating a dataset from a list of datapoints, information about the individual items is lost. Specifically, when creating a dataset from a list of datapoints (from another dataset), either the datatype is lost or the values are lost. See examples below. -> What is the best way to create...
OPEN
2024-12-05T09:07:53
2024-12-05T09:09:38
null
https://github.com/huggingface/datasets/issues/7306
ai-nikolai
0
[]
7,305
Build Documentation Test Fails Due to "Bad Credentials" Error
### Describe the bug The `Build documentation / build / build_main_documentation (push)` job is consistently failing during the "Syncing repository" step. The error occurs when attempting to determine the default branch name, resulting in "Bad credentials" errors. ### Steps to reproduce the bug 1. Trigger the `build...
OPEN
2024-12-03T20:22:54
2025-01-08T22:38:14
null
https://github.com/huggingface/datasets/issues/7305
ruidazeng
2
[]
7,303
DataFilesNotFoundError for datasets LM1B
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/billion-word-benchmark/lm1b ### Steps to reproduce the bug `dataset = datasets.load_dataset('lm1b', split=split)` ### Expected behavior `Traceback (most recent call last): File "/home/hml/projects/DeepLearning/Generative_model/Diffusio...
CLOSED
2024-11-29T17:27:45
2024-12-11T13:22:47
2024-12-11T13:22:47
https://github.com/huggingface/datasets/issues/7303
hml1996-fight
1
[]
7,299
Efficient Image Augmentation in Hugging Face Datasets
### Describe the bug I'm using the Hugging Face datasets library to load images in batches and would like to apply a torchvision transform to solve the inconsistent image sizes in the dataset and apply some on-the-fly image augmentation. I can only think of using the collate_fn, but that seems quite inefficient. ...
OPEN
2024-11-26T16:50:32
2024-11-26T16:53:53
null
https://github.com/huggingface/datasets/issues/7299
fabiozappo
0
[]
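The on-the-fly augmentation asked about in issue 7299 is what `Dataset.set_transform` is for: the transform runs at access time, so augmented samples are never materialized in storage. A stdlib sketch of that lazy-access pattern (the class, the `pixels` column, and the padding "augmentation" are all hypothetical stand-ins):

```python
class LazyTransform:
    """Apply a transform at access time, the way datasets'
    set_transform does, so transformed samples are never stored."""

    def __init__(self, rows, transform):
        self.rows = rows
        self.transform = transform

    def __len__(self):
        return len(self.rows)

    def __getitem__(self, i):
        # The transform is re-applied on every access, which is what
        # makes random augmentations differ between epochs.
        return self.transform(self.rows[i])

# Hypothetical "augmentation": pad every row to a fixed width.
def pad_to_width(row, width=4):
    row = dict(row)
    row["pixels"] = row["pixels"] + [0] * (width - len(row["pixels"]))
    return row

ds = LazyTransform([{"pixels": [1, 2]}, {"pixels": [1, 2, 3]}], pad_to_width)
```

With the real library, `dataset.set_transform(fn)` plays the role of the constructor argument here and avoids routing augmentation through `collate_fn`.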
7,298
loading dataset issue with load_dataset() when training controlnet
### Describe the bug I'm unable to load my dataset for [controlnet training](https://github.com/huggingface/diffusers/blob/074e12358bc17e7dbe111ea4f62f05dbae8a49d5/examples/controlnet/train_controlnet.py#L606) using load_dataset(). However, load_from_disk() seems to work? Would appreciate it if someone can explain why ...
OPEN
2024-11-26T10:50:18
2024-11-26T10:50:18
null
https://github.com/huggingface/datasets/issues/7298
sarahahtee
0
[]
7,297
wrong return type for `IterableDataset.shard()`
### Describe the bug `IterableDataset.shard()` has the wrong typing for its return as `"Dataset"`. It should be `"IterableDataset"`. Makes my IDE unhappy. ### Steps to reproduce the bug look at [the source code](https://github.com/huggingface/datasets/blob/main/src/datasets/iterable_dataset.py#L2668)? ### Expected ...
CLOSED
2024-11-22T17:25:46
2024-12-03T14:27:27
2024-12-03T14:27:03
https://github.com/huggingface/datasets/issues/7297
ysngshn
1
[]
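The typing fix for issue 7297 amounts to changing the annotation string from `"Dataset"` to `"IterableDataset"` in the override. A more general pattern, sketched here with toy classes (not the library's real ones), is to annotate `self` with a `TypeVar` (or `typing.Self` on Python 3.11+) so the return type follows the subclass automatically:

```python
from typing import TypeVar

T = TypeVar("T", bound="Dataset")

class Dataset:
    def shard(self: T, num_shards: int, index: int) -> T:
        # Annotating self with a TypeVar (or typing.Self on 3.11+)
        # makes the declared return type track whichever subclass
        # the method is called on, keeping IDEs happy.
        return self

class IterableDataset(Dataset):
    pass

sharded = IterableDataset().shard(num_shards=2, index=0)
```

With this annotation an IDE infers `sharded` as `IterableDataset` rather than the base `Dataset`.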
7,295
[BUG]: Streaming from S3 triggers `unexpected keyword argument 'requote_redirect_url'`
### Describe the bug Note that this bug is only triggered when `streaming=True`. #5459 introduced always calling fsspec with `client_kwargs={"requote_redirect_url": False}`, which seems to have incompatibility issues even in the newest versions. Analysis of what's happening: 1. `datasets` passes the `client_kw...
OPEN
2024-11-19T12:23:36
2024-11-19T13:01:53
null
https://github.com/huggingface/datasets/issues/7295
casper-hansen
0
[]
7,292
DataFilesNotFoundError for datasets `OpenMol/PubChemSFT`
### Describe the bug Cannot load the dataset https://huggingface.co/datasets/OpenMol/PubChemSFT ### Steps to reproduce the bug ``` from datasets import load_dataset dataset = load_dataset('OpenMol/PubChemSFT') ``` ### Expected behavior ``` -----------------------------------------------------------------------...
CLOSED
2024-11-16T11:54:31
2024-11-19T00:53:00
2024-11-19T00:52:59
https://github.com/huggingface/datasets/issues/7292
xnuohz
3
[]
7,291
Why return_tensors='pt' doesn't work?
### Describe the bug I tried to add input_ids to the dataset with map(), and I used return_tensors='pt', but why did I get the result back with the type of List? ![image](https://github.com/user-attachments/assets/ab046e20-2174-4e91-9cd6-4a296a43e83c) ### Steps to reproduce the bug ![image](https://github.com/user-attac...
OPEN
2024-11-15T15:01:23
2024-11-18T13:47:08
null
https://github.com/huggingface/datasets/issues/7291
bw-wang19
2
[]
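The behaviour in issue 7291 follows from how `.map()` works: its outputs are serialized into Arrow-backed storage, which keeps plain Python lists, so any tensor returned by the mapped function comes back as a list. The usual remedy is `dataset.set_format("torch")`, which rebuilds tensors lazily at access time. A stdlib sketch of that round-trip (function names are illustrative, and `tuple` stands in for `torch.tensor`):

```python
def arrow_store(column):
    # Storage normalizes everything to plain lists, which is why a
    # tensor returned from .map() reappears as a List.
    return [list(x) for x in column]

def with_format(stored, convert):
    # set_format("torch") works like this: the conversion happens
    # lazily on access, not at map time.
    return [convert(x) for x in stored]

stored = arrow_store([(1, 2), (3, 4)])
formatted = with_format(stored, tuple)
```

So `return_tensors='pt'` inside a mapped function is not "ignored" so much as undone by serialization; the format applied on read is what determines the returned type.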
7,290
`Dataset.save_to_disk` hangs when using num_proc > 1
### Describe the bug Hi, I'm encountered a small issue when saving datasets that led to the saving taking up to multiple hours. Specifically, [`Dataset.save_to_disk`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.Dataset.save_to_disk) is a lot slower when using `num_proc>1` than...
OPEN
2024-11-14T05:25:13
2025-11-24T09:43:03
null
https://github.com/huggingface/datasets/issues/7290
JohannesAck
4
[]
7,289
Dataset viewer displays wrong statistics
### Describe the bug In [my dataset](https://huggingface.co/datasets/speedcell4/opus-unigram2), there is a column called `lang2`, and there are 94 different classes in total, but the viewer says there are 83 values only. This issue only arises in the `train` split. The total number of values is also 94 in the `test`...
CLOSED
2024-11-11T03:29:27
2024-11-13T13:02:25
2024-11-13T13:02:25
https://github.com/huggingface/datasets/issues/7289
speedcell4
1
[]
7,287
Support for identifier-based automated split construction
### Feature request As far as I understand, automated construction of splits for hub datasets is currently based on either file names or directory structure ([as described here](https://huggingface.co/docs/datasets/en/repository_structure)) It would seem to be pretty useful to also allow splits to be based on ide...
OPEN
2024-11-10T07:45:19
2024-11-19T14:37:02
null
https://github.com/huggingface/datasets/issues/7287
alex-hh
3
[ "enhancement" ]
7,286
Concurrent loading in `load_from_disk` - `num_proc` as a param
### Feature request https://github.com/huggingface/datasets/pull/6464 mentions a `num_proc` param while loading a dataset from disk, but I can't find that anywhere in the documentation or code ### Motivation Make loading large datasets from disk faster ### Your contribution Happy to contribute if given pointers
CLOSED
2024-11-08T23:21:40
2024-11-09T16:14:37
2024-11-09T16:14:37
https://github.com/huggingface/datasets/issues/7286
unography
0
[ "enhancement" ]
7,282
Faulty datasets.exceptions.ExpectedMoreSplitsError
### Describe the bug Trying to download only the 'validation' split of my dataset; instead hit the error `datasets.exceptions.ExpectedMoreSplitsError`. Appears to be the same undesired behavior as reported in [#6939](https://github.com/huggingface/datasets/issues/6939), but with `data_files`, not `data_dir`. Her...
OPEN
2024-11-07T20:15:01
2024-11-07T20:15:42
null
https://github.com/huggingface/datasets/issues/7282
meg-huggingface
0
[]
7,281
File not found error
### Describe the bug I get a FileNotFoundError: <img width="944" alt="image" src="https://github.com/user-attachments/assets/1336bc08-06f6-4682-a3c0-071ff65efa87"> ### Steps to reproduce the bug See screenshot. ### Expected behavior I want to load one audiofile from the dataset. ### Environmen...
OPEN
2024-11-07T09:04:49
2024-11-07T09:22:43
null
https://github.com/huggingface/datasets/issues/7281
MichielBontenbal
1
[]
7,280
Add filename in error message when ReadError or similar occur
Please update error messages to include relevant information for debugging when loading datasets with `load_dataset()` that may have a few corrupted files. Whenever downloading a full dataset, some files might be corrupted (either at the source or from downloading corruption). However the errors often only let me k...
OPEN
2024-11-07T06:00:53
2024-11-20T13:23:12
null
https://github.com/huggingface/datasets/issues/7280
elisa-aleman
5
[]
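The request in issue 7280 boils down to wrapping file reads so that the exception carries the offending path. A minimal sketch of the pattern (the wrapper name is made up; it is not a `datasets` API):

```python
import tarfile

def safe_list_archive(path):
    try:
        with tarfile.open(path) as tf:
            return tf.getnames()
    except (tarfile.ReadError, OSError) as err:
        # Chain the original error but put the offending path in the
        # message, so a corrupted shard can be located and
        # re-downloaded without a manual search.
        raise RuntimeError(f"failed to read archive: {path!r}") from err
```

Raising with `from err` preserves the original traceback while the new message pinpoints which file in a multi-shard download is broken.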
7,276
Accessing audio dataset value throws Format not recognised error
### Describe the bug Accessing audio dataset value throws `Format not recognised error` ### Steps to reproduce the bug **code:** ```py from datasets import load_dataset dataset = load_dataset("fawazahmed0/bug-audio") for data in dataset["train"]: print(data) ``` **output:** ```bash (mypy) ...
OPEN
2024-11-04T05:59:13
2024-11-09T18:51:52
null
https://github.com/huggingface/datasets/issues/7276
fawazahmed0
3
[]
7,275
load_dataset
### Describe the bug I am performing two operations I saw in a Hugging Face tutorial (Fine-tune a language model), and I am defining every aspect inside the mapped functions, including some imports of the library, because it doesn't recognize anything not defined outside the function where the dataset elements are being mapp...
OPEN
2024-11-04T03:01:44
2024-11-04T03:01:44
null
https://github.com/huggingface/datasets/issues/7275
santiagobp99
0
[]
7,269
Memory leak when streaming
### Describe the bug I try to use a dataset with streaming=True; the issue I have is that the RAM usage becomes higher and higher until it is no longer sustainable. I understand that Hugging Face stores data in RAM during streaming, and the more workers there are in the dataloader, the more shards will be stored in ...
OPEN
2024-10-31T13:33:52
2025-12-09T18:18:36
null
https://github.com/huggingface/datasets/issues/7269
Jourdelune
11
[]
7,268
load_from_disk
### Describe the bug I have data saved with save_to_disk. The data is big (700GB). When I try loading it, the only option is load_from_disk, and this function copies the data to a tmp directory, causing me to run out of disk space. Is there an alternative solution to that? ### Steps to reproduce the bug when trying ...
OPEN
2024-10-31T11:51:56
2025-07-01T08:42:17
null
https://github.com/huggingface/datasets/issues/7268
ghaith-mq
3
[]
7,267
Source installation fails on Macintosh with python 3.10
### Describe the bug Hi, Decord is a dev dependency that has not been maintained for a couple of years. It does not have an ARM package available, rendering it uninstallable on non-Intel-based Macs. The suggestion is to move to eva-decord (https://github.com/georgia-tech-db/eva-decord), which doesn't have this problem. Happy to...
OPEN
2024-10-31T10:18:45
2024-11-04T22:18:06
null
https://github.com/huggingface/datasets/issues/7267
mayankagarwals
1
[]
7,266
The dataset viewer should be available soon. Please retry later.
### Describe the bug After waiting for 2 hours, it still presents "The dataset viewer should be available soon. Please retry later." ### Steps to reproduce the bug dataset link: https://huggingface.co/datasets/BryanW/HI_EDIT ### Expected behavior Present the dataset viewer. ### Environment info NA
CLOSED
2024-10-30T16:32:00
2024-10-31T03:48:11
2024-10-31T03:48:10
https://github.com/huggingface/datasets/issues/7266
viiika
1
[]
7,261
Cannot load the cache when mapping the dataset
### Describe the bug I'm training the flux controlnet. The train_dataset.map() takes a long time to finish. However, when I killed one training process and wanted to restart a new training with the same dataset, I couldn't reuse the mapped result even though I defined the cache dir for the dataset. with accelerator.main_process_...
OPEN
2024-10-29T08:29:40
2025-03-24T13:27:55
null
https://github.com/huggingface/datasets/issues/7261
zhangn77
2
[]
7,260
cache can't cleaned or disabled
### Describe the bug I tried the following ways, but the cache can't be disabled. I have 2TB of data, but I also got more than 2TB of cache files, which puts pressure on storage. I need to disable the cache or clean it immediately after processing. None of the following approaches work, please give some help! ```python from datasets import ...
OPEN
2024-10-29T03:15:28
2024-12-11T09:04:52
null
https://github.com/huggingface/datasets/issues/7260
charliedream1
1
[]
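For issue 7260, the relevant real calls are `datasets.disable_caching()` (stop writing new cache files) and `Dataset.cleanup_cache_files()` (delete an existing dataset's cache files). The fingerprint-keyed caching semantics those calls toggle can be sketched with a stdlib toy (this class is an illustration, not the library's implementation):

```python
class Processor:
    """Toy version of datasets' fingerprint cache: a map result is
    stored once per fingerprint unless caching is disabled."""

    def __init__(self):
        self.cache = {}
        self.caching_enabled = True

    def disable_caching(self):
        # Mirrors datasets.disable_caching(): nothing new is written.
        self.caching_enabled = False

    def map(self, fingerprint, fn, data):
        if self.caching_enabled and fingerprint in self.cache:
            return self.cache[fingerprint]
        result = [fn(x) for x in data]
        if self.caching_enabled:
            self.cache[fingerprint] = result
        return result
```

Note that with caching disabled, re-running the same map recomputes from scratch; the trade-off is storage versus recomputation, which is why the library caches by default.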