| html_url | title | comments | body | number |
|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/6775 | IndexError: Invalid key: 0 is out of bounds for size 0 | [
"Same problem.",
"Hi! You should be able to fix this by passing `remove_unused_columns=False` to the `transformers` `TrainingArguments` as explained in https://github.com/huggingface/peft/issues/1299.\r\n\r\n(I'm not familiar with Vertex AI, but I'd assume `remove_unused_columns` can be passed as a flag to the do... | ### Describe the bug
I am trying to fine-tune llama2-7b model in GCP. The notebook I am using for this can be found [here](https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/community/model_garden/model_garden_pytorch_llama2_peft_finetuning.ipynb).
When I use the dataset given in the exa... | 6,775 |
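The fix suggested in the comment above is to stop the `Trainer` from dropping dataset columns that the PEFT data collator still needs. A minimal sketch, assuming a standard `transformers` setup (the output directory is a placeholder, not from the issue):

```python
# Sketch only: keep "unused" columns so the collator still receives them.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="out",                 # placeholder
    remove_unused_columns=False,      # columns are otherwise pruned before collation
)
```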
https://github.com/huggingface/datasets/issues/6774 | Generating split is very slow when Image format is PNG | [
"I think this is due to the speed of reading a `png` image using pillow compared to a `jpg` image.\r\nNotably the same is true with `tiff`, it is even faster than `jpg` in my case."
] | ### Describe the bug
When I create a dataset, it gets stuck while generating cached data.
The image format is PNG, and it will not get stuck when the image format is jpeg.

After debugging, I know that it is b... | 6,774 |
https://github.com/huggingface/datasets/issues/6773 | Dataset on Hub re-downloads every time? | [
"The caching works as expected when I try to reproduce this locally or on Colab...",
"hi @mariosasko , Thank you for checking. I also tried running this again just now, and it seems like the `load_dataset()` caches properly (though I'll double check later).\r\n\r\nI think the issue might be in the caching of the ... | ### Describe the bug
Hi, I have a dataset on the hub [here](https://huggingface.co/datasets/manestay/borderlines). It has 1k+ downloads, which I sure is mostly just me and my colleagues working with it. It should have far fewer, since I'm using the same machine with a properly set up HF_HOME variable. However, whene... | 6,773 |
https://github.com/huggingface/datasets/issues/6771 | Datasets FileNotFoundError when trying to generate examples. | [
"Hi! I've opened a PR in the repo to fix this issue: https://huggingface.co/datasets/RitchieP/VerbaLex_voice/discussions/6",
"@mariosasko Thanks for the PR and help! Guess I could close the issue for now. Appreciate the help!"
] | ### Discussed in https://github.com/huggingface/datasets/discussions/6768
<div type='discussions-op-text'>
<sup>Originally posted by **RitchieP** April 1, 2024</sup>
Currently, I have a dataset hosted on Huggingface with a custom script [here](https://huggingface.co/datasets/RitchieP/VerbaLex_voice).
I'm loa... | 6,771 |
https://github.com/huggingface/datasets/issues/6770 | [Bug Report] `datasets==2.18.0` is not compatible with `fsspec==2023.12.2` | [
"You should be able to fix this by updating `huggingface_hub` with `pip install -U huggingface_hub`. We use this package under the hood to resolve the Hub's files."
] | ### Describe the bug
`Datasets==2.18.0` is not compatible with `fsspec==2023.12.2`.
I have to downgrade fsspec to `fsspec==2023.10.0` to make `Datasets==2.18.0` work properly.
### Steps to reproduce the bug
To reproduce the bug:
1. Make sure that `Datasets==2.18.0` and `fsspec==2023.12.2` are installed.
2. Run the following ... | 6,770 |
https://github.com/huggingface/datasets/issues/6769 | (Willing to PR) Datasets with custom python objects | [] | ### Feature request
Hi, thanks for the library! I would like to have a Hugging Face Dataset where one of its columns contains custom (non-serializable) Python objects. For example, minimal code:
```
class MyClass:
pass
dataset = datasets.Dataset.from_list([
dict(a=MyClass(), b='hello'),
])
```
It gives... | 6,769 |
https://github.com/huggingface/datasets/issues/6765 | Compatibility issue between s3fs, fsspec, and datasets | [
"Hi! Instead of running `pip install` separately for each package, you should pass all the packages to a single `pip install` call (in this case, `pip install datasets s3fs`) to let `pip` properly resolve their versions.",
"> Hi! Instead of running `pip install` separately for each package, you should pass all th... | ### Describe the bug
Here is the full error stack when installing:
```
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
datasets 2.18.0 requires fsspec[http]<=2024.2.0,>=2023.1.0, but you ... | 6,765 |
https://github.com/huggingface/datasets/issues/6764 | load_dataset can't work with symbolic links | [] | ### Feature request
Enable the `load_dataset` function to load local datasets with symbolic links.
E.g, this dataset can be loaded:
├── example_dataset/
│ ├── data/
│ │ ├── train/
│ │ │ ├── file0
│ │ │ ├── file1
│ │ ├── dev/
│ │ │ ├── file2
│ │ │ ├── file3
│ ├── metad... | 6,764 |
https://github.com/huggingface/datasets/issues/6760 | Load codeparrot/apps raising UnicodeDecodeError in datasets-2.18.0 | [
"The same error with mteb datasets.",
"Unfortunately, I'm unable to reproduce this error locally or on Colab.",
"Here is the requirements.txt from a clean virtual environment (managed by conda) where I only install `datasets` by \r\n`pip install datasets`. \r\nThe pip list:\r\n```\r\naiohttp==3.9.3\r\naiosignal... | ### Describe the bug
This happens with datasets-2.18.0; I downgraded the version to 2.14.6 fixing this temporarily.
```
Traceback (most recent call last):
File "/home/xxx/miniconda3/envs/py310/lib/python3.10/site-packages/datasets/load.py", line 2556, in load_dataset
builder_instance = load_dataset_builder... | 6,760 |
https://github.com/huggingface/datasets/issues/6759 | Persistent multi-process Pool | [] | ### Feature request
Running .map and filter functions with `num_procs` consecutively instantiates several multiprocessing pools iteratively.
As instantiating a Pool is very resource-intensive, it can become a bottleneck when filtering iteratively.
My ideas:
1. There should be an option to declare `persist... | 6,759 |
https://github.com/huggingface/datasets/issues/6758 | Passing `sample_by` to `load_dataset` when loading text data does not work | [
"Thanks for reporting! We are working on a fix."
] | ### Describe the bug
I have a dataset that consists of a bunch of text files, each representing an example. There is an undocumented `sample_by` argument for the `TextConfig` class that is used by `Text` to decide whether to split files into lines, paragraphs or take them whole. Passing `sample_by=“document”` to `load... | 6,758 |
https://github.com/huggingface/datasets/issues/6756 | Support SQLite files? | [
"You can use `Dataset.from_sql(path_to_sql_file)` already. Though we haven't added the Sql dataset builder to the `_PACKAGED_DATASETS_MODULES` list or in `_EXTENSION_TO_MODULE` to map `.sqlite` to the Sql dataset builder\r\n\r\nThis would allow to load a dataset repository with a `.sqlite` file using `load_dataset`... | ### Feature request
Support loading a dataset from a SQLite file
https://huggingface.co/datasets/severo/test_iris_sqlite/tree/main
### Motivation
SQLite is a popular file format.
### Your contribution
See discussion on slack: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1702481859117909 (internal)
In ... | 6,756 |
https://github.com/huggingface/datasets/issues/6755 | Small typo on the documentation | [
"Thanks for reporting @fostiropoulos! I've edited your comment to fix the link to the problematic line.\r\n",
"@mariosasko can i take this up?",
"#self-assign"
] | ### Describe the bug
There is a small typo on https://github.com/huggingface/datasets/blob/d5468836fe94e8be1ae093397dd43d4a2503b926/src/datasets/dataset_dict.py#L938
It should be `caching is enabled`.
### Steps to reproduce the bug
Please visit
https://github.com/huggingface/datasets/blob/d5468836fe94e... | 6,755 |
https://github.com/huggingface/datasets/issues/6753 | Type error when importing datasets on Kaggle | [
"I have the same problem \r\nIt seems that it only appears when you are using GPU \r\nIt seems to work fine with the 2.17 version though",
"Same here.",
"> I have the same problem\r\n> It seems that it only appears when you are using GPU\r\n> It seems to work fine with the 2.17 version though\r\n\r\nI downgrade... | ### Describe the bug
When trying to run
```
import datasets
print(datasets.__version__)
```
It generates the following error
```
TypeError: expected string or bytes-like object
```
It looks like It cannot find the valid versions of `fsspec`
though fsspec version is fine when I checked Via command
... | 6,753 |
https://github.com/huggingface/datasets/issues/6752 | Precision being changed from float16 to float32 unexpectedly | [
"This is because of the formatter (`torch` in this case).\r\nIt defaults to `float32`.\r\n\r\nYou can load it in `float16` using `dataset.set_format(\"torch\", dtype=torch.float16)`."
] | ### Describe the bug
I'm loading a HuggingFace Dataset for images.
I'm running a preprocessing (map operation) step that runs a few operations, one of them being conversion to float16. The Dataset features also say that the 'img' is of type float16. Whenever I take an image from that HuggingFace Dataset instance... | 6,752 |
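As the comment above explains, the `torch` formatter defaults to `float32`, and the dtype can be overridden when setting the format. A small self-contained sketch (toy data; the storage details differ from the image dataset in the issue):

```python
# Sketch: request float16 tensors from the torch formatter (default is float32).
import torch
from datasets import Dataset

ds = Dataset.from_dict({"img": [[0.5, 0.25], [1.0, 2.0]]})
ds.set_format("torch", dtype=torch.float16)
print(ds[0]["img"].dtype)  # torch.float16
```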
https://github.com/huggingface/datasets/issues/6750 | `load_dataset` requires a network connection for local download? | [
"Are you using `HF_DATASETS_OFFLINE=1` ?",
"> Are you using `HF_DATASETS_OFFLINE=1` ?\r\n\r\nThis doesn't work for me. `datasets=2.18.0`\r\n\r\n`test.py`:\r\n```\r\nimport datasets\r\n\r\ndatasets.utils.logging.set_verbosity_info()\r\n\r\nds = datasets.load_dataset('C-MTEB/AFQMC', revision='b44c3b011063adb25877c1... | ### Describe the bug
Hi all - I see that in the past a network dependency has been mistakenly introduced into `load_dataset` even for local loads. Is it possible this has happened again?
### Steps to reproduce the bug
```
>>> import datasets
>>> datasets.load_dataset("hh-rlhf")
Repo card metadata block was not ... | 6,750 |
https://github.com/huggingface/datasets/issues/6748 | Strange slicing behavior | [
"As explained in the [docs](https://huggingface.co/docs/datasets/v2.18.0/en/access#slicing), slicing a `Dataset` returns a dictionary that maps its column names to their values. So, `len(dataset[:300])=2` is expected, assuming your dataset has 2 columns (the returned dict has 2 keys, but each value in the dict has ... | ### Describe the bug
I have loaded a dataset and then sliced the first 300 samples using the `:` operator; however, the resulting dataset is not what I expected, as shown in the output below:
```bash
len(dataset)=1050324
len(dataset[:300])=2
len(dataset[0:300])=2
len(dataset.select(range(300)))=300
```
### Steps to reproduce the bug
loa... | 6,748 |
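As the linked docs explain, slicing a `Dataset` returns a plain dict mapping column names to lists, so `len()` counts columns rather than rows. A short sketch contrasting the two access patterns (toy data, not from the issue):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

sliced = ds[:3]              # dict: {"text": [...], "label": [...]}
print(len(sliced))           # 2 -> number of columns (dict keys)
print(len(sliced["text"]))   # 3 -> number of rows actually taken

subset = ds.select(range(3)) # still a Dataset
print(len(subset))           # 3
```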
https://github.com/huggingface/datasets/issues/6746 | ExpectedMoreSplits error when loading C4 dataset | [
"Hi ! We updated the `allenai/c4` repository to allow people to specify which language to load easily (the the [c4 dataset page](https://huggingface.co/datasets/allenai/c4))\r\n\r\nTo fix this issue **you can update** `datasets` and remove the mention of the legacy configuration name \"allenai--c4\":\r\n\r\n```pyth... | ### Describe the bug
I encounter bug when running the example command line
```python
python main.py \
--model decapoda-research/llama-7b-hf \
--prune_method wanda \
--sparsity_ratio 0.5 \
--sparsity_type unstructured \
--save out/llama_7b/unstructured/wanda/
```
The bug occurred ... | 6,746 |
https://github.com/huggingface/datasets/issues/6745 | Scraping the whole of github including private repos is bad; kindly stop | [
"It's not twitter here"
] | ### Feature request
https://github.com/bigcode-project/opt-out-v2 - opt out is not consent. kindly quit this ridiculous nonsense.
### Motivation
[EDITED: insults not tolerated]
### Your contribution
[EDITED: insults not tolerated] | 6,745 |
https://github.com/huggingface/datasets/issues/6744 | Option to disable file locking | [] | ### Feature request
Commands such as `load_dataset` create file locks with `filelock.FileLock`. It would be good if there were a way to disable this.
### Motivation
File locking doesn't work on all file-systems (in my case NFS mounted Weka). If the `cache_dir` only had small files then it would be possible to point ... | 6,744 |
https://github.com/huggingface/datasets/issues/6740 | Support for loading geotiff files as a part of the ImageFolder | [] | ### Feature request
Request for adding rasterio support to load geotiff as a part of ImageFolder, instead of using PIL
### Motivation
As of now, there are many datasets in HuggingFace Hub which are predominantly focussed towards RemoteSensing or are from RemoteSensing. The current ImageFolder (if I have understood c... | 6,740 |
https://github.com/huggingface/datasets/issues/6738 | Dict feature is non-nullable while nested dict feature is | [
"It looks like a bug, by default every feature should be nullable.",
"I've linked a PR with a fix :)",
"@mariosasko awesome thank you!"
] | When i try to create a `Dataset` object with None values inside a dict column, like this:
```python
from datasets import Dataset, Features, Value
Dataset.from_dict(
{
"dict": [{"a": 0, "b": 0}, None],
}, features=Features(
{"dict": {"a": Value("int16"), "b": Value("int16")}}
)
)
... | 6,738 |
https://github.com/huggingface/datasets/issues/6737 | Invalid pattern: '**' can only be an entire path component | [
"I couldn't reproduce the issue on my side on MacOS, I guess the issue comes from the recent `fsspec` on Windows.\r\n\r\nCan you try downgrading to `fsspec==2023.9.2` for now ? It would also be great to investigate this and see if we need a fix in `datasets` or `fsspec`",
"I had the same issue! \r\nDowngrading t... | ### Describe the bug
ValueError: Invalid pattern: '**' can only be an entire path component
when loading any dataset
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset("TokenBender/code_instructions_122k_alpaca_style")
### Expected behavior
loading the dataset successfully
### Environm... | 6,737 |
https://github.com/huggingface/datasets/issues/6736 | Mosaic Streaming (MDS) Support | [
"Hi ! that would be great :) Though note that `datasets` doesn't implement format-specific resuming when streaming, so in general I think it's better if users can use the mosaic-streaming library to read their MDS datasets. I wonder if they support `hf://` paths though...\r\n\r\nAnyway for those interested, the cod... | ### Feature request
I'm a huge fan of the current HF Datasets `webdataset` integration (especially the built-in streaming support). However, I'd love to upload some robotics and multimodal datasets I've processed for use with [Mosaic Streaming](https://docs.mosaicml.com/projects/streaming/en/stable/), specifically the... | 6,736 |
https://github.com/huggingface/datasets/issues/6734 | Tokenization slows towards end of dataset | [
"Hi ! First note that if the dataset is not heterogeneous / shuffled, there might be places in the data with shorter texts that are faster to tokenize.\r\n\r\nMoreover, the way `num_proc` works is by slicing the dataset and passing each slice to a process to run the `map()` function. So at the very end of `map()`, ... | ### Describe the bug
Mapped tokenization slows down substantially towards end of dataset.
The train set started off very slow, caught up to 20k, then tapered off until the end.
what's particularly strange is that the tokenization crashed a few times before due to errors with invalid tokens somewhere or corrupted down... | 6,734 |
https://github.com/huggingface/datasets/issues/6733 | EmptyDatasetError when loading dataset downloaded with HuggingFace cli | [
"Hi! `datasets` is not compatible with `huggingface_hub`'s cache structure, hence the error.\r\n\r\nYou can track https://github.com/huggingface/datasets/issues/5080 to get notified when this is implemented."
] | ### Describe the bug
I am using a cluster that does not have access to the internet when given a job. I tried downloading the dataset using the huggingface-cli command and then loading it with load_dataset but I get an error:
```raise EmptyDatasetError(f"The directory at {base_path} doesn't contain any data files... | 6,733 |
https://github.com/huggingface/datasets/issues/6731 | Unexpected behavior when using load_dataset with streaming=True in a for loop | [
"This is normal behavior in python when using `lambda`: the `i` defined in your `lambda` refers to the global variable `i` in your loop, and `i` equals to `1` when you run your `for e in res[0]` line.\r\n\r\nYou should pass `fn_kwargs` that will be passed to your `lambda` instead of using the global variable:\r\n\r... | ### Describe the bug
### My Code
```
from datasets import load_dataset
res=[]
for i in [0,1]:
di=load_dataset(
"json",
data_files='path_to.json',
split='train',
streaming=True,
).map(lambda x: {"source": i})
res.append(di)
for e in res[... | 6,731 |
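The comment above recommends binding the loop variable through `fn_kwargs` instead of closing over a global in the lambda. A minimal streaming sketch of that pattern (the JSON path is the placeholder from the issue, so this assumes such a file exists):

```python
# Sketch: pass the loop variable via fn_kwargs rather than a lambda closure.
from datasets import load_dataset

res = []
for i in [0, 1]:
    di = load_dataset(
        "json",
        data_files="path_to.json",  # placeholder path from the issue
        split="train",
        streaming=True,
    ).map(lambda x, source: {"source": source}, fn_kwargs={"source": i})
    res.append(di)
```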
https://github.com/huggingface/datasets/issues/6729 | Support zipfiles that span multiple disks? | [
"@severo were you able to solve it?",
"No. cc @albertvillanova @lhoestq @polinaeterna for an evaluation of what it would take to support this feature.",
"The underlying issue issue is that the dataset repository has used split ZIP archive files: https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream... | See https://huggingface.co/datasets/PhilEO-community/PhilEO-downstream
The dataset viewer gives the following error:
```
Error code: ConfigNamesError
Exception: BadZipFile
Message: zipfiles that span multiple disks are not supported
Traceback: Traceback (most recent call last):
F... | 6,729 |
https://github.com/huggingface/datasets/issues/6728 | Issue Downloading Certain Datasets After Setting Custom `HF_ENDPOINT` | [
"Through debugging, I found a potential solution is to modify the code in the error handling module of `huggingface_hub`: https://github.com/huggingface/huggingface_hub/commit/56d6c798c44e83d2a3167e74c022737d8fcbe822 ",
"@Wauplin ",
"Thanks for investigating and reporting the bug @padeoe! I've opened a PR in `h... | ### Describe the bug
This bug is triggered under the following conditions:
- dataset repo ids without organization names (such as `bookcorpus`, `gsm8k`, `wikipedia`, rather than the `A/B` form) trigger errors.
- If `HF_ENDPOINT` is set and the hostname is not in the form of `(hub-ci.)?huggingface.co`.
- T... | 6,728 |
https://github.com/huggingface/datasets/issues/6726 | Profiling for HF Filesystem shows there are easy performance gains to be made | [
"FWIW I debugged this while waiting for it to go",
"Oh I forgot to mention you can also cache resolve_pattern, and that seemed to also substantially improves things, if you want to load a dataset twice for whatever reason."
] | ### Describe the bug
# Let's make it faster
First, an evidence...

Figure 1: CProfile for loading 3 files from cerebras/SlimPajama-627B train split, and 3 files from test split using streaming=True. X axis is 1106... | 6,726 |
https://github.com/huggingface/datasets/issues/6725 | Request for a comparison of huggingface datasets compared with other data format especially webdataset | [] | ### Feature request
Request for a comparison of Hugging Face datasets with other data formats, especially WebDataset
### Motivation
I see Hugging Face datasets uses Apache Arrow as its backend, which seems great, but I'm curious how it compares with other dataset formats, like WebDataset; what's... | 6,725 |
https://github.com/huggingface/datasets/issues/6724 | Dataset with loading script does not work in renamed repos | [] | ### Describe the bug
My data repository was first called `BramVanroy/hplt-mono-v1-2`, but I then renamed it to use underscores instead of dashes. However, it seems that `datasets` retrieves the old repo name when it checks whether the repo contains data loading scripts in this line.
https://github.com/huggingface/dat... | 6,724 |
https://github.com/huggingface/datasets/issues/6721 | Hi, do you know how to load the dataset from a local file now? | [
"\r\n@Gera001\r\n# Loading Dataset from Local Files Using 🤗Hugging Face.\r\n\r\nTo load a dataset from local files using the Hugging Face datasets library, you can use the `load_dataset` function.\r\n\r\n```\r\nfrom datasets import load_dataset\r\ndataset = load_dataset('csv', data_files={'train': 'path/to/train.c... | Hi, if I want to load the dataset from local file, then how to specify the configuration name?
_Originally posted by @WHU-gentle in https://github.com/huggingface/datasets/issues/2976#issuecomment-1333455222_
| 6,721 |
https://github.com/huggingface/datasets/issues/6720 | TypeError: 'str' object is not callable | [
"Hi ! I opened a PR to fix an issue in the Features defined in your code\r\n\r\nBasically changing\r\n```python\r\nSequence(\"float32\")\r\n```\r\n\r\nto\r\n```python\r\nSequence(Value(\"float32\"))\r\n```\r\n\r\n\r\nhttps://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/discussions/1",
"D'oh! Was wondering wh... | ### Describe the bug
I am trying to get the HPLT datasets on the hub. Downloading/re-uploading would be too time- and resource consuming so I wrote [a dataset loader script](https://huggingface.co/datasets/BramVanroy/hplt_mono_v1_2/blob/main/hplt_mono_v1_2.py). I think I am very close but for some reason I always get ... | 6,720 |
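The fix in the comment above is to wrap the dtype in `Value` when declaring a `Sequence` feature. A small sketch of the corrected declaration (the column name is illustrative, not taken from the loader script):

```python
from datasets import Features, Sequence, Value

# Broken: Sequence("float32") passes a plain string where a feature type is expected.
# Fixed:
features = Features({"scores": Sequence(Value("float32"))})
```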
https://github.com/huggingface/datasets/issues/6719 | Is there any way to solve hanging of IterableDataset using split by node + filtering during inference | [] | ### Describe the bug
I am using an iterable dataset in a multi-node setup, trying to do training/inference while filtering the data on the fly. I usually do not use `split_dataset_by_node` but it is very slow using the IterableDatasetShard in `accelerate` and `transformers`. When I filter after applying `split_dataset... | 6,719 |
https://github.com/huggingface/datasets/issues/6717 | `remove_columns` method used with a streaming enable dataset mode produces a LibsndfileError on multichannel audio | [
"And it also works well with `dataset = dataset.select_columns([\"audio\"])`"
] | ### Describe the bug
When loading a HF dataset in streaming mode and removing some columns, it is impossible to load a sample if the audio contains more than one channel. I have the impression that the time axis and channels are swapped or concatenated.
### Steps to reproduce the bug
Minimal error code:
```python
... | 6,717 |
https://github.com/huggingface/datasets/issues/6716 | Non-deterministic `Dataset.builder_name` value | [
"When `rotten_tomatoes` is printed out, the following warning message is also printed out:\r\n\r\n```\r\nYou can avoid this message in future by passing the argument `trust_remote_code=True`.\r\nPassing `trust_remote_code=True` will be mandatory to load this dataset from the next major release of `datasets`.\r\n```... | ### Describe the bug
I'm not sure if this is a bug, but `print(ds.builder_name)` in the following code sometimes prints out `rotten_tomatoes` instead of `parquet`:
```python
import datasets
for _ in range(100):
ds = datasets.load_dataset("rotten_tomatoes", split="train")
print(ds.builder_name) # pr... | 6,716 |
https://github.com/huggingface/datasets/issues/6703 | Unable to load dataset that was saved with `save_to_disk` | [
"`save_to_disk` uses a special serialization that can only be read using `load_from_disk`.\r\n\r\nContrary to `load_dataset`, `load_from_disk` directly loads Arrow files and uses the dataset directory as cache.\r\n\r\nOn the other hand `load_dataset` does a conversion step to get Arrow files from the raw data files... | ### Describe the bug
I get the following error message: You are trying to load a dataset that was saved using `save_to_disk`. Please use `load_from_disk` instead.
### Steps to reproduce the bug
1. Save a dataset with `save_to_disk`
2. Try to load it with `load_datasets`
### Expected behavior
I am ab... | 6,703 |
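As the comment above notes, datasets written with `save_to_disk` must be read back with `load_from_disk`, not `load_dataset`. A minimal round-trip sketch (toy data and a relative path, not from the issue):

```python
from datasets import Dataset, load_from_disk

ds = Dataset.from_dict({"a": [1, 2, 3]})
ds.save_to_disk("tmp_dataset")            # Arrow files + dataset metadata
reloaded = load_from_disk("tmp_dataset")  # not load_dataset()
print(reloaded[0])  # {'a': 1}
```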
https://github.com/huggingface/datasets/issues/6702 | Push samples to dataset on hub without having the dataset locally | [
"Hi ! For now I would recommend creating a new Parquet file using `dataset_new.to_parquet()` and upload it to HF using `huggingface_hub` every time you get a new batch of data. You can name the Parquet files `0000.parquet`, `0001.parquet`, etc.\r\n\r\nThough maybe make sure to not upload one file per sample since t... | ### Feature request
Say I have the following code:
```
from datasets import Dataset
import pandas as pd
new_data = {
"column_1": ["value1", "value2"],
"column_2": ["value3", "value4"],
}
df_new = pd.DataFrame(new_data)
dataset_new = Dataset.from_pandas(df_new)
# add these samples to a remote datase... | 6,702 |
https://github.com/huggingface/datasets/issues/6700 | remove_columns is not in-place but the doc shows it is in-place | [
"Good catch! I've opened a PR with a fix in the `transformers` repo.",
"@mariosasko Thanks!\r\n\r\nWill the doc of `datasets` be updated?\r\n\r\nI find some possible mistakes in doc about whether `remove_columns` is in-place.\r\n1. [You can also remove a column using map() with remove_columns but the present meth... | ### Describe the bug
The doc of `datasets` v2.17.0/v2.17.1 shows that `remove_columns` is in-place. [link](https://huggingface.co/docs/datasets/v2.17.1/en/package_reference/main_classes#datasets.DatasetDict.remove_columns)
In the text classification example of transformers v4.38.1, the columns are not removed.
h... | 6,700 |
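Since `remove_columns` returns a new dataset rather than mutating in place, the result has to be reassigned. A one-line sketch with illustrative column names:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})
ds = ds.remove_columns(["label"])  # reassign: the original object is left untouched
print(ds.column_names)  # ['text']
```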
https://github.com/huggingface/datasets/issues/6699 | `Dataset` unexpected changed dict data and may cause error | [
"If `test.jsonl` contains more lines like:\r\n```\r\n{\"id\": 0, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 1, \"indexs\": {\"-1\": [0, 10]}}\r\n{\"id\": 2, \"indexs\": {\"-2\": [0, 10]}}\r\n...\r\n{\"id\": n, \"indexs\": {\"-9999\": [0, 10]}}\r\n```\r\n\r\n`Dataset.from_json` will just raise an error:\r\n```\r\nAn... | ### Describe the bug
Will unexpectedly get keys with `None` values in the parsed json dict.
### Steps to reproduce the bug
```jsonl test.jsonl
{"id": 0, "indexs": {"-1": [0, 10]}}
{"id": 1, "indexs": {"-1": [0, 10]}}
```
```python
dataset = Dataset.from_json('.test.jsonl')
print(dataset[0])
```
Result:
```... | 6,699 |
https://github.com/huggingface/datasets/issues/6697 | Unable to Load Dataset in Kaggle | [
"FWIW, I run `load_dataset(\"llm-blender/mix-instruct\")` and it ran successfully.\r\nCan you clear your cache and try again?\r\n\r\n\r\n### Environment Info\r\n\r\n- `datasets` version: 2.17.0\r\n- Platform: Linux-6.2.6-76060206-generic-x86_64-with-glibc2.35\r\n- Python version: 3.9.13\r\n- `huggingface_hub` versi... | ### Describe the bug
Having installed the latest versions of transformers==4.38.1 and datasets==2.17.1, I am unable to load the dataset in a Kaggle notebook.
I get this error:
```
---------------------------------------------------------------------------
ValueError Traceback (most recen... | 6,697 |
https://github.com/huggingface/datasets/issues/6695 | Support JSON file with an array of strings | [
"https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 has been fixed, but how can we check if there are other datasets with the same error, in datasets-server's database? I don't know how to get the list of erroneous cache entries, since we only copied `Error code: JobManagerCrashedError`, bu... | Support loading a dataset from a JSON file with an array of strings.
See: https://huggingface.co/datasets/CausalLM/Refined-Anime-Text/discussions/1 | 6,695 |
https://github.com/huggingface/datasets/issues/6691 | load_dataset() does not support tsv | [
"#self-assign",
"Hi @dipsivenkatesh,\r\n\r\nPlease note that this functionality is already implemented. Our CSV builder uses `pandas.read_csv` under the hood, and you can pass the parameter `delimiter=\"\\t\"` to read TSV files.\r\n\r\nSee the list of CSV config parameters in our docs: https://huggingface.co/docs... | ### Feature request
The load_dataset() function for local files supports file types like csv, json, etc., but not tsv (tab-separated values).
### Motivation
Can't easily load files of type tsv; they have to be converted to another type like csv and then loaded.
### Your contribution
Can try by raising a PR with a little help, c... | 6,691 |
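As the reply above points out, TSV files can already be read through the CSV builder by overriding the delimiter, since it uses `pandas.read_csv` under the hood. A minimal sketch (file paths are placeholders):

```python
from datasets import load_dataset

# Sketch: the CSV builder forwards `delimiter` to pandas.read_csv.
dataset = load_dataset(
    "csv",
    data_files={"train": "train.tsv", "test": "test.tsv"},  # placeholder paths
    delimiter="\t",
)
```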
https://github.com/huggingface/datasets/issues/6690 | Add function to convert a script-dataset to Parquet | [] | Add function to convert a script-dataset to Parquet and push it to the Hub, analogously to the Space: "Convert a Hugging Face dataset to Parquet" | 6,690 |
https://github.com/huggingface/datasets/issues/6689 | .load_dataset() method defaults to zstandard | [
"The dataset is made of JSON files compressed using zstandard, as you can see here: https://huggingface.co/datasets/cerebras/SlimPajama-627B/tree/main/test/chunk1\r\n\r\nThat's why it asks for zstandard to be installed.\r\n\r\nThough I'm intrigued that you manage to load the dataset without zstandard installed. May... | ### Describe the bug
Regardless of what method I use, datasets defaults to zstandard for unpacking my datasets.
This is poor behavior, because not only is zstandard not a dependency in the huggingface package (and therefore, your dataset loading will be interrupted while it asks you to install the package), but it ... | 6,689 |
https://github.com/huggingface/datasets/issues/6688 | Tensor type (e.g. from `return_tensors`) ignored in map | [
"Hi, this is expected behavior since all the tensors are converted to Arrow data (the storage type behind a Dataset).\r\n\r\nTo get pytorch tensors back, you can set the dataset format to \"torch\":\r\n\r\n```python\r\nds = ds.with_format(\"torch\")\r\n```",
"Thanks. Just one additional question. During the pipel... | ### Describe the bug
I don't know if it is a bug or an expected behavior, but the tensor type seems to be ignored after applying map. For example, mapping over to tokenize text with a transformers' tokenizer always returns lists and it ignore the `return_tensors` argument.
If this is an expected behaviour (e.g., fo... | 6,688 |
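As explained in the comment above, `map` stores Arrow data regardless of `return_tensors`, and tensors are recovered by setting the output format. A short sketch (the tokenizer checkpoint is illustrative, not from the issue):

```python
# Sketch: tokenize into Arrow storage, then ask for torch tensors at access time.
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # illustrative checkpoint
ds = Dataset.from_dict({"text": ["hello world", "hi there"]})
ds = ds.map(lambda batch: tokenizer(batch["text"]), batched=True)
ds = ds.with_format("torch")  # values now come back as torch tensors
print(type(ds[0]["input_ids"]))
```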
https://github.com/huggingface/datasets/issues/6686 | Question: Is there any way for uploading a large image dataset? | [
"```\r\nimport pandas as pd\r\nfrom datasets import Dataset, Image\r\n\r\n# Read the CSV file\r\ndata = pd.read_csv(\"XXXX.csv\")\r\n\r\n# Create a Hugging Face Dataset\r\ndataset = Dataset.from_pandas(data)\r\ndataset = dataset.cast_column(\"file_name\", Image())\r\n\r\n# Upload to Hugging Face Hub (make sure auth... | I am uploading an image dataset like this:
```
dataset = load_dataset(
"json",
data_files={"train": "data/custom_dataset/train.json", "validation": "data/custom_dataset/val.json"},
)
dataset = dataset.cast_column("images", Sequence(Image()))
dataset.push_to_hub("StanfordAIMI/custom_dataset", max_shard_si... | 6,686 |
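The comment above sketches an alternative upload path: a CSV of image file paths cast to the `Image` feature before pushing. A hedged, condensed version (the CSV path, column name, and repo id are placeholders):

```python
# Sketch, following the comment above: a CSV with a "file_name" column of image paths.
import pandas as pd
from datasets import Dataset, Image

data = pd.read_csv("metadata.csv")                    # placeholder CSV
dataset = Dataset.from_pandas(data)
dataset = dataset.cast_column("file_name", Image())   # paths -> decoded images
dataset.push_to_hub("user/custom_dataset")            # placeholder repo id
```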
https://github.com/huggingface/datasets/issues/6679 | Node.js 16 GitHub Actions are deprecated | [] | `Node.js` 16 GitHub Actions are deprecated. See: https://github.blog/changelog/2023-09-22-github-actions-transitioning-from-node-16-to-node-20/
We should update them to Node 20.
See warnings in our CI, e.g.: https://github.com/huggingface/datasets/actions/runs/7957295009?pr=6678
> Node.js 16 actions are deprecat... | 6,679 |
https://github.com/huggingface/datasets/issues/6676 | Can't Read List of JSON Files Properly | [
"Found the issue, if there are other files in the directory, it gets caught into this `*` so essentially it should be `*.json`. Could we possibly to check for list of files to make sure the pattern matches json files and raise error if not?",
"I don't think we should filter for `*.json` as this might silently rem... | ### Describe the bug
Trying to read a bunch of JSON files into a Dataset class, but the default approach doesn't work. I don't get why it works when I read them one by one but not when I pass them as a list :man_shrugging:
The code fails with
```
ArrowInvalid: JSON parse error: Invalid value. in row 0
UnicodeDecodeError... | 6,676 |
https://github.com/huggingface/datasets/issues/6675 | Allow image model (color conversion) to be specified as part of datasets Image() decode | [
"It would be a great addition indeed :)\r\n\r\nThis can be implemented the same way we have `sampling_rate` for Audio(): we just add a new parameter to the Image() type and take this parameter into account in `Image.decode_example`\r\n\r\nEDIT: adding an example of how it can be used:\r\n\r\n```python\r\nds = ds.ca... | ### Feature request
Typical torchvision / torch Datasets in image applications apply color conversion in the Dataset portion of the code as part of image decode, separately from the image transform stack. This is true for PIL.Image where convert is usually called in dataset, for native torchvision https://pytorch.or... | 6,675 |
https://github.com/huggingface/datasets/issues/6674 | Deprecated Overview.ipynb Link to new Quickstart Notebook invalid | [
"Good catch! Feel free to open a PR to fix the link."
] | ### Describe the bug
For the deprecated notebook found [here](https://github.com/huggingface/datasets/blob/main/notebooks/Overview.ipynb), the link to the new notebook is broken.
### Steps to reproduce the bug
Click the [Quickstart notebook](https://github.com/huggingface/notebooks/blob/main/datasets_doc/quicksta... | 6,674 |
https://github.com/huggingface/datasets/issues/6673 | IterableDataset `set_epoch` is ignored when DataLoader `persistent_workers=True` | [] | ### Describe the bug
When persistent workers are enabled, the epoch that's set via the IterableDataset instance held by the training process is ignored by the workers as they are disconnected across processes.
PyTorch samplers for non-iterable datasets have a mechanism to sync this, datasets.IterableDataset does ... | 6,673 |
https://github.com/huggingface/datasets/issues/6671 | CSV builder raises deprecation warning on verbose parameter | [] | CSV builder raises a deprecation warning on `verbose` parameter:
```
FutureWarning: The 'verbose' keyword in pd.read_csv is deprecated and will be removed in a future version.
```
See:
- https://github.com/pandas-dev/pandas/pull/56556
- https://github.com/pandas-dev/pandas/pull/57450 | 6,671 |
https://github.com/huggingface/datasets/issues/6670 | ValueError | [
"Hi @prashanth19bolukonda,\r\n\r\nYou have to restart the notebook runtime session after the installation of `datasets`.\r\n\r\nDuplicate of:\r\n- #5923",
"Thank you soo much\r\n\r\nOn Fri, Feb 16, 2024 at 8:14 PM Albert Villanova del Moral <\r\n***@***.***> wrote:\r\n\r\n> Closed #6670 <https://github.com/huggin... | ### Describe the bug
ValueError Traceback (most recent call last)
[<ipython-input-11-9b99bc80ec23>](https://localhost:8080/#) in <cell line: 11>()
9 import numpy as np
10 import matplotlib.pyplot as plt
---> 11 from datasets import DatasetDict, Dataset
12 from transf... | 6,670 |
https://github.com/huggingface/datasets/issues/6669 | attribute error when writing trainer.train() | [
"Hi! Kaggle notebooks use an outdated version of `datasets`, so you should update the `datasets` installation (with `!pip install -U datasets`) to avoid the error.",
"Thank you for your response\r\n\r\nOn Thu, Feb 29, 2024 at 10:55 PM Mario Šaško ***@***.***>\r\nwrote:\r\n\r\n> Closed #6669 <https://github.com/hu... | ### Describe the bug
AttributeError Traceback (most recent call last)
Cell In[39], line 2
1 # Start the training process
----> 2 trainer.train()
File /opt/conda/lib/python3.10/site-packages/transformers/trainer.py:1539, in Trainer.train(self, resume_from_checkpoint, trial, ignore... | 6,669 |
https://github.com/huggingface/datasets/issues/6668 | Chapter 6 - Issue Loading `cnn_dailymail` dataset | [] | ### Describe the bug
So I am getting this bug when I try to run cell 4 of the Chapter 6 notebook code:
`dataset = load_dataset("ccdv/cnn_dailymail", version="3.0.0")`
Error Message:
```
---------------------------------------------------------------------------
ValueError Tracebac... | 6,668 |
https://github.com/huggingface/datasets/issues/6667 | Default config for squad is incorrect | [
"you can try: pip install datasets==2.16.1"
] | ### Describe the bug
If you download SQuAD, it will download the plain_text version, but the config still specifies "default". So if you set offline mode, the cache will try to look it up according to the config_id, which is "default", and this will say:
ValueError: Couldn't find cache for squad for config 'default'... | 6,667 |
https://github.com/huggingface/datasets/issues/6663 | `write_examples_on_file` and `write_batch` are broken in `ArrowWriter` | [
"Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.",
"> Thanks for reporting! I've left some comments on the PR on how to fix this recent change rather than reverting it.\r\n\r\nI feel that'd be good, but it'd be great to release a hotfix ASAP (a re... | ### Describe the bug
`write_examples_on_file` and `write_batch` are broken in `ArrowWriter` since #6636. The order between the columns and the schema is not preserved anymore. So these functions don't work anymore unless the order happens to align well.
### Steps to reproduce the bug
Try to do `write_batch` with any... | 6,663 |
https://github.com/huggingface/datasets/issues/6661 | Import error on Google Colab | [
"Hi! This can happen if an incompatible `pyarrow` version (`pyarrow<12.0.0`) has been imported before the `datasets` installation and the Colab session hasn't been restarted afterward. To avoid the error, go to \"Runtime -> Restart session\" after `!pip install -U datasets` and before `import datasets`, or insert t... | ### Describe the bug
The library cannot be imported on Google Colab; the import throws the following error:
ValueError: pyarrow.lib.IpcWriteOptions size changed, may indicate binary incompatibility. Expected 88 from C header, got 72 from PyObject
### Steps to reproduce the bug
1. `! pip install -U datasets`
2. `import dataset... | 6,661 |
https://github.com/huggingface/datasets/issues/6657 | Release not pushed to conda channel | [
"Thanks for reporting, @atulsaurav.\r\n\r\nWe are investigating the issue. ",
"I can't fix this issue because I do not appear as a team member of the huggingface datasets project: https://anaconda.org/huggingface/datasets\r\n\r\n@lhoestq could you please add `datasets` team members to the corresponding Anaconda p... | ### Describe the bug
The GitHub Actions step to publish release 2.17.0 to the conda channel has failed due to an expired token. Can someone please update the Anaconda token and rerun the failed action? @albertvillanova ? | 6,657 |
https://github.com/huggingface/datasets/issues/6656 | ... | [
"..., \r\n\r\n> TypeError: Couldn't cast array of type timestamp[us] to null\r\n\r\nYet when I split it into 1k-line files, load_dataset works fine!\r\n\r\nhttps://github.com/huggingface/course/issues/692\r\n\r\n"
] | ### Describe the bug
When trying to load big json files from a local directory, `load_dataset` throws the following error
```
Traceback (most recent call last):
File "/miniconda3/envs/conda-env/lib/python3.10/site-packages/datasets/builder.py", line 1989, in _prepare_split_single
writer.write_table(table)
... | 6,656 |
https://github.com/huggingface/datasets/issues/6655 | Cannot load the dataset go_emotions | [
"Thanks for reporting, @arame.\r\n\r\nI guess you have an old version of `transformers` (that submodule is present in `transformers` since version 3.0.1, since nearly 4 years ago). If you update it, the error should disappear:\r\n```shell\r\npip install -U transformers\r\n```\r\n\r\nOn the other hand, I am wonderin... | ### Describe the bug
When I run the following code I get an exception:
`go_emotions = load_dataset("go_emotions")`
> AttributeError Traceback (most recent call last)
Cell In[6], [line 1](vscode-notebook-cell:?execution_count=6&line=1)
----> [1](vscode-notebook-cell:?execution_count=6&l... | 6,655 |
https://github.com/huggingface/datasets/issues/6654 | Batched dataset map throws exception that cannot cast fixed length array to Sequence | [
"Hi ! This issue has been fixed by https://github.com/huggingface/datasets/pull/6283\r\n\r\nCan you try again with the new release 2.17.0 ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\n",
"Amazing! It's indeed fixed now. Thanks!"
] | ### Describe the bug
I encountered a TypeError when batch processing a dataset with Sequence features in datasets package version 2.16.1. The error arises from a mismatch in handling fixed-size list arrays during the map function execution. Debugging pinpoints the issue to an if-statement in datasets/table.py, line 20... | 6,654 |
https://github.com/huggingface/datasets/issues/6651 | Slice splits support for datasets.load_from_disk | [] | ### Feature request
Support for slice splits in `datasets.load_from_disk`, similar to how it's already supported for `datasets.load_dataset`.
### Motivation
Slice splits are convenient in a number of cases - adding support to `datasets.load_from_disk` would make working with local datasets easier and homogeniz... | 6,651 |
https://github.com/huggingface/datasets/issues/6650 | AttributeError: 'InMemoryTable' object has no attribute '_batches' | [
"Hi! Does running the following code also return the same error on your machine? \r\n\r\n```python\r\nimport copy\r\nimport pyarrow as pa\r\nfrom datasets.table import InMemoryTable\r\n\r\ncopy.deepcopy(InMemoryTable(pa.table({\"a\": [1, 2, 3], \"b\": [\"foo\", \"bar\", \"foobar\"]})))\r\n```",
"No, it doesn't, ... | ### Describe the bug
```
Traceback (most recent call last):
File "finetune.py", line 103, in <module>
main(args)
File "finetune.py", line 45, in main
data_tokenized = data.map(partial(funcs.tokenize_function, tokenizer,
File "/opt/conda/envs/ptca/lib/python3.8/site-packages/datasets/dataset_dict.... | 6,650 |
https://github.com/huggingface/datasets/issues/6645 | Support fsspec 2024.2 | [
"I'd be very grateful. This upper bound banished me straight into dependency hell today. :("
] | Support fsspec 2024.2.
First, we should address:
- #6644 | 6,645 |
https://github.com/huggingface/datasets/issues/6644 | Support fsspec 2023.12 | [
"The pinned fsspec version range dependency conflict has been affecting several of our users in https://github.com/iterative/dvc. I've opened an initial PR that I think should resolve the glob behavior changes with using datasets + the latest fsspec release.\r\n\r\nPlease let us know if there's any other fsspec rel... | Support fsspec 2023.12 by handling previous and new glob behavior. | 6,644 |
https://github.com/huggingface/datasets/issues/6643 | Faiss GPU index cannot be serialised when passed to trainer | [
"Hi ! make sure your query embeddings are numpy arrays, not torch tensors ;)",
"Hi Quentin, not sure how that solves the problem number 1. I am trying to pass on a dataset with a faiss gpu for training to the standard trainer but getting this serialisation error. What is a workaround this? I do not want to remove... | ### Describe the bug
I am working on a retrieval project and have encountered two issues in the Hugging Face Faiss integration:
1. I am trying to pass in a dataset with a faiss index to the Huggingface trainer. The code works for a cpu faiss index, but doesn't for a gpu one, getting error:
```
... | 6,643 |
https://github.com/huggingface/datasets/issues/6642 | Dataset object saved differently than it is loaded. | [
"I see now, that I have to use `load_from_disk`, in order to load dataset properly, not `load_dataset`. Why is this behavior split? Why do we need both, `load_dataset` and `load_from_disk`?\r\n\r\nUnless answered, I believe this might be helpful for other hf datasets newbies.\r\n\r\nAnyway, made a `load_dataset` co... | ### Describe the bug
The object that is saved has a different size than the object that is loaded.
### Steps to reproduce the bug
Hi, I save dataset in a following way:
```
dataset = load_dataset("json",
data_files={
"train": os.path.join(input_folder, f"{task_met... | 6,642 |
https://github.com/huggingface/datasets/issues/6641 | unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte | [
"Hi @Hughhuh. \r\n\r\nI have formatted the issue because it was not easily readable. Additionally, the environment info is incomplete: it seems you did not run the proposed CLI command `datasets-cli env` and essential information is missing: version of `datasets`, version of `pyarrow`,...\r\n\r\nWith the informatio... | ### Describe the bug
unicodedecodeerror: 'utf-8' codec can't decode byte 0xac in position 25: invalid start byte
### Steps to reproduce the bug
```
import sys
sys.getdefaultencoding()
'utf-8'
from datasets import load_dataset
print(f"Train dataset size: {len(dataset['train'])}")
print(f"Test datase... | 6,641 |
https://github.com/huggingface/datasets/issues/6640 | Sign Language Support | [] | ### Feature request
Currently, there are only a few Sign Language labels. I would like to propose adding all the Signed Languages described in this ISO standard as new labels: https://www.evertype.com/standards/iso639/sign-language.html
### Motivation
Datasets currently only have labels for several signe... | 6,640 |
https://github.com/huggingface/datasets/issues/6638 | Cannot download wmt16 dataset | [
"Looks like it works with latest datasets repository\r\n```\r\n- `datasets` version: 2.16.2.dev0\r\n- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29\r\n- Python version: 3.8.10\r\n- `huggingface_hub` version: 0.20.3\r\n- PyArrow version: 15.0.0\r\n- Pandas version: 2.0.1\r\n- `fsspec` version: 2023.10.0\r\... | ### Describe the bug
As of this morning (PST) 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra... | 6,638 |
https://github.com/huggingface/datasets/issues/6637 | 'with_format' is extremely slow when used together with 'interleave_datasets' or 'shuffle' on IterableDatasets | [
"The \"torch\" formatting is usually fast because we do zero-copy conversion from the Arrow data on your disk to Torch tensors. However IterableDataset shuffling seems to do data copies that slow down the pipeline, and it shuffles python objects instead of Arrow data.\r\n\r\nTo fix this we need to implement `Buffer... | ### Describe the bug
If you:
1. Interleave two iterable datasets together with the interleave_datasets function, or shuffle an iterable dataset
2. Set the output format to torch tensors with .with_format('torch')
Then iterating through the dataset becomes over 100x slower than it is if you don't apply the torch... | 6,637 |
https://github.com/huggingface/datasets/issues/6624 | How to download the laion-coco dataset | [
"Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it."
] | The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco | 6,624 |
https://github.com/huggingface/datasets/issues/6623 | streaming datasets doesn't work properly with multi-node | [
"@mariosasko, @lhoestq, @albertvillanova\r\nhey guys! can anyone help? or can you guys suggest who can help with this?",
"Hi ! \r\n\r\n1. When the dataset is running of of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't implemented yet a way to ignore the last batch. ... | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 6,623 |
https://github.com/huggingface/datasets/issues/6622 | multi-GPU map does not work | [
"This should now be fixed by https://github.com/huggingface/datasets/pull/6550 and updated with https://github.com/huggingface/datasets/pull/6646\r\n\r\nFeel free to re-open if you're still having issues :)"
] | ### Describe the bug
Here is the code for single-GPU processing: https://pastebin.com/bfmEeK2y
Here is the code for multi-GPU processing: https://pastebin.com/gQ7i5AQy
Here is the video showing that the multi-GPU mapping does not work as expected (there are so many things wrong here, it's better to watch the 3-min... | 6,622 |
https://github.com/huggingface/datasets/issues/6621 | deleted | [] | ... | 6,621 |
https://github.com/huggingface/datasets/issues/6620 | wiki_dpr.py error (ID mismatch between lines {id} and vector {vec_id} | [
"Thanks for reporting, @kiehls90.\r\n\r\nAs this seems an issue with the specific \"wiki_dpr\" dataset, I am transferring the issue to the corresponding dataset page: https://huggingface.co/datasets/wiki_dpr/discussions/13"
] | ### Describe the bug
I'm trying to run a rag example, and the dataset is wiki_dpr.
The wiki_dpr download and extraction completed successfully.
However, at the generating train split stage, an error from wiki_dpr.py keeps popping up.
Especially in "_generate_examples" :
1. The following error occurs in the... | 6,620 |
https://github.com/huggingface/datasets/issues/6618 | While importing load_dataset from datasets | [
"Hi! Can you please share the error's stack trace so we can see where it comes from?",
"We cannot reproduce the issue and we do not have enough information: environment info (need to run `datasets-cli env`), stack trace,...\r\n\r\nI am closing the issue. Feel free to reopen it (with additional information) if the... | ### Describe the bug
This is the error I received: `cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'`
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | 6,618 |
https://github.com/huggingface/datasets/issues/6615 | ... | [
"Sorry I posted in the wrong repo, please delete.. thanks!"
] | ... | 6,615 |
https://github.com/huggingface/datasets/issues/6614 | `datasets/downloads` cleanup tool | [] | ### Feature request
Splitting off https://github.com/huggingface/huggingface_hub/issues/1997 - currently `huggingface-cli delete-cache` doesn't take care of cleaning `datasets` temp files
e.g. I discovered having millions of files under `datasets/downloads` cache, I had to do:
```
sudo find /data/huggingface/... | 6,614 |
https://github.com/huggingface/datasets/issues/6612 | cnn_dailymail repeats itself | [
"Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.\r\n\r\nYou can update `datasets` with\r\n\r\n```\r\npip install -U datasets\r\n```"
] | ### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339.
Also I che... | 6,612 |
https://github.com/huggingface/datasets/issues/6611 | `load_from_disk` with large dataset from S3 runs into `botocore.exceptions.ClientError` | [] | ### Describe the bug
When loading a large dataset (>1000GB) from S3 I run into the following error:
```
Traceback (most recent call last):
File "/home/alp/.local/lib/python3.10/site-packages/s3fs/core.py", line 113, in _error_wrapper
return await func(*args, **kwargs)
File "/home/alp/.local/lib/python3.... | 6,611 |
https://github.com/huggingface/datasets/issues/6610 | cast_column to Sequence(subfeatures_dict) has err | [
"Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:\r\n```python\r\nais_dataset = ais_dataset.cast_column(\"my_labeled_bbox\", {\"bbox\": Sequence(Value(dtype=\"int64\")), \"label\": ClassLabel(names=[\"cat\", \"dog\"])})\r\n```",
"> Hi! You are passing the wrong feature type to ... | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
... | 6,610 |
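The reply quoted above gives the corrected feature type for `cast_column`: a dict whose values are feature types, with `Sequence(Value(...))` for the list field. A small self-contained sketch of that fix (the data row is illustrative):

```python
from datasets import ClassLabel, Dataset, Sequence, Value

ds = Dataset.from_dict({"my_labeled_bbox": [{"bbox": [0, 1, 2, 3], "label": "cat"}]})
ds = ds.cast_column(
    "my_labeled_bbox",
    {"bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"])},
)
print(ds[0])  # "label" is now encoded as a class index
```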
https://github.com/huggingface/datasets/issues/6609 | Wrong path for cache directory in offline mode | [
"+1",
"same error in 2.16.1",
"@kongjiellx any luck with the issue?",
"I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets`",
"Thanks @lhoestq !"
] | ### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the files and caches them normally.
Nevertheless, ... | 6,609 |
https://github.com/huggingface/datasets/issues/6605 | ELI5 no longer available, but referenced in example code | [
"Addressed in https://github.com/huggingface/transformers/pull/28715."
] | Here, an example code is given:
https://huggingface.co/docs/transformers/tasks/language_modeling
This code + article references the ELI5 dataset.
ELI5 is no longer available, as the ELI5 dataset page states: https://huggingface.co/datasets/eli5
"Defunct: Dataset "eli5" is defunct and no longer accessible due to u... | 6,605 |
https://github.com/huggingface/datasets/issues/6604 | Transform fingerprint collisions due to setting fixed random seed | [
"I've opened a PR with a fix.",
"I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html"
] | ### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random... | 6,604 |
https://github.com/huggingface/datasets/issues/6603 | datasets map `cache_file_name` does not work | [
"Unfortunately, I'm unable to reproduce this error. Can you share the reproducer?",
"```\r\nds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_name=\"/tmp/whatever-fn\") # this worked\r\nds.map(lambda item: dict(b=item['a'] * 2), cache_file_na... | ### Describe the bug
In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
### Expected behavior
It will tell you t... | 6,603 |
https://github.com/huggingface/datasets/issues/6602 | Index error when data is large | [] | ### Describe the bug
At the `save_to_disk` step, `max_shard_size` defaults to `500MB`. However, one row of the dataset might be larger than `500MB`, in which case saving will throw an index error. Without looking at the source code, the bug seems to be due to a wrong calculation of the number of shards, which I think is
`total_size / m... | 6,602 |
https://github.com/huggingface/datasets/issues/6600 | Loading CSV exported dataset has unexpected format | [
"Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:\r\n```python\r\ntest_dataset = load_dataset(\"opus100\", name=\"en-fr\", split=\"test\")\r\n\r\n# Save with .to_parquet()\r\ntest_parquet_path = \"try_testset_save.parquet\"\r\ntest_dataset.to_parquet(... | ### Describe the bug
I wanted to be able to save an HF dataset for translations and load it again in another script, but I'm a bit confused by the documentation and the result I've got, so I'm opening this issue to ask whether this behavior is expected.
### Steps to reproduce the bug
The documentation I've mainly cons... | 6,600 |
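The comment above explains that Parquet, unlike CSV, preserves nested features such as `Translation`. A condensed sketch of the suggested round-trip (dataset name and file name follow the comment):

```python
# Sketch of the Parquet round-trip suggested in the comment above.
from datasets import load_dataset

test_dataset = load_dataset("opus100", name="en-fr", split="test")

test_parquet_path = "try_testset_save.parquet"
test_dataset.to_parquet(test_parquet_path)

# Reloading keeps the Translation feature intact, unlike a CSV round-trip.
reloaded = load_dataset("parquet", data_files=test_parquet_path, split="train")
```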
https://github.com/huggingface/datasets/issues/6599 | Easy way to segment into 30s snippets given an m4a file and a vtt file | [
"Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic.",
"That's fair. Thanks"
] | ### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's easy to create a vtt file from an audio file. If there could be auto-segment... | 6,599 |
https://github.com/huggingface/datasets/issues/6598 | Unexpected keyword argument 'hf' when downloading CSV dataset from S3 | [
"I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. ",
"same thing happened to other formats like parquet",
"I am facing similar issue while reading a parquet file from s3.\r\ni try with every version between 2.14 to 2.16.1 but it dosen't work ",
"Re-def... | ### Describe the bug
I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w... | 6,598 |
https://github.com/huggingface/datasets/issues/6597 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | [
"It is caused by these code lines: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1688-L1694",
"Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/datase... | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | 6,597 |
https://github.com/huggingface/datasets/issues/6595 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | [
"Hi ! I think the issue comes from the \"float16\" features that are not supported yet in Parquet\r\n\r\nFeel free to open an issue in `pyarrow` about this. In the meantime, I'd encourage you to use \"float32\" for your \"pooled_prompt_embeds\" and \"prompt_embeds\" features.\r\n\r\nYou can cast them to \"float32\"... | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 6,595 |
https://github.com/huggingface/datasets/issues/6594 | IterableDataset sharding logic needs improvement | [] | ### Describe the bug
The sharding of IterableDatasets with respect to distributed and dataloader worker processes appears problematic, with significant performance traps and inconsistencies between distributed train processes and worker processes.
Splitting across num_workers (per train process loader processes) and... | 6,594 |
https://github.com/huggingface/datasets/issues/6592 | Logs are delayed when doing .map when `docker logs` | [
"Hi! `tqdm` doesn't work well in non-interactive environments, so there isn't much we can do about this. It's best to [disable it](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/utilities#datasets.disable_progress_bars) in such environments and instead use logging to track progress."
] | ### Describe the bug
When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed.
It's updating every few percent.
When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real-time, not every co... | 6,592 |
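The workaround suggested in the comment above is to turn off `tqdm` in non-interactive environments and rely on logging instead; a minimal sketch, assuming the `disable_progress_bars` utility linked in that comment is available at the top level of `datasets`:

```python
# Sketch: disable tqdm bars (useful when logs are captured by `docker logs`).
import datasets

datasets.disable_progress_bars()
datasets.logging.set_verbosity_info()  # emit log messages instead of a progress bar
# ... run .map() as usual
```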
https://github.com/huggingface/datasets/issues/6591 | The datasets models housed in Dropbox can't support a lot of users downloading them | [
"Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo."
] | ### Describe the bug
I'm using the datasets library:
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes, I imagine when a lot of users are accessing the same resources, the Dropbox host fails:
`raise ConnectionError(... | 6,591 |
https://github.com/huggingface/datasets/issues/6590 | Feature request: Multi-GPU dataset mapping for SDXL training | [] | ### Feature request
We need to speed up SDXL dataset pre-processing. Please make it possible to use multiple GPUs for the [official SDXL trainer](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) :)
### Motivation
Pre-computing 3 million images takes around ... | 6,590 |
https://github.com/huggingface/datasets/issues/6589 | After `2.16.0` version, there are `PermissionError` when users use shared cache_dir | [
"We'll do a new release of `datasets` in the coming days with a fix !",
"@lhoestq Thank you very much!"
] | ### Describe the bug
- We use shared `cache_dir` using `HF_HOME="{shared_directory}"`
- After datasets version 2.16.0, datasets uses the `filelock` package for file locking #6445
- But the `filelock` package creates `.lock` files with `644` permissions
- Dataset is not available to other users except the user who created the ... | 6,589 |