| html_url | title | comments | body | number |
|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/6973 | IndexError during training with Squad dataset and T5-small model | [
"add remove_unused_columns=False to training_args\r\nhttps://github.com/huggingface/datasets/issues/6535#issuecomment-1874024704",
"Closing this issue because it was a reported and fixed in transformers."
] | ### Describe the bug
I am encountering an IndexError while training a T5-small model on the Squad dataset using the transformers and datasets libraries. The error occurs even with a minimal reproducible example, suggesting a potential bug or incompatibility.
### Steps to reproduce the bug
1. Install the required libr... | 6,973 |
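A minimal sketch of the fix from the first comment, assuming a standard `Trainer` setup; the `output_dir` path is illustrative:
```python
# Sketch only: keeps the columns the data collator needs instead of letting
# Trainer drop everything not in the model's forward() signature.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="t5-small-squad",   # hypothetical output path
    remove_unused_columns=False,   # workaround suggested in the comments
)
```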
https://github.com/huggingface/datasets/issues/6967 | Method to load Laion400m | [] | ### Feature request
Large datasets like Laion400m are provided as embeddings. The provided methods in load_dataset are not straightforward for loading embedding files, i.e. img_emb_XX.npy ; XX = 0 to 99
### Motivation
Trial and experimentation are the key pivot of HF. It would be great if HF could load embeddings... | 6,967 |
https://github.com/huggingface/datasets/issues/6961 | Manual downloads should count as downloads | [
"We're unlikely to add more features/support for datasets with python loading scripts, which include datasets with manual download. Sorry for the inconvenience"
] | ### Feature request
I would like to request that manual downloads of data files from Hugging Face dataset repositories count as downloads of a dataset. According to the documentation for the Hugging Face Hub, that is currently not the case: https://huggingface.co/docs/hub/en/datasets-download-stats
### Motivation
Th... | 6,961 |
https://github.com/huggingface/datasets/issues/6958 | My Private Dataset doesn't exist on the Hub or cannot be accessed | [
"I can load public dataset, but for my private dataset it fails",
"https://huggingface.co/docs/datasets/upload_dataset",
"I have checked the API HTTP link. Repository Not Found for url: https://huggingface.co/api/datasets/xxx/xxx.\r\n\r\n
datasets.exceptions.DatasetNotFoundError: Dataset 'xxx' doesn't exist on t... | 6,958 |
https://github.com/huggingface/datasets/issues/6953 | Remove canonical datasets from docs | [
"Canonical datasets are no longer mentioned in the docs."
] | Remove canonical datasets from docs, now that we no longer have canonical datasets. | 6,953 |
https://github.com/huggingface/datasets/issues/6951 | load_dataset() should load all subsets, if no specific subset is specified | [
"@xianbaoqian ",
"Feel free to open a PR in `m-a-p/COIG-CQIA` to define a default subset. Currently there is no default.\r\n\r\nYou can find some documentation at https://huggingface.co/docs/hub/datasets-manual-configuration#multiple-configurations",
"@lhoestq \r\n\r\nWhilst having a default subset readily avai... | ### Feature request
Currently load_dataset() is forcing users to specify a subset. Example
```python
from datasets import load_dataset
dataset = load_dataset("m-a-p/COIG-CQIA")
```
```
---------------------------------------------------------------------------
ValueError Traceback (most recen... | 6,951 |
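Until a default subset is defined, a sketch of the workaround implied by the thread is to enumerate the configs and load each one (network access assumed):
```python
from datasets import get_dataset_config_names, load_dataset

configs = get_dataset_config_names("m-a-p/COIG-CQIA")
# Load every subset into a dict keyed by config name.
subsets = {name: load_dataset("m-a-p/COIG-CQIA", name) for name in configs}
```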
https://github.com/huggingface/datasets/issues/6950 | `Dataset.with_format` behaves inconsistently with documentation | [
"Hi ! It seems the documentation was outdated in this paragraph\r\n\r\nI fixed it here: https://github.com/huggingface/datasets/pull/6956",
"Fixed."
] | ### Describe the bug
The actual behavior of the interface `Dataset.with_format` is inconsistent with the documentation.
https://huggingface.co/docs/datasets/use_with_pytorch#n-dimensional-arrays
https://huggingface.co/docs/datasets/v2.19.0/en/use_with_tensorflow#n-dimensional-arrays
> If your dataset consists of ... | 6,950 |
https://github.com/huggingface/datasets/issues/6949 | load_dataset error | [
"Hi, @lion-ops.\r\n\r\nIn our Continuous Integration we have many tests on loading JSON files and all of them work properly.\r\n\r\nCould you please share your \"train.json\" file, so that we can try to reproduce the issue you have? ",
"> Hi, @lion-ops.\r\n> \r\n> In our Continuous Integration we have many tests ... | ### Describe the bug
Why does the program get stuck when I use the load_dataset method, and why is it still stuck after loading for several hours? My JSON file is only 21 MB, and I can load it in one go using open('', 'r').
### Steps to reproduce the bug
1. pip install datasets==2.19.2
2. from datasets import Data... | 6,949 |
https://github.com/huggingface/datasets/issues/6948 | to_tf_dataset: Visible devices cannot be modified after being initialized | [] | ### Describe the bug
When trying to use to_tf_dataset with a custom data_loader collate_fn and parallelism, I am met with the following error, repeated as many times as there were workers in ``num_workers``.
File "/opt/miniconda/envs/env/lib/python3.11/site-packages/multiprocess/process.py", line 314, in _b... | 6,948 |
https://github.com/huggingface/datasets/issues/6947 | FileNotFoundError:error when loading C4 dataset | [
"same problem here",
"Hello,\r\n\r\nAre you sure you are really using datasets version 2.19.2? We just made the patch release yesterday specifically to fix this issue:\r\n- #6925\r\n\r\nI can't reproduce the error:\r\n```python\r\nIn [1]: from datasets import load_dataset\r\n\r\nIn [2]: ds = load_dataset('allenai... | ### Describe the bug
Can't load the C4 dataset.
When I switch the datasets package to 2.12.2, it raises datasets.utils.info_utils.ExpectedMoreSplits: {'train'}.
How can I fix this?
### Steps to reproduce the bug
1. from datasets import load_dataset
2. dataset = load_dataset('allenai/c4', data_files={'validat... | 6,947 |
https://github.com/huggingface/datasets/issues/6942 | Import sorting is disabled by flake8 noqa directive after switching to ruff linter | [] | When we switched to `ruff` linter in PR:
- #5519
import sorting was disabled in all files containing the `# flake8: noqa` directive
- https://github.com/astral-sh/ruff/issues/11679
We should re-enable import sorting on those files. | 6,942 |
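A sketch of the re-enabling step, assuming the blanket directive is only needed for late imports: replace the file-level `# flake8: noqa` with a rule-scoped ruff directive so import sorting (rule I001) still applies.
```python
# Before: disables every rule in the file, including import sorting.
# flake8: noqa

# After (a sketch): disable only module-import-not-at-top, keeping I001 active.
# ruff: noqa: E402
import os
import sys
```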
https://github.com/huggingface/datasets/issues/6941 | Supporting FFCV: Fast Forward Computer Vision | [] | ### Feature request
Supporting FFCV, https://github.com/libffcv/ffcv
### Motivation
According to the benchmark, FFCV seems to be the fastest image loading method.
### Your contribution
no | 6,941 |
https://github.com/huggingface/datasets/issues/6940 | Enable Sharding to Equal Sized Shards | [] | ### Feature request
Add an option when sharding a dataset to make all shards the same size. It would be good to offer both strategies: padding by duplication and truncation.
### Motivation
Currently the behavior of sharding is "If n % i == l, then the first l shards will have length (n // i) + 1, and the remaining sha... | 6,940 |
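While the option is missing, a sketch of equal-sized sharding by truncation (the `n % num_shards` remainder rows are dropped; toy data for self-containment):
```python
from datasets import Dataset

ds = Dataset.from_dict({"x": list(range(10))})
num_shards = 3
shard_size = len(ds) // num_shards  # truncation: remainder rows are dropped
shards = [
    ds.select(range(i * shard_size, (i + 1) * shard_size))
    for i in range(num_shards)
]
assert all(len(s) == shard_size for s in shards)
```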
https://github.com/huggingface/datasets/issues/6939 | ExpectedMoreSplits error when using data_dir | [] | As reported by @regisss, an `ExpectedMoreSplits` error is raised when passing `data_dir`:
```python
from datasets import load_dataset
dataset = load_dataset(
"lvwerra/stack-exchange-paired",
split="train",
cache_dir=None,
data_dir="data/rl",
)
```
```
Traceback (most recent call last):
F... | 6,939 |
https://github.com/huggingface/datasets/issues/6937 | JSON loader implicitly coerces floats to integers | [] | The JSON loader implicitly coerces floats to integers.
The column values `[0.0, 1.0, 2.0]` are coerced to `[0, 1, 2]`.
See CI error in dataset-viewer: https://github.com/huggingface/dataset-viewer/actions/runs/9290164936/job/25576926446
```
=================================== FAILURES ===========================... | 6,937 |
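Until the loader is fixed, a sketch of sidestepping the coercion by declaring the column type explicitly; `"data.json"` and the column name `"col"` are illustrative:
```python
from datasets import load_dataset, Features, Value

features = Features({"col": Value("float64")})  # pin the dtype up front
ds = load_dataset("json", data_files="data.json", features=features)
```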
https://github.com/huggingface/datasets/issues/6936 | save_to_disk() freezes when saving on s3 bucket with multiprocessing | [
"I got the same issue. Any updates so far for this issue?"
] | ### Describe the bug
I'm trying to save a `Dataset` using the `save_to_disk()` function with:
- `num_proc > 1`
- `dataset_path` being a s3 bucket path e.g. "s3://{bucket_name}/{dataset_folder}/"
The hf progress bar shows up but the saving does not seem to start.
When using one processor only (`num_proc=1`), e... | 6,936 |
https://github.com/huggingface/datasets/issues/6935 | Support for pathlib.Path in datasets 2.19.0 | [] | ### Describe the bug
After the recent update of `datasets`, Dataset.save_to_disk does not accept a pathlib.Path anymore. It was supported in 2.18.0 and previous versions. Is this intentional? Was it supported before only because of a Python duck-typing miracle?
### Steps to reproduce the bug
```
from datasets impor... | 6,935 |
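A sketch of the obvious interim workaround: cast the `Path` to `str` before calling `save_to_disk` (the directory name is illustrative):
```python
from pathlib import Path
from datasets import Dataset

ds = Dataset.from_dict({"x": [1, 2, 3]})
out_dir = Path("my_dataset")
ds.save_to_disk(str(out_dir))  # str() avoids the pathlib regression
```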
https://github.com/huggingface/datasets/issues/6930 | ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'validation': (None, {})} | [
"How do you solve it ?\r\n",
"> How do you solve it ?\r\n\r\nPlease check your Python environment and dataset version. I have just resolved the issue, which was caused by a Python environment switching error\r\n"
] | ### Describe the bug
When I run the code en = load_dataset("allenai/c4", "en", streaming=True), I encounter an error: raise ValueError(f"Couldn't infer the same data file format for all splits. Got {split_modules}") ValueError: Couldn't infer the same data file format for all splits. Got {'train': ('json', {}), 'valid... | 6,930 |
https://github.com/huggingface/datasets/issues/6929 | Avoid downloading the whole dataset when only README.md has been touched on hub. | [
"you're right, we're tackling this here: https://github.com/huggingface/dataset-viewer/issues/2757",
"@severo : great !"
] | ### Feature request
`datasets.load_dataset()` triggers a new download of the **whole dataset** when the README.md file has been touched on huggingface hub, even if data files / parquet files are the exact same.
I think the current behaviour of the load_dataset function is triggered whenever a change of the hash o... | 6,929 |
https://github.com/huggingface/datasets/issues/6924 | Caching map result of DatasetDict. | [] | Hi!
I'm currently using the map function to tokenize a somewhat large dataset, so I need to use the cache to save ~25 mins.
Changing num_proc induces recomputation of the map; I'm not sure why, and is this expected behavior?
Here it says that cached files are loaded sequentially:
https://github.com/... | 6,924 |
https://github.com/huggingface/datasets/issues/6923 | Export Parquet Tablet Audio-Set is null bytes in Arrow | [] | ### Describe the bug
Exporting the processed audio inside the table with the dataset.to_parquet function produces the pyarrow object {bytes: null, path: "Some/Path"}.
At the same time, the same dataset uploaded to the hub has bit arrays... | 6,923 |
https://github.com/huggingface/datasets/issues/6919 |  | [] | ...` at the end. The push to hub is failing with:
```
ValueError: Invalid metadata in README.md.
- Invalid YAML in README.md: unknown tag !<tag:yaml.org,2002:python[/tuple](... | 6,919 |
https://github.com/huggingface/datasets/issues/6918 | NonMatchingSplitsSizesError when using data_dir | [
"Thanks for reporting, @srehaag.\r\n\r\nWe are investigating this issue.",
"I confirm there is a bug for data-based Hub datasets when the user passes `data_dir`, which was introduced by PR:\r\n- #6714"
] | ### Describe the bug
Loading a dataset with a data_dir argument generates a NonMatchingSplitsSizesError if there are multiple directories in the dataset.
This appears to happen because the expected split is calculated based on the data in all the directories whereas the recorded split is calculated based on t... | 6,918 |
https://github.com/huggingface/datasets/issues/6917 | WinError 32 The process cannot access the file during load_dataset | [] | ### Describe the bug
When I try to load the opus_book from hugging face (following the [guide on the website](https://huggingface.co/docs/transformers/main/en/tasks/translation))
```python
from datasets import load_dataset, Dataset
dataset = load_dataset("Helsinki-NLP/opus_books", "en-fr", features=["id", "tran... | 6,917 |
https://github.com/huggingface/datasets/issues/6916 | ```push_to_hub()``` - Prevent Automatic Generation of Splits | [] | ### Describe the bug
I currently have a dataset which has not been split. When pushing the dataset to my Hugging Face dataset repository, it is split into testing and training sets. How can I prevent the split from happening?
### Steps to reproduce the bug
1. Have an unsplit dataset
```python
Dataset({ featur... | 6,916 |
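A hedged sketch of one way to keep control of the split name: wrap the dataset in a `DatasetDict` with a single explicit split before pushing (`user/repo` is a placeholder; Hub authentication assumed):
```python
from datasets import Dataset, DatasetDict

ds = Dataset.from_dict({"text": ["a", "b"]})
# Pushing a DatasetDict with one key yields exactly one split on the Hub.
DatasetDict({"train": ds}).push_to_hub("user/repo")  # placeholder repo id
```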
https://github.com/huggingface/datasets/issues/6913 | Column order is nondeterministic when loading from JSON | [] | As reported by @meg-huggingface, the order of the JSON object keys is not preserved while loading a dataset from a JSON file with a list of objects.
For example, when loading a JSON file with a list of objects, each with the following ordered keys:
- [ID, Language, Topic],
the resulting dataset may have column... | 6,913 |
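Until the ordering is preserved, a sketch of pinning the column order with an explicit `Features` spec (the keys come from the report; the file path is illustrative):
```python
from datasets import load_dataset, Features, Value

features = Features({
    "ID": Value("string"),
    "Language": Value("string"),
    "Topic": Value("string"),
})
ds = load_dataset("json", data_files="objects.json", features=features)
```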
https://github.com/huggingface/datasets/issues/6912 | Add MedImg for streaming | [
"@mariosasko, @lhoestq, @albertvillanova\r\nHello! Can anyone help? or can you guys suggest who can help with this?",
"Hi ! Feel free to download the dataset and create a `Dataset` object with it.\r\n\r\nThen your'll be able to use `push_to_hub()` to upload the dataset to HF in Parquet format and make it streamab... | ### Feature request
Host the MedImg dataset (similar to Imagenet but for biomedical images).
### Motivation
There is a clear need for biomedical image foundation models and large scale biomedical datasets that are easily streamable. This would be an excellent tool for the biomedical community.
### Your con... | 6,912 |
https://github.com/huggingface/datasets/issues/6908 | Fail to load "stas/c4-en-10k" dataset since 2.16 version | [
"I am not able to reproduce the error with datasets 2.19.1:\r\n```python\r\nIn [1]: from datasets import load_dataset; ds = load_dataset(\"stas/c4-en-10k\", streaming=True); item = next(iter(ds[\"train\"])); item\r\nOut[1]: {'text': 'Beginners BBQ Class Taking Place in Missoula!\\nDo you want to get better at makin... | ### Describe the bug
When updating the datasets library to version 2.16+ (I tested 2.16, 2.19.0 and 2.19.1), using the following code to load the stas/c4-en-10k dataset
```python
from datasets import load_dataset, Dataset
dataset = load_dataset('stas/c4-en-10k')
```
and then it raises a UnicodeDecodeError like
... | 6,908 |
https://github.com/huggingface/datasets/issues/6907 | Support the deserialization of json lines files comprised of lists | [
"Update: I ended up deciding to go back to use lines of dictionaries instead of arrays, not because of this issue as my users would be capable of downloading my corpus without `datasets`, but the speed and storage savings are not currently worth breaking my API and harming the backwards compatibility of each new re... | ### Feature request
I manage a somewhat large and popular Hugging Face dataset known as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus). I recently updated my corpus to be stored in a json lines file where each line is an array and each element represents a v... | 6,907 |
https://github.com/huggingface/datasets/issues/6906 | irc_disentangle - Issue with splitting data | [
"Thank you I will try this out!\r\n\r\nOn Tue, Jun 11, 2024 at 3:55 AM Vincent Lau ***@***.***>\r\nwrote:\r\n\r\n> I add a \"streaming=True\" after the name of the dataset, and it\r\n> works.....hope it can help you\r\n>\r\n> And if you install the version datasets==2.15.0, this bug will not happen.\r\n> I don't kn... | ### Describe the bug
I am trying to access your dataset through Python using "datasets.load_dataset("irc_disentangle")" and I am getting this error message:
ValueError: Instruction "train" corresponds to no data!
### Steps to reproduce the bug
import datasets
ds = datasets.load_dataset('irc_disentangle')
ds
#... | 6,906 |
https://github.com/huggingface/datasets/issues/6905 | Extraction protocol for arrow files is not defined | [] | ### Describe the bug
Passing files with the `.arrow` extension to the data_files argument is very slow, at least when `streaming=True`.
### Steps to reproduce the bug
Basically it goes through the `_get_extraction_protocol` method located [here](https://github.com/huggingface/datasets/blob/main/src/datasets/utils/file_ut... | 6,905 |
https://github.com/huggingface/datasets/issues/6903 | Add the option of saving in parquet instead of arrow | [
"I think [`Dataset.to_parquet`](https://huggingface.co/docs/datasets/v1.10.2/package_reference/main_classes.html#datasets.Dataset.to_parquet) is what you're looking for.\r\n\r\nLet me know if I'm wrong ",
"No, it does not save the metadata json.\r\n\r\nWe have to recode all meta json load/save\r\nwith another cus... | ### Feature request
In dataset.save_to_disk('/path/to/save/dataset'),
add the option to save in Parquet format:
dataset.save_to_disk('/path/to/save/dataset', format="parquet"),
because Arrow is not used for production big data (only Parquet).
### Motivation
because arrow is not used for Production Big... | 6,903 |
https://github.com/huggingface/datasets/issues/6901 | HTTPError 403 raised by CLI convert_to_parquet when creating script branch on 3rd party repos | [] | CLI convert_to_parquet cannot create "script" branch on 3rd party repos.
It can only create it on repos where the user executing the script has write access.
Otherwise, a 403 Forbidden HTTPError is raised:
```
Traceback (most recent call last):
File "/usr/local/lib/python3.10/dist-packages/huggingface_hub/ut... | 6,901 |
https://github.com/huggingface/datasets/issues/6900 | [WebDataset] KeyError with user-defined `Features` when a field is missing in an example | [
"@lhoestq How difficult of fix is this?",
"It shouldn't be difficult, I think it's just a matter of adding the missing fields from `self.config.features` in `example` here: before it iterates on image_field_names and audio_field_names. A missing field should have a value set to None\r\n\r\nhttps://github.com/hugg... | reported at https://huggingface.co/datasets/ProGamerGov/synthetic-dataset-1m-dalle3-high-quality-captions/discussions/1
```
File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/webdataset/webdataset.py", line 109, in _generate_examples
example[field_name] = {"path": example["_... | 6,900 |
https://github.com/huggingface/datasets/issues/6899 | List of dictionary features get standardized | [] | ### Describe the bug
Hi, I'm trying to create an HF dataset from a list using Dataset.from_list.
Each sample in the list is a dict with the same keys (which will be my features). The values for each feature are a list of dictionaries, and each such dictionary has a different set of keys. However, the datasets librar... | 6,899 |
https://github.com/huggingface/datasets/issues/6897 | datasets template guide :: issue in documentation YAML | [
"Hello, @bghira.\r\n\r\nThanks for reporting. Please note that the text originating the error is not supposed to be valid YAML: it contains the instructions to generate the actual YAML content, that should replace the instructions comment.\r\n\r\nOn the other hand, I agree that it is not nice to have that YAML erro... | ### Describe the bug
There is a YAML error at the top of the page, and I don't think it's supposed to be there
### Steps to reproduce the bug
1. Browse to [this tutorial document](https://github.com/huggingface/datasets/blob/main/templates/README_guide.md)
2. Observe a big red error at the top
3. The rest of the ... | 6,897 |
https://github.com/huggingface/datasets/issues/6896 | Regression bug: `NonMatchingSplitsSizesError` for (possibly) overwritten dataset | [] | ### Describe the bug
While trying to load the dataset `https://huggingface.co/datasets/pysentimiento/spanish-tweets-small`, I get this error:
```python
---------------------------------------------------------------------------
NonMatchingSplitsSizesError Traceback (most recent call last)
[<ipyth... | 6,896 |
https://github.com/huggingface/datasets/issues/6894 | Better document defaults of to_json | [] | Better document defaults of `to_json`: the default format is [JSON-Lines](https://jsonlines.org/).
Related to:
- #6891 | 6,894 |
https://github.com/huggingface/datasets/issues/6891 | Unable to load JSON saved using `to_json` | [
"Hi @DarshanDeshpande,\r\n\r\nPlease note that the default format of the method `Dataset.to_json` is [JSON-Lines](https://jsonlines.org/): it passes `orient=\"records\", lines=True` to `pandas.DataFrame.to_json`. This format is specially useful for large datasets, since unlike regular JSON files, it does not requir... | ### Describe the bug
Datasets stored in the JSON format cannot be loaded using `json.load()`
### Steps to reproduce the bug
```
import json
from datasets import load_dataset
dataset = load_dataset("squad")
train_dataset, test_dataset = dataset["train"], dataset["validation"]
test_dataset.to_json("full_dataset... | 6,891 |
https://github.com/huggingface/datasets/issues/6890 | add `with_transform` and/or `set_transform` to IterableDataset | [] | ### Feature request
When working with a really large dataset, it would save a lot of time (and compute resources) to use either with_transform or set_transform from the Dataset class instead of waiting for the entire dataset to be mapped
### Motivation
don't want to wait for a really long dataset to map, this would ... | 6,890 |
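`IterableDataset.map` is already applied lazily, so a sketch of it standing in for `set_transform` until the feature lands (toy generator for self-containment):
```python
from datasets import IterableDataset

def gen():
    for i in range(3):
        yield {"x": i}

# The lambda runs on the fly during iteration, not ahead of time.
ids = IterableDataset.from_generator(gen).map(lambda ex: {"x": ex["x"] * 2})
for ex in ids:
    print(ex)
```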
https://github.com/huggingface/datasets/issues/6887 | FAISS load to None | [
"Hello,\r\n\r\nI'm not sure I understand. \r\nThe return value of `ds.load_faiss_index` is None as expected.\r\n\r\nI see that loading an Index on a dataset that doesn't have an `embedding` column doesn't raise an Issue. Is that the issue?\r\n\r\nSo `ds` doesn't have an `embedding` column, but we load an index that... | ### Describe the bug
I've used FAISS with Datasets and saved the index to disk.
Then loading the saved FAISS index raises no error, but the result is None, so `ds` ends up None.
```python
ds.load_faiss_index('embeddings', 'my_index.faiss')
```
### Steps to reproduce the bug
# 1.
```python
ds_with_embeddings = ds.map(lambda example: {'embeddings': model(transf... | 6,887 |
https://github.com/huggingface/datasets/issues/6886 | load_dataset with data_dir and cache_dir set fail with not supported | [] | ### Describe the bug
with python 3.11 I execute:
```py
from transformers import Wav2Vec2Processor, Data2VecAudioModel
import torch
from torch import nn
from datasets import load_dataset, concatenate_datasets
# load demo audio and set processor
dataset_clean = load_dataset("librispeech_asr", "clean", split="... | 6,886 |
https://github.com/huggingface/datasets/issues/6884 | CI is broken after jax-0.4.27 release: AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device' | [] | After jax-0.4.27 release (https://github.com/google/jax/releases/tag/jax-v0.4.27), our CI is broken with the error:
```Python traceback
AttributeError: 'jaxlib.xla_extension.DeviceList' object has no attribute 'device'. Did you mean: 'devices'?
```
See: https://github.com/huggingface/datasets/actions/runs/8997488... | 6,884 |
https://github.com/huggingface/datasets/issues/6882 | Connection Error When Using By-pass Proxies | [
"Changing the supplier of the proxy will solve this problem, or you can visit and follow the instructions in https://hf-mirror.com "
] | ### Describe the bug
I'm currently using Clash for Windows as my proxy tunnel, after exporting HTTP_PROXY and HTTPS_PROXY to the port that clash provides🤔, it runs into a connection error saying "Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.19.1/metrics/seqeval/seqeval.py (ConnectionError(M... | 6,882 |
https://github.com/huggingface/datasets/issues/6881 | AttributeError: module 'PIL.Image' has no attribute 'ExifTags' | [
"@albertvillanova @lhoestq just ran into it and requiring newer pillow isn't a solution as it breaks Pillow-SIMD which is behind Pillow quite a few versions but necessary for training with reasonable throughput. \r\n\r\nA couple things here... \r\n\r\n1. This can be done with a method that isn't an issue for any so... | When trying to load an image dataset in an old Python environment (with Pillow-8.4.0), an error is raised:
```Python traceback
AttributeError: module 'PIL.Image' has no attribute 'ExifTags'
```
The error traceback:
```Python traceback
~/huggingface/datasets/src/datasets/iterable_dataset.py in __iter__(self)
1... | 6,881 |
https://github.com/huggingface/datasets/issues/6880 | Webdataset: KeyError: 'png' on some datasets when streaming | [
"The error is caused by malformed basenames of the files within the TARs:\r\n- `15_Cohen_1-s2.0-S0929664620300449-gr3_lrg-b.png` becomes `15_Cohen_1-s2` as the grouping `__key__`, and `0-S0929664620300449-gr3_lrg-b.png` as the additional key to be added to the example\r\n- whereas the intended behavior was to use `... | reported at https://huggingface.co/datasets/tbone5563/tar_images/discussions/1
```python
>>> from datasets import load_dataset
>>> ds = load_dataset("tbone5563/tar_images")
Downloading data: 100%
1.41G/1.41G [00:48<00:00, 17.2MB/s]
Downloading data: 100%
619M/619M [00:11<00:00, 57.4MB/s]
Generating train sp... | 6,880 |
https://github.com/huggingface/datasets/issues/6879 | Batched mapping does not raise an error if values for an existing column are empty | [] | ### Describe the bug
Using `Dataset.map(fn, batched=True)` allows resizing the dataset by returning a dict of lists, all of which must be the same size. If they are not the same size, an error like `pyarrow.lib.ArrowInvalid: Column 1 named x expected length 1 but got length 0` is raised.
This is not the case if the... | 6,879 |
https://github.com/huggingface/datasets/issues/6877 | OSError: [Errno 24] Too many open files | [
"ulimit -n 8192 can solve this problem",
"> ulimit -n 8192 can solve this problem\r\n\r\nWould there be a systematic way to do this ? The data loading is part of the [MTEB](https://github.com/embeddings-benchmark/mteb) library",
"> > ulimit -n 8192 can solve this problem\r\n> \r\n> Would there be a systematic w... | ### Describe the bug
I am trying to load the 'default' subset of the following dataset which contains lots of files (828 per split): [https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb](https://huggingface.co/datasets/mteb/biblenlp-corpus-mmteb)
When trying to load it using the `load_dataset` function I get... | 6,877 |
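A sketch of applying the `ulimit -n 8192` workaround from inside Python, which answers the "systematic way" question in the thread (POSIX only; capped at the hard limit):
```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
# Raise the soft limit as far as the hard limit allows, up to 8192.
target = 8192 if hard == resource.RLIM_INFINITY else min(8192, hard)
resource.setrlimit(resource.RLIMIT_NOFILE, (target, hard))
```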
https://github.com/huggingface/datasets/issues/6869 | Download is broken for dict of dicts: FileNotFoundError | [] | It seems there is a bug when downloading a dict of dicts of URLs introduced by:
- #6794
## Steps to reproduce the bug:
```python
from datasets import DownloadManager
dl_manager = DownloadManager()
paths = dl_manager.download({"train": {"frr": "hf://datasets/wikimedia/wikipedia/20231101.frr/train-00000-of-0000... | 6,869 |
https://github.com/huggingface/datasets/issues/6868 | datasets.BuilderConfig does not work. | [
"I guess the issue is caused by the customization of BuilderConfig that you use from the repo [https://github.com/BeyonderXX/InstructUIE](https://github.com/BeyonderXX/InstructUIE/blob/master/src/uie_dataset.py). You should report to them.\r\n\r\nI see you already opened an issue in their repo:\r\n- https://github.... | ### Describe the bug
I customized a BuilderConfig and a GeneratorBasedBuilder.
Here is the code for BuilderConfig
```
class UIEConfig(datasets.BuilderConfig):
    def __init__(
        self,
        *args,
        data_dir=None,
        instruction_file=None,
        instruction_strategy=None,... | 6,868 |
https://github.com/huggingface/datasets/issues/6867 | Improve performance of JSON loader | [
"Thanks! Feel free to ping me for examples. May not respond immediately because we're all busy but would like to help.",
"Hi @natolambert, could you please give some examples of JSON files to benchmark?\r\n\r\nPlease note that this JSON file (https://huggingface.co/datasets/allenai/reward-bench-results/blob/main/... | As reported by @natolambert, loading regular JSON files with `datasets` shows poor performance.
The cause is that we use the `json` Python standard library instead of other faster libraries. See my old comment: https://github.com/huggingface/datasets/pull/2638#pullrequestreview-706983714
> There are benchmarks that... | 6,867 |
https://github.com/huggingface/datasets/issues/6866 | DataFilesNotFoundError for datasets in the open-llm-leaderboard | [
"Potentially related:\r\n* #6864\r\n* #6850\r\n* #6848\r\n* #6819",
"Hi @jerome-white, thnaks for reporting.\r\n\r\nHowever, I cannot reproduce your issue:\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n\r\n>>> get_dataset_config_names(\"open-llm-leaderboard/details_davidkim205__Rhea-72b-v0.... | ### Describe the bug
When trying to get config names or load any dataset within the open-llm-leaderboard ecosystem (`open-llm-leaderboard/details_`) I receive the DataFilesNotFoundError. For the last month or so I've been loading datasets from the leaderboard almost everyday; yesterday was the first time I started see... | 6,866 |
https://github.com/huggingface/datasets/issues/6865 | Example on Semantic segmentation contains bug | [] | ### Describe the bug
https://huggingface.co/docs/datasets/en/semantic_segmentation shows wrong example with torchvision transforms.
Specifically, as one can see in screenshot below, the object boundaries have weird colors.
<img width="689" alt="image" src="https://github.com/huggingface/datasets/assets/4803565/59... | 6,865 |
https://github.com/huggingface/datasets/issues/6864 | Dataset 'rewardsignal/reddit_writing_prompts' doesn't exist on the Hub | [
"Hi @vinodrajendran001, thanks for reporting.\r\n\r\nIndeed the dataset no longer exists on the Hub. The URL https://huggingface.co/datasets/rewardsignal/reddit_writing_prompts gives 404 Not Found error."
] | ### Describe the bug
The dataset `rewardsignal/reddit_writing_prompts` is missing from the Hugging Face Hub.
### Steps to reproduce the bug
```
from datasets import load_dataset
prompt_response_dataset = load_dataset("rewardsignal/reddit_writing_prompts", data_files="prompt_responses_full.csv", split='train[:80%]... | 6,864 |
https://github.com/huggingface/datasets/issues/6863 | Revert temporary pin huggingface-hub < 0.23.0 | [] | Revert temporary pin huggingface-hub < 0.23.0 introduced by
- #6861
once the following issue is fixed and released:
- huggingface/transformers#30618 | 6,863 |
https://github.com/huggingface/datasets/issues/6860 | CI fails after huggingface_hub-0.23.0 release: FutureWarning: "resume_download" | [
"I think this needs to be fixed on transformers.\r\n\r\nCC: @Wauplin ",
"See:\r\n- https://github.com/huggingface/transformers/issues/30618",
"Opened https://github.com/huggingface/transformers/pull/30620"
] | CI fails after latest huggingface_hub-0.23.0 release: https://github.com/huggingface/huggingface_hub/releases/tag/v0.23.0
```
FAILED tests/test_metric_common.py::LocalMetricTest::test_load_metric_bertscore - FutureWarning: `resume_download` is deprecated and will be removed in version 1.0.0. Downloads always resume... | 6,860 |
https://github.com/huggingface/datasets/issues/6858 | Segmentation fault | [
"I downloaded the jsonl file and extract it manually. \r\nThe issue seems to be related to pyarrow.json \r\n\r\n\r\n\r\npython3 -q -X faulthandler -c \"from datasets import load_dataset; load_dataset('json', data_files='/Users/scampion/Downloads/1998-09.jsonl')\"\r\nGenerating train split: 0 examples [00:00, ? exa... | ### Describe the bug
Using various versions of datasets, I'm no longer able to load that dataset without a segmentation fault.
Several other files are also affected.
### Steps to reproduce the bug
# Create a new venv
python3 -m venv venv_test
source venv_test/bin/activate
# Install the latest versio... | 6,858 |
https://github.com/huggingface/datasets/issues/6856 | CI fails on Windows for test_delete_from_hub and test_xgetsize_private due to new-line character | [
"After investigation, I have found that when a local file is uploaded to the Hub, the new line character is no longer transformed to \"\\n\": on Windows machine now it is kept as \"\\r\\n\".\r\n\r\nAny idea why this changed?\r\nCC: @lhoestq "
] | CI fails on Windows for test_delete_from_hub after the merge of:
- #6820
This is weird because the CI was green in the PR branch before merging to main.
```
FAILED tests/test_hub.py::test_delete_from_hub - AssertionError: assert [CommitOperat...\r\n---\r\n')] == [CommitOperat...in/*\n---\n')]
At index 1 ... | 6,856 |
https://github.com/huggingface/datasets/issues/6854 | Wrong example of usage when config name is missing for community script-datasets | [] | As reported by @Wauplin, when loading a community dataset with script, there is a bug in the example of usage of the error message if the dataset has multiple configs (and no default config) and the user does not pass any config. For example:
```python
>>> ds = load_dataset("google/fleurs")
ValueError: Config name i... | 6,854 |
https://github.com/huggingface/datasets/issues/6853 | Support soft links for load_datasets imagefolder | [] | ### Feature request
Load_dataset from a folder of images doesn't seem to support soft links. It would be nice if it did, especially during methods development where image folders are being curated.
### Motivation
Images are coming from a complex variety of sources and we'd like to be able to soft link directly from ... | 6,853 |
https://github.com/huggingface/datasets/issues/6852 | Write token isn't working while pushing to datasets | [] | ### Describe the bug
<img width="1001" alt="Screenshot 2024-05-01 at 3 37 06 AM" src="https://github.com/huggingface/datasets/assets/130903099/00fcf12c-fcc1-4749-8592-d263d4efcbcc">
As you can see I logged in to my account and the write token is valid.
But I can't upload on my main account and I am getting that ... | 6,852 |
https://github.com/huggingface/datasets/issues/6851 | load_dataset('emotion') UnicodeDecodeError | [] | ### Describe the bug
**emotions = load_dataset('emotion')**
_UnicodeDecodeError: 'utf-8' codec can't decode byte 0x8b in position 1: invalid start byte_
### Steps to reproduce the bug
load_dataset('emotion')
### Expected behavior
success
### Environment info
py3.10
transformers 4.41.0.dev0
datasets 2.... | 6,851 |
https://github.com/huggingface/datasets/issues/6850 | Problem loading voxpopuli dataset | [
"Version 2.18 works without problem.",
"@Namangarg110 @mohsen-goodarzi The bug appears because the number of urls is less than 16 and the algorithm is meant to work on the previously created mode for a single url as stated on line 314: https://github.com/huggingface/datasets/blob/1bf8a46cc7b096d5c547ea3794f6a4b6... | ### Describe the bug
```
Exception has occurred: FileNotFoundError
Couldn't find file at https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/{'en': 'data/en/asr_train.tsv'}
```
Error in logic for link url creation. The link should be https://huggingface.co/datasets/facebook/voxpopuli/resolve/main/da... | 6,850 |
https://github.com/huggingface/datasets/issues/6848 | Cant Downlaod Common Voice 17.0 hy-AM | [
"Same issue here."
] | ### Describe the bug
I want to download Common Voice 17.0 hy-AM but it returns an error.
```
The version_base parameter is not specified.
Please specify a compatability version level, or None.
Will assume defaults for version 1.1
@hydra.main(config_name='hfds_config', config_path=None)
/usr/local/lib/pyth... | 6,848 |
https://github.com/huggingface/datasets/issues/6847 | [Streaming] Only load requested splits without resolving files for the other splits | [
"This should help fixing this issue: https://github.com/huggingface/datasets/pull/6832",
"I'm having a similar issue when using splices:\r\n<img width=\"947\" alt=\"image\" src=\"https://github.com/huggingface/datasets/assets/28941213/2153faac-e1fe-4b6d-a79b-30b2699407e8\">\r\n<img width=\"823\" alt=\"image\" src... | e.g. [thangvip](https://huggingface.co/thangvip)/[cosmopedia_vi_math](https://huggingface.co/datasets/thangvip/cosmopedia_vi_math) has 300 splits and it takes a very long time to load only one split.
This is due to `load_dataset()` resolving the files of all the splits even if only one is needed.
In `dataset-view... | 6,847 |
https://github.com/huggingface/datasets/issues/6846 | Unimaginable super slow iteration | [
"In every iteration you load the full \"random_input\" column in memory, only then to access it's i-th element.\r\n\r\nYou can try using this instead\r\n\r\na,b=dataset[i]['random_input'],dataset[i]['random_output']"
] | ### Describe the bug
Assuming there is a dataset with 52,000 sentences, each of length 500, why does it take 20 seconds to extract a sentence from the dataset? Is there something wrong with my iteration?
### Steps to reproduce the bug
```python
import datasets
import time
import random
num_rows = 52000
n... | 6,846 |
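A sketch contrasting the two access patterns from the thread: indexing the row first touches one example, while indexing the column first materializes the whole column on every iteration.
```python
from datasets import Dataset

ds = Dataset.from_dict({
    "random_input": list(range(1000)),
    "random_output": list(range(1000)),
})

row = ds[0]                                   # fast: fetches a single row
a, b = row["random_input"], row["random_output"]
# Slow inside a loop: ds["random_input"][i] loads the full column each time.
```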
https://github.com/huggingface/datasets/issues/6845 | load_dataset doesn't support list column | [
"I encountered this same issue when loading a customized dataset for ORPO training, in which there were three columns and two of them were lists. \r\nI debugged and found that it might be caused by the type-infer mechanism and because in some chunks one of the columns is always an empty list ([]), it was regarded a... | ### Describe the bug
dataset = load_dataset("Doraemon-AI/text-to-neo4j-cypher-chinese")
got exception:
Generating train split: 1834 examples [00:00, 5227.98 examples/s]
Traceback (most recent call last):
File "/usr/local/lib/python3.11/dist-packages/datasets/builder.py", line 2011, in _prepare_split_single
... | 6,845 |
https://github.com/huggingface/datasets/issues/6843 | IterableDataset raises exception instead of retrying | [
"Thanks for reporting! I've opened a PR with a fix.",
"Thanks, @mariosasko! Related question (although I guess this is a feature request): could we have some kind of exponential back-off for these retries? Here's my reasoning:\r\n- If a one-time accidental error happens, you should retry immediately and will succ... | ### Describe the bug
In light of the recent server outages, I decided to look into whether I could somehow wrap my IterableDataset streams to retry rather than error out immediately. To my surprise, `datasets` [already supports retries](https://github.com/huggingface/datasets/issues/6172#issuecomment-1794876229). Si... | 6,843 |
https://github.com/huggingface/datasets/issues/6842 | Datasets with files with colon : in filenames cannot be used on Windows | [] | ### Describe the bug
Datasets (such as https://huggingface.co/datasets/MLCommons/peoples_speech) cannot be used on Windows because Windows does not allow colons ":" in filenames. These should be converted to alternative strings.
### Steps to reproduce the bug
1. Attempt to run load_dataset on MLCo... | 6,842 |
https://github.com/huggingface/datasets/issues/6841 | Unable to load wiki_auto_asset_turk from GEM | [
"Hi! I've opened a [PR](https://huggingface.co/datasets/GEM/wiki_auto_asset_turk/discussions/5) with a fix. While waiting for it to be merged, you can load the dataset from the PR branch with `datasets.load_dataset(\"GEM/wiki_auto_asset_turk\", revision=\"refs/pr/5\")`",
"Thanks Mario. Still getting the same issu... | ### Describe the bug
I am unable to load the wiki_auto_asset_turk dataset. I get a fatal error while trying to access wiki_auto_asset_turk and load it with datasets.load_dataset. The error (TypeError: expected str, bytes or os.PathLike object, not NoneType) is from filenames_for_dataset_split in a os.path.join call
... | 6,841 |
https://github.com/huggingface/datasets/issues/6840 | Delete uploaded files from the UI | [] | ### Feature request
Once a file is uploaded and the commit is made, I am unable to delete individual files without completely deleting the whole dataset via the website UI.
### Motivation
Would be a useful addition
### Your contribution
Would love to help out with some guidance | 6,840 |
https://github.com/huggingface/datasets/issues/6838 | Remove token arg from CLI examples | [] | As suggested by @Wauplin, see: https://github.com/huggingface/datasets/pull/6831#discussion_r1579492603
> I would not advertise the --token arg in the example as this shouldn't be the recommended way (best to login with env variable or huggingface-cli login) | 6,838 |
https://github.com/huggingface/datasets/issues/6837 | Cannot use cached dataset without Internet connection (or when servers are down) | [
"There are 2 workarounds, tho:\r\n1. Download datasets from web and just load them locally\r\n2. Use metadata directly (temporal solution, since metadata can change)\r\n```\r\nimport datasets\r\nfrom datasets.data_files import DataFilesDict, DataFilesList\r\n\r\ndata_files_list = DataFilesList(\r\n [\r\n ... | ### Describe the bug
I want to be able to use a cached dataset from HuggingFace even when I have no Internet connection (or when HuggingFace servers are down, or my company has network issues).
The reason I can't use it:
the `data_files` argument of the `datasets.load_dataset()` function gets its updates from the serve... | 6,837 |
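A sketch of the documented offline mode, which forces `load_dataset` to use the local cache instead of checking the Hub (set the variable before `datasets` is imported):
```python
import os
os.environ["HF_DATASETS_OFFLINE"] = "1"

from datasets import load_dataset
ds = load_dataset("squad")  # resolved from the local cache, if present
```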
https://github.com/huggingface/datasets/issues/6836 | ExpectedMoreSplits error on load_dataset when upgrading to 2.19.0 | [
"Get same error on same datasets too.",
"+1",
"same error"
] | ### Describe the bug
Hi there, thanks for the great library! We have been using it a lot in torchtune and it's been a huge help for us.
Regarding the bug: the same call to `load_dataset` errors with `ExpectedMoreSplits` in 2.19.0 after working fine in 2.18.0. Full details given in the repro below.
### Steps to re... | 6,836 |
https://github.com/huggingface/datasets/issues/6834 | largelisttype not supported (.from_polars()) | [] | ### Describe the bug
The following code fails because LargeListType is not supported.
This is especially a problem for .from_polars since polars uses LargeListType.
### Steps to reproduce the bug
```python
import datasets
import polars as pl
df = pl.DataFrame({"list": [[]]})
datasets.Dataset.from_pola... | 6,834 |
https://github.com/huggingface/datasets/issues/6833 | Super slow iteration with trivial custom transform | [
"Similar issue in text process \r\n\r\n```python\r\n\r\ntokenizer=AutoTokenizer.from_pretrained(model_dir[args.model])\r\ntrain_dataset=datasets.load_from_disk(dataset_dir[args.dataset],keep_in_memory=True)['train']\r\ntrain_dataset=train_dataset.map(partial(dname2func[args.dataset],tokenizer=tokenizer),batched=Tru... | ### Describe the bug
Dataset is 10X slower when applying trivial transforms:
```
import time
import numpy as np
from datasets import Dataset, Features, Array2D
a = np.zeros((800, 800))
a = np.stack([a] * 1000)
features = Features({"a": Array2D(shape=(800, 800), dtype="uint8")})
ds1 = Dataset.from_dict({"... | 6,833 |
https://github.com/huggingface/datasets/issues/6830 | Add a doc page for the convert_to_parquet CLI | [] | Follow-up to https://github.com/huggingface/datasets/pull/6795. Useful for https://github.com/huggingface/dataset-viewer/issues/2742. cc @albertvillanova | 6,830 |
https://github.com/huggingface/datasets/issues/6829 | Load and save from/to disk no longer accept pathlib.Path | [] | Reported by @vttrifonov at https://github.com/huggingface/datasets/pull/6704#issuecomment-2071168296:
> This change is breaking in
> https://github.com/huggingface/datasets/blob/f96e74d5c633cd5435dd526adb4a74631eb05c43/src/datasets/arrow_dataset.py#L1515
> when the input is `pathlib.Path`. The issue is that `url_to... | 6,829 |
https://github.com/huggingface/datasets/issues/6827 | Loading a remote dataset fails in the last release (v2.19.0) | [] | While loading a dataset with multiple splits I get an error saying `Couldn't find file at <URL>`
I am loading the dataset like so, nothing out of the ordinary.
This dataset needs a token to access it.
```
token="hf_myhftoken-sdhbdsjgkhbd"
load_dataset("speechcolab/gigaspeech", "test", cache_dir=f"gigaspeech/test... | 6,827 |
https://github.com/huggingface/datasets/issues/6824 | Winogrande does not seem to be compatible with datasets version of 1.18.0 | [
"Hi ! Do you mean 2.18 ? Can you try to update `fsspec` and `huggingface_hub` ?\r\n\r\n```\r\npip install -U fsspec huggingface_hub\r\n```",
"Yes I meant 2.18, and it works after updating `fsspec` and `huggingface_hub`. Thanks!"
] | ### Describe the bug
I get the following error when simply running `load_dataset('winogrande','winogrande_xl')`.
I do not have such an issue in the 1.17.0 version.
```Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/usr/local/lib/python3.10/dist-packages/datasets/load.py", line... | 6,824 |
https://github.com/huggingface/datasets/issues/6823 | Loading problems of Datasets with a single shard | [] | ### Describe the bug
When a dataset saved on disk has a single shard, it is not loaded the same way as when it is saved in multiple shards. I installed the latest version of datasets via pip.
### Steps to reproduce the bug
The code below reproduces the behavior. All works well when the range of the loop is 10000 bu... | 6,823 |
https://github.com/huggingface/datasets/issues/6819 | Give more details in `DataFilesNotFoundError` when getting the config names | [] | ### Feature request
After https://huggingface.co/datasets/cis-lmu/Glot500/commit/39060e01272ff228cc0ce1d31ae53789cacae8c3, the dataset viewer gives the following error:
```
{
"error": "Cannot get the config names for the dataset.",
"cause_exception": "DataFilesNotFoundError",
"cause_message": "No (support... | 6,819 |
https://github.com/huggingface/datasets/issues/6814 | `map` with `num_proc` > 1 leads to OOM | [
"Hi ! You can try to reduce `writer_batch_size`. It corresponds to the number of samples that stay in RAM before being flushed to disk"
] | ### Describe the bug
When running `map` on a parquet dataset loaded from the local machine, the RAM usage increases linearly, eventually leading to OOM. I was wondering if I should save the `cache_file` after every n steps to prevent this?
### Steps to reproduce the bug
```
ds = load_dataset("parquet", data... | 6,814 |
https://github.com/huggingface/datasets/issues/6810 | Allow deleting a subset/config from a no-script dataset | [
"Probably best to implement this as a CLI command?",
"Thanks for your comment, @mariosasko. Or maybe both (in Python and as CLI command)? The Python command would be just the reverse of `push_to_hub`...\r\n\r\nI am working on a draft implementation, so we can discuss about the API and UX."
] | As proposed by @BramVanroy, it would be neat to have this functionality through the API. | 6,810 |
https://github.com/huggingface/datasets/issues/6808 | Make convert_to_parquet CLI command create script branch | [] | As proposed by @severo, maybe we should add this functionality as well to the CLI command to convert a script-dataset to Parquet. See: https://github.com/huggingface/datasets/pull/6795#discussion_r1562819168
> When providing support, we sometimes suggest that users store their script in a script branch. What do you th... | 6,808 |
https://github.com/huggingface/datasets/issues/6805 | Batched mapping of existing string column casts boolean to string | [
"This seems to be hardcoded behavior in table.py `array_cast`.\r\n```python\r\nif (\r\n not allow_number_to_str\r\n and pa.types.is_string(pa_type)\r\n and (pa.types.is_floating(array.type) or pa.types.is_integer(array.type))\r\n ):\r\n raise TypeError(\r\n ... | ### Describe the bug
Let the dataset contain a column named 'a', which is of the string type.
If 'a' is converted to a boolean using batched mapping, the mapper automatically casts the boolean to a string (e.g., True -> 'true').
It only happens when the original column name and the mapped column name are identical.
Th... | 6,805 |
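A sketch of the workaround implied by the report: map into a fresh column name (so no cast back to the old string dtype is attempted), then swap the names ('a' matches the report's column):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": ["x", "", "y"]})
ds = ds.map(lambda batch: {"a_bool": [bool(v) for v in batch["a"]]}, batched=True)
ds = ds.remove_columns("a").rename_column("a_bool", "a")
print(ds.features)  # 'a' is now a bool column
```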
https://github.com/huggingface/datasets/issues/6801 | got fileNotFound | [
"Hi! I'll open a PR on the Hub to fix this, but please use the Hub's [Community tab](https://huggingface.co/datasets/nyanko7/danbooru2023/discussions) to report such issues in the future.",
"I've opened a [PR](https://huggingface.co/datasets/nyanko7/danbooru2023/discussions/8) in the repo, so let's continue the d... | ### Describe the bug
When I use load_dataset to load the nyanko7/danbooru2023 dataset, the cache is read in the form of a symlink. There may be a problem with the arrow_dataset initialization process, and I get FileNotFoundError: [Errno 2] No such file or directory: '2945000.jpg'
### Steps to reproduce the bug
#code... | 6,801 |
https://github.com/huggingface/datasets/issues/6800 | High overhead when loading lots of subsets from the same dataset | [
"Hi !\r\n\r\nIt's possible to multiple files at once:\r\n\r\n```python\r\ndata_files = \"data/*.jsonl\"\r\n# Or pass a list of files\r\nlangs = ['ka-ml', 'br-sr', 'ka-pt', 'id-ko', ..., 'fi-ze_zh', 'he-kk', 'ka-tr']\r\ndata_files = [f\"data/{lang}.jsonl\" for lang in langs]\r\nds = load_dataset(\"loicmagne/open-sub... | ### Describe the bug
I have a multilingual dataset that contains a lot of subsets. Each subset corresponds to a pair of languages, you can see here an example with 250 subsets: [https://hf.co/datasets/loicmagne/open-subtitles-250-bitext-mining](). As part of the MTEB benchmark, we may need to load all the subsets of t... | 6,800 |
https://github.com/huggingface/datasets/issues/6798 | `DatasetBuilder._split_generators` incomplete type annotation | [
"Good catch! Feel free to open a PR with the suggested fix :).",
"There is also the [`MockDownloadManager`](https://github.com/JonasLoos/datasets/blob/main/src/datasets/download/mock_download_manager.py#L33), which seems like it might get passed here too. However, to me, it doesn't really seem relevant to the use... | ### Describe the bug
The [`DatasetBuilder._split_generators`](https://github.com/huggingface/datasets/blob/0f27d7b77c73412cfc50b24354bfd7a3e838202f/src/datasets/builder.py#L1449) function has currently the following signature:
```python
class DatasetBuilder:
def _split_generators(self, dl_manager: DownloadMan... | 6,798 |
https://github.com/huggingface/datasets/issues/6796 | CI is broken due to hf-internal-testing/dataset_with_script | [
"Finally:\r\n- the initial issue seems it was temporary\r\n- there is a different issue now: https://github.com/huggingface/datasets/actions/runs/8627153993/job/23646584590?pr=6797\r\n```\r\nFAILED tests/test_load.py::ModuleFactoryTest::test_HubDatasetModuleFactoryWithParquetExport - datasets.utils._dataset_viewer.... | CI is broken for test_load_dataset_distributed_with_script. See: https://github.com/huggingface/datasets/actions/runs/8614926216/job/23609378127
```
FAILED tests/test_load.py::test_load_dataset_distributed_with_script[None] - assert False
+ where False = all(<generator object test_load_dataset_distributed_with_scr... | 6,796 |
https://github.com/huggingface/datasets/issues/6793 | Loading just one particular split is not possible for imagenet-1k | [] | ### Describe the bug
I'd expect the following code to download just the validation split but instead I get all data on my disk (train, test and validation splits)
```python
from datasets import load_dataset
dataset = load_dataset("imagenet-1k", split="validation", trust_remote_code=True)
```
Is it expected to work li... | 6,793 |
https://github.com/huggingface/datasets/issues/6791 | `add_faiss_index` raises ValueError: not enough values to unpack (expected 2, got 1) | [
"I realized I was passing a string column to this instead of float. Is it possible to add a warning or error to prevent users from falsely believing there's a bug?",
"Hello!\r\n\r\nI agree that we could add some safeguards around the type of `ds[column]`. At least for FAISS, we need the column to be made of embed... | ### Describe the bug
Calling `add_faiss_index` on a `Dataset` with a column argument raises a ValueError. The following is the trace
```python
214 def replacement_add(self, x):
215 """Adds vectors to the index.
216 The index must be trained before vectors can be added to it.
217 Th... | 6,791 |
https://github.com/huggingface/datasets/issues/6790 | PyArrow 'Memory mapping file failed: Cannot allocate memory' bug | [] | ### Describe the bug
Hello,
I've been struggling with a problem using Huggingface datasets caused by PyArrow memory allocation. I finally managed to solve it, and thought to document it since similar issues have been raised here before (https://github.com/huggingface/datasets/issues/5710, https://github.com/huggi... | 6,790 |
https://github.com/huggingface/datasets/issues/6789 | Issue with map | [
"Default `writer_batch_size `is set to 1000 (see [map](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/main_classes#datasets.Dataset.map)).\r\nThe \"tmp1335llua\" is probably the temp file it creates while writing to disk.\r\nMaybe try lowering the `writer_batch_size`.\r\n\r\nFor multi-processing ... | ### Describe the bug
Map has been taking extremely long to preprocess my data.
It seems to process 1000 examples (which it does really fast in about 10 seconds), then it hangs for a good 1-2 minutes, before it moves on to the next batch of 1000 examples.
It also keeps eating up my hard drive space for some reaso... | 6,789 |
https://github.com/huggingface/datasets/issues/6788 | A Question About the Map Function | [
"All data is saved in the arrow format on disk.\r\nIf you return a tensor it gets converted to arrow before saving to disk when using map.\r\n\r\nTo get a tensor when you access data elements you can use `dataset.set_format(\"pt\")`.\r\nNote that this just changes how the data is loaded, not how it is stored.",
"... | ### Describe the bug
Hello,
I have a question regarding the map function in the Hugging Face datasets.
The situation is as follows: when I load a jsonl file using load_dataset(..., streaming=False), and then utilize the map function to process it, I specify that the returned example should be of type Torch.ten... | 6,788 |
https://github.com/huggingface/datasets/issues/6787 | TimeoutError in map | [
"From my current understanding, this timeout is only used when we need to get the results.\r\n\r\nOne of:\r\n1. All tasks are done\r\n2. One worker died\r\n\r\nYour function should work fine and it's definitely a bug if it doesn't.",
"When one of the `map`'s worker processes crashes, the linked code re-raises an ... | ### Describe the bug
```python
from datasets import Dataset
def worker(example):
    while True:
        continue
    example['a'] = 100
    return example
data = Dataset.from_list([{"a": 1}, {"a": 2}])
data = data.map(worker)
print(data[0])
```
I'm implementing a worker function whose runtime will de... | 6,787 |
https://github.com/huggingface/datasets/issues/6783 | AttributeError: module 'numpy' has no attribute 'object'. in Kaggle Notebook | [
"Hi! You can fix this by updating the `datasets` package with `pip install -U datasets` and restarting the notebook.\r\n",
"Kaggle removed the problematic `datasets==2.1.0` pin last week, so I'm closing this issue (now it pre-installs the latest version)."
] | ### Describe the bug
# problem
I can't resample audio dataset in Kaggle Notebook. It looks like some code in `datasets` library use aliases that were deprecated in NumPy 1.20.
## code for resampling
```
from datasets import load_dataset, Audio
from transformers import AutoFeatureExtractor
from transformers imp... | 6,783 |
https://github.com/huggingface/datasets/issues/6782 | Image cast_storage very slow for arrays (e.g. numpy, tensors) | [
"This may be a solution that only changes `cast_storage` of `Image`.\r\nHowever, I'm not totally sure that the assumptions hold that are made about the `ListArray`.\r\n\r\n```python\r\nelif pa.types.is_list(storage.type):\r\n from .features import Array3DExtensionType\r\n\r\n def get_shapes(arr):\r\n s... | Update: see comments below
### Describe the bug
Operations that save an image from a path are very slow.
I believe the reason for this is that the image data (`numpy`) is converted into `pyarrow` format but then back to python using `.pylist()` before being converted to a numpy array again.
`pylist` is alread... | 6,782 |
https://github.com/huggingface/datasets/issues/6778 | Dataset.to_csv() missing commas in columns with lists | [
"Hello!\r\n\r\nThis is due to how pandas write numpy arrays to csv. [Source](https://stackoverflow.com/questions/54753179/to-csv-saves-np-array-as-string-instead-of-as-a-list)\r\nTo fix this, you can convert them to list yourselves.\r\n\r\n```python\r\ndf = ds.to_pandas()\r\ndf['int'] = df['int'].apply(lambda arr: ... | ### Describe the bug
The `to_csv()` method does not output commas in lists, so when the Dataset is loaded back in, the data structure of the list column is not correct.
Here's an example:
Obviously, it's not as trivial as inserting commas in the list, since it's a comma-separated file. But hopefully there... | 6,778 |
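A sketch completing the workaround from the comment: convert the numpy arrays back to Python lists before writing, so the CSV cells keep their commas (the column name 'int' follows the comment):
```python
from datasets import Dataset

ds = Dataset.from_dict({"int": [[1, 2], [3, 4]]})
df = ds.to_pandas()
df["int"] = df["int"].apply(lambda arr: arr.tolist())  # numpy array -> list
df.to_csv("out.csv", index=False)
```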
https://github.com/huggingface/datasets/issues/6777 | .Jsonl metadata not detected | [
"Hi! `metadata.jsonl` (or `metadata.csv`) is the only allowed name for the `imagefolder`'s metadata files.",
"@mariosasko hey i tried with metadata.jsonl also and it still doesn't get the right columns",
"@mariosasko it says metadata.csv not found\r\n<img width=\"1150\" alt=\"image\" src=\"https://github.com/hu... | ### Describe the bug
Hi, I have the following directory structure:
|--dataset
| |-- images
| |-- metadata1000.csv
| |-- metadata1000.jsonl
| |-- padded_images
Example of metadata1000.jsonl file
{"caption": "a drawing depicts a full shot of a black t-shirt with a triangular pattern on the front there is a white... | 6,777 |