| url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 51 51 | id int64 1.95B 1.99B | node_id stringlengths 18 18 | number int64 6.32k 6.41k | title stringlengths 19 134 | user dict | labels list | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees list | milestone null | comments sequence | created_at int64 1.7k 1.7k | updated_at int64 1.7k 1.7k | closed_at int64 1.7k 1.7k ⌀ | author_association stringclasses 3 values | active_lock_reason null | draft null | pull_request null | body stringlengths 63 19.4k ⌀ | reactions dict | timeline_url stringlengths 70 70 | performed_via_github_app null | state_reason stringclasses 2 values |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6412 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6412/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6412/comments | https://api.github.com/repos/huggingface/datasets/issues/6412/events | https://github.com/huggingface/datasets/issues/6412 | 1,992,401,594 | I_kwDODunzps52waK6 | 6,412 | User token is printed out! | {
"login": "mohsen-goodarzi",
"id": 25702692,
"node_id": "MDQ6VXNlcjI1NzAyNjky",
"avatar_url": "https://avatars.githubusercontent.com/u/25702692?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mohsen-goodarzi",
"html_url": "https://github.com/mohsen-goodarzi",
"followers_url": "https://api... | [] | open | false | null | [] | null | [
"Indeed, this is not a good practice. I've opened a PR that removes the token value from the (deprecation) warning."
] | 1,699 | 1,699 | null | NONE | null | null | null | This line prints user token on command line! Is it safe?
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/load.py#L2091 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6412/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6412/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6410 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6410/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6410/comments | https://api.github.com/repos/huggingface/datasets/issues/6410/events | https://github.com/huggingface/datasets/issues/6410 | 1,992,100,209 | I_kwDODunzps52vQlx | 6,410 | Datasets does not load HuggingFace Repository properly | {
"login": "MikeDoes",
"id": 40600201,
"node_id": "MDQ6VXNlcjQwNjAwMjAx",
"avatar_url": "https://avatars.githubusercontent.com/u/40600201?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MikeDoes",
"html_url": "https://github.com/MikeDoes",
"followers_url": "https://api.github.com/users/Mik... | [] | open | false | null | [] | null | [
"Hi! You can avoid the error by requesting only the `jsonl` files. `dataset = load_dataset(\"ai4privacy/pii-masking-200k\", data_files=[\"*.jsonl\"])`.\r\n\r\nOur data file inference does not filter out (incompatible) `json` files because `json` and `jsonl` use the same builder. Still, I think the inference should... | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
Dear Datasets team,
We just have published a dataset on Huggingface:
https://huggingface.co/ai4privacy
However, when trying to read it using the Dataset library we get an error. As I understand jsonl files are compatible, could you please clarify how we can solve the issue? Please let me kn... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6410/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6410/timeline | null | null |
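The workaround quoted in the comment above can be sketched as follows; it restricts data-file inference to the `*.jsonl` files so that incompatible plain `json` files in the repository are ignored (repository name taken from the issue, exact file layout assumed):

```python
from datasets import load_dataset

# Only load the JSONL data files; plain .json files in the repo are skipped.
dataset = load_dataset("ai4privacy/pii-masking-200k", data_files=["*.jsonl"])
print(dataset)
```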
https://api.github.com/repos/huggingface/datasets/issues/6409 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6409/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6409/comments | https://api.github.com/repos/huggingface/datasets/issues/6409/events | https://github.com/huggingface/datasets/issues/6409 | 1,991,960,865 | I_kwDODunzps52uukh | 6,409 | using DownloadManager to download from local filesystem and disable_progress_bar, there will be an exception | {
"login": "neiblegy",
"id": 16574677,
"node_id": "MDQ6VXNlcjE2NTc0Njc3",
"avatar_url": "https://avatars.githubusercontent.com/u/16574677?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/neiblegy",
"html_url": "https://github.com/neiblegy",
"followers_url": "https://api.github.com/users/nei... | [] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
I'm using datasets.download.download_manager.DownloadManager to download files like "file:///a/b/c.txt", and I call disable_progress_bar() to disable the progress bar. This raises the following exception:
`AttributeError: 'function' object has no attribute 'close'
Exception ignored in: <function TqdmCallback.... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6409/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6409/timeline | null | null |
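A minimal sketch of the setup described in the report above, assuming the `datasets` import paths at the time of the issue; combining a local `file://` URL with a disabled progress bar is what triggers the reported `TqdmCallback` error:

```python
from datasets.download.download_manager import DownloadManager
from datasets.utils.logging import disable_progress_bar

disable_progress_bar()                     # progress bars off, as in the report
dl_manager = DownloadManager()
local_path = dl_manager.download("file:///a/b/c.txt")  # local-filesystem URL
```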
https://api.github.com/repos/huggingface/datasets/issues/6408 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6408/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6408/comments | https://api.github.com/repos/huggingface/datasets/issues/6408/events | https://github.com/huggingface/datasets/issues/6408 | 1,991,902,972 | I_kwDODunzps52ugb8 | 6,408 | IterableDataset lost but not keep columns when map function adding columns with names in remove_columns | {
"login": "shmily326",
"id": 24571857,
"node_id": "MDQ6VXNlcjI0NTcxODU3",
"avatar_url": "https://avatars.githubusercontent.com/u/24571857?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shmily326",
"html_url": "https://github.com/shmily326",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
IterableDataset loses, rather than keeps, columns when the map function adds columns whose names are listed in remove_columns; Dataset does not.
This may be related to the code below:
https://github.com/huggingface/datasets/blob/06c3ffb8d068b6307b247164b10f7c7311cefed4/src/datasets/iterable_dataset.py#L750-L756
### Steps t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6408/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6408/timeline | null | null |
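The behaviour described above can be illustrated with a toy dataset (column name and values are assumptions); per the `map` documentation, a column named in `remove_columns` but re-added by the function is expected to be kept, which is what the regular `Dataset` does:

```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2]})

# remove_columns is applied before the function output is merged back,
# so the re-added "a" column should survive with the new values.
mapped = ds.map(lambda x: {"a": x["a"] + 10}, remove_columns=["a"])
print(mapped["a"])  # [11, 12]

# The report says the same call on ds.to_iterable_dataset() drops "a" instead.
```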
https://api.github.com/repos/huggingface/datasets/issues/6407 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6407/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6407/comments | https://api.github.com/repos/huggingface/datasets/issues/6407/events | https://github.com/huggingface/datasets/issues/6407 | 1,991,514,079 | I_kwDODunzps52tBff | 6,407 | Loading the dataset from private S3 bucket gives "TypeError: cannot pickle '_contextvars.Context' object" | {
"login": "eawer",
"id": 1741779,
"node_id": "MDQ6VXNlcjE3NDE3Nzk=",
"avatar_url": "https://avatars.githubusercontent.com/u/1741779?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eawer",
"html_url": "https://github.com/eawer",
"followers_url": "https://api.github.com/users/eawer/follower... | [] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
I'm trying to read the parquet file from the private s3 bucket using the `load_dataset` function, but I receive `TypeError: cannot pickle '_contextvars.Context' object` error
I'm working on a machine with `~/.aws/credentials` file. I can't give credentials and the path to a file in a private bu... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6407/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6407/timeline | null | null |
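For context, a hedged sketch of the kind of call the report describes: reading Parquet from a private S3 bucket through `load_dataset`, with credentials resolved by `s3fs` from `~/.aws/credentials` (bucket and key are hypothetical):

```python
from datasets import load_dataset

data_files = "s3://my-private-bucket/path/to/train.parquet"  # hypothetical path
dataset = load_dataset(
    "parquet",
    data_files=data_files,
    storage_options={"anon": False},  # use credentials from ~/.aws/credentials
)
```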
https://api.github.com/repos/huggingface/datasets/issues/6406 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6406/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6406/comments | https://api.github.com/repos/huggingface/datasets/issues/6406/events | https://github.com/huggingface/datasets/issues/6406 | 1,990,469,045 | I_kwDODunzps52pCW1 | 6,406 | CI Build PR Documentation is broken: ImportError: cannot import name 'TypeAliasType' from 'typing_extensions' | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,699 | 1,699 | 1,699 | MEMBER | null | null | null | Our CI Build PR Documentation is broken. See: https://github.com/huggingface/datasets/actions/runs/6799554060/job/18486828777?pr=6390
```
ImportError: cannot import name 'TypeAliasType' from 'typing_extensions'
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6406/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6406/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6405 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6405/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6405/comments | https://api.github.com/repos/huggingface/datasets/issues/6405/events | https://github.com/huggingface/datasets/issues/6405 | 1,990,358,743 | I_kwDODunzps52onbX | 6,405 | ConfigNamesError on a simple CSV file | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/foll... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"The viewer is working now. \r\n\r\nBased on the repo commit history, the bug was due to the incorrect format of the `features` field in the README YAML (`Value` requires `dtype`, e.g., `Value(\"string\")`, but it was not specified)",
"Feel free to close the issue",
"Oh, OK! Thanks. So, there was no reason to o... | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | null | null | See https://huggingface.co/datasets/Nguyendo1999/mmath/discussions/1
```
Error code: ConfigNamesError
Exception: TypeError
Message: __init__() missing 1 required positional argument: 'dtype'
Traceback: Traceback (most recent call last):
File "/src/services/worker/src/worker/job_runn... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6405/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6405/timeline | null | completed |
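The root cause named in the comment above (a `Value` feature declared without its required `dtype`) is easiest to see in code; the equivalent Python declaration must pass the dtype explicitly (column names here are hypothetical):

```python
from datasets import Features, Value

# Correct: every Value carries a dtype such as "string" or "float64".
features = Features({"question": Value("string"), "answer": Value("float64")})

# Incorrect, and the source of the ConfigNamesError above: Value() with no
# dtype raises "__init__() missing 1 required positional argument: 'dtype'".
```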
https://api.github.com/repos/huggingface/datasets/issues/6403 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6403/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6403/comments | https://api.github.com/repos/huggingface/datasets/issues/6403/events | https://github.com/huggingface/datasets/issues/6403 | 1,990,098,817 | I_kwDODunzps52nn-B | 6,403 | Cannot import datasets on google colab (python 3.10.12) | {
"login": "nabilaannisa",
"id": 15389235,
"node_id": "MDQ6VXNlcjE1Mzg5MjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/15389235?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nabilaannisa",
"html_url": "https://github.com/nabilaannisa",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | [
"You are most likely using an outdated version of `datasets` in the notebook, which can be verified with the `!datasets-cli env` command. You can run `!pip install -U datasets` to update the installation.",
"okay, it works! thank you so much! 😄 "
] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
I'm trying A full colab demo notebook of zero-shot-distillation from https://github.com/huggingface/transformers/tree/main/examples/research_projects/zero-shot-distillation but i got this type of error when importing datasets on my google colab (python version is 3.10.12)
 not working | {
"login": "userbox020",
"id": 47074021,
"node_id": "MDQ6VXNlcjQ3MDc0MDIx",
"avatar_url": "https://avatars.githubusercontent.com/u/47074021?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/userbox020",
"html_url": "https://github.com/userbox020",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"Seems like it's a problem with the dataset, since in the [README](https://huggingface.co/datasets/Hyperspace-Technologies/scp-wiki-text/blob/main/README.md) the validation is not specified. Try cloning the dataset, removing the README (or validation split), and loading it locally/ "
] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
```
(datasets) mruserbox@guru-X99:/media/10TB_HHD/_LLM_DATASETS$ python dataset.py
Downloading readme: 100%|███████████████████████████████████| 360/360 [00:00<00:00, 2.16MB/s]
Downloading data: 100%|█████████████████████████████████| 65.1M/65.1M [00:19<00:00, 3.38MB/s]
Downloading data: 100... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6401/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6401/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6400 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6400/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6400/comments | https://api.github.com/repos/huggingface/datasets/issues/6400/events | https://github.com/huggingface/datasets/issues/6400 | 1,988,571,317 | I_kwDODunzps52hzC1 | 6,400 | Safely load datasets by disabling execution of dataset loading script | {
"login": "irenedea",
"id": 14367635,
"node_id": "MDQ6VXNlcjE0MzY3NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/14367635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/irenedea",
"html_url": "https://github.com/irenedea",
"followers_url": "https://api.github.com/users/ire... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"great idea IMO\r\n\r\nthis could be a `trust_remote_code=True` flag like in transformers. We could also default to loading the Parquet conversion rather than executing code (for dataset repos that have both)",
"@julien-c that would be great!"
] | 1,699 | 1,699 | null | NONE | null | null | null | ### Feature request
Is there a way to disable execution of dataset loading script using `load_dataset`? This is a security vulnerability that could lead to arbitrary code execution.
Any suggested workarounds are welcome as well.
### Motivation
This is a security vulnerability that could lead to arbitrary code e... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6400/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6400/timeline | null | null |
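The flag discussed in the comments above was only a proposal at the time of this issue; assuming a `trust_remote_code` parameter like the one in `transformers` (and added to `datasets` in later releases), safe loading would look roughly like this:

```python
from datasets import load_dataset

# Refuse to execute any loading script shipped with the repository;
# only packaged/pre-converted data formats are used ("some_org/some_dataset" is a placeholder).
dataset = load_dataset("some_org/some_dataset", trust_remote_code=False)
```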
https://api.github.com/repos/huggingface/datasets/issues/6399 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6399/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6399/comments | https://api.github.com/repos/huggingface/datasets/issues/6399/events | https://github.com/huggingface/datasets/issues/6399 | 1,988,368,503 | I_kwDODunzps52hBh3 | 6,399 | TypeError: Cannot convert pyarrow.lib.ChunkedArray to pyarrow.lib.Array | {
"login": "y-hwang",
"id": 76236359,
"node_id": "MDQ6VXNlcjc2MjM2MzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/76236359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/y-hwang",
"html_url": "https://github.com/y-hwang",
"followers_url": "https://api.github.com/users/y-hwan... | [] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
Hi, I am preprocessing a large custom dataset with numpy arrays. I am running into this TypeError during writing in a dataset.map() function. I've tried decreasing writer batch size, but this error persists. This error does not occur for smaller datasets.
Thank you!
### Steps to repro... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6399/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6399/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6397 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6397/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6397/comments | https://api.github.com/repos/huggingface/datasets/issues/6397/events | https://github.com/huggingface/datasets/issues/6397 | 1,987,622,152 | I_kwDODunzps52eLUI | 6,397 | Raise a different exception for inexisting dataset vs files without known extension | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/foll... | [] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | CONTRIBUTOR | null | null | null | See https://github.com/huggingface/datasets-server/issues/2082#issuecomment-1805716557
We have the same error for:
- https://huggingface.co/datasets/severo/a_dataset_that_does_not_exist: a dataset that does not exist
- https://huggingface.co/datasets/severo/test_files_without_extension: a dataset with files withou... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6397/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6397/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6396 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6396/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6396/comments | https://api.github.com/repos/huggingface/datasets/issues/6396/events | https://github.com/huggingface/datasets/issues/6396 | 1,987,308,077 | I_kwDODunzps52c-ot | 6,396 | Issue with pyarrow 14.0.1 | {
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/foll... | [] | closed | false | null | [] | null | [
"Looks like we should stop using `PyExtensionType` and use `ExtensionType` instead\r\n\r\nsee https://github.com/apache/arrow/commit/f14170976372436ec1d03a724d8d3f3925484ecf",
"https://github.com/huggingface/datasets-server/pull/2089#pullrequestreview-1724449532\r\n\r\n> Yes, I understand now: they have disabled ... | 1,699 | 1,699 | 1,699 | CONTRIBUTOR | null | null | null | See https://github.com/huggingface/datasets-server/pull/2089 for reference
```
from datasets import (Array2D, Dataset, Features)
feature_type = Array2D(shape=(2, 2), dtype="float32")
content = [[0.0, 0.0], [0.0, 0.0]]
features = Features({"col": feature_type})
dataset = Dataset.from_dict({"col": [content]}, fea... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6396/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6396/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6395 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6395/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6395/comments | https://api.github.com/repos/huggingface/datasets/issues/6395/events | https://github.com/huggingface/datasets/issues/6395 | 1,986,484,124 | I_kwDODunzps52Z1ec | 6,395 | Add ability to set lock type | {
"login": "leoleoasd",
"id": 37735580,
"node_id": "MDQ6VXNlcjM3NzM1NTgw",
"avatar_url": "https://avatars.githubusercontent.com/u/37735580?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leoleoasd",
"html_url": "https://github.com/leoleoasd",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | ### Feature request
Allow setting file lock type, maybe from an environment variable
Currently, it only depends on whether fnctl is available:
https://github.com/huggingface/datasets/blob/12ebe695b4748c5a26e08b44ed51955f74f5801d/src/datasets/utils/filelock.py#L463-L470C16
### Motivation
In my environment... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6395/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6395/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6394 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6394/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6394/comments | https://api.github.com/repos/huggingface/datasets/issues/6394/events | https://github.com/huggingface/datasets/issues/6394 | 1,985,947,116 | I_kwDODunzps52XyXs | 6,394 | TorchFormatter images (H, W, C) instead of (C, H, W) format | {
"login": "Modexus",
"id": 37351874,
"node_id": "MDQ6VXNlcjM3MzUxODc0",
"avatar_url": "https://avatars.githubusercontent.com/u/37351874?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Modexus",
"html_url": "https://github.com/Modexus",
"followers_url": "https://api.github.com/users/Modexu... | [] | open | false | null | [] | null | [
"Here's a PR for that. https://github.com/huggingface/datasets/pull/6402\r\n\r\nIt's not backward compatible, unfortunately. "
] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
Using .set_format("torch") leads to images having shape (H, W, C), the same as in numpy.
However, pytorch normally uses (C, H, W) format.
Maybe I'm missing something but this makes the format a lot less useful as I then have to permute it anyways.
If not using the format it is possible to ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6394/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6394/timeline | null | null |
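Until the formatter changes, the usual workaround for the layout described above is a `permute`; a small sketch using an arbitrary public image dataset (the dataset name is only for illustration):

```python
from datasets import load_dataset

ds = load_dataset("beans", split="train").with_format("torch")
img_hwc = ds[0]["image"]            # (H, W, C), as reported above
img_chw = img_hwc.permute(2, 0, 1)  # (C, H, W), the layout torch models expect
```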
https://api.github.com/repos/huggingface/datasets/issues/6393 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6393/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6393/comments | https://api.github.com/repos/huggingface/datasets/issues/6393/events | https://github.com/huggingface/datasets/issues/6393 | 1,984,913,259 | I_kwDODunzps52T19r | 6,393 | Filter occasionally hangs | {
"login": "dakinggg",
"id": 43149077,
"node_id": "MDQ6VXNlcjQzMTQ5MDc3",
"avatar_url": "https://avatars.githubusercontent.com/u/43149077?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dakinggg",
"html_url": "https://github.com/dakinggg",
"followers_url": "https://api.github.com/users/dak... | [] | open | false | null | [] | null | [
"It looks like I may not be the first to encounter this: https://github.com/huggingface/datasets/issues/3172",
"Adding some more information, it seems to occur more frequently with large (millions of samples) datasets.",
"More information. My code is structured as (1) load (2) map (3) filter (4) filter. It was ... | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
A call to `.filter` occasionally hangs (after the filter is complete, according to tqdm)
There is a trace produced
```
Exception ignored in: <function Dataset.__del__ at 0x7efb48130c10>
Traceback (most recent call last):
File "/usr/lib/python3/dist-packages/datasets/arrow_dataset.py", l... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6393/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6393/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6392 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6392/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6392/comments | https://api.github.com/repos/huggingface/datasets/issues/6392/events | https://github.com/huggingface/datasets/issues/6392 | 1,984,369,545 | I_kwDODunzps52RxOJ | 6,392 | `push_to_hub` is not robust to hub closing connection | {
"login": "msis",
"id": 577139,
"node_id": "MDQ6VXNlcjU3NzEzOQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/577139?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/msis",
"html_url": "https://github.com/msis",
"followers_url": "https://api.github.com/users/msis/followers",
... | [] | open | false | null | [] | null | [
"Hi! We made some improvements to `push_to_hub` to make it more robust a couple of weeks ago but haven't published a release in the meantime, so it would help if you could install `datasets` from `main` (`pip install https://github.com/huggingface/datasets`) and let us know if this improved version of `push_to_hub`... | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
Similar to #6172, `push_to_hub` will crash if the Hub resets the connection, raising the following error:
```
Pushing dataset shards to the dataset hub: 32%|███▏ | 54/171 [06:38<14:23, 7.38s/it]
Traceback (most recent call last):
File "/admin/home-piraka9011/.virtualenvs/w2v2/lib/python3.8/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6392/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6392/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6389 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6389/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6389/comments | https://api.github.com/repos/huggingface/datasets/issues/6389/events | https://github.com/huggingface/datasets/issues/6389 | 1,983,545,744 | I_kwDODunzps52OoGQ | 6,389 | Index 339 out of range for dataset of size 339 <-- save_to_file() | {
"login": "jaggzh",
"id": 20318973,
"node_id": "MDQ6VXNlcjIwMzE4OTcz",
"avatar_url": "https://avatars.githubusercontent.com/u/20318973?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jaggzh",
"html_url": "https://github.com/jaggzh",
"followers_url": "https://api.github.com/users/jaggzh/fo... | [] | open | false | null | [] | null | [
"Hi! Can you make the above reproducer self-contained by adding code that generates the data?"
] | 1,699 | 1,699 | null | NONE | null | null | null | ### Describe the bug
When saving out some Audio() data.
The data is audio recordings with associated 'sentences'.
(They use the audio 'bytes' approach because they're clips within audio files).
Code is below the traceback (I can't upload the voice audio/text (it's not even me)).
```
Traceback (most recent call ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6389/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6389/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6388 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6388/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6388/comments | https://api.github.com/repos/huggingface/datasets/issues/6388/events | https://github.com/huggingface/datasets/issues/6388 | 1,981,136,093 | I_kwDODunzps52Fbzd | 6,388 | How to create 3d medical imgae dataset? | {
"login": "QingYunA",
"id": 41177312,
"node_id": "MDQ6VXNlcjQxMTc3MzEy",
"avatar_url": "https://avatars.githubusercontent.com/u/41177312?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/QingYunA",
"html_url": "https://github.com/QingYunA",
"followers_url": "https://api.github.com/users/Qin... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | ### Feature request
I am new to Hugging Face; after looking through the `datasets` docs, I can't find how to create a dataset that contains 3D medical images (files ending with '.mhd', '.dcm', '.nii').
### Motivation
help us to upload 3d medical dataset to huggingface!
### Your contribution
I'll submit a PR if I find a way to... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6388/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6388/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6387 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6387/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6387/comments | https://api.github.com/repos/huggingface/datasets/issues/6387/events | https://github.com/huggingface/datasets/issues/6387 | 1,980,224,020 | I_kwDODunzps52B9IU | 6,387 | How to load existing downloaded dataset ? | {
"login": "liming-ai",
"id": 73068772,
"node_id": "MDQ6VXNlcjczMDY4Nzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/73068772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/liming-ai",
"html_url": "https://github.com/liming-ai",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Feel free to use `dataset.save_to_disk(...)`, then scp the directory containing the saved dataset and reload it on your other machine using `dataset = load_from_disk(...)`"
] | 1,699 | 1,699 | null | NONE | null | null | null | Hi @mariosasko @lhoestq @katielink
Thanks for your contribution and hard work.
### Feature request
First, I download a dataset as normal by:
```
from datasets import load_dataset
dataset = load_dataset('username/data_name', cache_dir='data')
```
The dataset format in `data` directory will be:
```
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6387/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6387/timeline | null | null |
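The reply above points to `save_to_disk`/`load_from_disk`; a minimal sketch of that round trip (the repository name and local paths reuse the placeholders from the issue):

```python
from datasets import load_dataset, load_from_disk

dataset = load_dataset("username/data_name", cache_dir="data")
dataset.save_to_disk("data_name_local")      # copy this folder to the other machine
dataset = load_from_disk("data_name_local")  # reload without network access
```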
https://api.github.com/repos/huggingface/datasets/issues/6386 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6386/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6386/comments | https://api.github.com/repos/huggingface/datasets/issues/6386/events | https://github.com/huggingface/datasets/issues/6386 | 1,979,878,014 | I_kwDODunzps52Aop- | 6,386 | Formatting overhead | {
"login": "d-miketa",
"id": 320321,
"node_id": "MDQ6VXNlcjMyMDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/320321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/d-miketa",
"html_url": "https://github.com/d-miketa",
"followers_url": "https://api.github.com/users/d-miket... | [] | closed | false | null | [] | null | [
"Ah I think the `line-profiler` log is off-by-one and it is in fact the `extract_batch` method that's taking forever. Will investigate further.",
"I tracked it down to a quirk of my setup. Apologies."
] | 1,699 | 1,699 | 1,699 | NONE | null | null | null | ### Describe the bug
Hi! I very recently noticed that my training time is dominated by batch formatting. Using Lightning's profilers, I located the bottleneck within `datasets.formatting.formatting` and then narrowed it down with `line-profiler`. It turns out that almost all of the overhead is due to creating new inst... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6386/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6386/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6385 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6385/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6385/comments | https://api.github.com/repos/huggingface/datasets/issues/6385/events | https://github.com/huggingface/datasets/issues/6385 | 1,979,308,338 | I_kwDODunzps51-dky | 6,385 | Get an error when i try to concatenate the squad dataset with my own dataset | {
"login": "CCDXDX",
"id": 149378500,
"node_id": "U_kgDOCOdVxA",
"avatar_url": "https://avatars.githubusercontent.com/u/149378500?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CCDXDX",
"html_url": "https://github.com/CCDXDX",
"followers_url": "https://api.github.com/users/CCDXDX/follower... | [] | closed | false | null | [] | null | [
"The `answers.text` field in the JSON dataset needs to be a list of strings, not a string.\r\n\r\nSo, here is the fixed code:\r\n```python\r\nfrom huggingface_hub import notebook_login\r\nfrom datasets import load_dataset\r\n\r\n\r\n\r\nnotebook_login(\"mymailadresse\", \"mypassword\")\r\nsquad = load_dataset(\"squ... | 1,699 | 1,699 | 1,699 | NONE | null | null | null | ### Describe the bug
Hello,
I'm new here and I need to concatenate the squad dataset with my own dataset that I created. I get the following error when I try to do it: Traceback (most recent call last):
Cell In[9], line 1
concatenated_dataset = concatenate_datasets([train_dataset, dataset1])
File ~\ana... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6385/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6385/timeline | null | completed |
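The fix quoted in the comment above (make `answers.text` a list of strings) can be sketched like this; the sample row is invented for illustration, and the features are borrowed from SQuAD so the two datasets can be concatenated:

```python
from datasets import Dataset, concatenate_datasets, load_dataset

squad = load_dataset("squad", split="train")

mine = Dataset.from_dict(
    {
        "id": ["0"],
        "title": ["my_title"],
        "context": ["My context paragraph."],
        "question": ["What is this?"],
        # text / answer_start must be lists, matching the SQuAD schema
        "answers": [{"text": ["My context paragraph."], "answer_start": [0]}],
    },
    features=squad.features,
)

combined = concatenate_datasets([squad, mine])
```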
https://api.github.com/repos/huggingface/datasets/issues/6384 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6384/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6384/comments | https://api.github.com/repos/huggingface/datasets/issues/6384/events | https://github.com/huggingface/datasets/issues/6384 | 1,979,117,069 | I_kwDODunzps519u4N | 6,384 | Load the local dataset folder from other place | {
"login": "OrangeSodahub",
"id": 54439582,
"node_id": "MDQ6VXNlcjU0NDM5NTgy",
"avatar_url": "https://avatars.githubusercontent.com/u/54439582?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OrangeSodahub",
"html_url": "https://github.com/OrangeSodahub",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [] | 1,699 | 1,699 | null | NONE | null | null | null | This is from https://github.com/huggingface/diffusers/issues/5573
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6384/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6384/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6383 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6383/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6383/comments | https://api.github.com/repos/huggingface/datasets/issues/6383/events | https://github.com/huggingface/datasets/issues/6383 | 1,978,189,389 | I_kwDODunzps516MZN | 6,383 | imagenet-1k downloads over and over | {
"login": "seann999",
"id": 6847529,
"node_id": "MDQ6VXNlcjY4NDc1Mjk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6847529?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/seann999",
"html_url": "https://github.com/seann999",
"followers_url": "https://api.github.com/users/seann... | [] | closed | false | null | [] | null | [] | 1,699 | 1,699 | 1,699 | NONE | null | null | null | ### Describe the bug
What could be causing this?
```
$ python3
Python 3.8.13 (default, Mar 28 2022, 11:38:47)
[GCC 7.5.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> from datasets import load_dataset
>>> load_dataset("imagenet-1k")
Downloading builder ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6383/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6383/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6382 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6382/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6382/comments | https://api.github.com/repos/huggingface/datasets/issues/6382/events | https://github.com/huggingface/datasets/issues/6382 | 1,977,400,799 | I_kwDODunzps513L3f | 6,382 | Add CheXpert dataset for vision | {
"login": "SauravMaheshkar",
"id": 61241031,
"node_id": "MDQ6VXNlcjYxMjQxMDMx",
"avatar_url": "https://avatars.githubusercontent.com/u/61241031?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SauravMaheshkar",
"html_url": "https://github.com/SauravMaheshkar",
"followers_url": "https://api... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067376369,
"node_id": "MDU6... | open | false | null | [] | null | [
"Hey @SauravMaheshkar ! Just responded to your email.\r\n\r\n_For transparency, copying part of my response here:_\r\nI agree, it would be really great to have this and other BenchMD datasets easily accessible on the hub.\r\n\r\nI think the main limiting factor is that the ChexPert dataset is currently hosted on th... | 1,699 | 1,699 | null | NONE | null | null | null | ### Feature request
### Name
**CheXpert: A Large Chest Radiograph Dataset with Uncertainty Labels and Expert Comparison**
### Paper
https://arxiv.org/abs/1901.07031
### Data
https://stanfordaimi.azurewebsites.net/datasets/8cbd9ed4-2eb9-4565-affc-111cf4f7ebe2
### Motivation
CheXpert is one of the fund... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6382/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6382/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6377 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6377/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6377/comments | https://api.github.com/repos/huggingface/datasets/issues/6377/events | https://github.com/huggingface/datasets/issues/6377 | 1,973,937,612 | I_kwDODunzps51p-XM | 6,377 | Support pyarrow 14.0.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 1,698 | 1,698 | 1,698 | MEMBER | null | null | null | Support pyarrow 14.0.0 by fixing the root cause of:
- #6374
and revert:
- #6375 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6377/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6377/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6376 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6376/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6376/comments | https://api.github.com/repos/huggingface/datasets/issues/6376/events | https://github.com/huggingface/datasets/issues/6376 | 1,973,927,468 | I_kwDODunzps51p74s | 6,376 | Caching problem when deleting a dataset | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | [
"Thanks for reporting! Can you also share the error message printed in step 5?",
"I did not store it at the time but I'll try to re-do a mwe next week to get it again"
] | 1,698 | 1,698 | null | MEMBER | null | null | null | ### Describe the bug
Pushing a dataset with n + m features to a repo which was deleted, but contained n features, will fail.
### Steps to reproduce the bug
1. Create a dataset with n features per row
2. `dataset.push_to_hub(YOUR_PATH, SPLIT, token=TOKEN)`
3. Go on the hub, delete the repo at `YOUR_PATH`
4. Update... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6376/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6376/timeline | null | null |
https://api.github.com/repos/huggingface/datasets/issues/6374 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6374/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6374/comments | https://api.github.com/repos/huggingface/datasets/issues/6374/events | https://github.com/huggingface/datasets/issues/6374 | 1,973,857,428 | I_kwDODunzps51pqyU | 6,374 | CI is broken: TypeError: Couldn't cast array | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [] | 1,698 | 1,698 | 1,698 | MEMBER | null | null | null | See: https://github.com/huggingface/datasets/actions/runs/6730567226/job/18293518039
```
FAILED tests/test_table.py::test_cast_sliced_fixed_size_array_to_features - TypeError: Couldn't cast array of type
fixed_size_list<item: int32>[3]
to
Sequence(feature=Value(dtype='int64', id=None), length=3, id=None)
``` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6374/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6374/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6371 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6371/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6371/comments | https://api.github.com/repos/huggingface/datasets/issues/6371/events | https://github.com/huggingface/datasets/issues/6371 | 1,972,807,579 | I_kwDODunzps51lqeb | 6,371 | `Dataset.from_generator` should not try to download from HF GCS | {
"login": "yundai424",
"id": 43726198,
"node_id": "MDQ6VXNlcjQzNzI2MTk4",
"avatar_url": "https://avatars.githubusercontent.com/u/43726198?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yundai424",
"html_url": "https://github.com/yundai424",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Indeed, setting `try_from_gcs` to `False` makes sense for `from_generator`.\r\n\r\nWe plan to deprecate and remove `try_from_hf_gcs` soon, as we can use Hub for file hosting now, but this is a good temporary fix.\r\n"
] | 1,698 | 1,698 | 1,698 | CONTRIBUTOR | null | null | null | ### Describe the bug
When using [`Dataset.from_generator`](https://github.com/huggingface/datasets/blob/c9c1166e1cf81d38534020f9c167b326585339e5/src/datasets/arrow_dataset.py#L1072) with `streaming=False`, the internal logic will call [`download_and_prepare`](https://github.com/huggingface/datasets/blob/main/src/datas... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6371/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6371/timeline | null | completed |
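For reference, the entry point discussed above; `Dataset.from_generator` materialises purely local data, which is why probing the HF GCS mirror during `download_and_prepare` is unnecessary (the generator here is a toy example):

```python
from datasets import Dataset

def gen():
    for i in range(3):
        yield {"id": i, "text": f"example {i}"}

ds = Dataset.from_generator(gen)  # builds an Arrow dataset from local data only
```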
https://api.github.com/repos/huggingface/datasets/issues/6370 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6370/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6370/comments | https://api.github.com/repos/huggingface/datasets/issues/6370/events | https://github.com/huggingface/datasets/issues/6370 | 1,972,073,909 | I_kwDODunzps51i3W1 | 6,370 | TensorDataset format does not work with Trainer from transformers | {
"login": "jinzzasol",
"id": 49014051,
"node_id": "MDQ6VXNlcjQ5MDE0MDUx",
"avatar_url": "https://avatars.githubusercontent.com/u/49014051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jinzzasol",
"html_url": "https://github.com/jinzzasol",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"I figured it out. I found that `Trainer` does not work with TensorDataset even though the document says it uses it. Instead, I ended up creating a dictionary and converting it to a dataset using `dataset.Dataset.from_dict()`.\r\n\r\nI will leave this post open for a while. If someone knows a better approach, pleas... | 1,698 | 1,698 | null | NONE | null | null | null | ### Describe the bug
The model was built to fine-tune a BERT model for relation extraction.
trainer.train() returns the error message ```TypeError: vars() argument must have __dict__ attribute``` when `train_dataset` is generated from `torch.utils.data.TensorDataset`
However, in the document, the req... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6370/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6370/timeline | null | null |
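The workaround the reporter settled on (building the dataset with `Dataset.from_dict` instead of `torch.utils.data.TensorDataset`) looks roughly like this; the toy columns are assumptions:

```python
from datasets import Dataset

train_dataset = Dataset.from_dict({
    "input_ids": [[101, 2023, 102], [101, 2003, 102]],
    "attention_mask": [[1, 1, 1], [1, 1, 1]],
    "labels": [0, 1],
})
# train_dataset can now be passed to transformers.Trainer(train_dataset=...).
```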
https://api.github.com/repos/huggingface/datasets/issues/6369 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6369/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6369/comments | https://api.github.com/repos/huggingface/datasets/issues/6369/events | https://github.com/huggingface/datasets/issues/6369 | 1,971,794,108 | I_kwDODunzps51hzC8 | 6,369 | Multi process map did not load cache file correctly | {
"login": "enze5088",
"id": 14285786,
"node_id": "MDQ6VXNlcjE0Mjg1Nzg2",
"avatar_url": "https://avatars.githubusercontent.com/u/14285786?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/enze5088",
"html_url": "https://github.com/enze5088",
"followers_url": "https://api.github.com/users/enz... | [] | open | false | null | [] | null | [
"The inconsistency may be caused by the usage of \"update_fingerprint\" and setting \"trust_remote_code\" to \"True.\"\r\nWhen the tokenizer employs \"trust_remote_code,\" the behavior of the map function varies with each code execution. Even if the remote code of the tokenizer remains the same, the result of \"ash... | 1,698 | 1,698 | null | NONE | null | null | null | ### Describe the bug
When training a model on multiple GPUs with DDP, the dataset is tokenized multiple times after the main process.

 function returns bytes instead of PIL images even when image column is not part of "columns" | {
"login": "leot13",
"id": 17809020,
"node_id": "MDQ6VXNlcjE3ODA5MDIw",
"avatar_url": "https://avatars.githubusercontent.com/u/17809020?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leot13",
"html_url": "https://github.com/leot13",
"followers_url": "https://api.github.com/users/leot13/fo... | [] | closed | false | null | [] | null | [
"Thanks for reporting! I've opened a PR with a fix."
] | 1,698 | 1,698 | 1,698 | NONE | null | null | null | ### Describe the bug
When using the with_format() function on a dataset containing images, even if the image column is not part of the columns provided in the function, its type will be changed to bytes.
Here is a minimal reproduction of the bug:
https://colab.research.google.com/drive/1hyaOspgyhB41oiR1-tXE3k_gJCdJU... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6366/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6366/timeline | null | completed |
https://api.github.com/repos/huggingface/datasets/issues/6365 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6365/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6365/comments | https://api.github.com/repos/huggingface/datasets/issues/6365/events | https://github.com/huggingface/datasets/issues/6365 | 1,970,140,392 | I_kwDODunzps51bfTo | 6,365 | Parquet size grows exponential for categorical data | {
"login": "aseganti",
"id": 82567957,
"node_id": "MDQ6VXNlcjgyNTY3OTU3",
"avatar_url": "https://avatars.githubusercontent.com/u/82567957?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aseganti",
"html_url": "https://github.com/aseganti",
"followers_url": "https://api.github.com/users/ase... | [] | closed | false | null | [] | null | [
"Wrong repo."
] | 1,698 | 1,698 | 1,698 | NONE | null | null | null | ### Describe the bug
It seems that when saving a data frame that contains a categorical column, the size can grow exponentially.
This seems to happen because when we save the categorical data to parquet, we are saving the data + all the categories existing in the original data. This happens even when the categories ar... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6365/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6365/timeline | null | not_planned |
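Although the reporter closed this as filed against the wrong repository, the behaviour it describes can be worked around on the pandas side: dropping unused categories before writing a subset keeps the stored category dictionary small (file name is hypothetical):

```python
import pandas as pd

df = pd.DataFrame({"col": pd.Categorical(["a", "b"], categories=list("abcdefgh"))})
subset = df.iloc[:1].copy()
# Keep only the categories that actually occur in the subset before writing.
subset["col"] = subset["col"].cat.remove_unused_categories()
subset.to_parquet("subset.parquet")
```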