Column schema (as reported by the dataset viewer; string columns give min–max lengths, numeric columns give min–max values, class columns give the number of distinct values):

column                    dtype          range / classes
url                       stringlengths  58–61
repository_url            stringclasses  1 value
labels_url                stringlengths  72–75
comments_url              stringlengths  67–70
events_url                stringlengths  65–68
html_url                  stringlengths  46–51
id                        int64          599M–1.64B
node_id                   stringlengths  18–32
number                    int64          1–5.67k
title                     stringlengths  1–290
user                      stringlengths  870–1.16k
labels                    stringlengths  2–985
state                     stringclasses  2 values
locked                    stringclasses  1 value
assignee                  stringlengths  0–1.04k
assignees                 stringlengths  2–3.92k
milestone                 stringclasses  9 values
comments                  list
created_at                int64          1,587B–1,680B
updated_at                int64          1,588B–1,680B
closed_at                 float64        1,587B–1,680B
author_association        stringclasses  3 values
active_lock_reason        stringclasses  1 value
body                      stringlengths  0–228k
reactions                 stringlengths  191–196
timeline_url              stringlengths  67–70
performed_via_github_app  stringclasses  1 value
state_reason              stringclasses  4 values
pull_request              stringlengths  0–315
is_pull_request           bool           1 class

The rows below repeat these columns in order, one value per line; empty columns are sometimes omitted.
https://api.github.com/repos/huggingface/datasets/issues/5669
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5669/comments
https://api.github.com/repos/huggingface/datasets/issues/5669/events
https://github.com/huggingface/datasets/issues/5669
1,638,070,046
I_kwDODunzps5hovce
5,669
Almost identical datasets, huge performance difference
{'login': 'eli-osherovich', 'id': 2437102, 'node_id': 'MDQ6VXNlcjI0MzcxMDI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2437102?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/eli-osherovich', 'html_url': 'https://github.com/eli-osherovich', 'followers_url': 'https://api.github.com/users/eli-...
[]
open
False
[]
[ "Do I miss something here?", "Hi! \r\n\r\nThe first dataset stores images as bytes (the \"image\" column type is `datasets.Image()`) and decodes them as `PIL.Image` objects and the second dataset stores them as variable-length lists (the \"image\" column type is `datasets.Sequence(...)`)), so I guess going from `...
1,679,595,620,000
1,679,595,620,000
null
NONE
### Describe the bug I am struggling to understand (huge) performance difference between two datasets that are almost identical. ### Steps to reproduce the bug # Fast (normal) dataset speed: ```python import cv2 from datasets import load_dataset from torch.utils.data import DataLoader dataset = load_dataset(...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5669/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5669/timeline
is_pull_request: true
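The labels_url values in each record, e.g. `.../issues/5669/labels{/name}`, are RFC 6570 URI templates: `{/name}` expands to nothing to list all labels, or to `/<label>` to address one. A minimal helper handling only this one form (a sketch; a real client would use a full URI-template library such as `uritemplate`):

```python
# Minimal expansion of the "{/name}" URI-template form seen in labels_url.
# Only this single form is handled; see RFC 6570 for the general syntax.
def expand_labels_url(template, name=None):
    if name is None:
        return template.replace("{/name}", "")
    return template.replace("{/name}", "/" + name)

tmpl = "https://api.github.com/repos/huggingface/datasets/issues/5669/labels{/name}"
print(expand_labels_url(tmpl))         # ...issues/5669/labels
print(expand_labels_url(tmpl, "bug"))  # ...issues/5669/labels/bug
```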
https://api.github.com/repos/huggingface/datasets/issues/5668
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5668/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5668/comments
https://api.github.com/repos/huggingface/datasets/issues/5668/events
https://github.com/huggingface/datasets/pull/5668
1,638,018,598
PR_kwDODunzps5MwuIp
5,668
Support for downloading only provided split
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaet...
[]
open
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5668). All of your documentation changes will be reflected on that endpoint.", "My previous comment didn't create the retro-link in the PR. I write it here again.\r\n\r\nYou can check the context and the discussions we had abou...
1,679,594,019,000
1,679,594,264,000
null
CONTRIBUTOR
We can pass split to `_split_generators()`. But I'm not sure if it's possible to solve cache issues, mostly with `dataset_info.json`
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5668/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5668/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5668', 'html_url': 'https://github.com/huggingface/datasets/pull/5668', 'diff_url': 'https://github.com/huggingface/datasets/pull/5668.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5668.patch', 'merged_at': None}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5667
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5667/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5667/comments
https://api.github.com/repos/huggingface/datasets/issues/5667/events
https://github.com/huggingface/datasets/pull/5667
1,637,789,361
PR_kwDODunzps5Mv8Im
5,667
Jax requires jaxlib
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,679,586,069,000
1,679,588,591,000
1,679,588,092,000
MEMBER
close https://github.com/huggingface/datasets/issues/5666
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5667/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5667/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5667', 'html_url': 'https://github.com/huggingface/datasets/pull/5667', 'diff_url': 'https://github.com/huggingface/datasets/pull/5667.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5667.patch', 'merged_at': '2023-03-23T16:14:52Z'}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5666
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5666/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5666/comments
https://api.github.com/repos/huggingface/datasets/issues/5666/events
https://github.com/huggingface/datasets/issues/5666
1,637,675,062
I_kwDODunzps5hnPA2
5,666
Support tensorflow 2.12.0 in CI
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
closed
False
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/...
[]
1,679,582,271,000
1,679,588,094,000
1,679,588,094,000
MEMBER
Once we find out the root cause of: - #5663 we should revert the temporary pin on tensorflow introduced by: - #5664
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5666/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5666/timeline
completed
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5665
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5665/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5665/comments
https://api.github.com/repos/huggingface/datasets/issues/5665/events
https://github.com/huggingface/datasets/issues/5665
1,637,193,648
I_kwDODunzps5hlZew
5,665
Feature request: IterableDataset.push_to_hub
{'login': 'NielsRogge', 'id': 48327001, 'node_id': 'MDQ6VXNlcjQ4MzI3MDAx', 'avatar_url': 'https://avatars.githubusercontent.com/u/48327001?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/NielsRogge', 'html_url': 'https://github.com/NielsRogge', 'followers_url': 'https://api.github.com/users/NielsRogge/fol...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
False
[]
[]
1,679,565,184,000
1,679,565,196,000
null
CONTRIBUTOR
### Feature request It'd be great to have a lazy push to hub, similar to the lazy loading we have with `IterableDataset`. Suppose you'd like to filter [LAION](https://huggingface.co/datasets/laion/laion400m) based on certain conditions, but as LAION doesn't fit into your disk, you'd like to leverage streaming: `...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5665/reactions', 'total_count': 2, '+1': 2, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5665/timeline
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5664
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5664/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5664/comments
https://api.github.com/repos/huggingface/datasets/issues/5664/events
https://github.com/huggingface/datasets/pull/5664
1,637,192,684
PR_kwDODunzps5Mt6vp
5,664
Fix CI by temporarily pinning tensorflow < 2.12.0
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,679,565,146,000
1,679,566,631,000
1,679,566,194,000
MEMBER
As a hotfix for our CI, temporarily pin `tensorflow` upper version: - In Python 3.10, tensorflow-2.12.0 also installs `jax` Fix #5663 Until root cause is fixed.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5664/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5664/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5664', 'html_url': 'https://github.com/huggingface/datasets/pull/5664', 'diff_url': 'https://github.com/huggingface/datasets/pull/5664.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5664.patch', 'merged_at': '2023-03-23T10:09:53Z'}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5663
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5663/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5663/comments
https://api.github.com/repos/huggingface/datasets/issues/5663/events
https://github.com/huggingface/datasets/issues/5663
1,637,173,248
I_kwDODunzps5hlUgA
5,663
CI is broken: ModuleNotFoundError: jax requires jaxlib to be installed
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
False
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/...
[]
1,679,564,383,000
1,679,566,195,000
1,679,566,195,000
MEMBER
CI test_py310 is broken: see https://github.com/huggingface/datasets/actions/runs/4498945505/jobs/7916194236?pr=5662 ``` FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_map_jax_in_memory - ModuleNotFoundError: jax requires jaxlib to be installed. See https://github.com/google/jax#installation for installati...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5663/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5663/timeline
completed
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5662
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5662/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5662/comments
https://api.github.com/repos/huggingface/datasets/issues/5662/events
https://github.com/huggingface/datasets/pull/5662
1,637,140,813
PR_kwDODunzps5MtvsM
5,662
Fix unnecessary dict comprehension
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "I am merging because the CI error is unrelated.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | re...
1,679,563,138,000
1,679,564,819,000
1,679,564,269,000
MEMBER
After ruff-0.0.258 release, the C416 rule was updated with unnecessary dict comprehensions. See: - https://github.com/charliermarsh/ruff/releases/tag/v0.0.258 - https://github.com/charliermarsh/ruff/pull/3605 This PR fixes one unnecessary dict comprehension in our code: no need to unpack and re-pack the tuple valu...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5662/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5662/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5662', 'html_url': 'https://github.com/huggingface/datasets/pull/5662', 'diff_url': 'https://github.com/huggingface/datasets/pull/5662.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5662.patch', 'merged_at': '2023-03-23T09:37:49Z'}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5661
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5661/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5661/comments
https://api.github.com/repos/huggingface/datasets/issues/5661/events
https://github.com/huggingface/datasets/issues/5661
1,637,129,445
I_kwDODunzps5hlJzl
5,661
CI is broken: Unnecessary `dict` comprehension
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
False
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/...
[]
1,679,562,781,000
1,679,564,271,000
1,679,564,271,000
MEMBER
CI check_code_quality is broken: ``` src/datasets/arrow_dataset.py:3267:35: C416 [*] Unnecessary `dict` comprehension (rewrite using `dict()`) Found 1 error. ```
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5661/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5661/timeline
completed
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5660
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5660/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5660/comments
https://api.github.com/repos/huggingface/datasets/issues/5660/events
https://github.com/huggingface/datasets/issues/5660
1,635,543,646
I_kwDODunzps5hfGpe
5,660
integration with imbalanced-learn
{'login': 'tansaku', 'id': 30216, 'node_id': 'MDQ6VXNlcjMwMjE2', 'avatar_url': 'https://avatars.githubusercontent.com/u/30216?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/tansaku', 'html_url': 'https://github.com/tansaku', 'followers_url': 'https://api.github.com/users/tansaku/followers', 'following_ur...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
False
[]
[ "You can convert any dataset to pandas to be used with imbalanced-learn using `.to_pandas()`\r\n\r\nOtherwise if you want to keep a `Dataset` object and still use e.g. [make_imbalance](https://imbalanced-learn.org/stable/references/generated/imblearn.datasets.make_imbalance.html#imblearn.datasets.make_imbalance), y...
1,679,483,117,000
1,679,590,839,000
null
NONE
### Feature request Wouldn't it be great if the various class balancing operations from imbalanced-learn were available as part of datasets? ### Motivation I'm trying to use imbalanced-learn to balance a dataset, but it's not clear how to get the two to interoperate - what would be great would be some examples. I'v...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5660/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5660/timeline
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5659
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5659/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5659/comments
https://api.github.com/repos/huggingface/datasets/issues/5659/events
https://github.com/huggingface/datasets/issues/5659
1,635,447,540
I_kwDODunzps5hevL0
5,659
[Audio] Soundfile/libsndfile requirements too stringent for decoding mp3 files
{'login': 'sanchit-gandhi', 'id': 93869735, 'node_id': 'U_kgDOBZhWpw', 'avatar_url': 'https://avatars.githubusercontent.com/u/93869735?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sanchit-gandhi', 'html_url': 'https://github.com/sanchit-gandhi', 'followers_url': 'https://api.github.com/users/sanchit-ga...
[]
open
False
[]
[ "cc @polinaeterna @lhoestq ", "@sanchit-gandhi can you please also post the logs of `pip install soundfile==0.12.1`? To check what wheel is being installed or if it's being built from source (I think it's the latter case). \r\nRequired `libsndfile` binary **should** be bundeled with `soundfile` wheel but I assume...
1,679,479,653,000
1,679,493,131,000
null
CONTRIBUTOR
### Describe the bug I'm encountering several issues trying to load mp3 audio files using `datasets` on a TPU v4. The PR https://github.com/huggingface/datasets/pull/5573 updated the audio loading logic to rely solely on the `soundfile`/`libsndfile` libraries for loading audio samples, regardless of their file t...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5659/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5659/timeline
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5658
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5658/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5658/comments
https://api.github.com/repos/huggingface/datasets/issues/5658/events
https://github.com/huggingface/datasets/pull/5658
1,634,867,204
PR_kwDODunzps5MmJe0
5,658
docs: Update num_shards docs to mention num_proc on Dataset and DatasetDict
{'login': 'connor-henderson', 'id': 78612354, 'node_id': 'MDQ6VXNlcjc4NjEyMzU0', 'avatar_url': 'https://avatars.githubusercontent.com/u/78612354?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/connor-henderson', 'html_url': 'https://github.com/connor-henderson', 'followers_url': 'https://api.github.com/us...
[]
open
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,679,443,938,000
1,679,588,390,000
null
NONE
Closes #5653 @mariosasko
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5658/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5658/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5658', 'html_url': 'https://github.com/huggingface/datasets/pull/5658', 'diff_url': 'https://github.com/huggingface/datasets/pull/5658.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5658.patch', 'merged_at': None}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5656
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5656/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5656/comments
https://api.github.com/repos/huggingface/datasets/issues/5656/events
https://github.com/huggingface/datasets/pull/5656
1,634,156,563
PR_kwDODunzps5Mjxoo
5,656
Fix `fsspec.open` when using an HTTP proxy
{'login': 'bryant1410', 'id': 3905501, 'node_id': 'MDQ6VXNlcjM5MDU1MDE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3905501?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/bryant1410', 'html_url': 'https://github.com/bryant1410', 'followers_url': 'https://api.github.com/users/bryant1410/follo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,679,412,209,000
1,679,580,890,000
1,679,577,346,000
CONTRIBUTOR
Most HTTP(S) downloads from this library support proxy automatically by reading the `HTTP_PROXY` environment variable (et al.) because `requests` is widely used. However, in some parts of the code, `fsspec` is used, which in turn uses `aiohttp` for HTTP(S) requests (as opposed to `requests`), which in turn doesn't supp...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5656/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5656/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5656', 'html_url': 'https://github.com/huggingface/datasets/pull/5656', 'diff_url': 'https://github.com/huggingface/datasets/pull/5656.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5656.patch', 'merged_at': '2023-03-23T13:15:46Z'}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5655
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5655/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5655/comments
https://api.github.com/repos/huggingface/datasets/issues/5655/events
https://github.com/huggingface/datasets/pull/5655
1,634,030,017
PR_kwDODunzps5MjWYy
5,655
Improve features decoding in to_iterable_dataset
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,679,408,289,000
1,679,577,567,000
1,679,577,145,000
MEMBER
Following discussion at https://github.com/huggingface/datasets/pull/5589 Right now `to_iterable_dataset` on images/audio hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images/audios unnecessarily). I fixed it by providing a generator that yields undecoded examples
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5655/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5655/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5655', 'html_url': 'https://github.com/huggingface/datasets/pull/5655', 'diff_url': 'https://github.com/huggingface/datasets/pull/5655.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5655.patch', 'merged_at': '2023-03-23T13:12:25Z'}
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5654
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5654/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5654/comments
https://api.github.com/repos/huggingface/datasets/issues/5654/events
https://github.com/huggingface/datasets/issues/5654
1,633,523,705
I_kwDODunzps5hXZf5
5,654
Offset overflow when executing Dataset.map
{'login': 'jan-pair', 'id': 118280608, 'node_id': 'U_kgDOBwzRoA', 'avatar_url': 'https://avatars.githubusercontent.com/u/118280608?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jan-pair', 'html_url': 'https://github.com/jan-pair', 'followers_url': 'https://api.github.com/users/jan-pair/followers', 'foll...
[]
open
False
[]
[ "Upd. the above code works if we replace `25` with `1`, but the result value at key \"hr\" is not a tensor but a list of lists of lists of uint8.\r\n\r\nAdding `train_data.set_format(\"torch\")` after map fixes this, but the original issue remains\r\n\r\n", "As a workaround, one can replace\r\n`return {\"hr\": to...
1,679,391,207,000
1,679,394,727,000
null
NONE
### Describe the bug Hi, I'm trying to use `.map` method to cache multiple random crops from the image to speed up data processing during training, as the image size is too big. The map function executes all iterations, and then returns the following error: ```bash Traceback (most recent call last): ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5654/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5654/timeline
is_pull_request: true
https://api.github.com/repos/huggingface/datasets/issues/5653
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5653/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5653/comments
https://api.github.com/repos/huggingface/datasets/issues/5653/events
https://github.com/huggingface/datasets/issues/5653
1,633,254,159
I_kwDODunzps5hWXsP
5,653
Doc: save_to_disk, `num_proc` will affect `num_shards`, but it's not documented
{'login': 'RmZeta2718', 'id': 42400165, 'node_id': 'MDQ6VXNlcjQyNDAwMTY1', 'avatar_url': 'https://avatars.githubusercontent.com/u/42400165?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/RmZeta2718', 'html_url': 'https://github.com/RmZeta2718', 'followers_url': 'https://api.github.com/users/RmZeta2718/fol...
[{'id': 1935892861, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODYx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/documentation', 'name': 'documentation', 'color': '0075ca', 'default': True, 'description': 'Improvements or additions to documentation'}, {'id': 1935892877, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3',...
open
False
[]
[ "I agree this should be documented" ]
1,679,376,335,000
1,679,404,797,000
null
NONE
### Describe the bug [`num_proc`](https://huggingface.co/docs/datasets/main/en/package_reference/main_classes#datasets.DatasetDict.save_to_disk.num_proc) will affect `num_shards`, but it's not documented ### Steps to reproduce the bug Nothing to reproduce ### Expected behavior [document of `num_shards`](https://...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5653/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5653/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5652
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5652/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5652/comments
https://api.github.com/repos/huggingface/datasets/issues/5652/events
https://github.com/huggingface/datasets/pull/5652
1,632,546,073
PR_kwDODunzps5MeVUR
5,652
Copy features
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,679,332,643,000
1,679,577,559,000
1,679,577,128,000
MEMBER
Some users (even internally at HF) are doing ```python dset_features = dset.features dset_features.pop(col_to_remove) dset = dset.map(..., features=dset_features) ``` Right now this causes issues because it modifies the features dict in place before the map. In this PR I modified `dset.features` to return a ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5652/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5652/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5652', 'html_url': 'https://github.com/huggingface/datasets/pull/5652', 'diff_url': 'https://github.com/huggingface/datasets/pull/5652.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5652.patch', 'merged_at': '2023-03-23T13:12:08Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5651
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5651/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5651/comments
https://api.github.com/repos/huggingface/datasets/issues/5651/events
https://github.com/huggingface/datasets/issues/5651
1,631,967,509
I_kwDODunzps5hRdkV
5,651
expanduser in save_to_disk
{'login': 'RmZeta2718', 'id': 42400165, 'node_id': 'MDQ6VXNlcjQyNDAwMTY1', 'avatar_url': 'https://avatars.githubusercontent.com/u/42400165?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/RmZeta2718', 'html_url': 'https://github.com/RmZeta2718', 'followers_url': 'https://api.github.com/users/RmZeta2718/fol...
[]
open
False
[]
[ "`save_to_disk` should indeed expand `~`. Marking it as a \"good first issue\"." ]
1,679,313,738,000
1,679,313,839,000
null
NONE
### Describe the bug save_to_disk() does not expand `~` 1. `dataset = load_datasets("any dataset")` 2. `dataset.save_to_disk("~/data")` 3. a folder named "~" created in current folder 4. FileNotFoundError is raised, because the expanded path does not exist (`/home/<user>/data`) related issue https://github....
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5651/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5651/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5650
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5650/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5650/comments
https://api.github.com/repos/huggingface/datasets/issues/5650/events
https://github.com/huggingface/datasets/issues/5650
1,630,336,919
I_kwDODunzps5hLPeX
5,650
load_dataset can't work correct with my image data
{'login': 'WiNE-iNEFF', 'id': 41611046, 'node_id': 'MDQ6VXNlcjQxNjExMDQ2', 'avatar_url': 'https://avatars.githubusercontent.com/u/41611046?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/WiNE-iNEFF', 'html_url': 'https://github.com/WiNE-iNEFF', 'followers_url': 'https://api.github.com/users/WiNE-iNEFF/fol...
[]
open
False
[]
[ "Can you post a reproducible code snippet of what you tried to do?\r\n\r\n", "> Can you post a reproducible code snippet of what you tried to do?\n> \n> \n\n```python\nfrom datasets import load_dataset\n\ndataset = load_dataset(\"my_folder_name\", split=\"train\")\n```", "hi @WiNE-iNEFF ! can you please also te...
1,679,147,953,000
1,679,564,622,000
null
NONE
I have about 20000 images in my folder which divided into 4 folders with class names. When i use load_dataset("my_folder_name", split="train") this function create dataset in which there are only 4 images, the remaining 19000 images were not added there. What is the problem and did not understand. Tried converting imag...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5650/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5650/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5649
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5649/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5649/comments
https://api.github.com/repos/huggingface/datasets/issues/5649/events
https://github.com/huggingface/datasets/issues/5649
1,630,173,460
I_kwDODunzps5hKnkU
5,649
The index column created with .to_sql() is dependent on the batch_size when writing
{'login': 'lsb', 'id': 45281, 'node_id': 'MDQ6VXNlcjQ1Mjgx', 'avatar_url': 'https://avatars.githubusercontent.com/u/45281?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lsb', 'html_url': 'https://github.com/lsb', 'followers_url': 'https://api.github.com/users/lsb/followers', 'following_url': 'https://api...
[]
open
False
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/...
[ "Thanks for reporting, @lsb. \r\n\r\nWe are investigating it.\r\n\r\nOn the other hand, please note that in the next `datasets` release, the index will not be created by default (see #5583). If you would like to have it, you will need to explicitly pass `index=True`. " ]
1,679,117,117,000
1,679,318,172,000
null
NONE
### Describe the bug It seems like the "index" column is designed to be unique? The values are only unique per batch. The SQL index is not a unique index. This can be a problem, for instance, when building a faiss index on a dataset and then trying to match up ids with a sql export. ### Steps to reproduce the ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5649/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5649/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5648
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5648/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5648/comments
https://api.github.com/repos/huggingface/datasets/issues/5648/events
https://github.com/huggingface/datasets/issues/5648
1,629,253,719
I_kwDODunzps5hHHBX
5,648
flatten_indices doesn't work with pandas format
{'login': 'alialamiidrissi', 'id': 14365168, 'node_id': 'MDQ6VXNlcjE0MzY1MTY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/14365168?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alialamiidrissi', 'html_url': 'https://github.com/alialamiidrissi', 'followers_url': 'https://api.github.com/users...
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
open
False
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fo...
[ "Thanks for reporting! This can be fixed by setting the format to `arrow` in `flatten_indices` and restoring the original format after the flattening. I'm working on a PR that reduces the number of the `flatten_indices` calls in our codebase and makes `flatten_indices` a no-op when a dataset does not have an indice...
1,679,057,065,000
1,679,404,323,000
null
NONE
### Describe the bug Hi, I noticed that `flatten_indices` throws an error when the batch format is `pandas`. This is probably due to the fact that flatten_indices uses map internally which doesn't accept dataframes as the transformation function output ### Steps to reproduce the bug tabular_data = pd.DataFrame(np.r...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5648/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5648/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5647
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5647/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5647/comments
https://api.github.com/repos/huggingface/datasets/issues/5647/events
https://github.com/huggingface/datasets/issues/5647
1,628,225,544
I_kwDODunzps5hDMAI
5,647
Make all print statements optional
{'login': 'gagan3012', 'id': 49101362, 'node_id': 'MDQ6VXNlcjQ5MTAxMzYy', 'avatar_url': 'https://avatars.githubusercontent.com/u/49101362?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/gagan3012', 'html_url': 'https://github.com/gagan3012', 'followers_url': 'https://api.github.com/users/gagan3012/followe...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
False
[]
[ "related to #5444 " ]
1,678,998,607,000
1,679,589,425,000
null
NONE
### Feature request Make all print statements optional to speed up the development ### Motivation Im loading multiple tiny datasets and all the print statements make the loading slower ### Your contribution I can help contribute
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5647/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5647/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5646
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5646/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5646/comments
https://api.github.com/repos/huggingface/datasets/issues/5646/events
https://github.com/huggingface/datasets/pull/5646
1,627,838,762
PR_kwDODunzps5MOqjj
5,646
Allow self as key in `Features`
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,983,423,000
1,678,987,318,000
1,678,986,890,000
CONTRIBUTOR
Fix #5641
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5646/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5646/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5646', 'html_url': 'https://github.com/huggingface/datasets/pull/5646', 'diff_url': 'https://github.com/huggingface/datasets/pull/5646.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5646.patch', 'merged_at': '2023-03-16T17:14:50Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5645
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5645/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5645/comments
https://api.github.com/repos/huggingface/datasets/issues/5645/events
https://github.com/huggingface/datasets/issues/5645
1,627,108,278
I_kwDODunzps5g-7O2
5,645
Datasets map and select(range()) is giving dill error
{'login': 'Tanya-11', 'id': 90728105, 'node_id': 'MDQ6VXNlcjkwNzI4MTA1', 'avatar_url': 'https://avatars.githubusercontent.com/u/90728105?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Tanya-11', 'html_url': 'https://github.com/Tanya-11', 'followers_url': 'https://api.github.com/users/Tanya-11/followers',...
[]
closed
False
[]
[ "It looks like an error that we observed once in https://github.com/huggingface/datasets/pull/5166\r\n\r\nCan you try to update `datasets` ?\r\n\r\n```\r\npip install -U datasets\r\n```\r\n\r\nif it doesn't work, can you make sure you don't have packages installed that may modify `dill`'s behavior, such as `apache-...
1,678,960,888,000
1,679,027,091,000
1,679,027,091,000
NONE
### Describe the bug I'm using Huggingface Datasets library to load the dataset in google colab When I do, > data = train_dataset.select(range(10)) or > train_datasets = train_dataset.map( > process_data_to_model_inputs, > batched=True, > batch_size=batch_size, > remove_columns...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5645/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5645/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5644
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5644/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5644/comments
https://api.github.com/repos/huggingface/datasets/issues/5644/events
https://github.com/huggingface/datasets/pull/5644
1,626,204,046
PR_kwDODunzps5MJHUi
5,644
Allow direct cast from binary to Audio/Image
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,910,574,000
1,678,976,444,000
1,678,975,975,000
CONTRIBUTOR
To address https://github.com/huggingface/datasets/discussions/5593.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5644/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5644/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5644', 'html_url': 'https://github.com/huggingface/datasets/pull/5644', 'diff_url': 'https://github.com/huggingface/datasets/pull/5644.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5644.patch', 'merged_at': '2023-03-16T14:12:55Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5643
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5643/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5643/comments
https://api.github.com/repos/huggingface/datasets/issues/5643/events
https://github.com/huggingface/datasets/pull/5643
1,626,160,220
PR_kwDODunzps5MI9zO
5,643
Support PyArrow arrays as column values in `from_dict`
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,908,760,000
1,678,987,386,000
1,678,986,940,000
CONTRIBUTOR
For consistency with `pa.Table.from_pydict`, which supports both Python lists and PyArrow arrays as column values. "Fixes" https://discuss.huggingface.co/t/pyarrow-lib-floatarray-did-not-recognize-python-value-type-when-inferring-an-arrow-data-type/33417
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5643/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5643/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5643', 'html_url': 'https://github.com/huggingface/datasets/pull/5643', 'diff_url': 'https://github.com/huggingface/datasets/pull/5643.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5643.patch', 'merged_at': '2023-03-16T17:15:39Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5642
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5642/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5642/comments
https://api.github.com/repos/huggingface/datasets/issues/5642/events
https://github.com/huggingface/datasets/pull/5642
1,626,043,177
PR_kwDODunzps5MIjw9
5,642
Bump hfh to 0.11.0
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,904,767,000
1,679,315,649,000
1,679,315,218,000
MEMBER
to fix errors like ``` requests.exceptions.HTTPError: 400 Client Error: Bad Request for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/... ``` (e.g. from this [failing CI](https://github.com/huggingface/datasets/actions/runs/4428956210/jobs/7769160997)) 0.11.0 is the current mini...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5642/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5642/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5642', 'html_url': 'https://github.com/huggingface/datasets/pull/5642', 'diff_url': 'https://github.com/huggingface/datasets/pull/5642.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5642.patch', 'merged_at': '2023-03-20T12:26:58Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5641
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5641/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5641/comments
https://api.github.com/repos/huggingface/datasets/issues/5641/events
https://github.com/huggingface/datasets/issues/5641
1,625,942,730
I_kwDODunzps5g6erK
5,641
Features cannot be named "self"
{'login': 'alialamiidrissi', 'id': 14365168, 'node_id': 'MDQ6VXNlcjE0MzY1MTY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/14365168?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/alialamiidrissi', 'html_url': 'https://github.com/alialamiidrissi', 'followers_url': 'https://api.github.com/users...
[]
closed
False
[]
[]
1,678,900,600,000
1,678,986,891,000
1,678,986,891,000
NONE
### Describe the bug Hi, I noticed that we cannot create a HuggingFace dataset from Pandas DataFrame with a column named `self`. The error seems to be coming from arguments validation in the `Features.from_dict` function. ### Steps to reproduce the bug ```python import datasets dummy_pandas = pd.DataFrame([0...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5641/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5641/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5640
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5640/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5640/comments
https://api.github.com/repos/huggingface/datasets/issues/5640/events
https://github.com/huggingface/datasets/pull/5640
1,625,896,057
PR_kwDODunzps5MID3I
5,640
Less zip false positives
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,678,898,939,000
1,678,974,457,000
1,678,974,012,000
MEMBER
`zipfile.is_zipfile` return false positives for some Parquet files. It causes errors when loading certain parquet datasets, where some files are considered ZIP files by `zipfile.is_zipfile` This is a known issue: https://github.com/python/cpython/issues/72680 At first I wanted to rely only on magic numbers, but t...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5640/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5640/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5640', 'html_url': 'https://github.com/huggingface/datasets/pull/5640', 'diff_url': 'https://github.com/huggingface/datasets/pull/5640.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5640.patch', 'merged_at': '2023-03-16T13:40:12Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5639
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5639/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5639/comments
https://api.github.com/repos/huggingface/datasets/issues/5639/events
https://github.com/huggingface/datasets/issues/5639
1,625,737,098
I_kwDODunzps5g5seK
5,639
Parquet file wrongly recognized as zip prevents loading a dataset
{'login': 'clefourrier', 'id': 22726840, 'node_id': 'MDQ6VXNlcjIyNzI2ODQw', 'avatar_url': 'https://avatars.githubusercontent.com/u/22726840?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/clefourrier', 'html_url': 'https://github.com/clefourrier', 'followers_url': 'https://api.github.com/users/clefourrier...
[]
closed
False
[]
[]
1,678,893,645,000
1,678,974,014,000
1,678,974,014,000
CONTRIBUTOR
### Describe the bug When trying to `load_dataset_builder` for `HuggingFaceGECLM/StackExchange_Mar2023`, extraction fails, because parquet file [devops-00000-of-00001-22fe902fd8702892.parquet](https://huggingface.co/datasets/HuggingFaceGECLM/StackExchange_Mar2023/resolve/1f8c9a2ab6f7d0f9ae904b8b922e4384592ae1a5/data...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5639/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5639/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5638
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5638/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5638/comments
https://api.github.com/repos/huggingface/datasets/issues/5638/events
https://github.com/huggingface/datasets/issues/5638
1,625,564,471
I_kwDODunzps5g5CU3
5,638
xPath to implement all operations for Path
{'login': 'thomasw21', 'id': 24695242, 'node_id': 'MDQ6VXNlcjI0Njk1MjQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/24695242?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/thomasw21', 'html_url': 'https://github.com/thomasw21', 'followers_url': 'https://api.github.com/users/thomasw21/followe...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
closed
False
[]
[ " I think https://github.com/fsspec/universal_pathlib is the project you are looking for.\r\n\r\n`xPath` has the methods often used in dataset scripts, and `mkdir` is not one of them (`dl_manager`'s role is to \"interact\" with the file system, so using `mkdir` is discouraged).", "Right is there a difference betw...
1,678,888,031,000
1,679,059,272,000
1,679,059,272,000
MEMBER
### Feature request Current xPath implementation is a great extension of Path in order to work with remote objects. However some methods such as `mkdir` are not implemented correctly. It should instead rely on `fsspec` methods, instead of defaulting do `Path` methods which only work locally. ### Motivation I'm using...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5638/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5638/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5637
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5637/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5637/comments
https://api.github.com/repos/huggingface/datasets/issues/5637/events
https://github.com/huggingface/datasets/issues/5637
1,625,295,691
I_kwDODunzps5g4AtL
5,637
IterableDataset with_format does not support 'device' keyword for jax
{'login': 'Lime-Cakes', 'id': 91322985, 'node_id': 'MDQ6VXNlcjkxMzIyOTg1', 'avatar_url': 'https://avatars.githubusercontent.com/u/91322985?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Lime-Cakes', 'html_url': 'https://github.com/Lime-Cakes', 'followers_url': 'https://api.github.com/users/Lime-Cakes/fol...
[]
open
False
[]
[ "Hi! Yes, only `torch` is currently supported. Unlike `Dataset`, `IterableDataset` is not PyArrow-backed, so we cannot simply call `to_numpy` on the underlying subtables to format them numerically. Instead, we must manually convert examples to (numeric) arrays while preserving consistency with `Dataset`, which is n...
1,678,878,252,000
1,678,991,459,000
null
NONE
### Describe the bug As seen here: https://huggingface.co/docs/datasets/use_with_jax dataset.with_format() supports the keyword 'device', to put data on a specific device when loaded as jax. However, when called on an IterableDataset, I got the error `TypeError: with_format() got an unexpected keyword argument 'devi...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5637/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5637/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5636
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5636/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5636/comments
https://api.github.com/repos/huggingface/datasets/issues/5636/events
https://github.com/huggingface/datasets/pull/5636
1,623,721,577
PR_kwDODunzps5MAunR
5,636
Fix CI: ignore C901 ("some_func" is to complex) in `ruff`
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaet...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,807,751,000
1,678,811,826,000
1,678,811,392,000
CONTRIBUTOR
idk if I should have added this ignore to `ruff` too, but I added :)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5636/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5636/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5636', 'html_url': 'https://github.com/huggingface/datasets/pull/5636', 'diff_url': 'https://github.com/huggingface/datasets/pull/5636.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5636.patch', 'merged_at': '2023-03-14T16:29:52Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5635
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5635/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5635/comments
https://api.github.com/repos/huggingface/datasets/issues/5635/events
https://github.com/huggingface/datasets/pull/5635
1,623,682,558
PR_kwDODunzps5MAmLU
5,635
Pass custom metadata filename to Image/Audio folders
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaet...
[]
open
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5635). All of your documentation changes will be reflected on that endpoint.", "I'm not a big fan of this new param - I find assigning metadata files to splits via the `data_files` param cleaner. Also, assuming that the metadat...
1,678,806,496,000
1,679,507,431,000
null
CONTRIBUTOR
This is a quick fix. Now it requires passing data via the `data_files` parameter, including a required metadata file there, and passing its filename as the `metadata_filename` parameter. For example, with the structure like: ``` data images_dir/ im1.jpg im2.jpg ... metadata_dir/ meta_file...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5635/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 1, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5635/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5635', 'html_url': 'https://github.com/huggingface/datasets/pull/5635', 'diff_url': 'https://github.com/huggingface/datasets/pull/5635.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5635.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5634
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5634/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5634/comments
https://api.github.com/repos/huggingface/datasets/issues/5634/events
https://github.com/huggingface/datasets/issues/5634
1,622,424,174
I_kwDODunzps5gtDpu
5,634
Not all progress bars are showing up when they should for downloading dataset
{'login': 'garlandz-db', 'id': 110427462, 'node_id': 'U_kgDOBpT9Rg', 'avatar_url': 'https://avatars.githubusercontent.com/u/110427462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/garlandz-db', 'html_url': 'https://github.com/garlandz-db', 'followers_url': 'https://api.github.com/users/garlandz-db/follo...
[]
open
False
[]
[ "Hi! \r\n\r\nBy default, tqdm has `leave=True` to \"keep all traces of the progress bar upon the termination of iteration\". However, we use `leave=False` in some places (as of recently), which removes the bar once the iteration is over.\r\n\r\nI feel like our TQDM bars are noisy, so I think we should always set `l...
1,678,748,658,000
1,679,363,999,000
null
NONE
### Describe the bug During downloading the rotten tomatoes dataset, not all progress bars are displayed properly. This might be related to [this ticket](https://github.com/huggingface/datasets/issues/5117), as it raised the same concern, but it's not clear if the fix solves this issue too. ipywidgets <img width=...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5634/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5634/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5633
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5633/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5633/comments
https://api.github.com/repos/huggingface/datasets/issues/5633/events
https://github.com/huggingface/datasets/issues/5633
1,621,469,970
I_kwDODunzps5gpasS
5,633
Cannot import datasets
{'login': 'eerio', 'id': 11250555, 'node_id': 'MDQ6VXNlcjExMjUwNTU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/11250555?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/eerio', 'html_url': 'https://github.com/eerio', 'followers_url': 'https://api.github.com/users/eerio/followers', 'following_...
[]
closed
False
[]
[ "Okay, the issue was likely caused by mixing `conda` and `pip` usage - I forgot that I have already used `pip` in this environment previously and that it was 'spoiled' because of it. Creating another environment and installing `datasets` by pip with other packages from the `requirements.txt` file solved the problem...
1,678,713,284,000
1,678,730,059,000
1,678,730,059,000
NONE
### Describe the bug Hi, I cannot even import the library :( I installed it by running: ``` $ conda install datasets ``` Then I realized I should maybe use the huggingface channel, because I encountered the error below, so I ran: ``` $ conda remove datasets $ conda install -c huggingface datasets ``` Pl...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5633/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5633/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5632
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5632/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5632/comments
https://api.github.com/repos/huggingface/datasets/issues/5632/events
https://github.com/huggingface/datasets/issues/5632
1,621,177,391
I_kwDODunzps5goTQv
5,632
Dataset cannot convert too large dictionary
{'login': 'MaraLac', 'id': 108518627, 'node_id': 'U_kgDOBnfc4w', 'avatar_url': 'https://avatars.githubusercontent.com/u/108518627?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/MaraLac', 'html_url': 'https://github.com/MaraLac', 'followers_url': 'https://api.github.com/users/MaraLac/followers', 'followin...
[]
open
False
[]
[ "Answered on the forum:\r\n\r\n> To fix the overflow error, we need to merge [support LargeListArray in pyarrow by xwwwwww · Pull Request #4800 · huggingface/datasets · GitHub](https://github.com/huggingface/datasets/pull/4800), which adds support for the large lists. However, before merging it, we need to come up ...
1,678,702,480,000
1,678,980,537,000
null
NONE
### Describe the bug Hello everyone! I tried to build a new dataset with the command "dict_valid = datasets.Dataset.from_dict({'input_values': values_array})". However, I have a very large dataset (~400Go) and it seems that dataset cannot handle this. Indeed, I can create the dataset until a certain size of m...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5632/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5632/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5631
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5631/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5631/comments
https://api.github.com/repos/huggingface/datasets/issues/5631/events
https://github.com/huggingface/datasets/issues/5631
1,620,442,854
I_kwDODunzps5glf7m
5,631
Custom split names
{'login': 'ErfanMoosaviMonazzah', 'id': 79091831, 'node_id': 'MDQ6VXNlcjc5MDkxODMx', 'avatar_url': 'https://avatars.githubusercontent.com/u/79091831?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ErfanMoosaviMonazzah', 'html_url': 'https://github.com/ErfanMoosaviMonazzah', 'followers_url': 'https://api.g...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
False
[]
[ "Hi!\r\n\r\nYou can also use names other than \"train\", \"validation\" and \"test\". As an example, check the [script](https://huggingface.co/datasets/mozilla-foundation/common_voice_11_0/blob/e095840f23f3dffc1056c078c2f9320dad9ca74d/common_voice_11_0.py#L139) of the Common Voice 11 dataset. " ]
1,678,641,703,000
1,678,731,182,000
null
NONE
### Feature request Hi, I participated in multiple NLP tasks where there are more than just train, test, and validation splits; there could be multiple validation sets or test sets. But it seems currently only those three mentioned splits are supported. It would be nice to have support for more splits on the hub. (curren...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5631/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5631/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5630
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5630/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5630/comments
https://api.github.com/repos/huggingface/datasets/issues/5630/events
https://github.com/huggingface/datasets/pull/5630
1,620,327,510
PR_kwDODunzps5L1ahF
5,630
adds early exit if url is `PathLike`
{'login': 'vvvm23', 'id': 44398246, 'node_id': 'MDQ6VXNlcjQ0Mzk4MjQ2', 'avatar_url': 'https://avatars.githubusercontent.com/u/44398246?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/vvvm23', 'html_url': 'https://github.com/vvvm23', 'followers_url': 'https://api.github.com/users/vvvm23/followers', 'follow...
[]
open
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5630). All of your documentation changes will be reflected on that endpoint." ]
1,678,620,208,000
1,678,881,518,000
null
NONE
Closes #4864 Should fix errors thrown when attempting to load `json` dataset using `pathlib.Path` in `data_files` argument.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5630/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5630/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5630', 'html_url': 'https://github.com/huggingface/datasets/pull/5630', 'diff_url': 'https://github.com/huggingface/datasets/pull/5630.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5630.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5629
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5629/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5629/comments
https://api.github.com/repos/huggingface/datasets/issues/5629/events
https://github.com/huggingface/datasets/issues/5629
1,619,921,247
I_kwDODunzps5gjglf
5,629
load_dataset gives "403" error when using Financial phrasebank
{'login': 'Jimchoo91', 'id': 67709789, 'node_id': 'MDQ6VXNlcjY3NzA5Nzg5', 'avatar_url': 'https://avatars.githubusercontent.com/u/67709789?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Jimchoo91', 'html_url': 'https://github.com/Jimchoo91', 'followers_url': 'https://api.github.com/users/Jimchoo91/followe...
[]
open
False
[]
[ "Hi! You seem to be using an outdated version of `datasets` that downloads the older script version. To avoid the error, you can either pass `revision=\"main\"` to `load_dataset` (this can fail if a script uses newer features of the lib) or update your installation with `pip install -U datasets` (better solution)."...
1,678,520,799,000
1,678,732,046,000
null
NONE
When I try to load this dataset, I receive the following error: ConnectionError: Couldn't reach https://www.researchgate.net/profile/Pekka_Malo/publication/251231364_FinancialPhraseBank-v10/data/0c96051eee4fb1d56e000000/FinancialPhraseBank-v10.zip (error 403) Has this been seen before? Thanks. The website loads ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5629/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5629/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5628
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5628/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5628/comments
https://api.github.com/repos/huggingface/datasets/issues/5628/events
https://github.com/huggingface/datasets/pull/5628
1,619,641,810
PR_kwDODunzps5LzVKi
5,628
add kwargs to index search
{'login': 'SaulLu', 'id': 55560583, 'node_id': 'MDQ6VXNlcjU1NTYwNTgz', 'avatar_url': 'https://avatars.githubusercontent.com/u/55560583?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/SaulLu', 'html_url': 'https://github.com/SaulLu', 'followers_url': 'https://api.github.com/users/SaulLu/followers', 'follow...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
1,678,483,498,000
1,678,891,727,000
1,678,891,564,000
CONTRIBUTOR
This PR proposes to add kwargs to index search methods. This is particularly useful for setting the timeout of a query on elasticsearch. A typical use case would be: ```python dset.add_elasticsearch_index("filename", es_client=es_client) scores, examples = dset.get_nearest_examples("filename", "my_name-train_2...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5628/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5628/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5628', 'html_url': 'https://github.com/huggingface/datasets/pull/5628', 'diff_url': 'https://github.com/huggingface/datasets/pull/5628.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5628.patch', 'merged_at': '2023-03-15T14:46:04Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5627
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5627/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5627/comments
https://api.github.com/repos/huggingface/datasets/issues/5627/events
https://github.com/huggingface/datasets/issues/5627
1,619,336,609
I_kwDODunzps5ghR2h
5,627
Unable to load AutoTrain-generated dataset from the hub
{'login': 'ijmiller2', 'id': 8560151, 'node_id': 'MDQ6VXNlcjg1NjAxNTE=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8560151?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ijmiller2', 'html_url': 'https://github.com/ijmiller2', 'followers_url': 'https://api.github.com/users/ijmiller2/followers...
[]
open
False
[]
[ "The AutoTrain format is not supported right now. I think it would require a dedicated dataset builder", "Okay, good to know. Thanks for the reply. For now I will just have to\nmanage the split manually before training, because I can’t find any way of\npulling out file indices or file names from the autogenerated...
1,678,469,158,000
1,678,549,482,000
null
NONE
### Describe the bug DatasetGenerationError: An error occurred while generating the dataset -> ValueError: Couldn't cast ... because column names don't match ``` ValueError: Couldn't cast _data_files: list<item: struct<filename: string>> child 0, item: struct<filename: string> child 0, filename: string ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5627/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5627/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5626
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5626/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5626/comments
https://api.github.com/repos/huggingface/datasets/issues/5626/events
https://github.com/huggingface/datasets/pull/5626
1,619,252,984
PR_kwDODunzps5LyBT4
5,626
Support streaming datasets with numpy.load
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,466,019,000
1,679,380,565,000
1,679,380,134,000
MEMBER
Support streaming datasets with `numpy.load`. See: https://huggingface.co/datasets/qgallouedec/gia_dataset/discussions/1
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5626/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5626/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5626', 'html_url': 'https://github.com/huggingface/datasets/pull/5626', 'diff_url': 'https://github.com/huggingface/datasets/pull/5626.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5626.patch', 'merged_at': '2023-03-21T06:28:54Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5625
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5625/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5625/comments
https://api.github.com/repos/huggingface/datasets/issues/5625/events
https://github.com/huggingface/datasets/issues/5625
1,618,971,855
I_kwDODunzps5gf4zP
5,625
Allow "jsonl" data type signifier
{'login': 'BramVanroy', 'id': 2779410, 'node_id': 'MDQ6VXNlcjI3Nzk0MTA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2779410?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/BramVanroy', 'html_url': 'https://github.com/BramVanroy', 'followers_url': 'https://api.github.com/users/BramVanroy/follo...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
False
[]
[ "You can use \"json\" instead. It doesn't work by extension names, but rather by dataset builder names, e.g. \"text\", \"imagefolder\", etc. I don't think the example in `transformers` is correct because of that", "Yes, I understand the reasoning but this issue is to propose that the example in transformers (whil...
1,678,454,508,000
1,678,530,939,000
null
CONTRIBUTOR
### Feature request `load_dataset` currently does not accept `jsonl` as type but only `json`. ### Motivation I was working with one of the `run_translation` scripts and used my own datasets (`.jsonl`) as train_dataset. But the default code did not work because ``` FileNotFoundError: Couldn't find a dataset scri...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5625/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5625/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5624
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5624/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5624/comments
https://api.github.com/repos/huggingface/datasets/issues/5624/events
https://github.com/huggingface/datasets/issues/5624
1,617,400,192
I_kwDODunzps5gZ5GA
5,624
glue datasets returning -1 for test split
{'login': 'lithafnium', 'id': 8939967, 'node_id': 'MDQ6VXNlcjg5Mzk5Njc=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8939967?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lithafnium', 'html_url': 'https://github.com/lithafnium', 'followers_url': 'https://api.github.com/users/lithafnium/follo...
[]
closed
False
[]
[ "Hi @lithafnium, thanks for reporting.\r\n\r\nPlease note that you can use the \"Community\" tab in the corresponding dataset page to start any discussion: https://huggingface.co/datasets/glue/discussions\r\n\r\nIndeed this issue was already raised there (https://huggingface.co/datasets/glue/discussions/5) and answ...
1,678,373,238,000
1,678,380,569,000
1,678,380,569,000
NONE
### Describe the bug Downloading any dataset from GLUE has -1 as class labels for test split. Train and validation have regular 0/1 class labels. This is also present in the dataset card online. ### Steps to reproduce the bug ``` dataset = load_dataset("glue", "sst2") for d in dataset: # prints out -1 ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5624/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5624/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5623
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5623/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5623/comments
https://api.github.com/repos/huggingface/datasets/issues/5623/events
https://github.com/huggingface/datasets/pull/5623
1,616,712,665
PR_kwDODunzps5Lpb4q
5,623
Remove set_access_token usage + fail tests if FutureWarning
{'login': 'Wauplin', 'id': 11801849, 'node_id': 'MDQ6VXNlcjExODAxODQ5', 'avatar_url': 'https://avatars.githubusercontent.com/u/11801849?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Wauplin', 'html_url': 'https://github.com/Wauplin', 'followers_url': 'https://api.github.com/users/Wauplin/followers', 'fo...
[]
closed
False
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,678,351,561,000
1,678,376,340,000
1,678,375,919,000
CONTRIBUTOR
`set_access_token` is deprecated and will be removed in `huggingface_hub>=0.14`. This PR removes it from the tests (it was not used in `datasets` source code itself). FYI, it was not needed since `set_access_token` was just setting git credentials and `datasets` doesn't seem to use git anywhere. In the future, us...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5623/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5623/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5623', 'html_url': 'https://github.com/huggingface/datasets/pull/5623', 'diff_url': 'https://github.com/huggingface/datasets/pull/5623.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5623.patch', 'merged_at': '2023-03-09T15:31:58Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5622
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5622/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5622/comments
https://api.github.com/repos/huggingface/datasets/issues/5622/events
https://github.com/huggingface/datasets/pull/5622
1,615,190,942
PR_kwDODunzps5LkSj8
5,622
Update README template to better template
{'login': 'emiltj', 'id': 54767532, 'node_id': 'MDQ6VXNlcjU0NzY3NTMy', 'avatar_url': 'https://avatars.githubusercontent.com/u/54767532?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/emiltj', 'html_url': 'https://github.com/emiltj', 'followers_url': 'https://api.github.com/users/emiltj/followers', 'follow...
[]
closed
False
[]
[ "IMO this template should stay generic.\r\n\r\nAlso, we now use [the card template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md) from `hugginface_hub` as the source of truth on the Hub (you now have the option to import it into the dataset card/READ...
1,678,278,623,000
1,678,511,258,000
1,678,511,258,000
NONE
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5622/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5622/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5622', 'html_url': 'https://github.com/huggingface/datasets/pull/5622', 'diff_url': 'https://github.com/huggingface/datasets/pull/5622.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5622.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5621
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5621/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5621/comments
https://api.github.com/repos/huggingface/datasets/issues/5621/events
https://github.com/huggingface/datasets/pull/5621
1,615,029,615
PR_kwDODunzps5LjwD8
5,621
Adding Oracle Cloud to docs
{'login': 'ahosler', 'id': 29129502, 'node_id': 'MDQ6VXNlcjI5MTI5NTAy', 'avatar_url': 'https://avatars.githubusercontent.com/u/29129502?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/ahosler', 'html_url': 'https://github.com/ahosler', 'followers_url': 'https://api.github.com/users/ahosler/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,270,970,000
1,678,496,238,000
1,678,495,796,000
CONTRIBUTOR
Adding Oracle Cloud's fsspec implementation to the list of supported cloud storage providers.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5621/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5621/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5621', 'html_url': 'https://github.com/huggingface/datasets/pull/5621', 'diff_url': 'https://github.com/huggingface/datasets/pull/5621.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5621.patch', 'merged_at': '2023-03-11T00:49:56Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5620
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5620/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5620/comments
https://api.github.com/repos/huggingface/datasets/issues/5620/events
https://github.com/huggingface/datasets/pull/5620
1,613,460,520
PR_kwDODunzps5LefAf
5,620
Bump pyarrow to 8.0.0
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,195,913,000
1,678,284,087,000
1,678,283,662,000
MEMBER
Fix those for Pandas 2.0 (tested [here](https://github.com/huggingface/datasets/actions/runs/4346221280/jobs/7592010397) with pandas==2.0.0.rc0): ```python =========================== short test summary info ============================ FAILED tests/test_arrow_dataset.py::BaseDatasetTest::test_to_parquet_in_memory...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5620/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5620/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5620', 'html_url': 'https://github.com/huggingface/datasets/pull/5620', 'diff_url': 'https://github.com/huggingface/datasets/pull/5620.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5620.patch', 'merged_at': '2023-03-08T13:54:21Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5619
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5619/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5619/comments
https://api.github.com/repos/huggingface/datasets/issues/5619/events
https://github.com/huggingface/datasets/pull/5619
1,613,439,709
PR_kwDODunzps5LeaYP
5,619
unpin fsspec
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,195,361,000
1,678,196,821,000
1,678,196,342,000
MEMBER
close https://github.com/huggingface/datasets/issues/5618
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5619/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5619/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5619', 'html_url': 'https://github.com/huggingface/datasets/pull/5619', 'diff_url': 'https://github.com/huggingface/datasets/pull/5619.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5619.patch', 'merged_at': '2023-03-07T13:39:02Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5618
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5618/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5618/comments
https://api.github.com/repos/huggingface/datasets/issues/5618/events
https://github.com/huggingface/datasets/issues/5618
1,612,977,934
I_kwDODunzps5gJBcO
5,618
Unpin fsspec < 2023.3.0 once issue fixed
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[]
closed
False
[]
[]
1,678,178,511,000
1,678,196,343,000
1,678,196,343,000
MEMBER
Unpin `fsspec` upper version once root cause of our CI break is fixed. See: - #5614
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5618/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5618/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5617
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5617/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5617/comments
https://api.github.com/repos/huggingface/datasets/issues/5617/events
https://github.com/huggingface/datasets/pull/5617
1,612,947,422
PR_kwDODunzps5LcvI-
5,617
Fix CI by temporarily pinning fsspec < 2023.3.0
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,177,100,000
1,678,178,695,000
1,678,178,248,000
MEMBER
As a hotfix for our CI, temporarily pin `fsspec`: Fix #5616. Until root cause is fixed, see: - #5614
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5617/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5617/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5617', 'html_url': 'https://github.com/huggingface/datasets/pull/5617', 'diff_url': 'https://github.com/huggingface/datasets/pull/5617.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5617.patch', 'merged_at': '2023-03-07T08:37:28Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5616
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5616/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5616/comments
https://api.github.com/repos/huggingface/datasets/issues/5616/events
https://github.com/huggingface/datasets/issues/5616
1,612,932,508
I_kwDODunzps5gI2Wc
5,616
CI is broken after fsspec-2023.3.0 release
{'login': 'albertvillanova', 'id': 8515462, 'node_id': 'MDQ6VXNlcjg1MTU0NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/8515462?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/albertvillanova', 'html_url': 'https://github.com/albertvillanova', 'followers_url': 'https://api.github.com/users/a...
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
False
[]
[]
1,678,176,399,000
1,678,178,249,000
1,678,178,249,000
MEMBER
As reported by @lhoestq, our CI is broken after `fsspec` 2023.3.0 release: ``` FAILED tests/test_filesystem.py::test_compression_filesystems[Bz2FileSystem] - AssertionError: assert [{'created': ...: False, ...}] == ['file.txt'] At index 0 diff: {'name': 'file.txt', 'size': 70, 'type': 'file', 'created': 1678175677...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5616/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5616/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5615
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5615/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5615/comments
https://api.github.com/repos/huggingface/datasets/issues/5615/events
https://github.com/huggingface/datasets/issues/5615
1,612,552,653
I_kwDODunzps5gHZnN
5,615
IterableDataset.add_column is unable to accept another IterableDataset as a parameter.
{'login': 'zsaladin', 'id': 6466389, 'node_id': 'MDQ6VXNlcjY0NjYzODk=', 'avatar_url': 'https://avatars.githubusercontent.com/u/6466389?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/zsaladin', 'html_url': 'https://github.com/zsaladin', 'followers_url': 'https://api.github.com/users/zsaladin/followers', '...
[{'id': 1935892913, 'node_id': 'MDU6TGFiZWwxOTM1ODkyOTEz', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/wontfix', 'name': 'wontfix', 'color': 'ffffff', 'default': True, 'description': 'This will not be worked on'}]
closed
False
[]
[ "Hi! You can use `concatenate_datasets([ids1, ids2], axis=1)` to do this." ]
1,678,153,920,000
1,678,375,445,000
1,678,375,434,000
NONE
### Describe the bug `IterableDataset.add_column` occurs an exception when passing another `IterableDataset` as a parameter. The method seems to accept only eager evaluated values. https://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/iterable_dataset.py#L1388-L1391 ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5615/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5615/timeline
completed
true
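The fix suggested in the record above — `concatenate_datasets([ids1, ids2], axis=1)` — merges two iterable datasets column-wise. A minimal plain-Python sketch of that behavior (the `zip_columns` helper and the example streams are illustrative, not part of the `datasets` API):

```python
def zip_columns(stream_a, stream_b):
    # Mimics datasets.concatenate_datasets([ids1, ids2], axis=1) on two
    # iterable datasets: lazily merge the fields of paired examples.
    for ex_a, ex_b in zip(stream_a, stream_b):
        yield {**ex_a, **ex_b}

texts = ({"text": f"t{i}"} for i in range(3))
labels = ({"label": i} for i in range(3))
merged = list(zip_columns(texts, labels))
# merged[0] == {"text": "t0", "label": 0}
```

Like the real `axis=1` concatenation, this only makes sense when both streams yield examples in the same order and of the same length.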
https://api.github.com/repos/huggingface/datasets/issues/5614
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5614/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5614/comments
https://api.github.com/repos/huggingface/datasets/issues/5614/events
https://github.com/huggingface/datasets/pull/5614
1,611,896,357
PR_kwDODunzps5LZOTd
5,614
Fix archive fs test
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,678,123,689,000
1,678,195,670,000
1,678,195,257,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5614/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5614/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5614', 'html_url': 'https://github.com/huggingface/datasets/pull/5614', 'diff_url': 'https://github.com/huggingface/datasets/pull/5614.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5614.patch', 'merged_at': '2023-03-07T13:20:57Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5613
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5613/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5613/comments
https://api.github.com/repos/huggingface/datasets/issues/5613/events
https://github.com/huggingface/datasets/issues/5613
1,611,875,473
I_kwDODunzps5gE0SR
5,613
Version mismatch with multiprocess and dill on Python 3.10
{'login': 'adampauls', 'id': 1243668, 'node_id': 'MDQ6VXNlcjEyNDM2Njg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/1243668?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/adampauls', 'html_url': 'https://github.com/adampauls', 'followers_url': 'https://api.github.com/users/adampauls/followers...
[]
open
False
[]
[ "Sorry, I just found https://github.com/apache/beam/issues/24458. It seems this issue is being worked on. ", "Reopening, since I think the docs should inform the user of this problem. For example, [this page](https://huggingface.co/docs/datasets/installation) says \r\n> Datasets is tested on Python 3.7+.\r\n\r\nb...
1,678,122,881,000
1,678,150,683,000
null
NONE
### Describe the bug Grabbing the latest version of `datasets` and `apache-beam` with `poetry` using Python 3.10 gives a crash at runtime. The crash is ``` File "/Users/adpauls/sc/git/DSI-transformers/data/NQ/create_NQ_train_vali.py", line 1, in <module> import datasets File "/Users/adpauls/Library/Caches/...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5613/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5613/timeline
reopened
true
https://api.github.com/repos/huggingface/datasets/issues/5612
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5612/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5612/comments
https://api.github.com/repos/huggingface/datasets/issues/5612/events
https://github.com/huggingface/datasets/issues/5612
1,611,262,510
I_kwDODunzps5gCeou
5,612
Arrow map type in parquet files unsupported
{'login': 'TevenLeScao', 'id': 26709476, 'node_id': 'MDQ6VXNlcjI2NzA5NDc2', 'avatar_url': 'https://avatars.githubusercontent.com/u/26709476?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/TevenLeScao', 'html_url': 'https://github.com/TevenLeScao', 'followers_url': 'https://api.github.com/users/TevenLeScao...
[]
open
False
[]
[ "I'm attaching a minimal reproducible example:\r\n```python\r\nfrom datasets import load_dataset\r\nimport pyarrow as pa\r\nimport pyarrow.parquet as pq\r\n\r\ntable_with_map = pa.Table.from_pydict(\r\n {\"a\": [1, 2], \"b\": [[(\"a\", 2)], [(\"b\", 4)]]},\r\n schema=pa.schema({\"a\": pa.int32(), \"b\": pa.ma...
1,678,104,204,000
1,678,814,425,000
null
MEMBER
### Describe the bug When I try to load parquet files that were processed with Spark, I get the following issue: `ValueError: Arrow type map<string, string ('warc_headers')> does not have a datasets dtype equivalent.` Strangely, loading the dataset with `streaming=True` solves the issue. ### Steps to reproduce ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5612/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5612/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5611
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5611/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5611/comments
https://api.github.com/repos/huggingface/datasets/issues/5611/events
https://github.com/huggingface/datasets/pull/5611
1,611,197,906
PR_kwDODunzps5LW2Lx
5,611
add Dataset.to_list
{'login': 'kyoto7250', 'id': 50972773, 'node_id': 'MDQ6VXNlcjUwOTcyNzcz', 'avatar_url': 'https://avatars.githubusercontent.com/u/50972773?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kyoto7250', 'html_url': 'https://github.com/kyoto7250', 'followers_url': 'https://api.github.com/users/kyoto7250/followe...
[]
open
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5611). All of your documentation changes will be reflected on that endpoint.", "Hi, thanks for working on this! `Table.to_pylist` requires PyArrow 7.0+, and our minimal version requirement is 6.0, so we need to bump the version...
1,678,101,717,000
1,679,500,110,000
null
NONE
close https://github.com/huggingface/datasets/issues/5606 This PR is for adding the `Dataset.to_list` method. Thank you in advance.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5611/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5611/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5611', 'html_url': 'https://github.com/huggingface/datasets/pull/5611', 'diff_url': 'https://github.com/huggingface/datasets/pull/5611.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5611.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5610
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5610/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5610/comments
https://api.github.com/repos/huggingface/datasets/issues/5610/events
https://github.com/huggingface/datasets/issues/5610
1,610,698,006
I_kwDODunzps5gAU0W
5,610
Using datasets streaming mode in Trainer DDP mode causes a memory leak
{'login': 'gromzhu', 'id': 15223544, 'node_id': 'MDQ6VXNlcjE1MjIzNTQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/15223544?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/gromzhu', 'html_url': 'https://github.com/gromzhu', 'followers_url': 'https://api.github.com/users/gromzhu/followers', 'fo...
[]
open
False
[]
[]
1,678,080,409,000
1,678,081,030,000
null
NONE
### Describe the bug Using datasets streaming mode in Trainer DDP mode causes a memory leak ### Steps to reproduce the bug import os import time import datetime import sys import numpy as np import random import torch from torch.utils.data import Dataset, DataLoader, random_split, RandomSampler, Sequenti...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5610/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5610/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5609
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5609/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5609/comments
https://api.github.com/repos/huggingface/datasets/issues/5609/events
https://github.com/huggingface/datasets/issues/5609
1,610,062,862
I_kwDODunzps5f95wO
5,609
`load_from_disk` vs `load_dataset` performance.
{'login': 'davidgilbertson', 'id': 4443482, 'node_id': 'MDQ6VXNlcjQ0NDM0ODI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/4443482?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/davidgilbertson', 'html_url': 'https://github.com/davidgilbertson', 'followers_url': 'https://api.github.com/users/d...
[]
open
False
[]
[ "Hi! We've recently made some improvements to `save_to_disk`/`load_from_disk` (100x faster in some scenarios), so it would help if you could install `datasets` directly from `main` (`pip install git+https://github.com/huggingface/datasets.git`) and re-run the \"benchmark\".", "Great to hear! I'll give it a try when...
1,677,994,035,000
1,678,822,313,000
null
NONE
### Describe the bug I have downloaded `openwebtext` (~12GB) and filtered out a small amount of junk (it's still huge). Now, I would like to use this filtered version for future work. It seems I have two choices: 1. Use `load_dataset` each time, relying on the cache mechanism, and re-run my filtering. 2. `save_to_di...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5609/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5609/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5608
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5608/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5608/comments
https://api.github.com/repos/huggingface/datasets/issues/5608/events
https://github.com/huggingface/datasets/issues/5608
1,609,996,563
I_kwDODunzps5f9pkT
5,608
audiofolder only creates a dataset of 13 rows (files) when the data folder it's reading from has 20,000 mp3 files.
{'login': 'jcho19', 'id': 107211437, 'node_id': 'U_kgDOBmPqrQ', 'avatar_url': 'https://avatars.githubusercontent.com/u/107211437?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/jcho19', 'html_url': 'https://github.com/jcho19', 'followers_url': 'https://api.github.com/users/jcho19/followers', 'following_ur...
[]
closed
False
[]
[ "Hi!\r\n\r\n> naming convention of mp3 files\r\n\r\nYes, this could be the problem. MP3 files should end with `.mp3`/`.MP3` to be recognized as audio files.\r\n\r\nIf the file names are not the culprit, can you paste the audio folder's directory structure to help us reproduce the error (e.g., by running the `tree ...
1,677,975,285,000
1,678,579,377,000
1,678,579,377,000
NONE
### Describe the bug x = load_dataset("audiofolder", data_dir="x") When running this, x is a dataset of 13 rows (files) when it should be 20,000 rows (files) as the data_dir "x" has 20,000 mp3 files. Does anyone know what could possibly cause this (naming convention of mp3 files, etc.) ### Steps to reproduce the b...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5608/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5608/timeline
completed
true
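The comments on the record above point out that files are only picked up if their extension matches (`.mp3`/`.MP3`). A sketch of that matching rule, useful for checking a folder before loading (the helper and the extension set are illustrative, not the loader's actual code):

```python
import tempfile
from pathlib import Path

AUDIO_EXTS = {".mp3", ".wav", ".flac"}  # illustrative subset

def count_recognized_audio(data_dir):
    # Hypothetical helper mirroring the rule from the thread: a file is
    # only picked up if its extension matches, case-insensitively.
    return sum(1 for p in Path(data_dir).rglob("*") if p.suffix.lower() in AUDIO_EXTS)

d = tempfile.mkdtemp()
(Path(d) / "a.mp3").touch()
(Path(d) / "b.MP3").touch()
(Path(d) / "c.mp3.part").touch()  # wrong final suffix -> silently skipped
n = count_recognized_audio(d)
# n == 2
```

A count here that is far below the number of files on disk would point to a naming-convention problem like the one reported.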
https://api.github.com/repos/huggingface/datasets/issues/5607
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5607/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5607/comments
https://api.github.com/repos/huggingface/datasets/issues/5607/events
https://github.com/huggingface/datasets/pull/5607
1,609,166,035
PR_kwDODunzps5LQPbG
5,607
Fix outdated `verification_mode` values
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaet...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,873,029,000
1,678,383,253,000
1,678,382,827,000
CONTRIBUTOR
~I think it makes sense not to save `dataset_info.json` file to a dataset cache directory when loading dataset with `verification_mode="no_checks"` because otherwise when next time the dataset is loaded **without** `verification_mode="no_checks"`, it will be loaded successfully, despite some values in info might not co...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5607/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5607/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5607', 'html_url': 'https://github.com/huggingface/datasets/pull/5607', 'diff_url': 'https://github.com/huggingface/datasets/pull/5607.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5607.patch', 'merged_at': '2023-03-09T17:27:07Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5606
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5606/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5606/comments
https://api.github.com/repos/huggingface/datasets/issues/5606/events
https://github.com/huggingface/datasets/issues/5606
1,608,911,632
I_kwDODunzps5f5gsQ
5,606
Add `Dataset.to_list` to the API
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}, {'id': 1935892877, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3', 'url': 'https://api.git...
open
False
{'login': 'kyoto7250', 'id': 50972773, 'node_id': 'MDQ6VXNlcjUwOTcyNzcz', 'avatar_url': 'https://avatars.githubusercontent.com/u/50972773?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kyoto7250', 'html_url': 'https://github.com/kyoto7250', 'followers_url': 'https://api.github.com/users/kyoto7250/followe...
[{'login': 'kyoto7250', 'id': 50972773, 'node_id': 'MDQ6VXNlcjUwOTcyNzcz', 'avatar_url': 'https://avatars.githubusercontent.com/u/50972773?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/kyoto7250', 'html_url': 'https://github.com/kyoto7250', 'followers_url': 'https://api.github.com/users/kyoto7250/follow...
[ "Hello, I have an interest in this issue.\r\nIs the `Dataset.to_dict` you are describing correct in the code here?\r\n\r\nhttps://github.com/huggingface/datasets/blob/35b789e8f6826b6b5a6b48fcc2416c890a1f326a/src/datasets/arrow_dataset.py#L4633-L4667", "Yes, this is where `Dataset.to_dict` is defined.", "#self-a...
1,677,860,230,000
1,678,102,400,000
null
CONTRIBUTOR
Since there is `Dataset.from_list` in the API, we should also add `Dataset.to_list` to be consistent. Regarding the implementation, we can re-use `Dataset.to_dict`'s code and replace the `to_pydict` calls with `to_pylist`.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5606/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5606/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5605
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5605/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5605/comments
https://api.github.com/repos/huggingface/datasets/issues/5605/events
https://github.com/huggingface/datasets/pull/5605
1,608,865,460
PR_kwDODunzps5LPPf5
5,605
Update README logo
{'login': 'gary149', 'id': 3841370, 'node_id': 'MDQ6VXNlcjM4NDEzNzA=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3841370?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/gary149', 'html_url': 'https://github.com/gary149', 'followers_url': 'https://api.github.com/users/gary149/followers', 'foll...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "Are you sure it's safe to remove? https://github.com/huggingface/datasets/pull/3866", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benc...
1,677,858,391,000
1,677,880,638,000
1,677,880,217,000
CONTRIBUTOR
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5605/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5605/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5605', 'html_url': 'https://github.com/huggingface/datasets/pull/5605', 'diff_url': 'https://github.com/huggingface/datasets/pull/5605.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5605.patch', 'merged_at': '2023-03-03T21:50:17Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5604
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5604/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5604/comments
https://api.github.com/repos/huggingface/datasets/issues/5604/events
https://github.com/huggingface/datasets/issues/5604
1,608,304,775
I_kwDODunzps5f3MiH
5,604
Problems with downloading The Pile
{'login': 'sentialx', 'id': 11065386, 'node_id': 'MDQ6VXNlcjExMDY1Mzg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/11065386?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/sentialx', 'html_url': 'https://github.com/sentialx', 'followers_url': 'https://api.github.com/users/sentialx/followers',...
[]
open
False
[]
[ "Hi! \r\n\r\n\r\nYou can specify `download_config=DownloadConfig(resume_download=True))` in `load_dataset` to resume the download when re-running the code after the timeout error:\r\n```python\r\nfrom datasets import load_dataset, DownloadConfig\r\ndataset = load_dataset('the_pile', split='train', cache_dir='F:\\da...
1,677,837,128,000
1,678,807,832,000
null
NONE
### Describe the bug The downloads in the screenshot seem to be interrupted after some time and the last download throws a "Read timed out" error. ![image](https://user-images.githubusercontent.com/11065386/222687870-ec5fcb65-84e8-467d-9593-4ad7bdac4d50.png) Here are the downloaded files: ![image](https://user-...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5604/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5604/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5603
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5603/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5603/comments
https://api.github.com/repos/huggingface/datasets/issues/5603/events
https://github.com/huggingface/datasets/pull/5603
1,607,143,509
PR_kwDODunzps5LJZzG
5,603
Don't compute checksums if not necessary in `datasets-cli test`
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,775,359,000
1,677,858,332,000
1,677,857,908,000
MEMBER
we only need them if there exists a `dataset_infos.json`
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5603/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5603/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5603', 'html_url': 'https://github.com/huggingface/datasets/pull/5603', 'diff_url': 'https://github.com/huggingface/datasets/pull/5603.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5603.patch', 'merged_at': '2023-03-03T15:38:28Z'}
true
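The one-line description of the PR above — checksums are only needed if a `dataset_infos.json` exists — can be sketched as a guard around the (expensive) hashing pass. The helper name and the SHA-256 choice are illustrative, not the CLI's actual code:

```python
import hashlib
import os
import tempfile

def verify_checksums_if_needed(data_path, infos_path):
    # Sketch of the PR's idea: only pay for the checksum pass when a
    # dataset_infos.json exists to verify against.
    if not os.path.exists(infos_path):
        return None  # no recorded checksums -> skip hashing entirely
    h = hashlib.sha256()
    with open(data_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

tmp = tempfile.mkdtemp()
data = os.path.join(tmp, "data.bin")
with open(data, "wb") as f:
    f.write(b"abc")
infos = os.path.join(tmp, "dataset_infos.json")
skipped = verify_checksums_if_needed(data, infos)   # None: no infos file yet
with open(infos, "w") as f:
    f.write("{}")
computed = verify_checksums_if_needed(data, infos)  # hex digest of data.bin
```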
https://api.github.com/repos/huggingface/datasets/issues/5602
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5602/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5602/comments
https://api.github.com/repos/huggingface/datasets/issues/5602/events
https://github.com/huggingface/datasets/pull/5602
1,607,054,110
PR_kwDODunzps5LJGfa
5,602
Return dict structure if columns are lists - to_tf_dataset
{'login': 'amyeroberts', 'id': 22614925, 'node_id': 'MDQ6VXNlcjIyNjE0OTI1', 'avatar_url': 'https://avatars.githubusercontent.com/u/22614925?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/amyeroberts', 'html_url': 'https://github.com/amyeroberts', 'followers_url': 'https://api.github.com/users/amyeroberts...
[]
open
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5602). All of your documentation changes will be reflected on that endpoint.", "This is a great PR! Thinking about the UX though, maybe we could do it without the extra argument? Before this PR, the logic in `to_tf_dataset` was...
1,677,772,272,000
1,679,516,103,000
null
CONTRIBUTOR
This PR introduces new logic to `to_tf_dataset` affecting the returned data structure, enabling a dictionary structure to be returned, even if only one feature column is selected. If the passed in `columns` or `label_cols` to `to_tf_dataset` are a list, they are returned as a dictionary, respectively. If they are a...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5602/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5602/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5602', 'html_url': 'https://github.com/huggingface/datasets/pull/5602', 'diff_url': 'https://github.com/huggingface/datasets/pull/5602.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5602.patch', 'merged_at': None}
true
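The rule described in the record above — a list passed as `columns`/`label_cols` yields a dictionary, even with a single entry — can be sketched in plain Python (the `select_columns` helper is illustrative, not the `to_tf_dataset` code, and the bare-string case is assumed as the complement of the list case):

```python
def select_columns(example, columns):
    # Sketch of the PR's rule: a list of column names yields a dict,
    # even if the list has only one entry; a bare string is assumed
    # to yield just that column's value.
    if isinstance(columns, str):
        return example[columns]
    return {name: example[name] for name in columns}

ex = {"input_ids": [1, 2], "label": 0}
as_dict = select_columns(ex, ["label"])   # {"label": 0}
as_value = select_columns(ex, "label")    # 0
```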
https://api.github.com/repos/huggingface/datasets/issues/5601
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5601/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5601/comments
https://api.github.com/repos/huggingface/datasets/issues/5601/events
https://github.com/huggingface/datasets/issues/5601
1,606,685,976
I_kwDODunzps5fxBUY
5,601
Authorization error
{'login': 'OleksandrKorovii', 'id': 107404835, 'node_id': 'U_kgDOBmbeIw', 'avatar_url': 'https://avatars.githubusercontent.com/u/107404835?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/OleksandrKorovii', 'html_url': 'https://github.com/OleksandrKorovii', 'followers_url': 'https://api.github.com/users/Ol...
[]
closed
False
[]
[ "Hi! \r\n\r\nIt's better to report this kind of issue in the `huggingface_hub` repo, so if you still haven't resolved it, I suggest you open an issue there.", "Yeah, I solved it. Problem was in osxkeychain. When I do `hugginface-cli login` it's add token with default account (username)`hg_user` but my repo cont...
1,677,758,919,000
1,678,812,935,000
1,678,812,934,000
NONE
### Describe the bug Get `Authorization error` when try to push data into hugginface datasets hub. ### Steps to reproduce the bug I did all steps in the [tutorial](https://huggingface.co/docs/datasets/share), 1. `huggingface-cli login` with WRITE token 2. `git lfs install` 3. `git clone https://huggingfa...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5601/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5601/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5600
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5600/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5600/comments
https://api.github.com/repos/huggingface/datasets/issues/5600/events
https://github.com/huggingface/datasets/issues/5600
1,606,585,596
I_kwDODunzps5fwoz8
5,600
Dataloader getitem not working for DreamboothDatasets
{'login': 'salahiguiliz', 'id': 76955987, 'node_id': 'MDQ6VXNlcjc2OTU1OTg3', 'avatar_url': 'https://avatars.githubusercontent.com/u/76955987?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/salahiguiliz', 'html_url': 'https://github.com/salahiguiliz', 'followers_url': 'https://api.github.com/users/salahigu...
[]
closed
False
[]
[ "Hi! \r\n\r\n> (see example of DreamboothDatasets)\r\n\r\n\r\nCould you please provide a link to it? If you are referring to the example in the `diffusers` repo, your issue is unrelated to `datasets` as that example uses `Dataset` from PyTorch to load data." ]
1,677,754,827,000
1,678,730,375,000
1,678,730,375,000
NONE
### Describe the bug Dataloader getitem is not working as before (see example of [DreamboothDatasets](https://github.com/huggingface/peft/blob/main/examples/lora_dreambooth/train_dreambooth.py#L451C14-L529)) moving Datasets to 2.8.0 solved the issue. ### Steps to reproduce the bug 1- using DreamBoothDataset ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5600/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5600/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5598
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5598/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5598/comments
https://api.github.com/repos/huggingface/datasets/issues/5598/events
https://github.com/huggingface/datasets/pull/5598
1,605,018,478
PR_kwDODunzps5LCMiX
5,598
Fix push_to_hub with no dataset_infos
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,678,846,000
1,677,764,833,000
1,677,764,417,000
MEMBER
As reported in https://github.com/vijaydwivedi75/lrgb/issues/10, `push_to_hub` fails if the remote repository already exists and has a README.md without `dataset_info` in the YAML tags cc @clefourrier
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5598/reactions', 'total_count': 1, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 1, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5598/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5598', 'html_url': 'https://github.com/huggingface/datasets/pull/5598', 'diff_url': 'https://github.com/huggingface/datasets/pull/5598.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5598.patch', 'merged_at': '2023-03-02T13:40:17Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5597
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5597/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5597/comments
https://api.github.com/repos/huggingface/datasets/issues/5597/events
https://github.com/huggingface/datasets/issues/5597
1,604,928,721
I_kwDODunzps5fqUTR
5,597
in-place dataset update
{'login': 'speedcell4', 'id': 3585459, 'node_id': 'MDQ6VXNlcjM1ODU0NTk=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3585459?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/speedcell4', 'html_url': 'https://github.com/speedcell4', 'followers_url': 'https://api.github.com/users/speedcell4/follo...
[{'id': 1935892913, 'node_id': 'MDU6TGFiZWwxOTM1ODkyOTEz', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/wontfix', 'name': 'wontfix', 'color': 'ffffff', 'default': True, 'description': 'This will not be worked on'}]
closed
False
[]
[ "We won't support in-place modifications since `datasets` is based on the Apache Arrow format which doesn't support in-place modifications.\r\n\r\nIn your case the old dataset is garbage collected pretty quickly so you won't have memory issues.\r\n\r\nNote that datasets loaded from disk (memory mapped) are not load...
1,677,675,498,000
1,677,763,841,000
1,677,728,820,000
NONE
### Motivation For the circumstance that I creat an empty `Dataset` and keep appending new rows into it, I found that it leads to creating a new dataset at each call. It looks quite memory-consuming. I just wonder if there is any more efficient way to do this. ```python from datasets import Dataset ds = Datas...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5597/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5597/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5596
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5596/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5596/comments
https://api.github.com/repos/huggingface/datasets/issues/5596/events
https://github.com/huggingface/datasets/issues/5596
1,604,919,993
I_kwDODunzps5fqSK5
5,596
[TypeError: Couldn't cast array of type] Can only load a subset of the dataset
{'login': 'loubnabnl', 'id': 44069155, 'node_id': 'MDQ6VXNlcjQ0MDY5MTU1', 'avatar_url': 'https://avatars.githubusercontent.com/u/44069155?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/loubnabnl', 'html_url': 'https://github.com/loubnabnl', 'followers_url': 'https://api.github.com/users/loubnabnl/followe...
[]
closed
False
[]
[ "Apparently some JSON objects have a `\"labels\"` field. Since this field is not present in every object, you must specify all the fields types in the README.md\r\n\r\nEDIT: actually specifying the feature types doesn’t solve the issue, it raises an error because “labels” is missing in the data", "We've updated t...
1,677,675,188,000
1,677,755,531,000
1,677,755,531,000
NONE
### Describe the bug I'm trying to load this [dataset](https://huggingface.co/datasets/bigcode-data/the-stack-gh-issues) which consists of jsonl files and I get the following error: ``` casted_values = _c(array.values, feature[0]) File "/opt/conda/lib/python3.7/site-packages/datasets/table.py", line 1839, in wr...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5596/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5596/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5595
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5595/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5595/comments
https://api.github.com/repos/huggingface/datasets/issues/5595/events
https://github.com/huggingface/datasets/pull/5595
1,604,070,629
PR_kwDODunzps5K--V9
5,595
Unpins sqlAlchemy
{'login': 'lazarust', 'id': 46943923, 'node_id': 'MDQ6VXNlcjQ2OTQzOTIz', 'avatar_url': 'https://avatars.githubusercontent.com/u/46943923?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lazarust', 'html_url': 'https://github.com/lazarust', 'followers_url': 'https://api.github.com/users/lazarust/followers',...
[]
open
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5595). All of your documentation changes will be reflected on that endpoint.", "It looks like this issue hasn't been fixed yet, so let's wait a bit more." ]
1,677,634,425,000
1,678,120,899,000
null
NONE
Closes #5477
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5595/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5595/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5595', 'html_url': 'https://github.com/huggingface/datasets/pull/5595', 'diff_url': 'https://github.com/huggingface/datasets/pull/5595.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5595.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5594
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5594/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5594/comments
https://api.github.com/repos/huggingface/datasets/issues/5594/events
https://github.com/huggingface/datasets/issues/5594
1,603,980,995
I_kwDODunzps5fms7D
5,594
Error while downloading the xtreme udpos dataset
{'login': 'simran-khanuja', 'id': 24687672, 'node_id': 'MDQ6VXNlcjI0Njg3Njcy', 'avatar_url': 'https://avatars.githubusercontent.com/u/24687672?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/simran-khanuja', 'html_url': 'https://github.com/simran-khanuja', 'followers_url': 'https://api.github.com/users/si...
[]
open
False
[]
[ "Hi! I cannot reproduce this error on my machine.\r\n\r\nThe raised error could mean that one of the downloaded files is corrupted. To verify this is not the case, you can run `load_dataset` as follows:\r\n```python\r\ntrain_dataset = load_dataset('xtreme', 'udpos.English', split=\"train\", cache_dir=args.cache_dir...
1,677,627,653,000
1,678,730,313,000
null
NONE
### Describe the bug Hi, I am facing an error while downloading the xtreme udpos dataset using load_dataset. I have datasets 2.10.1 installed ```Downloading and preparing dataset xtreme/udpos.Arabic to /compute/tir-1-18/skhanuja/multilingual_ft/cache/data/xtreme/udpos.Arabic/1.0.0/29f5d57a48779f37ccb75cb8708d1...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5594/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5594/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5592
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5592/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5592/comments
https://api.github.com/repos/huggingface/datasets/issues/5592/events
https://github.com/huggingface/datasets/pull/5592
1,603,619,124
PR_kwDODunzps5K9dWr
5,592
Fix docstring example
{'login': 'stevhliu', 'id': 59462357, 'node_id': 'MDQ6VXNlcjU5NDYyMzU3', 'avatar_url': 'https://avatars.githubusercontent.com/u/59462357?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/stevhliu', 'html_url': 'https://github.com/stevhliu', 'followers_url': 'https://api.github.com/users/stevhliu/followers',...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,609,757,000
1,677,612,393,000
1,677,611,955,000
MEMBER
Fixes #5581 to use the correct output for the `set_format` method.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5592/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5592/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5592', 'html_url': 'https://github.com/huggingface/datasets/pull/5592', 'diff_url': 'https://github.com/huggingface/datasets/pull/5592.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5592.patch', 'merged_at': '2023-02-28T19:19:15Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5591
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5591/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5591/comments
https://api.github.com/repos/huggingface/datasets/issues/5591/events
https://github.com/huggingface/datasets/pull/5591
1,603,571,407
PR_kwDODunzps5K9S79
5,591
set dev version
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5591). All of your documentation changes will be reflected on that endpoint.", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma...
1,677,607,745,000
1,677,608,191,000
1,677,607,755,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5591/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5591/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5591', 'html_url': 'https://github.com/huggingface/datasets/pull/5591', 'diff_url': 'https://github.com/huggingface/datasets/pull/5591.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5591.patch', 'merged_at': '2023-02-28T18:09:15Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5590
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5590/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5590/comments
https://api.github.com/repos/huggingface/datasets/issues/5590/events
https://github.com/huggingface/datasets/pull/5590
1,603,549,504
PR_kwDODunzps5K9N_H
5,590
Release: 2.10.1
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,677,607,091,000
1,677,608,187,000
1,677,607,568,000
MEMBER
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5590/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5590/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5590', 'html_url': 'https://github.com/huggingface/datasets/pull/5590', 'diff_url': 'https://github.com/huggingface/datasets/pull/5590.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5590.patch', 'merged_at': '2023-02-28T18:06:08Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5589
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5589/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5589/comments
https://api.github.com/repos/huggingface/datasets/issues/5589/events
https://github.com/huggingface/datasets/pull/5589
1,603,535,704
PR_kwDODunzps5K9K1i
5,589
Revert "pass the dataset features to the IterableDataset.from_generator"
{'login': 'lhoestq', 'id': 42851186, 'node_id': 'MDQ6VXNlcjQyODUxMTg2', 'avatar_url': 'https://avatars.githubusercontent.com/u/42851186?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lhoestq', 'html_url': 'https://github.com/lhoestq', 'followers_url': 'https://api.github.com/users/lhoestq/followers', 'fo...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,606,724,000
1,679,408,505,000
1,679,408,298,000
MEMBER
This reverts commit b91070b9c09673e2e148eec458036ab6a62ac042 (temporarily) It hurts iterable dataset performance a lot (e.g. x4 slower because it encodes+decodes images unnecessarily). I think we need to fix this before re-adding it cc @mariosasko @Hubert-Bonisseur
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5589/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5589/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5589', 'html_url': 'https://github.com/huggingface/datasets/pull/5589', 'diff_url': 'https://github.com/huggingface/datasets/pull/5589.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5589.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5588
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5588/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5588/comments
https://api.github.com/repos/huggingface/datasets/issues/5588/events
https://github.com/huggingface/datasets/pull/5588
1,603,304,766
PR_kwDODunzps5K8YYz
5,588
Flatten dataset on the fly in `save_to_disk`
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,677,598,666,000
1,677,605,315,000
1,677,604,877,000
CONTRIBUTOR
Flatten a dataset on the fly in `save_to_disk` instead of doing it with `flatten_indices` to avoid creating an additional cache file. (this is one of the sub-tasks in https://github.com/huggingface/datasets/issues/5507)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5588/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5588/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5588', 'html_url': 'https://github.com/huggingface/datasets/pull/5588', 'diff_url': 'https://github.com/huggingface/datasets/pull/5588.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5588.patch', 'merged_at': '2023-02-28T17:21:17Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5587
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5587/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5587/comments
https://api.github.com/repos/huggingface/datasets/issues/5587/events
https://github.com/huggingface/datasets/pull/5587
1,603,139,420
PR_kwDODunzps5K70pp
5,587
Fix `sort` with indices mapping
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a...
1,677,593,108,000
1,677,605,337,000
1,677,604,918,000
CONTRIBUTOR
Fixes the `key` range in the `query_table` call in `sort` to account for an indices mapping Fix #5586
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5587/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5587/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5587', 'html_url': 'https://github.com/huggingface/datasets/pull/5587', 'diff_url': 'https://github.com/huggingface/datasets/pull/5587.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5587.patch', 'merged_at': '2023-02-28T17:21:58Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5586
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5586/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5586/comments
https://api.github.com/repos/huggingface/datasets/issues/5586/events
https://github.com/huggingface/datasets/issues/5586
1,602,961,544
I_kwDODunzps5fi0CI
5,586
.sort() is broken when used after .filter(), only in 2.10.0
{'login': 'MattYoon', 'id': 57797966, 'node_id': 'MDQ6VXNlcjU3Nzk3OTY2', 'avatar_url': 'https://avatars.githubusercontent.com/u/57797966?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/MattYoon', 'html_url': 'https://github.com/MattYoon', 'followers_url': 'https://api.github.com/users/MattYoon/followers',...
[{'id': 1935892857, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODU3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/bug', 'name': 'bug', 'color': 'd73a4a', 'default': True, 'description': "Something isn't working"}]
closed
False
[]
[ "Thanks for reporting and thanks @mariosasko for fixing ! We just did a patch release `2.10.1` with the fix" ]
1,677,586,689,000
1,677,608,246,000
1,677,604,919,000
NONE
### Describe the bug Hi, thank you for your support! It seems like the addition of multiple key sort (#5502) in 2.10.0 broke the `.sort()` method. After filtering a dataset with `.filter()`, the `.sort()` seems to refer to the query_table index of the previous unfiltered dataset, resulting in an IndexError. ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5586/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5586/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5585
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5585/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5585/comments
https://api.github.com/repos/huggingface/datasets/issues/5585/events
https://github.com/huggingface/datasets/issues/5585
1,602,190,030
I_kwDODunzps5ff3rO
5,585
Cache is not transportable
{'login': 'davidgilbertson', 'id': 4443482, 'node_id': 'MDQ6VXNlcjQ0NDM0ODI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/4443482?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/davidgilbertson', 'html_url': 'https://github.com/davidgilbertson', 'followers_url': 'https://api.github.com/users/d...
[]
closed
False
[]
[ "Hi ! No the cache is not transportable in general. It will work on a shared filesystem if you use the same python environment, but not across machines/os/environments.\r\n\r\nIn particular, reloading cached datasets does work, but reloading cached processed datasets (e.g. from `map`) may not work. This is because ...
1,677,545,586,000
1,677,619,612,000
1,677,619,612,000
NONE
### Describe the bug I would like to share cache between two machines (a Windows host machine and a WSL instance). I run most my code in WSL. I have just run out of space in the virtual drive. Rather than expand the drive size, I plan to move to cache to the host Windows machine, thereby sharing the downloads. I...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5585/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5585/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5584
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5584/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5584/comments
https://api.github.com/repos/huggingface/datasets/issues/5584/events
https://github.com/huggingface/datasets/issues/5584
1,601,821,808
I_kwDODunzps5fedxw
5,584
Unable to load coyo700M dataset
{'login': 'manuaero', 'id': 3059998, 'node_id': 'MDQ6VXNlcjMwNTk5OTg=', 'avatar_url': 'https://avatars.githubusercontent.com/u/3059998?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/manuaero', 'html_url': 'https://github.com/manuaero', 'followers_url': 'https://api.github.com/users/manuaero/followers', '...
[]
closed
False
[]
[ "Hi @manuaero \r\n\r\nThank you for your interest in the COYO dataset.\r\n\r\nOur dataset provides the img-url and alt-text in the form of a parquet, so to utilize the coyo dataset you will need to download it directly.\r\n\r\nWe provide a [guide](https://github.com/kakaobrain/coyo-dataset/blob/main/download/README...
1,677,526,503,000
1,677,569,279,000
1,677,569,278,000
NONE
### Describe the bug Seeing this error when downloading https://huggingface.co/datasets/kakaobrain/coyo-700m: ```ArrowInvalid: Parquet magic bytes not found in footer. Either the file is corrupted or this is not a parquet file.``` Full stack trace ```Downloading and preparing dataset parquet/kakaobrain--coy...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5584/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5584/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5583
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5583/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5583/comments
https://api.github.com/repos/huggingface/datasets/issues/5583/events
https://github.com/huggingface/datasets/pull/5583
1,601,583,625
PR_kwDODunzps5K2mIz
5,583
Do no write index by default when exporting a dataset
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,517,486,000
1,677,592,335,000
1,677,591,844,000
CONTRIBUTOR
Ensures all the writers that use Pandas for conversion (JSON, CSV, SQL) do not export `index` by default (https://github.com/huggingface/datasets/pull/5490 only did this for CSV)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5583/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5583/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5583', 'html_url': 'https://github.com/huggingface/datasets/pull/5583', 'diff_url': 'https://github.com/huggingface/datasets/pull/5583.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5583.patch', 'merged_at': '2023-02-28T13:44:04Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5582
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5582/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5582/comments
https://api.github.com/repos/huggingface/datasets/issues/5582/events
https://github.com/huggingface/datasets/pull/5582
1,600,932,092
PR_kwDODunzps5K0ZcN
5,582
Add column_names to IterableDataset
{'login': 'patrickloeber', 'id': 50772274, 'node_id': 'MDQ6VXNlcjUwNzcyMjc0', 'avatar_url': 'https://avatars.githubusercontent.com/u/50772274?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/patrickloeber', 'html_url': 'https://github.com/patrickloeber', 'followers_url': 'https://api.github.com/users/patri...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,495,007,000
1,678,734,622,000
1,678,734,212,000
CONTRIBUTOR
This PR closes #5383 * Add column_names property to IterableDataset * Add multiple tests for this new property
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5582/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5582/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5582', 'html_url': 'https://github.com/huggingface/datasets/pull/5582', 'diff_url': 'https://github.com/huggingface/datasets/pull/5582.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5582.patch', 'merged_at': '2023-03-13T19:03:31Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5581
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5581/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5581/comments
https://api.github.com/repos/huggingface/datasets/issues/5581/events
https://github.com/huggingface/datasets/issues/5581
1,600,675,489
I_kwDODunzps5faF6h
5,581
[DOC] Mistaken docs on set_format
{'login': 'NightMachinery', 'id': 36224762, 'node_id': 'MDQ6VXNlcjM2MjI0NzYy', 'avatar_url': 'https://avatars.githubusercontent.com/u/36224762?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/NightMachinery', 'html_url': 'https://github.com/NightMachinery', 'followers_url': 'https://api.github.com/users/Ni...
[{'id': 1935892877, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue', 'name': 'good first issue', 'color': '7057ff', 'default': True, 'description': 'Good for newcomers'}]
closed
False
[]
[ "Thanks for reporting!" ]
1,677,484,989,000
1,677,611,957,000
1,677,611,957,000
CONTRIBUTOR
### Describe the bug https://huggingface.co/docs/datasets/v2.10.0/en/package_reference/main_classes#datasets.Dataset.set_format <img width="700" alt="image" src="https://user-images.githubusercontent.com/36224762/221506973-ae2e3991-60a7-4d4e-99f8-965c6eb61e59.png"> While actually running it will result in: <img w...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5581/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5581/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5580
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5580/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5580/comments
https://api.github.com/repos/huggingface/datasets/issues/5580/events
https://github.com/huggingface/datasets/pull/5580
1,600,431,792
PR_kwDODunzps5Kys1c
5,580
Support cloud storage in load_dataset via fsspec
{'login': 'dwyatte', 'id': 2512762, 'node_id': 'MDQ6VXNlcjI1MTI3NjI=', 'avatar_url': 'https://avatars.githubusercontent.com/u/2512762?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/dwyatte', 'html_url': 'https://github.com/dwyatte', 'followers_url': 'https://api.github.com/users/dwyatte/followers', 'foll...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Regarding the tests I think it should be possible to use the mockfs fixture, it allows to play with a dummy fsspec FileSystem with the \"mock://\" protocol.\r\n\r\n> However it requires some storage_options to be passed. Maybe it c...
1,677,470,765,000
1,678,496,569,000
1,678,496,140,000
CONTRIBUTOR
Closes https://github.com/huggingface/datasets/issues/5281 This PR uses fsspec to support datasets on cloud storage (tested manually with GCS). ETags are currently unsupported for cloud storage. In general, a much larger refactor could be done to just use fsspec for all schemes (ftp, http/s, s3, gcs) to unify the in...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5580/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5580/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5580', 'html_url': 'https://github.com/huggingface/datasets/pull/5580', 'diff_url': 'https://github.com/huggingface/datasets/pull/5580.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5580.patch', 'merged_at': '2023-03-11T00:55:40Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5579
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5579/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5579/comments
https://api.github.com/repos/huggingface/datasets/issues/5579/events
https://github.com/huggingface/datasets/pull/5579
1,599,732,211
PR_kwDODunzps5Kwgo4
5,579
Add instructions to create `DataLoader` from augmented dataset in object detection guide
{'login': 'Laurent2916', 'id': 21087104, 'node_id': 'MDQ6VXNlcjIxMDg3MTA0', 'avatar_url': 'https://avatars.githubusercontent.com/u/21087104?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Laurent2916', 'html_url': 'https://github.com/Laurent2916', 'followers_url': 'https://api.github.com/users/Laurent2916...
[]
closed
False
[]
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5579). All of your documentation changes will be reflected on that endpoint.", "I'm not sure we need this part as we provide a link to the notebook that shows how to train an object detection model, and this notebook instantiat...
1,677,336,797,000
1,679,599,499,000
1,679,599,490,000
NONE
The following adds instructions on how to create a `DataLoader` from the augmented dataset in the guide on object detection with augmentations (#4710). I am open to hearing any suggestions for improvement!
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5579/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5579/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5579', 'html_url': 'https://github.com/huggingface/datasets/pull/5579', 'diff_url': 'https://github.com/huggingface/datasets/pull/5579.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5579.patch', 'merged_at': None}
true
https://api.github.com/repos/huggingface/datasets/issues/5578
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5578/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5578/comments
https://api.github.com/repos/huggingface/datasets/issues/5578/events
https://github.com/huggingface/datasets/pull/5578
1,598,863,119
PR_kwDODunzps5Kto96
5,578
Add `huggingface_hub` version to env cli command
{'login': 'mariosasko', 'id': 47462742, 'node_id': 'MDQ6VXNlcjQ3NDYyNzQy', 'avatar_url': 'https://avatars.githubusercontent.com/u/47462742?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/mariosasko', 'html_url': 'https://github.com/mariosasko', 'followers_url': 'https://api.github.com/users/mariosasko/fol...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,253,063,000
1,677,518,905,000
1,677,518,469,000
CONTRIBUTOR
Add the `huggingface_hub` version to the `env` command's output.
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5578/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5578/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5578', 'html_url': 'https://github.com/huggingface/datasets/pull/5578', 'diff_url': 'https://github.com/huggingface/datasets/pull/5578.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5578.patch', 'merged_at': '2023-02-27T17:21:09Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5577
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5577/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5577/comments
https://api.github.com/repos/huggingface/datasets/issues/5577/events
https://github.com/huggingface/datasets/issues/5577
1,598,587,665
I_kwDODunzps5fSIMR
5,577
Cannot load `the_pile_openwebtext2`
{'login': 'wjfwzzc', 'id': 5126316, 'node_id': 'MDQ6VXNlcjUxMjYzMTY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/5126316?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/wjfwzzc', 'html_url': 'https://github.com/wjfwzzc', 'followers_url': 'https://api.github.com/users/wjfwzzc/followers', 'foll...
[]
closed
False
[]
[ "Hi! I've merged a PR to use `int32` instead of `int8` for `reddit_scores`, so it should work now.\r\n\r\n" ]
1,677,243,708,000
1,677,247,269,000
1,677,247,269,000
NONE
### Describe the bug I met the same bug mentioned in #3053, which was never fixed: several `reddit_scores` values are larger than `int8` or even `int16` can hold. https://huggingface.co/datasets/the_pile_openwebtext2/blob/main/the_pile_openwebtext2.py#L62 ### Steps to reproduce the bug ```python3 from datasets import load...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5577/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5577/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5576
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5576/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5576/comments
https://api.github.com/repos/huggingface/datasets/issues/5576/events
https://github.com/huggingface/datasets/issues/5576
1,598,582,744
I_kwDODunzps5fSG_Y
5,576
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers.
{'login': 'wjfwzzc', 'id': 5126316, 'node_id': 'MDQ6VXNlcjUxMjYzMTY=', 'avatar_url': 'https://avatars.githubusercontent.com/u/5126316?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/wjfwzzc', 'html_url': 'https://github.com/wjfwzzc', 'followers_url': 'https://api.github.com/users/wjfwzzc/followers', 'foll...
[]
closed
False
[]
[ "Duplicated issue." ]
1,677,243,469,000
1,677,243,511,000
1,677,243,498,000
NONE
I was getting a similar error `pyarrow.lib.ArrowInvalid: Integer value 528 not in range: -128 to 127` - AFAICT, this is because the type specified for `reddit_scores` is `datasets.Sequence(datasets.Value("int8"))`, but the actual values can be well outside the max range for 8-bit integers. I worked aro...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5576/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5576/timeline
not_planned
true
https://api.github.com/repos/huggingface/datasets/issues/5575
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5575/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5575/comments
https://api.github.com/repos/huggingface/datasets/issues/5575/events
https://github.com/huggingface/datasets/issues/5575
1,598,396,552
I_kwDODunzps5fRZiI
5,575
Metadata for each column
{'login': 'parsa-ra', 'id': 11356471, 'node_id': 'MDQ6VXNlcjExMzU2NDcx', 'avatar_url': 'https://avatars.githubusercontent.com/u/11356471?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/parsa-ra', 'html_url': 'https://github.com/parsa-ra', 'followers_url': 'https://api.github.com/users/parsa-ra/followers',...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}]
open
False
[]
{'url': 'https://api.github.com/repos/huggingface/datasets/milestones/10', 'html_url': 'https://github.com/huggingface/datasets/milestone/10', 'labels_url': 'https://api.github.com/repos/huggingface/datasets/milestones/10/labels', 'id': 9038583, 'node_id': 'MI_kwDODunzps4Aier3', 'number': 10, 'title': '3.0', 'descripti...
[ "Hi! Indeed it would be useful to support this. PyArrow natively supports schema-level and column-level metadata, so implementing this should be straightforward. The API I have in mind would work as follows:\r\n```python\r\ncol_feature = Value(\"string\", metadata=\"Some column-level metadata\")\r\n\r\nfeatures = F...
1,677,236,024,000
1,678,467,844,000
null
NONE
### Feature request Being able to put some metadata for each column as a string or any other type. ### Motivation I will illustrate the motivation with an example: let's say we are experimenting with embeddings produced by some image encoder network, and we want to iterate through a couple of preprocessing steps and see which on...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5575/reactions', 'total_count': 1, '+1': 1, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5575/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5574
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5574/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5574/comments
https://api.github.com/repos/huggingface/datasets/issues/5574/events
https://github.com/huggingface/datasets/issues/5574
1,598,104,691
I_kwDODunzps5fQSRz
5,574
c4 dataset streaming fails with `FileNotFoundError`
{'login': 'krasserm', 'id': 202907, 'node_id': 'MDQ6VXNlcjIwMjkwNw==', 'avatar_url': 'https://avatars.githubusercontent.com/u/202907?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/krasserm', 'html_url': 'https://github.com/krasserm', 'followers_url': 'https://api.github.com/users/krasserm/followers', 'fo...
[]
closed
False
[]
[ "Also encountering this issue for every dataset I try to stream! Installed datasets from main:\r\n```\r\n- `datasets` version: 2.10.1.dev0\r\n- Platform: macOS-13.1-arm64-arm-64bit\r\n- Python version: 3.9.13\r\n- PyArrow version: 10.0.1\r\n- Pandas version: 1.5.2\r\n```\r\n\r\nRepro:\r\n```python\r\nfrom datasets ...
1,677,225,452,000
1,678,108,451,000
1,677,470,618,000
NONE
### Describe the bug Loading the `c4` dataset in streaming mode with `load_dataset("c4", "en", split="validation", streaming=True)` and then using it fails with a `FileNotFoundError`. ### Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("c4", "en", split="train", ...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5574/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5574/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5573
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5573/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5573/comments
https://api.github.com/repos/huggingface/datasets/issues/5573/events
https://github.com/huggingface/datasets/pull/5573
1,597,400,836
PR_kwDODunzps5Kop7n
5,573
Use soundfile for mp3 decoding instead of torchaudio
{'login': 'polinaeterna', 'id': 16348744, 'node_id': 'MDQ6VXNlcjE2MzQ4NzQ0', 'avatar_url': 'https://avatars.githubusercontent.com/u/16348744?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/polinaeterna', 'html_url': 'https://github.com/polinaeterna', 'followers_url': 'https://api.github.com/users/polinaet...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "@mariosasko thank you for the review! do you have any idea why `test_hash_torch_tensor` fails on \"ubuntu-latest deps-minimum\"? I removed the `torchaudio<0.12.0` test dependency so it uses the latest `torch` now, might it be connect...
1,677,179,984,000
1,677,615,914,000
1,677,615,362,000
CONTRIBUTOR
I've removed `torchaudio` completely and switched to use `soundfile` for everything. With the new version of `soundfile` package this should work smoothly because the `libsndfile` C library is bundled, in Linux wheels too. Let me know if you think it's too harsh and we should continue to support `torchaudio` decodi...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5573/reactions', 'total_count': 3, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 3, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5573/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5573', 'html_url': 'https://github.com/huggingface/datasets/pull/5573', 'diff_url': 'https://github.com/huggingface/datasets/pull/5573.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5573.patch', 'merged_at': '2023-02-28T20:16:02Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5572
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5572/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5572/comments
https://api.github.com/repos/huggingface/datasets/issues/5572/events
https://github.com/huggingface/datasets/issues/5572
1,597,257,624
I_kwDODunzps5fNDeY
5,572
Datasets 2.10.0 does not reuse the dataset cache
{'login': 'lsb', 'id': 45281, 'node_id': 'MDQ6VXNlcjQ1Mjgx', 'avatar_url': 'https://avatars.githubusercontent.com/u/45281?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/lsb', 'html_url': 'https://github.com/lsb', 'followers_url': 'https://api.github.com/users/lsb/followers', 'following_url': 'https://api...
[]
closed
False
[]
[]
1,677,173,291,000
1,677,175,435,000
1,677,175,435,000
NONE
### Describe the bug download_mode="reuse_dataset_if_exists" will always consider that a dataset doesn't exist. Specifically, upon losing the internet connection and trying to load a dataset for a second time within ten seconds, a connection error results, with a traceback of: ``` File ~/jupyterlab/.direnv/python-...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5572/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5572/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5571
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5571/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5571/comments
https://api.github.com/repos/huggingface/datasets/issues/5571/events
https://github.com/huggingface/datasets/issues/5571
1,597,198,953
I_kwDODunzps5fM1Jp
5,571
load_dataset fails for JSON in windows
{'login': 'abinashsahu', 'id': 11876897, 'node_id': 'MDQ6VXNlcjExODc2ODk3', 'avatar_url': 'https://avatars.githubusercontent.com/u/11876897?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/abinashsahu', 'html_url': 'https://github.com/abinashsahu', 'followers_url': 'https://api.github.com/users/abinashsahu...
[]
closed
False
[]
[ "Hi! \r\n\r\nYou need to pass an input json file explicitly as `data_files` to `load_dataset` to avoid this error:\r\n```python\r\n ds = load_dataset(\"json\", data_files=args.input_json)\r\n```\r\n\r\n", "Thanks it worked!" ]
1,677,171,011,000
1,677,244,907,000
1,677,244,907,000
NONE
### Describe the bug Steps: 1. Created a dataset in a Linux VM and created a small sample using dataset.to_json() method. 2. Downloaded the JSON file to my local Windows machine for working and saved in say - r"C:\Users\name\file.json" 3. I am reading the file in my local PyCharm - the location of python file is di...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5571/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5571/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5570
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5570/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5570/comments
https://api.github.com/repos/huggingface/datasets/issues/5570/events
https://github.com/huggingface/datasets/issues/5570
1,597,190,926
I_kwDODunzps5fMzMO
5,570
load_dataset gives FileNotFoundError on imagenet-1k if license is not accepted on the hub
{'login': 'buoi', 'id': 38630200, 'node_id': 'MDQ6VXNlcjM4NjMwMjAw', 'avatar_url': 'https://avatars.githubusercontent.com/u/38630200?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/buoi', 'html_url': 'https://github.com/buoi', 'followers_url': 'https://api.github.com/users/buoi/followers', 'following_url'...
[]
open
False
[]
[ "Hi, thanks for the feedback! Would it help to add a tip or note saying the dataset is gated and you need to accept the license before downloading it?" ]
1,677,170,672,000
1,677,540,552,000
null
NONE
### Describe the bug When calling ```load_dataset('imagenet-1k')```, a FileNotFoundError is raised if you are not logged in, or if you are logged in with huggingface-cli but have not accepted the licence on the hub. There is no error once the licence is accepted. ### Steps to reproduce the bug ``` from datasets import load_dataset imagenet =...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5570/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5570/timeline
true
https://api.github.com/repos/huggingface/datasets/issues/5569
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5569/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5569/comments
https://api.github.com/repos/huggingface/datasets/issues/5569/events
https://github.com/huggingface/datasets/pull/5569
1,597,132,383
PR_kwDODunzps5KnwHD
5,569
pass the dataset features to the IterableDataset.from_generator function
{'login': 'Hubert-Bonisseur', 'id': 48770768, 'node_id': 'MDQ6VXNlcjQ4NzcwNzY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/48770768?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Hubert-Bonisseur', 'html_url': 'https://github.com/Hubert-Bonisseur', 'followers_url': 'https://api.github.com/us...
[]
closed
False
[]
[ "_The documentation is not available anymore as the PR was closed or merged._", "<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea...
1,677,168,364,000
1,677,247,597,000
1,677,176,116,000
CONTRIBUTOR
[5568](https://github.com/huggingface/datasets/issues/5568)
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5569/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5569/timeline
{'url': 'https://api.github.com/repos/huggingface/datasets/pulls/5569', 'html_url': 'https://github.com/huggingface/datasets/pull/5569', 'diff_url': 'https://github.com/huggingface/datasets/pull/5569.diff', 'patch_url': 'https://github.com/huggingface/datasets/pull/5569.patch', 'merged_at': '2023-02-23T18:15:16Z'}
true
https://api.github.com/repos/huggingface/datasets/issues/5568
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5568/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5568/comments
https://api.github.com/repos/huggingface/datasets/issues/5568/events
https://github.com/huggingface/datasets/issues/5568
1,596,900,532
I_kwDODunzps5fLsS0
5,568
dataset.to_iterable_dataset() loses useful info like dataset features
{'login': 'Hubert-Bonisseur', 'id': 48770768, 'node_id': 'MDQ6VXNlcjQ4NzcwNzY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/48770768?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Hubert-Bonisseur', 'html_url': 'https://github.com/Hubert-Bonisseur', 'followers_url': 'https://api.github.com/us...
[{'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/enhancement', 'name': 'enhancement', 'color': 'a2eeef', 'default': True, 'description': 'New feature or request'}, {'id': 1935892877, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODc3', 'url': 'https://api.git...
closed
False
{'login': 'Hubert-Bonisseur', 'id': 48770768, 'node_id': 'MDQ6VXNlcjQ4NzcwNzY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/48770768?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Hubert-Bonisseur', 'html_url': 'https://github.com/Hubert-Bonisseur', 'followers_url': 'https://api.github.com/us...
[{'login': 'Hubert-Bonisseur', 'id': 48770768, 'node_id': 'MDQ6VXNlcjQ4NzcwNzY4', 'avatar_url': 'https://avatars.githubusercontent.com/u/48770768?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/Hubert-Bonisseur', 'html_url': 'https://github.com/Hubert-Bonisseur', 'followers_url': 'https://api.github.com/u...
[ "Hi ! Oh good catch. I think the features should be passed to `IterableDataset.from_generator()` in `to_iterable_dataset()` indeed.\r\n\r\nSetting this as a good first issue if someone would like to contribute, otherwise we can take care of it :)", "#self-assign", "seems like the feature parameter is missing fr...
1,677,159,933,000
1,677,244,956,000
1,677,244,956,000
CONTRIBUTOR
### Describe the bug Hello, I like the new `to_iterable_dataset` feature but I noticed something that seems to be missing. When using `to_iterable_dataset` to transform your map style dataset into iterable dataset, you lose valuable metadata like the features. These metadata are useful if you want to interleav...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5568/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5568/timeline
completed
true
https://api.github.com/repos/huggingface/datasets/issues/5566
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/5566/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/5566/comments
https://api.github.com/repos/huggingface/datasets/issues/5566/events
https://github.com/huggingface/datasets/issues/5566
1,595,916,674
I_kwDODunzps5fH8GC
5,566
Directly reading parquet files in a s3 bucket from the load_dataset method
{'login': 'shamanez', 'id': 16892570, 'node_id': 'MDQ6VXNlcjE2ODkyNTcw', 'avatar_url': 'https://avatars.githubusercontent.com/u/16892570?v=4', 'gravatar_id': '', 'url': 'https://api.github.com/users/shamanez', 'html_url': 'https://github.com/shamanez', 'followers_url': 'https://api.github.com/users/shamanez/followers',...
[{'id': 1935892865, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODY1', 'url': 'https://api.github.com/repos/huggingface/datasets/labels/duplicate', 'name': 'duplicate', 'color': 'cfd3d7', 'default': True, 'description': 'This issue or pull request already exists'}, {'id': 1935892871, 'node_id': 'MDU6TGFiZWwxOTM1ODkyODcx', 'url': '...
open
False
[]
[ "Hi ! I think is in the scope of this other issue: to https://github.com/huggingface/datasets/issues/5281 " ]
1,677,104,020,000
1,677,150,209,000
null
NONE
### Feature request Right now, we have to download the parquet file to local storage first. Having the ability to read directly from the bucket address would be beneficial. ### Motivation In a production setup, this feature can help us a lot, so we do not need to move training data files between storage. ### Yo...
{'url': 'https://api.github.com/repos/huggingface/datasets/issues/5566/reactions', 'total_count': 0, '+1': 0, '-1': 0, 'laugh': 0, 'hooray': 0, 'confused': 0, 'heart': 0, 'rocket': 0, 'eyes': 0}
https://api.github.com/repos/huggingface/datasets/issues/5566/timeline
true