url stringlengths 61 61 | repository_url stringclasses 1 value | labels_url stringlengths 75 75 | comments_url stringlengths 70 70 | events_url stringlengths 68 68 | html_url stringlengths 49 51 | id int64 1.23B 2.21B | node_id stringlengths 18 19 | number int64 4.29k 6.76k | title stringlengths 1 290 | user dict | labels listlengths 0 4 | state stringclasses 2 values | locked bool 1 class | assignee dict | assignees listlengths 0 3 | milestone dict | comments int64 0 48 | created_at timestamp[ms] | updated_at timestamp[ms] | closed_at timestamp[ms] | author_association stringclasses 3 values | active_lock_reason null | body stringlengths 2 33.9k ⌀ | reactions dict | timeline_url stringlengths 70 70 | performed_via_github_app null | state_reason stringclasses 3 values | draft bool 2 classes | pull_request dict | is_pull_request bool 2 classes | comments_body listlengths 0 30 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/6246 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6246/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6246/comments | https://api.github.com/repos/huggingface/datasets/issues/6246/events | https://github.com/huggingface/datasets/issues/6246 | 1,899,848,414 | I_kwDODunzps5xPWLe | 6,246 | Add new column to dataset | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-09-17T16:59:48 | 2023-09-18T16:20:09 | 2023-09-18T16:20:09 | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
[<ipython-input-9-bd197b36b6a0>](https://localhost:8080/#) in <cell line: 1>()
----> 1 dataset['train']['/workspace/data']
3 frames
[/... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6246/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6246/timeline | null | completed | null | null | false | [
"I think it's an issue with the code.\r\n\r\nSpecifically:\r\n```python\r\ndataset = dataset['train'].add_column(\"/workspace/data\", new_column)\r\n```\r\n\r\nNow `dataset` is the train set with a new column. \r\nTo fix this, you can do:\r\n\r\n```python\r\ndataset['train'] = dataset['train'].add_column(\"/workspa... |
https://api.github.com/repos/huggingface/datasets/issues/6244 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6244/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6244/comments | https://api.github.com/repos/huggingface/datasets/issues/6244/events | https://github.com/huggingface/datasets/pull/6244 | 1,898,861,422 | PR_kwDODunzps5adtD3 | 6,244 | Add support for `fsspec>=2023.9.0` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 19 | 2023-09-15T17:58:25 | 2023-09-26T15:41:38 | 2023-09-26T15:32:51 | CONTRIBUTOR | null | Fix #6214 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6244/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6244/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6244",
"html_url": "https://github.com/huggingface/datasets/pull/6244",
"diff_url": "https://github.com/huggingface/datasets/pull/6244.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6244.patch",
"merged_at": "2023-09-26T15:32... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6243 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6243/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6243/comments | https://api.github.com/repos/huggingface/datasets/issues/6243/events | https://github.com/huggingface/datasets/pull/6243 | 1,898,532,784 | PR_kwDODunzps5aclIy | 6,243 | Fix cast from fixed size list to variable size list | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 6 | 2023-09-15T14:23:33 | 2023-09-19T18:02:21 | 2023-09-19T17:53:17 | CONTRIBUTOR | null | Fix #6242 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6243/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6243/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6243",
"html_url": "https://github.com/huggingface/datasets/pull/6243",
"diff_url": "https://github.com/huggingface/datasets/pull/6243.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6243.patch",
"merged_at": "2023-09-19T17:53... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6242 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6242/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6242/comments | https://api.github.com/repos/huggingface/datasets/issues/6242/events | https://github.com/huggingface/datasets/issues/6242 | 1,896,899,123 | I_kwDODunzps5xEGIz | 6,242 | Data alteration when loading dataset with unspecified inner sequence length | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 2 | 2023-09-14T16:12:45 | 2023-09-19T17:53:18 | 2023-09-19T17:53:18 | MEMBER | null | ### Describe the bug
When a dataset saved with a specified inner sequence length is loaded without specifying that length, the original data is altered and becomes inconsistent.
### Steps to reproduce the bug
```python
from datasets import Dataset, Features, Value, Sequence, load_dataset
# Repository ID
repo_id... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6242/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6242/timeline | null | completed | null | null | false | [
"While this issue may seem specific, it led to a silent problem in my workflow that took days to diagnose. If this feature is not intended to be supported, an error should be raised when encountering this configuration to prevent such issues.",
"Thanks for reporting! This is a MRE:\r\n\r\n```python\r\nimport pyar... |
https://api.github.com/repos/huggingface/datasets/issues/6241 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6241/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6241/comments | https://api.github.com/repos/huggingface/datasets/issues/6241/events | https://github.com/huggingface/datasets/pull/6241 | 1,896,429,694 | PR_kwDODunzps5aVfl- | 6,241 | Remove unused global variables in `audio.py` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-09-14T12:06:32 | 2023-09-15T15:57:10 | 2023-09-15T15:46:07 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6241/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6241/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6241",
"html_url": "https://github.com/huggingface/datasets/pull/6241",
"diff_url": "https://github.com/huggingface/datasets/pull/6241.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6241.patch",
"merged_at": "2023-09-15T15:46... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6240 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6240/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6240/comments | https://api.github.com/repos/huggingface/datasets/issues/6240/events | https://github.com/huggingface/datasets/issues/6240 | 1,895,723,888 | I_kwDODunzps5w_nNw | 6,240 | Dataloader stuck on multiple GPUs | {
"login": "kuri54",
"id": 40049003,
"node_id": "MDQ6VXNlcjQwMDQ5MDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/40049003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kuri54",
"html_url": "https://github.com/kuri54",
"followers_url": "https://api.github.com/users/kuri54/fo... | [] | closed | false | null | [] | null | 2 | 2023-09-14T05:30:30 | 2023-09-14T23:54:42 | 2023-09-14T23:54:42 | NONE | null | ### Describe the bug
I am trying to fine-tune CLIP with my code.
When I tried to run it on multiple GPUs using accelerate, I encountered the following phenomenon.
- Validation dataloader stuck in 2nd epoch only on multi-GPU
Specifically, when the "for inputs in valid_loader:" process is finished, it does... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6240/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6240/timeline | null | completed | null | null | false | [
"What type of dataset are you using in this script? `torch.utils.data.Dataset` or `datasets.Dataset`? Please share the `datasets` package version if it's the latter. Otherwise, it's better to move this issue to the `accelerate` repo.",
"Very sorry, I thought I had a repo in `accelerate!`\r\nI will close this issu... |
https://api.github.com/repos/huggingface/datasets/issues/6239 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6239/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6239/comments | https://api.github.com/repos/huggingface/datasets/issues/6239/events | https://github.com/huggingface/datasets/issues/6239 | 1,895,349,382 | I_kwDODunzps5w-LyG | 6,239 | Load local audio data doesn't work | {
"login": "abodacs",
"id": 554032,
"node_id": "MDQ6VXNlcjU1NDAzMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/554032?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/abodacs",
"html_url": "https://github.com/abodacs",
"followers_url": "https://api.github.com/users/abodacs/fo... | [] | closed | false | null | [] | null | 2 | 2023-09-13T22:30:01 | 2023-09-15T14:32:10 | 2023-09-15T14:32:10 | NONE | null | ### Describe the bug
I get a RuntimeError from the following code:
```python
audio_dataset = Dataset.from_dict({"audio": ["/kaggle/input/bengaliai-speech/train_mp3s/000005f3362c.mp3"]}).cast_column("audio", Audio())
audio_dataset[0]
```
### Traceback
<details>
```python
RuntimeError ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6239/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6239/timeline | null | completed | null | null | false | [
"I think this is the same issue as https://github.com/huggingface/datasets/issues/4776. Maybe installing `ffmpeg` can fix it:\r\n```python\r\nadd-apt-repository -y ppa:savoury1/ffmpeg4\r\napt-get -qq install -y ffmpeg\r\n```\r\n\r\nHowever, the best solution is to use a newer version of `datasets`. In the recent re... |
https://api.github.com/repos/huggingface/datasets/issues/6238 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6238/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6238/comments | https://api.github.com/repos/huggingface/datasets/issues/6238/events | https://github.com/huggingface/datasets/issues/6238 | 1,895,207,828 | I_kwDODunzps5w9pOU | 6,238 | `dataset.filter` ALWAYS removes the first item from the dataset when using batched=True | {
"login": "Taytay",
"id": 1330693,
"node_id": "MDQ6VXNlcjEzMzA2OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/1330693?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Taytay",
"html_url": "https://github.com/Taytay",
"followers_url": "https://api.github.com/users/Taytay/foll... | [] | closed | false | null | [] | null | 2 | 2023-09-13T20:20:37 | 2023-09-17T07:05:07 | 2023-09-17T07:05:07 | NONE | null | ### Describe the bug
If you call batched=True when calling `filter`, the first item is _always_ filtered out, regardless of the filter condition.
### Steps to reproduce the bug
Here's a minimal example:
```python
def filter_batch_always_true(batch, indices):
print("First index being passed into this filte... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6238/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6238/timeline | null | completed | null | null | false | [
"`filter` treats the function's output as a (selection) mask - `True` keeps the sample, and `False` drops it. In your case, `bool(0)` evaluates to `False`, so dropping the first sample is the correct behavior.",
"Oh gosh! 🤦 I totally misunderstood the API! My apologies!"
] |
https://api.github.com/repos/huggingface/datasets/issues/6237 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6237/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6237/comments | https://api.github.com/repos/huggingface/datasets/issues/6237/events | https://github.com/huggingface/datasets/issues/6237 | 1,893,822,321 | I_kwDODunzps5w4W9x | 6,237 | Tokenization with multiple workers is too slow | {
"login": "macabdul9",
"id": 25720695,
"node_id": "MDQ6VXNlcjI1NzIwNjk1",
"avatar_url": "https://avatars.githubusercontent.com/u/25720695?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/macabdul9",
"html_url": "https://github.com/macabdul9",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-09-13T06:18:34 | 2023-09-19T21:54:58 | 2023-09-19T21:54:58 | NONE | null | I am trying to tokenize a few million documents with multiple workers but the tokenization process is taking forever.
Code snippet:
```
raw_datasets.map(
encode_function,
batched=False,
num_proc=args.preprocessing_num_workers,
load_from_cache_file=not args.ove... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6237/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6237/timeline | null | completed | null | null | false | [
"[This](https://huggingface.co/docs/datasets/nlp_process#map) is the most performant way to tokenize a dataset (`batched=True, num_proc=None, return_tensors=\"np\"`) \r\n\r\nIf`tokenizer.is_fast` returns `True`, `num_proc` must be `None/1` to benefit from the fast tokenizers' parallelism (the fast tokenizers are im... |
https://api.github.com/repos/huggingface/datasets/issues/6236 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6236/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6236/comments | https://api.github.com/repos/huggingface/datasets/issues/6236/events | https://github.com/huggingface/datasets/issues/6236 | 1,893,648,480 | I_kwDODunzps5w3shg | 6,236 | Support buffer shuffle for to_tf_dataset | {
"login": "EthanRock",
"id": 7635551,
"node_id": "MDQ6VXNlcjc2MzU1NTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7635551?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/EthanRock",
"html_url": "https://github.com/EthanRock",
"followers_url": "https://api.github.com/users/Et... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 3 | 2023-09-13T03:19:44 | 2023-09-18T01:11:21 | null | NONE | null | ### Feature request
I'm using to_tf_dataset to convert a large dataset to tf.data.Dataset and Keras fit to train a model.
Currently, to_tf_dataset only supports full-size shuffle, which can be very slow on large datasets.
tf.data.Dataset support buffer shuffle by default.
shuffle(
buffer_size, seed=None, r... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6236/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6236/timeline | null | null | null | null | false | [
"cc @Rocketknight1 ",
"Hey! You can implement this yourself, just:\r\n\r\n1) Create the dataset with `to_tf_dataset()` with `shuffle=False`\r\n2) Add an `unbatch()` at the end (or use batch_size=1)\r\n3) Add a `shuffle()` to the resulting dataset with your desired buffer size\r\n4) Add a `batch()` at the end agai... |
https://api.github.com/repos/huggingface/datasets/issues/6235 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6235/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6235/comments | https://api.github.com/repos/huggingface/datasets/issues/6235/events | https://github.com/huggingface/datasets/issues/6235 | 1,893,337,083 | I_kwDODunzps5w2gf7 | 6,235 | Support multiprocessing for download/extract nestedly | {
"login": "hgt312",
"id": 22725729,
"node_id": "MDQ6VXNlcjIyNzI1NzI5",
"avatar_url": "https://avatars.githubusercontent.com/u/22725729?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hgt312",
"html_url": "https://github.com/hgt312",
"followers_url": "https://api.github.com/users/hgt312/fo... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-09-12T21:51:08 | 2023-09-12T21:51:08 | null | NONE | null | ### Feature request
Current multiprocessing for download/extract is not done nestedly. For example, when processing SlimPajama, there are only 3 processes (for train/test/val), while there are many files inside these 3 folders
```
Downloading data files #0: 0%| | 0/1 [00:00<?, ?obj/s]
Downloading data f... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6235/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6235/timeline | null | null | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6233 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6233/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6233/comments | https://api.github.com/repos/huggingface/datasets/issues/6233/events | https://github.com/huggingface/datasets/pull/6233 | 1,891,804,286 | PR_kwDODunzps5aF3kd | 6,233 | Update README.md | {
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2023-09-12T06:53:06 | 2023-09-13T18:20:50 | 2023-09-13T18:10:04 | CONTRIBUTOR | null | fixed a typo | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6233/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6233/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6233",
"html_url": "https://github.com/huggingface/datasets/pull/6233",
"diff_url": "https://github.com/huggingface/datasets/pull/6233.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6233.patch",
"merged_at": "2023-09-13T18:10... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6232 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6232/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6232/comments | https://api.github.com/repos/huggingface/datasets/issues/6232/events | https://github.com/huggingface/datasets/pull/6232 | 1,891,109,762 | PR_kwDODunzps5aDhhK | 6,232 | Improve error message for missing function parameters | {
"login": "suavemint",
"id": 4016832,
"node_id": "MDQ6VXNlcjQwMTY4MzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/4016832?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/suavemint",
"html_url": "https://github.com/suavemint",
"followers_url": "https://api.github.com/users/su... | [] | closed | false | null | [] | null | 3 | 2023-09-11T19:11:58 | 2023-09-15T18:07:56 | 2023-09-15T17:59:02 | CONTRIBUTOR | null | The error message in the fingerprint module was missing the f-string 'f' symbol, so the error message returned by fingerprint.py, line 469 was literally "function {func} is missing parameters {fingerprint_names} in signature."
This has been fixed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6232/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6232/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6232",
"html_url": "https://github.com/huggingface/datasets/pull/6232",
"diff_url": "https://github.com/huggingface/datasets/pull/6232.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6232.patch",
"merged_at": "2023-09-15T17:59... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"CI errors are unrelated",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_nu... |
https://api.github.com/repos/huggingface/datasets/issues/6231 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6231/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6231/comments | https://api.github.com/repos/huggingface/datasets/issues/6231/events | https://github.com/huggingface/datasets/pull/6231 | 1,890,863,249 | PR_kwDODunzps5aCr8_ | 6,231 | Overwrite legacy default config name in `dataset_infos.json` in packaged datasets | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.c... | [] | open | false | null | [] | null | 9 | 2023-09-11T16:27:09 | 2023-09-26T11:19:36 | null | CONTRIBUTOR | null | Currently if we push data as default config with `.push_to_hub` to a repo that has a legacy `dataset_infos.json` file containing a legacy default config name like `{username}--{dataset_name}`, new key `"default"` is added to `dataset_infos.json` along with the legacy one. I think the legacy one should be dropped in thi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6231/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6231/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6231",
"html_url": "https://github.com/huggingface/datasets/pull/6231",
"diff_url": "https://github.com/huggingface/datasets/pull/6231.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6231.patch",
"merged_at": null
} | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6231). All of your documentation changes will be reflected on that endpoint.",
"realized that this pr is still not merged, @lhoestq maybe you can take a look at it? ",
"I think https://github.com/huggingface/datasets/pull/621... |
https://api.github.com/repos/huggingface/datasets/issues/6230 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6230/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6230/comments | https://api.github.com/repos/huggingface/datasets/issues/6230/events | https://github.com/huggingface/datasets/pull/6230 | 1,890,521,006 | PR_kwDODunzps5aBh6L | 6,230 | Don't skip hidden files in `dl_manager.iter_files` when they are given as input | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-09-11T13:29:19 | 2023-09-13T18:21:28 | 2023-09-13T18:12:09 | CONTRIBUTOR | null | Required for `load_dataset(<format>, data_files=["path/to/.hidden_file"])` to work as expected | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6230/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6230/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6230",
"html_url": "https://github.com/huggingface/datasets/pull/6230",
"diff_url": "https://github.com/huggingface/datasets/pull/6230.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6230.patch",
"merged_at": "2023-09-13T18:12... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6229 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6229/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6229/comments | https://api.github.com/repos/huggingface/datasets/issues/6229/events | https://github.com/huggingface/datasets/issues/6229 | 1,889,050,954 | I_kwDODunzps5wmKFK | 6,229 | Apply inference on all images in the dataset | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-09-10T08:36:12 | 2023-09-20T16:11:53 | 2023-09-20T16:11:52 | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
Cell In[14], line 11
9 for idx, example in enumerate(dataset['train']):
10 image_path = example['image']
---> 11 mask... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6229/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6229/timeline | null | completed | null | null | false | [
"From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_path = example['image']` with `image_path = np.array(example['image'])` to fix the issue (`example[\"image\"]` is a `PIL.Image` object). ",
"> From what I see, `MMSegInferencer` supports NumPy arrays, so replace the line `image_... |
https://api.github.com/repos/huggingface/datasets/issues/6228 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6228/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6228/comments | https://api.github.com/repos/huggingface/datasets/issues/6228/events | https://github.com/huggingface/datasets/pull/6228 | 1,887,959,311 | PR_kwDODunzps5Z5HZi | 6,228 | Remove RGB -> BGR image conversion in Object Detection tutorial | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-09-08T16:09:13 | 2023-09-08T18:02:49 | 2023-09-08T17:52:16 | CONTRIBUTOR | null | Fix #6225 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6228/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6228/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6228",
"html_url": "https://github.com/huggingface/datasets/pull/6228",
"diff_url": "https://github.com/huggingface/datasets/pull/6228.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6228.patch",
"merged_at": "2023-09-08T17:52... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6226 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6226/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6226/comments | https://api.github.com/repos/huggingface/datasets/issues/6226/events | https://github.com/huggingface/datasets/pull/6226 | 1,887,462,591 | PR_kwDODunzps5Z3arq | 6,226 | Add push_to_hub with multiple configs docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-09-08T11:08:55 | 2023-09-08T12:29:21 | 2023-09-08T12:20:51 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6226/reactions",
"total_count": 2,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 2,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6226/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6226",
"html_url": "https://github.com/huggingface/datasets/pull/6226",
"diff_url": "https://github.com/huggingface/datasets/pull/6226.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6226.patch",
"merged_at": "2023-09-08T12:20... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6225 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6225/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6225/comments | https://api.github.com/repos/huggingface/datasets/issues/6225/events | https://github.com/huggingface/datasets/issues/6225 | 1,887,054,320 | I_kwDODunzps5weinw | 6,225 | Conversion from RGB to BGR in Object Detection tutorial | {
"login": "samokhinv",
"id": 33297401,
"node_id": "MDQ6VXNlcjMzMjk3NDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/33297401?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/samokhinv",
"html_url": "https://github.com/samokhinv",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-09-08T06:49:19 | 2023-09-08T17:52:18 | 2023-09-08T17:52:17 | NONE | null | The [tutorial](https://huggingface.co/docs/datasets/main/en/object_detection) mentions the necessity of conversion the input image from BGR to RGB
> albumentations expects the image to be in BGR format, not RGB, so you’ll have to convert the image before applying the transform.
[Link to tutorial](https://github.c... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6225/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6225/timeline | null | completed | null | null | false | [
"Good catch!"
] |
https://api.github.com/repos/huggingface/datasets/issues/6224 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6224/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6224/comments | https://api.github.com/repos/huggingface/datasets/issues/6224/events | https://github.com/huggingface/datasets/pull/6224 | 1,886,043,692 | PR_kwDODunzps5Zym3j | 6,224 | Ignore `dataset_info.json` in data files resolution | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-09-07T14:43:51 | 2023-09-07T15:46:10 | 2023-09-07T15:37:20 | CONTRIBUTOR | null | `save_to_disk` creates this file, but also [`HugginFaceDatasetSever`](https://github.com/gradio-app/gradio/blob/26fef8c7f85a006c7e25cdbed1792df19c512d02/gradio/flagging.py#L214), so this is needed to avoid issues such as [this one](https://discord.com/channels/879548962464493619/1149295819938349107/1149295819938349107)... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6224/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6224/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6224",
"html_url": "https://github.com/huggingface/datasets/pull/6224",
"diff_url": "https://github.com/huggingface/datasets/pull/6224.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6224.patch",
"merged_at": "2023-09-07T15:37... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6223 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6223/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6223/comments | https://api.github.com/repos/huggingface/datasets/issues/6223/events | https://github.com/huggingface/datasets/pull/6223 | 1,885,710,696 | PR_kwDODunzps5Zxd5c | 6,223 | Update README.md | {
"login": "NinoRisteski",
"id": 95188570,
"node_id": "U_kgDOBax2Wg",
"avatar_url": "https://avatars.githubusercontent.com/u/95188570?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NinoRisteski",
"html_url": "https://github.com/NinoRisteski",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | 2 | 2023-09-07T11:33:20 | 2023-09-13T22:32:31 | 2023-09-13T22:23:42 | CONTRIBUTOR | null | fixed a few typos | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6223/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6223/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6223",
"html_url": "https://github.com/huggingface/datasets/pull/6223",
"diff_url": "https://github.com/huggingface/datasets/pull/6223.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6223.patch",
"merged_at": "2023-09-13T22:23... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6222 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6222/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6222/comments | https://api.github.com/repos/huggingface/datasets/issues/6222/events | https://github.com/huggingface/datasets/pull/6222 | 1,884,875,510 | PR_kwDODunzps5Zup2f | 6,222 | fix typo in Audio dataset documentation | {
"login": "prassanna-ravishankar",
"id": 3224332,
"node_id": "MDQ6VXNlcjMyMjQzMzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/3224332?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/prassanna-ravishankar",
"html_url": "https://github.com/prassanna-ravishankar",
"followers_ur... | [] | closed | false | null | [] | null | 2 | 2023-09-06T23:17:24 | 2023-10-03T14:18:41 | 2023-09-07T15:39:09 | CONTRIBUTOR | null | There is a typo in the section of the documentation dedicated to creating an audio dataset. The Dataset is incorrectly suffixed with a `Config`
https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia/blob/main/librivox-indonesia.py#L59 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6222/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6222/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6222",
"html_url": "https://github.com/huggingface/datasets/pull/6222",
"diff_url": "https://github.com/huggingface/datasets/pull/6222.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6222.patch",
"merged_at": "2023-09-07T15:39... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6221 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6221/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6221/comments | https://api.github.com/repos/huggingface/datasets/issues/6221/events | https://github.com/huggingface/datasets/issues/6221 | 1,884,324,631 | I_kwDODunzps5wUIMX | 6,221 | Support saving datasets with custom formatting | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-09-06T16:03:32 | 2023-09-06T18:32:07 | null | CONTRIBUTOR | null | Requested in https://discuss.huggingface.co/t/using-set-transform-on-a-dataset-leads-to-an-exception/53036.
I am not sure if supporting this is the best idea for the following reasons:
>For this to work, we would have to pickle a custom transform, which means the transform and the objects it references need to be... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6221/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6221/timeline | null | null | null | null | false | [
"Not a fan of pickling this sort of stuff either.\r\nNote that users can also share the code in their dataset documentation."
] |
https://api.github.com/repos/huggingface/datasets/issues/6220 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6220/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6220/comments | https://api.github.com/repos/huggingface/datasets/issues/6220/events | https://github.com/huggingface/datasets/pull/6220 | 1,884,285,980 | PR_kwDODunzps5ZspRb | 6,220 | Set dev version | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 3 | 2023-09-06T15:40:33 | 2023-09-06T15:52:33 | 2023-09-06T15:41:13 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6220/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6220/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6220",
"html_url": "https://github.com/huggingface/datasets/pull/6220",
"diff_url": "https://github.com/huggingface/datasets/pull/6220.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6220.patch",
"merged_at": "2023-09-06T15:41... | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6220). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/6219 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6219/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6219/comments | https://api.github.com/repos/huggingface/datasets/issues/6219/events | https://github.com/huggingface/datasets/pull/6219 | 1,884,244,334 | PR_kwDODunzps5ZsgPK | 6,219 | Release: 2.14.5 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 4 | 2023-09-06T15:17:10 | 2023-09-06T15:46:20 | 2023-09-06T15:18:51 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6219/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6219/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6219",
"html_url": "https://github.com/huggingface/datasets/pull/6219",
"diff_url": "https://github.com/huggingface/datasets/pull/6219.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6219.patch",
"merged_at": "2023-09-06T15:18... | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6219). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/6218 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6218/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6218/comments | https://api.github.com/repos/huggingface/datasets/issues/6218/events | https://github.com/huggingface/datasets/pull/6218 | 1,883,734,000 | PR_kwDODunzps5Zqw3Y | 6,218 | Rename old push_to_hub configs to "default" in dataset_infos | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 8 | 2023-09-06T10:40:05 | 2023-09-07T08:31:29 | 2023-09-06T11:23:56 | MEMBER | null | Fix
```python
from datasets import load_dataset_builder
b = load_dataset_builder("lambdalabs/pokemon-blip-captions", "default")
print(b.info)
```
which should return
```
DatasetInfo(
features={'image': Image(decode=True, id=None), 'text': Value(dtype='string', id=None)},
dataset_name='pokemon-bli... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6218/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6218/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6218",
"html_url": "https://github.com/huggingface/datasets/pull/6218",
"diff_url": "https://github.com/huggingface/datasets/pull/6218.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6218.patch",
"merged_at": "2023-09-06T11:23... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6217 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6217/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6217/comments | https://api.github.com/repos/huggingface/datasets/issues/6217/events | https://github.com/huggingface/datasets/issues/6217 | 1,883,614,607 | I_kwDODunzps5wRa2P | 6,217 | `Dataset.to_dict()` ignore `decode=True` with Image feature | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 1 | 2023-09-06T09:26:16 | 2023-09-08T17:08:52 | null | MEMBER | null | ### Describe the bug
`Dataset.to_dict` seems to ignore the decoding instruction passed in features.
### Steps to reproduce the bug
```python
import datasets
import numpy as np
from PIL import Image
img = np.random.randint(0, 256, (5, 5, 3), dtype=np.uint8)
img = Image.fromarray(img)
features = datasets.Fea... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6217/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6217/timeline | null | null | null | null | false | [
"We need to implement the `Image` type as a PyArrow extension type (to allow us to override the Python conversion) for this to work as expected. For now, it's best to use your approach indeed."
] |
https://api.github.com/repos/huggingface/datasets/issues/6216 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6216/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6216/comments | https://api.github.com/repos/huggingface/datasets/issues/6216/events | https://github.com/huggingface/datasets/pull/6216 | 1,883,492,703 | PR_kwDODunzps5Zp8al | 6,216 | Release: 2.13.2 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 5 | 2023-09-06T08:15:32 | 2023-09-06T08:52:18 | 2023-09-06T08:22:43 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6216/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6216/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6216",
"html_url": "https://github.com/huggingface/datasets/pull/6216",
"diff_url": "https://github.com/huggingface/datasets/pull/6216.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6216.patch",
"merged_at": "2023-09-06T08:22... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6215 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6215/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6215/comments | https://api.github.com/repos/huggingface/datasets/issues/6215/events | https://github.com/huggingface/datasets/pull/6215 | 1,882,176,970 | PR_kwDODunzps5ZlcqC | 6,215 | Fix checking patterns to infer packaged builder | {
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 3 | 2023-09-05T15:10:47 | 2023-09-06T10:34:00 | 2023-09-06T10:25:00 | CONTRIBUTOR | null | Don't ignore results of pattern resolving if `self.data_files` is not None. Otherwise lines 854 and 1037 make no sense. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6215/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6215/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6215",
"html_url": "https://github.com/huggingface/datasets/pull/6215",
"diff_url": "https://github.com/huggingface/datasets/pull/6215.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6215.patch",
"merged_at": "2023-09-06T10:25... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"oh wow good catch",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy af... |
https://api.github.com/repos/huggingface/datasets/issues/6214 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6214/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6214/comments | https://api.github.com/repos/huggingface/datasets/issues/6214/events | https://github.com/huggingface/datasets/issues/6214 | 1,881,736,469 | I_kwDODunzps5wKQUV | 6,214 | Unpin fsspec < 2023.9.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https:... | null | 0 | 2023-09-05T11:02:58 | 2023-09-26T15:32:52 | 2023-09-26T15:32:52 | MEMBER | null | Once root issue is fixed, remove temporary pin of fsspec < 2023.9.0 introduced by:
- #6210
Related to issue:
- #6209
After investigation, I think the root issue is related to the new glob behavior with double asterisk `**` they have introduced in:
- https://github.com/fsspec/filesystem_spec/pull/1329 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6214/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6214/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6213 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6213/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6213/comments | https://api.github.com/repos/huggingface/datasets/issues/6213/events | https://github.com/huggingface/datasets/pull/6213 | 1,880,592,987 | PR_kwDODunzps5ZgHLO | 6,213 | Better list array values handling in cast/embed storage | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-09-04T16:21:23 | 2024-01-11T06:32:20 | 2023-10-05T15:24:34 | CONTRIBUTOR | null | Use [`array.flatten`](https://arrow.apache.org/docs/python/generated/pyarrow.ListArray.html#pyarrow.ListArray.flatten) that takes `.offset` into account instead of `array.values` in array cast/embed. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6213/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6213/timeline | null | null | true | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6213",
"html_url": "https://github.com/huggingface/datasets/pull/6213",
"diff_url": "https://github.com/huggingface/datasets/pull/6213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6213.patch",
"merged_at": null
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6212 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6212/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6212/comments | https://api.github.com/repos/huggingface/datasets/issues/6212/events | https://github.com/huggingface/datasets/issues/6212 | 1,880,399,516 | I_kwDODunzps5wFJ6c | 6,212 | Tilde (~) is not supported for data_files | {
"login": "exs-avianello",
"id": 128361578,
"node_id": "U_kgDOB6akag",
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exs-avianello",
"html_url": "https://github.com/exs-avianello",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 2 | 2023-09-04T14:23:49 | 2023-09-05T08:28:39 | null | NONE | null | ### Describe the bug
Attempting to `load_dataset` from a path starting with `~` (as a shorthand for the user's home directory) seems not to be fully working - at least as far as the `parquet` dataset builder is concerned.
(the same file can be loaded correctly if providing its absolute path instead)
I think that... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6212/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6212/timeline | null | null | null | null | false | [
"Hi @exs-avianello, is it really needed? Note you can alternatively use `pathlib.Path` among others as it follows:\r\n\r\n```python\r\nimport datasets\r\nfrom pathlib import Path\r\n\r\n# save a parquet file at ~/path/to/data.parquet\r\n\r\ndata_files = Path.home() / \"path/to/data.parquet\"\r\ndataset = datasets.l... |
https://api.github.com/repos/huggingface/datasets/issues/6211 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6211/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6211/comments | https://api.github.com/repos/huggingface/datasets/issues/6211/events | https://github.com/huggingface/datasets/pull/6211 | 1,880,265,906 | PR_kwDODunzps5Ze-pv | 6,211 | Fix empty splitinfo json | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 4 | 2023-09-04T13:13:53 | 2023-09-04T14:58:34 | 2023-09-04T14:47:17 | MEMBER | null | If a split is empty, then the JSON split info should mention num_bytes = 0 and num_examples = 0.
Until now they were omitted because the JSON dumps ignore the fields that are equal to the default values.
This is needed in datasets-server since we parse this information for the viewer | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6211/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6211/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6211",
"html_url": "https://github.com/huggingface/datasets/pull/6211",
"diff_url": "https://github.com/huggingface/datasets/pull/6211.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6211.patch",
"merged_at": "2023-09-04T14:47... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6210 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6210/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6210/comments | https://api.github.com/repos/huggingface/datasets/issues/6210/events | https://github.com/huggingface/datasets/pull/6210 | 1,879,649,731 | PR_kwDODunzps5Zc4JF | 6,210 | Temporarily pin fsspec < 2023.9.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 3 | 2023-09-04T07:07:07 | 2023-09-04T07:40:23 | 2023-09-04T07:30:00 | MEMBER | null | Temporarily pin fsspec < 2023.9.0 until permanent solution is found.
Hot fix #6209. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6210/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6210/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6210",
"html_url": "https://github.com/huggingface/datasets/pull/6210",
"diff_url": "https://github.com/huggingface/datasets/pull/6210.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6210.patch",
"merged_at": "2023-09-04T07:30... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6209 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6209/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6209/comments | https://api.github.com/repos/huggingface/datasets/issues/6209/events | https://github.com/huggingface/datasets/issues/6209 | 1,879,622,000 | I_kwDODunzps5wCMFw | 6,209 | CI is broken with AssertionError: 3 failed, 12 errors | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 2023-09-04T06:47:05 | 2023-09-04T07:30:01 | 2023-09-04T07:30:01 | MEMBER | null | Our CI is broken: 3 failed, 12 errors
See: https://github.com/huggingface/datasets/actions/runs/6069947111/job/16465138041
```
=========================== short test summary info ============================
FAILED tests/test_load.py::ModuleFactoryTest::test_LocalDatasetModuleFactoryWithoutScript_with_data_dir - ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6209/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6209/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6208 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6208/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6208/comments | https://api.github.com/repos/huggingface/datasets/issues/6208/events | https://github.com/huggingface/datasets/pull/6208 | 1,879,572,646 | PR_kwDODunzps5ZcnpJ | 6,208 | Do not filter out .zip extensions from no-script datasets | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 6 | 2023-09-04T06:07:12 | 2023-09-04T09:22:19 | 2023-09-04T09:13:32 | MEMBER | null | This PR is a hotfix of:
- #6207
That PR introduced the filtering out of `.zip` extensions. This PR reverts that.
Hot fix #6207.
Maybe we should do patch releases: the bug was introduced in 2.13.1.
CC: @lhoestq | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6208/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6208/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6208",
"html_url": "https://github.com/huggingface/datasets/pull/6208",
"diff_url": "https://github.com/huggingface/datasets/pull/6208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6208.patch",
"merged_at": "2023-09-04T09:13... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6207 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6207/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6207/comments | https://api.github.com/repos/huggingface/datasets/issues/6207/events | https://github.com/huggingface/datasets/issues/6207 | 1,879,555,234 | I_kwDODunzps5wB7yi | 6,207 | No-script datasets with ZIP files do not load | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 2023-09-04T05:50:27 | 2023-09-04T09:13:33 | 2023-09-04T09:13:33 | MEMBER | null | While investigating an issue on a Hub dataset, I have discovered the no-script datasets containing ZIP files do not load.
For example, that no-script dataset containing ZIP files, raises NonMatchingSplitsSizesError:
```python
In [2]: ds = load_dataset("sidovic/LearningQ-qg")
NonMatchingSplitsSizesError: [
{
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6207/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6207/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6206 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6206/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6206/comments | https://api.github.com/repos/huggingface/datasets/issues/6206/events | https://github.com/huggingface/datasets/issues/6206 | 1,879,473,745 | I_kwDODunzps5wBn5R | 6,206 | When calling load_dataset, raise error: pyarrow.lib.ArrowInvalid: offset overflow while concatenating arrays | {
"login": "aihao2000",
"id": 51043929,
"node_id": "MDQ6VXNlcjUxMDQzOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aihao2000",
"html_url": "https://github.com/aihao2000",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-09-04T04:14:00 | 2023-09-04T06:05:50 | 2023-09-04T06:05:49 | NONE | null | ### Describe the bug
When calling load_dataset, raise error
```
Traceback (most recent call last):
File "/home/aihao/miniconda3/envs/torch/lib/python3.11/site-packages/datasets/builder.py", line 1694, in _pre
pare_split_single ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6206/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6206/timeline | null | completed | null | null | false | [
"I solved the problem by modifying the \"self DEFAULT_WRITER_BATCH_SIZE\" in \"class MyDataset (datasets. GeneratorBasedBuilder) : __init__\""
] |
https://api.github.com/repos/huggingface/datasets/issues/6203 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6203/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6203/comments | https://api.github.com/repos/huggingface/datasets/issues/6203/events | https://github.com/huggingface/datasets/issues/6203 | 1,877,491,602 | I_kwDODunzps5v6D-S | 6,203 | Support loading from a DVC remote repository | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 4 | 2023-09-01T14:04:52 | 2023-09-15T15:11:27 | 2023-09-15T15:11:27 | NONE | null | ### Feature request
Adding support for loading a file from a DVC repository, tracked remotely on a SCM.
### Motivation
DVC is a popular version control system to version and manage datasets. The files are stored on a remote object storage platform, but they are tracked using Git. Integration with DVC is possible thr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6203/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6203/timeline | null | completed | null | null | false | [
"(cross-posting from the linked DVC issue)\r\n\r\nI think this should already work out of the box with the current `datasets` and `dvc.api` releases by passing the correct `storage_options` into the datasets calls. `storage_options` is essentially just the kwargs dict that gets passed to the fsspec fs constructor.\... |
https://api.github.com/repos/huggingface/datasets/issues/6202 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6202/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6202/comments | https://api.github.com/repos/huggingface/datasets/issues/6202/events | https://github.com/huggingface/datasets/issues/6202 | 1,876,630,351 | I_kwDODunzps5v2xtP | 6,202 | avoid downgrading jax version | {
"login": "chrisflesher",
"id": 1332458,
"node_id": "MDQ6VXNlcjEzMzI0NTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1332458?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chrisflesher",
"html_url": "https://github.com/chrisflesher",
"followers_url": "https://api.github.com... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 1 | 2023-09-01T02:57:57 | 2023-10-12T16:28:59 | 2023-10-12T16:28:59 | NONE | null | ### Feature request
Whenever I `pip install datasets[jax]` it downgrades jax to version 0.3.25. I seem to be able to install this library first then upgrade jax back to version 0.4.13.
### Motivation
It would be nice to not overwrite currently installed version of jax if possible.
### Your contribution
I... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6202/reactions",
"total_count": 2,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6202/timeline | null | completed | null | null | false | [
"https://github.com/huggingface/datasets/blob/main/setup.py#L236\r\nCurrently has the highest version at 0.3.25; Not sure if there is any reason for this, other than that was the tested version?"
] |
https://api.github.com/repos/huggingface/datasets/issues/6201 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6201/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6201/comments | https://api.github.com/repos/huggingface/datasets/issues/6201/events | https://github.com/huggingface/datasets/pull/6201 | 1,875,256,775 | PR_kwDODunzps5ZOVbV | 6,201 | Fix to_json ValueError and remove pandas pin | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 4 | 2023-08-31T10:38:08 | 2023-09-05T11:07:07 | 2023-09-05T10:58:21 | MEMBER | null | This PR fixes the root cause of the issue:
- #6197
This PR also removes the temporary pin of `pandas` introduced by:
- #6200
Note that for orient in ['records', 'values'], index value is ignored but
- in `pandas` < 2.1.0, a ValueError is raised if not index and orient not in ['split', 'table']
- for orien... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6201/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6201/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6201",
"html_url": "https://github.com/huggingface/datasets/pull/6201",
"diff_url": "https://github.com/huggingface/datasets/pull/6201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6201.patch",
"merged_at": "2023-09-05T10:58... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6200 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6200/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6200/comments | https://api.github.com/repos/huggingface/datasets/issues/6200/events | https://github.com/huggingface/datasets/pull/6200 | 1,875,169,551 | PR_kwDODunzps5ZOCee | 6,200 | Temporarily pin pandas < 2.1.0 | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 3 | 2023-08-31T09:45:17 | 2023-08-31T10:33:24 | 2023-08-31T10:24:38 | MEMBER | null | Temporarily pin `pandas` < 2.1.0 until permanent solution is found.
Hot fix #6197. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6200/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6200/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6200",
"html_url": "https://github.com/huggingface/datasets/pull/6200",
"diff_url": "https://github.com/huggingface/datasets/pull/6200.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6200.patch",
"merged_at": "2023-08-31T10:24... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6199 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6199/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6199/comments | https://api.github.com/repos/huggingface/datasets/issues/6199/events | https://github.com/huggingface/datasets/issues/6199 | 1,875,165,185 | I_kwDODunzps5vxMAB | 6,199 | Use load_dataset for local json files, but it not works | {
"login": "Garen-in-bush",
"id": 50519434,
"node_id": "MDQ6VXNlcjUwNTE5NDM0",
"avatar_url": "https://avatars.githubusercontent.com/u/50519434?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Garen-in-bush",
"html_url": "https://github.com/Garen-in-bush",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | 2 | 2023-08-31T09:42:34 | 2023-08-31T19:05:07 | null | NONE | null | ### Describe the bug
when I use load_dataset to load my local datasets, it always goes to Hugging Face to download the data instead of loading the local dataset.
### Steps to reproduce the bug
`raw_datasets = load_dataset(
'json',
data_files=data_files)`
### Expected behavior
\r\nprint(raw_datasets)\r\n",
"It doesn't download them but ... |
https://api.github.com/repos/huggingface/datasets/issues/6198 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6198/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6198/comments | https://api.github.com/repos/huggingface/datasets/issues/6198/events | https://github.com/huggingface/datasets/pull/6198 | 1,875,092,027 | PR_kwDODunzps5ZNyBq | 6,198 | Preserve split order in DataFilesDict | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | 4 | 2023-08-31T09:00:26 | 2023-08-31T13:57:31 | 2023-08-31T13:48:42 | MEMBER | null | After investigation, I have found that this copy forces the splits to be sorted alphabetically: https://github.com/huggingface/datasets/blob/029227a116c14720afca71b9b22e78eb2a1c09a6/src/datasets/builder.py#L556
This PR removes the alphabetical sort of `DataFilesDict` keys.
- Note that for a `dict`, the order of k... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6198/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6198/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6198",
"html_url": "https://github.com/huggingface/datasets/pull/6198",
"diff_url": "https://github.com/huggingface/datasets/pull/6198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6198.patch",
"merged_at": "2023-08-31T13:48... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6197 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6197/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6197/comments | https://api.github.com/repos/huggingface/datasets/issues/6197/events | https://github.com/huggingface/datasets/issues/6197 | 1,875,078,155 | I_kwDODunzps5vw2wL | 6,197 | ValueError: 'index=True' is only valid when 'orient' is 'split', 'table', 'index', or 'columns' | {
"login": "exs-avianello",
"id": 128361578,
"node_id": "U_kgDOB6akag",
"avatar_url": "https://avatars.githubusercontent.com/u/128361578?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/exs-avianello",
"html_url": "https://github.com/exs-avianello",
"followers_url": "https://api.github.com/... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 3 | 2023-08-31T08:51:50 | 2023-09-01T10:35:10 | 2023-08-31T10:24:40 | NONE | null | ### Describe the bug
Saving a dataset `.to_json()` fails with a `ValueError` since the latest `pandas` [release](https://pandas.pydata.org/docs/dev/whatsnew/v2.1.0.html) (`2.1.0`)
In their latest release we have:
> Improved error handling when using [DataFrame.to_json()](https://pandas.pydata.org/docs/dev/refere... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6197/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6197/timeline | null | completed | null | null | false | [
"Thanks for reporting. We are investigating it.",
"This issue is caused by latest `pandas` release 2.1.0 (released yesterday Aug 30).\r\n\r\nSee: https://github.com/huggingface/datasets/actions/runs/6035484010/job/16375932085?pr=6198\r\n",
"People using previous releases of `datasets` should pin `pandas` in the... |
https://api.github.com/repos/huggingface/datasets/issues/6196 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6196/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6196/comments | https://api.github.com/repos/huggingface/datasets/issues/6196/events | https://github.com/huggingface/datasets/issues/6196 | 1,875,070,972 | I_kwDODunzps5vw0_8 | 6,196 | Split order is not preserved | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | 0 | 2023-08-31T08:47:16 | 2023-08-31T13:48:43 | 2023-08-31T13:48:43 | MEMBER | null | I have noticed that in some cases the split order is not preserved.
For example, consider a no-script dataset with configs:
```yaml
configs:
- config_name: default
  data_files:
  - split: train
    path: train.csv
  - split: test
    path: test.csv
```
- Note the defined split order is [train, test]
On... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6196/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6196/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6195 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6195/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6195/comments | https://api.github.com/repos/huggingface/datasets/issues/6195/events | https://github.com/huggingface/datasets/issues/6195 | 1,874,195,585 | I_kwDODunzps5vtfSB | 6,195 | Force to reuse cache at given path | {
"login": "Luosuu",
"id": 43507393,
"node_id": "MDQ6VXNlcjQzNTA3Mzkz",
"avatar_url": "https://avatars.githubusercontent.com/u/43507393?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Luosuu",
"html_url": "https://github.com/Luosuu",
"followers_url": "https://api.github.com/users/Luosuu/fo... | [] | closed | false | null | [] | null | 2 | 2023-08-30T18:44:54 | 2023-11-03T10:14:21 | 2023-08-30T19:00:45 | NONE | null | ### Describe the bug
I have run the official example of MLM like:
```bash
python run_mlm.py \
--model_name_or_path roberta-base \
--dataset_name togethercomputer/RedPajama-Data-1T \
--dataset_config_name arxiv \
--per_device_train_batch_size 10 \
--preprocessing_num_workers 20 ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6195/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6195/timeline | null | completed | null | null | false | [
"realized that need to pass the path at `cache_file_name` like\r\n\r\n```python\r\ntokenized_datasets = raw_datasets[\"train\"].map(\r\n tokenize_function,\r\n batched=True,\r\n num_proc=data_args.preprocessing_num_workers,\r\n remove_columns=[text_column_... |
https://api.github.com/repos/huggingface/datasets/issues/6194 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6194/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6194/comments | https://api.github.com/repos/huggingface/datasets/issues/6194/events | https://github.com/huggingface/datasets/issues/6194 | 1,872,598,223 | I_kwDODunzps5vnZTP | 6,194 | Support custom fingerprinting with `Dataset.from_generator` | {
"login": "bilelomrani1",
"id": 16692099,
"node_id": "MDQ6VXNlcjE2NjkyMDk5",
"avatar_url": "https://avatars.githubusercontent.com/u/16692099?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bilelomrani1",
"html_url": "https://github.com/bilelomrani1",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 5 | 2023-08-29T22:43:13 | 2024-02-29T03:46:54 | null | NONE | null | ### Feature request
When using `Dataset.from_generator`, the generator is hashed when building the fingerprint. Similar to `.map`, it would be interesting to let the user bypass this hashing by accepting a `fingerprint` argument to `.from_generator`.
### Motivation
Using the `.from_generator` constructor with ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6194/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6194/timeline | null | null | null | null | false | [
"The `fingerprint` parameter serves a slightly different purpose - we use it to inject a new fingerprint after transforming a `Dataset` (computed from the previous fingerprint + transform + transform args), e.g., to be able to compute the cache file for a transform. There is no concept of `fingerprint` before a `Da... |
https://api.github.com/repos/huggingface/datasets/issues/6193 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6193/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6193/comments | https://api.github.com/repos/huggingface/datasets/issues/6193/events | https://github.com/huggingface/datasets/issues/6193 | 1,872,285,153 | I_kwDODunzps5vmM3h | 6,193 | Dataset loading script method does not work with .pyc file | {
"login": "riteshkumarumassedu",
"id": 43389071,
"node_id": "MDQ6VXNlcjQzMzg5MDcx",
"avatar_url": "https://avatars.githubusercontent.com/u/43389071?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/riteshkumarumassedu",
"html_url": "https://github.com/riteshkumarumassedu",
"followers_url": ... | [] | open | false | null | [] | null | 3 | 2023-08-29T19:35:06 | 2023-08-31T19:47:29 | null | NONE | null | ### Describe the bug
The huggingface datasets library specifically looks for a ‘.py’ file when loading a dataset via the loading-script approach, and it does not work with a ‘.pyc’ file.
While deploying in production, it becomes an issue when we are restricted to using only .pyc files. Is there any workaround for this?
#... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6193/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6193/timeline | null | null | null | null | false | [
"Before dynamically loading `.py` scripts with `importlib.import_module`, we also parse their contents to check imports, which is tricky to implement for binary `.pyc` files (requires parsing bytecode), so I don't think this is something we want to support (unless more users request it ofc) as this use case is a bi... |
https://api.github.com/repos/huggingface/datasets/issues/6192 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6192/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6192/comments | https://api.github.com/repos/huggingface/datasets/issues/6192/events | https://github.com/huggingface/datasets/pull/6192 | 1,871,911,640 | PR_kwDODunzps5ZDGnI | 6,192 | Set minimal fsspec version requirement to 2023.1.0 | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-08-29T15:23:41 | 2023-08-30T14:01:56 | 2023-08-30T13:51:32 | CONTRIBUTOR | null | Fix https://github.com/huggingface/datasets/issues/6141
Colab installs 2023.6.0, so we should be good 🙂
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6192/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6192/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6192",
"html_url": "https://github.com/huggingface/datasets/pull/6192",
"diff_url": "https://github.com/huggingface/datasets/pull/6192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6192.patch",
"merged_at": "2023-08-30T13:51... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6191 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6191/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6191/comments | https://api.github.com/repos/huggingface/datasets/issues/6191/events | https://github.com/huggingface/datasets/pull/6191 | 1,871,634,840 | PR_kwDODunzps5ZCKmv | 6,191 | Add missing `revision` argument | {
"login": "qgallouedec",
"id": 45557362,
"node_id": "MDQ6VXNlcjQ1NTU3MzYy",
"avatar_url": "https://avatars.githubusercontent.com/u/45557362?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/qgallouedec",
"html_url": "https://github.com/qgallouedec",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 4 | 2023-08-29T13:05:04 | 2023-09-04T06:38:17 | 2023-08-31T13:50:00 | MEMBER | null | I've noticed that when you're not working on the main branch, there are sometimes errors in the files returned. After some investigation, I realized that the revision was not properly passed everywhere. This PR proposes a fix. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6191/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6191/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6191",
"html_url": "https://github.com/huggingface/datasets/pull/6191",
"diff_url": "https://github.com/huggingface/datasets/pull/6191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6191.patch",
"merged_at": "2023-08-31T13:50... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I have found the same issue. Good fix. Should be merged as soon as possible.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_a... |
https://api.github.com/repos/huggingface/datasets/issues/6190 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6190/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6190/comments | https://api.github.com/repos/huggingface/datasets/issues/6190/events | https://github.com/huggingface/datasets/issues/6190 | 1,871,582,175 | I_kwDODunzps5vjhPf | 6,190 | `Invalid user token` even when correct user token is passed! | {
"login": "Vaibhavs10",
"id": 18682411,
"node_id": "MDQ6VXNlcjE4NjgyNDEx",
"avatar_url": "https://avatars.githubusercontent.com/u/18682411?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vaibhavs10",
"html_url": "https://github.com/Vaibhavs10",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 2 | 2023-08-29T12:37:03 | 2023-08-29T13:01:10 | 2023-08-29T13:01:09 | MEMBER | null | ### Describe the bug
I'm working on a dataset which comprises other datasets on the hub.
URL: https://huggingface.co/datasets/open-asr-leaderboard/datasets-test-only
Note: Some of the sub-datasets in this metadataset require explicit access.
All the other datasets work fine, except, `common_voice`.
### Steps t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6190/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6190/timeline | null | completed | null | null | false | [
"This is because `download_config.use_auth_token` is deprecated - you should use `download_config.token` instead",
"Works! Thanks for the quick fix! <3"
] |
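A sketch of the fix pointed out in the comment: set the `token` field on `DownloadConfig` instead of the deprecated `use_auth_token`. The token value and dataset id below are placeholders.

```python
from datasets import DownloadConfig, load_dataset

download_config = DownloadConfig(token="hf_xxx")  # placeholder user token

ds = load_dataset(
    "mozilla-foundation/common_voice_11_0",  # placeholder gated dataset id
    "en",
    split="test",
    download_config=download_config,
)
```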
https://api.github.com/repos/huggingface/datasets/issues/6189 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6189/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6189/comments | https://api.github.com/repos/huggingface/datasets/issues/6189/events | https://github.com/huggingface/datasets/pull/6189 | 1,871,569,855 | PR_kwDODunzps5ZB8Z9 | 6,189 | Don't alter input in Features.from_dict | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-29T12:29:47 | 2023-08-29T13:04:59 | 2023-08-29T12:52:48 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6189/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6189/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6189",
"html_url": "https://github.com/huggingface/datasets/pull/6189",
"diff_url": "https://github.com/huggingface/datasets/pull/6189.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6189.patch",
"merged_at": "2023-08-29T12:52... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6188 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6188/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6188/comments | https://api.github.com/repos/huggingface/datasets/issues/6188/events | https://github.com/huggingface/datasets/issues/6188 | 1,870,987,640 | I_kwDODunzps5vhQF4 | 6,188 | [Feature Request] Check the length of batch before writing so that empty batch is allowed | {
"login": "namespace-Pt",
"id": 61188463,
"node_id": "MDQ6VXNlcjYxMTg4NDYz",
"avatar_url": "https://avatars.githubusercontent.com/u/61188463?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/namespace-Pt",
"html_url": "https://github.com/namespace-Pt",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | 1 | 2023-08-29T06:37:34 | 2023-09-19T21:55:38 | 2023-09-19T21:55:37 | NONE | null | ### Use Case
I use `dataset.map(process_fn, batched=True)` to process the dataset, with data **augmentations or filtering**. However, when all examples within a batch are filtered out, i.e. **an empty batch is returned**, the following error will be thrown:
```
ValueError: Schema and number of arrays unequal
`... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6188/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6188/timeline | null | not_planned | null | null | false | [
"I think this error means you filter all examples within an (input) batch by deleting its columns. In that case, to avoid the error, you can set the column value to an empty list (`input_batch[\"col\"] = []`) instead."
] |
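A sketch of the workaround in the comment above, on a toy dataset: when a batched `map` filters out every example, keep each column mapped to an empty list so the schema stays consistent.

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["keep me", "drop me"], "label": [1, 0]})

def filter_batch(batch):
    keep = [label == 1 for label in batch["label"]]
    # Even when nothing survives, every column is still present (as an empty list)
    return {col: [v for v, k in zip(batch[col], keep) if k] for col in batch}

processed = ds.map(filter_batch, batched=True, batch_size=1)
print(processed)
```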
https://api.github.com/repos/huggingface/datasets/issues/6187 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6187/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6187/comments | https://api.github.com/repos/huggingface/datasets/issues/6187/events | https://github.com/huggingface/datasets/issues/6187 | 1,870,936,143 | I_kwDODunzps5vhDhP | 6,187 | Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-08-29T05:49:56 | 2023-08-29T16:21:45 | null | NONE | null | ### Describe the bug
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-48-6a7b3e847019>](https://localhost:8080/#) in <cell line: 7>()
5 }
6
----> 7 csv_datasets_reloaded = load_... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6187/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6187/timeline | null | null | null | null | false | [
"Hi! You can load this dataset with:\r\n```python\r\ndata_files = {\r\n \"train\": \"/content/PUBHEALTH/train.tsv\",\r\n \"validation\": \"/content/PUBHEALTH/dev.tsv\",\r\n \"test\": \"/content/PUBHEALTH/test.tsv\",\r\n}\r\n\r\ntsv_datasets_reloaded = load_dataset(\"csv\", data_files=data_files, sep=\"\\t\... |
https://api.github.com/repos/huggingface/datasets/issues/6186 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6186/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6186/comments | https://api.github.com/repos/huggingface/datasets/issues/6186/events | https://github.com/huggingface/datasets/issues/6186 | 1,869,431,457 | I_kwDODunzps5vbUKh | 6,186 | Feature request: add code example of multi-GPU processing | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
},
{
"id": 19358928... | closed | false | null | [] | null | 15 | 2023-08-28T10:00:59 | 2024-03-21T09:52:15 | 2023-11-22T15:42:20 | CONTRIBUTOR | null | ### Feature request
Would be great to add a code example of how to do multi-GPU processing with 🤗 Datasets in the documentation. cc @stevhliu
Currently the docs has a small [section](https://huggingface.co/docs/datasets/v2.3.2/en/process#map) on this saying "your big GPU call goes here", however it didn't work f... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6186/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6186/timeline | null | completed | null | null | false | [
"That'd be a great idea! @mariosasko or @lhoestq, would it be possible to fix the code snippet or do you have another suggested way for doing this?",
"Indeed `if __name__ == \"__main__\"` is important in this case.\r\n\r\nNot sure about the imbalanced GPU usage though, but maybe you can try using the `torch.cuda.... |
https://api.github.com/repos/huggingface/datasets/issues/6185 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6185/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6185/comments | https://api.github.com/repos/huggingface/datasets/issues/6185/events | https://github.com/huggingface/datasets/issues/6185 | 1,868,077,748 | I_kwDODunzps5vWJq0 | 6,185 | Error in saving the PIL image into *.arrow files using datasets.arrow_writer | {
"login": "HaozheZhao",
"id": 14247682,
"node_id": "MDQ6VXNlcjE0MjQ3Njgy",
"avatar_url": "https://avatars.githubusercontent.com/u/14247682?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/HaozheZhao",
"html_url": "https://github.com/HaozheZhao",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-08-26T12:15:57 | 2023-08-29T14:49:58 | null | NONE | null | ### Describe the bug
I am using the ArrowWriter from datasets.arrow_writer to save a json-style file as arrow files. Within the dictionary, it contains a feature called "image" which is a list of PIL.Image objects.
I am saving the json using the following script:
```
def save_to_arrow(path,temp):
with ArrowWri... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6185/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6185/timeline | null | null | null | null | false | [
"You can cast the `input_image` column to the `Image` type to fix the issue:\r\n```python\r\nds.cast_column(\"input_image\", datasets.Image())\r\n```"
] |
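A sketch of the fix from the comment above, using a toy column of hypothetical image file paths: casting the column to the `Image` feature stores it as a proper image type.

```python
import datasets

ds = datasets.Dataset.from_dict({"input_image": ["img_0.png", "img_1.png"]})  # hypothetical paths
ds = ds.cast_column("input_image", datasets.Image())
print(ds.features)  # the column is now an Image feature
```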
https://api.github.com/repos/huggingface/datasets/issues/6184 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6184/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6184/comments | https://api.github.com/repos/huggingface/datasets/issues/6184/events | https://github.com/huggingface/datasets/issues/6184 | 1,867,766,143 | I_kwDODunzps5vU9l_ | 6,184 | Map cache does not detect function changes in another module | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/u... | [
{
"id": 1935892865,
"node_id": "MDU6TGFiZWwxOTM1ODkyODY1",
"url": "https://api.github.com/repos/huggingface/datasets/labels/duplicate",
"name": "duplicate",
"color": "cfd3d7",
"default": true,
"description": "This issue or pull request already exists"
}
] | closed | false | null | [] | null | 2 | 2023-08-25T22:59:14 | 2023-08-29T20:57:07 | 2023-08-29T20:56:49 | NONE | null | ```python
# dataset.py
import os
import datasets
if not os.path.exists('/tmp/test.json'):
    with open('/tmp/test.json', 'w') as file:
        file.write('[{"text": "hello"}]')
def transform(example):
    text = example['text']
    # text += ' world'
    return {'text': text}
data = datasets.load_dataset('json', ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6184/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6184/timeline | null | completed | null | null | false | [
"This issue is a duplicate of https://github.com/huggingface/datasets/issues/3297. This is a limitation of `dill`, a package we use for caching (non-`__main__` module objects are serialized by reference). You can find more info about it here: https://github.com/uqfoundation/dill/issues/424.\r\n\r\nIn your case, mo... |
https://api.github.com/repos/huggingface/datasets/issues/6183 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6183/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6183/comments | https://api.github.com/repos/huggingface/datasets/issues/6183/events | https://github.com/huggingface/datasets/issues/6183 | 1,867,743,276 | I_kwDODunzps5vU4As | 6,183 | Load dataset with non-existent file | {
"login": "freQuensy23-coder",
"id": 64750224,
"node_id": "MDQ6VXNlcjY0NzUwMjI0",
"avatar_url": "https://avatars.githubusercontent.com/u/64750224?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/freQuensy23-coder",
"html_url": "https://github.com/freQuensy23-coder",
"followers_url": "https... | [] | closed | false | null | [] | null | 2 | 2023-08-25T22:21:22 | 2023-08-29T13:26:22 | 2023-08-29T13:26:22 | NONE | null | ### Describe the bug
When loading a dataset from datasets and passing a wrong path to the json file with the data, the error message does not say anything about a "wrong path" or "file does not exist" -
```SchemaInferenceError: Please pass `features` or at least one example when writing data```
### Steps to reproduce the bug
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6183/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 1,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6183/timeline | null | completed | null | null | false | [
"Same problem",
"This was fixed in https://github.com/huggingface/datasets/pull/6155, which will be included in the next release (or you can install `datasets` from source to use it immediately)."
] |
https://api.github.com/repos/huggingface/datasets/issues/6182 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6182/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6182/comments | https://api.github.com/repos/huggingface/datasets/issues/6182/events | https://github.com/huggingface/datasets/issues/6182 | 1,867,203,131 | I_kwDODunzps5vS0I7 | 6,182 | Loading Meteor metric in HF evaluate module crashes due to datasets import issue | {
"login": "dsashulya",
"id": 42322648,
"node_id": "MDQ6VXNlcjQyMzIyNjQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/42322648?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dsashulya",
"html_url": "https://github.com/dsashulya",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 4 | 2023-08-25T14:54:06 | 2023-09-04T16:41:11 | 2023-08-31T14:38:23 | NONE | null | ### Describe the bug
When using python3.9 and the ```evaluate``` module, loading the Meteor metric crashes on a non-existent import from ```datasets.config``` in ```datasets v2.14```
### Steps to reproduce the bug
```
from evaluate import load
meteor = load("meteor")
```
produces the following error:
```
from d... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6182/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6182/timeline | null | completed | null | null | false | [
"Our minimal Python version requirement is 3.8, so we dropped `importlib_metadata`. \r\n\r\nFeel free to open a PR in the `evaluate` repo to replace the problematic import with\r\n```python\r\nif PY_VERSION < version.parse(\"3.8\"):\r\n import importlib_metadata\r\nelse:\r\n import importlib.metadata as impor... |
https://api.github.com/repos/huggingface/datasets/issues/6181 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6181/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6181/comments | https://api.github.com/repos/huggingface/datasets/issues/6181/events | https://github.com/huggingface/datasets/pull/6181 | 1,867,035,522 | PR_kwDODunzps5Yy2VO | 6,181 | Fix import in `image_load` doc | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-08-25T13:12:19 | 2023-08-25T16:12:46 | 2023-08-25T16:02:24 | CONTRIBUTOR | null | Reported on [Discord](https://discord.com/channels/879548962464493619/1144295822209581168/1144295822209581168) | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6181/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6181/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6181",
"html_url": "https://github.com/huggingface/datasets/pull/6181",
"diff_url": "https://github.com/huggingface/datasets/pull/6181.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6181.patch",
"merged_at": "2023-08-25T16:02... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6180 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6180/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6180/comments | https://api.github.com/repos/huggingface/datasets/issues/6180/events | https://github.com/huggingface/datasets/pull/6180 | 1,867,032,578 | PR_kwDODunzps5Yy1r- | 6,180 | Use `hf-internal-testing` repos for hosting test dataset repos | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-25T13:10:26 | 2023-08-25T16:58:02 | 2023-08-25T16:46:22 | CONTRIBUTOR | null | Use `hf-internal-testing` for hosting instead of the maintainers' dataset repos. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6180/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6180/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6180",
"html_url": "https://github.com/huggingface/datasets/pull/6180",
"diff_url": "https://github.com/huggingface/datasets/pull/6180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6180.patch",
"merged_at": "2023-08-25T16:46... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6179 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6179/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6179/comments | https://api.github.com/repos/huggingface/datasets/issues/6179/events | https://github.com/huggingface/datasets/issues/6179 | 1,867,009,016 | I_kwDODunzps5vSEv4 | 6,179 | Map cache with tokenizer | {
"login": "jonathanasdf",
"id": 511073,
"node_id": "MDQ6VXNlcjUxMTA3Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/511073?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jonathanasdf",
"html_url": "https://github.com/jonathanasdf",
"followers_url": "https://api.github.com/u... | [] | open | false | null | [] | null | 4 | 2023-08-25T12:55:18 | 2023-08-31T15:17:24 | null | NONE | null | Similar issue to https://github.com/huggingface/datasets/issues/5985, but across different sessions rather than two calls in the same session.
Unlike that issue, explicitly calling tokenizer(my_args) before the map() doesn't help, because the tokenizer was created with a different hash to begin with...
setup
```... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6179/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6179/timeline | null | null | null | null | false | [
"https://github.com/huggingface/datasets/issues/5147 may be a solution, by passing in the tokenizer in a fn_kwargs and ignoring it in the fingerprint calculations",
"I have a similar issue. I was using a Jupyter Notebook and every time I call the map function it performs tokenization from scratch again although t... |
https://api.github.com/repos/huggingface/datasets/issues/6178 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6178/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6178/comments | https://api.github.com/repos/huggingface/datasets/issues/6178/events | https://github.com/huggingface/datasets/issues/6178 | 1,866,610,102 | I_kwDODunzps5vQjW2 | 6,178 | 'import datasets' throws "invalid syntax error" | {
"login": "elia-ashraf",
"id": 128580829,
"node_id": "U_kgDOB6n83Q",
"avatar_url": "https://avatars.githubusercontent.com/u/128580829?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/elia-ashraf",
"html_url": "https://github.com/elia-ashraf",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 1 | 2023-08-25T08:35:14 | 2023-09-27T17:33:39 | 2023-09-27T17:33:39 | NONE | null | ### Describe the bug
Hi,
I have been trying to import the datasets library but I keep getting this error.
`Traceback (most recent call last):
File /opt/local/jupyterhub/lib64/python3.9/site-packages/IPython/core/interactiveshell.py:3508 in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6178/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6178/timeline | null | completed | null | null | false | [
"This seems to be related to your environment and not the `datasets` code (e.g., this could happen when exposing the Python 3.9 site packages to a lower Python version (interpreter))"
] |
https://api.github.com/repos/huggingface/datasets/issues/6177 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6177/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6177/comments | https://api.github.com/repos/huggingface/datasets/issues/6177/events | https://github.com/huggingface/datasets/pull/6177 | 1,865,490,962 | PR_kwDODunzps5Ytky- | 6,177 | Use object detection images from `huggingface/documentation-images` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-24T16:16:09 | 2023-08-25T16:30:00 | 2023-08-25T16:21:17 | CONTRIBUTOR | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6177/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6177/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6177",
"html_url": "https://github.com/huggingface/datasets/pull/6177",
"diff_url": "https://github.com/huggingface/datasets/pull/6177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6177.patch",
"merged_at": "2023-08-25T16:21... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6176 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6176/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6176/comments | https://api.github.com/repos/huggingface/datasets/issues/6176/events | https://github.com/huggingface/datasets/issues/6176 | 1,864,436,408 | I_kwDODunzps5vIQq4 | 6,176 | how to limit the size of memory mapped file? | {
"login": "williamium3000",
"id": 47763855,
"node_id": "MDQ6VXNlcjQ3NzYzODU1",
"avatar_url": "https://avatars.githubusercontent.com/u/47763855?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/williamium3000",
"html_url": "https://github.com/williamium3000",
"followers_url": "https://api.gi... | [] | open | false | null | [] | null | 6 | 2023-08-24T05:33:45 | 2023-10-11T06:00:10 | null | NONE | null | ### Describe the bug
Huggingface datasets use memory-mapped files to map large datasets into memory for fast access.
However, it seems like huggingface will occupy all the memory for memory-mapped files, which makes for a troublesome situation since our cluster will only distribute a small portion of memory to me (once it's over ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6176/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6176/timeline | null | null | null | null | false | [
"Hi! Can you share the error this reproducer throws in your environment? `streaming=True` streams the dataset as it's iterated over without creating a memory-map file.",
"The trace of the error. Streaming works but is slower.\r\n```\r\nRoot Cause (first observed failure):\r\n[0]:\r\n time : 2023-08-24_06:06... |
https://api.github.com/repos/huggingface/datasets/issues/6175 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6175/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6175/comments | https://api.github.com/repos/huggingface/datasets/issues/6175/events | https://github.com/huggingface/datasets/pull/6175 | 1,863,592,678 | PR_kwDODunzps5YnKlx | 6,175 | PyArrow 13 CI fixes | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 3 | 2023-08-23T15:45:53 | 2023-08-25T13:15:59 | 2023-08-25T13:06:52 | CONTRIBUTOR | null | Fixes:
* bumps the PyArrow version check in the `cast_array_to_feature` to avoid the offset bug (still not fixed)
* aligns the Pandas formatting tests with the Numpy ones (the current test fails due to https://github.com/apache/arrow/pull/35656, which requires `.to_pandas(coerce_temporal_nanoseconds=True)` to always ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6175/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6175/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6175",
"html_url": "https://github.com/huggingface/datasets/pull/6175",
"diff_url": "https://github.com/huggingface/datasets/pull/6175.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6175.patch",
"merged_at": "2023-08-25T13:06... | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6173 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6173/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6173/comments | https://api.github.com/repos/huggingface/datasets/issues/6173/events | https://github.com/huggingface/datasets/issues/6173 | 1,863,422,065 | I_kwDODunzps5vEZBx | 6,173 | Fix CI for pyarrow 13.0.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 0 | 2023-08-23T14:11:20 | 2023-08-25T13:06:53 | 2023-08-25T13:06:53 | MEMBER | null | pyarrow 13.0.0 just came out
```
FAILED tests/test_formatting.py::ArrowExtractorTest::test_pandas_extractor - AssertionError: Attributes of Series are different
Attribute "dtype" are different
[left]: datetime64[us, UTC]
[right]: datetime64[ns, UTC]
```
```
FAILED tests/test_table.py::test_cast_sliced_fi... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6173/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 1
} | https://api.github.com/repos/huggingface/datasets/issues/6173/timeline | null | completed | null | null | false | [] |
https://api.github.com/repos/huggingface/datasets/issues/6172 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6172/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6172/comments | https://api.github.com/repos/huggingface/datasets/issues/6172/events | https://github.com/huggingface/datasets/issues/6172 | 1,863,318,027 | I_kwDODunzps5vD_oL | 6,172 | Make Dataset streaming queries retryable | {
"login": "rojagtap",
"id": 42299342,
"node_id": "MDQ6VXNlcjQyMjk5MzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/42299342?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rojagtap",
"html_url": "https://github.com/rojagtap",
"followers_url": "https://api.github.com/users/roj... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2023-08-23T13:15:38 | 2023-11-06T13:54:16 | null | NONE | null | ### Feature request
Streaming datasets, as intended, do not load the entire dataset into memory or onto disk. However, while querying the next data chunk from the remote, it is possible that the service is down or that other issues cause the query to fail. In such a scenario, it would be nice to ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6172/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6172/timeline | null | null | null | null | false | [
"Hi! The streaming mode also retries requests - `datasets.config.STREAMING_READ_MAX_RETRIES` (20 sec by default) controls the number of retries and `datasets.config.STREAMING_READ_RETRY_INTERVAL` (5 sec) the sleep time between retries.\r\n\r\n> At step 1800 I got a 504 HTTP status code error from Huggingface hub fo... |
https://api.github.com/repos/huggingface/datasets/issues/6171 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6171/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6171/comments | https://api.github.com/repos/huggingface/datasets/issues/6171/events | https://github.com/huggingface/datasets/pull/6171 | 1,862,922,767 | PR_kwDODunzps5Yk4AS | 6,171 | Fix typo in about_mapstyle_vs_iterable.mdx | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-23T09:21:11 | 2023-08-23T09:32:59 | 2023-08-23T09:21:19 | MEMBER | null | null | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6171/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6171/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6171",
"html_url": "https://github.com/huggingface/datasets/pull/6171",
"diff_url": "https://github.com/huggingface/datasets/pull/6171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6171.patch",
"merged_at": "2023-08-23T09:21... | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6171). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/6170 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6170/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6170/comments | https://api.github.com/repos/huggingface/datasets/issues/6170/events | https://github.com/huggingface/datasets/pull/6170 | 1,862,705,731 | PR_kwDODunzps5YkJOV | 6,170 | feat: Return the name of the currently loaded file | {
"login": "Amitesh-Patel",
"id": 124021133,
"node_id": "U_kgDOB2RpjQ",
"avatar_url": "https://avatars.githubusercontent.com/u/124021133?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Amitesh-Patel",
"html_url": "https://github.com/Amitesh-Patel",
"followers_url": "https://api.github.com/... | [] | open | false | null | [] | null | 1 | 2023-08-23T07:08:17 | 2023-08-29T12:41:05 | null | NONE | null | Added an optional parameter return_file_name in the load_dataset function. When it is set to True, the function will include the name of the file corresponding to the current line as a feature in the returned output.
I added this here https://github.com/huggingface/datasets/blob/main/src/datasets/packaged_modules/js... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6170/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6170/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6170",
"html_url": "https://github.com/huggingface/datasets/pull/6170",
"diff_url": "https://github.com/huggingface/datasets/pull/6170.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6170.patch",
"merged_at": null
} | true | [
"Your change adds a new element in the key used to avoid duplicates when generating the examples of a dataset. I don't think it fixes the issue you're trying to solve."
] |
https://api.github.com/repos/huggingface/datasets/issues/6169 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6169/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6169/comments | https://api.github.com/repos/huggingface/datasets/issues/6169/events | https://github.com/huggingface/datasets/issues/6169 | 1,862,360,199 | I_kwDODunzps5vAVyH | 6,169 | Configurations in yaml not working | {
"login": "tsor13",
"id": 45085098,
"node_id": "MDQ6VXNlcjQ1MDg1MDk4",
"avatar_url": "https://avatars.githubusercontent.com/u/45085098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tsor13",
"html_url": "https://github.com/tsor13",
"followers_url": "https://api.github.com/users/tsor13/fo... | [] | open | false | null | [] | null | 4 | 2023-08-23T00:13:22 | 2023-08-23T15:35:31 | null | NONE | null | ### Dataset configurations cannot be created in YAML/README
Hello! I'm trying to follow the docs here in order to create structure in my dataset as added from here (#5331): https://github.com/huggingface/datasets/blob/8b8e6ee067eb74e7965ca2a6768f15f9398cb7c8/docs/source/repository_structure.mdx#L110-L118
I have t... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6169/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6169/timeline | null | null | null | null | false | [
"Unfortunately, I cannot reproduce this behavior on my machine or Colab - the reproducer returns `['main_data', 'additional_data']` as expected.",
"Thank you for looking into this, Mario. Is this on [my repository](https://huggingface.co/datasets/tsor13/test), or on another one that you have reproduced? Would you... |
https://api.github.com/repos/huggingface/datasets/issues/6168 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6168/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6168/comments | https://api.github.com/repos/huggingface/datasets/issues/6168/events | https://github.com/huggingface/datasets/pull/6168 | 1,861,867,274 | PR_kwDODunzps5YhT7Y | 6,168 | Fix ArrayXD YAML conversion | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 6 | 2023-08-22T17:02:54 | 2023-12-12T15:06:59 | 2023-12-12T15:00:43 | CONTRIBUTOR | null | Replace the `shape` tuple with a list in the `ArrayXD` YAML conversion.
Fix #6112 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6168/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6168/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6168",
"html_url": "https://github.com/huggingface/datasets/pull/6168",
"diff_url": "https://github.com/huggingface/datasets/pull/6168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6168.patch",
"merged_at": "2023-12-12T15:00... | true | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_6168). All of your documentation changes will be reflected on that endpoint.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchma... |
https://api.github.com/repos/huggingface/datasets/issues/6167 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6167/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6167/comments | https://api.github.com/repos/huggingface/datasets/issues/6167/events | https://github.com/huggingface/datasets/pull/6167 | 1,861,474,327 | PR_kwDODunzps5Yf9-t | 6,167 | Allow hyphen in split name | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-08-22T13:30:59 | 2024-01-11T06:31:31 | 2023-08-22T15:38:53 | CONTRIBUTOR | null | To fix https://discuss.huggingface.co/t/error-when-setting-up-the-dataset-viewer-streamingrowserror/51276.
| {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6167/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6167/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6167",
"html_url": "https://github.com/huggingface/datasets/pull/6167",
"diff_url": "https://github.com/huggingface/datasets/pull/6167.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6167.patch",
"merged_at": null
} | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6166 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6166/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6166/comments | https://api.github.com/repos/huggingface/datasets/issues/6166/events | https://github.com/huggingface/datasets/pull/6166 | 1,861,259,055 | PR_kwDODunzps5YfOt0 | 6,166 | Document BUILDER_CONFIG_CLASS | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-22T11:27:41 | 2023-08-23T14:01:25 | 2023-08-23T13:52:36 | MEMBER | null | Related to https://github.com/huggingface/datasets/issues/6130 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6166/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6166/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6166",
"html_url": "https://github.com/huggingface/datasets/pull/6166",
"diff_url": "https://github.com/huggingface/datasets/pull/6166.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6166.patch",
"merged_at": "2023-08-23T13:52... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6165 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6165/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6165/comments | https://api.github.com/repos/huggingface/datasets/issues/6165/events | https://github.com/huggingface/datasets/pull/6165 | 1,861,124,284 | PR_kwDODunzps5YexBL | 6,165 | Fix multiprocessing with spawn in iterable datasets | {
"login": "Hubert-Bonisseur",
"id": 48770768,
"node_id": "MDQ6VXNlcjQ4NzcwNzY4",
"avatar_url": "https://avatars.githubusercontent.com/u/48770768?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Hubert-Bonisseur",
"html_url": "https://github.com/Hubert-Bonisseur",
"followers_url": "https://... | [] | closed | false | null | [] | null | 5 | 2023-08-22T10:07:23 | 2023-08-29T13:27:14 | 2023-08-29T13:18:11 | CONTRIBUTOR | null | The "Spawn" method is preferred when multiprocessing on macOS or Windows systems, instead of the "Fork" method on linux systems.
This causes some methods of Iterable Datasets to break when using a dataloader with more than 0 workers.
I fixed the issue by replacing lambda and local methods which are not pickle-abl... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6165/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6165/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6165",
"html_url": "https://github.com/huggingface/datasets/pull/6165",
"diff_url": "https://github.com/huggingface/datasets/pull/6165.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6165.patch",
"merged_at": "2023-08-29T13:18... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq \r\nA test is failing, but I don't think it is due to my changes",
"Good catch ! Could you add a test to make sure transformed IterableDataset objects are still picklable ?\r\n\r\nSomething like `test_pickle_after_many_tra... |
https://api.github.com/repos/huggingface/datasets/issues/6164 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6164/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6164/comments | https://api.github.com/repos/huggingface/datasets/issues/6164/events | https://github.com/huggingface/datasets/pull/6164 | 1,859,560,007 | PR_kwDODunzps5YZZAJ | 6,164 | Fix: Missing a MetadataConfigs init when the repo has a `datasets_info.json` but no README | {
"login": "clefourrier",
"id": 22726840,
"node_id": "MDQ6VXNlcjIyNzI2ODQw",
"avatar_url": "https://avatars.githubusercontent.com/u/22726840?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/clefourrier",
"html_url": "https://github.com/clefourrier",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | 3 | 2023-08-21T14:57:54 | 2023-08-21T16:27:05 | 2023-08-21T16:18:26 | MEMBER | null | When I try to push to an arrow repo (can provide the link on Slack), it uploads the files but fails to update the metadata, with
```
File "app.py", line 123, in add_new_eval
eval_results[level].push_to_hub(my_repo, token=TOKEN, split=SPLIT)
File "blabla_my_env_path/lib/python3.10/site-packages/datasets/arro... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6164/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6164/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6164",
"html_url": "https://github.com/huggingface/datasets/pull/6164",
"diff_url": "https://github.com/huggingface/datasets/pull/6164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6164.patch",
"merged_at": "2023-08-21T16:18... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6163 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6163/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6163/comments | https://api.github.com/repos/huggingface/datasets/issues/6163/events | https://github.com/huggingface/datasets/issues/6163 | 1,857,682,241 | I_kwDODunzps5uuftB | 6,163 | Error type: ArrowInvalid Details: Failed to parse string: '[254,254]' as a scalar of type int32 | {
"login": "shishirCTC",
"id": 90616801,
"node_id": "MDQ6VXNlcjkwNjE2ODAx",
"avatar_url": "https://avatars.githubusercontent.com/u/90616801?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/shishirCTC",
"html_url": "https://github.com/shishirCTC",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | 1 | 2023-08-19T11:34:40 | 2023-08-21T13:28:16 | null | NONE | null | ### Describe the bug
I am getting the following error while trying to upload the CSV sheet to train a model. My CSV sheet content is exactly the same as shown in the example CSV file on the Auto Train page. Attaching a screenshot of the error for reference. I have also tried converting the index of the answer that are inte... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6163/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6163/timeline | null | null | null | null | false | [
"Answered on the forum [here](https://discuss.huggingface.co/t/error-type-arrowinvalid-details-failed-to-parse-string-254-254-as-a-scalar-of-type-int32/51323)."
] |
https://api.github.com/repos/huggingface/datasets/issues/6162 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6162/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6162/comments | https://api.github.com/repos/huggingface/datasets/issues/6162/events | https://github.com/huggingface/datasets/issues/6162 | 1,856,198,342 | I_kwDODunzps5uo1bG | 6,162 | load_dataset('json',...) from togethercomputer/RedPajama-Data-1T errors when jsonl rows contains different data fields | {
"login": "rbrugaro",
"id": 82971690,
"node_id": "MDQ6VXNlcjgyOTcxNjkw",
"avatar_url": "https://avatars.githubusercontent.com/u/82971690?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/rbrugaro",
"html_url": "https://github.com/rbrugaro",
"followers_url": "https://api.github.com/users/rbr... | [] | open | false | null | [] | null | 4 | 2023-08-18T07:19:39 | 2023-08-18T17:00:35 | null | NONE | null | ### Describe the bug
Loading some jsonl files from the redpajama-data-1T GitHub source [togethercomputer/RedPajama-Data-1T](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T) fails due to one row of the file containing an extra field called **symlink_target: string>**.
When deleting that line the loading... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6162/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6162/timeline | null | null | null | null | false | [
"Hi ! Feel free to open a discussion at https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T/discussions to ask the file to be fixed (or directly open a PR with the fixed file)\r\n\r\n`datasets` expects all the examples to have the same fields",
"@lhoestq I think the problem is caused by the fact th... |
https://api.github.com/repos/huggingface/datasets/issues/6161 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6161/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6161/comments | https://api.github.com/repos/huggingface/datasets/issues/6161/events | https://github.com/huggingface/datasets/pull/6161 | 1,855,794,354 | PR_kwDODunzps5YM0g7 | 6,161 | Fix protocol prefix for Beam | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-08-17T22:40:37 | 2024-03-18T17:01:21 | 2024-03-18T17:01:21 | CONTRIBUTOR | null | Fix #6147 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6161/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6161/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6161",
"html_url": "https://github.com/huggingface/datasets/pull/6161",
"diff_url": "https://github.com/huggingface/datasets/pull/6161.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6161.patch",
"merged_at": null
} | true | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... |
https://api.github.com/repos/huggingface/datasets/issues/6160 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6160/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6160/comments | https://api.github.com/repos/huggingface/datasets/issues/6160/events | https://github.com/huggingface/datasets/pull/6160 | 1,855,760,543 | PR_kwDODunzps5YMtLQ | 6,160 | Fix Parquet loading with `columns` | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-17T21:58:24 | 2023-08-17T22:44:59 | 2023-08-17T22:36:04 | CONTRIBUTOR | null | Fix #6149 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6160/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6160/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6160",
"html_url": "https://github.com/huggingface/datasets/pull/6160",
"diff_url": "https://github.com/huggingface/datasets/pull/6160.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6160.patch",
"merged_at": "2023-08-17T22:36... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6159 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6159/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6159/comments | https://api.github.com/repos/huggingface/datasets/issues/6159/events | https://github.com/huggingface/datasets/issues/6159 | 1,855,691,512 | I_kwDODunzps5um5r4 | 6,159 | Add `BoundingBox` feature | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 0 | 2023-08-17T20:49:51 | 2023-08-17T20:49:51 | null | CONTRIBUTOR | null | ... to make working with object detection datasets easier. Currently, `Sequence(int_or_float, length=4)` can be used to represent this feature optimally (in the storage backend), so I only see this feature being useful if we make it work with the viewer. Also, bounding boxes usually come in 4 different formats (explain... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6159/reactions",
"total_count": 2,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6159/timeline | null | null | null | null | false | [] |
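As a concrete reading of the `Sequence(int_or_float, length=4)` representation mentioned in the issue above, a hedged sketch of an object-detection feature schema; the field names are illustrative, not a proposed API.

```python
from datasets import Features, Sequence, Value, Image

features = Features({
    "image": Image(),
    "objects": Sequence({
        "bbox": Sequence(Value("float32"), length=4),  # e.g. [x_min, y_min, x_max, y_max]
        "category": Value("int64"),
    }),
})
```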
https://api.github.com/repos/huggingface/datasets/issues/6158 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6158/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6158/comments | https://api.github.com/repos/huggingface/datasets/issues/6158/events | https://github.com/huggingface/datasets/pull/6158 | 1,855,374,220 | PR_kwDODunzps5YLZBf | 6,158 | [docs] Complete `to_iterable_dataset` | {
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/ste... | [] | closed | false | null | [] | null | 2 | 2023-08-17T17:02:11 | 2023-08-17T19:24:20 | 2023-08-17T19:13:15 | MEMBER | null | Finishes the `to_iterable_dataset` documentation by adding it to the relevant sections in the tutorial and guide. | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6158/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6158/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6158",
"html_url": "https://github.com/huggingface/datasets/pull/6158",
"diff_url": "https://github.com/huggingface/datasets/pull/6158.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6158.patch",
"merged_at": "2023-08-17T19:13... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6157 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6157/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6157/comments | https://api.github.com/repos/huggingface/datasets/issues/6157/events | https://github.com/huggingface/datasets/issues/6157 | 1,855,265,663 | I_kwDODunzps5ulRt_ | 6,157 | DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding' | {
"login": "aihao2000",
"id": 51043929,
"node_id": "MDQ6VXNlcjUxMDQzOTI5",
"avatar_url": "https://avatars.githubusercontent.com/u/51043929?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aihao2000",
"html_url": "https://github.com/aihao2000",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 13 | 2023-08-17T15:48:11 | 2023-09-27T17:36:14 | 2023-09-27T17:36:14 | NONE | null | ### Describe the bug
When I called load_dataset, it raised "DatasetInfo.__init__() got an unexpected keyword argument '_column_requires_decoding'". The second time I ran it, there was no error and the dataset object worked.
```python
---------------------------------------------------------------------------
TypeErr... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6157/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6157/timeline | null | completed | null | null | false | [
"Thanks for reporting, but we can only fix this issue if you can provide a reproducer that consistently reproduces it.",
"@mariosasko Ok. What exactly does it mean to provide a reproducer",
"To provide a code that reproduces the issue :)",
"@mariosasko I complete the above code, is it enough?",
"@mariosasko... |
https://api.github.com/repos/huggingface/datasets/issues/6156 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6156/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6156/comments | https://api.github.com/repos/huggingface/datasets/issues/6156/events | https://github.com/huggingface/datasets/issues/6156 | 1,854,768,618 | I_kwDODunzps5ujYXq | 6,156 | Why not use self._epoch as seed to shuffle in distributed training with IterableDataset | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 3 | 2023-08-17T10:58:20 | 2023-08-17T14:33:15 | 2023-08-17T14:33:14 | CONTRIBUTOR | null | ### Describe the bug
Currently, distributed training with `IterableDataset` needs to pass a fixed seed to `shuffle` so that each node uses the same seed and avoids overlapping samples.
https://github.com/huggingface/datasets/blob/a7f8d9019e7cb104eac4106bdc6ec0292f0dc61a/src/datasets/iterable_dataset.py#L1174-L1177
My question ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6156/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6156/timeline | null | completed | null | null | false | [
"@lhoestq ",
"`_effective_generator` returns a RNG that takes into account `self._epoch` and the current dataset's base shuffling RNG (which can be set by specifying `seed=` in `.shuffle() for example`).\r\n\r\nTo fix your error you can pass `seed=` to `.shuffle()`. And the shuffling will depend on both this seed... |
https://api.github.com/repos/huggingface/datasets/issues/6155 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6155/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6155/comments | https://api.github.com/repos/huggingface/datasets/issues/6155/events | https://github.com/huggingface/datasets/pull/6155 | 1,854,661,682 | PR_kwDODunzps5YI8Pc | 6,155 | Raise FileNotFoundError when passing data_files that don't exist | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 5 | 2023-08-17T09:49:48 | 2023-08-18T13:45:58 | 2023-08-18T13:35:13 | MEMBER | null | e.g. when running `load_dataset("parquet", data_files="doesnt_exist.parquet")` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6155/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6155/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6155",
"html_url": "https://github.com/huggingface/datasets/pull/6155",
"diff_url": "https://github.com/huggingface/datasets/pull/6155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6155.patch",
"merged_at": "2023-08-18T13:35... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6154 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6154/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6154/comments | https://api.github.com/repos/huggingface/datasets/issues/6154/events | https://github.com/huggingface/datasets/pull/6154 | 1,854,595,943 | PR_kwDODunzps5YItlH | 6,154 | Use yaml instead of get data patterns when possible | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 6 | 2023-08-17T09:17:05 | 2023-08-17T20:46:25 | 2023-08-17T20:37:19 | MEMBER | null | This would make the data files resolution faster: no need to list all the data files to infer the dataset builder to use.
fix https://github.com/huggingface/datasets/issues/6140 | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6154/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6154/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6154",
"html_url": "https://github.com/huggingface/datasets/pull/6154",
"diff_url": "https://github.com/huggingface/datasets/pull/6154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6154.patch",
"merged_at": "2023-08-17T20:37... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6152 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6152/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6152/comments | https://api.github.com/repos/huggingface/datasets/issues/6152/events | https://github.com/huggingface/datasets/issues/6152 | 1,852,494,646 | I_kwDODunzps5uatM2 | 6,152 | FolderBase Dataset automatically resolves under current directory when data_dir is not specified | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | open | false | {
"login": "debrupf2946",
"id": 126772439,
"node_id": "U_kgDOB45k1w",
"avatar_url": "https://avatars.githubusercontent.com/u/126772439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/debrupf2946",
"html_url": "https://github.com/debrupf2946",
"followers_url": "https://api.github.com/users/... | [
{
"login": "debrupf2946",
"id": 126772439,
"node_id": "U_kgDOB45k1w",
"avatar_url": "https://avatars.githubusercontent.com/u/126772439?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/debrupf2946",
"html_url": "https://github.com/debrupf2946",
"followers_url": "https://a... | null | 10 | 2023-08-16T04:38:09 | 2024-01-22T15:04:51 | null | CONTRIBUTOR | null | ### Describe the bug
FolderBase Dataset automatically resolves under current directory when data_dir is not specified.
For example:
```
load_dataset("audiofolder")
```
takes long time to resolve and collect data_files from current directory. But I think it should reach out to this line for error handling https:... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6152/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6152/timeline | null | null | null | null | false | [
"@lhoestq ",
"Makes sense, I guess this can be fixed in the load_dataset_builder method.\r\nIt concerns every packaged builder I think (see values in `_PACKAGED_DATASETS_MODULES`)",
"I think the behavior is related to these lines, which short circuited the error handling.\r\nhttps://github.com/huggingface/datas... |
https://api.github.com/repos/huggingface/datasets/issues/6151 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6151/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6151/comments | https://api.github.com/repos/huggingface/datasets/issues/6151/events | https://github.com/huggingface/datasets/issues/6151 | 1,851,497,818 | I_kwDODunzps5uW51a | 6,151 | Faster sorting for single key items | {
"login": "jackapbutler",
"id": 47942453,
"node_id": "MDQ6VXNlcjQ3OTQyNDUz",
"avatar_url": "https://avatars.githubusercontent.com/u/47942453?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jackapbutler",
"html_url": "https://github.com/jackapbutler",
"followers_url": "https://api.github.c... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | 2 | 2023-08-15T14:02:31 | 2023-08-21T14:38:26 | 2023-08-21T14:38:25 | NONE | null | ### Feature request
A faster way to sort a dataset which contains a large number of rows.
### Motivation
The current sorting implementations took significantly longer than expected when I was running on a dataset trying to sort by timestamps.
**Code snippet:**
```python
ds = datasets.load_dataset( "json"... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6151/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6151/timeline | null | completed | null | null | false | [
"`Dataset.sort` essentially does the same thing except it uses `pyarrow.compute.sort_indices` which doesn't involve copying the data into python objects (saving memory)\r\n\r\n```python\r\nsort_keys = [(col, \"ascending\") for col in column_names]\r\nindices = pc.sort_indices(self.data, sort_keys=sort_keys)\r\nretu... |
https://api.github.com/repos/huggingface/datasets/issues/6150 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6150/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6150/comments | https://api.github.com/repos/huggingface/datasets/issues/6150/events | https://github.com/huggingface/datasets/issues/6150 | 1,850,740,456 | I_kwDODunzps5uUA7o | 6,150 | Allow dataset implement .take | {
"login": "brando90",
"id": 1855278,
"node_id": "MDQ6VXNlcjE4NTUyNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brando90",
"html_url": "https://github.com/brando90",
"followers_url": "https://api.github.com/users/brand... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | 4 | 2023-08-15T00:17:51 | 2023-08-17T13:49:37 | null | NONE | null | ### Feature request
I want to do:
```
dataset.take(512)
```
but it only works with streaming = True
### Motivation
uniform interface to data sets. Really surprising the above only works with streaming = True.
### Your contribution
Should be trivial to copy paste the IterableDataset .take to use the local pa... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6150/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6150/timeline | null | null | null | null | false | [
"```\r\n dataset = IterableDataset(dataset) if type(dataset) != IterableDataset else dataset # to force dataset.take(batch_size) to work in non-streaming mode\r\n ```\r\n",
"hf discuss: https://discuss.huggingface.co/t/how-does-one-make-dataset-take-512-work-with-streaming-false-with-hugging-face-data-set/5... |
https://api.github.com/repos/huggingface/datasets/issues/6149 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6149/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6149/comments | https://api.github.com/repos/huggingface/datasets/issues/6149/events | https://github.com/huggingface/datasets/issues/6149 | 1,850,700,624 | I_kwDODunzps5uT3NQ | 6,149 | Dataset.from_parquet cannot load subset of columns | {
"login": "dwyatte",
"id": 2512762,
"node_id": "MDQ6VXNlcjI1MTI3NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/2512762?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dwyatte",
"html_url": "https://github.com/dwyatte",
"followers_url": "https://api.github.com/users/dwyatte/... | [] | closed | false | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https:... | null | 1 | 2023-08-14T23:28:22 | 2023-08-17T22:36:05 | 2023-08-17T22:36:05 | CONTRIBUTOR | null | ### Describe the bug
When using `Dataset.from_parquet(path_or_paths, columns=[...])` and a subset of columns, loading fails with a variant of the following
```
ValueError: Couldn't cast
a: int64
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 273
to
{'a': V... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6149/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6149/timeline | null | completed | null | null | false | [
"Looks like this regression was introduced in `datasets==2.13.0` (`2.12.0` could load a subset of columns)\r\n\r\nThis does not appear to be fixed by https://github.com/huggingface/datasets/pull/6045 (bug still exists on `main`)"
] |
https://api.github.com/repos/huggingface/datasets/issues/6148 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6148/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6148/comments | https://api.github.com/repos/huggingface/datasets/issues/6148/events | https://github.com/huggingface/datasets/pull/6148 | 1,849,524,683 | PR_kwDODunzps5X3oqv | 6,148 | Ignore parallel warning in map_nested | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | 3 | 2023-08-14T10:43:41 | 2023-08-17T08:54:06 | 2023-08-17T08:43:58 | MEMBER | null | This warning message was shown every time you pass num_proc to `load_dataset` because of `map_nested`
```
parallel_map is experimental and might be subject to breaking changes in the future
```
This PR removes it for `map_nested`. If someone uses another parallel backend they're already warned when `parallel_ba... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6148/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6148/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6148",
"html_url": "https://github.com/huggingface/datasets/pull/6148",
"diff_url": "https://github.com/huggingface/datasets/pull/6148.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6148.patch",
"merged_at": "2023-08-17T08:43... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6147 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6147/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6147/comments | https://api.github.com/repos/huggingface/datasets/issues/6147/events | https://github.com/huggingface/datasets/issues/6147 | 1,848,914,830 | I_kwDODunzps5uNDOO | 6,147 | ValueError when running BeamBasedBuilder with GCS path in cache_dir | {
"login": "ktrk115",
"id": 13844767,
"node_id": "MDQ6VXNlcjEzODQ0NzY3",
"avatar_url": "https://avatars.githubusercontent.com/u/13844767?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ktrk115",
"html_url": "https://github.com/ktrk115",
"followers_url": "https://api.github.com/users/ktrk11... | [] | closed | false | null | [] | null | 2 | 2023-08-14T03:11:34 | 2024-03-18T16:59:15 | 2024-03-18T16:59:14 | NONE | null | ### Describe the bug
When running the BeamBasedBuilder with a GCS path specified in the cache_dir, the following ValueError occurs:
```
ValueError: Unable to get filesystem from specified path, please use the correct path or ensure the required dependency is installed, e.g., pip install apache-beam[gcp]. Path spec... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6147/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6147/timeline | null | not_planned | null | null | false | [
"The cause of the error seems to be that `datasets` adds \"gcs://\" as a schema, while `beam` checks only \"gs://\".\r\n\r\ndatasets: https://github.com/huggingface/datasets/blob/c02a44715c036b5261686669727394b1308a3a4b/src/datasets/builder.py#L822\r\n\r\nbeam: [link](https://github.com/apache/beam/blob/25e1a64641b... |
https://api.github.com/repos/huggingface/datasets/issues/6146 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6146/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6146/comments | https://api.github.com/repos/huggingface/datasets/issues/6146/events | https://github.com/huggingface/datasets/issues/6146 | 1,848,417,366 | I_kwDODunzps5uLJxW | 6,146 | DatasetGenerationError when load glue benchmark datasets from `load_dataset` | {
"login": "yusx-swapp",
"id": 78742415,
"node_id": "MDQ6VXNlcjc4NzQyNDE1",
"avatar_url": "https://avatars.githubusercontent.com/u/78742415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yusx-swapp",
"html_url": "https://github.com/yusx-swapp",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 4 | 2023-08-13T05:17:56 | 2023-08-26T22:09:09 | 2023-08-26T22:09:09 | NONE | null | ### Describe the bug
Package version: datasets-2.14.4
When I run the codes:
```
from datasets import load_dataset
dataset = load_dataset("glue", "ax")
```
I got the following errors:
---------------------------------------------------------------------------
SchemaInferenceError ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6146/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6146/timeline | null | completed | null | null | false | [
"I've tried clear the .cache file, doesn't work.",
"This issue happens on AWS sagemaker",
"This issue can happen if there is a directory named \"glue\" relative to the Python script with the `load_dataset` call (similar issue to this one: https://github.com/huggingface/datasets/issues/5228). Is this the case?",... |
https://api.github.com/repos/huggingface/datasets/issues/6153 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6153/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6153/comments | https://api.github.com/repos/huggingface/datasets/issues/6153/events | https://github.com/huggingface/datasets/issues/6153 | 1,852,630,074 | I_kwDODunzps5ubOQ6 | 6,153 | custom load dataset to hub | {
"login": "andysingal",
"id": 20493493,
"node_id": "MDQ6VXNlcjIwNDkzNDkz",
"avatar_url": "https://avatars.githubusercontent.com/u/20493493?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/andysingal",
"html_url": "https://github.com/andysingal",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 5 | 2023-08-13T04:42:22 | 2023-11-21T11:50:28 | 2023-10-08T17:04:16 | NONE | null | ### System Info
kaggle notebook
i transformed dataset:
```
dataset = load_dataset("Dahoas/first-instruct-human-assistant-prompt")
```
to
formatted_dataset:
```
Dataset({
features: ['message_tree_id', 'message_tree_text'],
num_rows: 33143
})
```
but would like to know how to upload to hub
### ... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6153/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6153/timeline | null | completed | null | null | false | [
"This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).",
"> This is an issue for the [Datasets repo](https://github.com/huggingface/datasets).\r\n\r\nThanks @sgugger , I guess I will wait for them to address the issue . Looking forward to hearing from them ",
"You can use `.push_to_... |
https://api.github.com/repos/huggingface/datasets/issues/6145 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6145/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6145/comments | https://api.github.com/repos/huggingface/datasets/issues/6145/events | https://github.com/huggingface/datasets/pull/6145 | 1,847,811,310 | PR_kwDODunzps5Xx5If | 6,145 | Export to_iterable_dataset to document | {
"login": "npuichigo",
"id": 11533479,
"node_id": "MDQ6VXNlcjExNTMzNDc5",
"avatar_url": "https://avatars.githubusercontent.com/u/11533479?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/npuichigo",
"html_url": "https://github.com/npuichigo",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | 2 | 2023-08-12T07:00:14 | 2023-08-15T17:04:01 | 2023-08-15T16:55:24 | CONTRIBUTOR | null | Fix the export of a missing method of `Dataset` | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6145/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6145/timeline | null | null | false | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/6145",
"html_url": "https://github.com/huggingface/datasets/pull/6145",
"diff_url": "https://github.com/huggingface/datasets/pull/6145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/6145.patch",
"merged_at": "2023-08-15T16:55... | true | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==8.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... |
https://api.github.com/repos/huggingface/datasets/issues/6144 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6144/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6144/comments | https://api.github.com/repos/huggingface/datasets/issues/6144/events | https://github.com/huggingface/datasets/issues/6144 | 1,847,296,711 | I_kwDODunzps5uG4LH | 6,144 | NIH exporter file not found | {
"login": "brando90",
"id": 1855278,
"node_id": "MDQ6VXNlcjE4NTUyNzg=",
"avatar_url": "https://avatars.githubusercontent.com/u/1855278?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/brando90",
"html_url": "https://github.com/brando90",
"followers_url": "https://api.github.com/users/brand... | [] | open | false | null | [] | null | 6 | 2023-08-11T19:05:25 | 2023-08-14T23:28:38 | null | NONE | null | ### Describe the bug
can't use or download the nih exporter pile data.
```
15 experiment_compute_diveristy_coeff_single_dataset_then_combined_datasets_with_domain_weights()
16 File "/lfs/ampere1/0/brando9/beyond-scale-language-data-diversity/src/diversity/div_coeff.py", line 474, in experiment_compute_diveri... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6144/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6144/timeline | null | null | null | null | false | [
"related: https://github.com/huggingface/datasets/issues/3504",
"another file not found:\r\n```\r\nTraceback (most recent call last):\r\n File \"/lfs/ampere1/0/brando9/miniconda/envs/beyond_scale/lib/python3.10/site-packages/fsspec/implementations/http.py\", line 417, in _info\r\n await _file_info(\r\n File ... |
https://api.github.com/repos/huggingface/datasets/issues/6142 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6142/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6142/comments | https://api.github.com/repos/huggingface/datasets/issues/6142/events | https://github.com/huggingface/datasets/issues/6142 | 1,846,205,216 | I_kwDODunzps5uCtsg | 6,142 | the-stack-dedup fails to generate | {
"login": "michaelroyzen",
"id": 45830328,
"node_id": "MDQ6VXNlcjQ1ODMwMzI4",
"avatar_url": "https://avatars.githubusercontent.com/u/45830328?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/michaelroyzen",
"html_url": "https://github.com/michaelroyzen",
"followers_url": "https://api.githu... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | 4 | 2023-08-11T05:10:49 | 2023-08-17T09:26:13 | 2023-08-17T09:26:13 | NONE | null | ### Describe the bug
I'm getting an error generating the-stack-dedup with datasets 2.13.1, and with 2.14.4 nothing happens.
### Steps to reproduce the bug
My code:
```
import os
import datasets as ds
MY_CACHE_DIR = "/home/ubuntu/the-stack-dedup-local"
MY_TOKEN="my-token"
the_stack_ds = ds.load_dataset("... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6142/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6142/timeline | null | completed | null | null | false | [
"@severo ",
"It seems that some parquet files have additional columns.\r\n\r\nI ran a scan and found that two files have the additional `__id__` column:\r\n\r\n1. `hf://datasets/bigcode/the-stack-dedup/data/numpy/data-00000-of-00001.parquet`\r\n2. `hf://datasets/bigcode/the-stack-dedup/data/omgrofl/data-00000-of-... |
https://api.github.com/repos/huggingface/datasets/issues/6141 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6141/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6141/comments | https://api.github.com/repos/huggingface/datasets/issues/6141/events | https://github.com/huggingface/datasets/issues/6141 | 1,846,117,729 | I_kwDODunzps5uCYVh | 6,141 | TypeError: ClientSession._request() got an unexpected keyword argument 'https' | {
"login": "q935970314",
"id": 35994018,
"node_id": "MDQ6VXNlcjM1OTk0MDE4",
"avatar_url": "https://avatars.githubusercontent.com/u/35994018?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/q935970314",
"html_url": "https://github.com/q935970314",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | 1 | 2023-08-11T02:40:32 | 2023-08-30T13:51:33 | 2023-08-30T13:51:33 | NONE | null | ### Describe the bug
Hello, when I ran the [code snippet](https://huggingface.co/docs/datasets/v2.14.4/en/loading#json) on the document, I encountered the following problem:
```
Python 3.10.9 (main, Mar 1 2023, 18:23:06) [GCC 11.2.0] on linux
Type "help", "copyright", "credits" or "license" for more informatio... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6141/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6141/timeline | null | completed | null | null | false | [
"Hi! I cannot reproduce this error on my machine or in Colab. Which version of `fsspec` do you have installed?"
] |
https://api.github.com/repos/huggingface/datasets/issues/6140 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/6140/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/6140/comments | https://api.github.com/repos/huggingface/datasets/issues/6140/events | https://github.com/huggingface/datasets/issues/6140 | 1,845,384,712 | I_kwDODunzps5t_lYI | 6,140 | Misalignment between file format specified in configs metadata YAML and the inferred builder | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | 0 | 2023-08-10T15:07:34 | 2023-08-17T20:37:20 | 2023-08-17T20:37:20 | MEMBER | null | There is a misalignment between the format of the `data_files` specified in the configs metadata YAML (CSV):
```yaml
configs:
- config_name: default
data_files:
- split: train
path: data.csv
```
and the inferred builder (JSON). Note there are multiple JSON files in the repo, but they do not... | {
"url": "https://api.github.com/repos/huggingface/datasets/issues/6140/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
} | https://api.github.com/repos/huggingface/datasets/issues/6140/timeline | null | completed | null | null | false | [] |