| html_url (string, len 51-51) | comments (string, len 67-24.7k) | title (string, len 6-280) | body (string, len 51-36.2k) | comment_length (int64, 16-1.45k) | text (string, len 190-38.3k) | embeddings (sequence) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/6638 | Looks like it works with the latest `datasets` repository:
```
- `datasets` version: 2.16.2.dev0
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.1
- `fsspec` version: 2023.10.0
```
Could you expla... | Cannot download wmt16 dataset | ### Describe the bug
As of this morning (PST), 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra... | 57 | Cannot download wmt16 dataset
### Describe the bug
As of this morning (PST), 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| ... | [
-0.36052435636520386,
0.317926287651062,
-0.002050556242465973,
0.38708245754241943,
0.4038749039173126,
0.11385878920555115,
0.2532888948917389,
0.20784994959831238,
-0.027222130447626114,
0.21313440799713135,
0.026451997458934784,
0.03987692669034004,
-0.3029175102710724,
0.0407630652189... |
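Environment blocks like the one quoted above can be gathered with a short script. Below is a stdlib-only sketch (not the `datasets-cli env` command itself); the package lookups are wrapped in `try/except` since any of the packages may not be installed:

```python
import platform
from importlib.metadata import version, PackageNotFoundError

def environment_info(packages=("datasets", "huggingface_hub", "pyarrow", "pandas", "fsspec")):
    """Collect the version info commonly pasted into bug-report environment blocks."""
    info = {
        "Platform": platform.platform(),
        "Python version": platform.python_version(),
    }
    for pkg in packages:
        try:
            info[pkg] = version(pkg)
        except PackageNotFoundError:
            info[pkg] = "not installed"
    return info

for key, value in environment_info().items():
    print(f"- {key}: {value}")
```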
https://github.com/huggingface/datasets/issues/6624 | Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it. | How to download the laion-coco dataset | The laion-coco dataset is not available now. How can I download it?
https://huggingface.co/datasets/laion/laion-coco | 18 | How to download the laion-coco dataset
The laion-coco dataset is not available now. How can I download it?
https://huggingface.co/datasets/laion/laion-coco
Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it. | [
-0.15687450766563416,
-0.23160290718078613,
-0.20348447561264038,
0.2225615680217743,
-0.014091560617089272,
-0.005569621920585632,
0.1532430648803711,
0.07347936183214188,
-0.3041417598724365,
0.11177970468997955,
-0.3511919379234314,
0.1720883846282959,
0.03486033156514168,
0.37087300419... |
https://github.com/huggingface/datasets/issues/6623 | @mariosasko, @lhoestq, @albertvillanova
Hey, can anyone help, or suggest who could help with this? | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 18 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | Hi!
1. When the dataset runs out of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't yet implemented a way to ignore the last batch. It might require datasets to provide the number of examples per shard, though, so that we can know when to stop.
2. Samplers are... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 128 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | > if dataset.n_shards % world_size != 0 then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of world_size so that each example goes to exactly one GPU.
considering there's just 1 shard and 2 worker nodes, do ... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 73 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | Yes, both nodes will stream from the 1 shard, but each node will skip half of the examples. This way, in total, each example is seen once and exactly once during your distributed training.
Though in terms of I/O, the dataset is effectively read/streamed twice. | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 45 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
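The 1-in-`world_size` skipping described in the comment above can be sketched in plain Python. This is a simplified model of what `split_dataset_by_node` does when the shard count can't be divided evenly across nodes, not the library's actual implementation; the names are illustrative:

```python
def node_stream(samples, rank, world_size):
    """Every node reads the full (single-shard) stream, but each keeps only
    every world_size-th example, so each example reaches exactly one GPU."""
    for i, sample in enumerate(samples):
        if i % world_size == rank:
            yield sample

samples = [1, 2, 3, 4, 5]
print(list(node_stream(samples, rank=0, world_size=2)))  # [1, 3, 5]
print(list(node_stream(samples, rank=1, world_size=2)))  # [2, 4]
```

Together the two nodes cover every example exactly once, at the cost of each node streaming the whole shard.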
https://github.com/huggingface/datasets/issues/6623 | What if the number of samples in that shard % num_nodes != 0? Will it break/get stuck, or is the data repeated in that case for gradient sync? | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 28 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | In that case at least one of the nodes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account, it can indeed lead to unexpected behaviors.
In the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way al... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 68 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
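With the 5-sample setup from the issue (2 nodes, batch size 2), the uneven last batch is easy to reproduce in a plain-Python sketch (illustrative names, not the library's actual batching code):

```python
def node_batches(samples, rank, world_size, batch_size):
    """Keep every world_size-th example starting at `rank`, then group into batches."""
    mine = [s for i, s in enumerate(samples) if i % world_size == rank]
    return [mine[i:i + batch_size] for i in range(0, len(mine), batch_size)]

samples = [1, 2, 3, 4, 5]
print(node_batches(samples, rank=0, world_size=2, batch_size=2))  # [[1, 3], [5]]
print(node_batches(samples, rank=1, world_size=2, batch_size=2))  # [[2, 4]]
```

Node 0 sees two batches (the second incomplete) while node 1 sees only one, which is exactly the mismatch that can hang a DDP gradient sync if the training loop doesn't account for it.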
https://github.com/huggingface/datasets/issues/6618 | Hi! Can you please share the error's stack trace so we can see where it comes from? | While importing load_dataset from datasets | ### Describe the bug
I received this error: `cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'`
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | 17 | While importing load_dataset from datasets
### Describe the bug
I received this error: `cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'`
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5
Hi! Can you please sh... | [
0.02553679049015045,
-0.0960899293422699,
-0.01202813908457756,
0.30808398127555847,
0.20100738108158112,
0.12118995189666748,
0.3076993227005005,
-0.02027871645987034,
0.16383129358291626,
0.16387607157230377,
0.07356870919466019,
0.20870698988437653,
-0.0453374907374382,
0.22486478090286... |
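To produce the stack trace the maintainer asked for, the failing import can be captured with the standard `traceback` module. A generic sketch (`try_import` is a hypothetical helper written for this example, not part of `datasets`):

```python
import importlib
import traceback

def try_import(module_name):
    """Attempt an import; return the full traceback text on failure, else None."""
    try:
        importlib.import_module(module_name)
        return None
    except Exception:
        return traceback.format_exc()

# Print the full trace for the failing import so it can be pasted into the issue:
trace = try_import("datasets")
if trace is not None:
    print(trace)
```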
https://github.com/huggingface/datasets/issues/6612 | Hi! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.

You (...TRUNCATED) | cnn_dailymail repeats itself | ### Describe the bug

When I try to load the `cnn_dailymail` dataset, it takes longer than usual (...TRUNCATED) | 25 | cnn_dailymail repeats itself
### Describe the bug

When I try to load the `cnn_dailymail` dataset, it takes longer than usual (...TRUNCATED) | [0.07091876864433289, -0.30874955654144287, 0.029564738273620605, 0.6589643955230713, -0.008779346942901(...TRUNCATED) |
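The `datasets>=2.14` requirement mentioned in the reply can be checked before loading with a small stdlib-only version comparison (a sketch: it only compares the numeric `major.minor.patch` prefix and ignores suffixes like `.dev0`):

```python
def version_at_least(installed, minimum):
    """Compare the numeric prefixes of two version strings, e.g. '2.16.2.dev0' >= '2.14'."""
    def parse(v):
        parts = []
        for piece in v.split("."):
            if not piece.isdigit():
                break  # stop at non-numeric segments such as 'dev0'
            parts.append(int(piece))
        return tuple(parts)
    return parse(installed) >= parse(minimum)

print(version_at_least("2.16.2.dev0", "2.14"))  # True
print(version_at_least("2.13.1", "2.14"))       # False
```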