html_url (string, 51–51) | comments (string, 67–24.7k) | title (string, 6–280) | body (string, 51–36.2k) | comment_length (int64, 16–1.45k) | text (string, 190–38.3k) | embeddings (list) |
|---|---|---|---|---|---|---|
https://github.com/huggingface/datasets/issues/6638 | Looks like it works with the latest `datasets` repository:
```
- `datasets` version: 2.16.2.dev0
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.1
- `fsspec` version: 2023.10.0
```
Could you expla... | Cannot download wmt16 dataset | ### Describe the bug
As of this morning (PST) 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra... | 57 | Cannot download wmt16 dataset
### Describe the bug
As of this morning (PST) 2/1/2024, the wmt16 dataset appears to be missing from OPUS; could you suggest an alternative?
```
Downloading data files: 0%| ... | [
-0.36052435636520386,
0.317926287651062,
-0.002050556242465973,
0.38708245754241943,
0.4038749039173126,
0.11385878920555115,
0.2532888948917389,
0.20784994959831238,
-0.027222130447626114,
0.21313440799713135,
0.026451997458934784,
0.03987692669034004,
-0.3029175102710724,
0.0407630652189... |
https://github.com/huggingface/datasets/issues/6624 | Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it. | How to download the laion-coco dataset | The laion-coco dataset is no longer available. How can I download it?
https://huggingface.co/datasets/laion/laion-coco | 18 | How to download the laion-coco dataset
The laion-coco dataset is no longer available. How can I download it?
https://huggingface.co/datasets/laion/laion-coco
Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it. | [
-0.15687450766563416,
-0.23160290718078613,
-0.20348447561264038,
0.2225615680217743,
-0.014091560617089272,
-0.005569621920585632,
0.1532430648803711,
0.07347936183214188,
-0.3041417598724365,
0.11177970468997955,
-0.3511919379234314,
0.1720883846282959,
0.03486033156514168,
0.37087300419... |
https://github.com/huggingface/datasets/issues/6623 | @mariosasko, @lhoestq, @albertvillanova
hey guys! Can anyone help, or suggest who could help with this? | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 18 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | Hi !
1. When the dataset is running out of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't yet implemented a way to ignore the last batch. It might require datasets to provide the number of examples per shard, though, so that we know when to stop.
2. Samplers are... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 128 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | > if dataset.n_shards % world_size != 0 then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of world_size so that each example goes to exactly one GPU.
considering there's just 1 shard and 2 worker nodes, do ... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 73 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | Yes both nodes will stream from the 1 shard, but each node will skip half of the examples. This way in total each example is seen once and exactly once during you distributed training.
Though it terms of I/O, the dataset is effectively read/streamed twice. | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 45 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | what if the number of samples in that shard % num_nodes != 0? it will break/get stuck? or is the data repeated in that case for gradient sync? | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 28 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | In that case at least one of the nodes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account, it can indeed lead to unexpected behaviors.
In the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way al... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 68 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6618 | Hi! Can you please share the error's stack trace so we can see where it comes from? | While importing load_dataset from datasets | ### Describe the bug
`cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'` is the error I received
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | 17 | While importing load_dataset from datasets
### Describe the bug
`cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_'` is the error I received
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5
Hi! Can you please sh... | [
0.02553679049015045,
-0.0960899293422699,
-0.01202813908457756,
0.30808398127555847,
0.20100738108158112,
0.12118995189666748,
0.3076993227005005,
-0.02027871645987034,
0.16383129358291626,
0.16387607157230377,
0.07356870919466019,
0.20870698988437653,
-0.0453374907374382,
0.22486478090286... |
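For context on the error above: urllib3 2.0 removed `DEFAULT_CIPHERS` from `urllib3.util.ssl_`, which breaks dependencies still importing the 1.x API. That root-cause attribution is an assumption (the thread only asks for the stack trace), but it is easy to check locally:

```python
import urllib3
import urllib3.util.ssl_

# DEFAULT_CIPHERS exists in urllib3 1.x but was removed in 2.0, so the
# ImportError above typically means urllib3 >= 2 is installed alongside
# a dependency that still expects the 1.x API.
major = int(urllib3.__version__.split(".")[0])
has_default_ciphers = hasattr(urllib3.util.ssl_, "DEFAULT_CIPHERS")
print(major, has_default_ciphers)
```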
https://github.com/huggingface/datasets/issues/6612 | Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.
You can update `datasets` with
```
pip install -U datasets
``` | cnn_dailymail repeats itself | ### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339.
Also I che... | 25 | cnn_dailymail repeats itself
### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train spli... | [
0.07091876864433289,
-0.30874955654144287,
0.029564738273620605,
0.6589643955230713,
-0.008779346942901611,
0.07012949883937836,
0.4404844641685486,
-0.0358312763273716,
-0.12530618906021118,
0.07747453451156616,
0.24863269925117493,
0.3104359209537506,
-0.21459093689918518,
0.297971189022... |
https://github.com/huggingface/datasets/issues/6610 | Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:
```python
ais_dataset = ais_dataset.cast_column("my_labeled_bbox", {"bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"])})
``` | cast_column to Sequence(subfeatures_dict) has err | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
... | 25 | cast_column to Sequence(subfeatures_dict) has err
### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = a... | [
-0.06808836758136749,
-0.2487647533416748,
-0.06157679110765457,
0.015728870406746864,
0.5308369398117065,
0.24775570631027222,
0.4921039938926697,
0.2797868847846985,
0.1971459835767746,
-0.10834231972694397,
0.1848737597465515,
0.22724923491477966,
-0.12083391845226288,
0.378632396459579... |
https://github.com/huggingface/datasets/issues/6610 | > Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:
>
> ```python
> ais_dataset = ais_dataset.cast_column("my_labeled_bbox", {"bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"])})
> ```
thanks | cast_column to Sequence(subfeatures_dict) has err | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
... | 31 | cast_column to Sequence(subfeatures_dict) has err
### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = a... | [
-0.06808836758136749,
-0.2487647533416748,
-0.06157679110765457,
0.015728870406746864,
0.5308369398117065,
0.24775570631027222,
0.4921039938926697,
0.2797868847846985,
0.1971459835767746,
-0.10834231972694397,
0.1848737597465515,
0.22724923491477966,
-0.12083391845226288,
0.378632396459579... |
https://github.com/huggingface/datasets/issues/6609 | I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets` | Wrong path for cache directory in offline mode | ### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the files and caches them normally.
Nevertheless, ... | 17 | Wrong path for cache directory in offline mode
### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the ... | [
-0.07495970278978348,
-0.1799859255552292,
0.013488173484802246,
0.40448620915412903,
0.08871869742870331,
-0.08352793753147125,
0.22764024138450623,
-0.0113661615177989,
0.158774733543396,
0.026804406195878983,
0.08159554749727249,
0.05718464031815529,
0.169479638338089,
-0.03335284814238... |
https://github.com/huggingface/datasets/issues/6604 | I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html | Transform fingerprint collisions due to setting fixed random seed | ### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random... | 34 | Transform fingerprint collisions due to setting fixed random seed
### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356... | [
-0.1706395000219345,
-0.5525231957435608,
0.13183769583702087,
0.25794491171836853,
0.42352157831192017,
-0.03414187580347061,
0.15525738894939423,
0.3210907578468323,
-0.301504909992218,
0.17565152049064636,
-0.048530951142311096,
-0.053445473313331604,
-0.02663678303360939,
0.15088921785... |
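The suggestion above is easy to verify: `uuid.uuid4()` draws from the OS entropy source, so fixing the `random` seed cannot make it collide. A minimal demonstration (not the actual fingerprinting code):

```python
import random
import uuid

# With the `random` module, a fixed seed reproduces the same "random" bits...
random.seed(42)
r1 = random.getrandbits(64)
random.seed(42)
r2 = random.getrandbits(64)

# ...whereas uuid4 ignores the seed entirely.
random.seed(42)
u1 = uuid.uuid4().hex
random.seed(42)
u2 = uuid.uuid4().hex
```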
https://github.com/huggingface/datasets/issues/6603 | ```
ds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))
ds.map(lambda item: dict(b=item['a'] * 2), cache_file_name="/tmp/whatever-fn") # this worked
ds.map(lambda item: dict(b=item['a'] * 2), cache_file_name="/tmp/whatever-folder/filename") # this failed
ds.map(lambda item: dict(b=item['a'] * 2), cache... | datasets map `cache_file_name` does not work | ### Describe the bug
In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
### Expected behavior
It will tell you t... | 71 | datasets map `cache_file_name` does not work
### Describe the bug
In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it cra... | [
-0.17607736587524414,
-0.22184181213378906,
0.028800055384635925,
0.3735642731189728,
0.35459649562835693,
0.26312312483787537,
0.2672741711139679,
0.3279056251049042,
0.2868582606315613,
0.11249053478240967,
-0.24395491182804108,
0.40131598711013794,
-0.07068009674549103,
-0.2143307328224... |
https://github.com/huggingface/datasets/issues/6600 | Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:
```python
test_dataset = load_dataset("opus100", name="en-fr", split="test")
# Save with .to_parquet()
test_parquet_path = "try_testset_save.parquet"
test_dataset.to_parquet(test_parquet_path)
# L... | Loading CSV exported dataset has unexpected format | ### Describe the bug
I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected.
### Steps to reproduce the bug
The documentation I've mainly cons... | 44 | Loading CSV exported dataset has unexpected format
### Describe the bug
I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected.
### Steps to ... | [
0.09366397559642792,
-0.301914244890213,
0.01883000135421753,
0.3817500174045563,
0.24329328536987305,
0.04965081065893173,
0.315578430891037,
0.07877084612846375,
0.13491548597812653,
0.09632612019777298,
-0.24815884232521057,
-0.04743877053260803,
0.17620520293712616,
0.2116120159626007,... |
https://github.com/huggingface/datasets/issues/6599 | Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic. | Easy way to segment into 30s snippets given an m4a file and a vtt file | ### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's easy to create a vtt file from an audio file. If there could be auto-segment... | 19 | Easy way to segment into 30s snippets given an m4a file and a vtt file
### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's ea... | [
-0.650300920009613,
-0.3339020013809204,
-0.1150285005569458,
-0.06848037242889404,
0.23705556988716125,
-0.3140990436077118,
0.5622481107711792,
0.4451252222061157,
-0.5105161666870117,
0.2401493936777115,
-0.3567483723163605,
-0.044706277549266815,
-0.3566725254058838,
0.3612205386161804... |
https://github.com/huggingface/datasets/issues/6598 | I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. | Unexpected keyword argument 'hf' when downloading CSV dataset from S3 | ### Describe the bug
I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w... | 19 | Unexpected keyword argument 'hf' when downloading CSV dataset from S3
### Describe the bug
I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://sta... | [
0.05472029000520706,
-0.3772714138031006,
0.010761946439743042,
0.1927299201488495,
0.10700391232967377,
-0.1065262109041214,
0.39194580912590027,
0.10678645968437195,
0.04045405983924866,
-0.0982537716627121,
0.05330744385719299,
0.23410926759243011,
0.11373177170753479,
0.240796178579330... |
https://github.com/huggingface/datasets/issues/6597 | Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1582-L1585
> Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.
This behavior was "reverted" by the PR:
- #6519
... | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | 103 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
... | [
-0.0006193593144416809,
-0.1821044683456421,
0.02426648885011673,
0.128635436296463,
0.1866663694381714,
-0.11163036525249481,
0.22518958151340485,
0.07185616344213486,
0.0009449049830436707,
0.29711079597473145,
-0.012589387595653534,
0.3734179735183716,
-0.135340616106987,
0.054037660360... |
https://github.com/huggingface/datasets/issues/6597 | IIUC, this could also be "fixed" by `create_repo("dataset_name")` not defaulting to `create_repo("user/dataset_name")` (when the user's token is available), which would be consistent with the rest of the `HfApi` ops used in the `push_to_hub` implementation. This is a (small) breaking change for `huggingface_hub`, but j... | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | 50 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
... | [
0.015438482165336609,
-0.24919451773166656,
0.0909704715013504,
0.20266947150230408,
0.17012590169906616,
-0.09327994287014008,
0.33426424860954285,
0.1293419897556305,
0.10203634202480316,
0.2504817545413971,
-0.17379999160766602,
0.16911724209785461,
0.13417616486549377,
0.18486499786376... |
https://github.com/huggingface/datasets/issues/6597 | Hmm, creating repo with implicit namespace (e.g. `create_repo("dataset_name")`) is a convenient feature used in a lot of integrations. It is not consistent with other HfApi methods specifically because it is the method to create repos. Once the repo is created, the return value provides the explicit repo_id (`namespace... | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | 162 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
... | [
-0.05977807939052582,
-0.03659529983997345,
0.11248317360877991,
0.10316121578216553,
0.3251781463623047,
-0.11299433559179306,
0.43455737829208374,
0.2440657913684845,
0.14013831317424774,
0.32332760095596313,
-0.33410531282424927,
0.3199589252471924,
0.16297756135463715,
0.12308022379875... |
https://github.com/huggingface/datasets/issues/6597 | As canonical datasets are going to disappear in the next couple of months, I would not put any effort into supporting them.
I propose reverting #6519, so that the behavior of `push_to_hub` is aligned with the one described in its docstring: "Also accepts `<dataset_name>`, which will default to the namespace of the... | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | 58 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
... | [
-0.11891815066337585,
-0.05135495960712433,
0.05666367709636688,
0.08923321962356567,
0.2523309886455536,
-0.1649923175573349,
0.24353288114070892,
0.13927210867404938,
-0.03309636563062668,
0.23708900809288025,
0.011932600289583206,
0.4353119730949402,
-0.10320337861776352,
0.060031294822... |
https://github.com/huggingface/datasets/issues/6595 | Hi ! I think the issue comes from the "float16" features, which are not yet supported in Parquet.
Feel free to open an issue in `pyarrow` about this. In the meantime, I'd encourage you to use "float32" for your "pooled_prompt_embeds" and "prompt_embeds" features.
You can cast them to "float32" using
```python
fr... | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 64 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | @lhoestq hm. Thank you very much.
Do you think it won't have any impact on the training? That it won't break it or the quality won't degrade because of this?
I need to use it for [SDXL training](https://github.com/huggingface/diffusers/blob/main/examples/text_to_image/train_text_to_image_sdxl.py) | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 38 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | Increasing the precision should not degrade training (it only increases the precision), but make sure that it doesn't break your pytorch code (e.g. if it expects a float16 instead of a float32 somewhere) | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 33 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | @lhoestq just fyi pyarrow 15.0.0 (just released) supports float16 as the underlying parquetcpp does as well now :) | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 18 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | Oh that's amazing ! (and great timing ^^)
@kopyl can you try to update `pyarrow` and try again ?
Btw @assignUser there seems to be some casting implementations missing with float16 in 15.0.0, e.g.
```
ArrowNotImplementedError: Unsupported cast from int64 to halffloat using function cast_half_float
```
```... | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 58 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | Ah you are right casting is not implemented yet, it's even mentioned in the docs. This pr references the relevant issues if you'd like to track them
https://github.com/apache/arrow/pull/38494 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 28 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | @lhoestq i just recently found out that it's supported in 15.0.0, but wanted to try it first before telling you...
Trying this right now and it seemingly works (although i need to wait till the end to make sure there is nothing wrong). Will update you when it's finished.
<img width="918" alt="image" src="https://... | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 87 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | @lhoestq also it's strange that there was no error for a dataset with the same features, same data type, but smaller (much smaller).
Altho i'm not sure about this, but chances are the dataset was loaded directly, not `load_from_disk`.... Maybe because of this. | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 43 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | > What does that missing casting implementation mean for my specific case and what does it mean in general?
Nothing for you, just that casting to float16 using `.cast_column("my_column_name", Value("float16"))` raises an error
> Do you know how to push_to_hub with multiple processes?
It's not possible (yet ?).... | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 89 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | @lhoestq thank you very much.
That would be amazing, I need to create a feature request for this :)
By the way, in short, how does hf_transfer improves the upload speed under the hood? | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 34 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6595 | @lhoestq i was just able to successfully upload without the dataset with the new pyarrow update and without increasing the precision :) | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2 | ### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
1. Map dataset
2. Save to disk
3. Try to upload:
```
import data... | 22 | Loading big dataset raises pyarrow.lib.ArrowNotImplementedError 2
### Describe the bug
I'm aware of the issue #5695 .
I'm using a modified SDXL trainer: https://github.com/kopyl/diffusers/blob/5e70f604155aeecee254a5c63c5e4236ad4a0d3d/examples/text_to_image/train_text_to_image_sdxl.py#L1027C16-L1027C16
So i
... | [
-0.28692787885665894,
-0.0044410377740859985,
0.16262230277061462,
0.20641998946666718,
0.36014455556869507,
-0.21503600478172302,
0.30898481607437134,
0.37914708256721497,
-0.34722834825515747,
0.15772300958633423,
0.015633590519428253,
0.6490374207496643,
-0.33155137300491333,
0.27174314... |
https://github.com/huggingface/datasets/issues/6592 | Hi! `tqdm` doesn't work well in non-interactive environments, so there isn't much we can do about this. It's best to [disable it](https://huggingface.co/docs/datasets/v2.16.1/en/package_reference/utilities#datasets.disable_progress_bars) in such environments and instead use logging to track progress. | Logs are delayed when doing .map when `docker logs` | ### Describe the bug
When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed.
It's updating every few percent.
When you have a large dataset that has to be mapped (like 1+ million samples), it's crucial to see the updates in real-time, not every co... | 32 | Logs are delayed when doing .map when `docker logs`
### Describe the bug
When I run my SD training in a Docker image and then listen to logs like `docker logs train -f`, the progress bar is delayed.
It's updating every few percent.
When you have a large dataset that has to be mapped (like 1+ million samples), it's... | [
-0.5378693342208862,
-0.46852540969848633,
-0.023110635578632355,
-0.14820663630962372,
0.2111946940422058,
-0.05372349172830582,
0.33600133657455444,
0.13271595537662506,
-0.030799970030784607,
0.34662723541259766,
-0.05821090191602707,
0.3205311596393585,
-0.1769188642501831,
0.224839180... |
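A stdlib-only sketch of the suggested alternative: turn the bar off and emit ordinary log lines, which are flushed line by line and show up promptly under `docker logs -f`. The `progress_message` helper below is hypothetical, not part of the `datasets` API; the `disable_progress_bars()` utility linked in the comment is what actually silences tqdm.

```python
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("preprocess")

def progress_message(done, total, state, every=100_000):
    """Return a progress line every `every` examples (else None)."""
    if done - state["last"] >= every or done == total:
        state["last"] = done
        return "processed %d/%d examples (%.1f%%)" % (done, total, 100 * done / total)
    return None

# e.g. called from a map function with a shared counter:
state = {"last": 0}
for done in range(1, 1001):
    msg = progress_message(done, 1000, state, every=250)
    if msg:
        logger.info(msg)  # one flushed line per update, no carriage returns
```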
https://github.com/huggingface/datasets/issues/6591 | Hi! Indeed, Dropbox is not a reliable host. I've just merged https://huggingface.co/datasets/PolyAI/minds14/discussions/24 to fix this by hosting the data files inside the repo. | The datasets models housed in Dropbox can't support a lot of users downloading them | ### Describe the bug
I'm using the datasets
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes when I imagine a lot of users are accessing the same resources, the Dropbox host fails:
`raise ConnectionError(... | 23 | The datasets models housed in Dropbox can't support a lot of users downloading them
### Describe the bug
I'm using the datasets
```
from datasets import load_dataset, Audio
dataset = load_dataset("PolyAI/minds14", name="en-US", split="train")
```
And it seems that sometimes when I imagine a lot of user... | [
-0.34906262159347534,
0.30475425720214844,
-0.036654695868492126,
0.5930954217910767,
0.2086874395608902,
-0.12326231598854065,
0.3543434739112854,
0.08312855660915375,
0.21345342695713043,
0.14639872312545776,
-0.38722750544548035,
-0.04541517421603203,
-0.1008751392364502,
0.193244695663... |
https://github.com/huggingface/datasets/issues/6585 | Hi! This issue comes from the fact that `map()` with `num_proc>1` shards the dataset into multiple chunks to be processed (one per process) and then merges them. The DatasetInfos of the chunks are merged together, but for some fields like `dataset_name` that merging hasn't been implemented, so they default to None.
The DatasetInfo m... | losing DatasetInfo in Dataset.map when num_proc > 1 | ### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processors some set attributes of the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug
```python
from datasets import Dataset, DatasetInfo... | 65 | losing DatasetInfo in Dataset.map when num_proc > 1
### Describe the bug
Hello and thanks for developing this package!
When I process a Dataset with the map function using multiple processors some set attributes of the DatasetInfo get lost and are None in the resulting Dataset.
### Steps to reproduce the bug... | [
-0.35594192147254944,
-0.10743378102779388,
-0.007776990532875061,
0.6413049101829529,
0.2363981008529663,
0.20442619919776917,
0.16280147433280945,
0.22665756940841675,
0.17084787786006927,
0.3607620298862457,
0.25110965967178345,
0.6130760908126831,
0.121337890625,
0.14003291726112366,
... |
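A toy illustration of the merge behavior described in the comment above (deliberately simplified, not the actual `datasets` implementation): only fields the merge logic explicitly handles survive, and everything else, including `dataset_name`, falls back to None.

```python
def merge_infos(infos, handled=("features", "description")):
    """Merge per-process chunk infos; unhandled fields default to None."""
    merged = {}
    for key in infos[0]:
        # keep a field only if the merge knows about it and all chunks agree
        if key in handled and len({repr(info[key]) for info in infos}) == 1:
            merged[key] = infos[0][key]
        else:
            merged[key] = None
    return merged

chunks = [{"features": "f", "description": "d", "dataset_name": "name"}] * 2
assert merge_infos(chunks) == {"features": "f", "description": "d", "dataset_name": None}
```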
https://github.com/huggingface/datasets/issues/6584 | @lhoestq
```
Traceback (most recent call last):
File "/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/runpy.py", line 198, in _run_module_as_main
return _run_code(code, main_globals, None,
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/dongzf/miniconda3/envs/dataset_ai/lib/python3.11/r... | np.fromfile not supported | How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *args, **kwargs)
else:
... | 105 | np.fromfile not supported
How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *ar... | [
-0.17793172597885132,
-0.15880124270915985,
-0.05569717288017273,
0.12712755799293518,
0.36935845017433167,
-0.17062661051750183,
0.31794315576553345,
0.3332071900367737,
0.26488226652145386,
0.1581835001707077,
-0.04350850731134415,
0.40969303250312805,
0.08633576333522797,
0.339933156967... |
https://github.com/huggingface/datasets/issues/6584 | I used this method to read point cloud data in the script
```python
with open(velodyne_filepath,"rb") as obj:
velodyne_data = numpy.frombuffer(obj.read(), dtype=numpy.float32).reshape([-1, 4])
``` | np.fromfile not supported | How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *args, **kwargs)
else:
... | 23 | np.fromfile not supported
How to do np.fromfile to use it like np.load
```python
def xnumpy_fromfile(filepath_or_buffer, *args, download_config: Optional[DownloadConfig] = None, **kwargs):
import numpy as np
if hasattr(filepath_or_buffer, "read"):
return np.fromfile(filepath_or_buffer, *ar... | [
-0.26546746492385864,
-0.19931796193122864,
-0.07483597099781036,
0.15513744950294495,
0.2779790163040161,
-0.2699272930622101,
0.21601557731628418,
0.22307102382183075,
0.18922102451324463,
0.20724502205848694,
0.07521598786115646,
0.3033961057662964,
0.22715897858142853,
0.43538713455200... |
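Both workarounds above boil down to the same pattern: grab the raw bytes through `.read()` (which the file-like objects yielded during streaming do support) and decode them with `numpy.frombuffer` instead of `np.fromfile`. A hedged, self-contained sketch (the helper name is illustrative):

```python
import io

import numpy as np

def read_binary_array(file_or_path, dtype=np.float32, cols=4):
    """Load a flat binary array (e.g. a KITTI-style point cloud) from a
    filesystem path or from any object exposing .read()."""
    if hasattr(file_or_path, "read"):
        buf = file_or_path.read()
    else:
        with open(file_or_path, "rb") as f:
            buf = f.read()
    return np.frombuffer(buf, dtype=dtype).reshape(-1, cols)

# an in-memory buffer stands in for a streamed remote file here
points = read_binary_array(io.BytesIO(np.arange(8, dtype=np.float32).tobytes()))
assert points.shape == (2, 4)
```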
https://github.com/huggingface/datasets/issues/6579 | Hi @haok1402, I have created an issue in the Discussion tab of the corresponding dataset: https://huggingface.co/datasets/eli5/discussions/7
Let's continue the discussion there! | Unable to load `eli5` dataset with streaming | ### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This works correctly.
```
from datasets import lo... | 21 | Unable to load `eli5` dataset with streaming
### Describe the bug
Unable to load `eli5` dataset with streaming.
### Steps to reproduce the bug
This fails with FileNotFoundError: https://files.pushshift.io/reddit/submissions
```
from datasets import load_dataset
load_dataset("eli5", streaming=True)
```
This... | [
-0.3034294843673706,
-0.37919026613235474,
0.0074720680713653564,
0.35745134949684143,
0.3565647602081299,
-0.05574629455804825,
0.20355500280857086,
0.06192155182361603,
0.0340673103928566,
0.1009824275970459,
-0.22593586146831512,
0.18525493144989014,
0.04847179725766182,
0.3414548337459... |
https://github.com/huggingface/datasets/issues/6577 | Hi! We should be able to avoid this error by retrying to read the data when it happens. I'll open a PR in `huggingface_hub` to address this. | 502 Server Errors when streaming large dataset | ### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hug (~3TB) I often encounter 502 Server Errors seemingly randomly during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: http... | 27 | 502 Server Errors when streaming large dataset
### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hug (~3TB) I often encounter 502 Server Errors seemingly randomly during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPE... | [
-0.22672490775585175,
-0.5951704978942871,
0.15816134214401245,
0.31269171833992004,
0.21198102831840515,
-0.14130067825317383,
0.1209697276353836,
0.03074195235967636,
-0.4138365089893341,
-0.19666787981987,
-0.00406932458281517,
-0.03385841101408005,
-0.0011482946574687958,
0.14881344139... |
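The retry behavior proposed above can be sketched with the stdlib alone. Everything here is hypothetical scaffolding (the real fix lives in `huggingface_hub`); it only shows the shape of retry-with-exponential-backoff on transient 5xx responses:

```python
import time

class TransientHTTPError(Exception):
    """Stand-in for an HTTP error raised while reading from the Hub."""
    def __init__(self, status):
        super().__init__(f"HTTP {status}")
        self.status = status

def with_retries(fn, retries=5, base_delay=0.0, retry_on=(500, 502, 503, 504)):
    """Call fn(), retrying transient server errors with exponential backoff."""
    for attempt in range(retries):
        try:
            return fn()
        except TransientHTTPError as err:
            if err.status not in retry_on or attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TransientHTTPError(502)  # simulate two Bad Gateway responses
    return b"chunk"

assert with_retries(flaky_read) == b"chunk" and calls["n"] == 3
```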
https://github.com/huggingface/datasets/issues/6577 | Thanks for the fix @mariosasko! Just wondering whether "500 error" should also be excluded? I got these errors overnight:
```
huggingface_hub.utils._errors.HfHubHTTPError: 500 Server Error: Internal Server Error for url: https://huggingface.co/da
tasets/sanchit-gandhi/concatenated-train-set-label-length-256/resolv... | 502 Server Errors when streaming large dataset | ### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hug (~3TB) I often encounter 502 Server Errors seemingly randomly during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPError: 502 Server Error: Bad Gateway for url: http... | 49 | 502 Server Errors when streaming large dataset
### Describe the bug
When streaming a [large ASR dataset](https://huggingface.co/datasets/sanchit-gandhi/concatenated-train-set) from the Hug (~3TB) I often encounter 502 Server Errors seemingly randomly during streaming:
```
huggingface_hub.utils._errors.HfHubHTTPE... | [
-0.22672490775585175,
-0.5951704978942871,
0.15816134214401245,
0.31269171833992004,
0.21198102831840515,
-0.14130067825317383,
0.1209697276353836,
0.03074195235967636,
-0.4138365089893341,
-0.19666787981987,
-0.00406932458281517,
-0.03385841101408005,
-0.0011482946574687958,
0.14881344139... |
https://github.com/huggingface/datasets/issues/6568 | Seems like I just used the old code, which did not have the `keep_in_memory=True` argument, sorry.
Although I encountered a different problem – at 97% my Python process just hung for around 11 minutes with no logs (when running dataset.map without `keep_in_memory=True` over around 3 million dataset samples)... | keep_in_memory=True does not seem to work | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | 48 | keep_in_memory=True does not seem to work
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :(
Seems like I just used the old code which did not have `keep_in_memory=True` argument, sorry.
Although i encountered a different problem – at 97% my pyt... | [
-0.5037835240364075,
-0.29181116819381714,
-0.12347058951854706,
0.014672212302684784,
0.3082585036754608,
-0.032749563455581665,
0.06944156438112259,
0.22768092155456543,
0.14811402559280396,
0.21746718883514404,
-0.13731825351715088,
0.2877124547958374,
-0.2045990377664566,
0.29882973432... |
https://github.com/huggingface/datasets/issues/6568 | Can you open a new issue and provide a bit more details ? What kind of map operations did you run ? | keep_in_memory=True does not seem to work | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | 22 | keep_in_memory=True does not seem to work
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :(
Can you open a new issue and provide a bit more details ? What kind of map operations did you run ? | [
-0.3748677968978882,
-0.49990132451057434,
-0.11895636469125748,
0.2844051420688629,
0.3289105296134949,
-0.2569744288921356,
0.14168044924736023,
0.1253788322210312,
0.10821300745010376,
0.3469023108482361,
-0.14063313603401184,
0.25066202878952026,
-0.0918361097574234,
0.2976199984550476... |
https://github.com/huggingface/datasets/issues/6568 | Hey. I will try to find some free time to describe it.
(can't do it now, because I need to reproduce it myself to be sure about everything, which requires spinning up a new Azure VM, copying a huge dataset to the drive from a network disk for a long time, etc...) | keep_in_memory=True does not seem to work | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | 49 | keep_in_memory=True does not seem to work
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :(
Hey. I will try to find some free time to describe it.
(can't do it now, cause i need to reproduce it myself to be sure about everything, which requires... | [
-0.30082494020462036,
-0.45631641149520874,
-0.1075349673628807,
0.14783304929733276,
0.19342222809791565,
-0.2760157883167267,
0.03683233633637428,
0.1468944251537323,
-0.1389954388141632,
0.28641635179519653,
-0.086149200797081,
0.16647785902023315,
-0.02673282101750374,
0.30877977609634... |
https://github.com/huggingface/datasets/issues/6568 | @lhoestq loading dataset like this does not spawn 50 python processes:
```
datasets.load_dataset("/preprocessed_2256k/train", num_proc=50)
```
I have 64 vCPUs, so I hoped it could speed up the dataset loading...
My dataset only has images and a metadata.csv with a text column alongside an image file path column | keep_in_memory=True does not seem to work | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | 44 | keep_in_memory=True does not seem to work
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :(
@lhoestq loading dataset like this does not spawn 50 python processes:
```
datasets.load_dataset("/preprocessed_2256k/train", num_proc=50)
```
I ha... | [
-0.4864200949668884,
-0.2658992111682892,
-0.0638510063290596,
0.3380093276500702,
0.3865876793861389,
-0.06995498389005661,
0.20234602689743042,
0.1355111449956894,
0.1304621696472168,
0.05938532203435898,
-0.12590380012989044,
0.1253167688846588,
-0.19034166634082794,
0.3952617347240448,... |
https://github.com/huggingface/datasets/issues/6568 | Now I noticed:
```
'Setting num_proc from 50 back to 1 for the train split to disable multiprocessing as it only contains one shard
```
Any way to work around this? | keep_in_memory=True does not seem to work | UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :( | 30 | keep_in_memory=True does not seem to work
UPD: [Fixed](https://github.com/huggingface/datasets/issues/6568#issuecomment-1880817794) . But a new issue came up :(
now noticed
```
'Setting num_proc from 50 back to 1 for the train split to disable multiprocessing as it only contains one shard
```
Any way to work ... | [
-0.3578808009624481,
-0.48597973585128784,
-0.05335165560245514,
0.24324099719524384,
0.22478020191192627,
-0.3293018937110901,
0.29327523708343506,
0.05787957087159157,
-0.25964412093162537,
0.37969741225242615,
-0.09009426832199097,
-0.006218463182449341,
-0.20374412834644318,
0.60562366... |
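The quoted warning is shard-based parallelism at work: each loader process needs at least one shard to own, so a single-shard dataset gets clamped back to `num_proc=1`. A toy sketch of that assignment rule (illustrative only, not the actual `datasets` internals); one way around it is to write the data in several shards in the first place, e.g. recent `datasets` releases accept a `num_shards` argument to `save_to_disk` (worth checking against your installed version):

```python
def assign_shards(num_shards, num_proc):
    """Effective parallelism is min(num_proc, num_shards): a worker with
    zero shards would have nothing to do."""
    workers = max(1, min(num_proc, num_shards))
    base, extra = divmod(num_shards, workers)
    groups, start = [], 0
    for i in range(workers):
        end = start + base + (1 if i < extra else 0)
        groups.append(list(range(start, end)))
        start = end
    return groups

assert assign_shards(1, 50) == [[0]]  # one shard: num_proc falls back to 1
assert assign_shards(10, 4) == [[0, 1, 2], [3, 4, 5], [6, 7], [8, 9]]
```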
https://github.com/huggingface/datasets/issues/6567 | I think you are reporting an issue with the `transformers` library. Note this is the repository of the `datasets` library. I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues
EDIT: I don't have the rights to transfer the issue
~~I am transferring your issue to th... | AttributeError: 'str' object has no attribute 'to' | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer =... | 49 | AttributeError: 'str' object has no attribute 'to'
### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
... | [
-0.08758515119552612,
-0.32732489705085754,
0.08826373517513275,
0.04437369853258133,
0.5676831007003784,
0.01807337999343872,
0.5729138255119324,
0.4033966362476349,
-0.17493602633476257,
0.32560089230537415,
-0.011736348271369934,
0.22309105098247528,
-0.11257587373256683,
-0.03418769687... |
https://github.com/huggingface/datasets/issues/6567 | Thanks, I hope someone from transformers library addresses this issue.
On Mon, Jan 8, 2024 at 15:29 Albert Villanova del Moral <
***@***.***> wrote:
> I think you are reporting an issue with the transformers library. Note
> this is the repository of the datasets library. I am transferring your
> issue to their... | AttributeError: 'str' object has no attribute 'to' | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer =... | 91 | AttributeError: 'str' object has no attribute 'to'
### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
... | [
-0.021199770271778107,
-0.24903526902198792,
0.0938526839017868,
0.019450323656201363,
0.608004629611969,
0.023181691765785217,
0.6019617319107056,
0.42540740966796875,
-0.18105585873126984,
0.31553900241851807,
0.025728940963745117,
0.1877838373184204,
-0.1374649852514267,
-0.151730209589... |
https://github.com/huggingface/datasets/issues/6567 | @andysingal, I recommend that you open an issue in their repository: https://github.com/huggingface/transformers/issues
I don't have the rights to transfer this issue to their repo. | AttributeError: 'str' object has no attribute 'to' | ### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
8 report_to="wandb")
9
---> 10 trainer =... | 24 | AttributeError: 'str' object has no attribute 'to'
### Describe the bug
```
--------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
[<ipython-input-6-80c6086794e8>](https://localhost:8080/#) in <cell line: 10>()
... | [
0.0757463276386261,
-0.4929776191711426,
0.0982719212770462,
0.06737048178911209,
0.5980656743049622,
-0.017362095415592194,
0.5761122107505798,
0.3846137523651123,
-0.1268261820077896,
0.3702743947505951,
-0.019784869626164436,
0.16745971143245697,
-0.11638899147510529,
-0.063405297696590... |
https://github.com/huggingface/datasets/issues/6566 | I also see the same error and get past it by casting that line to float.
so `for x in obj.detach().cpu().numpy()` becomes `for x in obj.detach().to(torch.float).cpu().numpy()`
I got the idea from [this](https://github.com/kohya-ss/sd-webui-additional-networks/pull/128/files) PR where someone was facing a sim... | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets | ### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_embeddings_fn, batched=True, new_fingerprint=new_fingerprint)
File "/home/mini... | 51 | I train controlnet_sdxl in bf16 datatype, got unsupported ERROR in datasets
### Describe the bug
```
Traceback (most recent call last):
File "train_controlnet_sdxl.py", line 1252, in <module>
main(args)
File "train_controlnet_sdxl.py", line 1013, in main
train_dataset = train_dataset.map(compute_emb... | [
-0.5037292242050171,
-0.22553563117980957,
-0.013837732374668121,
0.18832528591156006,
0.5082089304924011,
0.14244911074638367,
0.488095223903656,
0.20407439768314362,
0.4085872173309326,
0.15492910146713257,
0.07041168957948685,
0.09321921318769455,
-0.4428427815437317,
0.2823957800865173... |
https://github.com/huggingface/datasets/issues/6565 | My current workaround for this issue is to return `None` in the second element and then filter out samples that have `None` in them.
```python
def merge_samples(batch):
if len(batch['a']) == 1:
batch['c'] = [batch['a'][0]]
batch['d'] = [None]
else:
batch['c'] = [batch['a'][0]]
... | `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader | ### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from each dataset in a single batch, with `drop_last_batch=True` to skip the last batch in case it doesn't ha... | 59 | `drop_last_batch=True` for IterableDataset map function is ignored with multiprocessing DataLoader
### Describe the bug
Scenario:
- Interleaving two iterable datasets of unequal lengths (`all_exhausted`), followed by a batch mapping with batch size 2 to effectively merge the two datasets and get a sample from e... | [
-0.29564857482910156,
-0.06441396474838257,
-0.031954340636730194,
0.10420383512973785,
-0.08406788110733032,
-0.06830085813999176,
0.658607006072998,
0.2046472430229187,
0.2394963800907135,
0.4104592502117157,
-0.027381081134080887,
0.37076422572135925,
-0.1618267297744751,
-0.24033448100... |
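The `None`-placeholder workaround above ends with a filtering step; its predicate is just "no field is None", something like the sketch below (in `datasets` you would pass `is_complete` to `.filter()`):

```python
def is_complete(example):
    """True when no field holds a None placeholder."""
    return all(value is not None for value in example.values())

rows = [
    {"c": "a0", "d": "b0"},
    {"c": "a1", "d": None},  # placeholder emitted for the unpaired batch
]
kept = [row for row in rows if is_complete(row)]
assert kept == [{"c": "a0", "d": "b0"}]
```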
https://github.com/huggingface/datasets/issues/6563 | <del>Installing `datasets` from `main` did the trick so I guess it will be fixed in the next release.</del>
NVM https://github.com/huggingface/datasets/blob/d26abadce0b884db32382b92422d8a6aa997d40a/src/datasets/utils/info_utils.py#L5 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | ### Describe the bug
Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_... | 20 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
### Describe the bug
Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py -... | [
-0.05503574013710022,
-0.570375382900238,
-0.049042604863643646,
0.1578817367553711,
0.3562760055065155,
-0.001315489411354065,
0.1583208292722702,
0.356046199798584,
0.1492595672607422,
0.24469077587127686,
-0.12720763683319092,
0.24285480380058289,
-0.30560654401779175,
0.235667914152145... |
https://github.com/huggingface/datasets/issues/6563 | Ha yes I had pinned `tokenizers` to an old version so it downgraded `huggingface_hub`. Note to myself keep HuggingFace modules relatively close together chronologically release wise. | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py) | ### Describe the bug
Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py --model_name cognitivecomputations/dolphin-2.2.1-mistral-7b --dataset_name wasertech/OneOS --load_in_4bit --use_peft --batch_... | 26 | `ImportError`: cannot import name 'insecure_hashlib' from 'huggingface_hub.utils' (.../huggingface_hub/utils/__init__.py)
### Describe the bug
Yep its not [there](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/utils/__init__.py) anymore.
```text
+ python /home/trainer/sft_train.py -... | [
-0.05503574013710022,
-0.570375382900238,
-0.049042604863643646,
0.1578817367553711,
0.3562760055065155,
-0.001315489411354065,
0.1583208292722702,
0.356046199798584,
0.1492595672607422,
0.24469077587127686,
-0.12720763683319092,
0.24285480380058289,
-0.30560654401779175,
0.235667914152145... |
https://github.com/huggingface/datasets/issues/6561 | In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits)
```
---
configs:
- config_name: default
data_files:
- split: train
path: "data/*.csv"
- split: test
path: "holdout/*.csv"
---... | Document YAML configuration with "data_dir" | See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference | 41 | Document YAML configuration with "data_dir"
See https://huggingface.co/datasets/uonlp/CulturaX/discussions/15#6597e83f185db94370d6bf50 for reference
In particular, I would like to have an example of how to replace the following configuration (from https://huggingface.co/docs/hub/datasets-manual-configuration#splits... | [
-0.25297069549560547,
-0.1603081226348877,
0.0808432325720787,
0.018189026042819023,
0.23896105587482452,
0.40161192417144775,
0.35298261046409607,
0.054908160120248795,
0.022724732756614685,
0.16462454199790955,
0.032079875469207764,
0.03859062120318413,
0.06359618902206421,
0.42989626526... |
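A `data_dir`-based version of such a configuration might look like the sketch below (based on the Hub's manual-configuration YAML; the config and directory names are assumptions, and note that `data_dir` points a whole config at one directory, so per-split directories as in the example above still call for `data_files`):

```yaml
---
configs:
- config_name: default
  # All files for this config are taken from this directory;
  # splits are then inferred from the file names inside it.
  data_dir: data
- config_name: holdout
  data_dir: holdout
---
```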
https://github.com/huggingface/datasets/issues/6559 | Hi ! The "allenai--c4" config doesn't exist (this naming schema comes from old versions of `datasets`)
You can load it this way instead:
```python
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.00000-of-01024.j... | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | ### Describe the bug
python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
the script su... | 39 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
### Describe the bug
python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files=... | [
-0.5463299751281738,
0.45560142397880554,
-0.02869529463350773,
0.3143913149833679,
0.14552779495716095,
0.10748502612113953,
0.31146010756492615,
0.3529205918312073,
0.18226969242095947,
0.20120511949062347,
0.2688750624656677,
0.6102460026741028,
-0.1364494264125824,
-0.1731400191783905,... |
https://github.com/huggingface/datasets/issues/6559 | > Hi ! The "allenai--c4" config doesn't exist (this naming schema comes from old versions of `datasets`)
>
> You can load it this way instead:
>
> ```python
> from datasets import load_dataset
> cache_dir = 'path/to/your/cache/directory'
> dataset = load_dataset('allenai/c4', data_files={'train': 'en/c4-train.... | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default'] | ### Describe the bug
python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files={'train': 'en/c4-train.00000-of-01024.json.gz'}, split='train', use_auth_token=False, cache_dir=cache_dir)
```
the script su... | 57 | Latest version 2.16.1, when load dataset error occurs. ValueError: BuilderConfig 'allenai--c4' not found. Available: ['default']
### Describe the bug
python script is:
```
from datasets import load_dataset
cache_dir = 'path/to/your/cache/directory'
dataset = load_dataset('allenai/c4','allenai--c4', data_files=... | [
-0.5519501566886902,
0.45018383860588074,
-0.026410909369587898,
0.3139788806438446,
0.1435113549232483,
0.11366935074329376,
0.30293846130371094,
0.3482833206653595,
0.18301022052764893,
0.20341356098651886,
0.2682672142982483,
0.6109302043914795,
-0.1382366269826889,
-0.18821798264980316... |
https://github.com/huggingface/datasets/issues/6558 | You can add
```python
from PIL import ImageFile
ImageFile.LOAD_TRUNCATED_IMAGES = True
```
after the imports to be able to read truncated images. | OSError: image file is truncated (1 bytes not processed) #28323 | ### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the dataset
26 # filtered_dataset = dataset.filter(contains_number... | 22 | OSError: image file is truncated (1 bytes not processed) #28323
### Describe the bug
```
---------------------------------------------------------------------------
OSError Traceback (most recent call last)
Cell In[24], line 28
23 return example
25 # Filter the da... | [
-0.3276703357696533,
-0.31357821822166443,
-0.26014402508735657,
0.10543324053287506,
0.24383752048015594,
0.03358208388090134,
0.23733654618263245,
0.46688112616539,
0.08176564425230026,
0.19741684198379517,
0.054364994168281555,
0.06374165415763855,
-0.06677894294261932,
0.07968498766422... |
https://github.com/huggingface/datasets/issues/6554 | I don't think this bug is a thing ? Do you have some code that leads to this issue ? | Parquet exports are used even if revision is passed | We should not use Parquet exports if `revision` is passed.
I think this is a regression. | 20 | Parquet exports are used even if revision is passed
We should not use Parquet exports if `revision` is passed.
I think this is a regression.
I don't think this bug is a thing ? Do you have some code that leads to this issue ? | [
0.00247994065284729,
-0.33710503578186035,
-0.11629632860422134,
0.24757957458496094,
0.059504974633455276,
-0.4397578537464142,
0.18885020911693573,
0.13291427493095398,
-0.1428072452545166,
0.35665690898895264,
0.3908851742744446,
0.49041202664375305,
0.31143370270729065,
0.0302693918347... |
https://github.com/huggingface/datasets/issues/6552 | This bug comes from the `huggingface_hub` library, see: https://github.com/huggingface/huggingface_hub/issues/1952
A fix is provided at https://github.com/huggingface/huggingface_hub/pull/1953. Feel free to install `huggingface_hub` from this PR, or wait for it to be merged and the new version of `huggingface_hub` t... | Loading a dataset from Google Colab hangs at "Resolving data files". | ### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:

It is happening when the `_get_origin_metadata` definition is invoked:
```python
d... | 39 | Loading a dataset from Google Colab hangs at "Resolving data files".
### Describe the bug
Hello,
I'm trying to load a dataset from Google Colab but the process hangs at `Resolving data files`:

It is happening... | [
-0.19339069724082947,
-0.11780481040477753,
0.030356813222169876,
0.40891021490097046,
0.09424534440040588,
-0.0761006623506546,
0.26371243596076965,
0.04111425578594208,
0.033734604716300964,
0.24775028228759766,
-0.11173553764820099,
0.4329826235771179,
0.03821532055735588,
-0.0427216887... |
https://github.com/huggingface/datasets/issues/6549 | Maybe we can add a helper message like `Maybe try again using "hf://path/without/resolve"` if the path contains `/resolve/` ?
e.g.
```
FileNotFoundError: Unable to find 'hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json'
It looks like you used parts of the URL of the file fro... | Loading from hf hub with clearer error message | ### Feature request
Shouldn't this kinda work ?
```
Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json")
```
I got an error
```
File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.py:380, in resolve_pattern(pattern, base_path, al... | 86 | Loading from hf hub with clearer error message
### Feature request
Shouldn't this kinda work ?
```
Dataset.from_json("hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json")
```
I got an error
```
File ~/miniconda3/envs/datatrove/lib/python3.10/site-packages/datasets/data_files.... | [
0.2066141963005066,
-0.25089403986930847,
0.036134351044893265,
0.32898765802383423,
0.2908268868923187,
-0.08821091055870056,
0.07552556693553925,
0.5636864900588989,
0.05773176997900009,
0.033721961081027985,
-0.17770984768867493,
0.16017961502075195,
0.1350436508655548,
0.17397338151931... |
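The helper-message idea sketched in this comment could look like the following (a hypothetical function, not part of the `datasets` API):

```python
# Sketch: detect a "/resolve/<revision>/" segment in an hf:// path and build
# the suggested plain path for the error-message hint described above.
import re

def suggest_plain_path(path: str):
    """Return the hf:// path without its /resolve/<revision>/ segment, or None."""
    m = re.match(r"^(hf://datasets/[^/]+/[^/]+)/resolve/[^/]+/(.+)$", path)
    if m is None:
        return None
    return f"{m.group(1)}/{m.group(2)}"

hint = suggest_plain_path(
    "hf://datasets/HuggingFaceTB/eval_data/resolve/main/eval_data_context_and_answers.json"
)
```

A loader could append such a hint to the `FileNotFoundError` whenever the function returns a non-`None` suggestion.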
https://github.com/huggingface/datasets/issues/6548 | It looks like a transient DNS issue. It should work fine now if you try again.
There is no parameter in load_dataset to skip failed downloads. In your case it would have skipped every single subsequent download until the DNS issue was resolved anyway. | Skip if a dataset has issues | ### Describe the bug
Hello everyone,
I'm using **load_datasets** from **huggingface** to download the datasets and I'm facing an issue, the download starts but it reaches some state and then fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikipedia/resolve/4cb9b0d719291f1a10... | 44 | Skip if a dataset has issues
### Describe the bug
Hello everyone,
I'm using **load_datasets** from **huggingface** to download the datasets and I'm facing an issue, the download starts but it reaches some state and then fails with the following error:
Couldn't reach https://huggingface.co/datasets/wikimedia/wikip... | [
-0.13599760830402374,
-0.5338322520256042,
0.022321894764900208,
0.3548619747161865,
0.3999386727809906,
0.1268061101436615,
-0.14067818224430084,
0.225484237074852,
0.08940277248620987,
0.32972007989883423,
0.442242830991745,
0.13631854951381683,
0.05722906067967415,
-0.06314912438392639,... |
https://github.com/huggingface/datasets/issues/6542 | Hi ! We now recommend using the `wikimedia/wikipedia` dataset, can you try loading this one instead ?
```python
wiki_dataset = load_dataset("wikimedia/wikipedia", "20231101.en")
``` | Datasets : wikipedia 20220301.en error | ### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1.I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurre... | 23 | Datasets : wikipedia 20220301.en error
### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1.I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia",... | [
-0.018569722771644592,
0.21296685934066772,
-0.01566591113805771,
0.33132097125053406,
0.11043636500835419,
0.19831033051013947,
0.29937219619750977,
0.35208413004875183,
0.21328392624855042,
0.07991998642683029,
0.12571120262145996,
0.11823579668998718,
0.016085289418697357,
-0.0669093132... |
https://github.com/huggingface/datasets/issues/6542 | This bug has been fixed in `2.16.1` thanks to https://github.com/huggingface/datasets/pull/6544, feel free to update `datasets` and re-run your code :)
```
pip install -U datasets
``` | Datasets : wikipedia 20220301.en error | ### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1.I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia", "20220301.en")
```
An exception occurre... | 26 | Datasets : wikipedia 20220301.en error
### Describe the bug
When I used load_dataset to download this data set, the following error occurred. The main problem was that the target data did not exist.
### Steps to reproduce the bug
1.I tried downloading directly.
```python
wiki_dataset = load_dataset("wikipedia",... | [
-0.08289985358715057,
0.20689281821250916,
-0.015040706843137741,
0.33663633465766907,
0.11554808169603348,
0.1788325011730194,
0.28936833143234253,
0.349710077047348,
0.2294771373271942,
0.09866957366466522,
0.10366464406251907,
0.15296542644500732,
0.018877964466810226,
-0.00778213143348... |
https://github.com/huggingface/datasets/issues/6541 | This is a problem with your environment. You should be able to fix it by upgrading `numpy` based on [this](https://github.com/numpy/numpy/issues/23570) issue. | Dataset not loading successfully. | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
... | 21 | Dataset not loading successfully.
### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to... | [
-0.23438847064971924,
-0.3350293040275574,
0.06285360455513,
0.4334274232387543,
0.47662264108657837,
-0.10106334835290909,
0.43459445238113403,
0.1095186322927475,
0.12539657950401306,
0.29031631350517273,
-0.3058722913265228,
0.24882371723651886,
-0.28929731249809265,
-0.0038884356617927... |
https://github.com/huggingface/datasets/issues/6541 | Then, this shouldn't throw an error on your machine:
```python
import numpy
numpy._no_nep50_warning
```
If it does, run `python -m pip install numpy` to ensure the correct `pip` is used for the package installation. | Dataset not loading successfully. | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
... | 34 | Dataset not loading successfully.
### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to... | [
-0.23438847064971924,
-0.3350293040275574,
0.06285360455513,
0.4334274232387543,
0.47662264108657837,
-0.10106334835290909,
0.43459445238113403,
0.1095186322927475,
0.12539657950401306,
0.29031631350517273,
-0.3058722913265228,
0.24882371723651886,
-0.28929731249809265,
-0.0038884356617927... |
https://github.com/huggingface/datasets/issues/6541 | Your suggestion to run `python -m pip install numpy` proved to be successful, and my issue has been resolved. I am grateful for your assistance, @mariosasko | Dataset not loading successfully. | ### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to reproduce the bug
## Reproduction
... | 26 | Dataset not loading successfully.
### Describe the bug
When I run the code below, it shows this error: AttributeError: module 'numpy' has no attribute '_no_nep50_warning'
I also added this issue in transformers library please check out: [link](https://github.com/huggingface/transformers/issues/28099)
### Steps to... | [
-0.23438847064971924,
-0.3350293040275574,
0.06285360455513,
0.4334274232387543,
0.47662264108657837,
-0.10106334835290909,
0.43459445238113403,
0.1095186322927475,
0.12539657950401306,
0.29031631350517273,
-0.3058722913265228,
0.24882371723651886,
-0.28929731249809265,
-0.0038884356617927... |
https://github.com/huggingface/datasets/issues/6540 | Concatenating datasets doesn't create any indices mapping - so flattening indices is not needed (unless you shuffle the dataset).
Can you share the snippet of code you are using to merge your datasets and save them to disk ? | Extreme inefficiency for `save_to_disk` when merging datasets | ### Describe the bug
Hi, I tried to merge in total 22M sequences of data, where each sequence is of maximum length 2000. I found that merging these datasets and then `save_to_disk` is extremely slow because of flattening the indices. Wondering if you have any suggestions or guidance on this. Thank you very much!
###... | 39 | Extreme inefficiency for `save_to_disk` when merging datasets
### Describe the bug
Hi, I tried to merge in total 22M sequences of data, where each sequence is of maximum length 2000. I found that merging these datasets and then `save_to_disk` is extremely slow because of flattening the indices. Wondering if you have... | [
-0.1920756697654724,
-0.4081778824329376,
0.06094733253121376,
0.3518158197402954,
-0.021621473133563995,
0.42556124925613403,
-0.05339167267084122,
0.3826778531074524,
-0.1905861794948578,
-0.006987802684307098,
0.09002067148685455,
0.11778099834918976,
-0.2164076864719391,
0.049476526677... |
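The distinction made in this answer — contiguous rows after concatenation versus an indices mapping after shuffling — can be illustrated with a pure-Python stand-in (an analogy only, not the actual Arrow implementation):

```python
# Concatenation keeps rows contiguous, so saving can stream them as-is.
# Shuffling only records an indices mapping; saving then has to gather rows
# one by one in mapping order ("flattening the indices"), which is what makes
# save_to_disk slow after a shuffle.
import random

table_a = list(range(5))
table_b = list(range(100, 105))

# After concatenate_datasets-style merging: no indices mapping yet.
concatenated = table_a + table_b

# After a shuffle: rows stay put, only the mapping is stored.
indices = list(range(len(concatenated)))
random.Random(0).shuffle(indices)

# Flattening materializes the rows in mapping order before writing.
flattened = [concatenated[i] for i in indices]
```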
https://github.com/huggingface/datasets/issues/6538 | Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 25 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle? | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 33 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > Hi ! Are you sure you have `datasets` 2.16 ? I just checked and on 2.16 I can run `from datasets.arrow_writer import SchemaInferenceError` without error
Yes, I am sure
```
!pip show datasets
Name: datasets
Version: 2.16.1
Summary: HuggingFace community-driven open-source library of datasets
Home-page: http... | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 76 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > I have the same issue - using with datasets version 2.16.1. Also this is on a kaggle notebook - other people with the same issue also seem to be having it on kaggle?
Don't know about other people. But I am having this issue whose solution I can't find anywhere. And this issue still persists. | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 56 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > I have the same issue now and didn't have this problem around 2 weeks ago.
Same here | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 18 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.
| ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 23 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > I was having the same issue but the datasets version was 2.6.1, after I updated it to latest(2.16), error is gone while importing.
I also have datasets version 2.16, but the error is still there. | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 36 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > > Can you try re-installing `datasets` ?
>
> I tried re-installing. Still getting the same error.
In kaggle I used:
- `%pip install -U datasets`
and then restarted runtime and then everything works fine. | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 36 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > > > Can you try re-installing `datasets` ?
> >
> >
> > I tried re-installing. Still getting the same error.
>
> In kaggle I used:
>
> * `%pip install -U datasets`
> and then restarted runtime and then everything works fine.
Yes, this is working. When I restart the runtime after installing packages, i... | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 78 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > > > > Can you try re-installing `datasets` ?
> > >
> > >
> > > I tried re-installing. Still getting the same error.
> >
> >
> > In kaggle I used:
> >
> > * `%pip install -U datasets`
> > and then restarted runtime and then everything works fine.
>
> Yes, this is working. When I restart the runtime ... | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 98 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | > > > > > Can you try re-installing `datasets` ?
> > > >
> > > >
> > > > I tried re-installing. Still getting the same error.
> > >
> > >
> > > In kaggle I used:
> > >
> > > * `%pip install -U datasets`
> > > and then restarted runtime and then everything works fine.
> >
> >
> > Yes, this is workin... | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 157 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6538 | Closing this issue as it is not related to the datasets library; rather, it's linked to platform-related issues. | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py) | ### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transformers import (
AutoModelForCausalLM,
AutoTokenizer,
BitsAndBytesConfig,
HfArgumentParser,
TrainingArguments,
pipeline,
... | 18 | ImportError: cannot import name 'SchemaInferenceError' from 'datasets.arrow_writer' (/opt/conda/lib/python3.10/site-packages/datasets/arrow_writer.py)
### Describe the bug
While importing from packages getting the error
Code:
```
import os
import torch
from datasets import load_dataset, Dataset
from transf... | [
-0.256902813911438,
-0.06662493944168091,
-0.05580161139369011,
0.6121095418930054,
0.2748967707157135,
0.06340034306049347,
0.24404215812683105,
0.24720130860805511,
-0.019955575466156006,
0.11987867951393127,
-0.13812042772769928,
0.38939008116722107,
-0.1408829241991043,
0.1007615476846... |
https://github.com/huggingface/datasets/issues/6537 | Conceptually, we can use xarray to load the netCDF file, then xarray -> pandas -> pyarrow. | Adding support for netCDF (*.nc) files | ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When uploading *.nc files onto Huggingface Hub throu... | 16 | Adding support for netCDF (*.nc) files
### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When upload... | [
-0.38227832317352295,
-0.14507780969142914,
-0.011451058089733124,
0.0008691772818565369,
-0.07568608224391937,
0.011976070702075958,
-0.2338274121284485,
0.40313783288002014,
-0.12558530271053314,
0.2797532379627228,
-0.4577072858810425,
0.04672252759337425,
-0.45220503211021423,
0.532621... |
https://github.com/huggingface/datasets/issues/6537 | I'd still need to verify that such a conversion would be lossless, especially for multi-dimensional data. | Adding support for netCDF (*.nc) files | ### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When uploading *.nc files onto Huggingface Hub throu... | 16 | Adding support for netCDF (*.nc) files
### Feature request
netCDF (*.nc) is a file format for storing multidimensional scientific data, which is used by packages like `xarray` (labelled multi-dimensional arrays in Python). It would be nice to have native support for netCDF in `datasets`.
### Motivation
When upload... | [
-0.42485103011131287,
-0.14924445748329163,
-0.01656416431069374,
-0.053587038069963455,
-0.08428335934877396,
0.03142045438289642,
-0.2771828770637512,
0.3843265771865845,
-0.10386538505554199,
0.3364311754703522,
-0.42337337136268616,
0.005743648856878281,
-0.5121448636054993,
0.57444715... |
https://github.com/huggingface/datasets/issues/6638 | Looks like it works with latest datasets repository
```
- `datasets` version: 2.16.2.dev0
- Platform: Linux-5.15.0-92-generic-x86_64-with-glibc2.29
- Python version: 3.8.10
- `huggingface_hub` version: 0.20.3
- PyArrow version: 15.0.0
- Pandas version: 2.0.1
- `fsspec` version: 2023.10.0
```
Could you expla... | Cannot download wmt16 dataset | ### Describe the bug
As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative?
```
Downloading data files: 0%| | 0/4 [00:00<?, ?it/s]Tra... | 57 | Cannot download wmt16 dataset
### Describe the bug
As of this morning (PST) 2/1/2024, seeing the wmt16 dataset is missing from opus , could you suggest an alternative?
```
Downloading data files: 0%| ... | [
-0.36052435636520386,
0.317926287651062,
-0.002050556242465973,
0.38708245754241943,
0.4038749039173126,
0.11385878920555115,
0.2532888948917389,
0.20784994959831238,
-0.027222130447626114,
0.21313440799713135,
0.026451997458934784,
0.03987692669034004,
-0.3029175102710724,
0.0407630652189... |
https://github.com/huggingface/datasets/issues/6624 | Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it. | How to download the laion-coco dataset | The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco | 18 | How to download the laion-coco dataset
The laion coco dataset is not available now. How to download it
https://huggingface.co/datasets/laion/laion-coco
Hi, this dataset has been disabled by the authors, so unfortunately it's no longer possible to download it. | [
-0.15687450766563416,
-0.23160290718078613,
-0.20348447561264038,
0.2225615680217743,
-0.014091560617089272,
-0.005569621920585632,
0.1532430648803711,
0.07347936183214188,
-0.3041417598724365,
0.11177970468997955,
-0.3511919379234314,
0.1720883846282959,
0.03486033156514168,
0.37087300419... |
https://github.com/huggingface/datasets/issues/6623 | @mariosasko, @lhoestq, @albertvillanova
hey guys! can anyone help? or can you guys suggest who can help with this? | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 18 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | Hi !
1. When the dataset is running out of examples, the last batches received by the GPU can be incomplete or empty/missing. We haven't yet implemented a way to ignore the last batch. It might require the datasets to provide the number of examples per shard though, so that we can know when to stop.
2. Samplers are... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 128 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | > if dataset.n_shards % world_size != 0 then all the nodes will read/stream the full dataset in order (possibly reading/streaming the same data multiple times), BUT will only yield one example out of world_size so that each example goes to one exactly one GPU.
considering there's just 1 shard and 2 worker nodes, do ... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 73 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | Yes both nodes will stream from the 1 shard, but each node will skip half of the examples. This way in total each example is seen once and exactly once during you distributed training.
Though in terms of I/O, the dataset is effectively read/streamed twice.
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 45 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | what if the number of samples in that shard % num_nodes != 0? it will break/get stuck? or is the data repeated in that case for gradient sync? | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 28 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6623 | In the case one at least one of the noes will get an empty/incomplete batch. The data is not repeated in that case. If the training loop doesn't take this into account it can lead to unexpected behaviors indeed.
In the future we'd like to add a feature that would allow the nodes to ignore the last batch, this way al... | streaming datasets doesn't work properly with multi-node | ### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to ensure it doesn’t get repeated. And since it’s already splitt... | 68 | streaming datasets doesn't work properly with multi-node
### Feature request
Let’s say I have a dataset with 5 samples with values [1, 2, 3, 4, 5], with 2 GPUs (for DDP) and batch size of 2. This dataset is an `IterableDataset` since I am streaming it.
Now I split the dataset using `split_dataset_by_node` to en... | [
-0.5885023474693298,
-0.2717367708683014,
-0.022129017859697342,
0.05329866707324982,
-0.05288459733128548,
-0.24468931555747986,
0.28833165764808655,
-0.16213496029376984,
-0.05456581711769104,
0.17409411072731018,
0.13106313347816467,
0.20888292789459229,
-0.21015878021717072,
0.17105513... |
https://github.com/huggingface/datasets/issues/6618 | Hi! Can you please share the error's stack trace so we can see where it comes from? | While importing load_dataset from datasets | ### Describe the bug
cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' this is the error i received
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5 | 17 | While importing load_dataset from datasets
### Describe the bug
cannot import name 'DEFAULT_CIPHERS' from 'urllib3.util.ssl_' this is the error i received
### Steps to reproduce the bug
from datasets import load_dataset
### Expected behavior
No errors
### Environment info
python 3.11.5
Hi! Can you please sh... | [
0.02553679049015045,
-0.0960899293422699,
-0.01202813908457756,
0.30808398127555847,
0.20100738108158112,
0.12118995189666748,
0.3076993227005005,
-0.02027871645987034,
0.16383129358291626,
0.16387607157230377,
0.07356870919466019,
0.20870698988437653,
-0.0453374907374382,
0.22486478090286... |
https://github.com/huggingface/datasets/issues/6612 | Hi ! We recently updated `cnn_dailymail` and now `datasets>=2.14` is needed to load it.
You can update `datasets` with
```
pip install -U datasets
``` | cnn_dailymail repeats itself | ### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train split it says 861339.
Also I che... | 25 | cnn_dailymail repeats itself
### Describe the bug
When I try to load `cnn_dailymail` dataset, it takes longer than usual and when I checked the dataset it's 3x bigger than it's supposed to be.
Check https://huggingface.co/datasets/cnn_dailymail: it says 287k rows for train. But when I check length of train spli... | [
0.07091876864433289,
-0.30874955654144287,
0.029564738273620605,
0.6589643955230713,
-0.008779346942901611,
0.07012949883937836,
0.4404844641685486,
-0.0358312763273716,
-0.12530618906021118,
0.07747453451156616,
0.24863269925117493,
0.3104359209537506,
-0.21459093689918518,
0.297971189022... |
https://github.com/huggingface/datasets/issues/6610 | Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:
```python
ais_dataset = ais_dataset.cast_column("my_labeled_bbox", {"bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"])})
``` | cast_column to Sequence(subfeatures_dict) has err | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
... | 25 | cast_column to Sequence(subfeatures_dict) has err
### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = a... | [
-0.06808836758136749,
-0.2487647533416748,
-0.06157679110765457,
0.015728870406746864,
0.5308369398117065,
0.24775570631027222,
0.4921039938926697,
0.2797868847846985,
0.1971459835767746,
-0.10834231972694397,
0.1848737597465515,
0.22724923491477966,
-0.12083391845226288,
0.378632396459579... |
https://github.com/huggingface/datasets/issues/6610 | > Hi! You are passing the wrong feature type to `cast_column`. This is the fixed call:
>
> ```python
> ais_dataset = ais_dataset.cast_column("my_labeled_bbox", {"bbox": Sequence(Value(dtype="int64")), "label": ClassLabel(names=["cat", "dog"])})
> ```
thanks | cast_column to Sequence(subfeatures_dict) has err | ### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = ais_dataset["train"]
def add_class(example):
... | 31 | cast_column to Sequence(subfeatures_dict) has err
### Describe the bug
I am working with the following demo code:
```
from datasets import load_dataset
from datasets.features import Sequence, Value, ClassLabel, Features
ais_dataset = load_dataset("/data/ryan.gao/ais_dataset_cache/raw/1978/")
ais_dataset = a... | [
-0.06808836758136749,
-0.2487647533416748,
-0.06157679110765457,
0.015728870406746864,
0.5308369398117065,
0.24775570631027222,
0.4921039938926697,
0.2797868847846985,
0.1971459835767746,
-0.10834231972694397,
0.1848737597465515,
0.22724923491477966,
-0.12083391845226288,
0.378632396459579... |
https://github.com/huggingface/datasets/issues/6609 | I opened https://github.com/huggingface/datasets/pull/6632 to fix this issue. Once it's merged we'll do a new release of `datasets` | Wrong path for cache directory in offline mode | ### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the files and caches them normally.
Nevertheless, ... | 17 | Wrong path for cache directory in offline mode
### Describe the bug
Dear huggingfacers,
I'm trying to use a subset of the-stack dataset. When I run the command the first time
```
dataset = load_dataset(
path='bigcode/the-stack',
data_dir='data/fortran',
split='train' )
```
It downloads the ... | [
-0.07495970278978348,
-0.1799859255552292,
0.013488173484802246,
0.40448620915412903,
0.08871869742870331,
-0.08352793753147125,
0.22764024138450623,
-0.0113661615177989,
0.158774733543396,
0.026804406195878983,
0.08159554749727249,
0.05718464031815529,
0.169479638338089,
-0.03335284814238... |
https://github.com/huggingface/datasets/issues/6604 | I don't think the PR fixes the root cause, since it still relies on the `random` library which will often have its seed fixed. I think the builtin `uuid.uuid4()` is a better choice: https://docs.python.org/3/library/uuid.html | Transform fingerprint collisions due to setting fixed random seed | ### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356). This causes collisions when the training code sets a fixed random... | 34 | Transform fingerprint collisions due to setting fixed random seed
### Describe the bug
The transform fingerprinting logic relies on the `random` library for random bits when the function is not hashable (e.g. bound methods as used in `trl`: https://github.com/huggingface/trl/blob/main/trl/trainer/dpo_trainer.py#L356... | [
-0.1706395000219345,
-0.5525231957435608,
0.13183769583702087,
0.25794491171836853,
0.42352157831192017,
-0.03414187580347061,
0.15525738894939423,
0.3210907578468323,
-0.301504909992218,
0.17565152049064636,
-0.048530951142311096,
-0.053445473313331604,
-0.02663678303360939,
0.15088921785... |
https://github.com/huggingface/datasets/issues/6603 | ```
ds = datasets.Dataset.from_dict(dict(a=[i for i in range(100)]))
ds.map(lambda item: dict(b=item['a'] * 2), cache_file_name="/tmp/whatever-fn") # this worked
ds.map(lambda item: dict(b=item['a'] * 2), cache_file_name="/tmp/whatever-folder/filename") # this failed
ds.map(lambda item: dict(b=item['a'] * 2), cache... | datasets map `cache_file_name` does not work | ### Describe the bug
In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it crashes
### Expected behavior
It will tell you t... | 71 | datasets map `cache_file_name` does not work
### Describe the bug
In the documentation `datasets.Dataset.map` arg `cache_file_name` is said to be a string, but it doesn't work.
### Steps to reproduce the bug
1. pick a dataset
2. write a map function
3. do `ds.map(..., cache_file_name='some_filename')`
4. it cra... | [
-0.17607736587524414,
-0.22184181213378906,
0.028800055384635925,
0.3735642731189728,
0.35459649562835693,
0.26312312483787537,
0.2672741711139679,
0.3279056251049042,
0.2868582606315613,
0.11249053478240967,
-0.24395491182804108,
0.40131598711013794,
-0.07068009674549103,
-0.2143307328224... |
https://github.com/huggingface/datasets/issues/6600 | Hi! Parquet is the only format that supports complex/nested features such as `Translation`. So, this should work:
```python
test_dataset = load_dataset("opus100", name="en-fr", split="test")
# Save with .to_parquet()
test_parquet_path = "try_testset_save.parquet"
test_dataset.to_parquet(test_parquet_path)
# L... | Loading CSV exported dataset has unexpected format | ### Describe the bug
I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected.
### Steps to reproduce the bug
The documentation I've mainly cons... | 44 | Loading CSV exported dataset has unexpected format
### Describe the bug
I wanted to be able to save a HF dataset for translations and load it again in another script, but I'm a bit confused with the documentation and the result I've got so I'm opening this issue to ask if this behavior is as expected.
### Steps to ... | [
0.09366397559642792,
-0.301914244890213,
0.01883000135421753,
0.3817500174045563,
0.24329328536987305,
0.04965081065893173,
0.315578430891037,
0.07877084612846375,
0.13491548597812653,
0.09632612019777298,
-0.24815884232521057,
-0.04743877053260803,
0.17620520293712616,
0.2116120159626007,... |
https://github.com/huggingface/datasets/issues/6599 | Hi! Non-generic data processing is out of this library's scope, so it's downstream libraries/users' responsibility to implement such logic. | Easy way to segment into 30s snippets given an m4a file and a vtt file | ### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's easy to create a vtt file from an audio file. If there could be auto-segment... | 19 | Easy way to segment into 30s snippets given an m4a file and a vtt file
### Feature request
Uploading datasets is straightforward thanks to the ability to push Audio to hub. However, it would be nice if the data (text and audio) could be segmented when being pushed (if not possible already).
### Motivation
It's ea... | [
-0.650300920009613,
-0.3339020013809204,
-0.1150285005569458,
-0.06848037242889404,
0.23705556988716125,
-0.3140990436077118,
0.5622481107711792,
0.4451252222061157,
-0.5105161666870117,
0.2401493936777115,
-0.3567483723163605,
-0.044706277549266815,
-0.3566725254058838,
0.3612205386161804... |
https://github.com/huggingface/datasets/issues/6598 | I am facing similar issue while reading a csv file from s3. Wondering if somebody has found a workaround. | Unexpected keyword argument 'hf' when downloading CSV dataset from S3 | ### Describe the bug
I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://stackoverflow.com/questions/77596258/aws-issue-load-dataset-from-s3-fails-w... | 19 | Unexpected keyword argument 'hf' when downloading CSV dataset from S3
### Describe the bug
I receive this error message when using `load_dataset` with "csv" path and `dataset_files=s3://...`:
```
TypeError: Session.__init__() got an unexpected keyword argument 'hf'
```
I found a similar issue here: https://sta... | [
0.05472029000520706,
-0.3772714138031006,
0.010761946439743042,
0.1927299201488495,
0.10700391232967377,
-0.1065262109041214,
0.39194580912590027,
0.10678645968437195,
0.04045405983924866,
-0.0982537716627121,
0.05330744385719299,
0.23410926759243011,
0.11373177170753479,
0.240796178579330... |
https://github.com/huggingface/datasets/issues/6597 | Also note the information in the docstring: https://github.com/huggingface/datasets/blob/9d6d16117a30ba345b0236407975f701c5b288d4/src/datasets/dataset_dict.py#L1582-L1585
> Also accepts `<dataset_name>`, which will default to the namespace of the logged-in user.
This behavior was "reverted" by the PR:
- #6519
... | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace | While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
config_name="default",
commit_message="Convert dataset to Parquet",
commit_descriptio... | 103 | Dataset.push_to_hub of a canonical dataset creates an additional dataset under the user namespace
While using `Dataset.push_to_hub` of a canonical dataset, an additional dataset was created under my user namespace.
## Steps to reproduce the bug
The command:
```python
commit_info = ds.push_to_hub(
"caner",
... | [
-0.0006193593144416809,
-0.1821044683456421,
0.02426648885011673,
0.128635436296463,
0.1866663694381714,
-0.11163036525249481,
0.22518958151340485,
0.07185616344213486,
0.0009449049830436707,
0.29711079597473145,
-0.012589387595653534,
0.3734179735183716,
-0.135340616106987,
0.054037660360... |