html_url: string, 48–51 chars
title: string, 5–268 chars
comments: string, 70–51.8k chars
body: string, 0–29.8k chars
comment_length: int64, 16–1.52k
text: string, 164–54.1k chars
embeddings: list
https://github.com/huggingface/datasets/issues/2820
Downloading “reddit” dataset keeps timing out.
Oh, then could you try deleting the parent directory `1ec12301abba4daa60eb3a90e53529b5b173296b22dc3bef3186e205c75e594c` instead ? This way the download manager will know that it has to uncompress the data again
## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduce the bug ```python from datasets import load_d...
27
Downloading “reddit” dataset keeps timing out. ## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduc...
[ -0.3229553699, -0.1548748016, -0.0958516523, 0.2122768909, 0.1727839112, 0.2089084983, 0.031165788, 0.2200944275, -0.0430058129, -0.0564695299, -0.0722840279, 0.1025689021, 0.2630367875, -0.0284358636, -0.1015664488, 0.2591444552, 0.075186044, -0.1532896757, -0.1340025365, 0.09...
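The advice above (delete the parent directory named after the download hash so the download manager uncompresses the data again) can be sketched with a small stdlib helper. The cache layout and the helper name are assumptions for illustration, not part of the `datasets` API.

```python
import shutil
from pathlib import Path

def remove_extracted(cache_root: Path, digest: str) -> bool:
    """Delete the extracted-archive directory for `digest`, if present.

    Returns True when a directory was removed, meaning the download
    manager will have to uncompress the data again on the next load.
    """
    target = cache_root / digest
    if target.is_dir():
        shutil.rmtree(target)
        return True
    return False
```

With the default cache layout this would be called against something like `Path.home() / ".cache/huggingface/datasets/downloads/extracted"` and the hash directory name quoted in the comment.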
https://github.com/huggingface/datasets/issues/2820
Downloading “reddit” dataset keeps timing out.
It seems to have worked. It only took like 20 minutes! I think the extra timeout length did the trick! One thing is that it downloaded a total of 41 GB instead of 20 GB, but at least it finished.
## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduce the bug ```python from datasets import load_d...
37
Downloading “reddit” dataset keeps timing out. ## Describe the bug A clear and concise description of what the bug is. Every time I try to download the reddit dataset, it times out before finishing and I have to try again. There is some timeout error that I will post once it happens again. ## Steps to reproduc...
[ -0.3443332911, -0.1409981251, -0.0883191377, 0.1077194437, 0.1344687492, 0.141922012, 0.0290934015, 0.2305130363, -0.0500457995, -0.1488426179, -0.0651007369, 0.1319429129, 0.2865750492, 0.0448666215, -0.055188667, 0.2586466372, 0.0915641338, -0.2118441612, -0.0709678978, 0.064...
https://github.com/huggingface/datasets/issues/2818
cannot load data from my local path
Hi ! The `data_files` parameter must be a string, a list/tuple or a python dict. Can you check the type of your `config.train_path` please ? Or use `data_files=str(config.train_path)` ?
## Describe the bug I just want to directly load data from my local path, but I found a bug. I compared it with pandas to verify that my local path is real. Here is my code ```python3 # print my local path print(config.train_path) # read data and print data length tarin=pd.read_csv(config.train_path) print(len(tari...
29
cannot load data from my local path ## Describe the bug I just want to directly load data from my local path, but I found a bug. I compared it with pandas to verify that my local path is real. Here is my code ```python3 # print my local path print(config.train_path) # read data and print data length tarin=pd.read...
[ -0.0574387684, -0.0158846527, 0.0742203966, 0.5506054163, 0.3027438521, -0.1935664415, 0.4147057533, 0.076913625, 0.0825466365, 0.0638931841, -0.0089266263, 0.5863606334, 0.0122842751, -0.0284531452, 0.1138427705, -0.0146792252, 0.1873941272, 0.0855282247, -0.1241159886, -0.256...
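The type check suggested above can be made mechanical with a small helper that coerces path-like values into the types `data_files` accepts (a string, a list/tuple, or a dict). The helper name is hypothetical; this is just a sketch of the `data_files=str(config.train_path)` fix.

```python
from pathlib import Path

def coerce_data_files(data_files):
    """Coerce a path-like `data_files` value into the types the
    `load_dataset` API accepts: str, list/tuple of str, or dict.

    A pathlib.Path (e.g. config.train_path) is a common culprit:
    wrapping it in str() is the fix suggested in the comment above.
    """
    if isinstance(data_files, Path):
        return str(data_files)
    if isinstance(data_files, (list, tuple)):
        return [str(p) for p in data_files]
    if isinstance(data_files, dict):
        return {split: coerce_data_files(v) for split, v in data_files.items()}
    return data_files
```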
https://github.com/huggingface/datasets/issues/2813
Remove compression from xopen
After discussing with @lhoestq, a reasonable alternative: - `download_manager.extract(urlpath)` adds prefixes to `urlpath` in the same way as `fsspec` does for protocols, but we implement custom prefixes for all compression formats: `bz2::http://domain.org/filename.bz2` - `xopen` parses the `urlpath` and extracts...
We implemented support for streaming with 2 requirements: - transparent use for the end user: just needs to pass the parameter `streaming=True` - no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loading scripts should not involve ...
105
Remove compression from xopen We implemented support for streaming with 2 requirements: - transparent use for the end user: just needs to pass the parameter `streaming=True` - no additional work for the contributors: previous loading scripts should also work in streaming mode with no (or minor) changes; and new loa...
[ -0.4191689491, -0.0069031268, 0.0059125833, 0.0238577276, 0.1248178184, -0.3256881535, -0.2292777896, 0.4713353515, 0.0886647403, 0.2409463227, -0.2471540272, 0.4001291692, 0.081967026, 0.137193352, -0.0226640757, -0.1370153874, -0.0278308615, 0.1331308782, 0.0882369801, -0.005...
https://github.com/huggingface/datasets/issues/2799
Loading JSON throws ArrowNotImplementedError
Hi @lewtun, thanks for reporting. Apparently, `pyarrow.json` tries to cast timestamp-like fields in your JSON file to pyarrow timestamp type, and it fails with `ArrowNotImplementedError`. I will investigate if there is a way to tell pyarrow not to try that timestamp casting.
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
42
Loading JSON throws ArrowNotImplementedError ## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no ...
[ -0.0681869462, 0.1841620952, 0.0364153907, 0.378070116, 0.2355375141, -0.0062702075, 0.4846678972, 0.3670630455, 0.4180774391, -0.0028389022, 0.0605549589, 0.5898873806, 0.0276202224, -0.1488581598, -0.2172113806, -0.1838726699, 0.0313559324, 0.1905161291, 0.1274460107, 0.02858...
https://github.com/huggingface/datasets/issues/2799
Loading JSON throws ArrowNotImplementedError
I think the issue is more complex than that... I just took one of your JSON lines and pyarrow.json read it without problem.
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
23
Loading JSON throws ArrowNotImplementedError ## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no ...
[ -0.0681869462, 0.1841620952, 0.0364153907, 0.378070116, 0.2355375141, -0.0062702075, 0.4846678972, 0.3670630455, 0.4180774391, -0.0028389022, 0.0605549589, 0.5898873806, 0.0276202224, -0.1488581598, -0.2172113806, -0.1838726699, 0.0313559324, 0.1905161291, 0.1274460107, 0.02858...
https://github.com/huggingface/datasets/issues/2799
Loading JSON throws ArrowNotImplementedError
> I just took one of your JSON lines and pyarrow.json read it without problem. yes, and for some peculiar reason the error is non-deterministic (i was eventually able to load the whole dataset by just re-running the `load_dataset` cell multiple times 🤔) thanks for looking into this 🙏 !
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
50
Loading JSON throws ArrowNotImplementedError ## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no ...
[ -0.0681869462, 0.1841620952, 0.0364153907, 0.378070116, 0.2355375141, -0.0062702075, 0.4846678972, 0.3670630455, 0.4180774391, -0.0028389022, 0.0605549589, 0.5898873806, 0.0276202224, -0.1488581598, -0.2172113806, -0.1838726699, 0.0313559324, 0.1905161291, 0.1274460107, 0.02858...
https://github.com/huggingface/datasets/issues/2799
Loading JSON throws ArrowNotImplementedError
The code works fine on my side. Not sure what's going on here :/ I remember we made a few changes in the JSON loader in #2638. Did you update `datasets` when debugging this?
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
38
Loading JSON throws ArrowNotImplementedError ## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no ...
[ -0.0681869462, 0.1841620952, 0.0364153907, 0.378070116, 0.2355375141, -0.0062702075, 0.4846678972, 0.3670630455, 0.4180774391, -0.0028389022, 0.0605549589, 0.5898873806, 0.0276202224, -0.1488581598, -0.2172113806, -0.1838726699, 0.0313559324, 0.1905161291, 0.1274460107, 0.02858...
https://github.com/huggingface/datasets/issues/2799
Loading JSON throws ArrowNotImplementedError
OK after upgrading `datasets` to v1.12.1 the issue seems to have gone away. Closing this now :)
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
17
Loading JSON throws ArrowNotImplementedError ## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no ...
[ -0.0681869462, 0.1841620952, 0.0364153907, 0.378070116, 0.2355375141, -0.0062702075, 0.4846678972, 0.3670630455, 0.4180774391, -0.0028389022, 0.0605549589, 0.5898873806, 0.0276202224, -0.1488581598, -0.2172113806, -0.1838726699, 0.0313559324, 0.1905161291, 0.1274460107, 0.02858...
https://github.com/huggingface/datasets/issues/2799
Loading JSON throws ArrowNotImplementedError
Oops, I spoke too soon 😓 After deleting the cache and trying the above code snippet again I am hitting the same error. You can also reproduce it in the Colab notebook I linked to in the issue description.
## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no problem loading the dataset with `pandas` which...
39
Loading JSON throws ArrowNotImplementedError ## Describe the bug I have created a [dataset](https://huggingface.co/datasets/lewtun/github-issues-test) of GitHub issues in line-separated JSON format and am finding that I cannot load it with the `json` loading script (see stack trace below). Curiously, there is no ...
[ -0.0681869462, 0.1841620952, 0.0364153907, 0.378070116, 0.2355375141, -0.0062702075, 0.4846678972, 0.3670630455, 0.4180774391, -0.0028389022, 0.0605549589, 0.5898873806, 0.0276202224, -0.1488581598, -0.2172113806, -0.1838726699, 0.0313559324, 0.1905161291, 0.1274460107, 0.02858...
https://github.com/huggingface/datasets/issues/2788
How to sample every file in a list of files making up a split in a dataset when loading?
Hi ! This is not possible just with `load_dataset`. You can do something like this instead: ```python seed=42 data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=dat...
I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } dataset = datasets.load_dataset( "csv", data_files=data_files_dict, split=[...
67
How to sample every file in a list of files making up a split in a dataset when loading? I am loading a dataset with multiple train, test, and validation files like this: ``` data_files_dict = { "train": [train_file1, train_file2], "test": [test_file1, test_file2], "val": [val_file1, val_file2] } ...
[ -0.3138791621, -0.1796859056, -0.1142877936, 0.1454610974, -0.0157867279, 0.3574571609, 0.4449555874, 0.4948764741, 0.5567873716, 0.0195081681, -0.074514553, 0.198577404, 0.0281435736, 0.1434572339, 0.1276394427, -0.3237333, -0.0534548499, 0.1187503934, 0.2719000578, 0.01010695...
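The (truncated) suggestion above boils down to loading each split and then shuffling it deterministically with a fixed seed. A plain-Python sketch of the per-split shuffle, with stand-in rows instead of real CSV files:

```python
import random

seed = 42  # same seed as in the comment above

def shuffle_split(rows, seed=seed):
    """Return a deterministically shuffled copy of one split's rows,
    mirroring dataset.shuffle(seed=...) on a real Dataset object."""
    rng = random.Random(seed)
    shuffled = list(rows)
    rng.shuffle(shuffled)
    return shuffled

# Stand-in for the train/test/val files in data_files_dict.
splits = {
    "train": ["t1", "t2", "t3", "t4"],
    "test": ["x1", "x2"],
    "val": ["v1", "v2"],
}
shuffled_splits = {name: shuffle_split(rows) for name, rows in splits.items()}
```

Because the seed is fixed, re-running the shuffle yields the same order, which is what makes the sampling reproducible across runs.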
https://github.com/huggingface/datasets/issues/2787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
The buggy code is located in: if data_args.task_name is not None: # Downloading and loading a dataset from the hub. datasets = load_dataset("glue", data_args.task_name, cache_dir=model_args.cache_dir)
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/...
25
ConnectionError: Couldn't reach https://raw.githubusercontent.com Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BER...
[ -0.2038710564, -0.1601336449, -0.0819258094, 0.0469019413, 0.2665439546, -0.0723189861, 0.1546959728, 0.3012445569, 0.1090680212, -0.0955980122, -0.1769393086, -0.1569310278, 0.0424282476, 0.009808816, 0.0353741236, -0.1757959127, -0.0955914408, -0.0462968722, -0.2514626682, 0....
https://github.com/huggingface/datasets/issues/2787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
Hi @jinec, From time to time we get this kind of `ConnectionError` coming from the github.com website: https://raw.githubusercontent.com Normally, it should work if you wait a little and then retry. Could you please confirm if the problem persists?
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/...
38
ConnectionError: Couldn't reach https://raw.githubusercontent.com Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BER...
[ -0.2038710564, -0.1601336449, -0.0819258094, 0.0469019413, 0.2665439546, -0.0723189861, 0.1546959728, 0.3012445569, 0.1090680212, -0.0955980122, -0.1769393086, -0.1569310278, 0.0424282476, 0.009808816, 0.0353741236, -0.1757959127, -0.0955914408, -0.0462968722, -0.2514626682, 0....
https://github.com/huggingface/datasets/issues/2787
ConnectionError: Couldn't reach https://raw.githubusercontent.com
> I can access https://raw.githubusercontent.com/huggingface/datasets/1.7.0/datasets/glue/glue.py without problem... I cannot access https://raw.githubusercontent.com/huggingface/datasets either; I am in China
Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/...
17
ConnectionError: Couldn't reach https://raw.githubusercontent.com Hello, I am trying to run run_glue.py and it gives me this error - Traceback (most recent call last): File "E:/BERT/pytorch_hugging/transformers/examples/pytorch/text-classification/run_glue.py", line 546, in <module> main() File "E:/BER...
[ -0.2038710564, -0.1601336449, -0.0819258094, 0.0469019413, 0.2665439546, -0.0723189861, 0.1546959728, 0.3012445569, 0.1090680212, -0.0955980122, -0.1769393086, -0.1569310278, 0.0424282476, 0.009808816, 0.0353741236, -0.1757959127, -0.0955914408, -0.0462968722, -0.2514626682, 0....
https://github.com/huggingface/datasets/issues/2775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
I dug into what I believe is the root of this issue and added a repro in my comment. If this is better addressed as a cross-team issue, let me know and I can open an issue in the Transformers repo
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se...
41
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` ## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still ...
[ -0.1622434109, -0.0606819987, 0.0938824564, 0.0836029798, 0.3144049942, -0.1429730952, 0.6139578819, 0.01321337, -0.1196164787, 0.005816251, 0.1650344133, 0.201795429, -0.0787889659, 0.0945226178, -0.0190070346, 0.2130855918, 0.0278787725, -0.140286833, -0.2763956189, -0.141063...
https://github.com/huggingface/datasets/issues/2775
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()`
Hi ! IMO we shouldn't try to modify `set_seed` from transformers but maybe make `datasets` have its own RNG just to generate random fingerprints. Any opinion on this @LysandreJik ?
## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still a bug, because `datasets` is used exactly this way in `transformers` after `set_se...
30
`generate_random_fingerprint()` deterministic with 🤗Transformers' `set_seed()` ## Describe the bug **Update:** I dug into this to try to reproduce the underlying issue, and I believe it's that `set_seed()` from the `transformers` library makes the "random" fingerprint identical each time. I believe this is still ...
[ -0.1622434109, -0.0606819987, 0.0938824564, 0.0836029798, 0.3144049942, -0.1429730952, 0.6139578819, 0.01321337, -0.1196164787, 0.005816251, 0.1650344133, 0.201795429, -0.0787889659, 0.0945226178, -0.0190070346, 0.2130855918, 0.0278787725, -0.140286833, -0.2763956189, -0.141063...
https://github.com/huggingface/datasets/issues/2768
`ArrowInvalid: Added column's length must match table's length.` after using `select`
Hi, the `select` method creates an indices mapping and doesn't modify the underlying PyArrow table by default for better performance. To modify the underlying table after the `select` call, call `flatten_indices` on the dataset object as follows: ```python from datasets import load_dataset ds = load_dataset("tw...
## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not updated when calling `select`. ## Steps to reproduce the bug ```python from datasets im...
53
`ArrowInvalid: Added column's length must match table's length.` after using `select` ## Describe the bug I would like to add a column to a downsampled dataset. However I get an error message saying the length don't match with the length of the unsampled dataset indicated. I suspect that the dataset size is not upda...
[ -0.2274190784, -0.1898314059, -0.008905299, 0.0325090215, 0.0982831195, 0.0415741503, 0.0728504211, 0.1837980747, 0.060487546, 0.1647535264, 0.2048402131, 0.6761353016, 0.0139966682, -0.3138835132, 0.0691088885, -0.2314470708, 0.0780237466, 0.1225908697, -0.3308290839, -0.06523...
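The explanation above (`select` records an indices mapping instead of rewriting the underlying PyArrow table, and `flatten_indices` materializes it) can be illustrated with a toy class. This is a conceptual sketch, not the real `datasets` implementation.

```python
class TinyTable:
    """Toy illustration of lazy selection via an indices mapping."""

    def __init__(self, rows, indices=None):
        self._rows = rows         # stands in for the underlying Arrow table
        self._indices = indices   # lazy view over the rows, or None

    def select(self, indices):
        # Cheap: only the mapping is stored; the rows are untouched.
        return TinyTable(self._rows, list(indices))

    def flatten_indices(self):
        # Materialize the view so the table length matches len(self),
        # which is what fixes the "Added column's length" mismatch.
        if self._indices is None:
            return self
        return TinyTable([self._rows[i] for i in self._indices])

    def __len__(self):
        return len(self._indices) if self._indices is not None else len(self._rows)

    def table_length(self):
        return len(self._rows)
```

After `select`, the logical length and the table length disagree, which is exactly why adding a column fails until `flatten_indices` is called.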
https://github.com/huggingface/datasets/issues/2767
equal operation to perform unbatch for huggingface datasets
Hi @lhoestq Maybe this is clearer explained like this: currently the map function maps one example to "one" modified example. Let's assume we want to map one example to "multiple" examples, where we do not know in advance how many examples there would be per entry. I greatly appreciate telling me how I can handle this...
Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need to replicate each entry of the dataset for each answer, to ma...
62
equal operation to perform unbatch for huggingface datasets Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need...
[ -0.0286770985, -0.7930670977, 0.0335283875, -0.0818498656, -0.0049163997, -0.1033487245, 0.2304781675, -0.0015911907, 0.4284727871, 0.2395399511, -0.3240567446, -0.045957692, 0.0236225575, 0.4699031115, 0.1768154055, -0.2348046303, 0.0796520188, 0.1120869815, -0.2576455772, -0....
https://github.com/huggingface/datasets/issues/2767
equal operation to perform unbatch for huggingface datasets
Hi, this is also my question: how to perform an operation similar to tensorflow's "unbatch" in the great huggingface datasets library. Thanks.
Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need to replicate each entry of the dataset for each answer, to ma...
22
equal operation to perform unbatch for huggingface datasets Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need...
[ -0.0424980409, -0.7478793859, 0.0441974252, -0.0638017505, 0.0602152459, -0.0741191059, 0.2164922804, 0.0234037451, 0.4041273594, 0.2328331769, -0.3757910132, -0.0309471469, 0.0087436475, 0.4603342414, 0.2217880189, -0.2381002009, 0.0493122041, 0.0999174416, -0.2166883498, -0.0...
https://github.com/huggingface/datasets/issues/2767
equal operation to perform unbatch for huggingface datasets
Hi, `Dataset.map` in the batched mode allows you to map a single row to multiple rows. So to perform "unbatch", you can do the following: ```python import collections def unbatch(batch): new_batch = collections.defaultdict(list) keys = batch.keys() for values in zip(*batch.values()): ex ...
Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need to replicate each entry of the dataset for each answer, to ma...
72
equal operation to perform unbatch for huggingface datasets Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need...
[ -0.0753812045, -0.7735244632, 0.0261515044, -0.0693189129, 0.022894334, -0.0482676178, 0.3034125566, 0.0408934243, 0.3975751698, 0.2398027182, -0.3650855124, -0.0645895749, 0.0264361836, 0.4119095802, 0.1797593236, -0.2043051273, 0.063304171, 0.09900897, -0.2375570387, -0.05853...
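The truncated snippet above can be completed into a runnable pure-Python sketch. The `answers` field name is an assumption for illustration; with the real library this function would be passed to `Dataset.map` with `batched=True`.

```python
import collections

def unbatch(batch):
    """Map each input row carrying a list-valued `answers` field to
    several output rows, one per answer (the batched-map "unbatch"
    trick from the comment above, completed so it runs end to end)."""
    new_batch = collections.defaultdict(list)
    keys = batch.keys()
    for values in zip(*batch.values()):
        ex = dict(zip(keys, values))
        answers = ex.pop("answers")  # assumed list-valued column
        for answer in answers:
            for k, v in ex.items():
                new_batch[k].append(v)
            new_batch["answer"].append(answer)
    return dict(new_batch)
```

With `datasets`, this would be used roughly as `dataset.map(unbatch, batched=True, remove_columns=dataset.column_names)` so each input row yields one output row per answer.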
https://github.com/huggingface/datasets/issues/2767
equal operation to perform unbatch for huggingface datasets
Dear @mariosasko First, thank you very much for coming back to me on this, I appreciate it a lot. I tried this solution, but I am getting errors. Do you mind giving me one test example so I can run your code, to better understand the format of the inputs to your function? In this function https://github.com/googl...
Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need to replicate each entry of the dataset for each answer, to ma...
90
equal operation to perform unbatch for huggingface datasets Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need...
[ -0.010877654, -0.7745167017, 0.0500379279, -0.0438204668, 0.0585960671, -0.1315133721, 0.2449043989, 0.0355795063, 0.4034036398, 0.2071986198, -0.3647295833, -0.0576839745, 0.0258305669, 0.4528594017, 0.1368792206, -0.241541177, 0.0354216248, 0.1300747097, -0.3146574199, -0.097...
https://github.com/huggingface/datasets/issues/2767
equal operation to perform unbatch for huggingface datasets
Hi @mariosasko I think I finally got this; I think you mean to do things in one step. Here is the full example for completeness: ``` def unbatch(batch): new_batch = collections.defaultdict(list) keys = batch.keys() for values in zip(*batch.values()): ex = {k: v for k, v in zip(keys, values...
Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need to replicate each entry of the dataset for each answer, to ma...
131
equal operation to perform unbatch for huggingface datasets Hi, I need to use an "unbatch" operation in tensorflow on a huggingface dataset. I could not find this operation; could you kindly direct me how I can do it? Here is the problem I am trying to solve: I am considering the "record" dataset in SuperGlue, and I need...
[ -0.0244648904, -0.8023656607, 0.0363252684, -0.0556129403, 0.0216098391, -0.0733307973, 0.2536011934, 0.0494492762, 0.3815232217, 0.2414879054, -0.3534255028, -0.065472275, 0.0393710509, 0.444589287, 0.1729433686, -0.2635640204, 0.0600869246, 0.1323879957, -0.2501766682, -0.096...
https://github.com/huggingface/datasets/issues/2765
BERTScore Error
Hi, The `use_fast_tokenizer` argument has been recently added to the bert-score lib. I've opened a PR with the fix. In the meantime, you can try to downgrade the version of bert-score with the following command to make the code work: ``` pip uninstall bert-score pip install "bert-score<0.3.10" ```
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=references,lang='en') ...
48
BERTScore Error ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python predictions = ["hello there", "general kenobi"] references = ["hello there", "general kenobi"] bert = load_metric('bertscore') bert.compute(predictions=predictions, references=refer...
[ -0.135768339, 0.1683019102, 0.0354213491, 0.2092980444, 0.3473314643, 0.0116616935, 0.2289915085, 0.3530414104, -0.1199427098, 0.3440087438, 0.0769438297, 0.4752254784, -0.11414738, -0.1162744761, -0.0818047822, -0.3451918364, 0.0918749869, 0.3889159858, 0.0978737101, -0.176629...
https://github.com/huggingface/datasets/issues/2763
English wikipedia datasets is not clean
Hi ! Certain users might need these data (for training, or simply to explore/index the dataset). Feel free to implement a map function that gets rid of these paragraphs, and process the wikipedia dataset with it before training.
## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset w = load_dataset('wikipedia', '20200501.e...
38
English wikipedia datasets is not clean ## Describe the bug Wikipedia english dumps contain many wikipedia paragraphs like "References", "Category:" and "See Also" that should not be used for training. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import load_dataset ...
[ -0.0217496157, 0.2180520594, -0.1063897014, 0.4800750911, 0.2548156977, 0.1268990189, 0.3188249469, 0.3147776723, 0.2105302513, 0.0805180445, -0.1847257465, 0.2396584302, 0.3409662247, -0.2226555198, 0.2059661299, -0.359701544, 0.2095891833, 0.0540648811, -0.3535145521, -0.2804...
https://github.com/huggingface/datasets/issues/2761
Error loading C4 realnewslike dataset
Hi @danshirron, `c4` was updated few days back by @lhoestq. The new configs are `['en', 'en.noclean', 'en.realnewslike', 'en.webtextlike'].` You'll need to remove any older version of this dataset you previously downloaded and then run `load_dataset` again with new configuration.
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|███████████████████████...
39
Error loading C4 realnewslike dataset ## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results D...
[ -0.2315264046, 0.0246403217, -0.0223660525, 0.4574565291, 0.1744246185, 0.2295815647, -0.0073710401, 0.5081833601, -0.031652987, 0.0191250816, -0.1842969656, -0.0640127659, -0.0091638286, 0.3338085711, -0.0433652624, -0.2194699943, -0.1107671037, 0.1606065631, 0.1407684088, 0.0...
https://github.com/huggingface/datasets/issues/2761
Error loading C4 realnewslike dataset
@bhavitvyamalik @lhoestq , just tried the above and got: >>> a=datasets.load_dataset('c4','en.realnewslike') Downloading: 3.29kB [00:00, 1.66MB/s] ...
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|███████████████████████...
91
Error loading C4 realnewslike dataset ## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results D...
[ -0.2315264046, 0.0246403217, -0.0223660525, 0.4574565291, 0.1744246185, 0.2295815647, -0.0073710401, 0.5081833601, -0.031652987, 0.0191250816, -0.1842969656, -0.0640127659, -0.0091638286, 0.3338085711, -0.0433652624, -0.2194699943, -0.1107671037, 0.1606065631, 0.1407684088, 0.0...
https://github.com/huggingface/datasets/issues/2761
Error loading C4 realnewslike dataset
I think I had an older version of datasets installed and that's why I commented the old configurations in my last comment, my bad! I re-checked and updated it to latest version (`datasets==1.11.0`) and it's showing `available configs: ['en', 'realnewslike', 'en.noblocklist', 'en.noclean']`. I tried `raw_datasets = ...
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|███████████████████████...
79
Error loading C4 realnewslike dataset ## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results D...
[ -0.2315264046, 0.0246403217, -0.0223660525, 0.4574565291, 0.1744246185, 0.2295815647, -0.0073710401, 0.5081833601, -0.031652987, 0.0191250816, -0.1842969656, -0.0640127659, -0.0091638286, 0.3338085711, -0.0433652624, -0.2194699943, -0.1107671037, 0.1606065631, 0.1407684088, 0.0...
https://github.com/huggingface/datasets/issues/2761
Error loading C4 realnewslike dataset
It works. I probably had some issue with the cache. After cleaning it I'm able to download the dataset. Thanks
## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results Downloading: 100%|███████████████████████...
20
Error loading C4 realnewslike dataset ## Describe the bug Error loading C4 realnewslike dataset. Validation part mismatch ## Steps to reproduce the bug ```python raw_datasets = load_dataset('c4', 'realnewslike', cache_dir=model_args.cache_dir) ## Expected results success on data loading ## Actual results D...
[ -0.2315264046, 0.0246403217, -0.0223660525, 0.4574565291, 0.1744246185, 0.2295815647, -0.0073710401, 0.5081833601, -0.031652987, 0.0191250816, -0.1842969656, -0.0640127659, -0.0091638286, 0.3338085711, -0.0433652624, -0.2194699943, -0.1107671037, 0.1606065631, 0.1407684088, 0.0...
https://github.com/huggingface/datasets/issues/2759
the meteor metric seems not consist with the official version
The issue is caused by differences between METEOR versions: METEOR 1.0 is described in https://aclanthology.org/W07-0734.pdf and METEOR 1.5 in https://aclanthology.org/W14-3348.pdf. Here is a very similar issue in NLTK: https://github.com/nltk/nltk/issues/2655
## Describe the bug The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which reuses the official jar file for the computation) ## Steps t...
28
the meteor metric seems not consist with the official version ## Describe the bug The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which...
[ -0.1358162165, -0.2349781841, -0.0059329979, 0.4061325192, 0.1851485223, -0.2260376215, -0.1716091335, 0.2357301563, -0.0438347608, 0.0740857571, 0.0668202415, 0.3859930634, -0.0958541185, -0.4473045766, -0.039201986, -0.0029860223, -0.0017699897, -0.2732463479, -0.1561686993, ...
https://github.com/huggingface/datasets/issues/2759
the meteor metric seems not consist with the official version
Hi @jianguda, thanks for reporting. Currently, at 🤗 `datasets` we are using METEOR 1.0 (indeed using NLTK: `from nltk.translate import meteor_score`): See the [citation here](https://github.com/huggingface/datasets/blob/master/metrics/meteor/meteor.py#L23-L35). If there is some open source implementation of METE...
## Describe the bug The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which reuses the official jar file for the computation) ## Steps t...
42
the meteor metric seems not consist with the official version ## Describe the bug The computed meteor score seems strange because the value is very different from the scores computed by other tools. For example, I use the meteor score computed by [NLGeval](https://github.com/Maluuba/nlg-eval) as the reference (which...
[ -0.147818163, -0.2521870136, -0.0246564895, 0.3410741687, 0.2311811745, -0.2251471579, -0.1337346137, 0.2375298738, -0.1029681563, 0.1020670608, 0.0819614008, 0.3635592759, -0.1630418897, -0.3619389832, -0.0496547595, 0.0434393175, -0.0236536413, -0.3071350455, -0.153465271, -0...
https://github.com/huggingface/datasets/issues/2757
Unexpected type after `concatenate_datasets`
Hi @JulesBelveze, thanks for your question. Note that 🤗 `datasets` internally store their data in Apache Arrow format. However, when accessing dataset columns, by default they are returned as native Python objects (lists in this case). If you would like their columns to be returned in a more suitable format ...
## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to a weird tensors when trying to convert it to a `DataLoader`. However, if I use each `Dataset` separately everythi...
80
Unexpected type after `concatenate_datasets` ## Describe the bug I am trying to concatenate two `Dataset` using `concatenate_datasets` but it turns out that after concatenation the features are casted from `torch.Tensor` to `list`. It then leads to a weird tensors when trying to convert it to a `DataLoader`. Howev...
[ 0.0659024864, -0.2509692013, 0.0296786502, 0.5075554848, 0.4053796828, 0.1719069183, 0.4817780852, 0.1346458197, -0.3654303551, -0.1315237582, -0.1543311328, 0.3956092298, -0.0383027643, -0.019348748, -0.0897326767, -0.291996628, 0.2896147966, -0.0016608424, -0.2081796378, -0.2...
https://github.com/huggingface/datasets/issues/2750
Second concatenation of datasets produces errors
Hi @Aktsvigun! We are planning to address this issue before our next release, in a couple of weeks at most. 😅 In the meantime, if you would like to contribute, feel free to open a Pull Request. You are welcome. Here you can find more information: [How to contribute to Datasets?](CONTRIBUTING.md)
Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of a tokenize function with `data.map`. ``` from datasets import load_dataset, concatenate_datasets d...
51
Second concatenation of datasets produces errors Hi, I need to concatenate my dataset with others several times, and after I concatenate it for the second time, the features of features (e.g. tag names) are collapsed. This hinders, for instance, the usage of a tokenize function with `data.map`. ``` from data...
[ -0.1504654586, -0.059169814, -0.0577094853, 0.255951494, 0.1930537075, 0.0901743174, 0.0821338445, 0.2622762322, -0.2609197795, -0.0551396459, 0.0358311608, 0.1265256107, 0.2174617797, 0.0901992321, -0.2850415111, -0.1801926196, 0.1788394302, 0.0559674166, 0.0485578291, -0.1189...
https://github.com/huggingface/datasets/issues/2749
Raise a proper exception when trying to stream a dataset that requires to manually download files
Hi @severo, thanks for reporting. As discussed, datasets requiring manual download should be: - programmatically identifiable - properly handled with more clear error message when trying to load them with streaming In relation with programmatically identifiability, note that for datasets requiring manual downlo...
## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it fails. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = ...
64
Raise a proper exception when trying to stream a dataset that requires to manually download files ## Describe the bug At least for 'reclor', 'telugu_books', 'turkish_movie_sentiment', 'ubuntu_dialogs_corpus', 'wikihow', trying to `load_dataset` in streaming mode raises a `TypeError` without any detail about why it...
[ -0.2240490168, 0.0735054314, 0.0688554645, 0.14548783, 0.3467076719, 0.0917349011, 0.1795141548, 0.0385216773, 0.1048628092, 0.0732550919, 0.0714532286, 0.2278849632, -0.3061071038, 0.1689810157, -0.0743807703, -0.0853378624, -0.0969870314, 0.1296143234, 0.2995477319, 0.1218052...
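The fix discussed in the comment above amounts to raising a dedicated exception with an actionable message instead of an opaque `TypeError`. A rough sketch of the idea (the class name and message text are illustrative, not the actual `datasets` implementation):

```python
class ManualDownloadError(Exception):
    """Raised when a dataset requires manually downloaded files."""

def require_manual_download(dataset_name, instructions):
    # Instead of failing later with a bare TypeError in streaming mode,
    # fail early and tell the user exactly what to do.
    raise ManualDownloadError(
        f"The dataset {dataset_name!r} requires manual data files and "
        f"cannot be streamed. Download instructions: {instructions}"
    )

try:
    require_manual_download("reclor", "see the dataset card for the download form")
except ManualDownloadError as err:
    print(err)
```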
https://github.com/huggingface/datasets/issues/2746
Cannot load `few-nerd` dataset
Hi @Mehrad0711, I'm afraid there is no "canonical" Hugging Face dataset named "few-nerd". There are 2 kinds of datasets hosted at the Hugging Face Hub: - canonical datasets (their identifier contains no slash "/"): we, the Hugging Face team, supervise their implementation and we make sure they work correctly by ...
## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached version of the module from /Users...
242
Cannot load `few-nerd` dataset ## Describe the bug Cannot load `few-nerd` dataset. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset('few-nerd', 'supervised') ``` ## Actual results Executing above code will give the following error: ``` Using the latest cached ...
[ -0.3042268753, -0.0985628963, 0.0078414939, 0.5131357908, 0.3285458386, -0.0576421171, 0.520152092, 0.2553007603, 0.1991575062, 0.1681089699, -0.46047768, -0.0910486281, -0.2877757549, -0.297074616, 0.4347798228, 0.0320020542, -0.0251882374, -0.0349554755, 0.1119612157, 0.00843...
https://github.com/huggingface/datasets/issues/2743
Dataset JSON is incorrect
As discussed, the metadata JSON files must be regenerated because the keys were nor properly generated and they will not be read by the builder: > Indeed there is some problem/bug while reading the datasets_info.json file: there is a mismatch with the config.name keys in the file... In the meanwhile, in order to be a...
## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/dataset...
109
Dataset JSON is incorrect ## Describe the bug The JSON file generated for https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/journalists_questions/journalists_questions.py is https://github.com/huggingface/datasets/blob/573f3d35081cee239d1b962878206e9abe6cde91/datasets/j...
[ 0.1457876712, 0.0401425511, -0.0529891998, 0.4768225253, 0.1063417643, 0.2229724824, 0.1550249457, 0.3400089145, -0.2390943319, 0.0082265735, 0.0357938111, 0.4321311116, 0.0995689481, 0.1382744312, -0.0019156188, -0.190864265, 0.0582645312, -0.0400951095, 0.0514684618, 0.013578...
https://github.com/huggingface/datasets/issues/2742
Improve detection of streamable file types
maybe we should rather attempt to download a `Range` from the server and see if it works?
**Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_text") builder._split_generators(StreamingDownl...
17
Improve detection of streamable file types **Is your feature request related to a problem? Please describe.** ```python from datasets import load_dataset_builder from datasets.utils.streaming_download_manager import StreamingDownloadManager builder = load_dataset_builder("journalists_questions", name="plain_tex...
[ -0.4683198631, -0.0467547476, -0.0979573205, 0.1375527829, 0.166396156, -0.2239751816, 0.1926614046, 0.4674959779, 0.0628534108, 0.2362468243, -0.0067694224, -0.0470783524, -0.4149927199, 0.2554932833, 0.0666616485, -0.1503153145, 0.0009771101, 0.0538057946, 0.2009107471, 0.054...
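The suggestion above — probing the server with a `Range` request — comes down to checking whether the server answers `206 Partial Content` (or advertises `Accept-Ranges: bytes`). A small offline sketch of that decision logic (function name is mine; the actual probe would send `Range: bytes=0-0` over HTTP):

```python
def accepts_byte_ranges(status_code, headers):
    """Decide whether a server honored a `Range: bytes=0-0` probe request.

    A 206 Partial Content reply (or an explicit Accept-Ranges header)
    indicates the file can be streamed in chunks; a plain 200 means the
    server ignored the range and would send the whole file.
    """
    if status_code == 206:
        return True
    return headers.get("Accept-Ranges", "").lower() == "bytes"

print(accepts_byte_ranges(206, {}))  # True
print(accepts_byte_ranges(200, {}))  # False
```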
https://github.com/huggingface/datasets/issues/2737
SacreBLEU update
Hi @devrimcavusoglu, I tried your code with latest version of `datasets`and `sacrebleu==1.5.1` and it's running fine after changing one small thing: ``` sacrebleu = datasets.load_metric('sacrebleu') predictions = ["It is a guide to action which ensures that the military always obeys the commands of the party"] re...
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries...
101
SacreBLEU update With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but...
[ -0.3851112723, 0.1802151352, -0.0012568831, -0.0679100454, 0.5497626662, -0.2189846635, 0.0627021343, 0.3480108082, -0.2101945877, 0.1243231371, 0.0086565763, 0.3338843882, -0.0440220162, -0.0246401131, -0.1152010858, -0.0089817932, 0.1046470255, 0.059853822, 0.34362638, 0.0075...
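The "one small thing" in the comment above is the shape of `references`: the metric expects a list of lists (one inner list per prediction), not a flat list of strings. A tiny helper that normalizes the shape (the function name is mine, not part of `datasets` or `sacrebleu`):

```python
def as_sacrebleu_references(references):
    """Wrap flat reference strings into the list-of-lists shape the metric expects.

    A flat list of strings like ["ref 1", "ref 2"] becomes
    [["ref 1"], ["ref 2"]] -- one inner list per prediction.
    Already-nested input is returned unchanged.
    """
    if references and isinstance(references[0], str):
        return [[ref] for ref in references]
    return references

print(as_sacrebleu_references(["It is a guide to action"]))
```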
https://github.com/huggingface/datasets/issues/2737
SacreBLEU update
@bhavitvyamalik hmm. I forgot the double brackets, but it still didn't work when I used them. It may be an issue with the platform (I'm on win-10 currently) or versions. What are your platform and version info for datasets, python, and sacrebleu?
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries...
42
SacreBLEU update With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but...
[ -0.3610984087, 0.1312095225, -0.0056598745, -0.1026851684, 0.4761407971, -0.1557439864, 0.1335221827, 0.3350857496, -0.0614673495, 0.1180701107, 0.0635039881, 0.4146681726, -0.0860368758, -0.1110804081, -0.0810957253, -0.0622076653, 0.1325369775, 0.0672767907, 0.3208293915, -0....
https://github.com/huggingface/datasets/issues/2737
SacreBLEU update
You can check that here, I've reproduced your code in [Google colab](https://colab.research.google.com/drive/1X90fHRgMLKczOVgVk7NDEw_ciZFDjaCM?usp=sharing). Looks like there was some issue in `sacrebleu` which was fixed later from what I've found [here](https://github.com/pytorch/fairseq/issues/2049#issuecomment-622367...
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries...
36
SacreBLEU update With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but...
[ -0.3578205109, 0.191182524, -0.0130563416, -0.1219669729, 0.4591222405, -0.2587323189, 0.1023558378, 0.3462675214, -0.1047269255, 0.1702915281, 0.0292683542, 0.3873142302, -0.028606886, -0.1296034753, -0.0896715969, -0.0818409622, 0.0223366208, 0.1103718504, 0.2108441442, -0.06...
https://github.com/huggingface/datasets/issues/2737
SacreBLEU update
It seems that next release of `sacrebleu` (v2.0.0) will break our `datasets` implementation to compute it. See my Google Colab: https://colab.research.google.com/drive/1SKmvvjQi6k_3OHsX5NPkZdiaJIfXyv9X?usp=sharing I'm reopening this Issue and making a Pull Request to fix it.
With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but sacrebleu.py tries...
33
SacreBLEU update With the latest release of [sacrebleu](https://github.com/mjpost/sacrebleu), `datasets.metrics.sacrebleu` is broken, and getting error. AttributeError: module 'sacrebleu' has no attribute 'DEFAULT_TOKENIZER' this happens since in new version of sacrebleu there is no `DEFAULT_TOKENIZER`, but...
[ -0.3739028871, 0.2329157293, 0.0166787617, -0.1529369056, 0.3844795227, -0.2280019224, 0.141117245, 0.3971857727, -0.1434215456, 0.1172358543, 0.0634808913, 0.3675318062, -0.0747413188, -0.0431301333, -0.0400881357, 0.0182082355, 0.1286733896, 0.0568152443, 0.2899595201, 0.0479...
https://github.com/huggingface/datasets/issues/2736
Add Microsoft Building Footprints dataset
Motivation: this can be a useful dataset for researchers working on climate change adaptation, urban studies, geography, etc. I'll see if I can figure out how to add it!
## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open data available to download free of charge. - *...
29
Add Microsoft Building Footprints dataset ## Adding a Dataset - **Name:** Microsoft Building Footprints - **Description:** With the goal to increase the coverage of building footprint data available as open data for OpenStreetMap and humanitarian efforts, we have released millions of building footprints as open dat...
[ -0.6230252385, 0.0684203506, -0.2241069973, -0.0011125691, -0.0653919727, -0.0625751391, 0.102517128, 0.2215957493, 0.1189826429, 0.323628962, 0.2269605994, 0.2179023921, -0.2382514775, 0.4009600878, 0.0867367536, -0.0855380371, -0.0039108694, 0.0257601775, -0.3132507205, -0.05...
https://github.com/huggingface/datasets/issues/2730
Update CommonVoice with new release
Does anybody know if there is a bundled link, which would allow direct data download instead of manual? Something similar to: `https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/ab.tar.gz` ? cc @patil-suraj
## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth, from 100 to 840), and Tamil (almost 8...
25
Update CommonVoice with new release ## Adding a Dataset - **Name:** CommonVoice mid-2021 release - **Description:** more data in CommonVoice: Languages that have increased the most by percentage are Thai (almost 20x growth, from 12 hours to 250 hours), Luganda (almost 9x growth, from 8 to 80), Esperanto (7x growth,...
[ -0.3897119462, 0.0689001679, -0.0436254442, -0.1407741904, -0.0075453967, 0.23610425, -0.0866277367, 0.4099909365, -0.0332521051, 0.0552656539, -0.2268212289, 0.1267443597, -0.1374961436, 0.3757570982, 0.3153651059, -0.1025516763, -0.0221796222, 0.0116233528, 0.3541307449, -0.2...
https://github.com/huggingface/datasets/issues/2728
Concurrent use of same dataset (already downloaded)
Launching simultaneous jobs relying on the same dataset triggers some writing issue. I guess it is unexpected, since I only need to load an already downloaded file.
## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ...
27
Concurrent use of same dataset (already downloaded) ## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-bas...
[ -0.5779123306, -0.0809999704, -0.0453642756, 0.4501566291, 0.2459627241, 0.0139754117, 0.5565330386, 0.2723813951, 0.1843711585, 0.1907433271, -0.0311777517, 0.14643538, 0.0212426111, 0.1441881955, -0.0885508806, 0.0235603936, 0.1309164762, -0.2257413715, -0.3968794942, -0.0203...
https://github.com/huggingface/datasets/issues/2728
Concurrent use of same dataset (already downloaded)
If I have two jobs that use the same dataset, I get: File "compute_measures.py", line 181, in <module> train_loader, val_loader, test_loader = get_dataloader(args) File "/gpfsdswork/projects/rech/toto/intRAOcular/dataset_utils.py", line 69, in get_dataloader dataset_train = load_dataset('paws', "la...
## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ...
78
Concurrent use of same dataset (already downloaded) ## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-bas...
[ -0.5779123306, -0.0809999704, -0.0453642756, 0.4501566291, 0.2459627241, 0.0139754117, 0.5565330386, 0.2723813951, 0.1843711585, 0.1907433271, -0.0311777517, 0.14643538, 0.0212426111, 0.1441881955, -0.0885508806, 0.0235603936, 0.1309164762, -0.2257413715, -0.3968794942, -0.0203...
https://github.com/huggingface/datasets/issues/2728
Concurrent use of same dataset (already downloaded)
You can probably find a solution much faster than me (it's my first time using the library), but I suspect some write functions are used when loading the dataset from cache.
## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ...
30
Concurrent use of same dataset (already downloaded) ## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-bas...
[ -0.5779123306, -0.0809999704, -0.0453642756, 0.4501566291, 0.2459627241, 0.0139754117, 0.5565330386, 0.2723813951, 0.1843711585, 0.1907433271, -0.0311777517, 0.14643538, 0.0212426111, 0.1441881955, -0.0885508806, 0.0235603936, 0.1309164762, -0.2257413715, -0.3968794942, -0.0203...
https://github.com/huggingface/datasets/issues/2728
Concurrent use of same dataset (already downloaded)
I have the same issue: ``` Traceback (most recent call last): File "/dccstor/tslm/envs/anaconda3/envs/trf-a100/lib/python3.9/site-packages/datasets/builder.py", line 652, in _download_and_prepare self._prepare_split(split_generator, **prepare_split_kwargs) File "/dccstor/tslm/envs/anaconda3/envs/trf-a100/l...
## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-base" "distilbert-base-cased"; do # "bert-base-uncased" ...
172
Concurrent use of same dataset (already downloaded) ## Describe the bug When launching several jobs at the same time loading the same dataset trigger some errors see (last comments). ## Steps to reproduce the bug export HF_DATASETS_CACHE=/gpfswork/rech/toto/datasets for MODEL in "bert-base-uncased" "roberta-bas...
[ -0.5779123306, -0.0809999704, -0.0453642756, 0.4501566291, 0.2459627241, 0.0139754117, 0.5565330386, 0.2723813951, 0.1843711585, 0.1907433271, -0.0311777517, 0.14643538, 0.0212426111, 0.1441881955, -0.0885508806, 0.0235603936, 0.1309164762, -0.2257413715, -0.3968794942, -0.0203...
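A common way to make concurrent jobs like the ones above cooperate is a cross-process file lock around the dataset preparation step. A minimal stdlib sketch of the idea, assuming POSIX-style atomic file creation (the helper names are mine; this is not how `datasets` implements its locking):

```python
import os
import tempfile
import time

def acquire_lock(lock_path, timeout=10.0, poll=0.1):
    """Create a lock file atomically; block until it can be created.

    O_CREAT | O_EXCL makes creation atomic, so only one process "wins"
    even when several jobs prepare the same dataset cache concurrently.
    """
    deadline = time.monotonic() + timeout
    while True:
        try:
            fd = os.open(lock_path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
            os.close(fd)
            return
        except FileExistsError:
            if time.monotonic() > deadline:
                raise TimeoutError(f"could not acquire {lock_path}")
            time.sleep(poll)

def release_lock(lock_path):
    os.remove(lock_path)

# Demo against a throwaway directory:
lock = os.path.join(tempfile.mkdtemp(), "dataset.lock")
acquire_lock(lock)
print("lock held:", os.path.exists(lock))
release_lock(lock)
```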
https://github.com/huggingface/datasets/issues/2727
Error in loading the Arabic Billion Words Corpus
I modified the dataset loading script to catch the `IndexError` and inspect the records at which the error is happening, and I found this: For the `Techreen` config, the error happens in 36 records when trying to find the `Text` or `Dateline` tags. All these 36 records look something like: ``` <Techreen> <ID>TRN_A...
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results Th...
128
Error in loading the Arabic Billion Words Corpus ## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_wor...
[ -0.1755404472, 0.1067499518, -0.1183999032, 0.4231292903, -0.1488430202, 0.2618190944, 0.232335031, 0.3904678524, 0.3619006276, -0.0010759756, -0.2331464142, -0.0116893537, 0.0589740761, 0.011106275, -0.0046197567, -0.2025988996, -0.0308901034, -0.1021869183, 0.2275919616, 0.14...
https://github.com/huggingface/datasets/issues/2727
Error in loading the Arabic Billion Words Corpus
Thanks @M-Salti for reporting this issue and for your investigation. Indeed, those `IndexError` should be catched and the corresponding record should be ignored. I'm opening a Pull Request to fix it.
## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_words", "Almustaqbal") ``` ## Expected results Th...
31
Error in loading the Arabic Billion Words Corpus ## Describe the bug I get `IndexError: list index out of range` when trying to load the `Techreen` and `Almustaqbal` configs of the dataset. ## Steps to reproduce the bug ```python load_dataset("arabic_billion_words", "Techreen") load_dataset("arabic_billion_wor...
[ -0.1877750605, -0.0464502722, -0.0886767581, 0.4570910037, -0.1361012161, 0.2721575797, 0.2241880447, 0.3228883743, 0.4278391302, -0.013256073, -0.2501671612, -0.0138163045, 0.1214889213, 0.0700828657, -0.0400826, -0.2980046272, 0.0483286679, -0.0983882919, 0.2569688559, 0.0944...
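The fix described above — catch the error on malformed records and skip them instead of aborting the whole load — can be sketched with a small parser. Tag names follow the `Techreen` snippet quoted in the earlier comment; the function itself is illustrative, not the actual loading script:

```python
import re

def extract_records(raw):
    """Yield (id, text) pairs, skipping records whose Text tag is missing.

    Instead of letting a failed lookup (the IndexError in the issue)
    abort the whole dataset load, malformed records are ignored.
    """
    for block in re.findall(r"<Techreen>(.*?)</Techreen>", raw, re.S):
        id_m = re.search(r"<ID>(.*?)</ID>", block, re.S)
        text_m = re.search(r"<Text>(.*?)</Text>", block, re.S)
        if not (id_m and text_m):
            continue  # skip the record instead of raising
        yield id_m.group(1).strip(), text_m.group(1).strip()

sample = (
    "<Techreen><ID>TRN_1</ID><Text>hello</Text></Techreen>"
    "<Techreen><ID>TRN_2</ID></Techreen>"  # malformed: no Text tag
)
print(list(extract_records(sample)))  # [('TRN_1', 'hello')]
```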
https://github.com/huggingface/datasets/issues/2724
404 Error when loading remote data files from private repo
I guess the issue is when computing the ETags of the remote files. Indeed `use_auth_token` must be passed to `request_etags` here: https://github.com/huggingface/datasets/blob/35b5e4bc0cb2ed896e40f3eb2a4aa3de1cb1a6c5/src/datasets/builder.py#L160-L160
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not...
22
404 Error when loading remote data files from private repo ## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=...
[ 0.0871631354, 0.0258457065, 0.0274067335, 0.6045911312, -0.1784685701, -0.0898554772, 0.2882223129, 0.3268501163, 0.0184908807, 0.1675238013, -0.3694000244, -0.083986029, 0.2241544574, -0.0408567488, 0.0888222605, -0.0422794372, 0.0376423523, 0.1200494915, 0.3278563023, -0.2406...
https://github.com/huggingface/datasets/issues/2724
404 Error when loading remote data files from private repo
Yes, I remember having properly implemented that: - https://github.com/huggingface/datasets/commit/7a9c62f7cef9ecc293f629f859d4375a6bd26dc8#diff-f933ce41f71c6c0d1ce658e27de62cbe0b45d777e9e68056dd012ac3eb9324f7R160 - https://github.com/huggingface/datasets/pull/2628/commits/6350a03b4b830339a745f7b1da46ece784ca734c ...
## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=url, use_auth_token=True) # HTTPError: 404 Client Error: Not...
18
404 Error when loading remote data files from private repo ## Describe the bug When loading remote data files from a private repo, a 404 error is raised. ## Steps to reproduce the bug ```python url = hf_hub_url("lewtun/asr-preds-test", "preds.jsonl", repo_type="dataset") dset = load_dataset("json", data_files=...
[ 0.1519908905, 0.0217427015, 0.0421524309, 0.4689883888, -0.1487778127, -0.0837456807, 0.1870218515, 0.3314776421, 0.0251009837, 0.1321311593, -0.414760679, -0.038175676, 0.1177171022, -0.0845711976, 0.227465719, -0.0939543471, 0.0643758774, 0.2031137645, 0.1788401604, -0.196415...
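The bug discussed above is that `use_auth_token` never reached the ETag requests, so private-repo files returned 404. The essence of the fix is threading the token into the request headers, roughly like this (the function name is illustrative; the real change lives in `datasets/builder.py`):

```python
def build_request_headers(use_auth_token=None):
    """Build headers for an ETag/HEAD request against the Hub.

    When a token string is supplied, attach it as a Bearer credential
    so private repositories can be resolved instead of returning 404.
    """
    headers = {}
    if isinstance(use_auth_token, str):
        headers["authorization"] = f"Bearer {use_auth_token}"
    return headers

print(build_request_headers("hf_xxx"))  # {'authorization': 'Bearer hf_xxx'}
```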
https://github.com/huggingface/datasets/issues/2722
Missing cache file
This could be solved by going to the glue/ directory and deleting the sst2 directory; loading the dataset again will then redownload it.
Strangely missing cache file after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json...
25
Missing cache file Strangely missing cache file after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d605...
[ -0.1489497423, -0.2841113508, -0.0903950706, 0.1460706294, 0.2751775384, 0.0493273586, 0.0770322233, 0.2098801583, 0.2618973255, 0.0594648272, -0.0856971145, 0.0969994366, 0.0986218452, 0.2517308593, 0.1167899668, -0.1429447681, -0.1188407987, 0.2136433572, -0.3757640719, 0.102...
https://github.com/huggingface/datasets/issues/2722
Missing cache file
Hi ! Not sure why this file was missing, but yes the way to fix this is to delete the sst2 directory and to reload the dataset
Strangely missing cache file after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d6053ad/dataset_info.json...
27
Missing cache file Strangely missing cache file after I restart my program again. `glue_dataset = datasets.load_dataset('glue', 'sst2')` `FileNotFoundError: [Errno 2] No such file or directory: /Users/chris/.cache/huggingface/datasets/glue/sst2/1.0.0/dacbe3125aa31d7f70367a07a8a9e72a5a0bfeb5fc42e75c9db75b96d605...
[ -0.0967948511, -0.2115667611, -0.0807494894, 0.1962110698, 0.3262524009, 0.2369839847, 0.0145860845, 0.2186019123, 0.304212302, 0.0247303285, 0.0359604806, 0.032418102, 0.0695072636, 0.0254835859, 0.1852320731, -0.1839708835, -0.0519990958, 0.2447131574, -0.2526427209, 0.113448...
https://github.com/huggingface/datasets/issues/2716
Calling shuffle on IterableDataset will disable batching in case any functions were mapped
Hi :) Good catch ! Feel free to open a PR if you want to contribute, this would be very welcome ;)
When using dataset in streaming mode, if one applies `shuffle` method on the dataset and `map` method for which `batched=True` then the batching operation will not happen, instead `batched` will be set to `False` I did RCA on the dataset codebase, the problem is emerging from [this line of code](https://github.com/h...
22
Calling shuffle on IterableDataset will disable batching in case any functions were mapped When using dataset in streaming mode, if one applies `shuffle` method on the dataset and `map` method for which `batched=True` then the batching operation will not happen, instead `batched` will be set to `False` I did RCA o...
[ -0.4098214507, -0.2538071871, 0.0164555945, -0.0311612934, 0.3398854136, -0.0579712801, 0.3023080826, 0.0674434528, -0.2175253928, 0.3773573935, -0.2310502827, 0.2912555337, -0.1922606826, 0.2657684088, -0.0942454338, 0.1088105589, 0.0774023533, 0.0824449956, -0.4071668684, -0....
https://github.com/huggingface/datasets/issues/2714
add more precise information for size
We already have this information in the dataset_infos.json files of each dataset. Maybe we can parse these files in the backend to return their content with the endpoint at huggingface.co/api/datasets For now if you want to access this info you have to load the json for each dataset. For example: - for a dataset o...
For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a regex for existing datasets.
71
add more precise information for size For the import into ELG, we would like a more precise description of the size of the dataset, instead of the current size categories. The size can be expressed in bytes, or any other preferred size unit. As suggested in the slack channel, perhaps this could be computed with a reg...
[ -0.0413543992, -0.5577721596, -0.1589673012, 0.3919744194, 0.2053748965, -0.0639246106, -0.0737591684, 0.0503647476, 0.1064482331, 0.0798682496, -0.4394704103, 0.088177681, -0.0656969026, 0.4548653364, 0.0609380528, 0.0388583206, -0.2295207083, -0.1126662195, -0.1492431611, -0....
https://github.com/huggingface/datasets/issues/2709
Missing documentation for wnut_17 (ner_tags)
Hi @maxpel, thanks for reporting this issue. Indeed, the documentation in the dataset card is not complete. I’m opening a Pull Request to fix it. As the paper explains, there are 6 entity types and we have ordered them alphabetically: `corporation`, `creative-work`, `group`, `location`, `person` and `product`. ...
On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2), B-creative-work (3), I-creative-work (4).` ...
145
Missing documentation for wnut_17 (ner_tags) On the info page of the wnut_17 data set (https://huggingface.co/datasets/wnut_17), the model output of ner-tags is only documented for these 5 cases: `ner_tags: a list of classification labels, with possible values including O (0), B-corporation (1), I-corporation (2),...
[ 0.2751238942, -0.297044456, 0.0208109058, 0.4673909843, -0.0559452139, 0.0520154238, -0.0117356014, -0.2991323173, -0.2569103837, -0.2186732292, 0.0983797833, 0.3813357651, -0.1071817279, 0.2050388753, 0.1961682737, -0.0721682757, 0.1684131324, -0.1887159348, 0.3061255813, -0.1...
https://github.com/huggingface/datasets/issues/2708
QASC: incomplete training set
Hi @danyaljj, thanks for reporting. Unfortunately, I have not been able to reproduce your problem. My train split has 8134 examples: ```ipython In [10]: ds["train"] Out[10]: Dataset({ features: ['id', 'question', 'choices', 'answerKey', 'fact1', 'fact2', 'combinedfact', 'formatted_question'], num_rows:...
## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"split: {split} - size: {len(instanc...
496
QASC: incomplete training set ## Describe the bug The training instances are not loaded properly. ## Steps to reproduce the bug ```python from datasets import load_dataset dataset = load_dataset("qasc", script_version='1.10.2') def load_instances(split): instances = dataset[split] print(f"sp...
[ -0.2450184077, -0.3615008295, -0.1671690941, 0.372361213, 0.1236954182, 0.1451466382, 0.016439151, 0.4380204082, -0.1213207021, 0.0934932008, 0.1632601768, 0.1249718964, 0.0430465713, 0.2471370548, -0.0943753719, -0.1647272557, -0.0069865733, 0.2966330945, -0.2305756658, -0.074...
https://github.com/huggingface/datasets/issues/2707
404 Not Found Error when loading LAMA dataset
Hi @dwil2444! I was able to reproduce your error when I downgraded to v1.1.2. Updating to the latest version of Datasets fixed the error for me :)
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ...
27
404 Not Found Error when loading LAMA dataset The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't...
[ 0.0275758803, -0.1798950136, -0.0416178107, 0.5950451493, 0.4188718498, 0.1276884824, -0.1021334827, 0.4045757353, -0.4222147763, 0.2916163504, -0.3122205436, 0.164151594, -0.1594457626, -0.2467196882, 0.0084441118, -0.3867283463, -0.1002723873, -0.0234838724, -0.2141504437, 0....
https://github.com/huggingface/datasets/issues/2707
404 Not Found Error when loading LAMA dataset
Hi @dwil2444, thanks for reporting. Could you please confirm which `datasets` version you were using and if the problem persists after you update it to the latest version: `pip install -U datasets`? Thanks @stevhliu for the hint to fix this! ;)
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ...
41
404 Not Found Error when loading LAMA dataset The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't...
[ 0.0376230478, -0.2442802936, -0.0386777967, 0.6305746436, 0.4072244167, 0.0883308351, -0.0716987699, 0.3489052057, -0.3947160244, 0.2646037936, -0.2870089412, 0.2205040008, -0.1573988348, -0.2962249219, 0.0108721294, -0.3568077385, -0.1149339825, -0.0169954747, -0.1815580726, 0...
https://github.com/huggingface/datasets/issues/2707
404 Not Found Error when loading LAMA dataset
@stevhliu @albertvillanova updating to the latest version of datasets did in fact fix this issue. Thanks a lot for your help!
The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't find file locally at lama/lama.py, or remotely ...
21
404 Not Found Error when loading LAMA dataset The [LAMA](https://huggingface.co/datasets/viewer/?dataset=lama) probing dataset is not available for download: Steps to Reproduce: 1. `from datasets import load_dataset` 2. `dataset = load_dataset('lama', 'trex')`. Results: `FileNotFoundError: Couldn't...
[ 0.0579445064, -0.2256723195, -0.0337309241, 0.6018184423, 0.3969443142, 0.1228494644, -0.0771399885, 0.3927092552, -0.428257823, 0.2528082728, -0.3340739906, 0.1994090974, -0.1437427104, -0.263453275, 0.0329602435, -0.3631293178, -0.1055586338, -0.0356406756, -0.1782314479, 0.1...
https://github.com/huggingface/datasets/issues/2705
404 not found error on loading WIKIANN dataset
Hi @ronbutan, thanks for reporting. You are right: we have recently found that the link to the original PAN-X dataset (also called WikiANN), hosted at Dropbox, is no longer working. We have opened an issue in the GitHub repository of the original dataset (afshinrahimi/mmner#4) and we have also contacted the autho...
## Describe the bug Unable to retrieve wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook should display successful download status ## Act...
94
404 not found error on loading WIKIANN dataset ## Describe the bug Unable to retrieve wikiann English dataset ## Steps to reproduce the bug ```python from datasets import list_datasets, load_dataset, list_metrics, load_metric WIKIANN = load_dataset("wikiann","en") ``` ## Expected results Colab notebook sh...
[ -0.1983267367, 0.0143740028, 0.002106335, 0.2360560298, 0.0338600874, 0.0631660745, 0.0542529933, 0.1971949935, -0.0232600905, 0.1844016463, -0.1990167499, 0.348231703, 0.1889101416, -0.035204798, 0.1387398243, -0.251796484, 0.0296934247, 0.2635378242, 0.0581072457, -0.09758893...
https://github.com/huggingface/datasets/issues/2700
from datasets import Dataset is failing
Hi @kswamy15, thanks for reporting. We are fixing this critical issue and making an urgent patch release of the `datasets` library today. In the meantime, you can circumvent this issue by updating the `tqdm` library: `!pip install -U tqdm`
## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Actual results Specify the actual results or...
39
from datasets import Dataset is failing ## Describe the bug A clear and concise description of what the bug is. ## Steps to reproduce the bug ```python # Sample code to reproduce the bug from datasets import Dataset ``` ## Expected results A clear and concise description of the expected results. ## Ac...
[ -0.4085452557, -0.1528587192, -0.1239811629, 0.0641507953, 0.1121255979, 0.1035050005, 0.3430989087, 0.223898232, -0.1365937889, 0.0548988692, -0.0883902758, 0.2876404226, 0.0480185971, 0.0611999705, -0.1352101564, 0.0198291466, -0.055388391, -0.010933172, -0.333774209, 0.12665...
https://github.com/huggingface/datasets/issues/2699
cannot combine splits merging and streaming?
Hi ! That's missing indeed. We'll try to implement this for the next version :) I guess we just need to implement #2564 first, and then we should be able to add support for splits combinations
this does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` with error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` these work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation')` `dataset = datasets.load_d...
36
cannot combine splits merging and streaming? this does not work: `dataset = datasets.load_dataset('mc4','iw',split='train+validation',streaming=True)` with error: `ValueError: Bad split: train+validation. Available splits: ['train', 'validation']` these work: `dataset = datasets.load_dataset('mc4','iw',split='...
[ -0.5081484318, -0.3373529315, -0.110279046, -0.0140301287, 0.033048708, 0.0741378367, 0.0743087083, 0.4854652286, -0.0663450211, 0.1295400113, -0.2676171958, 0.2315490842, -0.0514130816, 0.5110139251, 0.0302072465, -0.3258370459, 0.128616184, 0.0924143791, -0.3972887397, 0.2039...
https://github.com/huggingface/datasets/issues/2695
Cannot import load_dataset on Colab
I'm facing the same issue on Colab today too. ``` ModuleNotFoundError Traceback (most recent call last) <ipython-input-4-5833ac0f5437> in <module>() 3 4 from ray import tune ----> 5 from datasets import DatasetDict, Dataset 6 from datasets import load_dataset, load_metr...
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...
111
Cannot import load_dataset on Colab ## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On c...
[ -0.4603259563, -0.2148274332, -0.0243752003, 0.2682135403, 0.1354126334, 0.0519945472, 0.4973767996, -0.1074849591, 0.1983706802, 0.0849532038, -0.3266305327, 0.4753238261, -0.1742699891, 0.148748368, -0.2584455013, 0.0984705389, -0.0661862344, 0.0582034886, -0.2907032967, 0.09...
https://github.com/huggingface/datasets/issues/2695
Cannot import load_dataset on Colab
@phosseini I think it is related to [1.10.0](https://github.com/huggingface/datasets/actions/runs/1052653701) release done 3 hours ago. (cc: @lhoestq ) For now I just downgraded to 1.9.0 and it is working fine.
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...
28
Cannot import load_dataset on Colab ## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On c...
[ -0.4603259563, -0.2148274332, -0.0243752003, 0.2682135403, 0.1354126334, 0.0519945472, 0.4973767996, -0.1074849591, 0.1983706802, 0.0849532038, -0.3266305327, 0.4753238261, -0.1742699891, 0.148748368, -0.2584455013, 0.0984705389, -0.0661862344, 0.0582034886, -0.2907032967, 0.09...
https://github.com/huggingface/datasets/issues/2695
Cannot import load_dataset on Colab
> @phosseini > I think it is related to [1.10.0](https://github.com/huggingface/datasets/actions/runs/1052653701) release done 3 hours ago. (cc: @lhoestq ) > For now I just downgraded to 1.9.0 and it is working fine. Same here, downgraded to 1.9.0 for now and works fine.
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...
41
Cannot import load_dataset on Colab ## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On c...
[ -0.4603259563, -0.2148274332, -0.0243752003, 0.2682135403, 0.1354126334, 0.0519945472, 0.4973767996, -0.1074849591, 0.1983706802, 0.0849532038, -0.3266305327, 0.4753238261, -0.1742699891, 0.148748368, -0.2584455013, 0.0984705389, -0.0661862344, 0.0582034886, -0.2907032967, 0.09...
https://github.com/huggingface/datasets/issues/2695
Cannot import load_dataset on Colab
Hi, updating tqdm to the newest version resolves the issue for me. You can do this as follows in Colab: ``` !pip install tqdm --upgrade ```
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...
26
Cannot import load_dataset on Colab ## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On c...
[ -0.4603259563, -0.2148274332, -0.0243752003, 0.2682135403, 0.1354126334, 0.0519945472, 0.4973767996, -0.1074849591, 0.1983706802, 0.0849532038, -0.3266305327, 0.4753238261, -0.1742699891, 0.148748368, -0.2584455013, 0.0984705389, -0.0661862344, 0.0582034886, -0.2907032967, 0.09...
https://github.com/huggingface/datasets/issues/2695
Cannot import load_dataset on Colab
Hi @bayartsogt-ya and @phosseini, thanks for reporting. We are fixing this critical issue and making an urgent patch release of the `datasets` library today. In the meantime, as pointed out by @mariosasko, you can circumvent this issue by updating the `tqdm` library: ``` !pip install -U tqdm ```
## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On colab: ```python !pip install dataset...
48
Cannot import load_dataset on Colab ## Describe the bug Got tqdm concurrent module not found error during importing load_dataset from datasets. ## Steps to reproduce the bug Here [colab notebook](https://colab.research.google.com/drive/1pErWWnVP4P4mVHjSFUtkePd8Na_Qirg4?usp=sharing) to reproduce the error On c...
[ -0.4603259563, -0.2148274332, -0.0243752003, 0.2682135403, 0.1354126334, 0.0519945472, 0.4973767996, -0.1074849591, 0.1983706802, 0.0849532038, -0.3266305327, 0.4753238261, -0.1742699891, 0.148748368, -0.2584455013, 0.0984705389, -0.0661862344, 0.0582034886, -0.2907032967, 0.09...
https://github.com/huggingface/datasets/issues/2691
xtreme / pan-x cannot be downloaded
Hi @severo, thanks for reporting. However I have not been able to reproduce this issue. Could you please confirm if the problem persists for you? Maybe Dropbox (where the data source is hosted) was temporarily unavailable when you tried.
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...
39
xtreme / pan-x cannot be downloaded ## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Act...
[ -0.3526866436, -0.3867759705, -0.0510648787, 0.291082859, 0.2072109878, 0.0486670285, -0.1169928387, 0.2295709252, 0.1450732052, 0.0749083161, -0.2071777135, 0.2856612802, 0.0034075044, 0.0860849619, 0.1964893937, -0.2445529103, 0.0702191144, 0.1196100861, 0.0714931637, -0.0395...
https://github.com/huggingface/datasets/issues/2691
xtreme / pan-x cannot be downloaded
Hmmm, the file (https://www.dropbox.com/s/dl/12h3qqog6q4bjve/panx_dataset.tar) really seems to be unavailable... I tried from various connections and machines and got the same 404 error. Maybe the dataset was loaded from the cache in your case?
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...
34
xtreme / pan-x cannot be downloaded ## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Act...
[ -0.2679746151, -0.4418179989, -0.1176263914, 0.263651818, 0.2272869051, 0.1094407365, -0.1848930418, 0.2379710227, 0.1063499004, 0.1472175121, -0.1563760936, 0.2612071037, 0.0340305939, 0.0054813251, 0.1887654364, -0.2929283977, -0.0231134742, 0.1571929902, 0.0854804963, -0.012...
https://github.com/huggingface/datasets/issues/2691
xtreme / pan-x cannot be downloaded
Yes @severo, weird... I could access the file when I answered you, but now I can no longer access it either... Maybe it was from the cache as you point out. Anyway, I have opened an issue in the GitHub repository responsible for the original dataset: https://github.com/afshinrahimi/mmner/issues/4 I have also con...
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...
62
xtreme / pan-x cannot be downloaded ## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Act...
[ -0.318205893, -0.3272899091, -0.1103757918, 0.2835793495, 0.2328959405, 0.0658762902, -0.0919085741, 0.1860668808, 0.046389997, 0.240070343, -0.2260717154, 0.2097328007, -0.0343381651, -0.1489106119, 0.1411538124, -0.2330801934, -0.0239752196, 0.0649311095, 0.0869291723, -0.003...
https://github.com/huggingface/datasets/issues/2691
xtreme / pan-x cannot be downloaded
Reply from the author/maintainer: > Will fix the issue and let you know during the weekend.
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...
16
xtreme / pan-x cannot be downloaded ## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Act...
[ -0.3508207798, -0.4881739318, -0.101257056, 0.267937392, 0.2424273491, 0.0743880644, -0.1332193017, 0.2452752888, 0.1442700624, 0.1187146083, -0.243665278, 0.2646328807, 0.0461624637, 0.0465014353, 0.1917270273, -0.3267599642, 0.0447653159, 0.1279419214, 0.0111562684, -0.037644...
https://github.com/huggingface/datasets/issues/2691
xtreme / pan-x cannot be downloaded
The author told that apparently Dropbox has changed their policy and no longer allow downloading the file without having signed in first. The author asked Hugging Face to host their dataset.
## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Actual results ``` FileNotFoundError:...
31
xtreme / pan-x cannot be downloaded ## Describe the bug Dataset xtreme / pan-x cannot be loaded Seems related to https://github.com/huggingface/datasets/pull/2326 ## Steps to reproduce the bug ```python dataset = load_dataset("xtreme", "PAN-X.fr") ``` ## Expected results Load the dataset ## Act...
[ -0.2759370208, -0.3480379581, -0.059366826, 0.3470795453, 0.1086417213, 0.0554848909, 0.0214383323, 0.2132671475, 0.3107885122, 0.0681138337, -0.2372270226, 0.2172873318, -0.0742673278, 0.1829396039, 0.2306468189, -0.1670993567, 0.0182072297, 0.0383959413, 0.1040742844, -0.1059...
https://github.com/huggingface/datasets/issues/2689
cannot save the dataset to disk after rename_column
Hi ! That's because you are trying to overwrite a file that is already open and being used. Indeed `foo/dataset.arrow` is open and used by your `dataset` object. When you do `rename_column`, the resulting dataset reads the data from the same arrow file. In other cases like when using `map` on the other hand, the r...
## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from_disk In [5]: dataset=Dataset.from_dict({'foo': [0]})...
102
cannot save the dataset to disk after rename_column ## Describe the bug If you use `rename_column` and do no other modification, you will be unable to save the dataset using `save_to_disk` ## Steps to reproduce the bug ```python # Sample code to reproduce the bug In [1]: from datasets import Dataset, load_from...
[ -0.0523220971, 0.2342097461, -0.0231683105, -0.0133466832, 0.3893547952, 0.2707955241, 0.4929158092, 0.2463015914, 0.0219601467, 0.1201095432, -0.0814651996, 0.4607103467, -0.171995163, -0.1512237042, -0.013640916, -0.0714223981, 0.2988215685, -0.1098153889, -0.0071084835, 0.15...
https://github.com/huggingface/datasets/issues/2688
hebrew language codes he and iw should be treated as aliases
Hi @eyaler, thanks for reporting. While you are right with respect to the Hebrew language tag ("iw" is deprecated and "he" is the preferred value), in the "mc4" dataset (which is a derived dataset) we have kept the language tags present in the original dataset: [Google C4](https://www.tensorflow.org/datasets/catalog/c4)...
https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability.
46
hebrew language codes he and iw should be treated as aliases https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. Hi @eyaler, thanks for reporting. While you are right with respect to the Hebrew language tag ("iw" i...
[ -0.1847827882, -0.0222233683, -0.1543210149, 0.0329203457, 0.0282864012, 0.0720352605, 0.5509449244, 0.3117236197, 0.0522167981, 0.104304567, -0.3407955468, -0.2625411749, 0.0178136863, 0.0098184794, 0.1778061241, 0.1035927162, 0.076326929, 0.0028085522, 0.0307026803, -0.259934...
https://github.com/huggingface/datasets/issues/2688
hebrew language codes he and iw should be treated as aliases
For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https://github.com/huggingface/datasets/commit/38288087b1b02f97586e0346e8f28f4960f1fd37 Once the website is updated, mC4 will be listed in https://huggingface.co/datasets?filter=languages:he
https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability.
30
hebrew language codes he and iw should be treated as aliases https://huggingface.co/datasets/mc4 not listed when searching for hebrew datasets (he) as it uses the older language code iw, preventing discoverability. For discoverability on the website I updated the YAML tags at the top of the mC4 dataset card https:...
[ -0.1667794287, -0.0835686401, -0.0892026424, 0.0611216649, 0.2324796915, -0.0180356503, 0.2219811082, 0.4122387767, -0.015884392, -0.018412739, -0.3140927255, -0.1698120683, 0.0033469424, 0.2545409501, 0.1932531893, 0.1975349039, 0.0163774379, -0.0476838984, -0.0980866104, -0.1...
https://github.com/huggingface/datasets/issues/2681
5 duplicate datasets
Yes this was documented in the PR that added this hf->paperswithcode mapping (https://github.com/huggingface/datasets/pull/2404) and AFAICT those are slightly distinct datasets so I think it's a wontfix For context on the paperswithcode mapping you can also refer to https://github.com/huggingface/huggingface_hub/pul...
## Describe the bug In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are: - https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch <img width="838...
45
5 duplicate datasets ## Describe the bug In 5 cases, I could find a dataset on Paperswithcode which references two Hugging Face datasets as dataset loaders. They are: - https://paperswithcode.com/dataset/multinli -> https://huggingface.co/datasets/multi_nli and https://huggingface.co/datasets/multi_nli_mismatch...
[ 0.0702891648, -0.0303931981, 0.0002650805, 0.3026075661, 0.1615349799, 0.022487171, 0.3812553883, 0.1487316489, 0.0632143766, 0.2060804814, -0.1454353333, 0.0585724674, 0.1328157187, -0.0630715787, 0.071940504, -0.0062968424, 0.24742423, -0.1418849975, -0.2275307775, -0.0822187...
https://github.com/huggingface/datasets/issues/2679
Cannot load the blog_authorship_corpus due to codec errors
Hi @izaskr, thanks for reporting. However the traceback you joined does not correspond to the codec error message: it is about another error, `NonMatchingSplitsSizesError`. Maybe you missed some important part of your traceback... I'm going to have a look at the dataset anyway...
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error simila...
43
Cannot load the blog_authorship_corpus due to codec errors ## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the ...
[ -0.208489418, 0.478081286, -0.0540075488, 0.3998488188, 0.3617143929, 0.2268405855, 0.0785167441, 0.4288443327, -0.1586730033, 0.2435749173, 0.0468285158, 0.2679275274, -0.066664651, -0.2108721435, 0.0490836129, 0.0175640192, -0.0882997215, 0.1426829696, 0.2282302082, -0.234280...
https://github.com/huggingface/datasets/issues/2679
Cannot load the blog_authorship_corpus due to codec errors
Hi @izaskr, thanks again for having reported this issue. After investigation, I have created a Pull Request (#2685) to fix several issues with this dataset: - the `NonMatchingSplitsSizesError` - the `UnicodeDecodeError` Once the Pull Request merged into master, you will be able to load this dataset if you insta...
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error simila...
75
Cannot load the blog_authorship_corpus due to codec errors ## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the ...
[ -0.208489418, 0.478081286, -0.0540075488, 0.3998488188, 0.3617143929, 0.2268405855, 0.0785167441, 0.4288443327, -0.1586730033, 0.2435749173, 0.0468285158, 0.2679275274, -0.066664651, -0.2108721435, 0.0490836129, 0.0175640192, -0.0882997215, 0.1426829696, 0.2282302082, -0.234280...
https://github.com/huggingface/datasets/issues/2679
Cannot load the blog_authorship_corpus due to codec errors
@albertvillanova Can you shed light on how this fix works? We're experiencing a similar issue. If we run several runs (eg in a Wandb sweep) the first run "works" but then we get `NonMatchingSplitsSizesError` | run num | actual train examples # | expected example # | recorded example # | | ------- | -------...
## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the dataset without errors. ## Actual results An error simila...
135
Cannot load the blog_authorship_corpus due to codec errors ## Describe the bug A codec error is raised while loading the blog_authorship_corpus. ## Steps to reproduce the bug ``` from datasets import load_dataset raw_datasets = load_dataset("blog_authorship_corpus") ``` ## Expected results Loading the ...
[ -0.208489418, 0.478081286, -0.0540075488, 0.3998488188, 0.3617143929, 0.2268405855, 0.0785167441, 0.4288443327, -0.1586730033, 0.2435749173, 0.0468285158, 0.2679275274, -0.066664651, -0.2108721435, 0.0490836129, 0.0175640192, -0.0882997215, 0.1426829696, 0.2282302082, -0.234280...
https://github.com/huggingface/datasets/issues/2678
Import Error in Kaggle notebook
@lhoestq I did, and then let pip handle the installation in `pip import datasets`. I also tried using conda but it gives the same error. Edit: pyarrow version on kaggle is 4.0.0, it gets replaced with 4.0.1. So, I don't think uninstalling will change anything. ``` Install Trace of datasets: Collecting datasets ...
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-inp...
322
Import Error in Kaggle notebook ## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (mo...
[ -0.3157348037, -0.0831312016, -0.1109721437, 0.1658833772, 0.17944552, -0.022079356, 0.320815146, 0.2708317637, -0.0284398645, -0.052530393, -0.1436257511, 0.8592841029, 0.0922809392, 0.2378657162, 0.020910196, -0.0479101799, 0.1613056511, 0.2256401032, -0.0434860885, 0.0514690...
https://github.com/huggingface/datasets/issues/2678
Import Error in Kaggle notebook
You may need to restart your kaggle notebook after installing a newer version of `pyarrow`. If it doesn't work we'll probably have to create an issue on [arrow's JIRA](https://issues.apache.org/jira/projects/ARROW/issues/), and maybe ask kaggle why it could fail
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-inp...
37
Import Error in Kaggle notebook ## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (mo...
[ -0.3392024338, -0.0755516142, -0.1140920669, 0.1258437335, 0.2003508508, 0.0130887926, 0.3512435853, 0.3286039233, -0.0306174345, -0.0346202627, -0.1804618537, 0.8273152113, 0.0816758871, 0.2334335744, 0.0459382273, -0.0699088797, 0.1140711457, 0.2013816088, -0.0490401797, 0.07...
https://github.com/huggingface/datasets/issues/2678
Import Error in Kaggle notebook
> You may need to restart your kaggle notebook before after installing a newer version of `pyarrow`. > > If it doesn't work we'll probably have to create an issue on [arrow's JIRA](https://issues.apache.org/jira/projects/ARROW/issues/), and maybe ask kaggle why it could fail It works after restarting. My bad, I ...
## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (most recent call last) <ipython-inp...
57
Import Error in Kaggle notebook ## Describe the bug Not able to import datasets library in kaggle notebooks ## Steps to reproduce the bug ```python !pip install datasets import datasets ``` ## Expected results No such error ## Actual results ``` ImportError Traceback (mo...
[ -0.3438676298, -0.0770803168, -0.1142193228, 0.1320916563, 0.1916217655, 0.0109463334, 0.3406415582, 0.3310460448, -0.0405511335, -0.0315703973, -0.1770029962, 0.825958252, 0.078407079, 0.2167554498, 0.0422965437, -0.0751287416, 0.1151350141, 0.2060001791, -0.0540337041, 0.0684...
https://github.com/huggingface/datasets/issues/2677
Error when downloading C4
Hi Thanks for reporting ! It looks like these files are not correctly reported in the list of expected files to download, let me fix that ;)
Hi, I am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug or do I have some configurations missing on my server? Thanks! <img width="1014" alt="Снимок экрана 2...
27
Error when downloading C4 Hi, I am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug or do I have some configurations missing on my server? Thanks! <img width...
[ 0.0968456864, -0.0686113611, -0.053906925, 0.4750130177, 0.2311238945, 0.2395801842, 0.0058163237, 0.1676318944, -0.0170364641, -0.1058177575, -0.0259849541, -0.2992730141, 0.1618456095, -0.0318009667, 0.0029826164, -0.1193321198, 0.1358549595, 0.0848611891, -0.1391200274, -0.3...
https://github.com/huggingface/datasets/issues/2677
Error when downloading C4
Alright this is fixed now. We'll do a new release soon to make the fix available. In the meantime feel free to simply pass `ignore_verifications=True` to `load_dataset` to skip this error
Hi, I am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug or do I have some configurations missing on my server? Thanks! <img width="1014" alt="Снимок экрана 2...
31
Error when downloading C4 Hi, I am trying to download `en` corpus from C4 dataset. However, I get an error caused by validation files download (see image). My code is very primitive: `datasets.load_dataset('c4', 'en')` Is this a bug or do I have some configurations missing on my server? Thanks! <img width...
[ 0.0360016599, -0.0390105695, -0.0446486734, 0.4204435349, 0.2243453711, 0.2426010221, -0.0268923044, 0.1530847847, -0.067271091, -0.0291625112, 0.0164736211, -0.2508125305, 0.1566775292, 0.0499307066, 0.0004603267, -0.0572997704, 0.1233928949, 0.0768574625, -0.1300052106, -0.29...
https://github.com/huggingface/datasets/issues/2669
Metric kwargs are not passed to underlying external metric f1_score
Hi @BramVanroy, thanks for reporting. First, note that `"min"` is not an allowed value for `average`. According to scikit-learn [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html), `average` can only take the values: `{"micro", "macro", "samples", "weighted", "binary"} or...
## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so. ## Steps to...
96
Metric kwargs are not passed to underlying external metric f1_score ## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklear...
[ -0.0516289249, -0.5501687527, 0.0873142257, 0.2038567066, 0.4681910276, -0.0643794686, 0.1748597473, -0.1953884065, 0.3451758623, 0.282897383, 0.067913413, 0.4462167025, 0.1783717573, 0.0242550001, -0.0182034075, 0.0616981536, 0.105803974, -0.3587127328, -0.0846230909, -0.29872...
https://github.com/huggingface/datasets/issues/2669
Metric kwargs are not passed to underlying external metric f1_score
Thanks, that was it. A bit strange though, since `load_metric` had an argument `metric_init_kwargs`. I assume that that's for specific initialisation arguments whereas `average` is for the function itself.
## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.f1_score.html) throws an error telling me so. ## Steps to...
29
Metric kwargs are not passed to underlying external metric f1_score ## Describe the bug When I want to use F1 score with average="min", this keyword argument does not seem to be passed through to the underlying sklearn metric. This is evident because [sklearn](https://scikit-learn.org/stable/modules/generated/sklear...
[ -0.0516289249, -0.5501687527, 0.0873142257, 0.2038567066, 0.4681910276, -0.0643794686, 0.1748597473, -0.1953884065, 0.3451758623, 0.282897383, 0.067913413, 0.4462167025, 0.1783717573, 0.0242550001, -0.0182034075, 0.0616981536, 0.105803974, -0.3587127328, -0.0846230909, -0.29872...
https://github.com/huggingface/datasets/issues/2663
[`to_json`] add multi-proc sharding support
Hi @stas00, I want to work on this issue and I was thinking why don't we use `imap` [in this loop](https://github.com/huggingface/datasets/blob/440b14d0dd428ae1b25881aa72ba7bbb8ad9ff84/src/datasets/io/json.py#L99)? This way, using offset (which is being used to slice the pyarrow table) we can convert pyarrow table t...
As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-proc support could be implemented internally i...
139
[`to_json`] add multi-proc sharding support As discussed on slack it appears that `to_json` is quite slow on huge datasets like OSCAR. I implemented sharded saving, which is much much faster - but the tqdm bars all overwrite each other, so it's hard to make sense of the progress, so if possible ideally this multi-...
[ -0.2420468032, -0.3245051801, -0.0334172547, -0.0440206677, -0.0977221504, -0.0860116184, 0.4022949934, 0.1104016155, -0.0120914951, 0.3323679566, -0.039944265, 0.2764454484, -0.1313709319, 0.2204847932, -0.23964338, -0.0810554102, 0.140750885, -0.1243920326, 0.4335381985, 0.18...
https://github.com/huggingface/datasets/issues/2655
Allow the selection of multiple columns at once
Hi! I was looking into this and hope you can clarify a point. Your my_dataset variable would be of type DatasetDict which means the alternative you've described (dict comprehension) is what makes sense. Is there a reason why you wouldn't want to convert my_dataset to a pandas df if you'd like to use it like one? Plea...
**Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] idx, label = my_dataset[['idx', 'label']] ``` **...
64
Allow the selection of multiple columns at once **Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] id...
[ -0.057370469, -0.2296809852, -0.1969854087, 0.0512580238, 0.201422736, 0.2186502963, 0.5181807876, 0.1173711792, 0.3343809545, 0.408284843, -0.1473215669, 0.3830782473, 0.000723416, 0.219599694, -0.2610398233, -0.3490300477, -0.1463273168, 0.1411355734, 0.0857271925, 0.01358710...
https://github.com/huggingface/datasets/issues/2655
Allow the selection of multiple columns at once
Hi! Sorry for the delay. In this case, the dataset would be a `datasets.Dataset` and we want to select multiple columns, the `idx` and `label` columns for example. My issue is that my dataset is too big for memory if I load everything into pandas.
**Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] idx, label = my_dataset[['idx', 'label']] ``` **...
45
Allow the selection of multiple columns at once **Is your feature request related to a problem? Please describe.** Similar to pandas, it would be great if we could select multiple columns at once. **Describe the solution you'd like** ```python my_dataset = ... # Has columns ['idx', 'sentence', 'label'] id...
[ -0.1123059466, -0.3079829812, -0.1617852449, 0.1745088398, 0.2768076658, 0.2954529524, 0.4830071926, 0.1078623235, 0.2709565163, 0.4354280233, -0.1071548685, 0.1811608076, -0.0057085603, 0.1110234484, -0.120576337, -0.4149236381, -0.1460115314, 0.1239383146, 0.1093642861, 0.160...
https://github.com/huggingface/datasets/issues/2654
Give a user feedback if the dataset he loads is streamable or not
I understand it already raises a `NotImplementedError` exception, eg: ``` >>> dataset = load_dataset("journalists_questions", name="plain_text", split="train", streaming=True) [...] NotImplementedError: Extraction protocol for file at https://drive.google.com/uc?export=download&id=1CBrh-9OrSpKmPQBxTK_ji6mq6WTN_...
**Is your feature request related to a problem? Please describe.** I would love to know if a `dataset` is with the current implementation streamable or not. **Describe the solution you'd like** We could show a warning when a dataset is loaded with `load_dataset('...',streaming=True)` when its lot streamable, e.g....
30
Give a user feedback if the dataset he loads is streamable or not **Is your feature request related to a problem? Please describe.** I would love to know if a `dataset` is with the current implementation streamable or not. **Describe the solution you'd like** We could show a warning when a dataset is loaded wit...
[ -0.3121465147, 0.1084257066, -0.0998006538, 0.0617052652, 0.1377475411, -0.1033983454, 0.1825838685, 0.2757681906, -0.0348223597, 0.2482773066, 0.2372249961, 0.2212014794, -0.4187546074, 0.2228206396, -0.2498128712, -0.1057738364, -0.1940760016, 0.226680398, 0.1973604709, 0.024...
https://github.com/huggingface/datasets/issues/2653
Add SD task for SUPERB
Note that this subset requires us to: * generate the LibriMix corpus from LibriSpeech * prepare the corpus for diarization As suggested by @lhoestq we should perform these steps locally and add the prepared data to this public repo on the Hub: https://huggingface.co/datasets/superb/superb-data Then we can use...
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Up...
94
Add SD task for SUPERB Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus ...
[ -0.2387331128, -0.1557698846, 0.0066518299, 0.111236304, 0.3777110279, -0.1991260946, 0.0082019996, -0.1370200962, 0.1117522866, 0.3464582264, -0.3380064964, 0.4965120554, -0.1501334161, 0.4443325102, 0.2138429433, 0.2012204826, 0.0950581655, 0.20730488, -0.2991645336, 0.051345...
https://github.com/huggingface/datasets/issues/2653
Add SD task for SUPERB
@lewtun @lhoestq: I have already generated the LibriMix corpus and prepared the corpus for diarization. The output is 3 dirs (train, dev, test), each one containing 6 files: reco2dur rttm segments spk2utt utt2spk wav.scp Next steps: - Upload these files to the superb-data repo - Transcribe the correspondi...
Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus for diarization - [x] Up...
73
Add SD task for SUPERB Include the SD (Speaker Diarization) task as described in the [SUPERB paper](https://arxiv.org/abs/2105.01051) and `s3prl` [instructions](https://github.com/s3prl/s3prl/tree/master/s3prl/downstream#sd-speaker-diarization). Steps: - [x] Generate the LibriMix corpus - [x] Prepare the corpus ...
[ -0.23063232, -0.3324864805, -0.0547317863, 0.0661656037, 0.3352734447, -0.2812388241, 0.0837418735, -0.0611492172, -0.1144543886, 0.4088609219, -0.4140785933, 0.4271570146, -0.1571260393, 0.3467776179, 0.1571533084, 0.0596580617, 0.0408407971, 0.2455734015, -0.3144541681, -0.04...
https://github.com/huggingface/datasets/issues/2651
Setting log level higher than warning does not suppress progress bar
Hi, you can suppress progress bars by patching logging as follows: ```python import datasets import logging datasets.logging.get_verbosity = lambda: logging.NOTSET # map call ... ```
## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0. I also tried to set `DATASETS_VERBOS...
25
Setting log level higher than warning does not suppress progress bar ## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't ...
[ -0.4419383407, -0.1637801528, 0.0876798481, -0.1625062525, 0.1394932866, -0.0227063373, 0.429366529, 0.2288745642, 0.0989457145, 0.1445457786, 0.1754807532, 0.6159744263, -0.1407672614, 0.1008446366, -0.1938808709, 0.1940040886, 0.0229356252, 0.0618274137, 0.0973696783, -0.0051...
https://github.com/huggingface/datasets/issues/2651
Setting log level higher than warning does not suppress progress bar
Note also that you can disable the progress bar with ```python from datasets.utils import disable_progress_bar disable_progress_bar() ``` See https://github.com/huggingface/datasets/blob/8814b393984c1c2e1800ba370de2a9f7c8644908/src/datasets/utils/tqdm_utils.py#L84
## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't suppress it with version 1.9.0. I also tried to set `DATASETS_VERBOS...
19
Setting log level higher than warning does not suppress progress bar ## Describe the bug I would like to disable progress bars for `.map` method (and other methods like `.filter` and `load_dataset` as well). According to #1627 one can suppress it by setting log level higher than `warning`, however doing so doesn't ...
[ -0.438827008, -0.2037364393, 0.0796427131, -0.1522615105, 0.1692188829, -0.015966028, 0.4777558148, 0.2021973878, 0.0318350419, 0.1475354284, 0.1682004929, 0.5794956684, -0.1528973132, 0.1000311077, -0.1835139692, 0.1944965124, 0.0263120346, 0.0424373709, 0.0457617752, 0.000441...
https://github.com/huggingface/datasets/issues/2646
downloading of yahoo_answers_topics dataset failed
Hi ! I just tested and it worked fine today for me. I think this is because the dataset is stored on Google Drive which has a quota limit for the number of downloads per day, see this similar issue https://github.com/huggingface/datasets/issues/996 Feel free to try again today, now that the quota was reset
## Describe the bug I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset ## Steps to reproduce the bug self.dataset = load_dataset( 'yahoo_answers_topics', cache_dir=self.config...
53
downloading of yahoo_answers_topics dataset failed ## Describe the bug I get an error datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files when I try to download the yahoo_answers_topics dataset ## Steps to reproduce the bug self.dataset = load_dataset( ...
[ -0.4173350036, 0.1438481808, -0.0522664227, 0.2103923708, 0.2381462604, -0.0566485561, 0.2377114296, 0.2905799448, 0.1442448199, 0.0573152006, -0.0823222399, -0.0347228162, 0.0095935017, 0.2967810035, -0.1141854227, 0.1532197446, 0.120711863, -0.242334187, -0.3290198743, 0.1872...
https://github.com/huggingface/datasets/issues/2645
load_dataset processing failed with OS error after downloading a dataset
Hi ! It looks like an issue with pytorch. Could you try to run `import torch` and see if it raises an error ?
## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets import load_dataset this_dataset = load_dataset('opus100', 'af-en') ``` ...
24
load_dataset processing failed with OS error after downloading a dataset ## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets...
[ -0.4430500865, 0.2031197101, -0.0454364792, 0.4847755134, 0.3039823174, -0.0233754888, 0.2349692732, 0.3660461605, -0.1082285941, 0.092752628, -0.1625686586, 0.4678146839, -0.0074347109, -0.0836083815, -0.0174172278, -0.1323250532, 0.0387219936, 0.2778638899, -0.6045758128, -0....
https://github.com/huggingface/datasets/issues/2645
load_dataset processing failed with OS error after downloading a dataset
> Hi ! It looks like an issue with pytorch. > > Could you try to run `import torch` and see if it raises an error ? It works. Thank you!
## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets import load_dataset this_dataset = load_dataset('opus100', 'af-en') ``` ...
31
load_dataset processing failed with OS error after downloading a dataset ## Describe the bug After downloading a dataset like opus100, there is a bug that OSError: Cannot find data file. Original error: dlopen: cannot load any more object with static TLS ## Steps to reproduce the bug ```python from datasets...
[ -0.4430500865, 0.2031197101, -0.0454364792, 0.4847755134, 0.3039823174, -0.0233754888, 0.2349692732, 0.3660461605, -0.1082285941, 0.092752628, -0.1625686586, 0.4678146839, -0.0074347109, -0.0836083815, -0.0174172278, -0.1323250532, 0.0387219936, 0.2778638899, -0.6045758128, -0....
https://github.com/huggingface/datasets/issues/2644
Batched `map` not allowed to return 0 items
Hi ! Thanks for reporting. Indeed it looks like type inference makes it fail. We should probably just ignore this step until a non-empty batch is passed.
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
27
Batched `map` not allowed to return 0 items ## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa...
[ -0.2328359336, -0.3903887272, -0.0444568731, 0.1644899547, -0.1085450575, -0.0687687993, 0.1163393706, 0.4056321383, 0.6400416493, 0.1178811789, -0.0565716475, 0.2460853457, -0.3954414129, 0.1043546125, -0.1629213691, 0.1608028412, -0.0449333005, 0.1405792236, -0.1691173315, -0...
https://github.com/huggingface/datasets/issues/2644
Batched `map` not allowed to return 0 items
Sounds good! Do you want me to propose a PR? I'm quite busy right now, but if it's not too urgent I could take a look next week.
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
28
Batched `map` not allowed to return 0 items ## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa...
[ -0.2328359336, -0.3903887272, -0.0444568731, 0.1644899547, -0.1085450575, -0.0687687993, 0.1163393706, 0.4056321383, 0.6400416493, 0.1178811789, -0.0565716475, 0.2460853457, -0.3954414129, 0.1043546125, -0.1629213691, 0.1608028412, -0.0449333005, 0.1405792236, -0.1691173315, -0...
https://github.com/huggingface/datasets/issues/2644
Batched `map` not allowed to return 0 items
Sure if you're interested feel free to open a PR :) You can also ping me anytime if you have questions or if I can help !
## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingface.co/docs/datasets/processing.html#augmenting...
27
Batched `map` not allowed to return 0 items ## Describe the bug I'm trying to use `map` to filter a large dataset by selecting rows that match an expensive condition (files referenced by one of the columns need to exist in the filesystem, so we have to `stat` them). According to [the documentation](https://huggingfa...
[ -0.2328359336, -0.3903887272, -0.0444568731, 0.1644899547, -0.1085450575, -0.0687687993, 0.1163393706, 0.4056321383, 0.6400416493, 0.1178811789, -0.0565716475, 0.2460853457, -0.3954414129, 0.1043546125, -0.1629213691, 0.1608028412, -0.0449333005, 0.1405792236, -0.1691173315, -0...