| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (int64) | updated_at (int64) | closed_at (int64 ⌀) | author_association (string) | active_lock_reason (null) | pull_request (dict) | body (string ⌀) | timeline_url (string) | performed_via_github_app (null) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/724 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/724/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/724/comments | https://api.github.com/repos/huggingface/datasets/issues/724/events | https://github.com/huggingface/datasets/issues/724 | 718,947,700 | MDU6SXNzdWU3MTg5NDc3MDA= | 724 | need to redirect /nlp to /datasets and remove outdated info | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [] | closed | false | null | [] | null | [
"Should be fixed now: \r\n\r\n\r\n\r\nNot sure I understand what you mean by the second part?\r\n",
"Thank you!\r\n\r\n> Not sure I understand what you mean by the second part?\r\n\r\nCompare the 2:\r\n* htt... | 1,602,457,932,000 | 1,602,694,812,000 | 1,602,694,812,000 | CONTRIBUTOR | null | null | It looks like the website still has all the `nlp` data, e.g.: https://huggingface.co/nlp/viewer/?dataset=wikihow&config=all
should probably redirect to: https://huggingface.co/datasets/wikihow
also for some reason the new information is slightly borked. If you look at the old one it was nicely formatted and had t... | https://api.github.com/repos/huggingface/datasets/issues/724/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/723 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/723/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/723/comments | https://api.github.com/repos/huggingface/datasets/issues/723/events | https://github.com/huggingface/datasets/issues/723 | 718,926,723 | MDU6SXNzdWU3MTg5MjY3MjM= | 723 | Adding pseudo-labels to datasets | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/ss... | [] | closed | false | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/ss... | [
{
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api... | null | [
"Nice ! :)\r\nIt's indeed the first time we have such contributions so we'll have to figure out the appropriate way to integrate them.\r\nCould you add details on what they could be used for ?\r\n",
"They can be used as training data for a smaller model.",
"Sounds just like a regular dataset to me then, no?",
... | 1,602,450,345,000 | 1,627,967,511,000 | 1,627,967,511,000 | MEMBER | null | null | I recently [uploaded pseudo-labels](https://github.com/huggingface/transformers/blob/master/examples/seq2seq/precomputed_pseudo_labels.md) for CNN/DM, XSUM and WMT16-en-ro to s3, and thom mentioned I should add them to this repo.
Since pseudo-labels are just a large model's generations on an existing dataset, what is ... | https://api.github.com/repos/huggingface/datasets/issues/723/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/722 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/722/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/722/comments | https://api.github.com/repos/huggingface/datasets/issues/722/events | https://github.com/huggingface/datasets/pull/722 | 718,689,117 | MDExOlB1bGxSZXF1ZXN0NTAxMDI3NjAw | 722 | datasets(RWTH-PHOENIX-Weather 2014 T): add initial loading script | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [] | open | false | null | [] | null | [
"This might be interesting to @kayoyin the author of https://github.com/kayoyin/transformer-slt – pinging you just in case :)",
"Thanks Amit, this is a great idea! I'm thinking of porting the SLT models from my paper here as well, having this dataset would be perfect for that :)",
"Thanks for your contribution,... | 1,602,359,048,000 | 1,609,830,411,000 | null | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/722",
"html_url": "https://github.com/huggingface/datasets/pull/722",
"diff_url": "https://github.com/huggingface/datasets/pull/722.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/722.patch"
} | This is the first sign language dataset in this repo as far as I know.
This follows up on an old issue I opened: https://github.com/huggingface/datasets/issues/302.
I added the dataset's official README file, but I see it's not very standard, so it can be removed.
| https://api.github.com/repos/huggingface/datasets/issues/722/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/721 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/721/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/721/comments | https://api.github.com/repos/huggingface/datasets/issues/721/events | https://github.com/huggingface/datasets/issues/721 | 718,647,147 | MDU6SXNzdWU3MTg2NDcxNDc= | 721 | feat(dl_manager): add support for ftp downloads | {
"login": "AmitMY",
"id": 5757359,
"node_id": "MDQ6VXNlcjU3NTczNTk=",
"avatar_url": "https://avatars.githubusercontent.com/u/5757359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/AmitMY",
"html_url": "https://github.com/AmitMY",
"followers_url": "https://api.github.com/users/AmitMY/foll... | [] | open | false | null | [] | null | [
"We only support http by default for downloading.\r\nIf you really need to use ftp, then feel free to use a library that allows to download through ftp in your dataset script (I see that you've started working on #722 , that's awesome !). The users will get a message to install the extra library when they load the ... | 1,602,345,020,000 | 1,603,531,473,000 | null | CONTRIBUTOR | null | null | I am working on a new dataset (#302) and encounter a problem downloading it.
```python
# This is the official download link from https://www-i6.informatik.rwth-aachen.de/~koller/RWTH-PHOENIX-2014-T/
_URL = "ftp://wasserstoff.informatik.rwth-aachen.de/pub/rwth-phoenix/2016/phoenix-2014-T.v3.tar.gz"
dl_manager.do... | https://api.github.com/repos/huggingface/datasets/issues/721/timeline | null | false |
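# A minimal sketch of the maintainer's suggestion above: fetch the ftp:// URL
# directly in the dataset script using only the standard library, since
# urllib.request handles the FTP scheme natively. The helper name and the
# local filename are assumptions for illustration.
import urllib.request

def download_via_ftp(url, local_path):
    # urlretrieve streams the remote file to local_path and returns the path
    filename, _headers = urllib.request.urlretrieve(url, local_path)
    return filename

# archive_path = download_via_ftp(_URL, "phoenix-2014-T.v3.tar.gz")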
https://api.github.com/repos/huggingface/datasets/issues/720 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/720/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/720/comments | https://api.github.com/repos/huggingface/datasets/issues/720/events | https://github.com/huggingface/datasets/issues/720 | 716,581,266 | MDU6SXNzdWU3MTY1ODEyNjY= | 720 | OSError: Cannot find data file when not using the dummy dataset in RAG | {
"login": "josemlopez",
"id": 4112135,
"node_id": "MDQ6VXNlcjQxMTIxMzU=",
"avatar_url": "https://avatars.githubusercontent.com/u/4112135?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/josemlopez",
"html_url": "https://github.com/josemlopez",
"followers_url": "https://api.github.com/users... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Same issue here. I will be digging further, but it looks like the [script](https://github.com/huggingface/datasets/blob/master/datasets/wiki_dpr/wiki_dpr.py#L132) is attempting to open a file that is not downloaded yet. \r\n\r\n```\r\n99dcbca09109e58502e6b9271d4d3f3791b43f61f3161a76b25d2775ab1a4498.lock\r\n```\r\n... | 1,602,080,833,000 | 1,608,732,271,000 | 1,608,732,271,000 | NONE | null | null | ## Environment info
transformers version: 3.3.1
Platform: Linux-4.19
Python version: 3.7.7
PyTorch version (GPU?): 1.6.0
Tensorflow version (GPU?): No
Using GPU in script?: Yes
Using distributed or parallel set-up in script?: No
## To reproduce
Steps to reproduce the behaviour... | https://api.github.com/repos/huggingface/datasets/issues/720/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/719 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/719/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/719/comments | https://api.github.com/repos/huggingface/datasets/issues/719/events | https://github.com/huggingface/datasets/pull/719 | 716,492,263 | MDExOlB1bGxSZXF1ZXN0NDk5MjE5Mjg2 | 719 | Fix train_test_split output format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,602,074,341,000 | 1,602,077,888,000 | 1,602,077,886,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/719",
"html_url": "https://github.com/huggingface/datasets/pull/719",
"diff_url": "https://github.com/huggingface/datasets/pull/719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/719.patch"
} | There was an issue in the `transmit_format` wrapper that returned bad formats when using train_test_split.
This was due to `column_names` being handled as a List[str] instead of Dict[str, List[str]] when the dataset transform (train_test_split) returns a DatasetDict (one set of column names per split).
This should ... | https://api.github.com/repos/huggingface/datasets/issues/719/timeline | null | true |
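For context on the fix: `train_test_split` returns a `DatasetDict` with one set of column names per split, which is why `column_names` must be handled as a dict there. A minimal sketch, with toy data made up for illustration:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# train_test_split returns a DatasetDict with "train" and "test" splits,
# so column names exist once per split rather than once per dataset
splits = ds.train_test_split(test_size=0.25, seed=42)
print(splits["train"].column_names)  # ['text', 'label']
print(splits["test"].column_names)   # ['text', 'label']
```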
https://api.github.com/repos/huggingface/datasets/issues/718 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/718/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/718/comments | https://api.github.com/repos/huggingface/datasets/issues/718/events | https://github.com/huggingface/datasets/pull/718 | 715,694,709 | MDExOlB1bGxSZXF1ZXN0NDk4NTU5MDcw | 718 | Don't use tqdm 4.50.0 | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,991,953,000 | 1,601,992,164,000 | 1,601,992,162,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/718",
"html_url": "https://github.com/huggingface/datasets/pull/718",
"diff_url": "https://github.com/huggingface/datasets/pull/718.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/718.patch"
} | tqdm 4.50.0 introduced permission errors on windows
see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/235/workflows/cfb6a39f-68eb-4802-8b17-2cd5e8ea7369/jobs/1111) for the error details.
For now I just added `<4.50.0` in the setup.py
Hopefully we can find what's wrong with this version soon | https://api.github.com/repos/huggingface/datasets/issues/718/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/717 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/717/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/717/comments | https://api.github.com/repos/huggingface/datasets/issues/717/events | https://github.com/huggingface/datasets/pull/717 | 714,959,268 | MDExOlB1bGxSZXF1ZXN0NDk3OTUwOTA2 | 717 | Fixes #712 Error in the Overview.ipynb notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/follow... | [] | closed | false | null | [] | null | [] | 1,601,913,041,000 | 1,601,965,903,000 | 1,601,915,141,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/717",
"html_url": "https://github.com/huggingface/datasets/pull/717",
"diff_url": "https://github.com/huggingface/datasets/pull/717.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/717.patch"
} | Fixes #712 (error in the Overview.ipynb notebook) by adding the `with_details=True` parameter to the `list_datasets` function in cell 3 of the **Overview** notebook | https://api.github.com/repos/huggingface/datasets/issues/717/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/716 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/716/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/716/comments | https://api.github.com/repos/huggingface/datasets/issues/716/events | https://github.com/huggingface/datasets/pull/716 | 714,952,888 | MDExOlB1bGxSZXF1ZXN0NDk3OTQ1ODAw | 716 | Fixes #712 Attribute error in cell 3 of the overview notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/follow... | [] | closed | false | null | [] | null | [
"Referencing the wrong issue # in the commit message. Closing this to fix it again."
] | 1,601,912,529,000 | 1,601,912,798,000 | 1,601,912,792,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/716",
"html_url": "https://github.com/huggingface/datasets/pull/716",
"diff_url": "https://github.com/huggingface/datasets/pull/716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/716.patch"
} | Fixes the Attribute error in cell 3 of the overview notebook | https://api.github.com/repos/huggingface/datasets/issues/716/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/715 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/715/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/715/comments | https://api.github.com/repos/huggingface/datasets/issues/715/events | https://github.com/huggingface/datasets/pull/715 | 714,690,192 | MDExOlB1bGxSZXF1ZXN0NDk3NzMwMDQ2 | 715 | Use python read for text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"One thing though, could we try to read the files in parallel?",
"We could but I'm not sure this would help a lot since the bottleneck is the drive IO if the files are big enough.\r\nIt could make sense for very small files.",
"Looks like windows is not a big fan of this approach\r\nI'm working on a fix",
"I ... | 1,601,891,275,000 | 1,601,903,598,000 | 1,601,903,597,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/715",
"html_url": "https://github.com/huggingface/datasets/pull/715",
"diff_url": "https://github.com/huggingface/datasets/pull/715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/715.patch"
} | As mentioned in #622, the pandas reader used for the text dataset doesn't work properly when there are \r characters in the text file.
Instead, I switched to pure Python using `open` and `read`.
From my benchmark on a 100MB text file, it's the same speed as the previous pandas reader. | https://api.github.com/repos/huggingface/datasets/issues/715/timeline | null | true |
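A minimal sketch of the pure-Python approach described here, assuming a line-oriented reader and a made-up batch size; Python's universal-newline handling normalizes `\r` and `\r\n` to `\n`, which is exactly what tripped up the pandas-based reader:

```python
def read_text_batches(path, batch_size=10_000):
    # open() in text mode applies universal-newline translation, so \r and
    # \r\n both become \n and each line is yielded intact
    with open(path, encoding="utf-8") as f:
        batch = []
        for line in f:
            batch.append(line.rstrip("\n"))
            if len(batch) == batch_size:
                yield batch
                batch = []
        if batch:
            yield batch
```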
https://api.github.com/repos/huggingface/datasets/issues/714 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/714/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/714/comments | https://api.github.com/repos/huggingface/datasets/issues/714/events | https://github.com/huggingface/datasets/pull/714 | 714,487,881 | MDExOlB1bGxSZXF1ZXN0NDk3NTYzNjAx | 714 | Add the official dependabot implementation | {
"login": "ALazyMeme",
"id": 12804673,
"node_id": "MDQ6VXNlcjEyODA0Njcz",
"avatar_url": "https://avatars.githubusercontent.com/u/12804673?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ALazyMeme",
"html_url": "https://github.com/ALazyMeme",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [] | 1,601,869,785,000 | 1,602,503,361,000 | 1,602,503,361,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/714",
"html_url": "https://github.com/huggingface/datasets/pull/714",
"diff_url": "https://github.com/huggingface/datasets/pull/714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/714.patch"
} | This will keep dependencies up to date. This will require a PR label `dependencies` to be created in order to function correctly. | https://api.github.com/repos/huggingface/datasets/issues/714/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/713 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/713/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/713/comments | https://api.github.com/repos/huggingface/datasets/issues/713/events | https://github.com/huggingface/datasets/pull/713 | 714,475,732 | MDExOlB1bGxSZXF1ZXN0NDk3NTUzOTUy | 713 | Fix reading text files with carriage return symbols | {
"login": "mozharovsky",
"id": 6762769,
"node_id": "MDQ6VXNlcjY3NjI3Njk=",
"avatar_url": "https://avatars.githubusercontent.com/u/6762769?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mozharovsky",
"html_url": "https://github.com/mozharovsky",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [
"Discussed in #622, fixed in #715. Closing the issue. Thanks @lhoestq, it works now! 👍 "
] | 1,601,867,223,000 | 1,602,223,105,000 | 1,601,905,769,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/713",
"html_url": "https://github.com/huggingface/datasets/pull/713",
"diff_url": "https://github.com/huggingface/datasets/pull/713.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/713.patch"
} | The new pandas-based text reader isn't able to work properly with files that contain carriage return symbols (`\r`).
It fails with the following error message:
```
...
File "pandas/_libs/parsers.pyx", line 847, in pandas._libs.parsers.TextReader.read
File "pandas/_libs/parsers.pyx", line 874, in pandas._l... | https://api.github.com/repos/huggingface/datasets/issues/713/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/712 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/712/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/712/comments | https://api.github.com/repos/huggingface/datasets/issues/712/events | https://github.com/huggingface/datasets/issues/712 | 714,242,316 | MDU6SXNzdWU3MTQyNDIzMTY= | 712 | Error in the notebooks/Overview.ipynb notebook | {
"login": "subhrm",
"id": 850012,
"node_id": "MDQ6VXNlcjg1MDAxMg==",
"avatar_url": "https://avatars.githubusercontent.com/u/850012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/subhrm",
"html_url": "https://github.com/subhrm",
"followers_url": "https://api.github.com/users/subhrm/follow... | [] | closed | false | null | [] | null | [
"Do this:\r\n``` python\r\nsquad_dataset = list_datasets(with_details=True)[datasets.index('squad')]\r\npprint(squad_dataset.__dict__) # It's a simple python dataclass\r\n```",
"Thanks! This worked. I have created a PR to fix this in the notebook. "
] | 1,601,791,111,000 | 1,601,915,140,000 | 1,601,915,140,000 | CONTRIBUTOR | null | null | Hi,
I got the following error in **cell number 3** while exploring the **Overview.ipynb** notebook in google colab. I used the [link ](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) provided in the main README file to open it in colab.
```python
# You can acc... | https://api.github.com/repos/huggingface/datasets/issues/712/timeline | null | false |
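# A minimal sketch of the corrected cell, based on the fix quoted in the
# comments above; it assumes list_datasets() returns plain names and
# with_details=True returns simple dataclass objects:
from pprint import pprint
from datasets import list_datasets

datasets = list_datasets()  # plain dataset names (str)
squad_dataset = list_datasets(with_details=True)[datasets.index('squad')]
pprint(squad_dataset.__dict__)  # it's a simple python dataclass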
https://api.github.com/repos/huggingface/datasets/issues/710 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/710/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/710/comments | https://api.github.com/repos/huggingface/datasets/issues/710/events | https://github.com/huggingface/datasets/pull/710 | 714,186,999 | MDExOlB1bGxSZXF1ZXN0NDk3MzQ1NjQ0 | 710 | fix README typos/ consistency | {
"login": "discdiver",
"id": 7703961,
"node_id": "MDQ6VXNlcjc3MDM5NjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/7703961?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/discdiver",
"html_url": "https://github.com/discdiver",
"followers_url": "https://api.github.com/users/di... | [] | closed | false | null | [] | null | [] | 1,601,763,656,000 | 1,602,928,365,000 | 1,602,928,365,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/710",
"html_url": "https://github.com/huggingface/datasets/pull/710",
"diff_url": "https://github.com/huggingface/datasets/pull/710.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/710.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/710/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/709 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/709/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/709/comments | https://api.github.com/repos/huggingface/datasets/issues/709/events | https://github.com/huggingface/datasets/issues/709 | 714,067,902 | MDU6SXNzdWU3MTQwNjc5MDI= | 709 | How to use similarity settings other than "BM25" in Elasticsearch index? | {
"login": "nsankar",
"id": 431890,
"node_id": "MDQ6VXNlcjQzMTg5MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/431890?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nsankar",
"html_url": "https://github.com/nsankar",
"followers_url": "https://api.github.com/users/nsankar/fo... | [] | open | false | null | [] | null | [
"Datasets does not use elasticsearch API to define custom similarity. If you want to use a custom similarity, the best would be to run a curl request directly to your elasticsearch instance (see sample hereafter, directly from ES documentation), then you should be able to use `my_similarity` in your configuration p... | 1,601,723,929,000 | 1,626,634,975,000 | null | NONE | null | null | **QUESTION : How should we use other similarity algorithms supported by Elasticsearch other than "BM25" ?**
**ES Reference**
https://www.elastic.co/guide/en/elasticsearch/reference/current/index-modules-similarity.html
**HF doc reference:**
https://huggingface.co/docs/datasets/faiss_and_ea.html
**context :**
=... | https://api.github.com/repos/huggingface/datasets/issues/709/timeline | null | false |
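A minimal sketch of the maintainer's suggestion, using the official `elasticsearch` Python client (assuming the 7.x-style `body` argument) rather than curl; the index name is hypothetical and the DFR parameters mirror the example from the Elasticsearch documentation:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Define a custom similarity ("my_similarity") at index-creation time;
# fields mapped with "similarity": "my_similarity" then use it instead of BM25
es.indices.create(
    index="my_index",
    body={
        "settings": {
            "index": {
                "similarity": {
                    "my_similarity": {
                        "type": "DFR",
                        "basic_model": "g",
                        "after_effect": "l",
                        "normalization": "h2",
                        "normalization.h2.c": "3.0",
                    }
                }
            }
        }
    },
)
```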
https://api.github.com/repos/huggingface/datasets/issues/708 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/708/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/708/comments | https://api.github.com/repos/huggingface/datasets/issues/708/events | https://github.com/huggingface/datasets/issues/708 | 714,020,953 | MDU6SXNzdWU3MTQwMjA5NTM= | 708 | Datasets performance slow? - 6.4x slower than in memory dataset | {
"login": "eugeneware",
"id": 38154,
"node_id": "MDQ6VXNlcjM4MTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/38154?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/eugeneware",
"html_url": "https://github.com/eugeneware",
"followers_url": "https://api.github.com/users/eugenew... | [] | closed | false | null | [] | null | [
"Facing a similar issue here. My model using SQuAD dataset takes about 1h to process with in memory data and more than 2h with datasets directly.",
"And if you use in-memory-data with datasets with `load_dataset(..., keep_in_memory=True)`?",
"Thanks for the tip @thomwolf ! I did not see that flag in the docs. I... | 1,601,707,447,000 | 1,613,139,208,000 | 1,613,139,208,000 | NONE | null | null | I've been very excited about this amazing datasets project. However, I've noticed that the performance can be substantially slower than using an in-memory dataset.
Now, this is expected I guess, due to memory mapping data using arrow files, and you don't get anything for free. But I was surprised at how much slower.... | https://api.github.com/repos/huggingface/datasets/issues/708/timeline | null | false |
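A minimal sketch of the `keep_in_memory` tip from the comments above, using the SQuAD example discussed in the thread:

```python
from datasets import load_dataset

# Default behaviour: the dataset is memory-mapped from Arrow files on disk,
# which keeps RAM usage low but pays an IO cost on access
squad = load_dataset("squad", split="train")

# keep_in_memory=True loads the table into RAM instead, trading memory
# for the speed of an in-memory dataset
squad_fast = load_dataset("squad", split="train", keep_in_memory=True)
```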
https://api.github.com/repos/huggingface/datasets/issues/707 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/707/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/707/comments | https://api.github.com/repos/huggingface/datasets/issues/707/events | https://github.com/huggingface/datasets/issues/707 | 713,954,666 | MDU6SXNzdWU3MTM5NTQ2NjY= | 707 | Requirements should specify pyarrow<1 | {
"login": "mathcass",
"id": 918541,
"node_id": "MDQ6VXNlcjkxODU0MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/918541?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mathcass",
"html_url": "https://github.com/mathcass",
"followers_url": "https://api.github.com/users/mathcas... | [] | closed | false | null | [] | null | [
"Hello @mathcass I would want to work on this issue. May I do the same? ",
"@punitaojha, certainly. Feel free to work on this. Let me know if you need any help or clarity.",
"Hello @mathcass \r\n1. I did fork the repository and clone the same on my local system. \r\n\r\n2. Then learnt about how we can publish o... | 1,601,681,979,000 | 1,607,070,159,000 | 1,601,844,628,000 | NONE | null | null | I was looking at the docs on [Perplexity](https://huggingface.co/transformers/perplexity.html) via GPT2. When you load datasets and try to load Wikitext, you get the error,
```
module 'pyarrow' has no attribute 'PyExtensionType'
```
I traced it back to datasets having installed PyArrow 1.0.1 but there's not pinni... | https://api.github.com/repos/huggingface/datasets/issues/707/timeline | null | false |
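A minimal sketch of the pin the issue asks for, in `setup.py` form; the package name and the exact lower bound are assumptions for illustration:

```python
from setuptools import setup

setup(
    name="my-package",  # hypothetical package for illustration
    install_requires=[
        # keep pyarrow below 1.0 until the PyExtensionType
        # incompatibility is resolved
        "pyarrow>=0.17.1,<1.0.0",
    ],
)
```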
https://api.github.com/repos/huggingface/datasets/issues/706 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/706/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/706/comments | https://api.github.com/repos/huggingface/datasets/issues/706/events | https://github.com/huggingface/datasets/pull/706 | 713,721,959 | MDExOlB1bGxSZXF1ZXN0NDk2OTkwMDA0 | 706 | Fix config creation for data files with NamedSplit | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,653,609,000 | 1,601,885,700,000 | 1,601,885,699,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/706",
"html_url": "https://github.com/huggingface/datasets/pull/706",
"diff_url": "https://github.com/huggingface/datasets/pull/706.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/706.patch"
} | During config creation, we need to iterate through the data files of all the splits to compute a hash.
To make sure the hash is unique given a certain combination of files/splits, we sort the split names.
However the `NamedSplit` objects can't be passed to `sorted` and currently it raises an error: we need to sort th... | https://api.github.com/repos/huggingface/datasets/issues/706/timeline | null | true |
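A minimal sketch of the sorting fix described above, assuming a hypothetical split-to-files mapping; since `NamedSplit` doesn't implement `<`, the key function sorts on the string representation instead:

```python
from datasets import NamedSplit

data_files = {
    NamedSplit("train"): ["train.txt"],
    NamedSplit("test"): ["test.txt"],
}

# sorted(data_files) would raise TypeError ('<' not supported between
# instances of 'NamedSplit'); sorting on str(split) gives a stable order
# that can be used to compute the config hash deterministically
ordered_splits = sorted(data_files, key=str)
```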
https://api.github.com/repos/huggingface/datasets/issues/705 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/705/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/705/comments | https://api.github.com/repos/huggingface/datasets/issues/705/events | https://github.com/huggingface/datasets/issues/705 | 713,709,100 | MDU6SXNzdWU3MTM3MDkxMDA= | 705 | TypeError: '<' not supported between instances of 'NamedSplit' and 'NamedSplit' | {
"login": "pvcastro",
"id": 12713359,
"node_id": "MDQ6VXNlcjEyNzEzMzU5",
"avatar_url": "https://avatars.githubusercontent.com/u/12713359?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pvcastro",
"html_url": "https://github.com/pvcastro",
"followers_url": "https://api.github.com/users/pvc... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Hi !\r\nThanks for reporting :) \r\nIndeed this is an issue on the `datasets` side.\r\nI'm creating a PR",
"Thanks @lhoestq !"
] | 1,601,652,475,000 | 1,601,885,699,000 | 1,601,885,699,000 | NONE | null | null | ## Environment info
<!-- You can run the command `transformers-cli env` and copy-and-paste its output below.
Don't forget to fill out the missing fields in that output! -->
- `transformers` version: 3.3.1 (installed from master)
- `datasets` version: 1.0.2 (installed as a dependency from transformers)
... | https://api.github.com/repos/huggingface/datasets/issues/705/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/704 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/704/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/704/comments | https://api.github.com/repos/huggingface/datasets/issues/704/events | https://github.com/huggingface/datasets/pull/704 | 713,572,556 | MDExOlB1bGxSZXF1ZXN0NDk2ODY2NTQ0 | 704 | Fix remote tests for new datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,640,484,000 | 1,601,640,722,000 | 1,601,640,721,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/704",
"html_url": "https://github.com/huggingface/datasets/pull/704",
"diff_url": "https://github.com/huggingface/datasets/pull/704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/704.patch"
} | When adding a new dataset, the remote tests fail because they try to get the new dataset from the master branch (i.e., where the dataset doesn't exist yet)
To fix that, I reverted to using the HF API that fetches the available datasets on S3, which is synced with the master branch | https://api.github.com/repos/huggingface/datasets/issues/704/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/703 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/703/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/703/comments | https://api.github.com/repos/huggingface/datasets/issues/703/events | https://github.com/huggingface/datasets/pull/703 | 713,559,718 | MDExOlB1bGxSZXF1ZXN0NDk2ODU1OTQ5 | 703 | Add hotpot QA | {
"login": "ghomasHudson",
"id": 13795113,
"node_id": "MDQ6VXNlcjEzNzk1MTEz",
"avatar_url": "https://avatars.githubusercontent.com/u/13795113?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ghomasHudson",
"html_url": "https://github.com/ghomasHudson",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"Awesome :) \r\n\r\nDon't pay attention to the RemoteDatasetTest error, I'm fixing it right now",
"You can rebase from master to fix the CI test :)",
"If we're lucky we can even include this dataset in today's release",
"Just thinking since `type` can only be `comparison` or `bridge` and `level` can only be `... | 1,601,639,068,000 | 1,601,643,281,000 | 1,601,643,281,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/703",
"html_url": "https://github.com/huggingface/datasets/pull/703",
"diff_url": "https://github.com/huggingface/datasets/pull/703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/703.patch"
} | Added the [HotpotQA](https://github.com/hotpotqa/hotpot) multi-hop question answering dataset.
| https://api.github.com/repos/huggingface/datasets/issues/703/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/702 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/702/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/702/comments | https://api.github.com/repos/huggingface/datasets/issues/702/events | https://github.com/huggingface/datasets/pull/702 | 713,499,628 | MDExOlB1bGxSZXF1ZXN0NDk2ODA3Mjg4 | 702 | Complete rouge kwargs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,632,741,000 | 1,601,633,464,000 | 1,601,633,463,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/702",
"html_url": "https://github.com/huggingface/datasets/pull/702",
"diff_url": "https://github.com/huggingface/datasets/pull/702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/702.patch"
} | In #701 we noticed that some kwargs were missing for rouge | https://api.github.com/repos/huggingface/datasets/issues/702/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/701 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/701/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/701/comments | https://api.github.com/repos/huggingface/datasets/issues/701/events | https://github.com/huggingface/datasets/pull/701 | 713,485,757 | MDExOlB1bGxSZXF1ZXN0NDk2Nzk2MTQ1 | 701 | Add rouge 2 and rouge Lsum to rouge metric outputs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Oups too late, sorry"
] | 1,601,631,346,000 | 1,601,632,514,000 | 1,601,632,338,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/701",
"html_url": "https://github.com/huggingface/datasets/pull/701",
"diff_url": "https://github.com/huggingface/datasets/pull/701.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/701.patch"
} | Continuation of #700
Rouge 2 and Rouge Lsum were missing in Rouge's outputs.
Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n`
Fix #617 | https://api.github.com/repos/huggingface/datasets/issues/701/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/700 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/700/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/700/comments | https://api.github.com/repos/huggingface/datasets/issues/700/events | https://github.com/huggingface/datasets/pull/700 | 713,450,295 | MDExOlB1bGxSZXF1ZXN0NDk2NzY3MTMz | 700 | Add rouge-2 in rouge_types for metric calculation | {
"login": "Shashi456",
"id": 18056781,
"node_id": "MDQ6VXNlcjE4MDU2Nzgx",
"avatar_url": "https://avatars.githubusercontent.com/u/18056781?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shashi456",
"html_url": "https://github.com/Shashi456",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Indeed there's currently a mismatch between the description and what it rouge actually returns.\r\nThanks for proposing this fix :) \r\n\r\nI think it's better to return rouge 1-2-L.\r\nWas there a reason to only include rouge 1 and rouge L @thomwolf ? ",
"rougeLsum is also missing, could you add it ?",
"Addin... | 1,601,627,805,000 | 1,601,636,929,000 | 1,601,632,745,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/700",
"html_url": "https://github.com/huggingface/datasets/pull/700",
"diff_url": "https://github.com/huggingface/datasets/pull/700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/700.patch"
} | The description of the ROUGE metric says,
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
predictions: list of predictions to score. Each prediction
should be a string with tokens separated by spaces.
references: list of reference for ... | https://api.github.com/repos/huggingface/datasets/issues/700/timeline | null | true |
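# A minimal sketch of requesting all four rouge types discussed in this
# thread, assuming the metric is loaded through load_metric:
from datasets import load_metric

rouge = load_metric("rouge")
scores = rouge.compute(
    predictions=["the cat sat on the mat"],
    references=["the cat was sitting on the mat"],
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)
print(scores["rouge2"].mid.fmeasure)  # each entry is an AggregateScore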
https://api.github.com/repos/huggingface/datasets/issues/699 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/699/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/699/comments | https://api.github.com/repos/huggingface/datasets/issues/699/events | https://github.com/huggingface/datasets/issues/699 | 713,395,642 | MDU6SXNzdWU3MTMzOTU2NDI= | 699 | XNLI dataset is not loading | {
"login": "imadarsh1001",
"id": 14936525,
"node_id": "MDQ6VXNlcjE0OTM2NTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/14936525?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/imadarsh1001",
"html_url": "https://github.com/imadarsh1001",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most ... | 1,601,621,596,000 | 1,601,747,152,000 | 1,601,747,017,000 | NONE | null | null | `dataset = datasets.load_dataset(path='xnli')`
shows the error below:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
36 if len(bad_urls) > 0:
37 error_msg = "Checksums didn't match" + for_verifi... | https://api.github.com/repos/huggingface/datasets/issues/699/timeline | null | false |
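# A possible interim workaround, assuming the ignore_verifications flag of
# load_dataset; use with care, since it disables checksum verification on the
# downloaded files until the script's checksums are regenerated:
from datasets import load_dataset

xnli = load_dataset('xnli', ignore_verifications=True)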
https://api.github.com/repos/huggingface/datasets/issues/697 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/697/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/697/comments | https://api.github.com/repos/huggingface/datasets/issues/697/events | https://github.com/huggingface/datasets/pull/697 | 712,979,029 | MDExOlB1bGxSZXF1ZXN0NDk2MzczNDU5 | 697 | Update README.md | {
"login": "bishug",
"id": 71011306,
"node_id": "MDQ6VXNlcjcxMDExMzA2",
"avatar_url": "https://avatars.githubusercontent.com/u/71011306?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bishug",
"html_url": "https://github.com/bishug",
"followers_url": "https://api.github.com/users/bishug/fo... | [] | closed | false | null | [] | null | [] | 1,601,568,162,000 | 1,601,568,720,000 | 1,601,568,720,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/697",
"html_url": "https://github.com/huggingface/datasets/pull/697",
"diff_url": "https://github.com/huggingface/datasets/pull/697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/697.patch"
} | Hey I was just telling my subscribers to check out your repositories
Thank you | https://api.github.com/repos/huggingface/datasets/issues/697/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/696 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/696/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/696/comments | https://api.github.com/repos/huggingface/datasets/issues/696/events | https://github.com/huggingface/datasets/pull/696 | 712,942,977 | MDExOlB1bGxSZXF1ZXN0NDk2MzQzMjEy | 696 | Elasticsearch index docs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,565,538,000 | 1,601,624,899,000 | 1,601,624,898,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/696",
"html_url": "https://github.com/huggingface/datasets/pull/696",
"diff_url": "https://github.com/huggingface/datasets/pull/696.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/696.patch"
} | I added the docs for ES indexes.
I also added a `load_elasticsearch_index` method to load an index that has already been built.
I checked the tests for the ES index and we have tests that mock ElasticSearch.
I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES... | https://api.github.com/repos/huggingface/datasets/issues/696/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/695 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/695/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/695/comments | https://api.github.com/repos/huggingface/datasets/issues/695/events | https://github.com/huggingface/datasets/pull/695 | 712,843,949 | MDExOlB1bGxSZXF1ZXN0NDk2MjU5NTM0 | 695 | Update XNLI download link | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,558,842,000 | 1,601,560,875,000 | 1,601,560,874,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/695",
"html_url": "https://github.com/huggingface/datasets/pull/695",
"diff_url": "https://github.com/huggingface/datasets/pull/695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/695.patch"
} | The old link isn't working anymore. I updated it with the new official link.
Fix #690 | https://api.github.com/repos/huggingface/datasets/issues/695/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/694 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/694/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/694/comments | https://api.github.com/repos/huggingface/datasets/issues/694/events | https://github.com/huggingface/datasets/pull/694 | 712,827,751 | MDExOlB1bGxSZXF1ZXN0NDk2MjQ1NzU0 | 694 | Use GitHub instead of aws in remote dataset tests | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,557,670,000 | 1,601,624,848,000 | 1,601,624,847,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/694",
"html_url": "https://github.com/huggingface/datasets/pull/694",
"diff_url": "https://github.com/huggingface/datasets/pull/694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/694.patch"
} | Recently we switched from AWS S3 to GitHub to download dataset scripts.
However in the tests, the dummy data were still downloaded from S3.
So I changed that to download them from GitHub instead, in the MockDownloadManager.
Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the ent... | https://api.github.com/repos/huggingface/datasets/issues/694/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/693 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/693/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/693/comments | https://api.github.com/repos/huggingface/datasets/issues/693/events | https://github.com/huggingface/datasets/pull/693 | 712,822,200 | MDExOlB1bGxSZXF1ZXN0NDk2MjQxMjUw | 693 | Rachel ker add dataset/mlsum | {
"login": "pdhg",
"id": 32742136,
"node_id": "MDQ6VXNlcjMyNzQyMTM2",
"avatar_url": "https://avatars.githubusercontent.com/u/32742136?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdhg",
"html_url": "https://github.com/pdhg",
"followers_url": "https://api.github.com/users/pdhg/followers"... | [] | closed | false | null | [] | null | [
"It looks like an outdated PR (we've already added mlsum). Closing it"
] | 1,601,557,270,000 | 1,601,571,673,000 | 1,601,571,673,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/693",
"html_url": "https://github.com/huggingface/datasets/pull/693",
"diff_url": "https://github.com/huggingface/datasets/pull/693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/693.patch"
} | . | https://api.github.com/repos/huggingface/datasets/issues/693/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/692 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/692/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/692/comments | https://api.github.com/repos/huggingface/datasets/issues/692/events | https://github.com/huggingface/datasets/pull/692 | 712,818,968 | MDExOlB1bGxSZXF1ZXN0NDk2MjM4NzIw | 692 | Update README.md | {
"login": "mayank1897",
"id": 62796466,
"node_id": "MDQ6VXNlcjYyNzk2NDY2",
"avatar_url": "https://avatars.githubusercontent.com/u/62796466?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mayank1897",
"html_url": "https://github.com/mayank1897",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"Hacktoberfest spam",
"To enhance its readability.....not Hacktoberfest spam",
"How is adding a punctuation to the end of a sentence justified as \"To enhance its readability\". \r\nConsidering that this is not your first \"README enhancement '' please don't spam the open source community with useless PR to get... | 1,601,557,042,000 | 1,601,636,519,000 | 1,601,636,519,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/692",
"html_url": "https://github.com/huggingface/datasets/pull/692",
"diff_url": "https://github.com/huggingface/datasets/pull/692.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/692.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/692/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/691 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/691/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/691/comments | https://api.github.com/repos/huggingface/datasets/issues/691/events | https://github.com/huggingface/datasets/issues/691 | 712,389,499 | MDU6SXNzdWU3MTIzODk0OTk= | 691 | Add UI filter to filter datasets based on task | {
"login": "praateekmahajan",
"id": 7589415,
"node_id": "MDQ6VXNlcjc1ODk0MTU=",
"avatar_url": "https://avatars.githubusercontent.com/u/7589415?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/praateekmahajan",
"html_url": "https://github.com/praateekmahajan",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | open | false | null | [] | null | [
"Already supported."
] | 1,601,513,778,000 | 1,603,812,270,000 | null | NONE | null | null | This is great work, so huge shoutout to contributors and huggingface.
The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in both or either places we can have a filter that selects if a dataset is good for the following... | https://api.github.com/repos/huggingface/datasets/issues/691/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/690 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/690/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/690/comments | https://api.github.com/repos/huggingface/datasets/issues/690/events | https://github.com/huggingface/datasets/issues/690 | 712,150,321 | MDU6SXNzdWU3MTIxNTAzMjE= | 690 | XNLI dataset: NonMatchingChecksumError | {
"login": "xiey1",
"id": 13307358,
"node_id": "MDQ6VXNlcjEzMzA3MzU4",
"avatar_url": "https://avatars.githubusercontent.com/u/13307358?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xiey1",
"html_url": "https://github.com/xiey1",
"followers_url": "https://api.github.com/users/xiey1/follow... | [] | closed | false | null | [] | null | [
"Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.",
"Well actually it looks like the link isn't working anymore :(",
"The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script",
"I'll do a release i... | 1,601,488,203,000 | 1,601,572,508,000 | 1,601,560,874,000 | NONE | null | null | Hi,
I tried to download "xnli" dataset in colab using
`xnli = load_dataset(path='xnli')`
but got 'NonMatchingChecksumError' error
`NonMatchingChecksumError Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr... | https://api.github.com/repos/huggingface/datasets/issues/690/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/689 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/689/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/689/comments | https://api.github.com/repos/huggingface/datasets/issues/689/events | https://github.com/huggingface/datasets/pull/689 | 712,095,262 | MDExOlB1bGxSZXF1ZXN0NDk1NjMzNjMy | 689 | Switch to pandas reader for text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"If the windows tests in the CI pass, today will be a happy day"
] | 1,601,483,292,000 | 1,601,484,332,000 | 1,601,484,331,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/689",
"html_url": "https://github.com/huggingface/datasets/pull/689",
"diff_url": "https://github.com/huggingface/datasets/pull/689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/689.patch"
} | Following the discussion in #622, it appears that there's no appropriate way to use the pyarrow CSV reader to read text files because of the separator.
In this PR I switched to pandas to read the file.
Moreover, pandas allows reading the file in chunks, which means that you can build the arrow dataset from a text... | https://api.github.com/repos/huggingface/datasets/issues/689/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/688 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/688/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/688/comments | https://api.github.com/repos/huggingface/datasets/issues/688/events | https://github.com/huggingface/datasets/pull/688 | 711,804,828 | MDExOlB1bGxSZXF1ZXN0NDk1MzkwMTc1 | 688 | Disable tokenizers parallelism in multiprocessed map | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,459,614,000 | 1,601,541,946,000 | 1,601,541,945,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/688",
"html_url": "https://github.com/huggingface/datasets/pull/688",
"diff_url": "https://github.com/huggingface/datasets/pull/688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/688.patch"
} | It was reported in #620 that using multiprocessing with a tokenizer shows this message:
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
```
This message is shown when TOKENIZERS_PARALLELISM is... | https://api.github.com/repos/huggingface/datasets/issues/688/timeline | null | true |
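For context, a minimal sketch of the pattern this PR addresses: the warning fires when a Rust tokenizer has already used its thread pool before `map` forks worker processes. Setting the environment variable up front silences it; the model and dataset names below are illustrative, not from the PR:
```python
import os

# Must be set before the tokenizer does any work, i.e. before the fork happens.
os.environ["TOKENIZERS_PARALLELISM"] = "false"

from datasets import load_dataset
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True)

dataset = load_dataset("imdb", split="train")
# num_proc > 1 forks worker processes, which is what triggers the message.
dataset = dataset.map(tokenize, batched=True, num_proc=4)
```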
https://api.github.com/repos/huggingface/datasets/issues/687 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/687/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/687/comments | https://api.github.com/repos/huggingface/datasets/issues/687/events | https://github.com/huggingface/datasets/issues/687 | 711,664,810 | MDU6SXNzdWU3MTE2NjQ4MTA= | 687 | `ArrowInvalid` occurs while running `Dataset.map()` function | {
"login": "peinan",
"id": 5601012,
"node_id": "MDQ6VXNlcjU2MDEwMTI=",
"avatar_url": "https://avatars.githubusercontent.com/u/5601012?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/peinan",
"html_url": "https://github.com/peinan",
"followers_url": "https://api.github.com/users/peinan/foll... | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese i... | 1,601,446,610,000 | 1,601,459,583,000 | 1,601,459,583,000 | NONE | null | null | It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error.
Code:
```python
# train_ds = Dataset(features: {
# 'title': Value(dtype='string', id=None),
# 'score': Value(dtype='float64', id=Non... | https://api.github.com/repos/huggingface/datasets/issues/687/timeline | null | false |
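As the first comment explains, `encode` takes one single text (or one text pair), so the `map` callable should process one example at a time. A self-contained sketch of that pattern — the column name and model are assumptions for illustration, using a "slow" tokenizer that has `encode` but no `encode_batch`:
```python
from datasets import Dataset
from transformers import BertTokenizer  # "slow" tokenizer: encode, but no encode_batch

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
train_ds = Dataset.from_dict({"title": ["first example", "second example"]})

def convert_to_features(example):
    # batched=False (the default), so encode receives a single str as expected.
    return {"input_ids": tokenizer.encode(example["title"])}

train_ds = train_ds.map(convert_to_features)
```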
https://api.github.com/repos/huggingface/datasets/issues/686 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/686/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/686/comments | https://api.github.com/repos/huggingface/datasets/issues/686/events | https://github.com/huggingface/datasets/issues/686 | 711,385,739 | MDU6SXNzdWU3MTEzODU3Mzk= | 686 | Dataset browser url is still https://huggingface.co/nlp/viewer/ | {
"login": "jarednielsen",
"id": 4564897,
"node_id": "MDQ6VXNlcjQ1NjQ4OTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/4564897?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jarednielsen",
"html_url": "https://github.com/jarednielsen",
"followers_url": "https://api.github.com... | [] | closed | false | null | [] | null | [
"Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)",
"This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!"
] | 1,601,407,312,000 | 1,610,130,566,000 | 1,610,130,566,000 | CONTRIBUTOR | null | null | Might be worth updating to https://huggingface.co/datasets/viewer/ | https://api.github.com/repos/huggingface/datasets/issues/686/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/685 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/685/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/685/comments | https://api.github.com/repos/huggingface/datasets/issues/685/events | https://github.com/huggingface/datasets/pull/685 | 711,182,185 | MDExOlB1bGxSZXF1ZXN0NDk0ODg1NjIz | 685 | Add features parameter to CSV | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,390,616,000 | 1,601,455,196,000 | 1,601,455,194,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/685",
"html_url": "https://github.com/huggingface/datasets/pull/685",
"diff_url": "https://github.com/huggingface/datasets/pull/685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/685.patch"
} | Add support for the `features` parameter when loading a csv dataset:
```python
from datasets import load_dataset, Features
features = Features({...})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```
I added tests to make sure that it is also compatible with the ca... | https://api.github.com/repos/huggingface/datasets/issues/685/timeline | null | true |
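To make the elided `Features({...})` concrete, here is a hypothetical schema for a two-column CSV; the column names and label set are made up for illustration:
```python
from datasets import load_dataset, Features, Value, ClassLabel

# Hypothetical schema for a CSV with "text" and "label" columns.
features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["negative", "positive"]),
})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```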
https://api.github.com/repos/huggingface/datasets/issues/684 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/684/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/684/comments | https://api.github.com/repos/huggingface/datasets/issues/684/events | https://github.com/huggingface/datasets/pull/684 | 711,080,947 | MDExOlB1bGxSZXF1ZXN0NDk0ODA2NjE1 | 684 | Fix column order issue in cast | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,383,753,000 | 1,601,395,006,000 | 1,601,395,005,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/684",
"html_url": "https://github.com/huggingface/datasets/pull/684",
"diff_url": "https://github.com/huggingface/datasets/pull/684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/684.patch"
} | Previously, the order of the columns in the features passed to `cast_` mattered.
However, even though the features passed to `cast_` had the same order as the dataset features, it could fail because the schema that was built was always in alphabetical order.
This issue was reported by @lewtun in #623
To fix that I fi... | https://api.github.com/repos/huggingface/datasets/issues/684/timeline | null | true |
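For reference, a sketch of the kind of call this fixes — casting with features listed in the dataset's own (non-alphabetical) column order; the feature names are illustrative:
```python
from datasets import Dataset, Features, Value

ds = Dataset.from_dict({"b": ["1", "2"], "a": [1.0, 2.0]})
# The features below follow the dataset's column order ("b" then "a"),
# not alphabetical order, which used to trip up the internally built schema.
ds.cast_(Features({"b": Value("string"), "a": Value("float64")}))
```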
https://api.github.com/repos/huggingface/datasets/issues/683 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/683/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/683/comments | https://api.github.com/repos/huggingface/datasets/issues/683/events | https://github.com/huggingface/datasets/pull/683 | 710,942,704 | MDExOlB1bGxSZXF1ZXN0NDk0NzAwNzY1 | 683 | Fix wrong delimiter in text dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,372,604,000 | 1,620,239,071,000 | 1,601,372,646,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/683",
"html_url": "https://github.com/huggingface/datasets/pull/683",
"diff_url": "https://github.com/huggingface/datasets/pull/683.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/683.patch"
} | The delimiter is set to the bell character, as it is usually used nowhere in text files.
However, in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`.
I replaced `\b` with `\a`.
Hopefully it fixes issues mentioned by some users in #622 | https://api.github.com/repos/huggingface/datasets/issues/683/timeline | null | true |
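A two-liner makes the mix-up concrete:
```python
# "\a" is the bell character (BEL, code point 7); "\b" is backspace (code point 8).
assert ord("\a") == 7
assert ord("\b") == 8
```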
https://api.github.com/repos/huggingface/datasets/issues/682 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/682/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/682/comments | https://api.github.com/repos/huggingface/datasets/issues/682/events | https://github.com/huggingface/datasets/pull/682 | 710,325,399 | MDExOlB1bGxSZXF1ZXN0NDk0MTkzMzEw | 682 | Update navbar chapter titles color | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,303,717,000 | 1,601,314,213,000 | 1,601,314,212,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/682",
"html_url": "https://github.com/huggingface/datasets/pull/682",
"diff_url": "https://github.com/huggingface/datasets/pull/682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/682.patch"
} | Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423
It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections.
see changes [here](https://691-250213286-gh.circle-artifacts.com/0/do... | https://api.github.com/repos/huggingface/datasets/issues/682/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/681 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/681/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/681/comments | https://api.github.com/repos/huggingface/datasets/issues/681/events | https://github.com/huggingface/datasets/pull/681 | 710,075,721 | MDExOlB1bGxSZXF1ZXN0NDkzOTkwMjEz | 681 | Adding missing @property (+2 small flake8 fixes). | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/follow... | [] | closed | false | null | [] | null | [] | 1,601,283,233,000 | 1,601,288,773,000 | 1,601,288,769,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/681",
"html_url": "https://github.com/huggingface/datasets/pull/681",
"diff_url": "https://github.com/huggingface/datasets/pull/681.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/681.patch"
} | Fixes #678 | https://api.github.com/repos/huggingface/datasets/issues/681/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/680 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/680/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/680/comments | https://api.github.com/repos/huggingface/datasets/issues/680/events | https://github.com/huggingface/datasets/pull/680 | 710,066,138 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMjY4 | 680 | Fix bug related to boolean in GAP dataset. | {
"login": "otakumesi",
"id": 14996977,
"node_id": "MDQ6VXNlcjE0OTk2OTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/14996977?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/otakumesi",
"html_url": "https://github.com/otakumesi",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi !\r\n\r\nGood catch, thanks for creating this PR :)\r\n\r\nCould you also regenerate the metadata for this dataset using \r\n```\r\ndatasets-cli test ./datasets/gap --save_infos --all_configs\r\n```\r\n\r\nThat'd be awesome",
"@lhoestq Thank you for your revieing!!!\r\n\r\nI've performed it and have read CONT... | 1,601,282,379,000 | 1,601,394,887,000 | 1,601,394,887,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/680",
"html_url": "https://github.com/huggingface/datasets/pull/680",
"diff_url": "https://github.com/huggingface/datasets/pull/680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/680.patch"
} | ### Why I did
The values in `row["A-coref"]` and `row["B-coref"]` are `'TRUE'` or `'FALSE'`.
Since this type is `string`, `bool('FALSE')` evaluates to `True` in Python.
So both values were being transformed into `True`.
This PR fixes that problem.
### What I did
I modified `bool(row["A-coref"])` and `bool(row["B-cor... | https://api.github.com/repos/huggingface/datasets/issues/680/timeline | null | true |
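The core of the bug and the fix pattern, as a self-contained sketch (the sample row is made up; the field names follow the PR):
```python
# Any non-empty string is truthy, so casting the string is wrong:
assert bool("FALSE") is True

row = {"A-coref": "FALSE", "B-coref": "TRUE"}  # made-up sample row

# Compare against the literal instead of casting:
a_coref = row["A-coref"] == "TRUE"  # False, as intended
b_coref = row["B-coref"] == "TRUE"  # True
```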
https://api.github.com/repos/huggingface/datasets/issues/679 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/679/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/679/comments | https://api.github.com/repos/huggingface/datasets/issues/679/events | https://github.com/huggingface/datasets/pull/679 | 710,065,838 | MDExOlB1bGxSZXF1ZXN0NDkzOTgyMDMx | 679 | Fix negative ids when slicing with an array | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,282,348,000 | 1,601,304,140,000 | 1,601,304,139,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/679",
"html_url": "https://github.com/huggingface/datasets/pull/679",
"diff_url": "https://github.com/huggingface/datasets/pull/679.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/679.patch"
} | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[[0, -1]])
# OverflowError
```
raises an error because of the negative id.
This PR fixes that.
Fix #668 | https://api.github.com/repos/huggingface/datasets/issues/679/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/678 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/678/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/678/comments | https://api.github.com/repos/huggingface/datasets/issues/678/events | https://github.com/huggingface/datasets/issues/678 | 710,060,497 | MDU6SXNzdWU3MTAwNjA0OTc= | 678 | The download instructions for c4 datasets are not contained in the error message | {
"login": "Narsil",
"id": 204321,
"node_id": "MDQ6VXNlcjIwNDMyMQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/204321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Narsil",
"html_url": "https://github.com/Narsil",
"followers_url": "https://api.github.com/users/Narsil/follow... | [] | closed | false | null | [] | null | [
"Good catch !\r\nIndeed the `@property` is missing.\r\n\r\nFeel free to open a PR :)",
"Also not that C4 is a dataset that needs an Apache Beam runtime to be generated.\r\nFor example Dataflow, Spark, Flink etc.\r\n\r\nUsually we generate the dataset on our side once and for all, but we haven't done it for C4 yet... | 1,601,281,854,000 | 1,601,288,769,000 | 1,601,288,769,000 | CONTRIBUTOR | null | null | The manual download instructions are not clear
```The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff... | https://api.github.com/repos/huggingface/datasets/issues/678/timeline | null | false |
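The `<bound method ...>` in the message suggests the attribute was defined as a plain method; the fix discussed in the comments is to expose it as a property. A minimal sketch with a placeholder builder class (not the real C4 implementation):
```python
import datasets

class MyManualDataset(datasets.GeneratorBasedBuilder):
    @property
    def manual_download_instructions(self):
        # With @property, interpolating self.manual_download_instructions into
        # the error message yields this string rather than "<bound method ...>".
        return "Download the files manually and pass data_dir=<path> to load_dataset."
```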
https://api.github.com/repos/huggingface/datasets/issues/677 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/677/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/677/comments | https://api.github.com/repos/huggingface/datasets/issues/677/events | https://github.com/huggingface/datasets/pull/677 | 710,055,239 | MDExOlB1bGxSZXF1ZXN0NDkzOTczNDE3 | 677 | Move cache dir root creation in builder's init | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,281,366,000 | 1,601,304,163,000 | 1,601,304,162,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/677",
"html_url": "https://github.com/huggingface/datasets/pull/677",
"diff_url": "https://github.com/huggingface/datasets/pull/677.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/677.patch"
} | We use lock files in the builder initialization, but sometimes the cache directory they're supposed to live in was not created yet. To fix that, I moved the creation of the builder's cache dir root into the builder's init.
Fix #671 | https://api.github.com/repos/huggingface/datasets/issues/677/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/676 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/676/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/676/comments | https://api.github.com/repos/huggingface/datasets/issues/676/events | https://github.com/huggingface/datasets/issues/676 | 710,014,319 | MDU6SXNzdWU3MTAwMTQzMTk= | 676 | train_test_split returns empty dataset item | {
"login": "mojave-pku",
"id": 26648528,
"node_id": "MDQ6VXNlcjI2NjQ4NTI4",
"avatar_url": "https://avatars.githubusercontent.com/u/26648528?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mojave-pku",
"html_url": "https://github.com/mojave-pku",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"The problem still exists after removing the cache files.",
"Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config)",
"Thanks for reporting.\r\nI just found the issue, I'm creating a PR",
"We'll do a release pretty soon to include the fix :... | 1,601,277,573,000 | 1,602,078,393,000 | 1,602,077,886,000 | NONE | null | null | I try to split my dataset by `train_test_split`, but after that the item in `train` and `test` `Dataset` is empty.
The codes:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri... | https://api.github.com/repos/huggingface/datasets/issues/676/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/675 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/675/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/675/comments | https://api.github.com/repos/huggingface/datasets/issues/675/events | https://github.com/huggingface/datasets/issues/675 | 709,818,725 | MDU6SXNzdWU3MDk4MTg3MjU= | 675 | Add custom dataset to NLP? | {
"login": "timpal0l",
"id": 6556710,
"node_id": "MDQ6VXNlcjY1NTY3MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/6556710?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/timpal0l",
"html_url": "https://github.com/timpal0l",
"followers_url": "https://api.github.com/users/timpa... | [] | closed | false | null | [] | null | [
"Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files",
"No activity, closing"
] | 1,601,241,770,000 | 1,603,184,929,000 | 1,603,184,929,000 | CONTRIBUTOR | null | null | Is it possible to add a custom dataset such as a .csv to the NLP library?
Thanks. | https://api.github.com/repos/huggingface/datasets/issues/675/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/674 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/674/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/674/comments | https://api.github.com/repos/huggingface/datasets/issues/674/events | https://github.com/huggingface/datasets/issues/674 | 709,661,006 | MDU6SXNzdWU3MDk2NjEwMDY= | 674 | load_dataset() won't download in Windows | {
"login": "ThisDavehead",
"id": 34422661,
"node_id": "MDQ6VXNlcjM0NDIyNjYx",
"avatar_url": "https://avatars.githubusercontent.com/u/34422661?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ThisDavehead",
"html_url": "https://github.com/ThisDavehead",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```",
"This was fixed i... | 1,601,178,985,000 | 1,601,886,498,000 | 1,601,886,498,000 | NONE | null | null | I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa... | https://api.github.com/repos/huggingface/datasets/issues/674/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/673 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/673/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/673/comments | https://api.github.com/repos/huggingface/datasets/issues/673/events | https://github.com/huggingface/datasets/issues/673 | 709,603,989 | MDU6SXNzdWU3MDk2MDM5ODk= | 673 | blog_authorship_corpus crashed | {
"login": "Moshiii",
"id": 7553188,
"node_id": "MDQ6VXNlcjc1NTMxODg=",
"avatar_url": "https://avatars.githubusercontent.com/u/7553188?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Moshiii",
"html_url": "https://github.com/Moshiii",
"followers_url": "https://api.github.com/users/Moshiii/... | [
{
"id": 2107841032,
"node_id": "MDU6TGFiZWwyMTA3ODQxMDMy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer",
"name": "nlp-viewer",
"color": "94203D",
"default": false,
"description": ""
}
] | open | false | null | [] | null | [
"Thanks for reporting !\r\nWe'll free some memory"
] | 1,601,151,328,000 | 1,601,280,290,000 | null | NONE | null | null | This is just to report that when I pick blog_authorship_corpus in
https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus
I get this:

| https://api.github.com/repos/huggingface/datasets/issues/673/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/672 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/672/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/672/comments | https://api.github.com/repos/huggingface/datasets/issues/672/events | https://github.com/huggingface/datasets/issues/672 | 709,575,527 | MDU6SXNzdWU3MDk1NzU1Mjc= | 672 | Questions about XSUM | {
"login": "danyaljj",
"id": 2441454,
"node_id": "MDQ6VXNlcjI0NDE0NTQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2441454?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/danyaljj",
"html_url": "https://github.com/danyaljj",
"followers_url": "https://api.github.com/users/danya... | [] | open | false | null | [] | null | [
"We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated",
"Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking... | 1,601,140,584,000 | 1,603,185,367,000 | null | CONTRIBUTOR | null | null | Hi there ✋
I'm looking into your `xsum` dataset and I have several questions about it.
So here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu... | https://api.github.com/repos/huggingface/datasets/issues/672/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/671 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/671/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/671/comments | https://api.github.com/repos/huggingface/datasets/issues/671/events | https://github.com/huggingface/datasets/issues/671 | 709,093,151 | MDU6SXNzdWU3MDkwOTMxNTE= | 671 | [BUG] No such file or directory | {
"login": "jbragg",
"id": 2238344,
"node_id": "MDQ6VXNlcjIyMzgzNDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/2238344?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jbragg",
"html_url": "https://github.com/jbragg",
"followers_url": "https://api.github.com/users/jbragg/foll... | [] | closed | false | null | [] | null | [] | 1,601,051,934,000 | 1,601,304,162,000 | 1,601,304,162,000 | CONTRIBUTOR | null | null | This happens when both
1. The Hugging Face datasets cache dir does not exist
2. You try to load a local dataset script
builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist
https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177
Tested o... | https://api.github.com/repos/huggingface/datasets/issues/671/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/670 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/670/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/670/comments | https://api.github.com/repos/huggingface/datasets/issues/670/events | https://github.com/huggingface/datasets/pull/670 | 709,061,231 | MDExOlB1bGxSZXF1ZXN0NDkzMTc4OTQw | 670 | Fix SQuAD metric kwargs description | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,601,050,137,000 | 1,601,395,059,000 | 1,601,395,058,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/670",
"html_url": "https://github.com/huggingface/datasets/pull/670",
"diff_url": "https://github.com/huggingface/datasets/pull/670.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/670.patch"
} | The `answer_start` field was missing in the kwargs docstring.
This should fix #657
FYI, another fix was proposed by @tshrjn in #658, which suggests removing this field.
However IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I th... | https://api.github.com/repos/huggingface/datasets/issues/670/timeline | null | true |
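For reference, a sketch of the resulting input format — `answer_start` is part of the reference features to mirror the squad dataset, even though the score doesn't use it; the id and texts below are made up:
```python
from datasets import load_metric

squad_metric = load_metric("squad")
predictions = [{"id": "0001", "prediction_text": "1976"}]
references = [{"id": "0001", "answers": {"text": ["1976"], "answer_start": [97]}}]
print(squad_metric.compute(predictions=predictions, references=references))
# expected: {'exact_match': 100.0, 'f1': 100.0}
```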
https://api.github.com/repos/huggingface/datasets/issues/669 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/669/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/669/comments | https://api.github.com/repos/huggingface/datasets/issues/669/events | https://github.com/huggingface/datasets/issues/669 | 708,857,595 | MDU6SXNzdWU3MDg4NTc1OTU= | 669 | How to skip an example when running dataset.map | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi @xixiaoyao,\r\nDepending on what you want to do you can:\r\n- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter\r\n- or directly detect the invalid examples inside the callable used with `map` and return them un... | 1,601,032,673,000 | 1,601,915,293,000 | 1,601,915,293,000 | NONE | null | null | in processing func, I process examples and detect some invalid examples, which I did not want it to be added into train dataset. However I did not find how to skip this recognized invalid example when doing dataset.map. | https://api.github.com/repos/huggingface/datasets/issues/669/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/668 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/668/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/668/comments | https://api.github.com/repos/huggingface/datasets/issues/668/events | https://github.com/huggingface/datasets/issues/668 | 708,310,956 | MDU6SXNzdWU3MDgzMTA5NTY= | 668 | OverflowError when slicing with an array containing negative ids | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,964,834,000 | 1,601,304,139,000 | 1,601,304,139,000 | MEMBER | null | null | ```python
from datasets import Dataset
d = Dataset.from_dict({"a": range(10)})
print(d[0])
# {'a': 0}
print(d[-1])
# {'a': 9}
print(d[[0, -1]])
# OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError ... | https://api.github.com/repos/huggingface/datasets/issues/668/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/667 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/667/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/667/comments | https://api.github.com/repos/huggingface/datasets/issues/667/events | https://github.com/huggingface/datasets/issues/667 | 708,258,392 | MDU6SXNzdWU3MDgyNTgzOTI= | 667 | Loss not decrease with Datasets and Transformers | {
"login": "wangcongcong123",
"id": 23032865,
"node_id": "MDQ6VXNlcjIzMDMyODY1",
"avatar_url": "https://avatars.githubusercontent.com/u/23032865?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wangcongcong123",
"html_url": "https://github.com/wangcongcong123",
"followers_url": "https://api... | [] | closed | false | null | [] | null | [
"And I tested it on T5ForConditionalGeneration, that works no problem.",
"Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread"
] | 1,600,960,483,000 | 1,609,531,285,000 | 1,609,531,285,000 | NONE | null | null | Hi,
The following script is used to fine-tune a BertForSequenceClassification model on SST2.
The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data... | https://api.github.com/repos/huggingface/datasets/issues/667/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/666 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/666/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/666/comments | https://api.github.com/repos/huggingface/datasets/issues/666/events | https://github.com/huggingface/datasets/issues/666 | 707,608,578 | MDU6SXNzdWU3MDc2MDg1Nzg= | 666 | Does both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT? | {
"login": "wahab4114",
"id": 31090427,
"node_id": "MDQ6VXNlcjMxMDkwNDI3",
"avatar_url": "https://avatars.githubusercontent.com/u/31090427?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wahab4114",
"html_url": "https://github.com/wahab4114",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"No they are other similar copies but they are not provided by the official Bert models authors."
] | 1,600,887,745,000 | 1,603,811,965,000 | 1,603,811,965,000 | NONE | null | null | https://api.github.com/repos/huggingface/datasets/issues/666/timeline | null | false | |
https://api.github.com/repos/huggingface/datasets/issues/665 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/665/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/665/comments | https://api.github.com/repos/huggingface/datasets/issues/665/events | https://github.com/huggingface/datasets/issues/665 | 707,037,738 | MDU6SXNzdWU3MDcwMzc3Mzg= | 665 | running dataset.map, it raises TypeError: can't pickle Tokenizer objects | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers/datasets are you using ?",
"transformers and datasets are both the latest",
"Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Co... | 1,600,835,294,000 | 1,602,149,536,000 | 1,602,149,536,000 | NONE | null | null | I load squad dataset. Then want to process data use following function with `Huggingface Transformers LongformerTokenizer`.
```
def convert_to_features(example):
# Tokenize contexts and questions (as pairs of inputs)
input_pairs = [example['question'], example['context']]
encodings = tokenizer.encode... | https://api.github.com/repos/huggingface/datasets/issues/665/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/664 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/664/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/664/comments | https://api.github.com/repos/huggingface/datasets/issues/664/events | https://github.com/huggingface/datasets/issues/664 | 707,017,791 | MDU6SXNzdWU3MDcwMTc3OTE= | 664 | load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable | {
"login": "xixiaoyao",
"id": 24541791,
"node_id": "MDQ6VXNlcjI0NTQxNzkx",
"avatar_url": "https://avatars.githubusercontent.com/u/24541791?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/xixiaoyao",
"html_url": "https://github.com/xixiaoyao",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?",
"Hi @xixiaoyao did you manage to fix your issue ?",
"No activ... | 1,600,833,216,000 | 1,603,184,773,000 | 1,603,184,773,000 | NONE | null | null |
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises errors.
```
train_dataset = datasets.load_dataset('./my_squad.py') ... | https://api.github.com/repos/huggingface/datasets/issues/664/timeline | null | false |
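As the first comment notes, the error means the local script defines no `DatasetBuilder` subclass for `load_dataset` to instantiate. A minimal skeleton of what a local script needs — all names and fields here are illustrative:
```python
import datasets

class MySquad(datasets.GeneratorBasedBuilder):
    """Minimal builder skeleton that load_dataset('./my_squad.py') can pick up."""

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features(
                {"question": datasets.Value("string"), "context": datasets.Value("string")}
            )
        )

    def _split_generators(self, dl_manager):
        return [datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={})]

    def _generate_examples(self):
        yield 0, {"question": "What is this?", "context": "A minimal example."}
```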
https://api.github.com/repos/huggingface/datasets/issues/663 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/663/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/663/comments | https://api.github.com/repos/huggingface/datasets/issues/663/events | https://github.com/huggingface/datasets/pull/663 | 706,732,636 | MDExOlB1bGxSZXF1ZXN0NDkxMjI3NzUz | 663 | Created dataset card snli.md | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.gi... | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | {
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.github.com/users/yje... | [
{
"login": "yjernite",
"id": 10469459,
"node_id": "MDQ6VXNlcjEwNDY5NDU5",
"avatar_url": "https://avatars.githubusercontent.com/u/10469459?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yjernite",
"html_url": "https://github.com/yjernite",
"followers_url": "https://api.... | null | [
"Adding a direct link to the rendered markdown:\r\nhttps://github.com/mcmillanmajora/datasets/blob/add_dataset_documentation/datasets/snli/README.md\r\n",
"It would be amazing if we ended up with this much information on all of our datasets :) \r\n\r\nI don't think there's too much repetition, everything that is ... | 1,600,813,777,000 | 1,602,608,720,000 | 1,602,534,412,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/663",
"html_url": "https://github.com/huggingface/datasets/pull/663",
"diff_url": "https://github.com/huggingface/datasets/pull/663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/663.patch"
} | First draft of a dataset card using the SNLI corpus as an example.
This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around.
- I moved **Who Was Involved** to follow **Language**, ... | https://api.github.com/repos/huggingface/datasets/issues/663/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/662 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/662/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/662/comments | https://api.github.com/repos/huggingface/datasets/issues/662/events | https://github.com/huggingface/datasets/pull/662 | 706,689,866 | MDExOlB1bGxSZXF1ZXN0NDkxMTkyNTM3 | 662 | Created dataset card snli.md | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.gi... | [
{
"id": 2067401494,
"node_id": "MDU6TGFiZWwyMDY3NDAxNDk0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion",
"name": "Dataset discussion",
"color": "72f99f",
"default": false,
"description": "Discussions on the datasets"
}
] | closed | false | null | [] | null | [
"Resubmitting on a new fork"
] | 1,600,808,417,000 | 1,600,809,981,000 | 1,600,809,981,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/662",
"html_url": "https://github.com/huggingface/datasets/pull/662",
"diff_url": "https://github.com/huggingface/datasets/pull/662.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/662.patch"
} | First draft of a dataset card using the SNLI corpus as an example | https://api.github.com/repos/huggingface/datasets/issues/662/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/661 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/661/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/661/comments | https://api.github.com/repos/huggingface/datasets/issues/661/events | https://github.com/huggingface/datasets/pull/661 | 706,465,936 | MDExOlB1bGxSZXF1ZXN0NDkxMDA3NjEw | 661 | Replace pa.OSFile by open | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,787,159,000 | 1,620,239,076,000 | 1,600,787,725,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/661",
"html_url": "https://github.com/huggingface/datasets/pull/661",
"diff_url": "https://github.com/huggingface/datasets/pull/661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/661.patch"
} | It should fix #643 | https://api.github.com/repos/huggingface/datasets/issues/661/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/660 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/660/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/660/comments | https://api.github.com/repos/huggingface/datasets/issues/660/events | https://github.com/huggingface/datasets/pull/660 | 706,324,032 | MDExOlB1bGxSZXF1ZXN0NDkwODkyMjQ0 | 660 | add openwebtext | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.",
"> BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality te... | 1,600,776,322,000 | 1,601,976,010,000 | 1,601,284,046,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/660",
"html_url": "https://github.com/huggingface/datasets/pull/660",
"diff_url": "https://github.com/huggingface/datasets/pull/660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/660.patch"
} | This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for nlp pretraining. It is an open source effort to reproduce OpenAI’s WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA.
It solves #132 .
### Besides dataset buildin... | https://api.github.com/repos/huggingface/datasets/issues/660/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/659 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/659/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/659/comments | https://api.github.com/repos/huggingface/datasets/issues/659/events | https://github.com/huggingface/datasets/pull/659 | 706,231,506 | MDExOlB1bGxSZXF1ZXN0NDkwODE4NTY1 | 659 | Keep new columns in transmit format | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,768,043,000 | 1,600,769,242,000 | 1,600,769,240,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/659",
"html_url": "https://github.com/huggingface/datasets/pull/659",
"diff_url": "https://github.com/huggingface/datasets/pull/659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/659.patch"
} | When a dataset is formatted with a list of columns that `__getitem__` should return, then calling `map` to add new columns doesn't add the new columns to this list.
It caused `KeyError` issues in #620
I changed the logic to add those new columns to the list that `__getitem__` should return. | https://api.github.com/repos/huggingface/datasets/issues/659/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/658 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/658/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/658/comments | https://api.github.com/repos/huggingface/datasets/issues/658/events | https://github.com/huggingface/datasets/pull/658 | 706,206,247 | MDExOlB1bGxSZXF1ZXN0NDkwNzk4MDc0 | 658 | Fix squad metric's Features | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/foll... | [] | closed | false | null | [] | null | [
"Closing this one in favor of #670 \r\n\r\nThanks again for reporting the issue and proposing this fix !\r\nLet me know if you have other remarks"
] | 1,600,765,792,000 | 1,601,395,110,000 | 1,601,395,110,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/658",
"html_url": "https://github.com/huggingface/datasets/pull/658",
"diff_url": "https://github.com/huggingface/datasets/pull/658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/658.patch"
} | Resolves issue [657](https://github.com/huggingface/datasets/issues/657). | https://api.github.com/repos/huggingface/datasets/issues/658/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/657 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/657/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/657/comments | https://api.github.com/repos/huggingface/datasets/issues/657/events | https://github.com/huggingface/datasets/issues/657 | 706,204,383 | MDU6SXNzdWU3MDYyMDQzODM= | 657 | Squad Metric Description & Feature Mismatch | {
"login": "tshrjn",
"id": 8372098,
"node_id": "MDQ6VXNlcjgzNzIwOTg=",
"avatar_url": "https://avatars.githubusercontent.com/u/8372098?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tshrjn",
"html_url": "https://github.com/tshrjn",
"followers_url": "https://api.github.com/users/tshrjn/foll... | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `refere... | 1,600,765,620,000 | 1,602,555,416,000 | 1,601,395,058,000 | NONE | null | null | The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation. | https://api.github.com/repos/huggingface/datasets/issues/657/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/656 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/656/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/656/comments | https://api.github.com/repos/huggingface/datasets/issues/656/events | https://github.com/huggingface/datasets/pull/656 | 705,736,319 | MDExOlB1bGxSZXF1ZXN0NDkwNDEwODAz | 656 | Use multiprocess from pathos for multiprocessing | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"We can just install multiprocess actually, I'll change that",
"Just an FYI: I remember that I wanted to try pathos a couple of years back and I ran into issues considering cross-platform; the code would just break on Windows. If I can verify this PR by running CPU tests on Windows, let me know!",
"That's good ... | 1,600,704,739,000 | 1,601,304,340,000 | 1,601,304,339,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/656",
"html_url": "https://github.com/huggingface/datasets/pull/656",
"diff_url": "https://github.com/huggingface/datasets/pull/656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/656.patch"
} | [Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows to use lambda functions in multiprocessed map.
It was suggested to use it by @kandorm.
We're already using dill which is its only dependency. | https://api.github.com/repos/huggingface/datasets/issues/656/timeline | null | true |
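A sketch of what this unlocks — dill (the serializer multiprocess relies on) can pickle lambdas, so they work with `num_proc > 1`:
```python
from datasets import Dataset

dataset = Dataset.from_dict({"a": list(range(100))})
# A lambda in a multiprocessed map: dill can serialize it, the stdlib pickle cannot.
dataset = dataset.map(lambda x: {"b": x["a"] + 1}, num_proc=2)
```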
https://api.github.com/repos/huggingface/datasets/issues/655 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/655/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/655/comments | https://api.github.com/repos/huggingface/datasets/issues/655/events | https://github.com/huggingface/datasets/pull/655 | 705,672,208 | MDExOlB1bGxSZXF1ZXN0NDkwMzU4OTQ3 | 655 | added Winogrande debiased subset | {
"login": "TevenLeScao",
"id": 26709476,
"node_id": "MDQ6VXNlcjI2NzA5NDc2",
"avatar_url": "https://avatars.githubusercontent.com/u/26709476?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/TevenLeScao",
"html_url": "https://github.com/TevenLeScao",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"To fix the CI you just have to copy the dummy data to the 1.1.0 folder, and maybe create the dummy ones for the `debiased` configuration",
"Fixed! Thanks @lhoestq "
] | 1,600,699,868,000 | 1,600,705,240,000 | 1,600,704,964,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/655",
"html_url": "https://github.com/huggingface/datasets/pull/655",
"diff_url": "https://github.com/huggingface/datasets/pull/655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/655.patch"
} | The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it. | https://api.github.com/repos/huggingface/datasets/issues/655/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/654 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/654/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/654/comments | https://api.github.com/repos/huggingface/datasets/issues/654/events | https://github.com/huggingface/datasets/pull/654 | 705,511,058 | MDExOlB1bGxSZXF1ZXN0NDkwMjI1Nzk3 | 654 | Allow empty inputs in metrics | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,687,596,000 | 1,601,956,308,000 | 1,600,704,818,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/654",
"html_url": "https://github.com/huggingface/datasets/pull/654",
"diff_url": "https://github.com/huggingface/datasets/pull/654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/654.patch"
} | There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute. | https://api.github.com/repos/huggingface/datasets/issues/654/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/653 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/653/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/653/comments | https://api.github.com/repos/huggingface/datasets/issues/653/events | https://github.com/huggingface/datasets/pull/653 | 705,482,391 | MDExOlB1bGxSZXF1ZXN0NDkwMjAxOTg4 | 653 | handle data alteration when trying type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,684,909,000 | 1,600,704,786,000 | 1,600,704,785,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/653",
"html_url": "https://github.com/huggingface/datasets/pull/653",
"diff_url": "https://github.com/huggingface/datasets/pull/653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/653.patch"
} | Fix #649
The bug came from the type inference, which didn't handle a weird case in pyarrow.
Indeed, this code runs without error but silently alters the data in arrow:
```python
import pyarrow as pa
type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}... | https://api.github.com/repos/huggingface/datasets/issues/653/timeline | null | true |
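A reconstruction of the truncated snippet above (the completion is an assumption), showing the silent alteration it describes:
```python
import pyarrow as pa

struct_type = pa.struct({"a": pa.struct({"b": pa.string()})})
# Runs without error, but the nested key "c" is silently dropped instead of
# raising — the data alteration this PR now detects and handles.
arr = pa.array([{"a": {"b": "foo", "c": "bar"}}], type=struct_type)
print(arr.to_pylist())  # [{'a': {'b': 'foo'}}]
```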
https://api.github.com/repos/huggingface/datasets/issues/652 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/652/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/652/comments | https://api.github.com/repos/huggingface/datasets/issues/652/events | https://github.com/huggingface/datasets/pull/652 | 705,390,850 | MDExOlB1bGxSZXF1ZXN0NDkwMTI3MjIx | 652 | handle connection error in download_prepared_from_hf_gcs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,676,471,000 | 1,600,676,923,000 | 1,600,676,922,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/652",
"html_url": "https://github.com/huggingface/datasets/pull/652",
"diff_url": "https://github.com/huggingface/datasets/pull/652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/652.patch"
} | Fix #647 | https://api.github.com/repos/huggingface/datasets/issues/652/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/651 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/651/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/651/comments | https://api.github.com/repos/huggingface/datasets/issues/651/events | https://github.com/huggingface/datasets/issues/651 | 705,212,034 | MDU6SXNzdWU3MDUyMTIwMzQ= | 651 | Problem with JSON dataset format | {
"login": "vikigenius",
"id": 12724810,
"node_id": "MDQ6VXNlcjEyNzI0ODEw",
"avatar_url": "https://avatars.githubusercontent.com/u/12724810?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vikigenius",
"html_url": "https://github.com/vikigenius",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"Currently the `json` dataset doesn't support this format unfortunately.\r\nHowever you could load it with\r\n```python\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndf = pd.read_json(\"path_to_local.json\", orient=\"index\")\r\ndataset = Dataset.from_pandas(df)\r\n```",
"or you can make a custom ... | 1,600,646,234,000 | 1,600,690,464,000 | null | NONE | null | null | I have a local json dataset with the following form.
{
'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
.
.
.
'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
Note that instead of a list of records i... | https://api.github.com/repos/huggingface/datasets/issues/651/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/650 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/650/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/650/comments | https://api.github.com/repos/huggingface/datasets/issues/650/events | https://github.com/huggingface/datasets/issues/650 | 704,861,844 | MDU6SXNzdWU3MDQ4NjE4NDQ= | 650 | dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators` | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"Hi :) \r\nIn your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.\r\nLet me know if it helps",
"Thanks for your comment @lhoestq ,\r\nJust for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but ac... | 1,600,513,623,000 | 1,600,775,650,000 | 1,600,775,649,000 | CONTRIBUTOR | null | null | Hi, I recently want to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
        |__ subset000.xz
| |__ ....txt
| |__ ....txt
| ...
|__ subset001.xz
|
....
```
So I wrote `openwebtext.py` like this
```
d... | https://api.github.com/repos/huggingface/datasets/issues/650/timeline | null | false |
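Since the loading script above is truncated, here is a hedged sketch of how a builder can chain `download_and_extract` with per-subset `dl_manager.extract` calls for this layout; the class name, URL, and feature schema are assumptions, not the actual `openwebtext.py`:
```python
import os
import datasets

class NestedArchiveDataset(datasets.GeneratorBasedBuilder):
    _URL = "https://example.com/openwebtext.tar.xz"  # placeholder URL

    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        # Extract the outer tar.xz, then each nested subset*.xz in turn.
        root = dl_manager.download_and_extract(self._URL)
        subset_dir = os.path.join(root, "openwebtext")
        extracted = [
            dl_manager.extract(os.path.join(subset_dir, name))
            for name in sorted(os.listdir(subset_dir))
        ]
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN, gen_kwargs={"dirs": extracted}
            )
        ]

    def _generate_examples(self, dirs):
        idx = 0
        for d in dirs:
            for fname in sorted(os.listdir(d)):
                with open(os.path.join(d, fname), encoding="utf-8") as f:
                    yield idx, {"text": f.read()}
                idx += 1
```
This is the pattern the first comment refers to: the dummy-data zip then only needs directories named `subset000.xz` and so on, rather than real compressed files.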
https://api.github.com/repos/huggingface/datasets/issues/649 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/649/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/649/comments | https://api.github.com/repos/huggingface/datasets/issues/649/events | https://github.com/huggingface/datasets/issues/649 | 704,838,415 | MDU6SXNzdWU3MDQ4Mzg0MTU= | 649 | Inconsistent behavior in map | {
"login": "krandiash",
"id": 10166085,
"node_id": "MDQ6VXNlcjEwMTY2MDg1",
"avatar_url": "https://avatars.githubusercontent.com/u/10166085?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krandiash",
"html_url": "https://github.com/krandiash",
"followers_url": "https://api.github.com/users/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week"
] | 1,600,504,872,000 | 1,600,704,785,000 | 1,600,704,785,000 | NONE | null | null | I'm observing inconsistent behavior when applying .map(). This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem.
```python
import datasets
# Dataset with a single feature called 'field' consisting of two examples
d... | https://api.github.com/repos/huggingface/datasets/issues/649/timeline | null | false |
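The truncated snippet can be approximated by a self-contained sketch of the same pattern (values are illustrative):
```python
from datasets import Dataset

# A dataset whose 'field' feature is a nested dictionary.
dataset = Dataset.from_dict({"field": [{"a": 0}, {"a": 1}]})

# Each map call adds one more key onto the nested dict.
dataset = dataset.map(lambda ex: {"field": dict(ex["field"], b=2)})
dataset = dataset.map(lambda ex: {"field": dict(ex["field"], c=3)})

# With the type-inference issue described above, keys added in later
# map calls could be silently dropped instead of extending the struct.
print(dataset[0])
```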
https://api.github.com/repos/huggingface/datasets/issues/648 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/648/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/648/comments | https://api.github.com/repos/huggingface/datasets/issues/648/events | https://github.com/huggingface/datasets/issues/648 | 704,753,123 | MDU6SXNzdWU3MDQ3NTMxMjM= | 648 | offset overflow when multiprocessing batched map on large datasets. | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | closed | false | null | [] | null | [
"This should be fixed with #645 ",
"Feel free to re-open if it still occurs"
] | 1,600,481,711,000 | 1,600,534,027,000 | 1,600,533,991,000 | CONTRIBUTOR | null | null | It only happened when using "multiprocessing" + "batched" + "large dataset" at the same time.
```
def bprocess(examples):
examples['len'] = []
for text in examples['text']:
examples['len'].append(len(text))
return examples
wiki.map(bprocess, batched=True, num_proc=8)
```
```
----------------------------... | https://api.github.com/repos/huggingface/datasets/issues/648/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/647 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/647/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/647/comments | https://api.github.com/repos/huggingface/datasets/issues/647/events | https://github.com/huggingface/datasets/issues/647 | 704,734,764 | MDU6SXNzdWU3MDQ3MzQ3NjQ= | 647 | Cannot download dataset_info.json | {
"login": "chiyuzhang94",
"id": 33407613,
"node_id": "MDQ6VXNlcjMzNDA3NjEz",
"avatar_url": "https://avatars.githubusercontent.com/u/33407613?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/chiyuzhang94",
"html_url": "https://github.com/chiyuzhang94",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nWe should add support for servers without internet connection indeed\r\nI'll do that early next week",
"Thanks, @lhoestq !\r\nPlease let me know when it is available. ",
"Right now the recommended way is to create the dataset on a server with internet connection and then to save it an... | 1,600,479,315,000 | 1,600,676,922,000 | 1,600,676,922,000 | NONE | null | null | I am running my job on a cloud server where does not provide for connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I got an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text... | https://api.github.com/repos/huggingface/datasets/issues/647/timeline | null | false |
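The workaround recommended in the comments — build the dataset where internet is available, then copy it to the cluster — can be sketched as follows, assuming a `datasets` version that exposes `save_to_disk`/`load_from_disk` (paths are illustrative):
```python
from datasets import load_dataset, load_from_disk

# On a machine with internet access: build once and serialize to disk.
dataset = load_dataset("text", data_files="corpus.txt", split="train")
dataset.save_to_disk("./my_text_dataset")

# After copying ./my_text_dataset to the offline compute node:
dataset = load_from_disk("./my_text_dataset")
```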
https://api.github.com/repos/huggingface/datasets/issues/646 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/646/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/646/comments | https://api.github.com/repos/huggingface/datasets/issues/646/events | https://github.com/huggingface/datasets/pull/646 | 704,607,371 | MDExOlB1bGxSZXF1ZXN0NDg5NTAyMTM3 | 646 | Fix docs typos | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [] | 1,600,457,547,000 | 1,600,705,854,000 | 1,600,704,852,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/646",
"html_url": "https://github.com/huggingface/datasets/pull/646",
"diff_url": "https://github.com/huggingface/datasets/pull/646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/646.patch"
} | This PR fixes few typos in the docs and the error in the code snippet in the set_format section in docs/source/torch_tensorflow.rst. `torch.utils.data.Dataloader` expects padded batches so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add th... | https://api.github.com/repos/huggingface/datasets/issues/646/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/645 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/645/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/645/comments | https://api.github.com/repos/huggingface/datasets/issues/645/events | https://github.com/huggingface/datasets/pull/645 | 704,542,234 | MDExOlB1bGxSZXF1ZXN0NDg5NDQ5MjAx | 645 | Don't use take on dataset table in pyarrow 1.0.x | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"I tried lower batch sizes and it didn't accelerate filter (quite the opposite actually).\r\nThe slow-down also appears for pyarrow 0.17.1 for some reason, not sure it comes from these changes",
"I just checked the benchmarks of other PRs and some of them had 300s (!!) for filter. This needs some investigation.."... | 1,600,450,294,000 | 1,600,533,992,000 | 1,600,533,991,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/645",
"html_url": "https://github.com/huggingface/datasets/pull/645",
"diff_url": "https://github.com/huggingface/datasets/pull/645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/645.patch"
} | Fix #615 | https://api.github.com/repos/huggingface/datasets/issues/645/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/644 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/644/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/644/comments | https://api.github.com/repos/huggingface/datasets/issues/644/events | https://github.com/huggingface/datasets/pull/644 | 704,534,501 | MDExOlB1bGxSZXF1ZXN0NDg5NDQzMTk1 | 644 | Better windows support | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"This PR is ready :)\r\nIt brings official support for windows.\r\n\r\nSome tests `AWSDatasetTest` are failing.\r\nThis is because I had to fix a few datasets that were not compatible with windows.\r\nThese test will pass once they got merged on master :)"
] | 1,600,449,456,000 | 1,601,042,550,000 | 1,601,042,548,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/644",
"html_url": "https://github.com/huggingface/datasets/pull/644",
"diff_url": "https://github.com/huggingface/datasets/pull/644.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/644.patch"
} | There are a few differences in the behavior of python and pyarrow on windows.
For example, there are restrictions when accessing/deleting files that are open.
Fix #590 | https://api.github.com/repos/huggingface/datasets/issues/644/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/643 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/643/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/643/comments | https://api.github.com/repos/huggingface/datasets/issues/643/events | https://github.com/huggingface/datasets/issues/643 | 704,477,164 | MDU6SXNzdWU3MDQ0NzcxNjQ= | 643 | Caching processed dataset at wrong folder | {
"login": "mrm8488",
"id": 3653789,
"node_id": "MDQ6VXNlcjM2NTM3ODk=",
"avatar_url": "https://avatars.githubusercontent.com/u/3653789?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mrm8488",
"html_url": "https://github.com/mrm8488",
"followers_url": "https://api.github.com/users/mrm8488/... | [
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] | open | false | null | [] | null | [
"Thanks for reporting !\r\nIt uses a temporary file to write the data.\r\nHowever it looks like the temporary file is not placed in the right directory during the processing",
"Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected.\r\nWhich version of `d... | 1,600,443,686,000 | 1,601,309,680,000 | null | NONE | null | null | Hi guys, I run this on my Colab (PRO):
```python
from datasets import load_dataset
dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')
def encode(examples):
return tokenizer(examples['text'], truncation=True, padding='max_length')
dataset = ... | https://api.github.com/repos/huggingface/datasets/issues/643/timeline | null | false |
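One way to pin where the processed arrow file is written — a sketch, not a confirmed fix for the temporary-file placement discussed in the comments — is to pass `cache_file_name` to `map` explicitly (`tokenizer` is assumed to be defined as in the snippet above):
```python
from datasets import load_dataset

dataset = load_dataset("text", data_files="/content/corpus.txt",
                       cache_dir="/content/drive/My Drive", split="train")

def encode(examples):
    return tokenizer(examples["text"], truncation=True, padding="max_length")

# cache_file_name controls the destination of the processed cache file.
dataset = dataset.map(encode, batched=True,
                      cache_file_name="/content/drive/My Drive/encoded.arrow")
```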
https://api.github.com/repos/huggingface/datasets/issues/642 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/642/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/642/comments | https://api.github.com/repos/huggingface/datasets/issues/642/events | https://github.com/huggingface/datasets/pull/642 | 704,397,499 | MDExOlB1bGxSZXF1ZXN0NDg5MzMwMDAx | 642 | Rename wnut fields | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,437,091,000 | 1,600,449,511,000 | 1,600,449,510,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/642",
"html_url": "https://github.com/huggingface/datasets/pull/642",
"diff_url": "https://github.com/huggingface/datasets/pull/642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/642.patch"
} | As mentioned in #641 it would be cool to have it follow the naming of the other NER datasets | https://api.github.com/repos/huggingface/datasets/issues/642/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/641 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/641/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/641/comments | https://api.github.com/repos/huggingface/datasets/issues/641/events | https://github.com/huggingface/datasets/pull/641 | 704,373,940 | MDExOlB1bGxSZXF1ZXN0NDg5MzExOTU3 | 641 | Add Polyglot-NER Dataset | {
"login": "joeddav",
"id": 9353833,
"node_id": "MDQ6VXNlcjkzNTM4MzM=",
"avatar_url": "https://avatars.githubusercontent.com/u/9353833?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/joeddav",
"html_url": "https://github.com/joeddav",
"followers_url": "https://api.github.com/users/joeddav/... | [] | closed | false | null | [] | null | [
"Hi @joeddav thanks for adding this! (I did a long webarchive.org session to actually find that dataset a while ago).\r\n\r\nOne question: should we manually correct the labeling scheme to (at least) IOB1?\r\n\r\nThat means \"LOC\" will be converted to \"I-LOC\". IOB1 is not explict. mentioned in the paper, but it ... | 1,600,435,304,000 | 1,600,571,083,000 | 1,600,571,083,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/641",
"html_url": "https://github.com/huggingface/datasets/pull/641",
"diff_url": "https://github.com/huggingface/datasets/pull/641.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/641.patch"
} | Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together. | https://api.github.com/repos/huggingface/datasets/issues/641/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/640 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/640/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/640/comments | https://api.github.com/repos/huggingface/datasets/issues/640/events | https://github.com/huggingface/datasets/pull/640 | 704,311,758 | MDExOlB1bGxSZXF1ZXN0NDg5MjYwNTc1 | 640 | Make shuffle compatible with temp_seed | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,429,138,000 | 1,600,429,671,000 | 1,600,429,670,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/640",
"html_url": "https://github.com/huggingface/datasets/pull/640",
"diff_url": "https://github.com/huggingface/datasets/pull/640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/640.patch"
} | This code used to return a different dataset at each run
```python
import datasets as ds
dataset = ...
with ds.temp_seed(42):
shuffled = dataset.shuffle()
```
Now it returns the same one since the seed is set | https://api.github.com/repos/huggingface/datasets/issues/640/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/639 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/639/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/639/comments | https://api.github.com/repos/huggingface/datasets/issues/639/events | https://github.com/huggingface/datasets/pull/639 | 704,217,963 | MDExOlB1bGxSZXF1ZXN0NDg5MTgxOTY3 | 639 | Update glue QQP checksum | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,420,095,000 | 1,600,429,028,000 | 1,600,429,027,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/639",
"html_url": "https://github.com/huggingface/datasets/pull/639",
"diff_url": "https://github.com/huggingface/datasets/pull/639.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/639.patch"
} | Fix #638 | https://api.github.com/repos/huggingface/datasets/issues/639/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/638 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/638/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/638/comments | https://api.github.com/repos/huggingface/datasets/issues/638/events | https://github.com/huggingface/datasets/issues/638 | 704,146,956 | MDU6SXNzdWU3MDQxNDY5NTY= | 638 | GLUE/QQP dataset: NonMatchingChecksumError | {
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"Hi ! Sure I'll take a look"
] | 1,600,412,950,000 | 1,600,429,027,000 | 1,600,429,027,000 | CONTRIBUTOR | null | null | Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚
datasets version: editable install of master at 9/17
`datasets.load_data... | https://api.github.com/repos/huggingface/datasets/issues/638/timeline | null | false |
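Until the updated checksums from #639 landed, one stopgap (hedged — it skips integrity checks entirely, so only for temporary use) was to disable verification:
```python
from datasets import load_dataset

# Skips checksum/size verification of the downloaded files.
dataset = load_dataset("glue", "qqp", ignore_verifications=True)
```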
https://api.github.com/repos/huggingface/datasets/issues/637 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/637/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/637/comments | https://api.github.com/repos/huggingface/datasets/issues/637/events | https://github.com/huggingface/datasets/pull/637 | 703,539,909 | MDExOlB1bGxSZXF1ZXN0NDg4NjMwNzk4 | 637 | Add MATINF | {
"login": "JetRunner",
"id": 22514219,
"node_id": "MDQ6VXNlcjIyNTE0MjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/22514219?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JetRunner",
"html_url": "https://github.com/JetRunner",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [] | 1,600,345,493,000 | 1,600,348,998,000 | 1,600,348,997,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/637",
"html_url": "https://github.com/huggingface/datasets/pull/637",
"diff_url": "https://github.com/huggingface/datasets/pull/637.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/637.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/637/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/636 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/636/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/636/comments | https://api.github.com/repos/huggingface/datasets/issues/636/events | https://github.com/huggingface/datasets/pull/636 | 702,883,989 | MDExOlB1bGxSZXF1ZXN0NDg4MDg3OTA5 | 636 | Consistent ner features | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,600,271,785,000 | 1,600,336,379,000 | 1,600,336,378,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/636",
"html_url": "https://github.com/huggingface/datasets/pull/636",
"diff_url": "https://github.com/huggingface/datasets/pull/636.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/636.patch"
} | As discussed in #613 , this PR aims at making NER feature names consistent across datasets.
I changed the feature names of LinCE and XTREME/PAN-X | https://api.github.com/repos/huggingface/datasets/issues/636/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/635 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/635/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/635/comments | https://api.github.com/repos/huggingface/datasets/issues/635/events | https://github.com/huggingface/datasets/pull/635 | 702,822,439 | MDExOlB1bGxSZXF1ZXN0NDg4MDM2OTE5 | 635 | Loglevel | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"I think it's ready now @stas00, did you want to add something else ?\r\nThis PR includes your changes but with the level set to warning",
"LGTM, thank you, @lhoestq "
] | 1,600,267,073,000 | 1,600,336,339,000 | 1,600,336,338,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/635",
"html_url": "https://github.com/huggingface/datasets/pull/635",
"diff_url": "https://github.com/huggingface/datasets/pull/635.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/635.patch"
} | Continuation of #618 | https://api.github.com/repos/huggingface/datasets/issues/635/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/634 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/634/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/634/comments | https://api.github.com/repos/huggingface/datasets/issues/634/events | https://github.com/huggingface/datasets/pull/634 | 702,676,041 | MDExOlB1bGxSZXF1ZXN0NDg3OTEzOTk4 | 634 | Add ConLL-2000 dataset | {
"login": "vblagoje",
"id": 458335,
"node_id": "MDQ6VXNlcjQ1ODMzNQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/458335?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vblagoje",
"html_url": "https://github.com/vblagoje",
"followers_url": "https://api.github.com/users/vblagoj... | [] | closed | false | null | [] | null | [] | 1,600,254,851,000 | 1,600,339,090,000 | 1,600,339,090,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/634",
"html_url": "https://github.com/huggingface/datasets/pull/634",
"diff_url": "https://github.com/huggingface/datasets/pull/634.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/634.patch"
} | Adds ConLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR | https://api.github.com/repos/huggingface/datasets/issues/634/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/633 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/633/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/633/comments | https://api.github.com/repos/huggingface/datasets/issues/633/events | https://github.com/huggingface/datasets/issues/633 | 702,440,484 | MDU6SXNzdWU3MDI0NDA0ODQ= | 633 | Load large text file for LM pre-training resulting in OOM | {
"login": "leethu2012",
"id": 29704017,
"node_id": "MDQ6VXNlcjI5NzA0MDE3",
"avatar_url": "https://avatars.githubusercontent.com/u/29704017?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/leethu2012",
"html_url": "https://github.com/leethu2012",
"followers_url": "https://api.github.com/use... | [] | open | false | null | [] | null | [
"Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?",
"There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.",
"@lhoestq @sgugger Thanks for your comments. I have install from source ... | 1,600,230,795,000 | 1,613,476,921,000 | null | NONE | null | null | I tried to pretrain Longformer using transformers and datasets. But I got OOM issues with loading a large text file. My script is almost like this:
```python
from dataclasses import dataclass

from datasets import load_dataset
from transformers import DataCollatorForLanguageModeling

@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
"""
Data collator u... | https://api.github.com/repos/huggingface/datasets/issues/633/timeline | null | false |
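For the memory side of this report, a minimal sketch of batched tokenization that keeps peak RAM bounded; the file path is illustrative and `tokenizer` is assumed to be defined elsewhere:
```python
from datasets import load_dataset

# The dataset is memory-mapped from disk, so processing it in bounded
# batches avoids materializing the whole corpus in RAM.
dataset = load_dataset("text", data_files="large_corpus.txt", split="train")
dataset = dataset.map(
    lambda examples: tokenizer(examples["text"], truncation=True, max_length=512),
    batched=True,
    batch_size=1000,
)
```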
https://api.github.com/repos/huggingface/datasets/issues/632 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/632/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/632/comments | https://api.github.com/repos/huggingface/datasets/issues/632/events | https://github.com/huggingface/datasets/pull/632 | 702,358,124 | MDExOlB1bGxSZXF1ZXN0NDg3NjQ5OTQ2 | 632 | Fix typos in the loading datasets docs | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"thanks!"
] | 1,600,216,061,000 | 1,600,705,871,000 | 1,600,239,164,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/632",
"html_url": "https://github.com/huggingface/datasets/pull/632",
"diff_url": "https://github.com/huggingface/datasets/pull/632.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/632.patch"
} | This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function. | https://api.github.com/repos/huggingface/datasets/issues/632/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/631 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/631/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/631/comments | https://api.github.com/repos/huggingface/datasets/issues/631/events | https://github.com/huggingface/datasets/pull/631 | 701,711,255 | MDExOlB1bGxSZXF1ZXN0NDg3MTE3OTA0 | 631 | Fix text delimiter | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Which OS are you using ?@abhi1nandy2",
"> Which OS are you using ?\r\n\r\nPRETTY_NAME=\"Debian GNU/Linux 9 (stretch)\"\r\nNAME=\"Debian GNU/Linux\"\r\nVERSION_ID=\"9\"\r\nVERSION=\"9 (stretch)\"\r\nVERSION_CODENAME=stretch\r\nID=debian\r\nHOME_URL=\"https://www.debian.org/\"\r\nSUPPORT_URL=\"https://www.debian.o... | 1,600,157,322,000 | 1,600,786,986,000 | 1,600,158,385,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/631",
"html_url": "https://github.com/huggingface/datasets/pull/631",
"diff_url": "https://github.com/huggingface/datasets/pull/631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/631.patch"
} | I changed the delimiter in the `text` dataset script.
It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622
I changed the delimiter to an unused ASCII character that is not present in text files: `\b` | https://api.github.com/repos/huggingface/datasets/issues/631/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/630 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/630/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/630/comments | https://api.github.com/repos/huggingface/datasets/issues/630/events | https://github.com/huggingface/datasets/issues/630 | 701,636,350 | MDU6SXNzdWU3MDE2MzYzNTA= | 630 | Text dataset not working with large files | {
"login": "ksjae",
"id": 17930170,
"node_id": "MDQ6VXNlcjE3OTMwMTcw",
"avatar_url": "https://avatars.githubusercontent.com/u/17930170?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ksjae",
"html_url": "https://github.com/ksjae",
"followers_url": "https://api.github.com/users/ksjae/follow... | [] | closed | false | null | [] | null | [
"Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.",
"Can you give us some stats on the data files you use as inputs?",
"Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\\n```\r\n\r\nAlso, it gets... | 1,600,149,756,000 | 1,601,072,503,000 | 1,601,072,503,000 | NONE | null | null | ```
Traceback (most recent call last):
File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
main()
File "examples/language-modeling/run_language_modeling.py", line 262, in main
get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t... | https://api.github.com/repos/huggingface/datasets/issues/630/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/629 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/629/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/629/comments | https://api.github.com/repos/huggingface/datasets/issues/629/events | https://github.com/huggingface/datasets/issues/629 | 701,517,550 | MDU6SXNzdWU3MDE1MTc1NTA= | 629 | straddling object straddles two block boundaries | {
"login": "bharaniabhishek123",
"id": 17970177,
"node_id": "MDQ6VXNlcjE3OTcwMTc3",
"avatar_url": "https://avatars.githubusercontent.com/u/17970177?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bharaniabhishek123",
"html_url": "https://github.com/bharaniabhishek123",
"followers_url": "ht... | [] | closed | false | null | [] | null | [
"sorry it's an apache arrow issue."
] | 1,600,129,846,000 | 1,600,130,177,000 | 1,600,129,937,000 | NONE | null | null | I am trying to read JSON data (it's an array with lots of dictionaries) and I am getting a block boundaries issue, as below:
I tried calling read_json with ReadOptions but had no luck.
```
table = json.read_json(fn)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "pyarrow/_json.pyx", li... | https://api.github.com/repos/huggingface/datasets/issues/629/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/628 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/628/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/628/comments | https://api.github.com/repos/huggingface/datasets/issues/628/events | https://github.com/huggingface/datasets/pull/628 | 701,496,053 | MDExOlB1bGxSZXF1ZXN0NDg2OTQyNzgx | 628 | Update docs links in the contribution guideline | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/... | [] | closed | false | null | [] | null | [
"Thanks!"
] | 1,600,126,039,000 | 1,604,351,003,000 | 1,600,150,775,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/628",
"html_url": "https://github.com/huggingface/datasets/pull/628",
"diff_url": "https://github.com/huggingface/datasets/pull/628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/628.patch"
} | Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website. | https://api.github.com/repos/huggingface/datasets/issues/628/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/627 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/627/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/627/comments | https://api.github.com/repos/huggingface/datasets/issues/627/events | https://github.com/huggingface/datasets/pull/627 | 701,411,661 | MDExOlB1bGxSZXF1ZXN0NDg2ODcxMTg2 | 627 | fix (#619) MLQA features names | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/... | [] | closed | false | null | [] | null | [] | 1,600,116,119,000 | 1,604,351,072,000 | 1,600,239,251,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/627",
"html_url": "https://github.com/huggingface/datasets/pull/627",
"diff_url": "https://github.com/huggingface/datasets/pull/627.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/627.patch"
} | Fixed the features names as suggested in (#619) in the `_generate_examples` and `_info` methods in the MLQA loading script and also changed the names in the `dataset_infos.json` file. | https://api.github.com/repos/huggingface/datasets/issues/627/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/626 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/626/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/626/comments | https://api.github.com/repos/huggingface/datasets/issues/626/events | https://github.com/huggingface/datasets/pull/626 | 701,352,605 | MDExOlB1bGxSZXF1ZXN0NDg2ODIzMTY1 | 626 | Update GLUE URLs (now hosted on FB) | {
"login": "jeswan",
"id": 57466294,
"node_id": "MDQ6VXNlcjU3NDY2Mjk0",
"avatar_url": "https://avatars.githubusercontent.com/u/57466294?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jeswan",
"html_url": "https://github.com/jeswan",
"followers_url": "https://api.github.com/users/jeswan/fo... | [] | closed | false | null | [] | null | [] | 1,600,110,339,000 | 1,600,239,198,000 | 1,600,239,198,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/626",
"html_url": "https://github.com/huggingface/datasets/pull/626",
"diff_url": "https://github.com/huggingface/datasets/pull/626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/626.patch"
} | NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
Note: rebased on huggingface/dat... | https://api.github.com/repos/huggingface/datasets/issues/626/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/625 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/625/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/625/comments | https://api.github.com/repos/huggingface/datasets/issues/625/events | https://github.com/huggingface/datasets/issues/625 | 701,057,799 | MDU6SXNzdWU3MDEwNTc3OTk= | 625 | dtype of tensors should be preserved | {
"login": "BramVanroy",
"id": 2779410,
"node_id": "MDQ6VXNlcjI3Nzk0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2779410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BramVanroy",
"html_url": "https://github.com/BramVanroy",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd t... | 1,600,087,085,000 | 1,629,189,004,000 | 1,629,189,004,000 | CONTRIBUTOR | null | null | After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-... | https://api.github.com/repos/huggingface/datasets/issues/625/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/624 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/624/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/624/comments | https://api.github.com/repos/huggingface/datasets/issues/624/events | https://github.com/huggingface/datasets/issues/624 | 700,541,628 | MDU6SXNzdWU3MDA1NDE2Mjg= | 624 | Add learningq dataset | {
"login": "krrishdholakia",
"id": 17561003,
"node_id": "MDQ6VXNlcjE3NTYxMDAz",
"avatar_url": "https://avatars.githubusercontent.com/u/17561003?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/krrishdholakia",
"html_url": "https://github.com/krrishdholakia",
"followers_url": "https://api.gi... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [] | 1,599,992,427,000 | 1,600,077,002,000 | null | NONE | null | null | Hi,
Thank you again for this amazing repo.
Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
| https://api.github.com/repos/huggingface/datasets/issues/624/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/623 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/623/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/623/comments | https://api.github.com/repos/huggingface/datasets/issues/623/events | https://github.com/huggingface/datasets/issues/623 | 700,235,308 | MDU6SXNzdWU3MDAyMzUzMDg= | 623 | Custom feature types in `load_dataset` from CSV | {
"login": "lvwerra",
"id": 8264887,
"node_id": "MDQ6VXNlcjgyNjQ4ODc=",
"avatar_url": "https://avatars.githubusercontent.com/u/8264887?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lvwerra",
"html_url": "https://github.com/lvwerra",
"followers_url": "https://api.github.com/users/lvwerra/... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | null | [
"Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label... | 1,599,916,894,000 | 1,601,495,503,000 | 1,601,455,194,000 | CONTRIBUTOR | null | null | I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`.
I am working with the local files from the emotion dataset. To get the data you can use the followi... | https://api.github.com/repos/huggingface/datasets/issues/623/timeline | null | false |
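A self-contained version of the `cast_` workaround quoted (truncated) above; the delimiter and column names mirror the issue, while `file_dict` and the target features are illustrative:
```python
from datasets import load_dataset, Features, Value

file_dict = {"train": "train.txt"}  # hypothetical local emotion-style files
dataset = load_dataset("csv", data_files=file_dict, delimiter=";",
                       column_names=["text", "label"])

# `csv` ignores `features` at load time, so replace the schema in place
# afterwards; column contents must be compatible with the target types.
dataset["train"].cast_(Features({"text": Value("string"),
                                 "label": Value("string")}))
```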