| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (int64) | updated_at (int64) | closed_at (int64) | author_association (string) | active_lock_reason (null) | pull_request (dict) | body (string) | timeline_url (string) | performed_via_github_app (null) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/1932 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1932/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1932/comments | https://api.github.com/repos/huggingface/datasets/issues/1932/events | https://github.com/huggingface/datasets/pull/1932 | 814,326,116 | MDExOlB1bGxSZXF1ZXN0NTc4MzMyMTQy | 1,932 | Fix builder config creation with data_dir | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,614,075,962,000 | 1,614,077,128,000 | 1,614,077,127,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1932",
"html_url": "https://github.com/huggingface/datasets/pull/1932",
"diff_url": "https://github.com/huggingface/datasets/pull/1932.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1932.patch"
} | The data_dir parameter wasn't taken into account to create the config_id, therefore the resulting builder config was considered not custom. However a builder config that is non-custom must not have a name that collides with the predefined builder config names. Therefore it resulted in a `ValueError("Cannot name a custo... | https://api.github.com/repos/huggingface/datasets/issues/1932/timeline | null | true |
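A minimal sketch of the failure mode, with hypothetical names (this is not the library's actual code): the config id must fold in custom kwargs such as `data_dir`, otherwise the config keeps a predefined name and is rejected as a custom config.

```python
# Hypothetical sketch, not the datasets library's actual implementation:
# deriving a config id that accounts for custom kwargs like data_dir.
import hashlib
from dataclasses import dataclass
from typing import Optional

@dataclass
class BuilderConfig:
    name: str
    data_dir: Optional[str] = None

    def create_config_id(self, config_kwargs: dict) -> str:
        # Any custom kwargs (e.g. data_dir) append a hash suffix, so the
        # resulting config no longer collides with a predefined name.
        if config_kwargs:
            suffix = hashlib.sha256(repr(sorted(config_kwargs.items())).encode()).hexdigest()[:16]
            return f"{self.name}-{suffix}"
        return self.name

config = BuilderConfig(name="default", data_dir="/path/to/data")
print(config.create_config_id({"data_dir": config.data_dir}))  # default-<hash>
```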
https://api.github.com/repos/huggingface/datasets/issues/1931 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1931/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1931/comments | https://api.github.com/repos/huggingface/datasets/issues/1931/events | https://github.com/huggingface/datasets/pull/1931 | 814,225,074 | MDExOlB1bGxSZXF1ZXN0NTc4MjQ4NTA5 | 1,931 | add m_lama (multilingual lama) dataset | {
"login": "pdufter",
"id": 13961899,
"node_id": "MDQ6VXNlcjEzOTYxODk5",
"avatar_url": "https://avatars.githubusercontent.com/u/13961899?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pdufter",
"html_url": "https://github.com/pdufter",
"followers_url": "https://api.github.com/users/pdufte... | [] | closed | false | null | [] | null | [
"Hi, it seems I am somewhat stuck here. The failed test `ci/circleci: run_dataset_script_tests_pyarrow_1_WIN` seems to be caused by some broken connection (`ConnectionResetError: [WinError 10054] An existing connection was forcibly closed by the remote host`). Any help on this is appreciated. \r\n\r\nEdit: Seems to... | 1,614,067,917,000 | 1,614,592,863,000 | 1,614,592,863,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1931",
"html_url": "https://github.com/huggingface/datasets/pull/1931",
"diff_url": "https://github.com/huggingface/datasets/pull/1931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1931.patch"
} | Add a multilingual (machine translated and automatically generated) version of the LAMA benchmark. For details see the paper https://arxiv.org/pdf/2102.00894.pdf | https://api.github.com/repos/huggingface/datasets/issues/1931/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1930 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1930/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1930/comments | https://api.github.com/repos/huggingface/datasets/issues/1930/events | https://github.com/huggingface/datasets/pull/1930 | 814,055,198 | MDExOlB1bGxSZXF1ZXN0NTc4MTAwNzI0 | 1,930 | updated the wino_bias dataset | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\nThanks again for your help on this !",
"> Hi @JieyuZhao ! Have you had a chance to add the different configurations ?\r\n> Thanks again for your help on this !\r\n\r\nHi @lhoestq Yes, I've updated the code. Now the configuration will... | 1,614,049,660,000 | 1,617,809,096,000 | 1,617,809,096,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1930",
"html_url": "https://github.com/huggingface/datasets/pull/1930",
"diff_url": "https://github.com/huggingface/datasets/pull/1930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1930.patch"
} | Updated the wino_bias.py script.
- updated the data_url
- added different configurations for different data splits
- added the coreference_cluster to the data features | https://api.github.com/repos/huggingface/datasets/issues/1930/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1929 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1929/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1929/comments | https://api.github.com/repos/huggingface/datasets/issues/1929/events | https://github.com/huggingface/datasets/pull/1929 | 813,929,669 | MDExOlB1bGxSZXF1ZXN0NTc3OTk1MTE4 | 1,929 | Improve typing and style and fix some inconsistencies | {
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/use... | [] | closed | false | null | [] | null | [
"@lhoestq Thanks for the quick review.",
"I merged master to this branch to re-run the CI before merging :)"
] | 1,614,034,061,000 | 1,614,183,374,000 | 1,614,175,434,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1929",
"html_url": "https://github.com/huggingface/datasets/pull/1929",
"diff_url": "https://github.com/huggingface/datasets/pull/1929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1929.patch"
} | This PR:
* improves typing (mostly more consistent use of `typing.Optional`)
* `DatasetDict.cleanup_cache_files` now correctly returns a dict
* replaces `dict()` with the corresponding literal
* uses `dict_to_copy.copy()` instead of `dict(dict_to_copy)` for shallow copying | https://api.github.com/repos/huggingface/datasets/issues/1929/timeline | null | true |
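For readers skimming the list above, a tiny illustrative snippet (not code from the PR) showing each convention:

```python
# Illustrative only -- the conventions listed above, not code from the PR.
from typing import Optional

def load_config(path: Optional[str] = None) -> dict:  # explicit typing.Optional
    config = {}  # dict literal instead of dict()
    if path is not None:
        config["path"] = path
    return config

original = load_config("/tmp/cfg")
shallow = original.copy()  # dict_to_copy.copy() instead of dict(dict_to_copy)
```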
https://api.github.com/repos/huggingface/datasets/issues/1928 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1928/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1928/comments | https://api.github.com/repos/huggingface/datasets/issues/1928/events | https://github.com/huggingface/datasets/pull/1928 | 813,793,434 | MDExOlB1bGxSZXF1ZXN0NTc3ODgyMDM4 | 1,928 | Updating old cards | {
"login": "mcmillanmajora",
"id": 26722925,
"node_id": "MDQ6VXNlcjI2NzIyOTI1",
"avatar_url": "https://avatars.githubusercontent.com/u/26722925?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mcmillanmajora",
"html_url": "https://github.com/mcmillanmajora",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | [] | 1,614,021,964,000 | 1,614,104,365,000 | 1,614,104,365,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1928",
"html_url": "https://github.com/huggingface/datasets/pull/1928",
"diff_url": "https://github.com/huggingface/datasets/pull/1928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1928.patch"
} | Updated the cards for [Allocine](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/allocine), [CNN/DailyMail](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/cnn_dailymail), and [SNLI](https://github.com/mcmillanmajora/datasets/tree/updating-old-cards/datasets/snli)... | https://api.github.com/repos/huggingface/datasets/issues/1928/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1927 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1927/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1927/comments | https://api.github.com/repos/huggingface/datasets/issues/1927/events | https://github.com/huggingface/datasets/pull/1927 | 813,768,935 | MDExOlB1bGxSZXF1ZXN0NTc3ODYxODM5 | 1,927 | Update README.md | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"Thanks @JieyuZhao.\r\n\r\nI think this PR was superseded by your other PRs:\r\n- #1930\r\n- #2152 \r\n\r\nI'm closing this."
] | 1,614,019,894,000 | 1,614,077,565,000 | null | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1927",
"html_url": "https://github.com/huggingface/datasets/pull/1927",
"diff_url": "https://github.com/huggingface/datasets/pull/1927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1927.patch"
} | Updated the info for the wino_bias dataset. | https://api.github.com/repos/huggingface/datasets/issues/1927/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1926 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1926/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1926/comments | https://api.github.com/repos/huggingface/datasets/issues/1926/events | https://github.com/huggingface/datasets/pull/1926 | 813,607,994 | MDExOlB1bGxSZXF1ZXN0NTc3NzI4Mjgy | 1,926 | Fix: Wiki_dpr - add missing scalar quantizer | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,614,007,925,000 | 1,614,008,994,000 | 1,614,008,993,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1926",
"html_url": "https://github.com/huggingface/datasets/pull/1926",
"diff_url": "https://github.com/huggingface/datasets/pull/1926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1926.patch"
} | All the prebuilt wiki_dpr indexes already use SQ8; I forgot to update the wiki_dpr script after building them. Now it's finally done.
The scalar quantizer SQ8 doesn't reduce the performance of the index as shown in retrieval experiments on RAG.
The quantizer reduces the size of the index a lot but increases index b... | https://api.github.com/repos/huggingface/datasets/issues/1926/timeline | null | true |
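As a rough illustration of what SQ8 means here (the factory strings and dimension below are assumptions, not necessarily the exact parameters used to build the wiki_dpr indexes):

```python
# Hedged sketch: a FAISS index with and without the SQ8 scalar quantizer.
import faiss

dim = 768  # DPR embeddings are 768-dimensional
flat = faiss.index_factory(dim, "IVF4096,Flat")  # stores full 32-bit floats
sq8 = faiss.index_factory(dim, "IVF4096,SQ8")    # 8-bit scalar quantizer, ~4x smaller
```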
https://api.github.com/repos/huggingface/datasets/issues/1925 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1925/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1925/comments | https://api.github.com/repos/huggingface/datasets/issues/1925/events | https://github.com/huggingface/datasets/pull/1925 | 813,600,902 | MDExOlB1bGxSZXF1ZXN0NTc3NzIyMzc3 | 1,925 | Fix: Wiki_dpr - fix when with_embeddings is False or index_name is "no_index" | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Hi @lhoestq ,\r\n\r\nI am running into an issue now when trying to run RAG. Running exactly as described [here](https://huggingface.co/facebook/rag-token-nq?fbclid=IwAR3bTfhls5U_t9DqsX2Vzb7NhtRHxJxfQ-uwFT7VuCPMZUM2AdAlKF_qkI8#usage) I get the error below. Wondering if it's related to this.\r\n\r\nRunning Transfor... | 1,614,007,426,000 | 1,614,216,828,000 | 1,614,008,168,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1925",
"html_url": "https://github.com/huggingface/datasets/pull/1925",
"diff_url": "https://github.com/huggingface/datasets/pull/1925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1925.patch"
} | Fix the bugs noticed in #1915
There was a bug when `with_embeddings=False` where the configuration name was the same as if `with_embeddings=True`, which led the dataset builder to do bad verifications (for example it used to expect to download the embeddings for `with_embeddings=False`).
Another issue was that s... | https://api.github.com/repos/huggingface/datasets/issues/1925/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1924 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1924/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1924/comments | https://api.github.com/repos/huggingface/datasets/issues/1924/events | https://github.com/huggingface/datasets/issues/1924 | 813,599,733 | MDU6SXNzdWU4MTM1OTk3MzM= | 1,924 | Anonymous Dataset Addition (i.e Anonymous PR?) | {
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"Hi !\r\nI guess you can add a dataset without the fields that must be kept anonymous, and then update those when the anonymity period is over.\r\nYou can also make the PR from an anonymous org.\r\nPinging @yjernite just to make sure it's ok",
"Hello,\r\nI would prefer to do the reverse: adding a link to an anony... | 1,614,007,350,000 | 1,614,104,890,000 | null | CONTRIBUTOR | null | null | Hello,
Thanks a lot for your library.
We plan to submit a paper to OpenReview using the anonymous setting. Is it possible to add a new dataset without breaking anonymity, with a link to the paper?
Cheers
@eusip | https://api.github.com/repos/huggingface/datasets/issues/1924/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1923 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1923/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1923/comments | https://api.github.com/repos/huggingface/datasets/issues/1923/events | https://github.com/huggingface/datasets/pull/1923 | 813,363,472 | MDExOlB1bGxSZXF1ZXN0NTc3NTI0MTU0 | 1,923 | Fix save_to_disk with relative path | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,989,639,000 | 1,613,992,964,000 | 1,613,992,963,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1923",
"html_url": "https://github.com/huggingface/datasets/pull/1923",
"diff_url": "https://github.com/huggingface/datasets/pull/1923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1923.patch"
} | As noticed in #1919 and #1920, the target directory was not created using `makedirs`, so saving to it raises `FileNotFoundError`. For absolute paths it works, but not for the right reason: the target path was the same as the temporary path where in-memory data are written as an intermediate step.
I added... | https://api.github.com/repos/huggingface/datasets/issues/1923/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1922 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1922/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1922/comments | https://api.github.com/repos/huggingface/datasets/issues/1922/events | https://github.com/huggingface/datasets/issues/1922 | 813,140,806 | MDU6SXNzdWU4MTMxNDA4MDY= | 1,922 | How to update the "wino_bias" dataset | {
"login": "JieyuZhao",
"id": 22306304,
"node_id": "MDQ6VXNlcjIyMzA2MzA0",
"avatar_url": "https://avatars.githubusercontent.com/u/22306304?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JieyuZhao",
"html_url": "https://github.com/JieyuZhao",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"Hi @JieyuZhao !\r\n\r\nYou can edit the dataset card of wino_bias to update the URL via a Pull Request. This would be really appreciated :)\r\n\r\nThe dataset card is the README.md file you can find at https://github.com/huggingface/datasets/tree/master/datasets/wino_bias\r\nAlso the homepage url is also mentioned... | 1,613,972,379,000 | 1,613,990,159,000 | null | CONTRIBUTOR | null | null | Hi all,
Thanks for the efforts to collect all the datasets! But I think there is a problem with the wino_bias dataset. The current link is not correct. How can I update that?
Thanks! | https://api.github.com/repos/huggingface/datasets/issues/1922/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1921 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1921/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1921/comments | https://api.github.com/repos/huggingface/datasets/issues/1921/events | https://github.com/huggingface/datasets/pull/1921 | 812,716,042 | MDExOlB1bGxSZXF1ZXN0NTc3MDEzMDM4 | 1,921 | Standardizing datasets dtypes | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"@lhoestq - apologies for the multiple PRs, my previous one (#1905) got mangled due to some merge conflicts that I had trouble resolving so I just cherry-picked my changes onto a fresh branch here."
] | 1,613,858,641,000 | 1,613,987,050,000 | 1,613,987,050,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1921",
"html_url": "https://github.com/huggingface/datasets/pull/1921",
"diff_url": "https://github.com/huggingface/datasets/pull/1921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1921.patch"
} | This PR follows up on discussion in #1900 to have an explicit set of basic dtypes for datasets.
This moves away from str(pyarrow.DataType) as the method of choice for creating dtypes, favoring an explicit mapping to a list of supported Value dtypes.
I believe in practice this should be backward compatible, since ... | https://api.github.com/repos/huggingface/datasets/issues/1921/timeline | null | true |
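A hedged sketch of the "explicit mapping" idea (the names and exact dtype list are assumptions, not the PR's code):

```python
# Hedged sketch: an explicit dtype-to-pyarrow mapping, replacing
# str(pyarrow.DataType) round-tripping. Names are assumptions.
import pyarrow as pa

SUPPORTED_DTYPES = {
    "bool": pa.bool_(),
    "int8": pa.int8(),
    "int32": pa.int32(),
    "int64": pa.int64(),
    "float32": pa.float32(),
    "float64": pa.float64(),
    "string": pa.string(),
}

def value_dtype_to_arrow(dtype: str) -> pa.DataType:
    if dtype not in SUPPORTED_DTYPES:
        raise ValueError(f"Unsupported Value dtype: {dtype!r}")
    return SUPPORTED_DTYPES[dtype]
```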
https://api.github.com/repos/huggingface/datasets/issues/1920 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1920/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1920/comments | https://api.github.com/repos/huggingface/datasets/issues/1920/events | https://github.com/huggingface/datasets/pull/1920 | 812,628,220 | MDExOlB1bGxSZXF1ZXN0NTc2OTQ5NzI2 | 1,920 | Fix save_to_disk issue | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/... | [] | closed | false | null | [] | null | [
"So I was curious why the issue reported at #1919 wasn't caught in [this test](https://github.com/huggingface/datasets/blob/248104c4bdb2e01c036b7578867199191fbff181/tests/test_arrow_dataset.py#L209), so I did some digging.\r\nI tried to save to a temporary directory (just like in the test), like this:\r\n```python\... | 1,613,830,959,000 | 1,613,989,811,000 | 1,613,989,811,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1920",
"html_url": "https://github.com/huggingface/datasets/pull/1920",
"diff_url": "https://github.com/huggingface/datasets/pull/1920.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1920.patch"
} | Fixes #1919
| https://api.github.com/repos/huggingface/datasets/issues/1920/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1919 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1919/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1919/comments | https://api.github.com/repos/huggingface/datasets/issues/1919/events | https://github.com/huggingface/datasets/issues/1919 | 812,626,872 | MDU6SXNzdWU4MTI2MjY4NzI= | 1,919 | Failure to save with save_to_disk | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/... | [] | closed | false | null | [] | null | [
"Hi thanks for reporting and for proposing a fix :)\r\n\r\nI just merged a fix, feel free to try it from the master branch !",
"Closing since this has been fixed by #1923"
] | 1,613,830,690,000 | 1,614,793,227,000 | 1,614,793,227,000 | CONTRIBUTOR | null | null | When I try to save a dataset locally using the `save_to_disk` method I get the error:
```bash
FileNotFoundError: [Errno 2] No such file or directory: '/content/squad/train/squad-train.arrow'
```
To replicate:
1. Install `datasets` from master
2. Run this code:
```python
from datasets import load... | https://api.github.com/repos/huggingface/datasets/issues/1919/timeline | null | false |
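The snippet above is truncated in this dump; a plausible completion, guessed from the `/content/squad/train` path in the traceback (not necessarily the reporter's exact code), would be:

```python
# A guess at the truncated repro, inferred from the traceback path above.
from datasets import load_dataset

squad = load_dataset("squad")
squad["train"].save_to_disk("/content/squad/train")  # raised FileNotFoundError before #1923
```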
https://api.github.com/repos/huggingface/datasets/issues/1918 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1918/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1918/comments | https://api.github.com/repos/huggingface/datasets/issues/1918/events | https://github.com/huggingface/datasets/pull/1918 | 812,541,510 | MDExOlB1bGxSZXF1ZXN0NTc2ODg2OTQ0 | 1,918 | Fix QA4MRE download URLs | {
"login": "M-Salti",
"id": 9285264,
"node_id": "MDQ6VXNlcjkyODUyNjQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/9285264?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/M-Salti",
"html_url": "https://github.com/M-Salti",
"followers_url": "https://api.github.com/users/M-Salti/... | [] | closed | false | null | [] | null | [] | 1,613,806,337,000 | 1,614,000,906,000 | 1,614,000,906,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1918",
"html_url": "https://github.com/huggingface/datasets/pull/1918",
"diff_url": "https://github.com/huggingface/datasets/pull/1918.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1918.patch"
} | The URLs in the `dataset_infos` and `README` are correct, only the ones in the download script needed updating. | https://api.github.com/repos/huggingface/datasets/issues/1918/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1917 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1917/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1917/comments | https://api.github.com/repos/huggingface/datasets/issues/1917/events | https://github.com/huggingface/datasets/issues/1917 | 812,390,178 | MDU6SXNzdWU4MTIzOTAxNzg= | 1,917 | UnicodeDecodeError: windows 10 machine | {
"login": "yosiasz",
"id": 900951,
"node_id": "MDQ6VXNlcjkwMDk1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/900951?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yosiasz",
"html_url": "https://github.com/yosiasz",
"followers_url": "https://api.github.com/users/yosiasz/fo... | [] | closed | false | null | [] | null | [
"upgraded to php 3.9.2 and it works!"
] | 1,613,772,785,000 | 1,613,774,471,000 | 1,613,774,428,000 | NONE | null | null | Windows 10
Python 3.6.8
when running
```
import datasets
oscar_am = datasets.load_dataset("oscar", "unshuffled_deduplicated_am")
print(oscar_am["train"][0])
```
I get the following error
```
file "C:\PYTHON\3.6.8\lib\encodings\cp1252.py", line 23, in decode
return codecs.charmap_decode(input,self.er... | https://api.github.com/repos/huggingface/datasets/issues/1917/timeline | null | false |
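A hedged workaround sketch for this class of error (the reporter's actual fix, per the comment above, was upgrading Python):

```python
# Hedged workaround sketch for cp1252 decode errors on Windows.
# Python 3.7+ has UTF-8 mode, which overrides the cp1252 default:
#   python -X utf8 your_script.py
# or, in cmd.exe, before launching Python:
#   set PYTHONUTF8=1
```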
https://api.github.com/repos/huggingface/datasets/issues/1916 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1916/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1916/comments | https://api.github.com/repos/huggingface/datasets/issues/1916/events | https://github.com/huggingface/datasets/pull/1916 | 812,291,984 | MDExOlB1bGxSZXF1ZXN0NTc2NjgwNjY5 | 1,916 | Remove unused py_utils objects | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [
"Hmmm this one broke master. I'm fixing it.\r\n\r\nMaybe because your branch was outdated ?",
"Sorry @lhoestq, I forgot to update the imports... :/",
"It's fine, the CI should have caught this tbh. Not sure why it did't fail"
] | 1,613,764,285,000 | 1,614,005,816,000 | 1,614,000,769,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1916",
"html_url": "https://github.com/huggingface/datasets/pull/1916",
"diff_url": "https://github.com/huggingface/datasets/pull/1916.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1916.patch"
} | Remove unused/unnecessary py_utils functions/classes. | https://api.github.com/repos/huggingface/datasets/issues/1916/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1915 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1915/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1915/comments | https://api.github.com/repos/huggingface/datasets/issues/1915/events | https://github.com/huggingface/datasets/issues/1915 | 812,229,654 | MDU6SXNzdWU4MTIyMjk2NTQ= | 1,915 | Unable to download `wiki_dpr` | {
"login": "nitarakad",
"id": 18504534,
"node_id": "MDQ6VXNlcjE4NTA0NTM0",
"avatar_url": "https://avatars.githubusercontent.com/u/18504534?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/nitarakad",
"html_url": "https://github.com/nitarakad",
"followers_url": "https://api.github.com/users/... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Thanks for reporting ! This is a bug. For now feel free to set `ignore_verifications=False` in `load_dataset`.\r\nI'm working on a fix",
"I just merged a fix :)\r\n\r\nWe'll do a patch release soon. In the meantime feel free to try it from the master branch\r\nThanks again for reporting !",
"Closing since this... | 1,613,758,292,000 | 1,614,793,248,000 | 1,614,793,248,000 | NONE | null | null | I am trying to download the `wiki_dpr` dataset. Specifically, I want to download `psgs_w100.multiset.no_index` with no embeddings/no index. In order to do so, I ran:
`curr_dataset = load_dataset("wiki_dpr", embeddings_name="multiset", index_name="no_index")`
However, I got the following error:
`datasets.utils.i... | https://api.github.com/repos/huggingface/datasets/issues/1915/timeline | null | false |
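Based on the workaround discussed in the comments, a hedged sketch of the interim call (note that in the `datasets` API it is `ignore_verifications=True` that skips checksum/split-size verification):

```python
# Hedged sketch of the interim workaround: skip verification so the stale
# expected-checksum metadata doesn't block the download.
from datasets import load_dataset

curr_dataset = load_dataset(
    "wiki_dpr",
    embeddings_name="multiset",
    index_name="no_index",
    ignore_verifications=True,  # skips checksum/split-size checks
)
```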
https://api.github.com/repos/huggingface/datasets/issues/1914 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1914/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1914/comments | https://api.github.com/repos/huggingface/datasets/issues/1914/events | https://github.com/huggingface/datasets/pull/1914 | 812,149,201 | MDExOlB1bGxSZXF1ZXN0NTc2NTYyNTkz | 1,914 | Fix logging imports and make all datasets use library logger | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,613,751,154,000 | 1,613,936,883,000 | 1,613,936,883,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1914",
"html_url": "https://github.com/huggingface/datasets/pull/1914",
"diff_url": "https://github.com/huggingface/datasets/pull/1914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1914.patch"
} | Fix library relative logging imports and make all datasets use library logger. | https://api.github.com/repos/huggingface/datasets/issues/1914/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1913 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1913/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1913/comments | https://api.github.com/repos/huggingface/datasets/issues/1913/events | https://github.com/huggingface/datasets/pull/1913 | 812,127,307 | MDExOlB1bGxSZXF1ZXN0NTc2NTQ0NjQw | 1,913 | Add keep_linebreaks parameter to text loader | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Just so I understand how it can be used in practice, do you have an example showing how to load a text dataset with this option?",
"Sure ! Here is an example:\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"text\", keep_linebreaks=True, data_files=...)\r\n```\r\n\r\nI'll update the docume... | 1,613,749,425,000 | 1,613,759,772,000 | 1,613,759,771,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1913",
"html_url": "https://github.com/huggingface/datasets/pull/1913",
"diff_url": "https://github.com/huggingface/datasets/pull/1913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1913.patch"
} | As asked in #870 and https://github.com/huggingface/transformers/issues/10269 there should be a parameter to keep the linebreaks when loading a text dataset.
cc @sgugger @jncasey | https://api.github.com/repos/huggingface/datasets/issues/1913/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1912 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1912/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1912/comments | https://api.github.com/repos/huggingface/datasets/issues/1912/events | https://github.com/huggingface/datasets/pull/1912 | 812,034,140 | MDExOlB1bGxSZXF1ZXN0NTc2NDY2ODQx | 1,912 | Update: WMT - use mirror links | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"So much better - thank you for doing that, @lhoestq!",
"Also fixed the `uncorpus` urls for wmt19 ru-en and zh-en for https://github.com/huggingface/datasets/issues/1893",
"Thanks!\r\nCan this be merged sooner? \r\nI manually update it and it works well."
] | 1,613,742,154,000 | 1,614,174,293,000 | 1,614,174,293,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1912",
"html_url": "https://github.com/huggingface/datasets/pull/1912",
"diff_url": "https://github.com/huggingface/datasets/pull/1912.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1912.patch"
} | As asked in #1892 I created mirrors of the data hosted on statmt.org and updated the wmt scripts.
Now downloading the wmt datasets is blazing fast :)
cc @stas00 @patrickvonplaten | https://api.github.com/repos/huggingface/datasets/issues/1912/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1911 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1911/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1911/comments | https://api.github.com/repos/huggingface/datasets/issues/1911/events | https://github.com/huggingface/datasets/issues/1911 | 812,009,956 | MDU6SXNzdWU4MTIwMDk5NTY= | 1,911 | Saving processed dataset running infinitely | {
"login": "ayubSubhaniya",
"id": 20911334,
"node_id": "MDQ6VXNlcjIwOTExMzM0",
"avatar_url": "https://avatars.githubusercontent.com/u/20911334?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ayubSubhaniya",
"html_url": "https://github.com/ayubSubhaniya",
"followers_url": "https://api.githu... | [] | open | false | null | [] | null | [
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"am suspicious of this thing? what's the purpose of this? pickling and unplickling\r\n`self = pickle.loads(pickle.dumps(self))`\r\n\r\n```\r\n def save_to_disk(self, dataset_path: str, fs=None):\r\n \"\"\"\r\n Save... | 1,613,740,159,000 | 1,614,065,684,000 | null | NONE | null | null | I have a text dataset of size 220M.
For pre-processing, I need to tokenize this and filter out rows with overly long sequences.
My tokenization took roughly 3hrs. I used map() with batch size 1024 and multi-process with 96 processes.
filter() function was way too slow, so I used a hack to use pyarrow filter table func... | https://api.github.com/repos/huggingface/datasets/issues/1911/timeline | null | false |
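The hack itself is truncated above; a self-contained sketch of filtering at the pyarrow level (an assumed shape, not the reporter's code) looks like:

```python
# Self-contained sketch of a pyarrow-level filter on sequence length.
import pyarrow as pa
import pyarrow.compute as pc

table = pa.table({"input_ids": [[1, 2, 3], list(range(600)), [4]]})
mask = pc.less_equal(pc.list_value_length(table["input_ids"]), 512)
filtered = table.filter(mask)  # avoids the slower row-by-row filter()
print(filtered.num_rows)  # 2
```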
https://api.github.com/repos/huggingface/datasets/issues/1910 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1910/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1910/comments | https://api.github.com/repos/huggingface/datasets/issues/1910/events | https://github.com/huggingface/datasets/pull/1910 | 811,697,108 | MDExOlB1bGxSZXF1ZXN0NTc2MTg0MDQ3 | 1,910 | Adding CoNLLpp dataset. | {
"login": "ZihanWangKi",
"id": 21319243,
"node_id": "MDQ6VXNlcjIxMzE5MjQz",
"avatar_url": "https://avatars.githubusercontent.com/u/21319243?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ZihanWangKi",
"html_url": "https://github.com/ZihanWangKi",
"followers_url": "https://api.github.com/... | [] | closed | false | null | [] | null | [
"It looks like this PR now includes changes to many other files than the ones for CoNLLpp.\r\n\r\nTo fix that feel free to create another branch and another PR.\r\n\r\nThis was probably caused by a git rebase. You can avoid this issue by using git merge if you've already pushed your branch."
] | 1,613,711,550,000 | 1,614,895,367,000 | 1,614,895,367,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1910",
"html_url": "https://github.com/huggingface/datasets/pull/1910",
"diff_url": "https://github.com/huggingface/datasets/pull/1910.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1910.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/1910/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1907 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1907/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1907/comments | https://api.github.com/repos/huggingface/datasets/issues/1907/events | https://github.com/huggingface/datasets/issues/1907 | 811,520,569 | MDU6SXNzdWU4MTE1MjA1Njk= | 1,907 | DBPedia14 Dataset Checksum bug? | {
"login": "francisco-perez-sorrosal",
"id": 918006,
"node_id": "MDQ6VXNlcjkxODAwNg==",
"avatar_url": "https://avatars.githubusercontent.com/u/918006?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/francisco-perez-sorrosal",
"html_url": "https://github.com/francisco-perez-sorrosal",
"follo... | [] | closed | false | null | [] | null | [
"Hi ! :)\r\n\r\nThis looks like the same issue as https://github.com/huggingface/datasets/issues/1856 \r\nBasically google drive has quota issues that makes it inconvenient for downloading files.\r\n\r\nIf the quota of a file is exceeded, you have to wait 24h for the quota to reset (which is painful).\r\n\r\nThe er... | 1,613,687,148,000 | 1,614,036,125,000 | 1,614,036,124,000 | NONE | null | null | Hi there!!!
I've been successfully using the DBPedia dataset (https://huggingface.co/datasets/dbpedia_14) with my codebase for the last couple of weeks, but in the last couple of days I've started getting this error:
```
Traceback (most recent call last):
File "./conditional_classification/basic_pipeline.py", line 178, i... | https://api.github.com/repos/huggingface/datasets/issues/1907/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1906 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1906/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1906/comments | https://api.github.com/repos/huggingface/datasets/issues/1906/events | https://github.com/huggingface/datasets/issues/1906 | 811,405,274 | MDU6SXNzdWU4MTE0MDUyNzQ= | 1,906 | Feature Request: Support for Pandas `Categorical` | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 2067400324,
"node_id": "MDU6... | open | false | null | [] | null | [
"We already have a ClassLabel type that does this kind of mapping between the label ids (integers) and actual label values (strings).\r\n\r\nI wonder if actually we should use the DictionaryType from Arrow and the Categorical type from pandas for the `datasets` ClassLabel feature type.\r\nCurrently ClassLabel corre... | 1,613,677,565,000 | 1,614,091,130,000 | null | CONTRIBUTOR | null | null | ```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.Series(["a", "b", "c", "a"], dtype="category"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws NotImplementedError
# TODO(thom) this will need access to the dictionary as well (for labels). I.e. to the py_... | https://api.github.com/repos/huggingface/datasets/issues/1906/timeline | null | false |
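Until `Categorical` is supported directly, a hedged workaround sketch using the `ClassLabel` feature type mentioned in the comments:

```python
# Hedged workaround sketch: represent a pandas Categorical with ClassLabel.
import pandas as pd
from datasets import ClassLabel, Dataset, Features

s = pd.Series(["a", "b", "c", "a"], dtype="category")
features = Features({"col": ClassLabel(names=list(s.cat.categories))})
ds = Dataset.from_dict({"col": s.cat.codes.tolist()}, features=features)
print(ds.features["col"].int2str(ds[0]["col"]))  # "a"
```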
https://api.github.com/repos/huggingface/datasets/issues/1905 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1905/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1905/comments | https://api.github.com/repos/huggingface/datasets/issues/1905/events | https://github.com/huggingface/datasets/pull/1905 | 811,384,174 | MDExOlB1bGxSZXF1ZXN0NTc1OTIxMDk1 | 1,905 | Standardizing datasets.dtypes | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"Also - I took a stab at updating the docs, but I'm not sure how to actually check the outputs to see if it's formatted properly."
] | 1,613,675,731,000 | 1,613,858,490,000 | 1,613,858,490,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1905",
"html_url": "https://github.com/huggingface/datasets/pull/1905",
"diff_url": "https://github.com/huggingface/datasets/pull/1905.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1905.patch"
} | This PR was further branched off of jdy-str-to-pyarrow-parsing, so it depends on https://github.com/huggingface/datasets/pull/1900 going first for the diff to be up-to-date (I'm not sure if there's a way for me to use jdy-str-to-pyarrow-parsing as a base branch while having it appear in the pull requests here).
This... | https://api.github.com/repos/huggingface/datasets/issues/1905/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1904 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1904/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1904/comments | https://api.github.com/repos/huggingface/datasets/issues/1904/events | https://github.com/huggingface/datasets/pull/1904 | 811,260,904 | MDExOlB1bGxSZXF1ZXN0NTc1ODE4MjA0 | 1,904 | Fix to_pandas for boolean ArrayXD | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Thanks!"
] | 1,613,665,846,000 | 1,613,668,203,000 | 1,613,668,201,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1904",
"html_url": "https://github.com/huggingface/datasets/pull/1904",
"diff_url": "https://github.com/huggingface/datasets/pull/1904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1904.patch"
} | As noticed in #1887, the conversion of a dataset with a boolean ArrayXD feature type fails because the underlying ListArray conversion to numpy requires `zero_copy_only=False`.
Zero copy is available for all primitive types except booleans;
see https://arrow.apache.org/docs/python/generated/pyarrow.Array.html#pya... | https://api.github.com/repos/huggingface/datasets/issues/1904/timeline | null | true |
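The pyarrow behavior behind the fix can be demonstrated directly:

```python
# Boolean arrays are bit-packed in Arrow, so converting to numpy always
# needs a copy; zero_copy_only=False must be passed explicitly.
import pyarrow as pa

arr = pa.array([True, False, True])
# arr.to_numpy()  # raises ArrowInvalid (zero_copy_only defaults to True)
print(arr.to_numpy(zero_copy_only=False))  # [ True False  True]
```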
https://api.github.com/repos/huggingface/datasets/issues/1903 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1903/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1903/comments | https://api.github.com/repos/huggingface/datasets/issues/1903/events | https://github.com/huggingface/datasets/pull/1903 | 811,145,531 | MDExOlB1bGxSZXF1ZXN0NTc1NzIwOTk2 | 1,903 | Initial commit for the addition of TIMIT dataset | {
"login": "vrindaprabhu",
"id": 16264631,
"node_id": "MDQ6VXNlcjE2MjY0NjMx",
"avatar_url": "https://avatars.githubusercontent.com/u/16264631?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/vrindaprabhu",
"html_url": "https://github.com/vrindaprabhu",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"@patrickvonplaten could you please review and help me close this PR?",
"@lhoestq Thank you so much for your comments and for patiently reviewing the code. Have _hopefully_ included all the suggested changes. Let me know if any more changes are required.\r\n\r\nSorry the code had lots of silly errors from my sid... | 1,613,658,192,000 | 1,614,591,552,000 | 1,614,591,552,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1903",
"html_url": "https://github.com/huggingface/datasets/pull/1903",
"diff_url": "https://github.com/huggingface/datasets/pull/1903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1903.patch"
} | The points below need to be addressed:
- Creation of dummy dataset is failing
- Need to check on the data representation
- License is not creative commons. Copyright: Portions © 1993 Trustees of the University of Pennsylvania
Also the links (_except the download_) point to the ami corpus! ;-)
@patrickvonplaten ... | https://api.github.com/repos/huggingface/datasets/issues/1903/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1902 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1902/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1902/comments | https://api.github.com/repos/huggingface/datasets/issues/1902/events | https://github.com/huggingface/datasets/pull/1902 | 810,931,171 | MDExOlB1bGxSZXF1ZXN0NTc1NTQwMDM1 | 1,902 | Fix setimes_2 wmt urls | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,641,346,000 | 1,613,642,141,000 | 1,613,642,141,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1902",
"html_url": "https://github.com/huggingface/datasets/pull/1902",
"diff_url": "https://github.com/huggingface/datasets/pull/1902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1902.patch"
} | Continuation of #1901
Some other URLs were missing https. | https://api.github.com/repos/huggingface/datasets/issues/1902/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1901 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1901/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1901/comments | https://api.github.com/repos/huggingface/datasets/issues/1901/events | https://github.com/huggingface/datasets/pull/1901 | 810,845,605 | MDExOlB1bGxSZXF1ZXN0NTc1NDY5MDUy | 1,901 | Fix OPUS dataset download errors | {
"login": "YangWang92",
"id": 3883941,
"node_id": "MDQ6VXNlcjM4ODM5NDE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3883941?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/YangWang92",
"html_url": "https://github.com/YangWang92",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [] | 1,613,633,981,000 | 1,613,660,840,000 | 1,613,641,161,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1901",
"html_url": "https://github.com/huggingface/datasets/pull/1901",
"diff_url": "https://github.com/huggingface/datasets/pull/1901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1901.patch"
} | Replace http to https.
https://github.com/huggingface/datasets/issues/854
https://discuss.huggingface.co/t/cannot-download-wmt16/2081
| https://api.github.com/repos/huggingface/datasets/issues/1901/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1900 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1900/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1900/comments | https://api.github.com/repos/huggingface/datasets/issues/1900/events | https://github.com/huggingface/datasets/pull/1900 | 810,512,488 | MDExOlB1bGxSZXF1ZXN0NTc1MTkxNTc3 | 1,900 | Issue #1895: Bugfix for string_to_arrow timestamp[ns] support | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"OK! Thank you for the review - I will follow up with a separate PR for the comments here (https://github.com/huggingface/datasets/pull/1900#discussion_r578319725)!"
] | 1,613,593,564,000 | 1,613,759,231,000 | 1,613,759,231,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1900",
"html_url": "https://github.com/huggingface/datasets/pull/1900",
"diff_url": "https://github.com/huggingface/datasets/pull/1900.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1900.patch"
} | Should resolve https://github.com/huggingface/datasets/issues/1895
The main part of this PR adds additional parsing in `string_to_arrow` to convert the timestamp dtypes that result from `str(pa_type)` back into the pa.DataType TimestampType.
While adding unit-testing, I noticed that support for the double/float t... | https://api.github.com/repos/huggingface/datasets/issues/1900/timeline | null | true |
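A minimal sketch of that parsing (an assumed shape, not the PR's exact code):

```python
# Minimal sketch: recover a pyarrow TimestampType from its str() form,
# e.g. "timestamp[ns]" or "timestamp[us, tz=UTC]", which a plain
# getattr(pa, type_str) lookup cannot handle.
import re
import pyarrow as pa

def string_to_arrow(type_str: str) -> pa.DataType:
    match = re.fullmatch(r"timestamp\[(\w+)(?:,\s*tz=(.+))?\]", type_str)
    if match:
        unit, tz = match.groups()
        return pa.timestamp(unit, tz=tz)
    return getattr(pa, type_str)()  # simple types like "int64", "string"

print(string_to_arrow("timestamp[ns]"))          # timestamp[ns]
print(string_to_arrow("timestamp[us, tz=UTC]"))  # timestamp[us, tz=UTC]
```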
https://api.github.com/repos/huggingface/datasets/issues/1899 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1899/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1899/comments | https://api.github.com/repos/huggingface/datasets/issues/1899/events | https://github.com/huggingface/datasets/pull/1899 | 810,308,332 | MDExOlB1bGxSZXF1ZXN0NTc1MDIxMjc4 | 1,899 | Fix: ALT - fix duplicated examples in alt-parallel | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,577,236,000 | 1,613,582,449,000 | 1,613,582,449,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1899",
"html_url": "https://github.com/huggingface/datasets/pull/1899",
"diff_url": "https://github.com/huggingface/datasets/pull/1899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1899.patch"
} | As noticed in #1898 by @10-zin, the examples of the `alt-parallel` configurations all have the same values for the `translation` field.
This was due to a bad copy of a Python dict.
This PR fixes that. | https://api.github.com/repos/huggingface/datasets/issues/1899/timeline | null | true |
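A toy illustration of that bug class (not the actual ALT script):

```python
# Mutating one shared dict makes every generated example identical.
examples = []
translation = {}
for sentence in ["first", "second"]:
    translation["en"] = sentence  # same dict object on every iteration
    examples.append({"translation": translation})
print(examples)  # both rows show "second"

# The fix: copy the dict per example.
examples = [{"translation": {"en": s}} for s in ["first", "second"]]
```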
https://api.github.com/repos/huggingface/datasets/issues/1898 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1898/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1898/comments | https://api.github.com/repos/huggingface/datasets/issues/1898/events | https://github.com/huggingface/datasets/issues/1898 | 810,157,251 | MDU6SXNzdWU4MTAxNTcyNTE= | 1,898 | ALT dataset has repeating instances in all splits | {
"login": "10-zin",
"id": 33179372,
"node_id": "MDQ6VXNlcjMzMTc5Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/33179372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/10-zin",
"html_url": "https://github.com/10-zin",
"followers_url": "https://api.github.com/users/10-zin/fo... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Thanks for reporting. This looks like a very bad issue. I'm looking into it",
"I just merged a fix, we'll do a patch release soon. Thanks again for reporting, and sorry for the inconvenience.\r\nIn the meantime you can load `ALT` using `datasets` from the master branch",
"Thanks!!! works perfectly in the blead... | 1,613,566,302,000 | 1,613,715,526,000 | 1,613,715,526,000 | NONE | null | null | The [ALT](https://huggingface.co/datasets/alt) dataset has all the same instances within each split :/
Seemed like a great dataset for some experiments I wanted to carry out, especially since it's medium-sized and has all splits.
Would be great if this could be fixed :)
Added a snapshot of the contents from `exp... | https://api.github.com/repos/huggingface/datasets/issues/1898/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1897 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1897/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1897/comments | https://api.github.com/repos/huggingface/datasets/issues/1897/events | https://github.com/huggingface/datasets/pull/1897 | 810,113,263 | MDExOlB1bGxSZXF1ZXN0NTc0ODU3MTIy | 1,897 | Fix PandasArrayExtensionArray conversion to native type | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,562,504,000 | 1,613,567,716,000 | 1,613,567,715,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1897",
"html_url": "https://github.com/huggingface/datasets/pull/1897",
"diff_url": "https://github.com/huggingface/datasets/pull/1897.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1897.patch"
} | To make the conversion to csv work in #1887 , we need PandasArrayExtensionArray used for multidimensional numpy arrays to be converted to pandas native types.
However, previously pandas.core.internals.ExtensionBlock.to_native_types would fail with a PandasExtensionArray because
1. the PandasExtensionArray.isna metho... | https://api.github.com/repos/huggingface/datasets/issues/1897/timeline | null | true |
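A minimal sketch (assumed, not the library's actual code) of the two behaviours the PR description points at — `isna` returning one boolean per row and `copy` preserving the extension type:

```python
import numpy as np

class PandasArrayExtensionArraySketch:
    """Illustrative stand-in for the real PandasArrayExtensionArray."""

    def __init__(self, data: np.ndarray):
        self._data = data  # shape: (num_rows, *inner_dims)

    def isna(self) -> np.ndarray:
        # One boolean per top-level row, even for multidimensional entries,
        # which is what pandas' to_native_types expects.
        if self._data.dtype.kind == "f":
            inner_axes = tuple(range(1, self._data.ndim))
            return np.isnan(self._data).all(axis=inner_axes)
        return np.zeros(len(self._data), dtype=bool)

    def copy(self) -> "PandasArrayExtensionArraySketch":
        # Return the same wrapper type instead of a bare numpy array.
        return PandasArrayExtensionArraySketch(self._data.copy())
```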
https://api.github.com/repos/huggingface/datasets/issues/1895 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1895/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1895/comments | https://api.github.com/repos/huggingface/datasets/issues/1895/events | https://github.com/huggingface/datasets/issues/1895 | 809,630,271 | MDU6SXNzdWU4MDk2MzAyNzE= | 1,895 | Bug Report: timestamp[ns] not recognized | {
"login": "justin-yan",
"id": 7731709,
"node_id": "MDQ6VXNlcjc3MzE3MDk=",
"avatar_url": "https://avatars.githubusercontent.com/u/7731709?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/justin-yan",
"html_url": "https://github.com/justin-yan",
"followers_url": "https://api.github.com/users... | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\n\r\nYou're right, `string_to_arrow` should be able to take `\"timestamp[ns]\"` as input and return the right pyarrow timestamp type.\r\nFeel free to suggest a fix for `string_to_arrow` and open a PR if you want to contribute ! This would be very appreciated :)\r\n\r\nTo give you more cont... | 1,613,507,884,000 | 1,613,759,231,000 | 1,613,759,231,000 | CONTRIBUTOR | null | null | Repro:
```
from datasets import Dataset
import pandas as pd
import pyarrow
df = pd.DataFrame(pd.date_range("2018-01-01", periods=3, freq="H"))
pyarrow.Table.from_pandas(df)
Dataset.from_pandas(df)
# Throws ValueError: Neither timestamp[ns] nor timestamp[ns]_ seems to be a pyarrow data type.
```
The fact... | https://api.github.com/repos/huggingface/datasets/issues/1895/timeline | null | false |
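A sketch of the parsing `string_to_arrow` would need — the helper name is assumed, and timezone-qualified strings such as `timestamp[ns, tz=UTC]` are left out for brevity:

```python
import pyarrow as pa

def parse_timestamp_dtype(type_string: str) -> pa.DataType:
    # "timestamp[ns]" is a parameterized type, so it can't be resolved by a
    # plain attribute lookup on pyarrow; the unit must be extracted first.
    if type_string.startswith("timestamp[") and type_string.endswith("]"):
        unit = type_string[len("timestamp["):-1]  # e.g. "s", "ms", "us", "ns"
        return pa.timestamp(unit)
    raise ValueError(f"Unsupported type string: {type_string}")

assert parse_timestamp_dtype("timestamp[ns]") == pa.timestamp("ns")
```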
https://api.github.com/repos/huggingface/datasets/issues/1894 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1894/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1894/comments | https://api.github.com/repos/huggingface/datasets/issues/1894/events | https://github.com/huggingface/datasets/issues/1894 | 809,609,654 | MDU6SXNzdWU4MDk2MDk2NTQ= | 1,894 | benchmarking against MMapIndexedDataset | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/ss... | [] | open | false | null | [] | null | [
"Hi sam !\r\nIndeed we can expect the performances to be very close since both MMapIndexedDataset and the `datasets` implem use memory mapping. With memory mapping what determines the I/O performance is the speed of your hard drive/SSD.\r\n\r\nIn terms of performance we're pretty close to the optimal speed for read... | 1,613,505,898,000 | 1,613,587,948,000 | null | MEMBER | null | null | I am trying to benchmark my datasets based implementation against fairseq's [`MMapIndexedDataset`](https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L365) and finding that, according to psrecord, my `datasets` implem uses about 3% more CPU memory and runs 1% slower for `wikitext103` (~1GB o... | https://api.github.com/repos/huggingface/datasets/issues/1894/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1893 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1893/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1893/comments | https://api.github.com/repos/huggingface/datasets/issues/1893/events | https://github.com/huggingface/datasets/issues/1893 | 809,556,503 | MDU6SXNzdWU4MDk1NTY1MDM= | 1,893 | wmt19 is broken | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"This was also mentioned in https://github.com/huggingface/datasets/issues/488 \r\n\r\nThe bucket where is data was stored seems to be unavailable now. Maybe we can change the URL to the ones in https://conferences.unite.un.org/uncorpus/en/downloadoverview ?",
"Closing since this has been fixed by #1912"
] | 1,613,500,798,000 | 1,614,793,322,000 | 1,614,793,322,000 | CONTRIBUTOR | null | null | 1. Check which lang pairs we have: `--dataset_name wmt19`:
Please pick one among the available configs: ['cs-en', 'de-en', 'fi-en', 'gu-en', 'kk-en', 'lt-en', 'ru-en', 'zh-en', 'fr-de']
2. OK, let's pick `ru-en`:
`--dataset_name wmt19 --dataset_config "ru-en"`
no cookies:
```
Traceback (most recent c... | https://api.github.com/repos/huggingface/datasets/issues/1893/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1892 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1892/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1892/comments | https://api.github.com/repos/huggingface/datasets/issues/1892/events | https://github.com/huggingface/datasets/issues/1892 | 809,554,174 | MDU6SXNzdWU4MDk1NTQxNzQ= | 1,892 | request to mirror wmt datasets, as they are really slow to download | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"Yes that would be awesome. Not only the download speeds are awful, but also some files are missing.\r\nWe list all the URLs in the datasets/wmt19/wmt_utils.py so we can make a script to download them all and host on S3.\r\nAlso I think most of the materials are under the CC BY-NC-SA 3.0 license (must double check)... | 1,613,500,571,000 | 1,616,673,203,000 | 1,616,673,203,000 | CONTRIBUTOR | null | null | Would it be possible to mirror the wmt data files under hf? Some of them take hours to download and not because of the local speed. They are all quite small datasets, just extremely slow to download.
Thank you! | https://api.github.com/repos/huggingface/datasets/issues/1892/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1891 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1891/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1891/comments | https://api.github.com/repos/huggingface/datasets/issues/1891/events | https://github.com/huggingface/datasets/issues/1891 | 809,550,001 | MDU6SXNzdWU4MDk1NTAwMDE= | 1,891 | suggestion to improve a missing dataset error | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [] | open | false | null | [] | null | [
"This is the current error thrown for missing datasets:\r\n```\r\nFileNotFoundError: Couldn't find a dataset script at C:\\Users\\Mario\\Desktop\\projects\\datasets\\missing_dataset\\missing_dataset.py or any data file in the same directory. Couldn't find 'missing_dataset' on the Hugging Face Hub either: FileNotFou... | 1,613,500,153,000 | 1,613,500,214,000 | null | CONTRIBUTOR | null | null | I was using `--dataset_name wmt19` all was good. Then thought perhaps wmt20 is out, so I tried to use `--dataset_name wmt20`, got 3 different errors (1 repeated twice), none telling me the real issue - that `wmt20` isn't in the `datasets`:
```
True, predict_with_generate=True)
Traceback (most recent call last):
... | https://api.github.com/repos/huggingface/datasets/issues/1891/timeline | null | false |
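Until the error message is improved, one defensive pattern (an assumed workaround, not something proposed in the issue) is to check the name against the listing first:

```python
from datasets import list_datasets

name = "wmt20"
available = list_datasets()
if name not in available:
    # Surface the real problem directly instead of a confusing stack of errors.
    suggestions = [d for d in available if d.startswith("wmt")]
    raise ValueError(f"{name!r} is not a known dataset; similar names: {suggestions}")
```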
https://api.github.com/repos/huggingface/datasets/issues/1890 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1890/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1890/comments | https://api.github.com/repos/huggingface/datasets/issues/1890/events | https://github.com/huggingface/datasets/pull/1890 | 809,395,586 | MDExOlB1bGxSZXF1ZXN0NTc0MjY0OTMx | 1,890 | Reformat dataset cards section titles | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,488,307,000 | 1,613,488,354,000 | 1,613,488,353,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1890",
"html_url": "https://github.com/huggingface/datasets/pull/1890",
"diff_url": "https://github.com/huggingface/datasets/pull/1890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1890.patch"
} | Titles are formatted like [Foo](#foo) instead of just Foo | https://api.github.com/repos/huggingface/datasets/issues/1890/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1889 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1889/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1889/comments | https://api.github.com/repos/huggingface/datasets/issues/1889/events | https://github.com/huggingface/datasets/pull/1889 | 809,276,015 | MDExOlB1bGxSZXF1ZXN0NTc0MTY1NDAz | 1,889 | Implement to_dict and to_pandas for Dataset | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Next step is going to add these two in the documentation ^^"
] | 1,613,479,099,000 | 1,613,673,757,000 | 1,613,673,754,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1889",
"html_url": "https://github.com/huggingface/datasets/pull/1889",
"diff_url": "https://github.com/huggingface/datasets/pull/1889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1889.patch"
} | With options to return a generator or the full dataset | https://api.github.com/repos/huggingface/datasets/issues/1889/timeline | null | true |
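A usage sketch of the two methods as described — parameter names are taken from the PR summary, so treat them as indicative:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c"], "label": [0, 1, 0]})

# Full conversion in memory.
df = ds.to_pandas()
d = ds.to_dict()

# Batched conversion yields results incrementally instead of materializing
# the whole table at once.
for batch_df in ds.to_pandas(batch_size=2, batched=True):
    print(len(batch_df))
```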
https://api.github.com/repos/huggingface/datasets/issues/1888 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1888/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1888/comments | https://api.github.com/repos/huggingface/datasets/issues/1888/events | https://github.com/huggingface/datasets/pull/1888 | 809,241,123 | MDExOlB1bGxSZXF1ZXN0NTc0MTM2MDU4 | 1,888 | Docs for adding new column on formatted dataset | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Close #1872"
] | 1,613,475,900,000 | 1,617,112,863,000 | 1,613,476,737,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1888",
"html_url": "https://github.com/huggingface/datasets/pull/1888",
"diff_url": "https://github.com/huggingface/datasets/pull/1888.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1888.patch"
} | As mentioned in #1872, we should add to the documentation how the format gets updated when new columns are added
Close #1872 | https://api.github.com/repos/huggingface/datasets/issues/1888/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1887 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1887/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1887/comments | https://api.github.com/repos/huggingface/datasets/issues/1887/events | https://github.com/huggingface/datasets/pull/1887 | 809,229,809 | MDExOlB1bGxSZXF1ZXN0NTc0MTI2NTMy | 1,887 | Implement to_csv for Dataset | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"@lhoestq I stumbled upon an interesting failure when adding tests for CSV serialization of `ArrayXD` features (see the failing unit tests in the CI)\r\n\r\nIt's due to the fact that booleans cannot be converted from arrow format to numpy without copy: https://arrow.apache.org/docs/python/generated/pyarrow.Array.ht... | 1,613,474,849,000 | 1,613,727,719,000 | 1,613,727,719,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1887",
"html_url": "https://github.com/huggingface/datasets/pull/1887",
"diff_url": "https://github.com/huggingface/datasets/pull/1887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1887.patch"
} | cc @thomwolf
`to_csv` supports passing either a file path or a *binary* file object
The writing is batched to avoid loading the whole table in memory | https://api.github.com/repos/huggingface/datasets/issues/1887/timeline | null | true |
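A usage sketch matching the PR summary — note the file object must be opened in binary mode:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["a", "b"], "label": [0, 1]})

# Either a file path ...
ds.to_csv("data.csv")

# ... or a *binary* file object.
with open("data2.csv", "wb") as f:
    ds.to_csv(f)
```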
https://api.github.com/repos/huggingface/datasets/issues/1886 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1886/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1886/comments | https://api.github.com/repos/huggingface/datasets/issues/1886/events | https://github.com/huggingface/datasets/pull/1886 | 809,221,885 | MDExOlB1bGxSZXF1ZXN0NTc0MTE5ODcz | 1,886 | Common voice | {
"login": "BirgerMoell",
"id": 1704131,
"node_id": "MDQ6VXNlcjE3MDQxMzE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1704131?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BirgerMoell",
"html_url": "https://github.com/BirgerMoell",
"followers_url": "https://api.github.com/us... | [] | closed | false | null | [] | null | [
"Does it make sense to make the domains as the different languages?\r\nA problem is that you need to download the datasets from the browser.\r\nOne idea would be to either contact Mozilla regarding API access to the dataset or make use of a headless browser for downloading the datasets (might be hard since we have ... | 1,613,474,170,000 | 1,615,315,891,000 | 1,615,315,891,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1886",
"html_url": "https://github.com/huggingface/datasets/pull/1886",
"diff_url": "https://github.com/huggingface/datasets/pull/1886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1886.patch"
} | Started filling out information about the dataset and a dataset card.
To do:
- Create tagging file
- Update the common_voice.py file with more information | https://api.github.com/repos/huggingface/datasets/issues/1886/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1885 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1885/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1885/comments | https://api.github.com/repos/huggingface/datasets/issues/1885/events | https://github.com/huggingface/datasets/pull/1885 | 808,881,501 | MDExOlB1bGxSZXF1ZXN0NTczODQyNzcz | 1,885 | add missing info on how to add large files | {
"login": "stas00",
"id": 10676103,
"node_id": "MDQ6VXNlcjEwNjc2MTAz",
"avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stas00",
"html_url": "https://github.com/stas00",
"followers_url": "https://api.github.com/users/stas00/fo... | [] | closed | false | null | [] | null | [] | 1,613,432,799,000 | 1,613,492,539,000 | 1,613,475,852,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1885",
"html_url": "https://github.com/huggingface/datasets/pull/1885",
"diff_url": "https://github.com/huggingface/datasets/pull/1885.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1885.patch"
} | Thanks to @lhoestq's instructions I was able to add data files to a custom dataset repo. This PR is attempting to tell others how to do the same if they need to.
@lhoestq | https://api.github.com/repos/huggingface/datasets/issues/1885/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1884 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1884/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1884/comments | https://api.github.com/repos/huggingface/datasets/issues/1884/events | https://github.com/huggingface/datasets/pull/1884 | 808,755,894 | MDExOlB1bGxSZXF1ZXN0NTczNzQwNzI5 | 1,884 | dtype fix when using numpy arrays | {
"login": "bhavitvyamalik",
"id": 19718818,
"node_id": "MDQ6VXNlcjE5NzE4ODE4",
"avatar_url": "https://avatars.githubusercontent.com/u/19718818?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/bhavitvyamalik",
"html_url": "https://github.com/bhavitvyamalik",
"followers_url": "https://api.gi... | [] | closed | false | null | [] | null | [] | 1,613,415,325,000 | 1,627,642,878,000 | 1,627,642,878,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1884",
"html_url": "https://github.com/huggingface/datasets/pull/1884",
"diff_url": "https://github.com/huggingface/datasets/pull/1884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1884.patch"
} | As discussed in #625, this fix lets the user preserve the dtype of a numpy array when it is converted to a pyarrow array; the dtype was previously lost in the numpy array -> list -> pyarrow array conversion. | https://api.github.com/repos/huggingface/datasets/issues/1884/timeline | null | true |
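The dtype loss in question can be seen directly in pyarrow — going through a Python list discards the numpy dtype, while converting the array directly keeps it:

```python
import numpy as np
import pyarrow as pa

arr = np.array([1, 2, 3], dtype=np.int16)

via_list = pa.array(arr.tolist())  # dtype information lost: inferred as int64
direct = pa.array(arr)             # dtype preserved: int16
print(via_list.type, direct.type)
```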
https://api.github.com/repos/huggingface/datasets/issues/1883 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1883/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1883/comments | https://api.github.com/repos/huggingface/datasets/issues/1883/events | https://github.com/huggingface/datasets/pull/1883 | 808,750,623 | MDExOlB1bGxSZXF1ZXN0NTczNzM2NTIz | 1,883 | Add not-in-place implementations for several dataset transforms | {
"login": "SBrandeis",
"id": 33657802,
"node_id": "MDQ6VXNlcjMzNjU3ODAy",
"avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SBrandeis",
"html_url": "https://github.com/SBrandeis",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"@lhoestq I am not sure how to test `dictionary_encode_column` (in-place version was not tested before)",
"I can take a look at dictionary_encode_column tomorrow.\r\nAlthough it's likely that it doesn't work then. It was added at the beginning of the lib and never tested nor used afaik.",
"Now let's update the ... | 1,613,414,666,000 | 1,614,178,489,000 | 1,614,178,406,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1883",
"html_url": "https://github.com/huggingface/datasets/pull/1883",
"diff_url": "https://github.com/huggingface/datasets/pull/1883.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1883.patch"
} | Should we deprecate in-place versions of such methods? | https://api.github.com/repos/huggingface/datasets/issues/1883/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1882 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1882/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1882/comments | https://api.github.com/repos/huggingface/datasets/issues/1882/events | https://github.com/huggingface/datasets/pull/1882 | 808,716,576 | MDExOlB1bGxSZXF1ZXN0NTczNzA4OTEw | 1,882 | Create Remote Manager | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | open | false | null | [] | null | [
"@lhoestq I have refactorized the logic. Instead of the previous hierarchy call (local temp file opening -> remote call -> use again temp local file logic but from within the remote caller scope), now it is flattened. Schematically:\r\n```python\r\nwith src.open() as src_file, dst.open() as dst_file:\r\n src_fil... | 1,613,410,584,000 | 1,615,220,110,000 | null | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1882",
"html_url": "https://github.com/huggingface/datasets/pull/1882",
"diff_url": "https://github.com/huggingface/datasets/pull/1882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1882.patch"
} | Refactoring to separate out the concern of managing remote (HTTP/FTP) requests. | https://api.github.com/repos/huggingface/datasets/issues/1882/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1881 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1881/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1881/comments | https://api.github.com/repos/huggingface/datasets/issues/1881/events | https://github.com/huggingface/datasets/pull/1881 | 808,578,200 | MDExOlB1bGxSZXF1ZXN0NTczNTk1Nzkw | 1,881 | `list_datasets()` returns a list of strings, not objects | {
"login": "pminervini",
"id": 227357,
"node_id": "MDQ6VXNlcjIyNzM1Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/227357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pminervini",
"html_url": "https://github.com/pminervini",
"followers_url": "https://api.github.com/users/p... | [] | closed | false | null | [] | null | [] | 1,613,398,815,000 | 1,613,401,789,000 | 1,613,401,788,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1881",
"html_url": "https://github.com/huggingface/datasets/pull/1881",
"diff_url": "https://github.com/huggingface/datasets/pull/1881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1881.patch"
} | Here and there in the docs there is still stuff like this:
```python
>>> datasets_list = list_datasets()
>>> print(', '.join(dataset.id for dataset in datasets_list))
```
However, my understanding is that `list_datasets()` returns a list of strings rather than a list of objects. | https://api.github.com/repos/huggingface/datasets/issues/1881/timeline | null | true |
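The corrected form of the snippet, under that reading of the API:

```python
from datasets import list_datasets

# The entries are plain strings, so no `.id` attribute is involved.
datasets_list = list_datasets()
print(', '.join(datasets_list[:5]))
```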
https://api.github.com/repos/huggingface/datasets/issues/1880 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1880/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1880/comments | https://api.github.com/repos/huggingface/datasets/issues/1880/events | https://github.com/huggingface/datasets/pull/1880 | 808,563,439 | MDExOlB1bGxSZXF1ZXN0NTczNTgzNjg0 | 1,880 | Update multi_woz_v22 checksums | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,397,618,000 | 1,613,398,699,000 | 1,613,398,698,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1880",
"html_url": "https://github.com/huggingface/datasets/pull/1880",
"diff_url": "https://github.com/huggingface/datasets/pull/1880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1880.patch"
} | As noticed in #1876 the checksums of this dataset are outdated.
I updated them in this PR | https://api.github.com/repos/huggingface/datasets/issues/1880/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1879 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1879/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1879/comments | https://api.github.com/repos/huggingface/datasets/issues/1879/events | https://github.com/huggingface/datasets/pull/1879 | 808,541,442 | MDExOlB1bGxSZXF1ZXN0NTczNTY1NDAx | 1,879 | Replace flatten_nested | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [
"Hi @lhoestq. If you agree to merge this, I will start separating the logic for NestedDataStructure.map ;)"
] | 1,613,395,780,000 | 1,613,759,714,000 | 1,613,759,714,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1879",
"html_url": "https://github.com/huggingface/datasets/pull/1879",
"diff_url": "https://github.com/huggingface/datasets/pull/1879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1879.patch"
} | Replace `flatten_nested` with `NestedDataStructure.flatten`.
This is a first step towards having all NestedDataStructure logic as a separated concern, independent of the caller/user of the data structure.
Eventually, all checks (whether the underlying data is list, dict, etc.) will be only inside this class.
I... | https://api.github.com/repos/huggingface/datasets/issues/1879/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1878 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1878/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1878/comments | https://api.github.com/repos/huggingface/datasets/issues/1878/events | https://github.com/huggingface/datasets/pull/1878 | 808,526,883 | MDExOlB1bGxSZXF1ZXN0NTczNTUyODk3 | 1,878 | Add LJ Speech dataset | {
"login": "anton-l",
"id": 26864830,
"node_id": "MDQ6VXNlcjI2ODY0ODMw",
"avatar_url": "https://avatars.githubusercontent.com/u/26864830?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/anton-l",
"html_url": "https://github.com/anton-l",
"followers_url": "https://api.github.com/users/anton-... | [] | closed | false | null | [] | null | [
"Hey @anton-l,\r\n\r\nThanks a lot for the very clean integration!\r\n\r\n1) I think we should now start having \"automatic-speech-recognition\" as a label in the dataset tagger (@yjernite is it easy to add?). But we can surely add this dataset with the tag you've added and then later change the label to `asr` \r\n... | 1,613,394,642,000 | 1,613,417,981,000 | 1,613,398,689,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1878",
"html_url": "https://github.com/huggingface/datasets/pull/1878",
"diff_url": "https://github.com/huggingface/datasets/pull/1878.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1878.patch"
} | This PR adds the LJ Speech dataset (https://keithito.com/LJ-Speech-Dataset/)
As requested by #1841
The ASR format is based on #1767
There are a couple of quirks that should be addressed:
- I tagged this dataset as `other-other-automatic-speech-recognition` and `other-other-text-to-speech` (as classified by pape... | https://api.github.com/repos/huggingface/datasets/issues/1878/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1877 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1877/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1877/comments | https://api.github.com/repos/huggingface/datasets/issues/1877/events | https://github.com/huggingface/datasets/issues/1877 | 808,462,272 | MDU6SXNzdWU4MDg0NjIyNzI= | 1,877 | Allow concatenation of both in-memory and on-disk datasets | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.git... | null | [
"I started working on this. My idea is to first add the pyarrow Table wrappers InMemoryTable and MemoryMappedTable that both implement what's necessary regarding copy/pickle. Then have another wrapper that takes the concatenation of InMemoryTable/MemoryMappedTable objects.\r\n\r\nWhat's important here is that conca... | 1,613,389,186,000 | 1,616,777,518,000 | 1,616,777,518,000 | MEMBER | null | null | This is a prerequisite for the addition of the `add_item` feature (see #1870).
Currently there is one assumption that we would need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk (using the dataset._data_files).
This assumption is used for pickl... | https://api.github.com/repos/huggingface/datasets/issues/1877/timeline | null | false |
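The target usage, once the assumption is lifted, would look like this sketch (the saved-dataset path is a placeholder):

```python
from datasets import Dataset, concatenate_datasets, load_from_disk

# One dataset living purely in memory ...
in_memory = Dataset.from_dict({"text": ["a brand new example"]})

# ... and one backed by files on disk.
on_disk = load_from_disk("path/to/saved_dataset")

combined = concatenate_datasets([on_disk, in_memory])
```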
https://api.github.com/repos/huggingface/datasets/issues/1876 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1876/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1876/comments | https://api.github.com/repos/huggingface/datasets/issues/1876/events | https://github.com/huggingface/datasets/issues/1876 | 808,025,859 | MDU6SXNzdWU4MDgwMjU4NTk= | 1,876 | load_dataset("multi_woz_v22") NonMatchingChecksumError | {
"login": "Vincent950129",
"id": 5945326,
"node_id": "MDQ6VXNlcjU5NDUzMjY=",
"avatar_url": "https://avatars.githubusercontent.com/u/5945326?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Vincent950129",
"html_url": "https://github.com/Vincent950129",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | [
"Thanks for reporting !\r\nThis is due to the changes made in the data files in the multiwoz repo: https://github.com/budzianowski/multiwoz/pull/59\r\nI'm opening a PR to update the checksums of the data files.",
"I just merged the fix. It will be available in the new release of `datasets` later today.\r\nYou'll ... | 1,613,330,088,000 | 1,628,100,480,000 | 1,628,100,480,000 | NONE | null | null | Hi, it seems that loading the multi_woz_v22 dataset gives a NonMatchingChecksumError.
To reproduce:
`dataset = load_dataset('multi_woz_v22','v2.2_active_only',split='train')`
This will give the following error:
```
raise NonMatchingChecksumError(error_msg + str(bad_urls))
datasets.utils.info_utils.N... | https://api.github.com/repos/huggingface/datasets/issues/1876/timeline | null | false |
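Until the updated checksums ship, verification can be skipped as a stopgap — `ignore_verifications` was the flag at the time (it has since been replaced in newer releases):

```python
from datasets import load_dataset

# Stopgap only: this also disables the split-size checks.
dataset = load_dataset(
    "multi_woz_v22",
    "v2.2_active_only",
    split="train",
    ignore_verifications=True,
)
```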
https://api.github.com/repos/huggingface/datasets/issues/1875 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1875/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1875/comments | https://api.github.com/repos/huggingface/datasets/issues/1875/events | https://github.com/huggingface/datasets/pull/1875 | 807,887,267 | MDExOlB1bGxSZXF1ZXN0NTczMDM2NzE0 | 1,875 | Adding sari metric | {
"login": "ddhruvkr",
"id": 6061911,
"node_id": "MDQ6VXNlcjYwNjE5MTE=",
"avatar_url": "https://avatars.githubusercontent.com/u/6061911?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ddhruvkr",
"html_url": "https://github.com/ddhruvkr",
"followers_url": "https://api.github.com/users/ddhru... | [] | closed | false | null | [] | null | [] | 1,613,277,515,000 | 1,613,577,387,000 | 1,613,577,387,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1875",
"html_url": "https://github.com/huggingface/datasets/pull/1875",
"diff_url": "https://github.com/huggingface/datasets/pull/1875.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1875.patch"
} | Adding SARI metric that is used in evaluation of text simplification. This is required as part of the GEM benchmark. | https://api.github.com/repos/huggingface/datasets/issues/1875/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1874 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1874/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1874/comments | https://api.github.com/repos/huggingface/datasets/issues/1874/events | https://github.com/huggingface/datasets/pull/1874 | 807,786,094 | MDExOlB1bGxSZXF1ZXN0NTcyOTYzMjAy | 1,874 | Adding Europarl Bilingual dataset | {
"login": "lucadiliello",
"id": 23355969,
"node_id": "MDQ6VXNlcjIzMzU1OTY5",
"avatar_url": "https://avatars.githubusercontent.com/u/23355969?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lucadiliello",
"html_url": "https://github.com/lucadiliello",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [
"is there a way to check errors without subscribing to CircleCI? Because they want access to private repositories when logging.",
"I think you need to be logged in to check the errors unfortunately. Feel free to create an account with bitbucket maybe if you don't want it to access your private github repos",
"I... | 1,613,235,724,000 | 1,614,854,302,000 | 1,614,854,302,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1874",
"html_url": "https://github.com/huggingface/datasets/pull/1874",
"diff_url": "https://github.com/huggingface/datasets/pull/1874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1874.patch"
} | Implementation of the Europarl bilingual dataset described [here](https://opus.nlpl.eu/Europarl.php).
This dataset allows using every language pair detailed in the original dataset. The loading script also handles the small errors contained in the original dataset (in very rare cases (1 over 10M) there are some ke... | https://api.github.com/repos/huggingface/datasets/issues/1874/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1873 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1873/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1873/comments | https://api.github.com/repos/huggingface/datasets/issues/1873/events | https://github.com/huggingface/datasets/pull/1873 | 807,750,745 | MDExOlB1bGxSZXF1ZXN0NTcyOTM4MTYy | 1,873 | add iapp_wiki_qa_squad | {
"login": "cstorm125",
"id": 15519308,
"node_id": "MDQ6VXNlcjE1NTE5MzA4",
"avatar_url": "https://avatars.githubusercontent.com/u/15519308?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cstorm125",
"html_url": "https://github.com/cstorm125",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [] | 1,613,223,267,000 | 1,613,485,318,000 | 1,613,485,318,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1873",
"html_url": "https://github.com/huggingface/datasets/pull/1873",
"diff_url": "https://github.com/huggingface/datasets/pull/1873.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1873.patch"
} | `iapp_wiki_qa_squad` is an extractive question answering dataset from Thai Wikipedia articles.
It is adapted from [the original iapp-wiki-qa-dataset](https://github.com/iapp-technology/iapp-wiki-qa-dataset)
to [SQuAD](https://rajpurkar.github.io/SQuAD-explorer/) format, resulting in
5761/742/739 questions from 1529/... | https://api.github.com/repos/huggingface/datasets/issues/1873/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1872 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1872/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1872/comments | https://api.github.com/repos/huggingface/datasets/issues/1872/events | https://github.com/huggingface/datasets/issues/1872 | 807,711,935 | MDU6SXNzdWU4MDc3MTE5MzU= | 1,872 | Adding a new column to the dataset after set_format was called | {
"login": "villmow",
"id": 2743060,
"node_id": "MDQ6VXNlcjI3NDMwNjA=",
"avatar_url": "https://avatars.githubusercontent.com/u/2743060?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/villmow",
"html_url": "https://github.com/villmow",
"followers_url": "https://api.github.com/users/villmow/... | [] | closed | false | null | [] | null | [
"Hi ! Indeed if you add a column to a formatted dataset, then the new dataset gets a new formatting in which:\r\n```\r\nnew formatted columns = (all columns - previously unformatted columns)\r\n```\r\nTherefore the new column is going to be formatted using the `torch` formatting.\r\n\r\nIf you want your new column ... | 1,613,207,675,000 | 1,617,112,905,000 | 1,617,112,905,000 | NONE | null | null | Hi,
thanks for the nice library. I'm in the process of creating a custom dataset, which has a mix of tensors and lists of strings. I stumbled upon an error and want to know if it's a problem on my side.
I load some lists of strings and integers, then call `data.set_format("torch", columns=["some_integer_column1"... | https://api.github.com/repos/huggingface/datasets/issues/1872/timeline | null | false |
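A minimal sketch of the behaviour discussed in the thread — a column added after `set_format` inherits the current format, so the format has to be re-declared to keep new string columns out of the torch conversion:

```python
from datasets import Dataset

ds = Dataset.from_dict({"ints": [1, 2], "strings": ["a", "b"]})
ds.set_format("torch", columns=["ints"])

# The new column inherits the torch format, which is what triggers the error
# for string columns.
ds = ds.map(lambda example: {"more_strings": example["strings"] + "!"})

# Re-declare the format: tensors for "ints", plain Python objects for the rest.
ds.set_format("torch", columns=["ints"], output_all_columns=True)
print(ds[0])
```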
https://api.github.com/repos/huggingface/datasets/issues/1871 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1871/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1871/comments | https://api.github.com/repos/huggingface/datasets/issues/1871/events | https://github.com/huggingface/datasets/pull/1871 | 807,697,671 | MDExOlB1bGxSZXF1ZXN0NTcyODk5Nzgz | 1,871 | Add newspop dataset | {
"login": "frankier",
"id": 299380,
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankier",
"html_url": "https://github.com/frankier",
"followers_url": "https://api.github.com/users/frankie... | [] | closed | false | null | [] | null | [
"Thanks for the changes :)\r\nmerging"
] | 1,613,201,483,000 | 1,615,198,365,000 | 1,615,198,365,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1871",
"html_url": "https://github.com/huggingface/datasets/pull/1871",
"diff_url": "https://github.com/huggingface/datasets/pull/1871.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1871.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/1871/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1870 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1870/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1870/comments | https://api.github.com/repos/huggingface/datasets/issues/1870/events | https://github.com/huggingface/datasets/pull/1870 | 807,306,564 | MDExOlB1bGxSZXF1ZXN0NTcyNTc4Mjc4 | 1,870 | Implement Dataset add_item | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | null | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/milestones/3",
"html_url": "https://github.com/huggingface/datasets/milestone/3",
"labels_url": "https://api.github.com/repos/huggingface/datasets/milestones/3/labels",
"id": 6644287,
"node_id": "MDk6TWlsZXN0b25lNjY0NDI4Nw==",
"number": 3,
"title... | [
"Thanks @lhoestq for your remarks. Yes, I agree there are still many issues to be tackled... This PR is just a starting point, so that we can discuss how Dataset should be generalized.",
"Sure ! I opened an issue #1877 so we can discuss this specific aspect :)",
"I am going to implement this consolidation step ... | 1,613,142,226,000 | 1,619,172,091,000 | 1,619,172,091,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1870",
"html_url": "https://github.com/huggingface/datasets/pull/1870",
"diff_url": "https://github.com/huggingface/datasets/pull/1870.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1870.patch"
} | Implement `Dataset.add_item`.
Close #1854. | https://api.github.com/repos/huggingface/datasets/issues/1870/timeline | null | true |
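A usage sketch based on the PR description (signature as introduced here):

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["first"]})

# `add_item` returns a new dataset with the extra row appended.
ds = ds.add_item({"text": "second"})
print(len(ds))  # 2
```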
https://api.github.com/repos/huggingface/datasets/issues/1869 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1869/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1869/comments | https://api.github.com/repos/huggingface/datasets/issues/1869/events | https://github.com/huggingface/datasets/pull/1869 | 807,159,835 | MDExOlB1bGxSZXF1ZXN0NTcyNDU0NTMy | 1,869 | Remove outdated commands in favor of huggingface-cli | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,129,290,000 | 1,613,146,389,000 | 1,613,146,388,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1869",
"html_url": "https://github.com/huggingface/datasets/pull/1869",
"diff_url": "https://github.com/huggingface/datasets/pull/1869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1869.patch"
} | Removing the old user commands since `huggingface_hub` is going to be used instead.
cc @julien-c | https://api.github.com/repos/huggingface/datasets/issues/1869/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1868 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1868/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1868/comments | https://api.github.com/repos/huggingface/datasets/issues/1868/events | https://github.com/huggingface/datasets/pull/1868 | 807,138,159 | MDExOlB1bGxSZXF1ZXN0NTcyNDM2MjA0 | 1,868 | Update oscar sizes | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,613,127,335,000 | 1,613,127,787,000 | 1,613,127,786,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1868",
"html_url": "https://github.com/huggingface/datasets/pull/1868",
"diff_url": "https://github.com/huggingface/datasets/pull/1868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1868.patch"
} | This commit https://github.com/huggingface/datasets/commit/837a152e4724adc5308e2c4481908c00a8d93383 removed empty lines from the oscar deduplicated datasets. This PR updates the size of each deduplicated dataset to fix possible `NonMatchingSplitsSizesError` errors. cc @cahya-wirawan | https://api.github.com/repos/huggingface/datasets/issues/1868/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1867 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1867/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1867/comments | https://api.github.com/repos/huggingface/datasets/issues/1867/events | https://github.com/huggingface/datasets/issues/1867 | 807,127,181 | MDU6SXNzdWU4MDcxMjcxODE= | 1,867 | ERROR WHEN USING SET_TRANSFORM() | {
"login": "alexvaca0",
"id": 35173563,
"node_id": "MDQ6VXNlcjM1MTczNTYz",
"avatar_url": "https://avatars.githubusercontent.com/u/35173563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/alexvaca0",
"html_url": "https://github.com/alexvaca0",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [
"Hi @alejandrocros it looks like an incompatibility with the current Trainer @sgugger \r\nIndeed currently the Trainer of `transformers` doesn't support a dataset with a transform\r\n\r\nIt looks like it comes from this line: https://github.com/huggingface/transformers/blob/f51188cbe74195c14c5b3e2e8f10c2f435f9751a/... | 1,613,126,311,000 | 1,614,607,464,000 | 1,614,168,043,000 | NONE | null | null | Hi, I'm trying to use dataset.set_transform(encode) as @lhoestq told me in this issue: https://github.com/huggingface/datasets/issues/1825#issuecomment-774202797
However, when I try to use Trainer from transformers with such a dataset, it throws an error:
```
TypeError: __init__() missing 1 required positional arg... | https://api.github.com/repos/huggingface/datasets/issues/1867/timeline | null | false |
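For reference, a minimal `set_transform` usage sketch (the incompatibility itself was on the `transformers` Trainer side, per the thread):

```python
from datasets import Dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = Dataset.from_dict({"text": ["hello world", "goodbye world"]})

def encode(batch):
    return tokenizer(batch["text"], padding="max_length", truncation=True, max_length=16)

# The transform runs lazily on each access instead of precomputing columns.
ds.set_transform(encode)
print(ds[0]["input_ids"][:5])
```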
https://api.github.com/repos/huggingface/datasets/issues/1866 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1866/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1866/comments | https://api.github.com/repos/huggingface/datasets/issues/1866/events | https://github.com/huggingface/datasets/pull/1866 | 807,017,816 | MDExOlB1bGxSZXF1ZXN0NTcyMzM3NDQ1 | 1,866 | Add dataset for Financial PhraseBank | {
"login": "frankier",
"id": 299380,
"node_id": "MDQ6VXNlcjI5OTM4MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/299380?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/frankier",
"html_url": "https://github.com/frankier",
"followers_url": "https://api.github.com/users/frankie... | [] | closed | false | null | [] | null | [
"Thanks for the feedback. All accepted and metadata regenerated."
] | 1,613,115,056,000 | 1,613,571,756,000 | 1,613,571,756,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1866",
"html_url": "https://github.com/huggingface/datasets/pull/1866",
"diff_url": "https://github.com/huggingface/datasets/pull/1866.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1866.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/1866/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1865 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1865/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1865/comments | https://api.github.com/repos/huggingface/datasets/issues/1865/events | https://github.com/huggingface/datasets/pull/1865 | 806,388,290 | MDExOlB1bGxSZXF1ZXN0NTcxODE2ODI2 | 1,865 | Updated OPUS Open Subtitles Dataset with metadata information | {
"login": "Valahaar",
"id": 19476123,
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Valahaar",
"html_url": "https://github.com/Valahaar",
"followers_url": "https://api.github.com/users/Val... | [] | closed | false | null | [] | null | [
"Hi !\r\nAbout the problems you mentioned:\r\n- Saving the infos is only done for the configurations inside the BUILDER_CONFIGS. Otherwise you would need to run the scripts on ALL language pairs, which is not what we want.\r\n- Moreover when you're on your branch, please specify the path to your local version of th... | 1,613,049,986,000 | 1,613,738,289,000 | 1,613,149,184,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1865",
"html_url": "https://github.com/huggingface/datasets/pull/1865",
"diff_url": "https://github.com/huggingface/datasets/pull/1865.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1865.patch"
} | Close #1844
Problems:
- I ran `python datasets-cli test datasets/open_subtitles --save_infos --all_configs`, hence the change in `dataset_infos.json`, but it appears that the metadata features have not been added for all pairs. Any idea why that might be?
- Possibly related to the above, I tried doing `pip uninst... | https://api.github.com/repos/huggingface/datasets/issues/1865/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1864 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1864/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1864/comments | https://api.github.com/repos/huggingface/datasets/issues/1864/events | https://github.com/huggingface/datasets/issues/1864 | 806,172,843 | MDU6SXNzdWU4MDYxNzI4NDM= | 1,864 | Add Winogender Schemas | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | closed | false | null | [] | null | [
"Nevermind, this one is already available on the hub under the name `'wino_bias'`: https://huggingface.co/datasets/wino_bias"
] | 1,613,031,518,000 | 1,613,031,591,000 | 1,613,031,591,000 | NONE | null | null | ## Adding a Dataset
- **Name:** Winogender Schemas
- **Description:** Winogender Schemas (inspired by Winograd Schemas) are minimal pairs of sentences that differ only by the gender of one pronoun in the sentence, designed to test for the presence of gender bias in automated coreference resolution systems.
- **Paper... | https://api.github.com/repos/huggingface/datasets/issues/1864/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1863 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1863/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1863/comments | https://api.github.com/repos/huggingface/datasets/issues/1863/events | https://github.com/huggingface/datasets/issues/1863 | 806,171,311 | MDU6SXNzdWU4MDYxNzEzMTE= | 1,863 | Add WikiCREM | {
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/use... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] | open | false | null | [] | null | [
"Hi @NielsRogge I would like to work on this dataset.\r\n\r\nThanks!",
"Hi @udapy, are you working on this?"
] | 1,613,031,360,000 | 1,615,102,033,000 | null | NONE | null | null | ## Adding a Dataset
- **Name:** WikiCREM
- **Description:** A large unsupervised corpus for coreference resolution.
- **Paper:** https://arxiv.org/abs/1905.06290
- **Github repo:**: https://github.com/vid-koci/bert-commonsense
- **Data:** https://ora.ox.ac.uk/objects/uuid:c83e94bb-7584-41a1-aef9-85b0e764d9e3
- **... | https://api.github.com/repos/huggingface/datasets/issues/1863/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1862 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1862/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1862/comments | https://api.github.com/repos/huggingface/datasets/issues/1862/events | https://github.com/huggingface/datasets/pull/1862 | 805,722,293 | MDExOlB1bGxSZXF1ZXN0NTcxMjc2ODAx | 1,862 | Fix writing GPU Faiss index | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,612,978,323,000 | 1,612,981,068,000 | 1,612,981,067,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1862",
"html_url": "https://github.com/huggingface/datasets/pull/1862",
"diff_url": "https://github.com/huggingface/datasets/pull/1862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1862.patch"
} | As reported by @corticalstack, there is currently an error when we try to save a faiss index on GPU.
I fixed that by checking the index's `getDevice()` method before calling `index_gpu_to_cpu`
Close #1859 | https://api.github.com/repos/huggingface/datasets/issues/1862/timeline | null | true |
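A minimal sketch of the pattern this fix describes, assuming the standard `faiss` Python bindings (the helper name is illustrative; the PR's actual code may differ):

```python
import faiss

def save_faiss_index(index, path):
    # Only CPU indexes can be serialized with write_index, so if the index
    # exposes getDevice() (i.e. it lives on a GPU), move it to CPU first.
    if hasattr(index, "getDevice"):
        index = faiss.index_gpu_to_cpu(index)
    faiss.write_index(index, path)
```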
https://api.github.com/repos/huggingface/datasets/issues/1861 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1861/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1861/comments | https://api.github.com/repos/huggingface/datasets/issues/1861/events | https://github.com/huggingface/datasets/pull/1861 | 805,631,215 | MDExOlB1bGxSZXF1ZXN0NTcxMjAwNjA1 | 1,861 | Fix Limit url | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,612,971,896,000 | 1,612,973,700,000 | 1,612,973,699,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1861",
"html_url": "https://github.com/huggingface/datasets/pull/1861",
"diff_url": "https://github.com/huggingface/datasets/pull/1861.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1861.patch"
} | The test.json file of the Literal-Motion-in-Text (LiMiT) dataset was removed recently on the master branch of the repo at https://github.com/ilmgut/limit_dataset
This PR uses the previous commit sha to download the file instead, as suggested by @Paethon
Close #1836 | https://api.github.com/repos/huggingface/datasets/issues/1861/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1860 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1860/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1860/comments | https://api.github.com/repos/huggingface/datasets/issues/1860/events | https://github.com/huggingface/datasets/pull/1860 | 805,510,037 | MDExOlB1bGxSZXF1ZXN0NTcxMDk4OTIz | 1,860 | Add loading from the Datasets Hub + add relative paths in download manager | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"I just added the steps to share a dataset on the datasets hub. It's highly inspired by the steps to share a model in the `transformers` doc.\r\n\r\nMoreover once the new huggingface_hub is released we can update the version in the setup.py. We also need to update the command to create a dataset repo in the documen... | 1,612,963,451,000 | 1,613,157,210,000 | 1,613,157,209,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1860",
"html_url": "https://github.com/huggingface/datasets/pull/1860",
"diff_url": "https://github.com/huggingface/datasets/pull/1860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1860.patch"
} | With the new Datasets Hub on huggingface.co it's now possible to have a dataset repo with your own script and data.
For example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main contains one script and two json files.
You can load it using
```python
from datasets import load_dataset
d = load_data... | https://api.github.com/repos/huggingface/datasets/issues/1860/timeline | null | true |
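The snippet above is cut off by the dataset viewer; a plausible completion, using the repo id taken from the URL in the description (an assumption, not the verbatim original):

```python
from datasets import load_dataset

# Repo id comes from https://huggingface.co/datasets/lhoestq/custom_squad
d = load_dataset("lhoestq/custom_squad")
```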
https://api.github.com/repos/huggingface/datasets/issues/1859 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1859/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1859/comments | https://api.github.com/repos/huggingface/datasets/issues/1859/events | https://github.com/huggingface/datasets/issues/1859 | 805,479,025 | MDU6SXNzdWU4MDU0NzkwMjU= | 1,859 | Error "in void don't know how to serialize this type of index" when saving index to disk when device=0 (GPU) | {
"login": "corticalstack",
"id": 3995321,
"node_id": "MDQ6VXNlcjM5OTUzMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/3995321?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/corticalstack",
"html_url": "https://github.com/corticalstack",
"followers_url": "https://api.github.... | [] | closed | false | null | [] | null | [
"Hi @corticalstack ! Thanks for reporting. Indeed in the recent versions of Faiss we must use `getDevice` to check if the index in on GPU.\r\n\r\nI'm opening a PR",
"I fixed this issue. It should work fine now.\r\nFeel free to try it out by installing `datasets` from source.\r\nOtherwise you can wait for the next... | 1,612,960,860,000 | 1,612,981,932,000 | 1,612,981,067,000 | NONE | null | null | Error serializing faiss index. Error as follows:
`Error in void faiss::write_index(const faiss::Index*, faiss::IOWriter*) at /home/conda/feedstock_root/build_artifacts/faiss-split_1612472484670/work/faiss/impl/index_write.cpp:453: don't know how to serialize this type of index`
Note:
`torch.cuda.is_availabl... | https://api.github.com/repos/huggingface/datasets/issues/1859/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1858 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1858/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1858/comments | https://api.github.com/repos/huggingface/datasets/issues/1858/events | https://github.com/huggingface/datasets/pull/1858 | 805,477,774 | MDExOlB1bGxSZXF1ZXN0NTcxMDcxNzIx | 1,858 | Clean config getenvs | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [] | 1,612,960,754,000 | 1,612,972,350,000 | 1,612,972,349,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1858",
"html_url": "https://github.com/huggingface/datasets/pull/1858",
"diff_url": "https://github.com/huggingface/datasets/pull/1858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1858.patch"
} | Following #1848
Remove double getenv calls and fix one issue with rarfile
cc @albertvillanova | https://api.github.com/repos/huggingface/datasets/issues/1858/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1857 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1857/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1857/comments | https://api.github.com/repos/huggingface/datasets/issues/1857/events | https://github.com/huggingface/datasets/issues/1857 | 805,391,107 | MDU6SXNzdWU4MDUzOTExMDc= | 1,857 | Unable to upload "community provided" dataset - 400 Client Error | {
"login": "mwrzalik",
"id": 1376337,
"node_id": "MDQ6VXNlcjEzNzYzMzc=",
"avatar_url": "https://avatars.githubusercontent.com/u/1376337?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mwrzalik",
"html_url": "https://github.com/mwrzalik",
"followers_url": "https://api.github.com/users/mwrza... | [] | closed | false | null | [] | null | [
"Hi ! We're in the process of switching the community datasets to git repos, exactly like what we're doing for models.\r\nYou can find an example here:\r\nhttps://huggingface.co/datasets/lhoestq/custom_squad/tree/main\r\n\r\nWe'll update the CLI in the coming days and do a new release :)\r\n\r\nAlso cc @julien-c ma... | 1,612,953,541,000 | 1,627,967,173,000 | 1,627,967,173,000 | CONTRIBUTOR | null | null | Hi,
I'm trying to upload a dataset as described [here](https://huggingface.co/docs/datasets/v1.2.0/share_dataset.html#sharing-a-community-provided-dataset). This is what happens:
```
$ datasets-cli login
$ datasets-cli upload_dataset my_dataset
About to upload file /path/to/my_dataset/dataset_infos.json to S3... | https://api.github.com/repos/huggingface/datasets/issues/1857/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1856 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1856/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1856/comments | https://api.github.com/repos/huggingface/datasets/issues/1856/events | https://github.com/huggingface/datasets/issues/1856 | 805,360,200 | MDU6SXNzdWU4MDUzNjAyMDA= | 1,856 | load_dataset("amazon_polarity") NonMatchingChecksumError | {
"login": "yanxi0830",
"id": 19946372,
"node_id": "MDQ6VXNlcjE5OTQ2Mzcy",
"avatar_url": "https://avatars.githubusercontent.com/u/19946372?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yanxi0830",
"html_url": "https://github.com/yanxi0830",
"followers_url": "https://api.github.com/users/... | [] | open | false | null | [] | null | [
"Hi ! This issue may be related to #996 \r\nThis comes probably from the Quota Exceeded error from Google Drive.\r\nCan you try again tomorrow and see if you still have the error ?\r\n\r\nOn my side I didn't get any error today with `load_dataset(\"amazon_polarity\")`",
"+1 encountering this issue as well",
"@l... | 1,612,951,256,000 | 1,626,872,391,000 | null | NONE | null | null | Hi, it seems that loading the amazon_polarity dataset gives a NonMatchingChecksumError.
To reproduce:
```
load_dataset("amazon_polarity")
```
This will give the following error:
```
---------------------------------------------------------------------------
NonMatchingChecksumError Traceback ... | https://api.github.com/repos/huggingface/datasets/issues/1856/timeline | null | false |
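Not part of the original report, but a commonly suggested workaround when the checksum mismatch comes from a transient Google Drive quota error (the flags below assume the `datasets` 1.x API of the time):

```python
from datasets import load_dataset

# Force a fresh download and skip checksum verification (1.x-era flags).
ds = load_dataset(
    "amazon_polarity",
    download_mode="force_redownload",
    ignore_verifications=True,
)
```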
https://api.github.com/repos/huggingface/datasets/issues/1855 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1855/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1855/comments | https://api.github.com/repos/huggingface/datasets/issues/1855/events | https://github.com/huggingface/datasets/pull/1855 | 805,256,579 | MDExOlB1bGxSZXF1ZXN0NTcwODkzNDY3 | 1,855 | Minor fix in the docs | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,612,942,063,000 | 1,612,960,389,000 | 1,612,960,389,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1855",
"html_url": "https://github.com/huggingface/datasets/pull/1855",
"diff_url": "https://github.com/huggingface/datasets/pull/1855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1855.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/1855/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1854 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1854/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1854/comments | https://api.github.com/repos/huggingface/datasets/issues/1854/events | https://github.com/huggingface/datasets/issues/1854 | 805,204,397 | MDU6SXNzdWU4MDUyMDQzOTc= | 1,854 | Feature Request: Dataset.add_item | {
"login": "sshleifer",
"id": 6045025,
"node_id": "MDQ6VXNlcjYwNDUwMjU=",
"avatar_url": "https://avatars.githubusercontent.com/u/6045025?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sshleifer",
"html_url": "https://github.com/sshleifer",
"followers_url": "https://api.github.com/users/ss... | [
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Hi @sshleifer.\r\n\r\nI am not sure of understanding the need of the `add_item` approach...\r\n\r\nBy just reading your \"Desired API\" section, I would say you could (nearly) get it with a 1-column Dataset:\r\n```python\r\ndata = {\"input_ids\": [np.array([4,4,2]), np.array([8,6,5,5,2]), np.array([3,3,31,5])]}\r\... | 1,612,937,160,000 | 1,619,172,090,000 | 1,619,172,090,000 | MEMBER | null | null | I'm trying to integrate `huggingface/datasets` functionality into `fairseq`, which requires (afaict) being able to build a dataset through an `add_item` method, such as https://github.com/pytorch/fairseq/blob/master/fairseq/data/indexed_dataset.py#L318, as opposed to loading all the text into arrow, and then `dataset.m... | https://api.github.com/repos/huggingface/datasets/issues/1854/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1853 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1853/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1853/comments | https://api.github.com/repos/huggingface/datasets/issues/1853/events | https://github.com/huggingface/datasets/pull/1853 | 804,791,166 | MDExOlB1bGxSZXF1ZXN0NTcwNTAwMjc4 | 1,853 | Configure library root logger at the module level | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,612,894,272,000 | 1,612,960,354,000 | 1,612,960,354,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1853",
"html_url": "https://github.com/huggingface/datasets/pull/1853",
"diff_url": "https://github.com/huggingface/datasets/pull/1853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1853.patch"
} | Configure library root logger at the datasets.logging module level (singleton-like).
By doing it this way:
- we are sure configuration is done only once: module-level code is only run once
- no need for a global variable
- no need for a threading lock | https://api.github.com/repos/huggingface/datasets/issues/1853/timeline | null | true |
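An illustrative sketch of this singleton-like approach (names are hypothetical, not the actual `datasets.logging` source):

```python
import logging

# Module-level configuration: Python imports a module only once, so this
# runs exactly once (no "already configured" global flag, no lock).
_library_root_logger = logging.getLogger("datasets")
_library_root_logger.setLevel(logging.WARNING)

def get_logger(name: str = "datasets") -> logging.Logger:
    # Child loggers (e.g. "datasets.builder") inherit the configured level.
    return logging.getLogger(name)
```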
https://api.github.com/repos/huggingface/datasets/issues/1852 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1852/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1852/comments | https://api.github.com/repos/huggingface/datasets/issues/1852/events | https://github.com/huggingface/datasets/pull/1852 | 804,633,033 | MDExOlB1bGxSZXF1ZXN0NTcwMzY3NTU1 | 1,852 | Add Arabic Speech Corpus | {
"login": "zaidalyafeai",
"id": 15667714,
"node_id": "MDQ6VXNlcjE1NjY3NzE0",
"avatar_url": "https://avatars.githubusercontent.com/u/15667714?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/zaidalyafeai",
"html_url": "https://github.com/zaidalyafeai",
"followers_url": "https://api.github.c... | [] | closed | false | null | [] | null | [] | 1,612,882,946,000 | 1,613,038,735,000 | 1,613,038,735,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1852",
"html_url": "https://github.com/huggingface/datasets/pull/1852",
"diff_url": "https://github.com/huggingface/datasets/pull/1852.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1852.patch"
} | https://api.github.com/repos/huggingface/datasets/issues/1852/timeline | null | true | |
https://api.github.com/repos/huggingface/datasets/issues/1851 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1851/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1851/comments | https://api.github.com/repos/huggingface/datasets/issues/1851/events | https://github.com/huggingface/datasets/pull/1851 | 804,523,174 | MDExOlB1bGxSZXF1ZXN0NTcwMjc2MTk5 | 1,851 | set bert_score version dependency | {
"login": "pvl",
"id": 3596,
"node_id": "MDQ6VXNlcjM1OTY=",
"avatar_url": "https://avatars.githubusercontent.com/u/3596?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pvl",
"html_url": "https://github.com/pvl",
"followers_url": "https://api.github.com/users/pvl/followers",
"following_u... | [] | closed | false | null | [] | null | [] | 1,612,875,067,000 | 1,612,880,508,000 | 1,612,880,508,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1851",
"html_url": "https://github.com/huggingface/datasets/pull/1851",
"diff_url": "https://github.com/huggingface/datasets/pull/1851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1851.patch"
} | Set the bert_score version in requirements since previous versions of bert_score will fail with datasets (closes #843) | https://api.github.com/repos/huggingface/datasets/issues/1851/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1850 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1850/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1850/comments | https://api.github.com/repos/huggingface/datasets/issues/1850/events | https://github.com/huggingface/datasets/pull/1850 | 804,412,249 | MDExOlB1bGxSZXF1ZXN0NTcwMTg0MDAx | 1,850 | Add cord 19 dataset | {
"login": "ggdupont",
"id": 5583410,
"node_id": "MDQ6VXNlcjU1ODM0MTA=",
"avatar_url": "https://avatars.githubusercontent.com/u/5583410?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ggdupont",
"html_url": "https://github.com/ggdupont",
"followers_url": "https://api.github.com/users/ggdup... | [] | closed | false | null | [] | null | [
"Cleaned-up version of previous PR: https://github.com/huggingface/datasets/pull/1129",
"@lhoestq FYI",
"Before merging I might tweak a little bit the dummy data to avoid having to check if the `document_parses` and `embeddings` directories exist or not. I'll do that later today",
"Looks all good now ! Thanks... | 1,612,866,128,000 | 1,612,883,786,000 | 1,612,883,786,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1850",
"html_url": "https://github.com/huggingface/datasets/pull/1850",
"diff_url": "https://github.com/huggingface/datasets/pull/1850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1850.patch"
} | Initial version only reading the metadata in CSV.
### Checklist:
- [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template
- [x] Fill the _DESCRIPTION and _CITATION variables
- [x] Implement _infos(), _split_generators() and _generate_examples()
- [x] Make sure that the BUILDER_CONFIG... | https://api.github.com/repos/huggingface/datasets/issues/1850/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1849 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1849/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1849/comments | https://api.github.com/repos/huggingface/datasets/issues/1849/events | https://github.com/huggingface/datasets/issues/1849 | 804,292,971 | MDU6SXNzdWU4MDQyOTI5NzE= | 1,849 | Add TIMIT | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | closed | false | null | [] | null | [
"@patrickvonplaten Could you please help me with how the output text has to be represented in the data? TIMIT has Words, Phonemes and texts. Also has lot on info on the speaker and the dialect. Could you please help me? An example of how to arrange it would be super helpful!\r\n\r\n",
"Hey @vrindaprabhu - sure I'... | 1,612,855,781,000 | 1,615,787,977,000 | 1,615,787,977,000 | MEMBER | null | null | ## Adding a Dataset
- **Name:** *TIMIT*
- **Description:** *The TIMIT corpus of read speech has been designed to provide speech data for the acquisition of acoustic-phonetic knowledge and for the development and evaluation of automatic speech recognition systems*
- **Paper:** *Homepage*: http://groups.inf.ed.ac.uk... | https://api.github.com/repos/huggingface/datasets/issues/1849/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1848 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1848/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1848/comments | https://api.github.com/repos/huggingface/datasets/issues/1848/events | https://github.com/huggingface/datasets/pull/1848 | 803,826,506 | MDExOlB1bGxSZXF1ZXN0NTY5Njg5ODU1 | 1,848 | Refactoring: Create config module | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [] | 1,612,809,831,000 | 1,612,960,175,000 | 1,612,960,175,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1848",
"html_url": "https://github.com/huggingface/datasets/pull/1848",
"diff_url": "https://github.com/huggingface/datasets/pull/1848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1848.patch"
} | Refactor configuration settings into their own module.
This could be seen as a Pythonic singleton-like approach. Eventually a config instance class might be created. | https://api.github.com/repos/huggingface/datasets/issues/1848/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1847 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1847/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1847/comments | https://api.github.com/repos/huggingface/datasets/issues/1847/events | https://github.com/huggingface/datasets/pull/1847 | 803,824,694 | MDExOlB1bGxSZXF1ZXN0NTY5Njg4NDY0 | 1,847 | [Metrics] Add word error metric metric | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [] | closed | false | null | [] | null | [
"Feel free to merge once the CI is all green ;)"
] | 1,612,809,675,000 | 1,612,893,201,000 | 1,612,893,201,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1847",
"html_url": "https://github.com/huggingface/datasets/pull/1847",
"diff_url": "https://github.com/huggingface/datasets/pull/1847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1847.patch"
} | This PR adds the word error rate metric to datasets.
WER: https://en.wikipedia.org/wiki/Word_error_rate
for speech recognition. WER is the main metric used in ASR.
`jiwer` seems to be a solid library (see https://github.com/asteroid-team/asteroid/pull/329#discussion_r525158939) | https://api.github.com/repos/huggingface/datasets/issues/1847/timeline | null | true |
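For illustration, this is how WER is typically computed with `jiwer` (a hedged example of the underlying computation, not the PR's exact wrapper):

```python
import jiwer

references = ["the cat sat on the mat"]
predictions = ["the cat sat mat"]

# WER = (substitutions + deletions + insertions) / number of reference words
print(jiwer.wer(references, predictions))  # 2 deletions / 6 words = 0.333...
```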
https://api.github.com/repos/huggingface/datasets/issues/1846 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1846/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1846/comments | https://api.github.com/repos/huggingface/datasets/issues/1846/events | https://github.com/huggingface/datasets/pull/1846 | 803,806,380 | MDExOlB1bGxSZXF1ZXN0NTY5NjczMzcy | 1,846 | Make DownloadManager downloaded/extracted paths accessible | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [] | closed | false | null | [] | null | [
"First I was thinking of the dict, which makes sense for .download, mapping URL to downloaded path. However does this make sense for .extract, mapping the downloaded path to the extracted path? I ask this because the user did not chose the downloaded path, so this is completely unknown for them...",
"There could ... | 1,612,808,082,000 | 1,614,262,218,000 | 1,614,262,218,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1846",
"html_url": "https://github.com/huggingface/datasets/pull/1846",
"diff_url": "https://github.com/huggingface/datasets/pull/1846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1846.patch"
} | Make the file paths downloaded/extracted by DownloadManager accessible.
Close #1831.
The approach:
- I set these paths as DownloadManager attributes: these are DownloadManager's concerns
- To access these from DatasetBuilder, I set the DownloadManager instance as a DatasetBuilder attribute: object composition | https://api.github.com/repos/huggingface/datasets/issues/1846/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1845 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1845/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1845/comments | https://api.github.com/repos/huggingface/datasets/issues/1845/events | https://github.com/huggingface/datasets/pull/1845 | 803,714,493 | MDExOlB1bGxSZXF1ZXN0NTY5NTk2MTIz | 1,845 | Enable logging propagation and remove logging handler | {
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoest... | [] | closed | false | null | [] | null | [
"Thank you @lhoestq. This logging configuration makes more sense to me.\r\n\r\nOnce propagation is allowed, the end-user can customize logging behavior and add custom handlers to the proper top logger in the hierarchy.\r\n\r\nAnd I also agree with following the best practices and removing any custom handlers:\r\n- ... | 1,612,801,333,000 | 1,612,880,558,000 | 1,612,880,557,000 | MEMBER | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1845",
"html_url": "https://github.com/huggingface/datasets/pull/1845",
"diff_url": "https://github.com/huggingface/datasets/pull/1845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1845.patch"
} | We used to have logging propagation disabled because of this issue: https://github.com/tensorflow/tensorflow/issues/26691
But since it's now fixed, we should re-enable it. This is important to keep the default logging behavior for users, and propagation is also needed for pytest fixtures, as asked in #1826
I also re... | https://api.github.com/repos/huggingface/datasets/issues/1845/timeline | null | true |
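With propagation re-enabled, the default end-user experience looks roughly like this (illustrative):

```python
import logging

# Handlers attached to the root logger now receive the library's records,
# which is also what pytest's caplog fixture relies on.
logging.basicConfig(level=logging.INFO)
logging.getLogger("datasets").info("this now reaches the root handlers")
```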
https://api.github.com/repos/huggingface/datasets/issues/1844 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1844/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1844/comments | https://api.github.com/repos/huggingface/datasets/issues/1844/events | https://github.com/huggingface/datasets/issues/1844 | 803,588,125 | MDU6SXNzdWU4MDM1ODgxMjU= | 1,844 | Update Open Subtitles corpus with original sentence IDs | {
"login": "Valahaar",
"id": 19476123,
"node_id": "MDQ6VXNlcjE5NDc2MTIz",
"avatar_url": "https://avatars.githubusercontent.com/u/19476123?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Valahaar",
"html_url": "https://github.com/Valahaar",
"followers_url": "https://api.github.com/users/Val... | [
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] | closed | false | null | [] | null | [
"Hi ! You're right this can can useful.\r\nThis should be easy to add, so feel free to give it a try if you want to contribute :)\r\nI think we just need to add it to the _generate_examples method of the OpenSubtitles dataset builder [here](https://github.com/huggingface/datasets/blob/master/datasets/open_subtitles... | 1,612,792,513,000 | 1,613,151,538,000 | 1,613,151,538,000 | CONTRIBUTOR | null | null | Hi! It would be great if you could add the original sentence ids to [Open Subtitles](https://huggingface.co/datasets/open_subtitles).
I can think of two reasons: first, it's possible to gather sentences for an entire document (the original ids contain media id, subtitle file id and sentence id), therefore somewhat a... | https://api.github.com/repos/huggingface/datasets/issues/1844/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1843 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1843/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1843/comments | https://api.github.com/repos/huggingface/datasets/issues/1843/events | https://github.com/huggingface/datasets/issues/1843 | 803,565,393 | MDU6SXNzdWU4MDM1NjUzOTM= | 1,843 | MustC Speech Translation | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [
"Hi @patrickvonplaten I would like to work on this dataset. \r\n\r\nThanks! ",
"That's awesome! Actually, I just noticed that this dataset might become a bit too big!\r\n\r\nMuST-C is the main dataset used for IWSLT19 and should probably be added as a standalone dataset. Would you be interested also in adding `d... | 1,612,790,865,000 | 1,621,004,014,000 | null | MEMBER | null | null | ## Adding a Dataset
- **Name:** *IWSLT19*
- **Description:** *The Speech Translation Task addresses the translation of English audio into German and Portuguese text.*
- **Hompage:** *https://sites.google.com/view/iwslt-evaluation-2019/speech-translation*
- **Data:** *https://sites.google.com/view/iwslt-evaluation-2... | https://api.github.com/repos/huggingface/datasets/issues/1843/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1842 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1842/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1842/comments | https://api.github.com/repos/huggingface/datasets/issues/1842/events | https://github.com/huggingface/datasets/issues/1842 | 803,563,149 | MDU6SXNzdWU4MDM1NjMxNDk= | 1,842 | Add AMI Corpus | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [
"Available here: ~https://huggingface.co/datasets/ami~ https://huggingface.co/datasets/edinburghcstr/ami",
"@mariosasko actually the \"official\" AMI dataset can be found here: https://huggingface.co/datasets/edinburghcstr/ami -> the old one under `datasets/ami` doesn't work and should be deleted. \r\n\r\nThe new... | 1,612,790,700,000 | 1,612,855,576,000 | null | MEMBER | null | null | ## Adding a Dataset
- **Name:** *AMI*
- **Description:** *The AMI Meeting Corpus is a multi-modal data set consisting of 100 hours of meeting recordings. For a gentle introduction to the corpus, see the corpus overview. To access the data, follow the directions given there. Around two-thirds of the data has been elic... | https://api.github.com/repos/huggingface/datasets/issues/1842/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1841 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1841/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1841/comments | https://api.github.com/repos/huggingface/datasets/issues/1841/events | https://github.com/huggingface/datasets/issues/1841 | 803,561,123 | MDU6SXNzdWU4MDM1NjExMjM= | 1,841 | Add ljspeech | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | closed | false | null | [] | null | [] | 1,612,790,546,000 | 1,615,787,942,000 | 1,615,787,942,000 | MEMBER | null | null | ## Adding a Dataset
- **Name:** *ljspeech*
- **Description:** *This is a public domain speech dataset consisting of 13,100 short audio clips of a single speaker reading passages from 7 non-fiction books. A transcription is provided for each clip. Clips vary in length from 1 to 10 seconds and have a total length of ap... | https://api.github.com/repos/huggingface/datasets/issues/1841/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1840 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1840/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1840/comments | https://api.github.com/repos/huggingface/datasets/issues/1840/events | https://github.com/huggingface/datasets/issues/1840 | 803,560,039 | MDU6SXNzdWU4MDM1NjAwMzk= | 1,840 | Add common voice | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | closed | false | null | [] | null | [
"I have started working on adding this dataset.",
"Hey @BirgerMoell - awesome that you started working on Common Voice. Common Voice is a bit special since, there is no direct download link to download the data. In these cases we usually consider two options:\r\n\r\n1) Find a hacky solution to extract the downloa... | 1,612,790,465,000 | 1,615,787,781,000 | 1,615,787,781,000 | MEMBER | null | null | ## Adding a Dataset
- **Name:** *common voice*
- **Description:** *Mozilla Common Voice Dataset*
- **Paper:** Homepage: https://voice.mozilla.org/en/datasets
- **Data:** https://voice.mozilla.org/en/datasets
- **Motivation:** Important speech dataset
- **TFDatasets Implementation**: https://www.tensorflow.org/dat... | https://api.github.com/repos/huggingface/datasets/issues/1840/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1839 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1839/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1839/comments | https://api.github.com/repos/huggingface/datasets/issues/1839/events | https://github.com/huggingface/datasets/issues/1839 | 803,559,164 | MDU6SXNzdWU4MDM1NTkxNjQ= | 1,839 | Add Voxforge | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [] | 1,612,790,396,000 | 1,612,790,911,000 | null | MEMBER | null | null | ## Adding a Dataset
- **Name:** *voxforge*
- **Description:** *VoxForge is a language classification dataset. It consists of user submitted audio clips submitted to the website. In this release, data from 6 languages is collected - English, Spanish, French, German, Russian, and Italian. Since the website is constant... | https://api.github.com/repos/huggingface/datasets/issues/1839/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1838 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1838/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1838/comments | https://api.github.com/repos/huggingface/datasets/issues/1838/events | https://github.com/huggingface/datasets/issues/1838 | 803,557,521 | MDU6SXNzdWU4MDM1NTc1MjE= | 1,838 | Add tedlium | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [
"Hi @patrickvonplaten \r\nI can have a look to this dataset later since I am trying to add the OpenSLR dataset https://github.com/huggingface/datasets/pull/2173\r\nHopefully I have enough space since the compressed file is 21GB. The release 3 is even bigger: 54GB :-0",
"Resolved via https://github.com/huggingface... | 1,612,790,272,000 | 1,617,983,861,000 | null | MEMBER | null | null | ## Adding a Dataset
- **Name:** *tedlium*
- **Description:** *The TED-LIUM 1-3 corpus is English-language TED talks, with transcriptions, sampled at 16kHz. It contains about 118 hours of speech.*
- **Paper:** Homepage: http://www.openslr.org/7/, https://lium.univ-lemans.fr/en/ted-lium2/ &, https://www.openslr.org/51... | https://api.github.com/repos/huggingface/datasets/issues/1838/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1837 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1837/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1837/comments | https://api.github.com/repos/huggingface/datasets/issues/1837/events | https://github.com/huggingface/datasets/issues/1837 | 803,555,650 | MDU6SXNzdWU4MDM1NTU2NTA= | 1,837 | Add VCTK | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [
"@patrickvonplaten I'd like to take this, if nobody has already done it. I have added datasets before through the datasets sprint, but I feel rusty on the details, so I'll look at the guide as well as similar audio PRs (#1878 in particular comes to mind). If there is any detail I should be aware of please, let me k... | 1,612,790,128,000 | 1,612,790,128,000 | null | MEMBER | null | null | ## Adding a Dataset
- **Name:** *VCTK*
- **Description:** *This CSTR VCTK Corpus includes speech data uttered by 110 English speakers with various accents. Each speaker reads out about 400 sentences, which were selected from a newspaper, the rainbow passage and an elicitation paragraph used for the speech accent arch... | https://api.github.com/repos/huggingface/datasets/issues/1837/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1836 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1836/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1836/comments | https://api.github.com/repos/huggingface/datasets/issues/1836/events | https://github.com/huggingface/datasets/issues/1836 | 803,531,837 | MDU6SXNzdWU4MDM1MzE4Mzc= | 1,836 | test.json has been removed from the limit dataset repo (breaks dataset) | {
"login": "Paethon",
"id": 237550,
"node_id": "MDQ6VXNlcjIzNzU1MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Paethon",
"html_url": "https://github.com/Paethon",
"followers_url": "https://api.github.com/users/Paethon/fo... | [
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] | closed | false | null | [] | null | [
"Thanks for the heads up ! I'm opening a PR to fix that"
] | 1,612,788,353,000 | 1,612,973,698,000 | 1,612,973,698,000 | NONE | null | null | https://github.com/huggingface/datasets/blob/16042b233dbff2a7585110134e969204c69322c3/datasets/limit/limit.py#L51
The URL is not valid anymore since test.json has been removed in master for some reason. Directly referencing the last commit works:
`https://raw.githubusercontent.com/ilmgut/limit_dataset/0707d3989cd... | https://api.github.com/repos/huggingface/datasets/issues/1836/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1835 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1835/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1835/comments | https://api.github.com/repos/huggingface/datasets/issues/1835/events | https://github.com/huggingface/datasets/issues/1835 | 803,524,790 | MDU6SXNzdWU4MDM1MjQ3OTA= | 1,835 | Add CHiME4 dataset | {
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://... | [
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
},
{
"id": 2725241052,
... | open | false | null | [] | null | [] | 1,612,787,798,000 | 1,612,790,011,000 | null | MEMBER | null | null | ## Adding a Dataset
- **Name:** Chime4
- **Description:** Chime4 is a dataset for automatic speech recognition. It is especially useful for evaluating models in a noisy environment and for multi-channel ASR
- **Paper:** Dataset comes from a channel: http://spandh.dcs.shef.ac.uk/chime_challenge/CHiME4/ . Results pape... | https://api.github.com/repos/huggingface/datasets/issues/1835/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1834 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1834/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1834/comments | https://api.github.com/repos/huggingface/datasets/issues/1834/events | https://github.com/huggingface/datasets/pull/1834 | 803,517,094 | MDExOlB1bGxSZXF1ZXN0NTY5NDMzNDA4 | 1,834 | Fixes base_url of limit dataset | {
"login": "Paethon",
"id": 237550,
"node_id": "MDQ6VXNlcjIzNzU1MA==",
"avatar_url": "https://avatars.githubusercontent.com/u/237550?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Paethon",
"html_url": "https://github.com/Paethon",
"followers_url": "https://api.github.com/users/Paethon/fo... | [] | closed | false | null | [] | null | [
"OK, apparently it is a lot more complicated than simply changing the URL? Going to make an issue."
] | 1,612,787,195,000 | 1,612,788,170,000 | 1,612,788,170,000 | NONE | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1834",
"html_url": "https://github.com/huggingface/datasets/pull/1834",
"diff_url": "https://github.com/huggingface/datasets/pull/1834.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1834.patch"
} | `test.json` is not available in the master branch of the repository anymore. Linking to a specific commit. | https://api.github.com/repos/huggingface/datasets/issues/1834/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1833 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1833/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1833/comments | https://api.github.com/repos/huggingface/datasets/issues/1833/events | https://github.com/huggingface/datasets/pull/1833 | 803,120,978 | MDExOlB1bGxSZXF1ZXN0NTY5MDk5MTUx | 1,833 | Add OSCAR dataset card | {
"login": "pjox",
"id": 635220,
"node_id": "MDQ6VXNlcjYzNTIyMA==",
"avatar_url": "https://avatars.githubusercontent.com/u/635220?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pjox",
"html_url": "https://github.com/pjox",
"followers_url": "https://api.github.com/users/pjox/followers",
... | [] | closed | false | null | [] | null | [
"@lhoestq Thanks for the suggestions! I agree with all of them. Should I accept them one by one or can I accept them all at once? When I try to load the whole diff GitHub is complaining and it does no render them well (probably my browser?) 😅 ",
"I just merged the tables as suggested 😄 . However I noticed somet... | 1,612,748,389,000 | 1,613,138,965,000 | 1,613,138,904,000 | CONTRIBUTOR | null | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1833",
"html_url": "https://github.com/huggingface/datasets/pull/1833",
"diff_url": "https://github.com/huggingface/datasets/pull/1833.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1833.patch"
} | I added more information and completed the dataset card for OSCAR which was started by @lhoestq in his previous [PR](https://github.com/huggingface/datasets/pull/1824). | https://api.github.com/repos/huggingface/datasets/issues/1833/timeline | null | true |
https://api.github.com/repos/huggingface/datasets/issues/1832 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1832/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1832/comments | https://api.github.com/repos/huggingface/datasets/issues/1832/events | https://github.com/huggingface/datasets/issues/1832 | 802,880,897 | MDU6SXNzdWU4MDI4ODA4OTc= | 1,832 | Looks like nokogumbo is up-to-date now, so this is no longer needed. | {
"login": "JimmyJim1",
"id": 68724553,
"node_id": "MDQ6VXNlcjY4NzI0NTUz",
"avatar_url": "https://avatars.githubusercontent.com/u/68724553?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/JimmyJim1",
"html_url": "https://github.com/JimmyJim1",
"followers_url": "https://api.github.com/users/... | [] | closed | false | null | [] | null | [] | 1,612,680,727,000 | 1,612,805,249,000 | 1,612,805,249,000 | NONE | null | null | Looks like nokogumbo is up-to-date now, so this is no longer needed.
__Originally posted by @dependabot in https://github.com/discourse/discourse/pull/11373#issuecomment-738993432__ | https://api.github.com/repos/huggingface/datasets/issues/1832/timeline | null | false |
https://api.github.com/repos/huggingface/datasets/issues/1831 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1831/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1831/comments | https://api.github.com/repos/huggingface/datasets/issues/1831/events | https://github.com/huggingface/datasets/issues/1831 | 802,868,854 | MDU6SXNzdWU4MDI4Njg4NTQ= | 1,831 | Some question about raw dataset download info in the project . | {
"login": "svjack",
"id": 27874014,
"node_id": "MDQ6VXNlcjI3ODc0MDE0",
"avatar_url": "https://avatars.githubusercontent.com/u/27874014?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/svjack",
"html_url": "https://github.com/svjack",
"followers_url": "https://api.github.com/users/svjack/fo... | [] | closed | false | {
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.g... | [
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_... | null | [
"Hi ! The `dl_manager` is a `DownloadManager` object and is responsible for downloading the raw data files.\r\nIt is used by dataset builders in their `_split_generators` method to download the raw data files that are necessary to build the datasets splits.\r\n\r\nThe `Conll2003` class is a dataset builder, and so ... | 1,612,676,016,000 | 1,614,262,218,000 | 1,614,262,218,000 | NONE | null | null | Hi , i review the code in
https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py
where the _split_generators function contains the actual logic for downloading the raw datasets with dl_manager,
and the Conll2003 class is used via import_main_class in the load_dataset function.
My question is that, with this logic i... | https://api.github.com/repos/huggingface/datasets/issues/1831/timeline | null | false |
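A minimal sketch of the builder pattern the reply describes, i.e. `_split_generators` receiving a `dl_manager` (the class name and URL are illustrative placeholders, not the actual conll2003 source):

```python
import os
import datasets

_URL = "https://example.com/raw_data.zip"  # placeholder, not the real URL

class MyDataset(datasets.GeneratorBasedBuilder):
    def _split_generators(self, dl_manager):
        # The DownloadManager downloads (and caches) the raw files;
        # the builder only consumes the resulting local paths.
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": os.path.join(data_dir, "train.txt")},
            )
        ]
```

(A complete builder would also define `_info` and `_generate_examples`; they are omitted here for brevity.)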
https://api.github.com/repos/huggingface/datasets/issues/1830 | https://api.github.com/repos/huggingface/datasets | https://api.github.com/repos/huggingface/datasets/issues/1830/labels{/name} | https://api.github.com/repos/huggingface/datasets/issues/1830/comments | https://api.github.com/repos/huggingface/datasets/issues/1830/events | https://github.com/huggingface/datasets/issues/1830 | 802,790,075 | MDU6SXNzdWU4MDI3OTAwNzU= | 1,830 | using map on loaded Tokenizer 10x - 100x slower than default Tokenizer? | {
"login": "wumpusman",
"id": 7662740,
"node_id": "MDQ6VXNlcjc2NjI3NDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/7662740?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wumpusman",
"html_url": "https://github.com/wumpusman",
"followers_url": "https://api.github.com/users/wu... | [] | open | false | null | [] | null | [
"Hi @wumpusman \r\n`datasets` has a caching mechanism that allows to cache the results of `.map` so that when you want to re-run it later it doesn't recompute it again.\r\nSo when you do `.map`, what actually happens is:\r\n1. compute the hash used to identify your `map` for the cache\r\n2. apply your function on e... | 1,612,645,226,000 | 1,614,203,774,000 | null | NONE | null | null | This could total relate to me misunderstanding particular call functions, but I added words to a GPT2Tokenizer, and saved it to disk (note I'm only showing snippets but I can share more) and the map function ran much slower:
````
def save_tokenizer(original_tokenizer,text,path="simpledata/tokenizer"):
words_u... | https://api.github.com/repos/huggingface/datasets/issues/1830/timeline | null | false |
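To check whether the slowdown comes from the cache hashing described in the reply above rather than from tokenization itself, the cache lookup can be bypassed. A hedged sketch, where the tokenizer path comes from the snippet above and the data file is a placeholder:

```python
from datasets import load_dataset
from transformers import GPT2Tokenizer

# "simpledata/tokenizer" is the save path used in the snippet above.
tokenizer = GPT2Tokenizer.from_pretrained("simpledata/tokenizer")

def tokenize(batch):
    return tokenizer(batch["text"])

# "data.txt" is a placeholder file with one example per line.
dataset = load_dataset("text", data_files={"train": "data.txt"})["train"]

# load_from_cache_file=False skips the cache lookup; if timings converge
# with the default tokenizer, hashing/caching was the bottleneck.
tokenized = dataset.map(tokenize, batched=True, load_from_cache_file=False)
```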