id int64 959M 2.55B | title stringlengths 3 133 | body stringlengths 1 65.5k β | description stringlengths 5 65.6k | state stringclasses 2
values | created_at stringlengths 20 20 | updated_at stringlengths 20 20 | closed_at stringlengths 20 20 β | user stringclasses 174
values |
|---|---|---|---|---|---|---|---|---|
2,324,032,880 | Run the backfill on retryable errors every 2 hours (not every 30 min) | We currently have four consecutive backfill-retryable-errors jobs running simultaneously, which could cause an overload of the DB.
I would like to see if it helps somehow other services. | Run the backfill on retryable errors every 2 hours (not every 30 min): We currently have four consecutive backfill-retryable-errors jobs running simultaneously, which could cause an overload of the DB.
I would like to see if it helps somehow other services. | closed | 2024-05-29T18:56:41Z | 2024-05-29T18:59:44Z | 2024-05-29T18:59:43Z | AndreaFrancis |
2,323,807,849 | More webdataset fixes | ...by pinning a more recent version of `datasets@datasets-2.19.1-hotfix`
The webdataset fixes are needed for a dataset release, see [internal](https://huggingface.slack.com/archives/C06T0REVAH1/p1716998706658099?thread_ts=1714048738.307959&cid=C06T0REVAH1)
<s>I also included your fix for the columns order and Jso... | More webdataset fixes: ...by pinning a more recent version of `datasets@datasets-2.19.1-hotfix`
The webdataset fixes are needed for a dataset release, see [internal](https://huggingface.slack.com/archives/C06T0REVAH1/p1716998706658099?thread_ts=1714048738.307959&cid=C06T0REVAH1)
<s>I also included your fix for th... | closed | 2024-05-29T17:01:22Z | 2024-05-30T10:55:57Z | 2024-05-30T10:55:56Z | lhoestq |
2,323,470,064 | Refine blocked datasets for open llm leaderboard | null | Refine blocked datasets for open llm leaderboard: | closed | 2024-05-29T14:24:13Z | 2024-05-29T14:26:51Z | 2024-05-29T14:26:51Z | lhoestq |
2,322,774,832 | memory: use pymongoarrow to get dataset results as dataframe | As stated in the code, the current conversion from a list of mongo entries to a dataframe is not necessarily optimal:
https://github.com/huggingface/dataset-viewer/blob/27edd1f472816f70ab6723fe7fdcb96fa8375209/libs/libcommon/src/libcommon/simple_cache.py#L813-L833
see also
https://github.com/huggingface/datase... | memory: use pymongoarrow to get dataset results as dataframe: As stated in the code, the current conversion from a list of mongo entries to a dataframe is not necessarily optimal:
https://github.com/huggingface/dataset-viewer/blob/27edd1f472816f70ab6723fe7fdcb96fa8375209/libs/libcommon/src/libcommon/simple_cache.py#... | closed | 2024-05-29T09:02:22Z | 2024-06-10T20:32:14Z | 2024-06-10T20:32:14Z | severo |
2,322,724,131 | Remove unnecessary script-related worker dependencies | Remove unnecessary script-related worker dependencies.
Follow-up on:
- #2637
Fix #2476. | Remove unnecessary script-related worker dependencies: Remove unnecessary script-related worker dependencies.
Follow-up on:
- #2637
Fix #2476. | closed | 2024-05-29T08:39:45Z | 2024-05-29T09:18:37Z | 2024-05-29T09:18:36Z | albertvillanova |
2,322,596,385 | Improve the discussion message when a dataset was already in Parquet | > but it is in the parquet fomat already as per the file present in the data folder
See https://huggingface.co/datasets/Rudra360/counselling-llama2-1k/discussions/2#6656dab475a81dc7834171db | Improve the discussion message when a dataset was already in Parquet: > but it is in the parquet fomat already as per the file present in the data folder
See https://huggingface.co/datasets/Rudra360/counselling-llama2-1k/discussions/2#6656dab475a81dc7834171db | closed | 2024-05-29T07:37:13Z | 2024-06-19T16:07:06Z | 2024-06-19T16:07:06Z | severo |
2,322,427,018 | Update ruff from 0.4.5 to 0.4.6 | Update ruff from 0.4.5 to 0.4.6: https://github.com/astral-sh/ruff/releases/tag/v0.4.6 | Update ruff from 0.4.5 to 0.4.6: Update ruff from 0.4.5 to 0.4.6: https://github.com/astral-sh/ruff/releases/tag/v0.4.6 | closed | 2024-05-29T06:00:07Z | 2024-05-29T08:28:44Z | 2024-05-29T08:28:43Z | albertvillanova |
2,321,385,091 | Improve error handling (message, hints, link ot the docs) | # Problem
The dataset viewer shows this error when I open [my public dataset repo](https://huggingface.co/datasets/neoneye/base64-decode-v1).
My repo contains both `data.jsonl` and python files for regenerating the `data.jsonl` file.
```
Cannot get the config names for the dataset.
Error code: ConfigNamesE... | Improve error handling (message, hints, link ot the docs): # Problem
The dataset viewer shows this error when I open [my public dataset repo](https://huggingface.co/datasets/neoneye/base64-decode-v1).
My repo contains both `data.jsonl` and python files for regenerating the `data.jsonl` file.
```
Cannot get th... | closed | 2024-05-28T15:35:26Z | 2024-08-23T14:53:42Z | 2024-08-23T14:53:42Z | neoneye |
2,321,103,991 | run memray in unit tests | see https://github.com/huggingface/dataset-viewer/actions/runs/9270470931/job/25503544322?pr=2863#step:8:169 for example:
```
================================ MEMRAY REPORT =================================
Allocation results for tests/test_simple_cache.py::test_big_row at the high watermark
π¦ Total memory a... | run memray in unit tests: see https://github.com/huggingface/dataset-viewer/actions/runs/9270470931/job/25503544322?pr=2863#step:8:169 for example:
```
================================ MEMRAY REPORT =================================
Allocation results for tests/test_simple_cache.py::test_big_row at the high waterm... | closed | 2024-05-28T13:41:39Z | 2024-05-29T16:41:03Z | 2024-05-29T16:41:02Z | severo |
2,320,814,148 | move the backfill time | null | move the backfill time: | closed | 2024-05-28T11:25:12Z | 2024-05-28T11:26:23Z | 2024-05-28T11:25:19Z | severo |
2,320,802,396 | block huggingface-leaderboard/* | Datasets like https://huggingface.co/datasets/huggingface-leaderboard/details_huggingface__llama-13b are redirected to https://huggingface.co/datasets/open-llm-leaderboard/details_huggingface__llama-13b, and the org https://huggingface.co/datasets/huggingface-leaderboard does not seem to exist, but we currently have 20... | block huggingface-leaderboard/*: Datasets like https://huggingface.co/datasets/huggingface-leaderboard/details_huggingface__llama-13b are redirected to https://huggingface.co/datasets/open-llm-leaderboard/details_huggingface__llama-13b, and the org https://huggingface.co/datasets/huggingface-leaderboard does not seem t... | closed | 2024-05-28T11:18:43Z | 2024-05-28T12:22:42Z | 2024-05-28T11:19:14Z | severo |
2,320,567,503 | Update mypy from 1.8.0 to 1.10.0 | Update mypy from 1.8.0 to 1.10.0. | Update mypy from 1.8.0 to 1.10.0: Update mypy from 1.8.0 to 1.10.0. | closed | 2024-05-28T09:25:26Z | 2024-05-28T12:14:04Z | 2024-05-28T12:14:03Z | albertvillanova |
2,319,119,068 | Update duckdb from 0.10.0 to 0.10.3 | Update duckdb from 0.10.0 to 0.10.3 to include bug fixes. See:
- https://github.com/duckdb/duckdb/releases/tag/v0.10.3
- https://github.com/duckdb/duckdb/releases/tag/v0.10.2 | Update duckdb from 0.10.0 to 0.10.3: Update duckdb from 0.10.0 to 0.10.3 to include bug fixes. See:
- https://github.com/duckdb/duckdb/releases/tag/v0.10.3
- https://github.com/duckdb/duckdb/releases/tag/v0.10.2 | closed | 2024-05-27T13:06:42Z | 2024-05-28T15:04:03Z | 2024-05-28T15:04:02Z | albertvillanova |
2,319,016,326 | Update pip-audit dev dependency to 2.7.3 | Update pip-audit dev dependency to 2.7.3. | Update pip-audit dev dependency to 2.7.3: Update pip-audit dev dependency to 2.7.3. | closed | 2024-05-27T12:13:53Z | 2024-05-29T08:27:11Z | 2024-05-29T08:27:10Z | albertvillanova |
2,318,968,094 | The cache metrics computation (differential) is wrong | The cache metrics are refreshed every 3 hours, and we can see that it fixes incorrect values, that are updated meanwhile using the differential computation (+1 / -1):

| The cache metrics computation (differential) is wrong: The cache metrics are refreshed every 3 hours, and we can see that it fixes incorrect values, that are updated meanwhile using the differential computation (+1 / -1):
 | fixes https://github.com/huggingface/dataset-viewer/pull/2737#issuecomment-2133157229
> shouldn't i update the job runner's version?
I don't increment the `split-descriptive-statistics` version, because I understand that the responses will not be changed, right? | increase duckdb job runner version (follows #2737): fixes https://github.com/huggingface/dataset-viewer/pull/2737#issuecomment-2133157229
> shouldn't i update the job runner's version?
I don't increment the `split-descriptive-statistics` version, because I understand that the responses will not be changed, right? | closed | 2024-05-27T10:59:54Z | 2024-05-27T11:17:58Z | 2024-05-27T11:17:57Z | severo |
2,318,868,352 | Give more ram to backfill cron jobs | The backfill jobs can restart 2, 3, even 6 times... due to OOM. And we end up with successive cron jobs occurring at the same time. | Give more ram to backfill cron jobs: The backfill jobs can restart 2, 3, even 6 times... due to OOM. And we end up with successive cron jobs occurring at the same time. | closed | 2024-05-27T10:54:31Z | 2024-05-27T11:50:45Z | 2024-05-27T10:55:51Z | severo |
2,318,356,497 | Update aiobotocore dependency | Update aiobotocore dependency. | Update aiobotocore dependency: Update aiobotocore dependency. | closed | 2024-05-27T06:37:56Z | 2024-05-27T11:19:09Z | 2024-05-27T11:19:08Z | albertvillanova |
2,317,299,496 | REST API documentation consistency improvements | I have identified the following possible instances of inconsistencies between [Open API specification](https://github.com/huggingface/dataset-viewer/blob/d1c56d3d0f59996110abf7334fc807f9db8bc8ff/docs/source/openapi.json) and [Documentation](https://github.com/huggingface/dataset-viewer/blob/d1c56d3d0f59996110abf7334fc8... | REST API documentation consistency improvements: I have identified the following possible instances of inconsistencies between [Open API specification](https://github.com/huggingface/dataset-viewer/blob/d1c56d3d0f59996110abf7334fc807f9db8bc8ff/docs/source/openapi.json) and [Documentation](https://github.com/huggingface... | open | 2024-05-25T23:16:18Z | 2024-05-29T21:03:30Z | null | alexvuka1 |
2,314,398,124 | Update requests from yanked 2.32.1 to 2.32.2 to fix vulnerability | Update requests from yanked 2.32.1 to 2.32.2.
Note that 2.32.1 version was yanked: https://pypi.org/project/requests/2.32.1/ | Update requests from yanked 2.32.1 to 2.32.2 to fix vulnerability: Update requests from yanked 2.32.1 to 2.32.2.
Note that 2.32.1 version was yanked: https://pypi.org/project/requests/2.32.1/ | closed | 2024-05-24T05:22:30Z | 2024-05-24T08:12:41Z | 2024-05-24T08:12:40Z | albertvillanova |
2,312,688,805 | Update ruff CI dependency to 0.4.5 | Update ruff CI dependency to 0.4.5: https://github.com/astral-sh/ruff/releases/tag/v0.4.5 | Update ruff CI dependency to 0.4.5: Update ruff CI dependency to 0.4.5: https://github.com/astral-sh/ruff/releases/tag/v0.4.5 | closed | 2024-05-23T11:39:39Z | 2024-05-24T05:16:18Z | 2024-05-24T05:16:18Z | albertvillanova |
2,310,675,022 | fix admin ui deps | had to pin a few s3 related deps (from poetry.lock) or pip couldn't resolve them | fix admin ui deps: had to pin a few s3 related deps (from poetry.lock) or pip couldn't resolve them | closed | 2024-05-22T14:17:03Z | 2024-05-22T14:24:17Z | 2024-05-22T14:17:29Z | lhoestq |
2,310,423,112 | Moving webhook to its own service | Fix for https://github.com/huggingface/dataset-viewer/issues/2848 | Moving webhook to its own service: Fix for https://github.com/huggingface/dataset-viewer/issues/2848 | closed | 2024-05-22T12:35:04Z | 2024-05-22T15:40:04Z | 2024-05-22T15:40:03Z | AndreaFrancis |
2,309,090,231 | Move /webhook to its own service | In order to investigate https://github.com/huggingface/dataset-viewer/issues/2395, we can separate /webhook from the rest of the endpoints served by the API service. Indeed, it's by far the slowest one to respond (15% of the requests take > 700ms, 5% > 7s !) compared to the other ones (the worst one, /info, has 15% > 6... | Move /webhook to its own service: In order to investigate https://github.com/huggingface/dataset-viewer/issues/2395, we can separate /webhook from the rest of the endpoints served by the API service. Indeed, it's by far the slowest one to respond (15% of the requests take > 700ms, 5% > 7s !) compared to the other ones ... | closed | 2024-05-21T20:49:09Z | 2024-05-22T19:17:42Z | 2024-05-22T19:17:42Z | severo |
2,309,026,481 | fix(chart): add kube tolerations | See: https://github.com/huggingface/infra/pull/1022 | fix(chart): add kube tolerations: See: https://github.com/huggingface/infra/pull/1022 | closed | 2024-05-21T20:05:53Z | 2024-05-21T20:20:19Z | 2024-05-21T20:18:41Z | rtrompier |
2,308,817,111 | Add dataset /presidio-entities endpoint | null | Add dataset /presidio-entities endpoint: | closed | 2024-05-21T17:56:19Z | 2024-05-22T16:32:40Z | 2024-05-22T16:32:39Z | lhoestq |
2,308,262,424 | Refer to Hub docs in dataset-viewer docs | See https://www.reddit.com/r/huggingface/comments/1csdrfn/comment/l50jfkx/
> It would be useful if the "dataset viewer" doc page referenced the one you gave.
From https://huggingface.co/docs/datasets-server, it's not easy to see the docs for https://huggingface.co/docs/hub/datasets-data-files-configuration or ht... | Refer to Hub docs in dataset-viewer docs: See https://www.reddit.com/r/huggingface/comments/1csdrfn/comment/l50jfkx/
> It would be useful if the "dataset viewer" doc page referenced the one you gave.
From https://huggingface.co/docs/datasets-server, it's not easy to see the docs for https://huggingface.co/docs/h... | closed | 2024-05-21T13:03:41Z | 2024-08-01T21:50:01Z | 2024-08-01T21:50:01Z | severo |
2,308,227,337 | increase the number of API pods from 12 to 20 | We currently use 5 out of 8 nodes. I'll try to increase the max number of nodes at the same time
| increase the number of API pods from 12 to 20: We currently use 5 out of 8 nodes. I'll try to increase the max number of nodes at the same time
| closed | 2024-05-21T12:47:06Z | 2024-05-21T19:41:27Z | 2024-05-21T19:41:26Z | severo |
2,308,189,060 | The admin-ui space cannot build | https://huggingface.co/spaces/datasets-maintainers/dataset-viewer-admin-ui
<img width="736" alt="Capture dβeΜcran 2024-05-21 aΜ 14 28 30" src="https://github.com/huggingface/dataset-viewer/assets/1676121/56f36a4d-e598-4166-82d3-9f47c32c56ec">
```
ERROR: Wheel 'botocore' located at /tmp/pip-unpack-6pji9o0y/botoco... | The admin-ui space cannot build: https://huggingface.co/spaces/datasets-maintainers/dataset-viewer-admin-ui
<img width="736" alt="Capture dβeΜcran 2024-05-21 aΜ 14 28 30" src="https://github.com/huggingface/dataset-viewer/assets/1676121/56f36a4d-e598-4166-82d3-9f47c32c56ec">
```
ERROR: Wheel 'botocore' located a... | closed | 2024-05-21T12:28:45Z | 2024-05-22T14:24:30Z | 2024-05-22T14:24:30Z | severo |
2,308,159,799 | fix sse prefix, after switching from nginx proxy to ALB | same issue as for /admin, which was fixed in three PR: https://github.com/huggingface/dataset-viewer/pull/2820, https://github.com/huggingface/dataset-viewer/pull/2821, https://github.com/huggingface/dataset-viewer/pull/2825 | fix sse prefix, after switching from nginx proxy to ALB: same issue as for /admin, which was fixed in three PR: https://github.com/huggingface/dataset-viewer/pull/2820, https://github.com/huggingface/dataset-viewer/pull/2821, https://github.com/huggingface/dataset-viewer/pull/2825 | closed | 2024-05-21T12:14:00Z | 2024-05-21T19:41:37Z | 2024-05-21T19:41:36Z | severo |
2,307,906,725 | Fix the no-conversion of Opus audio files | This PR fixes the no-conversion of Opus audio file:
- #2818
because the following PR does not avoid the conversion of Opus audio files to WAV:
- #2832
CC: @Airop | Fix the no-conversion of Opus audio files: This PR fixes the no-conversion of Opus audio file:
- #2818
because the following PR does not avoid the conversion of Opus audio files to WAV:
- #2832
CC: @Airop | closed | 2024-05-21T10:08:32Z | 2024-05-21T11:55:58Z | 2024-05-21T11:55:57Z | albertvillanova |
2,307,860,630 | Update requests >= 2.32.1 to fix vulnerability | Update requests >= 2.32.1 to fix vulnerability.
Supersede #2839. | Update requests >= 2.32.1 to fix vulnerability: Update requests >= 2.32.1 to fix vulnerability.
Supersede #2839. | closed | 2024-05-21T09:46:21Z | 2024-05-21T10:09:37Z | 2024-05-21T10:09:36Z | albertvillanova |
2,307,363,471 | Bump the pip group across 8 directories with 1 update | Bumps the pip group with 1 update in the /e2e directory: [requests](https://github.com/psf/requests).
Bumps the pip group with 1 update in the /front/admin_ui directory: [requests](https://github.com/psf/requests).
Bumps the pip group with 1 update in the /jobs/cache_maintenance directory: [requests](https://github.com... | Bump the pip group across 8 directories with 1 update: Bumps the pip group with 1 update in the /e2e directory: [requests](https://github.com/psf/requests).
Bumps the pip group with 1 update in the /front/admin_ui directory: [requests](https://github.com/psf/requests).
Bumps the pip group with 1 update in the /jobs/cac... | closed | 2024-05-21T05:49:16Z | 2024-05-21T10:11:12Z | 2024-05-21T10:11:10Z | dependabot[bot] |
2,304,101,438 | fix spacy model name | null | fix spacy model name: | closed | 2024-05-18T13:55:26Z | 2024-05-18T13:55:31Z | 2024-05-18T13:55:31Z | lhoestq |
2,304,099,416 | add missing spacy model | null | add missing spacy model: | closed | 2024-05-18T13:49:39Z | 2024-05-18T13:50:08Z | 2024-05-18T13:50:07Z | lhoestq |
2,304,087,344 | Update some numbers | null | Update some numbers: | closed | 2024-05-18T13:16:15Z | 2024-05-18T13:17:26Z | 2024-05-18T13:16:59Z | lhoestq |
2,303,562,567 | allow dataset viewer on private NFAA datasets for Pro and Enterprise Hub | cc @Pierrci
We enable the viewer for *private* NFAA datasets, owned by a Pro user or Enterprise Hub org.
The viewer is still disallowed on all public NFAA datasets. | allow dataset viewer on private NFAA datasets for Pro and Enterprise Hub: cc @Pierrci
We enable the viewer for *private* NFAA datasets, owned by a Pro user or Enterprise Hub org.
The viewer is still disallowed on all public NFAA datasets. | closed | 2024-05-17T20:52:07Z | 2024-05-21T20:12:28Z | 2024-05-21T19:42:29Z | severo |
2,303,473,165 | Fix truncation for urls | null | Fix truncation for urls: | closed | 2024-05-17T19:48:10Z | 2024-05-17T20:12:52Z | 2024-05-17T20:12:51Z | lhoestq |
2,303,059,376 | Don't truncate urls | ..in first-rows, otherwise we can't show images in the viewer
related to #1416 (issue for iamge and audio type that were also truncated at one point) | Don't truncate urls: ..in first-rows, otherwise we can't show images in the viewer
related to #1416 (issue for iamge and audio type that were also truncated at one point) | closed | 2024-05-17T15:45:06Z | 2024-05-17T19:48:23Z | 2024-05-17T15:51:25Z | lhoestq |
2,302,967,735 | added .opus as supported audio extension | added .opus as supported audio extension | added .opus as supported audio extension: added .opus as supported audio extension | closed | 2024-05-17T14:59:52Z | 2024-05-21T17:53:58Z | 2024-05-21T09:38:40Z | Airop |
2,302,950,655 | Fix webdatset multipart ext and npz | for a new dataset being released that has a field "url.txt" that is not read correctly as text; and another field "npz" that had an error
thsoe have been fixed in https://github.com/huggingface/datasets/pull/6904 and included in branch `datasets-2.19.1-hotfix` | Fix webdatset multipart ext and npz: for a new dataset being released that has a field "url.txt" that is not read correctly as text; and another field "npz" that had an error
thsoe have been fixed in https://github.com/huggingface/datasets/pull/6904 and included in branch `datasets-2.19.1-hotfix` | closed | 2024-05-17T14:50:50Z | 2024-05-17T15:01:04Z | 2024-05-17T15:01:03Z | lhoestq |
2,302,889,338 | doc: BM25 and Porter stemmer reference | Disclose the search algorithm and stemmer | doc: BM25 and Porter stemmer reference: Disclose the search algorithm and stemmer | closed | 2024-05-17T14:22:34Z | 2024-05-17T21:44:01Z | 2024-05-17T21:44:00Z | AndreaFrancis |
2,302,759,695 | fix: Empty split list | Fix for https://github.com/huggingface/dataset-viewer/issues/2803 | fix: Empty split list : Fix for https://github.com/huggingface/dataset-viewer/issues/2803 | closed | 2024-05-17T13:29:00Z | 2024-05-17T21:21:30Z | 2024-05-17T21:21:29Z | AndreaFrancis |
2,302,571,850 | Differentiate between `NaN` and `null` in the viewer | Currently, we don't do this and display and return in response `null` in both cases.
From the discussion in https://github.com/huggingface/dataset-viewer/pull/2797, this is agreed that it's important to let users know how to correctly treat data with these values.
This would require:
- [ ] Change in how we transfor... | Differentiate between `NaN` and `null` in the viewer: Currently, we don't do this and display and return in response `null` in both cases.
From the discussion in https://github.com/huggingface/dataset-viewer/pull/2797, this is agreed that it's important to let users know how to correctly treat data with these values. ... | open | 2024-05-17T12:05:47Z | 2024-07-01T16:27:41Z | null | polinaeterna |
2,301,485,007 | fix(chart): sseApi service to nodePort | null | fix(chart): sseApi service to nodePort: | closed | 2024-05-16T22:23:43Z | 2024-05-16T22:24:59Z | 2024-05-16T22:24:58Z | rtrompier |
2,301,475,430 | fix(chart): service type override | null | fix(chart): service type override: | closed | 2024-05-16T22:14:20Z | 2024-05-16T22:15:28Z | 2024-05-16T22:15:27Z | rtrompier |
2,301,217,115 | fix e2e test | fix #2824 | fix e2e test: fix #2824 | closed | 2024-05-16T19:47:41Z | 2024-05-17T09:18:20Z | 2024-05-16T20:20:46Z | severo |
2,301,180,037 | an e2e test is broken since #2821 | null | an e2e test is broken since #2821: | closed | 2024-05-16T19:25:34Z | 2024-05-16T20:20:47Z | 2024-05-16T20:20:47Z | severo |
2,301,119,018 | multiple nfaa tags, and remove details in error message | follow up to #2822 | multiple nfaa tags, and remove details in error message: follow up to #2822 | closed | 2024-05-16T18:54:12Z | 2024-05-16T19:19:09Z | 2024-05-16T19:01:35Z | severo |
2,300,581,077 | disable dataset viewer for 'nfaa' tag | See https://huggingface.slack.com/archives/C042H6N5JR0/p1714771691945989 (internal)
On the first backfill cron job after this PR is merged, all the datasets with the "nfaa" tag will be deleted from the dataset viewer's cache. | disable dataset viewer for 'nfaa' tag: See https://huggingface.slack.com/archives/C042H6N5JR0/p1714771691945989 (internal)
On the first backfill cron job after this PR is merged, all the datasets with the "nfaa" tag will be deleted from the dataset viewer's cache. | closed | 2024-05-16T14:31:40Z | 2024-05-16T18:54:44Z | 2024-05-16T15:35:45Z | severo |
2,299,878,775 | Fix admin again | fix #2820 | Fix admin again: fix #2820 | closed | 2024-05-16T09:40:11Z | 2024-05-17T09:19:23Z | 2024-05-16T09:40:46Z | severo |
2,299,763,382 | add prefix /admin in admin service | follow-up to #2713 | add prefix /admin in admin service: follow-up to #2713 | closed | 2024-05-16T08:49:13Z | 2024-05-16T09:31:54Z | 2024-05-16T09:31:53Z | severo |
2,299,375,257 | Align CI and production environments | While investigating an issue (see https://github.com/huggingface/dataset-viewer/pull/2792#issuecomment-2110295795), I discovered that our CI and production environments are misaligned, and thus the CI does not strictly test what will happen in production.
For example, the `libsndfile1` package:
- CI uses `1.0.31` v... | Align CI and production environments: While investigating an issue (see https://github.com/huggingface/dataset-viewer/pull/2792#issuecomment-2110295795), I discovered that our CI and production environments are misaligned, and thus the CI does not strictly test what will happen in production.
For example, the `libsn... | closed | 2024-05-16T05:18:12Z | 2024-06-24T15:48:34Z | 2024-06-24T15:04:12Z | albertvillanova |
2,299,347,208 | Do not convert Opus audio files to WAV | As suggested by @severo, we could avoid the conversion of Opus audio files to WAV: https://github.com/huggingface/dataset-viewer/pull/2811#issuecomment-2113015400
Indeed, according to https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Audio_codecs, it's a well-supported codec among browsers (see "Browser com... | Do not convert Opus audio files to WAV: As suggested by @severo, we could avoid the conversion of Opus audio files to WAV: https://github.com/huggingface/dataset-viewer/pull/2811#issuecomment-2113015400
Indeed, according to https://developer.mozilla.org/en-US/docs/Web/Media/Formats/Audio_codecs, it's a well-supporte... | closed | 2024-05-16T04:50:56Z | 2024-05-21T09:38:41Z | 2024-05-21T09:38:41Z | albertvillanova |
2,298,900,148 | doc: DuckDB native support for HuggingFace urls | Following up https://github.com/duckdb/duckdb/pull/11831
Draft in progress to document DuckDB CLI usage with native support for HF URLs.
I plan to add:
- [x] Authentication for private and gated datasets (Using DuckDB Secrets Manager)
- [x] Query datasets (Some basic `SELECT` examples, `DESCRIBE`, `SUMMARIZE` for... | doc: DuckDB native support for HuggingFace urls: Following up https://github.com/duckdb/duckdb/pull/11831
Draft in progress to document DuckDB CLI usage with native support for HF URLs.
I plan to add:
- [x] Authentication for private and gated datasets (Using DuckDB Secrets Manager)
- [x] Query datasets (Some bas... | closed | 2024-05-15T21:48:40Z | 2024-05-23T14:42:35Z | 2024-05-23T14:42:34Z | AndreaFrancis |
2,298,812,731 | Remove nginx reverse proxy from Helm chart | following #2713, ALB serves the routes, and the nginx reverse proxy is unused. We remove it from Helm.
| Remove nginx reverse proxy from Helm chart: following #2713, ALB serves the routes, and the nginx reverse proxy is unused. We remove it from Helm.
| closed | 2024-05-15T20:46:20Z | 2024-05-16T21:41:37Z | 2024-05-16T21:41:36Z | severo |
2,298,757,821 | Update prod.yaml | null | Update prod.yaml: | closed | 2024-05-15T20:12:23Z | 2024-05-15T20:15:45Z | 2024-05-15T20:15:00Z | rtrompier |
2,298,744,430 | fix(chart): alb annotations | null | fix(chart): alb annotations: | closed | 2024-05-15T20:03:27Z | 2024-05-15T20:04:37Z | 2024-05-15T20:04:36Z | rtrompier |
2,298,738,966 | fix(chart): block /metrics on staging alb | null | fix(chart): block /metrics on staging alb: | closed | 2024-05-15T19:59:52Z | 2024-05-15T20:04:29Z | 2024-05-15T20:04:28Z | rtrompier |
2,298,708,588 | expose metrics on another port | it will make it easier to block public access, see https://github.com/huggingface/dataset-viewer/pull/2713#discussion_r1602156054 | expose metrics on another port: it will make it easier to block public access, see https://github.com/huggingface/dataset-viewer/pull/2713#discussion_r1602156054 | open | 2024-05-15T19:41:38Z | 2024-05-15T19:41:47Z | null | severo |
2,298,301,352 | Support create audio file from Opus format | Support create audio file from Opus format.
Fix #2584.
See that the implemented test raises this error:
```
pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1`
...
Unknown input format: 'opus'
```
Details:
```
FAILED tests/viewer_utils/test_assets.py::test_create_audio_fil... | Support create audio file from Opus format: Support create audio file from Opus format.
Fix #2584.
See that the implemented test raises this error:
```
pydub.exceptions.CouldntDecodeError: Decoding failed. ffmpeg returned error code: 1`
...
Unknown input format: 'opus'
```
Details:
```
FAILED tests/viewer... | closed | 2024-05-15T15:59:03Z | 2024-05-16T04:38:41Z | 2024-05-16T04:38:41Z | albertvillanova |
2,298,134,564 | Refactored __hf_index_id and __hf_fts_score | Issue #2798,
I have Refactored the __hf_index_id and __hf_fts_score, and added it to constants.py.
@severo, Please look into it and let me know, if there are any changes needed. | Refactored __hf_index_id and __hf_fts_score: Issue #2798,
I have Refactored the __hf_index_id and __hf_fts_score, and added it to constants.py.
@severo, Please look into it and let me know, if there are any changes needed. | closed | 2024-05-15T14:43:20Z | 2024-05-16T23:57:17Z | 2024-05-16T20:48:52Z | devesh-2002 |
2,298,109,632 | Store which splits are partial and which are complete | In each step, we should store if we truncated the data or not.
Currently, `config-parquet-and-info` only stores the fact that some of the splits have been partially converted to parquet, but not the list of them.
We want to have the info for each split.
The same goes when we convert to duckdb, and when we comp... | Store which splits are partial and which are complete: In each step, we should store if we truncated the data or not.
Currently, `config-parquet-and-info` only stores the fact that some of the splits have been partially converted to parquet, but not the list of them.
We want to have the info for each split.
Th... | open | 2024-05-15T14:33:17Z | 2024-07-30T15:43:01Z | null | severo |
2,297,450,953 | remove storage-admin | fixes #2715
| remove storage-admin: fixes #2715
| closed | 2024-05-15T10:13:42Z | 2024-05-15T18:57:32Z | 2024-05-15T18:57:31Z | severo |
2,297,331,958 | remove unnecessary code | - remove `CacheEntryDoesNotExistError` and use `CachedArtifactNotFoundError` instead
- remove useless `get_response_or_missing_error`
| remove unnecessary code: - remove `CacheEntryDoesNotExistError` and use `CachedArtifactNotFoundError` instead
- remove useless `get_response_or_missing_error`
| closed | 2024-05-15T09:20:16Z | 2024-05-15T13:31:31Z | 2024-05-15T13:31:30Z | severo |
2,297,240,437 | Show the proportion of image/audio formats in stats? | Proposal [here](https://www.linkedin.com/feed/update/urn:li:ugcPost:7114906762599612416?commentUrn=urn%3Ali%3Acomment%3A%28ugcPost%3A7114906762599612416%2C7114909996697419776%29&replyUrn=urn%3Ali%3Acomment%3A%28ugcPost%3A7114906762599612416%2C7118145549895163904%29&dashCommentUrn=urn%3Ali%3Afsd_comment%3A%2871149099966... | Show the proportion of image/audio formats in stats?: Proposal [here](https://www.linkedin.com/feed/update/urn:li:ugcPost:7114906762599612416?commentUrn=urn%3Ali%3Acomment%3A%28ugcPost%3A7114906762599612416%2C7114909996697419776%29&replyUrn=urn%3Ali%3Acomment%3A%28ugcPost%3A7114906762599612416%2C7118145549895163904%29&... | open | 2024-05-15T08:42:17Z | 2024-06-19T19:55:34Z | null | severo |
2,297,189,640 | pufanyi/MIMICIT is now data-only | Part of https://github.com/huggingface/dataset-viewer/issues/2804 (the dataset had already been moved to data-only by the maintainers).
https://huggingface.co/datasets/pufanyi/MIMICIT/tree/main | pufanyi/MIMICIT is now data-only: Part of https://github.com/huggingface/dataset-viewer/issues/2804 (the dataset had already been moved to data-only by the maintainers).
https://huggingface.co/datasets/pufanyi/MIMICIT/tree/main | closed | 2024-05-15T08:18:55Z | 2024-05-15T12:12:17Z | 2024-05-15T12:12:16Z | severo |
2,297,170,131 | Transfer script-based datasets in allowlist to data-only | <strike>all the canonical datasets (we are moving them to orgs/users at the same time)</strike>
The allow list:
https://github.com/huggingface/dataset-viewer/blob/3760d8b7b3e84aed5f536ae8bda2ae1a1548f022/chart/values.yaml#L86 | Transfer script-based datasets in allowlist to data-only: <strike>all the canonical datasets (we are moving them to orgs/users at the same time)</strike>
The allow list:
https://github.com/huggingface/dataset-viewer/blob/3760d8b7b3e84aed5f536ae8bda2ae1a1548f022/chart/values.yaml#L86 | open | 2024-05-15T08:08:53Z | 2024-06-19T16:04:42Z | null | severo |
2,296,479,008 | Empty list of splits on config-split-names | We have some entries with empty splits:
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.countDocuments({kind:"config-split-names", http_status:200, "content.splits":{ $exists: true, $size: 0}})
97
```
Note that the viewer for these datasets always shows: `Error code: Conf... | Empty list of splits on config-split-names: We have some entries with empty splits:
```
Atlas atlas-x5jgb3-shard-0 [primary] datasets_server_cache> db.cachedResponsesBlue.countDocuments({kind:"config-split-names", http_status:200, "content.splits":{ $exists: true, $size: 0}})
97
```
Note that the viewer for thes... | closed | 2024-05-14T21:59:19Z | 2024-06-14T16:37:38Z | 2024-06-14T16:37:38Z | AndreaFrancis |
2,295,791,138 | Add missing comment | Part of https://github.com/huggingface/dataset-viewer/issues/1443
Currently we have 414 entries with UnexpectedErrorCode because of missing comment value:
`{kind:"dataset-compatible-libraries", http_status:{$ne:200}, "details.cause_exception":"KeyError", "details.copied_from_artifact":{$exists:false}}`
```
File... | Add missing comment: Part of https://github.com/huggingface/dataset-viewer/issues/1443
Currently we have 414 entries with UnexpectedErrorCode because of missing comment value:
`{kind:"dataset-compatible-libraries", http_status:{$ne:200}, "details.cause_exception":"KeyError", "details.copied_from_artifact":{$exists:fa... | closed | 2024-05-14T15:29:39Z | 2024-05-15T14:47:10Z | 2024-05-14T18:42:51Z | AndreaFrancis |
2,295,211,531 | Update ruff CI dependency from 0.2.2 to 0.4.4 | Update ruff CI dependency from 0.2.2 to 0.4.4:
- v.0.4.0: https://github.com/astral-sh/ruff/releases/tag/v0.4.0
> Ruff's new parser is >2x faster, which translates to a 20-40% speedup for all linting and formatting invocations.
Also allow future minor updates because they were not permitted. | Update ruff CI dependency from 0.2.2 to 0.4.4: Update ruff CI dependency from 0.2.2 to 0.4.4:
- v.0.4.0: https://github.com/astral-sh/ruff/releases/tag/v0.4.0
> Ruff's new parser is >2x faster, which translates to a 20-40% speedup for all linting and formatting invocations.
Also allow future minor updates bec... | closed | 2024-05-14T11:47:12Z | 2024-05-14T12:48:06Z | 2024-05-14T12:48:05Z | albertvillanova |
2,293,900,270 | Set `library_name='dataset-viewer'` in hfh requests | See https://github.com/huggingface/dataset-viewer/pull/2788/files#diff-45fde89a57a9a9a207231137c66a1ddf75b5830ba0eb7f504f87d87f921afb27R902
```python
headers = build_hf_headers(token=self.hf_token, library_name="dataset-viewer")
``` | Set `library_name='dataset-viewer'` in hfh requests: See https://github.com/huggingface/dataset-viewer/pull/2788/files#diff-45fde89a57a9a9a207231137c66a1ddf75b5830ba0eb7f504f87d87f921afb27R902
```python
headers = build_hf_headers(token=self.hf_token, library_name="dataset-viewer")
``` | open | 2024-05-13T21:45:12Z | 2024-05-13T21:45:29Z | null | severo |
2,293,463,377 | JobManagerCrashedError when trying to generate train split viewer | When loading [this dataset](https://huggingface.co/datasets/ptx0/not-a-hotdog/viewer/default/train) onto the hub which contains an `image` field containing image bytes, I'm receiving a `JobManagerCrashedError`
It's not clear exactly why this happens, or the best way to encode the images in the dataset. I looked exha... | JobManagerCrashedError when trying to generate train split viewer: When loading [this dataset](https://huggingface.co/datasets/ptx0/not-a-hotdog/viewer/default/train) onto the hub which contains an `image` field containing image bytes, I'm receiving a `JobManagerCrashedError`
It's not clear exactly why this happens,... | closed | 2024-05-13T17:54:26Z | 2024-06-21T15:06:43Z | 2024-06-21T15:06:42Z | bghira |
2,293,332,421 | use the `ROW_IDX_COLUMN` constant name instead of copying the value everywhere | https://github.com/huggingface/dataset-viewer/blob/3e4f278a65c04f239dfb26823493d1fda2800d53/libs/libapi/src/libapi/response.py#L13
See the occurrences:
https://github.com/search?q=repo%3Ahuggingface%2Fdataset-viewer+__hf_index_id&type=code
Maybe do the same for `__hf_fts_score` too, and move the values to `co... | use the `ROW_IDX_COLUMN` constant name instead of copying the value everywhere: https://github.com/huggingface/dataset-viewer/blob/3e4f278a65c04f239dfb26823493d1fda2800d53/libs/libapi/src/libapi/response.py#L13
See the occurrences:
https://github.com/search?q=repo%3Ahuggingface%2Fdataset-viewer+__hf_index_id&type... | open | 2024-05-13T16:48:53Z | 2024-05-13T17:58:06Z | null | severo |
2,293,324,028 | Fix stats for data with NaN (not a number) values | While the viewer shows both `null` (=python `None`) and `NaN` (=python `float("nan")`) values equally (as nulls, missing data), they are internally not equal. Parquet statistics do not include `NaN` values in `null` count; `polars` returns `NaN` as a mean/median/std value for column with `NaN` values (so, with at least... | Fix stats for data with NaN (not a number) values: While the viewer shows both `null` (=python `None`) and `NaN` (=python `float("nan")`) values equally (as nulls, missing data), they are internally not equal. Parquet statistics do not include `NaN` values in `null` count; `polars` returns `NaN` as a mean/median/std va... | closed | 2024-05-13T16:44:16Z | 2024-06-06T14:35:20Z | 2024-06-06T14:35:19Z | polinaeterna |
2,293,079,680 | Catch DatasetGenerationCastError and show details | Will fix https://github.com/huggingface/dataset-viewer/issues/2601 and https://github.com/huggingface/dataset-viewer/issues/2723
Related to https://github.com/huggingface/dataset-viewer/issues/1443
Current number of records with this issue (UnexpectedError):
```
{ _id: { error: 'DatasetGenerationError' }, count: ... | Catch DatasetGenerationCastError and show details: Will fix https://github.com/huggingface/dataset-viewer/issues/2601 and https://github.com/huggingface/dataset-viewer/issues/2723
Related to https://github.com/huggingface/dataset-viewer/issues/1443
Current number of records with this issue (UnexpectedError):
```
... | closed | 2024-05-13T14:58:05Z | 2024-05-14T06:56:59Z | 2024-05-13T18:03:43Z | AndreaFrancis |
2,291,916,589 | Enclose column names in double quotes in filter docs and tests | Suggest enclosing column names in double quotes in filter docs for robustness with non-alphanumeric characters.
Note that the viewer uses DuckDB: https://duckdb.org/docs/sql/introduction
> DuckDB's SQL dialect follows the conventions of the PostgreSQL dialect.
And in PostgreSQL SQL Syntax: https://www.postgresql... | Enclose column names in double quotes in filter docs and tests: Suggest enclosing column names in double quotes in filter docs for robustness with non-alphanumeric characters.
Note that the viewer uses DuckDB: https://duckdb.org/docs/sql/introduction
> DuckDB's SQL dialect follows the conventions of the PostgreSQL ... | closed | 2024-05-13T06:16:08Z | 2024-05-13T13:28:08Z | 2024-05-13T13:28:07Z | albertvillanova
2,291,829,635 | Improve robustness of column names containing non-alphanumeric characters | Currently, if a column name contains non-alphanumeric characters, some functionalities are broken, like filter.
We should improve the robustness of the viewer in this regard.
See discussion: https://huggingface.co/datasets/m-a-p/Matrix/discussions/2
CC: @lhoestq | Improve robustness of column names containing non-alphanumeric characters: Currently, if a column name contains non-alphanumeric characters, some functionalities are broken, like filter.
We should improve the robustness of the viewer in this regard.
See discussion: https://huggingface.co/datasets/m-a-p/Matrix/dis... | closed | 2024-05-13T05:04:48Z | 2024-05-13T13:28:08Z | 2024-05-13T13:28:08Z | albertvillanova |
2,289,559,495 | Fix image stats when image column type is bytes, not struct | should fix cases like this https://huggingface.co/datasets/lhoestq/test-cmcv | Fix image stats when image column type is bytes, not struct: should fix cases like this https://huggingface.co/datasets/lhoestq/test-cmcv | closed | 2024-05-10T11:33:59Z | 2024-05-13T12:01:00Z | 2024-05-13T12:00:59Z | polinaeterna |
2,289,423,712 | Install libsndfile 1.2.2 from source to support 0.12 OPUS format | Install libsndfile 1.2.2 from source in Docker image.
Fix #2584. | Install libsndfile 1.2.2 from source to support 0.12 OPUS format: Install libsndfile 1.2.2 from source in Docker image.
Fix #2584. | closed | 2024-05-10T10:06:54Z | 2024-05-16T04:41:01Z | 2024-05-16T04:41:01Z | albertvillanova |
2,288,410,392 | fix: Validate and get compression ratio if available row groups | Fix for https://github.com/huggingface/dataset-viewer/issues/2709
| fix: Validate and get compression ratio if available row groups: Fix for https://github.com/huggingface/dataset-viewer/issues/2709
| closed | 2024-05-09T20:28:50Z | 2024-05-13T14:14:49Z | 2024-05-10T15:17:52Z | AndreaFrancis |
2,286,997,106 | Add lock-no-update command to Makefile | Add lock-no-update command to Makefile.
I find this command useful when there are issues in the local Python environment. See: https://github.com/huggingface/dataset-viewer/pull/2783#issuecomment-2102015730 | Add lock-no-update command to Makefile: Add lock-no-update command to Makefile.
I find this command useful when there are issues in the local Python environment. See: https://github.com/huggingface/dataset-viewer/pull/2783#issuecomment-2102015730 | closed | 2024-05-09T06:33:00Z | 2024-05-14T04:49:15Z | 2024-05-14T04:49:14Z | albertvillanova |
2,283,735,156 | limit logs to 5000 characters | fix #2780 | limit logs to 5000 characters: fix #2780 | closed | 2024-05-07T16:06:48Z | 2024-05-14T10:00:59Z | 2024-05-14T10:00:58Z | severo |
2,283,666,694 | Smart dataset update: don't recompute everything all the time | Don't recompute a viewer if the commit is an update in the main readme content or in yaml tags that don't influence the viewer.
Only available for datasets in `datasets-maintainers` org for now.
It works this way
1. take the old_sha and new_sha of the webhook
2. check that the cache revision is old_sha
3. ch... | Smart dataset update: don't recompute everything all the time: Don't recompute a viewer if the commit is an update in the main readme content or in yaml tags that don't influence the viewer.
Only available for datasets in `datasets-maintainers` org for now.
It works this way
1. take the old_sha and new_sha of ... | closed | 2024-05-07T15:33:09Z | 2024-06-26T13:23:11Z | 2024-06-26T13:23:10Z | lhoestq |
2,283,628,842 | test info.features once they are filled | follow up of #2786. | test info.features once they are filled: follow up of #2786. | closed | 2024-05-07T15:14:33Z | 2024-05-07T15:55:28Z | 2024-05-07T15:55:28Z | severo |
2,283,077,000 | raise if a column name has more than 500 characters | fix #2779.
We could do it in datasets, but it's quicker here. I test at the first two points where we handle the columns in the DAG:
- `config-parquet-and-info`
- `first-rows`
| raise if a column name has more than 500 characters: fix #2779.
We could do it in datasets, but it's quicker here. I test at the first two points where we handle the columns in the DAG:
- `config-parquet-and-info`
- `first-rows`
| closed | 2024-05-07T11:57:34Z | 2024-05-07T13:50:12Z | 2024-05-07T13:50:11Z | severo |
2,282,604,700 | Update jinja2 to 3.1.4 to fix vulnerability | Update jinja2 to 3.1.4 to fix vulnerability. | Update jinja2 to 3.1.4 to fix vulnerability: Update jinja2 to 3.1.4 to fix vulnerability. | closed | 2024-05-07T08:21:04Z | 2024-05-07T08:51:42Z | 2024-05-07T08:51:41Z | albertvillanova |
2,282,594,853 | Update werkzeug to 3.0.3 to fix vulnerability | Update werkzeug to 3.0.3 to fix vulnerability. | Update werkzeug to 3.0.3 to fix vulnerability: Update werkzeug to 3.0.3 to fix vulnerability. | closed | 2024-05-07T08:15:49Z | 2024-05-07T08:25:05Z | 2024-05-07T08:25:05Z | albertvillanova |
2,282,584,746 | Update datasets 2.19.1 with hotfix to shorten long logs | Update datasets 2.19.1 with hotfix to shorten long logs. | Update datasets 2.19.1 with hotfix to shorten long logs: Update datasets 2.19.1 with hotfix to shorten long logs. | closed | 2024-05-07T08:10:18Z | 2024-05-09T06:59:01Z | 2024-05-09T06:59:00Z | albertvillanova |
2,281,747,265 | Rows Post Processing Error | Seeing a re-emergence of this issue on a recently released Parquet Dataset: https://github.com/huggingface/dataset-viewer/issues/602
```
Server error while post-processing the split rows. Please report the issue.
Error code: RowsPostProcessingError
```
https://huggingface.co/datasets/parler-tts/mls_eng_10k | Rows Post Processing Error: Seeing a re-emergence of this issue on a recently released Parquet Dataset: https://github.com/huggingface/dataset-viewer/issues/602
```
Server error while post-processing the split rows. Please report the issue.
Error code: RowsPostProcessingError
```
https://huggingface.co/datase... | closed | 2024-05-06T20:53:46Z | 2024-05-07T18:51:04Z | 2024-05-07T18:51:04Z | Helw150 |
2,281,412,678 | Use huggingface-hub with paths-info fix | Pin `huggingface_hub` to https://github.com/huggingface/huggingface_hub/pull/2271
This should avoid spamming the Hub with `/paths-info` calls when opening files, e.g. to read Parquet files metadata | Use huggingface-hub with paths-info fix: Pin `huggingface_hub` to https://github.com/huggingface/huggingface_hub/pull/2271
This should avoid spamming the Hub with `/paths-info` calls when opening files, e.g. to read Parquet files metadata | closed | 2024-05-06T17:48:18Z | 2024-05-07T09:22:53Z | 2024-05-07T09:22:52Z | lhoestq |
2,280,870,091 | Truncate all the logs | We sometimes have very big logs (one row > 5MB). It's not useful at all and triggers warnings from infra. When we set up the logs configuration, we could try to set a maximum length
https://github.com/huggingface/dataset-viewer/blob/95527c2f1f0b8f077ed9ec74d3c75e45dbc1d00a/libs/libcommon/src/libcommon/log.py#L7-L9
... | Truncate all the logs: We sometimes have very big logs (one row > 5MB). It's not useful at all and triggers warnings from infra. When we set up the logs configuration, we could try to set a maximum length
https://github.com/huggingface/dataset-viewer/blob/95527c2f1f0b8f077ed9ec74d3c75e45dbc1d00a/libs/libcommon/src/li... | closed | 2024-05-06T13:20:07Z | 2024-05-14T10:00:59Z | 2024-05-14T10:00:59Z | severo |
2,280,730,676 | Column name wrongly contains data | See https://huggingface.co/datasets/ankumar/benchmark_questions_gpt-4
In the croissant spec: https://huggingface.co/api/datasets/ankumar/benchmark_questions_gpt-4/croissant
<img width="1916" alt="Capture d'écran 2024-05-06 à 14 02 07" src="https://github.com/huggingface/dataset-viewer/assets/1676121/de3aae1e-9f... | Column name wrongly contains data: See https://huggingface.co/datasets/ankumar/benchmark_questions_gpt-4
In the croissant spec: https://huggingface.co/api/datasets/ankumar/benchmark_questions_gpt-4/croissant
<img width="1916" alt="Capture d'écran 2024-05-06 à 14 02 07" src="https://github.com/huggingface/datase... | closed | 2024-05-06T12:08:33Z | 2024-05-07T16:09:06Z | 2024-05-07T15:55:29Z | severo
2,280,514,000 | Update datasets to 2.19.1 | Update datasets to 2.19.1.
Fix #2777. | Update datasets to 2.19.1: Update datasets to 2.19.1.
Fix #2777. | closed | 2024-05-06T10:11:05Z | 2024-05-06T12:04:01Z | 2024-05-06T12:04:00Z | albertvillanova |
2,280,493,806 | Update datasets to 2.19.1 | Update datasets to 2.19.1: https://github.com/huggingface/datasets/releases/tag/2.19.1
> Bug fixes
> - Fix download for dict of dicts of URLs by @albertvillanova in https://github.com/huggingface/datasets/pull/6871 | Update datasets to 2.19.1: Update datasets to 2.19.1: https://github.com/huggingface/datasets/releases/tag/2.19.1
> Bug fixes
> - Fix download for dict of dicts of URLs by @albertvillanova in https://github.com/huggingface/datasets/pull/6871 | closed | 2024-05-06T10:00:25Z | 2024-05-06T12:04:01Z | 2024-05-06T12:04:01Z | albertvillanova |
2,280,442,300 | Set max number of splits | To avoid cases with 300 splits = 300 load_dataset() * 300 data files resolutions = 90k calls to the Hub API to list files | Set max number of splits: To avoid cases with 300 splits = 300 load_dataset() * 300 data files resolutions = 90k calls to the Hub API to list files | closed | 2024-05-06T09:31:22Z | 2024-05-07T12:51:11Z | 2024-05-07T12:51:10Z | lhoestq |
2,280,416,886 | Support LeRobot datasets? | Currently:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Image']
```
eg on https://... | Support LeRobot datasets?: Currently:
```
Error code: ConfigNamesError
Exception: ValueError
Message: Feature type 'VideoFrame' not found. Available feature types: ['Value', 'ClassLabel', 'Translation', 'TranslationVariableLanguages', 'Sequence', 'Array2D', 'Array3D', 'Array4D', 'Array5D', 'Audio', 'Ima... | open | 2024-05-06T09:16:40Z | 2024-05-06T09:43:36Z | null | severo |
2,280,095,753 | Update tqdm to >= 4.66.3 to fix vulnerability | Update tqdm to >= 4.66.3 to fix vulnerability.
This will fix 13 Dependabot alerts. | Update tqdm to >= 4.66.3 to fix vulnerability: Update tqdm to >= 4.66.3 to fix vulnerability.
This will fix 13 Dependabot alerts. | closed | 2024-05-06T06:00:36Z | 2024-05-06T10:05:51Z | 2024-05-06T10:05:51Z | albertvillanova |
2,278,432,737 | Bump the pip group across 2 directories with 1 update | Bumps the pip group with 1 update in the /docs directory: [tqdm](https://github.com/tqdm/tqdm).
Bumps the pip group with 1 update in the /services/worker directory: [tqdm](https://github.com/tqdm/tqdm).
Updates `tqdm` from 4.64.1 to 4.66.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https:/... | Bump the pip group across 2 directories with 1 update: Bumps the pip group with 1 update in the /docs directory: [tqdm](https://github.com/tqdm/tqdm).
Bumps the pip group with 1 update in the /services/worker directory: [tqdm](https://github.com/tqdm/tqdm).
Updates `tqdm` from 4.64.1 to 4.66.3
<details>
<summary>Relea... | closed | 2024-05-03T21:13:04Z | 2024-05-06T10:06:44Z | 2024-05-06T10:06:42Z | dependabot[bot] |
2,278,410,619 | Bump tqdm from 4.64.1 to 4.66.3 in /docs in the pip group across 1 directory | Bumps the pip group with 1 update in the /docs directory: [tqdm](https://github.com/tqdm/tqdm).
Updates `tqdm` from 4.64.1 to 4.66.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tqdm/tqdm/releases">tqdm's releases</a>.</em></p>
<blockquote>
<h2>tqdm v4.66.3 stable</h2>
<ul... | Bump tqdm from 4.64.1 to 4.66.3 in /docs in the pip group across 1 directory: Bumps the pip group with 1 update in the /docs directory: [tqdm](https://github.com/tqdm/tqdm).
Updates `tqdm` from 4.64.1 to 4.66.3
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/tqdm/tqdm/release... | closed | 2024-05-03T20:54:54Z | 2024-05-03T21:13:09Z | 2024-05-03T21:13:07Z | dependabot[bot] |