| id | title | body | description | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|---|
2,391,405,227 | Fix dataset name when decreasing metrics | null | Fix dataset name when decreasing metrics: | closed | 2024-07-04T19:58:29Z | 2024-07-08T12:04:37Z | 2024-07-04T22:40:16Z | AndreaFrancis |
2,391,185,983 | [Modalities] Account for image URLs dataset for Image modality | right now datasets like https://huggingface.co/datasets/CaptionEmporium/coyo-hd-11m-llavanext are missing the Image modality even though they have image URLs | [Modalities] Account for image URLs dataset for Image modality: right now datasets like https://huggingface.co/datasets/CaptionEmporium/coyo-hd-11m-llavanext are missing the Image modality even though they have image URLs | closed | 2024-07-04T16:28:28Z | 2024-07-15T16:48:11Z | 2024-07-15T16:48:10Z | lhoestq |
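The record above proposes assigning the Image modality to datasets whose string columns contain image URLs. A minimal, hypothetical sketch of such a detector (the helper name, extension list, and threshold are assumptions, not the viewer's actual code) could sample string values and check their extensions:

```python
from urllib.parse import urlparse

# Hypothetical sketch: detect an "image URL" column by sampling string
# values and checking for common image file extensions.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif", ".webp", ".bmp", ".tiff"}

def is_image_url_column(values: list[str], threshold: float = 0.9) -> bool:
    """Return True if at least `threshold` of the sampled values look like image URLs."""
    if not values:
        return False
    hits = 0
    for value in values:
        parsed = urlparse(value)
        if parsed.scheme in ("http", "https"):
            path = parsed.path.lower()
            if any(path.endswith(ext) for ext in IMAGE_EXTENSIONS):
                hits += 1
    return hits / len(values) >= threshold

sample = [
    "https://example.com/images/cat.jpg",
    "https://example.com/images/dog.png",
    "https://example.com/images/bird.webp",
]
print(is_image_url_column(sample))  # an all-image sample -> True
```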
2,390,869,164 | Add threshold to modalities from filetypes | Fix modalities false positives for
- https://huggingface.co/datasets/chenxx1/jia
- https://huggingface.co/datasets/proj-persona/PersonaHub
- https://huggingface.co/datasets/BAAI/Infinity-Instruct
- https://huggingface.co/datasets/m-a-p/COIG-CQIA
- https://huggingface.co/datasets/Magpie-Align/Magpie-Pro-300K-Filter... | Add threshold to modalities from filetypes: Fix modalities false positives for
- https://huggingface.co/datasets/chenxx1/jia
- https://huggingface.co/datasets/proj-persona/PersonaHub
- https://huggingface.co/datasets/BAAI/Infinity-Instruct
- https://huggingface.co/datasets/m-a-p/COIG-CQIA
- https://huggingface.co/... | closed | 2024-07-04T13:23:59Z | 2024-07-04T15:19:47Z | 2024-07-04T15:19:45Z | lhoestq |
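A hedged sketch of the thresholding idea from the record above: a modality is only attributed when files of the matching type are a non-negligible fraction of the repository, so a single stray file no longer triggers a false positive. The mapping, function name, and the 10% cutoff are illustrative assumptions, not the actual implementation.

```python
# Map common file extensions to a modality (illustrative subset).
MODALITY_BY_EXTENSION = {".jpg": "image", ".png": "image", ".wav": "audio", ".csv": "tabular"}

def detect_modalities(extension_counts: dict[str, int], min_fraction: float = 0.1) -> set[str]:
    """Only keep modalities whose files exceed `min_fraction` of the repo."""
    total = sum(extension_counts.values())
    modalities: set[str] = set()
    if total == 0:
        return modalities
    for ext, count in extension_counts.items():
        modality = MODALITY_BY_EXTENSION.get(ext)
        if modality and count / total >= min_fraction:
            modalities.add(modality)
    return modalities

# One stray image among 999 CSV files no longer triggers the image modality:
print(detect_modalities({".csv": 999, ".jpg": 1}))  # {'tabular'}
```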
2,389,086,231 | WIP: Try to get languages from librarian bot PR for FTS | This PR is still in progress, but it is a suggestion about how to get the language from open PRs from the librarian bot.
Please let me know what you think.
Pending: tests, refactor.
For https://huggingface.co/datasets/Osumansan/data-poison/commit/22432ba97e6c559891bd82ca084496a7f8a6699f.diff, it was able to ident... | WIP: Try to get languages from librarian bot PR for FTS: This PR is still in progress, but it is a suggestion about how to get the language from open PRs from the librarian bot.
Please let me know what you think.
Pending: tests, refactor.
For https://huggingface.co/datasets/Osumansan/data-poison/commit/22432ba97e... | closed | 2024-07-03T17:05:20Z | 2024-07-05T12:49:45Z | 2024-07-05T12:49:45Z | AndreaFrancis |
2,388,763,873 | Add duration to cached steps | Will close https://github.com/huggingface/dataset-viewer/issues/2892
As suggested in https://github.com/huggingface/dataset-viewer/pull/2908#pullrequestreview-2126425919, this adds a `duration` field to cached responses. The duration is computed from the `started_at` field of the corresponding job.
Note that this PR adds new ... | Add duration to cached steps: Will close https://github.com/huggingface/dataset-viewer/issues/2892
As suggested in https://github.com/huggingface/dataset-viewer/pull/2908#pullrequestreview-2126425919, this adds a `duration` field to cached responses. The duration is computed from the `started_at` field of the corresponding job.
... | closed | 2024-07-03T14:17:47Z | 2024-07-09T13:06:37Z | 2024-07-09T13:06:35Z | polinaeterna |
2,388,514,107 | Use placeholder revision in urls in cached responses | Previously, we were using the dataset revision in the URLs of image/audio files of cached responses of /first-rows.
However when a dataset gets its README updated, we update the `dataset_git_revision` of the cache entries and the location of the image/audio files on S3 but we don't modify the revision in the URLs in t... | Use placeholder revision in urls in cached responses: Previously, we were using the dataset revision in the URLs of image/audio files of cached responses of /first-rows.
However when a dataset gets its README updated, we update the `dataset_git_revision` of the cache entries and the location of the image/audio files o... | closed | 2024-07-03T12:33:20Z | 2024-07-15T17:27:48Z | 2024-07-15T17:27:46Z | lhoestq |
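The two records above describe the fix: cached asset URLs embed a fixed placeholder instead of a concrete git revision, and the current revision is substituted when the response is served, so moving files on S3 after a README update no longer leaves stale URLs in the cache. The sketch below is hypothetical (URL shape, placeholder token, and function names are assumptions):

```python
# Placeholder token stored in cached responses instead of a git revision.
REVISION_PLACEHOLDER = "{dataset_git_revision}"

def build_cached_url(dataset: str, filename: str) -> str:
    """Build the URL as stored in the cache, revision left as a placeholder."""
    return f"https://example.org/assets/{dataset}/--/{REVISION_PLACEHOLDER}/{filename}"

def resolve_url(cached_url: str, revision: str) -> str:
    """Substitute the current revision when serving the response."""
    return cached_url.replace(REVISION_PLACEHOLDER, revision)

url = build_cached_url("user/dataset", "image.jpg")
print(resolve_url(url, "abc123"))
# https://example.org/assets/user/dataset/--/abc123/image.jpg
```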
2,388,513,656 | Viewer doesn't show images properly after a smart update | we move the images on s3 in case of modified readme, but we don't update the location of the images in the first-rows responses | Viewer doesn't show images properly after a smart update: we move the images on s3 in case of modified readme, but we don't update the location of the images in the first-rows responses | closed | 2024-07-03T12:33:07Z | 2024-07-15T17:27:47Z | 2024-07-15T17:27:47Z | lhoestq |
2,385,415,052 | Viewer shows outdated cache after renaming a repo and creating a new one with the old name | Reported by @lewtun (internal link: https://huggingface.slack.com/archives/C02EMARJ65P/p1719818961944059):
> If I rename a dataset via the UI from D -> D' and then create a new dataset with the same name D, I seem to get a copy instead of an empty dataset
> Indeed it was the dataset viewer showing a cached result - t... | Viewer shows outdated cache after renaming a repo and creating a new one with the old name: Reported by @lewtun (internal link: https://huggingface.slack.com/archives/C02EMARJ65P/p1719818961944059):
> If I rename a dataset via the UI from D -> D' and then create a new dataset with the same name D, I seem to get a copy... | open | 2024-07-02T07:05:53Z | 2024-08-23T14:16:39Z | null | albertvillanova |
2,384,098,328 | Fix ISO 639-1 mapping for stemming | null | Fix ISO 639-1 mapping for stemming: | closed | 2024-07-01T15:08:12Z | 2024-07-01T15:33:48Z | 2024-07-01T15:33:46Z | AndreaFrancis |
2,378,565,820 | Removing has_fts field from split-duckdb-index | Context: https://github.com/huggingface/dataset-viewer/pull/2928#discussion_r1652733919
I'm removing the `has_fts` field from `split-duckdb-index`, given that the `stemmer` field now indicates whether the split supports the feature: `stemmer`=None means no FTS; any other value means FTS is supported.
| Removing has_fts field from split-duckdb-index: Context: https://github.com/huggingface/dataset-viewer/pull/2928#discussion_r1652733919
I'm removing the `has_fts` field from `split-duckdb-index`, given that now, the `stemmer` field will indicate if the split supports the feature. `stemmer`=None means no FTS; otherwi... | closed | 2024-06-27T16:08:56Z | 2024-07-01T15:58:18Z | 2024-07-01T15:58:17Z | AndreaFrancis |
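The semantics described in the record above reduce to a one-liner; this sketch assumes a plain dict response shape, which is hypothetical:

```python
def supports_fts(split_duckdb_index_response: dict) -> bool:
    # After removing `has_fts`, FTS support is derived from `stemmer`:
    # None (or absent) means no full-text search for the split.
    return split_duckdb_index_response.get("stemmer") is not None

print(supports_fts({"stemmer": "english"}))  # True
print(supports_fts({"stemmer": None}))       # False
```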
2,378,431,229 | update test_plan_job_creation_and_termination | ...to fix the CI after https://github.com/huggingface/dataset-viewer/pull/2958 | update test_plan_job_creation_and_termination: ...to fix the CI after https://github.com/huggingface/dataset-viewer/pull/2958 | closed | 2024-06-27T15:11:13Z | 2024-06-28T08:17:24Z | 2024-06-27T15:26:52Z | lhoestq |
2,378,383,575 | Detect rename in smart update | reported in https://huggingface.co/datasets/crossingminds/shopping-queries-image-dataset/discussions/2 | Detect rename in smart update: reported in https://huggingface.co/datasets/crossingminds/shopping-queries-image-dataset/discussions/2 | closed | 2024-06-27T14:50:49Z | 2024-06-27T15:38:41Z | 2024-06-27T15:38:39Z | lhoestq |
2,378,123,069 | add diagram to docs | fixes #2956 | add diagram to docs: fixes #2956 | closed | 2024-06-27T13:09:52Z | 2024-06-27T15:22:21Z | 2024-06-27T15:22:18Z | severo |
2,377,961,711 | Remove blocked only job types | fixes #2957 | Remove blocked only job types: fixes #2957 | closed | 2024-06-27T11:59:46Z | 2024-06-27T13:17:06Z | 2024-06-27T13:17:04Z | severo |
2,377,929,968 | Remove logic for `WORKER_JOB_TYPES_BLOCKED` and `WORKER_JOB_TYPES_ONLY` | they are not used anymore (empty lists). | Remove logic for `WORKER_JOB_TYPES_BLOCKED` and `WORKER_JOB_TYPES_ONLY`: they are not used anymore (empty lists). | closed | 2024-06-27T11:47:27Z | 2024-06-27T13:17:05Z | 2024-06-27T13:17:05Z | severo |
2,377,862,917 | Elaborate a diagram that describes the queues/prioritization logic | it would be useful to discuss issues like https://github.com/huggingface/dataset-viewer/issues/2955 | Elaborate a diagram that describes the queues/prioritization logic: it would be useful to discuss issues like https://github.com/huggingface/dataset-viewer/issues/2955 | closed | 2024-06-27T11:11:22Z | 2024-06-27T15:22:19Z | 2024-06-27T15:22:19Z | severo |
2,377,860,215 | prioritize jobs from trendy/important datasets | internal Slack discussion: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1719482620323589?thread_ts=1719418419.785649&cid=C04L6P8KNQ5
> prioritization of datasets should probably be based on some popularity signal like number of likes or traffic to the dataset page in the future
| prioritize jobs from trendy/important datasets: internal Slack discussion: https://huggingface.slack.com/archives/C04L6P8KNQ5/p1719482620323589?thread_ts=1719418419.785649&cid=C04L6P8KNQ5
> prioritization of datasets should probably be based on some popularity signal like number of likes or traffic to the dataset pa... | open | 2024-06-27T11:09:55Z | 2024-07-30T16:28:58Z | null | severo |
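The quoted suggestion above could translate into a popularity-based priority score; the weighting, field names, and log damping below are made-up illustrations, not a decided design:

```python
import math

def priority_score(likes: int, weekly_page_views: int) -> float:
    """Higher score = processed earlier. Logarithmic damping keeps one
    viral dataset from starving everything else in the queue."""
    return math.log1p(likes) + 0.1 * math.log1p(weekly_page_views)

jobs = [
    {"dataset": "a/tiny", "likes": 2, "views": 50},
    {"dataset": "b/trendy", "likes": 900, "views": 200_000},
]
jobs.sort(key=lambda j: priority_score(j["likes"], j["views"]), reverse=True)
print([j["dataset"] for j in jobs])  # ['b/trendy', 'a/tiny']
```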
2,375,845,547 | Smart update on all datasets | I did a bunch of tests on https://huggingface.co/datasets/datasets-maintainers/test-smart-update and it works fine: edits to README.md unrelated to the `configs` don't trigger a full recomputation of the viewer :)
I also tested with image data and they are correctly handled. | Smart update on all datasets: I did a bunch of tests on https://huggingface.co/datasets/datasets-maintainers/test-smart-update and it works fine: edits to README.md unrelated to the `configs` don't trigger a full recomputation of the viewer :)
I also tested with image data and they are correctly handled. | closed | 2024-06-26T16:57:18Z | 2024-06-26T17:00:02Z | 2024-06-26T17:00:00Z | lhoestq |
2,375,826,127 | add /admin/blocked-datasets endpoint | I will use it in https://observablehq.com/@huggingface/datasets-server-jobs-queue to be able to understand better the current jobs. | add /admin/blocked-datasets endpoint: I will use it in https://observablehq.com/@huggingface/datasets-server-jobs-queue to be able to understand better the current jobs. | closed | 2024-06-26T16:46:35Z | 2024-06-26T19:20:12Z | 2024-06-26T19:20:10Z | severo |
2,375,558,903 | No cache in smart update | ...since it was causing
```
smart_update_dataset failed with PermissionError: [Errno 13] Permission denied: '/.cache'
```
I used the HfFileSystem instead to read the README.md files, since it doesn't use caching | No cache in smart update: ...since it was causing
```
smart_update_dataset failed with PermissionError: [Errno 13] Permission denied: '/.cache'
```
I used the HfFileSystem instead to read the README.md files, since it doesn't use caching | closed | 2024-06-26T14:43:27Z | 2024-06-26T16:05:23Z | 2024-06-26T16:05:21Z | lhoestq |
2,373,152,372 | Add estimated_num_rows in openapi | null | Add estimated_num_rows in openapi: | closed | 2024-06-25T16:46:42Z | 2024-07-25T12:02:34Z | 2024-07-25T12:02:33Z | lhoestq |
2,373,113,430 | add missing migration for estimated_num_rows | null | add missing migration for estimated_num_rows: | closed | 2024-06-25T16:25:31Z | 2024-06-25T16:28:44Z | 2024-06-25T16:28:44Z | lhoestq |
2,372,983,582 | Ignore blocked datasets in WorkerSize metrics for auto scaling | Fix for https://github.com/huggingface/dataset-viewer/issues/2945 | Ignore blocked datasets in WorkerSize metrics for auto scaling: Fix for https://github.com/huggingface/dataset-viewer/issues/2945 | closed | 2024-06-25T15:26:26Z | 2024-06-26T16:23:57Z | 2024-06-26T15:15:52Z | AndreaFrancis |
2,371,365,933 | Exclude blocked datasets from Job metrics | Fix for https://github.com/huggingface/dataset-viewer/issues/2945 | Exclude blocked datasets from Job metrics: Fix for https://github.com/huggingface/dataset-viewer/issues/2945 | closed | 2024-06-25T00:38:56Z | 2024-06-25T15:11:57Z | 2024-06-25T15:11:56Z | AndreaFrancis |
2,371,192,358 | update indexes | I think the old ones will remain. I'll remove them manually...
These two indexes have been proposed by mongo cloud. The reason is: https://github.com/huggingface/dataset-viewer/pull/2933/files#diff-4c951d0a5e21ef5c719bc392169f41e726461028dfd8e049778fedff37ba38c8R422 | update indexes: I think the old ones will remain. I'll remove them manually...
These two indexes have been proposed by mongo cloud. The reason is: https://github.com/huggingface/dataset-viewer/pull/2933/files#diff-4c951d0a5e21ef5c719bc392169f41e726461028dfd8e049778fedff37ba38c8R422 | closed | 2024-06-24T22:08:29Z | 2024-06-24T22:11:13Z | 2024-06-24T22:11:12Z | severo |
2,370,640,346 | add cudf to toctree | Follow up to https://github.com/huggingface/dataset-viewer/pull/2941
I realized the docs were not building in the CI: https://github.com/huggingface/dataset-viewer/actions/runs/9648374360/job/26609396615
Apologies for not checking this in my prior PR. | … |
… | Support `Sequence()` features in Croissant crumbs. | WIP: still checking that it works as intended on mlcroissant side. | Support `Sequence()` features in Croissant crumbs.: WIP: still checking that it works as intended on mlcroissant side. | closed | 2024-06-24T13:08:51Z | 2024-07-22T11:01:13Z | 2024-07-22T11:00:38Z | marcenacp |
2,369,842,186 | Increase blockage duration | blocked for 1h -> 6h
based on the last 6h -> 1h
The effect should be to block more datasets, more quickly, and for a longer time. Currently, all the resources are still dedicated to datasets that are updated every xxx minutes. | Increase blockage duration: blocked for 1h -> 6h
based on the last 6h -> 1h
The effect should be to block more datasets, more quickly, and for a longer time. Currently, all the resources are still dedicated to datasets that are updated every xxx minutes. | closed | 2024-06-24T10:20:46Z | 2024-06-24T10:46:13Z | 2024-06-24T10:46:12Z | severo |
2,369,033,982 | add cudf example | Firstly, thanks a lot for this project and hosting so many datasets!
This PR adds an example of how to read in data using cudf. This can be useful if you have access to a GPU and want to use the GPU to accelerate any ETL.
The code works and can be tested in this Google Colab notebook: https://colab.research.goog... | add cudf example: Firstly, thanks a lot for this project and hosting so many datasets!
This PR adds an example of how to read in data using cudf. This can be useful if you have access to a GPU and want to use the GPU to accelerate any ETL.
The code works and can be tested in this Google Colab notebook: https://c... | closed | 2024-06-24T01:54:48Z | 2024-06-24T15:45:57Z | 2024-06-24T15:45:57Z | raybellwaves |
2,366,797,233 | Add num_rows estimate in hub_cache | Added estimated_num_rows to config-size and dataset-size, and I also updated `hub_cache` to use estimated_num_rows as `num_rows` when possible (this way there is no need to modify anything in moon)
TODO
- [x] mongodb migration of size jobs
- [x] update tests
- [x] support mix of partial and exact num_rows in config-size and ... | Add num_rows estimate in hub_cache: Added estimated_num_rows to config-size and dataset-size, and I also updated `hub_cache` to use estimated_num_rows as `num_rows` when possible (this way there is no need to modify anything in moon)
TODO
- [x] mongodb migration of size jobs
- [x] update tests
- [x] support mix of partial an... | closed | 2024-06-21T15:41:31Z | 2024-06-25T16:18:28Z | 2024-06-25T16:12:54Z | lhoestq |
2,366,734,798 | fix flaky test gen_kwargs | null | fix flaky test gen_kwargs: | closed | 2024-06-21T15:05:48Z | 2024-06-21T15:08:18Z | 2024-06-21T15:08:17Z | lhoestq |
2,366,647,717 | Do not keep DataFrames in memory in orchestrator classes | Do not keep unnecessary DataFrames in memory in orchestrator classes and instead forward them for use only in class instantiation or reset them after being forwarded.
This PR is related to:
- #2921 | Do not keep DataFrames in memory in orchestrator classes: Do not keep unnecessary DataFrames in memory in orchestrator classes and instead forward them for use only in class instantiation or reset them after being forwarded.
This PR is related to:
- #2921 | closed | 2024-06-21T14:18:17Z | 2024-07-29T15:05:03Z | 2024-07-29T15:05:02Z | albertvillanova |
2,366,247,400 | Enable estimate info (size) on all datasets | TODO before merging:
- [x] merge https://github.com/huggingface/dataset-viewer/pull/2932
- [x] test json in prod (allenai/c4 en.noblocklist: correct relative error of only 0.02%)
- [x] test webdataset in prod (datasets-maintainers/small-publaynet-wds-10x: correct with relative error of only 0.07%) | Enable estimate info (size) on all datasets: TODO before merging:
- [x] merge https://github.com/huggingface/dataset-viewer/pull/2932
- [x] test json in prod (allenai/c4 en.noblocklist: correct relative error of only 0.02%)
- [x] test webdataset in prod (datasets-maintainers/small-publaynet-wds-10x: correct with r... | closed | 2024-06-21T10:33:27Z | 2024-06-24T13:14:51Z | 2024-06-21T13:25:22Z | lhoestq |
2,366,097,076 | Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability | Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability.
This PR will close 14 Dependabot alerts. | Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability: Update urllib3 to 1.26.19 and 2.2.2 to fix vulnerability.
This PR will close 14 Dependabot alerts. | closed | 2024-06-21T09:14:26Z | 2024-06-25T11:50:49Z | 2024-06-25T11:50:49Z | albertvillanova |
2,365,937,805 | divide the rate-limit budget by 5 | null | divide the rate-limit budget by 5: | closed | 2024-06-21T07:46:31Z | 2024-06-21T07:51:35Z | 2024-06-21T07:51:34Z | severo |
2,365,913,782 | Update scikit-learn to 1.5.0 to fix vulnerability | Update scikit-learn to 1.5.0 to fix vulnerability.
This will close 12 Dependabot alerts. | Update scikit-learn to 1.5.0 to fix vulnerability: Update scikit-learn to 1.5.0 to fix vulnerability.
This will close 12 Dependabot alerts. | closed | 2024-06-21T07:31:32Z | 2024-06-21T08:21:29Z | 2024-06-21T08:21:28Z | albertvillanova |
2,364,810,195 | create datasetBlockages collection + block datasets | We apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627).
Follows #2931 | create datasetBlockages collection + block datasets: We apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627).
Follows #2931 | closed | 2024-06-20T16:12:52Z | 2024-06-20T20:48:38Z | 2024-06-20T20:48:37Z | severo |
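The rate-limiting idea in the record above (sum a dataset's recent job durations over a sliding window, then block the dataset for a while once it exceeds a budget) can be sketched as below; the window, budget, and blockage values are illustrative, not the production configuration:

```python
from datetime import datetime, timedelta

WINDOW = timedelta(hours=1)      # look-back window for spent compute
BUDGET = timedelta(minutes=30)   # allowed job duration inside the window
BLOCKAGE = timedelta(hours=6)    # how long a dataset stays blocked

def should_block(past_jobs: list[dict], now: datetime) -> bool:
    """past_jobs: [{'started_at': datetime, 'duration': timedelta}, ...]"""
    spent = sum(
        (job["duration"] for job in past_jobs if job["started_at"] >= now - WINDOW),
        timedelta(),
    )
    return spent > BUDGET

now = datetime(2024, 6, 20, 16, 0)
jobs = [
    {"started_at": now - timedelta(minutes=10), "duration": timedelta(minutes=20)},
    {"started_at": now - timedelta(minutes=40), "duration": timedelta(minutes=15)},
    {"started_at": now - timedelta(hours=3), "duration": timedelta(hours=2)},  # outside window
]
print(should_block(jobs, now))  # 35 min spent in the last hour > 30 min budget -> True
```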
2,364,670,609 | Fix estimate info for zip datasets | I simply had to track metadata reads only once to fix the estimator.
(otherwise every file opened in a zip archive triggers an additional read of the metadata with the central directory of the zip file that prevents the estimator from converging)
ex: locally and on only 100MB of parquet conversion (prod is 5GB), ... | Fix estimate info for zip datasets: I simply had to track metadata reads only once to fix the estimator.
(otherwise every file opened in a zip archive triggers an additional read of the metadata with the central directory of the zip file that prevents the estimator from converging)
ex: locally and on only 100MB o... | closed | 2024-06-20T15:00:39Z | 2024-06-21T13:13:05Z | 2024-06-21T13:13:05Z | lhoestq |
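The fix described in the record above can be sketched as follows: when extrapolating a dataset's full size from a partial conversion, the read of the zip central directory (metadata) must be counted only once, otherwise every file opened in the archive inflates the "bytes read" denominator and the estimator never converges. Class and method names here are hypothetical:

```python
class ReadTracker:
    def __init__(self) -> None:
        self.data_bytes = 0
        self.metadata_bytes = 0
        self._metadata_counted = False

    def track_data_read(self, n: int) -> None:
        self.data_bytes += n

    def track_metadata_read(self, n: int) -> None:
        # Count the central-directory read only once, no matter how many
        # files in the archive are opened.
        if not self._metadata_counted:
            self.metadata_bytes += n
            self._metadata_counted = True

    def estimate_total(self, archive_size: int, written_bytes: int) -> int:
        """Extrapolate output size from the fraction of the archive read."""
        read = self.data_bytes + self.metadata_bytes
        return round(written_bytes * archive_size / read)

tracker = ReadTracker()
for _ in range(100):                   # 100 files opened in the archive
    tracker.track_metadata_read(4096)  # central directory re-read each time
    tracker.track_data_read(10_000)
# read 1_004_096 bytes of a 10_040_960-byte archive, wrote 500_000 bytes
print(tracker.estimate_total(archive_size=10_040_960, written_bytes=500_000))  # 5000000
```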
2,364,486,343 | Create pastJobs collection | It will be used to apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627).
| Create pastJobs collection: It will be used to apply rate limiting on the jobs, based on the total duration in a window (see https://github.com/huggingface/dataset-viewer/issues/2279#issuecomment-2178655627).
| closed | 2024-06-20T13:41:00Z | 2024-06-20T20:24:49Z | 2024-06-20T20:24:48Z | severo |
2,364,286,392 | [refactoring] split queue.py in 3 modules | To prepare https://github.com/huggingface/dataset-viewer/issues/2279 | [refactoring] split queue.py in 3 modules: To prepare https://github.com/huggingface/dataset-viewer/issues/2279 | closed | 2024-06-20T12:11:09Z | 2024-06-20T12:39:17Z | 2024-06-20T12:39:16Z | severo |
2,363,928,494 | Use current priority for children jobs | When we change the priority of a job manually after it has started, we want the children jobs to use the same priority | Use current priority for children jobs: When we change the priority of a job manually after it has started, we want the children jobs to use the same priority | closed | 2024-06-20T09:07:05Z | 2024-06-20T09:18:09Z | 2024-06-20T09:18:08Z | severo |
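A minimal sketch of the behavior in the record above (class and field names are hypothetical): children read the parent's *current* priority at creation time, so a manual priority change after the parent started is inherited:

```python
from dataclasses import dataclass

@dataclass
class Job:
    job_type: str
    priority: str  # e.g. "low" or "normal"

def create_children(parent: Job, child_types: list[str]) -> list[Job]:
    # Read the parent's priority at creation time, not the one it was
    # enqueued with, so manual bumps propagate to the children.
    return [Job(job_type=t, priority=parent.priority) for t in child_types]

parent = Job("dataset-config-names", priority="low")
parent.priority = "normal"  # manually bumped after the job started
children = create_children(parent, ["config-split-names", "config-parquet"])
print([c.priority for c in children])  # ['normal', 'normal']
```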
2,363,156,688 | FTS: Add specific stemmer for monolingual datasets | null | FTS: Add specific stemmer for monolingual datasets: | closed | 2024-06-19T21:32:29Z | 2024-06-26T14:14:54Z | 2024-06-26T14:14:52Z | AndreaFrancis |
2,363,092,971 | Separate expected errors from unexpected ones in Grafana | We should never have: `UnexpectedError`, `PreviousStepFormatError`
Also, some errors should only be transitory: `PreviousStepNotReady`, etc.
Instead of one chart with the three kinds of errors, we should have:
- one with the expected errors
- one with the transitory errors
- one with the unexpected errors
By the w... | Separate expected errors from unexpected ones in Grafana: We should never have: `UnexpectedError`, `PreviousStepFormatError`
Also, some errors should only be transitory: `PreviousStepNotReady`, etc.
Instead of one chart with the three kinds of errors, we should have:
- one with the expected errors
- one with the trans... | open | 2024-06-19T20:40:19Z | 2024-06-19T20:40:28Z | null | severo |
2,363,079,584 | only raise error in config-is-valid if format is bad | Should remove the remaining PreviousStepFormatError: https://github.com/huggingface/dataset-viewer/issues/2433#issuecomment-2179014134 | only raise error in config-is-valid if format is bad: Should remove the remaining PreviousStepFormatError: https://github.com/huggingface/dataset-viewer/issues/2433#issuecomment-2179014134 | closed | 2024-06-19T20:27:44Z | 2024-06-20T09:17:53Z | 2024-06-20T09:17:53Z | severo |
2,362,863,359 | Reorder and hide columns within dataset viewer | # Problem
When doing some basic vibe checks for datasets, I realized that the order in which columns are shown might not always be useful for viewing and exploring the data. I might want to quickly show `chosen_model` next to `chosen_response` and `chosen_avg_rating` and continue exploring based on that. Also, this re... | Reorder and hide columns within dataset viewer : # Problem
When doing some basic vibe checks for datasets, I realized that the order in which columns are shown might not always be useful for viewing and exploring the data. I might want to quickly show `chosen_model` next to `chosen_response` and `chosen_avg_rating` an... | open | 2024-06-19T17:32:33Z | 2024-06-19T20:43:39Z | null | davidberenstein1957 |
2,362,699,032 | Delete canonical datasets | The cache still contains entries for canonical datasets that have been moved to their own namespace (see https://github.com/huggingface/dataset-viewer/issues/2478#issuecomment-2179018465 for a list).
We must delete the cache entries (+ assets/cached assets/jobs) | Delete canonical datasets: The cache still contains entries for canonical datasets that have been moved to their own namespace (see https://github.com/huggingface/dataset-viewer/issues/2478#issuecomment-2179018465 for a list).
We must delete the cache entries (+ assets/cached assets/jobs) | closed | 2024-06-19T15:51:11Z | 2024-07-30T16:29:59Z | 2024-07-30T16:29:59Z | severo |
2,362,660,660 | Fix estimate info allow_list | null | Fix estimate info allow_list: | closed | 2024-06-19T15:29:58Z | 2024-06-19T15:30:07Z | 2024-06-19T15:30:06Z | lhoestq |
2,360,869,190 | admin-ui: Do not mark gated datasets as error | In the admin-ui, some datasets have all the features working (is-valid) but are shown as errors.
This is due to gated datasets; I just added another indicator to identify those and to let us know that the issue is unrelated to jobs not being processed correctly.
Another question is whether we should consider this dataset as part ... | admin-ui: Do not mark gated datasets as error: In the admin-ui, some datasets have all the features working (is-valid) but are shown as errors.
This is due to gated datasets; I just added another indicator to identify those and to let us know that the issue is unrelated to jobs not being processed correctly.
Another question... | closed | 2024-06-18T22:48:21Z | 2024-06-19T14:43:09Z | 2024-06-19T14:43:08Z | AndreaFrancis |
2,360,258,601 | Do not keep DataFrames in memory in State classes | Do not keep unnecessary DataFrames in memory in State classes and instead forward them for use only in class instantiation.
This PR reduces memory use by avoiding keeping unnecessary DataFrames in all State classes.
This PR supersedes (neither copies nor views are longer necessary):
- #2903 | Do not keep DataFrames in memory in State classes: Do not keep unnecessary DataFrames in memory in State classes and instead forward them for use only in class instantiation.
This PR reduces memory use by avoiding keeping unnecessary DataFrames in all State classes.
This PR supersedes (neither copies nor views ar... | closed | 2024-06-18T16:27:41Z | 2024-06-19T05:46:09Z | 2024-06-19T05:46:08Z | albertvillanova |
2,359,917,370 | order the steps alphabetically | fixes #2917 | order the steps alphabetically: fixes #2917 | closed | 2024-06-18T13:48:37Z | 2024-06-18T13:48:53Z | 2024-06-18T13:48:52Z | severo |
2,359,907,706 | Do not propagate error for is valid and hub cache | We always want to have a "status", even if some of the previous steps are not available. | Do not propagate error for is valid and hub cache: We always want to have a "status", even if some of the previous steps are not available. | closed | 2024-06-18T13:44:32Z | 2024-06-19T15:09:01Z | 2024-06-19T15:09:00Z | severo |
2,359,600,864 | The "dataset-hub-cache" and "dataset-is-valid" steps should always return a value | For example, we detect that the dataset `nkp37/OpenVid-1M` has Image and Video modalities (steps `dataset-filetypes` and `dataset-modalities`), but because the datasets library fails to list the configs, the following steps also return an error. | The "dataset-hub-cache" and "dataset-is-valid" steps should always return a value: For example, we detect that the dataset `nkp37/OpenVid-1M` has Image and Video modalities (steps `dataset-filetypes` and `dataset-modalities`), but because the datasets library fails to list the configs, the following steps also return a... | closed | 2024-06-18T11:13:41Z | 2024-06-19T15:09:01Z | 2024-06-19T15:09:01Z | severo |
2,359,593,459 | admin UI: automatically fill the steps list | Currently, the step `dataset-filetypes` is absent from the job types list
<img width="521" alt="Capture d’écran 2024-06-18 à 13 09 01" src="https://github.com/huggingface/dataset-viewer/assets/1676121/2fc31de2-63d9-46d2-81f8-cc79dc78e868">
The list should be created automatically from the processing graph + sor... | admin UI: automatically fill the steps list: Currently, the step `dataset-filetypes` is absent from the job types list
<img width="521" alt="Capture d’écran 2024-06-18 à 13 09 01" src="https://github.com/huggingface/dataset-viewer/assets/1676121/2fc31de2-63d9-46d2-81f8-cc79dc78e868">
The list should be created ... | closed | 2024-06-18T11:09:31Z | 2024-06-18T13:49:31Z | 2024-06-18T13:48:53Z | severo |
2,359,570,130 | [modality detection] One image in the repo -> Image modality | See https://huggingface.co/datasets/BAAI/Infinity-Instruct/tree/main
<img width="465" alt="Capture d’écran 2024-06-18 à 12 55 35" src="https://github.com/huggingface/dataset-viewer/assets/1676121/c00b63a2-f1bd-4957-b56e-425c0a99f149">
<img width="469" alt="Capture d’écran 2024-06-18 à 12 55 49" src="https://git... | [modality detection] One image in the repo -> Image modality: See https://huggingface.co/datasets/BAAI/Infinity-Instruct/tree/main
<img width="465" alt="Capture d’écran 2024-06-18 à 12 55 35" src="https://github.com/huggingface/dataset-viewer/assets/1676121/c00b63a2-f1bd-4957-b56e-425c0a99f149">
<img width="469" ... | open | 2024-06-18T10:57:14Z | 2024-06-18T10:57:21Z | null | severo |
2,358,366,100 | Bump urllib3 from 2.0.7 to 2.2.2 in /docs in the pip group across 1 directory | Bumps the pip group with 1 update in the /docs directory: [urllib3](https://github.com/urllib3/urllib3).
Updates `urllib3` from 2.0.7 to 2.2.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib3/urllib3/releases">urllib3's releases</a>.</em></p>
<blockquote>
<h2>2.2.2</h2... | Bump urllib3 from 2.0.7 to 2.2.2 in /docs in the pip group across 1 directory: Bumps the pip group with 1 update in the /docs directory: [urllib3](https://github.com/urllib3/urllib3).
Updates `urllib3` from 2.0.7 to 2.2.2
<details>
<summary>Release notes</summary>
<p><em>Sourced from <a href="https://github.com/urllib... | closed | 2024-06-17T22:17:57Z | 2024-07-27T15:04:09Z | 2024-07-27T15:04:01Z | dependabot[bot] |
2,356,806,606 | Prevents viewer from being pinged for the datasets on both leaderboard orgs | null | Prevents viewer from being pinged for the datasets on both leaderboard orgs: | closed | 2024-06-17T09:05:22Z | 2024-06-17T09:06:18Z | 2024-06-17T09:06:17Z | clefourrier |
2,353,738,090 | dataset-filetypes is not a small step | It has to make HTTP requests, it can manage big objects, and it depends on `datasets`. So... it might be better to process it with "medium" workers, not "light" ones that generally only transform cached entries. | dataset-filetypes is not a small step: It has to make HTTP requests, it can manage big objects, and it depends on `datasets`. So... it might be better to process it with "medium" workers, not "light" ones that generally only transform cached entries. | closed | 2024-06-14T16:43:33Z | 2024-06-14T17:05:45Z | 2024-06-14T17:05:44Z | severo |
2,353,447,269 | Additional modalities detection | - [x] detect tabular
- [x] detect timeseries | Additional modalities detection: - [x] detect tabular
- [x] detect timeseries | closed | 2024-06-14T13:58:17Z | 2024-06-14T17:05:56Z | 2024-06-14T17:05:56Z | severo |
2,352,814,842 | feat(chart): auto deploy when secrets change | Will automatically redeploy applications when secrets are changed in Infisical (max 1 min after the change). | feat(chart): auto deploy when secrets change: Will automatically redeploy applications when secrets are changed in Infisical (max 1 min after the change). | closed | 2024-06-14T08:16:54Z | 2024-06-26T08:20:17Z | 2024-06-26T08:20:16Z | rtrompier |
2,351,595,836 | fix: extensions are always lowercase | followup to #2905 | fix: extensions are always lowercase: followup to #2905 | closed | 2024-06-13T16:41:28Z | 2024-06-13T20:48:49Z | 2024-06-13T20:48:48Z | severo |
2,351,457,901 | Detect dataset modalities using dataset-filetypes | See #2898
| Detect dataset modalities using dataset-filetypes: See #2898
| closed | 2024-06-13T15:34:21Z | 2024-06-14T17:18:51Z | 2024-06-14T17:18:50Z | severo |
2,351,217,562 | Add `started_at` field to cached response documents | Will close https://github.com/huggingface/dataset-viewer/issues/2892
To pass `started_at` field from Job to CachedResponse, I updated `JobInfo` class so that it stores `started_at` info too. It is None by default (and set to actual time by `Queue._start_newest_job_and_delete_others()`). Maybe there is better way to ... | Add `started_at` field to cached response documents: Will close https://github.com/huggingface/dataset-viewer/issues/2892
To pass `started_at` field from Job to CachedResponse, I updated `JobInfo` class so that it stores `started_at` info too. It is None by default (and set to actual time by `Queue._start_newest_job... | closed | 2024-06-13T13:51:53Z | 2024-07-04T15:29:08Z | 2024-07-04T15:29:08Z | polinaeterna |
2,349,106,855 | Move secrets to Infisical | null | Move secrets to Infisical: | closed | 2024-06-12T15:44:59Z | 2024-06-13T18:17:14Z | 2024-06-13T18:17:14Z | rtrompier |
2,349,078,160 | [Config-parquet-and-info] Compute estimated dataset info | This will be useful to show the estimated number of rows of datasets that are partially converted to Parquet
I added `estimated_dataset_info` to the `parquet-and-info` response.
It contains estimations of:
- download_size
- num_bytes
- num_examples
Then we'll be able to propagate this info to the `size` jobs a... | [Config-parquet-and-info] Compute estimated dataset info: This will be useful to show the estimated number of rows of datasets that are partially converted to Parquet
I added `estimated_dataset_info` to the `parquet-and-info` response.
It contains estimations of:
- download_size
- num_bytes
- num_examples
Then... | closed | 2024-06-12T15:31:57Z | 2024-06-19T13:57:05Z | 2024-06-19T13:57:04Z | lhoestq |
2,348,299,148 | Add step dataset-filetypes | needed for https://github.com/huggingface/dataset-viewer/issues/2898#issuecomment-2162460762 | Add step dataset-filetypes: needed for https://github.com/huggingface/dataset-viewer/issues/2898#issuecomment-2162460762 | closed | 2024-06-12T09:37:27Z | 2024-06-13T15:13:07Z | 2024-06-13T15:13:06Z | severo |
2,348,186,694 | Fix skipped async tests caused by pytest-memray | Fix skipped async tests caused by pytest-memray: do not pass memray argument to pytest.
Fix #2901.
Reported underlying issue in pytest-memray:
- https://github.com/bloomberg/pytest-memray/issues/119 | Fix skipped async tests caused by pytest-memray: Fix skipped async tests caused by pytest-memray: do not pass memray argument to pytest.
Fix #2901.
Reported underlying issue in pytest-memray:
- https://github.com/bloomberg/pytest-memray/issues/119 | closed | 2024-06-12T08:46:37Z | 2024-06-17T13:19:44Z | 2024-06-12T11:25:18Z | albertvillanova |
2,348,166,719 | Pass copies of DataFrames instead of views | As the memory leak may be caused by improperly de-referenced objects, better pass copies of DataFrames instead of views.
In a subsequent PR I could try to optimize memory usage by not storing unnecessary data. | Pass copies of DataFrames instead of views: As the memory leak may be caused by improperly de-referenced objects, better pass copies of DataFrames instead of views.
In a subsequent PR I could try to optimize memory usage by not storing unnecessary data. | closed | 2024-06-12T08:37:00Z | 2024-06-20T07:59:38Z | 2024-06-20T07:59:38Z | albertvillanova |
2,347,919,987 | Minor fix id with length of str dataset name | This is a minor fix of some Tasks ids containing the length of the string dataset name.
I discovered this while investigating the memory leak issue. | Minor fix id with length of str dataset name: This is a minor fix of some Tasks ids containing the length of the string dataset name.
I discovered this while investigating the memory leak issue. | closed | 2024-06-12T06:30:18Z | 2024-06-12T08:04:27Z | 2024-06-12T08:03:44Z | albertvillanova |
2,347,790,772 | Async tests using anyio are skipped after including pytest-memray | Async tests using anyio are currently skipped. See: https://github.com/huggingface/dataset-viewer/actions/runs/9464138411/job/26070809625
```
tests/test_authentication.py ssssssssssssssssssssssssssssssssssssssssssssss
=============================== warnings summary ===============================
/home/runner/... | Async tests using anyio are skipped after including pytest-memray: Async tests using anyio are currently skipped. See: https://github.com/huggingface/dataset-viewer/actions/runs/9464138411/job/26070809625
```
tests/test_authentication.py ssssssssssssssssssssssssssssssssssssssssssssss
==============================... | closed | 2024-06-12T04:41:05Z | 2024-06-12T11:25:19Z | 2024-06-12T11:25:18Z | albertvillanova |
2,346,612,106 | 2754 partial instead of error | fix #2754 | 2754 partial instead of error: fix #2754 | closed | 2024-06-11T14:40:02Z | 2024-06-14T13:00:31Z | 2024-06-13T13:57:19Z | severo |
2,346,610,285 | Standardize access to metrics and healthcheck | In some apps, the metrics and healthcheck are public:
- https://datasets-server.huggingface.co/admin/metrics
- https://datasets-server.huggingface.co/sse/metrics
- https://datasets-server.huggingface.co/sse/healthcheck
- https://datasets-server.huggingface.co/healthcheck
- On others, it’s forbidden or not found:... | Standardize access to metrics and healthcheck: In some apps, the metrics and healthcheck are public:
- https://datasets-server.huggingface.co/admin/metrics
- https://datasets-server.huggingface.co/sse/metrics
- https://datasets-server.huggingface.co/sse/healthcheck
- https://datasets-server.huggingface.co/healthc... | open | 2024-06-11T14:39:10Z | 2024-07-11T15:38:17Z | null | AndreaFrancis |
2,346,536,771 | detect more modalities | Currently, we only detect and report "audio", "image" and "text".
Ideally, we would have:
<img width="318" alt="Capture d’écran 2024-06-11 à 16 07 20" src="https://github.com/huggingface/dataset-viewer/assets/1676121/1a21dfff-5c78-45bd-8baf-de1f4b203b6a">
See https://github.com/huggingface/moon-landing/pu... | detect more modalities: Currently, we only detect and report "audio", "image" and "text".
Ideally, we would have:
<img width="318" alt="Capture d’écran 2024-06-11 à 16 07 20" src="https://github.com/huggingface/dataset-viewer/assets/1676121/1a21dfff-5c78-45bd-8baf-de1f4b203b6a">
See https://github.com/hug... | closed | 2024-06-11T14:07:26Z | 2024-06-14T17:18:52Z | 2024-06-14T17:18:51Z | severo |
2,346,351,897 | Use `HfFileSystem` in config-parquet-metadata step instead of `HttpFileSystem` | `config-parquet-metadata` step is failing again for [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) with errors like
```
"Could not read the parquet files: 504, message='Gateway Time-out', url=URL('https://huggingface.co/datasets/HuggingFaceFW/fineweb/resolve/refs%2Fconvert%2Fparquet/default/train-p... | Use `HfFileSystem` in config-parquet-metadata step instead of `HttpFileSystem`: `config-parquet-metadata` step is failing again for [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) with errors like
```
"Could not read the parquet files: 504, message='Gateway Time-out', url=URL('https://huggingface.co... | closed | 2024-06-11T12:50:24Z | 2024-06-13T16:09:14Z | 2024-06-13T16:09:13Z | polinaeterna |
2,345,865,728 | Remove Prometheus context label | Remove Prometheus context label.
Fix #2895. | Remove Prometheus context label: Remove Prometheus context label.
Fix #2895. | closed | 2024-06-11T09:18:13Z | 2024-06-11T11:23:11Z | 2024-06-11T10:44:09Z | albertvillanova |
2,345,427,383 | Too high label cardinality metrics in Prometheus | As discussed privately (CC: @McPatate), we may have too high label cardinality metrics in Prometheus. As stated in Prometheus docs: https://prometheus.io/docs/practices/naming/#labels
> CAUTION: Remember that every unique combination of key-value label pairs represents a new time series, which can dramatically increas... | Too high label cardinality metrics in Prometheus: As discussed privately (CC: @McPatate), we may have too high label cardinality metrics in Prometheus. As stated in Prometheus docs: https://prometheus.io/docs/practices/naming/#labels
> CAUTION: Remember that every unique combination of key-value label pairs represents... | closed | 2024-06-11T05:29:14Z | 2024-06-11T10:44:10Z | 2024-06-11T10:44:10Z | albertvillanova |
2,343,312,605 | feat(ci): add trufflehog secrets detection | ### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit.
### Context
`trufflehog` will scan the commit that triggered the CI for any token leak. `trufflehog` works with a large number of what they call "detectors", each of which will read the text from the commit to see if there ... | feat(ci): add trufflehog secrets detection: ### What does this PR do?
Adding a GH action to scan for leaked secrets on each commit.
### Context
`trufflehog` will scan the commit that triggered the CI for any token leak. `trufflehog` works with a large number of what they call "detectors", each of which will re... | closed | 2024-06-10T08:57:46Z | 2024-06-10T09:18:37Z | 2024-06-10T09:18:36Z | McPatate |
2,343,269,192 | Fix string representation of storage client | Fix string representation of storage client.
The string representation of the storage client is used by the orchestrator `DeleteDatasetStorageTask` in both the task `id` attribute and as a `label` for prometheus_client Histogram:
- https://github.com/huggingface/dataset-viewer/blob/1ea458cb22396f8af955bf5b1ebbb92d8... | Fix string representation of storage client: Fix string representation of storage client.
The string representation of the storage client is used by the orchestrator `DeleteDatasetStorageTask` in both the task `id` attribute and as a `label` for prometheus_client Histogram:
- https://github.com/huggingface/dataset-... | closed | 2024-06-10T08:40:59Z | 2024-06-10T09:41:23Z | 2024-06-10T09:41:22Z | albertvillanova |
2,340,448,231 | Store `started_at` or duration info in cached steps too | This would help to better monitor how much time a step takes, especially depending on dataset size, so that we can see how certain changes influence processing speed and make more informed decisions about these changes and our size limits (see https://github.com/huggingface/dataset-viewer/issues/2878). Especially useful... | Store `started_at` or duration info in cached steps too: This would help to better monitor how much time a step takes, especially depending on dataset size, so that we can see how certain changes influence processing speed and make more informed decisions about these changes and our size limits (see https://github.com/h... | closed | 2024-06-07T13:22:50Z | 2024-07-09T13:06:36Z | 2024-07-09T13:06:36Z | polinaeterna |
2,339,758,443 | Make StorageClient not warn when deleting a non-existing directory | Make StorageClient not warn when trying to delete a non-existing directory.
I think we should only log a warning if the directory exists and could not be deleted.
| Make StorageClient not warn when deleting a non-existing directory: Make StorageClient not warn when trying to delete a non-existing directory.
I think we should only log a warning if the directory exists and could not be deleted.
| closed | 2024-06-07T07:12:26Z | 2024-06-10T13:54:00Z | 2024-06-10T13:53:59Z | albertvillanova |
2,338,816,983 | No mongo cache in DatasetRemovalPlan | don't keep the full query result in memory (as mongoengine does by default)
this should reduce the frequency of memory spikes and could have an effect on the memory issues we're having | No mongo cache in DatasetRemovalPlan: don't keep the full query result in memory (as mongoengine does by default)
this should reduce the frequency of memory spikes and could have an effect on the memory issues we're having | closed | 2024-06-06T17:33:46Z | 2024-06-07T11:17:46Z | 2024-06-07T11:17:45Z | lhoestq |
2,338,519,538 | No auto backfill on most nfaa datasets | they correlate with memory spikes, I'm just hardcoding this to see the impact on memory and further investigate | No auto backfill on most nfaa datasets: they correlate with memory spikes, I'm just hardcoding this to see the impact on memory and further investigate | closed | 2024-06-06T15:20:27Z | 2024-06-06T15:59:12Z | 2024-06-06T15:59:11Z | lhoestq |
2,338,451,407 | Fix get_shape in statistics when argument is bytes, not dict | Will fix duckdb-index step for [common-canvas/commoncatalog-cc-by](https://huggingface.co/datasets/common-canvas/commoncatalog-cc-by)
| Fix get_shape in statistics when argument is bytes, not dict : Will fix duckdb-index step for [common-canvas/commoncatalog-cc-by](https://huggingface.co/datasets/common-canvas/commoncatalog-cc-by)
| closed | 2024-06-06T14:47:47Z | 2024-06-06T16:58:07Z | 2024-06-06T16:58:06Z | polinaeterna |
2,337,295,236 | Update ruff to 0.4.8 | Update ruff to 0.4.8: https://github.com/astral-sh/ruff/releases/tag/v0.4.8
> Linter performance has been improved by around 10% on some microbenchmarks | Update ruff to 0.4.8: Update ruff to 0.4.8: https://github.com/astral-sh/ruff/releases/tag/v0.4.8
> Linter performance has been improved by around 10% on some microbenchmarks | closed | 2024-06-06T04:41:04Z | 2024-06-06T13:27:24Z | 2024-06-06T10:16:06Z | albertvillanova |
2,336,241,023 | Update uvicorn (restart expired workers) | null | Update uvicorn (restart expired workers): | closed | 2024-06-05T15:38:54Z | 2024-06-06T10:27:10Z | 2024-06-06T10:02:46Z | lhoestq |
2,336,103,264 | add missing deps to dev images | otherwise I can't build the images on mac m2
I didn't touch the prod images | add missing deps to dev images: otherwise I can't build the images on mac m2
I didn't touch the prod images | closed | 2024-06-05T14:36:42Z | 2024-06-06T13:28:48Z | 2024-06-06T13:28:47Z | lhoestq |
2,335,801,984 | Add retry mechanism to get_parquet_file in parquet metadata step | ...to see if it helps with [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) `config-parquet-metadata` issue.
Currently the error says just `Server disconnected`, which seems to be an `aiohttp.ServerDisconnectedError` error.
If that works, a more fundamental solution would be to completely switch to... | Add retry mechanism to get_parquet_file in parquet metadata step: ...to see if it helps with [FineWeb](https://huggingface.co/datasets/HuggingFaceFW/fineweb) `config-parquet-metadata` issue.
Currently the error says just `Server disconnected`, which seems to be an `aiohttp.ServerDisconnectedError` error.
If that w... | closed | 2024-06-05T12:41:23Z | 2024-06-05T15:06:59Z | 2024-06-05T13:25:30Z | polinaeterna |
2,334,890,871 | Update pytest to 8.2.2 and pytest-asyncio to 0.23.7 | Update pytest to 8.2.2: https://github.com/pytest-dev/pytest/releases/tag/8.2.2
The update of pytest requires the update of pytest-asyncio as well. See:
- https://github.com/pytest-dev/pytest-asyncio/pull/823
Otherwise, we get an AttributeError: https://github.com/huggingface/dataset-viewer/actions/runs/93783536... | Update pytest to 8.2.2 and pytest-asyncio to 0.23.7: Update pytest to 8.2.2: https://github.com/pytest-dev/pytest/releases/tag/8.2.2
The update of pytest requires the update of pytest-asyncio as well. See:
- https://github.com/pytest-dev/pytest-asyncio/pull/823
Otherwise, we get an AttributeError: https://github... | closed | 2024-06-05T04:48:39Z | 2024-06-06T10:42:21Z | 2024-06-06T10:42:20Z | albertvillanova |
2,333,534,259 | Apply recommendations from duckdb to improve speed | DuckDB has a dedicated page called "My Workload Is Slow"
https://duckdb.org/docs/guides/performance/my_workload_is_slow
and more generally all the https://duckdb.org/docs/guides/performance/overview section.
It could be good to review if some recommendations apply to our usage of duckdb. | Apply recommendations from duckdb to improve speed: DuckDB has a dedicated page called "My Workload Is Slow"
https://duckdb.org/docs/guides/performance/my_workload_is_slow
and more generally all the https://duckdb.org/docs/guides/performance/overview section.
It could be good to review if some recommendations ... | open | 2024-06-04T13:24:33Z | 2024-06-04T13:48:48Z | null | severo |
2,333,396,475 | Remove canonical datasets from docs | Remove canonical datasets from docs, now that we no longer have canonical datasets. | Remove canonical datasets from docs: Remove canonical datasets from docs, now that we no longer have canonical datasets. | closed | 2024-06-04T12:22:43Z | 2024-07-08T06:34:01Z | 2024-07-08T06:34:01Z | albertvillanova |
2,333,376,108 | Allow mnist and fashion mnist + remove canonical dataset logic | null | Allow mnist and fashion mnist + remove canonical dataset logic: | closed | 2024-06-04T12:13:35Z | 2024-06-04T13:16:32Z | 2024-06-04T13:16:32Z | lhoestq |
2,331,996,540 | Use pymongoarrow to get dataset results as dataframe | Fix for https://github.com/huggingface/dataset-viewer/issues/2868 | Use pymongoarrow to get dataset results as dataframe: Fix for https://github.com/huggingface/dataset-viewer/issues/2868 | closed | 2024-06-03T20:29:48Z | 2024-06-05T13:32:08Z | 2024-06-05T13:32:07Z | AndreaFrancis |
2,330,565,896 | Remove or increase the 5GB limit? | The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.
Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randomly, i.e., we have pagination).
Sh... | Remove or increase the 5GB limit?: The dataset viewer shows statistics and provides filter + sort + search only for the first 5GB of each split. We are also unable to provide the exact number of rows for bigger splits.
Note that we "show" all the rows for parquet-native datasets (i.e., we can access the rows randoml... | closed | 2024-06-03T08:55:08Z | 2024-07-22T11:32:49Z | 2024-07-11T15:04:04Z | severo |
2,330,348,030 | Update ruff to 0.4.7 | Update ruff to 0.4.7. | Update ruff to 0.4.7: Update ruff to 0.4.7. | closed | 2024-06-03T07:08:23Z | 2024-06-03T08:46:25Z | 2024-06-03T08:46:24Z | albertvillanova |
2,326,204,531 | Feature Request: Freeze/Restart/Log Viewer Option for Users. | ### Description
I've noticed a few things that could make using the dataset-viewer better. Here are three simple suggestions based on what I've experienced:
### Suggestions
1. **Keeping Parquet Viewer Steady**
- Issue: Every time I tweak something like the content in the README, the Parquet viewer reset... | Feature Request: Freeze/Restart/Log Viewer Option for Users.: ### Description
I've noticed a few things that could make using the dataset-viewer better. Here are three simple suggestions based on what I've experienced:
### Suggestions
1. **Keeping Parquet Viewer Steady**
- Issue: Every time I tweak some... | closed | 2024-05-30T17:27:23Z | 2024-05-31T08:51:34Z | 2024-05-31T08:51:33Z | kargaranamir |
2,326,109,839 | Create a new error code (retryable) for "Consistency check failed" | See https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData/viewer/mal_Mlym
It currently gives this error:
```
Consistency check failed: file should be of size 13393376 but has size 11730856 ((…)a9b50f83/mal_Mlym/taxi1500/dataset.arrow). We are sorry for the inconvenience. Please retry download and pass `force_... | Create a new error code (retryable) for "Consistency check failed": See https://huggingface.co/datasets/cis-lmu/Taxi1500-RawData/viewer/mal_Mlym
It currently gives this error:
```
Consistency check failed: file should be of size 13393376 but has size 11730856 ((…)a9b50f83/mal_Mlym/taxi1500/dataset.arrow). We are... | closed | 2024-05-30T16:42:22Z | 2024-05-31T08:18:47Z | 2024-05-31T08:18:47Z | severo |
2,325,808,452 | Re-add torch dependency | needed for webdataset with .pth files in them | Re-add torch dependency: needed for webdataset with .pth files in them | closed | 2024-05-30T14:22:00Z | 2024-05-30T16:15:46Z | 2024-05-30T16:15:45Z | lhoestq |
2,325,626,169 | add "duration" field to audio cells | As it was done with width/height for image cells: https://github.com/huggingface/dataset-viewer/pull/600
It will help to show additional information in the dataset viewer, in particular: highlighting a row will select the appropriate bar in the durations histogram. | add "duration" field to audio cells: As it was done with width/height for image cells: https://github.com/huggingface/dataset-viewer/pull/600
It will help to show additional information in the dataset viewer, in particular: highlighting a row will select the appropriate bar in the durations histogram. | open | 2024-05-30T13:00:16Z | 2024-05-30T16:26:52Z | null | severo |
2,325,300,313 | BFF endpoint to replace multiple parallel requests | WIP | BFF endpoint to replace multiple parallel requests: WIP | closed | 2024-05-30T10:22:45Z | 2024-07-29T11:37:24Z | 2024-07-07T15:04:11Z | severo |