| id | title | body | description | state | created_at | updated_at | closed_at | user |
|---|---|---|---|---|---|---|---|---|
1,023,861,988 | feat: 🎸 allow empty features | Anyway: we don't check if the number of features correspond to the
number of columns in the rows, so... why not let the features be empty.
The client will have to guess the types. | feat: 🎸 allow empty features: Anyway: we don't check if the number of features correspond to the
number of columns in the rows, so... why not let the features be empty.
The client will have to guess the types. | closed | 2021-10-12T14:01:46Z | 2021-10-12T14:59:38Z | 2021-10-12T14:59:35Z | severo |
1,023,612,320 | feat: 🎸 add /webhook endpoint | See https://github.com/huggingface/datasets-preview-backend/issues/36 | feat: 🎸 add /webhook endpoint: See https://github.com/huggingface/datasets-preview-backend/issues/36 | closed | 2021-10-12T09:58:09Z | 2021-10-12T11:24:27Z | 2021-10-12T11:17:50Z | severo |
1,023,534,749 | feat: 🎸 support allenai/c4 dataset | See https://github.com/huggingface/datasets/issues/2859 and https://github.com/huggingface/datasets-preview-backend/issues/17 | feat: 🎸 support allenai/c4 dataset: See https://github.com/huggingface/datasets/issues/2859 and https://github.com/huggingface/datasets-preview-backend/issues/17 | closed | 2021-10-12T08:41:03Z | 2021-10-12T08:41:42Z | 2021-10-12T08:41:41Z | severo |
1,023,511,100 | Support image datasets | Some examples we want to support
- Array2D
- [x] `mnist` - https://datasets-preview.huggingface.tech/rows?dataset=mnist
- Array3D
- [x] `cifar10` - https://datasets-preview.huggingface.tech/rows?dataset=cifar10
- [x] `cifar100` - https://datasets-preview.huggingface.tech/rows?dataset=cifar100
- local file... | Support image datasets: Some examples we want to support
- Array2D
- [x] `mnist` - https://datasets-preview.huggingface.tech/rows?dataset=mnist
- Array3D
- [x] `cifar10` - https://datasets-preview.huggingface.tech/rows?dataset=cifar10
- [x] `cifar100` - https://datasets-preview.huggingface.tech/rows?datase... | closed | 2021-10-12T08:20:12Z | 2021-10-21T15:30:22Z | 2021-10-21T15:30:22Z | severo |
1,022,900,711 | feat: 🎸 only cache one entry per dataset | Before: every endpoint call generated a cache entry. Now: all the
endpoints calls related to a dataset use the same cached value (which
takes longer to compute). The benefits are a simpler code, and most
importantly: it's easier to manage cache consistency (everything is OK
for a dataset, or nothing)
BREAKING CH... | feat: 🎸 only cache one entry per dataset: Before: every endpoint call generated a cache entry. Now: all the
endpoints calls related to a dataset use the same cached value (which
takes longer to compute). The benefits are a simpler code, and most
importantly: it's easier to manage cache consistency (everything is OK... | closed | 2021-10-11T16:20:09Z | 2021-10-11T16:23:24Z | 2021-10-11T16:23:23Z | severo |
1,021,343,485 | Refactor | null | Refactor: | closed | 2021-10-08T17:58:27Z | 2021-10-08T17:59:05Z | 2021-10-08T17:59:04Z | severo |
1,020,847,936 | feat: 🎸 remove benchmark related code | It's covered by the /cache-reports endpoint | feat: 🎸 remove benchmark related code: It's covered by the /cache-reports endpoint | closed | 2021-10-08T08:43:52Z | 2021-10-08T08:44:00Z | 2021-10-08T08:43:59Z | severo |
1,020,819,085 | No cache expiration | null | No cache expiration: | closed | 2021-10-08T08:10:41Z | 2021-10-08T08:35:30Z | 2021-10-08T08:35:29Z | severo |
1,015,568,634 | Add valid endpoint | Fixes #24 | Add valid endpoint: Fixes #24 | closed | 2021-10-04T19:51:26Z | 2021-10-04T19:51:56Z | 2021-10-04T19:51:55Z | severo |
1,015,381,449 | Cache functions and responses | null | Cache functions and responses: | closed | 2021-10-04T16:23:56Z | 2021-10-04T16:24:46Z | 2021-10-04T16:24:45Z | severo |
1,015,121,236 | raft - 404 | https://datasets-preview.huggingface.tech/rows?dataset=raft&config=ade_corpus_v2&split=test | raft - 404: https://datasets-preview.huggingface.tech/rows?dataset=raft&config=ade_corpus_v2&split=test | closed | 2021-10-04T12:33:03Z | 2021-10-12T08:52:15Z | 2021-10-12T08:52:15Z | severo |
1,013,636,022 | Should the features be associated to a split, instead of a config? | For now, we assume that all the splits of a config will share the same features, but it seems that it's not necessarily the case (https://github.com/huggingface/datasets/issues/2968). Am I right @lhoestq ?
Is there any example of such a dataset on the hub or in the canonical ones? | Should the features be associated to a split, instead of a config?: For now, we assume that all the splits of a config will share the same features, but it seems that it's not necessarily the case (https://github.com/huggingface/datasets/issues/2968). Am I right @lhoestq ?
Is there any example of such a dataset on t... | closed | 2021-10-01T18:14:53Z | 2021-10-05T09:25:04Z | 2021-10-05T09:25:04Z | severo |
1,013,142,890 | /splits does not error when no config exists and a wrong config is passed | https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=doesnotexist
returns:
```
{
"splits": [
{
"dataset": "sent_comp",
"config": "doesnotexist",
"split": "validation"
},
{
"dataset": "sent_comp",
... | /splits does not error when no config exists and a wrong config is passed: https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=doesnotexist
returns:
```
{
"splits": [
{
"dataset": "sent_comp",
"config": "doesnotexist",
"split": "validat... | closed | 2021-10-01T09:52:40Z | 2022-09-16T20:09:54Z | 2022-09-16T20:09:53Z | severo |
1,008,141,232 | generate infos | fixes #52 | generate infos: fixes #52 | closed | 2021-09-27T13:17:04Z | 2021-09-27T13:21:01Z | 2021-09-27T13:21:00Z | severo |
1,008,034,013 | Regenerate dataset-info instead of loading it? | Currently, getting the rows with `/rows` requires a previous (internal) call to `/infos` to get the features (type of the columns). But sometimes the dataset-info.json file is missing, or not coherent with the dataset script (for example: https://huggingface.co/datasets/lhoestq/custom_squad/tree/main), while we are usi... | Regenerate dataset-info instead of loading it?: Currently, getting the rows with `/rows` requires a previous (internal) call to `/infos` to get the features (type of the columns). But sometimes the dataset-info.json file is missing, or not coherent with the dataset script (for example: https://huggingface.co/datasets/l... | closed | 2021-09-27T11:28:13Z | 2021-09-27T13:21:00Z | 2021-09-27T13:21:00Z | severo |
1,006,450,875 | cache both the functions returns and the endpoints results | Currently only the endpoints results are cached. We use them inside the code to get quick results by taking advantage of the cache, but it's not their aim, and we have to parse / decode.
It would be better to directly cache the results of the functions (memoize).
Also: we could cache the raised exceptions as here... | cache both the functions returns and the endpoints results: Currently only the endpoints results are cached. We use them inside the code to get quick results by taking advantage of the cache, but it's not their aim, and we have to parse / decode.
It would be better to directly cache the results of the functions (mem... | closed | 2021-09-24T13:10:23Z | 2021-10-04T16:24:45Z | 2021-10-04T16:24:45Z | severo |
1,006,447,227 | refactor to reduce functions complexity | For example,
https://github.com/huggingface/datasets-preview-backend/blob/13e533238eb6b6dfdcd8e7d3c23ed134c67b5525/src/datasets_preview_backend/queries/rows.py#L25
is rightly flagged by https://sourcery.ai/ as too convoluted. It's hard to debug and test, and there are too many special cases. | refactor to reduce functions complexity: For example,
https://github.com/huggingface/datasets-preview-backend/blob/13e533238eb6b6dfdcd8e7d3c23ed134c67b5525/src/datasets_preview_backend/queries/rows.py#L25
is rightly flagged by https://sourcery.ai/ as too convoluted. It's hard to debug and test, and there are too... | closed | 2021-09-24T13:06:22Z | 2021-10-12T08:49:32Z | 2021-10-12T08:49:31Z | severo |
1,006,441,817 | Add types to rows | fixes #25 | Add types to rows: fixes #25 | closed | 2021-09-24T13:00:43Z | 2021-09-24T13:06:14Z | 2021-09-24T13:06:13Z | severo |
1,006,439,805 | "flatten" the nested values? | See https://huggingface.co/docs/datasets/process.html#flatten | "flatten" the nested values?: See https://huggingface.co/docs/datasets/process.html#flatten | closed | 2021-09-24T12:58:34Z | 2022-09-16T20:10:22Z | 2022-09-16T20:10:22Z | severo |
1,006,364,861 | Info to infos | null | Info to infos: | closed | 2021-09-24T11:26:50Z | 2021-09-24T11:44:06Z | 2021-09-24T11:44:05Z | severo |
1,006,203,348 | feat: 🎸 blocklist "allenai/c4" dataset | see https://github.com/huggingface/datasets-preview-backend/issues/17#issuecomment-918515398 | feat: 🎸 blocklist "allenai/c4" dataset: see https://github.com/huggingface/datasets-preview-backend/issues/17#issuecomment-918515398 | closed | 2021-09-24T08:14:01Z | 2021-09-24T08:18:36Z | 2021-09-24T08:18:35Z | severo |
1,006,196,371 | use `environs` to manage the env vars? | https://pypi.org/project/environs/ instead of utils.py | use `environs` to manage the env vars?: https://pypi.org/project/environs/ instead of utils.py | closed | 2021-09-24T08:05:38Z | 2022-09-19T08:49:33Z | 2022-09-19T08:49:33Z | severo |
1,005,706,223 | Grouped endpoints | null | Grouped endpoints: | closed | 2021-09-23T18:01:49Z | 2021-09-23T18:11:53Z | 2021-09-23T18:11:52Z | severo |
1,005,536,552 | Fix serialization in benchmark | ```
INFO: 127.0.0.1:38826 - "GET /info?dataset=oelkrise%2FCRT HTTP/1.1" 404 Not Found
``` | Fix serialization in benchmark: ```
INFO: 127.0.0.1:38826 - "GET /info?dataset=oelkrise%2FCRT HTTP/1.1" 404 Not Found
``` | closed | 2021-09-23T14:57:58Z | 2021-09-24T07:31:56Z | 2021-09-24T07:31:56Z | severo |
1,005,354,426 | feat: 🎸 remove ability to choose the number of extracted rows | ✅ Closes: #32 | feat: 🎸 remove ability to choose the number of extracted rows: ✅ Closes: #32 | closed | 2021-09-23T12:10:42Z | 2021-09-23T12:18:47Z | 2021-09-23T12:18:46Z | severo |
1,005,279,019 | Move benchmark to a different repo? | It's a client of the API | Move benchmark to a different repo?: It's a client of the API | closed | 2021-09-23T10:44:08Z | 2021-10-12T08:49:11Z | 2021-10-12T08:49:11Z | severo |
1,005,231,422 | Add basic cache | Fixes #3. See https://github.com/huggingface/datasets-preview-backend/milestone/1 for remaining issues. | Add basic cache: Fixes #3. See https://github.com/huggingface/datasets-preview-backend/milestone/1 for remaining issues. | closed | 2021-09-23T09:54:04Z | 2021-09-23T09:58:14Z | 2021-09-23T09:58:14Z | severo |
1,005,220,766 | Enable the private datasets | The code is already present to pass the token, but it's disabled in the code (hardcoded):
https://github.com/huggingface/datasets-preview-backend/blob/df04ffba9ca1a432ed65e220cf7722e518e0d4f8/src/datasets_preview_backend/cache.py#L119-L120
- [ ] enable private datasets and manage their cache adequately
- [ ] sep... | Enable the private datasets: The code is already present to pass the token, but it's disabled in the code (hardcoded):
https://github.com/huggingface/datasets-preview-backend/blob/df04ffba9ca1a432ed65e220cf7722e518e0d4f8/src/datasets_preview_backend/cache.py#L119-L120
- [ ] enable private datasets and manage thei... | closed | 2021-09-23T09:42:47Z | 2024-01-31T10:14:08Z | 2024-01-31T09:50:02Z | severo |
1,005,219,639 | Provide the ETag header | - [ ] set and manage the `ETag` header to save bandwidth when the client (browser) revalidates. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching and https://gist.github.com/timheap/1f4d9284e4f4d4f545439577c0ca6300
```python
# TODO: use key for ETag? It will need to be serialized
# ke... | Provide the ETag header: - [ ] set and manage the `ETag` header to save bandwidth when the client (browser) revalidates. See https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching and https://gist.github.com/timheap/1f4d9284e4f4d4f545439577c0ca6300
```python
# TODO: use key for ETag? It will need to be... | closed | 2021-09-23T09:41:40Z | 2022-09-19T10:00:49Z | 2022-09-19T10:00:48Z | severo |
1,005,217,301 | Update canonical datasets using a webhook | Webhook invalidation of canonical datasets (GitHub):
- [x] setup the [`revision` argument](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset) to download datasets from the master branch - #119
- [x] set up a webhook on `datasets` library on every push to the master... | Update canonical datasets using a webhook: Webhook invalidation of canonical datasets (GitHub):
- [x] setup the [`revision` argument](https://huggingface.co/docs/datasets/package_reference/loading_methods.html#datasets.load_dataset) to download datasets from the master branch - #119
- [x] set up a webhook on `da... | closed | 2021-09-23T09:39:18Z | 2022-01-26T13:44:08Z | 2022-01-26T11:20:04Z | severo |
1,005,216,379 | Update hub datasets with webhook | Webhook invalidation of community datasets (hf.co):
- [x] setup a webhook on hf.co for datasets creation, update, deletion -> waiting for https://github.com/huggingface/moon-landing/issues/1344
- [x] add an endpoint to listen to the webhook
- [x] parse the webhook to find which caches should be invalidated
- [x] re... | Update hub datasets with webhook: Webhook invalidation of community datasets (hf.co):
- [x] setup a webhook on hf.co for datasets creation, update, deletion -> waiting for https://github.com/huggingface/moon-landing/issues/1344
- [x] add an endpoint to listen to the webhook
- [x] parse the webhook to find which cach... | closed | 2021-09-23T09:38:17Z | 2021-10-18T12:33:27Z | 2021-10-18T12:33:27Z | severo |
1,005,214,423 | Refresh the cache? | Force a cache refresh on a regular basis (cron) | Refresh the cache?: Force a cache refresh on a regular basis (cron) | closed | 2021-09-23T09:36:02Z | 2021-10-12T08:34:41Z | 2021-10-12T08:34:41Z | severo |
1,005,213,132 | warm the cache | Warm the cache at application startup. We want:
- to avoid blocking the application, so: run asynchronously, and without hammering the server
- to have a warm cache as fast as possible (persisting the previous cache, then refreshing it at startup? - related: #35 )
- [x] create a function to list all the datasets a... | warm the cache: Warm the cache at application startup. We want:
- to avoid blocking the application, so: run asynchronously, and without hammering the server
- to have a warm cache as fast as possible (persisting the previous cache, then refreshing it at startup? - related: #35 )
- [x] create a function to list al... | closed | 2021-09-23T09:34:35Z | 2021-10-12T08:35:27Z | 2021-10-12T08:35:27Z | severo |
1,005,206,110 | Add a parameter to specify the number of rows | It's a problem for the cache, so until we manage random access, we can:
- [ ] fill the cache with a large (maximum) number of rows, ie up to 1000
- [ ] also cache the default request (N = 100) -> set to the parameter used in moon-landing
- [ ] if a request comes with N = 247, for example, generate the response on ... | Add a parameter to specify the number of rows: It's a problem for the cache, so until we manage random access, we can:
- [ ] fill the cache with a large (maximum) number of rows, ie up to 1000
- [ ] also cache the default request (N = 100) -> set to the parameter used in moon-landing
- [ ] if a request comes with ... | closed | 2021-09-23T09:26:34Z | 2022-09-16T20:13:55Z | 2022-09-16T20:13:55Z | severo |
1,005,204,434 | Remove the `rows`/ `num_rows` argument | We will fix it to 100, in order to simplify the cache management. | Remove the `rows`/ `num_rows` argument: We will fix it to 100, in order to simplify the cache management. | closed | 2021-09-23T09:24:46Z | 2021-09-23T12:18:46Z | 2021-09-23T12:18:46Z | severo |
1,005,200,587 | Manage concurrency | Currently (in the cache branch), only one worker is allowed.
We want to have multiple workers, but for that we need to have a shared cache:
- [ ] migrate from diskcache to redis
- [ ] remove the hardcoded limit of 1 worker | Manage concurrency: Currently (in the cache branch), only one worker is allowed.
We want to have multiple workers, but for that we need to have a shared cache:
- [ ] migrate from diskcache to redis
- [ ] remove the hardcoded limit of 1 worker | closed | 2021-09-23T09:20:43Z | 2021-10-06T08:08:35Z | 2021-10-06T08:08:35Z | severo |
999,436,625 | Use FastAPI instead of only Starlette? | It would allow to have doc, and surely a lot of other benefits | Use FastAPI instead of only Starlette?: It would allow to have doc, and surely a lot of other benefits | closed | 2021-09-17T14:45:40Z | 2021-09-20T10:25:17Z | 2021-09-20T07:13:00Z | severo |
999,350,374 | Ensure non-ASCII characters are handled as expected | See https://github.com/huggingface/datasets-viewer/pull/15
It should be tested | Ensure non-ASCII characters are handled as expected: See https://github.com/huggingface/datasets-viewer/pull/15
It should be tested | closed | 2021-09-17T13:22:39Z | 2022-09-16T20:15:19Z | 2022-09-16T20:15:19Z | severo |
997,976,540 | Add endpoint to get splits + configs at once | See https://github.com/huggingface/moon-landing/pull/1040#discussion_r709494993
Also evaluate doing the same for the rows (the payload might be too heavy) | Add endpoint to get splits + configs at once: See https://github.com/huggingface/moon-landing/pull/1040#discussion_r709494993
Also evaluate doing the same for the rows (the payload might be too heavy) | closed | 2021-09-16T09:14:52Z | 2021-09-23T18:13:17Z | 2021-09-23T18:13:17Z | severo |
997,907,203 | endpoint to generate bitmaps for mnist or cifar10 on the fly | if there are very few instances of raw image data in datasets i think it's best to generate server side vs. writing client side code
no strong opinion on this though, depends on the number/variety of datasets i guess
(For Audio I don't know if we have some datasets with raw tensors inside them? @lhoestq @albertv... | endpoint to generate bitmaps for mnist or cifar10 on the fly: if there are very few instances of raw image data in datasets i think it's best to generate server side vs. writing client side code
no strong opinion on this though, depends on the number/variety of datasets i guess
(For Audio I don't know if we have... | closed | 2021-09-16T08:03:09Z | 2021-10-18T12:42:24Z | 2021-10-18T12:42:24Z | julien-c |
997,904,214 | Add endpoint to proxy local files inside datasets' data | for instance for:
<img width="2012" alt="Screenshot 2021-09-16 at 09 59 38" src="https://user-images.githubusercontent.com/326577/133573786-ca9b8b60-2271-4256-b1e4-aa05302fd2f3.png">
| Add endpoint to proxy local files inside datasets' data: for instance for:
<img width="2012" alt="Screenshot 2021-09-16 at 09 59 38" src="https://user-images.githubusercontent.com/326577/133573786-ca9b8b60-2271-4256-b1e4-aa05302fd2f3.png">
| closed | 2021-09-16T08:00:11Z | 2021-10-21T15:35:49Z | 2021-10-21T15:35:48Z | julien-c |
997,893,918 | Add columns types to /rows response | We currently just have the keys (column identifier) and values. We might want to give each column's type: "our own serialization scheme".
For ClassLabel, this means to pass the "pretty names" (the names associated with the values) along with the type
See https://github.com/huggingface/moon-landing/pull/1040#discussio... | Add columns types to /rows response: We currently just have the keys (column identifier) and values. We might want to give each column's type: "our own serialization scheme".
For ClassLabel, this means to pass the "pretty names" (the names associated with the values) along with the type
See https://github.com/hugging... | closed | 2021-09-16T07:48:51Z | 2021-09-24T13:06:13Z | 2021-09-24T13:06:13Z | severo |
997,434,233 | Expose a list of "valid" i.e. previewable datasets for moon-landing to be able to tag/showcase them | (linked to caching and pre-warming, obviously) | Expose a list of "valid" i.e. previewable datasets for moon-landing to be able to tag/showcase them: (linked to caching and pre-warming, obviously) | closed | 2021-09-15T19:28:47Z | 2021-10-04T19:51:55Z | 2021-10-04T19:51:55Z | julien-c |
997,012,873 | run benchmark automatically every week, and store the results | - [ ] create a github action?
- [ ] store the report in... an HF dataset? or in a github repo? should it be private?
- [ ] get these reports from https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading | run benchmark automatically every week, and store the results: - [ ] create a github action?
- [ ] store the report in... an HF dataset? or in a github repo? should it be private?
- [ ] get these reports from https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading | closed | 2021-09-15T12:15:00Z | 2021-10-12T08:48:02Z | 2021-10-12T08:48:01Z | severo |
995,697,610 | `make benchmark` is very long and blocks | Sometimes `make benchmark` blocks (nothing happens, and only one process is running, while the load is low). Ideally, it would not block, and other processes would be launched anyway so that the full capacity of the CPUs would be used (`-j -l 7` parameters of `make`)
To unblock, I have to kill and relaunch `make ben... | `make benchmark` is very long and blocks: Sometimes `make benchmark` blocks (nothing happens, and only one process is running, while the load is low). Ideally, it would not block, and other processes would be launched anyway so that the full capacity of the CPUs would be used (`-j -l 7` parameters of `make`)
To unbl... | closed | 2021-09-14T07:42:44Z | 2021-10-12T08:34:17Z | 2021-10-12T08:34:17Z | severo |
995,690,502 | exception seen during `make benchmark` | Not sure which dataset threw this exception though, that's why I put the previous rows for further investigation.
```
poetry run python ../scripts/get_rows_report.py wikiann___CONFIG___or___SPLIT___test ../tmp/get_rows_reports/wikiann___CONFIG___or___SPLIT___test.json
poetry run python ../scripts/get_rows_report.p... | exception seen during `make benchmark`: Not sure which dataset threw this exception though, that's why I put the previous rows for further investigation.
```
poetry run python ../scripts/get_rows_report.py wikiann___CONFIG___or___SPLIT___test ../tmp/get_rows_reports/wikiann___CONFIG___or___SPLIT___test.json
poetry... | closed | 2021-09-14T07:33:59Z | 2021-09-23T12:39:13Z | 2021-09-23T12:39:13Z | severo |
995,212,806 | Upgrade datasets to 1.12.0 | - [x] See https://github.com/huggingface/datasets/releases/tag/1.12.0
- [x] launch benchmark and report to https://github.com/huggingface/datasets-preview-backend/issues/9 | Upgrade datasets to 1.12.0: - [x] See https://github.com/huggingface/datasets/releases/tag/1.12.0
- [x] launch benchmark and report to https://github.com/huggingface/datasets-preview-backend/issues/9 | closed | 2021-09-13T18:49:24Z | 2021-09-14T08:31:24Z | 2021-09-14T08:31:24Z | severo |
985,093,664 | Add unit tests to CI | null | Add unit tests to CI: | closed | 2021-09-01T12:30:48Z | 2021-09-02T12:56:45Z | 2021-09-02T12:56:45Z | severo |
984,798,758 | CI: how to acknowledge a "safety" warning? | We use `safety` to check vulnerabilities in the dependencies. But in the case below, `tensorflow` is marked as insecure while the last published version on PyPI is still 2.6.0. What to do in this case?
```
+==============================================================================+
| ... | CI: how to acknowledge a "safety" warning?: We use `safety` to check vulnerabilities in the dependencies. But in the case below, `tensorflow` is marked as insecure while the last published version on PyPI is still 2.6.0. What to do in this case?
```
+=================================================================... | closed | 2021-09-01T07:20:45Z | 2021-09-15T11:58:56Z | 2021-09-15T11:58:48Z | severo |
984,024,435 | Prevent DoS when accessing some datasets | For example: https://huggingface.co/datasets/allenai/c4 script is doing 69219 output requests **on every received request**, which occupies all the CPUs.
```
pm2 logs
```
```
0|datasets | INFO: 3.238.194.17:0 - "GET /configs?dataset=allenai/c4 HTTP/1.1" 200 OK
```
```
Check remote data files: 78%|██... | Prevent DoS when accessing some datasets: For example: https://huggingface.co/datasets/allenai/c4 script is doing 69219 output requests **on every received request**, which occupies all the CPUs.
```
pm2 logs
```
```
0|datasets | INFO: 3.238.194.17:0 - "GET /configs?dataset=allenai/c4 HTTP/1.1" 200 OK
``... | closed | 2021-08-31T15:56:11Z | 2021-10-15T15:57:49Z | 2021-10-15T15:57:49Z | severo |
981,207,911 | Raise an issue when no row can be fetched | Currently, https://datasets-preview.huggingface.tech/rows?dataset=superb&config=asr&split=train&rows=5 returns
```
{
"dataset": "superb",
"config": "asr",
"split": "train",
"rows": [ ]
}
```
while it should return 5 rows. An error should be raised in that case.
Beware: manage the special... | Raise an issue when no row can be fetched: Currently, https://datasets-preview.huggingface.tech/rows?dataset=superb&config=asr&split=train&rows=5 returns
```
{
"dataset": "superb",
"config": "asr",
"split": "train",
"rows": [ ]
}
```
while it should return 5 rows. An error should be raised ... | closed | 2021-08-27T12:36:50Z | 2021-09-14T08:50:53Z | 2021-09-14T08:50:53Z | severo |
980,264,033 | Add an endpoint to get the dataset card? | See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L427, `full` argument
The dataset card is the README.md. | Add an endpoint to get the dataset card?: See https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/hf_api.py#L427, `full` argument
The dataset card is the README.md. | closed | 2021-08-26T13:43:29Z | 2022-09-16T20:15:52Z | 2022-09-16T20:15:52Z | severo |
980,177,961 | Properly manage the case config is None | For example:
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=null returns
~~~json
{
"dataset": "sent_comp",
"config": "null",
"splits": [
"validation",
"train"
]
}
~~~
this should have errored since there is no `"null"` config (it's `null`).
... | Properly manage the case config is None: For example:
https://datasets-preview.huggingface.tech/splits?dataset=sent_comp&config=null returns
~~~json
{
"dataset": "sent_comp",
"config": "null",
"splits": [
"validation",
"train"
]
}
~~~
this should have errored since the... | closed | 2021-08-26T12:16:27Z | 2021-08-26T13:26:00Z | 2021-08-26T13:26:00Z | severo |
979,971,116 | Get random access to the rows | Currently, only the first rows can be obtained with /rows. We want to get access to slices of the rows through pagination, eg /rows?from=40000&rows=10
| Get random access to the rows: Currently, only the first rows can be obtained with /rows. We want to get access to slices of the rows through pagination, eg /rows?from=40000&rows=10
| closed | 2021-08-26T08:21:34Z | 2023-06-14T12:16:22Z | 2023-06-14T12:16:22Z | severo |
979,408,913 | Install the datasets that require manual download | Some datasets require a manual download (https://huggingface.co/datasets/arxiv_dataset, for example). We might manually download them on the server, so that the backend returns the rows, instead of an error. | Install the datasets that require manual download: Some datasets require a manual download (https://huggingface.co/datasets/arxiv_dataset, for example). We might manually download them on the server, so that the backend returns the rows, instead of an error. | closed | 2021-08-25T16:30:11Z | 2022-06-17T11:47:18Z | 2022-06-17T11:47:18Z | severo |
978,940,131 | Give the cause of the error in the endpoints | It can thus be used in the hub to show hints to the dataset owner (or user?) to improve the script and fix the bug. | Give the cause of the error in the endpoints: It can thus be used in the hub to show hints to the dataset owner (or user?) to improve the script and fix the bug. | closed | 2021-08-25T09:45:58Z | 2021-08-26T14:39:22Z | 2021-08-26T14:39:21Z | severo |
978,938,259 | Use /info as the source for configs and splits? | It's a refactor. As the dataset info contains the configs and splits, maybe the code can be factorized. Before doing it: review the errors for /info, /configs, and /splits (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading) and ensure we will not increase the number of erroneous datasets. | Use /info as the source for configs and splits?: It's a refactor. As the dataset info contains the configs and splits, maybe the code can be factorized. Before doing it: review the errors for /info, /configs, and /splits (https://observablehq.com/@huggingface/quality-assessment-of-datasets-loading) and ensure we will n... | closed | 2021-08-25T09:43:51Z | 2021-09-01T07:08:25Z | 2021-09-01T07:08:25Z | severo |
972,528,326 | Increase the proportion of hf.co datasets that can be previewed | For different reasons, some datasets cannot be previewed. It might be because the loading script is buggy, because the data is in a format that cannot be streamed, etc.
The script https://github.com/huggingface/datasets-preview-backend/blob/master/quality/test_datasets.py tests the three endpoints on all the dataset... | Increase the proportion of hf.co datasets that can be previewed: For different reasons, some datasets cannot be previewed. It might be because the loading script is buggy, because the data is in a format that cannot be streamed, etc.
The script https://github.com/huggingface/datasets-preview-backend/blob/master/qual... | closed | 2021-08-17T10:17:23Z | 2022-01-31T21:34:27Z | 2022-01-31T21:34:07Z | severo |
972,521,183 | Add CI | Check types and code quality | Add CI: Check types and code quality | closed | 2021-08-17T10:09:35Z | 2021-09-01T07:09:56Z | 2021-09-01T07:09:56Z | severo |
964,047,835 | Support private datasets | For now, only public datasets can be queried.
To support private datasets :
- [x] add `use_auth_token` argument to all the queries functions (and upstream too in https://github.com/huggingface/datasets/blob/master/src/datasets/inspect.py)
- [x] obtain the authentication header <strike>or cookie</strike> from the r... | Support private datasets: For now, only public datasets can be queried.
To support private datasets :
- [x] add `use_auth_token` argument to all the queries functions (and upstream too in https://github.com/huggingface/datasets/blob/master/src/datasets/inspect.py)
- [x] obtain the authentication header <strike>or ... | closed | 2021-08-09T14:19:51Z | 2021-09-16T08:27:33Z | 2021-09-16T08:27:32Z | severo |
964,030,998 | Expand the purpose of this backend? | Depending on the evolution of https://github.com/huggingface/datasets, this project might disappear, or its features might be reduced, in particular, if one day it allows caching the data by self-generating:
- an arrow or a parquet data file (maybe with sharding and compression for the largest datasets)
- or a SQL ... | Expand the purpose of this backend?: Depending on the evolution of https://github.com/huggingface/datasets, this project might disappear, or its features might be reduced, in particular, if one day it allows caching the data by self-generating:
- an arrow or a parquet data file (maybe with sharding and compression f... | closed | 2021-08-09T14:03:41Z | 2022-02-04T11:24:32Z | 2022-02-04T11:24:32Z | severo |
963,792,000 | Upgrade `datasets` and adapt the tests | Two issues have been fixed in `datasets`:
- https://github.com/huggingface/datasets/issues/2743
- https://github.com/huggingface/datasets/issues/2749
Also, support for streaming compressed files is improving:
- https://github.com/huggingface/datasets/pull/2786
- https://github.com/huggingface/datasets/pull/2... | Upgrade `datasets` and adapt the tests: Two issues have been fixed in `datasets`:
- https://github.com/huggingface/datasets/issues/2743
- https://github.com/huggingface/datasets/issues/2749
Also, support for streaming compressed files is improving:
- https://github.com/huggingface/datasets/pull/2786
- https:... | closed | 2021-08-09T09:08:33Z | 2021-09-01T07:09:31Z | 2021-09-01T07:09:31Z | severo |
963,775,717 | Establish and meet SLO | https://en.wikipedia.org/wiki/Service-level_objective
as stated in https://github.com/huggingface/datasets-preview-backend/issues/1#issuecomment-894430211:
> we need to "guarantee" that row fetches from moon-landing will be under a specified latency (to be discussed), even in the case of cache misses in `datasets... | Establish and meet SLO: https://en.wikipedia.org/wiki/Service-level_objective
as stated in https://github.com/huggingface/datasets-preview-backend/issues/1#issuecomment-894430211:
> we need to "guarantee" that row fetches from moon-landing will be under a specified latency (to be discussed), even in the case of c... | closed | 2021-08-09T08:47:22Z | 2022-09-07T15:21:13Z | 2022-09-07T15:21:12Z | severo |
959,186,064 | Cache the responses | The datasets generally don't change often, so it's surely worth caching the responses.
Three levels of cache are involved:
- client (browser, moon-landing): use Response headers (cache-control, ETag, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching)
- application: serve the cached responses. Invalid... | Cache the responses: The datasets generally don't change often, so it's surely worth caching the responses.
Three levels of cache are involved:
- client (browser, moon-landing): use Response headers (cache-control, ETag, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Caching)
- application: serve the cach... | closed | 2021-08-03T14:38:12Z | 2021-09-23T09:58:13Z | 2021-09-23T09:58:13Z | severo |
959,180,054 | Instrument the application | Measure the response time, status code, RAM usage, etc. to be able to take decision (see #1). Also statistics about the most common requests (endpoint, dataset, parameters) | Instrument the application: Measure the response time, status code, RAM usage, etc. to be able to take decision (see #1). Also statistics about the most common requests (endpoint, dataset, parameters) | closed | 2021-08-03T14:32:10Z | 2022-09-16T20:16:46Z | 2022-09-16T20:16:46Z | severo |
959,179,429 | Scale the application | Both `uvicorn` and `pm2` allow specifying the number of workers. `pm2` seems interesting since it provides a way to increase or decrease the number of workers without restart.
But before using multiple workers, it's important to instrument the app in order to detect if we need it (eg: monitor the response time). | Scale the application: Both `uvicorn` and `pm2` allow specifying the number of workers. `pm2` seems interesting since it provides a way to increase or decrease the number of workers without restart.
But before using multiple workers, it's important to instrument the app in order to detect if we need it (eg: monitor ... | closed | 2021-08-03T14:31:31Z | 2022-05-11T15:09:59Z | 2022-05-11T15:09:59Z | severo |
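
The `created_at` and `closed_at` columns in the rows above are fixed-width (length-20) ISO-8601 strings such as `2021-10-12T14:01:46Z`, with `closed_at` nullable for issues that are still open. As a quick sanity check on that schema, here is a minimal Python sketch (the helper name is ours, not part of the dataset) that derives a time-to-close figure per issue:

```python
from datetime import datetime
from typing import Optional

# Timestamps in this table follow the pattern "YYYY-MM-DDTHH:MM:SSZ".
ISO_FMT = "%Y-%m-%dT%H:%M:%SZ"

def resolution_hours(created_at: str, closed_at: Optional[str]) -> Optional[float]:
    """Hours between issue creation and closing, or None if still open."""
    if closed_at is None:
        return None
    delta = datetime.strptime(closed_at, ISO_FMT) - datetime.strptime(created_at, ISO_FMT)
    return delta.total_seconds() / 3600

# First row of the table: created 2021-10-12T14:01:46Z, closed 2021-10-12T14:59:35Z.
hours = resolution_hours("2021-10-12T14:01:46Z", "2021-10-12T14:59:35Z")
```

Applied over all rows, the same computation would give a simple distribution of how quickly issues in this repository were resolved (every row shown here has state `closed`).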