---
pretty_name: Predictions — ECDC ERVISS — ILI/ARI primary-care consultation rates
license: other
tags:
- epi-eval
- predictions
- forecast-evaluation
- companion-of-ecdc-erviss
configs:
- config_name: default
  data_files:
  - split: train
    path: data/*.parquet
---
# Predictions for ECDC ERVISS — ILI/ARI primary-care consultation rates

Community-submitted forecasts targeting `EPI-Eval/ecdc-erviss`.
Each row is one quantile (or point) forecast for one target date — see the
schema below.
This repo accumulates accepted submissions from many forecasters. New predictions arrive as community pull requests opened from the EPI-Eval dashboard; a maintainer reviews each PR before merging.
## Schema (v1)
| column | type | notes |
|---|---|---|
| `target_date` | string (YYYY-MM-DD) | The date being forecast |
| `target_dataset` | string | Always `ecdc-erviss` |
| `target_column` | string | Truth column being forecast (see below) |
| `submitter` | string | Forecaster name or HF username |
| `model_name` | string | Identifier for the model run |
| `description` | string | Free-form notes on the model |
| `quantile` | float (nullable) | In [0, 1]; `null` = point estimate |
| `value` | float | Forecast value (in the truth column's units) |
| `submitted_at` | string (ISO 8601) | UTC submission timestamp |
| (pass-through dims) | string | Categorical dims from the source CSV |
Long format: one row per (`target_date`, [dim values…], `quantile`). A
forecaster providing the median plus 50%/80%/95% intervals emits 7 rows per
date (one point + 6 quantiles). Multiple submissions from the same forecaster
land as separate parquet files under `data/`.
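The row-expansion rule above can be sketched in Python. This is an illustrative helper, not part of the pipeline; the submitter and model names are hypothetical, and the interval-to-quantile mapping assumes symmetric central intervals (a 95% interval maps to the 0.025 and 0.975 quantiles).

```python
from datetime import datetime, timezone

import pandas as pd


def forecast_rows(target_date, point, intervals):
    """Emit long-format rows for one target date: one point row
    (quantile = None) plus a lower/upper row per central interval.
    `intervals` maps coverage level (e.g. 0.80) to (lo, hi) values."""
    base = {
        "target_date": target_date,
        "target_dataset": "ecdc-erviss",
        "target_column": "ili_rate",
        "submitter": "example-user",      # hypothetical submitter
        "model_name": "baseline-v0",      # hypothetical model id
        "description": "illustrative rows only",
        "submitted_at": datetime.now(timezone.utc).isoformat(),
    }
    rows = [dict(base, quantile=None, value=point)]  # point estimate
    for coverage, (lo, hi) in sorted(intervals.items()):
        alpha = (1 - coverage) / 2  # tail mass on each side
        rows.append(dict(base, quantile=round(alpha, 3), value=lo))
        rows.append(dict(base, quantile=round(1 - alpha, 3), value=hi))
    return rows


rows = forecast_rows(
    "2025-01-06",
    point=42.0,
    intervals={0.50: (35.0, 50.0), 0.80: (30.0, 57.0), 0.95: (25.0, 65.0)},
)
df = pd.DataFrame(rows)  # 7 rows: 1 point + 6 quantile rows
# df.to_parquet("data/example-user-baseline-v0.parquet")  # one file per submission
```

The median plus three intervals yields exactly the 7 rows per date described above: quantiles 0.025, 0.1, 0.25, 0.75, 0.9, 0.975 and one `null`-quantile point row.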
## Forecast targets

Truth columns from `EPI-Eval/ecdc-erviss` you can forecast:
- `ili_rate` (consultations per 100,000 population)
- `ari_rate` (consultations per 100,000 population)
## Submitting
The dashboard at `apart-forecasting-tool` handles the full submission flow: drag and drop a CSV, pick this dataset as the "Compare to" target, review your scores against the truth, and click "Submit to HuggingFace." The dashboard serializes your CSV into the schema above and opens a community PR here.
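A standard way to score quantile forecasts like these against the truth is the pinball (quantile) loss. The sketch below is a minimal illustration of that metric under that assumption — the dashboard's exact scoring rule is not specified here, and the row values are made up.

```python
def pinball_loss(truth: float, value: float, quantile: float) -> float:
    """Pinball (quantile) loss: under-prediction is weighted by the
    quantile level, over-prediction by (1 - quantile)."""
    if truth >= value:
        return quantile * (truth - value)
    return (1 - quantile) * (value - truth)


# Illustrative quantile rows for one target date, plus an observed truth value.
rows = [
    {"quantile": 0.25, "value": 35.0},
    {"quantile": 0.50, "value": 42.0},
    {"quantile": 0.75, "value": 50.0},
]
truth = 44.0
score = sum(pinball_loss(truth, r["value"], r["quantile"]) for r in rows) / len(rows)
```

Lower is better; averaging the loss over all quantile rows and dates gives a single comparable number per submission.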
## Notes
- Predictions whose `target_date` falls outside the truth dataset's coverage (forecast horizon) are still accepted. Comparison metrics on the dashboard compute only on the dates where truth is available.
- A submission's `submitter` value is its only identity claim — there's no signed authentication. Reviewers should sanity-check unfamiliar submitters.
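The first note — scoring only where truth exists — amounts to an inner join on date. A minimal pandas sketch, with hypothetical frames standing in for the real prediction and truth data:

```python
import pandas as pd

# Hypothetical long-format predictions; the last date is beyond truth coverage.
preds = pd.DataFrame({
    "target_date": ["2025-01-06", "2025-01-13", "2025-01-20"],
    "value": [42.0, 45.0, 47.0],
})

# Hypothetical truth series (e.g. observed ili_rate).
truth = pd.DataFrame({
    "date": ["2025-01-06", "2025-01-13"],
    "ili_rate": [44.0, 41.0],
})

# Inner join keeps only forecast rows whose target_date has observed truth;
# horizon rows beyond coverage are silently excluded from scoring.
scored = preds.merge(truth, left_on="target_date", right_on="date", how="inner")
```

Forecasts past the end of the truth series stay in the repo and become scorable as new truth data lands.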
Initialized by `upload_pipeline.core.bootstrap_predictions_repos`.