Upload README.md with huggingface_hub

README.md CHANGED

@@ -153,38 +153,18 @@ Both formats therefore contain identical information and differ only in storage
 
 One dataset instance corresponds to **one report**.
 
-| Path | Type | Description | Possible values |
-| ---- | ---- | ----------- | --------------- |
-| **document_metadata** | | | |
-| `report` | string | Unique identifier of the report | |
-| `full_text` | string | Full text of the report | |
-| **spans** | | | |
-| `report` | string | Report identifier | |
-| `span_id` | integer | Annotation identifier | |
-| `span_type` | string | Type of the annotated entity | `EntiteAnonymisation` |
-| `begin` | integer | Start offset | |
-| `end` | integer | End offset | |
-| `span_text` | string | Text of the entity | |
-| `attribute_Categorie` | string | Category | `ADDRESS`, `CITY`, `COUNTRY`, `FAMILY_STATUS`, `FIRST_NAME`, `IDENTIFYING_DATE`, `LAST_NAME`, `PATIENT_BIRTHDATE`, `PATIENT_NATIONALITY`, `PATIENT_SOCIAL_IDENTITY` |
-| `attribute_RolePER` | string | RolePER | `Carer`, `Other`, `Patient` |
-| `attribute_RoleLOC` | string | RoleLOC | `Hospital`, `Other`, `Patient` |
-| `attribute_RoleNUM` | string | RoleNUM | `Carer`, `Hospital` |
-
-### Data Splits
-
-Only the training set is released here. The remaining portion of the corpus will be temporarily embargoed to enable future evaluations under controlled conditions, thereby limiting the risk of large language model contamination through prior exposure to the data. You can evaluate your system on the test set through the CodaBench platform.
 
-You can find the detailed annotation protocol here: [annotation_guidelines.pdf](guidelines/annotation_guidelines.pdf)
 
-The Hugging Face dataset and the standalone JSON files provide the same information but use different data representations. On one hand, the Hugging Face dataset is stored in an optimized columnar Parquet format where data is structured into three distinct configurations: document_metadata for full texts, spans for annotations, and relations for links between spans. This flat table structure is specifically designed for efficient machine learning workflows and easy filtering. On the other hand, the standalone JSON files follow the original Inception/UIMA CAS nested structure where each document contains all its associated layers in a single file. While the organization shifts from a nested document-based view to an optimized relational view, the underlying data content remains strictly identical.
-You can easily load and explore the different parts of this dataset using the Hugging Face datasets library as shown in the following example:
 
 ```python
 from datasets import load_dataset
@@ -205,7 +185,32 @@ if len(ds_rel["train"]) > 0:
 rel = ds_rel["train"][0]
 print(f"Relation: {rel['source_text']} -> {rel['target_text']}")
 ```
 
 ### Licensing Information
 
 
 One dataset instance corresponds to **one report**.
 
+### Hugging Face dataset
 
+To facilitate machine learning workflows and easy filtering, the Hugging Face representation is organized into three distinct configurations. This allows for efficient filtering and relational access without parsing the original nested JSON:
 
+- document_metadata: For accessing full texts and global identifiers.
 
+- spans: For extracting specific annotations and entities.
 
+- relations: For analyzing links and dependencies between spans (if they exist).
 
+You can easily load and explore these different parts using the datasets library, as shown in the following example:
 
 ```python
 from datasets import load_dataset
 …
 rel = ds_rel["train"][0]
 print(f"Relation: {rel['source_text']} -> {rel['target_text']}")
 ```
+
+### Data Fields
+
+| Path | Type | Description | Possible values |
+| ---- | ---- | ----------- | --------------- |
+| **document_metadata** | | | |
+| `report` | string | Unique identifier of the report | |
+| `full_text` | string | Full text of the report | |
+| **spans** | | | |
+| `report` | string | Report identifier | |
+| `span_id` | integer | Annotation identifier | |
+| `span_type` | string | Type of the annotated entity | `EntiteAnonymisation` |
+| `begin` | integer | Start offset | |
+| `end` | integer | End offset | |
+| `span_text` | string | Text of the entity | |
+| `attribute_Categorie` | string | Category | `ADDRESS`, `CITY`, `COUNTRY`, `FAMILY_STATUS`, `FIRST_NAME`, `IDENTIFYING_DATE`, `LAST_NAME`, `PATIENT_BIRTHDATE`, `PATIENT_NATIONALITY`, `PATIENT_SOCIAL_IDENTITY` |
+| `attribute_RolePER` | string | RolePER | `Carer`, `Other`, `Patient` |
+| `attribute_RoleLOC` | string | RoleLOC | `Hospital`, `Other`, `Patient` |
+| `attribute_RoleNUM` | string | RoleNUM | `Carer`, `Hospital` |
+
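Assuming `begin` and `end` index characters of the report's `full_text`, as is usual for Inception/UIMA CAS exports, each annotation can be cross-checked by slicing. A minimal sketch with an invented report and span:

```python
# Invented report text and span for illustration only.
full_text = "Anna Dupont was admitted to the hospital in Lyon."
span = {"begin": 44, "end": 48, "span_text": "Lyon"}

# Sanity check: the stored surface form must match the offsets.
extracted = full_text[span["begin"]:span["end"]]
assert extracted == span["span_text"]
print(extracted)  # Lyon
```

Running this check over every row of `spans` joined with `document_metadata` on `report` is a quick way to validate a local copy of the data.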
+### Data Splits
+
+Only the training set is released here. The remaining portion of the corpus will be temporarily embargoed to enable future evaluations under controlled conditions, thereby limiting the risk of large language model contamination through prior exposure to the data. You can evaluate your system on the test set through the CodaBench platform.
+
+### Annotation Guidelines
+
+You can find the detailed annotation protocol here: [annotation_guidelines.pdf](guidelines/annotation_guidelines.pdf)
 
 ### Licensing Information
 