Languages: Greek
Size: 10K - 100K
Tags: speech_corpus, greek_language, text-to-speech, neural_TTS, speech_synthesis, under_resourced_languages
License:
## Dataset Structure
**The dataset is provided in sharded Parquet format for optimized loading. The original folder structure ("Speakers/" and "Text/") and naming conventions are preserved as metadata features ("speaker_id", "section", "session", etc.) within the dataset.**
The files follow a strict naming convention that encodes the metadata:
### Text Files
Format: `Greek-<corpus>-<session>-<sentence>.txt`
* `<corpus>`: The section of the corpus (e.g., `Harvard`, `B2`, `C1`, `C2`).
* `<session>`: The session number (e.g., `01`, `29`).
* `<sentence>`: The unique sentence ID within that session.
### Audio Files
Format: `Greek-<speaker>-<corpus>-<session>-<sentence>.wav`
* `<speaker>`: The speaker identifier.
  * `M`: Main Male Speaker
  * `F`: Main Female Speaker
  * `M1`...`MX`: Other Male Speakers
  * `F1`...`FX`: Other Female Speakers

For example, `Greek-F1-B2-05-20.wav` is sentence `20` of session `05` in the `B2` section, read by speaker `F1`.
## Data Fields
When you load the dataset, the metadata originally encoded in the raw file names is automatically parsed into the following distinct columns for easier filtering and analysis:
* **`speaker_id`** *(string)*: The speaker identifier.
  * `M`: Main Male Speaker
  * `F`: Main Female Speaker
  * `M1`...`MX`: Other Male Speakers
  * `F1`...`FX`: Other Female Speakers
* **`section`** *(string)*: The section of the corpus the text originated from (e.g., `Harvard`, `B2`, `C1`, `C2`).
* **`session`** *(int64)*: The recording session number.
* **`sentence_id`** *(int64)*: The unique sentence identifier within that specific session.
* **`text`** *(string)*: The Greek transcript of the audio.
* **`audio`** *(audio)*: The loaded audio feature, which includes the raw audio array and the sampling rate.
* **`file_name`** *(string)*: The original filename of the source `.wav` audio file (e.g., `Greek-F1-B2-05-20.wav`).
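Because each piece of metadata is a distinct column, selecting subsets is straightforward. A minimal sketch of that kind of filtering, using plain dictionaries to stand in for dataset rows (the values below are invented for illustration; with the `datasets` library you would pass the same predicate to `Dataset.filter`):

```python
# Toy rows standing in for dataset examples (values are hypothetical).
rows = [
    {"speaker_id": "M", "section": "Harvard", "session": 1, "sentence_id": 3},
    {"speaker_id": "F", "section": "B2", "session": 5, "sentence_id": 20},
    {"speaker_id": "F1", "section": "B2", "session": 5, "sentence_id": 21},
]

# Keep only utterances read by the main female speaker.
main_female = [r for r in rows if r["speaker_id"] == "F"]
print(len(main_female))  # 1

# Keep everything from the B2 section, regardless of speaker.
b2 = [r for r in rows if r["section"] == "B2"]
print(len(b2))  # 2
```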
## Usage
You can load this dataset directly in Python using the Hugging Face `datasets` library.
```python
from datasets import load_dataset

# Load the dataset (replace <repo_id> with this dataset's Hugging Face path)
dataset = load_dataset("<repo_id>", split="train")

# Inspect a single sample
sample = dataset[0]
print("Text:", sample["text"])

# Print metadata
print("Speaker:", sample["speaker_id"])
print("Corpus Section:", sample["section"])
print("Session:", sample["session"])
print("Sentence ID:", sample["sentence_id"])
print("Original File Name:", sample["file_name"])

# The 'audio' column is automatically decoded into a dictionary with 'array' and 'sampling_rate'
audio_data = sample["audio"]["array"]
sampling_rate = sample["audio"]["sampling_rate"]
```
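Since the decoded `audio` dictionary exposes both the sample array and the sampling rate, a clip's duration follows directly. A small sketch, using a synthetic dictionary that only mimics the decoded shape:

```python
def clip_duration_seconds(audio: dict) -> float:
    # duration = number of samples / samples per second
    return len(audio["array"]) / audio["sampling_rate"]

# Synthetic stand-in for a decoded sample: 2 s of silence at 16 kHz.
fake_audio = {"array": [0.0] * 32000, "sampling_rate": 16000}
print(clip_duration_seconds(fake_audio))  # 2.0
```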