milwright committed
Commit · 54cf274 · Parent: 01a1595
add hugging face space metadata to readme
README.md
CHANGED
@@ -1,4 +1,11 @@
-
+---
+title: Cloze Reader
+emoji: 📚
+colorFrom: yellow
+colorTo: gray
+sdk: docker
+pinned: true
+---
 
 An interactive reading comprehension game using AI to generate cloze (fill-in-the-blank) exercises from public domain literature.
 
@@ -10,7 +17,7 @@ The Cloze Reader transforms passages from Project Gutenberg into adaptive vocabu
 
 **Educational cloze testing (1953)**: Wilson L. Taylor introduced the cloze procedure—systematically deleting words from passages to measure reading comprehension. It became standard in U.S. educational assessment by the 1960s.
 
-**Masked language modeling (2018)**: BERT and subsequent models rediscovered cloze methodology independently as a training objective, randomly masking tokens and predicting from context.
+**Masked language modeling (2018)**: BERT and subsequent models rediscovered cloze methodology independently as a training objective, randomly masking tokens and predicting from context.
 
 **This project**: Uses language models trained on prediction tasks to generate prediction exercises for human readers. While Gemma-3 uses next-token prediction rather than masked language modeling, the system demonstrates how assessment and training methodologies are now instrumentalized through identical computational systems.
 
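The fixed-ratio deletion at the heart of Taylor's cloze procedure can be sketched in a few lines. This is an illustrative helper, not the project's actual implementation (which uses a language model to choose blanks); `make_cloze`, its parameters, and the every-nth-word deletion rule are assumptions for demonstration only.

```python
import re

def make_cloze(passage: str, every_n: int = 5, blank: str = "_____"):
    """Taylor-style cloze: delete every nth word from a passage.

    Returns the gapped text and a numbered answer key.
    (Hypothetical sketch; the real project selects blanks with an LLM.)
    """
    words = passage.split()
    answers = {}
    out = []
    for i, word in enumerate(words, start=1):
        if i % every_n == 0:
            # Strip surrounding punctuation so the key stores the bare word.
            core = re.sub(r"^\W+|\W+$", "", word)
            if core:
                answers[len(answers) + 1] = core
                word = word.replace(core, blank, 1)
        out.append(word)
    return " ".join(out), answers
```

For example, `make_cloze("The quick brown fox jumps over the lazy dog near the river bank.")` blanks the 5th and 10th words, yielding an answer key of `{1: "jumps", 2: "near"}`.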