---
configs:
  - config_name: jfleg
    data_files: jfleg/data.jsonl
    default: true
  - config_name: asset
    data_files: asset/data.jsonl
  - config_name: fruit
    data_files: fruit/data.jsonl
  - config_name: iterater
    data_files: iterater/data.jsonl
  - config_name: iterater_clarity
    data_files: iterater_clarity/data.jsonl
  - config_name: iterater_coherence
    data_files: iterater_coherence/data.jsonl
  - config_name: iterater_fluency
    data_files: iterater_fluency/data.jsonl
  - config_name: stsb_multi_mt
    data_files: stsb_multi_mt/data.jsonl
  - config_name: turk
    data_files: turk/data.jsonl
  - config_name: wafer_insert
    data_files: wafer_insert/data.jsonl
  - config_name: wnc
    data_files: wnc/data.jsonl
license: cc-by-nc-sa-4.0
task_categories:
  - text-generation
tags:
  - text-editing
  - benchmark
---

# EditEval: An Instruction-Based Benchmark for Text Improvements

This dataset contains the EditEval benchmark data ([Dwivedi-Yu et al., 2022](https://arxiv.org/abs/2209.13331)), converted to JSONL and organized into one subset per task/dataset.

## Subsets

| Config | Task | Examples |
| --- | --- | --- |
| `jfleg` | Fluency | 1,501 |
| `asset` | Simplification | 2,359 |
| `turk` | Simplification | 2,359 |
| `iterater` | Mixed (all tasks) | 621 |
| `iterater_fluency` | Fluency | 203 |
| `iterater_clarity` | Clarity | 342 |
| `iterater_coherence` | Coherence | 76 |
| `stsb_multi_mt` | Paraphrasing | 153 |
| `wnc` | Neutralization | 1,700 |
| `wafer_insert` | Updating (insertion) | 9,108 |
| `fruit` | Updating | 150 |
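To work with every subset at once, one option is to enumerate the configs with `get_dataset_config_names` from the `datasets` library. This is a minimal sketch assuming the config names in the table above:

```python
from datasets import get_dataset_config_names, load_dataset

# Enumerate the configs declared in this repo's metadata, then load each one.
for config in get_dataset_config_names("bzz2/EditEval"):
    ds = load_dataset("bzz2/EditEval", config)
    print(config, {split: len(d) for split, d in ds.items()})
```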

## Usage

```python
from datasets import load_dataset

# Load a specific subset
ds = load_dataset("bzz2/EditEval", "jfleg")

# Load with a limit
ds = load_dataset("bzz2/EditEval", "fruit", split="train[:10]")
```

## Schema

Each record has the following fields:

- `id`: unique example identifier
- `input`: source text to be edited
- `title`: article/document title (if applicable)
- `task_type`: editing task (fluency, simplification, neutralization, etc.)
- `retrieved_documents`: supporting documents (used by updating tasks)
- `meta`: additional metadata (JSON string)
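As a quick sanity check of these fields, a single record can be inspected as below. This is a minimal sketch that assumes a `train` split (as in the Usage snippet) and decodes `meta`, which the schema above describes as a JSON string:

```python
import json

from datasets import load_dataset

ds = load_dataset("bzz2/EditEval", "jfleg", split="train")

record = ds[0]
print(record["id"], record["task_type"])
print(record["input"])

# `meta` is a JSON string per the schema above; decode it before use.
meta = json.loads(record["meta"]) if record.get("meta") else {}
print(meta)
```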

## Citation

```bibtex
@misc{dwivedi-edit-2022,
  doi = {10.48550/ARXIV.2209.13331},
  url = {https://arxiv.org/abs/2209.13331},
  author = {Dwivedi-Yu, Jane and Schick, Timo and Jiang, Zhengbao and Lomeli, Maria and Lewis, Patrick and Izacard, Gautier and Grave, Edouard and Riedel, Sebastian and Petroni, Fabio},
  title = {EditEval: An Instruction-Based Benchmark for Text Improvements},
  publisher = {arXiv},
  year = {2022},
}
```