---
license: other
task_categories:
- text-classification
- text-retrieval
language:
- en
pretty_name: S2ORC Safety
size_categories:
- 10K<n<100K
configs:
- config_name: default
  data_files:
  - split: train
    path: main/*.parquet
---
# S2ORC Safety

This dataset is a filtered and enriched subset of an S2ORC computer science paper corpus, focused on AI safety and adjacent safety-relevant research.
It contains 16,806 papers selected through:
- local embedding generation
- clustering
- GPT-5.4 mini cluster-level screening
- GPT-5.4 mini paper-level labeling
- a rescue relabel pass on suspicious exclusions
- structured metadata extraction over the accepted paper set
- filtering out 304 rows that were missing both `parsed_title` and `abstract`
## Main Files

- `main/*.parquet` - sharded full enriched source rows, including:
  - extracted metadata
  - normalized GitHub repo links
  - Hugging Face code mirror links
  - normalized model / dataset / metric / scalar fields
- `metadata/*.parquet` - sharded metadata extraction only
- `paper_metadata_summary_normalized.json` - corpus-level summary statistics over the normalized metadata fields
- `code_links/*.parquet` - sharded paper-to-code join table with normalized GitHub URLs and HF mirror paths
## Contents

The main parquet includes:

- original enriched paper fields from the source corpus
  - title, abstract, full text, sections, references, authors, venue metadata, URLs
  - extracted source-side fields like `summary`, `methods`, `results`, `models`, `datasets`, `metrics`, `limitations`, `training_details`
- metadata extraction fields
  - reproducibility: `repro_steps_json`, `setup_requirements_json`, `training_or_eval_recipe_json`, `artifact_availability_json`, `code_urls_json`, `dataset_urls_json`, `model_urls_json`
  - safety taxonomy: `safety_area_json`, `attack_or_defense_json`, `threat_model_json`, `target_system_json`, `harm_type_json`
  - experimental details: `target_models_json`, `datasets_benchmarks_json`, `baselines_compared_json`, `evaluation_metrics_json`, `main_results_json`, `claimed_contributions_json`
  - practicality: `compute_requirements_json`, `runtime_cost`, `human_eval_required`, `closed_model_dependency`, `deployment_readiness`, `replication_difficulty`, `extraction_confidence`
- normalized fields
  - `setup_requirements_norm_json`, `target_models_norm_json`, `datasets_benchmarks_norm_json`, `baselines_compared_norm_json`, `evaluation_metrics_norm_json`, `runtime_cost_norm`, `human_eval_required_norm`, `closed_model_dependency_norm`, `deployment_readiness_norm`, `replication_difficulty_norm`
- code link fields
  - `github_repo_urls_json`, `hf_code_paths_json`, `hf_code_web_urls_json`, `github_repo_count`, `hf_code_repo_count`
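The `*_json` columns store JSON-encoded strings, so each needs a `json.loads` pass before use. A sketch over a hypothetical row (the field values shown are made up for illustration):

```python
import json


def decode_json_fields(row: dict, fields: list) -> dict:
    """Parse each JSON-encoded column into a Python list; missing -> []."""
    return {f: json.loads(row.get(f) or "[]") for f in fields}


# Hypothetical row, for illustration only:
row = {
    "safety_area_json": '["jailbreaks", "red-teaming"]',
    "evaluation_metrics_json": '["accuracy", "attack success rate"]',
    "threat_model_json": "[]",
}
decoded = decode_json_fields(row, list(row))
# decoded["threat_model_json"] → []
```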
## Missing Values

- missing list-like fields are stored as empty JSON arrays
- missing scalar categorical fields are stored as `"None specified"`
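Both sentinels map cleanly back to native Python values when consuming the data; a small helper (the function name is ours, not part of the release):

```python
import json


def clean_value(value):
    """Map the card's missing-value sentinels back to native Python."""
    if value == "None specified":  # scalar categorical sentinel
        return None
    if isinstance(value, str) and value.startswith("["):
        return json.loads(value)   # "[]" decodes to an empty list
    return value
```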
## Notes
- This is a broad-tent AI safety dataset rather than a narrow alignment-only dataset.
- The labeling and extraction steps were LLM-assisted and should be treated as high-utility annotations, not ground truth.
- Process-only columns used to build the release were removed from the published parquet.
- The companion code mirror is published separately as `AlgorithmicResearchGroup/s2orc-safety-code`.
- Normalization is conservative. It collapses obvious duplicates like `CIFAR10`/`CIFAR-10`, `ResNet50`/`ResNet-50`, and `accuracy`/`Accuracy`, but does not try to solve full ontology matching.
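The duplicate collapsing described above can be approximated with a simple key function; this illustrates the idea, not the release's actual normalizer:

```python
import re


def norm_key(name: str) -> str:
    """Casefold and drop separators so e.g. CIFAR10 / CIFAR-10 collide."""
    return re.sub(r"[\s_\-]+", "", name).casefold()


# norm_key("CIFAR-10") == norm_key("CIFAR10")
# norm_key("ResNet50") == norm_key("ResNet-50")
# norm_key("Accuracy") == norm_key("accuracy")
```

Distinct entities that differ only in separators or case would collide under this key, which is why the release keeps normalization conservative rather than attempting full ontology matching.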