Update README.md

README.md (changed):
@@ -10,7 +10,7 @@ task_categories:
 language:
 - code
 size_categories:
--
+- 1M<n<10M
 ---
 
 <div align="center">
@@ -32,9 +32,9 @@ Each row represents a single commit that changes exactly one file in a repositor
 
 ## Collection Pipeline
 
-The commit mining pipeline is described in detail in the [Themis paper](https://arxiv.org/abs/xxxx.xxxxx) and the [Dataset](https://github.com/iNeil77/Themis/tree/main/Dataset) folder in the GitHub repository. At a high level:
+The commit mining pipeline is described in detail in the [Themis paper](https://arxiv.org/abs/xxxx.xxxxx) and the [Dataset](https://github.com/iNeil77/Themis/tree/main/Dataset) folder in the GitHub repository. The BigQuery SQL query and scraping infrastructure are modified from the [OctoPack](https://arxiv.org/abs/2308.07124) pipeline ([CommitPack](https://huggingface.co/datasets/bigcode/commitpack)); the subsequent filtering, classification, and preference construction stages are original to Themis. At a high level:
 
-1. **BigQuery Mining** — A [GoogleSQL query](https://github.com/iNeil77/Themis/blob/main/Dataset/Commit_Mining_SQL/consolidated_query.sql) extracts single-file commits from `bigquery-public-data.github_repos`, filtering for permissive licenses, target programming languages, and non-trivial commit messages.
+1. **BigQuery Mining** — A [GoogleSQL query](https://github.com/iNeil77/Themis/blob/main/Dataset/Commit_Mining_SQL/consolidated_query.sql) (modified from [OctoPack](https://arxiv.org/abs/2308.07124)) extracts single-file commits from `bigquery-public-data.github_repos`, filtering for permissive licenses, target programming languages, and non-trivial commit messages.
 
 2. **Repository Reputation Filtering** — Commits are subset to those originating from [curated high-reputation repositories](https://github.com/iNeil77/Themis/tree/main/Dataset/Repos) (15+ GitHub stars, 5+ contributors, 10+ issues).
 
@@ -48,7 +48,7 @@ The commit mining pipeline is described in detail in the [Themis paper](https://
 
 The steps below are applied downstream to produce the final preference pairs in Themis-CodePreference, and are **not** reflected in this raw dataset:
 
-- **Pull Request Merging** — Commits are cross-referenced with GHTorrent pull request data to retain only non-reverted commits that are part of successfully merged pull requests, ensuring implicit human validation.
+- **Temporal Cutoff & Pull Request Merging** — For training data (Themis-CodePreference), only commits pushed before **March 2019** are retained. For benchmark data (Themis-CodeRewardBench), commits are scoped to **June 2019 – January 2021** from disjoint repositories. Commits are then cross-referenced with GHTorrent pull request data to retain only non-reverted commits that are part of successfully merged pull requests, ensuring implicit human validation.
 - **Aspect Classification** — Commits are assigned to quality dimensions (Functional Correctness, Runtime Efficiency, Memory Efficiency, Security Hardness, Readability & Maintainability) using criteria-specialized [ModernBERT](https://huggingface.co/answerdotai/ModernBERT-base) commit classifiers, trained on seed positives retrieved via [curated term lists](https://github.com/iNeil77/Themis/tree/main/Dataset/Commit_Mining_Terms).
 - **LLM Scoring & Instruction Synthesis** — Frontier LMs validate preference strength and generate realistic inverse instructions.
 - **LLM-as-a-Judge Preference Labelling** — Multi-sample voting with frontier LMs produces consensus preference labels.
@@ -119,4 +119,4 @@ This dataset is released under the [Apache 2.0 License](https://www.apache.org/l
 journal={arXiv preprint arXiv:xxxx.xxxxx},
 year={2025}
 }
-```
+```
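To make the "non-trivial commit messages" filter in the BigQuery mining step concrete, here is a minimal sketch in Python. The real filtering happens inside the GoogleSQL query; the patterns and word threshold below are illustrative assumptions, not the query's actual rules.

```python
import re

# Illustrative patterns for messages that carry no reviewable intent.
# These are assumed examples, not the pipeline's real filter list.
TRIVIAL_PATTERNS = [
    r"^merge (branch|pull request)",     # auto-generated merge commits
    r"^update [^ ]+$",                   # bare "update <file>" messages
    r"^(fix|typo|wip|initial commit)$",  # one-word placeholder messages
]

def is_nontrivial(message: str, min_words: int = 3) -> bool:
    """Return True if a commit message looks substantive enough to mine."""
    msg = message.strip().lower()
    if len(msg.split()) < min_words:
        return False
    return not any(re.match(p, msg) for p in TRIVIAL_PATTERNS)
```

In SQL this would translate to `REGEXP_CONTAINS` predicates in the query's `WHERE` clause; the Python form is just easier to read.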
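The repository reputation filter reduces to three threshold checks. The thresholds (15+ stars, 5+ contributors, 10+ issues) come from the card above; the `Repo` type and sample data are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Repo:
    # Hypothetical record shape; the real pipeline works from curated repo lists.
    name: str
    stars: int
    contributors: int
    issues: int

def is_reputable(r: Repo) -> bool:
    """Thresholds stated on the dataset card: 15+ stars, 5+ contributors, 10+ issues."""
    return r.stars >= 15 and r.contributors >= 5 and r.issues >= 10

repos = [
    Repo("org/solid-project", stars=120, contributors=9, issues=44),
    Repo("user/toy-script", stars=3, contributors=1, issues=0),
]
kept = [r.name for r in repos if is_reputable(r)]
```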
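The temporal cutoff added in this commit (training commits before March 2019; benchmark commits from June 2019 to January 2021, from disjoint repositories) can be sketched as a split over commit records. The field names `pushed` and `repo` are assumptions for illustration, not the dataset's actual schema.

```python
from datetime import date

# Windows stated on the card; exact boundary days are assumptions.
TRAIN_CUTOFF = date(2019, 3, 1)
BENCH_START, BENCH_END = date(2019, 6, 1), date(2021, 1, 31)

def split(commits):
    """Split commit records into train/benchmark windows with disjoint repos."""
    train = [c for c in commits if c["pushed"] < TRAIN_CUTOFF]
    train_repos = {c["repo"] for c in train}
    bench = [
        c for c in commits
        if BENCH_START <= c["pushed"] <= BENCH_END
        and c["repo"] not in train_repos  # keep the two splits' repos disjoint
    ]
    return train, bench
```

Excluding training repositories from the benchmark window guards against contamination: a reward model must not be evaluated on repositories it saw during preference training.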
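Retrieving "seed positives via curated term lists" for the aspect classifiers might look like simple term matching over commit messages. The terms below are invented stand-ins; the real lists live in the linked Commit_Mining_Terms folder.

```python
# Assumed example terms per aspect; NOT the repository's actual curated lists.
ASPECT_TERMS = {
    "Runtime Efficiency": {"speed up", "optimize", "faster", "latency"},
    "Security Hardness": {"sanitize", "cve", "injection", "overflow"},
}

def seed_aspects(message: str):
    """Return the aspects whose term lists match a commit message."""
    msg = message.lower()
    return sorted(
        aspect for aspect, terms in ASPECT_TERMS.items()
        if any(term in msg for term in terms)
    )
```

Matches of this kind only bootstrap the ModernBERT classifiers; the trained classifiers then generalize beyond the literal terms.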
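Finally, "multi-sample voting with frontier LMs" for preference labelling amounts to aggregating repeated judge samples into a consensus. A minimal sketch, assuming a majority rule with a minimum-agreement threshold (the actual aggregation rule and threshold are not specified on the card):

```python
from collections import Counter

def consensus_label(votes, min_agreement=0.6):
    """Majority label across judge samples, or None when agreement is weak.

    `votes` is a list of preference labels (e.g. "A" / "B"), one per
    sampled judge call; the 0.6 threshold is an assumed example value.
    """
    label, count = Counter(votes).most_common(1)[0]
    return label if count / len(votes) >= min_agreement else None
```

Dropping low-agreement pairs instead of force-labelling them trades dataset size for label reliability, which is the usual motivation for consensus voting.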