
GNN Ruby Code Study

Systematic study of Graph Neural Network architectures for Ruby code complexity prediction and generation.

Paper: Graph Neural Networks for Ruby Code Complexity Prediction and Generation: A Systematic Architecture Study

Dataset

22,452 Ruby methods parsed into AST graphs with 74-dimensional node features.

| Split      | Samples | File                  |
|------------|---------|-----------------------|
| Train      | 19,084  | `dataset/train.jsonl` |
| Validation | 3,368   | `dataset/val.jsonl`   |

Each JSONL record contains:

  • repo_name: Source repository
  • file_path: Original file path
  • raw_source: Raw Ruby source code
  • complexity_score: McCabe cyclomatic complexity
  • ast_json: Full AST as nested JSON (node types + literal values)
  • id: Unique identifier
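As a sketch of the record layout, here is how one might parse a single JSONL line in Python. The field names follow the list above; the record contents themselves are fabricated for illustration:

```python
import json

# Fabricated record matching the schema above; real records live in
# dataset/train.jsonl, one JSON object per line.
line = json.dumps({
    "repo_name": "example/repo",
    "file_path": "lib/example.rb",
    "raw_source": "def add(a, b)\n  a + b\nend",
    "complexity_score": 1,
    "ast_json": {"type": "def", "children": []},
    "id": "example-0001",
})

record = json.loads(line)
print(record["complexity_score"], record["file_path"])
```

In practice you would iterate over the file line by line (`for line in open("dataset/train.jsonl")`), parsing each line independently.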

Node Features (74D)

  • One-hot encoding of 73 AST node types (`def`, `send`, `args`, `lvar`, `str`, ...) + 1 unknown
  • Types cover Ruby AST nodes; literal values (identifiers, strings, numbers) map to unknown
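A minimal sketch of the 74-dimensional feature construction. The `NODE_TYPES` list here is a hypothetical, truncated placeholder; the actual 73-type vocabulary would come from the repository's data pipeline (e.g. `src/data_processing.py`):

```python
# Hypothetical, truncated vocabulary: the real one holds 73 known Ruby
# AST node types. Index 73 is reserved for "unknown".
NODE_TYPES = ["def", "send", "args", "lvar", "str"]  # ... 68 more in practice
UNKNOWN_INDEX = 73

def node_feature(node_type: str) -> list:
    """One-hot encode an AST node type into a 74-dim vector.

    Literal values (identifiers, strings, numbers) are not in the
    vocabulary, so they fall through to the unknown slot.
    """
    vec = [0.0] * 74
    if node_type in NODE_TYPES:
        vec[NODE_TYPES.index(node_type)] = 1.0
    else:
        vec[UNKNOWN_INDEX] = 1.0
    return vec
```

This mapping of every literal to a single unknown slot is what the "literal value bottleneck" finding below refers to: the encoding preserves structure but discards all literal content.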

Key Findings

  1. 5-layer GraphSAGE achieves MAE 4.018 (R² = 0.709) for complexity prediction, 16% better than the 3-layer baseline (9.9σ significant)
  2. GNN autoencoders produce 0% valid Ruby across all 15+ tested configurations
  3. The literal value bottleneck: Teacher-forced GIN achieves 81% node type accuracy and 99.5% type diversity, but 0% syntax validity because 47% of AST elements are literals with no learnable representation
  4. Chain decoders collapse: 93% of predictions default to UNKNOWN without structural supervision
  5. Total cost: ~$4.32 across 51 GPU experiments on Vast.ai RTX 4090 + local RTX 2070 SUPER

Repository Structure

```
├── paper.md                           # Full research paper
├── dataset/
│   ├── train.jsonl                    # 19,084 Ruby methods (37 MB)
│   └── val.jsonl                      # 3,368 Ruby methods (6.5 MB)
├── models/
│   ├── encoder_sage_5layer.pt         # Pre-trained SAGE encoder
│   └── decoders/                      # Trained decoder checkpoints
│       ├── tf-gin-256-deep.pt         # Best: teacher-forced GIN, 5 layers
│       ├── tf-gin-{128,256,512}.pt    # Dimension ablation
│       └── chain-gin-256.pt           # Control (no structural supervision)
├── results/
│   ├── fleet_experiments.json         # All Vast.ai experiment metrics
│   ├── autonomous_research.json       # 18 baseline variance replicates
│   └── gin_deep_dive/                 # Local deep-dive analysis
│       ├── summary.json               # Ablation summary table
│       └── *_results.json             # Per-config detailed results
├── experiments/                       # Ratiocinator fleet YAML specs
├── specs/                             # Ratiocinator research YAML specs
├── src/                               # Model source code
│   ├── models.py                      # GNN architectures
│   ├── data_processing.py             # AST→graph pipeline
│   ├── loss.py                        # Loss functions
│   ├── train.py                       # Complexity prediction trainer
│   └── train_autoencoder.py           # Autoencoder trainer
└── scripts/                           # Runner and evaluation scripts
```

Reproducing Results

```bash
# Clone the experiment branch
git clone -b experiment/ratiocinator-gnn-study https://github.com/timlawrenz/jubilant-palm-tree
cd jubilant-palm-tree

# Install dependencies
python -m venv .venv && source .venv/bin/activate
pip install torch torchvision torch_geometric

# Train complexity prediction (Track 1)
python train.py --conv_type SAGE --num_layers 5 --epochs 50

# Train autoencoder with teacher-forced GIN decoder (Track 4)
python train_autoencoder.py --decoder_conv_type GIN --decoder_edge_mode teacher_forced --epochs 30

# Run the full deep-dive ablation
python scripts/gin_deep_dive.py
```

Source Code

Citation

If you use this dataset or findings, please cite:

```bibtex
@misc{lawrenz2025gnnruby,
  title={Graph Neural Networks for Ruby Code Complexity Prediction and Generation: A Systematic Architecture Study},
  author={Tim Lawrenz},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/timlawrenz/gnn-ruby-code-study}}
}
```