# v2rmp Routing ML Models

A collection of lightweight neural models for route optimization (VRP/CPP), built with Candle (pure Rust ML) and trained on synthetic and real-world VRP instances.

Part of the v2rmp project, a Rust TUI/CLI for road network extraction, compilation, and multi-vehicle route optimization.
## Models

| Model | File | Architecture | Purpose |
|---|---|---|---|
| AutoML Predictor | `automl_v2.safetensors` | 28 → 64 → 5 MLP | Predict instance-aware solver hyperparameters (max iterations, temperature, tabu tenure, cooling rate, neighbourhood radius) |
| Solver Selector | `solver_selector_v2.safetensors` | 28 → 128 → 64 → 6 MLP | Classify a VRP instance to the best of 6 solvers (default, Clarke-Wright, sweep, Or-Opt, 2-Opt, neural-guided) |
| Quality Predictor | `quality_predictor_v2.safetensors` | 28 → 64 → 32 → 2 MLP | Predict gap-to-optimal (%) and tour length (km) before solving |
| Move Scorer | `move_scorer_v2.safetensors` | 16 → 32 → 16 → 1 MLP | Score candidate 2-Opt / Or-Opt moves for neural-guided local search |
| Graph Embedder | `graph_embed.safetensors` | 2-layer GraphSAGE (10 → 64 → 64) | Produce 64-dim learned embeddings for road network edges |
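As an illustration of how the AutoML Predictor's 5-dim output could be consumed, the sketch below de-normalizes it into a solver-parameter struct. The struct, field names, and scaling ranges are assumptions for illustration only; the real types and ranges live in the v2rmp source.

```rust
/// Hypothetical container for the five AutoML Predictor outputs.
/// Field names and ranges are illustrative, not the v2rmp API.
#[derive(Debug)]
struct SolverParams {
    max_iterations: u32,
    temperature: f32,
    tabu_tenure: u32,
    cooling_rate: f32,
    neighbourhood_radius: f32,
}

impl SolverParams {
    /// De-normalize the raw 5-dim model output (each value in [0, 1])
    /// into usable parameter ranges. The caps are example choices.
    fn from_model_output(out: [f32; 5]) -> Self {
        SolverParams {
            max_iterations: (out[0].clamp(0.0, 1.0) * 10_000.0) as u32,
            temperature: out[1].clamp(0.0, 1.0) * 100.0,
            tabu_tenure: (out[2].clamp(0.0, 1.0) * 50.0) as u32,
            cooling_rate: 0.90 + out[3].clamp(0.0, 1.0) * 0.099,
            neighbourhood_radius: out[4].clamp(0.0, 1.0) * 25.0,
        }
    }
}

fn main() {
    let params = SolverParams::from_model_output([0.5, 0.2, 0.4, 0.5, 0.8]);
    println!("{:?}", params);
}
```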
## Input Features

All MLP models share the same 28-dim normalized instance feature vector derived from VRP instance statistics (stop count, vehicle count, bounding box spread, demand statistics, distance matrix stats, etc.).
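A framework-agnostic sketch of how a few such normalized features might be assembled. The `Instance` struct, the normalization caps, and the feature choices here are illustrative assumptions, not the exact v2rmp feature set.

```rust
/// Minimal VRP instance summary; real v2rmp instances carry more state.
struct Instance {
    stops: Vec<(f64, f64)>, // (x, y) stop coordinates
    demands: Vec<f64>,
    vehicles: usize,
}

/// Build a small slice of the 28-dim feature vector: counts, bounding-box
/// spread, and mean demand, each scaled into roughly [0, 1].
fn instance_features(inst: &Instance) -> Vec<f64> {
    let n = inst.stops.len() as f64;
    let (mut min_x, mut max_x) = (f64::INFINITY, f64::NEG_INFINITY);
    let (mut min_y, mut max_y) = (f64::INFINITY, f64::NEG_INFINITY);
    for &(x, y) in &inst.stops {
        min_x = min_x.min(x);
        max_x = max_x.max(x);
        min_y = min_y.min(y);
        max_y = max_y.max(y);
    }
    let mean_demand = inst.demands.iter().sum::<f64>() / n;
    vec![
        n / 1000.0,                         // stop count (assumed cap: 1000)
        inst.vehicles as f64 / 50.0,        // vehicle count (assumed cap: 50)
        (max_x - min_x).max(max_y - min_y), // bounding-box spread
        mean_demand,                        // mean stop demand
    ]
}

fn main() {
    let inst = Instance {
        stops: vec![(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)],
        demands: vec![1.0; 4],
        vehicles: 5,
    };
    println!("{:?}", instance_features(&inst));
}
```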
## Model Loading (Rust / Candle)

```rust
use candle_core::{DType, Device};
use candle_nn::{linear, VarBuilder};

let device = Device::Cpu;

// Load the safetensors weights
let tensors = candle_core::safetensors::load("solver_selector_v2.safetensors", &device)?;
let vb = VarBuilder::from_tensors(tensors, DType::F32, &device);

// Build the 28 -> 128 -> 64 -> 6 solver-selector layers
let lin1 = linear(28, 128, vb.pp("lin1"))?;
let lin2 = linear(128, 64, vb.pp("lin2"))?;
let lin3 = linear(64, 6, vb.pp("lin3"))?;
```

See the v2rmp source for full loading code.
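For orientation, the forward pass these layers implement is just matrix-vector products with ReLU between the hidden layers and an argmax over the final 6 logits. A dependency-free sketch (plain Rust instead of Candle tensors, with placeholder weights rather than the trained ones):

```rust
/// Dense layer: y = W x + b, with W stored row-major (one Vec per output row).
fn dense(w: &[Vec<f32>], b: &[f32], x: &[f32]) -> Vec<f32> {
    w.iter()
        .zip(b)
        .map(|(row, bi)| row.iter().zip(x).map(|(wi, xi)| wi * xi).sum::<f32>() + bi)
        .collect()
}

fn relu(v: Vec<f32>) -> Vec<f32> {
    v.into_iter().map(|x| x.max(0.0)).collect()
}

/// MLP forward pass (e.g. 28 -> 128 -> 64 -> 6 for the solver selector),
/// returning the argmax over the output logits as the chosen solver index.
/// `layers` holds (weights, biases) per layer; real weights come from the
/// safetensors file, not from this sketch.
fn select_solver(x: &[f32], layers: &[(Vec<Vec<f32>>, Vec<f32>)]) -> usize {
    let mut h: Vec<f32> = x.to_vec();
    for (i, (w, b)) in layers.iter().enumerate() {
        h = dense(w, b, &h);
        if i + 1 < layers.len() {
            h = relu(h); // hidden layers only; final logits stay raw
        }
    }
    h.iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

fn main() {
    // Tiny 2 -> 2 -> 2 demo with hand-picked weights.
    let l1 = (vec![vec![1.0, 0.0], vec![0.0, 1.0]], vec![0.0, 0.0]);
    let l2 = (vec![vec![1.0, 0.0], vec![0.0, 2.0]], vec![0.0, 0.0]);
    println!("selected solver index: {}", select_solver(&[1.0, 3.0], &[l1, l2]));
}
```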
## License
MIT OR Apache-2.0 (same as v2rmp crate).
## Generated by ML Intern
This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
- Try ML Intern: https://smolagents-ml-intern.hf.space
- Source code: https://github.com/huggingface/ml-intern
### Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = 'aerialblancaservices/v2rmp-routing-ml'
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
```

For non-causal architectures, replace `AutoModelForCausalLM` with the appropriate `AutoModel` class.