v2rmp Neural Models

Neural network models for the v2rmp route optimization engine. These are small, fast MLPs trained offline with PyTorch and exported to Candle-compatible safetensors for pure-Rust inference.

Models

Model             | Architecture            | Input                    | Output                                        | Purpose
------------------|-------------------------|--------------------------|-----------------------------------------------|----------------------------------------------
solver_selector   | 28 → 128 → 64 → 6       | Instance features (28-D) | Solver probabilities (6 classes)              | Recommend the best VRP solver for an instance
quality_predictor | 28 → 64 → 32 → 2        | Instance features (28-D) | Gap (%), tour length (km)                     | Predict route quality before solving
automl            | 28 → 64 → 5             | Instance features (28-D) | Max iter, temp, tabu, cooling, neighbourhood  | Instance-aware hyperparameter tuning
move_scorer       | 16 → 32 → 16 → 1        | Move features (16-D)     | Improvement score                             | Score 2-opt candidate moves for local search
graph_embed       | GraphSAGE (placeholder) | Node features (10-D)     | 64-D embeddings                               | Road network node embeddings (placeholder)

Feature Vector (28-D)

The input to solver_selector, quality_predictor, and automl is a 28-dimensional normalized feature vector extracted from a VRP instance:

  • Geometric (8): n_stops, n_vehicles, avg_pairwise_km, lat_spread, lon_spread, density, area_km2, depot_centroid_dist
  • Graph (8): knn_avg_degree, knn_max_degree, knn_clustering, knn_diameter, knn_mst_weight, knn_avg_shortest_path, knn_spectral_gap, knn_assortativity
  • Demand (4): total_demand, demand_std, tight_capacity_flag, capacity_ratio
  • Distance matrix (4): dist_mean, dist_std, dist_skewness, depot_dist_mean
  • Objective (4): one-hot for min_distance, min_time, balance_load, min_vehicles
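As a rough illustration of how a few of these features can be derived from raw instance data, here is a minimal std-only Rust sketch (not the v2rmp extractor; the function names and the raw `(lat, lon)` stop representation are assumptions). It computes lat_spread / lon_spread and the 4-way objective one-hot:

```rust
// Hypothetical sketch of two feature groups from the 28-D vector.

/// lat_spread and lon_spread: bounding-box extents of the stop coordinates.
fn lat_lon_spread(stops: &[(f64, f64)]) -> (f64, f64) {
    let (mut lat_min, mut lat_max) = (f64::INFINITY, f64::NEG_INFINITY);
    let (mut lon_min, mut lon_max) = (f64::INFINITY, f64::NEG_INFINITY);
    for &(lat, lon) in stops {
        lat_min = lat_min.min(lat);
        lat_max = lat_max.max(lat);
        lon_min = lon_min.min(lon);
        lon_max = lon_max.max(lon);
    }
    (lat_max - lat_min, lon_max - lon_min)
}

/// One-hot over the four objectives listed above, in that order.
fn objective_one_hot(objective: &str) -> [f64; 4] {
    let mut v = [0.0; 4];
    let idx = match objective {
        "min_distance" => 0,
        "min_time" => 1,
        "balance_load" => 2,
        "min_vehicles" => 3,
        _ => return v, // unknown objective: all zeros
    };
    v[idx] = 1.0;
    v
}
```

The remaining groups (graph statistics over the k-NN graph, demand, and distance-matrix moments) follow the same pattern: each is a fixed-width slice of the 28-D vector, normalized before inference.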

Solver Classes (6)

The solver_selector predicts among:

  1. default
  2. clarke_wright
  3. sweep
  4. or_opt
  5. two_opt
  6. neural_guided
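Decoding the classifier head is a softmax over the 6 logits followed by an argmax into this class list. A minimal std-only Rust sketch (assuming the class order above matches the model's output order):

```rust
// Class order assumed to match the list above.
const SOLVERS: [&str; 6] = [
    "default", "clarke_wright", "sweep", "or_opt", "two_opt", "neural_guided",
];

/// Numerically stable softmax over the 6 solver logits.
fn softmax(logits: &[f64; 6]) -> [f64; 6] {
    let max = logits.iter().cloned().fold(f64::NEG_INFINITY, f64::max);
    let exps: Vec<f64> = logits.iter().map(|&x| (x - max).exp()).collect();
    let sum: f64 = exps.iter().sum();
    let mut out = [0.0; 6];
    for (o, e) in out.iter_mut().zip(exps) {
        *o = e / sum;
    }
    out
}

/// Recommended solver = argmax of the class probabilities.
fn pick_solver(logits: &[f64; 6]) -> &'static str {
    let probs = softmax(logits);
    let (best, _) = probs
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .unwrap();
    SOLVERS[best]
}
```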

Training

  • Framework: PyTorch 2.x
  • Loss: Focal loss (γ=2.5) + label smoothing (0.08) for classifier; MSE for regressors
  • Optimizer: AdamW with cosine warm restarts (classifier) / ReduceLROnPlateau (regressors)
  • Data: Synthetic VRP instances with solver distance labels
  • Export: state_dict keys remapped to Candle lin{N}.weight / lin{N}.bias convention
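The export step above is a pure key rename. As a hedged sketch, assuming the PyTorch model is an nn.Sequential of alternating Linear/ReLU layers (so state_dict keys look like net.0.weight, net.2.weight, ...; the net.{i} naming is an assumption about the training code, not confirmed by the source):

```rust
/// Map a hypothetical PyTorch Sequential key ("net.0.weight") to the
/// Candle convention ("lin1.weight"). Returns None for keys that do not
/// match the expected shape.
fn remap_key(torch_key: &str) -> Option<String> {
    let rest = torch_key.strip_prefix("net.")?;
    let (idx, param) = rest.split_once('.')?;
    let idx: usize = idx.parse().ok()?;
    // Linear layers sit at even indices; ReLU layers (odd) have no params.
    Some(format!("lin{}.{}", idx / 2 + 1, param))
}
```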

Usage (Rust / Candle)

use candle_core::{DType, Device, Tensor};
use candle_nn::{linear, Module, VarBuilder};

// `tensors` is the HashMap<String, Tensor> loaded from the safetensors file.
let device = Device::Cpu;
let vb = VarBuilder::from_tensors(tensors, DType::F32, &device);
let lin1 = linear(28, 128, vb.pp("lin1"))?;
let lin2 = linear(128, 64, vb.pp("lin2"))?;
let lin3 = linear(64, 6, vb.pp("lin3"))?;

// Forward pass with ReLU activations; `features` is a (1, 28) input tensor.
let logits = lin3.forward(&lin2.forward(&lin1.forward(&features)?.relu()?)?.relu()?)?;

License

MIT OR Apache-2.0 (same as v2rmp)

Generated by ML Intern

This model repository was generated by ML Intern, an agent for machine learning research and development on the Hugging Face Hub.
