---
license: apache-2.0
tags:
  - robotics
  - haptics
  - spatial-understanding
  - touch-sensing
  - force-estimation
pipeline_tag: robotics
---

# Motoko Spatial 1B

Motoko Spatial 1B is a foundation model for 3D haptic spatial understanding in robotics. It takes raw sensor array input from distributed touch sensors across a robot surface and outputs spatial force maps, contact region predictions, and pressure distribution fields.

## Model Details

- **Model type:** 3D haptic spatial foundation model
- **Parameters:** 1B
- **Architecture:** Hybrid CNN + Transformer
- **Input:** 3D coordinate arrays and sensor pressure grids
- **Output:** Force field maps, contact region masks, and pressure heatmaps
- **License:** Apache-2.0
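To make the input layout concrete, here is a minimal sketch of assembling a per-sensor feature matrix. The sensor count, channel order, and stacking scheme are illustrative assumptions, not taken from this repository.

```python
import numpy as np

# Illustrative input assembly; shapes and channel order are assumptions.
num_sensors = 256  # taxels in the distributed array
rng = np.random.default_rng(0)

coords = rng.random((num_sensors, 3), dtype=np.float32)      # (x, y, z) per sensor
pressure = rng.random((num_sensors, 1), dtype=np.float32)    # scalar pressure
force_torque = np.zeros((num_sensors, 6), dtype=np.float32)  # optional 6-axis F/T

# Stack per-sensor channels into one (num_sensors, 10) feature matrix.
features = np.concatenate([coords, pressure, force_torque], axis=1)
print(features.shape)  # (256, 10)
```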

## Intended Use

Motoko Spatial 1B is designed for robotics systems that need dense touch and contact understanding from distributed tactile sensors.

Primary use cases include:

- Dexterous multi-finger manipulation
- Full-body robot touch sensing
- Terrain and surface contact mapping
- Collision detection
- Safe human-robot contact

## Inputs

The model expects structured haptic sensor input containing:

- 3D sensor coordinates
- Pressure grid values
- Optional force and torque channels
- Sensor timing or sampling metadata when available

Raw haptic arrays should be converted into model input tensors with `preprocessor/feature_extractor.py`.
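As a sketch of what such preprocessing typically involves (the actual logic in `preprocessor/feature_extractor.py` may differ, e.g. by using stored channel statistics), per-channel z-score normalization could look like this:

```python
import numpy as np

def normalize_haptic_array(raw: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Z-score normalize each channel of a (num_sensors, num_channels) array.

    This mirrors a common preprocessing step; the repository's actual
    feature extractor may differ.
    """
    mean = raw.mean(axis=0, keepdims=True)
    std = raw.std(axis=0, keepdims=True)
    return (raw - mean) / (std + eps)

raw = np.array([[0.1, 5.0],
                [0.3, 7.0],
                [0.2, 6.0]], dtype=np.float32)
x = normalize_haptic_array(raw)
print(x.mean(axis=0))  # per-channel means ≈ 0
```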

## Outputs

The model produces spatial predictions for downstream robotics control and perception:

- Spatial force field maps
- Contact region masks
- Pressure distribution heatmaps
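A small post-processing sketch shows how these outputs relate: a binary contact region mask can be derived from a predicted pressure heatmap by thresholding. The grid size and threshold below are arbitrary assumptions for illustration.

```python
import numpy as np

# Simulated model output: a 16x16 pressure heatmap with one contact patch.
heatmap = np.zeros((16, 16), dtype=np.float32)
heatmap[4:8, 4:8] = 0.9

# Derive a contact region mask by thresholding (threshold is illustrative).
contact_mask = heatmap > 0.5
contact_area = int(contact_mask.sum())
print(contact_area)  # 16 cells in contact
```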

## Repository Files

| File | Description |
| --- | --- |
| `config.json` | Architecture definition, including layers, attention heads, hidden size, channel count, and spatial dimensions. |
| `configs/sensor_config.yaml` | Sensor array layout, sampling rate, axes, channel names, and physical units. |
| `preprocessor/preprocessor_config.json` | Signal normalization, channel statistics, windowing, and resampling configuration. |
| `model/model.safetensors` | Trained model weights. The current scaffold contains a placeholder until trained weights are added. |
| `model/model.safetensors.index.json` | Weight index used for loading sharded or indexed safetensors weights. |
| `preprocessor/feature_extractor.py` | Converts raw haptic arrays into normalized model input tensors. |
| `tokenizer_config.json` | Signal tokenizer metadata for quantized or discretized haptic tokens. |
| `tokenizer.json` | Minimal tokenizer vocabulary placeholder. |
| `configs/training_config.yaml` | Training hyperparameters and checkpoint cadence. |
| `examples/inference.py` | Basic inference preprocessing example. |
| `examples/spatial_map.py` | Spatial force map construction example. |

## Limitations

This repository is currently a minimal Hugging Face model scaffold. The included `model/model.safetensors` file is a placeholder and must be replaced with trained weights before production use.
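One way to inspect a `.safetensors` file is to read its header: the format begins with an 8-byte little-endian length, followed by a JSON header listing each tensor's dtype, shape, and byte offsets. The sketch below writes and reads back a tiny placeholder file; the tensor name and shape are made up for illustration.

```python
import json
import struct
import tempfile

def read_safetensors_header(path):
    """Return the JSON header of a .safetensors file.

    The file starts with an 8-byte little-endian header length, then the
    JSON header, then the raw tensor bytes.
    """
    with open(path, "rb") as f:
        (header_len,) = struct.unpack("<Q", f.read(8))
        return json.loads(f.read(header_len))

# Build a minimal placeholder file with one zero-filled float32 tensor.
header = {"w": {"dtype": "F32", "shape": [2], "data_offsets": [0, 8]}}
header_bytes = json.dumps(header).encode("utf-8")

with tempfile.NamedTemporaryFile(suffix=".safetensors", delete=False) as f:
    f.write(struct.pack("<Q", len(header_bytes)))
    f.write(header_bytes)
    f.write(b"\x00" * 8)  # 2 float32 zeros
    path = f.name

print(list(read_safetensors_header(path)))  # ['w']
```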

## Citation

Citation information will be added when a technical report or paper is available.