Body3D – 3D Body Measurement from Intel RealSense D455

Automated body measurement extraction from .bag files recorded with the Intel RealSense D455 depth camera. Captures from 3 viewpoints (front, left, right) are fused into a unified 3D body scan, from which anthropometric measurements are computed.

πŸ“ Measurements Extracted

Measurement             Method                                            Typical Accuracy
Height                  Point cloud bounding box + landmark validation    < 1% error
Weight                  Body volume integration × density (985 kg/m³)     ~5-10% error
Neck Circumference      Cross-section slice + radial filtering            ~10-15% error
Chest Circumference     Cross-section slice with torso isolation          ~5-10% error
Waist Circumference     Narrowest torso cross-section                     ~5-10% error
Hip Circumference       Widest cross-section at hip level                 ~5% error
Wrist Circumference     Forearm-perpendicular cross-section               < 2% error
Shoulder Width          Bi-acromial landmark distance                     < 1% error
BMI                     Derived from height + estimated weight            –

🚀 How to Run It – Step by Step

Step 1: Install Dependencies

# Clone the repo
git clone https://huggingface.co/mdashraf9723/body3d-realsense-measurements
cd body3d-realsense-measurements

# Install Python packages
pip install -r requirements.txt

# On Ubuntu/Debian, you also need:
sudo apt-get install libgl1-mesa-glx libglib2.0-0

Headless server? Use pip install open3d-cpu instead of open3d.
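
For reference, the dependencies implied by the modules described below are roughly the following. This list is an assumption based on the pipeline description, not a copy of the file; install from requirements.txt for the exact pins:

pyrealsense2   # .bag reading and depth alignment (bag_reader.py)
open3d         # point clouds, registration, clustering (registration.py); open3d-cpu on headless servers
mediapipe      # pose landmarks (landmarks.py)
numpy          # numeric operations throughout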


Step 2: Record Your .bag Files (if not done already)

Set up your D455 camera at 3 positions around the person:

         [LEFT CAM]
            ↓  (90°)

            🧍 ← Person (arms slightly out, A-pose)

[FRONT CAM] →    ← [RIGHT CAM]
   (0°)              (-90°)

Option A – Using Intel RealSense Viewer:

  1. Open RealSense Viewer
  2. Enable Depth + Color streams (1280×720, 30fps)
  3. Click Record → save as front.bag
  4. Move camera to left side → record left.bag
  5. Move camera to right side → record right.bag

Option B – Using Python script:

import pyrealsense2 as rs
import time

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_record_to_file("front.bag")  # change name for each view

pipeline.start(config)
try:
    # Poll frames for ~5 seconds so the recorder keeps receiving data
    start = time.time()
    while time.time() - start < 5:
        pipeline.wait_for_frames()
finally:
    pipeline.stop()
print("Done!")

Recording Tips:

  • Stand 1.5–2.5 m from the camera
  • Person should stand still, arms slightly away from the body (A-pose)
  • Record 3–5 seconds per view
  • Good lighting, avoid direct sunlight (it interferes with the IR depth sensor)
  • A clear background helps with person segmentation

Step 3: Run the Measurement Pipeline

Put your .bag files in the same folder as the scripts, then run:

Multi-View (3 .bag files – best accuracy):

python pipeline.py \
    --front front.bag \
    --left left.bag \
    --right right.bag \
    --output my_measurements.json

Multi-View with known camera angles:

python pipeline.py \
    --front front.bag \
    --left left.bag \
    --right right.bag \
    --angles 0 -90 90 \
    --output my_measurements.json

Single View (1 .bag file – less accurate):

python pipeline.py --bag front.bag --output my_measurements.json

Test/Demo mode (no camera needed):

python pipeline.py --demo

Step 4: Read the Results

The output my_measurements.json will look like:

{
  "height_cm": 175.0,
  "estimated_weight_kg": 72.5,
  "bmi": 23.7,
  "neck_circumference_cm": 38.2,
  "chest_circumference_cm": 94.5,
  "waist_circumference_cm": 82.1,
  "hip_circumference_cm": 96.3,
  "wrist_circumference_cm": 16.4,
  "shoulder_width_cm": 44.0
}

It also prints a formatted table in the terminal:

============================================================
  BODY MEASUREMENTS
============================================================
  Height............................. 175.0 cm
  Estimated Weight................... 72.5 kg
  BMI................................ 23.7
  ----------------------------------------
  Neck Circumference................. 38.2 cm
  Chest Circumference................ 94.5 cm
  Waist Circumference................ 82.1 cm
  Hip Circumference.................. 96.3 cm
  Wrist Circumference................ 16.4 cm
  ----------------------------------------
  Shoulder Width..................... 44.0 cm
============================================================
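
A minimal sketch of consuming the output JSON from Python, using the keys shown above; the waist-to-hip ratio and the BMI recomputation (weight in kg divided by height in meters squared) are just examples of downstream use, not part of the pipeline:

import json

with open("my_measurements.json") as f:
    m = json.load(f)

height_m = m["height_cm"] / 100.0
bmi = m["estimated_weight_kg"] / height_m ** 2            # BMI = kg / m^2
whr = m["waist_circumference_cm"] / m["hip_circumference_cm"]
print(f"BMI check: reported {m['bmi']}, recomputed {bmi:.1f}")
print(f"Waist-to-hip ratio: {whr:.2f}")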

Quick Reference

What you have                        Command
3 .bag files (front, left, right)    python pipeline.py --front front.bag --left left.bag --right right.bag
1 .bag file                          python pipeline.py --bag recording.bag
No camera, just testing              python pipeline.py --demo

All CLI Options

python pipeline.py --help

Options:
  --demo                Run with synthetic test data (no camera needed)
  --bag FILE            Single .bag file path
  --front FILE          Front view .bag file
  --left FILE           Left view .bag file
  --right FILE          Right view .bag file
  --angles FLOAT...     Camera angles in degrees (e.g. 0 -90 90)
  --output FILE         Output JSON path (default: measurements.json)
  --voxel-size FLOAT    Point cloud resolution in meters (default: 0.01)
  --max-depth FLOAT     Max depth from camera in meters (default: 3.0)
  --save-ply FILE       Save merged point cloud to PLY file
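
Example combining several of the options above (all flags are listed in the help output; the values here are illustrative):

python pipeline.py \
    --front front.bag --left left.bag --right right.bag \
    --angles 0 -90 90 \
    --voxel-size 0.005 \
    --max-depth 2.5 \
    --save-ply merged_body.ply \
    --output my_measurements.json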

πŸ—οΈ Pipeline Architecture

.bag files (3 views) → bag_reader.py → registration.py → landmarks.py → measurements.py → JSON output

Modules:

  1. bag_reader.py – Reads RealSense .bag files via pyrealsense2, applies spatial/temporal/hole-filling filters, extracts aligned depth+color frames, generates Open3D point clouds with camera intrinsics
  2. registration.py – FPFH feature extraction + RANSAC global registration + point-to-plane ICP refinement to merge the 3 views (sketched after this list); DBSCAN clustering for person segmentation; RANSAC floor plane removal
  3. landmarks.py – MediaPipe Pose detection (33 body landmarks), 2D→3D lifting via depth unprojection with robust neighborhood depth sampling
  4. measurements.py – Angular-sweep boundary extraction on cross-sectional slices with radial distance filtering to isolate the torso from the arms; volume integration for weight; multi-method weighted estimation
  5. pipeline.py – End-to-end CLI orchestrator with single-view, multi-view, and demo modes
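
A minimal Open3D sketch of the coarse-to-fine alignment described for registration.py (FPFH features, RANSAC global registration, point-to-plane ICP). It assumes a recent Open3D (0.12+) and two point clouds already cropped to the subject; thresholds and iteration counts are illustrative, not the repository's exact settings:

import open3d as o3d

def align(source, target, voxel=0.01):
    reg = o3d.pipelines.registration
    # Downsample and estimate normals (needed for FPFH and point-to-plane ICP)
    src, tgt = source.voxel_down_sample(voxel), target.voxel_down_sample(voxel)
    for pcd in (src, tgt):
        pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))

    # FPFH features + RANSAC for a coarse global alignment
    fpfh_param = o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100)
    src_f = reg.compute_fpfh_feature(src, fpfh_param)
    tgt_f = reg.compute_fpfh_feature(tgt, fpfh_param)
    coarse = reg.registration_ransac_based_on_feature_matching(
        src, tgt, src_f, tgt_f,
        mutual_filter=True,
        max_correspondence_distance=voxel * 1.5,
        estimation_method=reg.TransformationEstimationPointToPoint(False),
        ransac_n=3,
        criteria=reg.RANSACConvergenceCriteria(100000, 0.999))

    # Point-to-plane ICP refinement starting from the RANSAC result
    fine = reg.registration_icp(src, tgt, voxel * 0.5, coarse.transformation,
                                reg.TransformationEstimationPointToPlane())
    return fine.transformation

The merged scan is then obtained by transforming the side views into the front view's frame and concatenating the clouds (e.g. left_pcd.transform(T_left); merged = front_pcd + left_pcd + right_pcd).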

πŸ“ File Structure

body3d-realsense-measurements/
├── pipeline.py          # Main entry point (CLI)
├── bag_reader.py        # Module 1: RealSense .bag file reader
├── registration.py      # Module 2: Multi-view point cloud registration
├── landmarks.py         # Module 3: MediaPipe body landmarks
├── measurements.py      # Module 4: Body measurement computation
├── requirements.txt     # Python dependencies
└── README.md            # This file

🧪 How Measurements Work

Cross-Section Circumference (Angular Sweep Method)

For each body measurement (chest, waist, hip, neck, wrist):

  1. Slice the point cloud at the landmark height (±1–2.5 cm tolerance)
  2. Filter by radial distance from the body center (excludes arms for torso measurements)
  3. Project slice points to a 2D plane
  4. Angular sweep: cast 72–90 rays from the centroid, find the outermost point per angle bin
  5. Compute the perimeter of the ordered boundary polygon
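
A minimal NumPy sketch of steps 4-5, assuming the slice has already been cut, filtered, and projected to 2D (steps 1-3). The default bin count matches the 72–90 rays mentioned above; the empty-bin interpolation is an illustrative detail, not necessarily what measurements.py does:

import numpy as np

def sweep_circumference(slice_xy, n_bins=72):
    # slice_xy: (N, 2) cross-section points projected onto a horizontal plane
    rel = slice_xy - slice_xy.mean(axis=0)
    angles = np.arctan2(rel[:, 1], rel[:, 0])        # angle of each point around the centroid
    radii = np.linalg.norm(rel, axis=1)

    # Outermost point per angular bin
    bins = np.minimum((angles + np.pi) / (2 * np.pi) * n_bins, n_bins - 1).astype(int)
    boundary = np.full(n_bins, np.nan)
    for b in range(n_bins):
        hit = bins == b
        if hit.any():
            boundary[b] = radii[hit].max()

    # Fill empty bins by circular interpolation, then sum the edge lengths
    # of the ordered boundary polygon to get the circumference
    valid = np.flatnonzero(~np.isnan(boundary))
    boundary = np.interp(np.arange(n_bins), valid, boundary[valid], period=n_bins)
    theta = (np.arange(n_bins) + 0.5) * 2 * np.pi / n_bins - np.pi
    pts = np.column_stack([boundary * np.cos(theta), boundary * np.sin(theta)])
    return float(np.linalg.norm(np.roll(pts, -1, axis=0) - pts, axis=1).sum())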

Weight Estimation

Three methods are combined with a weighted average:

  1. Volume integration (weight: 50%): Integrate cross-sectional areas along the height axis, multiply by body density (985 kg/m³)
  2. Measurement regression (weight: 15%): Empirical formula from height + chest + waist + hip
  3. Hamwi formula (weight: 5%): Clinical height-based estimate
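
A sketch of how the three estimates can be combined, assuming the per-method values are already computed. The weights are those listed above (normalized by their sum in the weighted average), and the volume term uses the 985 kg/m³ density from the measurements table; function and parameter names are illustrative:

import numpy as np

def estimate_weight(slice_areas_m2, slice_spacing_m, regression_kg, hamwi_kg, density=985.0):
    # Method 1: integrate cross-sectional areas along the height axis, times body density
    volumetric_kg = np.sum(slice_areas_m2) * slice_spacing_m * density

    estimates = np.array([volumetric_kg, regression_kg, hamwi_kg])
    weights = np.array([0.50, 0.15, 0.05])      # method weights listed above
    return float(np.dot(weights, estimates) / weights.sum())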

Person Segmentation

  1. RANSAC floor detection: Fits plane to largest flat surface, removes floor points
  2. DBSCAN clustering: Groups remaining points by proximity (5 cm neighborhood)
  3. Largest cluster: Selected as the person
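
A minimal Open3D sketch of the three segmentation steps above; the plane-fit distance threshold and DBSCAN min_points are illustrative, while the 5 cm neighborhood (eps=0.05) follows the description:

import numpy as np
import open3d as o3d

def segment_person(pcd):
    # 1. RANSAC floor detection: fit a plane and drop its inlier points
    _, floor_idx = pcd.segment_plane(distance_threshold=0.02, ransac_n=3, num_iterations=1000)
    body = pcd.select_by_index(floor_idx, invert=True)

    # 2. DBSCAN clustering with a 5 cm neighborhood (label -1 = noise)
    labels = np.asarray(body.cluster_dbscan(eps=0.05, min_points=20))

    # 3. Keep the largest cluster as the person
    largest = np.bincount(labels[labels >= 0]).argmax()
    return body.select_by_index(np.flatnonzero(labels == largest).tolist())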

🧪 Validated Results (Synthetic)

Tested on a synthetic body with known ground-truth dimensions:

✓ height_cm.............. GT: 175.0  Est: 175.0  Err: 0.0 (0.0%)
✓ chest_circumference.... GT:  94.2  Est:  85.7  Err: 8.5 (9.1%)
✓ waist_circumference.... GT:  81.7  Est:  85.6  Err: 3.9 (4.7%)
✓ hip_circumference...... GT:  97.4  Est:  93.1  Err: 4.3 (4.5%)
⚠ neck_circumference..... GT:  37.7  Est:  43.0  Err: 5.3 (14.2%)
✓ wrist_circumference.... GT:  15.7  Est:  15.6  Err: 0.1 (0.7%)
✓ shoulder_width......... GT:  44.0  Est:  44.0  Err: 0.0 (0.0%)

🔧 Advanced: SMPL-Based Measurements

For higher accuracy (~1–2 cm error), you can integrate SMPL body model fitting:

  1. Register at smpl.is.tue.mpg.de and download model files
  2. Install smplx: pip install smplx
  3. Install SMPL-Anthropometry: pip install git+https://github.com/DavidBoja/SMPL-Anthropometry

The pipeline can be extended to fit SMPL to the merged point cloud and extract measurements from the parametric mesh, giving more precise circumferences through anatomical priors.
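
A heavily hedged sketch of the first step of that extension: loading SMPL with smplx and obtaining the mesh vertices that circumferences would later be measured on. Fitting the shape and pose parameters to the merged point cloud, and the SMPL-Anthropometry measurement call itself, are not shown; the model path is a placeholder:

import torch
import smplx

# Assumes the downloaded model files live under models/smpl/ (e.g. SMPL_NEUTRAL.pkl)
model = smplx.create("models", model_type="smpl", gender="neutral")
betas = torch.zeros(1, 10)                        # shape coefficients, to be optimized against the scan
output = model(betas=betas, return_verts=True)
vertices = output.vertices.detach().numpy()[0]    # (6890, 3) SMPL mesh vertices in the canonical pose
print(vertices.shape)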


📚 References

  • ArtEq (CVPR 2023) – SE(3)-equivariant SMPL fitting from point clouds
  • ETCH (2025) – Clothed human body estimation from point clouds
  • Pose-Independent Anthropometry (2025) – Body measurements from sparse landmarks
  • A2B (2024) – Bidirectional anthropometric ↔ SMPL-X shape mapping

⚠️ Limitations

  • 3-view gaps: The back of the torso may have incomplete coverage → affects waist/hip accuracy
  • Clothing: Loose clothing adds to circumferences. Use the ETCH method for clothed subjects
  • Weight estimation: Volume-based weight has ~5-10% error; depends on scan completeness
  • Small features: Wrist/neck require good point cloud density at those locations
  • Coordinate system: Assumes Y-up. Different camera orientations may need axis remapping