# Body3D: 3D Body Measurement from Intel RealSense D455
Automated body measurement extraction from .bag files recorded with the Intel RealSense D455 depth camera. Captures from 3 viewpoints (front, left, right) are fused into a unified 3D body scan, from which anthropometric measurements are computed.
## Measurements Extracted
| Measurement | Method | Typical Accuracy |
|---|---|---|
| Height | Point cloud bounding box + landmark validation | < 1% error |
| Weight | Body volume integration × density (985 kg/m³) | ~5–10% error |
| Neck Circumference | Cross-section slice + radial filtering | ~10–15% error |
| Chest Circumference | Cross-section slice with torso isolation | ~5–10% error |
| Waist Circumference | Narrowest torso cross-section | ~5–10% error |
| Hip Circumference | Widest cross-section at hip level | ~5% error |
| Wrist Circumference | Forearm-perpendicular cross-section | < 2% error |
| Shoulder Width | Bi-acromial landmark distance | < 1% error |
| BMI | Derived from height + estimated weight | n/a |
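The BMI row is plain arithmetic; a one-line sketch of the derivation:

```python
def bmi(height_cm: float, weight_kg: float) -> float:
    """Standard BMI: weight (kg) divided by height (m) squared."""
    return weight_kg / (height_cm / 100.0) ** 2

# The sample output in Step 4 (175 cm, 72.5 kg) gives BMI 23.7
print(round(bmi(175.0, 72.5), 1))  # 23.7
```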
## How to Run It: Step by Step

### Step 1: Install Dependencies
```bash
# Clone the repo
git clone https://huggingface.co/mdashraf9723/body3d-realsense-measurements
cd body3d-realsense-measurements

# Install Python packages
pip install -r requirements.txt

# On Ubuntu/Debian, you also need:
sudo apt-get install libgl1-mesa-glx libglib2.0-0
```
Headless server? Use `pip install open3d-cpu` instead of `open3d`.
### Step 2: Record Your .bag Files (if not done already)

Set up your D455 camera at 3 positions around the person:

```
            [LEFT CAM]
               ↓ (90°)
               🧍 ← Person (arms slightly out, A-pose)
[FRONT CAM] →     ← [RIGHT CAM]
    (0°)             (-90°)
```
**Option A: Using Intel RealSense Viewer**

- Open RealSense Viewer
- Enable Depth + Color streams (1280×720, 30 fps)
- Click Record and save as `front.bag`
- Move the camera to the left side and record `left.bag`
- Move the camera to the right side and record `right.bag`
**Option B: Using a Python script**

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
config.enable_record_to_file("front.bag")  # change the name for each view

pipeline.start(config)
try:
    # Poll frames so the pipeline keeps streaming while the recorder writes:
    # 150 frames at 30 fps ≈ 5 seconds of footage
    for _ in range(150):
        pipeline.wait_for_frames()
finally:
    pipeline.stop()
print("Done!")
```
**Recording Tips:**

- Stand 1.5–2.5 m from the camera
- The person should stand still, arms slightly away from the body (A-pose)
- Record 3–5 seconds per view
- Use good lighting, but avoid direct sunlight (it interferes with the IR depth sensor)
- A clear background helps with person segmentation
### Step 3: Run the Measurement Pipeline

Put your `.bag` files in the same folder as the scripts, then run:

**Multi-view (3 `.bag` files, best accuracy):**

```bash
python pipeline.py \
    --front front.bag \
    --left left.bag \
    --right right.bag \
    --output my_measurements.json
```
**Multi-view with known camera angles:**

```bash
python pipeline.py \
    --front front.bag \
    --left left.bag \
    --right right.bag \
    --angles 0 -90 90 \
    --output my_measurements.json
```
**Single view (1 `.bag` file, less accurate):**

```bash
python pipeline.py --bag front.bag --output my_measurements.json
```

**Test/demo mode (no camera needed):**

```bash
python pipeline.py --demo
```
### Step 4: Read the Results

The output `my_measurements.json` will look like:

```json
{
  "height_cm": 175.0,
  "estimated_weight_kg": 72.5,
  "bmi": 23.7,
  "neck_circumference_cm": 38.2,
  "chest_circumference_cm": 94.5,
  "waist_circumference_cm": 82.1,
  "hip_circumference_cm": 96.3,
  "wrist_circumference_cm": 16.4,
  "shoulder_width_cm": 44.0
}
```
It also prints a formatted table in the terminal:

```
============================================================
                    BODY MEASUREMENTS
============================================================
Height............................. 175.0 cm
Estimated Weight................... 72.5 kg
BMI................................ 23.7
----------------------------------------
Neck Circumference................. 38.2 cm
Chest Circumference................ 94.5 cm
Waist Circumference................ 82.1 cm
Hip Circumference.................. 96.3 cm
Wrist Circumference................ 16.4 cm
----------------------------------------
Shoulder Width..................... 44.0 cm
============================================================
```
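The JSON output is easy to consume downstream. A small sketch parsing a subset of the sample above (in practice you would read `my_measurements.json` from disk with `json.load`):

```python
import json

# Inline copy of part of the sample output shown above;
# in practice: data = json.load(open("my_measurements.json"))
raw = """{
  "height_cm": 175.0,
  "estimated_weight_kg": 72.5,
  "bmi": 23.7,
  "waist_circumference_cm": 82.1,
  "hip_circumference_cm": 96.3
}"""

data = json.loads(raw)
waist_hip = data["waist_circumference_cm"] / data["hip_circumference_cm"]
print(f"BMI: {data['bmi']:.1f}, waist-to-hip ratio: {waist_hip:.2f}")
```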
## Quick Reference

| What you have | Command |
|---|---|
| 3 `.bag` files (front, left, right) | `python pipeline.py --front front.bag --left left.bag --right right.bag` |
| 1 `.bag` file | `python pipeline.py --bag recording.bag` |
| No camera, just testing | `python pipeline.py --demo` |
## All CLI Options

```
python pipeline.py --help

Options:
  --demo              Run with synthetic test data (no camera needed)
  --bag FILE          Single .bag file path
  --front FILE        Front view .bag file
  --left FILE         Left view .bag file
  --right FILE        Right view .bag file
  --angles FLOAT...   Camera angles in degrees (e.g. 0 -90 90)
  --output FILE       Output JSON path (default: measurements.json)
  --voxel-size FLOAT  Point cloud resolution in meters (default: 0.01)
  --max-depth FLOAT   Max depth from camera in meters (default: 3.0)
  --save-ply FILE     Save merged point cloud to PLY file
```
## Pipeline Architecture

```
.bag files (3 views) → bag_reader.py → registration.py → landmarks.py → measurements.py → JSON output
```
**Modules:**

- `bag_reader.py`: Reads RealSense `.bag` files via `pyrealsense2`, applies spatial/temporal/hole-filling filters, extracts aligned depth+color frames, and generates Open3D point clouds with camera intrinsics
- `registration.py`: FPFH feature extraction + RANSAC global registration + point-to-plane ICP refinement to merge 3 views; DBSCAN clustering for person segmentation; RANSAC floor plane removal
- `landmarks.py`: MediaPipe Pose detection (33 body landmarks), 2D→3D lifting via depth unprojection with robust neighborhood depth sampling
- `measurements.py`: Angular-sweep boundary extraction on cross-sectional slices with radial distance filtering to isolate the torso from the arms; volume integration for weight; multi-method weighted estimation
- `pipeline.py`: End-to-end CLI orchestrator with single-view, multi-view, and demo modes
## File Structure

```
body3d-realsense-measurements/
├── pipeline.py          # Main entry point (CLI)
├── bag_reader.py        # Module 1: RealSense .bag file reader
├── registration.py      # Module 2: Multi-view point cloud registration
├── landmarks.py         # Module 3: MediaPipe body landmarks
├── measurements.py      # Module 4: Body measurement computation
├── requirements.txt     # Python dependencies
└── README.md            # This file
```
## How Measurements Work

### Cross-Section Circumference (Angular Sweep Method)

For each body measurement (chest, waist, hip, neck, wrist):

1. Slice the point cloud at the landmark height (±1–2.5 cm tolerance)
2. Filter by radial distance from the body center (excludes arms for torso measurements)
3. Project slice points onto a 2D plane
4. Angular sweep: cast 72–90 rays from the centroid and keep the outermost point per angle bin
5. Compute the perimeter of the ordered boundary polygon
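The sweep and perimeter steps can be sketched with NumPy. This is an illustrative reimplementation, not the pipeline's actual code, and it assumes the slice points are already projected to 2D:

```python
import numpy as np

def angular_sweep_perimeter(points_2d: np.ndarray, n_bins: int = 72) -> float:
    """Estimate the circumference of a 2D cross-section slice.

    For each of n_bins angle bins around the centroid, keep only the
    outermost point, then sum the edge lengths of the closed polygon
    through those boundary points (in angular order).
    """
    centered = points_2d - points_2d.mean(axis=0)
    angles = np.arctan2(centered[:, 1], centered[:, 0])
    radii = np.linalg.norm(centered, axis=1)
    bins = ((angles + np.pi) / (2 * np.pi) * n_bins).astype(int) % n_bins

    boundary = []
    for b in range(n_bins):          # visiting bins in order sorts by angle
        mask = bins == b
        if mask.any():
            boundary.append(centered[mask][np.argmax(radii[mask])])
    boundary = np.array(boundary)

    # Perimeter of the closed polygon (wrap last vertex back to first)
    edges = np.diff(np.vstack([boundary, boundary[:1]]), axis=0)
    return float(np.linalg.norm(edges, axis=1).sum())

# Sanity check: points on a circle of radius 0.15 m;
# true circumference 2*pi*R ≈ 0.942 m
theta = np.linspace(0, 2 * np.pi, 500, endpoint=False)
circle = 0.15 * np.column_stack([np.cos(theta), np.sin(theta)])
print(round(angular_sweep_perimeter(circle), 3))
```

The inscribed polygon slightly underestimates a smooth boundary; with 72 bins the error on a circle is well under 1%.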
### Weight Estimation

Three methods combined with a weighted average:

- Volume integration (weight: 50%): integrate cross-sectional areas along the height axis, then multiply by body density (985 kg/m³)
- Measurement regression (weight: 15%): empirical formula from height + chest + waist + hip
- Hamwi formula (weight: 5%): clinical height-based estimate
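The volume integration and blending can be sketched as follows. The regression and Hamwi estimates are passed in as plain numbers, since their exact formulas aren't given in this README, and the 50/15/5 weights are renormalized to sum to 1 (an assumption about how the pipeline combines them):

```python
import numpy as np

BODY_DENSITY = 985.0  # kg/m^3, as stated above

def weight_from_volume(slice_areas_m2: np.ndarray, slice_spacing_m: float) -> float:
    """Integrate cross-sectional areas along the height axis,
    then multiply the resulting volume by body density."""
    volume_m3 = float(np.sum(slice_areas_m2) * slice_spacing_m)
    return volume_m3 * BODY_DENSITY

def blend_estimates(volume_kg: float, regression_kg: float, hamwi_kg: float) -> float:
    """Weighted average with the 50/15/5 weights, renormalized."""
    weights = np.array([0.50, 0.15, 0.05])
    estimates = np.array([volume_kg, regression_kg, hamwi_kg])
    return float(np.dot(weights, estimates) / weights.sum())

# Toy check: a solid cylinder, radius 0.15 m, height 1.75 m, one slice per cm
areas = np.full(175, np.pi * 0.15 ** 2)
vol_kg = weight_from_volume(areas, 0.01)
print(round(vol_kg, 1))  # ~122 kg; a real body is far from cylindrical

# Blend three hypothetical per-method estimates
print(round(blend_estimates(70.0, 75.0, 72.0), 1))  # 71.2
```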
### Person Segmentation

- RANSAC floor detection: fits a plane to the largest flat surface and removes the floor points
- DBSCAN clustering: groups the remaining points by proximity (5 cm neighborhood)
- Largest cluster: selected as the person
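A toy stand-in for the clustering step (the actual pipeline uses RANSAC plane fitting plus Open3D's `cluster_dbscan`): this NumPy flood fill groups points whose neighbors lie within 5 cm and keeps the largest group, which is the "person" selection logic in miniature.

```python
import numpy as np

def largest_cluster(points: np.ndarray, eps: float = 0.05) -> np.ndarray:
    """Group points by flood fill over an eps-radius (5 cm) neighborhood
    and return the largest group. O(n^2), for illustration only."""
    labels = -np.ones(len(points), dtype=int)
    current = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack = [i]
        labels[i] = current
        while stack:
            j = stack.pop()
            dists = np.linalg.norm(points - points[j], axis=1)
            for k in np.flatnonzero((dists < eps) & (labels == -1)):
                labels[k] = current
                stack.append(k)
        current += 1
    sizes = np.bincount(labels)
    return points[labels == np.argmax(sizes)]

# Two blobs: 200 tight "person" points and 50 "noise" points 1 m away
rng = np.random.default_rng(0)
person = rng.normal([0.0, 0.0, 1.0], 0.01, size=(200, 3))
noise = rng.normal([1.0, 0.0, 1.0], 0.01, size=(50, 3))
cloud = np.vstack([person, noise])
print(len(largest_cluster(cloud)))  # 200
```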
## Validated Results (Synthetic)

Tested on a synthetic body with known ground-truth dimensions:

```
height_cm.............. GT: 175.0  Est: 175.0  Err: 0.0  (0.0%)
chest_circumference.... GT:  94.2  Est:  85.7  Err: 8.5  (9.1%)
waist_circumference.... GT:  81.7  Est:  85.6  Err: 3.9  (4.7%)
hip_circumference...... GT:  97.4  Est:  93.1  Err: 4.3  (4.5%)
neck_circumference..... GT:  37.7  Est:  43.0  Err: 5.3  (14.2%)
wrist_circumference.... GT:  15.7  Est:  15.6  Err: 0.1  (0.7%)
shoulder_width......... GT:  44.0  Est:  44.0  Err: 0.0  (0.0%)
```
## Advanced: SMPL-Based Measurements

For higher accuracy (~1–2 cm error), you can integrate SMPL body model fitting:

1. Register at smpl.is.tue.mpg.de and download the model files
2. Install `smplx`: `pip install smplx`
3. Install SMPL-Anthropometry: `pip install git+https://github.com/DavidBoja/SMPL-Anthropometry`
The pipeline can be extended to fit SMPL to the merged point cloud and extract measurements from the parametric mesh, giving more precise circumferences through anatomical priors.
## References

- ArtEq (CVPR 2023): SE(3)-equivariant SMPL fitting from point clouds
- ETCH (2025): clothed human body estimation from point clouds
- Pose-Independent Anthropometry (2025): body measurements from sparse landmarks
- A2B (2024): bidirectional anthropometric ↔ SMPL-X shape mapping
## Limitations

- 3-view gaps: the back of the torso may have incomplete coverage, which affects waist/hip accuracy
- Clothing: loose clothing inflates circumferences; use the ETCH method for clothed subjects
- Weight estimation: volume-based weight has ~5–10% error and depends on scan completeness
- Small features: wrist/neck measurements require good point cloud density at those locations
- Coordinate system: assumes Y-up; different camera orientations may need axis remapping