Examples from OCHuman-Pose: original OCHuman instances (orange) and new OCHuman-Pose instances (magenta and blue).

OCHuman-Pose

OCHuman-Pose is an annotation extension of the original OCHuman dataset for evaluating human pose estimation in crowded and heavily occluded scenes.

This dataset does not add new images. It only adds and restructures annotations for images from the original OCHuman dataset.

To use this dataset, you must download the images separately from the original OCHuman source (see Loading the Data below).

Dataset Summary

OCHuman was originally designed for the pose-to-segmentation task. Because of this, many visible people in the images were not part of the standard pose evaluation annotations. This causes problems when OCHuman is used as a general in-the-wild crowded human pose benchmark: detections of real but unannotated people may be counted as false positives.

OCHuman-Pose addresses this by adding COCO-style keypoint annotations to previously missing person instances in the original OCHuman images.

Important points:

  • No new images are added.
  • Images must be downloaded from the original OCHuman dataset.
  • Annotations are provided in COCO format.
  • OCHuman-Pose contains bounding boxes and COCO-style keypoints.
  • OCHuman-Pose does not contain segmentation masks.
  • Original OCHuman masks can theoretically be mapped to the subset of original instances, but they are intentionally omitted here to avoid confusion.
  • The dataset is intended primarily for evaluation, not training.

Dataset Statistics

Original OCHuman vs. OCHuman-Pose

| Split | Images | Original OCHuman keypoint instances | OCHuman-Pose keypoint instances | Added / reinstated keypoint instances |
|---|---|---|---|---|
| validation | 2,500 | 4,291 | 6,546 | +2,255 |
| test | 2,231 | 3,819 | 5,863 | +2,044 |
| total | 4,731 | 8,110 | 12,409 | +4,299 |

OCHuman-Pose adds more than 50% additional pose instances compared to the original OCHuman pose annotations.

Annotation Format

The annotations follow the COCO keypoint format.

Each person annotation contains:

  • bbox
  • keypoints
  • num_keypoints
  • category_id
  • standard COCO-style image and annotation metadata

The keypoints follow the standard 17-keypoint COCO human pose layout (a short parsing sketch follows the list):

  1. nose
  2. left eye
  3. right eye
  4. left ear
  5. right ear
  6. left shoulder
  7. right shoulder
  8. left elbow
  9. right elbow
  10. left wrist
  11. right wrist
  12. left hip
  13. right hip
  14. left knee
  15. right knee
  16. left ankle
  17. right ankle
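
For illustration, a single person annotation looks roughly like the sketch below, and the flat keypoints list can be unpacked into (x, y, visibility) triplets. All numeric values are made-up placeholders, not taken from the dataset:

# Hypothetical example of one OCHuman-Pose person annotation in COCO keypoint
# format; every value here is a placeholder for illustration only.
annotation = {
    "id": 1,
    "image_id": 5,
    "category_id": 1,                      # person
    "bbox": [120.0, 80.0, 210.0, 430.0],   # [x, y, width, height]
    "num_keypoints": 15,                   # keypoints with visibility > 0
    "keypoints": [0.0] * 51,               # 17 triplets of (x, y, visibility)
}

# Unpack into 17 (x, y, v) triplets; in COCO, v = 0 (not labeled),
# 1 (labeled but occluded), or 2 (labeled and visible).
triplets = [
    annotation["keypoints"][i:i + 3]
    for i in range(0, len(annotation["keypoints"]), 3)
]
assert len(triplets) == 17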

Annotation Process

The annotation process used:

  • two professional full-time in-house annotators,
  • double annotation of a subset of instances to estimate annotation variance,
  • visual inspection of a random subset by a researcher experienced in human pose estimation,
  • a dedicated 2D human pose annotation GUI designed to reduce common annotation errors such as left-right flips.

The dataset does not add new bounding boxes beyond those already present in OCHuman. Therefore, very small or insignificant background people may still be unannotated.

Below is a comparison of OCHuman-Pose annotation quality (blue) and COCO (orange), measured as per-keypoint sigma.

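Per-keypoint sigma refers to the keypoint-specific constants used by the COCO Object Keypoint Similarity (OKS) metric, which model per-joint annotation variance. As a rough illustration of where these sigmas enter the metric, here is a minimal OKS sketch using the standard COCO sigmas (this is not the evaluation code shipped with this dataset):

import numpy as np

# Standard COCO per-keypoint sigmas, nose through right ankle;
# a smaller sigma means the keypoint is annotated more consistently.
COCO_SIGMAS = np.array([
    0.026, 0.025, 0.025, 0.035, 0.035, 0.079, 0.079, 0.072, 0.072,
    0.062, 0.062, 0.107, 0.107, 0.087, 0.087, 0.089, 0.089,
])

def oks(gt_kpts, dt_kpts, area, sigmas=COCO_SIGMAS):
    # gt_kpts, dt_kpts: arrays of shape (17, 3) with (x, y, visibility);
    # area: ground-truth instance area used as the scale term.
    gt = np.asarray(gt_kpts, dtype=float)
    dt = np.asarray(dt_kpts, dtype=float)
    visible = gt[:, 2] > 0                  # only labeled keypoints count
    if not visible.any():
        return 0.0
    d2 = (gt[:, 0] - dt[:, 0]) ** 2 + (gt[:, 1] - dt[:, 1]) ** 2
    e = d2 / (2.0 * area * (2.0 * sigmas) ** 2 + 1e-9)
    return float(np.exp(-e)[visible].mean())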

Intended Use

OCHuman-Pose is intended for:

  • evaluation of 2D human pose estimation,
  • evaluation of crowded-scene human pose estimation,
  • analysis of pose estimation under occlusion and close person-person interaction,
  • comparison of top-down, bottom-up, detector-free, and iterative pose-estimation methods.

The dataset is especially useful when evaluating systems that detect people first and then estimate pose, because it reduces the false-positive problem caused by missing person annotations in the original OCHuman benchmark.

Out-of-Scope Use

OCHuman-Pose is not intended for:

  • training large pose-estimation models,
  • segmentation evaluation,
  • pose-to-segmentation evaluation,
  • mask detection,
  • human parsing,
  • evaluating segmentation mAP.

The dataset has no training split and is relatively small. It should be treated primarily as an evaluation benchmark.

Dataset Splits

The dataset follows the original OCHuman validation and test split structure:

| Split | Images | Pose annotations |
|---|---|---|
| validation | 2,500 | 6,546 |
| test | 2,231 | 5,863 |

The original OCHuman dataset contains 5,081 images, but only 4,731 are used here, following the subset evaluated in the original OCHuman benchmark. The remaining 350 ignored images are not included in OCHuman-Pose.
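
As a quick sanity check of these counts, the split sizes can be read directly from the annotation files. This is a sketch; the file names follow the example layout shown in Loading the Data below:

import json

# Count images and pose annotations per split; the paths match the
# example layout under "Loading the Data".
for split in ("val", "test"):
    path = f"OCHuman-Pose/annotations/ochuman_pose_{split}.json"
    with open(path) as f:
        data = json.load(f)
    print(split, len(data["images"]), "images,",
          len(data["annotations"]), "pose annotations")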

Results Reported in BBoxMaskPose v2

The BBoxMaskPose v2 paper reports that evaluation on OCHuman-Pose better reflects real crowded-scene pose performance than the original OCHuman annotations.

For example, the gap between ViTPose-B evaluated with ground-truth bounding boxes and with detected bounding boxes is much smaller on OCHuman-Pose than on the original OCHuman benchmark:

| Input boxes | OCHuman val AP | OCHuman test AP | OCHuman-Pose val AP | OCHuman-Pose test AP |
|---|---|---|---|---|
| Ground-truth boxes | 90.9 | 91.0 | 86.4 | 86.2 |
| Detected boxes | 44.5 | 44.1 | 75.3 | 76.1 |

On the validation set, for instance, the ground-truth vs. detected box gap shrinks from 46.4 AP to 11.1 AP. This suggests that the original OCHuman evaluation partly confounds pose-estimation errors with the effects of missing annotations.

Loading the Data

This dataset provides annotations only. The images are not redistributed.

Recommended usage:

  1. Download the original OCHuman images from: https://github.com/liruilong940607/ochumanapi

  2. Download the OCHuman-Pose annotations from this Hugging Face repository.

  3. Place or symlink the original images so that the file_name fields in the COCO-format annotation files resolve correctly.

  4. Use pycocotools, xtcocotools, or OCHumanApi for COCO-style evaluation (a minimal loading and evaluation sketch follows the example structure below).

Example structure:

OCHuman-Pose/
β”œβ”€β”€ annotations/
β”‚   β”œβ”€β”€ ochuman_pose_val.json
β”‚   └── ochuman_pose_test.json
└── images/
    └── ... original OCHuman images ...
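
A minimal sketch of loading the annotations and running COCO-style keypoint evaluation with pycocotools, assuming the layout above; the prediction file name is a placeholder for your own results file:

from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

# Load the OCHuman-Pose validation annotations (COCO keypoint format).
coco_gt = COCO("OCHuman-Pose/annotations/ochuman_pose_val.json")

# Inspect the person annotations of the first image.
img_ids = coco_gt.getImgIds()
ann_ids = coco_gt.getAnnIds(imgIds=img_ids[0])
print(coco_gt.loadAnns(ann_ids))

# Evaluate keypoint predictions stored in the COCO results format
# ("my_predictions.json" is a placeholder).
coco_dt = coco_gt.loadRes("my_predictions.json")
evaluator = COCOeval(coco_gt, coco_dt, iouType="keypoints")
evaluator.evaluate()
evaluator.accumulate()
evaluator.summarize()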

Citation

If you use OCHuman-Pose, please cite BBoxMaskPose v2:

@article{purkrabek2026bboxmaskposev2,
  title         = {BBoxMaskPose v2: Expanding Mutual Conditioning to 3D},
  author        = {Purkrabek, Miroslav and Kolomiiets, Constantin and Matas, Jiri},
  journal       = {arXiv preprint arXiv:2601.15200},
  year          = {2026}
}