---
license: other
task_categories:
- other
tags:
- smplx
- 3d-human
- pose-estimation
- synthetic
pretty_name: SMPLX Anything
size_categories:
- 1M<n<10M
viewer: false
---

# SMPLX Anything

`smplx_anything` is a unified-format preprocessed bundle (`processed_*_resized`) of four SMPL-X-based 3D human datasets, packaged into a single tar archive for easy distribution.

## Sub-datasets

| Sub-dataset | Folder | Project page | Paper |
|---|---|---|---|
| **AGORA** | `processed_agora_resized/` | [agora.is.tue.mpg.de](https://agora.is.tue.mpg.de/) | [Patel et al., CVPR 2021](https://arxiv.org/abs/2104.14643) |
| **Anny-One** | `processed_annyone_resized/` | [Anny-One @ NAVER LABS Europe](https://europe.naverlabs.com/research/vision-foundation-models-for-human-understanding/anny-one/) · [GitHub](https://github.com/naver/anny) | [Baradel et al., 2025](https://arxiv.org/abs/2511.03589) |
| **BEDLAM** | `processed_bedlam_resized/` | [bedlam.is.tue.mpg.de](https://bedlam.is.tue.mpg.de/) | [Black et al., CVPR 2023](https://arxiv.org/abs/2306.16940) |
| **BEDLAM 2.0** | `processed_bedlam2_resized/` | [bedlam2.is.tue.mpg.de](https://bedlam2.is.tue.mpg.de/) | [BEDLAM 2.0, NeurIPS 2025](https://arxiv.org/abs/2511.14394) |

> Each sub-dataset retains the license of its original authors. **Please review and comply with the original licenses before use.**

---

## File layout

The full bundle is packaged into a single `tar` archive and **split into 40 GB chunks** for upload.

```
smplx_anything.tar.aa
smplx_anything.tar.ab
smplx_anything.tar.ac
...
SHA256SUMS            # integrity checksums for each split part
README.md
```

The split-file suffixes (`aa`, `ab`, ...) follow GNU `split`'s default scheme.

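As a throwaway illustration of the suffix scheme (file names here are made up and deleted afterwards), splitting any file and concatenating the parts in glob order reproduces the original byte-for-byte:

```shell
# Split an 8-byte file into 3-byte chunks: produces demo.bin.aa, demo.bin.ab, demo.bin.ac
printf 'abcdefgh' > demo.bin
split -b 3 demo.bin demo.bin.
ls demo.bin.*
# Shell glob expansion is sorted, so `cat` reassembles the parts in the right order
cat demo.bin.* | cmp - demo.bin && echo "round-trip OK"
rm -f demo.bin demo.bin.*
```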
---

## Download

### 1) Using the `hf` CLI (recommended)

```bash
pip install -U "huggingface_hub>=1.0"

hf download Yong-Hoon/smplx_anything \
  --repo-type dataset \
  --local-dir ./smplx_anything
```

> The legacy `huggingface-cli` was deprecated in v1.0 and replaced by `hf`.
> If you must use the old CLI: `huggingface-cli download Yong-Hoon/smplx_anything --repo-type dataset --local-dir ./smplx_anything`

### 2) Using `git lfs`

```bash
git lfs install
git clone https://huggingface.co/datasets/Yong-Hoon/smplx_anything
```

### 3) Downloading only some parts

```bash
hf download Yong-Hoon/smplx_anything \
  smplx_anything.tar.aa smplx_anything.tar.ab \
  --repo-type dataset \
  --local-dir ./smplx_anything
```

---

## Integrity check (optional)

```bash
cd ./smplx_anything
sha256sum -c SHA256SUMS
```

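For reference, `sha256sum -c` prints one `NAME: OK` line per verified file and exits non-zero if any part is corrupt. A self-contained toy example (file names are made up):

```shell
# Build a one-line checksum file and verify it; prints "part.aa: OK"
printf 'hello' > part.aa
sha256sum part.aa > SHA256SUMS.demo
sha256sum -c SHA256SUMS.demo
rm -f part.aa SHA256SUMS.demo
```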
---

## Extraction

Concatenate the split parts and pipe them straight into `tar`. **You do not need to first reassemble a single `.tar` file on disk.**

```bash
cd ./smplx_anything
cat smplx_anything.tar.* | tar -xvf -
```

After extraction, the following four folders will appear:

```
processed_agora_resized/
processed_annyone_resized/
processed_bedlam_resized/
processed_bedlam2_resized/
```

> If you are tight on disk space, you can delete the split parts after extraction.
> However, in case extraction fails midway, we recommend running `sha256sum -c SHA256SUMS` first and only deleting the parts after a clean extraction.

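The whole pack → split → stream-extract pipeline can be sanity-checked in miniature with a throwaway directory (names here are made up; chunk size shrunk from 40 GB to 4 KB):

```shell
# Pack a tiny directory into a tar stream and split it into 4 KB parts
mkdir -p demo_src && echo "hello" > demo_src/a.txt
tar -cf - demo_src | split -b 4096 - demo.tar.
rm -rf demo_src                    # pretend we only have the split parts

# Stream the parts straight into tar; no intermediate .tar on disk
cat demo.tar.* | tar -xf -
cat demo_src/a.txt                 # prints: hello
rm -rf demo_src demo.tar.*
```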
### If you prefer a single `.tar` file

```bash
cat smplx_anything.tar.* > smplx_anything.tar
tar -xvf smplx_anything.tar
```

### Windows users

Concatenate the split parts in binary mode, then extract the resulting `smplx_anything.tar` with 7-Zip, WinRAR, or the built-in `tar.exe` (Windows 10 and later). Avoid `Get-Content -Encoding Byte`: it was removed in PowerShell 7, and it would buffer entire 40 GB parts in memory anyway. Streaming the parts with .NET file APIs sidesteps both problems:

```powershell
# Stream each part into the output file in alphabetical order
$out = [System.IO.File]::Create("$PWD\smplx_anything.tar")
Get-ChildItem .\smplx_anything.tar.* | Sort-Object Name | ForEach-Object {
    $in = [System.IO.File]::OpenRead($_.FullName)
    $in.CopyTo($out)
    $in.Dispose()
}
$out.Dispose()
# Then extract smplx_anything.tar with 7-Zip (or: tar -xf smplx_anything.tar)
```

---

## How the splits were produced (reproducibility)

The uploaded split files were created with the commands below.

```bash
# Bundle the four folders into one tar stream and split into 40 GB chunks
tar -cf - \
    processed_agora_resized \
    processed_annyone_resized \
    processed_bedlam2_resized \
    processed_bedlam_resized \
    | split -b 40G - smplx_anything.tar.

# Generate integrity checksums
sha256sum smplx_anything.tar.* > SHA256SUMS
```

- `tar -cf -` — bundle the four folders into a tar stream on stdout (no compression).
- `split -b 40G -` — read stdin and split it into 40 GB chunks; suffixes default to `aa`, `ab`, ...
- Output file prefix: `smplx_anything.tar.`

> No additional compression is applied at the tar level: the underlying media is already compressed (images, etc.), so further compression yields little gain and slows down extraction.

---

## Citation

Please cite the original paper for each sub-dataset you use.

- **AGORA**: Patel et al., [*AGORA: Avatars in Geography Optimized for Regression Analysis*](https://arxiv.org/abs/2104.14643), CVPR 2021. — [Project page](https://agora.is.tue.mpg.de/)
- **Anny-One / Anny**: Baradel et al., [*Human Mesh Modeling for Anny Body*](https://arxiv.org/abs/2511.03589), 2025. — [Project page](https://europe.naverlabs.com/research/vision-foundation-models-for-human-understanding/anny-one/) · [Code](https://github.com/naver/anny)
- **BEDLAM**: Black et al., [*BEDLAM: A Synthetic Dataset of Bodies Exhibiting Detailed Lifelike Animated Motion*](https://arxiv.org/abs/2306.16940), CVPR 2023. — [Project page](https://bedlam.is.tue.mpg.de/)
- **BEDLAM 2.0**: [*BEDLAM 2.0: Synthetic Humans and Cameras in Motion*](https://arxiv.org/abs/2511.14394), NeurIPS 2025 (Datasets & Benchmarks). — [Project page](https://bedlam2.is.tue.mpg.de/)

---

## Contact

For dataset-related issues, please use the **Discussions** tab on this repository.
|