---
pretty_name: KITScenes Multimodal
license: cc-by-nc-4.0
language:
- en
annotations_creators:
- expert-generated
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- autonomous-driving
- multimodal
- robotics
- computer-vision
- lidar
- radar
- hd-maps
- lanelet2
gated: true
extra_gated_heading: Acknowledge the terms and conditions to access the dataset
extra_gated_description: "**Terms and conditions:**\n\n The KITScenes dataset is provided\
  \ to you under a Creative Commons Attribution-NonCommercial 4.0 International Public\
  \ License (CC BY-NC 4.0), with the additional terms included herein. When you download\
  \ or use the dataset, you are agreeing to comply with the terms of CC BY-NC 4.0\
  \ as applicable, and also agreeing to the dataset terms (listed below). Where these\
  \ dataset terms conflict with the terms of CC BY-NC 4.0, these dataset terms shall\
  \ prevail.\n \n **Dataset terms:**\n - If you use the dataset in your research\
  \ papers, please cite our respective publication. If the dataset\
  \ is used in media, please include a link to our website (kitscenes.com).\n - We take\
  \ steps to protect the privacy of individuals by anonymizing faces and license plates\
  \ using state-of-the-art anonymization software from BrighterAI. If you would like\
  \ to request removal of specific images/data frames from the dataset, please\
  \ contact info@mrt.kit.edu.\n - We reserve all rights that are not explicitly granted\
  \ to you. The dataset is provided as is, and you take full responsibility for any\
  \ risk of using it.\n"
extra_gated_button_content: Acknowledge terms and conditions
viewer: false
---

# The KITScenes Multimodal Dataset
|
|
> **Early release.** This is an early-release version of KITScenes Multimodal; it may contain minor labeling errors, and the data format may still change. If dataset stability is important for your use case, we recommend waiting for the upcoming full release.
|
|
<video src="https://huggingface.co/datasets/immel-f/KITScenes-Multimodal-Sample-Video/resolve/main/teaser_combined_for_huggingface.mp4" controls width="100%"></video>
*Reprojection of the HD map labels into 6 of the 9 cameras, with the lidar points also reprojected into the rear cameras. KITScenes Multimodal contains the most detailed HD maps of any public dataset, together with a high-fidelity sensor suite and reprojection-accurate localization.*
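Reprojections like the one above follow the standard pinhole camera model: transform points into the camera frame with the extrinsics, then project with the intrinsics. A minimal NumPy sketch; the calibration matrix and transform below are illustrative placeholders, not actual KITScenes calibration values:

```python
import numpy as np

def project_points(points_world, T_cam_from_world, K):
    """Project 3D points into an image using the pinhole model.

    points_world: (N, 3) points in the world/ego frame.
    T_cam_from_world: (4, 4) extrinsic transform into the camera frame.
    K: (3, 3) camera intrinsic matrix.
    Returns (N, 2) pixel coordinates and a mask of points in front of the camera.
    """
    pts_h = np.hstack([points_world, np.ones((len(points_world), 1))])  # homogeneous
    pts_cam = (T_cam_from_world @ pts_h.T).T[:, :3]                     # camera frame
    in_front = pts_cam[:, 2] > 0                                        # keep z > 0 only
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]                                         # perspective divide
    return uv, in_front

# Illustrative calibration (not actual KITScenes values)
K = np.array([[1000.0, 0.0, 960.0],
              [0.0, 1000.0, 600.0],
              [0.0, 0.0, 1.0]])
T = np.eye(4)  # camera frame coincides with world frame in this toy example
uv, mask = project_points(np.array([[0.0, 0.0, 10.0]]), T, K)
# a point on the optical axis lands at the principal point: (960, 600)
```

In practice the same routine is applied per camera with that camera's calibrated extrinsics and intrinsics, once for lidar points and once for HD map geometry.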
|
|
KITScenes Multimodal is a high-fidelity autonomous driving dataset designed for research toward production-grade urban driving. It focuses on complex European city environments and combines high-resolution synchronized cameras, long-range lidar, 4D imaging radar, GNSS/INS localization, and production-grade Lanelet2 HD maps, the most complete HD maps of any sensor dataset to date.
|
|
## Highlights
|
|
- **European urban focus:** recordings from Karlsruhe, Frankfurt, and Sindelfingen.
- **High-fidelity sensor suite:** up to 72.5 MP of synchronized global-shutter camera imagery, seven lidars, three 4D imaging radars, and redundant GNSS/INS.
- **Long-range sensing:** effective lidar range beyond 400 m with substantially higher return density than common public driving datasets.
- **HD maps in Lanelet2:** production-grade maps with lane topology and regulatory elements, covering 29 road-feature classes, 220 traffic-sign classes, and 3D traffic lights, signs, and poles, all localized to reprojection accuracy.
- **Research benchmarks:** designed to support online HD map construction, long-range monocular depth estimation, novel view synthesis, and end-to-end / world-model research.
|
|
## About this early release
|
|
This repository currently provides a **preview** of the dataset and release structure. During this stage, files, annotations, split definitions, and documentation may be refined without notice. If you need a stable benchmark, please wait for the full public release.
|
|
## Intended use
|
|
KITScenes is intended for academic research on autonomous driving perception, mapping, spatial learning, neural rendering, and embodied AI. In its current early-release form, it is best suited for early exploration, pipeline integration, and preview experiments rather than final benchmark reporting.
|
|
## Access and license
|
|
Access is gated. By requesting access, you acknowledge the dataset terms listed above and agree to use the data under **CC BY-NC 4.0** together with the additional KITScenes terms.
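Once an access request is approved, the data can be fetched with the standard Hugging Face tooling. A minimal sketch; the repository id below is a placeholder, substitute this card's actual repo id:

```shell
# Authenticate with a token from an account that has accepted the terms
huggingface-cli login

# Download the dataset to a local directory (repo id is a placeholder)
huggingface-cli download <org>/<kitscenes-repo> --repo-type dataset --local-dir ./kitscenes
```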
|
|
## Citation
|
|
If you use KITScenes Multimodal in research, please cite the associated publication. A full citation entry and paper will be added together with the full release.
|
|
|
|
|
|