---
pretty_name: EvHumanMotion
license: apache-2.0
language:
- en
multilinguality: monolingual
size_categories:
- 10K<n<100K
task_categories:
- video-to-video
- event-to-video
- human motion generation
tags:
- event camera
- video generation
- human animation
- motion transfer
- real-world
- low-light
- motion-blur
- diffusion
---

# EvHumanMotion

EvHumanMotion is a real-world dataset captured with a DAVIS346 event camera. It contains 113 sequences of human motion recorded under varied conditions, including motion blur, low light, and overexposure, and is designed for event-driven video generation and motion understanding. Each sequence includes RGB frames and temporally aligned event data, provided both as raw `.aedat4` streams and as frame-level representations.

## Dataset Structure

| Field | Description |
| --- | --- |
| `event_stream` | Raw event file in `.aedat4` format |
| `event_frames` | Sliced event frames in `.png` format |
| `video_frames` | RGB frame sequence |
| `scenario` | One of `normal`, `motion_blur`, `over_exposure`, `low_light` |
| `environment` | One of `indoor_day`, `indoor_night_high_noise`, `indoor_night_low_noise`, `outdoor_day`, `outdoor_night` |

The dataset ships without official splits; a custom train/validation/test split is recommended.

Homepage: https://huggingface.co/datasets/potentialming/EvHumanMotion

## Citation

```bibtex
@article{qu2025evanimate,
  title={EvAnimate: Event-conditioned Image-to-Video Generation for Human Animation},
  author={Qu, Qiang and Li, Ming and Chen, Xiaoming and Liu, Tongliang},
  journal={arXiv preprint arXiv:2503.18552},
  year={2025}
}
```
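The dataset provides event frames pre-sliced from the raw streams. For working directly with raw events instead, the sketch below shows one common way to accumulate a slice of events (x, y, polarity) into a two-channel count frame at the DAVIS346 resolution of 346×260. This is an illustrative assumption, not the dataset's own slicing procedure; the function name and channel layout are hypothetical, and reading `.aedat4` files (e.g. with an event-camera library) is not shown.

```python
import numpy as np

def events_to_frame(x, y, polarity, height=260, width=346):
    """Accumulate one slice of events into a (2, H, W) count frame.

    Channel 0 counts positive-polarity events per pixel, channel 1
    negative-polarity events. 346x260 is the DAVIS346 sensor size.
    """
    frame = np.zeros((2, height, width), dtype=np.int32)
    pos = polarity == 1
    # np.add.at handles repeated (y, x) indices correctly,
    # unlike plain fancy-index assignment.
    np.add.at(frame[0], (y[pos], x[pos]), 1)
    np.add.at(frame[1], (y[~pos], x[~pos]), 1)
    return frame

# Synthetic example: four positive events at pixel (10, 20),
# one negative event at pixel (5, 5).
x = np.array([20, 20, 20, 20, 5])
y = np.array([10, 10, 10, 10, 5])
p = np.array([1, 1, 1, 1, 0])
frame = events_to_frame(x, y, p)
```

In practice the events would be binned by timestamp (e.g. fixed-duration or fixed-count windows) before accumulation, one frame per window.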