lynnzuo and XinshengCHEN committed on
Commit bcb919b · 0 Parent(s)

Duplicate from PresentBench/PresentBench


Co-authored-by: Xinsheng Chen <XinshengCHEN@users.noreply.huggingface.co>

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.
Files changed (50)
  1. .gitattributes +303 -0
  2. README.md +86 -0
  3. academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/generation_task/instructions.md +149 -0
  4. academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/generation_task/judge_prompt.json +56 -0
  5. academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/generation_task/statistics.yaml +25 -0
  6. academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/material.pdf +3 -0
  7. academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/generation_task/instructions.md +151 -0
  8. academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/generation_task/judge_prompt.json +27 -0
  9. academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/generation_task/statistics.yaml +25 -0
  10. academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/material.pdf +3 -0
  11. academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/generation_task/instructions.md +146 -0
  12. academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/generation_task/judge_prompt.json +27 -0
  13. academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/generation_task/statistics.yaml +25 -0
  14. academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/material.pdf +3 -0
  15. academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/generation_task/instructions.md +149 -0
  16. academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/generation_task/judge_prompt.json +27 -0
  17. academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/generation_task/statistics.yaml +25 -0
  18. academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/material.pdf +3 -0
  19. academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/generation_task/instructions.md +145 -0
  20. academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/generation_task/judge_prompt.json +27 -0
  21. academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/generation_task/statistics.yaml +25 -0
  22. academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/material.pdf +3 -0
  23. academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/generation_task/instructions.md +172 -0
  24. academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/generation_task/judge_prompt.json +27 -0
  25. academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/generation_task/statistics.yaml +25 -0
  26. academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/material.pdf +3 -0
  27. academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/generation_task/instructions.md +170 -0
  28. academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/generation_task/judge_prompt.json +27 -0
  29. academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/generation_task/statistics.yaml +25 -0
  30. academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/material.pdf +3 -0
  31. academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/generation_task/instructions.md +150 -0
  32. academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/generation_task/judge_prompt.json +27 -0
  33. academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/generation_task/statistics.yaml +25 -0
  34. academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/material.pdf +3 -0
  35. academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/generation_task/instructions.md +125 -0
  36. academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/generation_task/judge_prompt.json +27 -0
  37. academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/generation_task/statistics.yaml +25 -0
  38. academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/material.pdf +3 -0
  39. academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/generation_task/instructions.md +162 -0
  40. academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/generation_task/judge_prompt.json +27 -0
  41. academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/generation_task/statistics.yaml +25 -0
  42. academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/material.pdf +3 -0
  43. academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/generation_task/instructions.md +127 -0
  44. academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/generation_task/judge_prompt.json +27 -0
  45. academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/generation_task/statistics.yaml +25 -0
  46. academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/material.pdf +3 -0
  47. academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/generation_task/instructions.md +167 -0
  48. academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/generation_task/judge_prompt.json +27 -0
  49. academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/generation_task/statistics.yaml +25 -0
  50. academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/material.pdf +3 -0
.gitattributes ADDED
@@ -0,0 +1,303 @@
1
+ *.7z filter=lfs diff=lfs merge=lfs -text
2
+ *.arrow filter=lfs diff=lfs merge=lfs -text
3
+ *.avro filter=lfs diff=lfs merge=lfs -text
4
+ *.bin filter=lfs diff=lfs merge=lfs -text
5
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
6
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
7
+ *.ftz filter=lfs diff=lfs merge=lfs -text
8
+ *.gz filter=lfs diff=lfs merge=lfs -text
9
+ *.h5 filter=lfs diff=lfs merge=lfs -text
10
+ *.joblib filter=lfs diff=lfs merge=lfs -text
11
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
12
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
13
+ *.mds filter=lfs diff=lfs merge=lfs -text
14
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
15
+ *.model filter=lfs diff=lfs merge=lfs -text
16
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
17
+ *.npy filter=lfs diff=lfs merge=lfs -text
18
+ *.npz filter=lfs diff=lfs merge=lfs -text
19
+ *.onnx filter=lfs diff=lfs merge=lfs -text
20
+ *.ot filter=lfs diff=lfs merge=lfs -text
21
+ *.parquet filter=lfs diff=lfs merge=lfs -text
22
+ *.pb filter=lfs diff=lfs merge=lfs -text
23
+ *.pickle filter=lfs diff=lfs merge=lfs -text
24
+ *.pkl filter=lfs diff=lfs merge=lfs -text
25
+ *.pt filter=lfs diff=lfs merge=lfs -text
26
+ *.pth filter=lfs diff=lfs merge=lfs -text
27
+ *.rar filter=lfs diff=lfs merge=lfs -text
28
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
29
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
30
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
31
+ *.tar filter=lfs diff=lfs merge=lfs -text
32
+ *.tflite filter=lfs diff=lfs merge=lfs -text
33
+ *.tgz filter=lfs diff=lfs merge=lfs -text
34
+ *.wasm filter=lfs diff=lfs merge=lfs -text
35
+ *.xz filter=lfs diff=lfs merge=lfs -text
36
+ *.zip filter=lfs diff=lfs merge=lfs -text
37
+ *.zst filter=lfs diff=lfs merge=lfs -text
38
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
39
+ # Audio files - uncompressed
40
+ *.pcm filter=lfs diff=lfs merge=lfs -text
41
+ *.sam filter=lfs diff=lfs merge=lfs -text
42
+ *.raw filter=lfs diff=lfs merge=lfs -text
43
+ # Audio files - compressed
44
+ *.aac filter=lfs diff=lfs merge=lfs -text
45
+ *.flac filter=lfs diff=lfs merge=lfs -text
46
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
47
+ *.ogg filter=lfs diff=lfs merge=lfs -text
48
+ *.wav filter=lfs diff=lfs merge=lfs -text
49
+ # Image files - uncompressed
50
+ *.bmp filter=lfs diff=lfs merge=lfs -text
51
+ *.gif filter=lfs diff=lfs merge=lfs -text
52
+ *.png filter=lfs diff=lfs merge=lfs -text
53
+ *.tiff filter=lfs diff=lfs merge=lfs -text
54
+ # Image files - compressed
55
+ *.jpg filter=lfs diff=lfs merge=lfs -text
56
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
57
+ *.webp filter=lfs diff=lfs merge=lfs -text
58
+ # Video files - compressed
59
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
60
+ *.webm filter=lfs diff=lfs merge=lfs -text
61
+ academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/material.pdf filter=lfs diff=lfs merge=lfs -text
62
+ academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/material.pdf filter=lfs diff=lfs merge=lfs -text
63
+ academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/material.pdf filter=lfs diff=lfs merge=lfs -text
64
+ academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/material.pdf filter=lfs diff=lfs merge=lfs -text
65
+ academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/material.pdf filter=lfs diff=lfs merge=lfs -text
66
+ academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/material.pdf filter=lfs diff=lfs merge=lfs -text
67
+ academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/material.pdf filter=lfs diff=lfs merge=lfs -text
68
+ academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/material.pdf filter=lfs diff=lfs merge=lfs -text
69
+ academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/material.pdf filter=lfs diff=lfs merge=lfs -text
70
+ academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/material.pdf filter=lfs diff=lfs merge=lfs -text
71
+ academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/material.pdf filter=lfs diff=lfs merge=lfs -text
72
+ academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/material.pdf filter=lfs diff=lfs merge=lfs -text
73
+ academia/CVPR_2025/EffiDec3D_An_Optimized_Decoder_for_High-Performance_and_Efficient_3D_Medical_Image_Segmentation/material.pdf filter=lfs diff=lfs merge=lfs -text
74
+ academia/CVPR_2025/Flowing_from_Words_to_Pixels_A_Noise-Free_Framework_for_Cross-Modality_Evolution/material.pdf filter=lfs diff=lfs merge=lfs -text
75
+ academia/CVPR_2025/Free-viewpoint_Human_Animation_with_Pose-correlated_Reference_Selection/material.pdf filter=lfs diff=lfs merge=lfs -text
76
+ academia/CVPR_2025/HyperLoRA_Parameter-Efficient_Adaptive_Generation_for_Portrait_Synthesis/material.pdf filter=lfs diff=lfs merge=lfs -text
77
+ academia/CVPR_2025/Interpreting_Object-level_Foundation_Models_via_Visual_Precision_Search/material.pdf filter=lfs diff=lfs merge=lfs -text
78
+ academia/CVPR_2025/Olympus_A_Universal_Task_Router_for_Computer_Vision_Tasks/material.pdf filter=lfs diff=lfs merge=lfs -text
79
+ academia/CVPR_2025/Towards_Unbiased_and_Robust_Scene_Graph_Generation_and_Anticipation/material.pdf filter=lfs diff=lfs merge=lfs -text
80
+ academia/CVPR_2025/VL-RewardBench_A_Challenging_Benchmark_for_Vision-Language_Generative_Reward_Models/material.pdf filter=lfs diff=lfs merge=lfs -text
81
+ academia/F1000_talks/1/material.pdf filter=lfs diff=lfs merge=lfs -text
82
+ academia/F1000_talks/10/material.pdf filter=lfs diff=lfs merge=lfs -text
83
+ academia/F1000_talks/2/material.pdf filter=lfs diff=lfs merge=lfs -text
84
+ academia/F1000_talks/3/material.pdf filter=lfs diff=lfs merge=lfs -text
85
+ academia/F1000_talks/4/material.pdf filter=lfs diff=lfs merge=lfs -text
86
+ academia/F1000_talks/5/material.pdf filter=lfs diff=lfs merge=lfs -text
87
+ academia/F1000_talks/6/material.pdf filter=lfs diff=lfs merge=lfs -text
88
+ academia/F1000_talks/7/material.pdf filter=lfs diff=lfs merge=lfs -text
89
+ academia/F1000_talks/8/material.pdf filter=lfs diff=lfs merge=lfs -text
90
+ academia/F1000_talks/9/material.pdf filter=lfs diff=lfs merge=lfs -text
91
+ academia/FAST_2025/DNA_data_storage_A_generative_tool_for_Motif-based_DNA_storage/material.pdf filter=lfs diff=lfs merge=lfs -text
92
+ academia/FAST_2025/GPHash_An_Efficient_Hash_Index_for_GPU_with_Byte-Granularity_Persistent_Memory/material.pdf filter=lfs diff=lfs merge=lfs -text
93
+ academia/FAST_2025/IMPRESS_An_Importance-Informed_Multi-Tier_Prefix_KV_Storage_System_for_Large_Language_Model_Inference/material.pdf filter=lfs diff=lfs merge=lfs -text
94
+ academia/FAST_2025/LeapGNN_Accelerating_Distributed_GNN_Training_Leveraging_Feature-Centric_Model_Migration/material.pdf filter=lfs diff=lfs merge=lfs -text
95
+ academia/FAST_2025/On_Scalable_Integrity_Checking_for_Secure_Cloud_Disks/material.pdf filter=lfs diff=lfs merge=lfs -text
96
+ academia/ICLR_2024/ICLR_2024_Cameras_as_Rays__Pose_Estimation_via_Ray_Diffusion_Oral_20d4d7/material.pdf filter=lfs diff=lfs merge=lfs -text
97
+ academia/ICLR_2024/ICLR_2024_ClimODE__Climate_and_Weather_Forecasting_with_Physics-informed_Neural_ODEs_Oral_b88221/material.pdf filter=lfs diff=lfs merge=lfs -text
98
+ academia/ICLR_2024/ICLR_2024_Improved_Active_Learning_via_Dependent_Leverage_Score_Sampling_Oral_ba5728/material.pdf filter=lfs diff=lfs merge=lfs -text
99
+ academia/ICLR_2024/ICLR_2024_Learning_Energy_Decompositions_for_Partial_Inference_in_GFlowNets_Oral_54021d/material.pdf filter=lfs diff=lfs merge=lfs -text
100
+ academia/ICLR_2024/ICLR_2024_Less_is_More__Fewer_Interpretable_Region_via_Submodular_Subset_Selection_Oral_abf7a5/material.pdf filter=lfs diff=lfs merge=lfs -text
101
+ academia/ICLR_2025/AI_as_Humanity_s_Salieri-Quantifying_Linguistic_Creativity_of_Language_Models_via_Systematic_Attribution_of_Machine_Text_against_Web_Text/material.pdf filter=lfs diff=lfs merge=lfs -text
102
+ academia/ICLR_2025/Accelerated_training_through_iterative_gradient_propagation_along_the_residual_path/material.pdf filter=lfs diff=lfs merge=lfs -text
103
+ academia/ICLR_2025/Attention_as_a_Hypernetwork/material.pdf filter=lfs diff=lfs merge=lfs -text
104
+ academia/ICLR_2025/Booster-Tackling_Harmful_Fine-tuning_for_Large_Language_Models_via_Attenuating_Harmful_Perturbation/material.pdf filter=lfs diff=lfs merge=lfs -text
105
+ academia/ICLR_2025/Brain_Bandit-A_Biologically_Grounded_Neural_Network_for_Efficient_Control_of_Exploration/material.pdf filter=lfs diff=lfs merge=lfs -text
106
+ academia/ICML_2024/APT-Adaptive_Pruning_and_Tuning_Pretrained_Language_Models_for_Efficient_Training_and_Inference/material.pdf filter=lfs diff=lfs merge=lfs -text
107
+ academia/ICML_2024/A_Touch_Vision_and_Language_Dataset_for_Multimodal_Alignment/material.pdf filter=lfs diff=lfs merge=lfs -text
108
+ academia/ICML_2024/All-in-one_simulation-based_inference/material.pdf filter=lfs diff=lfs merge=lfs -text
109
+ academia/ICML_2024/Arrows_of_Time_for_Large_Language_Models/material.pdf filter=lfs diff=lfs merge=lfs -text
110
+ academia/ICML_2024/Bottleneck-Minimal_Indexing_for_Generative_Document_Retrieval/material.pdf filter=lfs diff=lfs merge=lfs -text
111
+ academia/ICML_2025/ICML_2025_Accelerating_LLM_Inference_with_Lossless_Speculative_Decoding_Algorithms_for_Heterogeneous_Vocabularies_Oral_6bfb95/material.pdf filter=lfs diff=lfs merge=lfs -text
112
+ academia/ICML_2025/ICML_2025_An_analytic_theory_of_creativity_in_convolutional_diffusion_models_Oral_2b3ae4/material.pdf filter=lfs diff=lfs merge=lfs -text
113
+ academia/ICML_2025/ICML_2025_Beyond_Self-Repellent_Kernels__History-Driven_Target_Towards_Efficient_Nonlinear_MCMC_on_General_Graphs_Oral_207f19/material.pdf filter=lfs diff=lfs merge=lfs -text
114
+ academia/ICML_2025/ICML_2025_Can_MLLMs_Reason_in_Multimodality__EMMA__An_Enhanced_MultiModal_ReAsoning_Benchmark_Oral_72c07c/material.pdf filter=lfs diff=lfs merge=lfs -text
115
+ academia/ICML_2025/ICML_2025_Emergence_in_non-neural_models__grokking_modular_arithmetic_via_average_gradient_outer_product_Oral_50041e/material.pdf filter=lfs diff=lfs merge=lfs -text
116
+ academia/NBER_conferences/1/material.pdf filter=lfs diff=lfs merge=lfs -text
117
+ academia/NBER_conferences/10/material.pdf filter=lfs diff=lfs merge=lfs -text
118
+ academia/NBER_conferences/2/material.pdf filter=lfs diff=lfs merge=lfs -text
119
+ academia/NBER_conferences/3/material.pdf filter=lfs diff=lfs merge=lfs -text
120
+ academia/NBER_conferences/4/material.pdf filter=lfs diff=lfs merge=lfs -text
121
+ academia/NBER_conferences/5/material.pdf filter=lfs diff=lfs merge=lfs -text
122
+ academia/NBER_conferences/6/material.pdf filter=lfs diff=lfs merge=lfs -text
123
+ academia/NBER_conferences/7/material.pdf filter=lfs diff=lfs merge=lfs -text
124
+ academia/NBER_conferences/8/material.pdf filter=lfs diff=lfs merge=lfs -text
125
+ academia/NBER_conferences/9/material.pdf filter=lfs diff=lfs merge=lfs -text
126
+ academia/NSDI_2025/Everything_Matters_in_Programmable_Packet_Scheduling/material.pdf filter=lfs diff=lfs merge=lfs -text
127
+ academia/NSDI_2025/High-level_Programming_for_Application_Networks/material.pdf filter=lfs diff=lfs merge=lfs -text
128
+ academia/NSDI_2025/Optimizing_RLHF_Training_for_Large_Language_Models_with_Stage_Fusion/material.pdf filter=lfs diff=lfs merge=lfs -text
129
+ academia/NSDI_2025/Suppressing_BGP_Zombies_with_Route_Status_Transparency/material.pdf filter=lfs diff=lfs merge=lfs -text
130
+ academia/NSDI_2025/When_P4_Meets_Run-to-completion_Architecture/material.pdf filter=lfs diff=lfs merge=lfs -text
131
+ academia/NeurIPS_2024/AgentBoard_An_Analytical_Evaluation_Board_of_Multi-turn_LLM_Agents/98026.pdf filter=lfs diff=lfs merge=lfs -text
132
+ academia/NeurIPS_2024/AgentBoard_An_Analytical_Evaluation_Board_of_Multi-turn_LLM_Agents/NeurIPS-2024-agentboard-an-analytical-evaluation-board-of-multi-turn-llm-agents-Paper-Datasets_and_Benchmarks_Track.pdf filter=lfs diff=lfs merge=lfs -text
133
+ academia/NeurIPS_2024/AgentBoard_An_Analytical_Evaluation_Board_of_Multi-turn_LLM_Agents/material.pdf filter=lfs diff=lfs merge=lfs -text
134
+ academia/OSDI_2025/Fork_in_the_Road_Reflections_and_Optimizations_for_Cold_Start_Latency_in_Production_Serverless_Systems/material.pdf filter=lfs diff=lfs merge=lfs -text
135
+ academia/OSDI_2025/Mako_Speculative_Distributed_Transactions_with_Geo-Replication/material.pdf filter=lfs diff=lfs merge=lfs -text
136
+ academia/OSDI_2025/Tigon_A_Distributed_Database_for_a_CXL_Pod/material.pdf filter=lfs diff=lfs merge=lfs -text
137
+ academia/OSDI_2025/Understanding_Stragglers_in_Large_Model_Training_Using_What-if_Analysis/material.pdf filter=lfs diff=lfs merge=lfs -text
138
+ academia/OSDI_2025/XSched_Preemptive_Scheduling_for_Diverse_XPUs/material.pdf filter=lfs diff=lfs merge=lfs -text
139
+ academia/USENIX/Ahoy_SAILR_There_is_No_Need_to_DREAM_of_C_A_Compiler-Aware_Structuring_Algorithm_for_Binary_Decompilation/material.pdf filter=lfs diff=lfs merge=lfs -text
140
+ academia/USENIX/AutoFHE_Automated_Adaption_of_CNNs_for_Efficient_Evaluation_over_FHE/material.pdf filter=lfs diff=lfs merge=lfs -text
141
+ academia/USENIX/Automated_Large-Scale_Analysis_of_Cookie_Notice_Compliance/material.pdf filter=lfs diff=lfs merge=lfs -text
142
+ academia/USENIX/Devil_in_the_Room_Triggering_Audio_Backdoors_in_the_Physical_World/material.pdf filter=lfs diff=lfs merge=lfs -text
143
+ academia/USENIX/Does_Online_Anonymous_Market_Vendor_Reputation_Matter/material.pdf filter=lfs diff=lfs merge=lfs -text
144
+ academia/USENIX/Exploring_Covert_Third-party_Identifiers_through_External_Storage_in_the_Android_New_Era/material.pdf filter=lfs diff=lfs merge=lfs -text
145
+ academia/USENIX/Hermes_Unlocking_Security_Analysis_of_Cellular_Network_Protocols_by_Synthesizing_Finite_State_Machines_from_Natural_Language_Specifications/material.pdf filter=lfs diff=lfs merge=lfs -text
146
+ academia/USENIX/K-Waay_Fast_and_Deniable_Post-Quantum_X3DH_without_Ring_Signatures/material.pdf filter=lfs diff=lfs merge=lfs -text
147
+ academia/USENIX/Machine_Learning_needs_Better_Randomness_Standards_Randomised_Smoothing_and_PRNG-based_attacks/material.pdf filter=lfs diff=lfs merge=lfs -text
148
+ academia/USENIX/SpecLFB_Eliminating_Cache_Side_Channels_in_Speculative_Executions/material.pdf filter=lfs diff=lfs merge=lfs -text
149
+ academia/USENIX/Splitting_the_Difference_on_Adversarial_Training/material.pdf filter=lfs diff=lfs merge=lfs -text
150
+ academia/USENIX/Swipe_Left_for_Identity_Theft_An_Analysis_of_User_Data_Privacy_Risks_on_Location-based_Dating_Apps/material.pdf filter=lfs diff=lfs merge=lfs -text
151
+ academia/USENIX/The_Effect_of_Design_Patterns_on_Present_and_Future_Cookie_Consent_Decisions/material.pdf filter=lfs diff=lfs merge=lfs -text
152
+ academia/USENIX/Understanding_the_Security_and_Privacy_Implications_of_Online_Toxic_Content_on_Refugees/material.pdf filter=lfs diff=lfs merge=lfs -text
153
+ academia/USENIX/WebRR_A_Forensic_System_for_Replaying_and_Investigating_Web-Based_Attacks/material.pdf filter=lfs diff=lfs merge=lfs -text
154
+ advertising/Apple_Mac/MacBook_Air/material.pdf filter=lfs diff=lfs merge=lfs -text
155
+ advertising/Apple_Mac/MacBook_Pro/material.pdf filter=lfs diff=lfs merge=lfs -text
156
+ advertising/Apple_iPad/iPad/material.pdf filter=lfs diff=lfs merge=lfs -text
157
+ advertising/Apple_iPad/iPad_Air/material.pdf filter=lfs diff=lfs merge=lfs -text
158
+ advertising/Apple_iPad/iPad_Pro/material.pdf filter=lfs diff=lfs merge=lfs -text
159
+ advertising/Apple_iPhone/iPhone_17/material.pdf filter=lfs diff=lfs merge=lfs -text
160
+ advertising/Apple_iPhone/iPhone_17_Pro/material.pdf filter=lfs diff=lfs merge=lfs -text
161
+ advertising/Apple_iPhone/iPhone_Air/material.pdf filter=lfs diff=lfs merge=lfs -text
162
+ advertising/BMW/01/material.pdf filter=lfs diff=lfs merge=lfs -text
163
+ advertising/BMW/02/material.pdf filter=lfs diff=lfs merge=lfs -text
164
+ advertising/BMW/03/material.pdf filter=lfs diff=lfs merge=lfs -text
165
+ advertising/BMW/04/material.pdf filter=lfs diff=lfs merge=lfs -text
166
+ advertising/BMW/05/material.pdf filter=lfs diff=lfs merge=lfs -text
167
+ advertising/lenovo/ThinkBook_16_G7_ARP/material.pdf filter=lfs diff=lfs merge=lfs -text
168
+ advertising/lenovo/ThinkPad_X1_Carbon_Gen_12/material.pdf filter=lfs diff=lfs merge=lfs -text
169
+ advertising/lenovo/Yoga_Pro_9_16IAH10/material.pdf filter=lfs diff=lfs merge=lfs -text
170
+ economics/Alphabet_Investor_Relations/2024q1/material.pdf filter=lfs diff=lfs merge=lfs -text
171
+ economics/Alphabet_Investor_Relations/2024q2/material.pdf filter=lfs diff=lfs merge=lfs -text
172
+ economics/Alphabet_Investor_Relations/2024q3/material.pdf filter=lfs diff=lfs merge=lfs -text
173
+ economics/Alphabet_Investor_Relations/2024q4/material.pdf filter=lfs diff=lfs merge=lfs -text
174
+ economics/Alphabet_Investor_Relations/2025q1/material.pdf filter=lfs diff=lfs merge=lfs -text
175
+ economics/Alphabet_Investor_Relations/2025q2/material.pdf filter=lfs diff=lfs merge=lfs -text
176
+ economics/Alphabet_Investor_Relations/2025q3/material.pdf filter=lfs diff=lfs merge=lfs -text
177
+ economics/JPMorgan_Chase/Earning1/material.pdf filter=lfs diff=lfs merge=lfs -text
178
+ economics/JPMorgan_Chase/Earning2/material.pdf filter=lfs diff=lfs merge=lfs -text
179
+ economics/JPMorgan_Chase/Earning3/material.pdf filter=lfs diff=lfs merge=lfs -text
180
+ economics/JPMorgan_Chase/Earning4/material.pdf filter=lfs diff=lfs merge=lfs -text
181
+ economics/JPMorgan_Chase/Earning5/material.pdf filter=lfs diff=lfs merge=lfs -text
182
+ economics/JPMorgan_Chase/Earning6/material.pdf filter=lfs diff=lfs merge=lfs -text
183
+ economics/JPMorgan_Chase/Earning7/material.pdf filter=lfs diff=lfs merge=lfs -text
184
+ economics/Microsoft_press_release/FY2024Q1/material.pdf filter=lfs diff=lfs merge=lfs -text
185
+ economics/Microsoft_press_release/FY2024Q2/material.pdf filter=lfs diff=lfs merge=lfs -text
186
+ economics/Microsoft_press_release/FY2024Q3/material.pdf filter=lfs diff=lfs merge=lfs -text
187
+ economics/Microsoft_press_release/FY2024Q4/material.pdf filter=lfs diff=lfs merge=lfs -text
188
+ economics/Microsoft_press_release/FY2025Q1/material.pdf filter=lfs diff=lfs merge=lfs -text
189
+ economics/Microsoft_press_release/FY2025Q2/material.pdf filter=lfs diff=lfs merge=lfs -text
190
+ economics/Microsoft_press_release/FY2025Q3/material.pdf filter=lfs diff=lfs merge=lfs -text
191
+ economics/Microsoft_press_release/FY2025Q4/material.pdf filter=lfs diff=lfs merge=lfs -text
192
+ economics/Microsoft_press_release/FY2026Q1/material.pdf filter=lfs diff=lfs merge=lfs -text
193
+ economics/OECD_Economic_Outlook/Interim_Report_September_2025/material.pdf filter=lfs diff=lfs merge=lfs -text
194
+ economics/OECD_Economic_Outlook/Volume_2024_Issue_1/material.pdf filter=lfs diff=lfs merge=lfs -text
195
+ economics/OECD_Economic_Outlook/Volume_2024_Issue_2/material.pdf filter=lfs diff=lfs merge=lfs -text
196
+ economics/OECD_Economic_Outlook/Volume_2025_Issue_1/material.pdf filter=lfs diff=lfs merge=lfs -text
197
+ economics/OECD_Economic_Outlook/Volume_2025_Issue_2/material.pdf filter=lfs diff=lfs merge=lfs -text
198
+ economics/TESLA_update_letter/2017Q4/material.pdf filter=lfs diff=lfs merge=lfs -text
199
+ economics/TESLA_update_letter/2018Q1/material.pdf filter=lfs diff=lfs merge=lfs -text
200
+ economics/TESLA_update_letter/2018Q2/material.pdf filter=lfs diff=lfs merge=lfs -text
201
+ economics/TESLA_update_letter/2018Q3/material.pdf filter=lfs diff=lfs merge=lfs -text
202
+ economics/TESLA_update_letter/2018Q4/material.pdf filter=lfs diff=lfs merge=lfs -text
203
+ economics/TESLA_update_letter/2019Q1/material.pdf filter=lfs diff=lfs merge=lfs -text
204
+ economics/TESLA_update_letter/2019Q2/material.pdf filter=lfs diff=lfs merge=lfs -text
205
+ economics/World_Bank_GPE/GPE_Jan_2024/material.pdf filter=lfs diff=lfs merge=lfs -text
206
+ economics/World_Bank_GPE/GPE_Jan_2025/material.pdf filter=lfs diff=lfs merge=lfs -text
207
+ economics/World_Bank_GPE/GPE_June_2023/material.pdf filter=lfs diff=lfs merge=lfs -text
208
+ economics/World_Bank_GPE/GPE_June_2024/material.pdf filter=lfs diff=lfs merge=lfs -text
209
+ economics/World_Bank_GPE/GPE_June_2025/material.pdf filter=lfs diff=lfs merge=lfs -text
210
+ education/CSAPP-Lectures_2015Fall/Computer[[:space:]]Systems[[:space:]]A[[:space:]]Programmers[[:space:]]Perspective[[:space:]](Bryant,[[:space:]]Randal[[:space:]]EOHallaron,[[:space:]]David[[:space:]]R).pdf filter=lfs diff=lfs merge=lfs -text
211
+ education/CSAPP-Lectures_2015Fall/Lecture01/material.pdf filter=lfs diff=lfs merge=lfs -text
212
+ education/CSAPP-Lectures_2015Fall/Lecture02/material.pdf filter=lfs diff=lfs merge=lfs -text
213
+ education/CSAPP-Lectures_2015Fall/Lecture03/material.pdf filter=lfs diff=lfs merge=lfs -text
214
+ education/CSAPP-Lectures_2015Fall/Lecture04/material.pdf filter=lfs diff=lfs merge=lfs -text
215
+ education/CSAPP-Lectures_2015Fall/Lecture05/material.pdf filter=lfs diff=lfs merge=lfs -text
216
+ education/CSAPP-Lectures_2015Fall/Lecture06/material.pdf filter=lfs diff=lfs merge=lfs -text
217
+ education/CSAPP-Lectures_2015Fall/Lecture07/material.pdf filter=lfs diff=lfs merge=lfs -text
218
+ education/CSAPP-Lectures_2015Fall/Lecture08/material.pdf filter=lfs diff=lfs merge=lfs -text
219
+ education/CSAPP-Lectures_2015Fall/Lecture09/material.pdf filter=lfs diff=lfs merge=lfs -text
220
+ education/CSAPP-Lectures_2015Fall/Lecture10/material.pdf filter=lfs diff=lfs merge=lfs -text
221
+ education/CSAPP-Lectures_2015Fall/Lecture11/material.pdf filter=lfs diff=lfs merge=lfs -text
222
+ education/CSAPP-Lectures_2015Fall/Lecture12/material.pdf filter=lfs diff=lfs merge=lfs -text
223
+ education/CSAPP-Lectures_2015Fall/Lecture13/material.pdf filter=lfs diff=lfs merge=lfs -text
224
+ education/CSAPP-Lectures_2015Fall/Lecture14/material.pdf filter=lfs diff=lfs merge=lfs -text
225
+ education/CSAPP-Lectures_2015Fall/Lecture15/material.pdf filter=lfs diff=lfs merge=lfs -text
226
+ education/CSAPP-Lectures_2015Fall/Lecture16/material.pdf filter=lfs diff=lfs merge=lfs -text
227
+ education/CSAPP-Lectures_2015Fall/Lecture17/material.pdf filter=lfs diff=lfs merge=lfs -text
228
+ education/CSAPP-Lectures_2015Fall/Lecture18/material.pdf filter=lfs diff=lfs merge=lfs -text
229
+ education/CSAPP-Lectures_2015Fall/Lecture19/material.pdf filter=lfs diff=lfs merge=lfs -text
230
+ education/CSAPP-Lectures_2015Fall/Lecture20/material.pdf filter=lfs diff=lfs merge=lfs -text
231
+ education/Computer_science_lectures/Lecture1/material.pdf filter=lfs diff=lfs merge=lfs -text
232
+ education/Computer_science_lectures/Lecture2/material.pdf filter=lfs diff=lfs merge=lfs -text
233
+ education/Computer_science_lectures/Lecture3/material.pdf filter=lfs diff=lfs merge=lfs -text
234
+ education/Computer_science_lectures/Lecture4/material.pdf filter=lfs diff=lfs merge=lfs -text
235
+ education/Computer_science_lectures/Lecture5/material.pdf filter=lfs diff=lfs merge=lfs -text
236
+ education/Computer_science_lectures/Lecture6/material.pdf filter=lfs diff=lfs merge=lfs -text
237
+ education/Computer_science_lectures/Lecture7/material.pdf filter=lfs diff=lfs merge=lfs -text
238
+ education/Computer_science_lectures/Lecture8/material.pdf filter=lfs diff=lfs merge=lfs -text
239
+ education/Computer_science_lectures/Lecture9/material.pdf filter=lfs diff=lfs merge=lfs -text
240
+ education/MIT-Financing_Economic_Development/437F16_Lec10/material_1.pdf filter=lfs diff=lfs merge=lfs -text
241
+ education/MIT-Financing_Economic_Development/437F16_Lec10/material_2.pdf filter=lfs diff=lfs merge=lfs -text
242
+ education/MIT-Financing_Economic_Development/437F16_Lec10/material_3.pdf filter=lfs diff=lfs merge=lfs -text
243
+ education/MIT-Financing_Economic_Development/437F16_Lec11/material_1.pdf filter=lfs diff=lfs merge=lfs -text
244
+ education/MIT-Financing_Economic_Development/437F16_Lec11/material_2.pdf filter=lfs diff=lfs merge=lfs -text
245
+ education/MIT-Financing_Economic_Development/437F16_Lec13/material_1.pdf filter=lfs diff=lfs merge=lfs -text
246
+ education/MIT-Financing_Economic_Development/437F16_Lec13/material_2.pdf filter=lfs diff=lfs merge=lfs -text
247
+ education/MIT-Financing_Economic_Development/437F16_Lec14/material_1.pdf filter=lfs diff=lfs merge=lfs -text
248
+ education/MIT-Financing_Economic_Development/437F16_Lec14/material_2.pdf filter=lfs diff=lfs merge=lfs -text
249
+ education/MIT-Financing_Economic_Development/437F16_Lec14/material_3.pdf filter=lfs diff=lfs merge=lfs -text
250
+ education/MIT-Financing_Economic_Development/437F16_Lec16/material_1.pdf filter=lfs diff=lfs merge=lfs -text
251
+ education/MIT-Financing_Economic_Development/437F16_Lec16/material_2.pdf filter=lfs diff=lfs merge=lfs -text
252
+ education/MIT-Financing_Economic_Development/437F16_Lec17/material_1.pdf filter=lfs diff=lfs merge=lfs -text
253
+ education/MIT-Financing_Economic_Development/437F16_Lec17/material_2.pdf filter=lfs diff=lfs merge=lfs -text
254
+ education/MIT-Financing_Economic_Development/437F16_Lec19/material_1.pdf filter=lfs diff=lfs merge=lfs -text
255
+ education/MIT-Financing_Economic_Development/437F16_Lec19/material_2.pdf filter=lfs diff=lfs merge=lfs -text
256
+ education/MIT-Financing_Economic_Development/437F16_Lec19/material_3.pdf filter=lfs diff=lfs merge=lfs -text
257
+ education/MIT-Financing_Economic_Development/437F16_Lec2/material_1.pdf filter=lfs diff=lfs merge=lfs -text
258
+ education/MIT-Financing_Economic_Development/437F16_Lec2/material_2.pdf filter=lfs diff=lfs merge=lfs -text
259
+ education/MIT-Financing_Economic_Development/437F16_Lec2/material_4.pdf filter=lfs diff=lfs merge=lfs -text
260
+ education/MIT-Financing_Economic_Development/437F16_Lec3/material_1.pdf filter=lfs diff=lfs merge=lfs -text
261
+ education/MIT-Financing_Economic_Development/437F16_Lec3/material_2.pdf filter=lfs diff=lfs merge=lfs -text
262
+ education/MIT-Financing_Economic_Development/437F16_Lec3/material_3.pdf filter=lfs diff=lfs merge=lfs -text
263
+ education/MIT-Financing_Economic_Development/437F16_Lec5/material_1.pdf filter=lfs diff=lfs merge=lfs -text
264
+ education/MIT-Financing_Economic_Development/437F16_Lec5/material_2.pdf filter=lfs diff=lfs merge=lfs -text
265
+ education/MIT-Financing_Economic_Development/437F16_Lec5/material_3.pdf filter=lfs diff=lfs merge=lfs -text
266
+ education/MIT-the_human_brain/01/material_1.pdf filter=lfs diff=lfs merge=lfs -text
267
+ education/MIT-the_human_brain/01/material_2.pdf filter=lfs diff=lfs merge=lfs -text
268
+ education/MIT-the_human_brain/02/material_1.pdf filter=lfs diff=lfs merge=lfs -text
269
+ education/MIT-the_human_brain/02/material_2.pdf filter=lfs diff=lfs merge=lfs -text
270
+ education/MIT-the_human_brain/04/material_1.pdf filter=lfs diff=lfs merge=lfs -text
271
+ education/MIT-the_human_brain/04/material_2.pdf filter=lfs diff=lfs merge=lfs -text
272
+ education/MIT-the_human_brain/05/material_1.pdf filter=lfs diff=lfs merge=lfs -text
273
+ education/MIT-the_human_brain/05/material_2.pdf filter=lfs diff=lfs merge=lfs -text
274
+ education/MIT-the_human_brain/05/material_3.pdf filter=lfs diff=lfs merge=lfs -text
275
+ education/MIT-the_human_brain/06/material_1.pdf filter=lfs diff=lfs merge=lfs -text
276
+ education/MIT-the_human_brain/06/material_2.pdf filter=lfs diff=lfs merge=lfs -text
277
+ education/MIT-the_human_brain/06/material_3.pdf filter=lfs diff=lfs merge=lfs -text
278
+ education/MIT-the_human_brain/07/material_1.pdf filter=lfs diff=lfs merge=lfs -text
279
+ education/MIT-the_human_brain/07/material_2.pdf filter=lfs diff=lfs merge=lfs -text
280
+ education/MIT-the_human_brain/07/material_3.pdf filter=lfs diff=lfs merge=lfs -text
281
+ education/MIT-the_human_brain/08/material_1.pdf filter=lfs diff=lfs merge=lfs -text
282
+ education/MIT-the_human_brain/08/material_2.pdf filter=lfs diff=lfs merge=lfs -text
283
+ education/MIT-the_human_brain/08/material_3.pdf filter=lfs diff=lfs merge=lfs -text
284
+ education/MIT-the_human_brain/09/material_1.pdf filter=lfs diff=lfs merge=lfs -text
285
+ education/MIT-the_human_brain/09/material_2.pdf filter=lfs diff=lfs merge=lfs -text
286
+ education/MIT-the_human_brain/09/material_3.pdf filter=lfs diff=lfs merge=lfs -text
287
+ education/MIT-the_human_brain/10/material_1.pdf filter=lfs diff=lfs merge=lfs -text
288
+ education/MIT-the_human_brain/10/material_2.pdf filter=lfs diff=lfs merge=lfs -text
289
+ education/MIT-the_human_brain/10/material_3.pdf filter=lfs diff=lfs merge=lfs -text
290
+ education/MIT-the_human_brain/11/material_1.pdf filter=lfs diff=lfs merge=lfs -text
291
+ education/MIT-the_human_brain/11/material_2.pdf filter=lfs diff=lfs merge=lfs -text
292
+ education/MIT-the_human_brain/11/material_3.pdf filter=lfs diff=lfs merge=lfs -text
293
+ education/THU_DSA/Lecture1/material.pdf filter=lfs diff=lfs merge=lfs -text
294
+ education/THU_DSA/Lecture10/material.pdf filter=lfs diff=lfs merge=lfs -text
295
+ education/THU_DSA/Lecture11/material.pdf filter=lfs diff=lfs merge=lfs -text
296
+ education/THU_DSA/Lecture2/material.pdf filter=lfs diff=lfs merge=lfs -text
297
+ education/THU_DSA/Lecture3/material.pdf filter=lfs diff=lfs merge=lfs -text
298
+ education/THU_DSA/Lecture4/material.pdf filter=lfs diff=lfs merge=lfs -text
299
+ education/THU_DSA/Lecture5/material.pdf filter=lfs diff=lfs merge=lfs -text
300
+ education/THU_DSA/Lecture6/material.pdf filter=lfs diff=lfs merge=lfs -text
301
+ education/THU_DSA/Lecture7/material.pdf filter=lfs diff=lfs merge=lfs -text
302
+ education/THU_DSA/Lecture8/material.pdf filter=lfs diff=lfs merge=lfs -text
303
+ education/THU_DSA/Lecture9/material.pdf filter=lfs diff=lfs merge=lfs -text
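Every `material.pdf` above is routed through Git LFS, so a plain `git clone` without LFS support materializes only pointer stubs. Below is a minimal sketch of resolving one tracked PDF via `huggingface_hub`, assuming this repository is hosted as the dataset `PresentBench/PresentBench` (the repo id and `repo_type` are assumptions, not part of this commit):

```python
# Sketch: fetch a single LFS-tracked PDF through the Hub API so the real file
# (not an LFS pointer stub) lands in the local cache. The repo id and
# repo_type="dataset" are assumptions based on this repository's README.
from huggingface_hub import hf_hub_download

pdf_path = hf_hub_download(
    repo_id="PresentBench/PresentBench",
    repo_type="dataset",
    filename=(
        "academia/CVPR_2023/"
        "Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/"
        "material.pdf"
    ),
)
print(pdf_path)  # local path to the resolved PDF
```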
README.md ADDED
@@ -0,0 +1,86 @@
1
+ ---
2
+ task_categories:
3
+ - any-to-any
4
+ language:
5
+ - en
6
+ - zh
7
+ size_categories:
8
+ - n<1K
9
+ ---
10
+
11
+ # PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation
12
+
13
+ [[🌐 Homepage](https://presentbench.github.io/)] [[📖 Paper](https://arxiv.org/pdf/2603.07244)] [[💻 Code](http://github.com/PresentBench/PresentBench)]
14
+
15
+ This repository hosts the PresentBench benchmark dataset.
16
+
17
+
18
+ ## 📄 Abstract
19
+
20
+ Slides serve as a critical medium for conveying information in presentation-oriented scenarios such as academia, education, and business. Despite their importance, creating high-quality slide decks remains time-consuming and cognitively demanding. Recent advances in generative models, such as Nano Banana Pro, have made automated slide generation increasingly feasible. However, existing evaluations of slide generation are often coarse-grained and rely on holistic judgments, making it difficult to accurately assess model capabilities or track meaningful advances in the field. In practice, the lack of fine-grained, verifiable evaluation criteria poses a critical bottleneck for both research and real-world deployment.
21
+
22
+ In this paper, we propose PresentBench, a fine-grained, rubric-based benchmark for evaluating automated real-world slide generation. It contains 238 evaluation instances, each supplemented with background materials required for slide creation. Moreover, we manually design an average of 54.1 checklist items per instance, each formulated as a binary question, to enable fine-grained, instance-specific evaluation of the generated slide decks.
23
+
24
+ Extensive experiments show that PresentBench provides more reliable evaluation results than existing methods, and exhibits significantly stronger alignment with human preferences. Furthermore, our benchmark reveals that NotebookLM significantly outperforms other slide generation methods, highlighting substantial recent progress in this domain.
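Since each checklist item is a binary question, one natural reading of an instance score is the share of items a generated deck satisfies. The sketch below only illustrates that aggregation; it is an assumption, not the official scorer, which lives in the linked code repository:

```python
# Illustrative aggregation only (not the official PresentBench scorer):
# turn per-item yes/no verdicts into a 0-100 instance score.
def checklist_score(verdicts: list[bool]) -> float:
    """Percentage of checklist items judged as satisfied."""
    if not verdicts:
        return 0.0
    return 100.0 * sum(verdicts) / len(verdicts)


print(checklist_score([True] * 44 + [False] * 10))  # 81.48... for 44 of 54 items
```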
25
+
26
+
27
+ ## 🏆 Leaderboard
28
+
29
+ Comparative results across five domains. The highest scores are highlighted in red, and the second-highest scores are highlighted in blue.
30
+
31
+ | Method | Total | Academia | Advertising | Education | Economics | Talk |
32
+ |---|---:|---:|---:|---:|---:|---:|
33
+ | NotebookLM | <span style="color:red">62.5</span> | <span style="color:red">68.6</span> | <span style="color:red">54.9</span> | <span style="color:red">55.0</span> | <span style="color:red">58.2</span> | <span style="color:red">69.2</span> |
34
+ | Manus 1.6 | <span style="color:blue">57.8</span> | <span style="color:blue">64.0</span> | <span style="color:blue">52.4</span> | 50.7 | <span style="color:blue">52.8</span> | <span style="color:blue">63.0</span> |
35
+ | Tiangong | 54.7 | 59.2 | 44.5 | <span style="color:blue">53.7</span> | 46.5 | 59.8 |
36
+ | Zhipu | 53.6 | 57.5 | 41.0 | 52.5 | 47.6 | 59.0 |
37
+ | PPTAgent v2 | 50.2 | 53.3 | 46.7 | 46.1 | 46.1 | 56.6 |
38
+ | Gamma | 49.2 | 54.4 | 46.7 | 47.8 | 35.1 | 56.3 |
39
+ | Doubao | 48.0 | 50.3 | 42.9 | 45.4 | 44.0 | 54.7 |
40
+ | Qwen | 35.9 | 39.4 | 31.9 | 36.6 | 26.5 | 38.6 |
41
+
42
+
43
+ ## 🗂️ Dataset Structure
44
+
45
+ Domains under `<dataset_root>/` include (non‑exhaustive):
46
+ - `academia/`
47
+ - `advertising/`
48
+ - `economics/`
49
+ - `education/`
50
+ - `talk/`
51
+ Each leaf case typically looks like:
52
+ - `material.pdf|material.md|material_N.md|material_N.pdf` – source documents (PDFs, text, etc.).
53
+ - `generation_task/` – prompts and evaluation configuration:
54
+ - `generation_prompt.md`
55
+ - `judge_prompt.json`
56
+
57
+
58
+ ## ⚙️ Usage
59
+
60
+ To evaluate slide generation systems with this dataset, please follow the evaluation pipeline and scripts provided in the [code repository](http://github.com/PresentBench/PresentBench) (e.g., environment setup, data preparation, inference, and evaluation).
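For a quick local look at the data, a minimal sketch (assuming the dataset is hosted at `PresentBench/PresentBench` on the Hub; the pipeline in the code repository remains the reference) that snapshots the repo and enumerates the leaf cases laid out above:

```python
# Sketch: snapshot the dataset and list each leaf case that carries a
# generation_task/ folder, mirroring the layout described in this README.
from pathlib import Path

from huggingface_hub import snapshot_download

root = Path(snapshot_download(repo_id="PresentBench/PresentBench", repo_type="dataset"))

for task_dir in sorted(root.rglob("generation_task")):
    case_dir = task_dir.parent
    materials = sorted(p.name for p in case_dir.glob("material*"))
    print(f"{case_dir.relative_to(root)} -> {materials}")
```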
61
+
62
+
63
+ ## 📜 Licensing Information
64
+
65
+ The `PresentBench` benchmark aggregates background materials collected from multiple public sources. Each source remains governed by its own original license and terms of use.
66
+
67
+ - **Data Source Licenses:** Users must strictly comply with the licensing terms and conditions of each original background-material source included in this benchmark. We recommend carefully reviewing the original license for each source before use.
68
+
69
+ - **Prompts and Evaluation Rubrics:** The task instructions and evaluation checklists are created by us. To the extent that we hold any related intellectual property rights, these contributions are made available under the **Creative Commons Attribution-NonCommercial 4.0 International (CC-BY-NC-4.0)** license.
70
+
71
+ - **Copyright Concerns:** This benchmark is compiled for academic research purposes. If you believe any content in `PresentBench` infringes upon your copyright, please contact us immediately at chen.xs.gm[at]gmail.com. We will promptly review and address the matter, including removal of the concerned content upon verification.
72
+
73
+
74
+
75
+ ## 📚 Citation
76
+
77
+ **BibTeX:**
78
+ ```bibtex
79
+ @article{chen2026presentbench,
80
+ title={PresentBench: A Fine-Grained Rubric-Based Benchmark for Slide Generation},
81
+ author={Chen, Xin-Sheng and Zhu, Jiayu and Li, Pei-lin and Wang, Hanzheng and Yang, Shuojin and Guo, Meng-Hao},
82
+ journal={arXiv preprint arXiv:2603.07244},
83
+ year={2026}
84
+ }
85
+ ```
86
+
academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/generation_task/instructions.md ADDED
@@ -0,0 +1,149 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1. **Title Slide**
16
+ * Paper Title
17
+ * Author Team
18
+ * Affiliation
19
+ * Conference
20
+
21
+ 2. **Outline / Agenda**
22
+
23
+ 3. **Introduction / Background**
24
+ * Privacy Concerns: Large-scale face datasets pose significant privacy risks for individuals. Privacy risks stem not only from facial identity, but also from contextual cues such as background, clothing, or hairstyle.
25
+ * Anonymization Goal: Protect identity while preserving data utility for downstream tasks (e.g., training expression detectors).
26
+ * Current Landscape & Limitations:
27
+ * require the costly training of additional, purpose-trained neural networks
28
+ * fail to retain the facial attributes of the original images in the anonymized counterparts, the preservation of which is of paramount importance for their use in downstream tasks
29
+
30
+ 4. **Limitations of Existing Methods**
31
+ * High Computational Cost: Methods like DeepPrivacy or CIAGAN require training complex generative models from scratch.
32
+ * Attribute Distortion: Traditional blurring or pixelation destroys the spatial structure needed for ML training.
33
+ * Identity Leakage: Simple swaps or modifications often fail to sufficiently distance the new face from the original identity.
34
+ * Design Constraint: You MUST include a visual comparison (refer to Figure 1, you can directly use Figure 1) showing how simple anonymization destroys data utility versus the proposed method.
35
+
36
+ 5. **Overview of the Proposed Method**
37
+ * Core Idea: Anonymization through direct optimization of latent codes in a pre-trained StyleGAN2 space.
38
+ * Key Contribution 1: No Training Required. Operates on fixed, pre-trained GANs, making it plug-and-play for various face datasets.
39
+ * Key Contribution 2: Attribute Preservation. Explicitly constrains the optimization via feature-space losses to keep non-identity features constant.
40
+ * Key Contribution 3: High Visual Fidelity. Leverages StyleGAN's power to produce photorealistic anonymized images.
41
+
42
+ 6. **Method - Initialization Strategy (Crucial Step)**
43
+ * Fake Dataset Generation: Creating a large pool of synthetic images ($\mathcal{X}_F$) using StyleGAN2.
44
+ * Semantic Pairing: Finding the nearest "fake" neighbor for each real image using FaRL feature space (kNN).
45
+ * Latent Code Splicing:
46
+ * Layers 0-2 (Geometric/Pose): Taken from the real image inversion.
47
+ * Layers 3-7 (Identity): Initialized from the fake neighbor and optimized.
48
+ * Layers 8-17 (Texture/Background): Taken from the real image inversion.
49
+ * Your slide deck MUST include a method diagram/step-by-step pipeline consistent with Figure 2 (real dataset, fake dataset generation, pairing, optimize middle layers)
50
+
51
+ 7. **Optimization & Loss Functions**
52
+ * Optimization Target: Only optimizing the middle layers (3-7) of the latent code.
53
+ * Identity Loss ($\mathcal{L}_{id}$): Uses ArcFace to ensure the new face is mathematically distant from the original (controlled by margin $m$).
54
+ * Attribute Preservation Loss ($\mathcal{L}_{att}$):
55
+ * Uses FaRL ViT-based image encoder.
56
+ * Matches patch-level features (14x14 flattened vectors) to ensure semantic attributes (expression, age) remain consistent.
57
+ * *Note: No explicit landmark loss is used; geometry is preserved via Layer 0-2 initialization.*
58
+
59
+ 8. **Dataset and Training Details**
60
+ * Source Datasets: Evaluated on CelebA-HQ and LFW (Labeled Faces in the Wild).
61
+ * Per-image latent code optimization (no network training), using Adam for a fixed number of optimization steps.
62
+ * Evaluation Setup: Measuring the trade-off between "Anonymization Success" and "Attribute Preservation."
63
+
64
+ 9. **Experimental Setup**
65
+ * Metrics for Anonymization: Identity proximity (ID-score) using state-of-the-art face recognizers.
66
+ * Metrics for Utility: Accuracy on downstream tasks like expression recognition or head pose estimation.
67
+ * Baselines: Comparison with CIAGAN, DeepPrivacy.
68
+
69
+ 10. **Experimental Results & Analysis**
70
+ * Privacy + Realism (CelebA-HQ / LFW):
71
+ * Strong image realism with **100% face detection**; on CelebA-HQ the full method reports **FID ≈ 29.9** while remaining fully detectable.
72
+ * **Low re-identification** rates, competitive with CIAGAN / DeepPrivacy, but with better practical usability due to consistently detectable faces.
73
+ * Slides must reproduce and clearly present the results from Table 1 and Table 2 for only the CIAGAN, DeepPrivacy, and Ours rows (include all corresponding columns/metrics for these three methods).
74
+ * Utility (Attribute Preservation):
75
+ * On CelebA-HQ attribute transfer, the method preserves attributes well (acc. ≈ 0.8181) vs. the original images (acc. ≈ 0.8539). Slides must fully reproduce and clearly present all results from the paper’s Table 3 (complete rows/columns and metrics).
76
+ * On LFW (pseudo-label evaluation), achieves the **best downstream accuracy** among baselines. Slides must fully reproduce and clearly present all results from the paper’s Table 6 (complete rows/columns and metrics).
77
+ * Controllable Trade-off (Ablation on margin *m*):
78
+ * Larger **m** → **better attribute accuracy** but **slightly higher re-ID**, confirming a tunable privacy–utility knob. Slides must fully reproduce and clearly present all results from the paper’s Table 4 (complete rows/columns and metrics).
79
+
80
+ 11. **Visual Analysis & Qualitative Results**
81
+ * Style Consistency: Visual samples (refer to Figure 3, 4) showing a person's identity changing while their smile, glasses, and head tilt remain highly consistent.
82
+ * Diverse Cases: Handling of various ethnicities, ages, and challenging lighting conditions.
83
+ * Note: You are allowed to copy only the visual samples reported in Figures 3 and 4, and must not use any other results or generate additional results.
84
+
85
+ 12. **Key Takeaways & Limitations**
86
+ * Takeaways: Latent optimization is an effective, training-free way to anonymize datasets; attribute preservation is the key to maintaining data utility.
87
+ * Limitations: Optimization per image is slower than a feed-forward pass; quality is bounded by the expressiveness of the pretrained GAN and the fidelity of GAN inversion.
88
+
89
+ 13. **Conclusion**
90
+ * Summary: A latent-code-optimization anonymization framework that avoids training additional networks and improves attribute preservation while achieving competitive de-identification vs SOTA.
91
+
92
+ ---
93
+
94
+ ## 2. Content Constraints
95
+
96
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
97
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
98
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
99
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
100
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through the method and experiments to the conclusions. Ensure there is a clear logical progression between sections.
101
+ * **Relevance of Information**: You must not add unrelated content.
102
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
103
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
104
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
105
+ * All references (if any) must be placed in the bottom-left corner of the slide.
106
+
107
+ ## 3. Visual & Design
108
+
109
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
110
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
111
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
112
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
113
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
114
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
115
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
116
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
117
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
118
+
119
+ ## 4. Text Quality
120
+
121
+ * All generated text should be clear, with no missing or incorrect characters or words.
122
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
123
+
124
+ ## 5. Technical Fidelity Requirements
125
+
126
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
127
+ * The slide deck must include at least 5 slides with quantitative details.
128
+
129
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
130
+
131
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
132
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
133
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
134
+ * For charts, every axis, unit, and label must be explicit
135
+
136
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
137
+
138
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
139
+
140
+ ## 6. Presentation Tone and Audience
141
+
142
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
143
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
144
+
145
+ ---
146
+
147
+ # **Output Expected**
148
+
149
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,56 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\n**Does the first slide list the title, authors, affiliations, and the conference?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (Title: *Attribute-preserving Face Dataset Anonymization via Latent Code Optimization*; Authors: Simone Barattin*, Christos Tzelepis*, Ioannis Patras, Nicu Sebe; Affiliations: University of Trento, Queen Mary University of London; Conference: CVPR 2023).\n",
4
+ "\n**Does the deck include an Outline/Agenda slide(s) right after the title slide (and does it follow the slide order as presented in the deck itself)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing.\n",
5
+ "\n**Does the Introduction/Background section explicitly state the core problem: anonymize face identity while keeping the dataset useful for downstream tasks?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing. \n",
6
+ "\n**Does the Introduction/Background mention privacy risks beyond identity (e.g., contextual cues like background/clothing/hairstyle)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing. \n",
7
+ "\n**Does the deck clearly list the paper’s stated shortcomings of prior work (costly extra training networks and/or poor attribute retention)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (two drawbacks: (i) require costly training of additional purpose-trained networks; and/or (ii) fail to retain facial attributes; and why attribute preservation matters for downstream tasks). \n",
8
+ "\n**Does the “Limitations of Existing Methods” section include the required points (compute cost, attribute distortion by blur/pixelation, identity leakage) without overstating what the paper experimentally tests?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (compute cost; blur/pixelation harms utility; identity leakage) and/or what is incorrectly claimed as experimentally evaluated.\n",
9
+ "\n**Does the deck include a visual comparison slide referencing Figure 1 (showing how “ID anonymized” can destroy attribute utility vs “Attr. preserved”)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (Figure 1 comparison: ID anonymized and Attr. preserved; CIAGAN vs DeepPrivacy vs Ours).\n",
10
+ "\n**Does the “Overview of the Proposed Method” slide(s) clearly state the paper’s core idea: task-agnostic anonymization by directly optimizing latent codes in a pre-trained GAN space (training-free)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (direct latent optimization; pre-trained GAN; task-agnostic; avoids training new anonymization networks).\n",
11
+ "\n**Does the “Initialization Strategy” slide(s) correctly describe the sandwich/splicing rule and the semantic meaning of each layer block?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (Layers 0–2 from real for pose/coarse geometry; Layers 3–7 from fake neighbor and optimized for identity; Layers 8–17 from real for color distribution/background). \n",
12
+ "\n**Does the deck explicitly state the two losses and what each enforces (identity obfuscation vs attribute preservation)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (identity obfuscation loss enforces “desired distance away from original”; attribute preservation loss in FaRL feature space preserves facial attributes).\n",
13
+ "\n**Does the deck include a method diagram/step-by-step pipeline consistent with Figure 2 (real dataset, fake dataset generation, pairing, optimize middle layers)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (Figure 2 pipeline: $\\mathcal{X}_R$, $\\mathcal{X}_F$, FaRL-based kNN pairing, splice latent layers, optimize only a subset of layers). \n",
14
+ "\n**Does the deck specify the optimized latent subvector shape and what is fixed vs learnable?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (learnable part: $w_{\tilde{A}} \\in \\mathbb{R}^{5\times 512}$ for layers 3–7).\n",
15
+ "\n**Does the “Identity loss” slide(s) include the exact definition and explain the role of margin m as a privacy–utility knob?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing ($\\mathcal{L}_{id}(x_{A}^{i}, x_{R}^{i}) = \\left| \\cos(\\mathcal{E}_{\\mathcal{A}}(x_{A}^{i}), \\mathcal{E}_{\\mathcal{A}}(x_{R}^{i})) - m \right|$; $m=0$ enforces orthogonality → larger identity difference; $m=1$ enforces similarity; $m$ controls trade-off). \n",
16
+ "\n**Does the “Attribute preservation loss” slide(s) include the exact definition and the important implementation detail about using ViT patch-level features (14×14×512) instead of CLS?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing ($\\mathcal{L}_{att}(x_A,x_R)=\\|\\mathcal{E}_{\\mathcal{F}}(x_A)−\\mathcal{E}_{\\mathcal{F}}(x_R)\\|_1$; $\\mathcal{E}_{\\mathcal{F}} = \text{FaRL ViT-based encoder}$; patch-level features flattened; why patches preserve more info than CLS). \n",
17
+ "\n**Does the deck clearly state that the method is per-image latent optimization (not training a new anonymization model)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (per-image latent code optimization; no network training; optimization with Adam for fixed steps). \n",
18
+ "\n**Does the “Datasets” slide(s) include the key dataset stats used in the paper for CelebA-HQ and LFW?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (evaluated on CelebA-HQ and LFW). \n",
19
+ "\n**Does the “Baselines” slide(s) list the actual baselines compared in the experiments (CIAGAN, DeepPrivacy), and avoid presenting extra baselines as if they were evaluated if the paper didn’t?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing/incorrect (paper baseline set: CIAGAN and DeepPrivacy; anything else should be framed as background only).\n",
20
+ "\n**Does the “Evaluation metrics” slide(s) define re-identification rate and detection rate, and name the exact detectors/recognizers used?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (re-ID rate via FaceNet pre-trained on CASIA WebFace & VGGFace2; detection rate via MTCNN; definitions of each metric). \n",
21
+ "\n**Does the deck include a quantitative results slide(s) reproducing Table 1 (CelebA-HQ) with required rows and columns?**\n\n The table should include at least the following rows: CIAGAN, DeepPrivacy, Ours, and the following columns: FID, Detection dlib/MTCNN, Face re-ID CASIA/VGG.\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing. \n",
22
+ "\n**Does the deck include a quantitative results slide(s) reproducing Table 2 (LFW) with all columns and correct numbers (including FID(C-HQ))?**\n\n The table should include at least the following rows: CIAGAN, DeepPrivacy, Ours, and the following columns: FID, FID (C-HQ), Detection dlib/MTCNN, Face re-ID CASIA/VGG.\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing.\n",
23
+ "\n**Does the deck include an attribute-preservation quantitative slide(s) reproducing Table 3 (CelebA-HQ: inner/outer/both accuracy)?**\n\n The table should include at least the following rows: Original, CIAGAN, DeepPrivacy, Ours, and the following columns: Inner face, Outer face, Combined.\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing. \n",
24
+ "\n**Does the deck include LFW attribute preservation results (Table 6) with the exact reported numbers?**\n\n The table should include at least the following rows: CIAGAN, DeepPrivacy, Ours, and the following columns: CelebA-HQ (labels from [22]), LFW (labels from [22]), LFW (labels from [17]).\n \n [17] Yuming Jiang, Ziqi Huang, Xingang Pan, Chen Change Loy,and Ziwei Liu. Talk-to-edit: Fine-grained facial editing viadialog, 2021. 7, 8\n [22] Ji Lin, Richard Zhang, Frieder Ganz, Song Han, and Jun-Yan Zhu. Anycost gans for interactive image synthesis andediting, 2021. 7, 8\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness. The citation numbers shown on the slides do not need to be the same as those in the paper (as shown above); they only need to correctly distinguish the different rows.\n\n If **no**, describe what is missing.\n",
25
+ "\n**Does the deck include the ablation on margin m reproducing Table 4 and explaining the privacy–utility trend?**\n\n The table should include at least the following rows: Ours (m=.0), Ours (m=.9), and the following columns: FID, Detection MTCNN, Face re-ID CASIA/VGG, Accuracy.\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing. \n",
26
+ "\n**Does the deck include qualitative results slides referencing Figures 3 and 4 and describing what to look for (identity changes while attributes like smile/glasses/head tilt remain)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (visual examples from CelebA-HQ and LFW; identity obfuscation vs attribute consistency; “better preserve facial expression and make-up” claim). \n",
27
+ "\n**Does the deck include a “Key contributions” slide that matches the paper’s claims (training-free, attribute-preserving via FaRL feature matching, quantitative + qualitative validation)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (task-agnostic latent optimization; FaRL-based feature matching; shown via qualitative+quantitative experiments). \n",
28
+ "\n**Does the deck include a “Limitations” slide (per-image optimization is slower; bounded by GAN expressiveness and inversion fidelity), without inventing paper-unsupported limitations?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (slower than feed-forward; bounded by pretrained GAN expressiveness and GAN inversion fidelity). \n",
29
+ "\n**Does the deck include a proper Conclusion slide that summarizes what the method achieves (privacy + attribute retention)?**\n\n Note: You only need to check whether the slides contain the required contents; you do not need to verify their correctness.\n If **no**, describe what is missing (summary of the approach and evidence; avoid extra claims not supported by results). \n"
30
+ ],
31
+ "material_dependent_checklist_2": [
32
+ "\n**Does the first slide correctly list the title, authors, affiliations, and the conference?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, describe what is incorrect (Title: *Attribute-preserving Face Dataset Anonymization via Latent Code Optimization*; Authors: Simone Barattin*, Christos Tzelepis*, Ioannis Patras, Nicu Sebe; Affiliations: University of Trento, Queen Mary University of London; Conference: CVPR 2023).\n",
33
+ "\n**Are the slides' statements in the Introduction/Background section consistent with the paper?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, point out exactly where they diverge (e.g., misstated motivation, incorrect problem framing, or unsupported claims) and indicate the relevant slide(s).\n",
34
+ "\n**Does the deck correctly list the paper's stated shortcomings of prior work (costly extra training networks and/or poor attribute retention)?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, describe what is missing (two drawbacks: (i) require costly training of additional purpose-trained networks; and/or (ii) fail to retain facial attributes; and why attribute preservation matters for downstream tasks). \n",
35
+ "\n**When presenting the “Limitations of Existing Methods,” do the slides include a visual comparison that is consistent with Figure 1 (directly copying Figure 1 is allowed)?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, specify what is missing or inconsistent (e.g., the comparison is absent, the wrong figure is used, or the visual content is altered/misrepresented) and indicate the relevant slide(s).\n",
36
+ "\n**Is the proposed method correctly described as *per-image latent code optimization* in a *pre-trained* StyleGAN2, with the GAN weights kept fixed (no training of a new anonymization network)?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, specify where the slides claim (or imply) that StyleGAN2 (or another anonymizer) is trained/fine-tuned.\n",
37
+ "\n**Are the core components and their roles accurately identified (StyleGAN2 generator, W+ space, e4e inversion for real images, ArcFace for identity features, FaRL ViT encoder for semantic/attribute features)?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, list any incorrect module names, swapped roles, or missing/extra components.\n",
38
+ "\n**Is the “fake dataset generation + pairing” process accurately described?**\n(Generate a large fake set $|\\mathcal{X}_F|>|\\mathcal{X}_R|$ by sampling StyleGAN2; represent images in FaRL space; pair each real image with the closest fake neighbor using kNN / Euclidean distance in that feature space.)\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, point out inaccuracies (e.g., saying pairing is done by identity similarity, or using the wrong feature extractor/metric).\n",
39
+ "\n**Is the “sandwich/splicing” initialization precisely correct?**\n(Layers 0-2 from real inversion for pose/coarse geometry; layers 3–7 from fake neighbor and *trainable*; layers 8–17 from real inversion for color/background; trainable block is 5×512.)\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, identify the wrong layer ranges, wrong source (real vs fake), or wrong optimized subvector size.\n",
40
+ "\n**Are the latent-space details accurate (W+ and dimensionality), and are layer semantics not misstated?**\n(E.g., W+ latent code is 18×512; the paper notes early layers relate to coarse/medium attributes and later ones to finer attributes.)\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, list errors such as using Z/W (instead of W+), wrong dimensionality, or incorrect layer-to-attribute mapping.\n",
41
+ "\n**Are the two losses written and interpreted consistently with the paper’s exact definitions?**\n\n * Identity loss: absolute difference between cosine(ArcFace(x_A), ArcFace(x_R)) and margin *m*; *m=0* → orthogonality (more privacy), *m=1* → high similarity (less privacy).\n * Attribute loss: L1 distance in FaRL space; patch-level ViT features (14×14×512 flattened) are used for better attribute preservation than CLS.\n \n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, enumerate equation-level mistakes (wrong sign/metric, wrong embedding network, wrong meaning of *m*, CLS vs patch confusion).\n",
42
+ "\n**Does the deck avoid inventing additional objectives not used in the paper (e.g., landmark loss, adversarial loss, segmentation loss, reconstruction loss) or claiming they are required for geometry preservation?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, list the fabricated losses/claims and where they appear.\n",
43
+ "\n**Does the deck avoid fabricating optimization hyperparameters or training schedules that are not explicitly stated in the paper text (e.g., exact Adam LR, exact per-image step count), and does it avoid presenting such details as “reported by the authors” when they are not?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, identify the unsupported hyperparameters and the slides that present them as factual.\n",
44
+ "\n**Are the evaluation metrics correctly defined and attributed to the correct tools?**\n\n * Re-identification rate computed using FaceNet pre-trained on CASIA WebFace and VGGFace2.\n * Detection rate measured via face detectors (paper tables report dlib/MTCNN).\n * FID reported for anonymized datasets.\n \n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, point out incorrect metric definitions (e.g., calling re-ID “verification accuracy” without matching the paper’s definition) or wrong detector/recognizer names.\n",
45
+ "\n**Does the deck avoid presenting un-evaluated baselines as if they were quantitatively compared in this paper?**\n(Example risk: listing “k-Same / pixelation / blurring” as *experimental baselines with reported numbers* even though the paper’s quantitative comparisons are against CIAGAN and DeepPrivacy.)\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, identify where the deck implies the paper reported those baseline results.\n",
46
+ "\n**Does the deck avoid fabricating scope expansions not supported by the paper (e.g., “works on full-body video” or “extends to video-based anonymization” as an authors’ contribution/result)?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, cite the slide(s) making the claim and describe why it is unsupported.\n",
47
+ "\n**Are all citations/references shown on the slides (e.g., author–year, paper titles) exactly the same as in the paper’s reference list (no missing authors, altered titles, wrong venues/years, or invented references)?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, list each mismatched or fabricated reference and where it appears.\n",
48
+ "\n**If any URLs appear on the slides, are they exactly the same URLs that appear in the paper (character-for-character, including protocol, domain, path, and version identifiers), without adding new links not present in the paper?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, list each URL that differs or was added, and indicate the slide(s).\n",
49
+ "\n**Are all visual sample images strictly taken from Figure 3 and Figure 4 only, without using any other qualitative results from the paper and without generating any additional images/results?**\n\n Note: If any of these items do not appear on the slides, the answer should be \"no\".\n If **no**, identify any slide that uses visuals outside Figure 3–4 or that contains newly generated/constructed samples.\n",
50
+ "\n**Do the slides accurately present the required parts of Table 1?**\n\n All required rows and columns from Table 1 in the paper are shown below:\n\n | Method | FID ↓ | Detection (dlib) ↑ | Detection (MTCNN %) ↑ | Face re-ID (CASIA %) ↓ | Face re-ID (VGG %) ↓ |\n |---------------------|-------|---------------------|------------------------|------------------------|----------------------|\n | CIAGAN | 37.94 | 95.10 | 99.82 | **2.19** | **0.37** |\n | DeepPrivacy | 32.99 | 92.82 | 99.85 | 3.61 | 1.05 |\n | **Ours** | **29.93** | **100** | **100** | 2.80 | 1.67 |\n\n Note: If any of these rows, columns or entries do not appear on the slides, the answer should be \"no\".\n If **no**, list each mismatched value and the slide location.\n",
51
+ "\n**Do the slides accurately present the required parts of Table 2?**\n\n All required rows and columns from Table 2 in the paper are shown below:\n\n | Method | FID ↓ | FID (C-HQ) ↓ | Detection (dlib) ↑ | Detection (MTCNN %) ↑ | Face re-ID (CASIA %) ↓ | Face re-ID (VGG %) ↓ |\n |-------------|-----------|--------------|---------------------|------------------------|------------------------|----------------------|\n | CIAGAN | **22.07** | 85.23 | 98.14 | 99.89 | **0.17** | **0.91** |\n | DeepPrivacy | 23.46 | 123.67 | 96.70 | 99.57 | 2.74 | 1.52 |\n | Ours | 27.45 | **68.88** | **100** | **100** | 2.07 | 1.58 |\n\n Note: If any of these rows, columns or entries do not appear on the slides, the answer should be \"no\".\n If **no**, list each mismatched value and the slide location.\n",
52
+ "\n**Do the slides accurately present the required parts of Table 3?**\n\n All required rows and columns from Table 3 in the paper are shown below:\n\n | Method | Inner face | Outer face | Combined |\n |--------------|------------|------------|----------|\n | Original | 0.8409 | 0.8683 | 0.8539 |\n | CIAGAN | 0.7277 | 0.8372 | 0.7852 |\n | DeepPrivacy | 0.7658 | 0.8511 | 0.8135 |\n | **Ours** | **0.7817** | **0.8518** | **0.8181** |\n\n Note: If any of these rows, columns or entries do not appear on the slides, the answer should be \"no\".\n If **no**, list each mismatched value and the slide location.\n",
53
+ "\n**Do the slides accurately present the required parts of Table 6?**\n\n All required rows and columns from Table 6 in the paper are shown below:\n\n | Method | CelebA-HQ (labels from [22]) | LFW (labels from [22]) | LFW (labels from [17]) |\n |--------------|------------------------------|------------------------|------------------------|\n | CIAGAN | 0.7721 | 0.9143 | 0.7045 |\n | DeepPrivacy | 0.7902 | 0.9133 | 0.7019 |\n | **Ours** | **0.8215** | **0.9157** | **0.7209** |\n\n [17] Yuming Jiang, Ziqi Huang, Xingang Pan, Chen Change Loy,and Ziwei Liu. Talk-to-edit: Fine-grained facial editing viadialog, 2021. 7, 8\n [22] Ji Lin, Richard Zhang, Frieder Ganz, Song Han, and Jun-Yan Zhu. Anycost gans for interactive image synthesis andediting, 2021. 7, 8\n\n Note: If any of these rows, columns or entries do not appear on the slides, the answer should be \"no\". The citation numbers shown on the slides do not need to be the same as those in the paper (as shown above); they only need to correctly distinguish the different rows.\n If **no**, list each mismatched value and the slide location.\n",
54
+ "\n**Do the slides accurately present the required parts of Table 4?**\n\n All required rows and columns from Table 4 in the paper are shown below:\n\n | Method | FID ↓ | Detection (MTCNN %) ↑ | Face re-ID (CASIA %) ↓ | Face re-ID (VGG %) ↓ | Accuracy ↑ |\n |----------------|-------|------------------------|------------------------|----------------------|------------|\n | Ours (m=0.0) | 29.93 | **100** | **2.80** | **3.67** | 0.8181 |\n | Ours (m=0.9) | **27.58** | **100** | 3.41 | 1.76 | **0.83** |\n\n Note: If any of these rows, columns or entries do not appear on the slides, the answer should be \"no\".\n If **no**, list each mismatched value and the slide location.\n"
55
+ ]
56
+ }
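As an editorial aside (not part of the repository files): the two objectives referenced repeatedly in the checklist above, the margin-based identity loss and the L1 attribute-preservation loss in FaRL feature space, can be sketched roughly as follows. This minimal illustration assumes generic `arcface_embed` and `farl_patch_features` callables standing in for the pretrained encoders; it is not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def identity_loss(x_anon, x_real, arcface_embed, m=0.0):
    # | cos(E_A(x_anon), E_A(x_real)) - m |, where m is the privacy-utility margin
    e_a = arcface_embed(x_anon)   # identity embedding of the anonymized image
    e_r = arcface_embed(x_real)   # identity embedding of the original image
    cos = F.cosine_similarity(e_a, e_r, dim=-1)
    return (cos - m).abs().mean()

def attribute_loss(x_anon, x_real, farl_patch_features):
    # L1 distance between flattened patch-level semantic features
    f_a = farl_patch_features(x_anon).flatten(start_dim=1)
    f_r = farl_patch_features(x_real).flatten(start_dim=1)
    return (f_a - f_r).abs().mean()
```

With m = 0 the cosine term is pushed toward orthogonality (stronger anonymization); with m = 1 it is pushed toward the original identity, matching the trade-off the checklist asks the slides to explain.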
academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 8217
5
+ generation_prompt_tokens: 2617
6
+ materials_total_tokens: 5600
7
+ material_count: 1
8
+ pdf_total_pages: 10
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 5600
12
+ pages: 10
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 27
22
+ Content Correctness: 23
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 50
25
+ total_count: 80
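A small sanity check for these per-case `statistics.yaml` files can be sketched as below. It assumes PyYAML is installed and that the nesting matches the file above; the path is illustrative.

```python
import yaml  # PyYAML

with open("generation_task/statistics.yaml") as f:  # illustrative path
    stats = yaml.safe_load(f)

counts = stats["checklist_counts"]
for group in ("common", "specific"):
    assert counts[group]["sum"] == sum(counts[group]["details"].values())
assert counts["total_count"] == counts["common"]["sum"] + counts["specific"]["sum"]
print("checklist counts are internally consistent")
```

For the case above, this confirms 13 + 17 = 30, 27 + 23 + 0 = 50, and 30 + 50 = 80.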
academia/CVPR_2023/Attribute-preserving_Face_Dataset_Anonymization_via_Latent_Code_Optimization/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:9b896abc5015b384e8b39757b86ee8e3bb1f1c2d57fe44fa9564687e910642d4
3
+ size 7293971
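Each `material.pdf` entry is a Git LFS pointer (version line, sha256 oid, byte size) rather than the PDF itself. After fetching the real files (for example with `git lfs pull`), a hedged sketch for checking a local copy against the pointer shown above; the local path is hypothetical and this is not tooling shipped with the dataset.

```python
import hashlib
import os

def matches_lfs_pointer(local_path, expected_sha256, expected_size):
    # Compare a downloaded file against the oid/size recorded in an LFS pointer.
    if os.path.getsize(local_path) != expected_size:
        return False
    digest = hashlib.sha256()
    with open(local_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# oid and size copied from the pointer above; "material.pdf" is a placeholder path.
print(matches_lfs_pointer(
    "material.pdf",
    "9b896abc5015b384e8b39757b86ee8e3bb1f1c2d57fe44fa9564687e910642d4",
    7293971,
))
```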
academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/generation_task/instructions.md ADDED
@@ -0,0 +1,151 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1.Title Slide
16
+
17
+ Paper Title: Canonical Fields: Self-Supervised Learning of Pose-Canonicalized Neural Fields
18
+ Author Team: Shaurya Dewan¹, Rahul Sajnani², Adrien Poulenard³, Rohith Agaram¹, Madhava Krishna¹, Srinath Sridhar²
19
+ Affiliation: IIIT-Hyderabad, Brown University, Stanford University
20
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
21
+
22
+ 2.Outline / Agenda
23
+
24
+ 3.Introduction / Background
25
+
26
+ Neural Fields (NeRFs): Coordinate-based networks representing 3D shape and appearance.
27
+ Current Landscape:
28
+ Generalization to object categories often requires pre-canonicalized datasets (e.g., ShapeNet).
29
+ Existing self-supervised methods primarily operate on 3D point clouds, meshes, or voxels.
30
+ Motivation & Problem Statement: Directly canonicalizing neural fields is hard due to their continuous, noisy, and implicit nature.
31
+
32
+ 4.Limitations of Existing Methods:
33
+
34
+ Strong Supervision Dependency: Methods like ShapeNet rely on manual alignment, limiting real-world scalability.
35
+ Representation Constraints: Point cloud-based methods cannot handle the continuous and noisy nature of neural fields.
36
+ Inability to Manipulate: Neural fields are parameterized as weights, making direct transformation estimation challenging.
37
+ Design Constraint: Include an input-to-canonical example (refer to Fig 1) showing how arbitrarily posed NeRFs are aligned to a consistent orientation.
38
+
39
+ 5.Overview of the Proposed Method
40
+
41
+ Core Idea: Canonical Field Network (CaFi-Net), a self-supervised method to canonicalize the 3D position and orientation of objects represented as neural fields.
42
+ Key Contribution 1: Siamese Network Architecture. Extracts equivariant field features for category-level canonicalization.
43
+ Key Contribution 2: Density-Based Weighting. A mechanism to handle noise and outliers in radiance fields by focusing on occupied regions.
44
+ Key Contribution 3: First Self-Supervised Field Canonicalizer. Operates directly on continuous fields without conversion to point clouds.
45
+
46
+ 6.Methodology: CaFi-Net Architecture
47
+
48
+ Step 1: NeRF Sampling: Uniformly sampling the density field within the object bounding box.
49
+ Step 2: Signal Representation: Combining density values and density gradients to capture object surfaces.
50
+ Step 3: Equivariant Convolution: Using Tensor Field Networks (TFNs) to process vector fields and extract rotation-equivariant features.
51
+
52
+ 7.Key Algorithm: Canonicalization & Losses
53
+
54
+ Invariant Embedding: Computing the dot product between global equivariant features and spherical harmonics.
55
+ Siamese Shape Loss: Penalizing inconsistency between different instances of the same category to regularize training.
56
+ Design Constraint: Display the framework diagram (refer to Fig 3) showing the flow from Signal Representation -> Equivariant Convolution -> Canonical Render.
57
+
58
+ 8.Dataset and Training Details
59
+
60
+ Data Statistics: A new dataset of 1300 NeRF models across 13 ShapeNet categories.
61
+ Training Setup: Trained for 300 epochs on Nvidia 1080-Ti using Adam optimizer.
62
+ Foreground Clustering: K-means clustering (K=2) on densities to focus learning on the object rather than background.
63
+
64
+ 9.Experimental Setup
65
+
66
+ Test Benchmarks: Evaluated on 13 categories (e.g., Car, Chair, Plane).
67
+ Comparison Baselines: Compared against 3D point cloud-based methods (PCA, CaCa, ConDor).
68
+ Evaluation Metrics: Instance-Level Consistency (IC), Category-Level Consistency (CC), and Ground Truth Equivariance Consistency (GEC).
69
+
70
+ 10.Experimental Results & Analysis
71
+
72
+ Performance Achievement: Matches or exceeds point cloud-based methods (ConDor) despite operating on noisier field data.
73
+ Category Consistency: Demonstrates robust alignment across diverse shapes with low variance.
74
+ Design Constraint: Include a comparison table (refer to Table 1) highlighting the best performance in Ground Truth Equivariance Consistency.
75
+
76
+ 11.Ablation & Visual Analysis
77
+
78
+ Signal Importance: Using density gradients significantly improves performance over using just coordinates or density.
79
+ Siamese Strategy: The Siamese loss is crucial for establishing shape similarity within a category.
80
+ Foreground Focus: Density-based weighting is essential for handling the noisy nature of raw NeRF outputs.
81
+
82
+ 12.Key Takeaways & Limitations
83
+
84
+ Takeaways: CaFi-Net enables direct 3D pose manipulation of neural fields; density gradients are vital for surface-aware canonicalization.
85
+
86
+ Limitations: Performance can be sensitive to the quality of the initial NeRF fitting; assumes scenes primarily contain a single object.
87
+
88
+ 13.Conclusion
89
+
90
+ Summary: CaFi-Net provides a pioneering self-supervised framework for canonicalizing neural radiance fields.
91
+
92
+ Future Work: Extending the method to handle articulated objects and more complex scene backgrounds.
93
+
94
+ ---
95
+
96
+ ## 2. Content Constraints
97
+
98
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
99
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
100
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
101
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
102
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology and results to conclusions. Ensure a clear and logical progression between sections.
103
+ * **Relevance of Information**: You must not add unrelated content.
104
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
105
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
106
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
107
+ * All references (if any) must be placed in the bottom-left corner of the slide.
108
+
109
+ ## 3. Visual & Design
110
+
111
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
112
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
113
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
114
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
115
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
116
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
117
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
118
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
119
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
120
+
121
+ ## 4. Text Quality
122
+
123
+ * All generated text should be clear, with no missing or incorrect characters or words.
124
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
125
+
126
+ ## 5. Technical Fidelity Requirements
127
+
128
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
129
+ * The slide deck must include at least 5 slides with quantitative details.
130
+
131
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
132
+
133
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
134
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
135
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
136
+ * For charts, every axis, unit, and label must be explicit.
137
+
138
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
139
+
140
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
141
+
142
+ ## 5. Presentation Tone and Audience
143
+
144
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
145
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
146
+
147
+ ---
148
+
149
+ # **Output Expected**
150
+
151
+ A **complete slide deck** satisfying all constraints above.
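As a reading aid for the "Foreground Clustering" bullet in the dataset/training section above (K-means with K=2 on sampled densities), a minimal sketch assuming scikit-learn and a precomputed array of NeRF density samples; it illustrates the idea only and is not the paper's code.

```python
import numpy as np
from sklearn.cluster import KMeans

def foreground_mask(densities: np.ndarray) -> np.ndarray:
    # Cluster sampled NeRF densities into K=2 groups and keep the
    # higher-density cluster as the foreground (object) region.
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(
        densities.reshape(-1, 1)
    )
    means = [densities[labels == k].mean() for k in (0, 1)]
    return labels == int(np.argmax(means))

# Illustrative call with synthetic densities (not data from the paper).
sigma = np.concatenate([0.1 * np.random.rand(900), 5.0 + np.random.rand(100)])
print(foreground_mask(sigma).sum(), "of", sigma.size, "samples marked as foreground")
```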
academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\n**Does the first slide correctly list the title, authors, and the conference?**\nIf **no**, describe what is missing from the first slide (Title: Canonical Fields: Self-Supervised Learning of Pose-Canonicalized Neural Fields; Conf: CVPR 2023).\n",
4
+ "\n**Does the beginning of the presentation include a clear agenda or outline?**\nIf **no**, specify where it is missing.\n",
5
+ "\n**Is there a slide dedicated to the background of Neural Fields (NeRFs) that points out the limitations of category-level generalization (e.g., \"dependence on pre-canonicalized datasets like ShapeNet\")?**\nIf **no**, explain where the background info on coordinate-based representations is lacking.\n",
6
+ "\n**Does the slide deck clearly define the core concept of \"CaFi-Net\" for self-supervised canonicalization of position and orientation?**\nIf **no**, describe the missing points in explaining the \"self-supervised pose alignment\" framework.\n",
7
+ "\n**Is there a slide describing the \"Siamese Network Architecture\" and how it extracts equivariant features from radiance fields?**\nIf **no**, indicate whether the structural link between the two branches and the shared weights was omitted.\n",
8
+ "\n**2.6 Is there a slide explaining the \"Density-Based Weighting\" mechanism and its role in handling noise and outliers in NeRF data?**\nIf **no**, specify if the mechanism for focusing on occupied 3D regions is missing.\n",
9
+ "\n**2.7 Does the deck present the core logic for using \"Density Gradients\" (how they represent surface geometry more effectively than raw coordinates)?**\nIf **no**, specify if the explanation of the input signal representation is missing.\n",
10
+ "\n**2.8 Is there a slide summarizing the training data sources used (e.g., the 13 categories derived from ShapeNet and converted into NeRF models)?**\nIf **no**, explain if the dataset construction section is missing.\n",
11
+ "\n**2.9 Does the experimental section cover comparative results against baselines like PCA, CaCa, or ConDor?**\nIf **no**, indicate if the performance analysis relative to point cloud-based methods was omitted.\n",
12
+ "\n**2.10 Does the deck include qualitative results showing the model's ability to align arbitrarily posed objects into a consistent canonical frame?**\nIf **no**, indicate if visual evidence of successful 3D alignment is missing.\n",
13
+ "\n**2.11 Is there a slide summarizing the \"Key Takeaways\" and limitations (e.g., sensitivity to initial NeRF reconstruction quality)?**\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\n** Is the description of the limitations of existing canonicalization accurate? (e.g., most methods require supervised labels or only work on discrete point clouds.)**\nIf **no**, specify the inaccurate descriptions.\n",
17
+ "\n**Is the technical roadmap correctly presented as \"Self-Supervised Learning\" rather than \"Supervised Pose Estimation\"?**\nIf **no**, point out the deviation in understanding the label-free alignment principle.\n",
18
+ "\n**Are the explanations for \"Equivariant Convolutions\" consistent with the paper? (It uses Tensor Field Networks to process vector-valued signals.)**\nIf **no**, explain the errors in definition.\n",
19
+ "\n**3.4 Are the details of the \"Siamese Shape Loss\" or the invariant embedding objectives accurate?**\nIf **no**, specifically point out errors in the loss functions used for category-level consistency.\n",
20
+ "\n**Does the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving superior Ground Truth Equivariance Consistency compared to ConDor.)**\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\n**Does the deck accurately distinguish between \"Instance-Level Consistency\" and \"Category-Level Consistency\"?**\nIf **no**, explain where these two evaluation dimensions are confused.\n",
22
+ "\n**Are the definitions of evaluation metrics (e.g., IC, CC, and GEC) consistent with the paper's standards?**\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\n**Does the slide deck avoid fabricating facts (e.g., claiming it can handle dynamic scenes when it is focused on static object canonicalization)?**\nIf **no**, point out the fabricated content.\n",
24
+ "\n**Do the visual results accurately reflect the model's \"Rotation Robustness\"? (i.e., achieving the same canonical pose regardless of the input NeRF's initial rotation.)**\nIf **no**, specify the slides where the equivariance capabilities are misinterpreted.\n",
25
+ "\n**Is the signal representation (using density $\\sigma$ and its gradient $\nabla\\sigma$) correctly identified as the input to the TFN?**\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 8394
5
+ generation_prompt_tokens: 2234
6
+ materials_total_tokens: 6160
7
+ material_count: 1
8
+ pdf_total_pages: 11
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 6160
12
+ pages: 11
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2023/Canonical_Fields_Self-Supervised_Learning_of_Pose-Canonicalized_Neural_Fields/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:1426792366a33ce60e7206178861a3cacfa0cef02731e0898a26c37130f01995
3
+ size 3322384
academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/generation_task/instructions.md ADDED
@@ -0,0 +1,146 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1.Title Slide
16
+
17
+ Paper Title: Hierarchical B-frame Video Coding Using Two-Layer CANF Without Motion Coding
18
+ Author Team: David Alexandre, Hsueh-Ming Hang, Wen-Hsiao Peng
19
+ Affiliation: National Yang Ming Chiao Tung University
20
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
21
+
22
+ 2.Outline / Agenda
23
+
24
+ 3.Introduction / Background
25
+
26
+ Traditional Video Coding: Relies on "Motion Coding + Residual Coding" (e.g., H.265, H.266).
27
+ Current Dilemma: Motion vector (MV) estimation and transmission consume significant bitrate, especially at low bitrates or complex motion scenarios.
28
+ Core Question: Can we achieve high-quality video compression without transmitting any motion information?
29
+
30
+ 4.Limitations of Existing Methods:
31
+
32
+ Overhead of Motion Coding: Explicitly coding motion fields requires complex entropy coding and occupies a fixed portion of the bit budget.
33
+ Error Propagation: Poor motion estimation leads to large residuals, which are difficult for subsequent layers to compensate.
34
+ Design Constraint: Include a diagram (refer to Fig 1) showing the difference between the standard "Motion-Residual" loop and the proposed "Base-Enhancement" architecture.
35
+
36
+ 5.Overview of the Proposed Method
37
+
38
+ Core Idea: Replacing explicit motion coding with a two-layer Conditional Augmented Normalization Flows (CANF) framework.
39
+ Key Contribution 1: Zero Motion Transmission. No bits are spent on motion vectors; the temporal alignment is handled implicitly.
40
+ Key Contribution 2: Hierarchical B-frame Structure. Optimizes bidirectional prediction by leveraging temporal symmetry without explicit flow maps.
41
+ Key Contribution 3: Competitive Performance. Matches or exceeds state-of-the-art learned codecs (like DCVC) in specific rate-distortion regions.
42
+
43
+ 6.Methodology: Two-Layer Architecture
44
+
45
+ Base Layer: A low-resolution image compressor that acts as a "proxy" for motion, providing a global structural prior.
46
+ Enhancement Layer: Uses CANF to model the conditional distribution of the high-resolution frame given the base layer and previous reference frames.
47
+ Synthesis: The decoder reconstructs the full-resolution frame by fusing low-res information with warped high-res temporal context.
48
+
49
+ 7.Key Algorithm: Conditional Augmented Normalization Flows (CANF)
50
+
51
+ Flow-based Modeling: Transforms a complex image distribution into a simple latent distribution via a sequence of invertible transformations.
52
+ Conditional Integration: The warping results from reference frames are used as "conditions" rather than "predictors," allowing the model to adaptively correct errors.
53
+ Design Constraint: Display the CANF transformation flow (refer to Fig 2) showing how conditional signals guide the latent space mapping.
54
+
55
+ 8.Dataset and Training Details
56
+
57
+ Data Statistics: Trained on the Vimeo-90K septuplet dataset.
58
+ Training Strategy: Multi-stage training (Base layer first, then joint optimization) using Rate-Distortion (R-D) loss: L = R + λ·D.
59
+ Metrics: Optimized for MSE (Mean Squared Error) across various λ values to cover different bitrates.
60
+
61
+ 9.Experimental Setup
62
+
63
+ Test Sets: Evaluated on HEVC Common Test Conditions (Class B, C, D) and UVG datasets.
64
+ Configuration: Random Access (RA) mode with a Group of Pictures (GOP) size of 8 or 16.
65
+ Comparison: Benchmarked against HM-16.20 (H.265), VTM-12.0 (H.266), and SOTA learned codecs like DVC and TCM.
66
+
67
+ 10.Experimental Results & Analysis
68
+
69
+ R-D Performance: Achieving comparable results to VTM in several sequences without any explicit motion vectors.
70
+ Ablation Study: Proves that the two-layer approach significantly outperforms single-layer flow models.
71
+ Visual Quality: Show cases (refer to Fig 4) where the model preserves textures better than traditional codecs at low bitrates.
72
+ Design Constraint: Include a comparison table (refer to Table 1) showing BD-Rate savings relative to HEVC/H.265.
73
+
74
+ 11.Visual Analysis & Error Studies
75
+
76
+ Implicit Alignment: Analysis of how the model "hallucinates" motion details using the base layer and temporal context.
77
+ Bitrate Allocation: Demonstrates that bits saved from motion coding are effectively redistributed to enhance visual texture and sharpness.
78
+
79
+ 12.Key Takeaways & Limitations
80
+
81
+ Takeaways: Motion coding is not strictly necessary for learned video compression; hierarchical CANF is a powerful tool for temporal modeling.
82
+ Limitations: Higher computational complexity at the decoder due to the flow-based architecture; performance in extremely fast motion scenes needs further optimization.
83
+
84
+ 13.Conclusion
85
+
86
+ Summary: The proposed Two-Layer CANF offers a novel, motion-free paradigm for next-generation video coding.
87
+ Future Work: Reducing complexity and extending the framework to P-frame and low-latency scenarios.
88
+
89
+ ---
90
+
91
+ ## 2. Content Constraints
92
+
93
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
94
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
95
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
96
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
97
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology and results to conclusions. Ensure a clear and logical progression between sections.
98
+ * **Relevance of Information**: You must not add unrelated content.
99
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
100
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
101
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
102
+ * All references (if any) must be placed in the bottom-left corner of the slide.
103
+
104
+ ## 3. Visual & Design
105
+
106
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
107
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
108
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
109
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
110
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
111
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
112
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
113
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
114
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
115
+
116
+ ## 4. Text Quality
117
+
118
+ * All generated text should be clear, with no missing or incorrect characters or words.
119
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
120
+
121
+ ## 5. Technical Fidelity Requirements
122
+
123
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
124
+ * The slide deck must include at least 5 slides with quantitative details.
125
+
126
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
127
+
128
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
129
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
130
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
131
+ * For charts, every axis, unit, and label must be explicit
132
+
133
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
134
+
135
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
136
+
137
+ ## 6. Presentation Tone and Audience
138
+
139
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
140
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
141
+
142
+ ---
143
+
144
+ # **Output Expected**
145
+
146
+ A **complete slide deck** satisfying all constraints above.
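For the rate-distortion objective L = R + λ·D quoted in the training-details section above, a hedged sketch of how such a loss is typically assembled; the bits estimate and MSE distortion here are placeholder assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def rate_distortion_loss(x_hat, x, total_bits, lam):
    # L = R + lambda * D, with R in bits per pixel and D as MSE distortion.
    num_pixels = x.shape[0] * x.shape[2] * x.shape[3]  # N * H * W
    rate = total_bits / num_pixels
    distortion = F.mse_loss(x_hat, x)
    return rate + lam * distortion

# Illustrative call with dummy tensors (values are not from the paper).
x = torch.rand(1, 3, 64, 64)
x_hat = x + 0.01 * torch.randn_like(x)
print(float(rate_distortion_loss(x_hat, x, total_bits=torch.tensor(2048.0), lam=0.01)))
```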
academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\n**Does the first slide correctly list the title, authors, and the conference?**\nIf **no**, describe what is missing (Title: Hierarchical B-frame Video Coding Using Two-Layer CANF Without Motion Coding; Conf: CVPR 2023).\n",
4
+ "\n**Does the presentation include a clear agenda or outline?**\nIf **no**, specify where it is missing.\n",
5
+ "\n**Is there a slide dedicated to the \"No Motion Coding\" paradigm shift?**\nIf **no**, explain where the explanation of why motion vectors are omitted is lacking.\n",
6
+ "\n**Does the slide deck clearly define the \"Base Layer\" as a low-resolution image compressor?**\nIf **no**, describe the missing points regarding its role as a structural prior.\n",
7
+ "\n**Is there a slide describing the \"Conditional Augmented Normalization Flows (CANF)\" mechanism?**\nIf **no**, indicate whether the mathematical/architectural link between the condition and the flow is omitted.\n",
8
+ "\n**Is there a slide explaining the Hierarchical B-frame structure (GOP configuration)?**\nIf **no**, specify if the temporal hierarchy explanation is missing.\n",
9
+ "\n**Does the deck present the \"Warping\" process without explicit MV transmission?**\nIf **no**, specify if the explanation of how reference frames are utilized is missing.\n",
10
+ "\n**Is there a slide summarizing the training loss (Rate-Distortion optimization)?**\nIf **no**, explain if the λ-parameter and MSE/SSIM optimization info is missing.\n",
11
+ "\n**Does the experimental section compare the model against H.265 (HM) and H.266 (VTM)?**\nIf **no**, indicate if the BD-Rate comparison was omitted.\n",
12
+ "\n**Does the deck include qualitative results (e.g., visual crop comparisons) showing texture preservation?**\nIf **no**, indicate if visual evidence of reconstruction quality is missing.\n",
13
+ "\n**Is there a slide summarizing \"Key Takeaways\" and the trade-off between motion coding and complexity?**\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\n**Is the claim about \"Not transmitting any motion information\" accurately presented?**\nIf **no**, point out where explicit motion vectors are incorrectly mentioned as transmitted.\n",
17
+ "\n**Is the technical roadmap correctly presented as a \"Flow-based\" model rather than a standard \"Autoencoder\"?**\nIf **no**, point out the deviation in understanding the CANF principle.\n",
18
+ "\n**Are the explanations for the \"Two-Layer\" structure consistent with the paper? (Base Layer = LR image, Enhancement Layer = HR refinement.)**\nIf **no**, explain the errors in definition.\n",
19
+ "\n**Are the details of the \"Hierarchical B-frame\" configuration accurate (e.g., GOP=16 or 32)?**\nIf **no**, specifically point out errors in the temporal structure.\n",
20
+ "\n**Does the performance data in \"Experimental Results\" match the paper's tables? (e.g., BD-Rate savings on UVG or HEVC datasets.)**\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\n**Does the deck accurately distinguish between \"Residual Coding\" in traditional codecs and \"Conditional Flow\" in this paper?**\nIf **no**, explain where these concepts are confused.\n",
22
+ "\n**Are the definitions of evaluation metrics (e.g., PSNR, MS-SSIM, BD-Rate) consistent with the paper?**\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\n**Does the slide deck avoid fabricating facts (e.g., claiming it is faster than H.264 when flow-based models are usually slower)?**\nIf **no**, point out the fabricated content.\n",
24
+ "\n**Do the visual results accurately reflect the model's performance in high-motion vs. low-motion sequences?**\nIf **no**, specify the slides where the content adaptability is misinterpreted.\n",
25
+ "\n**Is the training dataset (Vimeo-90K) and the test datasets (HEVC Classes) correctly identified?**\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
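
The judge prompt above is plain JSON: an object with two keys, each holding a list of yes/no checklist questions that embed their own follow-up instruction for the "no" case. As a rough illustration of how such a file might be consumed (the harness below is hypothetical and not part of this repository):

```python
import json
from pathlib import Path

def load_checklist_items(case_dir):
    """Flatten a case's judge_prompt.json into (checklist_name, index, question) tuples."""
    prompt_path = Path(case_dir) / "generation_task" / "judge_prompt.json"
    with open(prompt_path, encoding="utf-8") as f:
        prompts = json.load(f)
    items = []
    for name, questions in prompts.items():
        for i, question in enumerate(questions, start=1):
            items.append((name, i, question.strip()))
    return items

if __name__ == "__main__":
    case = ("academia/CVPR_2023/"
            "Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding")
    for name, idx, question in load_checklist_items(case):
        print(f"[{name} #{idx}] {question.splitlines()[0]}")
```

Because each question already carries its own "If no, ..." instruction, a judging harness only needs to pass the question text through verbatim alongside the generated slide deck.
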
academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 7835
5
+ generation_prompt_tokens: 2235
6
+ materials_total_tokens: 5600
7
+ material_count: 1
8
+ pdf_total_pages: 10
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 5600
12
+ pages: 10
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
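
statistics.yaml records input-token and page counts for the material plus the checklist sizes used for judging; in the file above the common (30) and specific (21) sums add up to total_count: 51. A hypothetical sanity check with PyYAML, assuming the nesting shown above:

```python
import yaml  # PyYAML

def check_statistics(path):
    """Verify that the per-category checklist sums add up to total_count."""
    with open(path, encoding="utf-8") as f:
        stats = yaml.safe_load(f)
    counts = stats["checklist_counts"]
    expected = counts["common"]["sum"] + counts["specific"]["sum"]
    assert counts["total_count"] == expected, (counts["total_count"], expected)
    return stats["case_path"], expected

if __name__ == "__main__":
    case = ("academia/CVPR_2023/"
            "Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding")
    print(check_statistics(case + "/generation_task/statistics.yaml"))  # expects (case path, 51)
```
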
academia/CVPR_2023/Hierarchical_B-Frame_Video_Coding_Using_Two-Layer_CANF_Without_Motion_Coding/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:4c2639ddde0b12f2b8ff6619bf0d9a9925872f9345973861af38355f385745f7
3
+ size 7075625
academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/generation_task/instructions.md ADDED
@@ -0,0 +1,149 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1.Title Slide
16
+
17
+ Paper Title: Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving
18
+ Author Team: Ben Agro*, Quinlan Sykora*, Sergio Casas, Raquel Urtasun
19
+ Affiliation: Waabi, University of Toronto
20
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
21
+
22
+ 2.Outline / Agenda
23
+
24
+ 3.Introduction / Background
25
+
26
+ Traditional Autonomy Stacks: Rely on a sequential pipeline of perception (detection/tracking), motion forecasting, and planning.
27
+ Existing Paradigms:
28
+ Object-based: Performs detection followed by trajectory forecasting; suffers from information loss and safety concerns due to detection thresholds.
29
+ Object-free (Explicit): Predicts dense occupancy and flow grids; computationally expensive and limited by the receptive field of convolutional networks.
30
+
31
+ 4.Limitations of Existing Methods:
32
+
33
+ Efficiency Bottleneck: Explicit grid methods waste computation on regions irrelevant to the motion planner.
34
+ Safety Risks: Object-based methods may fail to recall objects below confidence thresholds, leading to "blindness" in planning.
35
+ Limited Receptive Field: Standard CNNs struggle with high-speed agents (e.g., highway scenarios) where sensor evidence is spatially distant from future predicted locations.
36
+ Design Constraint: Include a comparison (refer to Fig 1) showing the difference between predicting whole-scene grids (Explicit) versus targeted query points (Implicit).
37
+
38
+ 5.Overview of the Proposed Method
39
+
40
+ Core Idea: IMPLICITO, a unified model representing occupancy and flow as a continuous implicit field queried by the motion planner.
41
+ Key Contribution 1: Implicit Representation. Enables efficient parallel evaluation of arbitrary spatio-temporal query points.
42
+ Key Contribution 2: Global Attention Mechanism. Uses deformable offsets and cross-attention to capture long-range context for high-speed motion.
43
+ Key Contribution 3: State-of-the-Art Performance. Outperforms both object-based and explicit object-free baselines in urban and highway settings.
44
+
45
+ 6.Methodology: System Architecture
46
+
47
+ Encoder: A two-stream CNN processing voxelized LiDAR (5 frames history) and HD map rasters to produce a BEV feature map Z.
48
+ Implicit Decoder: Queries feature map Z using bi-linear interpolation, followed by a query-based attention module to aggregate global context.
49
+ Parallel Inference: The decoder can process thousands of candidate trajectory points from the planner in parallel.
50
+
51
+ 7.Key Algorithm: Global Attention Mechanism
52
+
53
+ Feature Aggregation: For a query point (x, y, t), the model predicts K reference points as offsets to find relevant sensor evidence.
54
+ Cross-Attention: Aggregates features from the query location and predicted reference points to form a rich context vector.
55
+ Backwards Flow: Predicts the motion as a translation vector from t-1 to t, effectively handling multi-modal futures with a single vector.
56
+ Design Constraint: Display the architecture diagram (refer to Fig 2) showing the flow from Sensor Inputs -> Encoder -> Implicit Decoder with Attention.
57
+
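
The offset-and-attention decoding described in this section can be pictured with a small sketch. The following is a hypothetical PyTorch illustration, not the authors' implementation; feature sizes, the number of reference points, the output head, and the coordinate normalization are all assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyImplicitDecoder(nn.Module):
    """Hypothetical query-based decoder with K learned reference offsets."""

    def __init__(self, feat_dim=128, num_offsets=4):
        super().__init__()
        self.num_offsets = num_offsets
        self.offset_head = nn.Linear(feat_dim + 3, 2 * num_offsets)    # (dx, dy) per reference point
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.out_head = nn.Linear(feat_dim, 3)                         # occupancy logit + 2-D backwards flow

    def forward(self, Z, queries):
        # Z: (1, C, H, W) BEV feature map; queries: (N, 3) of (x, y, t), x/y normalized to [-1, 1].
        xy = queries[:, :2].view(1, -1, 1, 2)
        q_feat = F.grid_sample(Z, xy, align_corners=False)             # bilinear sample at the query (x, y)
        q_feat = q_feat.squeeze(0).squeeze(-1).transpose(0, 1)         # (N, C)

        offsets = self.offset_head(torch.cat([q_feat, queries], dim=-1))
        ref_xy = xy + offsets.view(1, -1, self.num_offsets, 2)         # K reference points per query
        ref_feat = F.grid_sample(Z, ref_xy, align_corners=False)       # features at the reference points
        ref_feat = ref_feat.squeeze(0).permute(1, 2, 0)                # (N, K, C)

        ctx, _ = self.attn(q_feat.unsqueeze(1), ref_feat, ref_feat)    # query attends to its references
        out = self.out_head(ctx.squeeze(1))                            # (N, 3)
        return torch.sigmoid(out[:, :1]), out[:, 1:]                   # occupancy in [0, 1], backwards flow

# Toy usage: a 128-channel BEV map and 1,000 spatio-temporal query points.
decoder = ToyImplicitDecoder()
occ, flow = decoder(torch.randn(1, 128, 64, 64), torch.rand(1000, 3) * 2 - 1)
print(occ.shape, flow.shape)   # torch.Size([1000, 1]) torch.Size([1000, 2])
```

Because each (x, y, t) query is handled independently, a batch of planner-proposed points can be evaluated in one forward pass, which is the parallelism argument made above.
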
58
+ 8.Dataset and Training Details
59
+
60
+ Datasets: Argoverse 2 Sensor (AV2) for urban scenarios and HighwaySim (HwySim) for high-speed environments.
61
+ Supervision: Trained using a combination of Binary Cross-Entropy (BCE) for occupancy and L2 loss for backwards flow.
62
+ Sampling: Continuous query points are sampled uniformly across the spatio-temporal volume [0, H] x [0, W] x [0, T].
63
+
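
A minimal sketch of the combined objective named above (BCE on occupancy plus an L2 flow term), assuming the flow loss is applied only at occupied query points; the masking and weighting are assumptions rather than details taken from the paper:

```python
import torch
import torch.nn.functional as F

def occ_flow_loss(occ_pred, occ_gt, flow_pred, flow_gt, flow_weight=1.0):
    # occ_pred / occ_gt:   (N, 1) occupancy probability and label per query point
    # flow_pred / flow_gt: (N, 2) backwards-flow vectors at the same query points
    occ_loss = F.binary_cross_entropy(occ_pred, occ_gt)
    occupied = occ_gt.squeeze(-1) > 0.5                    # supervise flow only where occupied (assumption)
    flow_err = ((flow_pred - flow_gt) ** 2).sum(dim=-1)
    flow_loss = flow_err[occupied].mean() if occupied.any() else flow_err.sum() * 0.0
    return occ_loss + flow_weight * flow_loss

# Toy usage for 1,000 uniformly sampled query points.
n = 1000
print(occ_flow_loss(torch.rand(n, 1), (torch.rand(n, 1) > 0.5).float(),
                    torch.randn(n, 2), torch.randn(n, 2)))
```
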
64
+ 9.Experimental Setup
65
+
66
+ Baselines: Compared against 5 SOTA models including MultiPath, LaneGCN (Object-based) and MP3, OccFlow (Object-free).
67
+ Metrics: Mean Average Precision (mAP), Soft-IoU, Expected Calibration Error (ECE), and Foreground Endpoint Error (EPE).
68
+
69
+ 10.Experimental Results & Analysis
70
+
71
+ Performance Lead: IMPLICITO achieves superior results across all metrics; e.g., on AV2, mAP reaches 0.799 compared to MP3's 0.774.
72
+ Highway Advantage: Significant gains in HwySim due to the global attention mechanism capturing fast-moving vehicles.
73
+ Efficiency: The implicit decoder significantly reduces computation compared to generating high-resolution dense grids.
74
+ Design Constraint: Include a results table (refer to Table 1) highlighting the performance gap between IMPLICITO and existing baselines.
75
+
76
+ 11.Visual Analysis & Qualitative Studies
77
+
78
+ Perception Accuracy: IMPLICITO correctly identifies occupancy where object-based models miss detections or hallucinate.
79
+ Motion Consistency: The predicted flow-fields align better with HD map geometry and actor behaviors.
80
+ Attention Visualization: Show (refer to Fig 4) how attention offsets "look back" along traffic lanes to find corresponding LiDAR evidence.
81
+
82
+ 12.Key Takeaways & Limitations
83
+
84
+ Takeaways: Implicit fields provide a flexible and efficient interface for motion planning; global attention is crucial for long-term forecasting.
85
+ Limitations: Performance still depends on the quality of the underlying BEV feature representation; requires careful sampling during training.
86
+
87
+ 13.Conclusion
88
+
89
+ Summary: IMPLICITO provides a continuous, efficient, and highly accurate representation for self-driving perception and prediction.
90
+ Future Work: Exploring the integration of this implicit representation directly into end-to-end planning cost functions.
91
+
92
+ ---
93
+
94
+ ## 2. Content Constraints
95
+
96
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
97
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
98
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
99
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
100
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology to results and conclusions. Ensure there is a clear and coherent progression of ideas.
101
+ * **Relevance of Information**: You must not add unrelated content.
102
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
103
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
104
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
105
+ * All references (if any) must be placed in the bottom-left corner of the slide.
106
+
107
+ ## 3. Visual & Design
108
+
109
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
110
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
111
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
112
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
113
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
114
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
115
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
116
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
117
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
118
+
119
+ ## 4. Text Quality
120
+
121
+ * All generated text should be clear, with no missing or incorrect characters or words.
122
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
123
+
124
+ ## 5. Technical Fidelity Requirements
125
+
126
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
127
+ * The slide deck must include at least 5 slides with quantitative details.
128
+
129
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
130
+
131
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
132
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
133
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
134
+ * For charts, every axis, unit, and label must be explicit.
135
+
136
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
137
+
138
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
139
+
140
+ ## 6. Presentation Tone and Audience
141
+
142
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
143
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
144
+
145
+ ---
146
+
147
+ # **Output Expected**
148
+
149
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference?\nIf **no**, describe what is missing from the first slide (Title: Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving; Conf: CVPR 2023).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline?\nIf **no**, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of autonomous driving perception that points out the limitations of \"Object-based\" and \"Explicit Grid-based\" methods?\nIf **no**, explain where the background info on the \"detection threshold\" and \"receptive field\" issues is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of \"Implicit Representation\" for occupancy and flow (querying continuous spatio-temporal coordinates)?\nIf **no**, describe the missing points in explaining the \"continuous field\" framework.\n",
7
+ "\nIs there a slide describing the \"Dynamic Query-Aware Attention\" architecture and how it uses deformable offsets to aggregate global context?\nIf **no**, indicate whether the structural link between the query points and the BEV feature map was omitted.\n",
8
+ "\nIs there a slide explaining the \"Backwards Flow\" mechanism and its role in associating future occupancy with past observations?\nIf **no**, specify if the mechanism for temporal consistency is missing.\n",
9
+ "\nDoes the deck present the core logic for the \"Implicit Decoder\" (how it processes thousands of query points in parallel)?\nIf **no**, specify if the explanation of computational efficiency and planner-centric querying is missing.\n",
10
+ "\nIs there a slide summarizing the training data and sampling strategy (e.g., using Argoverse 2 and HighwaySim with uniform spatio-temporal sampling)?\nIf **no**, explain if the dataset and training supervision section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like MP3, FIERY, or standard Object-based detectors?\nIf **no**, indicate if the performance analysis relative to state-of-the-art perception-prediction methods was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing the model's ability to handle high-speed agents and multi-modal futures?\nIf **no**, indicate if visual evidence of the \"global attention\" effectiveness is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., dependence on the quality of the BEV encoder)?\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the limitations of explicit grids accurate? (e.g., they suffer from limited receptive fields and high computational cost for high-resolution outputs.)\nIf **no**, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as an \"Implicit Object-Free\" method rather than an \"Object-Detection\" pipeline?\nIf **no**, point out the deviation in understanding the dense-but-queried occupancy principle.\n",
18
+ "\nAre the explanations for the \"Query-Aware Attention\" consistent with the paper? (It uses K reference points to look back at the feature map.)\nIf **no**, explain the errors in definition.\n",
19
+ "\nAre the details of the \"Spatio-Temporal Supervision\" or loss functions (BCE for occupancy, L2 for flow) accurate?\nIf **no**, specifically point out errors in the training objectives.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving higher mAP and lower EPE on Argoverse 2 compared to MP3.)\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between \"Occupancy\" (state) and \"Flow\" (motion) within the IMPLICITO framework?\nIf **no**, explain where these concepts are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., Soft-IoU, Expected Calibration Error) consistent with the paper's standards?\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it uses Camera-only inputs when the paper focuses on LiDAR and HD Maps)?\nIf **no**, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the model's \"Temporal Forecasting\" ability? (i.e., predicting occupancy several seconds into the future.)\nIf **no**, specify the slides where the prediction horizon is misinterpreted.\n",
25
+ "\nIs the input representation (e.g., 5-frame LiDAR history and rasterized HD Maps) correctly identified?\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 7908
5
+ generation_prompt_tokens: 2308
6
+ materials_total_tokens: 5600
7
+ material_count: 1
8
+ pdf_total_pages: 10
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 5600
12
+ pages: 10
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2023/Implicit_Occupancy_Flow_Fields_for_Perception_and_Prediction_in_Self-Driving/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:604aab33ba756732c85af4f1cf807a44c260e852eb3216195cc6f43097609dea
3
+ size 4172516
academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/generation_task/instructions.md ADDED
@@ -0,0 +1,145 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1.Title Slide
16
+
17
+ Paper Title: TarViS: A Unified Approach for Target-Based Video Segmentation
18
+ Author Team: Ali Athar, Alexander Hermans, Jonathon Luiten, Deva Ramanan, Bastian Leibe
19
+ Affiliation: RWTH Aachen University, Carnegie Mellon University
20
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2023
21
+
22
+ 2.Outline / Agenda
23
+
24
+ 3.Introduction / Background
25
+
26
+ The Fragmentation of Video Segmentation: Currently divided into specific tasks like Video Instance Segmentation (VIS), Video Object Segmentation (VOS), etc.
27
+ Current Landscape: Most methods are task-specific and cannot generalize; training and inference require different pipelines.
28
+ Motivation: To create a single, unified architecture capable of handling any target-based video segmentation task.
29
+
30
+ 4.Limitations of Existing Methods:
31
+
32
+ Task Specificity: Specialized architectures for VOS cannot perform VPS, leading to redundant research and engineering.
33
+ Input Inconsistency: Different tasks rely on different guidance (masks, text, or categories), making unification difficult.
34
+ Data Silos: Models trained on one task struggle to leverage the rich annotations available in other related video tasks.
35
+
36
+ 5.Overview of the Proposed Method
37
+
38
+ Core Idea: TarViS — a transformer-based architecture that models all segmentation targets as abstract "queries."
39
+ Key Contribution 1: Unified Architecture. One model, one set of weights, multiple tasks (VIS, VOS, VPS, OTS).
40
+ Key Contribution 2: Flexible Guidance. A modular "Source-Specific Aggregator" that converts various inputs into initial target queries.
41
+ Key Contribution 3: SOTA Performance. Demonstrates superior or competitive results across four diverse benchmarks.
42
+
43
+ 6.Methodology: The TarViS Architecture
44
+
45
+ Backbone: A standard visual encoder (e.g., ResNet or Swin) extracting multi-scale spatio-temporal features.
46
+ Temporal Neck: Aggregates features across frames to build a global understanding of the video volume.
47
+ Transformer Decoder: Iteratively refines "Target Queries" by attending to the video features.
48
+ Design Constraint: Include a visual overview (refer to Fig 2) showing how different task inputs are mapped to a common query space.
49
+
50
+ 7.Key Algorithm: Target Query Refinement
51
+
52
+ Initialization: Depending on the task, queries are initialized from categories (VIS/VPS) or reference masks (VOS).
53
+ Communication: Queries interact with each other to handle occlusions and identity overlaps.
54
+ Mask Prediction: Each query is decoded into a pixel-precise mask sequence across the entire video clip.
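
A minimal sketch of the mask-prediction step described above, assuming a Mask2Former-style dot product between each target query and per-pixel video features (tensor sizes and the decoding scheme are illustrative assumptions, not details confirmed here):

```python
import torch

def decode_query_masks(queries, pixel_features):
    """queries: (Q, C) one embedding per target; pixel_features: (T, C, H, W)
    per-frame pixel embeddings; returns (Q, T, H, W) mask logits."""
    return torch.einsum("qc,tchw->qthw", queries, pixel_features)

# Toy usage with assumed sizes: 5 targets, 8 frames, 64-dim embeddings, 96x160 feature maps.
masks = decode_query_masks(torch.randn(5, 64), torch.randn(8, 64, 96, 160))
print(masks.sigmoid().shape)   # torch.Size([5, 8, 96, 160]); per-pixel target probabilities
```
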
55
+
56
+ 8.Training and Joint Optimization
57
+
58
+ Datasets: COCO, YouTube-VIS, Cityscapes-VPS, and others depending on the task suite.
59
+ Joint Training: Training on a mixture of datasets using a unified loss function (Dice loss + Cross-entropy).
60
+ Task-Agnostic Learning: The model learns general video segmentation cues that transfer across specific benchmarks.
61
+
62
+ 9.Experimental Setup
63
+
64
+ Benchmarks: Evaluated on 4 tasks: Video Instance Segmentation (VIS), Video Object Segmentation (VOS), Video Panoptic Segmentation (VPS), and Referring Video Object Segmentation (RVOS).
65
+ Metrics: J&F score for VOS, PQ (Panoptic Quality) for VPS, and mAP for VIS.
66
+
67
+ 10.Experimental Results & Analysis
68
+
69
+ Cross-Task Efficiency: TarViS outperforms task-specific models in many scenarios while using fewer total parameters.
70
+ Hot-Swapping: Showcases the ability to switch between VOS and VIS without any parameter changes.
71
+ Design Constraint: Include a performance table (refer to Table 1 & 2) showing TarViS vs. specialized SOTA models like Mask2Former or PCAN.
72
+
73
+ 11.Visual Analysis & Qualitative Results
74
+
75
+ Robustness: Demonstrates stable tracking and segmentation through long-term occlusions and rapid camera movement.
76
+ Versatility: Examples of the model segmenting specific people (VOS), all cars (VIS), and the entire background (VPS) in the same clip.
77
+
78
+ 12.Key Takeaways & Limitations
79
+
80
+ Takeaways: Unified modeling is the future of video perception; queries are a powerful abstraction for multi-modal guidance.
81
+ Limitations: Computational cost increases with the number of targets in a scene; real-time performance on high-resolution video remains a challenge.
82
+
83
+ 13.Conclusion
84
+
85
+ Summary: TarViS bridges the gap between fragmented video segmentation tasks with a truly unified transformer-based approach.
86
+ Future Work: Incorporating more modalities (like audio) and extending to even more diverse video understanding tasks.
87
+
88
+ ---
89
+
90
+ ## 2. Content Constraints
91
+
92
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
93
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
94
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
95
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
96
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology to results and conclusions. Ensure there is a clear and coherent progression of ideas.
97
+ * **Relevance of Information**: You must not add unrelated content.
98
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
99
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
100
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
101
+ * All references (if any) must be placed in the bottom-left corner of the slide.
102
+
103
+ ## 3. Visual & Design
104
+
105
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
106
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
107
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
108
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
109
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
110
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
111
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
112
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
113
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
114
+
115
+ ## 4. Text Quality
116
+
117
+ * All generated text should be clear, with no missing or incorrect characters or words.
118
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
119
+
120
+ ## 5. Technical Fidelity Requirements
121
+
122
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
123
+ * The slide deck must include at least 5 slides with quantitative details.
124
+
125
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
126
+
127
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
128
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
129
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
130
+ * For charts, every axis, unit, and label must be explicit.
131
+
132
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
133
+
134
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
135
+
136
+ ## 6. Presentation Tone and Audience
137
+
138
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
139
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
140
+
141
+ ---
142
+
143
+ # **Output Expected**
144
+
145
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference?\nIf **no**, describe what is missing from the first slide (Title: Implicit Occupancy Flow Fields for Perception and Prediction in Self-Driving; Conf: CVPR 2023).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline?\nIf **no**, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of autonomous driving perception that points out the limitations of \"Object-based\" and \"Explicit Grid-based\" methods?\nIf **no**, explain where the background info on the \"detection threshold\" and \"receptive field\" issues is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of \"Implicit Representation\" for occupancy and flow (querying continuous spatio-temporal coordinates)?\nIf **no**, describe the missing points in explaining the \"continuous field\" framework.\n",
7
+ "\nIs there a slide describing the \"Dynamic Query-Aware Attention\" architecture and how it uses deformable offsets to aggregate global context?\nIf **no**, indicate whether the structural link between the query points and the BEV feature map was omitted.\n",
8
+ "\nIs there a slide explaining the \"Backwards Flow\" mechanism and its role in associating future occupancy with past observations?\nIf **no**, specify if the mechanism for temporal consistency is missing.\n",
9
+ "\nDoes the deck present the core logic for the \"Implicit Decoder\" (how it processes thousands of query points in parallel)?\nIf **no**, specify if the explanation of computational efficiency and planner-centric querying is missing.\n",
10
+ "\nIs there a slide summarizing the training data and sampling strategy (e.g., using Argoverse 2 and HighwaySim with uniform spatio-temporal sampling)?\nIf **no**, explain if the dataset and training supervision section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like MP3, FIERY, or standard Object-based detectors?\nIf **no**, indicate if the performance analysis relative to state-of-the-art perception-prediction methods was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing the model's ability to handle high-speed agents and multi-modal futures?\nIf **no**, indicate if visual evidence of the \"global attention\" effectiveness is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., dependence on the quality of the BEV encoder)?\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the limitations of explicit grids accurate? (e.g., they suffer from limited receptive fields and high computational cost for high-resolution outputs.)\nIf **no**, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as an \"Implicit Object-Free\" method rather than an \"Object-Detection\" pipeline?\nIf **no**, point out the deviation in understanding the dense-but-queried occupancy principle.\n",
18
+ "\nAre the explanations for the \"Query-Aware Attention\" consistent with the paper? (It uses K reference points to look back at the feature map.)\nIf **no**, explain the errors in definition.\n",
19
+ "\nAre the details of the \"Spatio-Temporal Supervision\" or loss functions (BCE for occupancy, L2 for flow) accurate?\nIf **no**, specifically point out errors in the training objectives.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving higher mAP and lower EPE on Argoverse 2 compared to MP3.)\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between \"Occupancy\" (state) and \"Flow\" (motion) within the IMPLICITO framework?\nIf **no**, explain where these concepts are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., Soft-IoU, Expected Calibration Error) consistent with the paper's standards?\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it uses Camera-only inputs when the paper focuses on LiDAR and HD Maps)?\nIf **no**, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the model's \"Temporal Forecasting\" ability? (i.e., predicting occupancy several seconds into the future.)\nIf **no**, specify the slides where the prediction horizon is misinterpreted.\n",
25
+ "\nIs the input representation (e.g., 5-frame LiDAR history and rasterized HD Maps) correctly identified?\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 8344
5
+ generation_prompt_tokens: 2184
6
+ materials_total_tokens: 6160
7
+ material_count: 1
8
+ pdf_total_pages: 11
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 6160
12
+ pages: 11
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2023/TarViS_A_Unified_Approach_for_Target-based_Video_Segmentation/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:6efa65003ff1ae1cac7f37b17243fd7080f0a8b2d7fcc9774f8821a1f8403200
3
+ size 8338604
academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/generation_task/instructions.md ADDED
@@ -0,0 +1,172 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ ---
16
+
17
+ ## **1. Structure Requirements**
18
+
19
+ The slide deck must contain the following **ordered sections**:
20
+
21
+ 1.**Title Slide**
22
+
23
+ Paper Title: Discovering and Mitigating Visual Biases through Keyword Explanation
24
+
25
+ Author Team: Younghyun Kim, Sangwoo Mo*, Minkyu Kim, Kyungmin Lee, Jaeho Lee, Jinwoo Shin
26
+
27
+ Affiliation: KAIST, University of Michigan, KRAFTON, POSTECH
28
+
29
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
30
+
31
+ 2.**Outline / Agenda**
32
+
33
+ 3.**Introduction / Background**
34
+
35
+ Visual Bias in Computer Vision: Biased datasets lead to model failures (spurious correlations or distribution shifts), harming performance and fairness.
36
+
37
+ Current Landscape:
38
+ Methods identify biases indirectly via visualization or sample statistics.
39
+ These approaches lack explainability and require intensive human supervision to interpret failures.
40
+
41
+ Motivation: Need for a framework that identifies biases in an explainable, automated, and actionable form.
42
+
43
+ 4.**Limitations of Existing Methods:**
44
+
45
+ Indirect Definition: Bias is often defined through sample groups rather than descriptive traits.
46
+
47
+ Vocabulary Constraints: Existing vision-language methods often rely on pre-defined vocabularies, failing to detect novel or fine-grained biases.
48
+
49
+ Hard to Utilize: Detailed sentence captions or neuron analysis are informative but difficult to integrate directly into debiasing pipelines.
50
+
51
+ Design Constraint: Include a conceptual example (refer to Fig 1) showing how an "ant" image is misclassified as a "bee" due to the contextual bias of a "flower" background.
52
+
53
+ 5.**Overview of the Proposed Method**
54
+
55
+ Core Idea: The Bias-to-Text (B2T) framework, which interprets visual biases as keywords by aggregating traits from language descriptions of mispredicted images.
56
+
57
+ Key Contribution 1: Automated Bias Discovery. Generates language descriptions and extracts common keywords to identify potential biases without human supervision.
58
+
59
+ Key Contribution 2: Validation Mechanism. Uses a vision-language scoring model (CLIP score) to confirm if keywords accurately represent bias by measuring similarity to failure cases.
60
+
61
+ Key Contribution 3: Versatile Applications. Demonstrates that discovered keywords can be used for debiased training, CLIP prompting, and model comparison.
62
+
63
+ 6.**Methodology: Bias Keyword Generation**
64
+
65
+ Step 1: Captioning & Extraction. Using pre-trained models (e.g., ClipCap) to generate captions for mispredicted images and applying the YAKE algorithm to extract common keywords.
66
+
67
+ Step 2: Verification via CLIP Score. Calculating the difference in similarity between the keyword and incorrect vs. correct predictions to ensure the keyword captures the bias-conflicting attribute (a rough sketch of this scoring idea is given below).
68
+
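
A rough numerical sketch of this verification step, using placeholder embedding functions in place of a real vision-language model such as CLIP; the paper's exact score definition and normalization may differ:

```python
import numpy as np

def clip_score(keyword, wrong_img_embs, correct_img_embs, embed_text):
    """Similarity of a keyword to mispredicted images minus its similarity to
    correctly predicted ones; embed_text and the image embeddings stand in for
    a real vision-language model."""
    t = embed_text(keyword)
    t = t / np.linalg.norm(t)

    def mean_cosine(img_embs):
        img_embs = img_embs / np.linalg.norm(img_embs, axis=1, keepdims=True)
        return float((img_embs @ t).mean())

    return mean_cosine(wrong_img_embs) - mean_cosine(correct_img_embs)

# Toy usage with random 512-d embeddings: a clearly positive score would suggest the
# keyword describes the mispredicted group better than the correctly predicted one.
rng = np.random.default_rng(0)
score = clip_score(
    "flower",
    rng.normal(size=(100, 512)),                 # embeddings of mispredicted images
    rng.normal(size=(100, 512)),                 # embeddings of correctly predicted images
    embed_text=lambda s: rng.normal(size=512),   # placeholder text encoder
)
print(round(score, 4))
```
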
69
+ 7.**Key Algorithm: Bias Label Inference & Application**
70
+
71
+ Zero-shot Labeling: Applying discovered keywords to a CLIP classifier to infer sample-wise bias labels for unannotated datasets.
72
+
73
+ Application Pipeline: Using inferred labels for Distributionally Robust Optimization (DRO) to minimize loss across all bias groups (a simplified sketch follows this section).
74
+
75
+ Design Constraint: Display the method flow diagram (refer to Fig 2) showing the two steps: (1) Keyword Generation/Verification and (2) Applications like debiased training and model comparison.
76
+
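
A simplified sketch of how the inferred bias labels could drive robust training. Plain worst-group averaging is used for clarity; the paper's exact group DRO formulation and any reweighting details are not reproduced here:

```python
import torch

def worst_group_loss(per_sample_loss, group_ids):
    """Average the per-sample loss within each inferred bias group and return the
    worst (largest) group average, a simplified group-DRO-style objective."""
    group_means = [per_sample_loss[group_ids == g].mean() for g in group_ids.unique()]
    return torch.stack(group_means).max()

# Toy usage: 6 samples split into two inferred bias groups.
loss = torch.tensor([0.2, 0.9, 0.4, 0.3, 1.1, 0.5])
groups = torch.tensor([0, 1, 0, 0, 1, 1])
print(worst_group_loss(loss, groups))   # tensor(0.8333): mean loss of group 1
```
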
77
+ 8.**Dataset and Training Details**
78
+
79
+ Datasets: Evaluated on benchmark datasets (CelebA, Waterbirds), distribution shift sets (ImageNet-R/C), and large-scale datasets (Dollar Street, ImageNet).
80
+
81
+ Model Architectures: ResNet-50 and ViT backbones evaluated using the B2T framework.
82
+
83
+ 9.**Experimental Setup**
84
+
85
+ Baselines: Compared against unsupervised bias discovery methods like JTT, Domino, and Failure Direction.
86
+
87
+ Evaluation Metrics: Worst-group accuracy, Average accuracy, and AUROC for sample-wise bias labeling.
88
+
89
+ 10.**Experimental Results & Analysis**
90
+
91
+ Efficiency in Discovery: B2T identifies known biases (gender, background) and uncovers novel ones (geographic bias in Dollar Street).
92
+
93
+ Debiasing Performance: DRO-B2T achieves near-optimal worst-group accuracy (90.4% on CelebA), significantly outperforming prior unsupervised methods.
94
+
95
+ Design Constraint: Include a comparison table (refer to Table 1) showing the worst-group and average accuracies of DRO-B2T versus other unsupervised debiasing methods.
96
+
97
+ 11.**Visual Analysis & Novel Bias Discovery**
98
+
99
+ Geographic Bias: In Dollar Street, B2T uncovers that "stoves" from low-income countries are misclassified due to "fire" (traditional design) vs. modern appearances.
100
+
101
+ Contextual Bias: In ImageNet, it reveals that the model relies on "flower" as a shortcut to predict "bee," leading to errors when "ants" appear on flowers.
102
+
103
+ 12.**Key Takeaways & Limitations**
104
+
105
+ Takeaways: Keyword explanations provide a practical, explainable group naming for biases; the CLIP score effectively reflects the severity of bias.
106
+
107
+ Limitations: Performance depends on the quality of the underlying captioning and vision-language scoring models.
108
+
109
+ 13.**Conclusion**
110
+
111
+ Summary: B2T offers a robust and versatile approach to discovering and mitigating visual biases through explainable keywords.
112
+
113
+ Future Work: Advancing the framework using more powerful vision-language models like GPT-4 and extending it to tasks beyond classification, such as object detection.
114
+
115
+ ---
116
+
117
+ ## 2. Content Constraints
118
+
119
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
120
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
121
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
122
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
123
+ * **Logical Flow:** The slides should present a clear narrative, starting from early space exploration to recent developments. Ensure there is a clear progression of time and events.
124
+ * **Relevance of Information**: You must not add unrelated content.
125
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
126
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
127
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
128
+ * All references (if any) must be placed in the bottom-left corner of the slide.
129
+
130
+ ## 3. Visual & Design
131
+
132
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
133
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
134
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
135
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
136
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
137
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
138
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
139
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
140
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
141
+
142
+ ## 4. Text Quality
143
+
144
+ * All generated text should be clear, with no missing or incorrect characters or words.
145
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
146
+
147
+ ## 5. Technical Fidelity Requirements
148
+
149
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
150
+ * The slide deck must include at least 5 slides with quantitative details.
151
+
152
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
153
+
154
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
155
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
156
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
157
+ * For charts, every axis, unit, and label must be explicit
158
+
159
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
160
+
161
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
162
+
163
+ ## 5. Presentation Tone and Audience
164
+
165
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
166
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
167
+
168
+ ---
169
+
170
+ # **Output Expected**
171
+
172
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference?\nIf **no**, describe what is missing from the first slide (Title: Discovering and Mitigating Visual Biases through Keyword Explanation; Conf: CVPR 2024).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline?\nIf **no**, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Visual Bias that points out the limitations of existing methods (e.g., \"indirect definition via samples\" and \"requirement for human supervision\")?\nIf **no**, explain where the background info on explainability challenges is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of the \"Bias-to-Text (B2T)\" framework and its use of keywords?\nIf **no**, describe the missing points in explaining the keyword-based interpretation.\n",
7
+ "\nIs there a slide describing the \"CLIP score\" and how it validates whether a keyword represents a true bias?\nIf **no**, indicate whether the verification mechanism using similarity differences was omitted.\n",
8
+ "\nIs there a slide explaining the \"Captioning & Keyword Extraction\" step (e.g., using ClipCap and YAKE)?\nIf **no**, specify if the technical pipeline for generating keywords is missing.\n",
9
+ "\nDoes the deck present the application of keywords in \"Debiased DRO training\" (how they infer labels for training)?\nIf **no**, specify if the link between keyword discovery and model improvement is missing.\n",
10
+ "\nIs there a slide summarizing the diverse datasets used (e.g., CelebA, Waterbirds, Dollar Street, ImageNet)?\nIf **no**, explain if the dataset section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like JTT, Domino, or Failure Direction?\nIf **no**, indicate if the performance analysis relative to prior unsupervised bias discovery methods was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing novel biases discovered (e.g., \"cave\" for wardrobes in low-income regions)?\nIf **no**, indicate if visual evidence of novel bias discovery is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., dependence on captioning model quality)?\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the \"CLIP score\" accurate? (e.g., it measures the similarity difference between incorrect and correct predictions.)\nIf **no**, specify the inaccurate mathematical or logical descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as \"interpreting biases as keywords\" rather than \"manually labeling failure cases\"?\nIf **no**, point out the deviation in understanding the automated nature of B2T.\n",
18
+ "\nAre the explanations for \"Contextual Bias\" consistent with the paper's examples (e.g., \"ant\" misclassified as \"bee\" due to \"flower\")?\nIf **no**, explain the errors in bias examples.\n",
19
+ "\nAre the details of the \"Distributionally Robust Optimization (DRO)\" integration accurate?\nIf **no**, specifically point out errors in how keywords facilitate debiased training.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., DRO-B2T achieving ~90.4% worst-group accuracy on CelebA.)\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between \"Known Biases\" and \"Novel Biases\" uncovered by the framework?\nIf **no**, explain where these categories are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., Worst-group Accuracy, AUROC for bias labeling) consistent with the paper's standards?\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it requires a pre-defined bias vocabulary when it is zero-shot)?\nIf **no**, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the \"Geographic Bias\" found in the Dollar Street dataset?\nIf **no**, specify the slides where the income-level-related biases are misinterpreted.\n",
25
+ "\nIs the role of the \"YAKE\" algorithm correctly identified as the keyword extraction tool?\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 8489
5
+ generation_prompt_tokens: 2329
6
+ materials_total_tokens: 6160
7
+ material_count: 1
8
+ pdf_total_pages: 11
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 6160
12
+ pages: 11
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2024/Discovering_and_Mitigating_Visual_Biases_through_Keyword_Explanation/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:a6a29e8bd331aff7f4554754ea2a8e6eeb25cc9c75e5fdfbb3a9bd14e748fd1e
3
+ size 1462530
academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/generation_task/instructions.md ADDED
@@ -0,0 +1,170 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ **1. Title Slide**
16
+
17
+ Paper Title: Frequency-Adaptive Dilated Convolution for Semantic Segmentation
18
+
19
+ Author Team: Linwei Chen, Lin Gu, Dezhi Zheng, Ying Fu
20
+
21
+ Affiliation: Beijing Institute of Technology, RIKEN, The University of Tokyo
22
+
23
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
24
+
25
+ **2. Outline / Agenda**
26
+
27
+ **3. Introduction / Background**
28
+
29
+ Dilated Convolution: A widely used technique to expand the receptive field without increasing computational load by inserting gaps between filter values.
30
+
31
+ Current Practice: Standard methods typically fix a global dilation rate (D) as a hyperparameter for the entire feature map.
32
+
33
+ Frequency Perspective: Increasing the dilation rate scales the frequency response, reducing the effective bandwidth and limiting the ability to process high-frequency details.
34
+
35
+ **4. Limitations of Existing Methods:**
36
+
37
+ Fixed Dilation Trade-off: High dilation rates enlarge the receptive field but cause a loss of high-frequency information, leading to artifacts like gridding.
38
+
39
+ Spatial Uniformity: Traditional dilated convolution ignores that different image regions (e.g., edges vs. flat backgrounds) have different frequency characteristics.
40
+
41
+ Potential for Erroneous Learning: Content-adaptive methods like Deformable Convolution (DCN) can introduce spatial deviations, which are detrimental to position-sensitive tasks like segmentation.
42
+
43
+ Design Constraint: Include a visual example (refer to Fig 1) showing how different patches (Patch 1 with high frequency vs. Patch 2 with low frequency) require different dilation rates for optimal perception.
44
+
45
+ **5. Overview of the Proposed Method**
46
+
47
+ Core Idea: Frequency-Adaptive Dilated Convolution (FADC), which optimizes dilated convolution by dynamically balancing bandwidth and receptive field based on local spectrum analysis.
48
+
49
+ Key Component 1: Adaptive Dilation Rate (AdaDR). Spatially adjusts dilation rates to match local frequency components.
50
+
51
+ Key Component 2: Adaptive Kernel (AdaKern). A plug-in module that decomposes weights into frequency components to enhance effective bandwidth per channel.
52
+
53
+ Key Component 3: Frequency Selection (FreqSelect). Directly reweights feature representations to balance frequency power and encourage larger receptive fields in low-frequency areas.
54
+
55
+ **6. Methodology: Frequency-Adaptive Dilated Convolution**
56
+
57
+ Adaptive Dilation Rate (AdaDR): Predicts a pixel-specific dilation rate D(p) using a lightweight convolutional layer, ensuring smaller dilation for high-frequency edges and larger dilation for low-frequency backgrounds.
58
+
59
+ Spectrum-Guided Optimization: The selection of dilation is formulated as a trade-off problem to maximize the receptive field while minimizing frequency information loss.
60
+
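To make the AdaDR idea above concrete, here is a minimal, illustrative sketch rather than the paper's implementation: instead of predicting a continuous per-pixel rate D(p), it blends convolutions computed at a few candidate dilation rates using per-pixel weights from a lightweight head. The class name `AdaptiveDilationConv` and the candidate-rate set are assumptions made for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaptiveDilationConv(nn.Module):
    """Toy stand-in for AdaDR: per-pixel soft selection over candidate dilation rates."""
    def __init__(self, channels, candidate_rates=(1, 2, 3)):
        super().__init__()
        self.rates = candidate_rates
        # One shared 3x3 kernel, applied at different dilation rates.
        self.weight = nn.Parameter(torch.randn(channels, channels, 3, 3) * 0.02)
        # Lightweight head predicting a per-pixel preference over the candidate rates.
        self.rate_head = nn.Conv2d(channels, len(candidate_rates), kernel_size=3, padding=1)

    def forward(self, x):
        sel = F.softmax(self.rate_head(x), dim=1)            # (B, R, H, W)
        out = torch.zeros_like(x)
        for i, d in enumerate(self.rates):
            y = F.conv2d(x, self.weight, padding=d, dilation=d)
            out = out + sel[:, i:i + 1] * y                   # high-frequency pixels can favour small d
        return out

if __name__ == "__main__":
    feat = torch.randn(1, 16, 32, 32)
    print(AdaptiveDilationConv(16)(feat).shape)               # torch.Size([1, 16, 32, 32])
```

In the actual FADC design the predicted rate modulates the dilated kernel itself; the soft blend over a small rate set is used here only to show the spatially varying bandwidth/receptive-field trade-off.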
61
+ **7. Key Algorithm: Plug-in Frequency Modules**
62
+
63
+ AdaKern Module: Decomposes static weights into low-frequency (mean filter) and high-frequency (residual) components, then applies dynamic weights (lambda) to adjust their ratio.
64
+
65
+ FreqSelect Module: Uses Fourier Transform to split features into four frequency bands and applies a selection map to reweight them spatially, preventing the network from over-focusing on high frequencies.
66
+
67
+ Design Constraint: Display the FADC overview diagram (refer to Fig 2) illustrating the flow of AdaDR, AdaKern, and FreqSelect working together.
68
+
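The AdaKern decomposition described above can likewise be sketched in a few lines (a rough sketch under assumed module and variable names, not the released code): the kernel's per-channel mean serves as the low-frequency part, the residual as the high-frequency part, and two input-conditioned coefficients reweight them. FreqSelect, which splits features into FFT bands and reweights them spatially, follows an analogous pattern and is omitted here for brevity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AdaKernConv(nn.Module):
    """Toy AdaKern-style conv: dynamic low/high-frequency reweighting of a static kernel."""
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(channels, channels, kernel_size, kernel_size) * 0.02)
        # Global-context head predicting (lambda_low, lambda_high) per output channel.
        self.coef = nn.Sequential(nn.AdaptiveAvgPool2d(1),
                                  nn.Conv2d(channels, 2 * channels, kernel_size=1))
        self.pad = kernel_size // 2

    def forward(self, x):
        b, c = x.shape[:2]
        low = self.weight.mean(dim=(2, 3), keepdim=True).expand_as(self.weight)  # mean-filter part
        high = self.weight - low                                                  # residual part
        lam = self.coef(x).view(b, 2, c, 1, 1, 1)                                 # dynamic ratios
        outputs = []
        for i in range(b):                         # per-sample kernels, looped for clarity
            w = lam[i, 0] * low + lam[i, 1] * high
            outputs.append(F.conv2d(x[i:i + 1], w, padding=self.pad))
        return torch.cat(outputs, dim=0)

if __name__ == "__main__":
    print(AdaKernConv(8)(torch.randn(2, 8, 16, 16)).shape)    # torch.Size([2, 8, 16, 16])
```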
69
+ **8. Dataset and Training Details**
70
+
71
+ Datasets: Evaluated on Cityscapes and ADE20K for semantic segmentation; COCO for object detection and instance segmentation.
72
+
73
+ Implementation: Integrated into popular frameworks including PSPNet, DeepLabV3+, Mask2Former, and PIDNet.
74
+
75
+ Backbones: Tested across ResNet-50, ResNet-101, Swin-B, and HorNet-B.
76
+
77
+ **9. Experimental Setup**
78
+
79
+ Baselines: Compared against standard Dilated Convolution, Deformable Convolution (DCNv2), and previous Adaptive Dilated Convolution (ADC).
80
+
81
+ Evaluation Metrics: mean Intersection over Union (mIoU) for segmentation; Average Precision (AP) for detection.
82
+
83
+ Inference Efficiency: Measured in GFLOPS and Frames Per Second (FPS) on a single RTX 3090.
84
+
85
+ **10. Experimental Results & Analysis**
86
+
87
+ Performance Gains: FADC improves PSPNet by +2.6 mIoU on Cityscapes and enhances ResNet-50 on ADE20K by +3.7 mIoU, outperforming the heavier ResNet-101.
88
+
89
+ Real-time Efficiency: PIDNet-M with FADC achieves 81.0 mIoU at 37.7 FPS, surpassing the larger PIDNet-L while remaining faster.
90
+
91
+ Generalizability: Successfully integrated into DCNv3 (InternImage) and Dilated Attention (DiNAT), providing consistent performance boosts across tasks.
92
+
93
+ Design Constraint: Include a comparison table (refer to Table 1 or Table 2) showing FADC's superiority over DCNv2 and standard Dilated Convolution in terms of mIoU and FLOPs.
94
+
95
+ **11. Visual Analysis & Frequency Studies**
96
+
97
+ Feature Visualization: High-frequency power maps confirm that FADC accurately identifies object boundaries and assigns lower dilation rates there to preserve detail.
98
+
99
+ Effective Bandwidth: Frequency response curves (refer to Fig 3) demonstrate that AdaKern successfully increases the high-frequency response compared to static kernels.
100
+
101
+ **12. Key Takeaways & Limitations**
102
+
103
+ Takeaways: Frequency analysis provides a principled way to optimize dilation; FADC is lightweight, avoids spatial deviations, and is compatible with various architectures.
104
+
105
+ Limitations: While effective, the dynamic prediction of dilation rates adds a small amount of overhead; extreme dilation might still face theoretical sampling limits.
106
+
107
+ **13. Conclusion**
108
+
109
+ Summary: FADC introduces a spatially variant, frequency-aware approach to dilated convolution, setting new benchmarks in semantic segmentation.
110
+
111
+ Future Work: Exploring the application of frequency-adaptive strategies in other vision tasks and potentially extending to video processing for temporal frequency analysis.
112
+
113
+ ---
114
+
115
+ ## 2. Content Constraints
116
+
117
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
118
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
119
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
120
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
121
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology and experiments to conclusions. Ensure there is a clear logical progression between sections.
122
+ * **Relevance of Information**: You must not add unrelated content.
123
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
124
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
125
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
126
+ * All references (if any) must be placed in the bottom-left corner of the slide.
127
+
128
+ ## 3. Visual & Design
129
+
130
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
131
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
132
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
133
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
134
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
135
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
136
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
137
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
138
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
139
+
140
+ ## 4. Text Quality
141
+
142
+ * All generated text should be clear, with no missing or incorrect characters or words.
143
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
144
+
145
+ ## 5. Technical Fidelity Requirements
146
+
147
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
148
+ * The slide deck must include at least 5 slides with quantitative details.
149
+
150
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
151
+
152
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
153
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
154
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
155
+ * For charts, every axis, unit, and label must be explicit
156
+
157
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
158
+
159
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
160
+
161
+ ## 6. Presentation Tone and Audience
162
+
163
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
164
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
165
+
166
+ ---
167
+
168
+ # **Output Expected**
169
+
170
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference?\nIf **no**, describe what is missing from the first slide (Title: Frequency-Adaptive Dilated Convolution for Semantic Segmentation; Conf: CVPR 2024).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline?\nIf **no**, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Dilated Convolution that points out the limitations of a \"Fixed Dilation Rate\" (e.g., \"loss of high-frequency details\" and \"gridding artifacts\")?\nIf **no**, explain where the background info on frequency-domain limitations is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of \"Frequency-Adaptive Dilated Convolution (FADC)\" and its goal to balance receptive field with bandwidth?\nIf **no**, describe the missing points in explaining the frequency-aware adaptation framework.\n",
7
+ "\nIs there a slide describing the \"Adaptive Dilation Rate (AdaDR)\" mechanism and how it spatially adjusts D based on local frequency?\nIf **no**, indicate whether the structural link between local spectrum analysis and dilation selection was omitted.\n",
8
+ "\nIs there a slide explaining the \"Adaptive Kernel (AdaKern)\" and its role in decomposing weights into low and high-frequency components?\nIf **no**, specify if the mechanism for dynamic weight adjustment per channel is missing.\n",
9
+ "\nDoes the deck present the core logic for \"Frequency Selection (FreqSelect)\" (how it reweights feature maps to encourage larger dilation in low-frequency regions)?\nIf **no**, specify if the Fourier-based frequency band selection process is missing.\n",
10
+ "\nIs there a slide summarizing the training datasets used (e.g., Cityscapes, ADE20K, or COCO)?\nIf **no**, explain if the dataset section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like standard Dilated Conv, DCNv2, or ADC?\nIf **no**, indicate if the performance analysis relative to spatial-adaptive methods was omitted.\n",
12
+ "\nDoes the deck include visual analysis (e.g., Fig 3 or Fig 5) showing frequency response curves or dilation rate maps?\nIf **no**, indicate if visual evidence of frequency adaptation is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., computational overhead of dynamic prediction)?\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the limitations of standard Dilated Convolution accurate? (e.g., it reduces effective bandwidth as dilation rate increases.)\nIf **no**, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as \"Frequency-Adaptive\" rather than just \"Spatial-Adaptive\" like Deformable Conv?\nIf **no**, point out the deviation in understanding the spectral analysis principle.\n",
18
+ "\nAre the explanations for \"AdaKern\" consistent with the paper? (It uses a plug-in module to enhance high-frequency parts of the convolution weights.)\nIf **no**, explain the errors in definition.\n",
19
+ "\nAre the details of the \"Spectrum-Guided Optimization\" or the selection objective accurate?\nIf **no**, specifically point out errors in the mathematical formulation of dilation selection.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., PSPNet with FADC achieving 81.0 mIoU on Cityscapes.)\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between the roles of \"AdaDR\" (spatial dilation adjustment) and \"AdaKern\" (weight spectrum adjustment)?\nIf **no**, explain where these components are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., mIoU, GFLOPs, FPS) consistent with the paper's standards?\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it uses 3D Fourier Transforms when it is a 2D image-based method)?\nIf **no**, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the \"Frequency-Awareness\"? (i.e., smaller dilation rates for high-frequency edges.)\nIf **no**, specify the slides where the adaptive behavior is misinterpreted.\n",
25
+ "\nIs the base architecture (e.g., ResNet-50, Swin-B, or PIDNet) and the integration method correctly identified?\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 9158
5
+ generation_prompt_tokens: 2438
6
+ materials_total_tokens: 6720
7
+ material_count: 1
8
+ pdf_total_pages: 12
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 6720
12
+ pages: 12
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2024/Frequency-Adaptive_Dilated_Convolution_for_Semantic_Segmentation/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:b90a53bc246c4e893c0cae7da34fc34c71140e766e40692e5b05d67d7de499fe
3
+ size 7596076
academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/generation_task/instructions.md ADDED
@@ -0,0 +1,150 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1. **Title Slide**
+
+ Paper Title: RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models
+ Author Team: Ozgur Kara*, Bariscan Kurtkaya*, Hidir Yesiltepe, James M. Rehg, Pinar Yanardag
+ Affiliation: Georgia Tech, KUIS AI Center, UIUC, Virginia Tech
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
21
+
22
+ 2. **Outline / Agenda**
+
+ 3. **Introduction / Background**
+
+ Video Editing with Diffusion Models: Leveraging powerful pre-trained text-to-image (T2I) models for video content creation.
+ Current Landscape:
+
+ Existing zero-shot methods attempt to ensure consistency via spatio-temporal attention.
+ Methods often rely on expensive training or optimization-heavy processes.
+ Motivation: Need for a fast, training-free method that handles long videos and complex shape transformations.
+
+ 4. **Limitations of Existing Methods:**
+
+ Memory Constraints: Spatio-temporal attention across all frames is computationally prohibitive for long videos.
+ Temporal Inconsistency: Sparse-causal attention or frame-by-frame editing leads to flickering and style drift.
+ Efficiency Bottleneck: Many methods require additional training or time-consuming inversion/optical flow calculation.
+ Design Constraint: Include a visual comparison (refer to Fig 2) showing how self-attention and sparse-causal attention fail to maintain consistency in background and object details.
+
+ 5. **Overview of the Proposed Method**
+
+ Core Idea: RAVE, a zero-shot video editing framework that uses a novel "Randomized Noise Shuffling" strategy to ensure global consistency.
+ Key Contribution 1: Noise Shuffling Strategy. Encourages global spatio-temporal interaction without increasing memory requirements.
+ Key Contribution 2: Training-Free & Zero-Shot. Compatible with any pre-trained T2I model (e.g., Stable Diffusion) and ControlNet.
+ Key Contribution 3: Efficiency & Speed. Achieves high-quality edits ~25% faster than state-of-the-art baselines like TokenFlow.
47
+
48
+ 6. **Methodology: Grid Trick & Video Editing**
+
+ Step 1: Grid Layout (Character Sheet). Organizing video frames into an n x m grid, allowing the T2I model to treat them as a single image.
+ Step 2: Preprocessing. Performing DDIM inversion on the input video and extracting control conditions (e.g., depth maps) for spatial guidance.
+
+ 7. **Key Algorithm: Randomized Noise Shuffling**
+
+ Core Mechanism: In each diffusion step, frames are randomly shuffled and re-assigned to different grids.
+ Effect: Ensures that every frame eventually interacts with every other frame through the model's self-attention and convolutional layers.
+ Benefit: Maintains global style consistency and reduces flickering without the O(N^2) cost of full attention.
+ Design Constraint: Display the framework diagram (refer to Fig 3) showing the flow: Input Video -> Grid Formation -> Noise Shuffling -> Denoising -> Output Video.
59
+
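The grid layout and the per-step shuffling described above can be illustrated with a short, self-contained sketch (an approximation for clarity; the function names and the toy denoising callback are assumptions, not the authors' code):

```python
import torch

def frames_to_grid(frames, rows, cols):
    # frames: (N, C, H, W) with N == rows * cols  ->  one grid image (C, rows*H, cols*W)
    n, c, h, w = frames.shape
    grid = frames.view(rows, cols, c, h, w).permute(2, 0, 3, 1, 4)
    return grid.reshape(c, rows * h, cols * w)

def grid_to_frames(grid, rows, cols):
    c, gh, gw = grid.shape
    h, w = gh // rows, gw // cols
    frames = grid.reshape(c, rows, h, cols, w).permute(1, 3, 0, 2, 4)
    return frames.reshape(rows * cols, c, h, w)

def shuffled_denoising(latents, denoise_step, steps, rows=2, cols=2):
    # latents: (N, C, H, W) noisy frame latents; denoise_step edits one grid at a time.
    n = latents.shape[0]
    assert n % (rows * cols) == 0
    for t in range(steps):
        perm = torch.randperm(n)                  # random re-grouping at every diffusion step
        latents = latents[perm]
        for s in range(0, n, rows * cols):
            grid = frames_to_grid(latents[s:s + rows * cols], rows, cols)
            grid = denoise_step(grid, t)
            latents[s:s + rows * cols] = grid_to_frames(grid, rows, cols)
        latents = latents[torch.argsort(perm)]    # restore the original frame order
    return latents

if __name__ == "__main__":
    toy = torch.randn(8, 4, 16, 16)
    out = shuffled_denoising(toy, lambda g, t: g * 0.99, steps=3)
    print(out.shape)  # torch.Size([8, 4, 16, 16])
```

In RAVE itself the per-grid edit is a latent diffusion denoising step conditioned on the text prompt and ControlNet inputs; the shuffle/restore bookkeeping above is the part that lets every frame share a grid, and therefore self-attention, with every other frame over the course of sampling.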
60
+ 8. **Dataset and Implementation Details**
+
+ New Dataset: A comprehensive evaluation set featuring object-centric scenes, complex human activities (dancing, typing), and dynamic scenes.
+ Setup: Stable Diffusion 1.5 with ControlNet (Depth/SoftEdge); 50 DDIM steps; single A40 GPU.
+
+ 9. **Experimental Setup**
+
+ Baselines: Compared against TokenFlow, Rerender-A-Video, Text2Video-Zero, Pix2Video, and FateZero.
+ Metrics: CLIP-F (Temporal Consistency), WarpSSIM (Structural Consistency), CLIP-T (Textual Alignment), and Q_edit (Holistic Score).
+
+ 10. **Experimental Results & Analysis**
+
+ Superior Performance: RAVE outperforms baselines in both consistency and textual alignment, especially on long videos (90+ frames).
+ Ablation Study: Shuffling is critical; without it, style diverges across different grids in long sequences.
+ Design Constraint: Include the Quantitative Comparison Table (refer to Table 1) highlighting RAVE's lead in Q_edit and Runtime.
+
+ 11. **Qualitative Analysis & Applications**
+
+ Versatile Edits: Demonstrates local attribute editing, style transfer (watercolor), and significant shape transformations (wolf to dinosaur).
+ Consistency: Maintains stable colors and structures even in complex motion scenarios.
+
+ 12. **Key Takeaways & Limitations**
+
+ Takeaways: RAVE provides a fast, memory-efficient solution for consistent video editing; noise shuffling is a powerful alternative to explicit temporal attention.
+ Limitations: Still dependent on the quality of the underlying T2I model and ControlNet guidance.
+
+ 13. **Conclusion**
+
+ Summary: RAVE introduces randomized noise shuffling to bridge the gap between image and video editing efficiency.
+ Future Work: Exploring more advanced grid configurations and integrating with next-generation diffusion backbones.
90
+ ---
91
+
92
93
+
94
+
95
+ ## 2. Content Constraints
96
+
97
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
98
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
99
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
100
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
101
+ * **Logical Flow:** The slides should present a clear narrative, starting from early space exploration to recent developments. Ensure there is a clear progression of time and events.
102
+ * **Relevance of Information**: You must not add unrelated content.
103
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
104
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
105
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
106
+ * All references (if any) must be placed in the bottom-left corner of the slide.
107
+
108
+ ## 3. Visual & Design
109
+
110
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
111
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
112
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
113
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
114
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
115
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
116
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
117
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
118
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
119
+
120
+ ## 4. Text Quality
121
+
122
+ * All generated text should be clear, with no missing or incorrect characters or words.
123
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
124
+
125
+ ## 5. Technical Fidelity Requirements
126
+
127
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
128
+ * The slide deck must include at least 5 slides with quantitative details.
129
+
130
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
131
+
132
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
133
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
134
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
135
+ * For charts, every axis, unit, and label must be explicit
136
+
137
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
138
+
139
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
140
+
141
+ ## 5. Presentation Tone and Audience
142
+
143
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
144
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
145
+
146
+ ---
147
+
148
+ # **Output Expected**
149
+
150
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference?\nIf **no**, describe what is missing from the first slide (Title: RAVE: Randomized Noise Shuffling for Fast and Consistent Video Editing with Diffusion Models; Conf: CVPR 2024).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline?\nIf **no**, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Video Editing that points out the limitations of existing methods (e.g., \"high memory cost of spatio-temporal attention\" and \"temporal inconsistency/flickering\")?\nIf **no**, explain where the background info on video diffusion challenges is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of \"Randomized Noise Shuffling\" and how it enables global interaction?\nIf **no**, describe the missing points in explaining the shuffling mechanism.\n",
7
+ "\nIs there a slide describing the \"Grid Trick\" (Character Sheet) and how it reorganizes video frames into a single image format?\nIf **no**, indicate whether the explanation of frame-to-grid mapping was omitted.\n",
8
+ "\nIs there a slide explaining the integration with \"ControlNet\" and its role in providing spatial guidance (e.g., Depth or SoftEdge)?\nIf **no**, specify if the mechanism for maintaining structural integrity is missing.\n",
9
+ "\nDoes the deck present the logic for handling long videos (how the video is sampled into multiple grids and processed)?\nIf **no**, specify if the strategy for temporal extension beyond a single grid is missing.\n",
10
+ "\nIs there a slide summarizing the \"RAVE Dataset\" or the diverse video categories used for evaluation (e.g., human dancing, animal motion)?\nIf **no**, explain if the dataset section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like TokenFlow, Rerender-A-Video, or FateZero?\nIf **no**, indicate if the performance analysis relative to state-of-the-art video editing methods was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing the model's ability to handle complex shape transformations (e.g., \"wolf to dinosaur\")?\nIf **no**, indicate if visual evidence of significant semantic editing is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., dependence on the quality of the base T2I model)?\nIf **no**, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the \"Zero-shot\" nature accurate? (e.g., it requires no training or fine-tuning on the input video.)\nIf **no**, specify the inaccurate descriptions regarding training requirements.\n",
17
+ "\nIs the technical roadmap correctly presented as a \"Shuffling-based Interaction\" rather than \"Full Spatio-Temporal Attention\"?\nIf **no**, point out the deviation in understanding how frames interact across time.\n",
18
+ "\nAre the explanations for the \"Grid Trick\" consistent with the paper? (It utilizes the 2D self-attention of T2I models to approximate 3D consistency.)\nIf **no**, explain the errors in definition.\n",
19
+ "\nAre the details of the \"DDIM Inversion\" process and its necessity for video reconstruction accurate?\nIf **no**, specifically point out errors in the inversion or latent manipulation steps.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving faster inference speeds than TokenFlow while maintaining higher CLIP-T scores.)\nIf **no**, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between \"Disjoint\" sampling and \"Randomized\" sampling within the RAVE framework?\nIf **no**, explain where these sampling strategies are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., CLIP-F for temporal consistency, WarpSSIM for structural stability) consistent with the paper's standards?\nIf **no**, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it uses Optical Flow when it is a flow-free method)?\nIf **no**, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the model's \"Temporal Consistency\"? (i.e., minimal flickering between consecutive frames.)\nIf **no**, specify the slides where the video quality claims are misinterpreted.\n",
25
+ "\nIs the base model (Stable Diffusion v1.5) and the compatibility with different ControlNet types correctly identified?\nIf **no**, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 7863
5
+ generation_prompt_tokens: 2263
6
+ materials_total_tokens: 5600
7
+ material_count: 1
8
+ pdf_total_pages: 10
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 5600
12
+ pages: 10
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2024/RAVE_Randomized_Noise_Shuffling_for_Fast_and_Consistent_Video_Editing_with_Diffusion_Models/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f0c1053aa00447ab7694e7d22cb39b119e9b34cc732068cafabd67fc616d32ff
3
+ size 7080447
academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/generation_task/instructions.md ADDED
@@ -0,0 +1,125 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ **1. Title Slide**
+
+ Paper Title: SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing
+ Author Team: Zeyinzi Jiang, Chaojie Mao, Yulin Pan, Zhen Han, Jingfeng Zhang
+ Affiliation: Alibaba Group
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
+
+ **2. Outline / Agenda**
20
+
21
+
22
+ **3. Introduction / Background**
+
+ Image Diffusion Models: Powerful tools for text-to-image and controllable synthesis, but full fine-tuning is resource-intensive and impractical for customized scenarios.
+ Current Landscape: Efficient tuning methods (e.g., LoRA) still suffer from high memory usage because backpropagation must pass through the entire U-Net backbone.
+ Motivation: Investigate the specific role of Skip Connections (SCs) in U-Net to find a more efficient way to edit and adapt generative models.
25
+
26
+ **4. Limitations of Existing Methods:**
+
+ High Resource Consumption: Popular methods like LoRA add trainable matrices across the whole U-Net, leading to gradient accumulation in both encoder and decoder.
+ Training Inefficiency: Backpropagation through the entire network backbone limits the speed and increases the memory footprint during the adaptation process.
+ Complexity in Multi-condition: Integrating multiple control signals often requires complex architectural changes or multiple large-scale adapters.
+ Design Constraint: Include a visualization (refer to Fig 3) showing how removing skip connections leads to a significant loss of structural information and decreased feature variance.
29
+
30
+ **5. Overview of the Proposed Method**
+
+ Core Idea: An efficient generative tuning framework (SCEdit) that adapts models by editing latent features directly within the U-Net's Skip Connections.
+ Key Contribution 1: SC-Tuner. A lightweight module that decouples the encoder from backpropagation, reducing memory usage by up to 52% in text-to-image tasks.
+ Key Contribution 2: CSC-Tuner. An extension for controllable synthesis that simplifies multi-condition injection using only 7.9% of ControlNet's parameters.
+ Key Contribution 3: Superior Performance. Achieves better FID scores and qualitative results in both few-shot style transfer and complex controllable generation.
33
+
34
+ **6. Methodology: Skip Connection Editing**
+
+ Concept: The encoder generates multi-scale features, while the decoder uses SCs to retrieve high-frequency information. SCEdit modifies these features "on the fly" during the skip.
+ Encoder Decoupling: By inserting tuners only in the SC path, the encoder becomes a frozen feature extractor, and gradients only flow through the decoder and tuners.
37
+
38
+ **7. Key Algorithm: SC-Tuner & CSC-Tuner**
+
+ SC-Tuner: Utilizes an Adapter-based "Tuner OP" with residual connections to modify skip features: O = Tuner(x) + x.
+ CSC-Tuner: Supports multi-condition inputs (Canny, Depth, Pose, etc.) by combining weighted condition embeddings with original skip features.
+ Design Constraint: Display the framework diagram (refer to Fig 4) showing the integration of SC-Tuner/CSC-Tuner and the Cascade Dense Convolution for condition encoding.
41
+
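As a rough illustration of the Tuner OP formulation above (O = Tuner(x) + x), the snippet below inserts a small residual adapter on the skip feature of a toy U-Net and freezes the backbone so that only the tuner is trained. The tiny network, layer sizes, and names are assumptions made for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SCTuner(nn.Module):
    """Lightweight residual adapter applied to a skip-connection feature."""
    def __init__(self, channels, hidden=16):
        super().__init__()
        self.down = nn.Conv2d(channels, hidden, kernel_size=1)
        self.act = nn.GELU()
        self.up = nn.Conv2d(hidden, channels, kernel_size=1)

    def forward(self, x):
        return x + self.up(self.act(self.down(x)))  # O = Tuner(x) + x

class TinyUNetWithSCTuner(nn.Module):
    def __init__(self, channels=8):
        super().__init__()
        self.enc = nn.Conv2d(channels, channels, 3, padding=1)
        self.bottleneck = nn.Conv2d(channels, channels, 3, padding=1, stride=2)
        self.dec = nn.Conv2d(2 * channels, channels, 3, padding=1)
        self.tuner = SCTuner(channels)
        # Freeze the backbone: only the SC-Tuner parameters receive gradient updates.
        for module in (self.enc, self.bottleneck, self.dec):
            for p in module.parameters():
                p.requires_grad = False

    def forward(self, x):
        skip = self.enc(x)
        h = F.interpolate(self.bottleneck(skip), scale_factor=2)
        edited_skip = self.tuner(skip)               # edit the skip feature "on the fly"
        return self.dec(torch.cat([h, edited_skip], dim=1))

if __name__ == "__main__":
    model = TinyUNetWithSCTuner()
    print(model(torch.randn(1, 8, 32, 32)).shape)    # torch.Size([1, 8, 32, 32])
```

The point of the sketch is the placement: because the tuner sits on the encoder's output rather than inside it, training the tuner does not require backpropagating through the encoder path, which is the intuition behind the memory savings claimed above.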
42
+ **8. Dataset and Training Details**
+
+ Text-to-Image: Trained on COCO2017 (118k images) and few-shot customized style datasets (30 samples per style).
+ Controllable Synthesis: Evaluated on a filtered LAION artistic dataset (600k images) across various conditions like Canny, Depth, and Segmentation.
+ Implementation: Based on Stable Diffusion (v1.5/v2.1), using AdamW optimizer with a learning rate of 5e-5.
45
+
46
+ **9. Experimental Setup**
+
+ Baselines: Compared against Full Fine-tuning, LoRA, ControlNet, T2I-Adapter, and ControlNet-XS.
+ Metrics: Fréchet Inception Distance (FID), trainable parameters, memory consumption, and training speed.
49
+
50
+ **10. Experimental Results & Analysis**
+
+ Efficiency Gains: SCEdit reduces memory by 52.1% compared to LoRA (r=64) and training time by significant margins while achieving a better FID (13.82 vs 13.96).
+ Controllable Excellence: Outperforms ControlNet in FID (71.78 vs 74.86) while using nearly 13x fewer parameters.
+ Design Constraint: Include the performance comparison table (refer to Table 1 or Table 2) highlighting the trade-off between FID and memory usage.
53
+
54
+ **11. Visual Analysis & Composable Generation**
+
+ Multi-Condition Fusion: Independently trained models (e.g., Canny and Color) can be combined without further training for "training-free" scene translation.
+ Style Transfer: SCEdit captures style characteristics more accurately than LoRA in few-shot scenarios, maintaining better text alignment.
57
+
58
+ **12. Key Takeaways & Limitations**
+
+ Takeaways: Skip connections are the "control center" for structural information; editing them is sufficient for powerful and efficient model adaptation.
+ Limitations: While highly efficient, extreme reduction in tuner dimensions may eventually impact complex semantic alignment.
61
+
62
+ **13. Conclusion**
+
+ Summary: SCEdit provides a unified, lightweight, and plug-and-play solution for both text-to-image tuning and controllable synthesis.
+ Future Work: Exploring the application of Skip Connection Editing in other architectures beyond U-Net, such as transformer-based diffusion backbones.
65
+
66
+ ---
67
+
68
+
69
+
70
+ ## 2. Content Constraints
71
+
72
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
73
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
74
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
75
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
76
+ * **Logical Flow:** The slides should present a clear narrative, starting from early space exploration to recent developments. Ensure there is a clear progression of time and events.
77
+ * **Relevance of Information**: You must not add unrelated content.
78
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
79
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
80
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
81
+ * All references (if any) must be placed in the bottom-left corner of the slide.
82
+
83
+ ## 3. Visual & Design
84
+
85
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
86
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
87
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
88
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
89
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
90
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
91
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
92
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
93
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
94
+
95
+ ## 4. Text Quality
96
+
97
+ * All generated text should be clear, with no missing or incorrect characters or words.
98
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
99
+
100
+ ## 5. Technical Fidelity Requirements
101
+
102
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
103
+ * The slide deck must include at least 5 slides with quantitative details.
104
+
105
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
106
+
107
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
108
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
109
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
110
+ * For charts, every axis, unit, and label must be explicit
111
+
112
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
113
+
114
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
115
+
116
+ ## 5. Presentation Tone and Audience
117
+
118
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
119
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
120
+
121
+ ---
122
+
123
+ # **Output Expected**
124
+
125
+ A **complete slide deck** satisfying all constraints above.
academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference? \nIf no, describe what is missing from the first slide (Title: SCEdit: Efficient and Controllable Image Diffusion Generation via Skip Connection Editing; Conf: CVPR 2024).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline? \nIf no, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Image Diffusion Models that points out the limitations of existing tuning methods like LoRA or ControlNet (e.g., \"high memory consumption\" and \"backpropagation through the entire backbone\")? \nIf no, explain where the background info on adaptation efficiency is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of \"Skip Connection Editing\" and why SCs are chosen as the editing target? \nIf no, describe the missing points in explaining the importance of skip connections in U-Net.\n",
7
+ "\nIs there a slide describing the \"SC-Tuner\" architecture and how it achieves \"Encoder Decoupling\" to save memory? \nIf no, indicate whether the structural link between the frozen encoder and the trainable tuner was omitted.\n",
8
+ "\nIs there a slide explaining the \"CSC-Tuner\" (Controllable SC-Tuner) and its role in multi-condition image synthesis? \nIf no, specify if the mechanism for integrating multiple control signals is missing.\n",
9
+ "\nDoes the deck present the logic for the \"Tuner OP\" (how the adapter-based module modifies the features within the skip connection)? \nIf no, specify if the mathematical formulation of the tuning operation is missing.\n",
10
+ "\nIs there a slide summarizing the datasets used for evaluation (e.g., COCO2017 for T2I, or specific condition datasets like Canny/Depth/Segmentation)? \nIf no, explain if the dataset section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like ControlNet, T2I-Adapter, or ControlNet-XS? \nIf no, indicate if the performance analysis relative to existing controllable generation methods was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing the model's \"Compositional Generation\" capability (e.g., combining Canny and Color conditions)? \nIf no, indicate if visual evidence of multi-condition fusion is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., performance bottlenecks in extremely low-rank settings)? \nIf no, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the memory efficiency accurate? (e.g., SCEdit reduces training memory by up to 52% for text-to-image tasks.) \nIf no, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as \"Editing Skip Connections\" rather than \"Modifying the Main Backbone Layers\"? \nIf no, point out the deviation in understanding the core architecture.\n",
18
+ "\nAre the explanations for \"Encoder Decoupling\" consistent with the paper? (The fact that gradients do not need to pass through the encoder during training.) \nIf no, explain the errors in definition.\n",
19
+ "\nAre the details of the \"Tuner OP\" residual connection (O = Tuner(x) + x) accurate? \nIf no, specifically point out errors in the formula or implementation logic.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving better FID than ControlNet with significantly fewer parameters.) \nIf no, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between \"SC-Tuner\" for general tuning and \"CSC-Tuner\" for controllable generation? \nIf no, explain where these two modules are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., FID, CLIP Score, mIoU for segmentation) consistent with the paper's standards? \nIf no, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it requires training the entire U-Net when it actually freezes the encoder)? \nIf no, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the model's \"Training-free Composition\"? (i.e., combining separately trained condition tuners directly during inference.) \nIf no, specify the slides where the composition capabilities are misinterpreted.\n",
25
+ "\nIs the parameter scale (e.g., CSC-Tuner using only 7.9% of ControlNet's parameters) correctly identified? \nIf no, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
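The checklist above references the Tuner OP residual form (O = Tuner(x) + x) and the idea of attaching lightweight tuners to U-Net skip connections while the encoder stays frozen. Below is a minimal sketch of what such a residual skip-connection tuner could look like; this is not the authors' released implementation, and the bottleneck rank, layer types, and names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class SCTuner(nn.Module):
    """Minimal sketch of a skip-connection tuner: a small bottleneck block
    applied to a skip-connection feature map and added back residually
    (O = Tuner(x) + x), so the frozen backbone features are only edited."""

    def __init__(self, channels: int, rank: int = 16):
        super().__init__()
        self.down = nn.Conv2d(channels, rank, kernel_size=1)
        self.act = nn.GELU()
        self.up = nn.Conv2d(rank, channels, kernel_size=1)
        nn.init.zeros_(self.up.weight)  # start as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # O = Tuner(x) + x

# Usage: edit one U-Net skip connection without touching the encoder.
skip = torch.randn(2, 320, 32, 32)      # hypothetical skip feature map
edited = SCTuner(channels=320)(skip)
assert edited.shape == skip.shape
```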
academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 7934
5
+ generation_prompt_tokens: 2334
6
+ materials_total_tokens: 5600
7
+ material_count: 1
8
+ pdf_total_pages: 10
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 5600
12
+ pages: 10
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2024/SCEdit_Efficient_and_Controllable_Image_Diffusion_Generation_via_Skip_Connection_Editing/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:2019cfc0b29b28492255047dd7b971711f0242e0f2ace0b3c9990888f1ddfed5
3
+ size 5581887
academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/generation_task/instructions.md ADDED
@@ -0,0 +1,162 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1. **Title Slide**
16
+
17
+
18
+ Paper Title: TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models
19
+ Author Team: Yushi Huang, Ruihao Gong, Jing Liu, Tianlong Chen, Xianglong Liu
20
+ Affiliation: Beihang University, SenseTime Research, Monash University, UT Austin
21
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2024
22
+
23
+ 2. **Outline / Agenda**
24
+
25
+
26
+ 3. **Introduction / Background**
27
+
28
+
29
+ Diffusion Models (DMs): State-of-the-art frameworks for high-quality image, voice, and text synthesis.
30
+ Efficiency Bottleneck: Massive computational costs due to multi-step denoising and large network architectures (e.g., Stable Diffusion).
31
+ Post-Training Quantization (PTQ): A critical training-free solution to reduce memory and speed up inference.
32
+ Current Problem: Existing PTQ methods suffer severe performance drops in low-bit settings (e.g., 4-bit) due to a lack of specialized optimization for temporal features.
33
+
34
+ 4. **Limitations of Existing Methods:**
35
+
36
+
37
+ Temporal Feature Disturbance: Previous methods overlook the independence of the time-step t, causing temporal features to overfit to limited calibration data.
38
+ Inappropriate Reconstruction Targets: Optimizing entire Residual Bottleneck Blocks instead of focusing on the specific modules generating temporal features.
39
+ Unaware of Finite Activations: Failing to account for the fact that time-step activations form a finite set with significant range variations across steps.
40
+ Design Constraint: Include a visual example (refer to Fig 3) showing the denoising trajectory deviation between full-precision and quantized models.
41
+
42
+ 5. **Overview of the Proposed Method**
43
+
44
+
45
+ Core Idea: A Temporal Feature Maintenance Quantization (TFMQ) framework that isolates and preserves temporal information to ensure end-to-end generation quality.
46
+ Key Contribution 1: Temporal Information Block. A novel block design that consolidates all modules related only to the time-step t, independent of sampling data.
47
+ Key Contribution 2: Temporal Information Aware Reconstruction (TIAR). A weight quantization method specifically targeting minimal disturbance of temporal features.
48
+ Key Contribution 3: Finite Set Calibration (FSC). An activation quantization strategy that adapts to the finite, time-dependent nature of temporal activations.
49
+
50
+ 6. **Methodology: Framework Components**
51
+
52
+
53
+ Step 1: Temporal Information Block: Grouping time embeddings and embedding layers into a unified block to separate time-related features from data-related features.
54
+ Step 2: TIAR Optimization: Using a reconstruction objective that minimizes the Frobenius norm specifically between full-precision and quantized temporal features.
55
+
56
+ 7. **Key Algorithm: Finite Set Calibration (FSC)**
57
+
58
+
59
+ Time-Step Specific Parameters: Employs different quantization parameters (scale and offset) for activations at each individual time-step t.
60
+ Efficient Estimation: Utilizing Min-max calibration within the finite solution space to achieve high performance with negligible overhead.
61
+ Design Constraint: Display the overview diagram (refer to Fig 1) showing the flow of Temporal Information Block -> TIAR -> FSC.
62
+
63
+ 8. **Dataset and Training Details**
64
+
65
+
66
+ Benchmarks: Evaluated across CIFAR-10, LSUN-Bedrooms/Churches, CelebA-HQ, FFHQ, ImageNet, and MS-COCO.
67
+ Models: Tested on DDPM, LDM (Latent Diffusion), and Stable Diffusion-v1-4.
68
+ Settings: Channel-wise weight quantization and layer-wise activation quantization; 20k iterations for weight reconstruction.
69
+
70
+ 9. **Experimental Setup**
71
+
72
+
73
+ Evaluation Metrics: Fréchet Inception Distance (FID), sFID (spatial relationships), Inception Score (IS), and CLIP score for text-guided generation.
74
+ Hardware: All experiments conducted on a single H800 GPU using the PyTorch framework.
75
+
76
+ 10. **Experimental Results & Analysis**
77
+
78
+
79
+ 4-Bit Breakthrough: Achieves performance nearly on par with full-precision models under 4-bit weight quantization for the first time.
80
+
81
+ Significant FID Reduction: On CelebA-HQ 256x256, reduces FID by 6.71 compared to previous state-of-the-art methods in w4a8 settings.
82
+ Efficiency Gains: Accelerates quantization time by 2.0x on LSUN-Bedrooms while incurring almost no extra computational cost during inference.
83
+ Design Constraint: Include a comparison table (refer to Table 2) showing the performance gap reduction across various datasets like LSUN and CelebA-HQ.
84
+
85
+ 11. **Visual Analysis & Performance Study**
86
+
87
+
88
+ Trajectory Maintenance: TIAR successfully prevents the denoising trajectory deviation common in low-bit quantization.
89
+ Stability: Demonstrates consistent results across diverse sampling steps and guidance scales, proving the robustness of maintaining temporal information.
90
+
91
+ 12. **Key Takeaways & Limitations**
92
+
93
+
94
+ Takeaways: Separating temporal features from sampling data is the key to successful low-bit PTQ for diffusion models.
95
+ Speed and Quality: TFMQ-DM provides a superior balance between compression efficiency and image fidelity.
96
+ Limitations: While 4-bit weight quantization is near-lossless, extremely low-bit activation quantization still poses challenges for some high-resolution tasks.
97
+
98
+ 13. **Conclusion**
99
+
100
+
101
+ Summary: TFMQ-DM introduces a pioneering approach to temporal feature preservation, setting a new SOTA for diffusion model quantization.
102
+ Future Work: Extending the temporal maintenance concept to other generative architectures and video diffusion models.
103
+
104
+ ---
105
+
106
+
107
+ ## 2. Content Constraints
108
+
109
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
110
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
111
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
112
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
113
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology and experiments to conclusions. Ensure there is a clear logical progression between sections.
114
+ * **Relevance of Information**: You must not add unrelated content.
115
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
116
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
117
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
118
+ * All references (if any) must be placed in the bottom-left corner of the slide.
119
+
120
+ ## 3. Visual & Design
121
+
122
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
123
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
124
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
125
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
126
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
127
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
128
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
129
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
130
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
131
+
132
+ ## 4. Text Quality
133
+
134
+ * All generated text should be clear, with no missing or incorrect characters or words.
135
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
136
+
137
+ ## 5. Technical Fidelity Requirements
138
+
139
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
140
+ * The slide deck must include at least 5 slides with quantitative details.
141
+
142
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
143
+
144
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
145
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
146
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
147
+ * For charts, every axis, unit, and label must be explicit.
148
+
149
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
150
+
151
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
152
+
153
+ ## 6. Presentation Tone and Audience
154
+
155
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
156
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
157
+
158
+ ---
159
+
160
+ # **Output Expected**
161
+
162
+ A **complete slide deck** satisfying all constraints above.
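Section 6 of the instructions above describes the TIAR objective as minimizing the Frobenius norm between full-precision and quantized temporal features. A minimal sketch of such an objective is shown below, assuming the temporal features are collected as 2-D matrices (time-steps by embedding dimension); the shapes and names are illustrative, not taken from the paper.

```python
import torch

def tiar_loss(temporal_feat_fp: torch.Tensor,
              temporal_feat_q: torch.Tensor) -> torch.Tensor:
    """Sketch of a Temporal-Information-Aware Reconstruction objective:
    the squared Frobenius norm between full-precision and quantized
    temporal features (outputs of the Temporal Information Block),
    rather than a loss over entire residual bottleneck blocks."""
    return torch.linalg.matrix_norm(
        temporal_feat_fp - temporal_feat_q, ord="fro"
    ).pow(2).mean()

# Example with hypothetical shapes: (time-steps, embedding dimension).
fp = torch.randn(100, 1280)
q = fp + 0.01 * torch.randn_like(fp)
print(tiar_loss(fp, q))
```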
academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference?\nIf no, describe what is missing from the first slide (Title: TFMQ-DM: Temporal Feature Maintenance Quantization for Diffusion Models; Conf: CVPR 2024).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline?\nIf no, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Diffusion Models PTQ that points out the limitations of existing methods (e.g., \"temporal feature disturbance\" and \"performance collapse in low-bit settings\")?\nIf no, explain where the background info on quantization challenges is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of \"Temporal Feature Maintenance\" and why isolating time-step related modules is crucial?\nIf no, describe the missing points in explaining the importance of temporal information.\n",
7
+ "\nIs there a slide describing the \"Temporal Information Block\" architecture and how it re-groups modules like Time Embeddings?\nIf no, indicate whether the structural reorganization of the network for quantization was omitted.\n",
8
+ "\nIs there a slide explaining the \"Temporal Information Aware Reconstruction (TIAR)\" and its role in minimizing denoising trajectory deviation?\nIf no, specify if the specialized weight reconstruction mechanism is missing.\n",
9
+ "\nDoes the deck present the core logic for the \"Finite Set Calibration (FSC)\" (how it handles the finite and discrete nature of time-step activations)?\nIf no, specify if the unique activation calibration strategy is missing.\n",
10
+ "\nIs there a slide summarizing the datasets used for evaluation (e.g., CIFAR-10, LSUN, CelebA-HQ, and MS-COCO)?\nIf no, explain if the dataset section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like Q-Diffusion, PTQ4DM, or standard Post-Training Quantization?\nIf no, indicate if the performance analysis relative to existing DM quantization methods was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing the generated image quality under 4-bit weight quantization compared to full-precision (FP32) models?\nIf no, indicate if visual evidence of generation fidelity is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., the trade-off between quantization speed and extremely low-bit activation performance)?\nIf no, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the limitations of previous PTQ methods accurate? (e.g., they treat time-step features like normal data features, leading to temporal info loss.)\nIf no, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as \"Post-Training Quantization (PTQ)\" rather than \"Quantization-Aware Training (QAT)\"?\nIf no, point out the deviation in understanding the training-free principle.\n",
18
+ "\nAre the explanations for the \"Temporal Information Block\" consistent with the paper? (It re-groups modules that only depend on t and are independent of input data.)\nIf no, explain the errors in definition.\n",
19
+ "\nAre the details of the \"TIAR\" objective function accurate? (e.g., focusing on the Frobenius norm of temporal features rather than the entire residual block.)\nIf no, specifically point out errors in the optimization targets.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving nearly lossless 4-bit weight quantization on LDM and Stable Diffusion.)\nIf no, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between \"Data-related features\" and \"Time-related features\" within the U-Net architecture?\nIf no, explain where these concepts are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., FID, sFID, CLIP score) consistent with the paper's standards?\nIf no, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming it is a training-based method when it is actually training-free)?\nIf no, point out the fabricated content.\n",
24
+ "\nDo the visual results (e.g., Fig 3) accurately reflect the model's ability to maintain the \"Denoising Trajectory\"?\nIf no, specify the slides where the trajectory maintenance is misinterpreted.\n",
25
+ "\nAre the bit-width configurations (e.g., W4A8, W8A8) and the base models (e.g., LDM-4, SD-v1.4) correctly identified?\nIf no, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
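The checklist above also probes Finite Set Calibration (FSC): per-time-step quantization parameters estimated by simple min-max calibration over a finite set of activations. A minimal sketch under those assumptions follows; the 8-bit unsigned range and the tensor shapes are illustrative, not values from the paper.

```python
import torch

def fsc_calibrate(activations_per_step):
    """Sketch of Finite Set Calibration: time-step activations form a
    finite set, so each time-step t gets its own (scale, zero_point)
    estimated by min-max calibration, instead of one quantizer shared
    across all steps. An 8-bit unsigned range is assumed for illustration."""
    qmin, qmax = 0, 255
    params = {}
    for t, act in activations_per_step.items():
        lo, hi = act.min(), act.max()
        scale = (hi - lo).clamp(min=1e-8) / (qmax - qmin)
        zero_point = torch.round(-lo / scale).clamp(qmin, qmax)
        params[t] = (scale.item(), int(zero_point))
    return params

# Hypothetical calibration set: activation ranges vary across time-steps.
acts = {t: torch.randn(64, 1280) * (1 + 0.05 * t) for t in range(0, 1000, 100)}
print(fsc_calibrate(acts)[0])
```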
academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 7987
5
+ generation_prompt_tokens: 2387
6
+ materials_total_tokens: 5600
7
+ material_count: 1
8
+ pdf_total_pages: 10
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 5600
12
+ pages: 10
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2024/TFMQ-DM_Temporal_Feature_Maintenance_Quantization_for_Diffusion_Models/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:5695e5a6414750b99fbfdc550f9b927c7d82cbf807492ea8cbb40fb0a9ab949b
3
+ size 8164494
academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/generation_task/instructions.md ADDED
@@ -0,0 +1,127 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+ 1. **Title Slide**
16
+
17
+ Paper Title: AIpparel: A Multimodal Foundation Model for Digital Garments
18
+ Author Team: Kiyohiro Nakayama, Jan Ackermann*, Timur Levent Kesdogan*, Yang Zheng, Maria Korosteleva, Olga Sorkine-Hornung, Leonidas J. Guibas, Guandao Yang, Gordon Wetzstein
19
+ Affiliation: Stanford University, ETH Zürich
20
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025
21
+
22
+ 2. **Outline / Agenda**
23
+
24
+ 3. **Introduction / Background**
25
+
26
+ Digital Garment Creation: Essential for cultural identity and personal style, but remains a time-consuming manual process. Current Landscape: Existing methods are mostly single-modal (e.g., only images, text, or 3D points) and struggle with complex geometries. Motivation: To simplify pattern-making, we need a scalable model capable of handling diverse multimodal inputs like language and images simultaneously.
27
+
28
+
29
+ 4. **Limitations of Existing Methods:**
30
+
31
+ Single-Modality Focus: Models are often task-specific and difficult to adapt to combined input types. Data Scarcity: Lack of large-scale multimodal sewing pattern datasets containing text, images, and editing pairs. Complexity Issues: Previous models are limited to simple garments with predefined parameters. Design Constraint: Include Figure 1 showing AIpparel's ability to generate complex sewing patterns (e.g., knee-length jumpsuit) from text and images, which can be directly simulated in 3D.
32
+
33
+ 5. **Overview of the Proposed Method**
34
+
35
+ Core Idea: Fine-tuning a Large Multimodal Model (LLaVA) on a custom-curated large-scale dataset of 120,000+ unique garments. Key Contribution 1: GCD-MM Dataset. The first large-scale multimodal sewing pattern dataset with text, images, and editing instructions. Key Contribution 2: Novel Tokenization Scheme. An efficient, learning-friendly representation that reduces token usage by 100x compared to previous methods. Key Contribution 3: Multimodal Capabilities. Enables novel applications like language-instructed interactive garment editing.
36
+
37
+ 6. **Methodology: Benchmark/Dataset Construction**
38
+
39
+ Step 1: Building on GCD: Extending the GarmentCodeData (GCD) dataset with multimodal labels. Step 2: Rule-based & AI Annotation: Using a rule-based algorithm for key features combined with GPT-4o to generate accurate natural language descriptions. Step 3: Editing Pairs: Creating paired sewing patterns with text instructions (e.g., "make the skirt longer") using programming abstractions.
40
+
41
+ 7. **Key Algorithm: Sewing Pattern Tokenization & Regression**
42
+
43
+ Tokenization: Representing patterns as drawing commands with special tokens (<SOG>, <SOP>, etc.) to fit within LMM context limits. Regression Heads: Small MLP heads map hidden embeddings to continuous parameters (vertex positions, 3D transformations). Positional Embeddings: Incorporating vertex positions and 3D transformations as embeddings added to vocabulary tokens. Design Constraint: Display Figure 2 showing the tokenization flow from multimodal input -> LLaVA -> Regression Heads -> Simulation-ready patterns.
44
+
45
+ 8. **Dataset and Training Details**
46
+
47
+ Data Statistics: 120,000 unique garments with multimodal annotations. Training Objective: Combined loss of Cross-Entropy for discrete tokens and L2 loss for continuous parameters. Hardware/Model: Based on LLaVA 1.5-7B; keeps vision encoder frozen while fine-tuning the language model and regression heads.
48
+
49
+ 9. **Experimental Setup**
50
+
51
+ Tasks: Evaluated on Image-to-Garment, Text-to-Garment, and Language-instructed Editing. Baselines: Compared against SewFormer and DressCode. Metrics: Panel L2 distance, #Panel Accuracy, #Edge Accuracy, and #Stitch Accuracy.
52
+
53
+ 10. **Experimental Results & Analysis**
54
+
55
+ Performance SOTA: Outperforms baselines by a large margin on complex datasets like GCD-MM (e.g., #Stitch Accuracy of 77.2% vs 2.8% for SewFormer-FT). Multimodal Superiority: Successfully handles "common-sense" queries (e.g., "semi-formal garden party") where baselines fail. Design Constraint: Include Table 2 showing the quantitative performance gap between AIpparel and SewFormer on the GCD-MM dataset.
56
+
57
+ 11. **Visual Analysis & Editing Studies**
58
+
59
+ Editing Precision: Accurately follows instructions like "include a hood" or "make the skirt longer" while maintaining the original style. Reconstruction Quality: Captures fine details like waistband panels and sleeve cuffs that baselines miss.
60
+
61
+ 12. **Key Takeaways & Limitations**
62
+
63
+ Takeaways: Efficient tokenization is critical for scaling LMMs to complex geometries; multimodal training enables intuitive garment design and editing. Limitations: Relies on procedurally generated data; future work could bridge the gap between synthetic and real-world 3D scans.
64
+
65
+ 13. **Conclusion**
66
+
67
+ Summary: AIpparel is the first multimodal foundation model for sewing patterns, offering a scalable recipe for digital garment generation. Future Work: Releasing GCD-MM to the public to inspire further research in multimodal garment generation.
68
+
69
+
70
+ ---
71
+
72
+ ## 2. Content Constraints
73
+
74
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
75
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
76
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
77
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
78
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology and experiments to conclusions. Ensure there is a clear logical progression between sections.
79
+ * **Relevance of Information**: You must not add unrelated content.
80
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
81
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
82
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
83
+ * All references (if any) must be placed in the bottom-left corner of the slide.
84
+
85
+ ## 3. Visual & Design
86
+
87
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
88
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
89
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
90
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
91
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
92
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
93
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
94
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
95
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
96
+
97
+ ## 4. Text Quality
98
+
99
+ * All generated text should be clear, with no missing or incorrect characters or words.
100
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
101
+
102
+ ## 5. Technical Fidelity Requirements
103
+
104
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
105
+ * The slide deck must include at least 5 slides with quantitative details.
106
+
107
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
108
+
109
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
110
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
111
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
112
+ * For charts, every axis, unit, and label must be explicit.
113
+
114
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
115
+
116
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
117
+
118
+ ## 6. Presentation Tone and Audience
119
+
120
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
121
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
122
+
123
+ ---
124
+
125
+ # **Output Expected**
126
+
127
+ A **complete slide deck** satisfying all constraints above.
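The training objective above combines cross-entropy over the discrete drawing-command tokens with an L2 term from the regression heads. A minimal sketch of how such a hybrid loss could be wired up is given below; the vocabulary size, sequence length, 2-D coordinate targets, and the lambda_reg weight are illustrative assumptions rather than values from the paper.

```python
import torch
import torch.nn.functional as F

def combined_loss(token_logits, token_targets,
                  param_preds, param_targets, lambda_reg=1.0):
    """Sketch of a hybrid discrete-continuous objective: cross-entropy
    over discrete drawing-command tokens plus an L2 (MSE) term over
    continuous parameters (e.g., vertex positions) produced by small
    regression heads. lambda_reg is an assumed weighting factor."""
    ce = F.cross_entropy(token_logits.reshape(-1, token_logits.size(-1)),
                         token_targets.reshape(-1))
    l2 = F.mse_loss(param_preds, param_targets)
    return ce + lambda_reg * l2

# Hypothetical shapes: batch of 4 sequences, 128 tokens, 32k vocabulary,
# and 2-D vertex coordinates predicted at each token position.
logits = torch.randn(4, 128, 32000)
targets = torch.randint(0, 32000, (4, 128))
coords_pred = torch.randn(4, 128, 2)
coords_gt = torch.randn(4, 128, 2)
print(combined_loss(logits, targets, coords_pred, coords_gt))
```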
academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference? If no, describe what is missing from the first slide (Title: AIpparel: A Multimodal Foundation Model for Digital Garments; Authors: Kiyohiro Nakayama, Jan Ackermann*, Timur Levent Kesdogan*, etc.; Conf: CVPR 2025).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline? If no, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Digital Garment Creation that points out the limitations of existing methods (e.g., \"single-modality focus\", \"difficulty in capturing complex geometry\", and \"lack of large-scale multimodal datasets\")? If no, explain where the background info on the challenges of sewing pattern generation is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of using a \"Large Multimodal Model (LLaVA)\" as a foundation for predicting sewing patterns? If no, describe the missing points in explaining the multimodal adaptation framework.\n",
7
+ "\nIs there a slide describing the \"Drawing Command Tokenization\" and how it reduces the token sequence length (e.g., 100x reduction) for complex patterns? If no, indicate whether the efficiency gains in pattern representation were omitted.\n",
8
+ "\nIs there a slide explaining the \"Hybrid Discrete-Continuous Prediction\" (using regression heads for continuous parameters like vertex positions)? If no, specify if the mechanism for high-precision geometric prediction is missing.\n",
9
+ "\nDoes the deck present the construction of the \"GCD-MM Dataset\" (how it extends GarmentCodeData with text, images, and editing pairs)? If no, specify if the data engineering process (including AI-assisted annotation) is missing.\n",
10
+ "\nIs there a slide summarizing the \"Language-Instructed Garment Editing\" task and how it enables interactive design? If no, explain if this novel application section is missing.\n",
11
+ "\nDoes the experimental section cover comparative results against baselines like SewFormer or DressCode? If no, indicate if the performance analysis on complex garments (e.g., #Stitch Accuracy) was omitted.\n",
12
+ "\nDoes the deck include qualitative results showing the model's ability to generate garments from \"common-sense\" prompts (e.g., \"semi-formal garden party\")? If no, indicate if visual evidence of semantic understanding is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and limitations (e.g., reliance on procedurally generated data vs. real-world scans)? If no, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the limitations of \"SewFormer\" or \"DressCode\" accurate? (e.g., they struggle with complex topologies and lack multimodal flexibility.) If no, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as an \"End-to-End Multimodal Generation\" rather than a \"Template-based Retrieval\"? If no, point out the deviation in understanding the generative nature of AIpparel.\n",
18
+ "\nAre the explanations for the \"Special Tokens\" (e.g., <SOG>, <SOP>, <B>, <L>) consistent with the paper's drawing command syntax? If no, explain the errors in token definition.\n",
19
+ "\nAre the details of the \"Combined Loss Function\" (Cross-Entropy for tokens + L2 for regression parameters) accurate? If no, specifically point out errors in the training objectives.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., achieving significant improvements in Edge/Stitch accuracy on the GCD-MM test set.) If no, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between the \"2D Sewing Pattern\" (output of the model) and the \"3D Draping Simulation\" (downstream verification)? If no, explain where these stages are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (e.g., Panel L2 distance, #Stitch Accuracy) consistent with the paper's standards? If no, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming the model was trained on 3D point clouds when it is a token-based LMM for 2D patterns)? If no, point out the fabricated content.\n",
24
+ "\nDo the visual results accurately reflect the model's \"Zero-shot Multimodal Capability\"? (i.e., combining text and image inputs to generate a novel design.) If no, specify the slides where the input-output logic is misinterpreted.\n",
25
+ "\nIs the base model (LLaVA-1.5-7B) and the specific regression head architecture correctly identified? If no, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 9042
5
+ generation_prompt_tokens: 2322
6
+ materials_total_tokens: 6720
7
+ material_count: 1
8
+ pdf_total_pages: 12
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 6720
12
+ pages: 12
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2025/AIpparel_A_Multimodal_Foundation_Model_for_Digital_Garments/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:f66cfe66167d138d1be4d21c5f4084da473c973d81158e5224835b133f8b4591
3
+ size 6479202
academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/generation_task/instructions.md ADDED
@@ -0,0 +1,167 @@
1
+ You are to generate a complete, conference-quality academic slide deck suitable for an oral presentation at a top-tier AI conference (e.g., NeurIPS / ICML / ICLR / AAAI), based strictly on the paper. The slides must be accurate, well-structured, and **faithful to the original paper**, with no fabricated content.
2
+
3
+ ---
4
+
5
+ # **Strict Constraints for the Slides**
6
+
7
+ Below are the **hard constraints** you MUST satisfy. Slides violating these constraints are considered **incorrect**.
8
+
9
+ ## 1. Content Requirements
10
+
11
+ The slide deck must have **16-20 slides**.
12
+
13
+ The slide deck must include the following sections, in the order listed below (the number of slides in each section may be determined as appropriate).
14
+
15
+
16
+ 1. **Title Slide**
17
+
18
+ Paper Title: DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos
19
+
20
+ Author Team: Wenbo Hu, Xiaoyu Li, Sijie Zhao, Xiangjun Gao, Xiaodong Cun, Yong Zhang, Long Quan, and Ying Shan
21
+
22
+ Affiliations: Tencent AI Lab, The Hong Kong University of Science and Technology, ARC Lab (Tencent PCG)
23
+
24
+ Conference: IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) 2025
25
+
26
+ 2. **Outline / Agenda**
27
+
28
+ 3. **Introduction / Background**
29
+
30
+ Video Depth Estimation (VDE): A fundamental task for 3D reconstruction, visual effects, and autonomous driving in open-world scenarios.
31
+
32
+ Current Landscape: Discriminative models (e.g., Depth Anything) provide strong zero-shot spatial accuracy but often lack temporal consistency for long sequences.
33
+
34
+ 4. **Motivation & Problem Statement**
35
+
36
+ Limitations of Existing Methods:
37
+ Temporal Flickering: Frame-by-frame estimation or window-based methods lead to inconsistencies over time.
38
+ Constraint Dependency: Many methods require additional inputs like camera poses (SfM) or optical flow, which are unreliable in dynamic open-world videos.
39
+ Length vs. Detail Trade-off: Difficulty in maintaining fine-grained details while scaling to long video sequences.
40
+
41
+ Design Constraint: Must include a comparison visualization (refer to Fig 1) showing DepthCrafter's superior temporal consistency and detail compared to Depth-Anything-V2.
42
+
43
+ 5. **Overview of the Proposed Method**
44
+
45
+ Core Idea: A generative video-to-depth framework built upon a pre-trained Image-to-Video (I2V) diffusion model.
46
+
47
+ Key Contribution 1: Three-stage Training Strategy, transitioning from single-frame depth to short-video and finally to long-video depth generation.
48
+
49
+ Key Contribution 2: Local-Global Temporal Hybrid Attention, enabling the model to handle long sequences by combining local window attention with global sparse attention.
50
+
51
+ Key Contribution 3: Achievement of SOTA performance in zero-shot video depth estimation without requiring any auxiliary camera/motion information.
52
+
53
+ 6. **Methodology: From Diffusion to Depth**
54
+
55
+ Step 1: Leveraging Pre-trained Priors: Utilizing Stable Video Diffusion (SVD) as the backbone to inherit open-world video understanding.
56
+
57
+ Step 2: Architecture Adaptation: Replacing the RGB decoder with a depth-specific head and fine-tuning the denoising UNet for depth-to-video mapping.
58
+
59
+ 7. **Key Algorithm: Three-Stage Training & Attention**
60
+
61
+ Stage 1: Image-level Fine-tuning for spatial precision.
62
+ Stage 2: Short-sequence Training (e.g., 25 frames) for basic temporal consistency.
63
+ Stage 3: Long-sequence Training (e.g., 100+ frames) for global coherence.
64
+
65
+ Design Constraint: Display the conceptual diagram of the Local-Global Hybrid Attention mechanism and the "sliding window with context" logic for extremely long videos.
66
+
67
+ 8. **Dataset and Training Details**
68
+
69
+ Data Sources: Mix of synthetic and real-world datasets (e.g., TartanAir, FlyingThings, ETH3D) and large-scale unlabeled video data for self-supervision.
70
+
71
+ Training Strategy: Use of high-quality depth pseudo-labels and varied aspect ratios to enhance generalization.
72
+
73
+ 9. **Experimental Setup**
74
+
75
+ Test Datasets: Sintel, KITTI, NYUv2 (Zero-shot evaluation), and diverse open-world clips from the internet.
76
+
77
+ Baseline Models: Marigold, Depth Anything V2 (Video version), ZoeDepth, and traditional VDE methods.
78
+
79
+ Evaluation Metrics: Accuracy (Abs Rel, δ1), Temporal Consistency (TC error), and visual quality (detail sharpness).
80
+
81
+ 10. **Experimental Results & Analysis**
82
+
83
+ Quantitative Excellence: Superior temporal consistency metrics compared to flow-based or frame-independent models.
84
+
85
+ Generalization: Performs robustly on diverse content (fast motion, zoom-in, thin structures) without SfM failure modes.
86
+
87
+ Design Constraint: Include a quantitative comparison table (refer to Table 1/2) showing performance on Sintel and NYUv2 datasets.
88
+
89
+ 11. **Visual Analysis & Case Studies**
90
+
91
+ Qualitative Comparison: Show DepthCrafter handling complex occlusions and dynamic objects where baselines fail (refer to Fig 4/5).
92
+
93
+ Depth Stability: Demonstrate the 1D profile of a single pixel over time to visualize the reduction in flickering.
94
+
95
+ Applications: Show downstream results like 3D cinematic "Ken Burns" effects or video-to-3D scene reconstruction.
96
+
97
+ 12. **Key Takeaways & Limitations**
98
+
99
+ Takeaways: Diffusion models are powerful priors for geometric tasks; hybrid attention is key for long-form consistency.
100
+
101
+ Limitations: High computational cost of diffusion sampling; occasional depth-scale ambiguity in extremely featureless regions.
102
+
103
+ 13. **Conclusion**
104
+
105
+ Summary: DepthCrafter sets a new benchmark for consistent, high-detail long video depth estimation in the wild.
106
+
107
+ Future Work: Optimizing inference speed (e.g., distillation) and integrating with real-time 3D Gaussian Splatting.
108
+
109
+ ---
110
+
111
+
112
+ ## 2. Content Constraints
113
+
114
+ * **Faithfulness to background materials**: Use only the information in the paper. You must not fabricate additional experiments or modify or reinterpret the authors' claims.
115
+ * **Accuracy:** All content must be factually accurate, especially quantitative content and facts.
116
+ * **Brevity:** Use short, concise phrases, not long paragraphs. Focus on summarizing key facts and events without excessive detail. Bullet points may be used for clarity. If you use bullet points, each slide should have no more than 6 bullet points.
117
+ * **Sufficient Depth**: Do not summarize the paper in an overly superficial or high-level manner. The slides should preserve essential technical details, key arguments, and substantive insights rather than only presenting vague conclusions.
118
+ * **Logical Flow:** The slides should present a clear narrative, progressing from motivation and background through methodology and experiments to conclusions. Ensure there is a clear logical progression between sections.
119
+ * **Relevance of Information**: You must not add unrelated content.
120
+ * **Code & Markup Formatting**: Avoid raw LaTeX or Markdown code unless necessary.
121
+ * **Citation & Referencing**: Accurately reference the paper's results, diagrams, and examples.
122
+ * If a slide uses data from the paper, you must clearly indicate the source of the data on that slide (e.g., page xx, Figure xx, Table xx).
123
+ * All references (if any) must be placed in the bottom-left corner of the slide.
124
+
125
+ ## 3. Visual & Design
126
+
127
+ * **Images:** Include relevant images. Images must be high quality, clearly labeled, and relevant to the content.
128
+ * **Charts and Diagrams:** Use appropriate charts and diagrams where needed to visually present and clarify information, rather than relying only on text (and demos).
129
+ * If the slide includes charts or figures, ensure that all visual elements are clearly annotated (e.g., axes are labeled, units are specified, legends are included where needed, and data points are explained when necessary).
130
+ * Include **figures or diagrams descriptions** when appropriate, e.g., “The chart (from page 4 in the paper) shows proprietary models outperform open-weight ones.”
131
+ * **Legibility:** Use legible fonts and avoid clutter. Text should be large enough to be easily read.
132
+ * **Visual Balance:** Balance text and visuals so slides are easy to read when projected.
133
+ * **Layout:** Maintain a clean, professional layout with appropriate fonts, colors, and formatting.
134
+ * **Style Consistency**: The entire slide deck should follow a unified and coherent visual style.
135
+ * **Information Load**: Slides should avoid excessive information per page to preserve readability.
136
+
137
+ ## 4. Text Quality
138
+
139
+ * All generated text should be clear, with no missing or incorrect characters or words.
140
+ * Spelling, grammar, and typography must be accurate and correct throughout the content.
141
+
142
+ ## 5. Technical Fidelity Requirements
143
+
144
+ * **Quantitative Coverage**: Ensure that key data and experimental results (possibly presented in charts or tables in the paper) are included in the slide deck. In other words, the presentation should not only discuss the ideas of the paper but also present specific quantitative details (e.g., statistical data, experimental results, etc.).
145
+ * The slide deck must include at least 5 slides with quantitative details.
146
+
147
+ * **Quantitative Detail Correctness**: Ensure quantitative details (task counts, benchmark size, etc.) are correct.
148
+
149
+ * **Table & Chart Traceability and Annotation**: Ensure that any figures and tables in your slide deck are consistent with the paper. Specifically, for every figure and table in the slides:
150
+ * If it is directly copied from the paper, clearly indicate on the slide which figure or table it corresponds to in the paper (e.g., Figure 1 in the paper, Table 2 in the paper).
151
+ * If it is newly plotted based on data from the paper, clearly specify which section of the paper the data are taken from (e.g., Section 3.1). In addition, provide a clear explanation of the meaning of each legend item in the figure and each row and column in the table.
152
+ * For charts, every axis, unit, and label must be explicit.
153
+
154
+ * **Point-Level Accuracy for Plots**: If scatter plots, line charts or radar charts are used in the slide deck, ensure that every data point exactly matches the corresponding data point in the original figure from the paper. Note that the values must be **precisely** the same, not just the shape of the graph.
155
+
156
+ * **Conceptual Illustration**: The slides may include data used only for conceptual illustration. However, if such data are included, you must clearly indicate on the corresponding slide which data are conceptual illustrations rather than experimental data reported in the paper.
157
+
158
+ ## 6. Presentation Tone and Audience
159
+
160
+ * **Tone:** The tone should be informative, academic, and professional. It should avoid casual or informal conversational language, while remaining clear and suitable for oral presentation. The slide deck should maintain a consistent tone.
161
+ * **Audience:** The presentation is intended for an academic audience with relevant background knowledge in the field. The content should be accessible to graduate-level students and researchers, assuming familiarity with standard concepts and terminology, while still providing sufficient context to understand the motivation, methodology, and key contributions.
162
+
163
+ ---
164
+
165
+ # **Output Expected**
166
+
167
+ A **complete slide deck** satisfying all constraints above.
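Section 7 above mentions a "sliding window with context" scheme for extremely long videos. A minimal sketch of how overlapping segments could be scheduled and stitched is shown below; the window length, overlap, and linear blending rule are illustrative assumptions, not the paper's exact inference procedure.

```python
import torch

def sliding_window_segments(num_frames, window=110, overlap=25):
    """Sketch of a sliding-window schedule: split a long video into
    overlapping segments so each new segment reuses the last `overlap`
    frames of the previous one as context. Sizes are illustrative."""
    segments, start = [], 0
    while start < num_frames:
        end = min(start + window, num_frames)
        segments.append((start, end))
        if end == num_frames:
            break
        start = end - overlap
    return segments

def stitch(depth_chunks, segments, overlap=25):
    """Blend overlapping frames by linear interpolation when stitching
    per-segment depth predictions back into one sequence."""
    frames = {}
    for (s, e), chunk in zip(segments, depth_chunks):
        for i, f in enumerate(range(s, e)):
            if f in frames:  # overlapping frame: blend previous and new
                w = (i + 1) / (overlap + 1)
                frames[f] = (1 - w) * frames[f] + w * chunk[i]
            else:
                frames[f] = chunk[i]
    return torch.stack([frames[f] for f in sorted(frames)])

segs = sliding_window_segments(300)
chunks = [torch.rand(e - s, 64, 64) for s, e in segs]  # hypothetical depth maps
print(stitch(chunks, segs).shape)  # torch.Size([300, 64, 64])
```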
academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/generation_task/judge_prompt.json ADDED
@@ -0,0 +1,27 @@
1
+ {
2
+ "material_dependent_checklist_1": [
3
+ "\nDoes the first slide correctly list the title, authors, and the conference? If no, describe what is missing from the first slide (Title: DepthCrafter: Generating Consistent Long Depth Sequences for Open-world Videos; Conf: CVPR 2025).\n",
4
+ "\nDoes the beginning of the presentation include a clear agenda or outline? If no, specify where it is missing.\n",
5
+ "\nIs there a slide dedicated to the background of Video Depth Estimation (VDE) that points out the \"Temporal Flickering\" and \"SfM/Flow dependency\" limitations of existing methods? If no, explain where the background info on the consistency and auxiliary information constraints is lacking.\n",
6
+ "\nDoes the slide deck clearly define the core concept of adapting a pre-trained \"Image-to-Video (I2V) Diffusion Model\" for depth estimation? If no, describe the missing points in explaining the diffusion-based backbone choice.\n",
7
+ "\nIs there a slide describing the \"Three-stage Training Strategy\" (Image-level, Short-video, Long-video)? If no, indicate whether this critical progressive learning roadmap was omitted.\n",
8
+ "\nIs there a slide explaining the \"Local-Global Temporal Hybrid Attention\" and how it enables processing of long sequences? If no, specify if the technical solution for long-term consistency is missing.\n",
9
+ "\nDoes the deck present the core logic for the sliding window inference or the hybrid attention mechanism? If no, specify if these structural representations are missing or oversimplified.\n",
10
+ "\nIs there a slide summarizing the data sources used (e.g., Mix of synthetic data and unlabeled real-world videos)? If no, explain if the dataset/data strategy section is missing.\n",
11
+ "\nDoes the experimental section cover zero-shot comparative results on benchmarks like Sintel, KITTI, or NYUv2? If no, indicate if the performance analysis on these standard benchmarks was omitted.\n",
12
+ "\nDoes the deck include qualitative visualizations or 1D temporal profiles showing the reduction in flickering compared to baselines (e.g., refer to Fig 1 or Fig 4)? If no, indicate if the visual evidence for temporal consistency is missing.\n",
13
+ "\nIs there a slide summarizing the \"Key Takeaways\" and the limitations (e.g., computational cost, scale ambiguity)? If no, describe the missing insights.\n"
14
+ ],
15
+ "material_dependent_checklist_2": [
16
+ "\nIs the description of the limitations of discriminative models accurate? (e.g., they often lack temporal coherence in long sequences or fail when camera poses are unavailable.) If no, specify the inaccurate descriptions.\n",
17
+ "\nIs the technical roadmap correctly presented as a \"Generative Video-to-Depth\" framework rather than misleadingly described as \"Frame-by-frame regression\" or \"Standard SfM\"? If no, point out the deviation in understanding the technical principles.\n",
18
+ "\nAre the explanations for \"Hybrid Attention\" consistent with the paper? (It combines local window attention for smoothness and global sparse attention for long-range coherence.) If no, explain the errors in definition.\n",
19
+ "\nAre the details of the Three-stage Training accurate (e.g., Stage 1 for spatial precision, Stage 2/3 for temporal evolution)? If no, specifically point out chronological or logical errors in the training phases.\n",
20
+ "\nDoes the performance data in \"Experimental Results\" match the paper's tables? (e.g., outperforming Depth-Anything-V2 in temporal consistency metrics.) If no, list the specific discrepancies between the values on the slides and the paper.\n",
21
+ "\nDoes the deck accurately distinguish between the \"Local Attention\" (within windows) and the \"Global Sparse Attention\" (across the whole sequence)? If no, explain where these temporal components are confused.\n",
22
+ "\nAre the definitions of evaluation metrics (Abs Rel, δ1, TC error) consistent with the paper's standards for depth accuracy and consistency? If no, point out errors in metric interpretation.\n",
23
+ "\nDoes the slide deck avoid fabricating facts (e.g., claiming the model requires per-video optimization or camera intrinsic parameters)? If no, point out the fabricated content.\n",
24
+ "\nDo the visual results (e.g., the fine-grained details in Fig 5) accurately reflect how the model preserves thin structures compared to Marigold or ZoeDepth? If no, specify the slides where the visual comparison is misinterpreted.\n",
25
+ "\nIs the training data scale or the backbone model (SVD) correctly identified? If no, provide the incorrect technical details found on the slides.\n"
26
+ ]
27
+ }
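The checklist above distinguishes local window attention (temporal smoothness) from global sparse attention (long-range coherence). Below is a minimal sketch of the kind of boolean attention mask such a local-global hybrid could use; the window size and global stride are illustrative assumptions rather than the paper's exact configuration.

```python
import torch

def hybrid_attention_mask(seq_len, window=16, global_stride=8):
    """Sketch of a local-global temporal attention pattern: each frame
    attends to neighbours inside a local window plus a sparse set of
    regularly strided frames across the whole sequence."""
    idx = torch.arange(seq_len)
    local = (idx[:, None] - idx[None, :]).abs() <= window // 2
    global_cols = torch.zeros(seq_len, dtype=torch.bool)
    global_cols[::global_stride] = True
    return local | global_cols[None, :]  # (seq_len, seq_len) boolean mask

mask = hybrid_attention_mask(64)
print(mask.shape, mask.float().mean().item())  # density of the pattern
```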
academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/generation_task/statistics.yaml ADDED
@@ -0,0 +1,25 @@
1
+ case_path: academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos
2
+ category: academia
3
+ input_metrics:
4
+ total_input_tokens: 8455
5
+ generation_prompt_tokens: 2295
6
+ materials_total_tokens: 6160
7
+ material_count: 1
8
+ pdf_total_pages: 11
9
+ file_details:
10
+ - name: material.pdf
11
+ tokens: 6160
12
+ pages: 11
13
+ checklist_counts:
14
+ common:
15
+ details:
16
+ Presentation Fundamentals: 13
17
+ Visual Design and Layout: 17
18
+ sum: 30
19
+ specific:
20
+ details:
21
+ Content Completeness: 11
22
+ Content Correctness: 10
23
+ Content Fidelity (per-slide-deck dynamic): 0
24
+ sum: 21
25
+ total_count: 51
academia/CVPR_2025/DepthCrafter_Generating_Consistent_Long_Depth_Sequences_for_Open-world_Videos/material.pdf ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:aa379e8823cfb4814f487b01822416976729e1d61263f1ed3ec731b3ec5c05ee
3
+ size 9661295