Xiiiiii0220 and nielsr (HF Staff) committed
Commit 10d9e04 · Parent: 22090c5

Improve model card and add metadata (#1)


- Improve model card and add metadata (8bf24ed9ff15ac5af8b21bdb93e4ef8e160d7ee1)


Co-authored-by: Niels Rogge <nielsr@users.noreply.huggingface.co>

Files changed (1)
  1. README.md +61 -4
README.md CHANGED
@@ -1,10 +1,67 @@
---
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---

- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Code: https://github.com/Intellindust-AI-Lab/EdgeCrafter
- - Paper: https://arxiv.org/abs/2603.18739
- - Docs: [More Information Needed]
 
---
+ license: apache-2.0
+ pipeline_tag: object-detection
tags:
- model_hub_mixin
- pytorch_model_hub_mixin
---

+ # EdgeCrafter: Compact ViTs for Edge Dense Prediction
+
+ EdgeCrafter is a unified compact ViT framework for edge dense prediction tasks. This repository specifically contains the **ECDet-S** model, an object detection architecture built from a distilled compact backbone and an edge-friendly encoder-decoder design.
+
+ - **Paper:** [EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation](https://arxiv.org/abs/2603.18739)
+ - **Project Page:** [https://intellindust-ai-lab.github.io/projects/EdgeCrafter/](https://intellindust-ai-lab.github.io/projects/EdgeCrafter/)
+ - **Repository:** [https://github.com/Intellindust-AI-Lab/EdgeCrafter](https://github.com/Intellindust-AI-Lab/EdgeCrafter)
+
+ ## Model Description
+
+ EdgeCrafter bridges the accuracy-efficiency gap between compact Vision Transformers (ViTs) and CNN-based architectures (like YOLO) on resource-constrained devices. By employing task-specialized distillation and edge-aware architectural designs, ECDet achieves high performance with minimal parameters. ECDet-S, for instance, reaches 51.7 AP on the COCO dataset with fewer than 10M parameters.
+
+ ### COCO2017 Validation Results (Object Detection)
+
+ | Model | Input Size | AP<sub>50:95</sub> | #Params (M) | GFLOPs | Latency (ms) |
+ |:-----:|:----:|:--:|:-------:|:------:|:------------:|
+ | **ECDet-S** | 640 | 51.7 | 10 | 26 | 5.41 |
+ | **ECDet-M** | 640 | 54.3 | 18 | 53 | 7.98 |
+ | **ECDet-L** | 640 | 57.0 | 31 | 101 | 10.49 |
+ | **ECDet-X** | 640 | 57.9 | 49 | 151 | 12.70 |
+
+ *Note: Latency is measured on an NVIDIA T4 GPU with batch size 1 under FP16 precision using TensorRT (v10.6).*
+
+ ## Installation
+
+ ```bash
+ # Create conda environment
+ conda create -n ec python=3.11 -y
+ conda activate ec
+
+ # Install dependencies
+ pip install -r requirements.txt
+ ```
+
+ ## Quick Start (Inference)
+
+ You can run inference on a sample image using the provided scripts:
+
+ ```bash
+ # 1. Download the pre-trained model (if not already present)
+ # 2. Run PyTorch inference
+ # Make sure to replace `path/to/your/image.jpg` with an actual image path
+ python tools/inference/torch_inf.py -c configs/ecdet/ecdet_s.yml -r ecdet_s.pth -i path/to/your/image.jpg
+ ```
+
+ ## Citation
+
+ If you find EdgeCrafter useful in your research, please consider citing:
+
+ ```bibtex
+ @article{liu2026edgecrafter,
+   title={EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation},
+   author={Liu, Longfei and Hou, Yongjie and Li, Yang and Wang, Qirui and Sha, Youyang and Yu, Yongjun and Wang, Yinzhi and Ru, Peizhe and Yu, Xuanlong and Shen, Xi},
+   journal={arXiv preprint arXiv:2603.18739},
+   year={2026}
+ }
+ ```
+
+ This model has been pushed to the Hub using the [PyTorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.
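As context for the mixin mentioned on the last line of the new model card, below is a minimal sketch of the generic `PyTorchModelHubMixin` save/load pattern from `huggingface_hub`. The placeholder `TinyDetector` class, its layers, and the local directory name are illustrative assumptions; the real ECDet model class is defined in the EdgeCrafter repository and is not reproduced here.

```python
# Minimal sketch of the PyTorchModelHubMixin pattern (illustrative only).
# TinyDetector is a placeholder; the actual ECDet architecture lives in the
# EdgeCrafter codebase and is not reproduced here.
import torch
import torch.nn as nn
from huggingface_hub import PyTorchModelHubMixin


class TinyDetector(nn.Module, PyTorchModelHubMixin):
    """Toy stand-in for a detection model that uses the Hub mixin."""

    def __init__(self, in_channels: int = 3, num_classes: int = 80):
        super().__init__()
        self.backbone = nn.Conv2d(in_channels, 16, kernel_size=3, padding=1)
        self.head = nn.Conv2d(16, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))


if __name__ == "__main__":
    model = TinyDetector(num_classes=80)

    # save_pretrained() writes the weights plus a config.json derived from the
    # __init__ arguments; from_pretrained() rebuilds the model from either a
    # local directory or a Hub repo id.
    model.save_pretrained("tiny-detector-demo")
    reloaded = TinyDetector.from_pretrained("tiny-detector-demo")

    dummy = torch.randn(1, 3, 640, 640)
    print(reloaded(dummy).shape)  # torch.Size([1, 80, 640, 640])
```

Loading the actual checkpoint would use the EdgeCrafter repository's own model class, calling its `from_pretrained` with this Hub repository's id.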