nielsr (HF Staff) committed · verified
Commit 8630fdc · Parent(s): 1088e8b

Add metadata and improve model card for EdgeCrafter (ECSeg)

Hi! I'm Niels from the Hugging Face team. I've updated the model card for EdgeCrafter to improve its discoverability and documentation.

Specifically, I have:
- Added the `pipeline_tag: image-segmentation` to the YAML metadata.
- Added the `license: apache-2.0` based on the project repository.
- Linked the model card to the official paper, project page, and GitHub repository.
- Included the performance table for the instance segmentation family (ECSeg).
- Added a sample usage section based on the inference instructions in the GitHub README.

Files changed (1): README.md (+48 −4)
README.md CHANGED
@@ -1,10 +1,54 @@
  ---
+ license: apache-2.0
+ pipeline_tag: image-segmentation
  tags:
  - model_hub_mixin
  - pytorch_model_hub_mixin
  ---
 
- This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
- - Code: https://github.com/Intellindust-AI-Lab/EdgeCrafter
- - Paper: https://arxiv.org/abs/2603.18739
- - Docs: [More Information Needed]
+ # EdgeCrafter: ECSeg
+
+ EdgeCrafter is a unified compact Vision Transformer (ViT) framework for efficient dense prediction on edge devices. This checkpoint belongs to the **ECSeg** series, which targets high-performance instance segmentation with a distilled compact backbone and an edge-friendly encoder-decoder design.
+
+ - **Project Page:** [EdgeCrafter](https://intellindust-ai-lab.github.io/projects/EdgeCrafter/)
+ - **Paper:** [EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation](https://arxiv.org/abs/2603.18739)
+ - **Repository:** [GitHub - Intellindust-AI-Lab/EdgeCrafter](https://github.com/Intellindust-AI-Lab/EdgeCrafter)
+
+ ## Performance (Instance Segmentation on COCO 2017)
+
+ | Model | Input Size | AP<sub>50:95</sub> | #Params | GFLOPs | Latency (ms) |
+ |:-----:|:----------:|:------------------:|:-------:|:------:|:------------:|
+ | **ECSeg-S** | 640 | 43.0 | 10M | 33 | 6.96 |
+ | **ECSeg-M** | 640 | 45.2 | 20M | 64 | 9.85 |
+ | **ECSeg-L** | 640 | 47.1 | 34M | 111 | 12.56 |
+ | **ECSeg-X** | 640 | 48.4 | 50M | 168 | 14.96 |
+
+ *Note: Latency is measured on an NVIDIA T4 GPU with batch size 1 under FP16 precision using TensorRT (v10.6).*
+
+ ## Usage
+
+ To run inference with this model, follow the instructions in the official repository, using the provided inference script:
+
+ ```bash
+ # 1. Clone the repository and install dependencies
+ git clone https://github.com/Intellindust-AI-Lab/EdgeCrafter
+ cd EdgeCrafter/ecdetseg
+ pip install -r requirements.txt
+
+ # 2. Run PyTorch inference
+ # Replace `path/to/your/image.jpg` with an actual image path
+ python tools/inference/torch_inf.py -c configs/ecseg/ecseg_s.yml -r ecdet_s.pth -i path/to/your/image.jpg
+ ```
+
+ ## Citation
+
+ ```bibtex
+ @article{liu2026edgecrafter,
+   title={EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation},
+   author={Liu, Longfei and Hou, Yongjie and Li, Yang and Wang, Qirui and Sha, Youyang and Yu, Yongjun and Wang, Yinzhi and Ru, Peizhe and Yu, Xuanlong and Shen, Xi},
+   journal={arXiv preprint arXiv:2603.18739},
+   year={2026}
+ }
+ ```
+
+ This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration.