nielsr (HF Staff) committed · verified
Commit f566fa5 · Parent: 0a2f4ca

Improve model card and add metadata

Hi, I'm Niels from the Hugging Face community team. I've updated the model card for EdgeCrafter (ECSeg-S) to include:
- Relevant metadata such as `pipeline_tag: image-segmentation` and `license: apache-2.0`.
- Links to the original paper, GitHub repository, and project page.
- Inference instructions and the BibTeX citation for research use.

This will ensure the model is correctly categorized on the Hub and easy for users to find and use.

Files changed (1)
  README.md: +49 -4
README.md CHANGED
@@ -1,10 +1,55 @@
 ---
+license: apache-2.0
+pipeline_tag: image-segmentation
 tags:
 - model_hub_mixin
 - pytorch_model_hub_mixin
 ---

-This model has been pushed to the Hub using the [PytorchModelHubMixin](https://huggingface.co/docs/huggingface_hub/package_reference/mixins#huggingface_hub.PyTorchModelHubMixin) integration:
-- Code: https://github.com/Intellindust-AI-Lab/EdgeCrafter
-- Paper: https://arxiv.org/abs/2603.18739
-- Docs: [More Information Needed]
+# EdgeCrafter: Compact ViTs for Edge Dense Prediction
+
+EdgeCrafter is a unified framework for compact Vision Transformers (ViTs) designed for high-performance dense prediction (detection, instance segmentation, and pose estimation) on resource-constrained edge devices. This specific model, **ECSeg-S**, is a lightweight instance segmentation model.
+
+- **Paper:** [EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation](https://huggingface.co/papers/2603.18739)
+- **GitHub Repository:** [Intellindust-AI-Lab/EdgeCrafter](https://github.com/Intellindust-AI-Lab/EdgeCrafter)
+- **Project Page:** [EdgeCrafter Project Page](https://intellindust-ai-lab.github.io/projects/EdgeCrafter/)
+
+## Model Description
+
+ECSeg-S is built using a distilled compact backbone and an edge-friendly encoder-decoder design. It achieves a strong accuracy-efficiency tradeoff, making it suitable for real-time applications on edge hardware. For instance segmentation, it achieves performance comparable to RF-DETR while using significantly fewer parameters.
+
+## Quick Start (Inference)
+
+To run inference on a sample image, follow the instructions from the official repository:
+
+### 1. Installation
+```bash
+# Create conda environment
+conda create -n ec python=3.11 -y
+conda activate ec
+
+# Install dependencies
+pip install -r requirements.txt
+```
+
+### 2. Run Inference
+```bash
+# Navigate to the detection/segmentation folder
+cd ecdetseg
+
+# Run PyTorch inference
+# Replace `path/to/your/image.jpg` with an actual image path
+python tools/inference/torch_inf.py -c configs/ecseg/ecseg_s.yml -r /path/to/ecseg_s.pth -i path/to/your/image.jpg
+```
+
+## Citation
+
+If you find EdgeCrafter useful in your research, please consider citing:
+
+```bibtex
+@article{liu2026edgecrafter,
+  title={EdgeCrafter: Compact ViTs for Edge Dense Prediction via Task-Specialized Distillation},
+  author={Liu, Longfei and Hou, Yongjie and Li, Yang and Wang, Qirui and Sha, Youyang and Yu, Yongjun and Wang, Yinzhi and Ru, Peizhe and Yu, Xuanlong and Shen, Xi},
+  journal={arXiv},
+  year={2026}
+}
+```