---
license: mit
tags:
- kornia
- image-classification
- backbone
---

# kornia/tiny_vit

Pretrained weights for **TinyViT**, used as the encoder backbone in
[`kornia.models.SegmentAnything`](https://kornia.readthedocs.io/en/latest/models.html)
(MobileSAM) and available via
[`kornia.models.TinyViT`](https://kornia.readthedocs.io/en/latest/models.html).

TinyViT (ECCV 2022) is a family of small Vision Transformers trained with fast
pretraining distillation from large teacher models on ImageNet-22K.

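To make the distillation idea above concrete, here is a minimal, generic knowledge-distillation loss in pure Python: the student is trained to match the teacher's temperature-softened class distribution via a KL term. This is a sketch of the standard (Hinton-style) formulation, not the paper's exact pipeline, which additionally precomputes and stores sparse teacher logits for efficiency.

```python
import math


def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]


def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) between temperature-softened distributions.

    Scaled by T^2 so gradient magnitudes stay comparable across
    temperatures, following the usual distillation convention.
    """
    p = softmax(teacher_logits, temperature)  # soft teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return temperature ** 2 * kl
```

Identical student and teacher logits give zero loss, and any mismatch gives a positive loss, which is what drives the student toward the teacher's soft labels.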
**Original repo:** [microsoft/Cream/TinyViT](https://github.com/microsoft/Cream/tree/main/TinyViT)

## Weights

| File | Params | Pre-training | Fine-tuning |
|------|--------|--------------|-------------|
| `tiny_vit_5m_22k_distill.pth` | 5M | ImageNet-22K | — |
| `tiny_vit_5m_22kto1k_distill.pth` | 5M | ImageNet-22K | ImageNet-1K 224 |
| `tiny_vit_11m_22k_distill.pth` | 11M | ImageNet-22K | — |
| `tiny_vit_11m_22kto1k_distill.pth` | 11M | ImageNet-22K | ImageNet-1K 224 |
| `tiny_vit_21m_22k_distill.pth` | 21M | ImageNet-22K | — |
| `tiny_vit_21m_22kto1k_distill.pth` | 21M | ImageNet-22K | ImageNet-1K 224 |
| `tiny_vit_21m_22kto1k_384_distill.pth` | 21M | ImageNet-22K | ImageNet-1K 384 |
| `tiny_vit_21m_22kto1k_512_distill.pth` | 21M | ImageNet-22K | ImageNet-1K 512 |

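The checkpoint names above follow a regular convention (`tiny_vit_{params}m_{pretrain[to1k]}[_{resolution}]_distill.pth`). A small hypothetical helper, written here only to document that convention, can recover the table's metadata from a filename; 224 is assumed as the default fine-tuning resolution when none is encoded, as in the table.

```python
import re


def parse_checkpoint_name(filename):
    """Map a TinyViT checkpoint filename to (params, pretrain, finetune, resolution).

    Hypothetical convenience helper based purely on the naming scheme in the
    weights table; it is not part of the kornia API.
    """
    m = re.match(
        r"tiny_vit_(\d+)m_(22kto1k|22k)(?:_(\d+))?_distill\.pth$", filename
    )
    if m is None:
        raise ValueError(f"unrecognized checkpoint name: {filename}")
    params = f"{m.group(1)}M"
    pretrain = "ImageNet-22K"
    finetune = "ImageNet-1K" if m.group(2) == "22kto1k" else None
    # Filenames without an explicit resolution are the 224-px fine-tuned
    # variants; pre-training-only checkpoints carry no resolution.
    resolution = int(m.group(3)) if m.group(3) else (224 if finetune else None)
    return params, pretrain, finetune, resolution
```

For example, `tiny_vit_21m_22kto1k_384_distill.pth` parses to the 21M model fine-tuned on ImageNet-1K at 384 px, matching the last rows of the table.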
## Citation

```bibtex
@inproceedings{wu2022tinyvit,
  title     = {{TinyViT}: Fast Pretraining Distillation for Small Vision Transformers},
  author    = {Wu, Kan and Zhang, Jinnian and Peng, Houwen and Liu, Mengchen and
               Xiao, Bin and Fu, Jianlong and Yuan, Lu},
  booktitle = {ECCV},
  year      = {2022}
}
```