Add LiteRT converted coat_lite_tiny
- README.md +42 -0
- model.tflite +3 -0
README.md
ADDED
---
library_name: litert
base_model: timm/coat_lite_tiny.in1k
tags:
- vision
- image-classification
datasets:
- imagenet-1k
---

# coat_lite_tiny

Converted TIMM image classification model for LiteRT.

- Source architecture: coat_lite_tiny
- File: model.tflite

## Model Details

- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 5.7
  - GMACs: 1.6
  - Activations (M): 11.6
  - Image size: 224 x 224
- **Papers:**
  - Co-Scale Conv-Attentional Image Transformers: https://arxiv.org/abs/2104.06399
- **Dataset:** ImageNet-1k
- **Original:** https://github.com/mlpc-ucsd/CoaT

## Citation

```bibtex
@InProceedings{Xu_2021_ICCV,
    author    = {Xu, Weijian and Xu, Yifan and Chang, Tyler and Tu, Zhuowen},
    title     = {Co-Scale Conv-Attentional Image Transformers},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {9981-9990}
}
```
model.tflite
ADDED
version https://git-lfs.github.com/spec/v1
oid sha256:5a6f0a4aab14b19ecdb68fc370cdad4f538dc32bf1043e68faa9b5d1fe6505f8
size 23044560
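The three lines above are a Git LFS pointer, not the model itself: a raw download of `model.tflite` that skips LFS yields this pointer instead of the 23 MB payload. A small sketch for checking a fetched copy against the pointer's recorded size and digest; the `verify` helper is hypothetical, not part of this repository.

```python
import hashlib

# The Git LFS pointer exactly as committed above.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5a6f0a4aab14b19ecdb68fc370cdad4f538dc32bf1043e68faa9b5d1fe6505f8
size 23044560
"""

# Each pointer line is "key value"; parse into a dict.
fields = dict(line.split(" ", 1) for line in pointer.strip().splitlines())
expected_sha256 = fields["oid"].split(":", 1)[1]
expected_size = int(fields["size"])

def verify(path: str) -> bool:
    """Return True if the file at `path` matches the pointer's size and SHA-256."""
    data = open(path, "rb").read()
    return len(data) == expected_size and hashlib.sha256(data).hexdigest() == expected_sha256
```

If `verify("model.tflite")` is False, the download is likely the pointer text itself or a truncated transfer; re-fetch with `git lfs pull` or a Hub client that resolves LFS objects.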