---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-inception-v3.mdx`

# (Tensorflow) Inception v3

**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://pape...
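The label smoothing mentioned above is simple enough to sketch directly. This is an illustrative scalar version (the function name `smooth_targets` and the default `eps` are ours, not timm's implementation): a fraction `eps` of the probability mass is moved from the true class to a uniform distribution over all classes.

```python
def smooth_targets(num_classes, true_class, eps=0.1):
    """Return a smoothed one-hot target distribution.

    The true class gets 1 - eps + eps/K and every class (including the
    true one) receives a uniform eps/K share, so the mass still sums to 1.
    """
    off = eps / num_classes
    on = 1.0 - eps + off
    return [on if i == true_class else off for i in range(num_classes)]

targets = smooth_targets(4, true_class=2, eps=0.1)
```

With `eps=0.1` and 4 classes this yields `0.925` on the true class and `0.025` elsewhere, which softens the loss gradient for over-confident predictions.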
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/selecsls.mdx`

# SelecSLS

**SelecSLS** uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy.

## How do I use this model on an image?

To load a pretrained model:

```py
>>> import timm
>>> model = timm.create_model('selecsl...
```
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/resnest.mdx`

# ResNeSt

A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: \\( V = \text{Concat} \\){\\( V^{1},V^{2}...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/csp-resnext.mdx`

# CSP-ResNeXt

**CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use o...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/res2next.mdx`

# Res2NeXt

**Res2NeXt** is an image model that employs a variation on [ResNeXt](https://paperswithcode.com/method/resnext) bottleneck residual blocks. The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-li...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/resnet-d.mdx`

# ResNet-D

**ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.co...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/ese-vovnet.mdx`

# ESE-VoVNet

**VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once in the last feature map, which keeps the input size constant and enables enlarging the new output channel.

Read about [one-shot aggregatio...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/seresnext.mdx`

# SE-ResNeXt

**SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resneXt) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.

## How do I use this model on...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/ecaresnet.mdx`

# ECA-ResNet

An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https:/...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/fbnet.mdx`

# FBNet

**FBNet** is a type of convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) neural architecture search. It uses a basic type of image model block inspired by [MobileNetv2](https://paperswithcode.com/method/mobilenetv2) that utilises depthwise convolutions and...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/resnet.mdx`

# ResNet

**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual block...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-inception-v3.mdx`

# (Gluon) Inception v3

**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswit...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/res2net.mdx`

# Res2Net

**Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/advprop.mdx`

# AdvProp (EfficientNet)

**AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the use of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions from normal examples.

The w...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/legacy-se-resnext.mdx`

# (Legacy) SE-ResNeXt

**SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.

## How do I use this...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-xception.mdx`

# (Gluon) Xception

**Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers.

The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).

##...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/inception-resnet-v2.mdx`

# Inception ResNet v2

**Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture).

## How do I...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/skresnet.mdx`

# SK-ResNet

**SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convo...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/tresnet.mdx`

# TResNet

A **TResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that aims to boost accuracy while maintaining GPU training and inference efficiency. TResNets contain several design tricks, including a SpaceToDepth stem, [Anti-Alias downsampling](https://paperswithcode.com/method/anti-alias-down...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/rexnet.mdx`

# RexNet

**Rank Expansion Networks** (ReXNets) follow a set of new design principles for designing bottlenecks in image classification models. The authors refine each layer by (1) expanding the input channel size of the convolution layer and (2) replacing the [ReLU6s](https://www.paperswithcode.com/method/relu6).

## How do...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/wide-resnet.mdx`

# Wide ResNet

**Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of [wide residual blocks](https://paperswithcode.com/method/wide-residual-block).

## How do I use this mod...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-mixnet.mdx`

# (Tensorflow) MixNet

**MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution).

The weights from this model were ported from [Tenso...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-efficientnet.mdx`

# (Tensorflow) EfficientNet

**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scal...
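The compound-scaling idea above can be made concrete with a little arithmetic. The coefficients below (alpha=1.2, beta=1.1, gamma=1.15) are the published EfficientNet grid-search values; the `scale` helper and its base values are our illustrative sketch, not timm code:

```python
# EfficientNet compound scaling: depth *= alpha**phi, width *= beta**phi,
# resolution *= gamma**phi, chosen so alpha * beta**2 * gamma**2 ~= 2,
# i.e. FLOPs grow roughly by 2**phi.
alpha, beta, gamma = 1.2, 1.1, 1.15

def scale(phi, base_depth=1.0, base_width=1.0, base_res=224):
    """Return (depth multiplier, width multiplier, input resolution) for a
    given compound coefficient phi (hypothetical helper for illustration)."""
    return (base_depth * alpha ** phi,
            base_width * beta ** phi,
            round(base_res * gamma ** phi))

depth_mult, width_mult, resolution = scale(phi=1)
```

Each step up in `phi` (B0 to B1 to B2, ...) therefore deepens, widens, and enlarges the input together rather than tuning one dimension at a time.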
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/hrnet.mdx`

# HRNet

**HRNet**, or **High-Resolution Net**, is a general-purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high-resolution representations throughout the whole process. We start from a high-resolution convolution stream, gradual...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/regnetx.mdx`

# RegNetX

**RegNetX** is a convolutional network design space of simple, regular models with parameters: depth \\( d \\), initial width \\( w\_{0} > 0 \\), and slope \\( w\_{a} > 0 \\); it generates a different block width \\( u\_{j} \\) for each block \\( j < d \\). The key restriction for the RegNet types of mode...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/dpn.mdx`

# Dual Path Network (DPN)

A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enable feature re-use while DenseNet enables new feature exploration, and both are importa...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/xception.mdx`

# Xception

**Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution layers](https://paperswithcode.com/method/depthwise-separable-convolution).

The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).

## How do I...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/ig-resnext.mdx`

# Instagram ResNeXt WSL

A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transfo...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/legacy-se-resnet.mdx`

# (Legacy) SE-ResNet

**SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.

## How do I use this mod...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-resnet.mdx`

# (Gluon) ResNet

**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residu...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/legacy-senet.mdx`

# (Legacy) SENet

A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.

The weights from this model were ported from Gluon.

##...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/efficientnet-pruned.mdx`

# EfficientNet (Knapsack Pruned)

**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-efficientnet-condconv.mdx`

# (Tensorflow) EfficientNet CondConv

**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method unifo...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/spnasnet.mdx`

# SPNASNet

**Single-Path NAS** is a novel differentiable NAS method for designing hardware-efficient ConvNets in less than 4 hours.

## How do I use this model on an image?

To load a pretrained model:

```py
>>> import timm
>>> model = timm.create_model('spnasnet_100', pretrained=True)
>>> model.eval()
```

To load a...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/gloun-resnext.mdx`

# (Gluon) ResNeXt

A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformatio...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/pnasnet.mdx`

# PNASNet

**Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to comple...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/swsl-resnext.mdx`

# SWSL ResNeXt

A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations)...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/ssl-resnet.mdx`

# SSL ResNet

**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual b...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/inception-v3.mdx`

# Inception v3

**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements, including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.co...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/big-transfer.mdx`

# Big Transfer (BiT)

**Big Transfer (BiT)** is a type of pretraining recipe that pre-trains on a large supervised source dataset and fine-tunes the weights on the target task. Models are trained on the JFT-300M dataset. The fine-tuned models contained in this collection are fine-tuned on ImageNet.

## How do I use thi...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/mnasnet.mdx`

# MnasNet

**MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile neural architecture search, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and late...

---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/models/tf-mobilenet-v3.mdx`

# (Tensorflow) MobileNet v3

**MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-bloc...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/reference/optimizers.mdx`

# Optimization

This page contains the API reference documentation for optimizers included in `timm`.

## Optimizers

### Factory functions

[[autodoc]] timm.optim.optim_factory.create_optimizer
[[autodoc]] timm.optim.optim_factory.create_optimizer_v2

### Optimizer Classes

[[autodoc]] timm.optim.adabeli...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/reference/models.mdx`

# Models

[[autodoc]] timm.create_model
[[autodoc]] timm.list_models
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/reference/schedulers.mdx`

# Learning Rate Schedulers

This page contains the API reference documentation for learning rate schedulers included in `timm`.

## Schedulers

### Factory functions

[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler
[[autodoc]] timm.scheduler.scheduler_factory.create_scheduler_v2

### Scheduler Classes

[[...
---

**Source:** `hf_public_repos/pytorch-image-models/hfdocs/source/reference/data.mdx`

# Data

[[autodoc]] timm.data.create_dataset
[[autodoc]] timm.data.create_loader
[[autodoc]] timm.data.create_transform
[[autodoc]] timm.data.resolve_data_config
---

**Source:** `hf_public_repos/pytorch-image-models/results/README.md`

# Validation and Benchmark Results

This folder contains validation and benchmark results for the models in this collection. Validation scores are currently only run for models with pretrained weights and ImageNet-1k heads; benchmark numbers are run for all.

## Datasets

There are currently results for the ImageNet va...
---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenet.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,90.052,9.948,99.048,0.952,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,89.970,10.030,99.012,0.988,305.08,448,1.000,bicubic
eva_giant_patch14_560.m30m_ft_in22k_in1k,89....
```
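These result files are plain CSV, so the standard library is enough to rank models. The excerpt below reuses the two complete rows shown above; parsing real files would just swap `io.StringIO` for `open('results-imagenet.csv')`:

```python
import csv
import io

# Same column layout as results-imagenet.csv, with the two rows from above.
sample = """model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,90.052,9.948,99.048,0.952,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,89.970,10.030,99.012,0.988,305.08,448,1.000,bicubic
"""

rows = list(csv.DictReader(io.StringIO(sample)))
# Rank by top-1 accuracy (fields come back as strings, hence float()).
best = max(rows, key=lambda r: float(r['top1']))
```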
---

**Source:** `hf_public_repos/pytorch-image-models/results/model_metadata-in1k.csv`

```
model,pretrain
adv_inception_v3,in1k-adv
bat_resnext26ts,in1k
beit_base_patch16_224,in21k-selfsl
beit_base_patch16_384,in21k-selfsl
beit_large_patch16_224,in21k-selfsl
beit_large_patch16_384,in21k-selfsl
beit_large_patch16_512,in21k-selfsl
botnet26t_256,in1k
cait_m36_384,in1k-dist
cait_m48_448,in1k-dist
cait_s24_224,in...
```
---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt210-cu121-rtx3090.csv`

```
model,infer_img_size,infer_batch_size,infer_samples_per_sec,infer_step_time,infer_gmacs,infer_macts,param_count
tinynet_e,106,1024.0,75290.96,13.591,0.03,0.69,2.04
mobilenetv3_small_050,224,1024.0,56785.93,18.023,0.03,0.92,1.59
efficientvit_m0,224,1024.0,50656.23,20.205,0.08,0.91,2.35
lcnet_035,224,1024.0,48853.22,20.9...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nhwc-pt111-cu113-rtx3090.csv`

```
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,10725.36,46.047,512,106,2.04
mobilenetv3_small_050,9864.52,50.786,512,224,1.59
lcnet_035,9593.72,52.888,512,224,1.64
lcnet_050,8283.82,61.296,512,224,1.88
tf_mobilenetv3_small_minimal_100,8178.73,62.055,512,224,2.04
tinyne...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenet-r.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
convnext_xxlarge.clip_laion2b_soup_ft_in1k,90.623,9.377,97.913,2.087,846.47,256,1.000,bicubic,-7.127,-1.897,+18
eva_giant_patch14_336.clip_ft_in1k,90.550,9.450,97.230,2.770,"1,013.01",336,1.000,bicubic,-7.310,-2....
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenet-r-clean.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,98.150,1.850,99.880,0.120,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,98.030,1.970,99.890,0.110,305.08,448,1.000,bicubic
eva_giant_patch14_560.m30m_ft_in22k_in1k,98.0...
```
---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenetv2-matched-frequency.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva_giant_patch14_336.clip_ft_in1k,82.200,17.800,96.290,3.710,"1,013.01",336,1.000,bicubic,-7.266,-2.536,+6
eva02_large_patch14_448.mim_in22k_ft_in1k,82.130,17.870,96.260,3.740,305.08,448,1.000,bicubic,-7.492,-2....
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nchw-pt112-cu113-rtx3090.csv`

```
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,10001.12,50.423,512,106,2.04
mobilenetv3_small_050,7406.47,68.392,512,224,1.59
tf_mobilenetv3_small_minimal_100,6438.14,78.983,512,224,2.04
mobilenetv3_small_075,6186.83,82.006,512,224,2.04
tf_mobilenetv3_small_075,5783.46...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nhwc-pt112-cu113-rtx3090.csv`

```
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,11915.85,41.681,512,106,2.04
mobilenetv3_small_050,11290.99,44.293,512,224,1.59
lcnet_035,10015.98,50.125,512,224,1.64
lcnet_050,9286.37,54.37,512,224,1.88
tf_mobilenetv3_small_minimal_100,9042.22,55.986,512,224,2.04
mobil...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt113-cu117-rtx3090.csv`

```
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,49277.65,20.77,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,45562.75,22.464,1024,224,0.03,0.92,1.59
lcnet_035,41026.68,24.949,1024,224,0.03,1.04,1.64
lcnet_050,37575.13,27.242,1024,224,0.05,1.26,1....
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt112-cu113-rtx3090.csv`

```
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,70939.06,14.424,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,53363.87,19.179,1024,224,0.03,0.92,1.59
lcnet_035,39908.29,25.648,1024,224,0.03,1.04,1.64
mobilenetv3_small_075,38048.72,26.902,1024,224...
```
---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-train-amp-nchw-pt111-cu113-rtx3090.csv`

```
model,train_samples_per_sec,train_step_time,train_batch_size,train_img_size,param_count
tinynet_e,9380.97,53.881,512,106,2.04
mobilenetv3_small_050,7276.68,69.643,512,224,1.59
tf_mobilenetv3_small_minimal_100,6334.14,80.291,512,224,2.04
mobilenetv3_small_075,5920.21,85.765,512,224,2.04
lcnet_035,5760.61,88.397,512,224,...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt111-cu113-rtx3090.csv`

```
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,param_count
tinynet_e,47972.76,21.335,1024,106,2.04
mobilenetv3_small_050,42473.43,24.099,1024,224,1.59
lcnet_035,39739.31,25.756,1024,224,1.64
lcnet_050,35211.0,29.071,1024,224,1.88
mobilenetv3_small_075,31410.3,32.589,1024,224,2.04
mobilenetv...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt210-cu121-rtx3090.csv`

```
model,infer_img_size,infer_batch_size,infer_samples_per_sec,infer_step_time,infer_gmacs,infer_macts,param_count
tinynet_e,106,1024.0,50604.03,20.225,0.03,0.69,2.04
mobilenetv3_small_050,224,1024.0,46069.42,22.217,0.03,0.92,1.59
lcnet_035,224,1024.0,41190.64,24.85,0.03,1.04,1.64
lcnet_050,224,1024.0,37663.82,27.178,0.05...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/results-sketch.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva_giant_patch14_336.clip_ft_in1k,71.177,28.823,90.299,9.701,"1,013.01",336,1.000,bicubic,-18.289,-8.527,+6
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,70.662,29.338,89.856,10.144,305.08,448,1.000,bicubic,-19...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenet-a-clean.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,98.930,1.070,99.910,0.090,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,98.850,1.150,99.880,0.120,305.08,448,1.000,bicubic
eva02_large_patch14_448.mim_in22k_ft_in1k,98....
```
---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenet-real.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,91.129,8.871,98.713,1.287,305.08,448,1.000,bicubic,+1.077,-0.335,0
eva_giant_patch14_336.clip_ft_in1k,91.058,8.942,98.602,1.399,"1,013.01",336,1.000,bicubic,+1.592,-...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt111-cu113-rtx3090.csv`

```
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,param_count
tinynet_e,68298.73,14.982,1024,106,2.04
mobilenetv3_small_050,48773.32,20.985,1024,224,1.59
lcnet_035,47045.94,21.755,1024,224,1.64
lcnet_050,41541.83,24.639,1024,224,1.88
mobilenetv3_small_075,37803.23,27.076,1024,224,2.04
mobilene...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nhwc-pt113-cu117-rtx3090.csv`

```
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,72737.62,14.068,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,54822.3,18.668,1024,224,0.03,0.92,1.59
lcnet_035,53629.35,19.084,1024,224,0.03,1.04,1.64
lcnet_050,45492.41,22.499,1024,224,0.05,1.26,1....
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/results-imagenet-a.csv`

```
model,top1,top1_err,top5,top5_err,param_count,img_size,crop_pct,interpolation,top1_diff,top5_diff,rank_diff
eva02_large_patch14_448.mim_m38m_ft_in22k_in1k,88.227,11.773,97.093,2.907,305.08,448,1.000,bicubic,-10.623,-2.787,+1
eva02_large_patch14_448.mim_in22k_ft_in22k_in1k,87.893,12.107,96.920,3.080,305.08,448,1.000,bic...
```

---

**Source:** `hf_public_repos/pytorch-image-models/results/benchmark-infer-amp-nchw-pt112-cu113-rtx3090.csv`

```
model,infer_samples_per_sec,infer_step_time,infer_batch_size,infer_img_size,infer_gmacs,infer_macts,param_count
tinynet_e,49285.12,20.767,1024,106,0.03,0.69,2.04
mobilenetv3_small_050,43905.96,23.312,1024,224,0.03,0.92,1.59
lcnet_035,40961.84,24.988,1024,224,0.03,1.04,1.64
lcnet_050,36451.18,28.081,1024,224,0.05,1.26,1...
```
---

**Source:** `hf_public_repos/pytorch-image-models/results/generate_csv_results.py`

```py
import numpy as np
import pandas as pd

results = {
    'results-imagenet.csv': [
        'results-imagenet-real.csv',
        'results-imagenetv2-matched-frequency.csv',
        'results-sketch.csv'
    ],
    'results-imagenet-a-clean.csv': [
        'results-imagenet-a.csv',
    ],
    'results-imagenet-r-clean.csv...
```
---

**Source:** `hf_public_repos/pytorch-image-models/timm/__init__.py`

```py
from .version import __version__
from .layers import is_scriptable, is_exportable, set_scriptable, set_exportable
from .models import create_model, list_models, list_pretrained, is_model, list_modules, model_entrypoint, \
    is_model_pretrained, get_pretrained_cfg, get_pretrained_cfg_value
```
---

**Source:** `hf_public_repos/pytorch-image-models/timm/version.py`

```py
__version__ = '0.9.13dev0'
```
---

**Source:** `hf_public_repos/pytorch-image-models/timm/layers/activations_me.py`

```py
""" Activations (memory-efficient w/ custom autograd)

A collection of activations fn and modules with a common interface so that they can
easily be swapped. All have an `inplace` arg even if not used.

These activations are not compatible with jit scripting or ONNX export of the model, please use either
the JIT or bas...
```
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/activations.py | """ Activations
A collection of activations fn and modules with a common interface so that they can
easily be swapped. All have an `inplace` arg even if not used.
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
from torch import nn as nn
from torch.nn import functional as F
def swish(x, inplace:... | 0 |
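The `swish` definition above is cut off mid-signature; as a rough, self-contained scalar sketch of what such an activation computes (illustrative only, not timm's tensor implementation):

```python
import math

def swish(x: float, inplace: bool = False) -> float:
    # swish / SiLU: x * sigmoid(x); `inplace` is kept only for
    # interface parity with the other activations in this module
    return x * (1.0 / (1.0 + math.exp(-x)))
```

For large positive inputs swish approaches the identity; for large negative inputs it decays smoothly to zero.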
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/norm_act.py | """ Normalization + Activation Layers
Provides Norm+Act fns for standard PyTorch norm layers such as
* BatchNorm
* GroupNorm
* LayerNorm
This allows swapping with alternative layers that are natively both norm + act such as
* EvoNorm (evo_norm.py)
* FilterResponseNorm (filter_response_norm.py)
* InplaceABN (inplace_a... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/mixed_conv2d.py | """ PyTorch Mixed Convolution
Paper: MixConv: Mixed Depthwise Convolutional Kernels (https://arxiv.org/abs/1907.09595)
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
from torch import nn as nn
from .conv2d_same import create_conv2d_pad
def _split_channels(num_chan, num_groups):
split = [nu... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/conv2d_same.py | """ Conv2d w/ Same Padding
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import Tuple, Optional
from .config import is_exportable, is_scriptable
from .padding import pad_same, pad_same_arg, get_padding_value
_USE_EXPORT_CONV = Fa... | 0 |
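"Same" padding for strided convolutions is the point of this module; a minimal sketch of the usual padding arithmetic (assumed from the TensorFlow `SAME` convention, not copied from timm):

```python
import math

def get_same_padding(in_size: int, kernel_size: int, stride: int, dilation: int = 1) -> int:
    # total padding so that out_size == ceil(in_size / stride)
    out_size = math.ceil(in_size / stride)
    pad = (out_size - 1) * stride + (kernel_size - 1) * dilation + 1 - in_size
    return max(pad, 0)
```

The total is then split (unevenly if odd) between the two sides of each spatial dimension before the convolution runs.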
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/bottleneck_attn.py | """ Bottleneck Self Attention (Bottleneck Transformers)
Paper: `Bottleneck Transformers for Visual Recognition` - https://arxiv.org/abs/2101.11605
@misc{2101.11605,
Author = {Aravind Srinivas and Tsung-Yi Lin and Niki Parmar and Jonathon Shlens and Pieter Abbeel and Ashish Vaswani},
Title = {Bottleneck Transformers f... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/conv_bn_act.py | """ Conv2d + BN + Act
Hacked together by / Copyright 2020 Ross Wightman
"""
import functools
from torch import nn as nn
from .create_conv2d import create_conv2d
from .create_norm_act import get_norm_act_layer
class ConvNormAct(nn.Module):
def __init__(
self,
in_channels,
out_... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/norm.py | """ Normalization layers and wrappers
Norm layer definitions that support fast norm and consistent channel arg order (always first arg).
Hacked together by / Copyright 2022 Ross Wightman
"""
import numbers
from typing import Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from .fast_norm im... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/pool2d_same.py | """ AvgPool2d w/ Same Padding
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
import torch.nn as nn
import torch.nn.functional as F
from typing import List, Tuple, Optional
from .helpers import to_2tuple
from .padding import pad_same, get_padding_value
def avg_pool2d_same(x, kernel_size: List[int... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/patch_dropout.py | from typing import Optional, Tuple, Union
import torch
import torch.nn as nn
class PatchDropout(nn.Module):
"""
https://arxiv.org/abs/2212.00794
"""
return_indices: torch.jit.Final[bool]
def __init__(
self,
prob: float = 0.5,
num_prefix_tokens: int = 1,
... | 0 |
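PatchDropout (arXiv 2212.00794) trains vision transformers on a random subset of patch tokens while preserving prefix (class) tokens. A simplified numpy sketch of the idea -- one shared keep-set for the whole batch, unlike the per-sample indices a real implementation would draw:

```python
import numpy as np

def patch_dropout(x, prob=0.5, num_prefix_tokens=1, rng=None):
    # x: (batch, tokens, dim); keep prefix tokens, then keep a random
    # (1 - prob) fraction of the remaining patch tokens
    rng = rng or np.random.default_rng(0)
    prefix, patches = x[:, :num_prefix_tokens], x[:, num_prefix_tokens:]
    num_keep = max(1, int(patches.shape[1] * (1.0 - prob)))
    keep = np.sort(rng.permutation(patches.shape[1])[:num_keep])
    return np.concatenate([prefix, patches[:, keep]], axis=1)
```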
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/activations_jit.py | """ Activations
A collection of jit-scripted activations fn and modules with a common interface so that they can
easily be swapped. All have an `inplace` arg even if not used.
All jit scripted activations are lacking in-place variations on purpose, scripted kernel fusion does not
currently work across in-place op bou... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/drop.py | """ DropBlock, DropPath
PyTorch implementations of DropBlock and DropPath (Stochastic Depth) regularization layers.
Papers:
DropBlock: A regularization method for convolutional networks (https://arxiv.org/abs/1810.12890)
Deep Networks with Stochastic Depth (https://arxiv.org/abs/1603.09382)
Code:
DropBlock impl ins... | 0 |
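DropPath (Stochastic Depth) zeroes entire residual branches per sample and rescales the survivors so the expected value is unchanged. A hedged numpy sketch of that behavior (timm's version operates on torch tensors):

```python
import numpy as np

def drop_path(x, drop_prob=0.0, training=False, rng=None):
    # identity at inference time or when drop_prob == 0
    if drop_prob == 0.0 or not training:
        return x
    rng = rng or np.random.default_rng(0)
    keep_prob = 1.0 - drop_prob
    # one Bernoulli draw per sample, broadcast over all remaining dims
    shape = (x.shape[0],) + (1,) * (x.ndim - 1)
    mask = (rng.random(shape) < keep_prob).astype(x.dtype)
    return x * mask / keep_prob
```

Dividing by `keep_prob` means no rescaling is needed at inference, mirroring inverted dropout.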
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/median_pool.py | """ Median Pool
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch.nn as nn
import torch.nn.functional as F
from .helpers import to_2tuple, to_4tuple
class MedianPool2d(nn.Module):
""" Median pool (usable as median filter when stride=1) module.
Args:
kernel_size: size of pooling kern... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/cbam.py | """ CBAM (sort-of) Attention
Experimental impl of CBAM: Convolutional Block Attention Module: https://arxiv.org/abs/1807.06521
WARNING: Results with these attention layers have been mixed. They can significantly reduce performance on
some tasks, especially fine-grained it seems. I may end up removing this impl.
Hack... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/attention_pool2d.py | """ Attention Pool 2D
Implementations of 2D spatial feature pooling using multi-head attention instead of average pool.
Based on idea in CLIP by OpenAI, licensed Apache 2.0
https://github.com/openai/CLIP/blob/3b473b0e682c091a9e53623eebc1ca1657385717/clip/model.py
Hacked together by / Copyright 2021 Ross Wightman
"""... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/split_batchnorm.py | """ Split BatchNorm
A PyTorch BatchNorm layer that splits input batch into N equal parts and passes each through
a separate BN layer. The first split is passed through the parent BN layers with weight/bias
keys the same as the original BN. All other splits pass through BN sub-layers under the '.aux_bn'
namespace.
Thi... | 0 |
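The docstring above describes the mechanism: the batch is cut into N equal parts and each part is normalized with its own statistics (the first split via the parent BN, the rest via '.aux_bn' children). A toy numpy sketch of that forward pass, with hand-rolled normalizers standing in for real BatchNorm layers:

```python
import numpy as np

def make_bn(eps=1e-5):
    # stand-in for a BatchNorm layer: normalize per feature over the batch
    def bn(x):
        mu = x.mean(axis=0, keepdims=True)
        var = x.var(axis=0, keepdims=True)
        return (x - mu) / np.sqrt(var + eps)
    return bn

def split_batchnorm_forward(x, norms):
    # split the batch into len(norms) equal chunks; each chunk gets its
    # own normalizer, mirroring the parent-BN / aux_bn scheme above
    chunks = np.array_split(x, len(norms), axis=0)
    return np.concatenate([bn(c) for bn, c in zip(norms, chunks)], axis=0)
```

Each chunk ends up normalized against its own running statistics, which is what makes this useful for techniques such as AdvProp's separate clean/adversarial batch norms.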
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/trace_utils.py | try:
from torch import _assert
except ImportError:
def _assert(condition: bool, message: str):
assert condition, message
def _float_to_int(x: float) -> int:
"""
Symbolic tracing helper to substitute for inbuilt `int`.
Hint: Inbuilt `int` can't accept an argument of type `Proxy`
"""
... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/lambda_layer.py | """ Lambda Layer
Paper: `LambdaNetworks: Modeling Long-Range Interactions Without Attention`
- https://arxiv.org/abs/2102.08602
@misc{2102.08602,
Author = {Irwan Bello},
Title = {LambdaNetworks: Modeling Long-Range Interactions Without Attention},
Year = {2021},
}
Status:
This impl is a WIP. Code snippets in the... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/split_attn.py | """ Split Attention Conv2d (for ResNeSt Models)
Paper: `ResNeSt: Split-Attention Networks` - https://arxiv.org/abs/2004.08955
Paper: `ResNeSt: Split-Attention Networks` - https://arxiv.org/abs/2004.08955
Adapted from original PyTorch impl at https://github.com/zhanghang1989/ResNeSt
Modified for torchscript compat, performance, and consistency with timm by Ross Wightman
"""
import torch
impor... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/non_local_attn.py | """ Bilinear-Attention-Transform and Non-Local Attention
Paper: `Non-Local Neural Networks With Grouped Bilinear Attentional Transforms`
- https://openaccess.thecvf.com/content_CVPR_2020/html/Chi_Non-Local_Neural_Networks_With_Grouped_Bilinear_Attentional_Transforms_CVPR_2020_paper.html
Adapted from original code:... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/cond_conv2d.py | """ PyTorch Conditionally Parameterized Convolution (CondConv)
Paper: CondConv: Conditionally Parameterized Convolutions for Efficient Inference
(https://arxiv.org/abs/1904.04971)
Hacked together by / Copyright 2020 Ross Wightman
"""
import math
from functools import partial
import numpy as np
import torch
from torc... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/attention_pool.py | from typing import Optional
import torch
import torch.nn as nn
import torch.nn.functional as F
from .config import use_fused_attn
from .mlp import Mlp
from .weight_init import trunc_normal_tf_
class AttentionPoolLatent(nn.Module):
""" Attention pooling w/ latent query
"""
fused_attn: torch.jit.Final[boo... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/selective_kernel.py | """ Selective Kernel Convolution/Attention
Paper: Selective Kernel Networks (https://arxiv.org/abs/1903.06586)
Hacked together by / Copyright 2020 Ross Wightman
"""
import torch
from torch import nn as nn
from .conv_bn_act import ConvNormActAa
from .helpers import make_divisible
from .trace_utils import _assert
de... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/space_to_depth.py | import torch
import torch.nn as nn
class SpaceToDepth(nn.Module):
bs: torch.jit.Final[int]
def __init__(self, block_size=4):
super().__init__()
assert block_size == 4
self.bs = block_size
def forward(self, x):
N, C, H, W = x.size()
x = x.view(N, C, H // self.bs, s... | 0 |
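The view/permute chain is truncated above; the rearrangement such a module performs can be sketched in numpy as follows (assumed from the (N, C, H, W) -> (N, C*bs*bs, H/bs, W/bs) layout, not a line-for-line copy of the timm code):

```python
import numpy as np

def space_to_depth(x, bs=4):
    # fold each bs x bs spatial block into the channel dimension:
    # (N, C, H, W) -> (N, C * bs * bs, H // bs, W // bs)
    n, c, h, w = x.shape
    x = x.reshape(n, c, h // bs, bs, w // bs, bs)
    x = x.transpose(0, 3, 5, 1, 2, 4)
    return x.reshape(n, c * bs * bs, h // bs, w // bs)
```

This trades spatial resolution for channel depth without discarding any values, which is how TResNet-style stems replace the usual strided stem convolution.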
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/gather_excite.py | """ Gather-Excite Attention Block
Paper: `Gather-Excite: Exploiting Feature Context in CNNs` - https://arxiv.org/abs/1810.12348
Official code here, but it's only partial impl in Caffe: https://github.com/hujie-frank/GENet
I've tried to support all of the extent both w/ and w/o params. I don't believe I've seen anoth... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/squeeze_excite.py | """ Squeeze-and-Excitation Channel Attention
An SE implementation originally based on PyTorch SE-Net impl.
Has since evolved with additional functionality / configuration.
Paper: `Squeeze-and-Excitation Networks` - https://arxiv.org/abs/1709.01507
Also included is Effective Squeeze-Excitation (ESE).
Paper: `CenterMa... | 0 |
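Squeeze-and-Excitation in a nutshell: global-average-pool each channel ("squeeze"), pass the result through a small bottleneck with a sigmoid ("excite"), and rescale the feature map. A bare numpy sketch with hypothetical weight arguments (real SE modules use 1x1 convs with a configurable reduction ratio):

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    # x: (N, C, H, W); w1: (C, C_red); w2: (C_red, C)
    s = x.mean(axis=(2, 3))                  # squeeze: (N, C)
    h = np.maximum(s @ w1, 0.0)              # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(h @ w2)))   # sigmoid gate in (0, 1)
    return x * gate[:, :, None, None]        # excite: rescale channels
```

Because the gate lies strictly inside (0, 1), SE can only attenuate channels, never amplify them.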
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/pos_embed_rel.py | """ Relative position embedding modules and functions
Hacked together by / Copyright 2022 Ross Wightman
"""
import math
import os
from typing import Optional, Tuple
import torch
import torch.nn as nn
import torch.nn.functional as F
from .interpolate import RegularGridInterpolator
from .mlp import Mlp
from .weight_in... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/interpolate.py | """ Interpolation helpers for timm layers
RegularGridInterpolator from https://github.com/sbarratt/torch_interpolations
Copyright Shane Barratt, Apache 2.0 license
"""
import torch
from itertools import product
class RegularGridInterpolator:
""" Interpolate data defined on a rectilinear grid with even or uneven ... | 0 |
hf_public_repos/pytorch-image-models/timm | hf_public_repos/pytorch-image-models/timm/layers/classifier.py | """ Classifier head and layer factory
Hacked together by / Copyright 2020 Ross Wightman
"""
from collections import OrderedDict
from functools import partial
from typing import Optional, Union, Callable
import torch
import torch.nn as nn
from torch.nn import functional as F
from .adaptive_avgmax_pool import SelectAd... | 0 |