repo_id | file_path | content | __index_level_0__ |
|---|---|---|---|
hf_public_repos | hf_public_repos/pytorch-image-models/MANIFEST.in | include timm/models/_pruned/*.txt
include timm/data/_info/*.txt
include timm/data/_info/*.json
| 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/distributed_train.sh | #!/bin/bash
NUM_PROC=$1
shift
torchrun --nproc_per_node=$NUM_PROC train.py "$@"
| 0 |
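The `distributed_train.sh` launcher above is short enough to sketch its argument handling: the first positional argument becomes the process count, and `shift` drops it so the remaining arguments flow through to `train.py` untouched via `"$@"`. The helper below is a hypothetical stand-in that only echoes the command it would run rather than invoking `torchrun`:

```shell
# Preview what the launcher would execute, without invoking torchrun.
# launch_preview is a hypothetical helper; the real script runs torchrun directly.
launch_preview() {
  local num_proc=$1   # first arg: number of processes per node
  shift               # drop it; "$@" now holds only train.py's arguments
  echo "torchrun --nproc_per_node=$num_proc train.py $*"
}
```

For example, `launch_preview 4 --model resnet50 -b 128` prints `torchrun --nproc_per_node=4 train.py --model resnet50 -b 128`.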
hf_public_repos | hf_public_repos/pytorch-image-models/LICENSE | Apache License
Version 2.0, January 2004
http://www.apache.org/licenses/
TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
1. Definitions.
"License" shall mean the terms and conditions for use, reproduction,
... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/README.md | # PyTorch Image Models
- [What's New](#whats-new)
- [Introduction](#introduction)
- [Models](#models)
- [Features](#features)
- [Results](#results)
- [Getting Started (Documentation)](#getting-started-documentation)
- [Train, Validation, Inference Scripts](#train-validation-inference-scripts)
- [Awesome PyTorch Resourc... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/model-index.yml | Import:
- ./docs/models/*.md
Library:
Name: PyTorch Image Models
Headline: PyTorch image models, scripts, pretrained weights
Website: https://rwightman.github.io/pytorch-image-models/
Repository: https://github.com/rwightman/pytorch-image-models
Docs: https://rwightman.github.io/pytorch-image-models/
README... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/CONTRIBUTING.md | *This guideline is very much a work-in-progress.*
Contributions to `timm` for code, documentation, and tests are more than welcome!
There haven't been any formal guidelines to date, so please bear with me, and feel free to add to this guide.
# Coding style
Code linting and auto-format (black) are not currently in place ... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/benchmark.py | #!/usr/bin/env python3
""" Model Benchmark Script
An inference and train step benchmark script for timm models.
Hacked together by Ross Wightman (https://github.com/rwightman)
"""
import argparse
import csv
import json
import logging
import time
from collections import OrderedDict
from contextlib import suppress
from... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/bulk_runner.py | #!/usr/bin/env python3
""" Bulk Model Script Runner
Run validation or benchmark script in separate process for each model
Benchmark all 'vit*' models:
python bulk_runner.py --model-list 'vit*' --results-file vit_bench.csv benchmark.py --amp -b 512
Validate all models:
python bulk_runner.py --model-list all --resul... | 0 |
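The `--model-list 'vit*'` pattern above is ordinary shell-style globbing over model names. A minimal sketch of that matching, using `fnmatch` against a hypothetical handful of model names (not the runner's actual implementation):

```python
import fnmatch

# Hypothetical model names standing in for timm's full registry.
models = ["vit_base_patch16_224", "vit_small_patch32_224", "resnet50"]

# Select everything matching the 'vit*' wildcard, as the runner's
# --model-list argument would.
selected = [m for m in models if fnmatch.fnmatch(m, "vit*")]
print(selected)  # ['vit_base_patch16_224', 'vit_small_patch32_224']
```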
hf_public_repos | hf_public_repos/pytorch-image-models/clean_checkpoint.py | #!/usr/bin/env python3
""" Checkpoint Cleaning Script
Takes training checkpoints with GPU tensors, optimizer state, extra dict keys, etc.
and outputs a CPU tensor checkpoint with only the `state_dict` along with SHA256
calculation for model zoo compatibility.
Hacked together by / Copyright 2020 Ross Wightman (https:... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/requirements-dev.txt | pytest
pytest-timeout
pytest-xdist
pytest-forked
expecttest
| 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/train.py | #!/usr/bin/env python3
""" ImageNet Training Script
This is intended to be a lean and easily modifiable ImageNet training script that reproduces ImageNet
training results with some of the latest networks and training techniques. It favours canonical PyTorch
and standard Python style over trying to be able to 'do it al... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/hubconf.py | dependencies = ['torch']
import timm
globals().update(timm.models._registry._model_entrypoints)
| 0 |
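The two-line `hubconf.py` above works by splatting timm's internal entrypoint registry into module globals, so `torch.hub` can resolve model names as module-level attributes. A self-contained sketch of that pattern with a toy registry (the factory and model names here are hypothetical, not timm's actual internals):

```python
# Toy registry mapping model names to constructor functions.
def _make_entrypoint(name):
    def entrypoint(pretrained=False):
        return {"model": name, "pretrained": pretrained}
    entrypoint.__name__ = name
    return entrypoint

_model_entrypoints = {n: _make_entrypoint(n) for n in ("resnet18", "resnet50")}

# The hubconf trick: expose each registry entry as a module-level name,
# which is exactly what torch.hub.load() looks up on the hubconf module.
globals().update(_model_entrypoints)

model = resnet18(pretrained=True)
```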
hf_public_repos | hf_public_repos/pytorch-image-models/setup.py | """ Setup
"""
from setuptools import setup, find_packages
from codecs import open
from os import path
here = path.abspath(path.dirname(__file__))
# Get the long description from the README file
with open(path.join(here, 'README.md'), encoding='utf-8') as f:
long_description = f.read()
exec(open('timm/version.py'... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/inference.py | #!/usr/bin/env python3
"""PyTorch Inference Script
An example inference script that outputs top-k class ids for images in a folder into a csv.
Hacked together by / Copyright 2020 Ross Wightman (https://github.com/rwightman)
"""
import argparse
import json
import logging
import os
import time
from contextlib import su... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/validate.py | #!/usr/bin/env python3
""" ImageNet Validation Script
This is intended to be a lean and easily modifiable ImageNet validation script for evaluating pretrained
models or training checkpoints against ImageNet or similarly organized image datasets. It prioritizes
canonical PyTorch, standard Python style, and good perform... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/onnx_export.py | """ ONNX export script
Export PyTorch models as ONNX graphs.
This export script originally started as an adaptation of code snippets found at
https://pytorch.org/tutorials/advanced/super_resolution_with_onnxruntime.html
The default parameters work with PyTorch 1.6 and ONNX 1.7 and produce an optimal ONNX graph
for h... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/requirements.txt | torch>=1.7
torchvision
pyyaml
huggingface_hub
safetensors>=0.2 | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/setup.cfg | [dist_conda]
conda_name_differences = 'torch:pytorch'
channels = pytorch
noarch = True
| 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/onnx_validate.py | """ ONNX-runtime validation script
This script was created to verify accuracy and performance of exported ONNX
models running with the onnxruntime. It utilizes the PyTorch dataloader/processing
pipeline for a fair comparison against the originals.
Copyright 2020 Ross Wightman
"""
import argparse
import numpy as np
im... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/avg_checkpoints.py | #!/usr/bin/env python3
""" Checkpoint Averaging Script
This script averages all model weights for checkpoints in a specified path that match
the specified filter wildcard. All checkpoints must be from the exact same model.
For any hope of decent results, the checkpoints should be from the same or child
(via resumes) tr... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/pyproject.toml | [tool.pytest.ini_options]
markers = [
"base: marker for model tests using the basic setup",
"cfg: marker for model tests checking the config",
"torchscript: marker for model tests using torchscript",
"features: marker for model tests checking feature extraction",
"fxforward: marker for model tests u... | 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/requirements-docs.txt | mkdocs
mkdocs-material
mkdocs-redirects
mdx_truly_sane_lists
mkdocs-awesome-pages-plugin
| 0 |
hf_public_repos | hf_public_repos/pytorch-image-models/mkdocs.yml | site_name: 'Pytorch Image Models'
site_description: 'Pretrained Image Recognition Models'
repo_name: 'rwightman/pytorch-image-models'
repo_url: 'https://github.com/rwightman/pytorch-image-models'
nav:
- index.md
- models.md
- ... | models/*.md
- results.md
- scripts.md
- training_hparam_examples.md
- featu... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/tests/test_utils.py | from torch.nn.modules.batchnorm import BatchNorm2d
from torchvision.ops.misc import FrozenBatchNorm2d
import timm
from timm.utils.model import freeze, unfreeze
def test_freeze_unfreeze():
model = timm.create_model('resnet18')
# Freeze all
freeze(model)
# Check top level module
assert model.fc.we... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/tests/test_layers.py | import torch
import torch.nn as nn
from timm.layers import create_act_layer, set_layer_config
import importlib
import os
torch_backend = os.environ.get('TORCH_BACKEND')
if torch_backend is not None:
importlib.import_module(torch_backend)
torch_device = os.environ.get('TORCH_DEVICE', 'cpu')
class MLP(nn.Module):... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/tests/test_models.py | """Run tests for all models
Tests that run on CI should have a specific marker, e.g. @pytest.mark.base. This
marker is used to parallelize the CI runs, with one runner for each marker.
If new tests are added, ensure that they use one of the existing markers
(documented in pyproject.toml > pytest > markers) or that a ... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/tests/test_optim.py | """ Optimizer Tests
These tests were adapted from PyTorch's optimizer tests.
"""
import math
import pytest
import functools
from copy import deepcopy
import torch
from torch.testing._internal.common_utils import TestCase
from torch.nn import Parameter
from timm.scheduler import PlateauLRScheduler
from timm.optim imp... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/models.md | # Model Summaries
The model architectures included come from a wide variety of sources. Sources, including papers, original impl ("reference code") that I rewrote / adapted, and PyTorch impl that I leveraged directly ("code") are listed below.
Most included models have pretrained weights. The weights are either:
1. ... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/archived_changes.md | # Archived Changes
### Nov 22, 2021
* A number of updated weights and new model defs
* `eca_halonext26ts` - 79.5 @ 256
* `resnet50_gn` (new) - 80.1 @ 224, 81.3 @ 288
* `resnet50` - 80.7 @ 224, 80.9 @ 288 (trained at 176, not replacing current a1 weights as default since these don't scale as well to higher res, ... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/index.md | # Getting Started
## Welcome
Welcome to the `timm` documentation, a lean set of docs that covers the basics of `timm`.
For a more comprehensive set of docs (currently under development), please visit [timmdocs](http://timm.fast.ai) by [Aman Arora](https://github.com/amaarora).
## Install
The library can be install... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/scripts.md | # Scripts
Train, validation, inference, and checkpoint cleaning scripts are included in the GitHub root folder. Scripts are not currently packaged in the pip release.
The training and validation scripts evolved from early versions of the [PyTorch Imagenet Examples](https://github.com/pytorch/examples). I have added signi... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/results.md | # Results
CSV files containing ImageNet-1K and out-of-distribution (OOD) test set validation results for all models with pretrained weights are located in the repository [results folder](https://github.com/rwightman/pytorch-image-models/tree/master/results).
## Self-trained Weights
The table below includes ImageNe... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/changes.md | # Recent Changes
### Aug 29, 2022
* MaxVit window size scales with img_size by default. Add new RelPosMlp MaxViT weight that leverages this:
* `maxvit_rmlp_nano_rw_256` - 83.0 @ 256, 83.6 @ 320 (T)
### Aug 26, 2022
* CoAtNet (https://arxiv.org/abs/2106.04803) and MaxVit (https://arxiv.org/abs/2204.01697) `timm` or... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/feature_extraction.md | # Feature Extraction
All of the models in `timm` have consistent mechanisms for obtaining various types of features from the model for tasks besides classification.
## Penultimate Layer Features (Pre-Classifier Features)
The features from the penultimate model layer can be obtained in several ways without requiring ... | 0 |
hf_public_repos/pytorch-image-models | hf_public_repos/pytorch-image-models/docs/training_hparam_examples.md | # Training Examples
## EfficientNet-B2 with RandAugment - 80.4 top-1, 95.1 top-5
These params are for dual Titan RTX cards with NVIDIA Apex installed:
`./distributed_train.sh 2 /imagenet/ --model efficientnet_b2 -b 128 --sched step --epochs 450 --decay-epochs 2.4 --decay-rate .97 --opt rmsproptf --opt-eps .001 -j 8 -... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/selecsls.md | # SelecSLS
**SelecSLS** uses novel selective long and short range skip connections to improve the information flow, allowing for a drastically faster network without compromising accuracy.
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('selecsls42b'... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/densenet.md | # DenseNet
**DenseNet** is a type of convolutional neural network that utilises dense connections between layers, through [Dense Blocks](http://www.paperswithcode.com/method/dense-block), where we connect *all layers* (with matching feature-map sizes) directly with each other. To preserve the feed-forward nature, each... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/dla.md | # Deep Layer Aggregation
Extending “shallow” skip connections, **Deep Layer Aggregation (DLA)** incorporates more depth and sharing. The authors introduce two structures for deep layer aggregation (DLA): iterative deep aggregation (IDA) and hierarchical deep aggregation (HDA). These structures are expressed through ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/tf-mobilenet-v3.md | # (Tensorflow) MobileNet v3
**MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-bloc... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/csp-resnext.md | # CSP-ResNeXt
**CSPResNeXt** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNeXt](https://paperswithcode.com/method/resnext). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use o... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/spnasnet.md | # SPNASNet
**Single-Path NAS** is a novel differentiable NAS method for designing hardware-efficient ConvNets in less than 4 hours.
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('spnasnet_100', pretrained=True)
model.eval()
```
To load and prepro... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/swsl-resnext.md | # SWSL ResNeXt
A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations)... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/tf-inception-v3.md | # (Tensorflow) Inception v3
**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://pape... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/ig-resnext.md | # Instagram ResNeXt WSL
A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transfo... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/rexnet.md | # RexNet
**Rank Expansion Networks** (ReXNets) follow a set of new design principles for designing bottlenecks in image classification models. Authors refine each layer by 1) expanding the input channel size of the convolution layer and 2) replacing the [ReLU6s](https://www.paperswithcode.com/method/relu6).
## How do... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/ssl-resnext.md | # SSL ResNeXT
A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/resnext.md | # ResNeXt
A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformations) $C$,... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/tf-mixnet.md | # (Tensorflow) MixNet
**MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution).
The weights from this model were ported from [Tenso... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/efficientnet-pruned.md | # EfficientNet (Knapsack Pruned)
**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/fbnet.md | # FBNet
**FBNet** is a type of convolutional neural architecture discovered through [DNAS](https://paperswithcode.com/method/dnas) neural architecture search. It utilises a basic type of image model block inspired by [MobileNetv2](https://paperswithcode.com/method/mobilenetv2) that utilises depthwise convolutions and... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/gloun-seresnext.md | # (Gluon) SE-ResNeXt
**SE ResNeXt** is a variant of a [ResNext](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
The weights from this... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/legacy-se-resnet.md | # (Legacy) SE-ResNet
**SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
## How do I use this mod... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/inception-v4.md | # Inception v4
**Inception-v4** is a convolutional neural network architecture that builds on previous iterations of the Inception family by simplifying the architecture and using more inception modules than [Inception-v3](https://paperswithcode.com/method/inception-v3).
## How do I use this model on an image?
To load... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/tresnet.md | # TResNet
A **TResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that aims to boost accuracy while maintaining GPU training and inference efficiency. It contains several design tricks including a SpaceToDepth stem, [Anti-Alias downsampling](https://paperswithcode.com/method/anti-alias-down... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/tf-efficientnet.md | # (Tensorflow) EfficientNet
**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly scal... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/nasnet.md | # NASNet
**NASNet** is a type of convolutional neural network discovered through neural architecture search. The building blocks consist of normal and reduction cells.
## How do I use this model on an image?
To load a pretrained model:
```python
import timm
model = timm.create_model('nasnetalarge', pretrained=True)
... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/ssl-resnet.md | # SSL ResNet
**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual b... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/mobilenet-v3.md | # MobileNet v3
**MobileNetV3** is a convolutional neural network that is designed for mobile phone CPUs. The network design includes the use of a [hard swish activation](https://paperswithcode.com/method/hard-swish) and [squeeze-and-excitation](https://paperswithcode.com/method/squeeze-and-excitation-block) modules in... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/ese-vovnet.md | # ESE-VoVNet
**VoVNet** is a convolutional neural network that seeks to make [DenseNet](https://paperswithcode.com/method/densenet) more efficient by concatenating all features only once, in the last feature map, which keeps the input size constant and enables enlarging the output channel.
Read about [one-shot aggregatio... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/mixnet.md | # MixNet
**MixNet** is a type of convolutional neural network discovered via AutoML that utilises [MixConvs](https://paperswithcode.com/method/mixconv) instead of regular [depthwise convolutions](https://paperswithcode.com/method/depthwise-convolution).
## How do I use this model on an image?
To load a pretrained mod... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/gloun-resnet.md | # (Gluon) ResNet
**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residu... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/.pages | title: Model Pages | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/resnet-d.md | # ResNet-D
**ResNet-D** is a modification on the [ResNet](https://paperswithcode.com/method/resnet) architecture that utilises an [average pooling](https://paperswithcode.com/method/average-pooling) tweak for downsampling. The motivation is that in the unmodified ResNet, the [1×1 convolution](https://paperswithcode.co... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/ecaresnet.md | # ECA-ResNet
An **ECA ResNet** is a variant on a [ResNet](https://paperswithcode.com/method/resnet) that utilises an [Efficient Channel Attention module](https://paperswithcode.com/method/efficient-channel-attention). Efficient Channel Attention is an architectural unit based on [squeeze-and-excitation blocks](https:/... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/legacy-senet.md | # (Legacy) SENet
A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
The weights from this model were ported from Gluon.
## ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/tf-efficientnet-lite.md | # (Tensorflow) EfficientNet Lite
**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrary scales these factors, the EfficientNet scaling method uniformly... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/resnest.md | # ResNeSt
A **ResNeSt** is a variant on a [ResNet](https://paperswithcode.com/method/resnet), which instead stacks [Split-Attention blocks](https://paperswithcode.com/method/split-attention). The cardinal group representations are then concatenated along the channel dimension: $V = \text{Concat}${$V^{1},V^{2},\cdots{V... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/mobilenet-v2.md | # MobileNet v2
**MobileNetV2** is a convolutional neural network architecture that seeks to perform well on mobile devices. It is based on an [inverted residual structure](https://paperswithcode.com/method/inverted-residual-block) where the residual connections are between the bottleneck layers. The intermediate expa... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/gloun-xception.md | # (Gluon) Xception
**Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution](https://paperswithcode.com/method/depthwise-separable-convolution) layers.
The weights from this model were ported from [Gluon](https://cv.gluon.ai/model_zoo/classification.html).
##... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/wide-resnet.md | # Wide ResNet
**Wide Residual Networks** are a variant on [ResNets](https://paperswithcode.com/method/resnet) where we decrease depth and increase the width of residual networks. This is achieved through the use of [wide residual blocks](https://paperswithcode.com/method/wide-residual-block).
## How do I use this mod... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/csp-resnet.md | # CSP-ResNet
**CSPResNet** is a convolutional neural network where we apply the Cross Stage Partial Network (CSPNet) approach to [ResNet](https://paperswithcode.com/method/resnet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The use of a ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/legacy-se-resnext.md | # (Legacy) SE-ResNeXt
**SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
## How do I use this... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/se-resnet.md | # SE-ResNet
**SE ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
## How do I use this model on an ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/res2net.md | # Res2Net
**Res2Net** is an image model that employs a variation on bottleneck residual blocks, [Res2Net Blocks](https://paperswithcode.com/method/res2net-block). The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/dpn.md | # Dual Path Network (DPN)
A **Dual Path Network (DPN)** is a convolutional neural network which presents a new topology of connection paths internally. The intuition is that [ResNets](https://paperswithcode.com/method/resnet) enables feature re-usage while DenseNet enables new feature exploration, and both are importa... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/res2next.md | # Res2NeXt
**Res2NeXt** is an image model that employs a variation on [ResNeXt](https://paperswithcode.com/method/resnext) bottleneck residual blocks. The motivation is to be able to represent features at multiple scales. This is achieved through a novel building block for CNNs that constructs hierarchical residual-li... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/xception.md | # Xception
**Xception** is a convolutional neural network architecture that relies solely on [depthwise separable convolution layers](https://paperswithcode.com/method/depthwise-separable-convolution).
The weights from this model were ported from [Tensorflow/Models](https://github.com/tensorflow/models).
## How do I... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/ensemble-adversarial.md | # Ensemble Adversarial Inception ResNet v2
**Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception arch... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/seresnext.md | # SE-ResNeXt
**SE ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
## How do I use this model on... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/regnetx.md | # RegNetX
**RegNetX** is a convolutional network design space with simple, regular models with parameters: depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$, and generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear pa... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/swsl-resnet.md | # SWSL ResNet
**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/efficientnet.md | # EfficientNet
**EfficientNet** is a convolutional neural network architecture and scaling method that uniformly scales all dimensions of depth/width/resolution using a *compound coefficient*. Unlike conventional practice that arbitrarily scales these factors, the EfficientNet scaling method uniformly scales network wi... | 0 |
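The compound scaling rule can be sketched directly (a toy, not timm code; alpha, beta and gamma are the grid-searched base coefficients reported in the EfficientNet paper):

```python
def compound_scale(phi, alpha=1.2, beta=1.1, gamma=1.15):
    # Depth, width and resolution multipliers all grow together as powers of
    # the compound coefficient phi. The base coefficients are chosen so that
    # alpha * beta**2 * gamma**2 is roughly 2, i.e. raising phi by one
    # approximately doubles the FLOPS budget.
    return alpha ** phi, beta ** phi, gamma ** phi
```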
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/mnasnet.md | # MnasNet
**MnasNet** is a type of convolutional neural network optimized for mobile devices that is discovered through mobile neural architecture search, which explicitly incorporates model latency into the main objective so that the search can identify a model that achieves a good trade-off between accuracy and late... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/gloun-resnext.md | # (Gluon) ResNeXt
A **ResNeXt** repeats a [building block](https://paperswithcode.com/method/resnext-block) that aggregates a set of transformations with the same topology. Compared to a [ResNet](https://paperswithcode.com/method/resnet), it exposes a new dimension, *cardinality* (the size of the set of transformatio... | 0 |
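The aggregated-transformation idea can be sketched with toy scalar functions standing in for the parallel branches (an illustration of the structure, not the real convolutional block):

```python
def aggregated_transform(x, branches):
    # C parallel transforms with identical topology (C = cardinality,
    # here toy scalar functions) are summed, plus the identity shortcut.
    return x + sum(t(x) for t in branches)
```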
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/regnety.md | # RegNetY
**RegNetY** is a convolutional network design space with simple, regular models with parameters: depth $d$, initial width $w\_{0} > 0$, and slope $w\_{a} > 0$; it generates a different block width $u\_{j}$ for each block $j < d$. The key restriction for the RegNet types of model is that there is a linear pa... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/inception-resnet-v2.md | # Inception ResNet v2
**Inception-ResNet-v2** is a convolutional neural architecture that builds on the Inception family of architectures but incorporates [residual connections](https://paperswithcode.com/method/residual-connection) (replacing the filter concatenation stage of the Inception architecture).
## How do I... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/pnasnet.md | # PNASNet
**Progressive Neural Architecture Search**, or **PNAS**, is a method for learning the structure of convolutional neural networks (CNNs). It uses a sequential model-based optimization (SMBO) strategy, where we search the space of cell structures, starting with simple (shallow) models and progressing to comple... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/noisy-student.md | # Noisy Student (EfficientNet)
**Noisy Student Training** is a semi-supervised learning approach. It extends the idea of self-training
and distillation with the use of equal-or-larger student models and noise added to the student during learning. It has three main steps:
1. train a teacher model on labeled images
2.... | 0 |
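One round of the self-training loop described above can be sketched abstractly (a toy: the `teacher_predict` and `train_student` callables are placeholders, and the noise injection mentioned in the paper is only noted in a comment):

```python
def noisy_student_round(teacher_predict, labeled, unlabeled, train_student):
    # The teacher pseudo-labels the unlabeled pool; the student (normally
    # equal-or-larger, and trained with noise such as dropout or data
    # augmentation) is then fit on labeled + pseudo-labeled data together.
    pseudo_labeled = [(x, teacher_predict(x)) for x in unlabeled]
    return train_student(labeled + pseudo_labeled)
```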
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/vision-transformer.md | # Vision Transformer (ViT)
The **Vision Transformer** is a model for image classification that employs a Transformer-like architecture over patches of the image. This includes the use of [Multi-Head Attention](https://paperswithcode.com/method/multi-head-attention), [Scaled Dot-Product Attention](https://paperswithcod... | 0 |
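The patch tokenization step is easy to quantify (a sketch for intuition; the real model also linearly embeds each patch and prepends a class token):

```python
def num_patches(image_size, patch_size):
    # A square image is split into non-overlapping patch_size x patch_size
    # tiles; each tile becomes one input token for the Transformer.
    assert image_size % patch_size == 0, "image must divide evenly into patches"
    return (image_size // patch_size) ** 2
```

At the common 224x224 resolution with 16x16 patches this yields a sequence of 196 tokens.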
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/skresnet.md | # SK-ResNet
**SK ResNet** is a variant of a [ResNet](https://www.paperswithcode.com/method/resnet) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNet are replaced by the proposed [SK convo... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/skresnext.md | # SK-ResNeXt
**SK ResNeXt** is a variant of a [ResNeXt](https://www.paperswithcode.com/method/resnext) that employs a [Selective Kernel](https://paperswithcode.com/method/selective-kernel) unit. In general, all the large kernel convolutions in the original bottleneck blocks in ResNeXt are replaced by the proposed [SK ... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/csp-darknet.md | # CSP-DarkNet
**CSPDarknet53** is a convolutional neural network and backbone for object detection that uses [DarkNet-53](https://paperswithcode.com/method/darknet-53). It employs a CSPNet strategy to partition the feature map of the base layer into two parts and then merges them through a cross-stage hierarchy. The u... | 0 |
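The cross-stage partial strategy can be sketched on a flat list standing in for the channel dimension (illustrative only; `dense_path` is a placeholder for the expensive stage of the real network):

```python
def csp_split_merge(features, dense_path):
    # Split the base layer's channels in two; only one half goes through
    # the expensive path, then the halves are concatenated (merged) again.
    half = len(features) // 2
    untouched, processed = features[:half], dense_path(features[half:])
    return untouched + processed
```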
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/inception-v3.md | # Inception v3
**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswithcode.co... | 0 |
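The label smoothing technique mentioned above has a one-line definition, sketched here for reference (standard formulation, not code from this repo):

```python
def smooth_labels(one_hot, eps=0.1):
    # Mix the one-hot target with the uniform distribution over K classes:
    # y_k <- (1 - eps) * y_k + eps / K. This discourages over-confident logits.
    k = len(one_hot)
    return [(1.0 - eps) * y + eps / k for y in one_hot]
```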
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/adversarial-inception-v3.md | # Adversarial Inception v3
**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paper... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/gloun-inception-v3.md | # (Gluon) Inception v3
**Inception v3** is a convolutional neural network architecture from the Inception family that makes several improvements including using [Label Smoothing](https://paperswithcode.com/method/label-smoothing), Factorized 7 x 7 convolutions, and the use of an [auxiliary classifier](https://paperswit... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/hrnet.md | # HRNet
**HRNet**, or **High-Resolution Net**, is a general purpose convolutional neural network for tasks like semantic segmentation, object detection and image classification. It is able to maintain high resolution representations through the whole process. We start from a high-resolution convolution stream, gradual... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/resnet.md | # ResNet
**Residual Networks**, or **ResNets**, learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Instead of hoping each few stacked layers directly fit a desired underlying mapping, residual nets let these layers fit a residual mapping. They stack [residual block... | 0 |
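The residual formulation y = F(x) + x can be sketched with a scalar toy (for intuition only; real blocks use convolutions and a tensor-valued shortcut):

```python
def residual_block(x, residual_fn):
    # The block adds its input (identity shortcut) to the learned residual
    # F(x), so a zero residual reduces the block to the identity mapping --
    # which is what makes very deep stacks easier to optimize.
    return residual_fn(x) + x
```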
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/advprop.md | # AdvProp (EfficientNet)
**AdvProp** is an adversarial training scheme which treats adversarial examples as additional examples, to prevent overfitting. Key to the method is the usage of a separate auxiliary batch norm for adversarial examples, as they have different underlying distributions to normal examples.
The w... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/gloun-senet.md | # (Gluon) SENet
A **SENet** is a convolutional neural network architecture that employs [squeeze-and-excitation blocks](https://paperswithcode.com/method/squeeze-and-excitation-block) to enable the network to perform dynamic channel-wise feature recalibration.
The weights from this model were ported from [Gluon](http... | 0 |
hf_public_repos/pytorch-image-models/docs | hf_public_repos/pytorch-image-models/docs/models/big-transfer.md | # Big Transfer (BiT)
**Big Transfer (BiT)** is a type of pretraining recipe that pre-trains on a large supervised source dataset, and fine-tunes the weights on the target task. Models are trained on the JFT-300M dataset. The finetuned models contained in this collection are finetuned on ImageNet.
## How do I use thi... | 0 |