---
layout: pytorch_hub_detail
background-class: pytorch-hub-background
body-class: pytorch-hub
title: ShuffleNet v2
summary: "ShuffleNet v2: Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design."
category: researchers
image: pytorch-logo.png
author: Pytorch Team
tags: [CV, image classification]
github-link: https://github.com/pytorch/vision.git
featured_image_1: shufflenet_v2_1.png
featured_image_2: shufflenet_v2_2.png
---

### Model Description

Previously, neural network architecture design was mostly guided by the indirect metric of computation complexity, i.e., FLOPs. However, the direct metric, e.g., speed, also depends on other factors such as memory access cost and platform characteristics. Based on a series of controlled experiments, this work derives several practical guidelines for efficient network design. Accordingly, a new architecture is presented, called ShuffleNet V2. Comprehensive ablation experiments verify that our model is state-of-the-art in terms of the speed and accuracy tradeoff.

| Model structure | Top-1 error | Top-5 error |
| --------------- | ----------- | ----------- |
| shufflenet_v2   | 30.64       | 11.68       |

### Notes on Inputs

All pre-trained models expect input images normalized in the same way, i.e. mini-batches of 3-channel RGB images of shape (3 x H x W), where H and W are expected to be at least 224. The images have to be loaded into a range of [0, 1] and then normalized using mean = [0.485, 0.456, 0.406] and std = [0.229, 0.224, 0.225]. You can use the following transform to normalize:

```python
from torchvision import transforms

normalize = transforms.Normalize(mean=[0.485, 0.456, 0.406],
                                 std=[0.229, 0.224, 0.225])
```
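For context, this normalization is typically combined with resizing and cropping so that H and W meet the 224-pixel minimum and the pixel values land in [0, 1]. A minimal sketch of such a pipeline (the 256-pixel resize is an assumption, following common ImageNet practice, and is not part of the original snippet):

```python
from torchvision import transforms

# Assumed full preprocessing pipeline: resize, center-crop to 224x224,
# convert to a [0, 1] tensor, then apply the normalization defined above.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    normalize,
])
```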

Example:

```python
import torch
model = torch.hub.load('pytorch/vision', 'shufflenet_v2_x1_0', pretrained=True)
```
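To go one step further, a rough inference sketch building on the example above, assuming the `preprocess` pipeline from the Notes on Inputs section and a hypothetical image file `dog.jpg` (neither appears in the original example):

```python
from PIL import Image

model.eval()  # switch the loaded model to evaluation mode

# Hypothetical input image; `preprocess` is the Compose pipeline sketched above.
img = Image.open('dog.jpg').convert('RGB')
batch = preprocess(img).unsqueeze(0)  # add a batch dimension: (1, 3, 224, 224)

with torch.no_grad():
    output = model(batch)  # shape (1, 1000): scores over the 1000 ImageNet classes

probabilities = torch.nn.functional.softmax(output[0], dim=0)
top5 = torch.topk(probabilities, 5)
print(top5.indices.tolist(), top5.values.tolist())  # top-5 class indices and probabilities
```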

### Resources