Models in Torchvision and Ways to Finetune Them

Things on this page are fragmentary and immature notes/thoughts of the author. Please read with your own judgement!

inception_v3 requires an input of size (299, 299), while the other pretrained models require (224, 224). Because some models use adaptive pooling, they can run on inputs of varying sizes without raising errors, but the results are usually incorrect. You must resize/crop an image to the expected input size (and apply the other necessary transformations, e.g., ToTensor and Normalize) before feeding it to a pretrained model.