Hi, I have a dataset with images of slightly different sizes. To train a model on them (before #501), I padded them all to the maximum image size and was able to train successfully. However, when performing inference on a video with dimensions smaller than the maximum image size, the predictions are all shifted relative to the actual limbs (see an example below). It seems the padding added during inference is being applied to the top and left instead of the bottom and right.
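To illustrate what I mean, here is a minimal NumPy sketch (not the library's actual code, just my hypothesis about the behavior): padding on the bottom/right leaves pixel coordinates unchanged, while padding on the top/left shifts every landmark by the padding amount.

```python
import numpy as np

# Hypothetical example: an image smaller than the max training size.
img = np.ones((100, 120), dtype=np.uint8)
max_h, max_w = 128, 128
pad_h, pad_w = max_h - img.shape[0], max_w - img.shape[1]  # 28 rows, 8 cols

# Training-style padding: bottom and right -> a landmark at (y, x)
# stays at (y, x) in the padded image.
train_padded = np.pad(img, ((0, pad_h), (0, pad_w)))

# Apparent inference behavior: top and left -> the same landmark
# lands at (y + pad_h, x + pad_w), shifting all predictions.
infer_padded = np.pad(img, ((pad_h, 0), (pad_w, 0)))

print(train_padded.shape, infer_padded.shape)  # both (128, 128)
print(train_padded[0, 0], infer_padded[0, 0])  # 1 (image pixel) vs 0 (padding)
```

So if the model was trained on bottom/right-padded frames but inference pads top/left, every prediction would be offset by exactly the padding amount, which matches what I'm seeing.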
I performed inference with version 1.1.1.
Thanks for your help!
Sam