Hi @RyanLee,
YOLO models are robust to input size changes thanks to their fully convolutional design. The model can process input images of various sizes as long as both dimensions are divisible by the network's maximum stride (a power of 2, typically 32 for most common YOLO versions). So even if a YOLO-based model was trained with, say, a 640x640 input shape, you can still export it with 640x352.
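For example, here's a minimal sketch assuming you're exporting with the Ultralytics Python API (the model path is just illustrative; adapt to whatever export tooling you actually use):

```python
from ultralytics import YOLO

# Load a model trained at 640x640 (path is illustrative)
model = YOLO("yolov8n.pt")

# Export with a rectangular input shape; imgsz is (height, width),
# and both dimensions must be divisible by the max stride (32 here)
model.export(format="onnx", imgsz=(352, 640))
```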
Regarding accuracy degradation: I haven't measured it rigorously myself, but in practice I've never noticed a significant gap.
The reason we sometimes reduce the height during export is that a 16:9 aspect ratio is a much better match for our cameras than 1:1. Additionally, switching the input shape from 640x640 to 640x352 improves latency, since the model has roughly 45% fewer pixels to process (225,280 vs. 409,600), which is crucial in edge AI.
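To illustrate where 352 comes from: for a 640-pixel-wide 16:9 frame the ideal height is 640 * 9/16 = 360, which isn't divisible by 32, so we round down to the nearest stride multiple. A quick back-of-the-envelope sketch (the helper function is made up for illustration):

```python
STRIDE = 32  # max stride of most YOLO backbones

def aligned_height(width: int, aspect_w: int, aspect_h: int, stride: int = STRIDE) -> int:
    """Nearest stride-aligned height (rounded down) for a given width and aspect ratio."""
    ideal = width * aspect_h // aspect_w
    return (ideal // stride) * stride

h = aligned_height(640, 16, 9)  # 640 * 9/16 = 360 -> rounded down to 352
print(h)                        # 352

square = 640 * 640              # 409,600 pixels
rect = 640 * h                  # 225,280 pixels
print(f"{1 - rect / square:.0%} fewer pixels")  # 45% fewer pixels
```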
I hope this addresses all your questions! Please feel free to reach out if anything remains unclear or if you have additional queries.
Wishing you a Merry Christmas and a wonderful holiday season!
Kind regards,
Jan