Thank you for your help with this.
So to use cv2 I would have to first capture the frames, process them on the host to add the padding, and then feed them back to the NN?
I'm not too sure how to go about that: following the tutorials, I first create all of the nodes and links in the pipeline, and only then start the main loop, where I already have everything coming from the camera.
How could I pass the cv2-modified frames from the host back to the NN node on the camera?
Do you think feeding padded frames, while keeping the size at 300x300, would increase the inference time?
What I'd like to achieve is person detection across the entire sensor.
Not keeping the preview aspect ratio, as Brandon showed above, seems to work, but I was wondering whether doing so reduces the accuracy of the detections.
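For reference, the letterbox padding I have in mind is roughly the sketch below: scale the frame so the longer side fits 300, then pad the shorter side with black so the NN still gets a 300x300 input without the content being stretched. It's numpy-only (nearest-neighbour resize via index arrays) just to keep the example self-contained; in practice I'd use cv2.resize and cv2.copyMakeBorder, and the 300x300 size matches my NN input.

```python
import numpy as np

def letterbox(frame, size=300):
    """Resize keeping aspect ratio, then pad with black to size x size.

    Nearest-neighbour resize via integer index arrays so only numpy is
    needed; with OpenCV this would be cv2.resize + cv2.copyMakeBorder.
    """
    h, w = frame.shape[:2]
    scale = size / max(h, w)
    new_h = int(round(h * scale))
    new_w = int(round(w * scale))
    # index maps from output pixels back to source pixels
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    resized = frame[rows[:, None], cols]
    # centre the resized image on a black square canvas
    out = np.zeros((size, size, frame.shape[2]), dtype=frame.dtype)
    top = (size - new_h) // 2
    left = (size - new_w) // 2
    out[top:top + new_h, left:left + new_w] = resized
    return out

# e.g. a 1080p-shaped dummy frame -> 300x300 letterboxed, bars top/bottom
padded = letterbox(np.zeros((1080, 1920, 3), dtype=np.uint8))
```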
Thank you very much once again.