Hi all,
When using the preview output on a Camera node (v2.6), does it crop or scale the ISP output?
I’ve got the ISP scaling the sensor image down to ~720p, and I want to feed that into a neural network. The docs seem to suggest that the preview output center-crops the image to the requested size rather than scaling it — which would mean throwing away a large part of the field of view.
If that’s the case, is the right approach to use the video output instead and run it through an ImageManip node, so the full frame gets letterboxed/scaled down to the NN input size?
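To be clear about what I mean by letterboxing (scale the whole frame to fit, then pad, instead of cropping), here's the geometry I have in mind as a plain-Python sketch — the 1280x720 source and 300x300 NN input size are just example numbers, not anything from the DepthAI API:

```python
def letterbox_size(src_w, src_h, dst_w, dst_h):
    # Scale factor that fits the entire source frame inside the
    # destination without cropping (may leave padding on one axis).
    s = min(dst_w / src_w, dst_h / src_h)
    new_w, new_h = round(src_w * s), round(src_h * s)
    # Padding needed to center the scaled frame in the destination.
    pad_x = (dst_w - new_w) // 2
    pad_y = (dst_h - new_h) // 2
    return new_w, new_h, pad_x, pad_y

# 720p frame fitted into a 300x300 NN input:
print(letterbox_size(1280, 720, 300, 300))  # → (300, 169, 0, 65)
```

So the whole 16:9 frame survives, just with black bars top and bottom — versus a center crop, which would keep only a 300x300 window of the 1280x720 image.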
Any guidance?
Thanks,
D.