Hello @erik! Is there any internal method to align the RGB image with the depth image (array) and then crop the RGB so it shows the same scene as the depth one? I've found the method to align them:
but I cannot manage to extract exactly the portion of the RGB frame that matches the depth one, although I've tried several approaches.
I'm not an expert in photogrammetry, so maybe it is much simpler than I think.
Thanks for any help you can provide.
The purpose of this is to use, in the depth array, the relative positions of objects detected by YOLO models in the RGB images; but since the RGB camera has a larger FOV, the positions do not map directly between the original arrays.
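To illustrate one of the approximations I've tried: a minimal center-crop sketch based on the FOV ratio of the two sensors. It assumes a pinhole model with coincident optical axes, ignores lens distortion and the stereo baseline offset, and the FOV numbers below are placeholders, not the real values for any specific camera:

```python
import math

import numpy as np


def crop_rgb_to_depth_fov(rgb, rgb_hfov_deg, rgb_vfov_deg,
                          depth_hfov_deg, depth_vfov_deg):
    """Center-crop an RGB frame so its FOV roughly matches the depth sensor's.

    Pinhole approximation: assumes both sensors share the same optical
    axis; lens distortion and the stereo baseline are ignored.
    """
    h, w = rgb.shape[:2]
    # Fraction of the (wider) RGB frame covered by the depth FOV,
    # per axis, via the pinhole relation width ~ tan(FOV / 2).
    fx = math.tan(math.radians(depth_hfov_deg) / 2) / math.tan(math.radians(rgb_hfov_deg) / 2)
    fy = math.tan(math.radians(depth_vfov_deg) / 2) / math.tan(math.radians(rgb_vfov_deg) / 2)
    crop_w, crop_h = round(w * fx), round(h * fy)
    x0, y0 = (w - crop_w) // 2, (h - crop_h) // 2
    return rgb[y0:y0 + crop_h, x0:x0 + crop_w]


# Placeholder FOV values; the real ones must come from the camera specs
# or calibration data.
rgb = np.zeros((800, 1280, 3), dtype=np.uint8)
cropped = crop_rgb_to_depth_fov(rgb, rgb_hfov_deg=81, rgb_vfov_deg=66,
                                depth_hfov_deg=72, depth_vfov_deg=50)
```

After cropping, resizing the result to the depth resolution would let YOLO box coordinates map to the depth array with a simple scale factor, but this only works as well as the coincident-axis assumption holds.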