Hey, I'm using an OAK-D Lite and want to get the depth of a point / ROI on my RGB image. I have the intrinsics of the RGB camera and followed the Sync demo to perform RGB-depth alignment.

While I can see that my depth and RGB frames are time-synchronized, I notice that the depth image has a wider FOV than my RGB camera (an object appears in the depth image before it appears in the RGB image when moving horizontally). Does this mean that my depth image is not correctly aligned to the RGB image?

Also, given pixel coordinates (u, v) in the RGB image, can I look up the depth at those same pixel coordinates in the depth image and combine it with my RGB camera intrinsics to get accurate world coordinates?
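For context, this is the back-projection I have in mind, assuming the depth map really is pixel-aligned to the RGB frame. The intrinsic values below are just placeholders, not my actual calibration:

```python
import numpy as np

# Standard pinhole back-projection from an RGB-aligned depth map.
# fx, fy, cx, cy are placeholder RGB intrinsics (pixels), not real
# OAK-D Lite calibration values.
fx, fy = 870.0, 870.0   # focal lengths (placeholder)
cx, cy = 640.0, 360.0   # principal point (placeholder)

def pixel_to_camera(u: float, v: float, depth_mm: float):
    """Back-project pixel (u, v) with depth Z into RGB-camera
    coordinates (X, Y, Z), all in millimeters."""
    z = float(depth_mm)
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# A pixel at the principal point should land on the optical axis.
print(pixel_to_camera(640, 360, 1000))  # -> (0.0, 0.0, 1000.0)
```

Is this the right way to do it once alignment is correct, or do I need to account for something else (e.g. the depth map's FOV cropping)?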