Hi all, I’ve been working on code to determine the position of an OAK-D camera by tracking objects with known, fixed positions. Currently I do this with a program on the host that performs object detection and then runs SIFT feature matching plus solvePnP on the detected objects to estimate the camera’s pose (basically a modified version of this code: https://github.com/GigaFlopsis/image_pose_estimation). I wanted to know whether something similar could be done with the current FeatureTracker implementation in the DepthAI library, in order to leverage more of the OAK-D’s on-device hardware. If so, is there an example of how to combine the tracked-features output from the two mono streams to do this?
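
To make the question concrete, here's a rough sketch of what I have in mind, based on the depthai v2 Python API and its feature_tracker example. The stream names are my own placeholders, and the solvePnP hookup at the end is the part I don't know how to do with the tracker output, so it's only sketched in comments:

```python
import depthai as dai

# Rough sketch (depthai v2 API, modeled on the official feature_tracker
# example): run the on-device FeatureTracker on both mono cameras and read
# the two tracked-features streams on the host.
pipeline = dai.Pipeline()

monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
featureTrackerLeft = pipeline.create(dai.node.FeatureTracker)
featureTrackerRight = pipeline.create(dai.node.FeatureTracker)
xoutLeft = pipeline.create(dai.node.XLinkOut)
xoutRight = pipeline.create(dai.node.XLinkOut)

monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
xoutLeft.setStreamName("featuresLeft")    # stream names are placeholders
xoutRight.setStreamName("featuresRight")

monoLeft.out.link(featureTrackerLeft.inputImage)
monoRight.out.link(featureTrackerRight.inputImage)
featureTrackerLeft.outputFeatures.link(xoutLeft.input)
featureTrackerRight.outputFeatures.link(xoutRight.input)

with dai.Device(pipeline) as device:
    qLeft = device.getOutputQueue("featuresLeft", 8, False)
    qRight = device.getOutputQueue("featuresRight", 8, False)

    featsLeft = qLeft.get().trackedFeatures   # list of TrackedFeature
    featsRight = qRight.get().trackedFeatures

    # Each TrackedFeature carries a persistent .id and a 2D .position, so a
    # feature can be followed across frames within one stream:
    ptsLeft = {f.id: (f.position.x, f.position.y) for f in featsLeft}
    ptsRight = {f.id: (f.position.x, f.position.y) for f in featsRight}

    # --- The part I'm unsure about ---
    # If I could associate some of these 2D features with my landmarks of
    # known 3D world position, the pose recovery itself would stay the same
    # as in my current host code, e.g.:
    #   ok, rvec, tvec = cv2.solvePnP(objectPoints, imagePoints,
    #                                 cameraMatrix, distCoeffs)
    #   R, _ = cv2.Rodrigues(rvec)
    #   cameraPosition = -R.T @ tvec
    # But as far as I can tell the feature ids are assigned independently by
    # the left and right trackers, so I don't see how to match features
    # across the two mono streams or anchor them to my known objects.
```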