Using DepthAI feature tracking to determine camera pose

Hi all, I've been working on code to determine the position of an OAK-D camera by tracking objects with known, fixed positions. Currently, I do this with a host-side program that performs object detection and then runs SIFT with solvePnP on the detected objects to determine the camera position (basically, a modified version of this code: https://github.com/GigaFlopsis/image_pose_estimation). I wanted to know whether something similar could be done with the current implementation of feature tracking in the DepthAI libraries, in order to leverage more of the OAK-D's hardware features. If so, is there an example of how to combine the results from the two tracked-feature mono streams to do this?
Hello jdao,
I don't think there are any open-source projects for this yet, but one of the OAK challenge competitors has created this. Looking at their GitHub, I assume they used this code.
Thanks, Erik