Hey everyone,
I am currently working on a university project with a deadline coming up in 4 days and I need help with setting up the pipeline.
I am running TouchDesigner on my M4 MacBook together with an OAK-D Pro FF.
I want to use the OAK-D to detect people in a low-light environment and get their X, Y and Z positions. Therefore I need the IR mono cameras together with the IR dot projector for active IR stereo depth. If I understood correctly, there is no object detection model that accepts stereo depth imagery directly, so for detection I would need to either use the RGB camera (which might struggle in low light) or somehow feed the mono camera output (which handles low light a bit better) into the object detection model.
I also read that there is a built-in function that projects the 2D coordinates of a detected object into the 3D space of the stereo depth and returns the X, Y and Z coordinates automatically.
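Here is roughly the pipeline I think I need, patched together from the Luxonis spatial-detection samples (plain depthai v2 Python, outside TouchDesigner for now). The .blob path is just a placeholder, and the projector call is my guess from the docs, so please correct me if any of this is wrong:

```python
import depthai as dai

pipeline = dai.Pipeline()

# IR mono cameras feeding stereo depth
monoLeft = pipeline.create(dai.node.MonoCamera)
monoRight = pipeline.create(dai.node.MonoCamera)
monoLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
monoRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_C)  # align depth to the right mono cam
monoLeft.out.link(stereo.left)
monoRight.out.link(stereo.right)

# Mono frames are GRAY8; the detector wants 300x300 BGR, so convert with ImageManip
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(300, 300)
manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p)
monoRight.out.link(manip.inputImage)

# Spatial detection network: fuses 2D detections with depth and outputs X/Y/Z per object
nn = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
nn.setBlobPath("mobilenet-ssd.blob")  # placeholder: a person-detection blob from the model zoo
nn.setConfidenceThreshold(0.5)
manip.out.link(nn.input)
stereo.depth.link(nn.inputDepth)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("detections")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    # Turn on the IR dot projector for active stereo (value in mA, I think)
    device.setIrLaserDotProjectorBrightness(800)
    q = device.getOutputQueue("detections", maxSize=4, blocking=False)
    while True:
        for det in q.get().detections:
            print(det.label,
                  det.spatialCoordinates.x,
                  det.spatialCoordinates.y,
                  det.spatialCoordinates.z)
```

Is this roughly the right structure, and is feeding the right mono cam through ImageManip the correct way to run detection on the IR image?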
The problem with all of this is that I have very little Python experience and don't really understand the TouchDesigner OAK samples. I tried patching something together but it didn't work, and AI can't really help me since this is quite a niche problem.
I would be so grateful if someone could link me to some helpful resources/tutorials or help me with the code!
Thank you so much!