hello!
Using an OAK-D installed in portrait orientation. I've got the code working for the RGB and mono cameras, including the two-stage ImageManip chain: rotate at a resolution that's a multiple of 16 in both dimensions, then resize (without preserving aspect ratio) to 300x300 for MobileNet. This is all in C++, roughly along the lines of the sketch below.
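For reference, a trimmed-down version of the rotation chain. The 640x400 preview size is just an example (not my exact numbers); the point is both dimensions are multiples of 16 so the rotate stage accepts them:

```cpp
#include "depthai/depthai.hpp"

int main() {
    dai::Pipeline pipeline;

    auto camRgb = pipeline.create<dai::node::ColorCamera>();
    camRgb->setPreviewSize(640, 400);  // example size; both dims multiples of 16 for the rotate stage
    camRgb->setInterleaved(false);

    // Stage 1: rotate 90 deg via a rotated-rect crop; width/height are
    // swapped so the whole portrait frame is kept.
    auto manipRot = pipeline.create<dai::node::ImageManip>();
    dai::RotatedRect rr;
    rr.center.x = 640 / 2.0f;
    rr.center.y = 400 / 2.0f;
    rr.size.width = 400;
    rr.size.height = 640;
    rr.angle = 90;
    manipRot->initialConfig.setCropRotatedRect(rr, false);  // false = pixel coords
    manipRot->setMaxOutputFrameSize(400 * 640 * 3);

    // Stage 2: squash (no aspect-ratio preservation) to the 300x300 MobileNet input.
    auto manipResize = pipeline.create<dai::node::ImageManip>();
    manipResize->initialConfig.setResize(300, 300);
    manipResize->initialConfig.setKeepAspectRatio(false);

    camRgb->preview.link(manipRot->inputImage);
    manipRot->out.link(manipResize->inputImage);
    // manipResize->out then feeds the (spatial) MobileNet node's input.
    return 0;
}
```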
Everything "works" in the sense that it compiles and runs, and the NN recognizes and frames objects, but the depth values and tracking are wonky. I now suspect the stereo code is thrown off by the cameras' locations (above/below each other instead of left/right).
Is there a way to compensate for that?
Or should I decouple depth from the NN: use the non-spatial MobileNetDetectionNetwork, iterate over the tracklets, and call the SpatialLocationCalculator on the depth frame with rotated coordinates (see the sketch after my questions below)?
In other words, is MobileNetSpatialDetectionNetwork just a convenient one-stop node that correlates the NN coordinates with the depth frame and slaps spatial coordinates on the detections, or is there more to it than that? And does the ObjectTracker care about depth data?
The endgame is to get data equivalent to what the spatial_object_tracker example produces, but in portrait orientation.
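If I go the decoupled route, here's roughly what I have in mind for the host side. This is a minimal, untested sketch: `nnRoiToDepthRoi`, `spatialsForDetections`, and the queue wiring are my own hypothetical names; it assumes the NN runs on the 90°-clockwise-rotated preview while StereoDepth stays in its native landscape orientation; and it uses raw ImgDetections for brevity (with the ObjectTracker you'd read Tracklets and use `tracklet.roi` instead):

```cpp
#include <cstdio>
#include <vector>
#include "depthai/depthai.hpp"

// Hypothetical helper: map a normalized ROI from the 90°-clockwise-rotated NN
// frame back onto the unrotated (landscape) depth frame. For a 90° CW rotation
// a rotated point (x, y) comes from original (u, v) = (y, 1 - x), so a rotated
// rect [x, y, w, h] maps to [y, 1 - x - w, h, w]. Swap the formulas for CCW.
static dai::Rect nnRoiToDepthRoi(const dai::Rect& r) {
    return dai::Rect(r.y, 1.0f - r.x - r.width, r.height, r.width);
}

// One iteration of the host loop. Queue names are assumptions: detQ carries
// ImgDetections from the non-spatial MobileNetDetectionNetwork, cfgQ is an
// XLinkIn linked to the SpatialLocationCalculator's inputConfig, and spatialQ
// is an XLinkOut on its `out`.
void spatialsForDetections(dai::DataOutputQueue& detQ,
                           dai::DataInputQueue& cfgQ,
                           dai::DataOutputQueue& spatialQ) {
    auto dets = detQ.get<dai::ImgDetections>();

    std::vector<dai::SpatialLocationCalculatorConfigData> rois;
    for(const auto& d : dets->detections) {
        dai::SpatialLocationCalculatorConfigData roi;
        roi.roi = nnRoiToDepthRoi(
            dai::Rect(d.xmin, d.ymin, d.xmax - d.xmin, d.ymax - d.ymin));
        roi.depthThresholds.lowerThreshold = 100;    // mm; example values
        roi.depthThresholds.upperThreshold = 10000;
        rois.push_back(roi);
    }
    if(rois.empty()) return;

    // Ask the on-device calculator for spatials at the remapped ROIs.
    dai::SpatialLocationCalculatorConfig cfg;
    cfg.setROIs(rois);
    cfgQ.send(cfg);

    auto spatials = spatialQ.get<dai::SpatialLocationCalculatorData>();
    for(const auto& loc : spatials->getSpatialLocations()) {
        std::printf("x=%.0f y=%.0f z=%.0f mm\n",
                    loc.spatialCoordinates.x,
                    loc.spatialCoordinates.y,
                    loc.spatialCoordinates.z);
    }
}
```

Does that look like a sane plan, or am I reinventing what the spatial network already does internally?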
thanks!
Alex.