• DepthAI-v2
  • OAK Lite and gen1 different calculation of z with the same model

Hi EmilioMachado

EmilioMachado Gen1 with an old mobilenetssd model worked perfectly for 3 or 4 years

But you are using the SDK now, right? Could you try the spatial location calculator example to see whether the Z axis is correct? The SDK works in the same way as YoloSpatialDetectionNetwork, which is a combination of Yolo and the Spatial Calculator. It seems there is a problem with the latter.
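For context on what is being compared here: the Z value the spatial calculator reports is roughly an average or median of the valid depth pixels inside the ROI (the on-device node supports MEDIAN/MEAN and other algorithms). A minimal host-side sketch of the median variant, with the depth map as a plain list of rows (illustration only, not the actual device implementation):

```python
def roi_depth_z(depth, roi, lower=200, upper=30000):
    """Estimate Z (mm) for an ROI roughly the way the spatial
    calculator's MEDIAN algorithm works: collect depth pixels inside
    the ROI, drop readings outside the lower/upper thresholds
    (0 means "no depth"), and take the median of what remains.

    depth: 2D list of depth values in millimeters
    roi:   (xmin, ymin, xmax, ymax) in pixel coordinates
    """
    xmin, ymin, xmax, ymax = roi
    samples = [
        d
        for row in depth[ymin:ymax]
        for d in row[xmin:xmax]
        if lower <= d <= upper
    ]
    if not samples:
        return 0  # no valid depth in the ROI
    samples.sort()
    return samples[len(samples) // 2]


# Tiny example: a 4x4 depth patch with some invalid (0) pixels.
depth = [
    [0, 700, 710, 0],
    [690, 705, 0, 720],
    [0, 695, 700, 0],
    [0, 0, 0, 0],
]
print(roi_depth_z(depth, (0, 0, 4, 4)))  # -> 700
```

The thresholds matter here: very close objects can fall below the stereo camera's minimum depth and produce invalid (0) pixels, which is one reason short-range Z readings degrade.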

Thanks,
Jaka

Yes, the SDK: the spatial location calculator example works perfectly. The problems in the Yolo main script seem to start below 70 cm, and now it reports this error: [14442C104128C1D200] [1.1] [79.669] [SpatialDetectionNetwork(8)] [error] ROI x:0 y:1.00289 width:0.23385671 height:-0.0028899908 is not a valid rectangle.
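That error means the ROI handed to the spatial calculator stage is out of bounds: y is slightly above 1.0 and the height comes out negative. A host-side sanity check that clamps a normalized bbox before using it as an ROI could look like this (a hypothetical helper, not part of the DepthAI API; the real fix is the model producing valid boxes):

```python
def clamp_roi(x, y, w, h, min_size=1e-3):
    """Clamp a normalized (0..1) ROI so x/y stay in range and
    width/height stay positive. Returns (x, y, w, h)."""
    x = min(max(x, 0.0), 1.0 - min_size)
    y = min(max(y, 0.0), 1.0 - min_size)
    w = min(max(w, min_size), 1.0 - x)
    h = min(max(h, min_size), 1.0 - y)
    return x, y, w, h


# The ROI from the error log above becomes a valid (if tiny) rectangle:
print(clamp_roi(0.0, 1.00289, 0.23385671, -0.0028899908))
```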

thanks

    Hi EmilioMachado
    Then the problem is most likely your model. I would assume that using a different model (yolo or mbnet) works as expected?

    What script and arguments are you using? There might be a problem if you have best.xml / best.bin: in that case blobconverter will use the already compiled model from its cache (it only checks the name) instead of compiling a new one. So I'd suggest renaming the xml/bin files (and updating the names inside your .json).

    Thanks,
    Jaka

    Hi Jaka

    I couldn't get it working. I changed the names of best.xml and .bin and generated different .blob files with different models (2 from the Luxonis YOLOv5 notebook, 1 from the Roboflow YOLOv5 notebook). I can see them inside .cache → blobconverter; they all have the same problem.

    Mbnet works perfectly. I need to update the classes in an old model, but the TensorFlow version (2.8 minimum in Google Colab notebooks today) is incompatible with the OpenVINO version in blobconverter.

    Can I use PyTorch instead of TF2 to train a Mbnet?

      EmilioMachado

      In general, mobilenetSSD has much better performance, and we do have a PyTorch-compatible version in the PR here: luxonis/depthai-ml-training. Note that it is still being improved, so I can't help with any bad performance.

      What you are seeing with YoloV5 seems like wrong predictions though. When you use a model trained in our notebook, you should do the following:

      1/ Take the best.pt and upload it to tools.luxonis.com

      2/ Extract the downloaded .blob and .json

      3/ Use this script with flags -m (path to the blob) and -c (path to the config)

      If you do that, can you confirm that this works for you? Do the detections look OK? If yes, then everything is fine and there might be an issue in how you use the SpatialLocationCalculator.
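      For step 3, the .json exported by tools.luxonis.com carries the Yolo metadata (anchors, masks, thresholds) that the script feeds into the detection network. A sketch of reading those fields, assuming the usual export layout (key names may differ between exporter versions, so check your actual .json):

```python
import json

# A minimal config in the shape tools.luxonis.com typically exports
# (field names assumed from that format).
config_text = """
{
  "nn_config": {
    "input_size": "416x416",
    "NN_specific_metadata": {
      "classes": 2,
      "coordinates": 4,
      "anchors": [10, 13, 16, 30, 33, 23],
      "anchor_masks": {"side52": [0, 1, 2]},
      "iou_threshold": 0.5,
      "confidence_threshold": 0.5
    }
  }
}
"""

config = json.loads(config_text)
meta = config["nn_config"]["NN_specific_metadata"]
width, height = map(int, config["nn_config"]["input_size"].split("x"))

# These values would then go to YoloSpatialDetectionNetwork via
# setNumClasses / setAnchors / setAnchorMasks / setIouThreshold /
# setConfidenceThreshold, and width/height to the camera's setPreviewSize.
print(width, height, meta["classes"])  # -> 416 416 2
```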

      Hi Matija

      Thank you for your response

      Yes, the detections are OK. The problem begins with the Z distance below 68 centimeters on the OAK gen1 (above 68 centimeters the precision in Z starts to improve). The OAK-Lite FF works perfectly with the same model, same script, same PC; I just swap one camera for the other in the USB port.

        Hi EmilioMachado
        Could you post some images/demo? I'm mainly interested in the bbox created by the detection network and the bbox used by the spatial calculator (something like https://docs.luxonis.com/projects/api/en/latest/samples/SpatialDetection/spatial_tiny_yolo/#demo - just images, it doesn't have to be a video). It doesn't really make sense to me that the SLC and detection both work fine, but fusing them together doesn't.
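        One way to compare the two boxes numerically rather than visually is to denormalize both into pixels and compute their overlap: if the detection bbox and the spatial calculator's ROI don't line up, the fused spatials will be wrong. A small host-side sketch (pure Python; the example coordinates are hypothetical, and normalized 0..1 coordinates are assumed):

```python
def denorm(bbox, width, height):
    """(xmin, ymin, xmax, ymax) normalized -> pixel coordinates."""
    xmin, ymin, xmax, ymax = bbox
    return (int(xmin * width), int(ymin * height),
            int(xmax * width), int(ymax * height))


def iou(a, b):
    """Intersection-over-union of two pixel rectangles."""
    ix = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0


# Hypothetical example: detection bbox vs. the ROI the spatial
# calculator actually used, on a 640x400 depth frame.
det = denorm((0.30, 0.25, 0.55, 0.70), 640, 400)
roi = denorm((0.31, 0.26, 0.54, 0.69), 640, 400)
print(iou(det, roi))  # close to 1.0 when the two boxes agree
```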

        Thanks,
        Jaka

          OAK gen1

          The first 2 images are with YOLOv5 (bad Z distance); the last one is with mobilenet (perfect Z distance).

          Hi EmilioMachado
          Could you try running the example I sent, but swap in your model?

          jakaskerl I'm mainly interested in the bbox created by the detection network and the bbox used by the spatial calculator

          This is missing the SpatialLocationCalculator ROI. I think it's not where it's supposed to be, and that's why the spatials are incorrect.

          Thanks,
          Jaka

          Hi Jaka

          I adapted Spatial_object_tracker to Yolo and it works perfectly, except that the FPS is low (9-10). How can I increase it?

            Hi Jaka

            Thank you very much for your response and for your patience. I was working with a 416 input shape. A quick solution was to compile 2 more versions with tools.luxonis.com, with 352 and 320 input shapes, and to modify setPreviewSize and setAnchorMasks for each one: 416 (9-10 FPS), 352 (12-13 FPS) and 320 (16-17 FPS). I had to lower the confidence threshold from 0.5 to 0.4, but 17 FPS at 0.4 is good enough for my project. Just one last question: I have 2 OAK gen1 cameras, and one of them reports this error with the current script: [14442C10518FA1D000] [1.1] [1.230] [StereoDepth(4)] [error] RGB camera calibration missing, aligning to RGB won't work. How do I correct the calibration?
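            For reference on the per-resolution anchor-mask changes: the mask keys (e.g. "side26") correspond to the YOLO output grid sizes, which are the input size divided by the head strides (8/16/32 in the standard YOLOv5 head). A small helper to derive them, assuming that layout:

```python
def yolo_sides(input_size, strides=(8, 16, 32)):
    """Grid side lengths for a square YOLO input, one per output head.
    E.g. 416 -> [52, 26, 13], which is why a 416 config uses anchor-mask
    keys like "side52"/"side26"/"side13"."""
    return [input_size // s for s in strides]


for size in (416, 352, 320):
    print(size, ["side%d" % s for s in yolo_sides(size)])
```

This is why each input size needs its own setAnchorMasks call: a 320 model has grids 40/20/10, so the 416 mask keys no longer match.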