• DepthAI-v2
  • Conversion of custom network from Model Optimizer output (.xml & .bin) to OAK-D not working

Hi all, I have trained a custom CNN with TensorFlow that I can successfully convert with Model Optimizer into .xml and .bin files, and that runs inference correctly with the Intel Inference Engine (inference_request.infer()) with good accuracy (position and class).
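
For reference, a minimal sketch of this host-side check using the OpenVINO Inference Engine Python API (file names, the CPU device, and the synchronous infer() convenience call are assumptions; adapt to your model):

    from openvino.inference_engine import IECore
    import numpy as np

    ie = IECore()
    # read the Model Optimizer IR (hypothetical file names)
    net = ie.read_network(model="model.xml", weights="model.bin")
    exec_net = ie.load_network(network=net, device_name="CPU")

    input_name = next(iter(net.input_info))   # first input layer
    output_name = next(iter(net.outputs))     # first output layer

    # dummy input matching the network's NCHW shape, stand-in for a real frame
    shape = net.input_info[input_name].input_data.shape
    result = exec_net.infer(inputs={input_name: np.zeros(shape, dtype=np.float32)})
    print(result[output_name].shape)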

However, I cannot successfully deploy it to the OAK-D via the blob converter (http://blobconverter.luxonis.com/).
I get no error messages, but the OAK-D just outputs a constant prediction.
Has anybody else experienced similar issues?
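
For completeness, the web converter can also be driven from Python with Luxonis' blobconverter package; a minimal sketch, assuming the file names and SHAVE count below:

    import blobconverter

    # compile the Model Optimizer IR into a .blob for the Myriad X
    blob_path = blobconverter.from_openvino(
        xml="model.xml",      # hypothetical path
        bin="model.bin",      # hypothetical path
        data_type="FP16",
        shaves=6,
    )
    print(blob_path)          # path to the compiled .blob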


    Hello NielsOhlsen,
    Does the model expect INT8 or FP16? You have to specify the input precision with -ip when compiling the IR into a .blob. Could you also share the script you use to run the inference with this model?
    Thanks, Erik
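
    For reference, setting the input precision with OpenVINO's compile_tool might look like this (device, precisions, and file names are assumptions for your setup):

        compile_tool -m model.xml -d MYRIAD -ip U8 -op FP16 -o model.blob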

      erik Hi Erik, thanks for the answer. I am in contact with Matija, and I have now simplified my network from a 3-tower model to just one tower, removed a batch-normalization layer, and that simpler model runs inference on the OAK-D.

      So I believe my data formats are correct. I do, BTW, use -ip U8 (the default proposed by the blob converter).

      I will now rebuild my model step by step from something simpler until the issue reappears, and then get back to Matija, who has proposed we contact Intel Premium Support, as it does seem that the blob converter is not translating my topology correctly.
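
      For reference, the kind of minimal DepthAI-v2 harness I use to test a blob on the device looks roughly like this (blob path, input layer name, and the 300x300x3 input size are placeholders):

          import depthai as dai
          import numpy as np

          pipeline = dai.Pipeline()

          # neural network node running the compiled blob
          nn = pipeline.create(dai.node.NeuralNetwork)
          nn.setBlobPath("model.blob")

          # host -> device input stream
          xin = pipeline.create(dai.node.XLinkIn)
          xin.setStreamName("nn_in")
          xin.out.link(nn.input)

          # device -> host output stream
          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("nn_out")
          nn.out.link(xout.input)

          with dai.Device(pipeline) as device:
              q_in = device.getInputQueue("nn_in")
              q_out = device.getOutputQueue("nn_out")

              # one dummy U8 frame; "input" and the size are placeholders
              data = dai.NNData()
              data.setLayer("input", np.zeros(300 * 300 * 3, dtype=np.uint8).tolist())
              q_in.send(data)

              print(q_out.get().getFirstLayerFp16())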


        Hello NielsOhlsen, oh, that's interesting. We did see a few issues with OpenVINO (especially with quantization), so this could very well be an issue on their side as well.
        Thanks, Erik