Hi SebastianH,
Two options:

  • Preferred: specify the correct layout with OpenVINO's Model Optimizer using the --layout flag, docs here.
  • Change the frame layout with DepthAI using colorCamera.setInterleaved(bool)
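For context, both options are about the same thing: where the channel axis sits in the frame. Interleaved (HWC) stores pixels as BGR triplets, planar (CHW) stores each channel as a separate plane. A minimal NumPy sketch of the difference (illustrative array names, not the depthai API):

```python
import numpy as np

# A 300x300 BGR frame in interleaved (HWC) order,
# as colorCamera.setInterleaved(True) would produce
hwc = np.zeros((300, 300, 3), dtype=np.uint8)

# The same frame in planar (CHW) order, which most OpenVINO models expect
chw = hwc.transpose(2, 0, 1)

print(hwc.shape)  # (300, 300, 3)
print(chw.shape)  # (3, 300, 300)
```

The --layout flag tells the Model Optimizer which of these conventions your exported model uses, so the compiled blob and the camera output agree.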

I hope this helps!
Thanks, Erik

    erik
    I tried using colorCamera.setInterleaved(True), but it didn't help. I will try your first recommendation and let you know 🙂

    Hi erik
    Will there be any performance issue using colorCamera.setInterleaved(True) instead of changing the model?

    @erik Hi Erik, I have setInterleaved(True) (HWC) and I set the model input and output layout to NHWC, but it looks like the OpenVINO compiler changed it to NCHW on its own, and I get this message now:

    [NeuralNetwork(0)] [warning] Input image is interleaved (HWC), NN specifies planar (CHW) data order

    any fix?

    • erik replied to this.

      Nadi This means you need to set setInterleaved(False), so it's planar, which is what the NN specifies.
      @SebastianH no performance issues on the model (or minimal, just a simple conversion layer), but some nodes that process frames (e.g. ImageManip) are slower when processing interleaved frames (see docs here)
      Thanks, Erik

      @erik I fixed it, but now I get only the output layer when I call getalllayers(). Do you have a clue?

      @Nadi what else are you expecting to get returned from the output of the NN inference?

      • Nadi replied to this.

        I am not talking about the output layer; I am talking about getalllayers. I expect it to return the input and output layers, and now it returns only the output.

        @Nadi Why would you need the input layer? If you want to overlay the semantic segmentation result with the RGB image, you can use the neuralNetworkNode.passthrough output to get the input image. Thoughts?
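To illustrate the overlay idea: once you have the passthrough frame and the segmentation output as arrays, blending them is plain NumPy. A minimal sketch with made-up names (overlay_mask, class_map are not depthai API):

```python
import numpy as np

def overlay_mask(frame, class_map, color=(0, 0, 255), alpha=0.5):
    """Blend a colored segmentation mask onto a BGR frame.

    frame:     HxWx3 uint8 image (e.g. decoded from neuralNetworkNode.passthrough)
    class_map: HxW integer array of per-pixel class IDs (0 = background)
    """
    out = frame.astype(np.float32)
    mask = class_map > 0
    # Alpha-blend the overlay color onto every non-background pixel
    out[mask] = (1 - alpha) * out[mask] + alpha * np.array(color, dtype=np.float32)
    return out.astype(np.uint8)

# Tiny example: one segmented pixel on a gray 4x4 frame
frame = np.full((4, 4, 3), 100, dtype=np.uint8)
seg = np.zeros((4, 4), dtype=np.int32)
seg[0, 0] = 1
blended = overlay_mask(frame, seg)
print(blended[0, 0])  # [ 50  50 177] - blended pixel
print(blended[1, 1])  # [100 100 100] - untouched background
```

This avoids needing the input layer from the NN output at all; the passthrough frame is already the exact image the NN ran on.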

        10 months later

        [184430107165680F00] [3.3] [89.777] [NeuralNetwork(0)] [warning] Input image (300x300) does not match NN (3x300)

        Model: MobileNetV2

        I am getting the same error, and I tried the two options you mentioned above, but no luck.
        I am assuming that it is just a warning; will it affect anything?

          I'm having a similar issue

          [18443010D18F411300] [2.2] [9.560] [DetectionNetwork(1)] [warning] Input image (244x244) does not match NN (3x244)

          My blob conversion:

          blob_path = blobconverter.from_tf(
              frozen_pb="./frozen_graph.pb",
              data_type="FP16",
              shaves=6,
              optimizer_params=[
                  f"--input_shape=[1,{SHAPE},{SHAPE},3]"
              ]
          )

          I'm confused about how I should modify this conversion call to make sure it outputs the desired shape/right convention @jakaskerl
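The shape in the warning is consistent with the compiler reading an NHWC --input_shape under the default NCHW convention. A quick sketch of how the same 4D shape parses under each layout (pure Python, illustrative only):

```python
def parse_shape(shape, layout):
    """Return (channels, height, width) for a 4D shape under a given layout."""
    n, a, b, c = shape
    if layout == "NHWC":
        return c, a, b  # C, H, W
    if layout == "NCHW":
        return a, b, c  # C, H, W
    raise ValueError(layout)

shape = [1, 244, 244, 3]  # what --input_shape=[1,{SHAPE},{SHAPE},3] declares

print(parse_shape(shape, "NHWC"))  # (3, 244, 244): 3-channel 244x244, as intended
print(parse_shape(shape, "NCHW"))  # (244, 244, 3): width 3, the odd size in the warning
```

If this is what's happening, telling the Model Optimizer the source layout (as discussed below with --layout / -il) should fix the mismatch.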

            jakaskerl
            thank you for your reply.

            1. What is a "trial and error" OpenVINO version? I am using OpenVINO 2022.2.
            2. I am using the blobconverter app [https://blobconverter.luxonis.com/] to convert. How do I use the one you mentioned?
              FYI, I am using a TF model

            Your help will be highly appreciated. TIA

              Hi heebahsaleem

              1. "Trial and error" refers to the process of finding the right OpenVINO version for your model to work. Basically, you would have to compile the model with each OpenVINO version until you find the right one (perhaps 2021.4 could work right away), hence the "trial and error".
              2. Inside blobconverter, there is an "Advanced" tab which allows you to select the number of shaves you are using and pass in additional parameters. Under "Compile parameters:" you should be able to pass in the -il parameter and the layout you want.
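If you are using the blobconverter Python API rather than the web app, the equivalent of the "Compile parameters:" field is the compile_params argument. A sketch of passing the -il flag that way, assuming the rest of the call stays as in the thread (SHAPE is hypothetical here):

```python
import blobconverter

SHAPE = 244  # hypothetical input size, as in the posts above

blob_path = blobconverter.from_tf(
    frozen_pb="./frozen_graph.pb",
    data_type="FP16",
    shaves=6,
    optimizer_params=[
        f"--input_shape=[1,{SHAPE},{SHAPE},3]",
    ],
    # Compile-time input layout: the -il parameter mentioned above
    compile_params=["-il NHWC"],
)
```

Note this downloads and compiles the model through the blobconverter service, so it needs network access.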

              userOfCamera
              According to the README I sent above, the shapes should be defined as either "NCHW" or "NHWC", so the compiler knows how to reshape the input.

              Thanks,
              Jaka

              @jakaskerl
              I set --layout to both nchw and nhwc but the same issue persists

              blob_path = blobconverter.from_tf(
                  frozen_pb="./frozen_graph.pb",
                  data_type="FP16",
                  shaves=6,
                  optimizer_params=[
                      f"--input_shape=[1,{SHAPE},{SHAPE},3]",
                      "--layout=nhwc"
                  ]
              )

              Am I supposed to set this using the -il flag? Can you show me an example?

              This seemed to solve the problem for me (setting --layout=nhwc->nchw)

              My NN doesn't seem to be working on the camera; however, this could be due to something else (to be investigated)

              Here is my full conversion call:

              blob_path = blobconverter.from_tf(
                  frozen_pb="./frozen_graph.pb",
                  data_type="FP16",
                  shaves=6,
                  optimizer_params=[
                      f"--input_shape=[1,{SHAPE},{SHAPE},3]",
                      "--layout=nhwc->nchw"
                  ]
              )