@Nadi what else are you expecting to get returned from the output of the NN inference?


    I am not talking about the output layer, I am talking about getAllLayers(). I expect it to return the input and output layers, but it currently returns only the output.

    @Nadi Why would you need the input layer? If you want to overlay the semantic segmentation result with the RGB image, you can use the neuralNetworkNode.passthrough output to get the input image. Thoughts?
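
    A minimal sketch of that approach, assuming a depthai 2.x pipeline; the blob path and stream names here are illustrative:

        import depthai as dai

        pipeline = dai.Pipeline()

        cam = pipeline.create(dai.node.ColorCamera)
        cam.setPreviewSize(300, 300)
        cam.setInterleaved(False)  # NN input expects planar (CHW) frames

        nn = pipeline.create(dai.node.NeuralNetwork)
        nn.setBlobPath("model.blob")  # illustrative path
        cam.preview.link(nn.input)

        # passthrough re-emits the exact frame that was fed to inference,
        # so it stays in sync with the matching NN result for overlaying
        xout_frame = pipeline.create(dai.node.XLinkOut)
        xout_frame.setStreamName("passthrough")
        nn.passthrough.link(xout_frame.input)

        xout_nn = pipeline.create(dai.node.XLinkOut)
        xout_nn.setStreamName("nn")
        nn.out.link(xout_nn.input)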

    10 months later

    [184430107165680F00] [3.3] [89.777] [NeuralNetwork(0)] [warning] Input image (300x300) does not match NN (3x300)

    Model: MobileNetV2

    I am getting the same error, and I tried the two options you mentioned above, but no luck.
    I am assuming it is just a warning; will it affect anything?

      I'm having a similar issue

      [18443010D18F411300] [2.2] [9.560] [DetectionNetwork(1)] [warning] Input image (244x244) does not match NN (3x244)

      My blob conversion:
      blob_path = blobconverter.from_tf(
          frozen_pb="./frozen_graph.pb",
          data_type="FP16",
          shaves=6,
          optimizer_params=[
              f"--input_shape=[1,{SHAPE},{SHAPE},3]"
          ]
      )

      I'm confused about how I should modify this blobconverter call to make sure it outputs the desired shape / the right convention @jakaskerl

        jakaskerl
        Thank you for your reply.

        1. What is the "trial and error" OpenVINO version? I am using OpenVINO 2022.2.
        2. I am using the blobconverter app (https://blobconverter.luxonis.com/) to convert. How do I use the one you mentioned?
          FYI, I am using a TF model.

        Your help will be highly appreciated. TIA

          Hi heebahsaleem

          1. "Trial and error" refers to the process of finding the right OpenVINO version for your model. Basically, you would have to compile the model with each OpenVINO version until you find the one that works (perhaps 2021.4 could work right away), hence the "trial and error".
          2. Inside blobconverter, there is an "Advanced" tab which lets you select the number of shaves you are using and pass in additional parameters. Under "Compile parameters:" you should be able to pass in the -il parameter with the input layout you want.
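
          The same flags can also be passed from the Python API; a rough sketch, assuming blobconverter's compile_params argument (SHAPE and the frozen graph path are illustrative here):

              import blobconverter

              SHAPE = 224  # illustrative; use your model's actual input size

              blob_path = blobconverter.from_tf(
                  frozen_pb="./frozen_graph.pb",
                  data_type="FP16",
                  shaves=6,
                  optimizer_params=[
                      f"--input_shape=[1,{SHAPE},{SHAPE},3]",
                  ],
                  # compile_tool flags; -il declares the input layout to the compiler
                  compile_params=["-ip U8", "-il NHWC"],
              )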

          userOfCamera
          According to the README I sent above, the shapes should be defined as either "NCHW" or "NHWC", so the compiler knows how to reshape the input.

          Thanks,
          Jaka

          @jakaskerl
          I set --layout to both nchw and nhwc, but the same issue persists

          blob_path = blobconverter.from_tf(
              frozen_pb="./frozen_graph.pb",
              data_type="FP16",
              shaves=6,
              optimizer_params=[
                  f"--input_shape=[1,{SHAPE},{SHAPE},3]",
                  "--layout=nhwc"
              ]
          )

          Am I supposed to set this using the -il flag? Can you show me an example?

          This seemed to solve the problem for me (setting --layout=nhwc->nchw)

          My NN doesn't seem to be working on the camera; however, this could be due to something else (to be investigated)

          Here is my full blobconverter call:

          blob_path = blobconverter.from_tf(
              frozen_pb="./frozen_graph.pb",
              data_type="FP16",
              shaves=6,
              optimizer_params=[
                  f"--input_shape=[1,{SHAPE},{SHAPE},3]",
                  "--layout=nhwc->nchw"
              ]
          )
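
          To sanity-check the converted blob before deploying, one option (a sketch, assuming the depthai 2.x blob reader) is to inspect the blob's declared input dims:

              import depthai as dai

              # read back the compiled blob and print its declared inputs;
              # after nhwc->nchw the channel dim should sit where the NN expects it
              blob = dai.OpenVINO.Blob(blob_path)
              for name, tensor in blob.networkInputs.items():
                  print(name, tensor.dims)  # note: depthai may report dims in reversed order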