• DepthAI
  • shortest path to upgrade to openvino 2022.1

erik

I installed the depthai environment a while ago (before 2022.1), so when I run OAK depthai the virtual environment I installed automatically selects the 2021.4 version to run the blob. The following is one example I encountered:

  1. I use blobconverter and select 2021.4 to compile my blob
  2. When I run the depthai program (including your code to report blob.version), the error messages below show up:
    #your code report blob parameters
    blob.version= Version.VERSION_2021_4
    Inputs
    Name: right, Type: DataType.U8F, Shape: [320, 240, 3, 1]
    Name: left, Type: DataType.U8F, Shape: [320, 240, 3, 1]
    Outputs
    Name: output, Type: DataType.FP16, Shape: [320, 240, 2, 1]
    #error messages below
    [184430106183EB0F00] [32.457] [NeuralNetwork(8)] [warning] Network compiled for 4 shaves, maximum available 12, compiling for 6 shaves likely will yield in better performance
    [184430106183EB0F00] [32.469] [NeuralNetwork(8)] [error] Neural network blob compiled with uncompatible openvino version. Selected openvino version 2021.4. If you want to select an explicit openvino version use: setOpenVINOVersion while creating pipeline.
    [184430106183EB0F00] [32.470] [NeuralNetwork(8)] [error] Neural network blob compiled with uncompatible openvino version. Selected openvino version 2021.4. If you want to select an explicit openvino version use: setOpenVINOVersion while creating pipeline.
    As you can see, something is wrong with the error message: it says the blob was compiled with an incompatible OpenVINO version and that the selected version is 2021.4, yet blob.version also reports 2021.4 (and I did compile it with the 2021.4 blobconverter!). What am I missing? Please advise. Thanks. (Do I need to upgrade my depthai to align with openvino_2022.1?)
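
    In case it helps, this is how I read the setOpenVINOVersion hint from the error message; a minimal sketch, assuming the standard depthai Python API (I have not verified that this resolves the error):

    import depthai as dai

    # Pin the pipeline to the same OpenVINO version the blob was compiled with,
    # as the error message suggests doing explicitly.
    pipeline = dai.Pipeline()
    pipeline.setOpenVINOVersion(dai.OpenVINO.Version.VERSION_2021_4)

    nn = pipeline.create(dai.node.NeuralNetwork)
    nn.setBlobPath("model.blob")  # the 2021.4 blob from blobconverter
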
  • erik replied to this.

    Hi ynjiun ,
    Could you provide a MRE for this issue (containing blobconverter)? Would love to check this out.
    Thanks, Erik

      ynjiun Could you please remove any unnecessary code (e.g. any host-side code) so that it's minimal? And please use blobconverter in the script.
      Thanks, Erik

        erik
        what was done:

        1. tried to simplify mre.py further.
        2. tried to include the blobconverter call in mre.py, but failed; please take a look at what's missing.
        3. tried to create a CLI script as bc.bat, but failed too; please take a look at what's missing (a rough sketch of what I'm aiming for follows below).
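
        For reference, a rough sketch of what such a bc.bat could contain, assuming blobconverter can be invoked as a module from the command line (the flag names are a guess on my part and should be checked against python -m blobconverter --help):

        :: bc.bat - convert model.onnx to a 6-shave blob for OpenVINO 2021.4
        python -m blobconverter --onnx-model model.onnx --shaves 6 --version 2021.4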

        the model.blob was generated by manually feeding model.onnx into your online blobconverter website and clicking through it; it has now been renamed to model_orig.blob

        Thank you for your help

        • erik replied to this.

          Hi ynjiun ,
          Thank you for the MRE! When I was looking at the model I thought to myself, that's a HUGE model; I don't think our devices are capable of running something like that. And the devil is often in the details: the full log contains 'softMaxNClasses' '157' 'CMX memory is not enough!'. So it's not actually a problem with the OpenVINO version; the model is simply too large to run on the device. Sorry about the inconvenience.
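
          If it helps, one way to surface the full device-side log (where the 'CMX memory is not enough!' message shows up) is to raise the log level; a minimal sketch, assuming the current depthai Python API (setting the DEPTHAI_LEVEL=debug environment variable is another option):

          import depthai as dai

          pipeline = dai.Pipeline()
          # ... build the same pipeline as in mre.py ...

          with dai.Device(pipeline) as device:
              # Forward more verbose firmware messages to the host console
              device.setLogLevel(dai.LogLevel.DEBUG)
              device.setLogOutputLevel(dai.LogLevel.DEBUG)
              # ... run the pipeline / read outputs as usual ...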
          Thanks, Erik

            erik
            That's good to know.

            By the way, so far I am not able to run blobconverter from a script; could you help take a look at what's wrong below:

            import blobconverter
            
            blob_path = blobconverter.from_onnx(
                model="model.onnx",
                data_type="FP16",
                shaves=6,
                optimizer_params=[
                    "-ip U8",
                    "-op FP16",
                ],
            )

            And where do I put the OpenVINO version 2021.4 in the script?
            If I run the above script, the error message is as below:

            Downloading /home/paul/.cache/blobconverter/model_openvino_2021.4_6shave.blob...
            {
                "exit_code": 1,
                "message": "Command failed with exit code 1, command: /app/venvs/venv2021_4/bin/python /app/model_compiler/openvino_2021.4/converter.py --precisions FP16 --output_dir /tmp/blobconverter/a02cbb6fa85b4407a6463849a6ea51eb --download_dir /tmp/blobconverter/a02cbb6fa85b4407a6463849a6ea51eb --name model --model_root /tmp/blobconverter/a02cbb6fa85b4407a6463849a6ea51eb",
                "stderr": "usage: main.py [options]\nmain.py: error: unrecognized arguments: -ip U8\n",
                "stdout": "========== Converting model to IR (FP16)\nConversion command: /app/venvs/venv2021_4/bin/python -m mo --framework=onnx --data_type=FP16 --output_dir=/tmp/blobconverter/a02cbb6fa85b4407a6463849a6ea51eb/model/FP16 --model_name=model '-ip U8' '-op FP16' --data_type=FP16 --input_model=/tmp/blobconverter/a02cbb6fa85b4407a6463849a6ea51eb/model/FP16/model.onnx\n\nFAILED:\nmodel\n"
            }
            Traceback (most recent call last):
              File "/home/paul/oak/MRE/mre.py", line 27, in <module>
                blob_path = blobconverter.from_onnx(
              File "/home/paul/oak/lib/python3.8/site-packages/blobconverter/__init__.py", line 428, in from_onnx
                return compile_blob(blob_name=Path(model_name).stem, req_data={"name": Path(model_name).stem}, req_files=files, data_type=data_type, **kwargs)
              File "/home/paul/oak/lib/python3.8/site-packages/blobconverter/__init__.py", line 322, in compile_blob
                response.raise_for_status()
              File "/home/paul/oak/lib/python3.8/site-packages/requests/models.py", line 953, in raise_for_status
                raise HTTPError(http_error_msg, response=self)
            requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2021.4&no_cache=False
            • erik replied to this.

              Hi ynjiun ,
              I have actually converted that first, and it worked as expected:

              import blobconverter

              blobconverter.from_onnx(
                  model='model.onnx',
                  data_type="FP16",
                  version='2021.4',                        # pin the OpenVINO version the blob targets
                  shaves=6,
                  use_cache=False,
                  optimizer_params=["--data_type=FP16"],   # forwarded to the model optimizer (mo)
                  compile_params=["-ip U8 -op FP16"],      # forwarded to the MyriadX compile step
              )

              That said, I am not sure why compile_params is an array, yet it is fine when you just pass it a single string. I would need to look into the internals of blobconverter, maybe one day 🙂
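
              For completeness, the path that from_onnx returns can be fed straight into the pipeline afterwards; a minimal sketch:

              import blobconverter
              import depthai as dai

              # from_onnx returns the path of the compiled .blob (cached locally)
              blob_path = blobconverter.from_onnx(model='model.onnx', version='2021.4', shaves=6)

              pipeline = dai.Pipeline()
              nn = pipeline.create(dai.node.NeuralNetwork)
              nn.setBlobPath(blob_path)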
              Thanks, Erik

                erik
                Thanks a lot for your help.

                I went back to look at my error message again:

                [184430106183EB0F00] [400.856] [NeuralNetwork(7)] [critical] Fatal error in openvino '2021.4'. Likely because the model was compiled for different openvino version. If you want to select an explicit openvino version use: setOpenVINOVersion while creating pipeline. If error persists please report to developers. Log: 'softMaxNClasses' '157'

                And I did not see what you saw at the end: 'CMX memory is not enough!'

                Just curious what the difference is between your platform and mine. If I could have seen that message earlier, I probably wouldn't have needed to go this far to find out ;))

                By the way, would the RVC3 (Keem Bay) version have more memory to run this model?

                Thank you for your help along the way....

                • erik replied to this.

                  Hi ynjiun !
                  Yes, I believe RVC3 will have more CMX (like x2 the amount) compared to RVC2, so it might be possible to run this model on S3🙂
                  Thanks, Erik

                    erik

                    Which RVC3 product would you recommend I try? Are they available yet? Would they run the depthai code I have, or would I need to install a new SDK? Please advise.