I have the .blob file of my model but I can't run it on the camera to make detections

  • RicardoGuadalupeGomezMartnez

    The MobileNet tutorial is currently outdated. We are working on a refactor and its migration to PyTorch, with an export process that will make it compatible with MobileNetDetectionNetwork. It should be released in the coming days.

    erik My labels are correct, even in the predictions. Only when I convert my model to .blob are they not taken into account; if I change the labels in the JSON, the changes are not applied.

    @RicardoGuadalupeGomezMartnez

    I would need to see the code. If you are not reading the JSON in your script, then you need to change the labelMap in the code. If you are using the JSON, it will use those labels. For example, if you run it with main_api.py from here and specify the modified JSON with -c path/to/config.json and -m path/to/model.blob, it will read the labels from the JSON directly. Labels are not stored in the .blob and must be passed through the code.
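
    A minimal sketch of what "labels come from the JSON, not the .blob" means in practice. The config structure below (a `mappings.labels` list) mirrors the layout the tutorial exports, but the exact file contents here are illustrative, not your actual config:

    ```python
    import json

    # Illustrative config; a real file would be loaded with open("config.json").
    # The "mappings" -> "labels" layout is the one the tutorial's export uses.
    config = json.loads('{"mappings": {"labels": ["background", "cat", "dog"]}}')
    label_map = config["mappings"]["labels"]

    def label_for(class_id: int) -> str:
        """Translate a raw class index from the NN output into its name."""
        if 0 <= class_id < len(label_map):
            return label_map[class_id]
        return f"class_{class_id}"  # fallback for indices outside the map

    print(label_for(2))  # -> dog
    ```

    If you edit the labels in the JSON and still see the old names, the script is almost certainly using a hard-coded labelMap list instead of reading the file.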

      a month later

      Matija Understood. I have been trying to convert it to a blob through this page http://blobconverter.luxonis.com/ and it always gives me an error:

      ========== Converting savedmodel to IR (FP16)
      Conversion command: /app/venvs/venv2022_1/bin/python -- /app/venvs/venv2022_1/bin/mo --framework=tf --data_type=FP16 --output_dir=/tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16 --model_name=savedmodel --input= --data_type=FP16 '--mean_values=[127.5,127.5,127.5]' '--scale_values=[255,255,255]' --input_model=/tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16/savedmodel.pb
      Model Optimizer arguments:
      Common parameters:
        - Path to the Input Model: /tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16/savedmodel.pb
        - Path for generated IR: /tmp/blobconverter/92db4f7da4e84c2cb280f6df8b659280/savedmodel/FP16
        - IR output name: savedmodel
        - Log level: ERROR
        - Batch: Not specified, inherited from the model
        - Input layers: Not specified, inherited from the model
        - Output layers: Not specified, inherited from the model
        - Input shapes: Not specified, inherited from the model
        - Source layout: Not specified
        - Target layout: Not specified
        - Layout: Not specified
        - Mean values: [127.5,127.5,127.5]
        - Scale values: [255,255,255]
        - Scale factor: Not specified
        - Precision of IR: FP16
        - Enable fusing: True
        - User transformations: Not specified
        - Reverse input channels: False
        - Enable IR generation for fixed input shape: False
        - Use the transformations config file: None
      Advanced parameters:
        - Force the usage of legacy Frontend of Model Optimizer for model conversion into IR: False
        - Force the usage of new Frontend of Model Optimizer for model conversion into IR: False
      TensorFlow specific parameters:
        - Input model in text protobuf format: False
        - Path to model dump for TensorBoard: None
        - List of shared libraries with TensorFlow custom layers implementation: None
        - Update the configuration file with input/output node names: None
        - Use configuration file used to generate the model with Object Detection API: None
        - Use the config file: None
      OpenVINO runtime found in: /opt/intel/openvino2022_1/python/python3.8/openvino
      OpenVINO runtime version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
      Model Optimizer version: 2022.1.0-7019-cdb9bec7210-releases/2022/1
      FAILED: savedmodel

      Yes. If you check mo.py in the tutorial, you can see that many flags for the TensorFlow operations and so on are passed to it. The online blobconverter can cover this to some degree (the path would have to be changed slightly), but it does not let you modify those flags the way the cell above mo.py does.
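
      As a sketch of what converting locally instead of through the web form could look like: the flag names below are real OpenVINO 2022.1 Model Optimizer and compile_tool options, but the file names (frozen graph, transformations config, pipeline config) are placeholders you would replace with your own, and the whole invocation should be checked against the tutorial's mo.py cell:

      ```
      # Convert the TF model to OpenVINO IR, with the extra flags the
      # online converter does not expose (file names are placeholders):
      mo --framework=tf \
         --input_model=frozen_inference_graph.pb \
         --transformations_config=ssd_v2_support.json \
         --tensorflow_object_detection_api_pipeline_config=pipeline.config \
         --reverse_input_channels \
         --data_type=FP16 \
         --output_dir=./ir

      # Then compile the IR to a .blob for the Myriad X:
      compile_tool -m ir/frozen_inference_graph.xml \
         -ip U8 -d MYRIAD \
         -VPU_NUMBER_OF_SHAVES 6 -VPU_NUMBER_OF_CMX_SLICES 6
      ```

      Running mo yourself is what makes the per-model flags editable; the online blobconverter only exposes a fixed set of fields.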

      9 months later