Hi Team,
I’m reaching out for assistance with deploying a custom MobileNet-SSD object detection model onto the OAK-D-CM4 device. While I have successfully trained the model, I’ve been encountering challenges during deployment. Most of the documentation and community examples I’ve found seem to be deprecated or result in errors.
Here’s a brief summary of my current setup:
Model Architecture: MobileNet-SSD (trained using TensorFlow 1)
Exported Format: Converted to TFLite, then to OpenVINO IR using mo.py (with --add_postprocessing_op=false)
Device: OAK-D-CM4
Deployment Toolchain: DepthAI (Python API)
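For context, this is roughly how I am constructing the pipeline on the host. It is only a minimal sketch of my current attempt; the blob filename, the 300x300 preview size, and the use of the generic NeuralNetwork node (rather than a detection node) are specific to my setup and may well not be the recommended approach:

```python
import depthai as dai

# Minimal sketch of the pipeline I am building. The blob path and the
# 300x300 input size are assumptions from my MobileNet-SSD training config.
pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(300, 300)          # match the network input resolution
cam.setInterleaved(False)
cam.setColorOrder(dai.ColorCameraProperties.ColorOrder.BGR)

# Using the generic NeuralNetwork node because my blob was exported without
# the SSD postprocessing op, so I expect to decode detections on the host.
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("mobilenet_ssd_custom.blob")   # my converted blob (hypothetical name)
cam.preview.link(nn.input)

xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue(name="nn", maxSize=4, blocking=False)
    while True:
        nn_data = q.get()                     # dai.NNData with raw tensors
        print(nn_data.getAllLayerNames())     # inspecting the output layers
```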
My understanding is that MobileNet-SSD models trained using TensorFlow 2 cannot be deployed on OAK devices. Please correct me if I’m mistaken.
Also, when attempting conversion with --add_postprocessing_op=True, I encountered model conversion issues. Hence, I proceeded without the postprocessing op. Please let me know if there's a better approach here.
I understand that models without built-in postprocessing (i.e., converted with --add_postprocessing_op=false) require host-side decoding of the detection outputs. If there are up-to-date, working examples or recommendations for implementing the required postprocessing logic on the host, I would greatly appreciate them.
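For what it's worth, this is the kind of host-side decoding I have in mind. It is only a rough sketch: the assumption that the blob emits two raw tensors (box encodings and class logits), the anchor array, and the TF Object Detection API default scale factors are all things I have not confirmed against my exported model:

```python
import numpy as np

# Sketch of standard TF1 MobileNet-SSD decoding, assuming two raw outputs:
# box encodings [N, 4] as (ty, tx, th, tw) and class logits [N, num_classes + 1].
# The anchors array [N, 4] as (ycenter, xcenter, h, w) would need to be
# generated exactly as in my training pipeline config; here it is a placeholder.
SCALE_Y, SCALE_X, SCALE_H, SCALE_W = 10.0, 10.0, 5.0, 5.0  # TF OD API defaults

def decode_ssd(raw_boxes, raw_scores, anchors, conf_thresh=0.5, iou_thresh=0.45):
    ty, tx, th, tw = raw_boxes[:, 0], raw_boxes[:, 1], raw_boxes[:, 2], raw_boxes[:, 3]
    ay, ax, ah, aw = anchors[:, 0], anchors[:, 1], anchors[:, 2], anchors[:, 3]

    # Undo the box encoding to get normalized centre/size, then corner coordinates.
    ycenter = ty / SCALE_Y * ah + ay
    xcenter = tx / SCALE_X * aw + ax
    h = np.exp(th / SCALE_H) * ah
    w = np.exp(tw / SCALE_W) * aw
    boxes = np.stack([ycenter - h / 2, xcenter - w / 2,
                      ycenter + h / 2, xcenter + w / 2], axis=-1)

    # Class scores: sigmoid over logits, skipping the background column (index 0).
    scores = 1.0 / (1.0 + np.exp(-raw_scores))
    class_ids = np.argmax(scores[:, 1:], axis=-1) + 1
    confidences = scores[np.arange(len(scores)), class_ids]

    keep = confidences >= conf_thresh
    return _nms(boxes[keep], confidences[keep], class_ids[keep], iou_thresh)

def _nms(boxes, scores, classes, iou_thresh):
    # Plain greedy non-maximum suppression over [ymin, xmin, ymax, xmax] boxes.
    order = np.argsort(-scores)
    kept = []
    while order.size:
        i = order[0]
        kept.append(i)
        if order.size == 1:
            break
        rest = order[1:]
        yx1 = np.maximum(boxes[i, :2], boxes[rest, :2])
        yx2 = np.minimum(boxes[i, 2:], boxes[rest, 2:])
        inter = np.prod(np.clip(yx2 - yx1, 0, None), axis=1)
        area_i = np.prod(boxes[i, 2:] - boxes[i, :2])
        area_r = np.prod(boxes[rest, 2:] - boxes[rest, :2], axis=1)
        iou = inter / (area_i + area_r - inter + 1e-9)
        order = rest[iou < iou_thresh]
    return boxes[kept], scores[kept], classes[kept]
```

My plan was to pull the two raw tensors with nn_data.getLayerFp16(...) and reshape them before calling this, but I am not certain which layer names (or even which output ordering) survive the conversion, which is part of what I am asking about.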
Additionally, if MobileNet-SSD is no longer a good choice, are there other lightweight, non-YOLO models you would recommend for real-time object detection on OAK devices?
Looking forward to any guidance or updated references you can provide.
Best regards,
Nileena