Performance and Usage of Detection Network for Oak 4 S (RVC 4) in Python

Hi,
I have an OAK 4 S that I'm trying out for object detection. I ran this detection network example:
https://rvc4.docs.luxonis.com/software/depthai/examples/detection_network/

1. This runs at 30 FPS. Given that RVC4 can handle significantly higher FPS, is the pipeline limited to 30 FPS by the camera?
2. Instead of using the live image from the camera, I want to feed my own CV image to the neural network and visualise the results. I can't find any examples in Python that help me connect the neural network input to a node that can take in a static CV image. Can you please help me with that?
3. Adding on to 2, the detection network takes the camera node in its build method:
detectionNetwork = pipeline.create(dai.node.DetectionNetwork).build(cameraNode, dai.NNModelDescription("yolov6-nano"))
If I want to feed static images and not use any camera node, how do I create the detection network? (Apart from actually feeding it images, which I don't know how to do, as mentioned in point 2.)

Thanks,
Sharad

    SharadMaheshwari This runs at 30 FPS. Given that RVC4 can handle significantly higher FPS, is the pipeline limited to 30 FPS by the camera?

    I think the max FPS for the IMX586 is 30 FPS at the moment, so yes, that is the limitation.

    SharadMaheshwari 2. Instead of using the live image from the camera, I want to feed my own CV image to the neural network and visualise the results. I can't find any examples in Python that help me connect the neural network input to a node that can take in a static CV image. Can you please help me with that?

    For a static image: luxonis/depthai-core/blob/v3_develop/examples/python/Benchmark/benchmark_nn.py - you can use it to measure the capabilities of the NN model.

    To stream a video (you can ignore the remoteConnector part):
    luxonis/depthai-core/blob/v3_develop/examples/python/DetectionNetwork/detection_network_replay.py
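
    Putting those together, roughly something like this should work for feeding a static cv2 image without any camera node (an untested sketch - the model name, input size, frame type and the setNNArchive route are assumptions here; the examples above are authoritative). This also covers your point 3: instead of build(cameraNode, ...), you create the node and give it a model archive directly.

    import cv2
    import depthai as dai

    pipeline = dai.Pipeline()

    # Create the DetectionNetwork without a camera node and point it at a
    # model archive instead of calling build(cameraNode, ...).
    detectionNetwork = pipeline.create(dai.node.DetectionNetwork)
    detectionNetwork.setNNArchive(
        dai.NNArchive(dai.getModelFromZoo(dai.NNModelDescription("yolov6-nano")))
    )

    # Host-side queues replace the camera link.
    inputQueue = detectionNetwork.input.createInputQueue()
    outputQueue = detectionNetwork.out.createOutputQueue()

    pipeline.start()

    frame = cv2.imread("image.jpg")
    frame = cv2.resize(frame, (512, 288))  # resize to your model's input shape

    imgFrame = dai.ImgFrame()
    imgFrame.setCvFrame(frame, dai.ImgFrame.Type.BGR888p)

    inputQueue.send(imgFrame)
    for det in outputQueue.get().detections:
        print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)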

    Thanks,
    Jaka

    Hi,
    thanks for your response.
    I tried that and it seems to work. Additionally, I'm now trying to use my own YOLOv6n model. I have the ONNX file, but my conversion to blob is failing.
    I installed the blobconverter Python package (>=1.2.9).
    When I run the CLI command:
    python3 -m blobconverter --onnx-model /path/to/models/model.onnx --shaves 6

    I get the following error:
    requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2022.1&no_cache=False

    Can you help please?

    Additionally, are blob files used on RVC2 compatible with RVC4 devices? For RVC2, we used to get both a blob file and a JSON file. I don't know if the JSON file is also needed on RVC4.

    The Luxonis web interface for blob conversion doesn't seem to have RVC4 as an option 🙁
    Thanks!

      ErenTa
      RVC4 uses a different VPU, so it doesn't use blobs. Blobconverter therefore won't work for OAK4 devices.

      Not sure if local conversion is possible atm cc @KlemenSkrlj

      Thanks,
      Jaka

      Hi @ErenTa ,
      We do have a package called ModelConverter which you can use locally to convert ONNX models to .dlc files that run on RVC4 devices. I'm linking here the PyPI package, its repository (which has a rather extensive README), and the documentation. If you have any additional questions, feel free to ask.
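
      Roughly, local usage looks like this (treat the package name, subcommand and flags as placeholders and check the README for the exact invocation):

      pip install modelconv
      modelconverter convert rvc4 --path <your-config.yaml>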

      Best,
      Klemen

        8 days later

        KlemenSkrlj I've converted a YOLOv7-tiny model using hub.luxonis.com. When running inference with the NN archive and DetectionNetwork, it gives an "Unsupported parser: YOLOExtendedParser" error.

        @ErenTa We currently have two different parsers for YOLO models; the difference between them is which YOLO versions they can parse. The parser integrated natively into DepthAI doesn't support YOLOv7 yet, but we do already support it inside the depthai-nodes package (repo and pypi). We also have documentation here on the inference part of the pipeline using both the depthai and depthai-nodes packages. So in your case you could do something like described here, and you should get the parsed model output. Hope this helps. For concreteness, there is a rough sketch below.
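
        A rough sketch of that pipeline (the import path and build signature may vary between depthai-nodes versions, and the archive path is a placeholder):

        import depthai as dai
        from depthai_nodes.node import ParsingNeuralNetwork

        pipeline = dai.Pipeline()
        camera = pipeline.create(dai.node.Camera).build()

        # ParsingNeuralNetwork wraps NeuralNetwork plus the matching host-side
        # parser (here YOLOExtendedParser), chosen from the NN archive metadata.
        nnArchive = dai.NNArchive("path/to/your/yolov7-tiny-archive.tar.xz")
        nn = pipeline.create(ParsingNeuralNetwork).build(camera, nnArchive)

        outputQueue = nn.out.createOutputQueue()
        pipeline.start()
        while pipeline.isRunning():
            detections = outputQueue.get().detections
            print([(d.label, d.confidence) for d in detections])
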
        Best,
        Klemen