SharadMaheshwari

  • Hi,
    thanks for your response.
    I tried that and it seems to work. Additionally, I'm now trying to use my own YOLOv6n model. I have the ONNX file, but my conversion to blob is failing.
    I installed the blobconverter Python package (>=1.2.9).
    When I run the CLI command:
    python3 -m blobconverter --onnx-model /path/to/models/model.onnx --shaves 6

    I get the following error:
    requests.exceptions.HTTPError: 400 Client Error: BAD REQUEST for url: https://blobconverter.luxonis.com/compile?version=2022.1&no_cache=False
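    In case it helps to see exactly what I'm doing, here is the Python-API equivalent of that CLI call as I understand it — a sketch assuming blobconverter's documented `from_onnx` helper; the `version` argument is my guess at a way to pin a different OpenVINO release than the 2022.1 shown in the failing URL:

```python
def convert_onnx_to_blob(model_path, shaves=6, version=None):
    """Compile an ONNX model to a .blob via the blobconverter web service.

    Returns the local path of the cached blob. `version` pins the OpenVINO
    release the service uses (the failing URL above shows 2022.1).
    """
    import blobconverter  # lazy import; needs `pip install blobconverter>=1.2.9`

    kwargs = {"model": model_path, "data_type": "FP16", "shaves": shaves}
    if version is not None:
        kwargs["version"] = version
    return blobconverter.from_onnx(**kwargs)


# Example (hits the Luxonis conversion service, so it needs network access):
# blob_path = convert_onnx_to_blob("/path/to/models/model.onnx", shaves=6)
```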

    Can you help please?

    Additionally, are blob files used on rvc2 compatible with rvc4 devices? For rvc2, we used to get both a blob file and a json file. I don't know if the json file is also useful for rvc4.

    The Luxonis web interface for blob conversion doesn't seem to offer rvc4 as an option 🙁
    Thanks!

    • Hi,
      I have an OAK 4 S that I'm trying out for object detection. I ran this detection network example:
      https://rvc4.docs.luxonis.com/software/depthai/examples/detection_network/

      1. This runs at 30fps. Given that rvc4 can handle significantly higher fps, is this performance limited to 30fps by the camera's fps?
      2. Instead of using the live image from the camera, I want to feed my own CV image to the neural network and visualise the results. I can't find any examples in Python that help me connect the neural network input to a node that can take in a static CV image. Can you please help me with that?
      3. Adding on to 2, the detection network takes the camera node for its build method:
      detectionNetwork = pipeline.create(dai.node.DetectionNetwork).build(cameraNode, dai.NNModelDescription("yolov6-nano"))
      If I want to feed static images and not use any camera node, how do I create the detection network (apart from feeding it images, which I don't know how to do, as mentioned in point 2)?
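      To make questions 2 and 3 concrete, this is the kind of thing I'm imagining — an untested sketch that assumes the v3 API exposes `createInputQueue()`/`createOutputQueue()` on node inputs/outputs and `ImgFrame.setCvFrame()`; the archive name, `NN_INPUT_SIZE`, and `image.jpg` are placeholders:

```python
import numpy as np

# Placeholder: must match the network's expected input resolution.
NN_INPUT_SIZE = (512, 288)

def to_planar(frame: np.ndarray) -> np.ndarray:
    """Convert an interleaved HxWx3 image to planar 3xHxW (e.g. BGR888p)."""
    return np.ascontiguousarray(frame.transpose(2, 0, 1))

def run_on_static_image(image_path: str):
    # depthai/cv2 imported lazily so the helper above stays importable anywhere
    import cv2
    import depthai as dai

    pipeline = dai.Pipeline()
    detectionNetwork = pipeline.create(dai.node.DetectionNetwork)
    # Placeholder model setup -- I don't know whether build() can be called
    # without a camera node, which is exactly my question 3.
    detectionNetwork.setNNArchive(dai.NNArchive("yolov6-nano.tar.xz"))

    inputQueue = detectionNetwork.input.createInputQueue()
    outputQueue = detectionNetwork.out.createOutputQueue()
    pipeline.start()

    frame = cv2.resize(cv2.imread(image_path), NN_INPUT_SIZE)
    imgFrame = dai.ImgFrame()
    imgFrame.setCvFrame(frame, dai.ImgFrame.Type.BGR888p)
    inputQueue.send(imgFrame)

    return outputQueue.get().detections
```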

      Thanks,
      Sharad

      • Hi @jakaskerl
        Can you please point me to the C++ code chunk for the passthrough node?
        And how exactly can I make use of the information from the passthrough node?

        Thanks,
        Sharad

      • Hi,
        I just tried RGB888i, RGB888p, BGR888i and BGR888p, and none of them changes anything.
        I'll try looking at the passthrough node implementation in the examples in the meantime.

        Thanks,
        Sharad

      • Hi,
        I'm testing a Yolo pipeline where I need to feed images from the host device to the device for Yolo inference. I've added my code to create the image data structure for a static image, send it, and get the detections back. The detections length is always 0 for some reason. Can someone please help and tell me if I'm feeding the image correctly?
        PS - I've only added the function that feeds the image and gets the results. If needed, I can also add the code where I create the pipeline before using this function.
        Also, the blob and config files I use are fine, because I've successfully used them with my trained model from Python with depthai before.

        Here's a gist with the code (the formatting goes all over the place if I just paste it here):
        https://gist.github.com/thehummingbird/741b7bd4d12ef429cd68dac6e6878db2
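        For reference, here is a sketch of the Python (DepthAI v2 style, XLinkIn) version of this host-to-device feed as I understand it, in case it helps spot what my C++ code does differently — the blob path, stream names, and `NN_INPUT_SIZE` are placeholders, and the values from the json config (anchors, masks, classes) would still need to be applied:

```python
import numpy as np

NN_INPUT_SIZE = (416, 416)  # placeholder: must match the blob's input layer

def build_pipeline(blob_path: str):
    """DepthAI v2-style pipeline: host image in via XLinkIn, detections out."""
    import depthai as dai

    pipeline = dai.Pipeline()

    xin = pipeline.create(dai.node.XLinkIn)
    xin.setStreamName("frame")

    nn = pipeline.create(dai.node.YoloDetectionNetwork)
    nn.setBlobPath(blob_path)
    nn.setConfidenceThreshold(0.5)
    # anchors / masks / class count would be set here from the json config

    xout = pipeline.create(dai.node.XLinkOut)
    xout.setStreamName("det")

    xin.out.link(nn.input)
    nn.out.link(xout.input)
    return pipeline

def detect(device, frame):
    """Send one BGR frame and return the detections list (may be empty)."""
    import cv2
    import depthai as dai

    resized = cv2.resize(frame, NN_INPUT_SIZE)
    img = dai.ImgFrame()
    # The network expects planar data; sending interleaved bytes as-is
    # silently feeds it garbage and can explain zero detections.
    img.setData(resized.transpose(2, 0, 1).flatten())
    img.setType(dai.ImgFrame.Type.BGR888p)
    img.setWidth(NN_INPUT_SIZE[0])
    img.setHeight(NN_INPUT_SIZE[1])

    device.getInputQueue("frame").send(img)
    return device.getOutputQueue("det").get().detections
```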