Hi,
This is a very generic question, probably the most basic one, but it should help me understand the full power of DepthAI.

Let's say I have a program which currently runs on a host CPU, an Intel NUC for example, and performs decently.

I would like to run the same program on an SBC, a Raspberry Pi for example. The program runs, but obviously doesn't give the same performance as the NUC.

So, the question is: can hooking up a DepthAI module, an OAK-1 for example, give great performance on a low-power host?

Ideally, what should I use from DepthAI to port my normal Python program, which opens a webcam using OpenCV, runs some detections, and then shows the inference results using imshow?
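
For concreteness, the program has roughly this shape (just a sketch; the model-specific calls are placeholders for my own code):

import cv2

cap = cv2.VideoCapture(0)             # open the webcam
model = load_my_model()               # placeholder: load my pre-trained model

while True:
    ok, frame = cap.read()
    if not ok:
        break
    detections = model.detect(frame)  # placeholder: run inference on the frame
    draw_boxes(frame, detections)     # placeholder: draw the results
    cv2.imshow("detections", frame)
    if cv2.waitKey(1) == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()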

To put it another way: do we need to use DepthAI APIs and purpose-written programs to make use of the power of DepthAI, or can an existing program be ported to be compatible with DepthAI and unlock that performance?

Thanks in advance for answering this question.

Best Regards,
Ram

    Hi ramkunchur,

    Thanks for the interest. Our APIs do need to be used, as the processing is actually done on DepthAI itself. For example, one can use DepthAI with an ESP32 and no other device at all, and an ESP32 only has 0.5 MB of RAM. That gives an idea of how much of the processing is done on DepthAI.

    But DepthAI is an embedded system, so it does not run OpenCV. Instead it runs our own Gen2 pipeline builder system, documented here:
    https://docs.luxonis.com/projects/api/en/latest/.
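
    To give a flavour of it, here is a minimal sketch with the depthai Python package (assuming a recent 2.x release). It just streams the color camera's preview to the host, where OpenCV is only used for display:

    import cv2
    import depthai as dai

    pipeline = dai.Pipeline()

    cam = pipeline.create(dai.node.ColorCamera)   # on-device color camera node
    cam.setPreviewSize(300, 300)

    xout = pipeline.create(dai.node.XLinkOut)     # stream frames back to the host
    xout.setStreamName("preview")
    cam.preview.link(xout.input)

    with dai.Device(pipeline) as device:          # upload and run the pipeline on the device
        q = device.getOutputQueue("preview")
        while True:
            frame = q.get().getCvFrame()          # ImgFrame -> numpy array on the host
            cv2.imshow("preview", frame)
            if cv2.waitKey(1) == ord("q"):
                break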

    It is also possible to run only DepthAI - and no other system at all.

    Thanks,
    Brandon

    Hi @Brandon,

    Thanks so much for your quick response and for providing such detailed information.

    Perhaps I didn't frame my question correctly; let me explain what I am trying to achieve.

    I have a program which uses a pre-trained model. Using OpenCV, I open the camera, read it frame by frame, perform inference with my model, and then show the results using OpenCV's imshow method.

    When I run this program on an Intel NUC with an i3 processor, I get very good performance (CPU only).

    When I run the same program on an Nvidia Jetson Nano, performance is not as good as on the NUC; although the Jetson Nano has GPU cores, the libraries in my program only use the CPU.

    So what I would like to achieve with DepthAI is to run the same program from a host (say the Jetson Nano) connected to an OAK-1, and harness the power of DepthAI to get great performance out of it.

    For this, as per my understanding, will the following work?

    1. Convert my model to a blob so it can run on DepthAI
    2. Convert my program to use the DepthAI APIs instead of OpenCV to open the camera, get the prediction results, etc.
    3. Show the resulting frames using imshow.

    Can you please share an example in which the above steps were used to port a normal Python program to use the power of DepthAI?

    I'll await your response. I hope I have explained properly what I am trying to achieve.

    Thanks & Best Regards,
    Ram

      Hello ramkunchur,
      You could either use the DepthAI device with OpenVINO's inference engine (PR example here) or use it with our depthai library. If you plan on using a webcam to capture the image, the first option would be required, and the PR above will be a perfect example for you: it takes webcam frames, sends them to the device, and gets the inference results back to the host.
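
      Roughly, the host-side loop in that first approach has this shape (a sketch assuming a 2021-era OpenVINO Python API; the model files and the device name are placeholders - see the PR for the exact setup):

      import cv2
      import numpy as np
      from openvino.inference_engine import IECore

      ie = IECore()
      net = ie.read_network(model="model.xml", weights="model.bin")   # placeholder model files
      input_name = next(iter(net.input_info))
      _, _, h, w = net.input_info[input_name].input_data.shape
      exec_net = ie.load_network(network=net, device_name="MYRIAD")   # placeholder device name - see the PR

      cap = cv2.VideoCapture(0)                                       # host webcam
      while True:
          ok, frame = cap.read()
          if not ok:
              break
          blob = cv2.resize(frame, (w, h)).transpose(2, 0, 1)[np.newaxis, ...]
          results = exec_net.infer(inputs={input_name: blob})         # inference runs on the device
          # ...parse `results` according to your model's output layout, draw, then display
          cv2.imshow("detections", frame)
          if cv2.waitKey(1) == ord("q"):
              break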

      If you plan to use the OAK-1's color camera to capture the frames, you would need to do steps 1-3 as you mentioned yourself. I don't believe we have a step-by-step tutorial on how to convert a Python script into one that uses the DepthAI API, but it's fairly straightforward. To compile the blob, check the tutorial here. And to create a simple pipeline that takes frames from the color camera and does inference on them, here's a tutorial (it uses the mobilenet-ssd model).
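
      To give you an idea of steps 2-3, that kind of pipeline boils down to something like this (a rough sketch; the blob path is a placeholder for your own compiled model, and detection coordinates are normalized to the 300x300 preview):

      import cv2
      import depthai as dai

      pipeline = dai.Pipeline()

      cam = pipeline.create(dai.node.ColorCamera)               # OAK-1 color camera
      cam.setPreviewSize(300, 300)                              # mobilenet-ssd input size
      cam.setInterleaved(False)

      nn = pipeline.create(dai.node.MobileNetDetectionNetwork)  # runs the blob on-device
      nn.setBlobPath("mobilenet-ssd.blob")                      # placeholder: your compiled blob
      nn.setConfidenceThreshold(0.5)
      cam.preview.link(nn.input)

      xout_rgb = pipeline.create(dai.node.XLinkOut)
      xout_rgb.setStreamName("rgb")
      cam.preview.link(xout_rgb.input)

      xout_nn = pipeline.create(dai.node.XLinkOut)
      xout_nn.setStreamName("nn")
      nn.out.link(xout_nn.input)

      with dai.Device(pipeline) as device:
          q_rgb = device.getOutputQueue("rgb")
          q_nn = device.getOutputQueue("nn")
          while True:
              frame = q_rgb.get().getCvFrame()
              for det in q_nn.get().detections:                 # detections are normalized [0, 1]
                  x1, y1 = int(det.xmin * 300), int(det.ymin * 300)
                  x2, y2 = int(det.xmax * 300), int(det.ymax * 300)
                  cv2.rectangle(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)
              cv2.imshow("OAK-1", frame)                        # step 3: imshow on the host
              if cv2.waitKey(1) == ord("q"):
                  break

      The camera capture and the inference both happen on the OAK-1; the host only draws the boxes and calls imshow.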

      I hope this helps 🙂 Thanks, Erik

        Hi ramkunchur,

        Something else you could do is optimize the libraries you are using in your code to use GPU acceleration instead of relying on the Jetson's CPU, to get more performance out of the host. I know you can compile a version of OpenCV with CUDA support, or find a prebuilt one. Here is a build script that builds OpenCV for Tegra (Nano, NX, AGX, etc.) with cuDNN support:

        https://github.com/mdegans/nano_build_opencv
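
        After building and installing it, a quick sanity check from Python confirms the CUDA modules actually made it in:

        import cv2

        print(cv2.getBuildInformation())             # look for "NVIDIA CUDA: YES" in the output
        print(cv2.cuda.getCudaEnabledDeviceCount())  # > 0 means OpenCV can see the GPU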

        I hope you find this helpful,
        Nick