• DepthAI-v2
  • Unable to get Yolov6n detection for static images (C++)

Hi,
I'm testing a YOLO pipeline where I need to feed images from the host to the device for YOLO inference. I've added my code to create the image data structure for a static image, send it, and get the detections back. The detections length is always 0 for some reason. Can someone please help and tell me whether I'm feeding the image correctly?
PS - I've only added the function that feeds the image and gets the results. If needed, I can also add the code where I create the pipeline before using this function.
Also, the blob and config files themselves should be fine: I've successfully used them with my trained model in Python with DepthAI before.

Here's a gist with the code (the formatting gets mangled if I paste it here directly); a trimmed-down sketch of the flow is below the link.
https://gist.github.com/thehummingbird/741b7bd4d12ef429cd68dac6e6878db2
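
For anyone who doesn't want to open the gist, this is the general shape (a minimal sketch, not my exact code; the stream names, blob path, class count, and 416x416 input size are placeholders for whatever your setup uses):

```cpp
// Minimal sketch of the flow: XLinkIn -> YoloDetectionNetwork -> XLinkOut,
// then one static image sent from the host. "inFrame"/"detections", the blob
// path, the class count, and the 416x416 size are placeholders.
#include <cstdint>
#include <cstdio>
#include <memory>
#include <vector>
#include <opencv2/opencv.hpp>
#include "depthai/depthai.hpp"

// Interleaved (HWC) -> planar (CHW); a BGR888p frame must hold planar data.
static std::vector<std::uint8_t> toPlanar(const cv::Mat& mat) {
    std::vector<std::uint8_t> data(mat.total() * mat.channels());
    const size_t plane = mat.total();
    for(size_t i = 0; i < plane; i++) {
        for(int c = 0; c < mat.channels(); c++) {
            data[c * plane + i] = mat.data[i * mat.channels() + c];
        }
    }
    return data;
}

int main() {
    dai::Pipeline pipeline;

    auto xin = pipeline.create<dai::node::XLinkIn>();
    xin->setStreamName("inFrame");

    auto nn = pipeline.create<dai::node::YoloDetectionNetwork>();
    nn->setBlobPath("yolov6n.blob");   // placeholder path
    nn->setConfidenceThreshold(0.5f);
    nn->setNumClasses(80);             // placeholder, match your config json
    nn->setCoordinateSize(4);
    nn->setIouThreshold(0.5f);

    auto xout = pipeline.create<dai::node::XLinkOut>();
    xout->setStreamName("detections");

    xin->out.link(nn->input);
    nn->out.link(xout->input);

    dai::Device device(pipeline);
    auto inQ = device.getInputQueue("inFrame");
    auto detQ = device.getOutputQueue("detections");

    // Load the static image and resize it to the blob's input resolution.
    cv::Mat bgr = cv::imread("test.jpg");          // placeholder image
    cv::Mat resized;
    cv::resize(bgr, resized, cv::Size(416, 416));  // placeholder input size

    auto img = std::make_shared<dai::ImgFrame>();
    img->setType(dai::ImgFrame::Type::BGR888p);
    img->setWidth(resized.cols);
    img->setHeight(resized.rows);
    img->setData(toPlanar(resized));
    inQ->send(img);

    // This is where I always get back an empty vector.
    auto det = detQ->get<dai::ImgDetections>();
    std::printf("detections: %zu\n", det->detections.size());
    return 0;
}
```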

Hi @SharadMaheshwari
Before I test it, could you try setting the host-side dai::ImgFrame type to RGB888i (interleaved)? Lmk if it helps.
Also consider sending the YOLO node's passthrough output back to the host so you can view the frame.
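
Something along these lines (a rough sketch; "inQ" stands in for whatever XLinkIn queue your pipeline actually uses, and w/h are your blob's input size):

```cpp
// Rough sketch: send one interleaved-RGB frame from the host.
#include <cstdint>
#include <memory>
#include <vector>
#include <opencv2/opencv.hpp>
#include "depthai/depthai.hpp"

void sendInterleaved(const std::shared_ptr<dai::DataInputQueue>& inQ,
                     const cv::Mat& bgr, int w, int h) {
    cv::Mat resized, rgb;
    cv::resize(bgr, resized, cv::Size(w, h));
    cv::cvtColor(resized, rgb, cv::COLOR_BGR2RGB);  // OpenCV mats are BGR
    auto img = std::make_shared<dai::ImgFrame>();
    img->setType(dai::ImgFrame::Type::RGB888i);     // interleaved, not planar
    img->setWidth(w);
    img->setHeight(h);
    img->setData(std::vector<std::uint8_t>(
        rgb.data, rgb.data + rgb.total() * rgb.channels()));
    inQ->send(img);
}
```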

Thanks,
Jaka

Hi,
I just tried RGB888i, RGB888p, BGR888i, and BGR888p, and none of them made a difference.
I'll look at how the passthrough is used in the examples in the meantime.

Thanks,
Sharad

Hi @jakaskerl
Can you please point me to the C++ code for the passthrough node?
And how exactly can I make use of the information from the passthrough node?

Thanks,
Sharad

Hi @SharadMaheshwari
Sorry for not correcting this sooner. There is no "passthrough node" per se. What I meant was using the NN node's output called passthrough (an output on the node, just like NN.out) and linking it back to the host, so you can view the image that is simply "passed through" the NN node without being changed in any way. This will tell you whether the frame is being received correctly or whether something is off.
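
The wiring looks roughly like this (the stream name "pass" is just an example, pipeline/nn/device refer to the objects from your existing setup, and getCvFrame() needs depthai-core built with OpenCV support):

```cpp
// Pipeline side: route the NN's passthrough output back to the host.
auto xoutPass = pipeline.create<dai::node::XLinkOut>();
xoutPass->setStreamName("pass");
nn->passthrough.link(xoutPass->input);

// Host side: fetch the passthrough frame and display it.
// If this image looks wrong (wrong colors, garbage, wrong size),
// the frame you are sending is not what the NN expects.
auto passQ = device.getOutputQueue("pass");
auto passFrame = passQ->get<dai::ImgFrame>();
cv::imshow("passthrough", passFrame->getCvFrame());
cv::waitKey(0);
```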

Thanks,
Jaka