I am trying to run YOLOv8n/s on my host (Jetson Nano), using the center RGB camera of the OAK-D Lite as the source for object detection and the OAK-D Lite's stereo pair for depth. I have tried to find example code online for how to do this, but I could not find any. I also tried going through the Luxonis documentation, but that did not help me much as I didn't know which functions to use. How would I go about doing this?

Thanks in advance to anyone who can help!

Hi @ArjunGoray1
It's probably easier to stream both depth and RGB (aligned and synced) from the OAK to the Jetson, then run your YOLO model and compute the spatial coordinates on the Jetson as well.
You will get a bounding box for each detected object, which you can place over the aligned depth frame you received from the OAK and compute spatials as done here.
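To illustrate the second step, here is a minimal host-side sketch of computing spatial (X, Y, Z) coordinates from a YOLO bounding box placed over an aligned depth frame. This mirrors the approach of Luxonis' calc-spatials-on-host example but is my own simplified version: the function name, the default HFOV value (taken from typical OAK camera specs, verify against your device's calibration), and the median-based depth averaging are assumptions, not official API calls. It only needs NumPy, so you can run it on the Jetson against the depth frames you stream from the OAK:

```python
import numpy as np

def spatials_from_bbox(depth_frame, bbox, hfov_deg=72.9):
    """Estimate (X, Y, Z) in mm for a bbox over an RGB-aligned depth frame.

    depth_frame: 2D array of depth values in millimeters (0 = invalid).
    bbox: (xmin, ymin, xmax, ymax) in pixel coordinates on the depth frame.
    hfov_deg: horizontal field of view of the depth stream (assumed value,
              check your device's calibration).
    """
    xmin, ymin, xmax, ymax = bbox
    roi = depth_frame[ymin:ymax, xmin:xmax]

    valid = roi[roi > 0]  # drop pixels with no depth data
    if valid.size == 0:
        return None  # no usable depth inside the box

    # Median is more robust than mean against outliers / background bleed
    z = float(np.median(valid))

    h, w = depth_frame.shape
    cx = (xmin + xmax) / 2.0  # bbox center in pixels
    cy = (ymin + ymax) / 2.0

    # Pinhole model: derive focal length (in pixels) from the HFOV
    focal_px = w / (2.0 * np.tan(np.deg2rad(hfov_deg) / 2.0))

    # Project the pixel offset from the image center out to depth z
    x = z * (cx - w / 2.0) / focal_px
    y = -z * (cy - h / 2.0) / focal_px  # negate so +Y points up

    return x, y, z
```

You would call this once per detection, after scaling the YOLO bounding box from the RGB resolution to the depth frame's resolution (they must refer to the same pixel grid, which is why the streams should be aligned on-device).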

Thanks,
Jaka