OAK-D LR performance issue when using all 3 sensors
davidrochester
Hello, I am running YOLOv4-Tiny on the OAK-D LR camera for spatial detection using YoloSpatialDetectionNetwork. When I use only two sensors (Left and Right for depth with Left for detections, or Left and Center for depth with Center for detections), processing is significantly faster than when I use Left and Right for depth while running detections on the Center camera. I tried scaling the Center camera down to 720p, but it did not improve performance. Can anyone help me figure out what is causing the performance issue and how to fix it so that I can use all 3 sensors? I would like to take advantage of the LR capabilities but am not sure how to proceed.
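For reference, here is a minimal sketch of the slow three-sensor configuration I mean (the socket mapping, ISP scaling, and blob path are my assumptions, so adjust for your setup):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Three color sensors on the OAK-D LR (socket mapping assumed:
# CAM_B = left, CAM_C = right, CAM_A = center).
camLeft = pipeline.create(dai.node.ColorCamera)
camLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
camLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
camLeft.setIspScale(2, 3)  # 1920x1200 -> 1280x800 for stereo

camRight = pipeline.create(dai.node.ColorCamera)
camRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
camRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
camRight.setIspScale(2, 3)

camCenter = pipeline.create(dai.node.ColorCamera)
camCenter.setBoardSocket(dai.CameraBoardSocket.CAM_A)
camCenter.setPreviewSize(416, 416)  # YOLOv4-Tiny input size
camCenter.setInterleaved(False)

# Depth from the wide L+R baseline, aligned to the center camera.
stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)
camLeft.isp.link(stereo.left)
camRight.isp.link(stereo.right)

# Spatial detections on the center stream.
nn = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
nn.setBlobPath("yolov4_tiny.blob")  # hypothetical path
nn.setConfidenceThreshold(0.5)
camCenter.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)
```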
I suspect it's due to depth alignment. When using L+R for depth but running detection on C, the depth map (typically computed in the L frame) has to be re-aligned to the C camera, which is relatively expensive.
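Roughly, the extra work comes down to this one setting (a minimal sketch; the socket mapping is assumed):

```python
import depthai as dai

pipeline = dai.Pipeline()
stereo = pipeline.create(dai.node.StereoDepth)

# Cheap: depth stays in the frame of one of the stereo-pair cameras,
# so detections running on that same camera need no re-projection.
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_B)  # left (assumed socket)

# Expensive: depth computed from the L+R pair has to be re-projected
# onto the center camera's frame on-device before
# YoloSpatialDetectionNetwork can use it.
# stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)  # center (assumed socket)
```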
Thanks
Jaka
jakaskerl So you suggest using only two sensors for this use case? Or is there another way to optimize performance while still taking advantage of all 3 sensors?
davidrochester
Yes, two sensors should do the job just as well. Why do you want to use all 3 sensors? You can use a single sensor for both RGB streaming and depth, so there is really no benefit to using all three at the same time.
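Something along these lines (a minimal sketch, assuming CAM_B/CAM_C are your left/right sockets and a YOLOv4-Tiny blob):

```python
import depthai as dai

pipeline = dai.Pipeline()

# Left camera does double duty: detection input and one side of the stereo pair.
camLeft = pipeline.create(dai.node.ColorCamera)
camLeft.setBoardSocket(dai.CameraBoardSocket.CAM_B)
camLeft.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
camLeft.setIspScale(2, 3)  # 1920x1200 -> 1280x800 for stereo
camLeft.setPreviewSize(416, 416)  # YOLOv4-Tiny input size
camLeft.setInterleaved(False)

camRight = pipeline.create(dai.node.ColorCamera)
camRight.setBoardSocket(dai.CameraBoardSocket.CAM_C)
camRight.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1200_P)
camRight.setIspScale(2, 3)

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setDepthAlign(dai.CameraBoardSocket.CAM_B)  # align to the detection camera: no cross-camera re-projection
camLeft.isp.link(stereo.left)
camRight.isp.link(stereo.right)

nn = pipeline.create(dai.node.YoloSpatialDetectionNetwork)
nn.setBlobPath("yolov4_tiny.blob")  # hypothetical path
nn.setConfidenceThreshold(0.5)
camLeft.preview.link(nn.input)
stereo.depth.link(nn.inputDepth)
```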
Thanks,
Jaka
jakaskerl Ah okay, sorry for the confusion. Thank you!