Hi Udonnis,
Wow, that's crazy low latency. So if I understand correctly, the minimum framerate for such latency would be around 3,400 FPS: 1/0.3 ms = 3,333.3 FPS. And that doesn't account for any processing time, so the cameras would have to be at least that fast.
So at 65 FPS the frame-interval latency alone would be ~15 ms, and the fastest framerate possible with the current implementation of our driver for the cameras is 118 FPS, which puts the latency there at ~8.5 ms. And that's with no processing at all; it's literally just the delay induced by the framerate. No MIPI latency, no processing latency, no USB latency, etc.
So I think getting to 0.3 ms latency from computer vision is quite difficult. We have seen one custom application do 1,300 FPS object detection with our hardware, which was also limited by the sensor (the customer built their own custom sensor for this). But even there, the sensor latency alone is ~0.77 ms, so it's more than double what you are looking for. And again, that doesn't account for any processing.
So to the question of a faster processor: the processor actually isn't the bottleneck here, the sensor is. For such reaction rates, you'd probably want a 10,000 FPS sensor, so that you could get at least 2-3 samples in that 0.3 ms window. That, or an event-based camera, like here: https://openaccess.thecvf.com/content_CVPR_2019/papers/Wang_Event-Based_High_Dynamic_Range_Image_and_Very_High_Frame_Rate_CVPR_2019_paper.pdf
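For reference, here's the back-of-the-envelope arithmetic I'm using above (just the frame-interval floor, ignoring MIPI/processing/USB delays; function names are mine, nothing to do with our driver API):

```python
# Frame interval (ms) = 1000 / FPS. This is the floor on latency
# contributed by the framerate alone, before any other delays.

def frame_interval_ms(fps: float) -> float:
    """Minimum latency contributed by the framerate alone, in ms."""
    return 1000.0 / fps

def min_fps_for_latency(latency_ms: float) -> float:
    """Framerate needed so one frame period fits in the latency budget."""
    return 1000.0 / latency_ms

print(min_fps_for_latency(0.3))   # ~3333 FPS to fit a 0.3 ms budget
print(frame_interval_ms(118))     # ~8.5 ms at the 118 FPS driver cap
print(frame_interval_ms(1300))    # ~0.77 ms for the 1,300 FPS custom sensor
print(frame_interval_ms(10000))   # 0.1 ms, i.e. ~3 samples per 0.3 ms window
```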
Thoughts?
And thanks regarding the Kickstarter. Which one was yours? Would love to take a look.
Best,
Brandon