I would like to track the position of a ball on a contrasted background and get its center position. This doesn't really require AI. I was wondering if the OAK-1 can do this?
Another question: can the OAK-1's IMX378 be swapped with the OV9282, which has a global shutter?
So I think it would be possible to do this with the OAK-1 or OAK-D, but I think it would be overkill. Actually, I'd recommend OpenMV for this unless you need the additional capability of DepthAI.
OpenMV also has a global shutter option that could help with this.
So OpenMV actually has a pre-canned example for this. On our platform we will eventually support algorithmic blob-tracking like this. But it's not a priority at this time. (And when we do it, we'll support depth-blob tracking as well.)
It is also possible to implement such a thing yourself on OAK-1 using a technique like here or here.
So it will actually be a lot harder on OAK-1/our platform at this time. The only way I'd say it may make sense to go with OAK-1 or OAK-D for this is if you want to start with such tracking but later add depth sensing, 4K video encoding, or other functions like edge filtering/feature tracking.
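The classical, non-NN route hinted at above is simple on a well-contrasted frame: threshold and take the centroid of the bright pixels. A minimal NumPy sketch (the threshold value and frame size below are illustrative, not from the thread):

```python
import numpy as np

def ball_center(gray, thresh=128):
    """Return the (x, y) centroid of pixels brighter than `thresh`.

    Assumes a single bright ball on a darker background, so the
    thresholded mask contains essentially one blob. Returns None
    if no pixel exceeds the threshold.
    """
    ys, xs = np.nonzero(gray > thresh)
    if xs.size == 0:
        return None
    return float(xs.mean()), float(ys.mean())

# Synthetic test frame: dark background with a bright 10x10 region.
frame = np.zeros((768, 1024), dtype=np.uint8)
frame[380:390, 500:510] = 230
print(ball_center(frame))  # -> (504.5, 384.5)
```

Averaging the mask coordinates gives a sub-pixel center, which matters when checking frame-to-frame whether the ball has moved.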
Thanks for your detailed reply, and happy to hear there are more features coming to the OAK. I have been looking at the OpenMV and it really looks like a great candidate. I do like the extra processing power and also the finished, robust packaging of the OAK-1. Therefore I was more interested in starting on this platform, which may address more applications in the future. But I guess it's better to focus on the OpenMV for this task, as you suggest.
Out of curiosity, I still wonder if the OAK-1's camera can be swapped with the OV9282 (color global shutter) or OV9281 (mono global shutter)?
Hello fredjh,
unfortunately, the two cameras you mentioned have connectors that are incompatible with the IMX378 on the OAK-1.
You can, however, swap the mono cameras on the OAK-D with these, since they use the same connector.
Thanks, Erik
I see. The mono camera on the OAK-D is fine if I have to go for that; it has about 1 Mpixel. OpenMV only comes with 0.3 Mpixels. So now I have a hardware issue with getting enough resolution on the OpenMV platform.
And I hear there are no pre-canned examples for blob tracking on OAK cameras, so it would be a lot harder?
Hello fredjh,
you could always downscale/crop frames from the color camera before processing them further.
We don't support that out-of-the-box, but you could train a simple NN that would detect/track this "blob". Here are several ML training demos, which also include instructions on how to deploy the result to a DepthAI device.
Thanks, Erik
So I did buy and test the OpenMV. It's very intuitive and easy. But I see that anything above 320x240 resolution is challenging for performance and memory, let alone 1024x768. Hence my interest in following this topic further.
I learned that in order for blob detection to benefit from the Intel VPU, the code has to be implemented in CUDA (or whatever Movidius calls its cores). Is there such an implementation for OpenCV's HoughCirclesDetector class or SimpleBlobDetector, or anything similar? And what happens if there is no such implementation: reduced performance, or will it not run at all?
Hello fredjh,
is the blob you are trying to detect so small that you need such a high resolution to detect it? Another user from our community has a similar use-case (link here), but the detection there isn't on a large frame. For our VPU the model has to be a .blob, which is compiled from IR (xml/bin, the OpenVINO format). You could create your own NN model that would essentially do the same as HoughCirclesDetector/SimpleBlobDetector; here's a tutorial on how to create a custom model, and a Harris filter here. That would allow you to achieve high FPS/accuracy on large frames. I will also ask our ML engineer if he could look into implementing such logic (HoughCirclesDetector/SimpleBlobDetector) with PyTorch.
Thanks, Erik
Just searching a bit on Google, I found a Hough transform implemented in TF here. I didn't go into the details, but it seems like it could be a useful starting point for creating your own Hough circle detector .blob.
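For orientation, the core of a Hough circle transform for a single known radius is just an accumulator vote over edge pixels. A plain-NumPy sketch of that voting step (an illustration of the algorithm, not the TF implementation linked above; sizes and the radius are made up):

```python
import numpy as np

def hough_circle_single_radius(edges, radius):
    """Vote each edge pixel onto a circle of `radius` around it;
    the accumulator peak is the most likely circle center."""
    h, w = edges.shape
    acc = np.zeros((h, w), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(edges)
    for t in thetas:
        # Each edge pixel could lie on a circle whose center is
        # `radius` away in direction t; vote for that center cell.
        cx = np.round(xs - radius * np.cos(t)).astype(int)
        cy = np.round(ys - radius * np.sin(t)).astype(int)
        ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
        np.add.at(acc, (cy[ok], cx[ok]), 1)
    peak = np.unravel_index(acc.argmax(), acc.shape)
    return peak[1], peak[0]  # (x, y) of the strongest center

# Synthetic edge map: a circle of radius 10 centered at (60, 40).
edges = np.zeros((80, 120), dtype=np.uint8)
t = np.linspace(0, 2 * np.pi, 200)
edges[np.round(40 + 10 * np.sin(t)).astype(int),
      np.round(60 + 10 * np.cos(t)).astype(int)] = 1
print(hough_circle_single_radius(edges, 10))  # expect a center near (60, 40)
```

The loop over angles is what the TF version expresses as tensor ops, which is what would make it compilable for the VPU.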
Hi again Erik, and thanks for your detailed reply.
The ball shows up as roughly a 10x10 circular blob in the grayscale 1024x768 image. It is bright while the background is darker, something like 80 to 100 lower in gray value. The guide on how to implement this is interesting, but I'm afraid a Hough transform looks beyond my expertise to implement. I would have to find someone else to help me out on that.
However, the other option of training a custom NN to look for the blob could be interesting. The only reason I have avoided it is that I was worried NN-based blob detection may not provide a precise bounding box. My aim is to know whether the ball is moving or not, so it is essential to get as good a bounding box as possible, one that repeats frame after frame. What results would you expect from a custom NN plus Harris corner detection (does that even work on circles)?
Hello fredjh,
I also looked into the Hough transform a bit, and from what I understand one would really need to dig into it to implement it, so training your own NN might be less time-consuming. From the link above (the user from our community), the results are quite promising: if the ball were still, there would be less than a few pixels of change in the bounding box.
Since you will need a host for the OAK-1 connection, it might be easiest to use HoughCirclesDetector from cv2 on the host.
Thanks, Erik
Hi, that helped a lot. I was hoping the OAK would be able to do it all. I will have to look into this more. I would perhaps go back to the Raspberry Pi route, or maybe a Jetson Nano; there seem to be more ready-made implementations there.
I am, however, quite excited about OpenCV's and Luxonis' initiative to introduce a robust smart camera, and I hope to use it in a future project.
Brandon Any news on this? I'm trying to track an IR LED with depth. Does it even make sense to do these non-ML things on the OAK-D, or am I better off just streaming the stereo images from the OAK-D and doing the blob tracking on the host?
Hi Arthur,
Due to the upcoming release of Series 3, which has an on-board quad-core ARM, we haven't pursued any kind of blob detection on the RVC2, as you can just use OpenCV for this on the RVC3. So for RVC2 devices it would be easiest to do blob tracking on the host, or perhaps try the custom NN implementation of Hough-transform blob detection. Thoughts?
Thanks, Erik