Event + XLink Errors on sample code
Hi Potatoes27,
Sorry about the trouble. What OS are you running? And have you tried simply running python3 depthai_demo.py?
It looks like there are problems communicating over USB.
Could you try forcing USB2? Some hosts have trouble communicating at USB3 speeds with DepthAI/OAK.
The following command will force USB2 communication:
python3 depthai_demo.py -fusb2
This is run in the depthai directory after cloning the depthai repository from here:
https://github.com/luxonis/depthai
Thoughts?
Another potential issue is that the OpenCV install wasn't fully successful, in which case it can't open a display window.
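If it's useful, here's a quick standalone sanity check (my own snippet, not part of depthai_demo.py) that tells you whether OpenCV itself can open a window; if the test window doesn't appear, the problem is the OpenCV install/display rather than the device:

import cv2
import numpy as np

print(cv2.__version__)
img = np.zeros((240, 320, 3), dtype=np.uint8)  # black 320x240 test image
cv2.imshow("opencv-check", img)
cv2.waitKey(0)  # press any key to close the window
cv2.destroyAllWindows()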
Thanks,
Brandon
Hi!
Running this on macOS. Yup, the depthai_demo.py script runs just fine.
I should add that a window does indeed show up when I run the code on this page: https://docs.luxonis.com/en/latest/pages/samples/object_tracker/
But it's laggy, only updates frames every few seconds, and keeps repeating the errors shown in the first image.
Is there any way to use the demo script to run MobileNet-SSD but only detect "birds", for example?
Appreciate the help!
Very interesting. I'm wondering if there is a power issue actually. Which Mac model? Some of the older units are USB2 and may not supply enough power. And are you using OAK-1 or OAK-D?
We do not have a pre-trained birds model. But this Medium article looks quite good for making one:
https://towardsdatascience.com/smart-bird-watcher-customizing-pre-trained-ai-models-to-detect-birds-of-interest-dca1202bfbdf
(TowardsDataScience stuff is generally excellent.)
It references the Caltech Birds 200 dataset, which I wasn't aware of until just now: http://www.vision.caltech.edu/visipedia/CUB-200.html
That guide even shows how to convert the trained model to our platform (we're the VPU in that example; we use the same chip as inside the NCS2 - we just allow it to do a LOT more).
We also have Google Colab scripts for training custom models:
https://github.com/luxonis/depthai-ml-training/tree/master/colab-notebooks#tiny-yolov3-object-detector-training-
So you could use the dataset mentioned in that article (particularly if the author already labeled and released the data, which I think he did, based on skimming it) and then use our Colab notebook to do the training.
He may also have shared his trained model at the end; if he did, you can use it directly on our platform by changing the labels in here, for example: https://github.com/luxonis/depthai/blob/main/resources/nn/mobilenet-ssd/mobilenet-ssd.json
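For reference, the relevant part of that JSON is just the list of class labels, roughly like the sketch below (the exact surrounding fields may differ between depthai versions):

{
    "mappings": {
        "labels": [
            "background",
            "aeroplane",
            "bicycle",
            "bird",
            "..."
        ]
    }
}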
Thoughts?
Thanks,
Brandon
Hey Brandon!
OAK-D, MacBook Pro 2019.
I actually just ended up using the Gen2 pipeline builder. Using example 08 I was able to detect only "birds" through mobilenet-ssd (specifically by editing the bboxes and checking whether the label is "bird", which matches the integer 3 in the mobilenet-ssd labels list).
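Roughly, the filtering looks like this (a simplified sketch with dummy data rather than the exact example-08 code; the raw MobileNet-SSD rows are [image_id, label, confidence, xmin, ymin, xmax, ymax]):

import numpy as np

BIRD = 3  # index of "bird" in the mobilenet-ssd (VOC) labels list

# Raw SSD output reshaped to rows of
# [image_id, label, confidence, xmin, ymin, xmax, ymax] (dummy values here)
bboxes = np.array([[0, 3, 0.91, 0.1, 0.2, 0.4, 0.6],
                   [0, 8, 0.85, 0.5, 0.5, 0.9, 0.9]])

birds = bboxes[bboxes[:, 1] == BIRD]  # keep only the "bird" rows
print(birds)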
I have some other interesting issues, but I'll make a separate thread about that. Thanks for your help!
Thanks and sounds good. Yes, Gen2 is where everything is going anyway. And we will be re-implementing tracking in Gen2.
Hello,
I could make this tracking example work by using the "mobilenet-ssd.blob.sh11cmx11NCE1" blob file instead of "mobilenet-ssd.blob.sh14cmx14NCE1" (and leaving shaves and cmx_slices at 12). This is with depthai 0.4.0.0.
When using "mobilenet-ssd.blob.sh14cmx14NCE1" and changing the pipeline configuration to "shaves": 14, "cmx_slices": 14, I get the following error:
ERROR: requested CNN resources overlaps with RGB camera
Traceback (most recent call last):
File "/Volumes/Misc/DepthAI/depthai-kbv/OtherGen2/Object tracker/Object tracker.py", line 23, in <module>
raise RuntimeError("Error initializing pipelne")
RuntimeError: Error initializing pipelne
I guess this is a limitation of Gen 1, and another reason to move to Gen2!
Will
Hi Will,
Yes, I think (but could be wrong) that this is a limit of depth still being enabled (which takes some SHAVES).
But yes, Gen2 will be a lot smarter and a lot more flexible (and for the most part already is, but will be continuously improving).
For the status of features we are building, see here:
https://github.com/orgs/luxonis/projects/2
So those are the capabilities we wanted to have out in December, but we ended up with a slew of delays, many of them slow-downs caused by the pandemic (including engineers needing to move back to full-time childcare in addition to writing code).
But as you can see, as of this writing, 61 done, 16 in progress, 2 bugs, and 17 on the roadmap. So we're getting closer.
And for future 2021 features that are planned (i.e. will be started after the previous list is done), see here:
https://github.com/orgs/luxonis/projects/4
Thanks again,
Brandon
I had the exact same issue on a newish MacBook Pro. After reading your suggestion I got a USB-C to USB-A adapter and it works perfectly. I think it's a hub issue, though I'm not sure why... You think it's a power issue? How much power does the device draw over USB-C?
Thanks, seanreynoldscs.
It could be a power issue. Some USB ports do not deliver the full 900 mA that OAK-D needs.
We have also seen issues with some USB ports/hubs/etc. where our switch from USB2 (used to load the firmware and pipeline) to USB3 (used for data transfer) is not supported by the hub. What we are doing is per the USB spec, but it's apparently not commonly used, so some hubs seem to have skipped supporting/testing it.
One workaround is to use -fusb2 (on depthai_demo.py).
In Gen1 (and in Gen2) the Device constructor takes a parameter that controls whether the device boots in USB2 or USB3 mode.
In the case below, the second argument is usb2Mode. Set it to True and the device will boot in USB2 mode:
device = depthai.Device('', False)  # usb2Mode=False: USB3 allowed
device = depthai.Device('', True)   # usb2Mode=True: USB2 forced
We'll include that in the upcoming gen2 documentation as well.
Thanks again,
Brandon
I had the same "ERROR: requested CNN resources overlaps with RGB camera" when testing the Object tracker sample. I used mobilenet-ssd.blob.sh14cmx14NCE1 and set shaves to 14 and cmx_slices to 14 because I didn't have sh12cmx12NCE1 in my resources/nn folder. I did not have a problem with depthai_demo running yolo-v3 object tracking, but after swapping the yolo-v3 blob files into my test code, I got the same error message. I don't know what in the Object tracker demo code causes it to crash compared to depthai_demo; it doesn't seem to depend on the hardware.
I'm using macOS Big Sur on a MacBook Pro 2018. The cable is the one supplied with OAK-D, 1 m USB 3 to USB-C.
Btw, setting shaves and slices below 14 (tested 12 and 7) does not generate this error, as others observed, but the video became extremely slow.
Thanks for producing this wonderful package.
Jason
So I think with the Object tracker a maximum of 12 SHAVES can be used for neural inference. I don't immediately recall if/where we have this documented, but we'll make it clear. This is why you see the issue you are seeing, and why there is this (admittedly unclear) error about resource utilization.
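Something along these lines should work (a rough Gen1 sketch; the paths and stream names are assumptions and may need adjusting for your setup):

import depthai

device = depthai.Device('', False)

# Gen1 (depthai 0.4.x) sketch: the shave/CMX counts in the config must match
# the blob filename suffix, and with the object tracker enabled they must stay
# at 12 or below, or the CNN resources overlap with the RGB camera.
pipeline = device.create_pipeline(config={
    'streams': ['previewout', 'metaout', 'object_tracker'],
    'ai': {
        'blob_file': 'resources/nn/mobilenet-ssd/mobilenet-ssd.blob.sh12cmx12NCE1',
        'blob_file_config': 'resources/nn/mobilenet-ssd/mobilenet-ssd.json',
        'shaves': 12,
        'cmx_slices': 12,
    },
})
if pipeline is None:
    raise RuntimeError('Error initializing pipeline')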
In Gen2, if too many resources are requested, an error will automatically be printed saying that a different SHAVE configuration is needed. All the Gen2 nodes are smarter like this in terms of configuration and feedback. That said, tracking has not yet been implemented as a Gen2 node (so it's not available in Gen2 yet). Here's the issue for it.
In the meantime, for your application, would it be possible to pass the object detector results from OAK/depthai to a host object tracker like MOSSE? https://www.pyimagesearch.com/2018/07/30/opencv-object-tracking/
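A minimal host-side sketch of that idea, assuming opencv-contrib-python is installed (the MOSSE factory moved between OpenCV versions, hence the hasattr check; the webcam capture is just a stand-in for frames coming from DepthAI):

import cv2

def make_mosse():
    # cv2.TrackerMOSSE_create moved to cv2.legacy in newer OpenCV releases
    if hasattr(cv2, 'legacy'):
        return cv2.legacy.TrackerMOSSE_create()
    return cv2.TrackerMOSSE_create()

tracker = make_mosse()
cap = cv2.VideoCapture(0)  # stand-in for the DepthAI frame stream

ok, frame = cap.read()
bbox = (100, 100, 80, 80)  # (x, y, w, h) seeded from an OAK detection; dummy values
tracker.init(frame, bbox)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    ok, bbox = tracker.update(frame)
    if ok:
        x, y, w, h = (int(v) for v in bbox)
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow('mosse', frame)
    if cv2.waitKey(1) == ord('q'):
        break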
Thanks,
Brandon