jakaskerl Yes it works there in the GUI
Unable to run depthai.sdk for spatial tracking
Hi DarshitDesai
And what versions are you using? Try both the latest SDK and API.
Also, what USB cable are you using to power the device? Since the IR projector is enabled when stereo is instantiated, the device may require more power than the cable can supply (because it is faulty or simply not capable).
Thanks,
Jaka
I am using one of my spare chargers to supply the power port. The charger is well within the required amperage: it's a 15 W cellphone charger with 5 V / 3 A output (the OAK-D Pro requires 2 A). The cable is also good.
Here are the depthai (API) version details that I currently have:
Name: depthai
Version: 2.22.0.0
Summary: DepthAI Python Library
Home-page: https://github.com/luxonis/depthai-python
Author: Luxonis
Author-email: support@luxonis.com
License: MIT
Location: C:\Users\darshit\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages
Requires:
Required-by: depthai-sdk
Here are the depthai-sdk version details that I currently have on my Windows 10 computer:
Name: depthai-sdk
Version: 1.12.1
Summary: This package provides an abstraction of the DepthAI API library.
Home-page: https://github.com/luxonis/depthai/tree/main/depthai_sdk
Author: Luxonis
Author-email: support@luxonis.com
License: MIT
Location: C:\Users\darshit\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.11_qbz5n2kfra8p0\LocalCache\local-packages\Python311\site-packages
Requires: blobconverter, depthai, depthai-pipeline-graph, marshmallow, numpy, opencv-contrib-python, pytube, PyTurboJPEG, sentry-sdk, xmltodict
Required-by:
Yes, I tried the API alone, and the example below does work, but the SDK doesn't work for the stereo part at all:
https://github.com/luxonis/depthai-python/blob/main/examples/ObjectTracker/spatial_object_tracker.py
Hi DarshitDesai
I assume you are using a Y adapter for this?
Could you try:
- Running the SDK examples with
stereo.set_ir(dot_projector_brightness=0, flood_brightness=0)
(a short sketch follows this list)
- Printing out the device USB speed when booted:
print(oak.device.getUsbSpeed())
This should be SUPER.
- Trying a different power solution to see if it fixes the problem
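For the first two checks, something like this minimal sketch should work (the '800p' resolution and the depth visualization are just illustrative choices):

from depthai_sdk import OakCamera

with OakCamera() as oak:
    stereo = oak.create_stereo('800p')
    # Turn off both the IR laser dot projector and the flood illuminator
    stereo.set_ir(dot_projector_brightness=0, flood_brightness=0)
    oak.visualize(stereo.out.depth)
    oak.start(blocking=False)
    # Negotiated link speed; should print UsbSpeed.SUPER on USB3
    print(oak.device.getUsbSpeed())
    while oak.running():
        oak.poll()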
Thanks,
Jaka
If I set the set_ir method arguments as you said, the scripts work. But what impact does that have? Is it turning off the IR projector? If so, that is bad, because my application requires depth on featureless surfaces too (like plain walls). Also, the USB speed is SUPER when I print it in any script.
But the spatial detection script still doesn't work; it is linked below.
I also don't know how to add the set_ir call there, since that script does not create a stereo object to call it on.
I tested the following script with the set_ir call added, and it runs:
https://docs.luxonis.com/projects/sdk/en/latest/samples/StereoComponent/SDK_stereo/#source-code
Hi DarshitDesai
I guess that would confirm the power issue. Any chance you can use a different power solution or a more capable cable? What was the output of getUsbSpeed()?
Thanks,
Jaka
Hi jakaskerl, it worked when I changed the chargers. Both chargers are actually of similar rating and the cables are the same, so I still don't understand why it didn't work earlier.
I am now trying to flash the ready-made Luxonis OAK RPi images to an SD card, which I will later use on a Raspberry Pi 3 that I have. But every time balenaEtcher flashes the image, it reports at the end that the flash has failed, even after 100% validation.
I am not sure what could be wrong there.
Hi DarshitDesai
Perhaps the SD capacity is too low?
jakaskerl So the memory card is 32 GB.
Hi DarshitDesai
I see, could you try:
- viewing the debug console in balenaEtcher; surely there are logs available that will point to the error
- as you are running Windows, try running balenaEtcher as administrator
- redownloading the image; perhaps it's corrupted for some reason.
Let me know if it works.
Thanks,
Jaka
jakaskerl I'll check the debug console. Does it come with the premium version or the free version?
Etcher runs only with admin approval.
Redownloading didn't work with v9 or v8.
Hi DarshitDesai
Ctrl+Shift+I to open the logs.
Thanks,
Jaka
jakaskerl It says in the console that unmounting the SD card failed after etching the OS.
jakaskerl Hi, I was somehow able to make it work with one of the Linux PCs I had. But now, when I try to run code that worked on the desktop PC, it gives me the following error:
Traceback (most recent call last):
  File "/home/pi/Desktop/testrun.py", line 2, in <module>
    from depthai_sdk import OakCamera
ImportError: cannot import name 'OakCamera' from 'depthai_sdk' (/home/pi/depthai/depthai_sdk/src/depthai_sdk/__init__.py)
I modified the dependencies myself, since the Linux image didn't have any of the latest components of the OakCamera SDK. How do I fetch the tracker (X, Y, Z) values from the spatial tracker?
Hi DarshitDesai
Great that you got it working, strange though.
What versions are you now using? Since you said you modified the dependencies I would assume you have the latest.
You can fetch the trackers inside the callback function. https://docs.luxonis.com/projects/sdk/en/latest/fundamentals/packets/#api-usage
You would need to send a TrackerPacket to the callback and print it there.
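For example, a minimal sketch (assuming the TrackerPacket class from depthai_sdk.classes; the model name is just one from the SDK's supported-models list):

from depthai_sdk import OakCamera
from depthai_sdk.classes import TrackerPacket

def cb(packet: TrackerPacket):
    # Raw dai.Tracklets message with one entry per tracked object
    print(packet.daiTracklets)

with OakCamera() as oak:
    color = oak.create_camera('color')
    nn = oak.create_nn('yolov8n_coco_640x352', color, tracker=True, spatial=True)
    oak.callback(nn.out.tracker, callback=cb)
    oak.start(blocking=True)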
Thanks,
Jaka
jakaskerl I don't think it was a version issue. When I opened the image, some files that should have been there, like Oakcamera.py and other dependencies, were not present in the depthai/depthai_sdk folder; I just pip-installed and cloned those from GitHub.
About the question: I am combining the tracker with spatial calculation of the tracked object. Combined, they give me an x, y, z position for a class of detected object in the visualizer. Now I want it raw, as a list or maybe a ROS topic that I can publish and later subscribe to, so that my robot can act on it. What are some ways to do that? Note that ROS is only a middleware example I could think of; I would prefer it if something in the SDK itself helped me do it.
Hi DarshitDesai
As I mentioned above, instead of the stock visualizer, make your own callback function that will run each time a frame is ready. Tracker and spatials are both available outputs of the NN component: https://docs.luxonis.com/projects/sdk/en/latest/components/nn_component/#nncomponent
Inside that same callback you can either print a list of all xyz values or publish to a ROS topic. This is up to you, since ROS is not integrated into the SDK as of now.
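A sketch of such a callback (assuming each tracklet in packet.daiTracklets carries spatialCoordinates in millimetres):

from depthai_sdk.classes import TrackerPacket

def cb(packet: TrackerPacket):
    # Build a plain list of (x, y, z) tuples, one per tracked object
    xyz = [(t.spatialCoordinates.x, t.spatialCoordinates.y, t.spatialCoordinates.z)
           for t in packet.daiTracklets.tracklets]
    print(xyz)
    # A ROS publisher call could go here instead of the print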
Thanks,
Jaka
jakaskerl I am still not able to figure out those values. Can you tell me the exact API call in the Python SDK that I need to use to get the x, y, z values?
Here's my code for your reference:
from depthai_sdk import OakCamera
import depthai as dai
from depthai_sdk.classes import DetectionPacket

def cb(packet: DetectionPacket):
    print(packet.img_detections)

with OakCamera() as oak:
    color = oak.create_camera('color')
    # List of models that are supported out-of-the-box by the SDK:
    # https://docs.luxonis.com/projects/sdk/en/latest/features/ai_models/#sdk-supported-models
    nn = oak.create_nn('yolov8n_coco_640x352', color, tracker=True, spatial=True)
    nn.config_nn(resize_mode='stretch')
    nn.config_tracker(
        tracker_type=dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM,
        track_labels=[0],  # Track only the 1st object from the label map. If unspecified, track all object types
        # track_labels=['person']  # Track only people (for COCO datasets, person is the 1st label in the map)
        assignment_policy=dai.TrackerIdAssignmentPolicy.SMALLEST_ID,
        max_obj=1,  # Max objects to track, which can improve performance
        threshold=0.1  # Tracker threshold
    )
    nn.config_spatial(
        bb_scale_factor=0.3,  # Scale the bounding box before averaging the depth in that ROI
        lower_threshold=500,  # Discard depth points below 50 cm
        upper_threshold=8000,  # Discard depth points above 8 m
        # How to average depth points before calculating X and Y spatial coordinates:
        calc_algo=dai.SpatialLocationCalculatorAlgorithm.AVERAGE
    )
    oak.visualize([nn.out.tracker], fps=True)
    # oak.callback(nn.out.tracker, callback=cb)
    oak.visualize([nn.out.image_manip], fps=True)
    oak.visualize([nn.out.spatials], fps=True)
    oak.visualize(nn.out.passthrough)
    oak.start(blocking=True)
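For reference, a sketch of how the script above could expose those values: switch the callback to a TrackerPacket, register it on nn.out.tracker, and read each tracklet's spatial coordinates (assuming the SDK 1.12 packet layout, the chain is packet.daiTracklets.tracklets[i].spatialCoordinates):

from depthai_sdk.classes import TrackerPacket

def cb(packet: TrackerPacket):
    for t in packet.daiTracklets.tracklets:
        # X, Y, Z are in millimetres, relative to the camera
        print(t.id, t.spatialCoordinates.x, t.spatialCoordinates.y, t.spatialCoordinates.z)

# In the pipeline above, enable the commented-out callback line with this cb:
# oak.callback(nn.out.tracker, callback=cb)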