Hi DarshitDesai
I guess that would confirm the power issue. Any chance you can use a different power solution or a more capable cable? What was the output of the usbSpeed check?

Thanks,
Jaka

    Hi jakaskerl, it worked when I changed the chargers. Both chargers actually have a similar rating and the cables are the same, so I still don't understand why it didn't work earlier.

    I am now trying to flash the ready-made Luxonis OAK RPi image onto an SD card, which I will later use on the RPi3 that I have. But every time balena etcher flashes the image, it reports at the end that it failed, even after doing 100% validation.

    I am not sure what could be wrong there.

      Hi DarshitDesai
      I see, could you try:

      • viewing the debug console in balena etcher; there should be logs available that will point to the error
      • since you are running Windows, try running balena etcher as administrator
      • redownloading the image, in case it got corrupted for some reason.

      Let me know if it works.

      Thanks,
      Jaka

        jakaskerl I'll check the debugger. Does it come with the premium version or the free version?

        Etcher runs only with admin approval

        Redownloading didn't work with v9 or v8.

          jakaskerl Hi, I was somehow able to make it work with one of the Linux PCs I had. But now, when I try to run code that worked on the desktop PC, it gives me the following error:

          Traceback (most recent call last):
            File "/home/pi/Desktop/testrun.py", line 2, in <module>
              from depthai_sdk import OakCamera
          ImportError: cannot import name 'OakCamera' from 'depthai_sdk' (/home/pi/depthai/depthai_sdk/src/depthai_sdk/__init__.py)

          I modified the dependencies myself, since the Linux image that was there didn't have any of the latest components of the OakCamera SDK. How do I fetch the tracker (X, Y, Z) values from the spatial tracker?

            jakaskerl I don't think it was a version issue. When I opened the image, some files that should have been there, like OakCamera.py and other dependencies, were not present in the depthai/depthai_sdk folder. I just pip-installed those and cloned them from GitHub.

            About the question: I am combining the tracker with the spatial calculation of the tracked object. Combined, they give me an (x, y, z) position for a class of detected object in the visualizer. Now I want it raw, as a list or maybe as a ROS topic that I can publish and later subscribe to, so that my robot can act on it. What are some ways to do that? Note that ROS is only a middleware example I could think of; I would prefer something in the SDK itself that lets me do it.

              Hi DarshitDesai
              As I have mentioned above, instead of the stock visualizer, make your own callback function that will run each time a frame is ready. Tracker and spatials are both available outputs of the NN component (https://docs.luxonis.com/projects/sdk/en/latest/components/nn_component/#nncomponent).

              Inside that same callback you can either print a list of all xyz values, or publish to a ROS topic. This is up to you, since ROS is not integrated into the SDK as of now.
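
              Something like this, as a minimal untested sketch ('mobilenet-ssd' is just an example model name, and I'm assuming that with spatial=True the img_detections message carries per-detection spatialCoordinates, as in the underlying API):

              from depthai_sdk import OakCamera
              from depthai_sdk.classes import DetectionPacket

              def cb(packet: DetectionPacket):
                  # Called once per frame; coordinates are in millimeters
                  for det in packet.img_detections.detections:
                      c = det.spatialCoordinates
                      print(det.label, c.x, c.y, c.z)  # ...or publish to a ROS topic here

              with OakCamera() as oak:
                  color = oak.create_camera('color')
                  nn = oak.create_nn('mobilenet-ssd', color, tracker=True, spatial=True)
                  # Attach the callback instead of the stock visualizer
                  oak.callback(nn.out.main, callback=cb)
                  oak.start(blocking=True)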

              Thanks,
              Jaka

                jakaskerl I am still not able to figure out those values. Can you tell me the exact API call in the Python SDK that I need to use to get the x, y, z values?

                Here's my code for your reference

                from depthai_sdk import OakCamera
                import depthai as dai
                from depthai_sdk.classes import DetectionPacket

                def cb(packet: DetectionPacket):
                    print(packet.img_detections)

                with OakCamera() as oak:
                    color = oak.create_camera('color')

                    # List of models that are supported out-of-the-box by the SDK:
                    # https://docs.luxonis.com/projects/sdk/en/latest/features/ai_models/#sdk-supported-models
                    nn = oak.create_nn('yolov8n_coco_640x352', color, tracker=True, spatial=True)
                    nn.config_nn(resize_mode='stretch')

                    nn.config_tracker(
                        tracker_type=dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM,
                        track_labels=[0],  # Track only the 1st label from the label map. If unspecified, track all object types
                        # track_labels=['person'],  # Track only people (for COCO models, person is the 1st label in the map)
                        assignment_policy=dai.TrackerIdAssignmentPolicy.SMALLEST_ID,
                        max_obj=1,  # Max objects to track, which can improve performance
                        threshold=0.1  # Tracker threshold
                    )

                    nn.config_spatial(
                        bb_scale_factor=0.3,  # Scale the bounding box before averaging the depth in that ROI
                        lower_threshold=500,  # Discard depth points below 50cm
                        upper_threshold=8000,  # Discard depth points above 8m
                        # Average the depth points before calculating X and Y spatial coordinates:
                        calc_algo=dai.SpatialLocationCalculatorAlgorithm.AVERAGE
                    )

                    oak.visualize([nn.out.tracker], fps=True)
                    # oak.callback(nn.out.tracker, callback=cb)
                    oak.visualize([nn.out.image_manip], fps=True)
                    oak.visualize([nn.out.spatials], fps=True)
                    oak.visualize(nn.out.passthrough)

                    oak.start(blocking=True)

                  Hi DarshitDesai
                  I think you should be using TrackerPacket if you are sending the tracker output to your callback.

                  Tracklets should give you a list of all tracked objects and their positions.
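
                  As a rough untested sketch (I'm assuming TrackerPacket exposes the raw dai.Tracklets message as daiTracklets, which is how I remember the SDK packets being laid out):

                  import depthai as dai
                  from depthai_sdk.classes import TrackerPacket

                  def cb(packet: TrackerPacket):
                      positions = []
                      for t in packet.daiTracklets.tracklets:
                          # Skip objects that are not actively tracked
                          if t.status == dai.Tracklet.TrackingStatus.TRACKED:
                              c = t.spatialCoordinates  # millimeters
                              positions.append((t.id, c.x, c.y, c.z))
                      print(positions)

                  # Then register it on the tracker output in your script:
                  # oak.callback(nn.out.tracker, callback=cb)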

                  Thanks,
                  Jaka

                    jakaskerl There are two detection API packets, SpatialBbMappingPacket and TrackerPacket. Which one's x, y, z values are more accurate, or have optimal estimates from the Kalman filter?

                      Hi DarshitDesai
                      TrackerPacket is the SDK equivalent of the tracker message in the API, so you should use that. A filter can be applied through the tracker config when the tracker is enabled.
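
                      If your SDK version exposes it, enabling the filter would look roughly like this; apply_tracking_filter is the parameter name I'd expect config_tracker to accept, but please verify against the signature in your installed version:

                      # Assumed flag name; check your SDK version's config_tracker signature
                      nn.config_tracker(
                          tracker_type=dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM,
                          apply_tracking_filter=True  # assumed toggle for the host-side tracking filter
                      )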

                      Thanks,
                      Jaka

                        jakaskerl In my code I did use the spatial tracking feature. Wouldn't the SpatialBbMappingPacket give good results too?

                          Hi DarshitDesai
                          It should work as well, yes, since it includes the spatials info. However, there is no tracking there to my knowledge; it's basically just a depth frame with bounding-box mappings.

                          Thanks,
                          Jaka