DarshitDesai

  • Sep 19, 2024
  • Joined Aug 25, 2023
  • Are there any examples with RVC3 or RVC2 where the .blob file converted from a tflite model gives me a depth image?

    • jakaskerl Actually, specifying the nn_type doesn't work. Here are screenshots and the code I wrote to include my model.

      The first example here is when I include my JSON file and the OAK downloads the model:

      		nn = oak.create_nn('/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/nn_models/yolov8n_coco_640x352/config.json', color, tracker=True, spatial=True)

      See how the network is able to get the messages:

      This example below is when I included the line to load my blob file. I tried both my own 5-shave compiled file and the one downloaded by the OAK API (which was 6-shave by default); it printed nothing in the message:

      		#nn = oak.create_nn('/home/pi/Roving-Comforter/yolov8n_coco_640x352_openvino_2022.1_6shave.blob', color, nn_type='yolo', tracker=True, spatial=True)

      Here, as you can see, it prints an empty message, and hence any code after that isn't executed.

      • jakaskerl So I would need to specify nn_type='yolo'? In the documentation it says "on-device NN result decoding". What does that mean?
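
        A minimal sketch of my understanding, assuming nn_type='yolo' is what enables on-device decoding (i.e. the device's YOLO detection node parses the raw tensors into boxes and labels itself instead of sending raw output back to the host):

        from depthai_sdk import OakCamera

        with OakCamera() as oak:
        	color = oak.create_camera('color')
        	# nn_type hints the blob's output layout, so decoding (boxes,
        	# confidences, class labels) runs on the device rather than the
        	# host. The blob path is the one used elsewhere in this thread.
        	nn = oak.create_nn('/home/pi/Roving-Comforter/yolov8n_coco_640x352.blob',
        	                   color, nn_type='yolo', tracker=True, spatial=True)
        	oak.visualize([nn.out.tracker])
        	oak.start(blocking=True)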

      • jakaskerl Hi, I am also facing the same issue. I made the yolov8 blob, and when I try to import it using oak.create_nn() it gives me this error.

        My version of the DepthAI SDK:

        pi@luxonis:~/Roving-Comforter $ pip show depthai-sdk
        Name: depthai-sdk
        Version: 1.12.1
        Summary: This package provides an abstraction of the DepthAI API library.
        Home-page: https://github.com/luxonis/depthai/tree/main/depthai_sdk
        Author: Luxonis
        Author-email: support@luxonis.com
        License: MIT
        Location: /home/pi/.local/lib/python3.9/site-packages
        Requires: blobconverter, depthai, depthai-pipeline-graph, marshmallow, numpy, opencv-contrib-python, pytube, PyTurboJPEG, sentry-sdk, xmltodict
        Required-by: 
        pi@luxonis:~/Roving-Comforter $ 
        [2024-04-22 21:48:17] INFO [root.__init__:147] Setting IR laser dot projector brightness to 800mA
        [2024-04-22 21:48:17] INFO [root.__exit__:408] Closing OAK camera
        Traceback (most recent call last):
          File "/home/pi/Roving-Comforter/testrun.py", line 132, in <module>
            nn = oak.create_nn('/home/pi/Roving-Comforter/yolov8n_coco_640x352_openvino_2022.1_6shave.blob', color, tracker=True, spatial=True)
          File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/oak_camera.py", line 253, in create_nn
            comp = NNComponent(self._oak.device,
          File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/components/nn_component.py", line 215, in __init__
            self._spatial.depth.link(self.node.inputDepth)
        AttributeError: 'depthai.node.NeuralNetwork' object has no attribute 'inputDepth'
        Sentry is attempting to send 2 pending error messages
        Waiting up to 2 seconds
        Press Ctrl-C to quit
      • @jakaskerl When I try to run this with my own model, it throws this error (a workaround sketch follows the traceback):

        pi@luxonis:~/Roving-Comforter $ python3 testrun.py
        Switch ON Detected
        [2024-04-22 21:33:43] INFO [root.__init__:147] Setting IR laser dot projector brightness to 800mA
        [2024-04-22 21:33:43] INFO [root.__exit__:408] Closing OAK camera
        Traceback (most recent call last):
          File "/home/pi/Roving-Comforter/testrun.py", line 132, in <module>
            nn = oak.create_nn('/home/pi/Roving-Comforter/yolov8n_coco_640x352.blob', color, tracker=True, spatial=True)
          File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/oak_camera.py", line 253, in create_nn
            comp = NNComponent(self._oak.device,
          File "/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/components/nn_component.py", line 215, in __init__
            self._spatial.depth.link(self.node.inputDepth)
        AttributeError: 'depthai.node.NeuralNetwork' object has no attribute 'inputDepth'
        Sentry is attempting to send 2 pending error messages
        Waiting up to 2 seconds
        Press Ctrl-C to quit
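
        The traceback suggests the SDK fell back to a generic NeuralNetwork node (which has no inputDepth for spatial=True to link into) because a bare .blob carries no decoding metadata. One workaround sketch, reusing the config.json path that worked earlier in this thread:

        # Sketch: pass the model's config.json (which carries the YOLO decoding
        # metadata) instead of the bare .blob, so the SDK can build a spatial
        # detection node that actually has an inputDepth.
        nn = oak.create_nn('/home/pi/.local/lib/python3.9/site-packages/depthai_sdk/'
                           'nn_models/yolov8n_coco_640x352/config.json',
                           color, tracker=True, spatial=True)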

      • Hi, is the Y-adapter available anywhere else? We need it urgently, and it says it will be available early March 2024. Is there some way to make it on our own?

      • jakaskerl Version details of the SDK:

        pi@luxonis:~/latest code $ pip show depthai-sdk
        Name: depthai-sdk
        Version: 1.12.1
        Summary: This package provides an abstraction of the DepthAI API library.
        Home-page: https://github.com/luxonis/depthai/tree/main/depthai_sdk
        Author: Luxonis
        Author-email: support@luxonis.com
        License: MIT
        Location: /home/pi/.local/lib/python3.9/site-packages
        Editable project location: /home/pi/.local/lib/python3.9/site-packages
        Requires: blobconverter, depthai, depthai-pipeline-graph, marshmallow, numpy, opencv-contrib-python, pytube, PyTurboJPEG, sentry-sdk, xmltodict
        Required-by:

      • @jakaskerl I see the output below when I do echo $DISPLAY

        Edit: When I run an example DepthAI Python script it works, but in my code it is not working. What could be the issue?

      • jakaskerl I will check that out and send back the output of echo $DISPLAY (a quick test I can run is sketched below).

        What I wanted to say was that I get the object positions while running the above code; only the OpenCV window doesn't open.

        This only happens in VNC mode. When I connect a separate physical monitor to the Pi directly, the OpenCV window works.
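
        A minimal check, assuming the problem is the X display that OpenCV windows attach to (the ':0' fallback is an assumption; under VNC the display is often ':1', i.e. whatever echo $DISPLAY prints in a VNC terminal):

        import os
        import numpy as np
        import cv2 as cv

        # Sketch: confirm OpenCV can open a window on this session's display
        # before starting the camera pipeline.
        print("DISPLAY =", os.environ.get("DISPLAY"))
        os.environ.setdefault("DISPLAY", ":0")  # assumption: adjust to the VNC display

        cv.imshow("display-test", np.zeros((120, 160, 3), dtype=np.uint8))
        cv.waitKey(1000)  # if no window appears, DISPLAY is wrong for this session
        cv.destroyAllWindows()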

      • Hi, I am using VNC Viewer to run the following code, but the camera view isn't streaming. Is there a way to fix this?

        Code:

        #!/usr/bin/python3
        from depthai_sdk import OakCamera
        import depthai as dai
        from depthai_sdk.classes import TrackerPacket
        import cv2 as cv
        import numpy as np  # needed for np.degrees/np.arctan2/np.sqrt below
        
        def cb2(packet: TrackerPacket):
        	visualizer = packet.visualizer
        	visualizer.draw(packet.frame)
        	tframe = cv.resize(packet.frame, (720, 480))
        	cv.imshow(packet.name, tframe)
        	message = packet.detections
        	z_disp = (4.547 * 25.4) / 1000  # vertical camera offset: 4.547 in -> m (note: spatialCoordinates are in mm)
        	#t_vect = np.array([[0],[0],[0],[1]])
        	#H_mat = np.concatenate((rotation_x,t_vect),axis = 1)
        	for m in message:
        		x_obj, y_obj, z_obj = m.tracklet.spatialCoordinates.x, m.tracklet.spatialCoordinates.y, m.tracklet.spatialCoordinates.z
        		#t_camera = np.array([[x_obj],[y_obj],[z_obj],[1]])
        		#t_rotated = H_mat @ t_camera
        		#print("x = ", t_rotated[0][0],"y = ", t_rotated[1][0],"z = ", t_rotated[2][0])
        		#t_rotated_ = rotation_matrix @ t_rotated
        		#x, y, z = t_rotated_[0][0], t_rotated_[1][0], t_rotated_[2][0]
        		#print("x = ", x_obj,"y = ", y_obj,"z = ", z_obj,"frame shape=",packet.frame.shape)
        		yaw = np.degrees(np.arctan2(x_obj, z_obj))
        		pitch = np.degrees(np.arctan2(y_obj + z_disp, np.sqrt(x_obj**2 + z_obj**2)))
        		print("yaw = ", yaw, "pitch = ", pitch)
        		dutycycle_yaw = (-555.5333) * yaw + 71666.66667
        		dutycycle_pitch = 71666.67 + 1083.33 * pitch
        
        with OakCamera() as oak:
        	color = oak.create_camera('color')
        	# List of models that are supported out-of-the-box by the SDK:
        	# https://docs.luxonis.com/projects/sdk/en/latest/features/ai_models/#sdk-supported-models
        	nn = oak.create_nn('yolov8n_coco_640x352', color, tracker=True, spatial=True)
        	nn.config_nn(resize_mode='stretch')
        	nn.config_tracker(
        		tracker_type=dai.TrackerType.ZERO_TERM_COLOR_HISTOGRAM,
        		track_labels=[0], # Track only the 1st class from the label map. If unspecified, track all classes
        		# track_labels=['person'] # Track only people ('person' is the 1st label for COCO models)
        		assignment_policy=dai.TrackerIdAssignmentPolicy.SMALLEST_ID,
        		max_obj=1, # Max objects to track, which can improve performance
        		threshold=0.1 # Tracker threshold
        	)
        	nn.config_spatial(
        		bb_scale_factor=0.3, # Scale bounding box before averaging the depth in that ROI
        		lower_threshold=500, # Discard depth points below 50 cm
        		upper_threshold=8000, # Discard depth points above 8 m
        		# Average depth points before calculating X and Y spatial coordinates:
        		calc_algo=dai.SpatialLocationCalculatorAlgorithm.AVERAGE
        	)
        	#oak.visualize([nn.out.tracker], fps=True)
        	#pi.hardware_PWM(20,50,int(71666.66667))
        	oak.visualize([nn.out.tracker], callback=cb2)
        	#oak.visualize([nn.out.spatials], fps=True)
        	#oak.callback([nn.out.spatials], callback=cb)
        	#oak.visualize(nn.out.passthrough)
        	oak.start(blocking=True)
        • jakaskerl I used the following settings to convert the yolov8n model. Is this correct for the Luxonis OAK-D Pro SoM? (A scripted equivalent is sketched below.)

          Step 1: Select the versions:

          Step 2: Select the model and number of shaves, then convert:
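
          For reference, roughly the same conversion can be scripted with the blobconverter package the SDK already depends on. A minimal sketch, assuming a local ONNX export of yolov8n ('yolov8n.onnx' is a placeholder path) and the 6-shave, OpenVINO 2022.1 defaults seen elsewhere in this thread:

          import blobconverter

          # Sketch only: compile a local ONNX model to a .blob through
          # Luxonis' blobconverter service.
          blob_path = blobconverter.from_onnx(
          	model="yolov8n.onnx",  # assumption: ONNX export of the yolov8n weights
          	data_type="FP16",      # the VPU runs FP16
          	shaves=6,              # match the SDK's default 6-shave build
          	version="2022.1",      # OpenVINO version used by the SDK models
          )
          print(blob_path)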

        • Whenever I run a script, the SDK tries to download a blob file from somewhere. The device I have might not have an internet connection; is there a way to download the yolov8n_coco_640x352 blob file ahead of time? Then I can pass the path of the file when I create the NN with the oak.create_nn() method. (A possible workaround is sketched below.)
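
          One possible workaround, sketched with the blobconverter dependency the SDK installs (zoo_type='depthai' is my assumption about where this model lives): fetch the blob once while online, then pass the printed path to oak.create_nn() on later offline runs.

          import blobconverter

          # Run this once while the Pi has internet access; the returned path
          # points at the downloaded (and locally cached) .blob file.
          blob_path = blobconverter.from_zoo(
          	name="yolov8n_coco_640x352",  # assumption: this model's name in the zoo
          	zoo_type="depthai",
          	shaves=6,
          )
          print(blob_path)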

        • Hi, I have a stereo pair of OV7251 cameras and an NVIDIA Jetson, and I want to run DepthAI on the Jetson. Is there a way to use it with this hardware combination, or do I need the Luxonis modules?

          • jakaskerl Thank you. Can you also please help me with the coordinate system for the depth and RGB frames? Usually it's z pointing out of the lens, with x and y in some suitable orientation.

            • jakaskerl Actually it doesn't get picked up, and I don't know why. I will definitely check out the path you wrote above in the RPi file system.

              About the callback: wouldn't I need callbacks to extract the x, y, z coordinates? Also, I had removed the oak.visualize() functions earlier and it didn't give me the error, but it didn't print the x, y, z outputs either.

              I commented those visualizer lines out.

              Edit: OK, it worked, ignore the above. I removed the visualizer lines and the SpatialMappingBb callback, and I am just using the tracker callback. I still have a question: if I want to visualize the depth data, how can I do that (one attempt is sketched below)? Also, what is the coordinate system orientation of the depth frame and RGB frame (like z axis coming out of the lens, x on the left and y pointing up)?
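
              A minimal sketch of one way I could view the depth stream alongside the tracker, assuming the SDK lets a stereo component be passed as the spatial source (the '400p' resolution argument is an assumption):

              from depthai_sdk import OakCamera

              with OakCamera() as oak:
              	color = oak.create_camera('color')
              	stereo = oak.create_stereo('400p')  # assumption: 400p depth resolution
              	nn = oak.create_nn('yolov8n_coco_640x352', color, tracker=True, spatial=stereo)
              	oak.visualize([nn.out.tracker])
              	oak.visualize([stereo.out.depth])  # depth stream in its own window
              	oak.start(blocking=True)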

              • jakaskerl Can you tell me how to do the download step? Do I need to download it from the yolov8n documentation and place it in the directory where the code is? Or, since the SDK converts model files into blob files, is there a way to download the blob from somewhere?

                • jakaskerl Also, there is one more issue. If I try using two callback functions, one for viewing the tracker output using TrackerPacket and one for the spatial locations x, y, z using SpatialMappingBbPacket, it gives me an error. To debug this, I removed the SpatialMappingBbPacket callback and tried getting x, y, z from the tracker while viewing the frame output using OpenCV, but it still gives me the same error. I checked the forum and didn't find similar issues. Can you help debug it?

                  Code:

                  Photo of error

                • The DepthAI SDK tries to connect to the internet and download the yolov8n blob model every time. Is there a way to store it on the device and use that? I am using a Raspberry Pi, and it is tedious to connect it to the internet again and again.

                  • jakaskerl In my code I did use the spatial tracking feature. Wouldn't the SpatialMappingBb packet have good results?

                    • jakaskerl There are two detection API packets, SpatialMappingBbPacket and TrackerPacket. Which one's x, y, z values are more accurate, or have optimal estimates from the Kalman filter?