• DepthAI
  • OAK-D pipeline stops responding when linking nodes

Hi,

I am working on a project using the OAK-D camera and facing an issue with linking nodes in the pipeline. My goal is to classify the key points detected by the camera for different yoga poses using a custom neural network. Specifically, I'm trying to pass the output of a landmark detection neural network to my pose classification neural network as input.

The issue is that when I use pc_nn.out.link(manager_script.inputs['from_pc_nn']) to link the pose classification output to the manager_script input, the pipeline starts running and a new window opens showing the key points being detected. However, a few seconds later the program stops working and I'm not able to classify the key points.
If I don't use this link, pc_nn.out.link(manager_script.inputs['from_pc_nn']), the pipeline works fine (it detects and marks the landmarks).

Here is the code of the pipeline:
`    def create_pipeline(self):
        print("Creating pipeline...")
        # Start defining a pipeline
        pipeline = dai.Pipeline()
        pipeline.setOpenVINOVersion(dai.OpenVINO.Version.VERSION_2021_4)
        self.pd_input_length = 224
        self.lm_input_length = 256
        self.pc_input_length = 66

        # ColorCamera
        print("Creating Color Camera...")
        cam = pipeline.create(dai.node.ColorCamera) 
        cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
        cam.setInterleaved(False)
        cam.setIspScale(self.scale_nd[0], self.scale_nd[1])
        cam.setFps(self.internal_fps)
        cam.setBoardSocket(dai.CameraBoardSocket.RGB)

        if self.crop:
            cam.setVideoSize(self.frame_size, self.frame_size)
            cam.setPreviewSize(self.frame_size, self.frame_size)
        else: 
            cam.setVideoSize(self.img_w, self.img_h)
            cam.setPreviewSize(self.img_w, self.img_h)

        if not self.laconic:
            cam_out = pipeline.create(dai.node.XLinkOut)
            cam_out.setStreamName("cam_out")
            cam_out.input.setQueueSize(1)
            cam_out.input.setBlocking(False)
            cam.video.link(cam_out.input)


        #Define manager script node
        manager_script = pipeline.create(dai.node.Script)
        manager_script.setScript(self.build_manager_script())

        #Define pose classification pre processing
        print("Creating Pose Classification pre processing image manip...")
        pre_pc_manip = pipeline.create(dai.node.ImageManip)
        pre_pc_manip.setMaxOutputFrameSize(self.pc_input_length*self.pc_input_length*3)
        pre_pc_manip.setWaitForConfigInput(True)
        pre_pc_manip.inputImage.setQueueSize(1)
        pre_pc_manip.inputImage.setBlocking(False)
        cam.preview.link(pre_pc_manip.inputImage)
        manager_script.outputs['pre_pd_manip_cfg'].link(pre_pc_manip.inputConfig)

        #Define link to send result to host 
        manager_out = pipeline.create(dai.node.XLinkOut)
        manager_out.setStreamName("manager_out")
        manager_script.outputs['host'].link(manager_out.input)

        #Define landmark pre processing image manip
        print("Creating Landmark pre processing image manip...") 
        pre_lm_manip = pipeline.create(dai.node.ImageManip)
        pre_lm_manip.setMaxOutputFrameSize(self.lm_input_length*self.lm_input_length*3)
        pre_lm_manip.setWaitForConfigInput(True)
        pre_lm_manip.inputImage.setQueueSize(1)
        pre_lm_manip.inputImage.setBlocking(False)
        cam.preview.link(pre_lm_manip.inputImage)

        manager_script.outputs['pre_lm_manip_cfg'].link(pre_lm_manip.inputConfig)

        print("Creating DivideBy255 Neural Network...")
        divide_nn = pipeline.create(dai.node.NeuralNetwork)
        divide_nn.setBlobPath(self.divide_by_255_model)
        pre_lm_manip.out.link(divide_nn.input) 

        #Define landmark model
        print("Creating Landmark Neural Network...") 
        lm_nn = pipeline.create(dai.node.NeuralNetwork)
        lm_nn.setBlobPath(self.lm_model)
        # lm_nn.setNumInferenceThreads(1)
        divide_nn.out.link(lm_nn.input)       
        lm_nn.out.link(manager_script.inputs['from_lm_nn'])

        #Define Pose Classify model
        print("Creating Pose Classify Neural Network...") 
        pc_nn = pipeline.create(dai.node.NeuralNetwork)
        pc_nn.setBlobPath(self.pc_model)

        pre_pc_manip.out.link(pc_nn.input)
        lm_nn.out.link(pc_nn.input)
        pc_nn.out.link(manager_script.inputs['from_pc_nn']) 
        print("Pipeline created.")

        return pipeline     `

Without this link, the pipeline works fine and just detects and marks the key points. However, I want to classify these key points to identify the yoga poses.

Could someone please help me understand what might be causing this issue and how I can modify my code to link the output of my custom node to the input of the manager script?

The next_frame() method shown further below is where the host captures the information coming from the manager_script node.
I need to classify the detected landmarks/key points into specific yoga poses. To do this, I'm trying to pass the landmark output into the pose classification neural network, but I'm having trouble making it work. I'm not sure what changes I need to make to my node so that it can send data to the manager_script node, which in turn would feed the data to the pose classification network.
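
For reference, here is a rough sketch of how a Script node could receive the pc_nn result and forward it to the host, using the same node.io names as the links above. This is not the actual build_manager_script() content (which isn't shown in this post), and "Identity" is only a placeholder for the real output layer name of the classification blob:

`    # Sketch of a Script-node fragment, not the actual build_manager_script() content
    import marshal

    while True:
        # existing landmark handling (post-processing, building the result dict, ...)
        lm_msg = node.io['from_lm_nn'].get()             # NNData coming from lm_nn

        # non-blocking read of the classification result coming from pc_nn
        pc_msg = node.io['from_pc_nn'].tryGet()
        result = {"type": 1}                             # merged with the existing result fields
        if pc_msg is not None:
            result["pose_scores"] = pc_msg.getLayerFp16("Identity")  # placeholder layer name

        # serialize and send to the host, where next_frame() unpacks it with marshal.loads
        data = marshal.dumps(result)
        buf = Buffer(len(data))
        buf.setData(data)
        node.io['host'].send(buf)`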

`    def next_frame(self):

        self.fps.update()
            
        if self.laconic:
            video_frame = np.zeros((self.frame_size, self.frame_size, 3), dtype=np.uint8)
        else:
            in_video = self.q_video.get()
            video_frame = in_video.getCvFrame()     

        #Get result from device
        res = marshal.loads(self.q_manager_out.get().getData())  # <- the result from the manager_script node is received here and used for drawing the landmarks

        if res["type"] != 0 and res["lm_score"] > self.lm_score_thresh:
            body = mpu.Body()
            body.rect_x_center_a = res["rect_center_x"] * self.frame_size
            body.rect_y_center_a = res["rect_center_y"] * self.frame_size
            body.rect_w_a = body.rect_h_a = res["rect_size"] * self.frame_size
            body.rotation = res["rotation"] 
            body.rect_points = mpu.rotated_rect_to_points(body.rect_x_center_a, body.rect_y_center_a, body.rect_w_a, body.rect_h_a, body.rotation)
            body.lm_score = res["lm_score"]
            self.lm_postprocess(body, res['lms'], res['lms_world'])
            if self.xyz:
                if res['xyz_ref'] == 0:
                    body.xyz_ref = None
                else:
                    if res['xyz_ref'] == 1:
                        body.xyz_ref = "mid_hips"
                    else: # res['xyz_ref'] == 2:
                        body.xyz_ref = "mid_shoulders"
                    body.xyz = np.array(res["xyz"])
                    if self.smoothing:
                        body.xyz = self.filter_xyz.apply(body.xyz)
                    body.xyz_zone = np.array(res["xyz_zone"])
                    body.xyz_ref_coords_pixel = np.mean(body.xyz_zone.reshape((2,2)), axis=0)
`

Thanks for your help in advance!

    Hi rb210002
    It is incredibly difficult to debug this code since I'm not getting the full picture of what might be going on.

    A few questions:

    • "However, a few seconds later the program stops working and I'm not able to classify the key points." Does the program exit, or does it just freeze? In case it freezes, I would check the resources (SHAVEs, CMX) you are using for the pipeline, since it might be that you are pushing it too hard. You can do that by running DEPTHAI_LEVEL=debug python3 <name of file>
    • Is the script correctly forwarding data from pc_nn? It looks like the script queue could be congested, which would prevent anything from going out (see the sketch after this list).
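
    For the second point, a minimal sketch of one way to rule out congestion on the Script node inputs. This is an assumption rather than a confirmed cause; it simply makes the queues non-blocking with a size of 1, so a full queue drops the oldest message instead of stalling the upstream node:

    `    # Assumption: relieve possible congestion on the Script node inputs.
        # Non-blocking queues of size 1 drop old messages instead of blocking lm_nn / pc_nn.
        manager_script.inputs['from_lm_nn'].setBlocking(False)
        manager_script.inputs['from_lm_nn'].setQueueSize(1)
        manager_script.inputs['from_pc_nn'].setBlocking(False)
        manager_script.inputs['from_pc_nn'].setQueueSize(1)`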

    Perhaps try checking if pose classification works on the host (the rest of the pipeline should remain on the device).
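
    A rough sketch of that idea, assuming the pipeline and device objects from the question above; "Identity" is a placeholder for the real output layer name, and the classification itself would then run on the host with the original (non-blob) model:

    `    # Sketch: stream the landmark NN output to the host instead of feeding pc_nn on the device
        lm_out = pipeline.create(dai.node.XLinkOut)
        lm_out.setStreamName("lm_out")
        lm_nn.out.link(lm_out.input)

        # host side
        q_lm = device.getOutputQueue("lm_out", maxSize=1, blocking=False)
        nn_data = q_lm.get()
        landmarks = nn_data.getLayerFp16("Identity")  # placeholder layer name
        # ...run the pose classification on these landmarks with the original (non-blob) model...`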

    Thoughts?
    Jaka

    Hi @jakaskerl,

    I apologize for my delayed response. The program freezes after a few seconds. Upon further investigation, I noticed that it is using 2.49 MiB out of the 2.50 MiB of CMX memory.

    I have enabled DEBUG mode and here is the output that I am receiving:

    Could you please help me verify if the script is correctly forwarding data from pc_nn? Also, I would appreciate it if you could clarify your previous suggestion - "Perhaps try checking if pose classification works on the host (the rest of the pipeline should remain on the device)." Do you mean that I should run the pipeline on the host instead of the OAK-D? I would appreciate any further guidance on this.

    Thank you for your assistance.


      Hello erik, I'm not sure how the pipeline graph will help in this specific situation. I understand that pipeline graphs are used to visualize pipelines, but I'm uncertain how that helps here. Could you please clarify this for me?


        rb210002 you mentioned

        Could you please help me verify if the script is correctly forwarding data from pc_nn?

        With the pipeline graph you can see the FPS of each link, so you can check whether the Script node is forwarding data to the pc_nn output. Or is this happening inside the Script node? If so, you can use Script node logging.
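
        For reference, a minimal sketch of Script node logging, assuming the standard node.warn() call inside the script and the usual host-side log settings:

        `    # inside the script string passed to manager_script.setScript(...):
            msg = node.io['from_pc_nn'].get()
            node.warn("manager: received a pc_nn result")

            # on the host, to control how much device logging gets printed:
            device.setLogLevel(dai.LogLevel.WARN)
            device.setLogOutputLevel(dai.LogLevel.WARN)`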

        Hi @erik , thank you for your previous help. I'm new to this and I'm having trouble finding a way to view the FPS of each link. I tried using script node logging, but it's not very clear to me. I also looked at the pipeline graph to see if the script is forwarding data from pc_nn correctly, but I'm still struggling. Could you please provide some guidance or resources on how to accomplish this?

        Thank you for your assistance.

          Hi erik ,

          I wanted to give you an update regarding the version of the software I've been using. I recently discovered that I had been using version 0.0.3, but I've now updated it to version 0.0.5.

          I actually tried to install the update a few days ago, but it seems that I inadvertently installed an old version. Apologies for any confusion this may have caused.

          erik and jakaskerl ,

          I have created the pipeline graph and found that data is being sent between the nodes (the graph shows the FPS of each link), so what could be the possible cause of the pipeline freezing after a few seconds?

          I am attaching a screenshot of the pipeline graph for your reference.

          I'm willing to share my JSON file of the pipeline graph with you. Unfortunately, I'm unable to upload the file due to a pop-up error message that keeps appearing.

          Thanks


            Hi rb210002, regarding the freezing, it might either be an error/crash, or perhaps something is stalling due to blocking behaviour. Feel free to prepare an MRE (max 150 lines of code is ideal) and we can look into it. For uploading, you can also use Google Drive or similar, and paste the link here.
            Thanks, Erik

              @erik, do I need to create the MRE on the DepthAI GitHub issues page, or should I create it separately on Google Drive and share the link with you?

              Hi erik,

              Here is the Google Drive link for the MRE. If you can confirm that I should also create it as a DepthAI GitHub issue, I will share the MRE there as well.

              Google Drive MRE


                erik , I've granted you access. Can you please check and confirm?

                Hi @erik ,

                I have given you access to the Google Drive. Can you please confirm if you were able to access it?

                Hi @erik, just wanted to check if you were able to access the information I provided. If you need anything else, please let me know.

                Thanks.