Hello,

I hope this message finds you well. I am currently working on deploying my custom pose classification model on the OAK-D Lite. My aim is to achieve pose detection and pose landmark results comparable to the example provided in the depthai_blazepose repository on GitHub.

To accomplish this, I am using the pose_detection_sh4.blob and pose_landmark_full_sh4.blob files to detect poses and extract landmarks from my input data.

However, I am encountering some errors when attempting to run my custom pose classification model on OAK-D Lite. I have attached a screenshot of the errors below for your reference:

Here's the code for my pipeline. I've only included the relevant parts, as the full code is quite lengthy. If you would like the complete code, please let me know and I will share it with you.

import cv2
import depthai as dai 
import numpy as np

##Create dai pipeline
pipeline = dai.Pipeline()

##Define RGB camera node
cam_rgb = pipeline.create(dai.node.ColorCamera)
cam_rgb.setPreviewSize(456,256)
cam_rgb.setInterleaved(False)
cam_rgb.setBoardSocket(dai.CameraBoardSocket.RGB)
cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam_rgb.setFps(30)

##ImageManip node
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResize(224,256*3)
cam_rgb.preview.link(manip.inputImage)

##Define pose detection node
pose_detection = pipeline.createNeuralNetwork()
pose_detection.setBlob("pose_detection_sh4.blob")

pose_detection.setNumInferenceThreads(2)
pose_detection.input.setBlocking(False)

##Define pose landmark node
pose_landmark = pipeline.createNeuralNetwork()
pose_landmark.setBlob("pose_landmark_full_sh4.blob")
pose_landmark.setNumInferenceThreads(2)
pose_landmark.input.setBlocking(False)

##Pose classify
pose_classify = pipeline.createNeuralNetwork()
pose_classify.setBlob('yogapose_ir.blob')
pose_classify.setNumInferenceThreads(2)
pose_classify.input.setBlocking(False)

##Create XLinkOut nodes
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")

xout_detection = pipeline.create(dai.node.XLinkOut)
xout_detection.setStreamName("detections")

xout_landmarks = pipeline.create(dai.node.XLinkOut)
xout_landmarks.setStreamName("landmarks")

xout_classify = pipeline.create(dai.node.XLinkOut)
xout_classify.setStreamName("recognisedpose")

##Link nodes
manip.out.link(pose_detection.input)
pose_detection.out.link(pose_landmark.input)
pose_detection.out.link(xout_detection.input)
pose_landmark.out.link(xout_landmarks.input)
pose_detection.out.link(pose_classify.input)
pose_classify.out.link(xout_classify.input)
cam_rgb.preview.link(xout_rgb.input)

I would greatly appreciate any guidance or advice you can offer to help me resolve these errors and achieve my desired results.

Thank you for your time and assistance.


    Hi rb210002 ,
    As stated in the error message, you are sending 224x768 frames to the NN, which accepts 224x224 frames. Either use a model that accepts 224x768 frames, or change the frame size:

    manip.initialConfig.setResize(224,256*3) # Change to (224,224)

    Thoughts?
    Thanks, Erik

    Hi @erik#p6862,

    I understand that the NN expects an input size of 224x224. However, I was attempting to use a size that both the pose detection and landmark NNs could accept. The pose detection NN expects an input size of (224,256*3), whereas the landmark NN expects (224,224).

    I am wondering if it's possible to send images of different sizes to the two NNs. If so, how could I add this functionality to my pipeline?

    Also, can you tell me why I am getting the error "input tensor 'input_1' (0) exceeds"?

    Thank you for your help in advance.


      Hi rb210002 ,
      I would suggest using 2 ImageManips, each resizing to the required frame size, and then linking those outputs to the NN models.
      Thanks, Erik

      Thank you, @Erik#p6876, for your help. I'm receiving an error message that reads: "Input tensor 'input_1' (0) exceeds available data range. Data size 58652B, tensor offset 0, size 393216B - skipping inference." Do you have any insights as to what might be causing this error?


        Hi rb210002 , It's exactly the same thing - the frame data you are sending does not match the size the NN expects.
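
As a sanity check on the numbers in that error: 393216 B is exactly the size of a 256x256, 3-channel frame in FP16 (2 bytes per value), which suggests it is the landmark network receiving the wrong-sized input (the 256x256 shape is an assumption here; confirm it against your blob):

```python
# Size in bytes of a 3-channel frame in FP16 (2 bytes per value)
def frame_bytes(width, height, channels=3, bytes_per_value=2):
    return width * height * channels * bytes_per_value

print(frame_bytes(256, 256))  # 393216, matching "size 393216B" in the error
print(frame_bytes(256, 256) > 58652)  # True: the 58652 B actually delivered is too small
```

Doing this arithmetic for each node's expected input is a quick way to match an error like this to the link that produced it.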