DepthAI-v2: Input tensor 'images' (0) exceeds available data range

Hello,

I've been getting this error:

[DetectionNetwork(4)] [error] Input tensor 'images' (0) exceeds available data range. Data size (153600B), tensor offset (0), size (307200B) - skipping inference

The only thing I've noticed is that the expected tensor size (307200 B) is exactly double the data size (153600 B).

What in this code causes the error?

import depthai as dai
import cv2

# Model paths
plank_model_path = 'plank.blob'
label_model_path = 'label.blob'

pipeline = dai.Pipeline()

# Define a source - color camera
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setInterleaved(False)
cam.setBoardSocket(dai.CameraBoardSocket.RGB)

# Create outputs
xout_rgb = pipeline.create(dai.node.XLinkOut)
xout_rgb.setStreamName("rgb")

# Create ImageManip node to resize frames to the NN input size
manip = pipeline.create(dai.node.ImageManip)
manip.initialConfig.setResizeThumbnail(320, 320)
manip.initialConfig.setKeepAspectRatio(False)
manip.setMaxOutputFrameSize(320*320 * 3)

# Camera control / input
controlIn = pipeline.create(dai.node.XLinkIn)
controlIn.setStreamName('control')

plankDet = pipeline.create(dai.node.YoloDetectionNetwork)
plankDet.setBlobPath(plank_model_path)
plankDet.setConfidenceThreshold(0.5)
plankDet.input.setBlocking(False)

labelDet = pipeline.create(dai.node.YoloDetectionNetwork)
labelDet.setBlobPath(label_model_path)

cam.video.link(manip.inputImage)
manip.out.link(xout_rgb.input)
controlIn.out.link(cam.inputControl)
manip.out.link(plankDet.input)

with dai.Device(pipeline) as device:

    # Output queues will be used to get the rgb frames and NN data from the outputs defined above
    q_rgb = device.getOutputQueue(xout_rgb.getStreamName(), maxSize=4, blocking=False)
    q_ctrl = device.getInputQueue(controlIn.getStreamName(), maxSize=4, blocking=False)

    ctrl = dai.CameraControl()
    ctrl.setManualExposure(2500, 1300)  # exposure time (microseconds), ISO sensitivity
    ctrl.setManualFocus(103)            # lens position, range 0..255
    q_ctrl.send(ctrl)


    while True:
        in_rgb = q_rgb.get()
        cv2.imshow("RGB", in_rgb.getCvFrame())

        # If 'q' is pressed on the keyboard, exit this loop
        if cv2.waitKey(1) == ord('q'):
            break

    # Clean up
    cv2.destroyAllWindows()

Hi FrancisGuindon
When converting the model with blobconverter, make sure you set the right input type: pass either -ip U8 or -ip FP16 in the compile parameters. That should solve your problem.
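
For example, a minimal blobconverter call might look like this (a sketch only: the ONNX path and SHAVE count are placeholders, and the flags reflect blobconverter's documented options):

import blobconverter

# Compile an exported ONNX model to a .blob. "-ip U8" tells the compiler
# that the input tensor arrives as U8 (raw camera frames), which matches
# what the DepthAI pipeline sends to the network.
blob_path = blobconverter.from_onnx(
    model="plank.onnx",         # placeholder path to your exported model
    data_type="FP16",           # precision for the converted weights
    shaves=6,                   # placeholder SHAVE count
    compile_params=["-ip U8"],  # input precision: U8 (or "-ip FP16")
)
print(blob_path)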

Thanks,
Jaka

5 days later

jakaskerl

Hello,

I used Luxonis' tool to convert my .pt to a blob; I haven't used blobconverter in my code or via the CLI.

I fixed the issue by using ColorCamera's preview output instead of video + ImageManip. The aspect ratio was the main reason I thought I needed ImageManip (I wanted the full 16:9 image in 320x320), until I realized the preview's keepAspectRatio can be set to False.
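
For reference, a minimal sketch of that fix (same model as above; the rest of the pipeline is omitted):

import depthai as dai

pipeline = dai.Pipeline()

# The preview output is BGR and can be sized directly for the NN,
# so no ImageManip node is needed.
cam = pipeline.create(dai.node.ColorCamera)
cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
cam.setInterleaved(False)
cam.setPreviewSize(320, 320)
cam.setPreviewKeepAspectRatio(False)  # squeeze the full 16:9 frame into 320x320

plankDet = pipeline.create(dai.node.YoloDetectionNetwork)
plankDet.setBlobPath('plank.blob')
plankDet.setConfidenceThreshold(0.5)

cam.preview.link(plankDet.input)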

Anyway, thank you for your time.

Hi FrancisGuindon
Great! I didn't think to check the NN input; I just assumed it was a datatype issue since that's the most common cause.
Anyway, yes: the neural network expects an RGB image, whereas the video output is NV12.
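
That also explains the "double" observation from the first post; a quick size check under those assumptions:

# NV12 stores a full-resolution Y plane plus subsampled interleaved UV,
# i.e. 1.5 bytes per pixel; planar BGR/RGB needs 3 bytes per pixel.
w, h = 320, 320
nv12_bytes = w * h * 3 // 2  # 153600 B -- the "data size" in the error
bgr_bytes = w * h * 3        # 307200 B -- the tensor size the NN expects
assert bgr_bytes == 2 * nv12_bytes

If you ever do need the video + ImageManip path, forcing the manip output to planar BGR should also line the sizes up; this is my assumption from the ImageManipConfig API, untested here: manip.initialConfig.setFrameType(dai.ImgFrame.Type.BGR888p).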

Thanks for reporting back, I'll make sure to check next time.
Jaka