Hello @jakaskerl
I'm using a saved video that I send frame by frame to the OAK device for inference. Here is the code for the steps I'm doing:
import cv2
import depthai as dai
import numpy as np

def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
    # Resize to the NN input size, convert HWC -> CHW, then flatten for setData
    return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()
# Prepare frame to be sent to the device
frame_nn = dai.ImgFrame()
frame_nn.setData(to_planar(video_frame, (640, 640)))
frame_nn.setType(dai.ImgFrame.Type.BGR888p)
frame_nn.setWidth(640)
frame_nn.setHeight(640)
# Send frame to the device
qVideo.send(frame_nn)
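For anyone following along, the interleaved-to-planar step above can be checked shape-wise with plain NumPy (a minimal sketch; the cv2.resize call is skipped by starting from an already-resized HWC frame, so it runs without OpenCV or a device attached):

```python
import numpy as np

h, w = 640, 640
frame = np.zeros((h, w, 3), dtype=np.uint8)  # BGR, HWC layout, as OpenCV decodes it

# HWC -> CHW, then flatten: this is the byte layout BGR888p / setData expects
planar = frame.transpose(2, 0, 1).flatten()

print(planar.shape)  # (1228800,) == 3 * 640 * 640
```

The flattened length must equal width x height x 3, otherwise the device-side ImgFrame will be malformed.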
I took that part from one of the examples on GitHub; I don't remember which one now.
Here is the info about my outputs.
+++ Output layer info +++
Layer 0
Name: output0
Order: StorageOrder.CHW
dataType: DataType.FP16
dims: [1, 37, 8400]
Layer 1
Name: output1
Order: StorageOrder.NCHW
dataType: DataType.FP16
dims: [1, 32, 160, 160]
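Those dims look like a YOLOv8 segmentation export: each of the 8400 anchors in output0 carries 4 box values + class scores + 32 mask coefficients (37 total would mean a single class), and output1 holds 32 prototype masks at 160x160. Assuming that layout (hedged: the exact ordering and whether scores are already sigmoided depend on the export), a minimal NumPy decode sketch looks like this, with random data standing in for the FP16 tensors you would pull off the NN queue:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Shapes copied from the layer info above; random values stand in for real data
rng = np.random.default_rng(0)
output0 = rng.standard_normal((1, 37, 8400)).astype(np.float32)      # detections
output1 = rng.standard_normal((1, 32, 160, 160)).astype(np.float32)  # mask protos

preds = output0[0].T              # (8400, 37): per anchor -> box, score, coeffs
boxes  = preds[:, :4]             # cx, cy, w, h in input-image pixels
scores = sigmoid(preds[:, 4])     # single-class confidence (may already be in [0,1]
                                  # in a real export; sigmoid here only for the sketch)
coeffs = preds[:, 5:]             # (8400, 32) per-detection mask coefficients

keep = scores > 0.5               # confidence filter (NMS omitted for brevity)
protos = output1[0].reshape(32, -1)        # (32, 160*160) prototype masks
masks = sigmoid(coeffs[keep] @ protos)     # linear combo of protos per detection
masks = masks.reshape(-1, 160, 160)        # low-res masks; upsample + crop to box next

print(boxes.shape, masks.shape[1:])
```

The final masks still need to be upsampled to the 640x640 input size and cropped to each box, but the coefficient-times-prototype product above is the core of the seg decode.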