Hmm, interesting, maybe I am missing something? The setup I am using is the following:
Create left, right rgb and depth pipelines:
auto rgb_left_pipeline = pipeline.create<dai::node::ColorCamera>();
auto rgb_right_pipeline = pipeline.create<dai::node::ColorCamera>();
auto depth_pipeline = pipeline.create<dai::node::StereoDepth>();
Create XLink out pipelines:
auto rgb_xlink_out = pipeline.create<dai::node::XLinkOut>();
auto depth_xlink_out = pipeline.create<dai::node::XLinkOut>();
Set some basic properties, stream name, resolution etc:
rgb_xlink_out->setStreamName("rgb");
depth_xlink_out->setStreamName("depth");
auto cam_res = dai::ColorCameraProperties::SensorResolution::THE_800_P;
rgb_left_pipeline->setCamera("left");
rgb_left_pipeline->setResolution(cam_res);
rgb_left_pipeline->setFps(10);
rgb_right_pipeline->setCamera("right");
rgb_right_pipeline->setResolution(cam_res);
rgb_right_pipeline->setFps(10);
…etc
Link nodes, create device and create queues:
rgb_left_pipeline->preview.link(depth_pipeline->left);
rgb_right_pipeline->preview.link(depth_pipeline->right);
depth_pipeline->disparity.link(depth_xlink_out->input);
rgb_left_pipeline->preview.link(rgb_xlink_out->input);
dai::Device device(pipeline, dai::UsbSpeed::SUPER); print_device_info(device);
auto rgb_queue = device.getOutputQueue("rgb", 4, false);
auto depth_queue = device.getOutputQueue("depth", 4, false);
(Note: I used rgb_left/right_pipeline->preview here in place of the "out" output, since the ColorCamera nodes don't seem to have an "out" output equivalent?)
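For comparison, the standard stereo depth examples use MonoCamera nodes, which do expose an "out" output that links straight into StereoDepth - roughly like this (not what I want here, since I'm after the colour pair, but it is what made me question using preview):
// Mono-camera pattern from the standard stereo examples (uses the grayscale
// sensors rather than the colour pair I want to use):
auto mono_left = pipeline.create<dai::node::MonoCamera>();
auto mono_right = pipeline.create<dai::node::MonoCamera>();
mono_left->setBoardSocket(dai::CameraBoardSocket::CAM_B);
mono_right->setBoardSocket(dai::CameraBoardSocket::CAM_C);
mono_left->out.link(depth_pipeline->left);   // MonoCamera has "out"...
mono_right->out.link(depth_pipeline->right); // ...ColorCamera doesn't seem to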
Finally, a really basic extraction of frames for visualisation:
while(true)
{
    // Retrieve 'bgr' (opencv format) frame
    auto rgb_frame = rgb_queue->get<dai::ImgFrame>()->getFrame();
    auto depth_frame = depth_queue->get<dai::ImgFrame>()->getFrame();
    depth_frame.convertTo(depth_frame, CV_8UC1, 255 / depth_pipeline->initialConfig.getMaxDisparity());

    cv::imshow("rgb", rgb_frame);
    cv::imshow("depth", depth_frame);

    int key = cv::waitKey(1);
    if(key == 'q' || key == 'Q') {
        break;
    }
}
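(The 255 / getMaxDisparity() scaling is just to make the disparity displayable as an 8-bit image; if it's relevant, a colourmap could be applied on top in the same loop, something like the below - display only, so it shouldn't affect the actual problem:)
cv::Mat depth_colour;
cv::applyColorMap(depth_frame, depth_colour, cv::COLORMAP_JET); // display only
cv::imshow("depth", depth_colour);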
Ok, so when running, I get this error:
[1844301041C04B1200] [3.3.1.1] [3.132] [StereoDepth(2)] [error] Left input image stride ('900') should be equal to its width ('300'). Skipping frame!
Which does make sense - the reported stride of 900 is just the 300-pixel-wide preview times 3 bytes per pixel, i.e. I am feeding a three-channel (RGB) image matrix into something (the stereo depth node) that is presumably expecting a single-channel image matrix.
However, having gone over the documentation, I am not sure what the proper approach is for this use case. Using the preview output of the rgb_left and rgb_right nodes seems a bit suspect to me, but the underlying problem appears to be that the stereo depth node expects a specific (single-channel) image format as its input.
Is there some configuration option on the stereo_depth node that I need to set, or should I be changing the output of the rgb nodes (to somehow produce both RGB and mono matrices)?
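In case it helps show what I mean by the second option, this is the kind of thing I was imagining - a completely untested sketch, assuming ImageManip can convert the colour preview into a single-channel GRAY8 frame (I may well be wrong about that, or about the exact call):
// Untested idea: convert each colour preview to GRAY8 via ImageManip before
// it reaches StereoDepth, keeping the original preview for the "rgb" output.
auto manip_left = pipeline.create<dai::node::ImageManip>();
auto manip_right = pipeline.create<dai::node::ImageManip>();
manip_left->initialConfig.setFrameType(dai::ImgFrame::Type::GRAY8);  // assumption
manip_right->initialConfig.setFrameType(dai::ImgFrame::Type::GRAY8); // assumption
rgb_left_pipeline->preview.link(manip_left->inputImage);
rgb_right_pipeline->preview.link(manip_right->inputImage);
manip_left->out.link(depth_pipeline->left);
manip_right->out.link(depth_pipeline->right);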
Cheers, Pete