DepthAI
Calibration and StereoDepth with external cameras

Hi,

I'd like to use the OAK-FFC-4P to calculate stereo depth maps from host-side image inputs. In my case, the images are being taken from an external camera stereo pair controlled by the host machine that does not interface with the OAK board. Is there a way to obtain a valid calibration for external cameras, load it, and use the StereoDepth node with host-side input (with no cameras connected to the OAK board)?

Thanks for your help.

@rsinghmn
You'd need intrinsics and extrinsics, as well as distortion coefficients, and perfectly synced frames to perform stereo matching.

If you have a way of retrieving these, then on-device stereo is fully possible. I'd suggest you use OpenCV to obtain the camera parameters.

Thanks,
Jaka

Thanks @jakaskerl

After obtaining these parameters, how can they be loaded onto the device?

I was able to get stereo depth maps by following this OpenCV tutorial, which uses chessboard calibration. It calls cv2.initUndistortRectifyMap and cv2.remap to rectify the camera images, and the compute method of OpenCV's StereoSGBM class to get disparity maps. Is there an example that demonstrates how the camera matrix and other parameters from this tutorial map onto the config used by the device? This was the closest I was able to find, but the OpenCV calls are different. Conversely, can these low-level calls (undistort/remap, SGBM compute) be made directly on the device using the Script node or any other node (Warp or ImageManip)? Or is the only way through configuring the StereoDepth node?

Thanks,

Raj

    Hi rsinghmn
    You need to fill out the calibration with new values (intrinsics and extrinsics) and load it. Here is a script to help you. It will take some trial and error, though, since it is very important to perfectly sync the frames from the stereo cameras.

    Thanks,
    Jaka


    Thanks. I'm ensuring frame sync using an external-trigger setup for image acquisition.

    I've been trying to modify the intrinsics and extrinsics in the calibration JSON. While I can see it has an effect on the rectified and stereo depth images, I'm a little confused, since the numbers don't directly map to the OpenCV calibration results (from cv2.calibrateCamera). Do you happen to know which OpenCV calls are being made to obtain the intrinsics and extrinsics?

    Also, does the StereoDepth node support arbitrary mono image sizes, or does it only support the ones found in dai.MonoCameraProperties.SensorResolution (example)? Asking because I encountered the following error when trying to feed in 1600Wx1400H images:

    [14442C1021F1D5D600] [1.1.2] [7.626] [StereoDepth(0)] [error] Maximum supported input image width for stereo is 1280. Skipping frame!

      rsinghmn
      Don't know off the top of my head, but the functions used are all here: luxonis/depthai-calibration/blob/524952bc0de2f1b9d9bb19327f6fdfa1ace7a8a2/calibration_utils.py; we use cv2 to obtain the parameters.

      rsinghmn Also, does the StereoDepth node support arbitrary mono image sizes, or does it only support the ones found in dai.MonoCameraProperties.SensorResolution (example)? Asking because I encountered the following error when trying to feed in 1600Wx1400H images:

      You are limited by width: the stereo HW block on the RVC2 caps the input image size to optimize the performance of the stereo algorithms. Increasing the size would likely have a large performance cost anyway.

      Thanks,
      Jaka