Rectilinear output for fisheye lens (defishing/dewarping fisheye)

Hello,

One critical requirement of my use case is the ability to create video of a large area (a football pitch) for coaches and players to later look at.

I can currently do this with an RPi4, but not well! Encoding is limited to 1080p, the CPU struggles badly to dewarp the fisheye, and processing times are terrible.

I have purchased the BW1098FFC Body Kit and adapter, and am using an RPi4 and HQ Camera with a 180-degree M12 fisheye lens from Arducam.

The 4K video encoding is awesome, and I can see that one of the key features of DepthAI is Warp/Dewarp.

Looking at the Python API, I was hoping to see something familiar and akin to the fisheye functions that I currently use with OpenCV, but I don’t see any.
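
For context, the CPU-side dewarp I run today is roughly along these lines (a minimal sketch; the K and D values are placeholders for my own fisheye calibration):

    import cv2
    import numpy as np

    # Placeholder intrinsics (K) and fisheye distortion coefficients (D);
    # in practice these come from my own calibration.
    K = np.array([[1000.0, 0.0, 1920.0],
                  [0.0, 1000.0, 1080.0],
                  [0.0, 0.0, 1.0]])
    D = np.array([[-0.05], [0.01], [0.0], [0.0]])
    size = (3840, 2160)

    # Build the undistortion maps once...
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, size, cv2.CV_16SC2)

    # ...then remap every frame; this per-frame step is what the Pi's CPU
    # struggles with.
    frame = cv2.imread("frame.png")
    rectilinear = cv2.remap(frame, map1, map2, interpolation=cv2.INTER_LINEAR)
    cv2.imwrite("frame_dewarped.png", rectilinear)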

It would seem to me that either I have a hole in my knowledge that I need to fill (very likely; I do not have expertise in this space), or DepthAI does not have this capability.

This is a critical element to the success of my project with this hardware, so I am hoping that DepthAI does support this capability, and I just need to be pointed in the right direction and get in some learning…

Thanks in advance,
Ryan

(P.S. I have also read through the docs, forum, GitHub example code, and Discord and not found this information - so apologies upfront if this is already covered somewhere)

    Hi ryan,

    Thanks for using DepthAI! And for the thorough investigation. So this capability (warp/dewarp) does exist internally in our firmware, as we use it to rectify the grayscale (and color) images when doing stereo.

    I don't think this is its own pipeline node yet, though, so it is not usable standalone. In other words, this warp/dewarp is automatically applied when a DepthAI device is calibrated and used with the stereo-matching node (including to the RGB camera when RGB-depth is enabled).

    The plan is to have this as its own node though, as then it's useful in applications like yours (where the warp/dewarp isn't just an intermediate step in stereo matching).

    So the functionality exists; I just need to see how much work/time it is to get this into its own node so that you can use it directly for these purposes. Will circle back, and I'm pinging @luxonis-Sachin and @GergelySzabolcs here for their information WRT this capability.

    Thanks,
    Brandon

    Hi @ryan,

    I was wrong on this. We do actually have support for warp/dewarp, as part of the ImageManip node.

    It supports 4K input now, but with a maximum output of 1920x1080, and it is limited to a 3x3 transform; fisheye would need the addition of multi-point mesh support. It looks like we can increase the limit to 4K output as well; we will just need to check on the resource allocator to allow it to use more resources for 4K.

    And it looks like we can enable multi-point mesh support for this fisheye dewarp. This is already an option in the stereo node, I think (enabling mesh calibration).

    Will circle back with more details. In the meantime, you could try out the 3x3 transform config warp/dewarp to see how it does. I'll see if I can figure out how to do that. :-)

    Thoughts?

    Thanks,
    Brandon

    Hi @ryan ,

    So we have an example of a 4-point transform, but not yet for the matrix transform:
    https://github.com/luxonis/depthai-python/blob/ed6c07f/examples/20_color_rotate_warp.py#L140
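
    For reference, the heart of that example is an ImageManipConfig with four (normalized) source corners. Here is a trimmed-down, untested sketch (I believe the real example sends the config at runtime over an inputConfig queue; here I set initialConfig for brevity, and the corner points are arbitrary placeholders):

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.createColorCamera()
        camRgb.setPreviewSize(640, 480)

        manip = pipeline.createImageManip()
        manip.setMaxOutputFrameSize(640 * 480 * 3)

        # Map these four source corners (normalized 0..1 coordinates) onto the
        # corners of the output image. The values here are arbitrary placeholders.
        points = [dai.Point2f(0.1, 0.1), dai.Point2f(0.9, 0.0),
                  dai.Point2f(1.0, 1.0), dai.Point2f(0.0, 0.9)]
        manip.initialConfig.setWarpTransformFourPoints(points, True)

        camRgb.preview.link(manip.inputImage)

        xout = pipeline.createXLinkOut()
        xout.setStreamName("warped")
        manip.out.link(xout.input)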

    And here is the call for the 3x3 transform:
    https://docs.luxonis.com/projects/api/en/latest/references/python/#depthai.ImageManipConfig.setWarpTransformMatrix3x3

    We haven't written a demo for it yet, though. I -think- if you run the OpenCV calibration you should be able to get the transform and then use it, which I -think- will do something. But ultimately you would want the multi-point mesh.
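
    To make that concrete, the call would slot into a pipeline roughly like the untested sketch below. The matrix is given row-major, and the identity here is just a placeholder for whatever 3x3 you derive on the OpenCV side (e.g. via cv2.getPerspectiveTransform). One caveat: a single 3x3 is only a projective transform, so on its own it can't undo radial fisheye distortion, which is exactly why the multi-point mesh is the end goal:

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.createColorCamera()
        camRgb.setPreviewSize(1920, 1080)

        manip = pipeline.createImageManip()
        manip.setMaxOutputFrameSize(1920 * 1080 * 3)

        # Row-major 3x3 warp matrix. Identity is a placeholder; in practice it
        # would come from your OpenCV calibration / cv2.getPerspectiveTransform().
        # A 3x3 only expresses a projective warp, not radial fisheye distortion.
        matrix = [1.0, 0.0, 0.0,
                  0.0, 1.0, 0.0,
                  0.0, 0.0, 1.0]
        manip.initialConfig.setWarpTransformMatrix3x3(matrix)

        camRgb.preview.link(manip.inputImage)

        xout = pipeline.createXLinkOut()
        xout.setStreamName("warped")
        manip.out.link(xout.input)

        with dai.Device(pipeline) as device:
            q = device.getOutputQueue("warped", maxSize=4, blocking=False)
            frame = q.get().getCvFrame()  # warped frame, ready to display/encode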

    Thoughts?

    Thanks,
    Brandon

    Thank you Brandon for following up so quickly and so thoroughly.

    I did see the 4-point transform, which is why I kinda 'deep down in the cockles of my heart' knew that the functionality must be there somewhere and that there is a hole in my knowledge 🙂

    I also did see setWarpTransformMatrix3x3 in the API, and my thought was that it would/might/should do what I needed, based on my limited understanding of the algorithms doing the actual work behind the OpenCV fisheye functions I currently use. But as I was at the limits of my knowledge, I wanted to check before investing serious learning time in an approach that might actually be a dead end.

    This is critically important to me, and so armed with the knowledge that there is a path there for me to pursue, I will follow it as best I can.

    If you are able to get the output to 4K, that would be fantastic (I am already doing 1080p with the RPi4 and it really needs to be higher; I was on the verge of trying to get to 2K by stitching two frames or some such).

    To be honest, this will be challenging for me. I will appreciate anything that reduces the slope of my curve.

    I will start on this today, though will disappear down a hole for a few days (hopefully!) while I work to get my head around it properly. I will re-emerge with either results, questions, or both I expect. When I do, would your Discord server be the more appropriate forum for this communication?

    Thank you again. I very much appreciate how good your support is. I am sure you guys must be very busy.
    Ryan

      5 days later

      Hi ryan,

      Sorry about the delay in response here (I can't believe it's already been 5 days). So yes, @luxonis-Sachin on Discord would likely be able to help you in more real-ish time on this. We're in the midst of debugging some Pi issues currently, but after that we can probably get 4K output support out. Worst case, we could do a one-off build to support it before we have more dynamic configuration that allows the system to scale to support this (and not use as many resources when not doing 4K).

      Thanks,
      Brandon