Hi,

I understand the ImageManip node can do crop, flip, etc. operations.

I wonder if there is an equivalent operation that, for example, shifts the image 48 pixels to the left without cropping and maintains the original image size (for example, 640x400) after shifting. I know we could use ImageManip crop and then resize, but that would destroy the pixel position relationship, so it wouldn't be acceptable.

So what I need is a node that can shift an ImgFrame by +/-s pixels without changing the frame dimensions. For example, this node would be used in the following pipeline:

monoRight => {imgFrame[:,0:width-s]=imgFrame[:,s:width]} => stereo.right
monoLeft   =======================================> stereo.left
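
Concretely, the shift I mean is just the column copy written above. A minimal NumPy sketch of it (the zero-fill of the vacated columns and the 640x400 example size are my own illustrative choices):

```python
import numpy as np

def shift_left(frame: np.ndarray, s: int) -> np.ndarray:
    """Shift a mono frame s pixels to the left while keeping its size.
    The s columns vacated on the right are zero-filled."""
    shifted = np.zeros_like(frame)
    shifted[:, : frame.shape[1] - s] = frame[:, s:]
    return shifted

# Example: a 640x400 mono frame shifted 48 px to the left
frame = np.random.randint(0, 256, (400, 640), dtype=np.uint8)
shifted = shift_left(frame, 48)
assert shifted.shape == frame.shape  # dimensions unchanged
```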

Is there a way to construct the above pipeline using existing DepthAI nodes? If not, could you please point me to a reference/code example showing how a user can construct a custom image shift node (similar to the edge detector or feature tracker nodes)?

Please advise. Thanks a lot for your help.


    erik

    Thank you very much for the information. Disparity shift is a very interesting and useful concept for up-close object depth detection. However, it's not quite the same as an image shift.

    The use case I am working on is to use the StereoDepth disparity computation to emulate an "optical flow" computation:

    monoRight(n)=> {imgFrame[:,0:width-s]=imgFrame[:,s:width]} => stereo.right
    monoRight(n-1)   =======================================> stereo.left

    By feeding monoRight frame n and the previous monoRight frame n-1, I plan to get the optical flow from the disparity computation. The way to deal with the negative direction is to pre-shift the image by 95/2 ≈ 48 pixels. In this case, a disparity of 49 corresponds to an optical flow of +1, and a disparity of 47 corresponds to an optical flow of -1 (see the sketch below). Disparity shift won't be able to do the job, because it is missing the "disparity" values between 0..47, which are the "optical flow" values -48..-1.
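
In other words, with a pre-shift of 48 px, the measured disparity maps to the signed flow as flow = disparity - 48. A tiny sketch of that mapping (the value 48 is from the 95/2 estimate above):

```python
SHIFT = 48  # ~ 95 / 2: half of the disparity search range

def disparity_to_flow(disparity: int) -> int:
    """Map StereoDepth disparity to signed horizontal 'flow' when the
    right input was pre-shifted SHIFT pixels to the left."""
    return disparity - SHIFT

assert disparity_to_flow(49) == +1
assert disparity_to_flow(48) == 0
assert disparity_to_flow(47) == -1
```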

    I have already implemented the above pipeline in a stereo_by_host.py script, since I can shift the frame on the host. Now, if we want to run this pipeline on the device, the missing piece is an image shift node.
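
For reference, here is a rough, untested sketch of that host-side approach, assuming the depthai 2.x Python API; frame_source() is just a placeholder for wherever the mono frames come from, and the calibration/rectification setup is omitted:

```python
import depthai as dai
import numpy as np

W, H, SHIFT = 640, 400, 48

def shift_left(frame, s):
    out = np.zeros_like(frame)
    out[:, : frame.shape[1] - s] = frame[:, s:]
    return out

pipeline = dai.Pipeline()

inLeft = pipeline.create(dai.node.XLinkIn)
inLeft.setStreamName("left")
inRight = pipeline.create(dai.node.XLinkIn)
inRight.setStreamName("right")

stereo = pipeline.create(dai.node.StereoDepth)
stereo.setInputResolution(W, H)      # needed when frames come from the host
inLeft.out.link(stereo.left)
inRight.out.link(stereo.right)

xoutDisp = pipeline.create(dai.node.XLinkOut)
xoutDisp.setStreamName("disparity")
stereo.disparity.link(xoutDisp.input)

def to_img_frame(arr):
    img = dai.ImgFrame()
    img.setType(dai.ImgFrame.Type.RAW8)
    img.setWidth(W)
    img.setHeight(H)
    img.setData(arr.flatten())
    return img

with dai.Device(pipeline) as device:
    qL = device.getInputQueue("left")
    qR = device.getInputQueue("right")
    qD = device.getOutputQueue("disparity", maxSize=4, blocking=False)

    prev = None
    for frame in frame_source():  # placeholder: any source of HxW uint8 mono frames
        if prev is not None:
            qL.send(to_img_frame(prev))                      # frame n-1 -> left
            qR.send(to_img_frame(shift_left(frame, SHIFT)))  # shifted frame n -> right
            disp = qD.get().getFrame().astype(np.int16)
            flow = disp - SHIFT                              # signed horizontal flow
        prev = frame
```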

    Make sense? Please advise. Thanks a lot for your help again.


      Hi ynjiun,
      Makes sense, maybe it's something similar to this? You could also actually shift the image as described above; it's best to use e.g. PyTorch for it, and then use a script similar to this to export the model and convert it to OpenVINO/blob.
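
For illustration, something like this minimal PyTorch sketch could work (the 48 px shift, the 400x640 input size, and the output file name are just placeholders; the exported ONNX would still need conversion to OpenVINO IR and a .blob, e.g. with blobconverter):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ShiftLeft(nn.Module):
    """Shift the image `shift` pixels to the left, zero-filling the right edge,
    so the output keeps the input's spatial dimensions."""
    def __init__(self, shift: int = 48):
        super().__init__()
        self.shift = shift

    def forward(self, x):
        # x: (N, C, H, W). Pad `shift` zero columns on the right, then drop
        # the first `shift` columns -> same width, content moved left.
        x = F.pad(x, (0, self.shift))
        return x[:, :, :, self.shift:]

model = ShiftLeft(48)
dummy = torch.zeros(1, 1, 400, 640)
torch.onnx.export(model, dummy, "shift_left.onnx", opset_version=11)
```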
      Thanks, Erik