• If I subtract 2 StereoDepth frames from each other, how do I output the result in OpenCV?

jeremie_m

  1. This is the "final" version to do a diff between 2 depth map images:
#! /usr/bin/env python3

from pathlib import Path
import torch
from torch import nn
import blobconverter
import onnx
from onnxsim import simplify
import sys

# Define the model
class DiffImgs(nn.Module):
    def forward(self, img1, img2):
        # We will be inputting UINT16, but it will be interpreted as UINT8,
        # so we need to adjust to account for the 8-bit shift
        img1DepthFP16 = 256.0 * img1[:,:,:,1::2] + img1[:,:,:,::2]
        img2DepthFP16 = 256.0 * img2[:,:,:,1::2] + img2[:,:,:,::2]

        # Create binary masks for each image
        # A pixel in the mask is 1 if the corresponding pixel in the image is 0, otherwise it's 0
        img1Mask = (img1DepthFP16 == 0)
        img2Mask = (img2DepthFP16 == 0)

        # If a pixel is 0 in either image, set the corresponding pixel in both images to 0
        img1DepthFP16 = img1DepthFP16 * (~img1Mask & ~img2Mask)
        img2DepthFP16 = img2DepthFP16 * (~img1Mask & ~img2Mask)

        # Compute the difference between the two images
        diff = torch.sub(img1DepthFP16, img2DepthFP16)

        # Square the difference
        # square_diff = torch.square(diff)

        # # Compute the square root of the square difference
        # sqrt_diff = torch.sqrt(square_diff)

        # sqrt_diff[sqrt_diff < 1500] = 0

        return diff

# Instantiate the model
model = DiffImgs()

# Create dummy input for the ONNX export
input1 = torch.randn(1, 1, 320, 544 * 2, dtype=torch.float16)
input2 = torch.randn(1, 1, 320, 544 * 2, dtype=torch.float16)

onnx_file = "diff_images.onnx"

# Export the model
torch.onnx.export(model,               # model being run
                  (input1, input2),    # model input (or a tuple for multiple inputs)
                  onnx_file,        # where to save the model (can be a file or file-like object)
                  opset_version=12,    # the ONNX version to export the model to
                  do_constant_folding=True,  # whether to execute constant folding for optimization
                  input_names = ['input1', 'input2'],   # the model's input names
                  output_names = ['output'])

# Simplify the model
onnx_model = onnx.load(onnx_file)
onnx_simplified, check = simplify(onnx_model)
onnx.save(onnx_simplified, "diff_images_simplified.onnx")

# Use blobconverter to convert onnx->IR->blob
blobconverter.from_onnx(
    model="diff_images_simplified.onnx",
    data_type="FP16",
    shaves=4,
    use_cache=False,
    output_dir="../",
    optimizer_params=[],
    compile_params=['-ip U8'],    
)

Important to note! This does not take in dynamic image sizes; the input must be a fixed size, because for some reason dynamic dimensions are not supported. So these 2 lines:

# Create dummy input for the ONNX export

input1 = torch.randn(1, 1, 320, 544 * 2, dtype=torch.float16)

input2 = torch.randn(1, 1, 320, 544 * 2, dtype=torch.float16)

Define what size of depth images are coming in. Change 320 (height) and 544 (width) to your actual depth image size.
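For example, for the 1280x800 depth frames used later in this thread, the dummy inputs would look like this (a sketch following the same layout as above):

input1 = torch.randn(1, 1, 800, 1280 * 2, dtype=torch.float16)
input2 = torch.randn(1, 1, 800, 1280 * 2, dtype=torch.float16)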

  2. This line is what changes the depth input from U8 (1 byte) back to U16 (2 bytes):

depthFP16 = 256.0 * depth[:,:,:,1::2] + depth[:,:,:,::2]

The reason is that the depth image comes into the model as U16, and we tell the NN to interpret the input buffer as U8 with this compile command: compile_params=['-ip U8']

So the data comes in twice as wide, because each U16 value is split into two U8 bytes. The line above is a little trick to turn that U8 data back into the FP16 data the NN requires: it recombines each low/high byte pair (low byte + 256 * high byte) into the original 16-bit depth value, now stored as FP16.
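You can see the same trick in plain NumPy (a small sketch; it assumes little-endian byte order, i.e. low byte first, which is what the indexing in the model line implies):

import numpy as np

# hypothetical 16-bit depth values
depth_u16 = np.array([[0, 1000, 65535]], dtype=np.uint16)

# the same buffer reinterpreted as U8: each value becomes a (low, high)
# byte pair, so each row is twice as wide
depth_u8 = depth_u16.view(np.uint8)

# recombine the pairs, exactly like the line in the model
recombined = 256.0 * depth_u8[:, 1::2] + depth_u8[:, ::2]
assert (recombined == depth_u16).all()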

What is your use case? Do you want to diff a "control depth" against new depth, or something else?

    AdamPolak

    Thank you Adam, that helps a lot!

    I thought the image size is always the same once the camera config is fixed.

    And the transform from U16 to U8, then back to FP16: the procedure seems tricky.

    I will try to understand the dynamic dimensions and the procedure of transform.

    In fact, my case is just like your 'control depth': I want to subtract 2 successive depth frames to find the moving pixels, but I'm not so skilled with NN models, and the subtraction must be done by an NN model on the device.

    Adam, you've helped a lot 😃

      jeremie_m

      You are right, the image is the same size once it is fixed. I just meant that if all of a sudden you wanted to increase/decrease the resolution of your depth frame to improve quality, you would need to create a new model.

      Heads up: you need quite a lot of depth filters enabled to make this diff work; the original depth frames are too noisy without post-processing.

      And when you do basically any type of depth processing, like MedianFilter, it slows down the depth FPS to ~9-11.

      But it will take your diff from this (no processing; 2 identical frames, nothing moved in the scene):

      to this (median filter 7x7 and high_density):

      to this (a lot of processing):
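      For reference, a minimal sketch of enabling that 7x7 median filter with the standard StereoDepth API (assuming a pipeline p, as in the code above):

      stereo = p.createStereoDepth()
      stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
      stereo.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)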

        AdamPolak quite a lot of depth filters enabled to make this diff work

        Thanks, Adam. 9-11 FPS may be enough for me; I'll have to try running the filters on the host if the rate is too low.

        Is the config of the depth filters set as you mentioned in the code here, or is it more complex than the parameters shown?

        AdamPolak This is the depthai code

        jakaskerl

        Thank you for the reply.

        1. The subtraction of depth frames, as we mentioned here.

        2. Selection of the farthest or nearest area of pixels in a depth frame (a rough sketch follows this list).

        3. A mask of a specific shape from or to a depth frame.

          (4. Some NN models that use both depth and RGB images, maybe.)

          etc.

          Some thoughts for now, thank you 😃
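          For item 2, a rough sketch in the same style as the DiffImgs model above (the 1500 mm threshold is a hypothetical value):

          class NearestMask(nn.Module):
              def forward(self, img):
                  # recombine U8 byte pairs into FP16 depth, as in DiffImgs
                  depthFP16 = 256.0 * img[:,:,:,1::2] + img[:,:,:,::2]
                  valid = depthFP16 > 0       # ignore invalid (0) depth
                  near = depthFP16 < 1500.0   # hypothetical 1.5 m threshold
                  # keep only the nearest valid pixels, zero out everything else
                  return depthFP16 * (valid & near)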


          jeremie_m does the code Adam provided above not work? Besides the tutorials we have on documentation / depthai-experiments, we don't have any additional ones.

            erik

            Thank you erik.

            I'm not sure if the first code works, because its inputs don't match the second code.

            And I'm looking into how to generate the shave.blob; I'm not clear on processing the NN model.

              13 days later

              AdamPolak

              Hello Adam, I've added the following process for the inputs, but something seems not right.

              Could you let me know how to adapt the depthai code?

              Thank you Adam.

              script.setScript("""
              old = node.io['in'].get()
              while True:
                  frame = node.io['in'].get()
                  node.io['img1'].send(old)
                  node.io['img2'].send(frame)
                  old = frame
              """)
              script.outputs['img1'].link(nn.inputs['input1'])
              script.outputs['img2'].link(nn.inputs['input2'])

                jeremie_m

                Hey it seems like you are updating the "old" frame each time.

                Which means you are basically subtracting consecutive frames from each other; is that what you are trying to do?

                If you want a "control" frame, then remove old = frame from your code.

                Also, put a sleep at the top of the while loop, or you may get unexpected behavior.
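                A minimal sketch of that "control frame" variant with the sleep (the 1 ms value is an arbitrary choice; the Script node allows import time):

                script.setScript("""
                import time
                control = node.io['in'].get()
                while True:
                    time.sleep(0.001)
                    frame = node.io['in'].get()
                    node.io['img1'].send(control)
                    node.io['img2'].send(frame)
                """)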

                  AdamPolak

                  Thank you, Adam.

                  I've tried what you said, but there is still a problem.

                  May I get your email address? I'd like to give you more details.

                  Thank you again Adam.

                    jeremie_m

                    Hey, I'm not really looking to publicly post my email address. What is the issue that is happening?

                      AdamPolak

                      Thanks for the reply.

                      I got images like this with the diff process above:

                      AdamPolak This is the "final" version to do a diff between 2 depth map images:

                      And I added time_diff code:

                      timestamp = dai.Clock.now()

                      with dai.Device(p) as device:

                      …

                          time_diff = depthDiff.getTimestamp() - timestamp
                          print('time_diff = ', time_diff)
                          timestamp = depthDiff.getTimestamp()

                      The output is always 0.0.

                      I'm confused now.

                      AdamPolak

                      All I want to do is subtract two frames in sequence, using the older frame as the 'control' frame.

                        jeremie_m

                        Could you post:

                        1. your entire python code
                        2. your code that generated the depth diff model?

                        Right now it looks like maybe you do not have the right dimensions for your depth. It seems to have more vertical pixels than horizontal pixels.

                          AdamPolak
                          Thank you Adam.

                          The Python code is as follows:

                          import numpy as np
                          import cv2
                          import depthai as dai
                          
                          resolution = (1280, 800)  # 24 FPS (without visualization)
                          lrcheck = False  # Better handling for occlusions
                          extended = False  # Closer-in minimum depth, disparity range is doubled
                          subpixel = True  # Better accuracy for longer distance, fractional disparity 32-levels
                          
                          p = dai.Pipeline()
                          
                          # Configure Mono Camera Properties
                          left = p.createMonoCamera()
                          left.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
                          left.setBoardSocket(dai.CameraBoardSocket.LEFT)
                          
                          right = p.createMonoCamera()
                          right.setResolution(dai.MonoCameraProperties.SensorResolution.THE_800_P)
                          right.setBoardSocket(dai.CameraBoardSocket.RIGHT)
                          
                          stereo = p.createStereoDepth()
                          left.out.link(stereo.left)
                          right.out.link(stereo.right)
                          
                          # Set stereo depth options
                          stereo.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
                          config = stereo.initialConfig.get()
                          config.postProcessing.speckleFilter.enable = False
                          # config.postProcessing.speckleFilter.speckleRange = 60
                          config.postProcessing.temporalFilter.enable = False
                          
                          config.postProcessing.spatialFilter.enable = False
                          # config.postProcessing.spatialFilter.holeFillingRadius = 2
                          # config.postProcessing.spatialFilter.numIterations = 1
                          config.postProcessing.thresholdFilter.minRange = 1000  # mm
                          config.postProcessing.thresholdFilter.maxRange = 10000  # mm
                          config.censusTransform.enableMeanMode = True
                          # these 2 parameters should be fine-tuned
                          config.costMatching.linearEquationParameters.alpha = 0
                          config.costMatching.linearEquationParameters.beta = 2
                          stereo.initialConfig.set(config)
                          stereo.setLeftRightCheck(lrcheck)
                          stereo.setExtendedDisparity(extended)
                          stereo.setSubpixel(subpixel)
                          # stereo.setDepthAlign(dai.CameraBoardSocket.RGB)
                          stereo.setRectifyEdgeFillColor(0)  # Black, to better see the cutout
                          
                          
                          # Depth -> Depth Diff
                          nn = p.createNeuralNetwork()
                          nn.setBlobPath("diff_images_simplified_openvino_2022.1_4shave.blob")
                          
                          script = p.create(dai.node.Script)
                          stereo.disparity.link(script.inputs['in'])
                          timestamp = dai.Clock.now()
                          print("ts1 = ", timestamp)
                          script.setScript("""
                          old = node.io['in'].get()
                          while True:
                              frame = node.io['in'].get()
                              node.io['img1'].send(old)
                              node.io['img2'].send(frame)
                              old = frame
                          """)
                          script.outputs['img1'].link(nn.inputs['input2'])
                          script.outputs['img2'].link(nn.inputs['input1'])
                          
                          # stereo.disparity.link(nn.inputs["input1"])
                          
                          depthDiffOut = p.createXLinkOut()
                          depthDiffOut.setStreamName("depth_diff")
                          nn.out.link(depthDiffOut.input)
                          
                          with dai.Device(p) as device:
                              qDepthDiff = device.getOutputQueue(name="depth_diff", maxSize=4, blocking=False)
                              while True:
                                  depthDiff = qDepthDiff.get()
                                  print("ts0 = ", timestamp)
                                  time_diff = depthDiff.getTimestamp() - timestamp
                                  print('time_diff = ', time_diff)
                                  timestamp = depthDiff.getTimestamp()
                                  print("ts 2 = ", timestamp)
                                  # Shape it here
                                  floatVector = depthDiff.getFirstLayerFp16()
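                                  # NB: np.reshape takes (rows, cols) = (height, width), but
                                  # resolution is (width, height) = (1280, 800), so the next line
                                  # builds a 1280x800 (taller than wide) image; this matches the
                                  # orientation issue noted above, and reshape(resolution[1],
                                  # resolution[0]) would give the expected 800x1280 layout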
                                  diff = np.array(floatVector).reshape(resolution[0], resolution[1])
                          
                                  colorize = cv2.normalize(diff, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
                                  colorize = cv2.applyColorMap(colorize, cv2.COLORMAP_JET)
                                  cv2.imshow("Diff", colorize)
                                  if cv2.waitKey(1) == ord('q'):
                                      break