Gaurav3434

  • Dec 15, 2021
  • Hi Erik,
    In my code, the FOV of the depth image is bigger than the FOV of the RGB image, and I want both FOVs to be exactly the same.

    It looks like even when my RGB camera is capturing at 1920×1080 (the full RGB FOV), it still covers less than the FOV of the combined frame formed by the two mono cameras.

    As you can see in the following image, the RGB camera cannot cover a FOV as big as the two mono cameras combined.
    I know I can crop the depth frame and resize it to match the RGB frame, but I want to keep the accuracy as high as theoretically possible!

    Since the RGB FOV is smaller than the depth FOV, I am thinking of cropping the depth frame rather than the RGB frame.

    Can you please guide me with exact numbers for cropping the depth frame to match the RGB frame? I can't determine the total resolution and aspect ratio of the depth frame: although the mono cameras are set to 400P in my code, I still don't know the specifications of the combined image formed by both mono cameras...
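
    For reference, recent DepthAI releases expose StereoDepth.setDepthAlign, which aligns the depth output to the RGB camera so both share the same FOV without manual cropping. A minimal sketch, assuming a DepthAI version that supports RGB alignment:

    import depthai as dai

    pipeline = dai.Pipeline()
    stereo = pipeline.create(dai.node.StereoDepth)
    # Align the depth map to the RGB camera's perspective, so the depth
    # output matches the RGB FOV instead of the mono cameras' FOV
    stereo.setDepthAlign(dai.CameraBoardSocket.RGB)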

    • I am using the following code for depth calculation...
      I want to know the FOV and resolution of the output window of this code (see the sketch after the code).
      Please also tell me what FOV and resolution I should set for RGB in order to show the same FOV as the depth output in the following code!

      #!/usr/bin/env python3
      
      import cv2
      import depthai as dai
      
      stepSize = 0.05
      
      newConfig = False
      
      # Create pipeline
      pipeline = dai.Pipeline()
      
      # Define sources and outputs
      monoLeft = pipeline.create(dai.node.MonoCamera)
      monoRight = pipeline.create(dai.node.MonoCamera)
      stereo = pipeline.create(dai.node.StereoDepth)
      spatialLocationCalculator = pipeline.create(dai.node.SpatialLocationCalculator)
      
      xoutDepth = pipeline.create(dai.node.XLinkOut)
      xoutSpatialData = pipeline.create(dai.node.XLinkOut)
      xinSpatialCalcConfig = pipeline.create(dai.node.XLinkIn)
      
      xoutDepth.setStreamName("depth")
      xoutSpatialData.setStreamName("spatialData")
      xinSpatialCalcConfig.setStreamName("spatialCalcConfig")
      
      # Properties
      monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
      monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
      monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
      monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
      
      lrcheck = False
      subpixel = False
      
      stereo.initialConfig.setConfidenceThreshold(255)
      stereo.setLeftRightCheck(lrcheck)
      stereo.setSubpixel(subpixel)
      
      # Config
      topLeft = dai.Point2f(0.45, 0.45)
      bottomRight = dai.Point2f(0.5, 0.5)
      
      config = dai.SpatialLocationCalculatorConfigData()
      config.depthThresholds.lowerThreshold = 100
      config.depthThresholds.upperThreshold = 10000
      config.roi = dai.Rect(topLeft, bottomRight)
      
      spatialLocationCalculator.setWaitForConfigInput(False)
      spatialLocationCalculator.initialConfig.addROI(config)
      
      # Linking
      monoLeft.out.link(stereo.left)
      monoRight.out.link(stereo.right)
      
      spatialLocationCalculator.passthroughDepth.link(xoutDepth.input)
      stereo.depth.link(spatialLocationCalculator.inputDepth)
      
      spatialLocationCalculator.out.link(xoutSpatialData.input)
      xinSpatialCalcConfig.out.link(spatialLocationCalculator.inputConfig)
      
      # Connect to device and start pipeline
      with dai.Device(pipeline) as device:
      
          # Output queue will be used to get the depth frames from the outputs defined above
          depthQueue = device.getOutputQueue(name="depth", maxSize=4, blocking=False)
          spatialCalcQueue = device.getOutputQueue(name="spatialData", maxSize=4, blocking=False)
          spatialCalcConfigInQueue = device.getInputQueue("spatialCalcConfig")
      
          color = (255, 255, 255)
      
          print("Use WASD keys to move ROI!")
      
          while True:
              inDepth = depthQueue.get() # Blocking call; waits until new data arrives
      
              depthFrame = inDepth.getFrame()
              # Normalize the 16-bit depth map to 8-bit for display, then
              # equalize and apply a color map for better visualization
              depthFrameColor = cv2.normalize(depthFrame, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
              depthFrameColor = cv2.equalizeHist(depthFrameColor)
              depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
      
              spatialData = spatialCalcQueue.get().getSpatialLocations()
              for depthData in spatialData:
                  roi = depthData.config.roi
                  roi = roi.denormalize(width=depthFrameColor.shape[1], height=depthFrameColor.shape[0])
                  xmin = int(roi.topLeft().x)
                  ymin = int(roi.topLeft().y)
                  xmax = int(roi.bottomRight().x)
                  ymax = int(roi.bottomRight().y)
      
                  depthMin = depthData.depthMin
                  depthMax = depthData.depthMax
      
                  fontType = cv2.FONT_HERSHEY_TRIPLEX
                  # Last argument is the line thickness (the original passed a font constant here by mistake)
                  cv2.rectangle(depthFrameColor, (xmin, ymin), (xmax, ymax), color, 2)
                  cv2.putText(depthFrameColor, f"X: {int(depthData.spatialCoordinates.x)} mm", (xmin + 10, ymin + 20), fontType, 0.5, 255)
                  cv2.putText(depthFrameColor, f"Y: {int(depthData.spatialCoordinates.y)} mm", (xmin + 10, ymin + 35), fontType, 0.5, 255)
                  cv2.putText(depthFrameColor, f"Z: {int(depthData.spatialCoordinates.z)} mm", (xmin + 10, ymin + 50), fontType, 0.5, 255)
              # Show the frame
              cv2.imshow("depth", depthFrameColor)
      
              key = cv2.waitKey(1)
              if key == ord('q'):
                  break
              elif key == ord('w'):
                  if topLeft.y - stepSize >= 0:
                      topLeft.y -= stepSize
                      bottomRight.y -= stepSize
                      newConfig = True
              elif key == ord('a'):
                  if topLeft.x - stepSize >= 0:
                      topLeft.x -= stepSize
                      bottomRight.x -= stepSize
                      newConfig = True
              elif key == ord('s'):
                  if bottomRight.y + stepSize <= 1:
                      topLeft.y += stepSize
                      bottomRight.y += stepSize
                      newConfig = True
              elif key == ord('d'):
                  if bottomRight.x + stepSize <= 1:
                      topLeft.x += stepSize
                      bottomRight.x += stepSize
                      newConfig = True
      
              if newConfig:
                  config.roi = dai.Rect(topLeft, bottomRight)
                  config.calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.AVERAGE
                  cfg = dai.SpatialLocationCalculatorConfig()
                  cfg.addROI(config)
                  spatialCalcConfigInQueue.send(cfg)
                  newConfig = False
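
      As a side note on resolution: with THE_400_P each mono frame is 640×400, and the stereo depth output here has the same resolution. The per-camera FOV can be read from the device calibration; a minimal sketch, assuming your DepthAI version exposes CalibrationHandler.getFov and a no-argument dai.Device():

      import depthai as dai

      # Connect without a pipeline, just to read the stored calibration
      with dai.Device() as device:
          calib = device.readCalibration()
          for socket in (dai.CameraBoardSocket.LEFT,
                         dai.CameraBoardSocket.RIGHT,
                         dai.CameraBoardSocket.RGB):
              print(socket, calib.getFov(socket), "deg HFOV")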
      • Hello Erik, thanks for the response!

        I want to know what the number (130) represents in the following line. Is it the distance at which the camera is going to focus, and if so, what is the unit: mm, cm, or m?
        cam.initialControl.setManualFocus(130)

        thanks
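
        For context, the DepthAI docs describe the argument of setManualFocus as a lens position in the 0..255 range rather than a physical distance (roughly, 0 focuses far and 255 focuses near). A minimal sketch, assuming a ColorCamera node named camRgb:

        import depthai as dai

        pipeline = dai.Pipeline()
        camRgb = pipeline.create(dai.node.ColorCamera)
        # Argument is a lens position (0..255), not a distance in mm/cm/m
        camRgb.initialControl.setManualFocus(130)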

        • Hello Erik, I ran the code you posted and it worked, thanks!
          However, I wish to know how to edit your code to make use of OpenCV functions.

          For example, I want to draw a small rectangle in the center of the captured image and then stream it from the HTTP server.
          So how should I edit the HTTP code you sent me to draw a rectangle in the middle? Could you please send me an edited version? (A rough sketch of both points is below.)

          Also, could you please tell me which line I should add to my code to set the RGB focus manually?
          I tried adding this line, but it's still autofocusing:

          camRgb.initialControl.setManualFocus(130)
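
          A minimal sketch of both pieces; the draw_center_rect helper is hypothetical, and the autofocus-off call assumes your DepthAI version exposes CameraControl.AutoFocusMode:

          import cv2
          import depthai as dai

          # Draw a small centered rectangle on a BGR frame before streaming it
          def draw_center_rect(frame, half_size=50):
              h, w = frame.shape[:2]
              cx, cy = w // 2, h // 2
              cv2.rectangle(frame, (cx - half_size, cy - half_size),
                            (cx + half_size, cy + half_size), (255, 255, 255), 2)
              return frame

          # Disable autofocus explicitly before fixing the lens position;
          # setManualFocus alone may be overridden while AF is still active
          pipeline = dai.Pipeline()
          camRgb = pipeline.create(dai.node.ColorCamera)
          camRgb.initialControl.setAutoFocusMode(dai.CameraControl.AutoFocusMode.OFF)
          camRgb.initialControl.setManualFocus(130)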

          • Hello Erik, thanks for the response!
            I got the HTTP request on the server; it's working!
            However, I think I'm doing something wrong while flashing the pipeline, because even after flashing, the program still runs on my laptop and not on the OAK-D device.

            Also, let me know how I can upload my Python script to the OAK-D device.

            And should I run the pipeline-flashing program only once, or should I add it to my main program so it runs every time the OAK-D is powered on?
            I ran the following program to flash the pipeline, but let me know if this is incorrect!

            import depthai as dai

            pipeline = dai.Pipeline()

            # Find a device waiting in bootloader mode; 'found' is False if none is available
            (found, bl) = dai.DeviceBootloader.getFirstAvailableDevice()
            if not found:
                raise RuntimeError("No available device found")
            bootloader = dai.DeviceBootloader(bl)
            progress = lambda p: print(f'Flashing progress: {p*100:.1f}%')
            bootloader.flash(progress, pipeline)
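
            One thing worth noting: the pipeline flashed above contains no nodes, so the device has nothing to run after boot. In standalone mode the on-device logic typically lives inside a Script node; a minimal sketch, with the loop body as an assumed placeholder:

            import depthai as dai

            pipeline = dai.Pipeline()
            # In standalone mode, on-device logic is written inside a Script node
            script = pipeline.create(dai.node.Script)
            script.setScript("""
            import time
            while True:
                node.warn('running on device')  # shows up in the device log
                time.sleep(1)
            """)
            # ...then flash this pipeline with DeviceBootloader as above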
            • Hi Erik,
              I flashed the bootloader and also flashed a created pipeline.
              However, how can I make sure that the program is now running on the OAK-D-POE and not on my laptop?

              Also, I want to know how to trigger and run the program on the OAK-D, as I want to connect my OAK-D-POE only to a PLC and not to a computer or a microprocessor like a Raspberry Pi.

              thanks!

              • Hello, I want to run the example files on the OAK-D-POE device itself.
                I ran the file 'flash_bootloader.py' as instructed, but that made no difference.
                Please also tell me how to verify whether the program is running on the OAK-D device or on the host device.

                Also, please guide me on how to send data from the OAK-D-POE device to the host device over Ethernet; I want to send integer values back to the host (see the sketch below).

                Thank you in advance for any help!
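
                On the Ethernet part, Luxonis's PoE TCP-streaming experiments handle this with a Script node that opens a plain socket on the device. A rough sketch along those lines; the host IP, port, and the integer being sent are all assumptions:

                import depthai as dai

                pipeline = dai.Pipeline()
                script = pipeline.create(dai.node.Script)
                script.setScript("""
                import socket
                # Assumed host address and port; the host runs a matching TCP server
                sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                sock.connect(('192.168.1.100', 9876))
                value = 42  # example integer to send back to the host
                sock.send(str(value).encode())
                sock.close()
                """)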
