Shai

  • 16 days ago
  • Joined 4 Jun
  • 0 best answers
  • I have since used the ImageAlign node instead of the setDepthAlign() function on the StereoDepth node, and I am using DepthAI v3.0.0-rc.4 instead of DepthAI v3.0.0-rc.3. So far I have not seen the issue, but my testing has been limited. I will let you know if it arises again, and I will share code that reproduces it. A rough sketch of the new wiring is below.
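
    Roughly, this is the new wiring (a minimal sketch rather than our exact code; it assumes the v3 ImageAlign node keeps the input / inputAlignTo / outputAligned names from the Luxonis depth-alignment example, and the requested output sizes are only illustrative):

        import depthai as dai

        pipeline = dai.Pipeline()

        camRgb = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_A)
        monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
        monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
        stereo = pipeline.create(dai.node.StereoDepth)
        align = pipeline.create(dai.node.ImageAlign)

        rgbOut = camRgb.requestOutput((1280, 800))
        monoLeft.requestOutput((640, 400)).link(stereo.left)
        monoRight.requestOutput((640, 400)).link(stereo.right)

        # Align the depth output to the RGB frame instead of calling setDepthAlign()
        stereo.depth.link(align.input)
        rgbOut.link(align.inputAlignTo)

        alignedDepthQueue = align.outputAligned.createOutputQueue()
        rgbQueue = rgbOut.createOutputQueue()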

  • The AI chatbot on the Luxonis docs pages said that "setDepthAlign()" is not recommended for DepthAI v3 and that I should use the ImageAlign node instead. Is this true?

  • For our purposes, we are looking for a point in the RGB image (a chessboard), and we want to find the depth at that point in the stereo depth image. Since DepthAI no longer automatically resizes the depth image when calling "setDepthAlign()", I resized the stereo depth frame manually. However, I noticed that the field of view of the stereo depth image is wider than that of the RGB image. If the chessboard is near the edge of the RGB image, the corresponding point near the edge of the resized stereo depth image is not on the chessboard, and I get a very inaccurate depth reading for the chessboard's location.

    How would you recommend finding the depth of a specific (x, y) pixel in the RGB image?
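
    For reference, this is the kind of host-side lookup I have in mind (a sketch only; it assumes the depth frame has already been aligned to the RGB image, e.g. via the ImageAlign node, and depth_at_pixel / half_window are just illustrative names):

        import numpy as np

        def depth_at_pixel(depth_frame: np.ndarray, x: int, y: int, half_window: int = 4) -> float:
            """Median of the valid depth values in a small window around (x, y).

            Assumes depth_frame is aligned to the RGB image, so (x, y) indexes
            the same scene point in both; values are in the frame's units (mm).
            """
            h, w = depth_frame.shape[:2]
            x0, x1 = max(0, x - half_window), min(w, x + half_window + 1)
            y0, y1 = max(0, y - half_window), min(h, y + half_window + 1)
            window = depth_frame[y0:y1, x0:x1]
            valid = window[window > 0]  # 0 marks invalid/unmeasured depth
            return float(np.median(valid)) if valid.size else float("nan")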

  • How is that going to make the stereo output resolution match the RGB resolution? It will decrease the stereo depth resolution, which is already smaller than the RGB resolution.

  • We are using depthai 3.0.0-rc3.

    • I get the following error in the command prompt when I try to connect to my OAK 4D.

      [2266104263] [169.254.1.222] [155.230] [Camera(1)] [error] Node threw exception, stopping the node. Exception message: Internal error occured. Please report. commit: f89fa998745f6e7a31757bf61e1ce9ab2b3621cc | version: 0.0.1 | file: /work/src/gst/rvc4/GstCamRvc4.cpp:983
      [2266104263] [169.254.1.222] [155.230] [Camera(2)] [error] Node threw exception, stopping the node. Exception message: Internal error occured. Please report. commit: f89fa998745f6e7a31757bf61e1ce9ab2b3621cc | version: 0.0.1 | file: /work/src/gst/rvc4/GstCamRvc4.cpp:983
      [2266104263] [169.254.1.222] [155.296] [Camera(0)] [error] Node threw exception, stopping the node. Exception message: Internal error occured. Please report. commit: f89fa998745f6e7a31757bf61e1ce9ab2b3621cc | version: 0.0.1 | file: /work/src/gst/rvc4/GstCamRvc4.cpp:983

      • I am getting the error "StereoDepth | setOutputSize is not supported on RVC4 platform".

      • The setDepthAlign function is not resizing the stereo output to match the RGB output in DepthAI v3 the way it did in v2, so for now we resize the depth on the host ourselves (a sketch follows after the code).
        The following is my code:

            auto camRgb = pipeline_->create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_A);
            auto monoLeft = pipeline_->create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_B);
            auto monoRight = pipeline_->create<dai::node::Camera>()->build(dai::CameraBoardSocket::CAM_C);
            auto stereo = pipeline_->create<dai::node::StereoDepth>();
        
            monoLeft->requestFullResolutionOutput()->link(stereo->left);
            monoRight->requestFullResolutionOutput()->link(stereo->right);
        
            stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::DEFAULT);
            stereo->initialConfig->setMedianFilter(dai::StereoDepthConfig::MedianFilter::KERNEL_7x7);
            stereo->setLeftRightCheck(true);
            stereo->setExtendedDisparity(true);
            stereo->setSubpixel(true);
            stereo->setDepthAlign(dai::CameraBoardSocket::CAM_A);
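
        The host-side resize amounts to something like this (an illustrative Python/OpenCV sketch, not our production code; resize_depth_to_rgb is an illustrative name; INTER_NEAREST avoids interpolating across depth edges, but it does not fix the field-of-view mismatch described above):

            import cv2
            import numpy as np

            def resize_depth_to_rgb(depth_frame: np.ndarray, rgb_w: int, rgb_h: int) -> np.ndarray:
                """Resize a uint16 depth map to the RGB resolution on the host.

                Nearest-neighbour keeps original depth values instead of
                interpolating across object boundaries; it does NOT correct
                the wider field of view of the stereo pair.
                """
                return cv2.resize(depth_frame, (rgb_w, rgb_h), interpolation=cv2.INTER_NEAREST)
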
      • Ok, we updated the OS to 1.15.0 and we updated the software to v3.0.0-rc3 (from v3.0.0-rc2).

        When we run
        cat /proc/device-tree/model
        we get
        Luxonis, Inc. KalamaP OAK4-D R7

        Nevertheless, the issues remain.

        This is the mono image:

        And this is the depth image:

        And here is our Python script:

        #!/usr/bin/env python3
        
        import cv2
        import depthai as dai
        import numpy as np
        
        color = (125, 73, 219)
        
        # Create pipeline
        pipeline = dai.Pipeline()
        # Config
        topLeft = dai.Point2f(0.4, 0.4)
        bottomRight = dai.Point2f(0.41, 0.42)
        
        # Define sources and outputs
        monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
        monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
        stereo = pipeline.create(dai.node.StereoDepth)
        spatialLocationCalculator = pipeline.create(dai.node.SpatialLocationCalculator)
        
        # Linking
        monoLeftOut = monoLeft.requestOutput((640, 400))
        monoRightOut = monoRight.requestOutput((640, 400))
        monoLeftOut.link(stereo.left)
        monoRightOut.link(stereo.right)
        mono_left_q = monoLeftOut.createOutputQueue()
        
        stereo.setRectification(True)
        stereo.setExtendedDisparity(True)
        
        stepSize = 0.02
        
        config = dai.SpatialLocationCalculatorConfigData()
        config.depthThresholds.lowerThreshold = 10
        config.depthThresholds.upperThreshold = 20000
        calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
        config.roi = dai.Rect(topLeft, bottomRight)
        
        spatialLocationCalculator.inputConfig.setWaitForMessage(False)
        spatialLocationCalculator.initialConfig.addROI(config)
        
        
        xoutSpatialQueue = spatialLocationCalculator.out.createOutputQueue()
        outputDepthQueue = spatialLocationCalculator.passthroughDepth.createOutputQueue()
        
        stereo.depth.link(spatialLocationCalculator.inputDepth)
        
        
        inputConfigQueue = spatialLocationCalculator.inputConfig.createInputQueue()
        pipeline.getDefaultDevice().setIrLaserDotProjectorIntensity(1)
        pipeline.getDefaultDevice().setIrFloodLightIntensity(1)
        
        with pipeline:
            pipeline.start()
            pipeline.getDefaultDevice().setIrFloodLightIntensity(1)
            pipeline.getDefaultDevice().setIrLaserDotProjectorIntensity(1)
            while pipeline.isRunning():
                spatialData = xoutSpatialQueue.get().getSpatialLocations()
                mono_left_frame = mono_left_q.get().getCvFrame()
        
                #print("Use WASD keys to move ROI!")
                outputDepthImage: dai.ImgFrame = outputDepthQueue.get()

                frameDepth = outputDepthImage.getFrame()  # raw uint16 depth map in mm
                print("Median depth value: ", np.median(frameDepth))
        
                depthFrameColor = cv2.normalize(frameDepth, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
                depthFrameColor = cv2.equalizeHist(depthFrameColor)
                # depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
                for depthData in spatialData:
                    roi = depthData.config.roi
                    roi = roi.denormalize(width=depthFrameColor.shape[1], height=depthFrameColor.shape[0])
                    xmin = int(roi.topLeft().x)
                    ymin = int(roi.topLeft().y)
                    xmax = int(roi.bottomRight().x)
                    ymax = int(roi.bottomRight().y)
        
                    depthMin = depthData.depthMin
                    depthMax = depthData.depthMax
        
                    fontType = cv2.FONT_HERSHEY_TRIPLEX
                    xminw = xmin - 20
                    if xminw >= depthFrameColor.shape[1]:
                        xminw -= 30
                    cv2.rectangle(depthFrameColor, (xmin, ymin), (xmax, ymax), color, 1)
                    cv2.putText(depthFrameColor, f"X: {int(depthData.spatialCoordinates.x)} mm", (xminw, ymin + 20), fontType, 0.5, color)
                    cv2.putText(depthFrameColor, f"Y: {int(depthData.spatialCoordinates.y)} mm", (xminw, ymin + 35), fontType, 0.5, color)
                    cv2.putText(depthFrameColor, f"Z: {int(depthData.spatialCoordinates.z)} mm", (xminw, ymin + 50), fontType, 0.5, color)
                # Show the frame
                cv2.imshow("depth", depthFrameColor)
                cv2.imshow("monoleft", mono_left_frame)
        
                key = cv2.waitKey(1)
                if key == ord('q'):
                    pipeline.stop()
                    break
        
                stepSize = 0.01
        
                newConfig = False

                if key == ord('w'):
                    if topLeft.y - stepSize >= 0:
                        topLeft.y -= stepSize
                        bottomRight.y -= stepSize
                        newConfig = True
                elif key == ord('a'):
                    if topLeft.x - stepSize >= 0:
                        topLeft.x -= stepSize
                        bottomRight.x -= stepSize
                        newConfig = True
                elif key == ord('s'):
                    if bottomRight.y + stepSize <= 1:
                        topLeft.y += stepSize
                        bottomRight.y += stepSize
                        newConfig = True
                elif key == ord('d'):
                    if bottomRight.x + stepSize <= 1:
                        topLeft.x += stepSize
                        bottomRight.x += stepSize
                        newConfig = True
                elif key == ord('1'):
                    calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEAN
                    print('Switching calculation algorithm to MEAN!')
                    newConfig = True
                elif key == ord('2'):
                    calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MIN
                    print('Switching calculation algorithm to MIN!')
                    newConfig = True
                elif key == ord('3'):
                    calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MAX
                    print('Switching calculation algorithm to MAX!')
                    newConfig = True
                elif key == ord('4'):
                    calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MODE
                    print('Switching calculation algorithm to MODE!')
                    newConfig = True
                elif key == ord('5'):
                    calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
                    print('Switching calculation algorithm to MEDIAN!')
                    newConfig = True
        
                if newConfig:
                    config.roi = dai.Rect(topLeft, bottomRight)
                    config.calculationAlgorithm = calculationAlgorithm
                    cfg = dai.SpatialLocationCalculatorConfig()
                    cfg.addROI(config)
                    inputConfigQueue.send(cfg)
                    newConfig = False
        • Ok, so we checked and found that our device has OS version 1.8.0. However, when trying to update the OS, we get this error:

          / # recovery --wipe_cache --update_package=/data/full_update_luxonis_ext4_debug-1.15.0.zip
          Illegal instruction (core dumped)
            • Ok, we followed your most recent instructions, and we found no difference.

            This is the image from the left Mono-Camera:

            This is the Spatial Detection Image:

            Here is our Python code:

            #!/usr/bin/env python3

            import cv2
            import depthai as dai
            import numpy as np

            color = (125, 73, 219)

            # Create pipeline
            pipeline = dai.Pipeline()
            # Config
            topLeft = dai.Point2f(0.4, 0.4)
            bottomRight = dai.Point2f(0.41, 0.42)

            # Define sources and outputs
            monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
            monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
            stereo = pipeline.create(dai.node.StereoDepth)
            spatialLocationCalculator = pipeline.create(dai.node.SpatialLocationCalculator)

            # Linking
            monoLeftOut = monoLeft.requestOutput((640, 400))
            monoRightOut = monoRight.requestOutput((640, 400))
            monoLeftOut.link(stereo.left)
            monoRightOut.link(stereo.right)
            mono_left_q = monoLeftOut.createOutputQueue()

            stereo.setRectification(True)
            stereo.setExtendedDisparity(True)

            stepSize = 0.02

            config = dai.SpatialLocationCalculatorConfigData()
            config.depthThresholds.lowerThreshold = 10
            config.depthThresholds.upperThreshold = 20000
            calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
            config.roi = dai.Rect(topLeft, bottomRight)

            spatialLocationCalculator.inputConfig.setWaitForMessage(False)
            spatialLocationCalculator.initialConfig.addROI(config)

            xoutSpatialQueue = spatialLocationCalculator.out.createOutputQueue()
            outputDepthQueue = spatialLocationCalculator.passthroughDepth.createOutputQueue()

            stereo.depth.link(spatialLocationCalculator.inputDepth)

            inputConfigQueue = spatialLocationCalculator.inputConfig.createInputQueue()
            pipeline.getDefaultDevice().setIrLaserDotProjectorIntensity(1)
            pipeline.getDefaultDevice().setIrFloodLightIntensity(1)

            with pipeline:
                pipeline.start()
                pipeline.getDefaultDevice().setIrFloodLightIntensity(1)
                pipeline.getDefaultDevice().setIrLaserDotProjectorIntensity(1)
                while pipeline.isRunning():
                    spatialData = xoutSpatialQueue.get().getSpatialLocations()
                    mono_left_frame = mono_left_q.get().getCvFrame()

                    #print("Use WASD keys to move ROI!")
                    outputDepthImage: dai.ImgFrame = outputDepthQueue.get()

                    frameDepth = outputDepthImage.getFrame()  # raw uint16 depth map in mm
                    print("Median depth value: ", np.median(frameDepth))

                    depthFrameColor = cv2.normalize(frameDepth, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
                    depthFrameColor = cv2.equalizeHist(depthFrameColor)
                    # depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
                    for depthData in spatialData:
                        roi = depthData.config.roi
                        roi = roi.denormalize(width=depthFrameColor.shape[1], height=depthFrameColor.shape[0])
                        xmin = int(roi.topLeft().x)
                        ymin = int(roi.topLeft().y)
                        xmax = int(roi.bottomRight().x)
                        ymax = int(roi.bottomRight().y)

                        depthMin = depthData.depthMin
                        depthMax = depthData.depthMax

                        fontType = cv2.FONT_HERSHEY_TRIPLEX
                        xminw = xmin - 20
                        if xminw >= depthFrameColor.shape[1]:
                            xminw -= 30
                        cv2.rectangle(depthFrameColor, (xmin, ymin), (xmax, ymax), color, 1)
                        cv2.putText(depthFrameColor, f"X: {int(depthData.spatialCoordinates.x)} mm", (xminw, ymin + 20), fontType, 0.5, color)
                        cv2.putText(depthFrameColor, f"Y: {int(depthData.spatialCoordinates.y)} mm", (xminw, ymin + 35), fontType, 0.5, color)
                        cv2.putText(depthFrameColor, f"Z: {int(depthData.spatialCoordinates.z)} mm", (xminw, ymin + 50), fontType, 0.5, color)
                    # Show the frame
                    cv2.imshow("depth", depthFrameColor)
                    cv2.imshow("monoleft", mono_left_frame)

                    key = cv2.waitKey(1)
                    if key == ord('q'):
                        pipeline.stop()
                        break

                    stepSize = 0.01

                    newConfig = False

                    if key == ord('w'):
                        if topLeft.y - stepSize >= 0:
                            topLeft.y -= stepSize
                            bottomRight.y -= stepSize
                            newConfig = True
                    elif key == ord('a'):
                        if topLeft.x - stepSize >= 0:
                            topLeft.x -= stepSize
                            bottomRight.x -= stepSize
                            newConfig = True
                    elif key == ord('s'):
                        if bottomRight.y + stepSize <= 1:
                            topLeft.y += stepSize
                            bottomRight.y += stepSize
                            newConfig = True
                    elif key == ord('d'):
                        if bottomRight.x + stepSize <= 1:
                            topLeft.x += stepSize
                            bottomRight.x += stepSize
                            newConfig = True
                    elif key == ord('1'):
                        calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEAN
                        print('Switching calculation algorithm to MEAN!')
                        newConfig = True
                    elif key == ord('2'):
                        calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MIN
                        print('Switching calculation algorithm to MIN!')
                        newConfig = True
                    elif key == ord('3'):
                        calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MAX
                        print('Switching calculation algorithm to MAX!')
                        newConfig = True
                    elif key == ord('4'):
                        calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MODE
                        print('Switching calculation algorithm to MODE!')
                        newConfig = True
                    elif key == ord('5'):
                        calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
                        print('Switching calculation algorithm to MEDIAN!')
                        newConfig = True

                    if newConfig:
                        config.roi = dai.Rect(topLeft, bottomRight)
                        config.calculationAlgorithm = calculationAlgorithm
                        cfg = dai.SpatialLocationCalculatorConfig()
                        cfg.addROI(config)
                        inputConfigQueue.send(cfg)
                        newConfig = False
            • Per your suggestion, we added pipeline.getDefaultDevice().setIrLaserDotProjectorIntensity(1) and pipeline.getDefaultDevice().setIrFloodLightIntensity(1) to our script, but the results are the same. We also piped out the left mono image, and we could not see any dots. The following is our script if it helps:

              #!/usr/bin/env python3

              import cv2
              import depthai as dai
              import numpy as np

              color = (125, 73, 219)

              # Create pipeline
              pipeline = dai.Pipeline()
              # Config
              topLeft = dai.Point2f(0.4, 0.4)
              bottomRight = dai.Point2f(0.41, 0.42)

              # Define sources and outputs
              monoLeft = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_B)
              monoRight = pipeline.create(dai.node.Camera).build(dai.CameraBoardSocket.CAM_C)
              stereo = pipeline.create(dai.node.StereoDepth)
              spatialLocationCalculator = pipeline.create(dai.node.SpatialLocationCalculator)

              # Linking
              monoLeftOut = monoLeft.requestOutput((640, 400))
              monoRightOut = monoRight.requestOutput((640, 400))
              monoLeftOut.link(stereo.left)
              monoRightOut.link(stereo.right)
              mono_left_q = monoLeftOut.createOutputQueue()

              stereo.setRectification(True)
              stereo.setExtendedDisparity(True)

              stepSize = 0.02

              config = dai.SpatialLocationCalculatorConfigData()
              config.depthThresholds.lowerThreshold = 10
              config.depthThresholds.upperThreshold = 20000
              calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
              config.roi = dai.Rect(topLeft, bottomRight)

              spatialLocationCalculator.inputConfig.setWaitForMessage(False)
              spatialLocationCalculator.initialConfig.addROI(config)

              xoutSpatialQueue = spatialLocationCalculator.out.createOutputQueue()
              outputDepthQueue = spatialLocationCalculator.passthroughDepth.createOutputQueue()

              stereo.depth.link(spatialLocationCalculator.inputDepth)

              inputConfigQueue = spatialLocationCalculator.inputConfig.createInputQueue()
              pipeline.getDefaultDevice().setIrLaserDotProjectorIntensity(1)
              pipeline.getDefaultDevice().setIrFloodLightIntensity(1)

              with pipeline:
                 pipeline.start()
                 while pipeline.isRunning():
                     spatialData = xoutSpatialQueue.get().getSpatialLocations()
                     mono_left_frame = mono_left_q.get().getCvFrame()

                     #print("Use WASD keys to move ROI!")
                      outputDepthImage: dai.ImgFrame = outputDepthQueue.get()

                      frameDepth = outputDepthImage.getFrame()  # raw uint16 depth map in mm
                     print("Median depth value: ", np.median(frameDepth))

                     depthFrameColor = cv2.normalize(frameDepth, None, 255, 0, cv2.NORM_INF, cv2.CV_8UC1)
                     depthFrameColor = cv2.equalizeHist(depthFrameColor)
                     # depthFrameColor = cv2.applyColorMap(depthFrameColor, cv2.COLORMAP_HOT)
                     for depthData in spatialData:
                         roi = depthData.config.roi
                         roi = roi.denormalize(width=depthFrameColor.shape[1], height=depthFrameColor.shape[0])
                         xmin = int(roi.topLeft().x)
                         ymin = int(roi.topLeft().y)
                         xmax = int(roi.bottomRight().x)
                         ymax = int(roi.bottomRight().y)

                         depthMin = depthData.depthMin
                         depthMax = depthData.depthMax

                         fontType = cv2.FONT_HERSHEY_TRIPLEX
                         xminw = xmin - 20
                         if xminw >= depthFrameColor.shape[1]:
                             xminw -= 30
                          cv2.rectangle(depthFrameColor, (xmin, ymin), (xmax, ymax), color, 1)
                         cv2.putText(depthFrameColor, f"X: {int(depthData.spatialCoordinates.x)} mm", (xminw, ymin + 20), fontType, 0.5, color)
                         cv2.putText(depthFrameColor, f"Y: {int(depthData.spatialCoordinates.y)} mm", (xminw, ymin + 35), fontType, 0.5, color)
                         cv2.putText(depthFrameColor, f"Z: {int(depthData.spatialCoordinates.z)} mm", (xminw, ymin + 50), fontType, 0.5, color)
                     # Show the frame
                     cv2.imshow("depth", depthFrameColor)
                     cv2.imshow("monoleft", mono_left_frame)

                     key = cv2.waitKey(1)
                     if key == ord('q'):
                         pipeline.stop()
                         break

                     stepSize = 0.01

                      newConfig = False

                      if key == ord('w'):
                         if topLeft.y - stepSize >= 0:
                             topLeft.y -= stepSize
                             bottomRight.y -= stepSize
                             newConfig = True
                     elif key == ord('a'):
                         if topLeft.x - stepSize >= 0:
                             topLeft.x -= stepSize
                             bottomRight.x -= stepSize
                             newConfig = True
                     elif key == ord('s'):
                         if bottomRight.y + stepSize <= 1:
                             topLeft.y += stepSize
                             bottomRight.y += stepSize
                             newConfig = True
                     elif key == ord('d'):
                         if bottomRight.x + stepSize <= 1:
                             topLeft.x += stepSize
                             bottomRight.x += stepSize
                             newConfig = True
                     elif key == ord('1'):
                         calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEAN
                         print('Switching calculation algorithm to MEAN!')
                         newConfig = True
                     elif key == ord('2'):
                         calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MIN
                         print('Switching calculation algorithm to MIN!')
                         newConfig = True
                     elif key == ord('3'):
                         calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MAX
                         print('Switching calculation algorithm to MAX!')
                         newConfig = True
                     elif key == ord('4'):
                         calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MODE
                         print('Switching calculation algorithm to MODE!')
                         newConfig = True
                     elif key == ord('5'):
                         calculationAlgorithm = dai.SpatialLocationCalculatorAlgorithm.MEDIAN
                         print('Switching calculation algorithm to MEDIAN!')
                         newConfig = True

                     if newConfig:
                         config.roi = dai.Rect(topLeft, bottomRight)
                         config.calculationAlgorithm = calculationAlgorithm
                         cfg = dai.SpatialLocationCalculatorConfig()
                         cfg.addROI(config)
                         inputConfigQueue.send(cfg)
                         newConfig = False
               

               • I was testing the OAK 4D Pro against the OAK 4D. I tested them in low-light settings and against a low-visual-interest surface (a plain wall). Both seemed to perform about the same in both scenarios. Most notably, the OAK 4D Pro could not determine the distance of the plain wall from about 2.5 m away (it just returned a null value). I was using the SpatialLocationCalculator example for these experiments. Is there anything that has to be done to enable the IR dot projector or IR sensor? A sketch of what I tried is below.
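
                 This is what I tried (a sketch; the intensity setters are the ones from our script, and it assumes the v2-style Device.getIrDrivers() diagnostic carries over to v3, which I have not confirmed):

                     import depthai as dai

                     pipeline = dai.Pipeline()
                     device = pipeline.getDefaultDevice()

                     # Intensity setters take a normalized value in [0, 1]
                     device.setIrLaserDotProjectorIntensity(1)
                     device.setIrFloodLightIntensity(1)

                     # An empty list would suggest no IR projector was detected
                     # (getIrDrivers() is the v2 API; assumed to exist in v3)
                     print("IR drivers:", device.getIrDrivers())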

                  • I'm sorry, but I don't understand your response. Is there any documentation on how to use ADB to connect to an OAK 4D device? What is oelinux123? Why do I need to update my OS? I am running the latest version of Windows.

                  • For the OAK-D S2, we are using the pre-made rgb_preview.cpp example (except that we changed the output resolution):

                    #include <iostream>

                    // Includes common necessary includes for development using depthai library
                    #include "depthai/depthai.hpp"

                    int main() {
                        using namespace std;

                        // Create pipeline
                        dai::Pipeline pipeline;

                        // Define source and output
                        auto camRgb = pipeline.create<dai::node::ColorCamera>();
                        auto xoutRgb = pipeline.create<dai::node::XLinkOut>();

                        xoutRgb->setStreamName("rgb");

                        // Properties
                        camRgb->setPreviewSize(1920, 1080);
                        camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);
                        camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
                        camRgb->setInterleaved(false);
                        camRgb->setColorOrder(dai::ColorCameraProperties::ColorOrder::RGB);
                        camRgb->setFps(30);  // Set FPS to 30

                        // Linking
                        camRgb->preview.link(xoutRgb->input);

                        // Connect to device and start pipeline
                        dai::Device device(pipeline, dai::UsbSpeed::SUPER);

                        cout << "Connected cameras: " << device.getConnectedCameraFeatures() << endl;

                        // Print USB speed
                        cout << "Usb speed: " << device.getUsbSpeed() << endl;

                        // Bootloader version
                        if(device.getBootloaderVersion()) {
                            cout << "Bootloader version: " << device.getBootloaderVersion()->toString() << endl;
                        }

                        // Device name
                        cout << "Device name: " << device.getDeviceName() << " Product name: " << device.getProductName() << endl;

                        // Output queue will be used to get the rgb frames from the output defined above
                        auto qRgb = device.getOutputQueue("rgb", 4, false);

                        while(true) {
                            auto inRgb = qRgb->get<dai::ImgFrame>();

                            // Retrieve 'bgr' (opencv format) frame
                            cv::imshow("rgb", inRgb->getCvFrame());

                            int key = cv::waitKey(1);
                            if(key == 'q' || key == 'Q') {
                                break;
                            }
                        }

                        return 0;
                    }

                    For the OAK 4D, we are using the pre-made camera_output.cpp example:

                    #include <chrono>
                    #include <iostream>
                    #include <memory>

                    #include <opencv2/opencv.hpp>

                    #include "depthai/depthai.hpp"

                    int main() {
                        // Create device
                        std::shared_ptr<dai::Device> device = std::make_shared<dai::Device>();

                        // Create pipeline
                        dai::Pipeline pipeline(device);

                        // Create nodes
                        auto cam = pipeline.create<dai::node::Camera>()->build();

                        // auto videoQueue = cam->requestOutput(std::make_pair(640, 400))->createOutputQueue();
                        std::cout << "FPS: " << cam->getMaxRequestedFps() << std::endl;

                        auto videoQueue = cam->requestOutput(std::make_pair(1920, 1080))->createOutputQueue();
                        videoQueue->setBlocking(false);
                        videoQueue->setMaxSize(1);

                        // Start pipeline
                        pipeline.start();

                        while(true) {
                            auto videoIn = videoQueue->get<dai::ImgFrame>();
                            if(videoIn == nullptr) continue;

                            cv::imshow("video", videoIn->getCvFrame());

                            if(cv::waitKey(1) == 'q') {
                                break;
                            }
                        }

                        return 0;
                    }

                    • When simply grabbing a 1920x1080 image and displaying it on screen, the OAK 4D shows over 5 seconds of latency. This is significantly different from our OAK-D S2, which grabs a 1920x1080 image and displays it with nearly imperceptible latency. Is this fully explained by the different connection types (PoE vs. USB2)? If not, are there any suggestions for how to reduce the OAK 4D's latency? A per-frame measurement sketch is below.
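
                       For reference, a per-frame measurement along the lines of the Luxonis latency-measurement example (a Python sketch; it assumes dai.Clock.now() and ImgFrame.getTimestamp() behave in v3 as they do in v2, and frame_latency_ms is an illustrative name):

                           import depthai as dai

                           def frame_latency_ms(frame: dai.ImgFrame) -> float:
                               """Milliseconds between device-side capture and 'now' on the host.

                               dai.Clock.now() and getTimestamp() share the same monotonic
                               clock, so the difference is the end-to-end frame latency.
                               """
                               return (dai.Clock.now() - frame.getTimestamp()).total_seconds() * 1000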

                      • I configured my OAK 4D camera to use DHCP and now I want to set it to use a static IP. However, it is no longer in setup mode and no longer responds to the QR code during setup. How do I get it back into setup mode?

                      • Can I get an idea of the timeline for these developments? Specifically, the ObjectTracker node and the new model zoo with RVC4-compatible models?

                      • Is there a version of depthai-nodes for C++ so that I could translate the examples from Python to C++?