• DepthAI-v2
  • External trigger not working on OAK-D-SR-POE

Hans Your suggestion to "setFps() - set it to the highest value to get the most accurate triggers." means the frame is not captured within a fixed time interval from the trigger, but rather that the exposure will start with up to a 1/60th-second random deviation?

Set it to 129.6 FPS (for 800p) and exposure should start within 1/129.6th of a second of when the trigger went active. Note that in this mode exposure is started immediately after the trigger, then MIPI readout follows.
In the other mode (frame sync input), exposure of the next frame and readout of the current one are internally overlapped.
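
For reference, a minimal sketch of that external-trigger setup in the DepthAI v2 C++ API (the socket default and the 1/0 burst values are illustrative):

dai::Pipeline pipeline;
auto mono = pipeline.create<dai::node::MonoCamera>();
mono->setResolution(dai::MonoCameraProperties::SensorResolution::THE_800_P);
mono->setFps(129.6f); // max FPS for 800p; minimizes trigger-to-exposure latency
// External trigger mode: exposure starts right after the trigger, then MIPI readout
mono->initialControl.setExternalTrigger(1, 0); // capture 1 frame per trigger, discard 0
// Alternative (frame sync input): exposure of the next frame overlaps readout of the current one
// mono->initialControl.setFrameSyncMode(dai::CameraControl::FrameSyncMode::INPUT);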

Hans Ok, I picked up and tested another camera, and both regular capture and triggered capture of RGB work perfectly fine on that one. I will contact sales for a replacement of the broken device, unless you have other suggestions?

Could be fried from 24V..

Hans It seems the trigger does not actually work on the ToF sensor; instead I need to trigger one of the other cameras and, when that fires, grab the latest available ToF frame, correct? I guess that is why you were telling me to set the FPS to the max?

Not sure I fully understand the question. The ToF triggering should work the same way; it shares the same FSIN line.

Hans Even when setting the output buffer to a single frame with device.getOutputQueue("depth", maxSize=1, blocking=False), I still usually get frames that are several seconds old, from before the trigger. This must mean there is another buffer on the device side. Is there a way to clear/limit that buffer as well?

You can set the size and blocking behavior on any link that has an .input. So for the XLink out link depth.out.link(depth_xout.input), you can call depth_xout.input.setBlocking(False), and the same goes for the queue size.
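
In the C++ API the same looks roughly like this (a sketch; the depthXout/tof names are illustrative):

auto depthXout = pipeline.create<dai::node::XLinkOut>();
depthXout->setStreamName("depth");
depthXout->input.setBlocking(false); // drop old frames instead of stalling the pipeline
depthXout->input.setQueueSize(1);    // keep only the newest frame on the device side
tof->depth.link(depthXout->input);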

Thanks,
Jaka

    Hi Jaka,
    Thanks for the help. Will try the suggestions on the FPS and queue size.
    I still can't get triggering the ToF sensor directly to work. Can you share the code you used to test it yourself? Or please check the code below; I tried to make it a minimal combination of the tof_depth example and the external trigger example. For me, this code just gives continuous output with the warning "[warning] Device configured to generate sync, but no camera with sync output capability was started".
    The triggering works if I change the ToF to a MonoCamera.

    #include <iostream>
    #include <memory>
    #include <opencv2/opencv.hpp>
    #include <depthai/depthai.hpp>
    #include "depthai-shared/datatype/RawToFConfig.hpp"
    #include "depthai/pipeline/datatype/ToFConfig.hpp"
    
    // Define the shared pointers and other variables
    std::shared_ptr<dai::DataOutputQueue> qDepth;
    
    dai::Pipeline pipeline;
    std::shared_ptr<dai::node::ToF> tof;
    dai::RawToFConfig tofConfig;
    std::shared_ptr<dai::Device> device;
    
    
    // Helper carried over from the tof_depth example (unused in this minimal repro)
    std::shared_ptr<dai::ToFConfig> createConfig(dai::RawToFConfig configRaw) {
        auto config = std::make_shared<dai::ToFConfig>();
        config->set(std::move(configRaw));
        return config;
    }
    
    cv::Mat getDepth() {
        auto imgFrame = qDepth->get<dai::ImgFrame>(); // blocking call, will wait until new data has arrived
    
        cv::Mat depthFrame = imgFrame->getFrame(true);
    
        return depthFrame;
    }
    
    cv::Mat convertToByte(const cv::Mat& frameDepth) {
        cv::Mat invalidMask = (frameDepth == 0);
        cv::Mat depthFrameScaled;
        try {
            double minDepth = 100.0;
            double maxDepth = minDepth + 255;
    
            cv::Mat rawDepth;
            frameDepth.convertTo(rawDepth, CV_32F);
            cv::medianBlur(rawDepth, rawDepth, 5);
            rawDepth = (rawDepth - minDepth) / (maxDepth - minDepth) * 255;
            rawDepth.convertTo(depthFrameScaled, CV_8UC1);
            depthFrameScaled.setTo(cv::Scalar(0), invalidMask); // zero out invalid (depth == 0) pixels
        }
        catch (const std::exception& e) {
            depthFrameScaled = cv::Mat::zeros(frameDepth.size(), CV_8UC1);
        }
        return depthFrameScaled;
    }
    
    
    
    bool Init() {
        printf("Initializing DepthAI\n");
    
        tof = pipeline.create<dai::node::ToF>();
    
        // Configure the ToF node
        tofConfig = tof->initialConfig.get();
        tofConfig.enableFPPNCorrection = true;
        tofConfig.enableOpticalCorrection = true;
        tofConfig.enableWiggleCorrection = false;
        tofConfig.enablePhaseShuffleTemporalFilter = false;
        tofConfig.enablePhaseUnwrapping = true;
        tofConfig.phaseUnwrappingLevel = 1;
        tofConfig.phaseUnwrapErrorThreshold = 300;
        tofConfig.enableTemperatureCorrection = true;
        
    
        printf("Configuring the ToF node\n");
    
        // Config input stream (created but not fed at runtime in this repro)
        auto xinTofConfig = pipeline.create<dai::node::XLinkIn>();
        xinTofConfig->setStreamName("tofConfig");
        xinTofConfig->out.link(tof->inputConfig);
    
        tof->initialConfig.set(tofConfig);
    
        // ToF sensor on CAM_A; attempt to configure external triggering like a regular camera
        auto camTof = pipeline.create<dai::node::Camera>();
        camTof->setFps(60);
        camTof->setBoardSocket(dai::CameraBoardSocket::CAM_A);
        camTof->initialControl.setFrameSyncMode(dai::CameraControl::FrameSyncMode::INPUT);
        camTof->initialControl.setExternalTrigger(1, 0); // 1 frame per trigger, 0 discarded

        camTof->raw.link(tof->input);
    
        auto xoutDepth = pipeline.create<dai::node::XLinkOut>();
        xoutDepth->setStreamName("depth");
        tof->depth.link(xoutDepth->input);
        xoutDepth->input.setBlocking(false);
        xoutDepth->input.queueSize = 1;
    
    
        device = std::make_shared<dai::Device>(pipeline);
    
        std::cout << "Connected cameras: " << device->getConnectedCameraFeatures().size() << std::endl;
        qDepth = device->getOutputQueue("depth", 1, false);
        
        return true;
    }
    
    int main(int argc, char* argv[]) {
    
        if (!Init())
            return -1;
        printf("Completed init\n");
    
        while (true) {
            if (qDepth->has()) {
                cv::Mat depthFrameScaled = convertToByte(getDepth());
                cv::imshow("Depth", depthFrameScaled);
            }
            else {
                printf("."); // no frame yet; expected while waiting for a trigger
            }

            int key = cv::waitKey(1);
            if (key == 'q') {
                break;
            }
        }
    
        return 0;
    }

      Hi Hans
      My mistake; the ToF cannot be triggered like the other cameras. From my tests, it just runs constantly and is not affected by any triggering.
      Though I guess you could achieve essentially the same thing with a sync node between a camera (which you trigger) and the ToF. They should both trigger at once anyway; the other ToF frames will be discarded.
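
      A rough sketch of that workaround, assuming a DepthAI v2 version with the Sync node and MessageGroup (sockets, stream names and the sync threshold are illustrative):

      dai::Pipeline pipeline;

      // Externally triggered mono camera
      auto mono = pipeline.create<dai::node::MonoCamera>();
      mono->setBoardSocket(dai::CameraBoardSocket::CAM_B);
      mono->initialControl.setExternalTrigger(1, 0);

      // Free-running ToF
      auto camTof = pipeline.create<dai::node::Camera>();
      camTof->setBoardSocket(dai::CameraBoardSocket::CAM_A);
      auto tof = pipeline.create<dai::node::ToF>();
      camTof->raw.link(tof->input);

      // Pair each triggered frame with the closest-in-time depth frame;
      // unpaired ToF frames are discarded
      auto sync = pipeline.create<dai::node::Sync>();
      sync->setSyncThreshold(std::chrono::milliseconds(50));
      mono->out.link(sync->inputs["mono"]);
      tof->depth.link(sync->inputs["depth"]);

      auto xout = pipeline.create<dai::node::XLinkOut>();
      xout->setStreamName("synced");
      sync->out.link(xout->input);

      // Host side: a group arrives only when the mono camera was triggered
      dai::Device device(pipeline);
      auto q = device.getOutputQueue("synced", 1, false);
      auto group = q->get<dai::MessageGroup>();
      auto depth = group->get<dai::ImgFrame>("depth");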

      Thanks,
      Jaka

      a month later

      Hi Jaka,

      After weeks of trying to get everything to work reliably using your suggestions, I still have issues remaining.

      I'm triggering one of the RGB cameras, with a sync node connecting it to the ToF sensor. I only got this to work by setting them both to 60 FPS. Using a PLC and an SSR, I am providing a 5 ms trigger.

      The main issue is that more than 50% of the time, my depth image seems to consist of two exposures, as if the ToF exposure was interrupted by the RGB exposure or something like that. Please look at the sample images. These are of the same container moving at the same low speed; I randomly get the first or the second depth result. The color image is always fine.

      Hope you can help with this.

      Regards,

      Hans



        Hi Hans
        Try setting the frame sync mode to OUTPUT for the ToF, as per this comment above:

        jakaskerl Set it to 129.6 FPS (for 800p) and exposure should start within 1/129.6th of a second of when the trigger went active. Note that in this mode exposure is started immediately after the trigger, then MIPI readout follows.
        In the other mode (frame sync input), exposure of the next frame and readout of the current one are internally overlapped.

        This could be the cause for your issues. If not, we'll look deeper.

        Thanks,
        Jaka

        Hi Jaka,
        I tried setting the frame sync mode to OUTPUT for the ToF, but this had no effect. Also, I don't understand why you would want to do this: what do you output to, since the RGB camera is triggered with an external trigger? I tried setting it to INPUT and OFF, and also tried different options for the RGB cam, but the issues remain. It seems setting both to INPUT gives issues in fewer than 10% of the cases, though it could be a coincidence.

        I also tried again changing the frame rate to 129.6, as I understand this puts the device in a special mode? Unfortunately, I get the following error regardless of whether I change the FPS of only the ToF or of both:

        [17.391] [system] [critical] Fatal error. Please report to developers. Log: 'Fatal error on MSS CPU: trap: 00, address: 00000000' '0'

          Hi @Hans ,
          Unfortunately the ToF doesn't support frame sync output, nor an arbitrary external trigger mode. It can only be synced, in continuous capture mode, to an external signal with a fixed rate/frequency; the ToF then aligns its internal operations to this signal.
          (We will need to better document this and report error logs from depthai for unsupported configurations; checking the return status of the camera control would not be possible, due to messages being posted to the queues without a wait to be processed on device.)

          Would it be possible for the external circuitry to generate this constant-rate signal, or do you need it at arbitrary points in time? I'm afraid what happens is that a ToF exposure/readout is interrupted by a new pulse arriving at an unexpected time.
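
          If a constant-rate signal is available, the continuous-sync configuration could look like this (a sketch; the 30 FPS value is an assumption and must match the external signal frequency):

          // ToF runs continuously; the fixed-rate FSIN signal only phase-aligns it
          auto camTof = pipeline.create<dai::node::Camera>();
          camTof->setBoardSocket(dai::CameraBoardSocket::CAM_A);
          camTof->setFps(30); // must match the external signal rate
          camTof->initialControl.setFrameSyncMode(dai::CameraControl::FrameSyncMode::INPUT);
          // no setExternalTrigger() here; arbitrary single-shot triggering is unsupported on the ToF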

          Hi @Luxonis-Alex, thank you for joining in on this issue.

          I am trying to capture the depth and RGB image whenever an object passes a sensor, so it is indeed an arbitrary point in time, which I think means that syncing with an external signal will not give any benefits?

          Depth and RGB don't have to be perfectly in sync; if they were up to 1/60th of a second apart, that would still work for me. I have enabled the external trigger on the RGB sensor only. Yet even when setting the sync mode to OFF for both sensors, I get the "double image" artifact on the depth image. Even when I skip 10 frames from the moment the RGB trigger comes in, or look at the depth image right before the trigger, the artifact is still there.

          I'm afraid what happens is that a ToF exposure/readout is interrupted by a new pulse arriving at an unexpected time.

          I agree. But with sync mode OFF, there should not be two out-of-sync pulses on the line? Can I try something in code, or does this require a firmware fix? Or is this a PCB issue?

          10 days later

          Hi @Luxonis-Alex and @jakaskerl, any updates on this issue?

          If this can't be fixed, I need to switch to a different ToF camera soon. If a firmware fix could solve everything, I might be willing to finance this development, provided it can be done in the short term.

          Please let me know.

          10 days later

          Hi @Hans , sorry for the delay here...
          About the "double image" artifact in the ToF depth image, I think I didn't notice the issue precisely first when I looked here. This happens if the object is moving when imaged, and is related to how the ToF sensor operates, in the default mode it uses 4 sub-frames captured sequentially and equally spaced in time to compute a depth frame.

          We have more details here:
          https://docs.luxonis.com/software/depthai-components/nodes/tof/#ToF-ToF%20Settings
          with a few other notes (a configuration sketch follows the list):

          • the sub-frame capture FPS is now double the value passed to .setFps(...), so that the post-processed depth output rate in the default mode matches the value passed to setFps. The recommended maximum is therefore .setFps(80), which results in a 160 FPS sub-frame capture speed; up to .setFps(88) (176) is possible, but with a reduced exposure time that may produce more noise in the depth image. The exposure time (auto-adjusted only, up to 0.796 ms) can be obtained from the ImgFrame metadata (.getExposureTime())
          • the motion artifact can be minimized by using a high FPS and setting tofConfig.enablePhaseShuffleTemporalFilter = False and tofConfig.enableBurstMode = True, but it will still be present because two modulation frequencies from adjacent sub-frames are used for phase unwrapping (resolving ambiguities from phase repetitions), which is needed for measuring distances past about 1.5m.
          • the burst in this context refers to a full set of 4 consecutive sub-frames that are guaranteed to all be captured without sub-frame loss and forwarded to post-processing. Under high system load, some "burst groups" could be dropped. Enabling tofConfig.enableBurstMode = True just ensures that no depth decoding happens across different "bursts", which could be spaced even further apart in time due to potential drops.
          • the motion artifact can be eliminated completely by disabling phase unwrapping, that is, setting tofConfig.phaseUnwrappingLevel = 0 (with tofConfig.enablePhaseShuffleTemporalFilter = False). In this case, distances beyond about 1.5m cannot be measured and would give wrong values. In this mode, tofConfig.enableBurstMode still has an effect:
            • False: depth output is provided for both the 80MHz and 100MHz modulation frequencies, alternating. Flicker would be seen for distances larger than about 1.5m, but those values are invalid anyway.
            • True: depth output for a single modulation frequency (80MHz).
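
          A configuration sketch combining the notes above (values follow the bullet points; the C++ field names mirror the Python ones used here):

          auto tof = pipeline.create<dai::node::ToF>();
          auto tofConfig = tof->initialConfig.get();
          // minimize the motion artifact at high FPS:
          tofConfig.enablePhaseShuffleTemporalFilter = false;
          tofConfig.enableBurstMode = true; // decode only within one 4-sub-frame burst
          // or eliminate it entirely, at the cost of limiting range to ~1.5 m:
          // tofConfig.phaseUnwrappingLevel = 0;
          tof->initialConfig.set(tofConfig);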

          For the sync issue: yeah, as the ToF sensor doesn't have the arbitrary capture feature, the best that can be done is to run it at a high FPS, with an optimization to reduce system load: drop raw phase frames immediately after capture, and only pass them on to decoding once the external signal has been received. It's doable, but a bit of custom firmware work would be needed.
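
          Until such firmware work is done, a rough host-side approximation (in the spirit of what was tried above) is to let the ToF free-run into a size-1 non-blocking queue and read the newest depth frame only when the host observes the trigger; waitForTrigger() and process() below are hypothetical placeholders:

          auto q = device.getOutputQueue("depth", 1, false); // maxSize=1, non-blocking: newest frame only
          while (true) {
              if (waitForTrigger()) {                   // hypothetical host-side trigger check
                  auto frame = q->get<dai::ImgFrame>(); // most recent depth frame
                  process(frame);                       // hypothetical frame handler
              }
          }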