• Multiple OAK Cameras

Trying to extract stereo data from multiple cameras based on the example at https://docs.luxonis.com/projects/api/en/latest/samples/mixed/multiple_devices/

That example works, but if we try to add a stereo node, creation of the pipeline fails. The code added is

auto stereo = pipeline->create<dai::node::StereoDepth>();

That line works perfectly well for a single camera, where the pipeline is not created within a loop and we can acquire all the images (RGB, left, right, depth, etc.), but it crashes when executed inside the loop.

Any suggestions? All this is done in C++, not Python.

    Certainly, I need to extract the depth information and the rectified left and right images.

    The code is exactly as at https://docs.luxonis.com/projects/api/en/latest/samples/mixed/multiple_devices/ with one line added

    auto stereo = pipeline->create<dai::node::StereoDepth>();

    as shown below. That one line breaks both the Debug and Release builds, which run perfectly without it.

    Code as per the web site and exactly as implemented.

    std::shared_ptr<dai::Pipeline> createPipeline() {
        // Start defining a pipeline
        auto pipeline = std::make_shared<dai::Pipeline>();

        // Define a source - color camera
        auto camRgb = pipeline->create<dai::node::ColorCamera>();

        // RGB characteristics
        camRgb->setPreviewSize(500, 500);
        camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);
        camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
        camRgb->setInterleaved(false);

        // Create outputs and set the stream names
        auto xoutRgb = pipeline->create<dai::node::XLinkOut>();
        xoutRgb->setStreamName("rgb");
        camRgb->preview.link(xoutRgb->input);

        // StereoDepth is necessary for rectified images
        //auto stereo = pipeline->create<dai::node::StereoDepth>();

        return pipeline;
    }

      Hi GeorgeVP
      Any errors you could give me? C++ is not exactly my cup of tea, but for python, usually when you only instantiate the stereo without connecting it to anything, the script will error out with RuntimeError: StereoDepth(2) - No output of StereoDepth is connected/used!

      Could you share some error logs?
      Thanks,
      Jaka

      I will try to extract something from the error logs. I haven't run this application in Python but will try. I work in both C++ and Python, but translating between the two isn't always easy. In this case my focus is on C++; the documentation and examples focus on Python, which can be a limitation, so I will have to track down any error logs produced by the C++ build.

      Before going to the error logs I reviewed all the connections. It appears that if they are not all defined correctly, the code still compiles but execution produces an exception. I have yet to complete extraction of the rectified images, but up to the setup of the pipelines and the display of the RGB camera output the application is working.

      For the record, the working code for creating the pipeline is as follows. The new code starts at the comment "Define sources and outputs":

      std::shared_ptr<dai::Pipeline> createPipeline() {
          // Start defining a pipeline
          auto pipeline = std::make_shared<dai::Pipeline>();

          // Define a source - color camera
          auto camRgb = pipeline->create<dai::node::ColorCamera>();

          // RGB characteristics
          camRgb->setPreviewSize(500, 500);
          camRgb->setBoardSocket(dai::CameraBoardSocket::CAM_A);
          camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
          camRgb->setInterleaved(false);

          // Create outputs and set the stream names
          auto xoutRgb = pipeline->create<dai::node::XLinkOut>();
          xoutRgb->setStreamName("rgb");
          camRgb->preview.link(xoutRgb->input);

          // Define sources and outputs
          auto monoRight = pipeline->create<dai::node::MonoCamera>();
          auto monoLeft = pipeline->create<dai::node::MonoCamera>();

          // StereoDepth is necessary for rectified images
          auto stereo = pipeline->create<dai::node::StereoDepth>();

          auto xout = pipeline->create<dai::node::XLinkOut>();
          xout->setStreamName("disparity");

          // Properties
          monoRight->setCamera("right");
          monoLeft->setCamera("left");
          monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
          monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_400_P);
          stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
          stereo->setSubpixel(true);

          // Linking
          monoRight->out.link(stereo->right);
          monoLeft->out.link(stereo->left);
          stereo->disparity.link(xout->input);

          return pipeline;
      }
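
      To also get the rectified left and right images out of the pipeline, the StereoDepth node's rectified outputs can be linked to their own XLinkOut nodes before returning the pipeline. A minimal sketch, assuming the variable names above; the stream names are illustrative, not from the original code:

```cpp
// Hypothetical extension: expose rectified left/right streams alongside
// disparity. "rectified_left"/"rectified_right" are assumed stream names.
auto xoutRectL = pipeline->create<dai::node::XLinkOut>();
auto xoutRectR = pipeline->create<dai::node::XLinkOut>();
xoutRectL->setStreamName("rectified_left");
xoutRectR->setStreamName("rectified_right");

// StereoDepth provides rectifiedLeft/rectifiedRight outputs.
stereo->rectifiedLeft.link(xoutRectL->input);
stereo->rectifiedRight.link(xoutRectR->input);
```

      On the host side, each device would then expose "rectified_left" and "rectified_right" output queues in addition to "disparity".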

        Hi GeorgeVP
        Makes sense, yes. The code you originally provided only included the definition of the stereo node, but it had neither an input nor an output.
        The extraction shouldn't cause any further errors, but be careful not to saturate the queues.
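
        One way to avoid saturating the queues on the host side is to read them non-blocking with a small maxSize, so stale frames get dropped rather than piling up. A sketch, assuming a `device` already constructed from the pipeline above and the "disparity" stream name from the posted code:

```cpp
// Non-blocking output queue: maxSize 4, blocking = false, so the oldest
// frames are discarded if the host falls behind.
auto qDisparity = device->getOutputQueue("disparity", 4, false);

// tryGet() returns nullptr instead of waiting when no frame is ready.
auto inDisp = qDisparity->tryGet<dai::ImgFrame>();
if(inDisp != nullptr) {
    // getCvFrame() requires depthai built with OpenCV support.
    cv::Mat disp = inDisp->getCvFrame();
    cv::imshow("disparity", disp);
}
```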

        Tip: You can use ChatGPT or some other LLM to convert Python <-> C++ for depthai, since the Python API is just a binding on the C++ version and should be 1:1.

        Thanks,
        Jaka

        I have the full requirement working now. As you indicated, it required getting all the input and output linkages correct. I have been using RealSense 3D cameras for a few years and want the application to be agnostic about which cameras are being used, so understanding the architecture is key. Converting from Python to C++ using an LLM is not something I have tried yet. It may have benefits, although it doesn't appear to add much to the learning process.