• DepthAI
  • Unable to stream left mono / right mono MJPEG streams in a Script node in C++

I am trying to serve /rgb, /left, and /right, each URL providing an MJPEG stream over HTTP from a Script node. /rgb works, but /left and /right both come up as "no image" in browsers.

Here is the code. Does anyone know how to fix it? It basically follows the MJPEG/Script-node example for RGB. Thanks a lot.


#include <chrono>
#include <iostream>
#include <thread>

// Includes common necessary includes for development using depthai library
#include "depthai/depthai.hpp"

int main() {
using namespace std;

// Start defining a pipeline
dai::Pipeline pipeline;

auto cam = pipeline.create<dai::node::ColorCamera>();

auto jpeg = pipeline.create<dai::node::VideoEncoder>();
jpeg->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);
cam->video.link(jpeg->input);

auto camR = pipeline.create<dai::node::MonoCamera>();
camR->setCamera("right");

auto jpegR = pipeline.create<dai::node::VideoEncoder>();
jpeg->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);
camR->out.link(jpegR->input);

auto camL = pipeline.create<dai::node::MonoCamera>();
camL->setCamera("left");

auto jpegL = pipeline.create<dai::node::VideoEncoder>();
jpeg->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);
camL->out.link(jpegL->input);


// Script node
auto script = pipeline.create<dai::node::Script>();
script->setProcessor(dai::ProcessorType::LEON_CSS);

jpeg->bitstream.link(script->inputs["jpeg"]);
jpegL->bitstream.link(script->inputs["jpegL"]);
jpegR->bitstream.link(script->inputs["jpegR"]);

script->setScript(R"(
import time
import socket
import fcntl
import struct
from socketserver import ThreadingMixIn
from http.server import BaseHTTPRequestHandler, HTTPServer

PORT = 8080

def get_ip_address(ifname):
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    return socket.inet_ntoa(fcntl.ioctl(
        s.fileno(),
        -1071617759,  # SIOCGIFADDR
        struct.pack('256s', ifname[:15].encode())
    )[20:24])

class ThreadingSimpleServer(ThreadingMixIn, HTTPServer):
    pass

class HTTPHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == '/':
            self.send_response(200)
            self.end_headers()
            self.wfile.write(b'<h1>[DepthAI] Hello, world!</h1><p>Click <a href="img">here</a> for an image</p>')
        elif self.path == '/rgb' or self.path=='/left' or self.path=='/right':
            try:
                stream=''
                if self.path == '/rgb':
                    stream='jpeg'
                elif self.path == '/left':
                    stream='jpegL'
                elif self.path == '/right':
                    stream='jpegR'
                self.send_response(200)
                self.send_header('Content-type', 'multipart/x-mixed-replace; boundary=--jpgboundary')
                self.end_headers()
                fpsCounter = 0
                timeCounter = time.time()
                while True:
                    jpegImage = node.io[stream].get()
                    self.wfile.write("--jpgboundary".encode())
                    self.wfile.write(bytes([13, 10]))
                    self.send_header('Content-type', 'image/jpeg')
                    self.send_header('Content-length', str(len(jpegImage.getData())))
                    self.end_headers()
                    self.wfile.write(jpegImage.getData())
                    self.end_headers()

                    fpsCounter = fpsCounter + 1
                    if time.time() - timeCounter > 1:
                        node.warn(f'{stream} fps: {fpsCounter}')
                        fpsCounter = 0
                        timeCounter = time.time()
            except Exception as ex:
                node.warn(str(ex))

with ThreadingSimpleServer(("", PORT), HTTPHandler) as httpd:
    node.warn(f"Serving at {get_ip_address('re0')}:{PORT}")
    httpd.serve_forever()
)");

// Connect to device with pipeline
dai::Device device(pipeline);
while(!device.isClosed()) {
    this_thread::sleep_for(chrono::milliseconds(1000));
}
return 0;

}


    jkchen46033 jpeg->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);

    You set this thrice. Make sure it's for each encoder node separately.

    Thanks @jakaskerl, that fixed the problem. Now it streams for about 10 minutes and then crashes.

    I do have two more questions below:

    Question 1:
    I am trying to set the resolution for each stream. I wrote this line in C++:
    cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_720_P)

    But the compiler complained with the error: ‘class std::shared_ptr<dai::node::ColorCamera>’ has no member named ‘setResolution’. What is the method for setting the resolution, for both the color and mono cameras?

    Is there a complete C++ API document?

    Does the C++ API provide the same capabilities as the Python API? In Python I am able to set resolutions.

    Question 2:

    How can I make it not crash? I need it to run for days at a minimum. In my experiments with the code above I could not get it to run for more than an hour; most of the time it crashed within 10-20 minutes.

    Thanks a lot for the help and insights.

    @jakaskerl Please disregard my first question above; I got the syntax error fixed. My second question still stands: how do I keep it running for more than 24 hours? Thanks.

      jkchen46033
      I'm still waiting for your app to crash (3h in). Perhaps try with depthai 2.30?

      @jakaskerl I used depthai-core, which I downloaded as a submodule last week. I am not sure of the exact version, but it should be current as of last week.

      My code is as listed at the beginning of this thread: three MJPEG streams served as HTTP URLs by the Script node, running from flash (burned). Most of the time it won't run for more than 30 minutes. I tried a browser and ffplay on all three streams simultaneously, with the same result: a crash.

      I implemented the same thing in Python and in C++, with the same result: a crash.

      BTW, my company is putting this on a robot. If it runs stably for days on end, we could potentially purchase thousands of camera units, but with this crash that is going to be difficult.

      Thanks for the help. I am hoping we can get it to run for days, but in my opinion the Luxonis documentation lacks the detail needed to get there. Thanks for your continued help.

        jkchen46033 and from flash (burned).

        That's a key detail.

        Update your repo to the latest and build again. A fix that addresses memory when flashing the app to the device was made on the develop branch and has now been merged.

        Thanks,
        Jaka

        @jakaskerl, Thanks for the feedback.

        I followed your advice, updated core, rebuilt, and tested many times. Same result: it crashed. I tested on three cameras; all of them crashed within an hour, sometimes within 10 minutes.

        Here is what I have for the pipeline. Whether I flashed it or ran it standalone (no flashing), it would crash either way.
        Is my pipeline missing something critical? What is the underlying cause? How do I fix it? What should I try? Any suggestions?

        I need to get it to run non-stop for my employer (a robotics company working with AI), for days at a minimum. Thanks.

        #include <chrono>
        #include <iostream>
        #include <thread>

        #include "depthai/depthai.hpp"

        using namespace std;

        int flash_device(dai::Pipeline pipeline);

        int main(int argc, char* argv[]) {
        dai::Pipeline pipeline;

        auto camRgb = pipeline.create<dai::node::ColorCamera>();
        camRgb->setResolution(dai::ColorCameraProperties::SensorResolution::THE_1080_P);
        camRgb->setVideoSize(std::tuple<int,int>(1920, 1080));
        
        camRgb->setFps(10);
        camRgb->setInterleaved(false);
        auto jpegRgb = pipeline.create<dai::node::VideoEncoder>();
        jpegRgb->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);
        camRgb->video.link(jpegRgb->input);
        
        auto monoRight = pipeline.create<dai::node::MonoCamera>();
        monoRight->setCamera("right");
        monoRight->setFps(10);
        monoRight->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
        auto jpegRight = pipeline.create<dai::node::VideoEncoder>();
        jpegRight->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);
        monoRight->out.link(jpegRight->input);
        
        auto monoLeft = pipeline.create<dai::node::MonoCamera>();
        monoLeft->setCamera("left");
        monoLeft->setFps(10);
        monoLeft->setResolution(dai::MonoCameraProperties::SensorResolution::THE_720_P);
        auto jpegLeft= pipeline.create<dai::node::VideoEncoder>();
        jpegLeft->setDefaultProfilePreset(10, dai::VideoEncoderProperties::Profile::MJPEG);
        monoLeft->out.link(jpegLeft->input);
        
        // Script node
        auto script = pipeline.create<dai::node::Script>();
        script->setProcessor(dai::ProcessorType::LEON_CSS);
        
        jpegRgb->bitstream.link(script->inputs["stream_1"]);
        jpegLeft->bitstream.link(script->inputs["stream_2"]);
        jpegRight->bitstream.link(script->inputs["stream_3"]);
        
        script->setScript(R"( ...    // This part essentially is the mjpeg streaming, following script node example implementation.

      Hi @jakaskerl Do you have an update? I am still stuck on being unable to stream three HTTP MJPEG streams from the Script node reliably for hours.