• DepthAI-v2
  • OAK-D POE S2: Multi-model + Standalone mode (TCP Streaming)

Hello everyone,

I'm currently working on implementing a pipeline with two models, MobileNetSSD and FireDetection, running in standalone mode on my OAK-D S2.

While the person detection part is working as expected, I'm encountering some difficulties with the fire detection model. I'm struggling to understand how to manage and decode the detections generated by the fire detection neural network within the script that needs to be flashed onto the camera.

Below, you can find the schematic representation of my pipeline:

However, my primary challenge lies in decoding the fire detections on-device and forwarding them to the host. I'm unsure whether this is even possible, and I'd greatly appreciate any guidance or assistance on how to address this issue.

Thank you in advance!

In my attempt to follow the FireDetection example, I'm currently uncertain how to go from the lpb.NNData object, which represents the result obtained from the fire_detection neural network, to a tensor result. Specifically, I'm looking to access the correct layer, "final_result", in the Script node that will be flashed on the device.

script.setScript("""
import socket
import time

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5000))
server.listen()
node.warn("Server up")

labelMap_SSD = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow",
                "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]

label_fire = ["fire", "normal", "smoke"]

while True:
    conn, client = server.accept()
    node.warn(f"Connected to client IP: {client}")
    try:
        while True:
            pck = node.io["frame"].get()
            data = pck.getData()
            ts = pck.getTimestamp()
            
            # --------MobilenetSSD
            detections_ssd = node.io["detSSD"].tryGet()
            if detections_ssd:
                dets = detections_ssd.detections
                
                data_ssd = []
                for det in dets:
                    label = labelMap_SSD[det.label]
                    if label == "person":
                        det_bb = [det.label, det.xmin, det.ymin, det.xmax, det.ymax]
                        data_ssd.append(det_bb)

            # --------FireDetection
            data_fire = node.io["detFire"].tryGet()
            # node.warn(f"data_fire: {data_fire}")
            # TODO: extract tensor data ???


            # now to send data we need to encode it (whole header is 256 characters long)
            header = f"ABCDE " + str(ts.total_seconds()).ljust(18) + str(len(data)).ljust(8) + str(data_ssd).ljust(224)
            conn.send(bytes(header, encoding='ascii'))
            conn.send(data)

    except Exception as e:
        node.warn(f"Error oak: {e}")
        node.warn("Client disconnected")
""")

Any help will be appreciated.
Irena

Hello everyone!

I wanted to share that I've made some progress with my project. After carefully reviewing the documentation, I found the solution I was searching for.

It turns out, all I needed to do was use the .getLayerFp16("final_result") method to extract the results and work with the tensors.

Consider my problem solved! 🙂
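For anyone landing here later, the decode step can be sketched like this (label order taken from the script above; decode_fire is a made-up helper name and the example scores are invented, so treat this as illustrative only):

```python
label_fire = ["fire", "normal", "smoke"]

def decode_fire(scores):
    # scores is a flat list of per-class values, one per label;
    # pick the class with the highest score
    best = max(range(len(scores)), key=lambda i: scores[i])
    return label_fire[best], scores[best]

# Inside the on-device Script node this would be driven by:
#   nn_data = node.io["detFire"].tryGet()
#   if nn_data:
#       label, conf = decode_fire(nn_data.getLayerFp16("final_result"))
#       node.warn(f"{label}: {conf:.2f}")
```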

Hi @jakaskerl !

I have some questions regarding the available space on my OAK-D S2 device. When attempting to flash my pipeline (multi-model) onto the device, I encountered the following error message:

Found 1 devices
Start flashing SW_version_MobilenetSSD_FireDet_0.0.1 on device: DeviceInfo(name=10.1.1.101, mxid=1844301001A2970F00, X_LINK_BOOTLOADER, X_LINK_TCP_IP, X_LINK_MYRIAD_X, X_LINK_SUCCESS)
Flashing progress: 0.0%
Flash ERROR: Not enough space: 8388608->41156634B

My pipeline is relatively simple, and I'm not sure how to optimize it or reduce its size. I would greatly appreciate it if you could provide some tips or guide me on how to identify the key points to resolve this issue.

Lastly, I'm curious if there's a way to access the file system on the device. Any information on this would be very helpful.

Thanks in advance!
Irena

    Hello @jakaskerl !

    Thank you for your quick response.

    Indeed, when attempting to flash the pipeline, I'm using the parameter "compress = True." I'm not using the SDK in my pipeline; instead, I'm using the API.

    I've also checked the available memory on my device. However, I'm unsure how to determine what is consuming space in my pipeline. Is there any script to check that?

    Appreciate your help in advance!
    Irena

      Hi Irena
      Well, what are your numbers for NOR and EMMC storage?

      Irena However, I'm unsure how to determine what is consuming space in my pipeline. Is there any script to check that?

      To my knowledge, there is no script to check specific node sizes. Only the whole pipeline.

      Thanks,
      Jaka

        Hi jakaskerl !

        Result for OAK-D S2 PoE FF:

        Found device with name: 10.1.1.102
        Version: 0.0.22
        NETWORK Bootloader, is User Bootloader: False
        Memory 'Memory.FLASH' size: 33554432, info: JEDEC ID: 01 02 19
        Memory 'EMMC' not available...

        Here you can check the script which I'm trying to flash on the device. I would greatly appreciate any assistance.

        Thank you in advance, Jaka!
        Irena

        4 days later

        Hi @jakaskerl !

        I was wondering if you have had a chance to conduct any testing with the information I provided?

        Thank you a lot in advance.
        Irena

          Hi Irena
          I'm confused as to why the values are so strange. The reported storage space differs from one script to the next, so I'm wondering whether there is a bug with the model rather than the actual storage.

          The model is also not that large:

          8388608->41156634B # left side is smaller..

          Thanks,
          Jaka

          Hi @jakaskerl !

          Which of the two models are you referring to? I have a hunch it may be related to the fire and smoke detection, but I'm not entirely sure.

          I'm planning to conduct some tests, and I'll reach out to you again with the test data.

          Thank you very much in advance.
          Irena

            Hi Irena
            Talked to our firmware dev; it's likely there is just not enough space. The bootloader + firmware + pipeline + model seem to take up more storage than you have available. The fw logs don't make sense either, since information is missing, but I was told that standalone mode is "semi-deprecated", so this likely won't be fixed in the future. Your only option right now is to use a device with enough NOR or EMMC memory to support running the apps - it has to be in the GB range, not MB.

            Thanks,
            Jaka

            Hi @jakaskerl !!

            Thanks for your quick response.

            I tried using other models, and it turns out they don't work. This is a major issue for us because our priority is that the cameras can work in standalone mode.

            In response to your feedback and our tests, I have a series of questions regarding possible solutions and implementation on our devices.

            As you mentioned, we are now trying to avoid standalone mode. With that in mind, and considering our goal of operating 12 cameras with a single PC, we've noticed that when we execute the script (which you can check in oak_ssd_yolov5.py) on the host, it launches 8 threads per camera: 7 of these threads correspond to the created nodes, plus the main script.

            Logs:

            I understand it's a complex issue, but is there any way to reduce or encapsulate this behavior?

            Secondly, can we run a single script for all our cameras? We are thinking about the scalability of our development, so it was important to us that the cameras can work in standalone mode. Since that's not possible, our question is whether we can execute a single .py for all of our devices.

            Lastly, looking ahead to our next projects, do you think this device, OAK-1 POE FF, would fit us and our aims (custom multi-model standalone mode + TCP streaming)? Also, is it possible to incorporate an M12 connector into this device? If so, who can I speak to about it?

            I appreciate all the help and look forward to your response.
            Irena

              Hi Irena,
              I am confused as to what is actually happening here. The script you have sent should execute completely on the device, on the LEON CSS processor. This is because anything created within the pipeline will get uploaded to the device. The host should have access to anything except the XLink messages (and the internet - sockets). It could be that something else is using the threads.

              Irena Secondly, can we run a single script for all our cameras?

              Sure. Use this example to connect to multiple devices and upload pipeline to each one.

              Irena Lastly, looking ahead to our next projects, do you think this device, OAK-1 POE FF, would fit us and our aims (custom multi-model standalone mode + TCP streaming)? Also, is it possible to incorporate an M12 connector into this device? If so, who can I speak to about it?

              cc. @erik for this one.

              Thanks,
              Jaka

                Hi @Irena ,
                Yep, that would work: you can run multiple models in standalone mode, and also add TCP streaming to your application. An M12 connection would require a complete redesign of the hardware & enclosure, and we likely won't be working on that, especially because our next gen of devices will all have M12 + M8 + USB-C, including the (name tbd) "OAK4-1" - a single cam with RVC4 and an M12 connector - which would perfectly suit your requirements. Thoughts?
                Thanks, Erik

                  Hello jakaskerl !!

                  I appreciate your prompt response. It's confirmed that the pipeline runs on the device. My main focus has been on the host's performance behavior when launching the script. Upon reviewing my logs, I notice several different threads upon script launch, presumably corresponding to each node in the pipeline.

                  Thanks a lot for the reference you provided on managing all the cameras with a single script; it has proven very useful to me.

                  A new question arises as I delve into the following example gen2-yolo-device-decoding. Given that we cannot operate in multi-model standalone mode, we are aiming to offload most processes to the device to ease the load on the host, which is handling 12 cameras. The example suggests that the device can handle the decoding of the neural network output using the YoloDetectionNetwork node. If we have a custom model, can we perform a similar decoding of the custom model's output on the device?

                  Thank you once again for your assistance.
                  Irena

                    Hi erik !!

                    Thank you for your response. I'm eager to know if there's any information available regarding the release date and potential cost of these devices. This information holds significant relevance for our upcoming projects.

                    Thanks once again!
                    Irena

                    Hi @Irena ,
                    Planned release is June 2024, prices vary depending on the model/variation. MSRPs will likely range from $400 for OAK-1-PoE equivalent to $800 for the OAK-D-LR equivalent. All models will offer both POE and USB connectivity.

                      Irena If we have a custom model, can we perform a similar decoding of the custom model's output on the device?

                      Could you elaborate a bit on what kind of model you are using. Maybe try the model (if Yolo) with the YoloDetectionNetwork node. It essentially does what host side decoding would do, but is customized to work for Yolo models only and runs on-device.
                      Though I am not sure whether the blob will run out-of-the-box with the Yolo node; we usually suggest training the models with our training notebooks (https://github.com/luxonis/depthai-ml-training/tree/master/colab-notebooks).

                      Thanks,
                      Jaka

                        Hi erik !!

                        That's fantastic news. Thank you for your prompt response, and we will be watching for your next releases.

                        Thank you!
                        Irena

                        Hi jakaskerl!

                        We are contemplating training a YOLO (v6-v7) detection model on our custom data. From my understanding, as you mentioned, it's possible for the decoding to be performed on the device.

                        Thanks for the reference to the training models 🙂

                        The question that arises for me is whether, if we develop our own model from scratch, it is feasible to have the same functionality, i.e., decoding the results on the device?

                        Thank you in advance!
                        Irena

                          Irena

                          Irena We are contemplating training Yolo (v6-v7)

                          I'd go for v6, empirically it runs the fastest.

                          Irena The question that arises for me is whether, if we develop our own model from scratch, it is feasible to have the same functionality, i.e., decoding the results on the device?

                          You'd have to set the correct layer names, correctly prune the model and then define a relevant .json if you wish to make it work.
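                          For illustration, such a decoding .json typically looks something like this (every value below is a placeholder, not the real anchor set or layer layout for any particular model):

```json
{
  "nn_config": {
    "output_format": "detection",
    "NN_family": "YOLO",
    "input_size": "416x416",
    "NN_specific_metadata": {
      "classes": 3,
      "coordinates": 4,
      "anchors": [10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319],
      "anchor_masks": {"side26": [0, 1, 2], "side13": [3, 4, 5]},
      "iou_threshold": 0.5,
      "confidence_threshold": 0.5
    }
  },
  "mappings": {
    "labels": ["fire", "normal", "smoke"]
  }
}
```

                          The values have to match what your exported model actually produces, otherwise the on-device decoding will be wrong.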

                          Thanks,
                          Jaka

                            Hi jakaskerl!

                            Alright, I understand that I need to handle the configuration of our custom model and its outputs to effectively manage the results.

                            Initially, we are considering the use of YOLO, whether through fine-tuning or transfer learning. Am I correct in assuming that for both scenarios, we can utilize the YoloDetectionNetwork node?

                            Thank you very much for your assistance, Jaka.
                            Irena

                              Irena
                              Yes, Yolo node is just a wrapper for standard NN node. Anything that works on NN, should work on Yolo as well. Just make sure the IO is structured in a way that enables the Yolo node to run decoding properly.

                              Thanks,
                              Jaka

                                Hi jakaskerl !!

                                Got it!

                                If any other questions arise, I will reach out to you.

                                Thank you very much! 🙂
                                Irena

                                a month later

                                Hello again!

                                I've just received my new OAK-1-POE for testing and further development of our multi-model-based software. However, I have some questions and would like to share them below.

                                I'm currently attempting to flash the pipeline with the multimodel version on the OAK-1-FF-POE device. A peculiar issue arises where the progress bar is not displayed, and after a few seconds, it indicates that the pipeline was successfully flashed:

                                100% flashed
                                "Flash OK"

                                The strange part is that when I execute the script check_Bootloader.py, it doesn't show that anything has been flashed:

                                Found device with name: 10.1.1.107
                                Version: 0.0.24
                                NETWORK Bootloader, is User Bootloader: True
                                Memory 'Memory.FLASH' size: 33554432, info: JEDEC ID: 01 02 19
                                Memory 'Memory.EMMC' size: 15758000128, info: 

                                However, when I run host.py to connect via TCP to the camera to check if it's working in standalone mode, it indeed is. In other words, the pipeline flash was successful.

                                Now, I'd like to understand why, when checking the bootloader status, it shows as if no pipeline has been loaded for execution in standalone mode. It's crucial to have this control both at the memory level and the software level (name of the loaded software/pipeline, version, etc.).

                                I'm uncertain if the information below is relevant, but it's the only difference I noticed between the OAK-D-S2-FF-POE (where I can see the name of the flashed version, etc.) and the OAK-1-FF-POE, and I'm unable to decipher the significance of this parameter setting.

                                OAK-D-S2-FF-POE

                                Found device with name: 10.1.1.103
                                Current flashed configuration
                                {"appMem": 0, "network": {"ipv4": 0, ...}, "usb": {"maxUsbSpeed": 3, ...}, "userBlChecksum": 886625469, "userBlSize": 3822176}

                                OAK-1-FF-POE

                                Found device with name: 10.1.1.107
                                Current flashed configuration
                                {"appMem": 1, "network": {"ipv4": 0, ...}, "usb": {"maxUsbSpeed": 3, ...}, "userBlChecksum": 886625469, "userBlSize": 3822176}

                                I really appreciate any help. Thank you a lot!
                                Irena

                                  Hi Irena
                                  Are you running both devices on the same bootloader? Could you try and flash the latest 0.0.26 version to it, hopefully it corrects the issue.

                                  Using the device_manager.py, is bootloader status set as FLASH_BOOTED? - it usually takes some time to show up since the device needs some time (15 sec) after being powered on, to actually boot into a flashed app.

                                  Thanks,
                                  Jaka

                                    Hi jakaskerl !!

                                    Thank you for your prompt response.

                                    Both devices are currently running bootloader version 0.0.24, as shown below.

                                    OAK-D-S2-FF Poe (10.1.1.103)

                                    OAK-1-FF Poe (10.1.1.107)

                                    While both devices are flashed and operational, I've noticed that in one case, it displays information about the flashed pipeline, whereas in the other case, it doesn't. It's a bit peculiar.

                                    OAK-D-S2-FF Poe (10.1.1.103)

                                    OAK-1-FF Poe (10.1.1.107)

                                    Do you think I should consider updating to version 0.26?

                                    Thank you a lot!
                                    Irena

                                      Hi Irena
                                      Yes, please. Additionally, also upgrade to the latest depthai version (2.24).
                                      Let me know if that changes the output.

                                      Thanks,
                                      Jaka

                                        Hello @jakaskerl !

                                        Thank you so much for your prompt response, Jaka. I haven't had the chance to update yet, but I plan to do so in the next few days.

                                        Now, I'm reaching out for assistance with a new question that has emerged during our development process, specifically concerning the emission via TCP in standalone mode (multi-model).

                                        Our goal is to have two models on the same device, a feat we've accomplished successfully with the new camera OAK-1-FF-Poe 🙂 . The next step is to have the camera emit three different streams on three distinct ports. Each port should have the capability to accept multiple simultaneous connections. Essentially, we want one port to emit raw frames, another to emit frames with the results of model 1, and the third with the results of model 2.

                                        Currently, we can connect to all three streams simultaneously. However, the problem arises when attempting to launch a fourth connection to any of the ports; a black image appears, and all transmissions are blocked.

                                        Upon investigation, it seems the issue stems from a particular section of the code.

                                        ...
                                                    pck = node.io["frame"].get()
                                                    data = pck.getData()
                                                    ts = pck.getTimestamp()
                                         ...
                                                   detections_ssd = node.io["detSSD"].tryGet()
                                        ....
                                        etc

                                        We've experimented by removing that part and sending a simple message instead, which allows us to establish as many simultaneous connections as we want. Our hypothesis is that the node.io['node_name'].get() operation makes too many requests, leading to the blockage.

                                        I'm reaching out to seek your insights on how we could address this issue. If anyone has encountered a similar challenge or has suggestions, we would greatly appreciate any help.

                                        We've attempted to use a mutex (mutual exclusion), but it remains stuck on the first connection and never progresses to the subsequent ones. Although they are established, the following connections do not retrieve data and display nothing.

                                        For your reference, I've included the pipeline code and the three host files we used for testing in this link-to-multimodel-script.
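                                        To illustrate the pattern we have been aiming for, here is a minimal host-side sketch of the broadcast idea (plain sockets and threads; on the device the reader loop would wrap node.io["frame"].tryGet() instead, and all names here are mine):

```python
import socket
import threading

clients = []
clients_lock = threading.Lock()

def accept_loop(server: socket.socket):
    # one thread per listening port: it only accepts connections,
    # it never touches the message queues itself
    while True:
        conn, _ = server.accept()
        with clients_lock:
            clients.append(conn)

def broadcast(data: bytes):
    # a single producer fetches each message once and fans it out,
    # so an extra client cannot steal frames from the others
    with clients_lock:
        dead = []
        for conn in clients:
            try:
                conn.sendall(data)
            except OSError:
                dead.append(conn)
        for conn in dead:
            clients.remove(conn)
            conn.close()
```

                                        The key point is that each frame is fetched exactly once and then sent to every connected client, regardless of how many clients there are.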

                                        Thank you all in advance for your assistance.
                                        Irena

                                        6 days later

                                        Hello @jakaskerl !

                                        I hope this message finds you well.

                                        Have you had the chance to review my previous inquiry? Additionally, if you have any further advice or if there are additional details you think would be helpful, I would be more than grateful to receive them.

                                        Thank you once again for your help.

                                        Best regards,
                                        Irena

                                          Hi Irena
                                          Haven't had a chance to test this yet, since I was busy with other tasks and the holidays. I will try to test this today.

                                          Thanks,
                                          Jaka

                                            Hi jakaskerl !!

                                            Just to let you know that I have upgraded both the depthai version and the bootloader, as per your instructions. However, I'm still encountering the same issue: the OAK-1 camera isn't displaying the name of the flashed app.

                                            Any other suggestions?

                                            Thank you so much 🙂
                                            Irena

                                            jakaskerl.

                                            Thank you very much in advance for your assistance. I hope you're having a fantastic vacation, and I'll be eagerly awaiting your response.

                                            Warm greetings and Happy Holidays to you!

                                            Best regards.
                                            Irena

                                            Hi @Irena
                                            Made some adjustments:

                                            import depthai as dai
                                            import time
                                            import blobconverter
                                            from pathlib import Path
                                            
                                            # modelOL = blobconverter.from_zoo(name="mobile_object_localizer_192x192", zoo_type="depthai", shaves=6)
                                            modelSSD = blobconverter.from_zoo(name="mobilenet-ssd", shaves=6)
                                            fire_model_path = "/Users/jaka/Desktop/tests/multimodel_scripts/models/fire-detection_openvino_2021.2_5shave.blob"
                                            
                                            SW_VERSION = "SSD_FireDet"
                                            
                                            # Start defining a pipeline
                                            pipeline = dai.Pipeline()
                                            
                                            pipeline.setOpenVINOVersion(version = dai.OpenVINO.VERSION_2021_4)
                                            
                                            # Color camera
                                            cam = pipeline.create(dai.node.ColorCamera)
                                            cam.setInterleaved(False)
                                            cam.setResolution(dai.ColorCameraProperties.SensorResolution.THE_1080_P)
                                            cam.setIspScale(1,3)
                                            # cam.setPreviewSize(640, 640)
                                            # cam.setVideoSize(640, 640)
                                            # cam.setFps(40)
                                            
                                            # Define a neural network Mobilenet-ssd
                                            nn_mssd = pipeline.create(dai.node.MobileNetDetectionNetwork)
                                            nn_mssd.setConfidenceThreshold(0.5)
                                            nn_mssd.setBlobPath(modelSSD)
                                            nn_mssd.setNumInferenceThreads(2)
                                            nn_mssd.input.setBlocking(False)
                                            
                                            # Define input image to nn_MobilenetSSD
                                            manip_mssd = pipeline.createImageManip()
                                            manip_mssd.setResize(300,300)
                                            manip_mssd.setMaxOutputFrameSize(270000) # 300x300x3
                                            manip_mssd.initialConfig.setFrameType(dai.RawImgFrame.Type.RGB888p)
                                            
                                            # Define a neural network Fire Detection
                                            nn_fire = pipeline.create(dai.node.NeuralNetwork)
                                            # nn_fire.setConfidenceThreshold(0.5)
                                            nn_fire.setBlobPath(fire_model_path)
                                            nn_fire.input.setBlocking(False)
                                            
                                            # Define input image to nn_FireDetection
                                            manip_fire = pipeline.createImageManip()
                                            manip_fire.setResize(224,224)
                                             manip_fire.setMaxOutputFrameSize(150528) # 224x224x3
                                            manip_fire.initialConfig.setFrameType(dai.RawImgFrame.Type.RGB888p)
                                            
                                            #define a script node
                                            script = pipeline.create(dai.node.Script)
                                            script.setProcessor(dai.ProcessorType.LEON_CSS)
                                            
                                            #Define a video encoder
                                            videoEnc = pipeline.create(dai.node.VideoEncoder)
                                            videoEnc.setDefaultProfilePreset(30, dai.VideoEncoderProperties.Profile.MJPEG)
                                            
                                            # Links
                                            cam.preview.link(manip_mssd.inputImage)
                                            manip_mssd.out.link(nn_mssd.input)
                                            
                                            cam.preview.link(manip_fire.inputImage)
                                            manip_fire.out.link(nn_fire.input)
                                            
                                            script.inputs['detSSD'].setBlocking(False)
                                            script.inputs['detSSD'].setQueueSize(1)
                                            nn_mssd.out.link(script.inputs["detSSD"])
                                            
                                            script.inputs['detFire'].setBlocking(False)
                                            script.inputs['detFire'].setQueueSize(1)
                                            nn_fire.out.link(script.inputs["detFire"])
                                            
                                            script.inputs['frame'].setBlocking(False)
                                            script.inputs['frame'].setQueueSize(1)
                                            videoEnc.bitstream.link(script.inputs['frame'])
                                            
                                            cam.video.link(videoEnc.input)
                                            
                                            script.setScript("""
                                            import socket
                                            import time
                                            import threading
                                            
                                            serverFrame = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                                            serverFrame.bind(("0.0.0.0", 5010))
                                            serverFrame.listen()
                                            
                                            serverSSD = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                                            serverSSD.bind(("0.0.0.0", 5011))
                                            serverSSD.listen()
                                            
                                            serverFire = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
                                            serverFire.bind(("0.0.0.0", 5012))
                                            serverFire.listen()
                                            
                                            node.warn("Server up")
                                            
                                            labelMap_SSD = ["background", "aeroplane", "bicycle", "bird", "boat", "bottle", "bus", "car", "cat", "chair", "cow",
                                                            "diningtable", "dog", "horse", "motorbike", "person", "pottedplant", "sheep", "sofa", "train", "tvmonitor"]
                                            
                                            label_fire = ["fire", "normal", "smoke"]
                                            
                                            client_connections = {
                                                "frame": [],
                                                "ssd": [],
                                                "fire": []
                                            }
                                            
                                             def send_frame_thread():
                                                 try:
                                                     while True:
                                                         pck = node.io["frame"].tryGet()
                                                         if not pck:
                                                             time.sleep(0.001)  # avoid a busy loop on the slow script-node CPU
                                                             continue
                                                         data = pck.getData()
                                                         ts = pck.getTimestamp()

                                                         header = "ABCDE " + str(ts.total_seconds()).ljust(18) + str(len(data)).ljust(8)

                                                         # Iterate over a copy so disconnected clients can be removed safely
                                                         for conn in list(client_connections["frame"]):
                                                             try:
                                                                 conn.send(bytes(header, encoding='ascii'))
                                                                 conn.send(data)
                                                             except Exception as e:
                                                                 node.warn(f"Frame client disconnected: {e}")
                                                                 client_connections["frame"].remove(conn)
                                                                 conn.close()
                                                 except Exception as e:
                                                     node.warn(f"Error oak: {e}")
                                            
                                             def send_result_nn(nn_type):
                                                 try:
                                                     while True:
                                                         # Note: all sender threads read from the same 'frame' queue,
                                                         # so each frame packet is delivered to only one of them
                                                         pck = node.io["frame"].tryGet()
                                                         if not pck:
                                                             time.sleep(0.001)  # avoid a busy loop
                                                             continue
                                                         data = pck.getData()
                                                         ts = pck.getTimestamp()
                                                         data_to_send = []

                                                         if nn_type == 1:  # SSD
                                                             detections_ssd = node.io["detSSD"].tryGet()
                                                             if detections_ssd:
                                                                 for det in detections_ssd.detections:
                                                                     label = labelMap_SSD[det.label]
                                                                     if label == "person":
                                                                         det_bb = [det.label, det.xmin, det.ymin, det.xmax, det.ymax]
                                                                         data_to_send.append(det_bb)

                                                             # Iterate over a copy so disconnected clients can be removed safely
                                                             for conn in list(client_connections["ssd"]):
                                                                 try:
                                                                     header = "ABCDE " + str(ts.total_seconds()).ljust(18) + str(len(data)).ljust(8) + str(data_to_send).ljust(224)
                                                                     conn.send(bytes(header, encoding='ascii'))
                                                                     conn.send(data)
                                                                 except Exception as e:
                                                                     node.warn(f"SSD client disconnected: {e}")
                                                                     client_connections["ssd"].remove(conn)
                                                                     conn.close()

                                                         elif nn_type == 2:  # Fire Detection
                                                             data_fire = node.io["detFire"].tryGet()
                                                             if data_fire:
                                                                 data_to_send = data_fire.getLayerFp16("final_result")

                                                             for conn in list(client_connections["fire"]):
                                                                 try:
                                                                     header = "ABCDE " + str(ts.total_seconds()).ljust(18) + str(len(data)).ljust(8) + str(data_to_send).ljust(224)
                                                                     conn.send(bytes(header, encoding='ascii'))
                                                                     conn.send(data)
                                                                 except Exception as e:
                                                                     node.warn(f"Fire Detection client disconnected: {e}")
                                                                     client_connections["fire"].remove(conn)
                                                                     conn.close()

                                                 except Exception as e:
                                                     node.warn(f"Error oak: {e}")
                                                
                                             def get_thread(server, nn_type):
                                                 try:
                                                     while True:
                                                         conn, client = server.accept()
                                                         node.warn(f"Connected to client IP: {client}, type: {nn_type}")
                                                         if nn_type == 0:
                                                             client_connections["frame"].append(conn)
                                                             threading.Thread(target=send_frame_thread).start()
                                                         elif nn_type == 1:
                                                             client_connections["ssd"].append(conn)
                                                             threading.Thread(target=send_result_nn, args=(nn_type, )).start()
                                                         elif nn_type == 2:
                                                             client_connections["fire"].append(conn)
                                                             threading.Thread(target=send_result_nn, args=(nn_type, )).start()
                                                 except Exception as e:
                                                     node.warn(f"Server error: {e}")
                                            
                                            threading.Thread(target=get_thread, args=(serverFrame, 0)).start()
                                            threading.Thread(target=get_thread, args=(serverSSD, 1)).start()
                                            threading.Thread(target=get_thread, args=(serverFire, 2)).start()
                                            
                                            """)
                                            
                                             # For testing, you can boot the device with the pipeline over the connection:
                                            with dai.Device(pipeline) as device:
                                                while True:
                                                    time.sleep(1)
                                            
                                             # For standalone mode, instead flash the pipeline to the device:
                                            # device_infos = dai.DeviceBootloader.getAllAvailableDevices()
                                            # print(f'Found {len(device_infos)} devices')
                                            
                                            # for device in device_infos:
                                            #     print(f"Start flashing SW_version_{SW_VERSION} on device: {device}")
                                            #     bootloader = dai.DeviceBootloader(device)
                                            #     progress = lambda p : print(f'Flashing progress: {p*100:.1f}%')
                                            #     # (r, errmsg) = bootloader.flash(progress, pipeline)
                                            #     (r, errmsg) = bootloader.flash(progress, pipeline, compress=True, applicationName=SW_VERSION)
                                            #     if r: print("Flash OK")
                                            #     else: print("Flash ERROR:", errmsg)
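On the client side, the fire model's `final_result` layer arrives as a plain list of FP16 scores (stringified into the header above). A minimal decode sketch, assuming the layer holds one score per class, in the same order as `label_fire` in the script node:

```python
# Hypothetical client-side decoder for the fire model's "final_result" layer.
# Assumption: the layer is a flat list of 3 scores, ordered as label_fire.
label_fire = ["fire", "normal", "smoke"]

def decode_fire(scores):
    """Return (label, score) for the highest-scoring class."""
    best = max(range(len(scores)), key=lambda i: scores[i])
    return label_fire[best], scores[best]

print(decode_fire([0.91, 0.05, 0.04]))  # → ('fire', 0.91)
```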

                                             The server should now be able to accept multiple clients connecting to the same port. I also added a tryGet() inside the script so it doesn't hang when a queue is empty. Make sure to handle these changes on the client side as well.
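For reference, the headers built above are fixed-width ASCII: the magic `"ABCDE "` (6 bytes), the timestamp padded to 18, and the payload length padded to 8, with the detection/result string padded to a further 224 bytes on the SSD and fire ports. A client-side sketch for the frame port (the port number comes from the script above; the device IP is a placeholder):

```python
import socket

HEADER_LEN = 32  # "ABCDE " (6) + timestamp (18) + payload length (8)

def recv_exact(sock, n):
    """Read exactly n bytes from a socket."""
    buf = b""
    while len(buf) < n:
        chunk = sock.recv(n - len(buf))
        if not chunk:
            raise ConnectionError("socket closed")
        buf += chunk
    return buf

def parse_header(header: bytes):
    """Split the fixed-width frame header into (timestamp, payload_length)."""
    assert header[:6] == b"ABCDE ", "bad magic"
    ts = float(header[6:24].decode("ascii").strip())
    length = int(header[24:32].decode("ascii").strip())
    return ts, length

# Usage (untested sketch; fill in the camera's IP):
# s = socket.socket()
# s.connect(("<device-ip>", 5010))
# ts, length = parse_header(recv_exact(s, HEADER_LEN))
# jpeg = recv_exact(s, length)
```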

                                             Another important consideration: lower the FPS. The device can capture and process frames quickly, but the script node is not very performant since its CPU is quite slow.
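The frame rate can be capped on the camera node itself; a sketch assuming `cam` is the ColorCamera from the pipeline above (10 FPS is just an assumed starting point, tune it for your network and models):

```python
import depthai as dai

pipeline = dai.Pipeline()
cam = pipeline.create(dai.node.ColorCamera)
cam.setFps(10)  # assumed starting value, not a recommendation
```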

                                            Thanks,
                                            Jaka

                                              12 days later

                                              Hi jakaskerl !!

                                              I hope you had a fantastic start to the new year!

                                               Thank you for your prompt response. Just to let you know, we are currently in the midst of conducting tests and trying to clarify the capabilities and limitations of the device.

                                              I appreciate your initial assistance, and I will return with more specific questions as we delve deeper into our testing phase.

                                              Thank you again, and looking forward to your continued support.

                                              Best regards!
                                              Irena