• DepthAI-v2
  • OAK-D POE S2: Multi-model + Standalone mode (TCP Streaming)

Hi @jakaskerl !

I was wondering if you have had a chance to conduct any testing with the information I provided?

Thank you a lot in advance.
Irena

    Hi Irena
    I'm confused as to why the values are so strange. The reported storage space differs from one script to the next, so I'm wondering whether there is a bug with the model reporting rather than an actual storage issue.

    The model is also not that large:

    8388608->41156634B # left side is smaller..

    Thanks,
    Jaka

    Hi @jakaskerl !

    Which of the two models are you referring to? I have a hunch it may be related to the fire and smoke detection, but I'm not entirely sure.

    I'm planning to conduct some tests, and I'll reach out to you again with the test data.

    Thank you very much in advance.
    Irena

      Hi Irena
      Talked to our firmware dev; it's likely there just isn't enough space. The bootloader + the firmware + the pipeline + the model seem to take up more storage than you have available. The FW logs don't make sense either, since there is information missing, but I was told the standalone mode is "semi-deprecated", so this likely won't be fixed in the future. Your only option right now is to use a device with enough NOR or eMMC memory to support running the apps - it has to be in the GB range, not MB.

      Thanks,
      Jaka

      Hi @jakaskerl !!

      Thanks for your quick response.

      I tried using other models, and it turns out they don't work. This is a major issue for us because our priority is that the cameras can work in standalone mode.

      In response to your feedback and our tests, I have a series of questions regarding possible solutions and implementation on our devices.

      As you suggested, we are now trying to avoid standalone mode. With that in mind, and considering our goal of operating 12 cameras with a single PC, we've noticed that when we execute the script (see oak_ssd_yolov5.py) on the host, it launches 8 threads per camera: 7 of these threads appear to correspond to the created nodes, plus the main script.

      Logs:

      I understand it's a complex issue, but is there any way to reduce or encapsulate this behavior?

      Secondly, can we run a single script for all our cameras? We are thinking about the scalability of our development, so it was important to us that the cameras could work in standalone mode. Since that's not possible, our question is whether we can execute a single .py for all of our devices.

      Lastly, looking ahead to our next projects, do you think this device, the OAK-1 POE FF, would fit us and our aims (custom multi-model standalone mode + TCP streaming)? Also, is it possible to incorporate an M12 connector into this device? If so, who can I speak to about it?

      I appreciate all the help and look forward to your response.
      Irena

        Hi Irena,
        I am confused as to what is actually happening here. The script you have sent should execute completely on the device, on the LEON CSS processor, because anything created within the pipeline gets uploaded to the device. The host shouldn't be handling anything except the XLink messages (and internet sockets). It could be that something else is creating the threads.

        Irena Secondly, can we run a single script for all our cameras?

        Sure. Use this example to connect to multiple devices and upload a pipeline to each one.
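
        Roughly, it looks like the sketch below (a minimal sketch - the stream name and the per-camera processing are placeholders; your actual pipeline from oak_ssd_yolov5.py would go in create_pipeline()):

        import contextlib
        import depthai as dai

        def create_pipeline():
            pipeline = dai.Pipeline()
            cam = pipeline.create(dai.node.ColorCamera)
            xout = pipeline.create(dai.node.XLinkOut)
            xout.setStreamName("preview")
            cam.preview.link(xout.input)
            return pipeline

        with contextlib.ExitStack() as stack:
            queues = []
            # One dai.Device (and one uploaded pipeline) per discovered camera
            for dev_info in dai.Device.getAllAvailableDevices():
                device = stack.enter_context(dai.Device(create_pipeline(), dev_info))
                print("Connected to", dev_info.getMxId())
                queues.append(device.getOutputQueue("preview", maxSize=4, blocking=False))

            while True:
                for q in queues:
                    msg = q.tryGet()  # non-blocking; returns None if this camera has nothing new
                    if msg is not None:
                        pass  # process msg.getCvFrame() per camera here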

        Irena Lastly, looking ahead to our next projects, do you think this device, the OAK-1 POE FF, would fit us and our aims (custom multi-model standalone mode + TCP streaming)? Also, is it possible to incorporate an M12 connector into this device? If so, who can I speak to about it?

        cc. @erik for this one.

        Thanks,
        Jaka

          Hi @Irena ,
          Yep, that would work - you can run multiple models in standalone mode and also add TCP streaming to your application. An M12 connection would require a complete redesign of the hardware & enclosure, and we likely won't be working on that, especially because our next gen of devices will all have M12 + M8 + USB-C, including the (name TBD) "OAK4-1" - a single camera with RVC4 and an M12 connector - which would perfectly suit your requirements. Thoughts?
          Thanks, Erik

            Hello jakaskerl !!

            I appreciate your prompt response. It's confirmed that the pipeline runs on the device; my main focus has been on the host's behavior when launching the script. Upon reviewing my logs, I notice several different threads at script launch, presumably corresponding to each node in the pipeline.

            Thanks a lot for the reference you provided on managing all the cameras with a single script; it has proven very useful to me.

            A new question arises as I delve into the gen2-yolo device-decoding example. Given that we cannot operate in multi-model standalone mode, we are aiming to offload most processing to the device to ease the load on the host, which is handling 12 cameras. The example suggests that the device can handle the decoding of the neural network output using the YoloDetectionNetwork node. If we have a custom model, can we perform a similar decoding of the custom model's output on the device?

            Thank you once again for your assistance.
            Irena

              Hi erik !!

              Thank you for your response. I'm eager to know if there's any information available regarding the release date and potential cost of these devices. This information holds significant relevance for our upcoming projects.

              Thanks once again!
              Irena

              Hi @Irena ,
              Planned release is June 2024; prices vary depending on the model/variation. MSRPs will likely range from $400 for the OAK-1-PoE equivalent to $800 for the OAK-D-LR equivalent. All models will offer both PoE and USB connectivity.

                Irena If we have a custom model, can we perform a similar decoding of the custom model's output on the device?

                Could you elaborate a bit on what kind of model you are using? If it's a Yolo model, try it with the YoloDetectionNetwork node. It essentially does what host-side decoding would do, but it is customized to work for Yolo models only and runs on-device.
                Though I am not sure whether the blob will run out-of-the-box with the Yolo node; we usually suggest training the models with our training notebooks (https://github.com/luxonis/depthai-ml-training/tree/master/colab-notebooks).
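
                For reference, a minimal sketch of the on-device decoding setup (the blob path and the class/anchor values below are placeholders - they have to match your exported model and config):

                import depthai as dai

                pipeline = dai.Pipeline()

                cam = pipeline.create(dai.node.ColorCamera)
                cam.setPreviewSize(416, 416)  # must match the model's input size
                cam.setInterleaved(False)

                yolo = pipeline.create(dai.node.YoloDetectionNetwork)
                yolo.setBlobPath("model.blob")  # placeholder path to your compiled blob
                yolo.setNumClasses(2)           # e.g. fire / smoke - adjust to your model
                yolo.setCoordinateSize(4)
                yolo.setAnchors([10, 14, 23, 27, 37, 58, 81, 82, 135, 169, 344, 319])  # model-specific
                yolo.setAnchorMasks({"side26": [1, 2, 3], "side13": [3, 4, 5]})        # model-specific
                yolo.setIouThreshold(0.5)
                yolo.setConfidenceThreshold(0.5)
                cam.preview.link(yolo.input)

                xout = pipeline.create(dai.node.XLinkOut)
                xout.setStreamName("detections")
                yolo.out.link(xout.input)  # decoded ImgDetections, parsed on-device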

                Thanks,
                Jaka

                  Hi erik !!

                  That's fantastic news. Thank you for your prompt response, and we will be watching for your next releases.

                  Thank you!
                  Irena

                  Hi jakaskerl!

                  We are contemplating training Yolo (v6-v7) for a detection model, that is to say our custom data + Yolo. From my understanding, as you mentioned, the decoding can be performed on the device.

                  Thanks for the reference to the training models 🙂

                  The question that arises for me is whether, if we develop our own model from scratch, it is feasible to have the same functionality, i.e., decoding the results on the device?

                  Thank you in advance!
                  Irena

                    Irena

                    Irena We are contemplating training Yolo (v6-v7)

                    I'd go for v6, empirically it runs the fastest.

                    Irena The question that arises for me is whether, if we develop our own model from scratch, it is feasible to have the same functionality, i.e., decoding the results on the device?

                    You'd have to set the correct layer names, correctly prune the model and then define a relevant .json if you wish to make it work.
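
                    Roughly like the sketch below, assuming the .json follows the format our export tools (and the device-decoding example) produce - adjust the key names if yours differs:

                    import json
                    import depthai as dai

                    def create_yolo_node(pipeline: dai.Pipeline, config_path: str, blob_path: str):
                        # Key names assume the Luxonis export-tool JSON format; adjust if yours differs
                        with open(config_path) as f:
                            config = json.load(f)
                        meta = config["nn_config"]["NN_specific_metadata"]

                        yolo = pipeline.create(dai.node.YoloDetectionNetwork)
                        yolo.setBlobPath(blob_path)
                        yolo.setNumClasses(meta["classes"])
                        yolo.setCoordinateSize(meta["coordinates"])
                        yolo.setAnchors(meta["anchors"])
                        yolo.setAnchorMasks(meta["anchor_masks"])
                        yolo.setIouThreshold(meta["iou_threshold"])
                        yolo.setConfidenceThreshold(meta["confidence_threshold"])
                        return yolo, config["mappings"]["labels"]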

                    Thanks,
                    Jaka

                      Hi jakaskerl!

                      Alright, I understand that I need to handle the configuration of our custom model and its outputs to effectively manage the results.

                      Initially, we are considering the use of YOLO, whether through fine-tuning or transfer learning. Am I correct in assuming that for both scenarios, we can utilize the YoloDetectionNetwork node?

                      Thank you very much for your assistance, Jaka.
                      Irena

                        Irena
                        Yes, the Yolo node is just a wrapper around the standard NN node. Anything that works with the NN node should work with the Yolo node as well. Just make sure the IO is structured in a way that enables the Yolo node to run its decoding properly.

                        Thanks,
                        Jaka

                          Hi jakaskerl !!

                          Got it!

                          If any other questions arise, I will reach out to you.

                          Thank you very much! 🙂
                          Irena

                          a month later

                          Hello again!

                          I've just received my new OAK-1-POE for testing and further development of our multi-model-based software. However, I have some questions and would like to share them below.

                          I'm currently attempting to flash the multi-model version of the pipeline onto the OAK-1-FF-POE device. A peculiar issue arises: the progress bar is not displayed, and after a few seconds it indicates that the pipeline was successfully flashed:

                          100% flashed
                          "Flash OK"

                          The strange part is that when I execute the script check_Bootloader.py, it doesn't show that anything has been flashed:

                          Found device with name: 10.1.1.107
                          Version: 0.0.24
                          NETWORK Bootloader, is User Bootloader: True
                          Memory 'Memory.FLASH' size: 33554432, info: JEDEC ID: 01 02 19
                          Memory 'Memory.EMMC' size: 15758000128, info: 

                          However, when I run host.py to connect via TCP to the camera to check if it's working in standalone mode, it indeed is. In other words, the pipeline flash was successful.

                          Now, I'd like to understand why, when checking the bootloader status, it shows as if no pipeline has been loaded for execution in standalone mode. It's crucial to have this control both at the memory level and the software level (name of the loaded software/pipeline, version, etc.).
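
                          For reference, this is roughly how I flash it (a simplified sketch - the actual multi-model pipeline construction is omitted; I'm using the standard DeviceBootloader.flash() call with a progress callback):

                          import depthai as dai

                          pipeline = dai.Pipeline()
                          # ... multi-model nodes + Script node for the TCP streaming go here ...

                          (found, info) = dai.DeviceBootloader.getFirstAvailableDevice()
                          if not found:
                              raise RuntimeError("No device found")

                          bootloader = dai.DeviceBootloader(info)
                          progress = lambda p: print(f"Flashing progress: {p * 100:.1f}%")
                          bootloader.flash(progress, pipeline)  # flashes the pipeline + firmware to the device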

                          I'm not sure whether the information below is relevant, but it's the only difference I noticed between the OAK-D-S2-FF-POE (where I can see the name of the flashed version, etc.) and the OAK-1-FF-POE, and I'm unable to decipher the significance of this parameter setting (appMem: 0 vs 1).

                          OAK-D-S2-FF-POE

                          Found device with name: 10.1.1.103
                          Current flashed configuration
                          {"appMem": 0, "network": {"ipv4": 0, ...}, "usb": {"maxUsbSpeed": 3, ...}, "userBlChecksum": 886625469, "userBlSize": 3822176}

                          OAK-1-FF-POE

                          Found device with name: 10.1.1.107
                          Current flashed configuration
                          {"appMem": 1, "network": {"ipv4": 0, ...}, "usb": {"maxUsbSpeed": 3, ...}, "userBlChecksum": 886625469, "userBlSize": 3822176}

                          I really appreciate any help. Thank you a lot!
                          Irena

                            Hi Irena
                            Are you running both devices on the same bootloader version? Could you try flashing the latest 0.0.26 version to it? Hopefully that corrects the issue.

                            Using device_manager.py, is the bootloader status shown as FLASH_BOOTED? It usually takes some time to show up, since the device needs about 15 seconds after being powered on to actually boot into the flashed app.
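
                            For updating the bootloader, something along the lines of the flash_bootloader example should work (sketch below - double-check against the example shipped with your depthai version):

                            import depthai as dai

                            (found, info) = dai.DeviceBootloader.getFirstAvailableDevice()
                            bl = dai.DeviceBootloader(info, True)  # True = allow flashing the bootloader itself
                            progress = lambda p: print(f"Flashing progress: {p * 100:.1f}%")
                            bl.flashBootloader(progress)  # flashes the bootloader bundled with this depthai release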

                            Thanks,
                            Jaka

                              Hi jakaskerl !!

                              Thank you for your prompt response.

                              Both devices are currently running bootloader version 0.0.24, as shown below.

                              OAK-D-S2-FF Poe (10.1.1.103)

                              OAK-1-FF Poe (10.1.1.107)

                              While both devices are flashed and operational, I've noticed that in one case, it displays information about the flashed pipeline, whereas in the other case, it doesn't. It's a bit peculiar.

                              OAK-D-S2-FF Poe (10.1.1.103)

                              OAK-1-FF Poe (10.1.1.107)

                              Do you think I should consider updating to version 0.0.26?

                              Thank you a lot!
                              Irena