• DepthAI-v2
  • How to send array to neural network node from Script node?

abhanupr
NNData should be in bytes.
What happens if you try a different number instead of 1.0? Does the output change? Perhaps only the scaling is wrong during conversion.

Thanks,
Jaka

    jakaskerl

    Thanks for the thoughts. I tried integers up to 21, but didn't have the patience to try more ;-) Please see the results below. Do you see any pattern that might help?

    Also an additional detail: My ONNX model is configured to be float16. I am assuming this is OK? Thanks

      abhanupr Thanks, I'll check it out and get back to you!

      Hey @abhanupr,

      dai.node.NeuralNetwork expects input to be uint8. That’s why passing an integer (or a uint8 value) works as expected:

      • nn_data.setLayer("uint8", [21]) => 21.0

      • nn_data.setLayer("fp16", [21]) => 21.0

      However, when you use a floating-point value with fp16 (like nn_data.setLayer("fp16", [21.0])), it doesn't convert the value to uint8. Instead, it interprets the first byte of the fp16 representation as a uint8, resulting in 64.0. This behavior is due to the absence of type checks in DepthAI v2.29.0.0. DepthAI v3 will address this issue, but for now, you'll need to ensure you're providing the correct data type.

      Kind regards,
      Jan

        Hi JanCuhel ,

        Many thanks for the clarification.
        That's disappointing though - I think the uint8 input limitation is quite a restrictive one. I was hoping this wouldn't be the case!

        In fact, the way I landed here is as follows. I was trying to implement a Kalman filter on the on-board processor of the OAK device, to avoid having to stream data to the host and back. This is crucial for me because I am using the OAK as part of a real-time audio application, so latency needs to be minimised. As you know, the Kalman filter is a recursive filter, so its state must be carried over at every time step. I did not find a straightforward way to persist the state of the neural network node. As a workaround, I thought of passing the state as an output, saving it in a Script node, and then feeding it back with a one-time-step delay to the input of the NN. That is where I observed this datatype conversion issue, which is why I created this minimal reproducible example.
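        To make the requirement concrete, here is a minimal scalar Kalman filter step (a generic textbook sketch, not my actual model); the values carried over from the previous call are exactly the state I am trying to keep on-device:

```python
def kalman_step(x_est, p, z, q=1e-3, r=1e-1):
    """One predict/update cycle of a scalar Kalman filter.

    x_est, p: state estimate and variance from the PREVIOUS step
    z: new measurement; q, r: process and measurement noise variances
    """
    # Predict (identity dynamics for simplicity)
    x_pred = x_est
    p_pred = p + q
    # Update
    k = p_pred / (p_pred + r)          # Kalman gain
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# (x, p) must survive between calls -- this is the state the
# Script-node feedback loop is emulating on-device.
x, p = 0.0, 1.0
for z in [1.0, 0.9, 1.1, 1.0]:
    x, p = kalman_step(x, p, z)
```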

        Is there a way to save the state of the NN node other than feeding back the delayed output via a Script node? If so, please let me know how to do it and I can happily avoid the Script node-based workaround.

        If there is no other way, then it implies that one cannot implement any recursive/IIR temporal filter on non-uint8 data on the device! Am I right in thinking so, or am I missing something?

        Best,
        Abhi

          Hi abhanupr,

          I understand. There's a possible solution here, and I'll run some tests to confirm. Once I have the results, I'll update you with the details.

          Best regards,
          Jan

            Hi JanCuhel,

            Thanks! Looking forward to a possible solution.

            Best,
            Abhi


            Hi JanCuhel,

            Any solution in sight? Or at least a workaround for the moment?

            Best,
            Abhi

              Hi abhanupr,

              I'm so sorry for the long delay. I'm actively working on it and will let you know as soon as I have an update.

              Once again, I apologize for the delay.

              Best regards,
              Jan

                3 months later

                Hi JanCuhel

                Any updates on this?

                Sorry for bugging you about this, I understand that it is probably a fundamental issue with the NN node so there may not be easy fixes/workarounds. But if you find any please let me know.

                Best,
                AB

                @JanCuhel

                I noticed DepthAI v3 has been released. You mentioned this issue would be addressed there, but I just wanted to confirm. Let me know.

                Best,
                AB

                  Hi @abhanupr ,
                  The root cause of your problem is that during conversion to .blob, the default is to add -ip U8 to the compile params, which means the model is compiled to receive UINT8 on the input. But this is a configurable flag, so for your use case you can convert the model with this snippet:

                  import blobconverter

                  blobconverter.from_onnx(
                      model=onnx_simplified_path,
                      data_type="FP16",
                      shaves=6,
                      use_cache=False,
                      output_dir=".",
                      optimizer_params=[],
                      compile_params=["-ip FP16"],  # keep the model input as FP16 instead of U8
                  )

                  Note the added -ip FP16, which tells blobconverter (and OpenVINO) to leave the model's input type as float16. With this new .blob your test file will work as expected, and the same solution works on both DepthAI v2 and DepthAI v3.
                  Hope this unblocks you now,
                  Best, Klemen