How to send array to neural network node from Script node?
In case it helps to reproduce the issue, I have created a GitHub repo with the Python + ONNX + blob files.
Cheers
Hey @abhanupr,
dai.node.NeuralNetwork expects its input to be uint8. That's why passing an integer (or a uint8 value) works as expected:
nn_data.setLayer("uint8", [21])
=> 21.0nn_data.setLayer("fp16", [21])
=> 21.0
However, when you use a floating-point value with fp16 (like nn_data.setLayer("fp16", [21.0])), it doesn't convert the value to uint8. Instead, it interprets the first byte of the fp16 representation as a uint8, resulting in 64.0. This behavior is due to the absence of type checks in DepthAI v2.29.0.0. DepthAI v3 will address this issue, but for now, you'll need to ensure you're providing the correct data type.
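To see where the 64.0 comes from, here is a minimal NumPy sketch (my own illustration, independent of DepthAI): the fp16 encoding of 21.0 is 0x4D40, and the first little-endian byte is 0x40 = 64.

import numpy as np

# fp16 encoding of 21.0 is 0x4D40; in little-endian byte order the
# first byte is 0x40, i.e. 64 when reinterpreted as a uint8
raw = np.float16(21.0).tobytes()
print(raw.hex())                              # "404d"
print(np.frombuffer(raw, dtype=np.uint8)[0])  # 64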
Kind regards,
Jan
Hi JanCuhel,
Many thanks for the clarification.
That's disappointing though - I think the uint8 input limitation is quite a restrictive one. I was hoping this wouldn't be the case!
In fact, the way I landed here is as follows: I was trying to implement a Kalman filter on the on-board processor of the OAK device, to avoid having to stream data to the host and back. This is crucial to me as I am using the OAK as part of a real-time audio application and latency needs to be minimised. Now, as you would know, the Kalman filter, being a recursive filter, requires its state to be carried over at every time step, and I did not find a straightforward way to memorise the state of the neural network node. Therefore, as a workaround, I thought about passing the state as an output, saving it in a Script node, and then feeding it back with a one-time-step delay to the input of the NN (sketched below). This is where I observed the datatype conversion issue, which is why I created this minimal reproducible example.
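To make the workaround concrete, here is a rough sketch of the loop I have in mind (the stream names and the blob path are placeholders, not my actual code):

import depthai as dai

pipeline = dai.Pipeline()

# NN node that runs one Kalman update step (placeholder blob)
nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath("kalman_step.blob")

# Script node that buffers the state for one time step
script = pipeline.create(dai.node.Script)
nn.out.link(script.inputs["state_in"])      # NN output -> Script
script.outputs["state_out"].link(nn.input)  # delayed state -> NN input

script.setScript("""
while True:
    state = node.io["state_in"].get()  # state produced at step t
    node.io["state_out"].send(state)   # becomes the NN input at t+1
""")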
Is there a way to save the state of the NN node other than feeding back the delayed output via a Script node? If so, please let me know how to do it and I can happily avoid the Script node-based workaround.
If there is no other way, then it implies that one cannot implement any recursive/IIR temporal filter on non-uint8 data on the device! Am I right in thinking so, or am I missing something?
Best,
Abhi
Hi @abhanupr,
The root cause of your problem is that during conversion to .blob, the default is to add -ip U8 to the compile params, which means the model is compiled to receive UINT8 on its input. But this flag is configurable, so for your use case you can use this snippet to convert the model:
blobconverter.from_onnx(
    model=onnx_simplified_path,
    data_type="FP16",
    shaves=6,
    use_cache=False,
    output_dir=".",
    optimizer_params=[],
    compile_params=["-ip FP16"],
)
Note the added -ip FP16, which tells blobconverter (and OpenVINO) to leave the model's input type as float16. With this new .blob your test file will work as expected, and the same solution works on both DepthAI v2 and DepthAI v3.
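As a quick sanity check (assuming the same single-layer test model from your repo), the float value should now round-trip intact:

import depthai as dai

nn_data = dai.NNData()
nn_data.setLayer("fp16", [21.0])
# With the FP16-input blob, the network now receives 21.0 instead of
# the reinterpreted first byte (64.0) that the U8 blob produced.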
Hope this unblocks you now,
Best, Klemen