Hi, I am looking into the gen2-yolo examples.
Some of the examples use "device-decoding", while others use "host-decoding". Could anyone explain the difference to me? How do the two compare in terms of inference time?
Also, how does this relate to the network nodes in depthai-core?
I noticed, for example, that the device-decoding example uses dai.node.YoloDetectionNetwork to run inference, while the yolox (host-decoding) example uses dai.node.NeuralNetwork. I guess the post-processing of the latter is done in Python on the host, while the post-processing of the former is done in C++ on the device, making the device-decoding pipeline faster overall?
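To make my question concrete: with host decoding, the dai.node.NeuralNetwork just returns the raw output tensor, and the Python script has to do something like the following itself. This is only a toy numpy sketch of that filtering step (the function name, threshold, and tensor layout are my own assumptions, not code from the repo), but it illustrates the per-frame Python work I'm asking about:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def decode_yolo_host(raw, conf_thresh=0.5):
    """Toy host-side YOLO decode (hypothetical helper, not from gen2-yolo).

    raw: (N, 5 + num_classes) rows of [cx, cy, w, h, objectness, class logits...]
    as they might come out of a NeuralNetwork node. Real YOLO decoding also
    applies grid offsets, anchors, and NMS; this sketch only shows the
    score computation and confidence filtering done in Python on the host.
    """
    obj = sigmoid(raw[:, 4])          # objectness per candidate box
    cls = sigmoid(raw[:, 5:])         # per-class scores
    scores = obj * cls.max(axis=1)    # combined confidence
    keep = scores > conf_thresh       # drop low-confidence candidates
    boxes = raw[keep, :4]
    labels = cls[keep].argmax(axis=1)
    return boxes, scores[keep], labels

# Tiny fake tensor: one confident detection of class 0, one background box.
raw = np.array([
    [10.0, 10.0, 5.0, 5.0,  6.0,  6.0, -6.0],
    [20.0, 20.0, 5.0, 5.0, -6.0,  0.0,  0.0],
])
boxes, scores, labels = decode_yolo_host(raw)
print(len(boxes), labels)  # one surviving box, labelled class 0
```

With device decoding (YoloDetectionNetwork), an equivalent of this loop runs on the device instead, and the host only receives the final detections.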