Hi, I'm trying to run my Luxonis device in standalone mode with the human-pose-estimation-0001 model from the model zoo. I can't find any references or documentation for getting the keypoints directly from the model's output; instead I have to process the heatmaps and PAFs myself after running the model. Is this intended behavior, or is there a direct way to get the keypoints? I'm currently using the model in my pipeline as a NeuralNetwork node. Thanks!

Hi @stevex0
The NeuralNetwork node does not postprocess the output the way the YOLO and MobileNet detection nodes do, so you have to run the decoding yourself (usually done on the host, but in your case inside a Script node).
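
For reference, below is a minimal, hypothetical decoding sketch in plain numpy. It assumes the heatmap tensor layout of human-pose-estimation-0001 (19 channels of 32x57, the last being background) and simply takes the argmax of each keypoint heatmap, skipping the PAF-based grouping step, so it is only reliable for a single person in frame. Names, shapes, and the threshold are assumptions; adapt them to your blob.

```python
import numpy as np

# Assumed output layout for human-pose-estimation-0001:
# 19 heatmaps of 32x57 (the 19th channel is background).
NUM_KEYPOINTS = 18
HEATMAP_H, HEATMAP_W = 32, 57

def decode_keypoints(heatmaps, threshold=0.3):
    """Naive single-person decode: argmax of each keypoint heatmap.

    heatmaps: np.ndarray of shape (19, HEATMAP_H, HEATMAP_W).
    Returns a list with one entry per keypoint: (x, y, confidence) in
    heatmap coordinates, or None if below the threshold. Scale x/y to
    your input frame size yourself.
    """
    keypoints = []
    for i in range(NUM_KEYPOINTS):
        hm = heatmaps[i]
        # Location of the strongest response for this keypoint
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        conf = hm[y, x]
        if conf < threshold:
            keypoints.append(None)
        else:
            keypoints.append((int(x), int(y), float(conf)))
    return keypoints
```

Multi-person decoding additionally requires peak extraction per heatmap and PAF-based association, which is what the model zoo's handler does.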

Thanks,
Jaka

Hi! Thanks for getting back to me. It seems the postprocessing relies heavily on numpy and cv2 inside depthai's own handler.py for the human-pose-estimation-0001 model, but as far as I can tell neither numpy nor cv2 is available among the Script node's installed Python modules. Are there any options other than doing the postprocessing on the host?

Hi @stevex0
Not on RVC2, unless the device is coupled with a host (like the OAK-CM4 models).
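
As an illustration of the host-coupled approach, here is a hedged sketch of a pipeline that streams the raw NeuralNetwork output to the host for decoding (so it is no longer standalone). The blob path, preview size, output layer names (Mconv7_stage2_L1 for PAFs, Mconv7_stage2_L2 for heatmaps), and tensor shapes are assumptions based on the standard human-pose-estimation-0001 conversion; verify them against your own blob.

```python
import depthai as dai
import numpy as np

# Assumed blob path; replace with your converted model.
BLOB_PATH = "human-pose-estimation-0001.blob"

pipeline = dai.Pipeline()

cam = pipeline.create(dai.node.ColorCamera)
cam.setPreviewSize(456, 256)   # assumed model input width x height
cam.setInterleaved(False)

nn = pipeline.create(dai.node.NeuralNetwork)
nn.setBlobPath(BLOB_PATH)
cam.preview.link(nn.input)

# Stream the raw NN output to the host over XLink.
xout = pipeline.create(dai.node.XLinkOut)
xout.setStreamName("nn")
nn.out.link(xout.input)

with dai.Device(pipeline) as device:
    q = device.getOutputQueue("nn", maxSize=4, blocking=False)
    while True:
        data = q.get()  # NNData with the raw output tensors
        # Assumed layer names and shapes for human-pose-estimation-0001.
        heatmaps = np.array(data.getLayerFp16("Mconv7_stage2_L2")).reshape(19, 32, 57)
        pafs = np.array(data.getLayerFp16("Mconv7_stage2_L1")).reshape(38, 32, 57)
        # ... run heatmap/PAF decoding on the host here (e.g. the sketch above) ...
```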

Thanks,
Jaka