TamsLaszip
Based on the paper, they use a MobileNet backbone, which should be fairly efficient on OAKs. If you want to give it a try, check their inference code to see where they initialize the PyTorch models. You can then use torch.onnx.export to export them to ONNX, and our blobconverter to convert the ONNX model to a blob. I wouldn't worry about accuracy at first; just try to get the conversion working, then benchmark it on an OAK to see if it's suitable.
Feel free to join our Innovation Lab to stay up to date with upcoming products that will have more AI inference capability.