Hi RyanLee,
What @erik linked is just the beginning: right now it only covers FPS estimates, plus the number of parameters and FLOPs. We are in the process of migrating models from experiments to the model zoo (and we are constantly looking for new models). With that in mind, we also opened a new issue to do exactly that: evaluate the models on known data sets, with execution on OAK, and then compare the performance. This will also help us discover potential bugs and improve the performance / conversion process. Help from the community is appreciated here, and I'll be writing the evaluation script in the next few days.
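Just to illustrate the kind of per-image comparison such a script involves (this is a rough sketch with names and box format of my choosing, not the actual script; real mAP additionally sweeps confidence thresholds and averages over classes):

```python
def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def recall_at_iou(predictions, ground_truth, thr=0.5):
    """Fraction of ground-truth boxes covered by some prediction with IoU >= thr."""
    matched = sum(
        1 for gt in ground_truth
        if any(iou(p, gt) >= thr for p in predictions)
    )
    return matched / len(ground_truth)
```

Running this on the same images for the on-device (OAK) detections and the reference framework's detections is how a precision/recall drop after conversion would show up.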
Right now you can find some results for mAP and other standard evaluation metrics for models from the OpenVINO model zoo (e.g., yolo-v3-tiny-tf), though I am not sure whether they report inference results for the OpenVINO IR or the original results for the source models in PyTorch, TF, ... (i.e., before conversion). Nor am I sure whether they use FP32 or FP16.
That said, a slight drop is expected on OAK devices, as FP16 is used.
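To make the FP16 point concrete: half precision has a 10-bit mantissa (roughly 3 significant decimal digits), so every weight and activation gets rounded during conversion, which is where the slight drop comes from. A stdlib-only illustration (function name is mine; Python's struct module supports the IEEE 754 half-precision format code 'e'):

```python
import struct

def to_fp16(x):
    """Round a Python float (FP64) to the nearest IEEE 754 half-precision value."""
    return struct.unpack('e', struct.pack('e', x))[0]

weight = 0.1234567
rounded = to_fp16(weight)          # nearest FP16-representable value
rel_err = abs(weight - rounded) / weight
print(rounded, rel_err)            # relative error stays below 2**-11 (~4.9e-4)
```

Per-weight errors this small usually translate to only a small metric drop, but that is exactly what the evaluation on OAK should quantify per model.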