Erik, I have familiarized myself with the API, pipelines, and nodes, and I have also run the available examples.
For now, I would like to use pretrained models to perform object detection and scene classification.
I am thinking of using the MS COCO dataset for object detection and the Places365 dataset for scene classification.
Several pretrained models exist for these datasets, built on various architectures.
For example, Places365 has models trained with the VGG-16, GoogLeNet, ResNet, and AlexNet architectures.
I downloaded the VGG-16 Places365 model and successfully converted it to .blob using the online converter tool.
Now my questions are:
1) How do I use the converted model and decode its output?
2) Does using a converted model differ based on the originating model? In other words, if I convert several models, each originating from a different framework (Caffe, TensorFlow, etc.) or a different architecture (VGG-16, ResNet, etc.), is the way to use them and decode their results the same, or does it differ from one to another?
3) Which architecture is preferred on OAK devices (i.e., which one is the hardware optimized for)?
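To make question 1 concrete, here is how I currently assume the classification output would be decoded on the host side: a flat vector of 365 raw scores from the network, to which I apply a softmax and then sort. The function name is my own, and the assumption that the converted .blob emits raw logits (rather than already-softmaxed probabilities) is a guess on my part; please correct me if that is wrong.

```python
import numpy as np

def decode_places365(raw_scores, top_k=5):
    """Softmax the 365 raw scores and return the top-k (index, probability) pairs.

    Assumes the blob outputs raw logits; if the model already ends in a
    softmax layer, the softmax here would simply be redundant, not harmful.
    """
    scores = np.asarray(raw_scores, dtype=np.float32).flatten()
    exp = np.exp(scores - scores.max())   # subtract max for numerical stability
    probs = exp / exp.sum()
    top = np.argsort(probs)[::-1][:top_k]  # indices sorted by descending probability
    return [(int(i), float(probs[i])) for i in top]
```

The returned indices would then be mapped to scene names through the categories_places365.txt label file that ships with the dataset. Is this roughly the right approach, or does the decoding have to happen differently for a converted model?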