Good morning, I would like to understand the output I get when I run my model using the blob file. I exported the ONNX file from a TensorFlow model and specified the input shape as [1, 15, 224, 224, 3]. With this shape I was able to convert the ONNX model into a blob file. I then started working with this file and tried to run inference with it. My task is video classification with just 2 categories on the whole frame, so not object detection or face/person detection. When I run the Python code, I get the following output looping, saying that my NN input is not the right shape. It seems odd, though, because it reports the shape as [3, 224], as if it is taking the 3 (which refers to the RGB channels) as one of the dimensions. What is a possible solution? I would like to point out that this screenshot refers to shape [300, 300]; changing it to [224, 224] produces the same error.
This section talks about the shape arrangement, but how would I rearrange it? Does this also apply to my custom model?
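For reference, here is a minimal NumPy sketch of what I think "rearranging" would mean for my input, assuming the blob expects a channels-first layout (I'm not sure this is exactly what the docs intend; the array names and shapes are just my own illustration):

```python
import numpy as np

# Hypothetical example: one clip of 15 RGB frames in TensorFlow's
# channels-last layout (N, T, H, W, C), matching my export shape.
clip_nthwc = np.zeros((1, 15, 224, 224, 3), dtype=np.float32)

# If the converted model expects channels-first, the channel axis
# would be moved ahead of the spatial dims:
# (N, T, H, W, C) -> (N, T, C, H, W)
clip_ntchw = np.transpose(clip_nthwc, (0, 1, 4, 2, 3))

print(clip_ntchw.shape)  # (1, 15, 3, 224, 224)
```

Is something like this transpose what is meant, and would it need to happen before conversion (in the model itself) or at inference time on the input frames?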