Hello everyone. Thanks for reading my question.
This is the first time I am trying to run an ML model on hardware other than my personal computer's GPU, so I apologize if I am asking silly questions.
I would like to run a CNN model that uses LSTM cells, to exploit temporal information and achieve better predictions on the video stream.
Is it possible to use this kind of model in a DepthAI pipeline?
If so, how is it done? The input to this kind of model is more complex than just the current video frame, since the LSTM also needs its state from the previous frames.
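To make the question concrete, this is roughly what my inference loop looks like on the host: I have to carry the LSTM state (h, c) from frame to frame myself. The sketch below is just a toy stand-in (a single NumPy LSTM cell with random weights, not my real network), but it shows the per-frame state loop I don't know how to map onto a DepthAI pipeline:

```python
import numpy as np

# Toy stand-in for my real CNN+LSTM model: one LSTM cell with random weights.
rng = np.random.default_rng(0)
HIDDEN, FEAT = 8, 16
W = rng.standard_normal((4 * HIDDEN, FEAT + HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c):
    """One LSTM update: needs the frame features AND the previous (h, c) state."""
    z = W @ np.concatenate([x, h]) + b
    i, f, g, o = np.split(z, 4)          # input, forget, cell, output gates
    c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
    h = sigmoid(o) * np.tanh(c)
    return h, c

h = np.zeros(HIDDEN)
c = np.zeros(HIDDEN)
for t in range(5):                        # 5 fake video frames
    x = rng.standard_normal(FEAT)         # stand-in for CNN features of frame t
    h, c = lstm_step(x, h, c)             # state flows from frame to frame
print(h.shape)  # (8,)
```

So each inference call takes two extra inputs besides the frame, and produces two extra outputs that must be fed back in on the next frame. Is there a way to express this feedback loop inside a DepthAI pipeline, or does the state have to go back and forth through the host?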