Hello everyone. Thanks for reading my question.
This is the first time I am trying to run an ML model on hardware other than my personal computer's GPU, so I apologize if these are silly questions.
I would like to run a CNN model that uses LSTM cells, to make use of temporal information and achieve better predictions on the video stream.
Is it possible to use this kind of model in the depthAI pipeline?
If so, how is it done? The input to this kind of model is more complex than just the current video frame.
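
To make this concrete, here is a rough sketch of the kind of model I have in mind (all shapes and the class count are just placeholders): a small CNN applied to each frame, followed by an LSTM over the sequence of frame features.

    # Hypothetical CNN+LSTM: per-frame convolutional features, then an LSTM
    # that aggregates temporal information across the clip.
    from tensorflow.keras import layers, models

    seq_len, h, w, c = 8, 64, 64, 3  # placeholder: 8-frame clips of 64x64 RGB

    model = models.Sequential([
        # The same CNN runs on every frame in the clip
        layers.TimeDistributed(layers.Conv2D(16, 3, activation="relu"),
                               input_shape=(seq_len, h, w, c)),
        layers.TimeDistributed(layers.MaxPooling2D()),
        layers.TimeDistributed(layers.Flatten()),
        # The LSTM consumes the sequence of per-frame feature vectors
        layers.LSTM(64),
        layers.Dense(10, activation="softmax"),  # placeholder class count
    ])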

    Hi PCPJ,

    Great question. I'm not immediately sure. Seeing if I can find anything in the OpenVINO documentation on this. Will circle back either way.

    Thanks,
    Brandon

    I haven't found anything that directly answers the question just yet, but here are some things I'm finding along the way, and notes that may be helpful.

    1. We use what OpenVINO calls the VPU plugin, or more specifically the MYRIAD plugin. Our device is the VPU_2480 (Myriad X); see the sketch after this list for how a network is loaded onto it.
    2. For supported layers on our platform, see the VPU section here
    3. Custom layers can be implemented for our platform (and the NCS2, same process) using OpenCL: here
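
    For reference, loading a converted model onto the MYRIAD plugin with OpenVINO's Python Inference Engine API looks roughly like this (a sketch; the IR file names are placeholders):

        # Load an IR model onto the Myriad VPU via the Inference Engine
        from openvino.inference_engine import IECore

        ie = IECore()
        net = ie.read_network(model="model.xml", weights="model.bin")
        # "MYRIAD" selects the VPU plugin our devices use
        exec_net = ie.load_network(network=net, device_name="MYRIAD")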

    And actually, here is a solution for getting an LSTM to compile for the Myriad X:
    https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/cannot-convert-LSTM-keras-model-to-IR-files/td-p/1138027#comment-1955572
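
    If it helps, the Model Optimizer step for a frozen TensorFlow graph would look something like the sketch below (the file name is a placeholder, and note the Myriad plugin needs FP16 weights):

        # Convert a frozen TF graph to OpenVINO IR with FP16 weights for Myriad.
        # Assumes mo_tf.py from OpenVINO's model_optimizer directory is on the path.
        import subprocess

        subprocess.run([
            "python", "mo_tf.py",
            "--input_model", "frozen_lstm_model.pb",
            "--data_type", "FP16",
        ], check=True)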

    Thoughts?

    Thanks,
    Brandon