@tuhuateng#354 As far as I know, the MA2485A must work with OpenVINO on ARM. So what is your Python API based on?
We use a combination of C++ and Python, which is open source under the MIT license. You can find the various repos within our organization on GitHub (see HERE). The neural models used are pre-converted to a format compatible with the Myriad X.
So you don't need OpenVINO at all to make it work: all you need is Python 3.6-3.9 and the dependencies listed in our shell script and requirements.txt file. Another upcoming feature is running the Gen2 pipeline with MicroPython on-device, which would technically eliminate the need for a host system altogether. We have open issues where you can track progress: the Gen2 pipeline is HERE, and MicroPython support is HERE. We also recently started using GitHub Projects for our public roadmap, which may make it easier to see the state of things and the direction we're headed (found HERE).
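For reference, a minimal host setup might look like the following. This is a hedged sketch, not an official install guide: the repo URL and script names are assumptions based on the Luxonis GitHub organization, and the exact steps are whatever the shell script and requirements.txt in the repo actually specify.

```shell
# Hypothetical quick-start; repo and script names are illustrative.
git clone https://github.com/luxonis/depthai.git
cd depthai

# Install the Python dependencies pinned in requirements.txt
# (requires Python 3.6-3.9 on the host).
python3 -m pip install -r requirements.txt

# Run the demo to verify the attached device is detected.
python3 depthai_demo.py
```

Note that OpenVINO never appears in these steps; the host only needs Python and the pip-installable dependencies.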
All that said, our devices and DepthAI are fully compatible with OpenVINO neural models. You can train new models using frameworks supported by OpenVINO, and then use the Model Optimizer to convert them to the Intermediate Representation (IR) format. Once you do that, you can either compile the Movidius binary (blob) locally or use our online converter. We have a doc on local conversion, found HERE. Realistically, if you're doing local model conversion, you'd probably prefer a more powerful host than the Pi 4 anyway, and can just transfer the resulting blob to the Pi afterward (so, again, no need for OpenVINO on the Pi 4).
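As a rough sketch, the local conversion flow described above might look like this on the more powerful host. The flags and filenames are illustrative assumptions and vary by OpenVINO version; consult the linked doc for the exact invocation.

```shell
# Sketch only: assumes OpenVINO's Model Optimizer (mo) and compile_tool
# are installed and on PATH; model.onnx is a placeholder for your model.

# 1. Convert the trained model to OpenVINO Intermediate Representation:
mo --input_model model.onnx --data_type FP16 --output_dir ir/

# 2. Compile the IR into a Myriad X blob that DepthAI can load:
compile_tool -m ir/model.xml -d MYRIAD -ip U8 -o model.blob
```

The resulting model.blob is then copied to the Pi 4 (or whatever host runs DepthAI), so OpenVINO itself never needs to be installed there.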