For my purposes, I want to run inference with a pretrained model, trained on Roboflow, on a low-power embedded board. There is no space for Docker, and therefore no space for the Roboflow inference server. Is there any way to download the model itself?
I have the following code and have been looking everywhere for a way to download the model, but cannot find anything.

```python
from depthai_sdk import OakCamera

MODEL_ID = "stuff"
MODEL_VS = "2"
PAPI_KEY = "*****WnKpTKuX******"

with OakCamera() as oak:
    color = oak.create_camera('color')
    # The SDK fetches the model through its Roboflow integration
    model_config = {
        'source': 'roboflow',
        'model': f"{MODEL_ID}/{MODEL_VS}",
        'key': PAPI_KEY
    }
    nn = oak.create_nn(model_config, color)
    oak.visualize(nn, fps=True)
    oak.start(blocking=True)

# Code to download the model from `nn`:
```
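One idea I have been considering (untested): since `oak.create_nn` has to fetch the model to disk before it can load it onto the OAK, the compiled `.blob` should exist somewhere locally after one successful online run. A minimal sketch that searches for it; the search roots are my guesses, not a documented cache location:

```python
from pathlib import Path

# Untested: after one successful online run, the compiled model must exist
# somewhere on disk for the SDK to load it onto the device. These search
# roots are assumptions, not a documented cache path.
for root in (Path.home() / ".cache", Path.cwd()):
    for blob in root.rglob("*.blob"):
        print(blob)
```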

Does anyone have any ideas or tips? Thanks!

jakaskerl, thank you for your quick response. The thing is, I don't think Roboflow offers a way to get the model file, so there is no .onnx download. That is the main issue I am facing.
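The best alternative I can see is to export the dataset from Roboflow and retrain locally, so that I own the weights and can produce the .onnx myself. A rough sketch, assuming a YOLOv8-type model and the `roboflow` and `ultralytics` pip packages; the workspace and project names are placeholders:

```python
from roboflow import Roboflow
from ultralytics import YOLO

# Download the dataset export (not the hosted model) from Roboflow.
rf = Roboflow(api_key="YOUR_API_KEY")
project = rf.workspace("your-workspace").project("stuff")
dataset = project.version(2).download("yolov8")

# Retrain locally, then export weights you fully control.
model = YOLO("yolov8n.pt")
model.train(data=f"{dataset.location}/data.yaml", epochs=50)
model.export(format="onnx")  # the .onnx can then be converted to a .blob
```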

What I meant regarding Docker is the following: Roboflow offers solutions for local deployment that make no calls to the Roboflow API, but they all require Docker images. Running Docker on the embedded device we are using would not really be a smart thing to do. I could "reverse engineer" these images and see where they store the model (a sketch of how I'd do that is below), but I hoped there were better alternatives.
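For the reverse-engineering route, the image could be inspected on a dev machine without ever running it on the embedded board: create a stopped container and export its filesystem. A sketch via Python's subprocess; the image name is what I believe Roboflow publishes on Docker Hub for the CPU server, but verify it for your setup, and note the server may download weights at runtime rather than ship them baked into the image:

```python
import subprocess

IMAGE = "roboflow/roboflow-inference-server-cpu"  # assumed image name; verify

# Create a container without starting it, dump its filesystem to a tarball,
# then clean up. Any weights baked into the image can be found by
# inspecting the extracted tree.
subprocess.run(["docker", "create", "--name", "rf-inspect", IMAGE], check=True)
subprocess.run(["docker", "export", "rf-inspect", "-o", "rf-inspect.tar"], check=True)
subprocess.run(["docker", "rm", "rf-inspect"], check=True)
```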

yeldarb, thanks for the quick reply! The guide works for me; however, it unfortunately does not offer an offline solution. All of the options make calls to the Roboflow API for inference, unless I am missing something?