Modify the BW1098OBC to add an ESP32 System on Module

Hello,
I wanted to ask about the image pipeline: how is the image acquired by the camera transported into the Movidius, and how do the results get back to the ESP32 processor?
Is there any way to run my current program (Python + OpenVINO) on this hardware without converting it?

thank you

    7 days later


    Hi Sanjith,

    Sorry about the delay. I have been offline because we had a kid. :-)

    We have an initial implementation for the DepthAI SPI communication (streaming of NN metadata for now).

    The final SPI protocol is not integrated yet; for now, each packet consists of a NULL-terminated string.

    The DepthAI boot and initialization are still done over USB; afterwards, an SPI controller can pull the data off SPI. Tested with an ESP32:
    https://github.com/luxonis/depthai/compare/spi_esp32
    https://github.com/luxonis/depthai-python/compare/master...spi_esp32

    The ESP32 app is based on this example: https://github.com/espressif/esp-idf/tree/v4.0.1/examples/peripherals/spi_slave/sender
    with these modifications: https://github.com/luxonis/depthai-python/commit/619d485

    As DepthAI streams the NN metadata on its own (when the controller initiates communication), the ESP32 app also sends a NULL-terminated string like esp32-cnt:253 to test that the COPI (MOSI) line works well; DepthAI echoes it back together with the NN metadata. After DepthAI boots up (by running ./depthai.py), the prints in the ESP32 app should look like:

    I (0) cpu_start: Starting scheduler on APP CPU.
    I (305) gpio: GPIO[2]| InputEn: 1| OutputEn: 0| OpenDrain: 0| Pullup: 1| Pulldown: 0| Intr:1 
    [RECV-0] 
    [RECV-1] 
    [RECV-2] 
    [RECV-3] 
    [RECV-4] 
    ... 
    [RECV-106] 
    [RECV-107] 
    [RECV-108] 
    [RECV-109] 
    [RECV-110] 
    [RECV-111] 
    [RECV-112]  | DepthAI pkt 0:
    [RECV-113] esp32-cnt:108 | DepthAI pkt 1:
    [RECV-114] esp32-cnt:109 | DepthAI pkt 2:
    [RECV-115] esp32-cnt:110 | DepthAI pkt 3:
    [RECV-116] esp32-cnt:111 | DepthAI pkt 4:
    [RECV-117] esp32-cnt:112 | DepthAI pkt 5:
    [RECV-118] esp32-cnt:113 | DepthAI pkt 6:
    [RECV-119] esp32-cnt:114 | DepthAI pkt 7:
    [RECV-120] esp32-cnt:115 | DepthAI pkt 8:
    [RECV-121] esp32-cnt:116 | DepthAI pkt 9:
    [RECV-122] esp32-cnt:117 | DepthAI pkt 10:
    [RECV-123] esp32-cnt:118 | DepthAI pkt 11:
    [RECV-124] esp32-cnt:119 | DepthAI pkt 12:
    [RECV-125] esp32-cnt:120 | DepthAI pkt 13:
    [RECV-126] esp32-cnt:121 | DepthAI pkt 14:
    [RECV-127] esp32-cnt:122 | DepthAI pkt 15:
    [RECV-128] esp32-cnt:123 | DepthAI pkt 16:
    [RECV-129] esp32-cnt:124 | DepthAI pkt 17:
    [RECV-130] esp32-cnt:125 | DepthAI pkt 18:
    [RECV-131] esp32-cnt:126 | DepthAI pkt 19:
    [RECV-132] esp32-cnt:127 | DepthAI pkt 20:
    [RECV-133] esp32-cnt:128 | DepthAI pkt 21:
    [RECV-134] esp32-cnt:129 | DepthAI pkt 22:
    [RECV-135] esp32-cnt:130 | DepthAI pkt 23:
    [RECV-136] esp32-cnt:131 | DepthAI pkt 24:
    [RECV-137] esp32-cnt:132 | DepthAI pkt 25:
    [RECV-138] esp32-cnt:133 | DepthAI pkt 26:
    [RECV-139] esp32-cnt:134 | DepthAI pkt 27:
    [RECV-140] esp32-cnt:135 | DepthAI pkt 28:
    [RECV-141] esp32-cnt:136 | DepthAI pkt 29:
    [RECV-142] esp32-cnt:137 | DepthAI pkt 30:
    [RECV-143] esp32-cnt:138 | DepthAI pkt 31:
    [RECV-144] esp32-cnt:139 | DepthAI pkt 32:
    [RECV-145] esp32-cnt:140 | DepthAI pkt 33:
    					Det0: tvmonitor    57.67% (-0.01,0.02)->(1.00,1.00)
    [RECV-146] esp32-cnt:141 | DepthAI pkt 34:
    [RECV-147] esp32-cnt:142 | DepthAI pkt 35:
    [RECV-148] esp32-cnt:143 | DepthAI pkt 36:
    [RECV-149] esp32-cnt:144 | DepthAI pkt 37:
    					Det0: person       78.27% (0.23,0.23)->(0.41,0.47)
    [RECV-150] esp32-cnt:145 | DepthAI pkt 38:
    					Det0: person       83.30% (0.22,0.22)->(0.41,0.47)
    [RECV-151] esp32-cnt:146 | DepthAI pkt 39:
    					Det0: person       92.43% (0.21,0.23)->(0.41,0.47)
    [RECV-152] esp32-cnt:147 | DepthAI pkt 40:
    					Det0: person       95.61% (0.20,0.22)->(0.41,0.48)
    					Det1: person       62.06% (0.04,0.26)->(0.23,0.48)
    [RECV-153] esp32-cnt:148 | DepthAI pkt 41:
    ...
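
    For reference, here is a minimal sketch of what the ESP32 side of this exchange can look like, based on the ESP-IDF v4.0.1 SPI master driver used by the sender example linked above. Treat it as illustrative only: the pin assignments, SPI mode, clock speed, and 128-byte transaction size are placeholder assumptions rather than the exact values from the linked code, and the GPIO 2 handshake logic visible in the log is omitted for brevity.

    #include <stdio.h>
    #include <string.h>
    #include "driver/spi_master.h"

    #define GPIO_MOSI 12    /* placeholder pins - match your board's wiring */
    #define GPIO_MISO 13
    #define GPIO_SCLK 14
    #define GPIO_CS   15
    #define XFER_SIZE 128   /* bytes per SPI transaction (assumed) */

    void app_main(void)
    {
        spi_bus_config_t buscfg = {
            .mosi_io_num = GPIO_MOSI,
            .miso_io_num = GPIO_MISO,
            .sclk_io_num = GPIO_SCLK,
            .quadwp_io_num = -1,
            .quadhd_io_num = -1,
        };
        spi_device_interface_config_t devcfg = {
            .clock_speed_hz = 1000000,   /* 1 MHz - an assumption */
            .mode = 0,                   /* SPI mode - an assumption */
            .spics_io_num = GPIO_CS,
            .queue_size = 3,
        };
        spi_device_handle_t handle;
        ESP_ERROR_CHECK(spi_bus_initialize(HSPI_HOST, &buscfg, 1));
        ESP_ERROR_CHECK(spi_bus_add_device(HSPI_HOST, &devcfg, &handle));

        char sendbuf[XFER_SIZE];
        char recvbuf[XFER_SIZE];
        int cnt = 0;

        while (1) {
            /* NULL-terminated test string; DepthAI echoes it back together
             * with the NN metadata in the same full-duplex transaction. */
            snprintf(sendbuf, sizeof(sendbuf), "esp32-cnt:%d", cnt);
            memset(recvbuf, 0, sizeof(recvbuf));

            spi_transaction_t t = {
                .length = XFER_SIZE * 8,   /* transaction length is in bits */
                .tx_buffer = sendbuf,
                .rx_buffer = recvbuf,
            };
            ESP_ERROR_CHECK(spi_device_transmit(handle, &t));

            printf("[RECV-%d] %s\n", cnt++, recvbuf);
        }
    }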

    So we do not have image/video output over SPI yet. We have implemented the protocol internally for JPEG output, but we need to abstract this to work through our API system first. We are doing this as part of our Gen2 Pipeline Builder effort (HERE).

    So this is making it take longer, but it will be far more functional when it is done, and it will have better parity across USB/SPI/Ethernet communication.
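
    Regarding the JPEG output mentioned above: a JPEG will not fit in a single SPI transaction, so the receiving side will need to reassemble a frame from several fixed-size transfers. Purely as an illustration (this is not the actual DepthAI protocol, which has not been published), here is a sketch that assumes a hypothetical framing where the first 4 bytes of a message carry the total payload length, little-endian:

    #include <stdint.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    #define CHUNK_SIZE 256   /* bytes clocked per SPI transaction (assumed) */

    /* Stand-in for one full-duplex SPI exchange, e.g. spi_device_transmit()
     * as in the sketch above; no real bus traffic happens here. */
    static void spi_read_chunk(uint8_t buf[CHUNK_SIZE])
    {
        memset(buf, 0, CHUNK_SIZE);
    }

    /* Reassemble one framed message, assuming a hypothetical header in which
     * the first 4 bytes carry the total payload length. Caller frees result. */
    static uint8_t *receive_framed_payload(uint32_t *out_len)
    {
        uint8_t chunk[CHUNK_SIZE];
        spi_read_chunk(chunk);

        uint32_t total;
        memcpy(&total, chunk, sizeof(total));

        uint8_t *payload = malloc(total);
        if (payload == NULL)
            return NULL;

        uint32_t have = (total < CHUNK_SIZE - 4) ? total : CHUNK_SIZE - 4;
        memcpy(payload, chunk + 4, have);

        while (have < total) {   /* keep clocking chunks until frame is complete */
            spi_read_chunk(chunk);
            uint32_t take = (total - have < CHUNK_SIZE) ? total - have : CHUNK_SIZE;
            memcpy(payload + have, chunk, take);
            have += take;
        }
        *out_len = total;
        return payload;
    }

    int main(void)
    {
        uint32_t len = 0;
        uint8_t *jpeg = receive_framed_payload(&len);
        printf("received %u bytes\n", (unsigned)len);
        free(jpeg);
        return 0;
    }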

    Also, we are now shipping our pre-order 0th-revision boards with integrated ESP32, if you are interested.

    Thoughts?

    Thanks,
    Brandon

    11 days later

    Hi Brandon,
    I received the board with the integrated ESP32. Great work on that; it is an eye-catching thing 😁. Thanks for the information provided. Is your current DepthAI example running on the ESP32 using Python, or is there a C variant of the example available? Please point me to the link. Also, is the camera already connected to the Movidius in the example shown above?

    Please verify whether https://github.com/luxonis/depthai-python/commit/619d485#diff-800c94d12c602217488e262d139e7a9f is the program to use for the example shown above.

    Regards,
    Sanjith

    Oh, something I forgot to ask: how are we going to load the trained model(s) (the .xml and .bin files)?

    Thank you in advance 😀

    Hi @sanjith ,

    Thanks for getting it! And sorry about the delay. Somehow my notifications from our own forum are going to SPAM in my mail client... trying to figure out what I misconfigured.

    And yes, the camera is already connected directly to the Movidius. So you can also use the BW1092 board, which has the integrated ESP32, directly over USB to visualize results.

    So yes, the link you are referencing is the current version to use. It's the first proof of concept. We have since made it better and now have the capability to store the .blob file in the onboard NOR flash (the .bin and .xml are converted to a single .blob, which is stored on the board). The engineer responsible for this is in Europe, so he's not online now, but we will provide this tomorrow.

    Sorry again about the delay. Also feel free to join our Discord community - we are often much faster to respond there to technical questions/etc.

    https://discord.gg/EPsZHkg9Nx

    Thanks again,
    Brandon

    11 days later

    I have listed the main requirements of our application below; please guide us on the feasibility and the methodology for satisfying these requirements.

    1. The ESP32 and the VPU shall communicate through a custom protocol developed on top of SPI.
    General Queries
    2. Where can we find the documentation needed to use the BW1092 DepthAI module - user manuals, getting-started guides, application notes, and sample code (VPU and ESP32) - to help us set up the development environment and start developing our application quickly?
    3. With reference to the output of the esp32_app example: is the string format of the output received from the VPU fixed, or is it customisable?
    4. With reference to https://discuss.luxonis.com/d/56-initial-bw1092-esp32-proof-of-concept-code/2, we understood that depthai_flash.fw contains the application firmware, config, and NN blob(s). Does "application firmware" here mean we can write our own custom logic, such as sending startup commands from the VPU to the ESP32 or sending the output string in a custom format of our choosing? Is my understanding correct, or can we not modify/update the application firmware part?

    thank you

    Hi Sanjith,

    Sorry about the delay.

    1. Yes, so Jon on Slack is implementing this now. We have the initial version released here: https://discuss.luxonis.com/d/56-initial-bw1092-esp32-proof-of-concept-code But we will be refactoring this to have more capabilities.
    2. The BW1092 is actually in 'alpha testing' now, so it doesn't have a user manual, getting-started guide, application notes, etc., but it does have sample code, here: https://discuss.luxonis.com/d/56-initial-bw1092-esp32-proof-of-concept-code. We intentionally left our store page for it as a rendering only (even though we have physical units) to make clear that this is first-revision, alpha-testing hardware.
    3. The output will initially be in a fixed protocol format. But once MicroPython support on DepthAI is out (see here; it should be out in December), any custom protocol will be supported, as you can just write it on DepthAI directly. We will support I2C, SPI, and UART communication via MicroPython. A sketch of consuming the current fixed format follows this list.
    4. I'm not sure I understand correctly, so sorry if I'm answering the wrong question. I think you mean the capability to update the firmware binary over SPI - yes, that can be done. The firmware binary can be updated/upgraded and written to NOR flash, and the neural blob can be updated too. But there is no capability to recompile the DepthAI firmware itself. Later we will support MicroPython code running on top of our DepthAI firmware, but the base firmware itself is not, and will not be, available.
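
    Regarding point 3, and purely to illustrate what consuming the current fixed format could look like, below is a small parser for the detection lines shown in the log earlier in this thread, e.g. Det0: person 95.61% (0.20,0.22)->(0.41,0.48). The exact on-wire metadata format is not documented publicly yet, so the assumption that it arrives as text lines of this shape is mine; the sscanf pattern simply mirrors the printed log, and it can be tested on a host machine:

    #include <stdio.h>

    typedef struct {
        int   index;            /* detection number (Det0, Det1, ...) */
        char  label[32];        /* class label, e.g. "person" */
        float confidence;       /* percent */
        float x1, y1, x2, y2;   /* normalized bounding-box corners */
    } detection_t;

    /* Returns 1 on success, 0 if the line does not match the format. */
    static int parse_detection(const char *line, detection_t *d)
    {
        return sscanf(line, "Det%d: %31s %f%% (%f,%f)->(%f,%f)",
                      &d->index, d->label, &d->confidence,
                      &d->x1, &d->y1, &d->x2, &d->y2) == 7;
    }

    /* Demo using a line copied from the log above. */
    int main(void)
    {
        detection_t d;
        if (parse_detection("Det0: person       95.61% (0.20,0.22)->(0.41,0.48)", &d))
            printf("%s %.2f%% box=(%.2f,%.2f)->(%.2f,%.2f)\n",
                   d.label, d.confidence, d.x1, d.y1, d.x2, d.y2);
        return 0;
    }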

    Thoughts?

    Thanks,
    Brandon