FrancisTse

  • Sep 6, 2023
  • Joined Jan 11, 2022
  • 1 best answer
  • BradleyDillon I am so sad to hear about Brandon. Thank you for sharing. Luxonis is truly great. Brandon's vision has resulted in some fantastic products and a team of superb individuals developing and supporting them. I still cannot believe the ease of building real-time imaging pipelines with depth information without needing to fiddle around with the details of stereo vision, and doing all that at a price affordable for experimenting without a big corporate budget. I am only using the OAK-D-Lite now, but I can see all the possibilities to expand into the other products. Hopefully my work, and the work of many others using Luxonis products, will benefit humankind as Brandon envisioned. Best wishes for the future of Luxonis and deepest sympathy to you and Brandon's family for such a great loss.

  • I have created a few pipelines for the OAK-D-Lite. Some are variations of each other that add functions or tweak behavior. So far, I am very impressed with what I can do with the OAK-D-Lite. However, some of the flows are getting pretty complicated. I have been hand-drawing flow diagrams to document what the different pipelines do, and it is getting pretty tedious. Somehow, I remember seeing some kind of tool in the Luxonis documentation but could not find it again. Was I just imagining things, or is there really such a tool? Has anyone else come up with a better way to visualize and document complicated OAK pipelines?
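
    In the meantime, I have been thinking about parsing the pipeline schema JSON (the same JSON that shows up on the "Schema dump" line when running with DEPTHAI_LEVEL=debug) and turning it into a Graphviz DOT file. Something roughly like this, assuming the schema has been saved to a schema.json file (just a sketch, not polished):

    import json

    # Load a pipeline schema saved from the DEPTHAI_LEVEL=debug "Schema dump" output (hypothetical file name)
    with open("schema.json") as f:
        schema = json.load(f)

    # The schema lists nodes as [id, node] pairs; map ids to readable labels like "ColorCamera (0)"
    names = {node_id: f"{node['name']} ({node_id})" for node_id, node in schema["nodes"]}

    # Emit one Graphviz edge per connection, labelled with the output/input port names
    lines = ["digraph pipeline {"]
    for c in schema["connections"]:
        label = c["node1Output"] + " -> " + c["node2Input"]
        lines.append(f'  "{names[c["node1Id"]]}" -> "{names[c["node2Id"]]}" [label="{label}"];')
    lines.append("}")

    with open("pipeline.dot", "w") as f:
        f.write("\n".join(lines))
    # Render with: dot -Tpng pipeline.dot -o pipeline.png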

  • Hello Erik,
    The roi size seems to increase as I increase the camRgb.setPreviewSize() values. When I increase it to 1920x1080, the roi size for the case with the ImageManip node starts to match the roi size with the preview set directly to 300x300.
    However, I am hoping to be able to set preview to 1072x1072 as in the gen2-face-recognition experiment. I want to be able to add spatial detection to the face recognition pipeline so I can tell how far the recognized face is.
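    To illustrate what I am after, here is a rough sketch of the connections I have in mind (untested; the face recognition branches are left out and the blob path is just a placeholder):

    import depthai as dai

    pipeline = dai.Pipeline()

    # Color camera with the larger preview I would like to use
    camRgb = pipeline.create(dai.node.ColorCamera)
    camRgb.setPreviewSize(1072, 1072)
    camRgb.setInterleaved(False)

    # Resize the preview down to the face detector input size
    manip = pipeline.create(dai.node.ImageManip)
    manip.initialConfig.setResize(300, 300)
    camRgb.preview.link(manip.inputImage)

    # Mono cameras and stereo depth to provide the depth frames
    monoLeft = pipeline.create(dai.node.MonoCamera)
    monoRight = pipeline.create(dai.node.MonoCamera)
    monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
    monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
    stereo = pipeline.create(dai.node.StereoDepth)
    monoLeft.out.link(stereo.left)
    monoRight.out.link(stereo.right)

    # Spatial detection network fed by the resized color frames and the depth
    spatialNN = pipeline.create(dai.node.MobileNetSpatialDetectionNetwork)
    spatialNN.setBlobPath("face-detection-retail-0004.blob")  # placeholder path
    spatialNN.setConfidenceThreshold(0.5)
    manip.out.link(spatialNN.input)
    stereo.depth.link(spatialNN.inputDepth)
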
    Hope the FW team can help with this.
    Thanks,
    Francis.

  • I have been experiencing some difficulty in getting consistent depth readings from MobileNetSpatialDetectionNetwork using an OAK-D-Lite running from a Raspberry Pi 4. It appears that this inconsistency in depth readings was caused by misalignment of the depth roi that MobileNetSpatialDetectionNetwork uses to compute the spatial coordinates.

    As an illustration of the issue, I used the example code in spatial_mobilenet.py as a baseline that gives perfect alignment of the roi to the detection bounding box. I modified spatial_mobilenet.py to use the face-detection-retail-0004 model from the model zoo. Here is the spatial_mobilenet_modified.py code that I ended up with.

    When I ran the code with the OAK-D-Lite pointing at a face target, I could see the face being detected and a bounding box drawn around the face on the preview image. On the depth image, there was a clear silhouette of the face target, and the roi box was within the face silhouette as expected. BTW, I was able to move the target around left, right, up, down, closer and farther away, and the roi box always stayed within the face target silhouette. I captured screenshots of one preview and depth window instance for illustration.

    I then modified the code, adding an ImageManip node between the camRgb.preview output and the MobileNetSpatialDetectionNetwork input. The preview size was left at 300x300 and the ImageManip setResize was set to 300x300 as well. Here is the spatial_mobilenet_modified_with_imageManip.py code that I ended up with.
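
    The relevant part of the change is essentially the following (a simplified excerpt using the variable names from the original example, not the full spatial_mobilenet_modified_with_imageManip.py file):

    camRgb.setPreviewSize(300, 300)

    # New ImageManip node inserted between the preview output and the spatial detection network
    manip = pipeline.create(dai.node.ImageManip)
    manip.initialConfig.setResize(300, 300)

    # Before the change, the link was: camRgb.preview.link(spatialDetectionNetwork.input)
    camRgb.preview.link(manip.inputImage)
    manip.out.link(spatialDetectionNetwork.input)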

    When I ran the new spatial_mobilenet_modified_with_imageManip.py code, the roi box on the depth image got smaller and moved toward the upper left. I captured screenshots of one of the preview_with_ImageManip and depth_with_ImageManip window instances for illustration. Please look for the smaller white box in the upper left corner of the depth_with_ImageManip.png image. One interesting observation is that, as I increase the FRAME_SIZE, e.g. to FRAME_SIZE = (800, 800), the roi box size seems to increase proportionally. The larger the FRAME_SIZE, the larger the roi box.

    I do not know if I am using the ImageManip node incorrectly or if this is expected behavior. Has anyone else seen the same behavior? It would be really nice if I could insert ImageManip nodes to build more complex pipelines and keep the depth detections consistent.

    Would appreciate hearing from others who have experimented with building complex pipelines for spatial detections using ImageManip nodes and are getting consistent coordinate results.

    Thanks,
    Francis.

    • erik replied to this.
    • Hello erik ,
      Looks like the system is dynamically deploying the allocated 13 shaves [0-12] for the three NNs and the one shave [15-15] for the four ImageManips. Very Nice!
      Thanks again for all your help and links.
      Best regards,
      Francis.

    • Hello erik ,
      I tried that and got a lot more info on the terminal output:

      francis@raspberrypi:~/Desktop/learningOAK-D-Lite/gen2-face-recognition $ DEPTHAI_LEVEL=debug /bin/python /home/francis/Desktop/learningOAK-D-Lite/gen2-face-recognition/main.py 
      [2023-02-04 15:49:52.694] [debug] Python bindings - version: 2.20.2.0 from  build: 2023-02-01 00:22:21 +0000
      [2023-02-04 15:49:52.694] [debug] Library information - version: 2.20.2, commit:  from , build: 2023-01-31 23:22:01 +0000
      [2023-02-04 15:49:52.698] [debug] Initialize - finished
      Creating pipeline...
      Creating Color Camera...
      Creating Face Detection Neural Network...
      Creating Head pose estimation NN
      Creating face recognition ImageManip/NN
      [2023-02-04 15:49:52.977] [debug] Resources - Archive 'depthai-bootloader-fwp-0.0.24.tar.xz' open: 4ms, archive read: 277ms
      [2023-02-04 15:49:53.169] [debug] Device - OpenVINO version: 2021.4
      [2023-02-04 15:49:53.170] [debug] Device - BoardConfig: {"camera":[],"emmc":null,"gpio":[],"logDevicePrints":null,"logPath":null,"logSizeMax":null,"logVerbosity":null,"network":{"mtu":0,"xlinkTcpNoDelay":true},"nonExclusiveMode":false,"pcieInternalClock":null,"sysctl":[],"uart":[],"usb":{"flashBootedPid":63037,"flashBootedVid":999,"maxSpeed":4,"pid":63035,"vid":999},"usb3PhyInternalClock":null,"watchdogInitialDelayMs":null,"watchdogTimeoutMs":null} 
      libnop:
      0000: b9 10 b9 05 81 e7 03 81 3b f6 81 e7 03 81 3d f6 04 b9 02 00 01 ba 00 be be bb 00 bb 00 be be be
      0020: be be be be 00 bb 00
      [2023-02-04 15:49:53.883] [debug] Resources - Archive 'depthai-device-fwp-8c3d6ac1c77b0bf7f9ea6fd4d962af37663d2fbd.tar.xz' open: 4ms, archive read: 1183ms
      [2023-02-04 15:49:54.747] [debug] Searching for booted device: DeviceInfo(name=1.1.1, mxid=18443010F1AECE1200, X_LINK_BOOTED, X_LINK_USB_VSC, X_LINK_MYRIAD_X, X_LINK_SUCCESS), name used as hint only
      [18443010F1AECE1200] [1.1.1] [1.190] [system] [info] Memory Usage - DDR: 0.12 / 340.42 MiB, CMX: 2.05 / 2.50 MiB, LeonOS Heap: 7.24 / 77.32 MiB, LeonRT Heap: 2.89 / 41.23 MiB
      [18443010F1AECE1200] [1.1.1] [1.190] [system] [info] Temperatures - Average: 34.28 °C, CSS: 34.64 °C, MSS 34.64 °C, UPA: 33.45 °C, DSS: 34.40 °C
      [18443010F1AECE1200] [1.1.1] [1.190] [system] [info] Cpu Usage - LeonOS 27.49%, LeonRT: 1.48%
      [2023-02-04 15:49:56.024] [debug] Schema dump: {"connections":[{"node1Id":10,"node1Output":"out","node1OutputGroup":"","node2Id":11,"node2Input":"in","node2InputGroup":""},{"node1Id":9,"node1Output":"out","node1OutputGroup":"","node2Id":10,"node2Input":"in","node2InputGroup":""},{"node1Id":6,"node1Output":"manip2_img","node1OutputGroup":"io","node2Id":9,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":6,"node1Output":"manip2_cfg","node1OutputGroup":"io","node2Id":9,"node2Input":"inputConfig","node2InputGroup":""},{"node1Id":7,"node1Output":"out","node1OutputGroup":"","node2Id":8,"node2Input":"in","node2InputGroup":""},{"node1Id":6,"node1Output":"manip_img","node1OutputGroup":"io","node2Id":7,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":6,"node1Output":"manip_cfg","node1OutputGroup":"io","node2Id":7,"node2Input":"inputConfig","node2InputGroup":""},{"node1Id":8,"node1Output":"passthrough","node1OutputGroup":"","node2Id":6,"node2Input":"headpose_pass","node2InputGroup":"io"},{"node1Id":8,"node1Output":"out","node1OutputGroup":"","node2Id":6,"node2Input":"headpose_in","node2InputGroup":"io"},{"node1Id":2,"node1Output":"out","node1OutputGroup":"","node2Id":6,"node2Input":"preview","node2InputGroup":"io"},{"node1Id":4,"node1Output":"passthrough","node1OutputGroup":"","node2Id":6,"node2Input":"face_pass","node2InputGroup":"io"},{"node1Id":4,"node1Output":"out","node1OutputGroup":"","node2Id":6,"node2Input":"face_det_in","node2InputGroup":"io"},{"node1Id":4,"node1Output":"out","node1OutputGroup":"","node2Id":5,"node2Input":"in","node2InputGroup":""},{"node1Id":3,"node1Output":"out","node1OutputGroup":"","node2Id":4,"node2Input":"in","node2InputGroup":""},{"node1Id":2,"node1Output":"out","node1OutputGroup":"","node2Id":3,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":0,"node1Output":"preview","node1OutputGroup":"","node2Id":2,"node2Input":"inputImage","node2InputGroup":""},{"node1Id":0,"node1Output":"video","node1OutputGroup":"","node2Id":1,"node2Input":"in","node2InputGroup":""}],"globalProperties":{"calibData":null,"cameraTuningBlobSize":null,"cameraTuningBlobUri":"","leonCssFrequencyHz":700000000.0,"leonMssFrequencyHz":700000000.0,"pipelineName":null,"pipelineVersion":null,"xlinkChunkSize":-1},"nodes":[[0,{"id":0,"ioInfo":[[["","video"],{"blocking":false,"group":"","id":41,"name":"video","queueSize":8,"type":0,"waitForMessage":false}],[["","still"],{"blocking":false,"group":"","id":39,"name":"still","queueSize":8,"type":0,"waitForMessage":false}],[["","isp"],{"blocking":false,"group":"","id":38,"name":"isp","queueSize":8,"type":0,"waitForMessage":false}],[["","preview"],{"blocking":false,"group":"","id":40,"name":"preview","queueSize":8,"type":0,"waitForMessage":false}],[["","raw"],{"blocking":false,"group":"","id":37,"name":"raw","queueSize":8,"type":0,"waitForMessage":false}],[["","frameEvent"],{"blocking":false,"group":"","id":36,"name":"frameEvent","queueSize":8,"type":0,"waitForMessage":false}],[["","inputConfig"],{"blocking":false,"group":"","id":35,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":false}],[["","inputControl"],{"blocking":true,"group":"","id":34,"name":"inputControl","queueSize":8,"type":3,"waitForMessage":false}]],"name":"ColorCamera","properties":[185,24,185,27,0,3,0,0,0,185,3,0,0,0,185,5,0,0,0,0,0,185,5,0,0,0,0,0,0,0,0,0,0,0,0,185,3,0,0,0,185,3,0,0,0,0,0,0,0,0,0,0,0,0,0,0,189,0,255,0,0,0,129,48,4,129,48,4,133,48,4,133,48,4,255,255,0,136,0,0,240,65,136,0,0,128,191,136,0,0,128,191,1,185,4,0,0,0,0,3,3,4,4,4]}],[1,{"id":1,
"ioInfo":[[["","in"],{"blocking":true,"group":"","id":33,"name":"in","queueSize":8,"type":3,"waitForMessage":true}]],"name":"XLinkOut","properties":[185,3,136,0,0,128,191,189,5,99,111,108,111,114,0]}],[2,{"id":2,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":32,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","inputConfig"],{"blocking":true,"group":"","id":31,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":false}],[["","inputImage"],{"blocking":true,"group":"","id":30,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}]],"name":"ImageManip","properties":[185,6,185,8,185,7,185,4,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,185,3,185,2,136,0,0,0,0,136,0,0,0,0,185,2,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,0,136,0,0,128,63,136,0,0,128,63,0,1,185,15,0,0,0,0,0,0,186,0,1,0,186,0,0,0,136,0,0,0,0,0,1,185,6,32,0,0,0,0,133,255,0,0,0,0,0,0,134,0,155,52,0,20,0,0,189,0]}],[3,{"id":3,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":29,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","inputConfig"],{"blocking":true,"group":"","id":28,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":false}],[["","inputImage"],{"blocking":true,"group":"","id":27,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}]],"name":"ImageManip","properties":[185,6,185,8,185,7,185,4,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,185,3,185,2,136,0,0,0,0,136,0,0,0,0,185,2,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,0,136,0,0,128,63,136,0,0,128,63,0,1,185,15,133,44,1,133,44,1,0,0,0,0,186,0,1,0,186,0,0,0,136,0,0,0,0,0,1,185,6,32,0,0,0,0,133,255,0,0,1,0,0,0,134,0,0,16,0,4,0,0,189,0]}],[4,{"id":4,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":26,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","passthrough"],{"blocking":false,"group":"","id":25,"name":"passthrough","queueSize":8,"type":0,"waitForMessage":false}],[["","in"],{"blocking":true,"group":"","id":24,"name":"in","queueSize":5,"type":3,"waitForMessage":true}]],"name":"DetectionNetwork","properties":[185,6,130,0,106,20,0,189,12,97,115,115,101,116,58,95,95,98,108,111,98,8,0,0,185,7,1,136,0,0,0,63,0,0,186,0,187,0,136,0,0,0,0]}],[5,{"id":5,"ioInfo":[[["","in"],{"blocking":true,"group":"","id":23,"name":"in","queueSize":8,"type":3,"waitForMessage":true}]],"name":"XLinkOut","properties":[185,3,136,0,0,128,191,189,9,100,101,116,101,99,116,105,111,110,0]}],[6,{"id":6,"ioInfo":[[["io","manip_img"],{"blocking":false,"group":"io","id":21,"name":"manip_img","queueSize":8,"type":0,"waitForMessage":false}],[["io","manip2_cfg"],{"blocking":false,"group":"io","id":20,"name":"manip2_cfg","queueSize":8,"type":0,"waitForMessage":false}],[["io","face_det_in"],{"blocking":true,"group":"io","id":18,"name":"face_det_in","queueSize":8,"type":3,"waitForMessage":false}],[["io","manip2_img"],{"blocking":false,"group":"io","id":19,"name":"manip2_img","queueSize":8,"type":0,"waitForMessage":false}],[["io","face_pass"],{"blocking":true,"group":"io","id":17,"name":"face_pass","queueSize":8,"type":3,"waitForMessage":false}],[["io","manip_cfg"],{"blocking":false,"group":"io","id":22,"name":"manip_cfg","queueSize":8,"type":0,"waitForMessage":false}],[["io","preview"],{"blocking":true,"group":"io","id":16,"name":"preview","queueSize":8,"type":3,"waitForMessage":false}],[["io","headpose_in"],{"blocking":true,"group":"io","id":15,"name":"headpose_in","queueSize":8,"type":3,"waitForMessage":false}],[["io","headpose_pass"],{"blocking":true,"group":"io","id":14,"name":"headpose_pass","queueSize":8,"type":
3,"waitForMessage":false}]],"name":"Script","properties":[185,3,189,14,97,115,115,101,116,58,95,95,115,99,114,105,112,116,189,8,60,115,99,114,105,112,116,62,0]}],[7,{"id":7,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":13,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","inputConfig"],{"blocking":true,"group":"","id":12,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":true}],[["","inputImage"],{"blocking":true,"group":"","id":11,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}]],"name":"ImageManip","properties":[185,6,185,8,185,7,185,4,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,185,3,185,2,136,0,0,0,0,136,0,0,0,0,185,2,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,0,136,0,0,128,63,136,0,0,128,63,0,1,185,15,60,60,0,0,0,0,186,0,1,0,186,0,0,0,136,0,0,0,0,0,1,185,6,32,0,0,0,0,133,255,0,0,1,0,0,0,134,0,0,16,0,4,0,0,189,0]}],[8,{"id":8,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":10,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","passthrough"],{"blocking":false,"group":"","id":9,"name":"passthrough","queueSize":8,"type":0,"waitForMessage":false}],[["","in"],{"blocking":true,"group":"","id":8,"name":"in","queueSize":5,"type":3,"waitForMessage":true}]],"name":"NeuralNetwork","properties":[185,5,130,64,151,58,0,189,12,97,115,115,101,116,58,95,95,98,108,111,98,8,0,0]}],[9,{"id":9,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":7,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","inputConfig"],{"blocking":true,"group":"","id":6,"name":"inputConfig","queueSize":8,"type":3,"waitForMessage":true}],[["","inputImage"],{"blocking":true,"group":"","id":5,"name":"inputImage","queueSize":8,"type":3,"waitForMessage":true}]],"name":"ImageManip","properties":[185,6,185,8,185,7,185,4,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,185,3,185,2,136,0,0,0,0,136,0,0,0,0,185,2,136,0,0,0,0,136,0,0,0,0,136,0,0,0,0,0,136,0,0,128,63,136,0,0,128,63,0,1,185,15,112,112,0,0,0,0,186,0,1,0,186,0,0,0,136,0,0,0,0,0,1,185,6,32,0,0,0,0,133,255,0,0,1,0,0,0,134,0,0,16,0,4,0,0,189,0]}],[10,{"id":10,"ioInfo":[[["","out"],{"blocking":false,"group":"","id":4,"name":"out","queueSize":8,"type":0,"waitForMessage":false}],[["","passthrough"],{"blocking":false,"group":"","id":3,"name":"passthrough","queueSize":8,"type":0,"waitForMessage":false}],[["","in"],{"blocking":true,"group":"","id":2,"name":"in","queueSize":5,"type":3,"waitForMessage":true}]],"name":"NeuralNetwork","properties":[185,5,130,0,49,73,0,189,12,97,115,115,101,116,58,95,95,98,108,111,98,8,0,0]}],[11,{"id":11,"ioInfo":[[["","in"],{"blocking":true,"group":"","id":1,"name":"in","queueSize":8,"type":3,"waitForMessage":true}]],"name":"XLinkOut","properties":[185,3,136,0,0,128,191,189,11,114,101,99,111,103,110,105,116,105,111,110,0]}]]}
      [2023-02-04 15:49:56.024] [debug] Asset map dump: {"map":{"/node/10/__blob":{"alignment":64,"offset":0,"size":4796672},"/node/4/__blob":{"alignment":64,"offset":8640256,"size":1337856},"/node/6/__script":{"alignment":64,"offset":8636480,"size":3776},"/node/8/__blob":{"alignment":64,"offset":4796672,"size":3839808}}}
      [18443010F1AECE1200] [1.1.1] [1.689] [system] [info] SIPP (Signal Image Processing Pipeline) internal buffer size '16384'B
      [18443010F1AECE1200] [1.1.1] [1.724] [system] [info] ImageManip internal buffer size '393408'B, shave buffer size '35840'B
      [18443010F1AECE1200] [1.1.1] [1.724] [system] [info] NeuralNetwork allocated resources: shaves: [0-12] cmx slices: [0-12] 
      ColorCamera allocated resources: no shaves; cmx slices: [13-15] 
      ImageManip allocated resources: shaves: [15-15] no cmx slices. 
      
      [18443010F1AECE1200] [1.1.1] [1.948] [NeuralNetwork(10)] [info] Needed resources: shaves: 6, ddr: 2007040 
      [18443010F1AECE1200] [1.1.1] [1.951] [DetectionNetwork(4)] [info] Needed resources: shaves: 6, ddr: 2728832 
      [18443010F1AECE1200] [1.1.1] [1.952] [NeuralNetwork(8)] [info] Needed resources: shaves: 6, ddr: 21632 
      [18443010F1AECE1200] [1.1.1] [1.970] [NeuralNetwork(10)] [info] Inference thread count: 2, number of shaves allocated per thread: 6, number of Neural Compute Engines (NCE) allocated per thread: 1
      [18443010F1AECE1200] [1.1.1] [1.970] [DetectionNetwork(4)] [info] Inference thread count: 2, number of shaves allocated per thread: 6, number of Neural Compute Engines (NCE) allocated per thread: 1
      [18443010F1AECE1200] [1.1.1] [1.970] [NeuralNetwork(8)] [info] Inference thread count: 2, number of shaves allocated per thread: 6, number of Neural Compute Engines (NCE) allocated per thread: 1
      [18443010F1AECE1200] [1.1.1] [2.192] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 26.99 / 77.32 MiB, LeonRT Heap: 8.17 / 41.23 MiB
      [18443010F1AECE1200] [1.1.1] [2.192] [system] [info] Temperatures - Average: 36.00 °C, CSS: 36.77 °C, MSS 35.59 °C, UPA: 35.83 °C, DSS: 35.83 °C
      [18443010F1AECE1200] [1.1.1] [2.192] [system] [info] Cpu Usage - LeonOS 53.33%, LeonRT: 30.49%
      [18443010F1AECE1200] [1.1.1] [3.194] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.48 / 77.32 MiB, LeonRT Heap: 8.22 / 41.23 MiB
      [18443010F1AECE1200] [1.1.1] [3.194] [system] [info] Temperatures - Average: 36.71 °C, CSS: 37.24 °C, MSS 36.30 °C, UPA: 36.77 °C, DSS: 36.53 °C
      [18443010F1AECE1200] [1.1.1] [3.194] [system] [info] Cpu Usage - LeonOS 45.36%, LeonRT: 29.64%
      [18443010F1AECE1200] [1.1.1] [4.195] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.49 / 77.32 MiB, LeonRT Heap: 8.22 / 41.23 MiB
      [18443010F1AECE1200] [1.1.1] [4.195] [system] [info] Temperatures - Average: 37.30 °C, CSS: 38.41 °C, MSS 36.77 °C, UPA: 37.01 °C, DSS: 37.01 °C
      [18443010F1AECE1200] [1.1.1] [4.195] [system] [info] Cpu Usage - LeonOS 37.73%, LeonRT: 29.79%
      [2023-02-04 15:49:59.865] [debug] Device about to be closed...
      [2023-02-04 15:49:59.884] [debug] Timesync thread exception caught: Couldn't read data from stream: '__timesync' (X_LINK_ERROR)
      [2023-02-04 15:49:59.884] [debug] DataOutputQueue (detection) closed
      [2023-02-04 15:49:59.884] [debug] Log thread exception caught: Couldn't read data from stream: '__log' (X_LINK_ERROR)
      [2023-02-04 15:49:59.884] [debug] DataOutputQueue (recognition) closed
      [2023-02-04 15:49:59.884] [debug] DataOutputQueue (color) closed
      [2023-02-04 15:50:00.579] [debug] Watchdog thread exception caught: Couldn't write data to stream: '__watchdog' (X_LINK_ERROR)
      [2023-02-04 15:50:01.954] [debug] XLinkResetRemote of linkId: (0)
      [2023-02-04 15:50:01.954] [debug] Device closed, 2089

      I hit the "q" key at the end. That is probably when the Device closed. I will have to look over the output to study what it says.

      Thanks,
      Francis.

      • erik replied to this.
      • Hello erik ,
        Following the information in the documentation on debugging DepthAI pipelines, I added the # Set debugging level block of code to the main.py program, as shown here:

        with dai.Device(pipeline) as device:
        
            # Set debugging level
            device.setLogLevel(dai.LogLevel.DEBUG)
            device.setLogOutputLevel(dai.LogLevel.DEBUG)
        
            facerec = FaceRecognition(databases, args.name)
            sync = TwoStageHostSeqSync()
            text = TextHelper()

        With that, I got some info about memory usage, temperatures, etc., but there was no info about shave usage:

        francis@raspberrypi:~/Desktop/learningOAK-D-Lite/gen2-face-recognition $ /bin/python /home/francis/Desktop/learningOAK-D-Lite/gen2-face-recognition/main.py
        Creating pipeline...
        Creating Color Camera...
        Creating Face Detection Neural Network...
        Creating Head pose estimation NN
        Creating face recognition ImageManip/NN
        [18443010F1AECE1200] [1.1.1] [2.157] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.42 / 77.32 MiB, LeonRT Heap: 8.18 / 41.23 MiB
        [18443010F1AECE1200] [1.1.1] [2.157] [system] [info] Temperatures - Average: 37.36 °C, CSS: 38.18 °C, MSS 37.71 °C, UPA: 36.53 °C, DSS: 37.01 °C
        [18443010F1AECE1200] [1.1.1] [2.157] [system] [info] Cpu Usage - LeonOS 58.59%, LeonRT: 31.94%
        [18443010F1AECE1200] [1.1.1] [3.158] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.49 / 77.32 MiB, LeonRT Heap: 8.22 / 41.23 MiB
        [18443010F1AECE1200] [1.1.1] [3.158] [system] [info] Temperatures - Average: 38.18 °C, CSS: 38.65 °C, MSS 37.71 °C, UPA: 38.18 °C, DSS: 38.18 °C
        [18443010F1AECE1200] [1.1.1] [3.158] [system] [info] Cpu Usage - LeonOS 44.81%, LeonRT: 35.28%
        [18443010F1AECE1200] [1.1.1] [4.159] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.49 / 77.32 MiB, LeonRT Heap: 8.22 / 41.23 MiB
        [18443010F1AECE1200] [1.1.1] [4.159] [system] [info] Temperatures - Average: 38.65 °C, CSS: 38.65 °C, MSS 39.11 °C, UPA: 38.41 °C, DSS: 38.41 °C
        [18443010F1AECE1200] [1.1.1] [4.159] [system] [info] Cpu Usage - LeonOS 32.53%, LeonRT: 26.04%
        [18443010F1AECE1200] [1.1.1] [5.160] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.50 / 77.32 MiB, LeonRT Heap: 8.22 / 41.23 MiB
        [18443010F1AECE1200] [1.1.1] [5.160] [system] [info] Temperatures - Average: 38.76 °C, CSS: 39.58 °C, MSS 38.88 °C, UPA: 38.18 °C, DSS: 38.41 °C
        [18443010F1AECE1200] [1.1.1] [5.160] [system] [info] Cpu Usage - LeonOS 27.66%, LeonRT: 23.91%
        [18443010F1AECE1200] [1.1.1] [6.161] [system] [info] Memory Usage - DDR: 132.59 / 340.42 MiB, CMX: 2.49 / 2.50 MiB, LeonOS Heap: 27.50 / 77.32 MiB, LeonRT Heap: 8.22 / 41.23 MiB
        [18443010F1AECE1200] [1.1.1] [6.161] [system] [info] Temperatures - Average: 39.29 °C, CSS: 39.58 °C, MSS 39.11 °C, UPA: 39.35 °C, DSS: 39.11 °C

        Is there something else I have to do?

        Thanks,
        Francis.

        • erik replied to this.
        • Hello Erik, thank you for your explanation and link. There is so much to learn. I have some follow-up questions:

          1. Is it true that, as long as the NNs consume the same number of shaves, they can share the same resources?
          2. Is there a limit to how many NNs can share the same resources, and is there a performance degradation if I add more NNs?
          3. How many shaves do other nodes such as ImageManip consume?
          4. Is there a way I can estimate how many shaves a pipeline consumes?

          Thanks, Francis.

          • erik replied to this.
          • Hello all,

            I am new to building OAK-D pipelines. I am running examples and creating my own pipelines to learn. My understanding is that a pipeline can only use 13 shaves when the ColorCamera is set to 1080p.

            However, when I was looking over the code in depthai-experiments/gen2-face-recognition/main.py, it appears that the pipeline uses 18 shaves: 6 used by face_det_nn, 6 by headpose_nn and 6 by face_rec_nn. Am I missing some trick the code uses so that more shaves can be used, or did I misunderstand the code?

            Any help in understanding this is much appreciated.

            • erik replied to this.
            • Hello Erik, how can I attach the zipped folder? I tried many times using the "Press or paste to upload" button at the bottom left but have not been successful.

              • erik replied to this.
              • Hello Erik, it turned out that I had depthai version 2.15.0.0, and it is now updated to 2.19.1.0. The critical error went away, but I now get a different error and the pipeline seems to get stuck and never finishes:

                francis@raspberrypi:~/Desktop/learningOAK-D-Lite $ /bin/python /home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py
                [18443010F1AECE1200] [1.1.1] [1.319] [ImageManip(1)] [error] Not possible to create warp params. Error: WARP_SWCH_ERR_UNSUPORTED_IMAGE_FORMAT 
                
                [18443010F1AECE1200] [1.1.1] [1.319] [ImageManip(1)] [error] Invalid configuration or input image - skipping frame

                Any idea? BTW, I tested the direct XLinkIn to XLinkOut code with the updated depthai and it is still working as before. The video was read from the file and displayed with cv2.imshow().
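
                One thing I am wondering is whether the ImageManip node simply cannot tell what format the frames I send from the host are in. If that is the case, maybe setting the frame type explicitly on the ImgFrame would help. Something like this in the frame-sending loop of my test program (just a guess, I have not verified that this fixes it):

                img = dai.ImgFrame()
                img.setType(dai.RawImgFrame.Type.BGR888p)  # guess: declare the planar BGR layout of the data
                img.setData(to_planar(frame, (FRAME_SIZE[0], FRAME_SIZE[1])))
                img.setTimestamp(monotonic())
                img.setWidth(FRAME_SIZE[0])
                img.setHeight(FRAME_SIZE[1])
                q_inFrame.send(img)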

                Thanks,
                Francis.

                • erik replied to this.
                • Hello Erik, I built my depthai installation in January 2022, so it is probably rather old. How do I find out what version I have, and how would I upgrade it to the latest? I am running depthai on a Raspberry Pi 4 board with Raspberry Pi OS:

                  francis@raspberrypi:~ $ cat /etc/os-release
                  PRETTY_NAME="Debian GNU/Linux 11 (bullseye)"
                  NAME="Debian GNU/Linux"
                  VERSION_ID="11"
                  VERSION="11 (bullseye)"
                  VERSION_CODENAME=bullseye
                  ID=debian
                  HOME_URL="https://www.debian.org/"
                  SUPPORT_URL="https://www.debian.org/support"
                  BUG_REPORT_URL="https://bugs.debian.org/"

                  BTW, are the depthai and depthai-sdk two different installations? Do I have to update each separately?
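
                  For my own reference, I assume checking and upgrading would be along these lines (please correct me if this is not the recommended way):

                  # Check the installed depthai version from Python
                  import depthai as dai
                  print(dai.__version__)

                  # Upgrade from the shell with: python3 -m pip install --upgrade depthai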

                  Thanks,
                  Francis.

                  • erik replied to this.
                  • Sorry for the ugly format. Here is the code again:

                    import cv2
                    import depthai as dai
                    import numpy as np
                    from time import monotonic
                    
                    # Define Frame
                    FRAME_SIZE = (640, 400)
                    DET_INPUT_SIZE = (300,300)
                    
                    # Define input file and capture source
                    fileName = "Test_Videos/test640x400.mp4"
                    
                    # Start defining a pipeline
                    pipeline = dai.Pipeline()
                    
                    # Define an input stream node
                    xinFrame_in = pipeline.createXLinkIn()
                    xinFrame_in.setStreamName("inFrame")
                    
                    # Create ImageManip node
                    manip = pipeline.createImageManip()                                  # create the imageManip node
                    manip.initialConfig.setResize(DET_INPUT_SIZE[0], DET_INPUT_SIZE[1])  # scale image to detection NN need
                    manip.initialConfig.setKeepAspectRatio(False)
                    
                    # Create a output stream node
                    x_manip_out = pipeline.createXLinkOut()
                    x_manip_out.setStreamName("outFrame")
                    
                    # Link input stream to manip to output stream
                    xinFrame_in.out.link(manip.inputImage)
                    manip.out.link(x_manip_out.input)
                    
                    # Start pipeline
                    with dai.Device(pipeline) as device:
                    
                        # Input queue will be used to send video frames from the file to the device.
                        q_inFrame = device.getInputQueue(name="inFrame")
                    
                        # Output queue to be used to view what is sent to the nn.
                        q_outFrame = device.getOutputQueue(name="outFrame", maxSize=1, blocking=False)
                    
                        frame = None
                    
                        def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
                            return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()
                    
                        cap = cv2.VideoCapture(fileName)
                    
                        while cap.isOpened():
                    
                            # Get frame from file and send to xLink input
                            ret, frame = cap.read()
                            img = dai.ImgFrame()
                            img.setData(to_planar(frame, (FRAME_SIZE[0], FRAME_SIZE[1])))
                            img.setTimestamp(monotonic())
                            img.setWidth(FRAME_SIZE[0])
                            img.setHeight(FRAME_SIZE[1])
                            q_inFrame.send(img)
                    
                            out_manip = q_outFrame.get()
                            manip_frame = out_manip.getCvFrame()
                    
                            # Capture the key pressed
                            key_pressed = cv2.waitKey(1) & 0xff
                    
                            # Stop the program if Esc key was pressed
                            if key_pressed == 27:
                                break
                    
                            # Display the video input frame and the manip output
                            cv2.imshow("Direct video from file", frame)
                            cv2.imshow("manip output", manip_frame)
                    
                    cap.release()
                    cv2.destroyAllWindows()

                    And here is the error I got:

                    francis@raspberrypi:~/Desktop/learningOAK-D-Lite $ /bin/python /home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py
                    [18443010F1AECE1200] [247.678] [system] [critical] Fatal error. Please report to developers. Log: 'ImageManipHelper' '61'
                    Traceback (most recent call last):
                      File "/home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py", line 71, in <module>
                        manip_frame = out_manip.getCvFrame()
                    AttributeError: 'NoneType' object has no attribute 'getCvFrame'
                    Stack trace (most recent call last):
                    #14   Object "/bin/python", at 0x587533, in 
                    #13   Object "/lib/aarch64-linux-gnu/libc.so.6", at 0x7f8bdb2217, in __libc_start_main
                    #12   Object "/bin/python", at 0x587637, in Py_BytesMain
                    #11   Object "/bin/python", at 0x5b79eb, in Py_RunMain
                    #10   Object "/bin/python", at 0x5c958f, in Py_FinalizeEx
                    #9    Object "/bin/python", at 0x5cdde3, in 
                    #8    Object "/bin/python", at 0x5ce40f, in _PyGC_CollectNoFail
                    #7    Object "/bin/python", at 0x485b1b, in 
                    #6    Object "/bin/python", at 0x5bdabf, in 
                    #5    Object "/bin/python", at 0x525723, in PyDict_Clear
                    #4    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f796b006f, in 
                    #3    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f7976fa77, in 
                    #2    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f798e6a03, in dai::DataOutputQueue::~DataOutputQueue()
                    #1    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f798e3c97, in 
                    #0    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f799e29a4, in 
                    Segmentation fault (Invalid permissions for mapped object [0x7f796c8b68])
                    Segmentation fault
                    • erik replied to this.
                    • I am trying to test OAK-D-Lite pipelines by sending a recorded .mp4 file through the device. I am basically following the Video & MobilenetSSD example code. I started out by simplifying the code to feed the .mp4 file to an XLinkIn node, send the frames back out through an XLinkOut node, and view the output with cv2.imshow(), and all was fine.

                      However, when I added an ImageManip node between the XLinkIn and XLinkOut nodes, I got a fatal error. I have used an ImageManip node with the same parameters in another pipeline and it worked fine. Did I do something wrong, or did I come across a bug of some sort?

                      The following is the code I am running

                      import cv2
                      import depthai as dai
                      import numpy as np
                      from time import monotonic
                      
                      # Define Frame
                      FRAME_SIZE = (640, 400)
                      DET_INPUT_SIZE = (300,300)
                      
                      # Define input file and capture source
                      fileName = "Test_Videos/test640x400.mp4"
                      
                      # Start defining a pipeline
                      pipeline = dai.Pipeline()
                      
                      # Define an input stream node
                      xinFrame_in = pipeline.createXLinkIn()
                      xinFrame_in.setStreamName("inFrame")
                      
                      # Create ImageManip node
                      manip = pipeline.createImageManip()                                  # create the imageManip node
                      manip.initialConfig.setResize(DET_INPUT_SIZE[0], DET_INPUT_SIZE[1])  # scale image to detection NN need
                      manip.initialConfig.setKeepAspectRatio(False)
                      
                      # Create a output stream node
                      x_manip_out = pipeline.createXLinkOut()
                      x_manip_out.setStreamName("outFrame")
                      
                      # Link input stream to manip to output stream
                      xinFrame_in.out.link(manip.inputImage)
                      manip.out.link(x_manip_out.input)
                      
                      # Start pipeline
                      with dai.Device(pipeline) as device:
                      
                          # Input queue will be used to send video frames from the file to the device.
                          q_inFrame = device.getInputQueue(name="inFrame")
                      
                          # Output queue to be used to view what is sent to the nn.
                          q_outFrame = device.getOutputQueue(name="outFrame", maxSize=1, blocking=False)
                      
                          frame = None
                      
                          def to_planar(arr: np.ndarray, shape: tuple) -> np.ndarray:
                              return cv2.resize(arr, shape).transpose(2, 0, 1).flatten()
                      
                          cap = cv2.VideoCapture(fileName)
                      
                          while cap.isOpened():
                      
                              # Get frame from file and send to xLink input
                              ret, frame = cap.read()
                              img = dai.ImgFrame()
                              img.setData(to_planar(frame, (FRAME_SIZE[0], FRAME_SIZE[1])))
                              img.setTimestamp(monotonic())
                              img.setWidth(FRAME_SIZE[0])
                              img.setHeight(FRAME_SIZE[1])
                              q_inFrame.send(img)
                      
                              out_manip = q_outFrame.get()
                              manip_frame = out_manip.getCvFrame()
                      
                              # Capture the key pressed
                              key_pressed = cv2.waitKey(1) & 0xff
                      
                              # Stop the program if Esc key was pressed
                              if key_pressed == 27:
                                  break
                      
                              # Display the video input frame and the manip output
                              cv2.imshow("Direct video from file", frame)
                              cv2.imshow("manip output", manip_frame)
                      
                      cap.release()
                      cv2.destroyAllWindows()

                      The following is the error I got:
                      francis@raspberrypi:~/Desktop/learningOAK-D-Lite $ /bin/python /home/francis/Desktop/learningOAK-D-Lite/pipeline_file_in_imageManip.py
                      [18443010F1AECE1200] [1038.519] [system] [critical] Fatal error. Please report to developers. Log: 'ImageManipHelper' '61'
                      Stack trace (most recent call last):
                      #16   Object "/bin/python", at 0x587533, in 
                      #15   Object "/lib/aarch64-linux-gnu/libc.so.6", at 0x7f9c6d9217, in __libc_start_main
                      #14   Object "/bin/python", at 0x587637, in Py_BytesMain
                      #13   Object "/bin/python", at 0x5b7afb, in Py_RunMain
                      #12   Object "/bin/python", at 0x5c7c37, in PyRun_SimpleFileExFlags
                      #11   Object "/bin/python", at 0x5c8457, in 
                      #10   Object "/bin/python", at 0x5c251f, in 
                      #9    Object "/bin/python", at 0x5c850b, in 
                      #8    Object "/bin/python", at 0x5976fb, in PyEval_EvalCode
                      #7    Object "/bin/python", at 0x49628f, in _PyEval_EvalCodeWithName
                      #6    Object "/bin/python", at 0x4964f7, in 
                      #5    Object "/bin/python", at 0x49c257, in _PyEval_EvalFrameDefault
                      #4    Object "/bin/python", at 0x4c6cc7, in 
                      #3    Object "/bin/python", at 0x4a52ff, in _PyObject_MakeTpCall
                      #2    Object "/bin/python", at 0x4cac53, in 
                      #1    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f89fd8037, in 
                      #0    Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f8a09cd00, in 
                      Segmentation fault (Address not mapped to object [0x219])
                      Segmentation fault
                    • Hello Erik, thanks for confirming the issue. This gives me a better understanding of what the MobileNetDetectionNetwork documentation says: "MobileNet detection network node is very similar to NeuralNetwork (in fact it extends it). The only difference is that this node is specifically for the MobileNet NN and it decodes the result of the NN on device. This means that the output of this node is not a byte array but ImgDetections that can easily be used in your code." I will evaluate the pre-compiled models with compatible outputs for now and experiment with editing and compiling models later. Thanks for the link to the onnx tools. It will take me some time to learn more about NN models and how to use the tools. Best regards, Francis.
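
                      P.S. For my own notes, my understanding is that the decoded ImgDetections can then be consumed directly, roughly like this (a sketch, assuming the detection output queue is named qDet):

                      inDet = qDet.get()  # ImgDetections message, already decoded on the device
                      for det in inDet.detections:
                          # Each detection carries a label, a confidence and a normalized bounding box
                          print(det.label, det.confidence, det.xmin, det.ymin, det.xmax, det.ymax)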

                    • Hello Erik, sorry I did not have a chance to try out the changes extensively until now. When I increased the ImageManip maximum frame size, I got another error. The program would keep running and display the output windows until a person walked into the FOV, and then it would stop with this error message:
                      [18443010F1AECE1200] [1056.112] [SpatialDetectionNetwork(4)] [critical] Fatal error in openvino '2021.4'. Likely because the model was compiled for different openvino version. If you want to select an explicit openvino version use: setOpenVINOVersion while creating pipeline. If error persists please report to developers. Log: 'Gather' '217'
                      [18443010F1AECE1200] [1059.100] [system] [critical] Fatal error. Please report to developers. Log: 'Fatal error on MSS CPU: trap: 00, address: 00000000' '0'
                      Traceback (most recent call last):
                      File "/home/francis/Desktop/learningOAK-D-Lite/spacial_face_det copy.py", line 161, in <module>
                      disp_frame = in_disp.getCvFrame()
                      AttributeError: 'depthai.ADatatype' object has no attribute 'getCvFrame'
                      Stack trace (most recent call last):
                      #14 Object "/bin/python", at 0x587533, in
                      #13 Object "/lib/aarch64-linux-gnu/libc.so.6", at 0x7f9be56217, in __libc_start_main
                      #12 Object "/bin/python", at 0x587637, in Py_BytesMain
                      #11 Object "/bin/python", at 0x5b79eb, in Py_RunMain
                      #10 Object "/bin/python", at 0x5c958f, in Py_FinalizeEx
                      #9 Object "/bin/python", at 0x5cdde3, in
                      #8 Object "/bin/python", at 0x5ce40f, in _PyGC_CollectNoFail
                      #7 Object "/bin/python", at 0x485b1b, in
                      #6 Object "/bin/python", at 0x5bdabf, in
                      #5 Object "/bin/python", at 0x525723, in PyDict_Clear
                      #4 Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f8972f06f, in
                      #3 Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f897eea77, in
                      #2 Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f89965a03, in dai::DataOutputQueue::~DataOutputQueue()
                      #1 Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f89962c97, in
                      #0 Object "/home/francis/.local/lib/python3.9/site-packages/depthai.cpython-39-aarch64-linux-gnu.so", at 0x7f89a6199c, in
                      Segmentation fault (Address not mapped to object [0x39303100000018])
                      Segmentation fault

                      I went back and reviewed all the models I tried from the same model zoo. These worked and provided the detected bounding box: face-detection-adas-0001, face-detection-retail-0004, person-detection-0200, person-detection-0201 and person-detection-0202. The model that produced the error is person-detection-0203.

                      One thing I noticed is that all the models that worked have the same output blob shape and format: a blob with shape 1, 1, 200, 7 in the format 1, 1, N, 7. The person-detection-0203 model has an output blob with shape 100, 5 in the format N, 5. Do I have to add additional parameters to my program to account for the difference in output blob shape and format, or do I have to select a different openvino version, as the error message suggests? Thanks, Francis.
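
                      P.S. If it turns out that the N, 5 output cannot be decoded on the device, I assume I would have to switch to a plain NeuralNetwork node and decode the detections on the host, roughly like this (an untested sketch; q_nn is assumed to be the output queue of that NeuralNetwork node, and I have not confirmed the exact row layout or output layer order of person-detection-0203):

                      import numpy as np

                      # Decode the raw 100 x 5 output tensor on the host instead of on the device
                      inNN = q_nn.get()  # NNData message with the raw output tensors
                      boxes = np.array(inNN.getFirstLayerFp16()).reshape(-1, 5)
                      # Assuming each row is [xmin, ymin, xmax, ymax, confidence] per the model description
                      detections = [b for b in boxes if b[4] > 0.5]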

                      • erik replied to this.
                      • Hello Erik, that worked. Thanks for the quick reply!

                      • I am trying to test out different versions of the person-detection models in the model zoo. I am using the spacial_face_det.py program https://github.com/spmallick/learnopencv/blob/master/OAK-Object-Detection-with-Depth/spacial_face_det.py and changing the DET_INPUT_SIZE and model_name variables to the different models in the model zoo https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/. All is fine when I tried person-detection-0200 to 0202, which all have smaller DET_INPUT_SIZE values. But when I got to person-detection-0203 https://github.com/openvinotoolkit/open_model_zoo/tree/master/models/intel/person-detection-0203, which has DET_INPUT_SIZE = (864, 480), I got an error: [ImageManip(5)] [error] Output image is bigger (1244160B) than maximum frame size specified in properties (1048576B) - skipping frame. How can I increase the ImageManip maximum frame size?

                        When I tried adding the line face_det_manip.initialConfig.setMaxOutputFrameSize(1244160), I got an error: AttributeError: 'depthai.ImageManipConfig' object has no attribute 'setMaxOutputFrameSize'.
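
                        For reference, this is the change I am attempting, in case the method simply lives on the ImageManip node itself rather than on its initialConfig (this is only my assumption, I have not confirmed it in the documentation):

                        # Attempted (raises AttributeError, since ImageManipConfig has no such method):
                        # face_det_manip.initialConfig.setMaxOutputFrameSize(1244160)

                        # Assumed alternative: set the limit on the ImageManip node itself
                        face_det_manip.setMaxOutputFrameSize(1244160)  # 864 * 480 * 3 bytes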

                        • erik replied to this.