rsinghmn

  • Hi,

    Does anyone know where I might be able to find a declaration of conformity for the OAK SoM Pro board?

    OAK-SoM Pro (sets) – Luxonis

    I found these on GitHub, but the SoM is not explicitly mentioned in any of the product lists in the documents in the certificates folder:

    depthai-hardware/certificates/Luxonis_OAK-D-PoE_CE_FCC.pdf at master · luxonis/depthai-hardware · GitHub

    Alternatively, is the SoM a subcomponent of one of the product numbers that are listed?

    Thanks for your help.

    • Thanks. I'm handling time synchronization with an external trigger setup for image acquisition.

      I've been trying to modify the intrinsics and extrinsics in the calibration JSON. While I can see this has an effect on the rectified and stereo depth images, I'm a little confused since the numbers don't directly map to the OpenCV calibration results (using cv2.calibrateCamera). Do you happen to know which OpenCV calls are made to obtain the intrinsics and extrinsics?

      Also, does the StereoDepth node support arbitrary mono image sizes, or only the ones found in dai.MonoCameraProperties.SensorResolution (example)? I'm asking because I encountered the following error when trying to feed in 1600Wx1400H images:

      [14442C1021F1D5D600] [1.1.2] [7.626] [StereoDepth(0)] [error] Maximum supported input image width for stereo is 1280. Skipping frame!
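
      In case it helps with the comparison, here is a minimal sketch of how I've been reading the stored calibration back through DepthAI's CalibrationHandler instead of editing the JSON by hand. The CAM_B/CAM_C sockets and the 640x400 size are assumptions for my setup; the intrinsics come back scaled to whatever resolution is requested, which may be part of why the raw JSON values don't line up with cv2.calibrateCamera:

      import depthai as dai
      import numpy as np

      with dai.Device() as device:
          calib = device.readCalibration()

          # 3x3 camera matrices, scaled to the requested resolution (assumed 640x400 here);
          # these are what I compare against cv2.calibrateCamera's cameraMatrix
          M_left = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_B, 640, 400))
          M_right = np.array(calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, 640, 400))

          # OpenCV-style distortion coefficient vector
          d_left = np.array(calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_B))

          # 4x4 left-to-right transform (translation is stored in centimeters, I believe)
          T_lr = np.array(calib.getCameraExtrinsics(dai.CameraBoardSocket.CAM_B, dai.CameraBoardSocket.CAM_C))

          print(M_left, d_left, T_lr, sep="\n")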

      • Thanks @jakaskerl

        After obtaining these parameters, how can they be loaded onto the device? A rough sketch of what I'm imagining is below.

        I was able to get stereo depth maps following this OpenCV tutorial that uses chessboard calibration. It uses calls to cv2.initUndistortRectifyMap and cv2.remap to rectify camera images, and the compute method of OpenCV's StereoSGBM class to get disparity maps. Is there an example that demonstrates how the camera matrix and other parameters from this tutorial map onto the config used by the device? This was the closest I was able to find, but the OpenCV calls are different. Conversely, can these low-level calls (undistort/remap, SGBM compute) be made directly on the device using the Script node or any other node (Warp or ImageManip)? Or is the only way through configuring the StereoDepth node?

        Thanks,

        Raj
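
        For completeness, here is the rough, untested sketch I have in mind for pushing externally computed calibration back to the device via the CalibrationHandler API. The sockets, the 1280x800 size, and the M/d/R/T variables are placeholders standing in for my cv2.calibrateCamera / cv2.stereoCalibrate outputs, not a flow Luxonis has confirmed:

        import depthai as dai

        calib = dai.CalibrationHandler()

        # 3x3 camera matrices and distortion vectors from OpenCV (placeholder variables)
        calib.setCameraIntrinsics(dai.CameraBoardSocket.CAM_B, M_left.tolist(), 1280, 800)
        calib.setCameraIntrinsics(dai.CameraBoardSocket.CAM_C, M_right.tolist(), 1280, 800)
        calib.setDistortionCoefficients(dai.CameraBoardSocket.CAM_B, d_left.tolist())
        calib.setDistortionCoefficients(dai.CameraBoardSocket.CAM_C, d_right.tolist())

        # R (3x3) and T (3x1, in cm) between the pair, e.g. from cv2.stereoCalibrate
        calib.setCameraExtrinsics(dai.CameraBoardSocket.CAM_B, dai.CameraBoardSocket.CAM_C,
                                  R.tolist(), T_cm.tolist(), [0, 0, 0])

        # Either attach it to the pipeline for this session...
        pipeline = dai.Pipeline()
        pipeline.setCalibrationData(calib)

        # ...or persist it to the device EEPROM:
        # with dai.Device() as device:
        #     device.flashCalibration(calib)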

        • Hi,

          I'd like to use the OAK-FFC-4P to calculate stereo depth maps from host-side image inputs. In my case, the images are being taken from an external camera stereo pair controlled by the host machine that does not interface with the OAK board. Is there a way to obtain a valid calibration for external cameras, load it, and use the StereoDepth node with host-side input (with no cameras connected to the OAK board)?

          Thanks for your help.
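
          In case the shape of the pipeline matters, this is roughly what I'm picturing, modeled loosely on the stereo-depth-from-host example in the depthai-python repository. The 1280x800 resolution and the stream names are my own placeholders, and I haven't confirmed how a pipeline with no cameras attached behaves:

          import depthai as dai

          pipeline = dai.Pipeline()
          # pipeline.setCalibrationData(calib)  # calibration for the external pair would go here

          left_in = pipeline.create(dai.node.XLinkIn)
          left_in.setStreamName("left")
          right_in = pipeline.create(dai.node.XLinkIn)
          right_in.setStreamName("right")

          stereo = pipeline.create(dai.node.StereoDepth)
          stereo.setInputResolution(1280, 800)   # host frames would have to match this size
          stereo.setRectification(True)          # rectify on-device from the loaded calibration

          left_in.out.link(stereo.left)
          right_in.out.link(stereo.right)

          xout = pipeline.create(dai.node.XLinkOut)
          xout.setStreamName("depth")
          stereo.depth.link(xout.input)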

        • Thanks for the reply, jakaskerl

          Do you know if there's a way to specify dai.ImgFrame messages of type fp16? I believe I only saw support for U8, so I thought NNData might be a more general way to do it.
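
          For reference, these are the two packings I had in mind. I'm not sure the GRAYF16 frame type is accepted by every node, so treat this as a sketch; the 768x768 size and the layer name are placeholders:

          import depthai as dai
          import numpy as np

          frame = np.random.rand(768, 768).astype(np.float16)  # placeholder host image

          # Option 1: an ImgFrame tagged as fp16 single-channel
          img = dai.ImgFrame()
          img.setType(dai.ImgFrame.Type.GRAYF16)
          img.setWidth(768)
          img.setHeight(768)
          img.setData(frame.view(np.uint8).flatten())  # raw bytes of the fp16 buffer

          # Option 2: NNData, which packs fp16 layers itself
          nn_data = dai.NNData()
          nn_data.setLayer("input", frame.flatten().tolist())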

          • Hi Luxonis Team,

            I'm running into memory issues when trying to run my custom models on the OAK-FFC-4P and I'd like to better understand how memory is being allocated. Here is the error from the logs:

            [14442C1021F1D5D600] [1.12] [5.607] [NeuralNetwork(0)] [error] Tried to allocate '306438208'B out of '29515775'B available.
            
            [14442C1021F1D5D600] [1.12] [5.609] [NeuralNetwork(0)] [error] Neural network executor '1' out of '2' error: OUT_OF_MEMORY
            
            [14442C1021F1D5D600] [1.12] [6.412] [system] [info] Memory Usage - DDR: 305.24 / 333.39 MiB, CMX: 2.07 / 2.50 MiB, LeonOS Heap: 9.61 / 82.31 MiB, LeonRT Heap: 5.04 / 40.50 MiB / NOC ddr: 49 MB/s
            
            [14442C1021F1D5D600] [1.12] [6.412] [system] [info] Temperatures - Average: 42.35C, CSS: 43.73C, MSS 41.89C, UPA: 41.89C, DSS: 41.89C
            
            [14442C1021F1D5D600] [1.12] [6.412] [system] [info] Cpu Usage - LeonOS 8.54%, LeonRT: 89.76%

            For context, I'm trying to get host-side images sent to the device for inference. My NN blob is ~9.5 MB, and my input and output images are of size 768x768x1 float16. Given this, I'm not really sure what's taking up all of the memory.
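
            As a sanity check on my own numbers (rough arithmetic, not an authoritative breakdown of the on-device allocator):

            # Back-of-the-envelope sizes, in bytes
            blob = 9.5 * 1024**2      # ~9.5 MiB compiled blob
            frame = 768 * 768 * 2     # one 768x768x1 fp16 plane, ~1.1 MiB
            requested = 306438208     # from the error log, ~292 MiB
            available = 29515775      # from the error log, ~28 MiB
            print(requested / 1024**2, (blob + 2 * frame) / 1024**2)

            The failed allocation is roughly 292 MiB, far more than the blob plus input/output frames (about 12 MiB), which is why I suspect intermediate activation buffers, possibly duplicated across the two executors, rather than the frames themselves.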

            Here is a snippet of the script I'm using for host side inference:

            import depthai as dai
            import numpy as np

            def create_myriadx_nn_pipeline(nn_path):
                pipeline = dai.Pipeline()

                # Neural network node running the custom blob
                detection_nn = pipeline.create(dai.node.NeuralNetwork)
                detection_nn.setBlobPath(nn_path)
                detection_nn.setNumPoolFrames(2)
                detection_nn.input.setBlocking(False)
                detection_nn.setNumInferenceThreads(1)

                # Host -> device input for the 768x768 fp16 image
                img_in = pipeline.create(dai.node.XLinkIn)
                img_in.setMaxDataSize(768*768*2)
                img_in.setNumFrames(1)
                img_in.setStreamName("img_in")
                img_in.out.link(detection_nn.input)

                # Passthrough of the NN input, for debugging
                xout_rgb = pipeline.create(dai.node.XLinkOut)
                xout_rgb.setStreamName("nn_input")
                xout_rgb.input.setBlocking(False)
                detection_nn.passthrough.link(xout_rgb.input)

                # Device -> host output with the inference result
                xout_nn = pipeline.create(dai.node.XLinkOut)
                xout_nn.setStreamName("nn")
                xout_nn.input.setBlocking(False)
                detection_nn.out.link(xout_nn.input)

                device = dai.Device(pipeline)
                device.setLogLevel(dai.LogLevel.DEBUG)
                device.setLogOutputLevel(dai.LogLevel.DEBUG)
                print('Device pipeline created')
                return device

            def myriad_run(device, inp):
                # Pack the host image into an NNData layer (stored as fp16 on the device side)
                nn_data = dai.NNData()
                nn_data.setLayer("fp16", inp.astype(float).flatten().tolist())

                img_in_q = device.getInputQueue(name="img_in")
                q_nn = device.getOutputQueue(name="nn", maxSize=4, blocking=False)

                img_in_q.send(nn_data)
                in_nn = q_nn.get()
                layers = in_nn.getAllLayers()

                # Read back the first output layer and reshape to the expected image size
                layer1 = in_nn.getLayerFp16(layers[0].name)
                result = np.asarray(layer1, dtype=np.float32).reshape((1, 768, 768, 1))
                return result

            img = np.load(r"path\to\inputimg")
            nn_path = r"path\to\blob"
            device = create_myriadx_nn_pipeline(nn_path)
            pred = myriad_run(device, img)
          • Hi Luxonis Team,

            Is there a way to inspect quantized weights and layer properties after blob conversion? I have an ONNX regression model that, when converted, has significant error compared to the unconverted result.

            Here are my settings in the online blobconverter tool:

            Model Optimizer params: --data_type=FP16 --input_shape=[1,768,768,1]

            Compile params: -ip FP16

            I'd like to try to simulate what's going on, to understand whether FP32-to-FP16 quantization is the culprit or whether something else is happening when running on the MyriadX. So far, I've tried to reproduce it in TensorFlow using FP16-quantized weights, and that yields results much closer to the unconverted result.
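
            To make the question concrete, this is the kind of simulation I mean: round-trip every FP32 initializer in the ONNX graph through FP16 and compare onnxruntime outputs against the original. It only emulates weight precision, not whatever the MyriadX does with activations at runtime, and the model path and input name are placeholders:

            import numpy as np
            import onnx
            from onnx import numpy_helper
            import onnxruntime as ort

            model = onnx.load("model.onnx")                         # placeholder path
            for init in model.graph.initializer:
                w = numpy_helper.to_array(init)
                if w.dtype == np.float32:
                    w16 = w.astype(np.float16).astype(np.float32)   # FP32 -> FP16 -> FP32
                    init.CopyFrom(numpy_helper.from_array(w16, init.name))
            onnx.save(model, "model_fp16sim.onnx")

            x = np.random.rand(1, 768, 768, 1).astype(np.float32)
            ref = ort.InferenceSession("model.onnx").run(None, {"input": x})[0]
            sim = ort.InferenceSession("model_fp16sim.onnx").run(None, {"input": x})[0]
            print("max abs diff:", np.abs(ref - sim).max())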

            • Thanks for the quick reply! Sure, here's a link to the files to reproduce:

              Specifying the input helped resolve a previous issue I was having. Here are the input parameters I'm setting in the online converter tool:
              Model parameters: --data_type=FP16 --input_shape=[1,768,768,1]

              Compile parameters: -ip FP16

              FWIW, I did some more testing and found that trying to convert UNets from the segmentation models package also fails similarly when specifying a 'transposed' decoder block type; UNets with 'upsampling' decoder block types convert just fine.

              I also tried converting a model with a single transposed conv2d layer (included in the attachment) and that fails similarly.

            • @Matija

              I'm also experiencing a similar issue when trying to convert a UNet variant with the online blob converter tool.

              [ ERROR ]  Exception occurred during running replacer "REPLACEMENT_ID" (<class 'openvino.tools.mo.load.tf.loader.TFLoader'>): Unexpected exception happened during extracting attributes for node model_1/sequential_24/conv2d_transpose_9/strided_slice/stack. Original exception message: index -1 is out of bounds for axis 0 with size 0

              AFAICT this references the first transposed conv layer in the decoder after the bottleneck. Do you have any other thoughts on workarounds when the conflicting layer is deeper in the network? I've sketched the substitution I'm considering below.

              Thanks so much for your help.
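
              In case it's useful, the workaround I'm considering (consistent with the observation above that 'upsampling' decoder blocks convert fine) is to replace each Conv2DTranspose in the decoder with UpSampling2D followed by a regular Conv2D, then retrain. A sketch of the substitution, with made-up filter counts:

              from tensorflow.keras import layers

              def upsample_block(x, filters):
                  # Stand-in for a transposed-conv decoder block: resize, then convolve
                  x = layers.UpSampling2D(size=2, interpolation="nearest")(x)
                  x = layers.Conv2D(filters, kernel_size=3, padding="same", activation="relu")(x)
                  return x

              # e.g. inside the decoder: x = upsample_block(x, 64)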