Hi everyone.

I'm encountering an error with ImageManip. The exact error I get is:
[14442C10411D4BD000] [390.881] [system] [critical] Fatal error. Please report to developers. Log: 'ImageManipHelper' '59'

That's why I decided to write to the forum, which I don't usually do.
I'm resizing the full-res image from the camera to 300x300 with createImageManip() to feed into the neural network for detections.

I'm using the camera to find people in the frame, and then capture an image and crop it to the found bounding boxes.

I don't really need a preview; this is running headless on a Raspberry Pi.

This is the code where I'm setting the camera and pipeline parameters:
camera = pipeline.createColorCamera()
camera.initialControl.setManualFocus(focus)
camera.initialControl.setAutoFocusMode(dai.RawCameraControl.AutoFocusMode.OFF)
camera.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
camera.setInterleaved(False)
camera.setPreviewKeepAspectRatio(False)
neuralNetwork = pipeline.createNeuralNetwork()
neuralNetwork.setBlobPath(str(Path('./mobilenet-ssd.blob')))
nnXLinkOut = pipeline.createXLinkOut()
nnXLinkOut.setStreamName('nnXLinkOut')
videoXLinkOut = pipeline.createXLinkOut()
videoXLinkOut.setStreamName('videoXLinkOut')
manip = pipeline.createImageManip()
manip.initialConfig.setResize(300, 300)
camera.video.link(manip.inputImage)
jpegEncoder = pipeline.createVideoEncoder()
jpegEncoder.setDefaultProfilePreset(camera.getStillSize(), 1, dai.VideoEncoderProperties.Profile.MJPEG)
jpegXLinkOut = pipeline.createXLinkOut()
jpegXLinkOut.setStreamName('jpegXLinkOut')
controlXLinkIn = pipeline.createXLinkIn()
controlXLinkIn.setStreamName('controlXLinkIn')
manip.out.link(neuralNetwork.input)
camera.still.link(jpegEncoder.input)
controlXLinkIn.out.link(camera.inputControl)
jpegEncoder.bitstream.link(jpegXLinkOut.input)
neuralNetwork.out.link(nnXLinkOut.input)

These are my queues:

controllerQueue = dev.getInputQueue('controlXLinkIn')
jpegQueue = dev.getOutputQueue('jpegXLinkOut')
videoQueue = dev.getOutputQueue('videoXLinkOut')
neuralNetworkQueue = dev.getOutputQueue('nnXLinkOut')


And here i'm trying to get the frames:

try:
    videoFrames = videoQueue.tryGetAll()
    neuralNetworkFrames = neuralNetworkQueue.tryGet()
    jpegFrames = jpegQueue.tryGetAll()
except Exception as error:
    print('COULDNT GET FRAMES FROM CAMERA QUEUES, EXITING', error)
    exit('bye bye')
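
For context, what I then do with those queues is roughly along these lines. This is a simplified sketch rather than my exact code; it assumes the raw mobilenet-ssd output layout of 7 values per detection ([image_id, label, confidence, xmin, ymin, xmax, ymax], with normalized coordinates) and VOC label 15 for 'person':

import cv2
import numpy as np

if neuralNetworkFrames is not None and jpegFrames:
    # Raw mobilenet-ssd output: flat list of detections, 7 values each
    detections = np.array(neuralNetworkFrames.getFirstLayerFp16()).reshape(-1, 7)
    # Decode the latest MJPEG still into a BGR image
    frame = cv2.imdecode(jpegFrames[-1].getData(), cv2.IMREAD_COLOR)
    height, width = frame.shape[:2]
    for image_id, label, conf, xmin, ymin, xmax, ymax in detections:
        if image_id == -1:                    # end of valid detections
            break
        if conf < 0.5 or int(label) != 15:    # keep confident 'person' detections only
            continue
        x1, y1 = max(0, int(xmin * width)), max(0, int(ymin * height))
        x2, y2 = min(width, int(xmax * width)), min(height, int(ymax * height))
        cv2.imwrite(f'person_{x1}_{y1}.jpg', frame[y1:y2, x1:x2])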


Can you spot what I'm doing wrong?


I keep getting Communication exception - possible device error/misconfiguration. Original message: Couldn't read data from stream: 'videoXLinkOut' (X_LINK_ERROR), or the same error from any of the other streams.

Do I really need the videoQueue, since I'm not really doing anything with it?

Thank you very much in advance for any help.

Cheers!





    Hello Lucas,
    from a quick check of the pipeline, you haven't linked anything to videoXLinkOut. Have you checked that? Could you also share the whole code/blob, preferably on GitHub/Gist, so we can investigate this issue further?
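
    For reference, giving that stream a producer would be a single link, something like this (only needed if you actually want to read the stream on the host):

    # Hypothetical one-liner: feed the camera's video output into the otherwise-unused XLinkOut
    camera.video.link(videoXLinkOut.input)
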
    Thanks, Erik

    Hi Erik,
    Thank you very much for your reply.

    I haven't linked videoXLinkOut to anything because I don't really need it.
    But if I disable it, it still doesn't work.

    I'm running the camera from my Ubuntu 20.04 desktop at the moment.
    But the prod machine will be a Raspberry Pi, and that's where I wrote most of the code originally.
    On the Raspberry Pi it seemed to work sometimes, but I would get several X_LINK_ERRORs as well.

    I've uploaded all of the code to this gist.

    Sorry about the code; it's still in development and not very pretty.

    Thank you very much once again.

    Cheers!

    Lucas


      Hello Lucas, from my understanding your code sometimes works and sometimes crashes (X_LINK_ERROR)? From quickly checking the code, the first thing I find strange is that you're streaming the video output into the ImageManip node (I wasn't aware this would even work). You could just use camRgb.preview.link() and set the preview size with camRgb.setPreviewSize(300,300). Also, dev.startPipeline() is deprecated, so you can remove it. And just out of curiosity, which line usually throws the error?
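
      A minimal sketch of what I mean, reusing your node names (the 300x300 RGB preview can go straight into the NN, or through ImageManip if you still need it):

      # Let the ColorCamera produce 300x300 RGB frames directly
      camera.setPreviewSize(300, 300)

      # Either feed the preview straight into the neural network...
      camera.preview.link(neuralNetwork.input)

      # ...or keep ImageManip in between:
      # camera.preview.link(manip.inputImage)
      # manip.out.link(neuralNetwork.input)
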
      Thanks, Erik

      Hi Erik,

      Thanks again for your reply and help.
      Correct, on the Raspberry Pi it was working sometimes; on my Ubuntu machine it wouldn't work at all.
      I tried feeding camera.preview instead of video into ImageManip, and it now seems to be working fine.
      That seems to have been the main culprit. On that note, is it possible to feed camera.still into ImageManip instead of the preview?
      The main error was popping up when trying to read the queues with tryGet() and tryGetAll().
      I've removed the dev.startPipeline() as well.
      Overall it's running much better, but I still get the occasional Communication exception - possible device error/misconfiguration. Original message: Couldn't read data from stream: 'jpegXLinkOut' (X_LINK_ERROR), or the same from the neuralNetworkXLinkOut.
      Would you know of another possible cause for this issue?
      Thanks again, really appreciate your help 😃
      Cheers!


        Hello Lucas,
        it's great that you got it working! The still and video outputs of the ColorCamera are both NV12, while preview is RGB. I believe that currently ImageManip only supports RGB input, so I don't think feeding still into it is possible.
        How often do you get the jpegXLinkOut stream error? I can try running/debugging it on my end as well.
        Thanks, Erik

        Hi Erik,

        Thank you very much again for your reply.

        ImageManip is working fine now using the preview node.

        Since I upgraded the depthai version so I could stop using startPipeline(), I haven't been running into many issues. At least nothing I think I should worry you with at this stage.

        I am trying a different approach now to what I'm trying to achieve.

        Essentially, what I'm trying to do is take a picture in a pitch-dark room and get the camera to identify the people in it and give me the bounding boxes, so that I can crop in on them.

        My initial thought was to light up the room with a powerful LED for say 1 second, to give the camera enough time to identify people, and then save the image.

        That is the code I sent before. It works to a degree, but there are multiple issues with the light source that aren't related to depthai.

        And I think that, from a design point of view, exposing the picture and identifying people should be two separate and independent processes.

        That's why I thought about following the "Video & MobilenetSSD" example and feeding a frame back to the device, using depthai only as the processing unit.

        Here is a gist of that code.

        I got a moderate degree of success there.
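
        The host-side part of that approach looks roughly like this (simplified from the gist; frameIn and nnXLinkOut are the stream names in my pipeline, the image path is just a placeholder, and it assumes a MobileNet detection network on the device):

        import cv2
        import depthai as dai

        # Push a frame from the host into the detection network running on the device
        frameInQ = device.getInputQueue('frameIn')
        nnQ = device.getOutputQueue('nnXLinkOut')

        frame = cv2.imread('captured.jpg')                 # any BGR image on the host
        resized = cv2.resize(frame, (300, 300))

        img = dai.ImgFrame()
        img.setType(dai.ImgFrame.Type.BGR888p)
        img.setWidth(300)
        img.setHeight(300)
        img.setData(resized.transpose(2, 0, 1).flatten())  # HWC -> planar CHW, as the blob expects
        frameInQ.send(img)

        detections = nnQ.get().detections                  # ImgDetections with normalized bounding boxes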

        First of all, the 12 MP resolution fails when I run that code from my Ubuntu machine: it says that the camera still runs out of memory, and I couldn't get around that. It seemed to work on a Raspberry Pi for a while, but now I'm getting the following errors:

        [2021-07-26 16:52:14.424] [debug] Python bindings - version: 2.8.0.0.dev+a4c841d73e5bd0eee688c90b9c5352d187767645 from 2021-07-23 15:44:42 +0300 build: 2021-07-25 03:21:47 +0000
        [2021-07-26 16:52:14.424] [debug] Library information - version: 2.8.0, commit: 7d76a830ffc51512adae455ec28b1150eabec513 from 2021-07-23 15:42:59 +0300, build: 2021-07-25 03:21:33 +0000
        [2021-07-26 16:52:14.424] [debug] Initialize - finished
        [2021-07-26 16:52:14.731] [debug] Resources - Archive 'depthai-bootloader-fwp-0.0.12.tar.xz' open: 6ms, archive read: 299ms
        [2021-07-26 16:52:15.170] [debug] Device - pipeline serialized, OpenVINO version: 2021.2
        [2021-07-26 16:52:15.387] [debug] Resources - Archive 'depthai-device-fwp-c0a7810c9c1e7678ae65035b8f23d4cac6beb568.tar.xz' open: 6ms, archive read: 956ms
        [2021-07-26 16:52:15.435] [debug] Patching OpenVINO FW version from 2021.4 to 2021.2
        [2021-07-26 16:52:16.938] [trace] RPC: [1,1,9503979954606424982,null]
        [2021-07-26 16:52:16.941] [trace] RPC: [1,1,10182484315255513117,[0]]
        [2021-07-26 16:52:16.943] [trace] RPC: [1,1,5804304869041345055,[1.0]]
        [2021-07-26 16:52:16.945] [trace] RPC: [1,1,17425566508637143278,null]
        [2021-07-26 16:52:16.945] [trace] Log vector decoded, size: 3
        [14442C10411D4BD000] [143.330] [system] [info] Memory Usage - DDR: 0.12 / 358.60 MiB, CMX: 2.09 / 2.50 MiB, LeonOS Heap: 6.29 / 82.70 MiB, LeonRT Heap: 2.83 / 26.74 MiB
        [14442C10411D4BD000] [143.330] [system] [info] Temperatures - Average: 28.86 °C, CSS: 29.83 °C, MSS 27.88 °C, UPA: 28.13 °C, DSS: 29.59 °C
        [14442C10411D4BD000] [143.330] [system] [info] Cpu Usage - LeonOS 9.46%, LeonRT: 1.76%
        [2021-07-26 16:52:17.003] [trace] RPC: [1,1,16527326580805871264,[{"connections":[{"node1Id":4,"node1Output":"bitstream","node2Id":5,"node2Input":"in"},{"node1Id":0,"node1Output":"still","node2Id":4,"node2Input":"in"},{"node1Id":1,"node1Output":"out","node2Id":3,"node2Input":"in"},{"node1Id":6,"node1Output":"out","node2Id":0,"node2Input":"inputControl"},{"node1Id":2,"node1Output":"out","node2Id":1,"node2Input":"in"}],"globalProperties":{"calibData":null,"cameraTuningBlobSize":null,"cameraTuningBlobUri":"","leonCssFrequencyHz":700000000.0,"leonMssFrequencyHz":700000000.0,"pipelineName":null,"pipelineVersion":null},"nodes":[[5,{"id":5,"ioInfo":{"in":{"blocking":true,"name":"in","queueSize":8,"type":3}},"name":"XLinkOut","properties":{"maxFpsLimit":-1.0,"metadataOnly":false,"streamName":"jpegEncoderXLinkOut"}}],[1,{"id":1,"ioInfo":{"in":{"blocking":false,"name":"in","queueSize":5,"type":3},"out":{"blocking":false,"name":"out","queueSize":8,"type":0},"passthrough":{"blocking":false,"name":"passthrough","queueSize":8,"type":0}},"name":"DetectionNetwork","properties":{"anchorMasks":{},"anchors":[],"blobSize":14505024,"blobUri":"asset:1/blob","classes":0,"confidenceThreshold":0.5,"coordinates":0,"iouThreshold":0.0,"nnFamily":1,"numFrames":8,"numNCEPerThread":0,"numThreads":2}}],[6,{"id":6,"ioInfo":{"out":{"blocking":false,"name":"out","queueSize":8,"type":0}},"name":"XLinkIn","properties":{"maxDataSize":5242880,"numFrames":8,"streamName":"controller"}}],[0,{"id":0,"ioInfo":{"inputConfig":{"blocking":false,"name":"inputConfig","queueSize":8,"type":3},"inputControl":{"blocking":true,"name":"inputControl","queueSize":8,"type":3},"isp":{"blocking":false,"name":"isp","queueSize":8,"type":0},"preview":{"blocking":false,"name":"preview","queueSize":8,"type":0},"raw":{"blocking":false,"name":"raw","queueSize":8,"type":0},"still":{"blocking":false,"name":"still","queueSize":8,"type":0},"video":{"blocking":false,"name":"video","queueSize":8,"type":0}},"name":"ColorCamera","properties":{"boardSocket":-1,"colorOrder":0,"fp16":false,"fps":30.0,"imageOrientation":-1,"initialControl":{"aeLockMode":false,"aeRegion":{"height":0,"priority":0,"width":0,"x":0,"y":0},"afRegion":{"height":0,"priority":0,"width":0,"x":0,"y":0},"antiBandingMode":0,"autoFocusMode":3,"awbLockMode":false,"awbMode":0,"brightness":0,"chromaDenoise":0,"cmdMask":0,"contrast":0,"effectMode":0,"expCompensation":0,"expManual":{"exposureTimeUs":0,"frameDurationUs":0,"sensitivityIso":0},"lensPosition":0,"lumaDenoise":0,"saturation":0,"sceneMode":0,"sharpness":0},"inputConfigSync":false,"interleaved":false,"ispScale":{"horizDenominator":0,"horizNumerator":0,"vertDenominator":0,"vertNumerator":0},"previewHeight":300,"previewKeepAspectRatio":false,"previewWidth":300,"resolution":2,"sensorCropX":-1.0,"sensorCropY":-1.0,"stillHeight":-1,"stillWidth":-1,"videoHeight":-1,"videoWidth":-1}}],[2,{"id":2,"ioInfo":{"out":{"blocking":false,"name":"out","queueSize":8,"type":0}},"name":"XLinkIn","properties":{"maxDataSize":5242880,"numFrames":8,"streamName":"frameIn"}}],[3,{"id":3,"ioInfo":{"in":{"blocking":true,"name":"in","queueSize":8,"type":3}},"name":"XLinkOut","properties":{"maxFpsLimit":-1.0,"metadataOnly":false,"streamName":"nnXLinkOut"}}],[4,{"id":4,"ioInfo":{"bitstream":{"blocking":false,"name":"bitstream","queueSize":8,"type":0},"in":{"blocking":true,"name":"in","queueSize":4,"type":3}},"name":"VideoEncoder","properties":{"bitrate":8000,"frameRate":1.0,"height":3040,"keyframeFrequency":30,"lossless":false,"maxBitrate":8000,"numBFrames":0,"numFra
mesPool":4,"profile":4,"quality":95,"rateCtrlMode":0,"width":4032}}]]}]]
        [2021-07-26 16:52:17.019] [trace] RPC: [1,1,14157578424912043072,[{"map":{"1/blob":{"alignment":64,"offset":0,"size":14505024}}}]]
        [2021-07-26 16:52:17.021] [trace] RPC: [1,1,13547933642676024645,[14505024]]
        [2021-07-26 16:52:17.022] [trace] RPC: [1,1,7790432089545852493,["__stream_asset_storage",2308464256,14505024]]
        [2021-07-26 16:52:17.195] [trace] RPC: [1,1,11967769899726792808,[2308464256,14505024]]
        [2021-07-26 16:52:17.201] [trace] RPC: [1,1,10180360702496156555,null]
        [2021-07-26 16:52:17.201] [trace] RPC: [1,1,14047900442330284907,null]
        [2021-07-26 16:52:17.211] [trace] Log vector decoded, size: 1
        [14442C10411D4BD000] [143.595] [system] [info] ImageManip internal buffer size '171136'B, shave buffer size '19456'B
        [2021-07-26 16:52:19.483] [debug] Timesync thread exception caught: Couldn't read data from stream: '__timesync' (X_LINK_ERROR)
        [2021-07-26 16:52:19.484] [trace] RPC: [1,1,9503979954606424982,null]
        [2021-07-26 16:52:19.489] [debug] Device about to be closed...
        [2021-07-26 16:52:19.491] [debug] Log thread exception caught: Couldn't read data from stream: '__log' (X_LINK_ERROR)
        [2021-07-26 16:52:19.910] [debug] XLinkResetRemote of linkId: (0)
        [2021-07-26 16:52:19.914] [debug] DataInputQueue (frameIn) closed
        [2021-07-26 16:52:19.914] [debug] DataInputQueue (controller) closed
        [2021-07-26 16:52:19.915] [debug] Device closed, 426
        Traceback (most recent call last):
          File "test.py", line 46, in <module>
            with dai.Device(pipeline) as device:
        RuntimeError: Couldn't read data from stream: '__rpc_main' (X_LINK_ERROR)

        Should I be asking this question in this follow-up message, or should I open a new post?
        Because I'd also like to ask whether it's possible to synchronize a flash with the OAK.
        I'm triggering the flash from the GPIO on the Raspberry Pi, and what I'm testing at the moment is whether I can get some kind of indication from depthai of when the exposure starts, so that I can trigger the flash.
        I'm using the longest exposure I can to try to catch the flash, but it's not that long; I can only go up to 33000 μs.

        I'm sorry for such a huge message, and I really appreciate all your help.

        Thank you very much once again.

        Cheers!


          Hello Lucas,
          I really like this application of depthai. I would suggest using the Script node (it's currently in the develop branch and should be released to main soon). The example code above is actually perfect for your use case, as it takes still images. You can also interface with the GPIOs to either trigger the LED flash or read when the LED will flash, and take an image when the GPIO is e.g. high. We will also add interrupts to the Script node, so you could use those as well.
          I'm unsure about the error you're having; is it with the script you provided? Also, are you on the latest depthai version (2.8.0.0)?
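
          A rough sketch of the Script node idea (develop branch; the on-device script below just fires a still capture once per second, and in your case the timer would be replaced by the GPIO/flash logic):

          script = pipeline.create(dai.node.Script)
          script.setScript("""
              import time
              # Runs on the device: periodically request a still capture
              while True:
                  time.sleep(1)
                  ctrl = CameraControl()
                  ctrl.setCaptureStill(True)
                  node.io['out'].send(ctrl)
          """)
          script.outputs['out'].link(camera.inputControl)
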
          Thanks, Erik

          Hi Erik,

          The error I'm getting is with that script from above, yes; here it is again just in case.

          I only seem to get it when I run it with camera.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP).
          Otherwise it works ok.

          I am on depthai version 2.8.0.0 and now on the develop branch to test the Script node.

          What I'm trying to do is capture a xenon flash (i.e. a normal speedlite) rather than an LED flash.
          I can do it sometimes by firing it right after I send the ctrl.setCaptureStill(True) and camControlQ.send(ctrl) commands to the camera, introducing a very small sleep in between (sleep(0.018)). But this isn't consistent; it doesn't work all of the time.
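
          For reference, the timing hack I'm using right now is roughly this (triggerflash() is just my helper that pulses the speedlite through the Raspberry Pi GPIO):

          from time import sleep

          ctrl = dai.CameraControl()
          ctrl.setCaptureStill(True)
          camControlQ.send(ctrl)

          sleep(0.018)     # rough guess at when the exposure actually starts, found by trial and error
          triggerflash()   # fires the xenon flash via the RPi GPIO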

          I'm wondering if I can query the camera to find out when the actual exposure starts, so that I can then trigger the flash.
          Do you know if there's a property I can check to see when the frame exposure starts? I couldn't find one so far.
          I think it would be easier if I could set a longer exposure, but from the documentation and my tests, the longest I can expose a frame seems to be 33000 μs. Would you know why that limitation exists?

          Thank you very much once again for all your help. The info on the Script node is awesome; I didn't know it existed.

          Cheers!


            Hello Lucas,
            I will ask the team if they have any other suggestions on how to sync the flash/capture event and circle back. We don't have support for querying the auto-exposure value yet, but it's somewhere on our todo list (along with querying all other relevant data). I'm not sure about the exposure limitation, but I would assume it's the sensor's limitation.
            Thanks, Erik

            Hi Erik,

            Thank you again, really appreciate it.

            Could it be that the exposure duration limitation is related to the frame rate?
            With the Raspberry Pi camera and the picamera library, I found that the frame rate limits the exposure duration.
            Could it be the same case here?

            Any camera or device property that could be queried to know when the exposure commences would probably work.
            I really appreciate that you're taking the time to look into this and check with the rest of the team. Thanks again!

            Cheers

            Hi @Lucas,
            For the 33ms exposure limit, there are 2 reasons:

            • the configured sensor FPS is a limit, by default it's 30, but can be lowered: camera.setFps(10). Then a larger manual exposure can be set. IIRC the current sensor
            • the default camera tuning limits the auto-exposure to about 33ms, even with the FPS lowered. A custom low-light optimized tuning can be applied instead, as:
              pipeline.setCameraTuningBlobPath("/home/user/Downloads/tuning_color_low_light.bin")
              with this file:
              https://artifacts.luxonis.com/artifactory/luxonis-depthai-data-local/misc/tuning_color_low_light.bin
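
            Putting the two together would look roughly like this (the ISO value and blob path are just examples):

            # Lower the sensor FPS so longer exposures become possible,
            # and load the low-light tuning so exposure isn't capped at ~33 ms
            pipeline.setCameraTuningBlobPath('/home/user/Downloads/tuning_color_low_light.bin')
            camera.setFps(10)

            # At 10 FPS, manual exposures up to roughly one frame time (~100 ms) can be requested
            ctrl = dai.CameraControl()
            ctrl.setManualExposure(100000, 400)   # exposure time in microseconds, ISO
            camControlQ.send(ctrl)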

            About synchronizing the capture with the external flash: the inconsistency you see when sending setCaptureStill() and then triggering the flash is somewhat expected. The camera currently runs in continuous streaming mode, and this command just sets a flag for the next frame to be processed/output on the still channel. So there can be up to one frame-time of variability, and sending the message to the device could also be delayed if the host is busy.

            Some cameras (like the global shutter OV9282/OV9782) do have a STROBE output, which is best suited for this purpose: the line is driven high during the exposure/integration time. But this is missing on the IMX378/IMX477 modules we use. Still, we can infer in firmware when the exposure will occur (based on the configured exposure time and the MIPI Start-of-Frame event timestamp from the previous frame), and could drive a GPIO from the SoM that would connect to the flash-enable. However, this solution may not be easily applicable to devices in an enclosure and may need PCB rework. Another option would be to send a message to the host/RPi over USB about the upcoming exposure, but that would be best done over a USB interrupt endpoint for low latency (which we don't have available yet, only bulk). Or we could change the sensor configuration so it is put in low-power mode and only streams a frame (or a group of frames) when soft-triggered (from the host over USB, or even from a GPIO through the MyriadX, as we also don't have a frame trigger input for these color modules).

            Alex

            Hi Alex,

            Thank you so much for your reply!

            I was able to get a longer exposure time with your help.

            That's as far as I could get, though, since I only understand about 70% of the rest of your reply, sorry!

            Following the queue add callback code sample in the docs, I tried using a callback on the input queue camControlQ = dev.getInputQueue('camControl', maxSize = 1, blocking = False) to see if, from the host, I could hear back from the device about when it received the setCaptureStill() message, but it didn't work. Do input queues not have callbacks? It was probably the wrong approach to begin with anyway.
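
            For reference, the docs' callback example I was following looks roughly like this when attached to an output queue (where addCallback does seem to be supported; the queue name is just an example):

            def on_message(name, msg):
                # Called from depthai's internal thread whenever a new message arrives on this stream
                print('New message on', name, msg)

            nnQueue = dev.getOutputQueue('nnXLinkOut', maxSize = 4, blocking = False)
            nnQueue.addCallback(on_message)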

            I'd like to try the approach you mentioned about sending a message to the host/RPi over USB, but I'm stuck in that regard; I really don't know how to get there.
            Would there be a code sample maybe that could point me in the right direction?

            Or really any way to hear back from the camera about when it will be performing exposure.

            How could I read the MIPI start-of-frame event timestamp that you mention?

            I've got this script that I'm playing around with at the moment, and as you can see, I'm only managing to capture some flashes by playing with the sleep between camControlQ.send(ctrl) and the triggerflash() function.

            I'd really appreciate any help or pointer in the right direction.

            Thank you very much once again.

            Cheers!

            Thanks @Lucas!
            Just to provide some background: based on the Discord discussion starting here, we applied a different approach. We stream preview frames at a low resolution just to get the capture timestamps from them, and use those to sync the host for the next still capture, when it has to trigger the flash. We use manual exposure, so we know when the next frame exposure will start.
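
            A very rough sketch of that idea (previewQueue and triggerflash are placeholders, and the timing is simplified): with manual exposure and a fixed FPS, the next frame's exposure starts about one frame period after the latest preview frame's timestamp, so the host can schedule the flash for that moment.

            from time import sleep
            import depthai as dai

            FPS = 10
            FRAME_PERIOD = 1.0 / FPS

            frame = previewQueue.get()                       # latest low-resolution preview frame
            latency = (dai.Clock.now() - frame.getTimestamp()).total_seconds()

            # Flag the next frame as a still capture...
            ctrl = dai.CameraControl()
            ctrl.setCaptureStill(True)
            camControlQ.send(ctrl)

            # ...and fire the flash roughly when that frame's exposure should begin
            wait = FRAME_PERIOD - latency
            if wait > 0:
                sleep(wait)
            triggerflash()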