The DepthAI API documentation says the CameraControl message controls both the mono and color cameras. I ran the example programs for both cameras, so I know the tested subset of the full control capability works. But there are some controls that may not apply to the mono cameras (or maybe even to the color camera). I've done a bit of searching but can't find anything that addresses this area. Sorry if I've missed it.

Is there a list of which controls apply to both cameras, which apply only to the mono cameras, and which apply only to the color camera?

For example, EffectMode does not seem applicable to the mono cameras, and SceneMode might not apply to them either.

I am planning to experiment with the auto-focus and auto-exposure regions. I hope they apply to at least the mono cameras.
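
For reference, the calls I intend to try look like this (just a sketch from reading the CameraControl docs; the region values are placeholders I made up):

    import depthai as dai

    pipeline = dai.Pipeline()
    mono = pipeline.createMonoCamera()
    mono.setBoardSocket(dai.CameraBoardSocket.LEFT)
    controlIn = pipeline.createXLinkIn()
    controlIn.setStreamName('control')
    controlIn.out.link(mono.inputControl)

    with dai.Device(pipeline) as device:
        controlQueue = device.getInputQueue('control')
        ctrl = dai.CameraControl()
        # Both regions take (startX, startY, width, height) in pixels.
        ctrl.setAutoExposureRegion(0, 0, 200, 200)
        ctrl.setAutoFocusRegion(0, 0, 200, 200)
        controlQueue.send(ctrl)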

Thanks.



    Hello gregflurry,
    I believe that we have exposed all of the camera settings/configurability in our API, and some sensors simply don't support certain settings, which is why changing them has no effect. In other words, these controls depend on the camera sensors you are using. We don't have a list/table of which sensors accept/work with which controls, but I have added one to our todo list.
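    One quick way to check empirically is to send a given control to the mono camera and watch whether the stream changes. A minimal sketch of that idea (the EffectMode value is just an arbitrary probe):

        import cv2
        import depthai as dai

        pipeline = dai.Pipeline()
        mono = pipeline.createMonoCamera()
        mono.setBoardSocket(dai.CameraBoardSocket.LEFT)
        controlIn = pipeline.createXLinkIn()
        controlIn.setStreamName('control')
        controlIn.out.link(mono.inputControl)
        xout = pipeline.createXLinkOut()
        xout.setStreamName('mono')
        mono.out.link(xout.input)

        with dai.Device(pipeline) as device:
            qMono = device.getOutputQueue('mono', maxSize=4, blocking=False)
            controlQueue = device.getInputQueue('control')

            # Send a control the sensor may or may not support; if the
            # stream looks unchanged, the sensor most likely ignores it.
            ctrl = dai.CameraControl()
            ctrl.setEffectMode(dai.CameraControl.EffectMode.NEGATIVE)
            controlQueue.send(ctrl)

            while True:
                cv2.imshow('mono', qMono.get().getCvFrame())
                if cv2.waitKey(1) == ord('q'):
                    break
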
    Thanks, Erik

    It took me a while, but by hacking on the examples mono_camera_control and auto_exposure_roi, I've confirmed that it is possible to set the auto-exposure region on the mono cameras. This will help me deal with the lighting conditions in my workspace. I can clean up the code a bit and "donate it" if you think it would be helpful to the community.

    One reason it took a while to confirm success was something I cannot explain. The script is pretty simple and based on the examples I cited. It creates the nodes (left and right cameras, an XLinkOut for each camera, and an XLinkIn for camera control), then configures and links them. The intent was to get the queues for both cameras and then loop: get a frame from each queue, show the frames, and watch for keyboard input that changes the size and location of the region, sending a camera control message to the cameras. I had a bug that was hard (for me) to spot: I assigned both camera queue variables to the queue for the right camera, so in effect I was always reading the right camera's queue twice. As a result, the exposure region never changed. But I cannot understand why that would prevent the exposure region from being accepted. Any advice would be helpful.

    While experimenting, I also found something in mono_camera_control that seems like an internal bug (or maybe a limitation). That script runs the mono cameras through an ImageManip node to crop the image to 36% of a 720P frame. It also calls ImageManip.setMaxOutputFrameSize with the maximum possible size (100% of a 720P frame). With the 36% crop, everything works. I then set the cropping to 100%, so that the output frame would be a full 720P frame, and got the following error: [ImageManip(3)] [error] Output image is bigger (2764800B) than maximum frame size specified in properties (1048576B). It seems that the attempt to set the maximum output frame size to 2764800B fails, and I have no idea why.

    Thanks again!


      Hello gregflurry ,
      so that would actually be really helpful! I believe we still have a ticket on our "low-ish priority" todo list to make a demo like the one you are describing, so that would be perfect.

      From my understanding, you were sending the same config message to the right camera twice? I doubt that should cause an issue and keep it from working. But great that you figured it out and got the queues linked properly.

      So actually this should work as expected after setting the max output size, and I have used it a few times as well. Could you share your code so we can check this out?
      Thanks, Erik

      Here is the code for investigating the auto-exposure region. [Sorry: I could not figure out how to attach a Python file.]

      #!/usr/bin/env python3
      
      import cv2
      import depthai as dai
      
      def clamp(num, v0, v1):
          return max(v0, min(num, v1))
      
      # Create pipeline
      pipeline = dai.Pipeline()
      
      # Define sources and outputs
      monoRight = pipeline.createMonoCamera()
      monoLeft = pipeline.createMonoCamera()
      controlIn = pipeline.createXLinkIn()
      rightOut = pipeline.createXLinkOut()
      leftOut = pipeline.createXLinkOut()
      
      controlIn.setStreamName('control')
      rightOut.setStreamName("rightOut")
      leftOut.setStreamName("leftOut")
      
      # Properties
      monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)
      monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)
      monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
      monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
      
      # Linking
      controlIn.out.link(monoRight.inputControl)
      controlIn.out.link(monoLeft.inputControl)
      monoRight.out.link(rightOut.input)
      monoLeft.out.link(leftOut.input)
      
      # Auto-exposure ROI: initial position and size, plus movement limits and step
      roiPos = [0, 0]
      roiSize = [40, 40]
      roiPosMax = (monoRight.getResolutionWidth() - roiSize[0], monoRight.getResolutionHeight() - roiSize[1])
      roiStep = 10
      
      # Connect to device and start pipeline
      with dai.Device(pipeline) as device:
      
          # Output queues will be used to get the grayscale frames
          qRight = device.getOutputQueue(rightOut.getStreamName(), maxSize=4, blocking=False)
          qLeft = device.getOutputQueue(leftOut.getStreamName(), maxSize=4, blocking=False)
          controlQueue = device.getInputQueue(controlIn.getStreamName())
      
          while True:
              inRight = qRight.get()
              inLeft = qLeft.get()
              roiFrameR = inRight.getCvFrame()
              cv2.rectangle(roiFrameR, (roiPos[0], roiPos[1]), (roiPos[0] + roiSize[0], roiPos[1] + roiSize[1]), (0,0,0), 2)
              cv2.imshow("right", roiFrameR)
              cv2.imshow("left", inLeft.getCvFrame())
      
              # Update screen (1ms polling rate)
              key = cv2.waitKey(1)
              if key == ord('q'):
                  break
      
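              # 't'/'f'/'g'/'h' move the ROI up/left/down/right by roiStep pixels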
              elif key in [ord('t'), ord('f'), ord('g'), ord('h')]:
                  if key == ord('t'):
                      roiPos[1] -= roiStep
                  if key == ord('f'):
                      roiPos[0] -= roiStep
                  if key == ord('g'):
                      roiPos[1] += roiStep
                  if key == ord('h'):
                      roiPos[0] += roiStep
                  roiPos[0] = clamp(roiPos[0], 0, roiPosMax[0])
                  roiPos[1] = clamp(roiPos[1], 0, roiPosMax[1])
                  # print("Setting exposure ROI:", roiPos,  roiSize)
                  ctrl = dai.CameraControl()
                  ctrl.setAutoExposureRegion(roiPos[0], roiPos[1], roiSize[0], roiSize[1])
                  controlQueue.send(ctrl)

      The code above works for changing exposure as I expect. Moving the ROI changes the exposure, though in my environment, the actual exposure takes around 5 seconds to stabilize.

      I did not clearly explain the problem with this code in my previous post. Lines 45 and 46

          qRight = device.getOutputQueue(rightOut.getStreamName(), maxSize=4, blocking=False)
          qLeft = device.getOutputQueue(leftOut.getStreamName(), maxSize=4, blocking=False)

      get the queues for the outputs of the right and left cameras. Subsequent lines get and display the frames. And if the ROI later changes due to keyboard input, the setAutoExposureRegion call takes effect.

      But, as I said, I made a mistake and had lines 45 and 46 as follows (both queues end up pointing at the right camera):

          qRight = device.getOutputQueue(rightOut.getStreamName(), maxSize=4, blocking=False)
          qLeft = device.getOutputQueue(rightOut.getStreamName(), maxSize=4, blocking=False)

      With this simple bug, the setAutoExposureRegion call either does not happen or does not take effect. I remain puzzled.

      Apparently I am very bad at explaining problems. I discovered the "size" problem using the DepthAI example program mono_camera_control.py. As it comes with the download, lines 47 and 48 set the crop range, i.e.,

      topLeft = dai.Point2f(0.2, 0.2)
      bottomRight = dai.Point2f(0.8, 0.8)

      With those values the program works fine. However, I tried the following:

      topLeft = dai.Point2f(0.0, 0.0)
      bottomRight = dai.Point2f(1.0, 1.0)

      With no other changes in the code, I get the error I mentioned: [ImageManip(3)] [error] Output image is bigger (2764800B) than maximum frame size specified in properties (1048576B). Maybe I should not expect it to work.
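
      For concreteness, here is the size math as I understand it (just a sketch; manipRight stands for the example's ImageManip node, and the factor of 3 matches the 2764800B in the error):

          # 100% of a 720P frame at 3 bytes per pixel:
          # 1280 * 720 * 3 = 2764800 bytes, which matches the error message.
          maxBytes = monoRight.getResolutionWidth() * monoRight.getResolutionHeight() * 3
          manipRight.setMaxOutputFrameSize(maxBytes)
          # Yet the error reports a 1048576B (1 MiB) limit, as if this call
          # did not take effect for the full-frame crop.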

      Thanks!