robomick

  • Will FFC devices ever be supported?

    • jakaskerl Thanks Jaka, looks like a different pinout. No love unfortunately finding an IMX577 with that pinout. Bummer.

    • jakaskerl

      I contacted Kai Lap Technologies and they provided a datasheet for their IMX577. It notes the mating connector type is an AXE530124, with the following pinout. Is there anywhere I can check the schematic and pinout for the OAK-D-Lite boards to make sure they match?

      • Hey John, how did you go with this? We have a spare one of these boards and were thinking of trying to swap the camera for something a bit better for underwater imagery. It would be a super compact module if the camera could be upgraded to a different MIPI camera.

        I haven't looked into the PCB requirements yet. I assume the board is custom made for the camera, but if I could plug in an IMX577 like this, that would be incredible!

        I've also noticed a number of test points on the board for UART and I2C. Could we connect a sensor to these with a bit of soldering? If we could pass sensor readings through like the OAK-FFC-4P does, it would be awesome in this form factor. We want to add a barometer to our payload, and sharing the USB-C line would be handy!

        • Just thought I would share our current progress in case anyone else wants to use Luxonis for underwater robotics, machine learning, and photogrammetry. Our goal was a camera that costs less than 1K, with image quality similar to a GoPro (4K, 60FPS, pixel size greater than 1.5μm), but that we can connect to via USB to start/stop and run models. None of the prebuilt OAK cameras meet our lens spec requirements, so we went with the OAK-FFC-3P with an IMX577 module.

          Out of the box the IMX577 with the M12 lens was very green and performed poorly underwater. The suggested solution was to configure ColorCamera ISP values (e.g. increasing the exposure limit), but we didn't really want to increase the exposure time, as this also increases motion blur from waves and the motion of our robots, which in turn degrades the photogrammetry and machine vision quality.
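          For reference, this is roughly what the suggested ISP route looks like (a minimal sketch, assuming depthai 2.x; the exposure cap and white-balance temperature are placeholder values to tune, not settings we used):

              import depthai as dai

              pipeline = dai.Pipeline()
              cam = pipeline.create(dai.node.ColorCamera)
              cam.setBoardSocket(dai.CameraBoardSocket.CAM_A)

              # Cap auto-exposure (microseconds) so the ISP raises gain instead of
              # exposure time, trading some noise for less motion blur (placeholder value)
              cam.initialControl.setAutoExposureLimit(8000)

              # A manual white-balance temperature (Kelvin) can also pull back the
              # green cast; again only a starting point to tune
              cam.initialControl.setManualWhiteBalance(5500)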

          We instead made a custom lens mount and chose a lens optimised for the IMX577 sensor and the DOF we wanted, by putting our sensor specs into a calculator (Calculating the depth of field (DOF)). Our first test used a Theia SL410M lens, as it fit the specs we thought we wanted but also has a lot of adjustability to fine-tune the settings.
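          For anyone repeating the exercise, this is roughly the DOF math the calculator does. The focal length, f-number, and subject distance below are example values, and the circle of confusion is assumed to be ~2x the IMX577's 1.55μm pixel pitch; these are not our final lens settings:

              def hyperfocal(f_mm, n, coc_mm):
                  # Hyperfocal distance (mm) for focal length f_mm, f-number n, CoC coc_mm
                  return f_mm ** 2 / (n * coc_mm) + f_mm

              def dof_limits(f_mm, n, coc_mm, subject_mm):
                  # Near/far limits of acceptable focus (mm) at a given subject distance
                  h = hyperfocal(f_mm, n, coc_mm)
                  near = subject_mm * (h - f_mm) / (h + subject_mm - 2 * f_mm)
                  far = subject_mm * (h - f_mm) / (h - subject_mm) if subject_mm < h else float("inf")
                  return near, far

              # Example: 4mm lens at f/2.8, CoC 0.0031mm, subject at 1.5m
              print(dof_limits(4.0, 2.8, 0.0031, 1500))  # ~(828mm, 7961mm)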

          These images were taken at a depth of 1m - 4m with good visibility. Ignore the red shift (we didn't have an IR filter on the lens).

          Overall the image quality was a huge improvement, without needing to increase the exposure. When we compared the images to the GoPro 13, we found the fine detail was also a little better. The FOV is smaller than the GoPro's, but much of the GoPro's extra image area is unusable due to heavy distortion. The usable areas are quite similar, but with more detail from the Luxonis IMX577 with the upgraded lens.

          Now that we know the Theia SL410M works based on the optimised lens settings, we are moving to a more streamlined lens that takes up less space, with an IR filter (4mm FL, #33-300 + IR filter). We will let you know how it goes after some testing. We will share the CAD files if it works, in case someone else wants to do the same!

          All up this wasn't the cheapest. The cost so far: lens 377AUD + filter 60AUD + OAK-FFC-3P 240AUD + IMX577 120AUD + lens mounts 60AUD = 857AUD total.

          We are very keen for the OAK 4 SoM breakout board whenever it comes along. For our wishlist, we are really hoping it has a UART/I2C passthrough so we can send additional sensor data through the same USB-C line, along with 4 camera ports.

          • Did you manage to get workable underwater images? We got very similar results with the IMX577: very green compared to our other cameras. Do you know if it's possible to improve it enough to be usable?

            I've added a comparison image between the GoPro and the IMX577. We're going to try out the color tuning for low-light environments blob, but the green seems a little extreme, so I'm wondering if something else is going on?

            GoPro 11:

            Luxonis IMX577 on OAK-FFC-3P module running on a Jetson:
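            For the tuning blob, my understanding is that it just gets attached to the pipeline before the device is created. A minimal sketch, assuming depthai 2.x and a placeholder path to wherever you saved the downloaded blob:

                import depthai as dai

                pipeline = dai.Pipeline()
                # Placeholder path; point this at the low-light tuning blob
                pipeline.setCameraTuningBlobPath("/path/to/tuning_color_low_light.bin")
                cam = pipeline.create(dai.node.ColorCamera)
                # ... build the rest of the pipeline as usual, then:
                with dai.Device(pipeline) as device:
                    pass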

          • jakaskerl
            This IMX577 seems to be in a similar situation too, if you are updating the calibration files and documentation.

            Got it running though by adding the model name to the json file and the sensor name to the calibrate.py file on the development branch. Done for the day, but will try doing the full calibration tomorrow. Hope it works!
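            For anyone following along, the json edit is just adding a per-camera "model" key, something like this (assuming the key works the same for the IMX577 as in Jaka's OV9782 example further down):

                "CAM_C": {
                    "model": "IMX577",
                    "name": "right",
                    ...
                }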

            A few things seem strange:
            1. The left camera is in the middle, and the right is on the left.
            2. The text on the charuco board is super tiny for the IMX577 on the right.

          • jakaskerl

            cd depthai/
            git fetch --all
            git checkout develop
            git submodule update --init --recursive

            No love with python3 -m depthai_viewer, but then tried depth_preview.py and it's working great!

            depthai-python/examples/StereoDepth$ python3 depth_preview.py

            Woohoo! Thanks Jaka! This is useable!

            • Is this the expected result I should be seeing using the python3 -m depthai_viewer? Are the squares caused by it converting the RGB image to mono?

            • jakaskerl If you could post your step-by-step process for calibrating 2 x OV9782 cameras on an OAK-FFC-3P, I would appreciate it, so I can see where I have gone wrong. I have done a fresh install following the guide steps (except for the step where I have to edit the broken calibration file). Is there something else I need to edit so the OV9782 cameras work with the OAK-FFC-3P?

            • I thought I would try installing DepthAI on my PC so I could use the DepthAI viewer, as it doesn't run on our AUV Jetson.

              sudo wget -qO- https://docs.luxonis.com/install_depthai.sh | bash
              cd depthai
              git submodule update --init --recursive
              python3 install_requirements.py

              # Edited calibrate with:
              # sensorName = cam_info['sensorName']
              # print(f'Sensor name for {cam_info["name"]} is {sensorName}')
              cam_info['sensorName'] = "OV9782"

              # JSON file
              {
                  "board_config":
                  {
                      "name": "OAK-FCC-3P",
                      "revision": "R1M0E1",
                      "cameras": {
                          "CAM_C": {
                              "name": "right",
                              "hfov": 75,
                              "type": "color",
                              "extrinsics": {
                                  "to_cam": "CAM_B",
                                  "specTranslation": {
                                      "x": 7,
                                      "y": 0,
                                      "z": 0
                                  },
                                  "rotation": {
                                      "r": 0,
                                      "p": 0,
                                      "y": 0
                                  }
                              }
                          },
                          "CAM_B": {
                              "name": "left",
                              "hfov": 75,
                              "type": "color"
                          }
                      },
                      "stereo_config": {
                          "left_cam": "CAM_B",
                          "right_cam": "CAM_C"
                      }
                  }
              }

              python3 calibrate.py -s 4.30 --board OAK-FFC-3P.json -nx 13 -ny 7 -dsb rgb

              I setup a little jig with the cameras 70mm apart.

              After calibrating with the script, this was the DepthAI output. I am using the OAK-FFC-OV9782-M12, not the OV9282. It looks like something isn't handling the OV9782 images correctly.
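              To rule out the board, this quick check prints what the device firmware reports for each socket (a sketch, assuming depthai 2.x); if it says OV9282 for an OV9782 module, the mismatch is upstream of calibrate.py:

                  import depthai as dai

                  # Print the sensor the firmware detects on each connected socket
                  with dai.Device() as device:
                      for cam in device.getConnectedCameraFeatures():
                          print(cam.socket, cam.sensorName, cam.supportedTypes)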

            • jakaskerl
              After running git pull --recurse-submodules, running:

                  depthai-python/examples/StereoDepth$ python3 depth_preview.py
                  depthai-python/examples/StereoDepth$ python3 depth_preview_sr.py

            • That doesn't seem to work? Your calibration file might be different from mine?

              Here is line 356 from the calibration file:

              A colleague suggested hard-coding it in here instead, which allowed me to run calibration (edited from line 553):

                  cam_node.setBoardSocket(stringToCam[cam_id])
                  # sketchy hard code to check please remove
                  # sensorName = cam_info['sensorName']
                  cam_info['sensorName'] = "OV9782"

              Calibration ran.
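              A slightly safer version of the same hack, so it only remaps a missing or known-wrong name instead of clobbering every board (just a sketch for the same spot in create_pipeline):

                  # Only remap when the detected name is missing or the misreported OV9282
                  if cam_info.get('sensorName') in (None, 'OV9282'):
                      cam_info['sensorName'] = 'OV9782'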


              The results still look pretty sad, I think? I'm not sure what to do at the moment.

              Running this example code:

              #!/usr/bin/env python3
              import cv2
              import depthai as dai
              import numpy as np

              # Closer-in minimum depth, disparity range is doubled (from 95 to 190):
              extended_disparity = False
              # Better accuracy for longer distance, fractional disparity 32-levels:
              subpixel = False
              # Better handling for occlusions:
              lr_check = True

              # Create pipeline
              pipeline = dai.Pipeline()

              # Define sources and outputs
              monoLeft = pipeline.create(dai.node.MonoCamera)
              monoRight = pipeline.create(dai.node.MonoCamera)
              depth = pipeline.create(dai.node.StereoDepth)

              # Output nodes for disparity, left, and right images
              xoutDisparity = pipeline.create(dai.node.XLinkOut)
              xoutLeft = pipeline.create(dai.node.XLinkOut)
              xoutRight = pipeline.create(dai.node.XLinkOut)
              xoutDisparity.setStreamName("disparity")
              xoutLeft.setStreamName("left")
              xoutRight.setStreamName("right")

              # Properties
              monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
              monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)  # Use setBoardSocket with enum
              monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_720_P)
              monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)  # Use setBoardSocket with enum

              # Configure StereoDepth node
              depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
              depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
              depth.setLeftRightCheck(lr_check)
              depth.setExtendedDisparity(extended_disparity)
              depth.setSubpixel(subpixel)

              # Linking
              monoLeft.out.link(depth.left)
              monoRight.out.link(depth.right)
              depth.disparity.link(xoutDisparity.input)
              monoLeft.out.link(xoutLeft.input)
              monoRight.out.link(xoutRight.input)

              # Connect to device and start pipeline
              with dai.Device(pipeline) as device:
                  # Output queues for disparity, left, and right frames
                  qDisparity = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)
                  qLeft = device.getOutputQueue(name="left", maxSize=4, blocking=False)
                  qRight = device.getOutputQueue(name="right", maxSize=4, blocking=False)

                  while True:
                      inDisparity = qDisparity.get()  # blocking call, will wait until a new data has arrived
                      disparityFrame = inDisparity.getFrame()
                      # Normalization for better visualization of disparity
                      disparityFrame = (disparityFrame * (255 / depth.initialConfig.getMaxDisparity())).astype(np.uint8)

                      # Applying color map to disparity
                      disparityColor = cv2.applyColorMap(disparityFrame, cv2.COLORMAP_JET)

                      # Retrieve left and right camera frames
                      inLeft = qLeft.tryGet()
                      inRight = qRight.tryGet()
                      if inLeft:
                          leftFrame = inLeft.getCvFrame()
                          cv2.imshow("left", leftFrame)
                      if inRight:
                          rightFrame = inRight.getCvFrame()
                          cv2.imshow("right", rightFrame)

                      cv2.imshow("disparity", disparityFrame)
                      cv2.imshow("disparity_color", disparityColor)
                      if cv2.waitKey(1) == ord('q'):
                          break

              Updated the dataset images for review: https://www.dropbox.com/scl/fo/h9nav3q3y448q6mqkjrjf/h?rlkey=nikezpxqkgh68u1q3gd6x9b36&dl=0

            • jakaskerl

              Setting the camera type to color in the JSON file throws errors when running the calibration script? Are the OV9782 cameras supported on the OAK-FFC-3P? Maybe they have only been tested as the RGB camera when calibrating? They show up in the file as both mono and RGB?

              This is the error:

                  python3 calibrate.py -s 4.30 --board OAK-FFC-3P.json -nx 13 -ny 7 -dsb rgb
                  Cam: left and focus: False
                  Cam: right and focus: False
                  Saving dataset to: dataset
                  Sensor name for right is OV9282
                  Traceback (most recent call last):
                    File "calibrate.py", line 1171, in <module>
                      Main().run()
                    File "calibrate.py", line 1160, in run
                      self.startPipeline()
                    File "calibrate.py", line 460, in startPipeline
                      pipeline = self.create_pipeline()
                    File "calibrate.py", line 551, in create_pipeline
                      cam_node.setResolution(camToRgbRes[cam_info['sensorName'].upper()])
                  KeyError: 'OV9282'


              For this JSON file:

                  {
                      "board_config": {
                          "name": "OAK-FCC-3P",
                          "revision": "R3M0E3",
                          "cameras": {
                              "CAM_C": {
                                  "name": "right",
                                  "hfov": 75,
                                  "type": "color",
                                  "extrinsics": {
                                      "to_cam": "CAM_B",
                                      "specTranslation": {
                                          "x": 6,
                                          "y": 0,
                                          "z": 0
                                      },
                                      "rotation": {
                                          "r": 0,
                                          "p": 0,
                                          "y": 0
                                      }
                                  }
                              },
                              "CAM_B": {
                                  "name": "left",
                                  "hfov": 75,
                                  "type": "color"
                              }
                          },
                          "stereo_config": {
                              "left_cam": "CAM_B",
                              "right_cam": "CAM_C"
                          }
                      }
                  }
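              Reading the traceback, the KeyError just means calibrate.py's sensor-name-to-resolution lookup (the camToRgbRes dict used at line 551) has no entry for the name the device reports. A hypothetical one-line patch, assuming THE_800_P matches the OV9782's native 1280x800 like the script's other entries:

                  # Hypothetical patch in calibrate.py: add the reported name to the lookup
                  camToRgbRes['OV9282'] = dai.ColorCameraProperties.SensorResolution.THE_800_P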

            • jakaskerl Just checking in to see if you are getting similar results when calibrating, or if I am not following the guide correctly, or have a faulty board? Not sure if I should continue down this path?

            • Also, thanks for the fast responses. I am on a field trip a bit off the coast of Australia, so the timezones and internet access are a bit wacky.

              We're going to try to use the stereo and just bag the raw mono output from the OAK so we can at least get some data. Hopefully we can fix the depth data output after the mission.
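              In case it helps anyone, this is roughly how we plan to dump the raw mono frames to disk (a sketch, assuming depthai 2.x; the dump/ folder and PNG naming are arbitrary choices):

                  import os
                  import cv2
                  import depthai as dai

                  os.makedirs("dump", exist_ok=True)
                  pipeline = dai.Pipeline()
                  for name, socket in [("left", dai.CameraBoardSocket.LEFT),
                                       ("right", dai.CameraBoardSocket.RIGHT)]:
                      mono = pipeline.create(dai.node.MonoCamera)
                      mono.setBoardSocket(socket)
                      xout = pipeline.create(dai.node.XLinkOut)
                      xout.setStreamName(name)
                      mono.out.link(xout.input)

                  with dai.Device(pipeline) as device:
                      queues = {n: device.getOutputQueue(n, maxSize=4, blocking=False)
                                for n in ("left", "right")}
                      i = 0
                      while True:
                          for n, q in queues.items():
                              frame = q.tryGet()
                              if frame is not None:
                                  # Save the unrectified mono frame for post-mission processing
                                  cv2.imwrite(f"dump/{n}_{i:06d}.png", frame.getCvFrame())
                          i += 1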

            • erik

              So the original code without swapping:

              And with the inputs swapped:

                  monoLeft.out.link(depth.right)
                  monoRight.out.link(depth.left)

              This is the code I am running to view the stereo:

              #!/usr/bin/env python3
              import cv2
              import depthai as dai
              import numpy as np

              # Closer-in minimum depth, disparity range is doubled (from 95 to 190):
              extended_disparity = False
              # Better accuracy for longer distance, fractional disparity 32-levels:
              subpixel = False
              # Better handling for occlusions:
              lr_check = True

              # Create pipeline
              pipeline = dai.Pipeline()

              # Define sources and outputs
              monoLeft = pipeline.create(dai.node.MonoCamera)
              monoRight = pipeline.create(dai.node.MonoCamera)
              depth = pipeline.create(dai.node.StereoDepth)

              # Output nodes for disparity, left, and right images
              xoutDisparity = pipeline.create(dai.node.XLinkOut)
              xoutLeft = pipeline.create(dai.node.XLinkOut)
              xoutRight = pipeline.create(dai.node.XLinkOut)
              xoutDisparity.setStreamName("disparity")
              xoutLeft.setStreamName("left")
              xoutRight.setStreamName("right")

              # Properties
              monoLeft.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
              monoLeft.setBoardSocket(dai.CameraBoardSocket.LEFT)  # Use setBoardSocket with enum
              monoRight.setResolution(dai.MonoCameraProperties.SensorResolution.THE_400_P)
              monoRight.setBoardSocket(dai.CameraBoardSocket.RIGHT)  # Use setBoardSocket with enum

              # Configure StereoDepth node
              depth.setDefaultProfilePreset(dai.node.StereoDepth.PresetMode.HIGH_DENSITY)
              depth.initialConfig.setMedianFilter(dai.MedianFilter.KERNEL_7x7)
              depth.setLeftRightCheck(lr_check)
              depth.setExtendedDisparity(extended_disparity)
              depth.setSubpixel(subpixel)

              # Linking
              monoLeft.out.link(depth.left)
              monoRight.out.link(depth.right)
              depth.disparity.link(xoutDisparity.input)
              monoLeft.out.link(xoutLeft.input)
              monoRight.out.link(xoutRight.input)

              # Connect to device and start pipeline
              with dai.Device(pipeline) as device:
                  # Output queues for disparity, left, and right frames
                  qDisparity = device.getOutputQueue(name="disparity", maxSize=4, blocking=False)
                  qLeft = device.getOutputQueue(name="left", maxSize=4, blocking=False)
                  qRight = device.getOutputQueue(name="right", maxSize=4, blocking=False)

                  while True:
                      inDisparity = qDisparity.get()  # blocking call, will wait until a new data has arrived
                      disparityFrame = inDisparity.getFrame()
                      # Normalization for better visualization of disparity
                      disparityFrame = (disparityFrame * (255 / depth.initialConfig.getMaxDisparity())).astype(np.uint8)

                      # Applying color map to disparity
                      disparityColor = cv2.applyColorMap(disparityFrame, cv2.COLORMAP_JET)

                      # Retrieve left and right camera frames
                      inLeft = qLeft.tryGet()
                      inRight = qRight.tryGet()
                      if inLeft:
                          leftFrame = inLeft.getCvFrame()
                          cv2.imshow("left", leftFrame)
                      if inRight:
                          rightFrame = inRight.getCvFrame()
                          cv2.imshow("right", rightFrame)

                      cv2.imshow("disparity", disparityFrame)
                      cv2.imshow("disparity_color", disparityColor)
                      if cv2.waitKey(1) == ord('q'):
                          break

              Here are the OAK calibration images, the OAK-FFC-3P json file, and the stereo_test.py file. Let me know if there are any issues accessing them.
              Dropbox to images

            • The example notes CAM_C as right, so I have it connected to CAM_R, and CAM_B as left, so I have it connected to CAM_L. The guide notes a positive translation from C to B, so I entered positive 6. I think that is all correct?



              The camera text is upside down on the camera modules, but the cameras appear the correct way up in testing, so I assume that's not critical?
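              If the orientation ever did matter, my understanding is each camera node can flip its own output, e.g. (assuming depthai 2.x, using the node names from the earlier depth_preview-style script):

                  # Rotate a camera's output if the module is mounted upside down
                  monoLeft.setImageOrientation(dai.CameraImageOrientation.ROTATE_180_DEG)
                  monoRight.setImageOrientation(dai.CameraImageOrientation.ROTATE_180_DEG)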

            • Hi Oak Community,

              I've bought an OAK-FFC-3P and two OAK-FFC OV9782 global-shutter cameras from the list of "cameras compatible with this OAK FFC baseboard". It's running on a Jetson Orin Nano with 8GB RAM.

              Can anyone see where I am going wrong? I think I have followed the steps correctly?

              There is no CAM_B or CAM_C labelled on the OAK-FFC-3P like in the calibration tutorial, but I assume that CAM_L is CAM_B and CAM_R is CAM_C. It notes R3M0E3 on the OAK-FFC-3P, so I have used that as the revision.

              The cameras are spaced 6cm apart.

              This is the calibration json file (I've just replaced the hfov and the x translation, as in the example):
              {
                  "board_config": {
                      "name": "OAK-FCC-3P",
                      "revision": "R3M0E3",
                      "cameras": {
                          "CAM_C": {
                              "name": "right",
                              "hfov": 75,
                              "type": "mono",
                              "extrinsics": {
                                  "to_cam": "CAM_B",
                                  "specTranslation": {
                                      "x": 6,
                                      "y": 0,
                                      "z": 0
                                  },
                                  "rotation": {
                                      "r": 0,
                                      "p": 0,
                                      "y": 0
                                  }
                              }
                          },
                          "CAM_B": {
                              "name": "left",
                              "hfov": 75,
                              "type": "mono"
                          }
                      },
                      "stereo_config": {
                          "left_cam": "CAM_B",
                          "right_cam": "CAM_C"
                      }
                  }
              }

              I run it like this (with no RGB camera):
              python3 calibrate.py -s 3.4 --board OAK-FFC-3P.json -nx 11 -ny 8 -dsb rgb

              The EEPROM flashes successfully. These are the calibration dots and some example images saved in the dataset folder:

              This is the resulting depth output running depth_preview.py:

              And this is the depth_preview_sr.py output:

              Trying depth_preview_lr.py just errors:

                  [14442C10619FCAD600] [1.2.4] [1.838] [ColorCamera(1)] [error] Camera not detected on socket: 0
                  [14442C10619FCAD600] [1.2.4] [1.838] [ColorCamera(0)] [warning] Unsupported resolution set for detected camera OV9782, needs 800_P or 720_P. Defaulting to 800_P
                  [14442C10619FCAD600] [1.2.4] [1.839] [ColorCamera(2)] [warning] Unsupported resolution set for detected camera OV9782, needs 800_P or 720_P. Defaulting to 800_P
                  [14442C10619FCAD600] [1.2.4] [2.023] [StereoDepth(4)] [error] Left input image width ('854') must be multiple of 16. Skipping frame!
                  [14442C10619FCAD600] [1.2.4] [2.039] [StereoDepth(4)] [error] Left input image width ('854') must be multiple of 16. Skipping frame!
                  [14442C10619FCAD600] [1.2.4] [2.072] [StereoDepth(4)] [error] Left input image width ('854') must be multiple of 16. Skipping frame!
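              The width error at least looks explainable: 854 is not a multiple of 16 (16 x 53 = 848), so StereoDepth drops every frame. One possible workaround (a sketch, untested) is resizing to a compliant width with an ImageManip node before the stereo node, using the node names from the earlier script:

                  # Resize the left stream to 848x480 (848 = 16 * 53) before StereoDepth;
                  # repeat the same for the right stream
                  manipLeft = pipeline.create(dai.node.ImageManip)
                  manipLeft.initialConfig.setResize(848, 480)
                  monoLeft.out.link(manipLeft.inputImage)
                  manipLeft.out.link(depth.left)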

              The /depthai-experiments/gen2-camera-demo seems to get slightly better results, but still not close to the demo quality?

              The point cloud generated using open3D:

              This is sad quality compared to the example demos and won't be viable 😰 Not sure how to improve this?

              • Hi @robomick
                Go to the luxonis/depthai develop branch, pull the changes (recursively). Then manually
                set the sensor names in the .json file:

                {
                    "board_config":
                    {
                        "name": "Custom FFC",
                        "revision": "R1M0E1",
                        "cameras":{
                            "CAM_A": {
                                "model": "OV9782",
                                "name": "right",
                                "hfov": 71.86,
                                "type": "color",
                                "extrinsics": {
                                    "to_cam": "CAM_D",
                                    "specTranslation": {
                                        "x": 10.95,
                                        "y": 0,
                                        "z": 0
                                    },
                                    "rotation":{
                                        "r": 0,
                                        "p": 0,
                                        "y": 0
                                    }
                                }
                            },
                            "CAM_D": {
                                "model": "OV9782",
                                "name": "left",
                                "hfov": 71.86,
                                "type": "color"
                            }
                        },
                        "stereo_config":{
                            "left_cam": "CAM_D",
                            "right_cam": "CAM_A"
                        }
                    }
                }

                Then retry the calibration.

                Thanks,
                Jaka

            • Hi Luxonis community,
              Total beginner.
              I have ordered an OAK-FFC-3P and two 1MP OV9782 global-shutter camera modules. I also have an Arducam 20MP IMX283 USB 3.0 camera and want to connect both to a Jetson Orin Nano.
              Is it possible to calibrate the stereo camera with reference to the USB camera (as the RGB camera) using the Depth Calibration guide?
              I am hoping to compute depth data using the OAK stereo and IMU data, to reference against the RGB image I'm getting from my USB camera.
              Any advice would be appreciated, thanks!