I'm using an OAK-1 auto-focus camera with an IMX378 sensor.

As I need full-resolution images, I'm using still images from the camera, but I'm only getting around 10 FPS instead of 30 FPS (docs here).

Is there anything I'm missing? How do I reach 30 FPS at the sensor's full resolution?

This is the minimal code to recreate it:

import datetime
import depthai as dai

pipeline = dai.Pipeline()

# Configure camera
cam_rgb = pipeline.createColorCamera()
cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
cam_rgb.setFps(30)

# Configure XLinkOut (to send data from device to host)
xout_rgb = pipeline.createXLinkOut()
xout_rgb.setStreamName('rgb')

# Script node
script = pipeline.create(dai.node.Script)
script.setScript('''
    ctrl = CameraControl()
    ctrl.setCaptureStill(True)
    while True:
        node.io['out'].send(ctrl)
''')
# Connections: script -> camera control (to keep triggering still captures); camera still output -> XLinkOut, so frames are sent to the host
script.outputs['out'].link(cam_rgb.inputControl)
cam_rgb.still.link(xout_rgb.input)

with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:
    queue_rgb = device.getOutputQueue('rgb', maxSize=60, blocking=False)

    tic = datetime.datetime.now()
    while device.isPipelineRunning():
        # Try to fetch data from the queue. Returns a list of all queued packets (empty if there are none)
        queue_frames = queue_rgb.tryGetAll()

        if len(queue_frames) > 0:
            toc = datetime.datetime.now()
            dt = (toc - tic)
            fps = len(queue_frames) / dt.total_seconds()
            print(f'Received {len(queue_frames)} frames in {str(dt)} --> Rate: {fps:.2f} fps')
            tic = toc

Output:

Received 1 frames in 0:00:01.010883 --> Rate: 0.99 fps
Received 2 frames in 0:00:00.182417 --> Rate: 10.96 fps
Received 1 frames in 0:00:00.095321 --> Rate: 10.49 fps
Received 2 frames in 0:00:00.186997 --> Rate: 10.70 fps
Received 2 frames in 0:00:00.182548 --> Rate: 10.96 fps
Received 2 frames in 0:00:00.185900 --> Rate: 10.76 fps
Received 1 frames in 0:00:00.091477 --> Rate: 10.93 fps
Received 2 frames in 0:00:00.189675 --> Rate: 10.54 fps
Received 1 frames in 0:00:00.095233 --> Rate: 10.50 fps
Received 2 frames in 0:00:00.203503 --> Rate: 9.83 fps
Received 2 frames in 0:00:00.180017 --> Rate: 11.11 fps
Received 1 frames in 0:00:00.099033 --> Rate: 10.10 fps
...

Hi @AlanElkin
Adding pipeline.setXLinkChunkSize(0) should speed up processing and the XLink transport.
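
For reference, a minimal sketch of where the call goes. It is a pipeline-level setting, so it can go anywhere before the device is created (node setup as in your original snippet):

import depthai as dai

pipeline = dai.Pipeline()
# ... create and link the ColorCamera / Script / XLinkOut nodes as in the original code ...

# Send each XLink packet whole instead of splitting it into smaller chunks,
# which lowers the per-frame transfer overhead on the device -> host link
pipeline.setXLinkChunkSize(0)

with dai.Device(pipeline) as device:
    pass  # run the host loop as before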

Thanks,
Jaka

    Hi jakaskerl

    I've tried adding pipeline.setXLinkChunkSize(0) and the results improved (thanks!), but the FPS is still short of the expected 30 FPS. Anything else I should try, or any thoughts on this?

    This is the new output:

    Received 1 frames in 0:00:00.903939 --> Rate: 1.11 fps
    Received 1 frames in 0:00:00.061003 --> Rate: 16.39 fps
    Received 1 frames in 0:00:00.052928 --> Rate: 18.89 fps
    Received 1 frames in 0:00:00.057465 --> Rate: 17.40 fps
    Received 2 frames in 0:00:00.110694 --> Rate: 18.07 fps
    Received 1 frames in 0:00:00.055616 --> Rate: 17.98 fps
    Received 2 frames in 0:00:00.112372 --> Rate: 17.80 fps
    Received 1 frames in 0:00:00.064235 --> Rate: 15.57 fps
    Received 1 frames in 0:00:00.066208 --> Rate: 15.10 fps
    Received 2 frames in 0:00:00.102976 --> Rate: 19.42 fps
    Received 2 frames in 0:00:00.115193 --> Rate: 17.36 fps
    Received 1 frames in 0:00:00.054441 --> Rate: 18.37 fps
    Received 2 frames in 0:00:00.122025 --> Rate: 16.39 fps
    Received 1 frames in 0:00:00.053625 --> Rate: 18.65 fps
    ...

    Hi @AlanElkin
    The only thing I changed is the chunk size:

    Received 1 frames in 0:00:00.033700 --> Rate: 29.67 fps
    Received 1 frames in 0:00:00.034360 --> Rate: 29.10 fps
    Received 1 frames in 0:00:00.033957 --> Rate: 29.45 fps
    Received 1 frames in 0:00:00.031237 --> Rate: 32.01 fps
    Received 1 frames in 0:00:00.034367 --> Rate: 29.10 fps
    Received 1 frames in 0:00:00.033948 --> Rate: 29.46 fps
    Received 1 frames in 0:00:00.032999 --> Rate: 30.30 fps
    Received 1 frames in 0:00:00.034044 --> Rate: 29.37 fps
    Received 1 frames in 0:00:00.032406 --> Rate: 30.86 fps
    Received 1 frames in 0:00:00.034284 --> Rate: 29.17 fps
    Received 1 frames in 0:00:00.032633 --> Rate: 30.64 fps
    Received 1 frames in 0:00:00.033902 --> Rate: 29.50 fps
    Received 1 frames in 0:00:00.034529 --> Rate: 28.96 fps
    Received 1 frames in 0:00:00.032338 --> Rate: 30.92 fps
    Received 1 frames in 0:00:00.033070 --> Rate: 30.24 fps
    Received 1 frames in 0:00:00.033563 --> Rate: 29.79 fps

    Are you using a PoE device or a slow USB cable by any chance? You can check with luxonis/depthai-experiments/blob/master/random-scripts/oak_bandwidth_test.py
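
    If it helps, you can also print the negotiated link speed directly (a small sketch, reusing the pipeline from your script; getUsbSpeed() is part of the depthai Device API):

    with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:
        # SUPER = USB3 gen1 (5 Gbps), SUPER_PLUS = USB3 gen2 (10 Gbps)
        print('Negotiated USB speed:', device.getUsbSpeed())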

    Thanks,
    Jaka

      Hi jakaskerl and thanks again for your reply.

      I'm using the USB cable provided by Luxonis and connecting my OAK-1 directly to a USB 3 port on my laptop, with no PoE or hub in between.

      I've run this test and got this:

      Downlink 2134.4 mbps
      Uplink 1995.6 mbps

      I've also run this test, which outputs:

      UsbSpeed.SUPER
      Latency: 38.18 ms, Average latency: 38.18 ms, Std: 0.00
      Latency: 38.55 ms, Average latency: 38.37 ms, Std: 0.18
      Latency: 36.84 ms, Average latency: 37.86 ms, Std: 0.73
      Latency: 33.30 ms, Average latency: 36.72 ms, Std: 2.07
      Latency: 33.05 ms, Average latency: 35.98 ms, Std: 2.36
      Latency: 33.68 ms, Average latency: 35.60 ms, Std: 2.32
      Latency: 33.78 ms, Average latency: 35.34 ms, Std: 2.24
      Latency: 33.11 ms, Average latency: 35.06 ms, Std: 2.22
      Latency: 33.29 ms, Average latency: 34.86 ms, Std: 2.17
      Latency: 33.74 ms, Average latency: 34.75 ms, Std: 2.09
      Latency: 33.35 ms, Average latency: 34.62 ms, Std: 2.03
      Latency: 33.63 ms, Average latency: 34.54 ms, Std: 1.96
      Latency: 33.40 ms, Average latency: 34.45 ms, Std: 1.91
      Latency: 33.16 ms, Average latency: 34.36 ms, Std: 1.87
      Latency: 33.56 ms, Average latency: 34.31 ms, Std: 1.82
      Latency: 33.30 ms, Average latency: 34.24 ms, Std: 1.78
      Latency: 33.14 ms, Average latency: 34.18 ms, Std: 1.74
      Latency: 33.41 ms, Average latency: 34.14 ms, Std: 1.70
      Latency: 33.25 ms, Average latency: 34.09 ms, Std: 1.67
      Latency: 33.16 ms, Average latency: 34.04 ms, Std: 1.64
      Latency: 33.72 ms, Average latency: 34.03 ms, Std: 1.60
      Latency: 33.50 ms, Average latency: 34.00 ms, Std: 1.57
      Latency: 34.27 ms, Average latency: 34.02 ms, Std: 1.54
      Latency: 33.98 ms, Average latency: 34.01 ms, Std: 1.50
      Latency: 33.65 ms, Average latency: 34.00 ms, Std: 1.47
      Latency: 33.92 ms, Average latency: 34.00 ms, Std: 1.45
      ...

      I'm having trouble interpreting the reference values in the docs. Are these transfer rates and latencies as expected?

      Hi @AlanElkin
      Yeah, I get the same.
      Are you setting with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:?

      The script you sent above has SUPER_PLUS enabled; I'm just double-checking, because removing it gives FPS similar to what you're seeing.

      Thanks,
      Jaka

        Hi jakaskerl

        Again, thanks for your reply.

        I'm using exactly the same code as in the original post above, with the addition of pipeline.setXLinkChunkSize(0).

        Just in case, this is the complete code:

        import datetime
        import depthai as dai
        
        pipeline = dai.Pipeline()
        
        # Configure camera
        cam_rgb = pipeline.createColorCamera()
        cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
        cam_rgb.setFps(30)
        
        # Configure XLinkOut (to send data from device to host)
        xout_rgb = pipeline.createXLinkOut()
        xout_rgb.setStreamName('rgb')
        
        # Script node
        script = pipeline.create(dai.node.Script)
        script.setScript('''
            ctrl = CameraControl()
            ctrl.setCaptureStill(True)
            while True:
                node.io['out'].send(ctrl)
        ''')
        # Connections: script -> camera control (to keep triggering still captures); camera still output -> XLinkOut, so frames are sent to the host
        script.outputs['out'].link(cam_rgb.inputControl)
        cam_rgb.still.link(xout_rgb.input)
        
        # Disable chunking for higher FPS (chunk size is for splitting device-sent XLink packets)
        pipeline.setXLinkChunkSize(0)
        
        with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:
            queue_rgb = device.getOutputQueue('rgb', maxSize=60, blocking=False)
        
            tic = datetime.datetime.now()
            while device.isPipelineRunning():
                # Try to fetch data from the queue. Returns a list of all queued packets (empty if there are none)
                queue_frames = queue_rgb.tryGetAll()
        
                if len(queue_frames) > 0:
                    toc = datetime.datetime.now()
                    dt = (toc - tic)
                    fps = len(queue_frames) / dt.total_seconds()
                    print(f'Received {len(queue_frames)} frames in {str(dt)} --> Rate: {fps:.2f} fps')
                    tic = toc
        
                # for queue_frame in queue_frames:
                #     msg = f'  [{queue_frame.getSequenceNum()}] '
                #     msg += f'Exposure: {queue_frame.getExposureTime().total_seconds()*1000:.3f}ms, '
                #     print(msg)

        Is there anything else I can do to fix this? Or any other print you'd like to see?

        Just in case, this is my virtual environment (output of pip list):

        Package                   Version
        ------------------------- --------------
        AHRS                      0.3.1
        anyio                     4.3.0
        argon2-cffi               23.1.0
        argon2-cffi-bindings      21.2.0
        arrow                     1.3.0
        asttokens                 2.4.1
        async-lru                 2.0.4
        attrs                     23.2.0
        av                        12.0.0
        Babel                     2.14.0
        beautifulsoup4            4.12.3
        bleach                    6.1.0
        blobconverter             1.4.3
        certifi                   2024.2.2
        cffi                      1.16.0
        charset-normalizer        2.0.12
        comm                      0.2.2
        contourpy                 1.2.1
        cycler                    0.12.1
        debugpy                   1.8.1
        decorator                 5.1.1
        defusedxml                0.7.1
        Deprecated                1.2.14
        depthai                   2.25.0.0
        depthai-pipeline-graph    0.0.5
        depthai-sdk               1.13.1
        depthai-viewer            0.1.8
        distinctipy               1.3.4
        exceptiongroup            1.2.0
        executing                 2.0.1
        fastjsonschema            2.19.1
        ffmpy3                    0.2.4
        fonttools                 4.51.0
        fqdn                      1.5.1
        h11                       0.14.0
        httpcore                  1.0.5
        httpx                     0.27.0
        idna                      3.6
        ipykernel                 6.29.4
        ipython                   8.23.0
        isoduration               20.11.0
        jedi                      0.19.1
        Jinja2                    3.1.3
        json5                     0.9.24
        jsonpointer               2.4
        jsonschema                4.21.1
        jsonschema-specifications 2023.12.1
        jupyter_client            8.6.1
        jupyter_core              5.7.2
        jupyter-events            0.10.0
        jupyter-lsp               2.2.4
        jupyter_server            2.13.0
        jupyter_server_terminals  0.5.3
        jupyterlab                4.1.5
        jupyterlab_pygments       0.3.0
        jupyterlab_server         2.25.4
        kiwisolver                1.4.5
        lz4                       4.3.3
        MarkupSafe                2.1.5
        marshmallow               3.17.0
        matplotlib                3.9.0
        matplotlib-inline         0.1.6
        mcap                      1.1.1
        mcap-ros1-support         0.0.8
        mistune                   3.0.2
        nbclient                  0.10.0
        nbconvert                 7.16.3
        nbformat                  5.10.3
        nest-asyncio              1.6.0
        notebook_shim             0.2.4
        numpy                     1.26.4
        opencv-contrib-python     4.5.5.62
        overrides                 7.7.0
        packaging                 24.0
        pandas                    2.2.2
        pandocfilters             1.5.1
        parso                     0.8.3
        pexpect                   4.9.0
        pillow                    10.3.0
        pip                       24.0
        platformdirs              4.2.0
        plotly                    5.22.0
        prometheus_client         0.20.0
        prompt-toolkit            3.0.43
        psutil                    5.9.8
        ptyprocess                0.7.0
        pure-eval                 0.2.2
        pyarrow                   10.0.1
        pycparser                 2.22
        Pygments                  2.17.2
        pyparsing                 3.1.2
        PyQt5                     5.15.5
        PyQt5-Qt5                 5.15.2
        PyQt5-sip                 12.13.0
        python-dateutil           2.9.0.post0
        python-json-logger        2.0.7
        pytube                    15.0.0
        PyTurboJPEG               1.6.4
        pytz                      2024.1
        pyusb                     1.2.1
        PyYAML                    6.0.1
        pyzmq                     25.1.2
        Qt.py                     1.3.10
        referencing               0.34.0
        requests                  2.26.0
        rfc3339-validator         0.1.4
        rfc3986-validator         0.1.1
        rosbags                   0.9.11
        rpds-py                   0.18.0
        ruamel.yaml               0.18.6
        ruamel.yaml.clib          0.2.8
        scipy                     1.13.0
        Send2Trash                1.8.2
        sentry-sdk                1.21.0
        setuptools                69.0.3
        six                       1.16.0
        sniffio                   1.3.1
        soupsieve                 2.5
        stack-data                0.6.3
        tenacity                  8.3.0
        terminado                 0.18.1
        tinycss2                  1.2.1
        tomli                     2.0.1
        tornado                   6.4
        traitlets                 5.14.2
        types-pyside2             5.15.2.1.7
        types-python-dateutil     2.9.0.20240316
        typing_extensions         4.10.0
        tzdata                    2024.1
        uri-template              1.3.0
        urllib3                   1.26.18
        wcwidth                   0.2.13
        webcolors                 1.13
        webencodings              0.5.1
        websocket-client          1.7.0
        wheel                     0.42.0
        wrapt                     1.16.0
        xmltodict                 0.13.0
        zoomUtils                 0.0.0
        zstandard                 0.22.0

        @AlanElkin The still output isn't meant for fast captures; you can use the isp output for that, which should give better latency/FPS. But USB hubs do add additional overhead:

        • I get around 17 FPS when using OAK USB -> USB hub -> Mac
        • I get around 30 FPS when using OAK USB -> Mac directly
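
        For reference, a minimal sketch of the isp variant (an illustration assuming the same 12 MP / 30 FPS setup as your script; on the host, ImgFrame.getCvFrame() converts the YUV isp frame to a BGR array and needs OpenCV and numpy installed):

        import cv2
        import depthai as dai

        pipeline = dai.Pipeline()

        cam_rgb = pipeline.createColorCamera()
        cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
        cam_rgb.setFps(30)

        xout_rgb = pipeline.createXLinkOut()
        xout_rgb.setStreamName('rgb')
        cam_rgb.isp.link(xout_rgb.input)  # continuous full-resolution frames, no CameraControl needed

        pipeline.setXLinkChunkSize(0)     # disable XLink chunking

        with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:
            queue_rgb = device.getOutputQueue('rgb', maxSize=4, blocking=False)
            while True:
                frame = queue_rgb.get()      # blocking: one ImgFrame per iteration
                bgr = frame.getCvFrame()     # YUV isp frame -> BGR numpy array
                cv2.imshow('isp', cv2.resize(bgr, None, fx=0.25, fy=0.25))
                if cv2.waitKey(1) == ord('q'):
                    break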
          4 days later

          Hi erik and jakaskerl,

          Thanks for your suggestions. I've tried using isp instead, but I still get a low FPS, with no USB hub at all.

          This is my test code (please check the first if-statement):

          import datetime
          import depthai as dai
          
          pipeline = dai.Pipeline()
          
          # Configure camera
          cam_rgb = pipeline.createColorCamera()
          cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_12_MP)
          cam_rgb.setFps(30)
          
          # Configure XLinkOut (to send data from device to host)
          xout_rgb = pipeline.createXLinkOut()
          xout_rgb.setStreamName('rgb')
          
          if False:
              # Script node
              script = pipeline.create(dai.node.Script)
              script.setScript('''
                  ctrl = CameraControl()
                  ctrl.setCaptureStill(True)
                  while True:
                      node.io['out'].send(ctrl)
              ''')
              # Connections: script -> camera control (to keep triggering still captures); camera still output -> XLinkOut, so frames are sent to the host
              script.outputs['out'].link(cam_rgb.inputControl)
              cam_rgb.still.link(xout_rgb.input)
          
          else:
              cam_rgb.isp.link(xout_rgb.input)
          
          # Disable chunking for higher FPS (chunk size is for splitting device-sent XLink packets)
          pipeline.setXLinkChunkSize(0)
          
          with dai.Device(pipeline, maxUsbSpeed=dai.UsbSpeed.SUPER_PLUS) as device:
              queue_rgb = device.getOutputQueue('rgb', maxSize=60, blocking=False)
          
              tic = datetime.datetime.now()
              while device.isPipelineRunning():
                  # Try to fetch data from the queue. Returns a list of all queued packets (empty if there are none)
                  queue_frames = queue_rgb.tryGetAll()
          
                  if len(queue_frames) > 0:
                      toc = datetime.datetime.now()
                      dt = (toc - tic)
                      fps = len(queue_frames) / dt.total_seconds()
                      print(f'Received {len(queue_frames)} frames in {str(dt)} --> Rate: {fps:.2f} fps')
                      tic = toc

          This is the output:

          Received 2 frames in 0:00:00.309474 --> Rate: 6.46 fps
          Received 2 frames in 0:00:00.102032 --> Rate: 19.60 fps
          Received 3 frames in 0:00:00.163704 --> Rate: 18.33 fps
          Received 2 frames in 0:00:00.126305 --> Rate: 15.83 fps
          Received 2 frames in 0:00:00.126683 --> Rate: 15.79 fps
          Received 3 frames in 0:00:00.163115 --> Rate: 18.39 fps
          Received 2 frames in 0:00:00.101160 --> Rate: 19.77 fps
          Received 2 frames in 0:00:00.100358 --> Rate: 19.93 fps
          Received 2 frames in 0:00:00.106135 --> Rate: 18.84 fps
          Received 3 frames in 0:00:00.145875 --> Rate: 20.57 fps
          Received 3 frames in 0:00:00.151619 --> Rate: 19.79 fps
          Received 1 frames in 0:00:00.053159 --> Rate: 18.81 fps
          Received 3 frames in 0:00:00.151166 --> Rate: 19.85 fps
          Received 3 frames in 0:00:00.151327 --> Rate: 19.82 fps
          Received 2 frames in 0:00:00.101480 --> Rate: 19.71 fps
          Received 2 frames in 0:00:00.114627 --> Rate: 17.45 fps
          ...

          In Jaka's output, every line shows Received 1 frames, but I sometimes get 2 or 3 at a time. Could that have anything to do with this issue?
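
          Side note on the measurement itself: the grouping comes from tryGetAll() draining whatever has queued since the last poll, so the count per print mostly reflects host loop timing. A per-frame variant would look roughly like this (a sketch, assuming the device and queue_rgb from the script above):

          import datetime

          timestamps = []
          while device.isPipelineRunning():
              queue_rgb.get()                # blocking: exactly one ImgFrame per iteration
              timestamps.append(datetime.datetime.now())
              timestamps = timestamps[-30:]  # rolling window of the last 30 arrivals
              if len(timestamps) > 1:
                  span = (timestamps[-1] - timestamps[0]).total_seconds()
                  print(f'FPS over last {len(timestamps)} frames: {(len(timestamps) - 1) / span:.2f}')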

          Hi @AlanElkin ,
          Could you try some other host computer as well? I'd suspect the USB hub/card in the computer itself.

            Hi erik ,

            I've tried running it on 2 other computers (so 3 hosts tested in total), all Ubuntu based. To rule out any differences coming from module/package versions, I used the Luxonis Docker image to start a container, copied my test script into it, and ran it on every computer. Unfortunately, I got the same results: low FPS.

            Curiously though, while trying things out, I connected through my host's USB Type-C port and started receiving a higher rate, around 30 FPS!

            So I guess USB 3 over a Type-A connector isn't enough here. In my case, I used the Luxonis cable (USB Type-A to Type-C) plus an adapter. I'll try a direct Type-C to Type-C USB cable to avoid adapters altogether.

            Thanks for your assistance @erik and @jakaskerl !

            Hi @AlanElkin ,
            Interesting! Does the USB C to C cable provide any better bandwidth / lower latency if you run the test scripts? My understanding was that it shouldn't change, as they're both using the same USB 3.1 interface.
            Thanks, Erik

              16 days later

              erik

              Sorry for the late reply, I took some days off.

              Indeed, I believe the values don't change noticeably (except for the FPS). Here are the measurements:

              1. With Luxonis cable (type-A to type-C USB):
                • Around 18 FPS
                • Average latency: 33.71 ms
                • Downlink 2203.9 mbps
                • Uplink 1903.9 mbps
              2. With type-C to type-C USB cable:
                • Around 30 FPS
                • Average latency: 34.14 ms
                • Downlink 2246.7 mbps
                • Uplink 1931.1 mbps

              Hi @AlanElkin
              Please redo the test with the updated bandwidth script. Pretty sure it was locking the transmission to 5Gbps (gen1) which caused them all to behave similarly.

              Thanks,
              Jaka

                Hi jakaskerl

                1. With Luxonis cable (type-A to type-C USB):

                  • Downlink 2225.2 mbps
                  • Uplink 1912.1 mbps
                2. With type-C to type-C USB cable:

                  • Downlink 3086.5 mbps

                  • Uplink 2689.2 mbps

                Nice job fixing this bug!

                Cheers