  • OAK-D POE - Depth frames lagging with MTU 9000

Hello,

We just bought an OAK-D PoE camera to test it with an NVIDIA Jetson TX2 4GB board.
On the Jetson, we compiled OpenCV 4.7.0 (with the extra and contrib modules), then we compiled the depthai-core library. Everything went well, and we can run the compiled examples in the build/examples folder.

Then we tried to compile our own program and run it on the Jetson board. We started by retrieving the disparity frame.
Everything works fine; visually there is no lag, or it is very low.

We then made an equivalent program, but one that retrieves the depth frame. In this case, there is a significant lag of at least 1 second in the display of the depth frame.

The camera is connected to a Gigabit Ethernet network. We use other PoE cameras from another brand without any lag problem. To work, these cameras require the Jetson to be configured for jumbo frames (MTU 9000).

So we tried configuring the Jetson with an MTU of 1500, the default value of the OAK-D PoE camera. In this case, we do not notice any lag in the depth frame display. We find this behavior quite strange. Why does the lag appear only on depth, and not on disparity, when the Jetson is configured with MTU 9000?

Then we wanted to configure the MTU of the OAK-D PoE camera to 9000, but we did not find how to do this in C++. We found a piece of Python code, but we are not sure about the transposition to C++:

auto device_config = dai::Device::Config();
device_config.board.network.mtu = 9000; // Jumbo frames. Default 1500

How to apply the MTU 9000 configuration to the device? Is it possible ?

In the example program, depth_stereo_video.cpp, line 101, there is a note:
"// Note: in some configurations (if depth is enabled), disparity may output garbage data".
In which cases can this happen? Can it cause lag?

Thanks,
Fred


    Hi FredericGauthier,

    1. Likely you have already seen it, but still linking: PoE latency docs
    2. Depth vs disparity - I assume it's because depth is INT16, while disparity is INT8 (without subpixel enabled), so perhaps that could be the reason; see the rough bandwidth sketch below.
    3. C++ - this transposition looks correct to me; the Python bindings are 1:1 with the C++ API. Does it not work with the snippet?
    4. This issue was resolved a long time ago; I will update the note now, sorry about the misinformation.
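
    For a rough sense of the difference, here is a back-of-the-envelope bandwidth sketch (assuming 1280x800 mono resolution at 30 fps; the actual resolution depends on your pipeline):

    #include <cstdio>

    int main() {
        // Assumed stereo resolution and frame rate; adjust to match your pipeline.
        const double width = 1280, height = 800, fps = 30;
        const double disparityBytes = width * height * 1; // INT8 disparity (no subpixel)
        const double depthBytes     = width * height * 2; // INT16 depth
        std::printf("disparity: %.1f MB/s (%.0f Mbit/s)\n",
                    disparityBytes * fps / 1e6, disparityBytes * fps * 8 / 1e6);
        std::printf("depth:     %.1f MB/s (%.0f Mbit/s)\n",
                    depthBytes * fps / 1e6, depthBytes * fps * 8 / 1e6);
        return 0;
    }

    Under those assumptions that is roughly 245 Mbit/s for disparity versus roughly 490 Mbit/s for depth, so the depth stream pushes the PoE link considerably harder.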

    Thanks, Erik

    Hi Erik,
    It seems that the problem comes from the type, INT16 vs INT8. If we enable subpixel on disparity, we get the same lag problem.

    The code snippet to configure the MTU seems to work. We had made a mistake by doing this:

    auto device_config = dai::Device::Config();
    device_config.board.network.mtu = 9000;

    after the line: dai::Device device(pipeline);
    In this case, the configuration is not taken into account. Can you confirm that the device configuration absolutely has to be done before the device is created?
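
    For reference, here is a minimal sketch of the ordering that works for us, assuming depthai-core exposes a dai::Device(Config) constructor and a startPipeline() overload like the Python API does (the surrounding pipeline setup is only indicated):

    #include <depthai/depthai.hpp>

    int main() {
        dai::Pipeline pipeline;
        // ... create and link the stereo/depth nodes here ...

        // 1) Build the device config first, before any dai::Device exists.
        auto device_config = dai::Device::Config();
        device_config.board.network.mtu = 9000; // Jumbo frames. Default 1500

        // 2) Create the device from the config, then upload the pipeline.
        dai::Device device(device_config);
        device.startPipeline(pipeline);
        return 0;
    }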

    So with the MTU configured to 9000 on the Jetson TX2 and the camera configured with MTU 9000 as well, we get a depth lag of between 100 and 150 ms (it can be around 500 ms if the camera is not set to MTU 9000).
    We get better results if we configure both MTUs to 1500 (lag between 60 and 120 ms; our measurements are not very precise!).

    Currently we run at 30 fps. Do you think it is possible to reduce the lag by lowering the frame rate to 20 fps? If so, how do we do this?

    Thanks,
    Fred

      Hi FredericGauthier,

      1. Yes, you need to set the configuration before it is uploaded to the device (where it gets initialized). Changing anything afterwards won't have any effect.
      2. I would suggest trying it out, as network latency isn't that straightforward and depends on many factors. You can change the FPS with camera.setFps(20) using our DepthAI API; see the sketch below.
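
      For example, a minimal sketch of lowering the stereo frame rate to 20 fps in C++ (the node layout and stream name are illustrative, not your exact pipeline):

      #include <depthai/depthai.hpp>

      int main() {
          dai::Pipeline pipeline;

          auto monoLeft  = pipeline.create<dai::node::MonoCamera>();
          auto monoRight = pipeline.create<dai::node::MonoCamera>();
          auto stereo    = pipeline.create<dai::node::StereoDepth>();
          auto xoutDepth = pipeline.create<dai::node::XLinkOut>();

          monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
          monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);

          // Lower the frame rate on both mono cameras feeding StereoDepth.
          monoLeft->setFps(20);
          monoRight->setFps(20);

          monoLeft->out.link(stereo->left);
          monoRight->out.link(stereo->right);

          xoutDepth->setStreamName("depth");
          stereo->depth.link(xoutDepth->input);

          dai::Device device(pipeline);
          auto depthQueue = device.getOutputQueue("depth", 4, false);
          // ... read and display frames from depthQueue as before ...
          return 0;
      }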

      Thanks, Erik