We just bought an OAK-D PoE camera to test with an Nvidia Jetson TX2 4GB board.
On the Jetson board, we compiled OpenCV 4.7.0 (with the extra and contrib modules), then we compiled the depthai-core library. Everything went well, and we can run the compiled examples in the build/examples folder.
Then we compiled our own program and ran it on the Jetson board. We started by retrieving the disparity frames. Everything works fine: visually there is no lag, or at most a very slight one.
Next, we wrote an equivalent program, but retrieving the depth frames instead. In this case, there is a significant lag of at least one second in the display of the depth frames.
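For context, here is a minimal sketch of what the depth variant of our pipeline does (identifiers simplified; we assume the standard depthai-core StereoDepth API as used in the shipped examples):

```cpp
#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;

    // Mono cameras feeding the stereo node
    auto monoLeft  = pipeline.create<dai::node::MonoCamera>();
    auto monoRight = pipeline.create<dai::node::MonoCamera>();
    auto stereo    = pipeline.create<dai::node::StereoDepth>();
    auto xout      = pipeline.create<dai::node::XLinkOut>();

    monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
    monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
    xout->setStreamName("depth");

    monoLeft->out.link(stereo->left);
    monoRight->out.link(stereo->right);

    // In the disparity program we link stereo->disparity instead;
    // only this line differs between the two tests
    stereo->depth.link(xout->input);

    dai::Device device(pipeline);
    auto queue = device.getOutputQueue("depth", 4, false);
    while(true) {
        auto frame = queue->get<dai::ImgFrame>();
        // ... display with OpenCV ...
    }
    return 0;
}
```

(This requires the camera to be connected, so it is a sketch rather than something runnable standalone.)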
The camera is connected to a Gigabit Ethernet network. We use PoE cameras from another brand on the same network without any lag problem; those cameras require the Jetson board to be configured with jumbo frames (MTU 9000).
So we tried configuring the Jetson board with an MTU of 1500, the default value of the OAK-D PoE camera. In that case, we no longer notice any lag in the depth frame display. We find this behavior quite strange: why does the lag appear only on depth, and not on disparity, when the Jetson is configured with MTU 9000?
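For completeness, the MTU change on the Jetson side was done along these lines (the interface name eth0 is an assumption; adjust to the actual interface):

```shell
# Inspect the current MTU of the Jetson's Ethernet interface (name assumed: eth0)
ip link show eth0

# Jumbo frames, as required by our other PoE cameras
sudo ip link set dev eth0 mtu 9000

# Back to the OAK-D PoE default (1500): the depth-stream lag disappears
sudo ip link set dev eth0 mtu 1500
```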
Next, we wanted to try configuring the OAK-D PoE camera itself with an MTU of 9000, but we did not find how to do this in C++. We found the following snippet (apparently translated from Python), but we are not sure it is correct C++:
auto device_config = dai::Device::Config();
device_config.board.network.mtu = 9000; // Jumbo frames. Default: 1500
How do we apply this MTU 9000 configuration to the device? Is it possible?
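Our best guess at applying it, based on the DeviceBase constructors we see in the depthai-core headers, is sketched below. We have NOT verified that constructing the device from a Config and then starting the pipeline separately is the intended flow, so this is an assumption to be confirmed:

```cpp
#include <depthai/depthai.hpp>

int main() {
    dai::Pipeline pipeline;
    // ... create and link nodes as usual ...

    // Build a device config requesting jumbo frames (default MTU is 1500)
    auto device_config = dai::Device::Config();
    device_config.board.network.mtu = 9000;

    // Assumption: the config is applied by constructing the device with it,
    // then starting the pipeline afterwards
    dai::Device device(device_config);
    device.startPipeline(pipeline);
    return 0;
}
```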
One more question: in the example program depth_stereo_video.cpp, at line 101, there is a note:
"// Note: in some configurations (if depth is enabled), disparity may output garbage data".
In which cases can this happen? Could it cause the lag we are seeing?