I have an OAK-D SR pointing straight down at a conveyor belt from a distance of 16 inches. It generates depth maps at 5 FPS with the configuration settings attached below.
However, the depth map and the resulting point cloud are very inaccurate. I have posted some sample pictures below. The point cloud has a pyramid-like structure with distinct layers at different heights, even though the entire scene should be at a single height since the camera is pointing down at a flat surface. To generate the point cloud, I read the intrinsic matrix from the device for the image resolution I'm using. However, I do not believe the point cloud generation is the issue, because the depth map itself already shows distinct layers of depth values. I would appreciate any advice on what the issue is and how to make the depth map more accurate. Thanks!
Here is a link to the original RGB image, the depth map, and the point cloud visualization in a Drive folder:
Stereo Configuration
stereo->setDefaultProfilePreset(dai::node::StereoDepth::PresetMode::HIGH_ACCURACY);
stereo->setLeftRightCheck(true); // LR-check is required for depth alignment
stereo->setExtendedDisparity(true); // lower minimum depth for close-range scenes
stereo->setSubpixel(true); // fractional disparity bits for finer depth steps
stereo->setDepthAlign(dai::CameraBoardSocket::CAM_C);
auto config = stereo->initialConfig.get();
config.postProcessing.speckleFilter.enable = false;
config.postProcessing.speckleFilter.speckleRange = 50; // no effect while the filter is disabled
config.postProcessing.temporalFilter.enable = false;
config.postProcessing.spatialFilter.enable = true;
config.postProcessing.spatialFilter.holeFillingRadius = 2;
config.postProcessing.spatialFilter.numIterations = 1;
config.postProcessing.thresholdFilter.minRange = 300; // mm
config.postProcessing.thresholdFilter.maxRange = 600; // mm
config.postProcessing.decimationFilter.decimationFactor = 1; // no decimation
stereo->initialConfig.set(config);