Hi all,
I have been working for several years with the Astra and the Persee from Orbbec on an application that detects falls of elderly people.
We are currently looking for a sensor with a wider field of view for an indoor application.
We are testing the OAK-D Pro PoE, fixed focus.
I am developing in C++ on Ubuntu 18.04.
We use the sensor with the laser dot projector on and the flood LED on for night vision.
I do not work with the RGB camera.
Here is some code with the parameters I use:
std::string _depthName = "depth";
std::string _rectifiedRight = "rectifiedRight";
// Create pipeline
dai::Pipeline _pipeline;
dai::Device _device;
std::vector<std::string> _queueNames;
std::unordered_map<std::string, cv::Mat> _frame;
int _width = 640;
int _height = 400;
auto monoLeft = _pipeline.create<dai::node::MonoCamera>();
auto monoRight = _pipeline.create<dai::node::MonoCamera>();
auto stereo = _pipeline.create<dai::node::StereoDepth>();
auto depthOut = _pipeline.create<dai::node::XLinkOut>();
auto xout_rectifiedRight = _pipeline.create<dai::node::XLinkOut>();
xout_rectifiedRight->setStreamName(_rectifiedRight);
_queueNames.push_back(_rectifiedRight);
depthOut->setStreamName(_depthName);
_queueNames.push_back(_depthName);
//Calibration info
dai::CalibrationHandler calibData = _device.readCalibration();
std::vector<std::vector<float>> intrinsics;
intrinsics = calibData.getCameraIntrinsics(dai::CameraBoardSocket::RIGHT,
_width, _height);
float hFov_rad = 1.25489; // 71.89°
float vFov_rad = 0.9963;  // 57.08°
// Properties
static constexpr int fps = 30;
static constexpr auto monoRes =
dai::MonoCameraProperties::SensorResolution::THE_400_P;
monoLeft->setResolution(monoRes);
monoLeft->setBoardSocket(dai::CameraBoardSocket::LEFT);
monoLeft->setFps(fps);
monoRight->setResolution(monoRes);
monoRight->setBoardSocket(dai::CameraBoardSocket::RIGHT);
monoRight->setFps(fps);
stereo->setDefaultProfilePreset(
dai::node::StereoDepth::PresetMode::HIGH_DENSITY);
auto config = stereo->initialConfig.get();
printConfig( config );
config.postProcessing.speckleFilter.enable = false;
config.postProcessing.speckleFilter.speckleRange = 50;
config.postProcessing.temporalFilter.enable = true;
config.postProcessing.spatialFilter.enable = true;
config.postProcessing.spatialFilter.holeFillingRadius = 2;
config.postProcessing.spatialFilter.numIterations = 1;
config.postProcessing.thresholdFilter.minRange = 200;
config.postProcessing.thresholdFilter.maxRange = 15000;
config.postProcessing.decimationFilter.decimationFactor = 1;
stereo->initialConfig.set(config);
// Options: MEDIAN_OFF, KERNEL_3x3, KERNEL_5x5, KERNEL_7x7 (default)
stereo->initialConfig.setMedianFilter(dai::MedianFilter::KERNEL_7x7);
stereo->setLeftRightCheck(true); // better handling for occlusion
stereo->setExtendedDisparity(false); // Closer-in minimum depth, disparity range is doubled
stereo->setSubpixel(true); // Better accuracy at longer distance; fractional disparity, 32 levels
config = stereo->initialConfig.get();
printConfig( config );
// Updates IR configuration:
//   irLaser (int, optional): IR laser dot projector brightness (0..1200)
//   irFlood (int, optional): IR flood illuminator brightness (0..1500)
bool activeIrLaser = _device.setIrLaserDotProjectorBrightness(900); // range 0..1200
bool activeIrFlood = _device.setIrFloodLightBrightness(500);        // range 0..1500 (400 at night)
// Linking
monoLeft->out.link(stereo->left);
monoRight->out.link(stereo->right);
stereo->rectifiedRight.link(xout_rectifiedRight->input);
stereo->depth.link(depthOut->input);
// Connect to device and start pipeline
_device.startPipeline(_pipeline);
// Sets queues size and behavior
for (const auto &name : _queueNames) {
_device.getOutputQueue(name, 4, false);
}
//Capture
std::unordered_map<std::string, std::shared_ptr<dai::ImgFrame>> latestPacket;
while(true)
{
auto queueEvents = _device.getQueueEvents(_queueNames);
for (const auto &name : queueEvents) {
auto packets = _device.getOutputQueue(name)->tryGetAll<dai::ImgFrame>();
auto count = packets.size();
if (count > 0) {
latestPacket[name] = packets[count - 1];
}
}
for (const auto &name : _queueNames) {
if (latestPacket.find(name) != latestPacket.end()) {
if (name == _depthName) {
_frame[name] = latestPacket[name]->getFrame(true);
_depthCaptured = true;
} else {
_frame[name] = latestPacket[name]->getCvFrame();
}
}
}
}
When I compare the depth from the Astra with the depth from the OAK, the OAK depth looks noticeably noisier.
These pictures are captures from the Astra (640×480 resolution):
I can detect the room, and the wall appears flat.
These are captures from the OAK (640×400 resolution):
The wall is wavy and the scene has a lot of noise.
Here are two captures from the Python gen2-pointcloud example:
The first one shows the same problem I have in C++.
In the second one, there is no depth at all behind the chairs.
In this scene, the wall is about 4 m from the sensor.
Do you think I can get better depth quality with different parameter values?
What would be the best values for my application?
Thank you very much for your feedback.
Best regards
Sarah