
DepthAI ROS Driver

At Luxonis, we are committed to creating robotic vision solutions that help improve the engineering efficiency of the world. With our stereo depth OAK cameras, robust DepthAI API, and growing cloud-based platform, RobotHub, our goal is to provide a start-to-finish ecosystem that uncomplicates innovation.

And, with that in mind, we’re pleased to announce the release of our newest DepthAI ROS driver for OAK cameras, which is part of our ongoing effort to make the development of ROS-based software even easier.

With the DepthAI ROS driver, nearly everything is parameterized using ROS 2 parameters and dynamic reconfigure, giving you even greater flexibility when customizing your OAK to your exact use case. There are currently over a hundred different values to modify!
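
As a rough illustration, here is a minimal rclpy sketch that lists the parameters a running driver node exposes. It assumes the driver node is named `oak`; adjust the service path to match your launch configuration.

```python
# Minimal sketch: list the parameters exposed by a running DepthAI ROS driver node.
# Assumes the driver node is named "oak"; adjust the service path to your setup.
import rclpy
from rclpy.node import Node
from rcl_interfaces.srv import ListParameters

rclpy.init()
node = Node('oak_param_lister')
client = node.create_client(ListParameters, '/oak/list_parameters')
if not client.wait_for_service(timeout_sec=5.0):
    raise RuntimeError('Parameter service not available -- is the driver running?')
future = client.call_async(ListParameters.Request())
rclpy.spin_until_future_complete(node, future)
for name in sorted(future.result().result.names):
    print(name)
node.destroy_node()
rclpy.shutdown()
```

The same information is available from the command line with `ros2 param list /oak`.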

There are tons of ways for this driver to make your life easier, some of which include:

  • Several different “modes” in which you can run the camera, depending on your use case. For example, you can use the camera to publish spatial NN detections, publish an RGBD point cloud, or just stream data straight from the sensors for host-side processing, calibration, or a modular camera setup.

  • Set parameters such as exposure and focus for individual cameras at runtime (see the sketch after this list).

  • Set IR LED power for better depth accuracy and night vision.

  • Experiment with onboard depth filter parameters.

  • Enable on-device video encoding to publish compressed images and free up bandwidth.

  • An easy way to integrate a multi-camera setup, with an example provided.

  • Docker support for easy integration: build an image yourself or pull one from the Docker Hub repository.
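
Runtime changes like the exposure and IR settings above boil down to standard ROS 2 parameter updates. Below is a hedged rclpy sketch that sets a manual exposure value and the IR dot-projector brightness through the node's `set_parameters` service. The parameter names used here (`rgb.r_exposure`, `camera.i_laser_dot_brightness`) are illustrative assumptions; check `ros2 param list` on your driver for the exact names your version exposes.

```python
# Hedged sketch: adjust camera parameters on a running driver node via the
# standard rcl_interfaces SetParameters service. Parameter names below are
# illustrative -- check `ros2 param list` for the ones your driver exposes.
import rclpy
from rclpy.node import Node
from rcl_interfaces.srv import SetParameters
from rcl_interfaces.msg import Parameter, ParameterValue, ParameterType

def int_param(name, value):
    return Parameter(
        name=name,
        value=ParameterValue(type=ParameterType.PARAMETER_INTEGER, integer_value=value))

rclpy.init()
node = Node('oak_param_setter')
client = node.create_client(SetParameters, '/oak/set_parameters')
client.wait_for_service()
request = SetParameters.Request(parameters=[
    int_param('rgb.r_exposure', 12000),               # manual exposure (illustrative name)
    int_param('camera.i_laser_dot_brightness', 800),  # IR dot projector (illustrative name)
])
future = client.call_async(request)
rclpy.spin_until_future_complete(node, future)
for result in future.result().results:
    print('ok' if result.successful else f'failed: {result.reason}')
node.destroy_node()
rclpy.shutdown()
```

From a terminal, `ros2 param set /oak <name> <value>` does the same thing.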

Having everything exposed as a ROS parameter also gives you the ability to reconfigure the camera `on-the-fly` using the `stop` and `start` services. You can use low-quality streams and switch to higher quality when you need it, or switch between different neural networks depending on what data your robot needs.
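
As a sketch of that workflow, the snippet below stops the camera, leaves room to change parameters while it is idle, and starts it again. It assumes the driver exposes `stop_camera` and `start_camera` services of type `std_srvs/srv/Trigger` under the default `oak` node name; confirm the exact names with `ros2 service list` before relying on them.

```python
# Hedged sketch: reconfigure the camera on-the-fly by stopping it, updating a
# parameter, and starting it again. Service names assume the default "oak"
# node; check `ros2 service list` for the names in your setup.
import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger

def call_trigger(node, name):
    client = node.create_client(Trigger, name)
    client.wait_for_service()
    future = client.call_async(Trigger.Request())
    rclpy.spin_until_future_complete(node, future)
    return future.result().success

rclpy.init()
node = Node('oak_restarter')
call_trigger(node, '/oak/stop_camera')    # assumed service name
# ... update parameters here, e.g. switch pipeline type or NN model ...
call_trigger(node, '/oak/start_camera')   # assumed service name
node.destroy_node()
rclpy.shutdown()
```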

Here is an example of adjusting LED power for better depth quality:

Here is another example demonstrating manual control of RGB camera parameters at runtime:

Here we see an example of RGBD depth alignment:

A multi-camera setup with an OAK-D Pro, OAK-D W, and OAK-D Lite: one camera running RGBD and MobileNet spatial detection, one running YOLO 2D detection, and one running semantic segmentation.

And here we see an example of Real-Time Appearance-Based Mapping (RTAB-Map) of an interior room:

Comments (1)


Any chance of a parameter to produce a 2D horizontal-centerline point cloud or scan topic?

I don’t need my ODL to work on the whole image, only the centerline stereo pixels. If there is spare processing, perhaps it could collapse N vertical pixels into a single centerline depth scan.
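
A rough host-side sketch of that idea, assuming the depth frame is available as a NumPy array (the band height N and the min reduction are just placeholders):

```python
# Rough sketch: collapse a horizontal band of N rows around the image
# centerline of a depth frame into a single range-per-column "scan" line.
# Assumes depth is an (H, W) array of depths in millimetres, 0 = invalid.
import numpy as np

def centerline_scan(depth: np.ndarray, n_rows: int = 10) -> np.ndarray:
    h = depth.shape[0]
    band = depth[h // 2 - n_rows // 2 : h // 2 + n_rows // 2, :].astype(float)
    band[band == 0] = np.nan          # ignore invalid pixels
    return np.nanmin(band, axis=0)    # nearest obstacle per column

# Example with a dummy frame; on a robot this would come from the depth topic.
scan = centerline_scan(np.random.randint(0, 5000, size=(400, 640)).astype(np.uint16))
print(scan.shape)  # (640,)
```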

Just “reappearing” here. I have not fired up my ODL in a long while, and my then-robot was not running ROS.

I’m hoping to use only my ODL to generate a forward-sector 2D scan topic for my new Raspberry Pi 5 / Create 3 based robot, instead of a mechanical LIDAR that is nearly useless because of all the black objects in my home: