Hello, I've created the virtual environment to be able to run my Oak-D Pro W on my Jetson Orin Nano.

In ROS 2, when I run ros2 launch depthai_examples stereo_inertial_node.launch.py,

the image topic publishes the first frame and after that it no longer updates…

(depthAI) genozen@genozen:~/Documents/depthai-python/examples$ ros2 launch depthai_examples stereo_inertial_node.launch.py 
[INFO] [launch]: All log files can be found below /home/genozen/.ros/log/2024-03-03-16-31-45-971391-genozen-10538
[INFO] [launch]: Default logging verbosity is set to INFO
[INFO] [robot_state_publisher-1]: process started with pid [10551]
[INFO] [stereo_inertial_node-2]: process started with pid [10553]
[INFO] [component_container-3]: process started with pid [10555]
[INFO] [rviz2-4]: process started with pid [10557]
[robot_state_publisher-1] [INFO] [1709505106.616323472] [oak_state_publisher]: got segment oak-d-base-frame
[robot_state_publisher-1] [INFO] [1709505106.616540822] [oak_state_publisher]: got segment oak-d_frame
[robot_state_publisher-1] [INFO] [1709505106.616564022] [oak_state_publisher]: got segment oak_imu_frame
[robot_state_publisher-1] [INFO] [1709505106.616575319] [oak_state_publisher]: got segment oak_left_camera_frame
[robot_state_publisher-1] [INFO] [1709505106.616585463] [oak_state_publisher]: got segment oak_left_camera_optical_frame
[robot_state_publisher-1] [INFO] [1709505106.616594487] [oak_state_publisher]: got segment oak_model_origin
[robot_state_publisher-1] [INFO] [1709505106.616603671] [oak_state_publisher]: got segment oak_rgb_camera_frame
[robot_state_publisher-1] [INFO] [1709505106.616612408] [oak_state_publisher]: got segment oak_rgb_camera_optical_frame
[robot_state_publisher-1] [INFO] [1709505106.616621528] [oak_state_publisher]: got segment oak_right_camera_frame
[robot_state_publisher-1] [INFO] [1709505106.616629784] [oak_state_publisher]: got segment oak_right_camera_optical_frame
[component_container-3] [INFO] [1709505106.791015554] [container]: Load Library: /opt/ros/humble/lib/libdepth_image_proc.so
[component_container-3] [INFO] [1709505106.979942799] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::ConvertMetricNode>
[component_container-3] [INFO] [1709505106.980066611] [container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::ConvertMetricNode>
[stereo_inertial_node-2] 1280 720 1280 720
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/convert_metric_node' in container '/container'
[component_container-3] [INFO] [1709505107.028890728] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::ConvertMetricNode>
[component_container-3] [INFO] [1709505107.028974699] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::CropForemostNode>
[component_container-3] [INFO] [1709505107.028992587] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::DisparityNode>
[component_container-3] [INFO] [1709505107.029005419] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyzNode>
[component_container-3] [INFO] [1709505107.029016236] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyzRadialNode>
[component_container-3] [INFO] [1709505107.029025740] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyziNode>
[component_container-3] [INFO] [1709505107.029034476] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyziRadialNode>
[component_container-3] [INFO] [1709505107.029043500] [container]: Found class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyzrgbNode>
[component_container-3] [INFO] [1709505107.029051949] [container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<depth_image_proc::PointCloudXyzrgbNode>
[INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/point_cloud_xyzrgb_node' in container '/container'
[stereo_inertial_node-2] Listing available devices...
[stereo_inertial_node-2] Device Mx ID: 1844301041D27F0E00
[rviz2-4] [INFO] [1709505107.634205615] [rviz2]: Stereo is NOT SUPPORTED
[rviz2-4] [INFO] [1709505107.634465398] [rviz2]: OpenGl version: 4.6 (GLSL 4.6)
[rviz2-4] [INFO] [1709505107.712087211] [rviz2]: Stereo is NOT SUPPORTED
[component_container-3] [WARN] [1709505107.848987500] [point_cloud_xyzrgb_node]: New subscription discovered on topic '/stereo/points', requesting incompatible QoS. No messages will be sent to it. Last incompatible policy: RELIABILITY_QOS_POLICY
[component_container-3] [WARN] [1709505107.850752827] [point_cloud_xyzrgb_node]: New subscription discovered on topic '/stereo/points', requesting incompatible QoS. No messages will be sent to it. Last incompatible policy: RELIABILITY_QOS_POLICY
[rviz2-4] [INFO] [1709505108.363289333] [rviz2]: Stereo is NOT SUPPORTED
[stereo_inertial_node-2] [1844301041D27F0E00] [1.2.2] [0.859] [ColorCamera(7)] [warning] Unsupported resolution set for detected camera OV9782, needs 800_P or 720_P. Defaulting to 800_P
[stereo_inertial_node-2] [1844301041D27F0E00] [1.2.2] [1.126] [SpatialDetectionNetwork(9)] [warning] Network compiled for 6 shaves, maximum available 10, compiling for 5 shaves likely will yield in better performance
[stereo_inertial_node-2] Device USB status: SUPER
[stereo_inertial_node-2] [1844301041D27F0E00] [1.2.2] [1.317] [StereoDepth(3)] [error] Disparity/depth width must be multiple of 16, but RGB camera width is 854. Set output size explicitly using 'setOutputSize(width, height)'.

With ros2 launch depthai_examples stereo.launch.py it works fine, but the point cloud is noisy (I'm assuming it's computing depth from the stereo images alone, not using the IR emitter?)

In short, I would like to create a ROS 2 launch example that publishes:

left image, right image, IMU, and point cloud (computed from active stereo, not passive).

Could you please give me some guidance?

Also, where can I find the calibration parameters (intrinsics/extrinsics of the cameras and the IMU bias)? I would like to run OpenVINS: https://docs.openvins.com/gs-calibration.html

    Hi @Genozen
    Looks like your config is wrong. I think you are applying ISP scaling even though you are already at 800p.
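    For illustration, here is roughly what that situation (and the fix the error message suggests) looks like in plain depthai-python; this is only a sketch, not the actual code of stereo_inertial_node:

    import depthai as dai

    pipeline = dai.Pipeline()

    # RGB camera: the OV9782 runs at 800P (1280 px wide); a 2/3 ISP scale gives
    # 1280 * 2/3 ≈ 854 px, which StereoDepth rejects because it is not a multiple of 16.
    cam_rgb = pipeline.create(dai.node.ColorCamera)
    cam_rgb.setResolution(dai.ColorCameraProperties.SensorResolution.THE_800_P)
    cam_rgb.setIspScale(2, 3)

    mono_left = pipeline.create(dai.node.MonoCamera)
    mono_left.setBoardSocket(dai.CameraBoardSocket.CAM_B)
    mono_right = pipeline.create(dai.node.MonoCamera)
    mono_right.setBoardSocket(dai.CameraBoardSocket.CAM_C)

    stereo = pipeline.create(dai.node.StereoDepth)
    mono_left.out.link(stereo.left)
    mono_right.out.link(stereo.right)
    stereo.setDepthAlign(dai.CameraBoardSocket.CAM_A)  # depth aligned to the RGB socket

    # Workaround suggested by the error: force the depth output to a multiple-of-16 width.
    stereo.setOutputSize(848, 480)
    # (XLink outputs omitted for brevity.)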

    Genozen Also, where can I find the calibration parameters (intrinsic/extrinsic of the camera and IMU bias?)

    This is how it's done in stock depthai:
    intrinsics = calibData.getCameraIntrinsics(dai::CameraBoardSocket::CAM_C, 1280, 720);
    Not sure about the ROS side.
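    In depthai-python the same data can be read like this (a rough sketch, assuming a connected device; CAM_B/CAM_C are the left/right mono sockets and the resize arguments are illustrative):

    import depthai as dai

    # Read the factory calibration straight from a connected OAK device.
    with dai.Device() as device:
        calib = device.readCalibration()

        # Intrinsics + distortion of the right mono camera (CAM_C), resized to 1280x720
        K_right = calib.getCameraIntrinsics(dai.CameraBoardSocket.CAM_C, 1280, 720)
        dist_right = calib.getDistortionCoefficients(dai.CameraBoardSocket.CAM_C)

        # Extrinsics from the left (CAM_B) to the right (CAM_C) mono camera
        T_left_to_right = calib.getCameraExtrinsics(dai.CameraBoardSocket.CAM_B, dai.CameraBoardSocket.CAM_C)

        # IMU-to-camera extrinsics
        T_imu_to_right = calib.getImuToCameraExtrinsics(dai.CameraBoardSocket.CAM_C)

        print(K_right, dist_right, T_left_to_right, T_imu_to_right, sep="\n")

    As far as I know, the IMU noise/bias values OpenVINS asks for are not part of this calibration data and would need to be estimated separately.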

    cc @Luxonis-Adam

    Thanks,
    Jaka

    Hi, stereo_inertial_node serves more as example code. Could you try running the driver from depthai_ros_driver and see if it works better for you? Regarding calibration, with the driver node you can set i_calibration_dump: true to obtain all the calibration information from the camera; more information on the parameters is available here.
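    For example, something along these lines in a params file (the placement under the camera group is my assumption; check the linked parameter list):

      /oak:
        ros__parameters:
          camera:
            i_calibration_dump: true

    You can then launch with ros2 launch depthai_ros_driver camera.launch.py params_file:=<path to this yaml>.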

      3 months later

      Luxonis-Adam @jakaskerl
      Sorry for the wait, I was able to pick up the project again.

      When I run the depthai_ros_driver, it says it is stopping the camera after a few seconds:

      (depthAI) genozen@genozen:~/Documents/depthai-python$ ros2 launch depthai_ros_driver camera.launch.py 
      [INFO] [launch]: All log files can be found below /home/genozen/.ros/log/2024-06-05-23-27-33-935002-genozen-13163
      [INFO] [launch]: Default logging verbosity is set to INFO
      [INFO] [component_container-1]: process started with pid [13177]
      [component_container-1] [INFO] [1717648054.646419717] [oak_container]: Load Library: /opt/ros/humble/lib/librobot_state_publisher_node.so
      [component_container-1] [INFO] [1717648054.652092016] [oak_container]: Found class: rclcpp_components::NodeFactoryTemplate<robot_state_publisher::RobotStatePublisher>
      [component_container-1] [INFO] [1717648054.652168594] [oak_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<robot_state_publisher::RobotStatePublisher>
      [component_container-1] [INFO] [1717648054.670401717] [oak_state_publisher]: got segment oak
      [component_container-1] [INFO] [1717648054.670518552] [oak_state_publisher]: got segment oak-d-base-frame
      [component_container-1] [INFO] [1717648054.670548377] [oak_state_publisher]: got segment oak_imu_frame
      [component_container-1] [INFO] [1717648054.670568474] [oak_state_publisher]: got segment oak_left_camera_frame
      [component_container-1] [INFO] [1717648054.670585690] [oak_state_publisher]: got segment oak_left_camera_optical_frame
      [component_container-1] [INFO] [1717648054.670605339] [oak_state_publisher]: got segment oak_model_origin
      [component_container-1] [INFO] [1717648054.670621211] [oak_state_publisher]: got segment oak_rgb_camera_frame
      [component_container-1] [INFO] [1717648054.670636700] [oak_state_publisher]: got segment oak_rgb_camera_optical_frame
      [component_container-1] [INFO] [1717648054.670652060] [oak_state_publisher]: got segment oak_right_camera_frame
      [component_container-1] [INFO] [1717648054.670667036] [oak_state_publisher]: got segment oak_right_camera_optical_frame
      [INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/oak_state_publisher' in container 'oak_container'
      [component_container-1] [INFO] [1717648054.675084449] [oak_container]: Load Library: /opt/ros/humble/lib/libdepthai_ros_driver.so
      [component_container-1] [INFO] [1717648054.835534476] [oak_container]: Found class: rclcpp_components::NodeFactoryTemplate<depthai_ros_driver::Camera>
      [component_container-1] [INFO] [1717648054.836992280] [oak_container]: Instantiate class: rclcpp_components::NodeFactoryTemplate<depthai_ros_driver::Camera>
      [component_container-1] [INFO] [1717648054.848573651] [oak]: No ip/mxid specified, connecting to the next available device.
      [component_container-1] [INFO] [1717648057.096336617] [oak]: Camera with MXID: 1844301041D27F0E00 and Name: 1.2.2 connected!
      [component_container-1] [INFO] [1717648057.098789810] [oak]: USB SPEED: SUPER
      [component_container-1] [INFO] [1717648057.145309124] [oak]: Device type: OAK-D-PRO-W
      [component_container-1] [INFO] [1717648057.148494788] [oak]: Pipeline type: RGBD
      [component_container-1] [INFO] [1717648057.164328926] [oak]: NN Family: mobilenet
      [component_container-1] [INFO] [1717648057.278682114] [oak]: NN input size: 300 x 300. Resizing input image in case of different dimensions.
      [component_container-1] [INFO] [1717648057.598887966] [oak]: Finished setting up pipeline.
      [component_container-1] [1844301041D27F0E00] [1.2.2] [1.706] [StereoDepth(7)] [error] Custom disparity/depth width must be multiple of 16.
      [component_container-1] [INFO] [1717648058.239915209] [oak]: Camera ready!
      [INFO] [launch_ros.actions.load_composable_nodes]: Loaded node '/oak' in container '/oak_container'
      [component_container-1] [ERROR] [1717648062.824220407] [oak]: No data on logger queue!
      [component_container-1] [ERROR] [1717648062.824510336] [oak]: Camera diagnostics error: No Data
      [component_container-1] [ERROR] [1717648062.824586242] [oak]: Restarting camera
      [component_container-1] [INFO] [1717648062.824632804] [oak]: Stopping camera.

      Then it cycles through the same messages again.
      I did manage to get one topic (a CameraInfo) to echo each time it cycles through:

      ---
      header:
        stamp:
          sec: 1717649261
          nanosec: 897541446
        frame_id: oak_rgb_camera_optical_frame
      height: 480
      width: 854
      distortion_model: rational_polynomial
      d:
      - 1.3258732557296753
      - 3.4114573001861572
      - -0.00010097245831275359
      - 0.00019121952936984599
      - 0.3281497657299042
      - 1.6455013751983643
      - 3.8577895164489746
      - 1.2164857387542725
      k:
      - 373.31915283203125
      - 0.0
      - 429.0673828125
      - 0.0
      - 373.31915283203125
      - 229.43893432617188
      - 0.0
      - 0.0
      - 1.0
      r:
      - 1.0
      - 0.0
      - 0.0
      - 0.0
      - 1.0
      - 0.0
      - 0.0
      - 0.0
      - 1.0
      p:
      - 373.31915283203125
      - 0.0
      - 429.0673828125
      - 0.0
      - 0.0
      - 373.31915283203125
      - 229.43893432617188
      - 0.0
      - 0.0
      - 0.0
      - 1.0
      - 0.0
      binning_x: 0
      binning_y: 0
      roi:
        x_offset: 0
        y_offset: 0
        height: 0
        width: 0
        do_rectify: false
      ---

      Perhaps I'll have to use the Y-adapter? I'm thinking the power from the Jetson might not be enough, or something…


      Also, could you help verify that my setup process for running the OAK-D on the Jetson Orin Nano is correct?
      I currently used these guides:
      https://docs.luxonis.com/hardware/platform/deploy/to-jetson (set up the virtual environment and SWAP?)
      https://docs.luxonis.com/software/ros/depthai-ros/ (sudo apt install …)
      Or is there a more straightforward path?

      My current understanding from these guides is that we need to create swap space and a virtual environment for the Luxonis camera to work on the Jetson Orin Nano.

      And do I have to run: mkvirtualenv depthAI -p python3

      every time I want to run the ROS 2 nodes for the OAK-D on the Jetson?

      (Sorry for bombarding you with questions, I'm still new to this.)


      Potentially a duplicate question:
      luxonis/depthai-ros#470

      The branch no longer exists.

        Genozen height: 480
        width: 854

        This poses an issue because the width must be a multiple of 16, and 854 is not (the nearest valid widths are 848 and 864), as indicated by:

        Genozen [component_container-1] [1844301041D27F0E00] [1.2.2] [1.706] [StereoDepth(7)] [error] Custom disparity/depth width must be multiple of 16.
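        One way to avoid this could be to explicitly request a multiple-of-16 output width for the RGB stream in the params file. i_width and i_height appear in the driver's parameter table, but treat the exact placement and values here as assumptions to verify:

          /oak:
            ros__parameters:
              rgb:
                i_width: 1280   # multiple of 16
                i_height: 720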

        Thanks,
        Jaka

          jakaskerl

          That's odd, I'm using the default launch configs: ros2 launch depthai_ros_driver camera.launch.py

          and the ROS 2 DepthAI packages are installed via Debian (apt install), so I would expect them to detect the OAK-D Pro camera.

          Could you show me how to properly pass the correct camera config, please?

          I tried doing ros2 launch depthai_ros_driver camera.launch.py params_file:=/opt/ros/humble/share/depthai_ros_driver/config/oak_d_pro_w.yaml

          but it didn't seem to work…

          This is the oak_dprow.yaml file under /opt/ros/humble/share/depthai_ros_driver/config/

          /oak:
            ros__parameters:
              camera:
                i_nn_type: none
              rgb:
                i_resolution: '720'

          This config file doesn't seem quite right either, as it doesn't make any use of stereo? Sorry, I'm quite confused at this point.

          Hi @Genozen
          The parameters in the .yaml file work as building blocks for the pipeline. The YAML you have sent is an example that only previews the RGB camera. If you add stereo, like here, you will get depth topics as well.
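          A sketch of what adding stereo could look like (parameter names are taken from the driver's parameter list; treat the exact values as assumptions to adjust for your setup):

            /oak:
              ros__parameters:
                camera:
                  i_nn_type: none        # no neural network needed
                  i_pipeline_type: RGBD  # RGB + stereo depth
                  i_enable_imu: true     # publish IMU data
                rgb:
                  i_resolution: '720'
                stereo:
                  i_subpixel: true       # optional: finer depth quantization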

          Here is a complete list of parameters.

          Thanks,
          Jaka

          2 months later

          Thanks guys, this helps a lot. I was able to stream the IR image on the Jetson at a solid 60 FPS. The parameter list at this link helps figure out exactly which parameters I can set: https://docs.luxonis.com/software/ros/depthai-ros/driver/

          I did notice there's a delay/lag when RGB-D is turned on… not sure if there's any fix for this…?

            Genozen
            How much lag? I expect some latency because the RGB-D pipeline is quite heavy.

              5 days later

              jakaskerl From the video stream in RViz2, I would say it's roughly 1 to 2 seconds behind when I wave my hand around with RGB-D mode enabled in the config. Is this a normal amount of delay?

                Genozen
                Can't say for sure since I'm not very accustomed to the ROS driver for depthai, but I think this is a bit much.

                cc Luxonis-Adam, is this something to be expected?
                I have a feeling I saw this explained once before, but I can't find the post.

                Thanks,
                Jaka

                Hi, the delay for RGB-D can be caused by the following things:

                • The RGB-D point cloud is calculated on the host, so high CPU usage can introduce delays
                • The RViz node cannot be run as a composable node, so additional overhead can come from the DDS implementation; you can find more information about that in the ROS DDS tuning docs
                • This is something I cannot verify at the moment, but if the Jetson is running from an SD card, the SD card speed could have an impact on overall performance
                • You can try running the RGB camera in low_bandwidth mode by setting the appropriate parameters; this could improve throughput between the device and the Jetson (a sketch follows below)
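                A sketch of that last option (parameter names from the driver's parameter list; placement and values are assumptions to verify):

                  /oak:
                    ros__parameters:
                      rgb:
                        i_low_bandwidth: true         # encode frames on the device to reduce link/host load
                        i_low_bandwidth_quality: 50   # optional: encoder quality trade-off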