magikEye

  • Oct 24, 2024
  • Joined Apr 30, 2024
  • 0 best answers
  • Great, thanks for pointing to that resource. We're using an OAK-D Wide Pro PoE, so it's this file?

    I assume the axis system orientation is (when looking at the back of the camera) x right, y down and z forward? Could you also elaborate on the mapping from RPY to XYZ, and whether the orientation is expressed as an intrinsic or an extrinsic rotation?
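
    To make sure we are asking this clearly: by intrinsic vs. extrinsic we mean the two usual ways of composing the roll/pitch/yaw rotations. A minimal sketch of the distinction (the angles and the x-y-z order are made up for illustration, not taken from the calibration file):

    #include <opencv2/calib3d.hpp>
    #include <opencv2/core.hpp>

    // Rotation about a single axis by a given angle, via the Rodrigues formula.
    cv::Mat axisRotation(const cv::Vec3d& axis, double angle) {
      cv::Mat R;
      cv::Rodrigues(axis * angle, R);
      return R;
    }

    int main() {
      const double roll = 0.1, pitch = 0.2, yaw = 0.3;  // placeholder angles
      const cv::Vec3d X(1, 0, 0), Y(0, 1, 0), Z(0, 0, 1);
      // Extrinsic x-y-z: successive rotations about the fixed camera axes.
      cv::Mat R_ext = axisRotation(Z, yaw) * axisRotation(Y, pitch) * axisRotation(X, roll);
      // Intrinsic x-y'-z'': rotations about the axes of the moving frame, i.e. the
      // same elementary rotations composed in the opposite order.
      cv::Mat R_int = axisRotation(X, roll) * axisRotation(Y, pitch) * axisRotation(Z, yaw);
      // R_ext and R_int generally differ, hence the question which one the RPY values describe.
      return 0;
    }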

  • Hi,

    1. As I see, marker positions are detected on distorted image

    Correct.

    2. They are not rectified(?) after that

    The rectification (CameraInfo.R) is applied to the received marker positions in the method markersCallback in reproject_markers.cpp (this file is in the zip shared on Google Drive), as the stag_ros node does not take CameraInfo.R into account. The inverse of the right camera's R is applied, since R describes the rotation from the reference frame (the RGB cam here) to the right camera's frame, while here we want to transform the other way. The stag_ros node does take distortion (CameraInfo.D) into account when solvePnP is called, which takes distortionMat (= the distortion coefficients) as an argument. I think it should work out like that?
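
    In condensed form, the relevant part of markersCallback does something like the sketch below (simplified, not the exact code from reproject_markers.cpp; right_R is the right camera's CameraInfo.R copied into a cv::Matx33d):

    #include <opencv2/core.hpp>
    #include <geometry_msgs/Point.h>

    // Rotate a marker position from the right camera's frame back to the
    // reference (RGB) frame by applying the inverse of the right camera's
    // CameraInfo.R.
    cv::Point3d undoRectification(const geometry_msgs::Point& p, const cv::Matx33d& right_R) {
      // R is a pure rotation, so its inverse is simply its transpose.
      const cv::Vec3d unrectified = right_R.t() * cv::Vec3d(p.x, p.y, p.z);
      return cv::Point3d(unrectified[0], unrectified[1], unrectified[2]);
    }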

    Both distortion and rectification are taken into account when projecting the marker points back from 3D to 2D in projectMarkersToImageCoordinates (shared in the first post), via cv::projectPoints. Those 2D points match what stag_ros detected, so stag_ros must be applying the same distortion correction (in the other direction, then). Though I may be missing something?

    That said, in the end the stag_ros package just provides some 3D positions. If the camera calibration is correct and the projection as well (the function projectMarkersToImageCoordinates), then each 3D position should be projected to the 'same' location in each image, with 'same' meaning here that it coincides with the same real-life features. The stag_ros node is just used to make it coincide with the markers, which eases visual interpretation.

  • Interesting. The observed behavior thus matches the PCB lettering. However, it still doesn't match the TF (RViz screenshot: x = red, y = green, z = blue):

    Could it be an issue with the published TF?

  • jakaskerl So according to the PCB, the x-axis points downwards (I assume the PCB is mounted in the camera such that the text isn't upside down)?

    That does not match the behavior that we observe; here's a video with the camera being moved and the x-axis angular velocity being plotted simultaneously.

  • So, to clarify, in RViz the IMU's x-axis points downwards. Therefore, when rotating the camera clockwise around its vertical axis, the angular velocity's x component should be positive (angular velocity as reported on the /oak/imu/data topic). However, the reported value is negative.
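
    As a sanity check on that expectation, here is a tiny standalone snippet of the right-hand-rule reasoning (assuming 'clockwise' means clockwise as seen from above, in a world frame with z pointing up):

    #include <opencv2/core.hpp>
    #include <iostream>

    int main() {
      // A clockwise rotation about the vertical axis, viewed from above, has its
      // angular velocity vector pointing down (right-hand rule), i.e. along -z.
      const cv::Vec3d omega_world(0.0, 0.0, -1.0);  // rad/s, magnitude arbitrary
      // The IMU's x-axis points down in RViz, i.e. also along -z in this frame.
      const cv::Vec3d imu_x_in_world(0.0, 0.0, -1.0);
      // Component of the angular velocity along the IMU x-axis: comes out positive.
      std::cout << "omega.x = " << omega_world.dot(imu_x_in_world) << std::endl;
      return 0;
    }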

    Is this just an interpretation error on our side, or could there be something amiss with the ROS driver or the camera unit itself?

  • Hi, we observed some unexpected behavior when comparing the IMU orientation published on the topic /tf_static with the values published on the topic /oak/imu/data. More specifically, reversing the x and z axes made the IMU behave in line with other stereo + IMU cameras such as the RealSense ones. The configuration parameter camera_i_publish_tf_from_calibration was set to false. The camera is an OAK-D Pro Wide PoE.

    Is this nominal behavior?

    If more information is desired, please let us know.
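
    To make the above concrete, here is a minimal, illustrative sketch of the axis reversal (assuming 'reversing' amounts to negating the x and z components; this is not code from our setup, and the output topic name is a placeholder):

    #include <ros/ros.h>
    #include <sensor_msgs/Imu.h>

    ros::Publisher pub;

    // Negate the x and z components of the reported angular velocity and linear
    // acceleration before republishing, so the IMU matches the convention we
    // expect from other stereo + IMU cameras.
    void imuCallback(const sensor_msgs::Imu& in) {
      sensor_msgs::Imu out = in;
      out.angular_velocity.x = -in.angular_velocity.x;
      out.angular_velocity.z = -in.angular_velocity.z;
      out.linear_acceleration.x = -in.linear_acceleration.x;
      out.linear_acceleration.z = -in.linear_acceleration.z;
      pub.publish(out);
    }

    int main(int argc, char** argv) {
      ros::init(argc, argv, "imu_axis_fix");
      ros::NodeHandle nh;
      pub = nh.advertise<sensor_msgs::Imu>("/oak/imu/data_reversed", 10);  // placeholder topic
      ros::Subscriber sub = nh.subscribe("/oak/imu/data", 10, imuCallback);
      ros::spin();
      return 0;
    }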

    • Hi @Luxonis-Adam, I hope you have had some time to look into the rosbag and the code shared last week. Could you share some first insights?

    • @Luxonis-Adam you can find the bag file and the code here. It should be sufficient to reproduce the issue, either using the bag file or using a Luxonis camera.

    • Hi Luxonis-Adam, thank you for your reply.

      The current analysis treats the three images as independent, i.e. not as stereo. It doesn't use the P matrices at all. It does use the TF with the baseline, and the D, K and R matrices, though.

      Can I send you a private message or email with a link to the bag file and all code?

    • Hi,
      We're running an OAK-D Pro Wide PoE (P/N: A00574, D/C: 5023) and are unsure whether we're applying the factory calibration (published as a CameraInfo ROS message) correctly. We created a small test setup consisting of the camera looking at a 4-by-4 STag marker grid, ran marker detection and reprojection, and expected the red circles' centers to coincide with the marker origins; however, as you can see, that is not the case, except for the right eye's image:

      The setup is as follows:

      • A stag_ros node detects markers in the right image and publishes their poses.
      • A custom ROS node subscribes to all images and the marker poses, and projects the 3D poses onto the 2D images. It takes into account the baseline published on TF (copied manually into the script) and the R, K and D matrices.

      For the right image, the marker observations align perfectly, indicating that the applied projection is exactly the inverse of what stag_ros does. (Actually, stag_ros doesn't take into account the camera's R matrix, so upon receipt of the marker poses, we rotate all points by the transpose of the right camera's R matrix.)

      Here's the current projection code:

      #include <opencv2/calib3d.hpp>
      #include <opencv2/core.hpp>
      #include <sensor_msgs/CameraInfo.h>
      #include <stag_ros/STagMarker.h>
      #include <vector>

      void projectMarkersToImageCoordinates(
          const std::vector<stag_ros::STagMarker>& markers,
          const sensor_msgs::CameraInfo& camera_info,
          const cv::Point3d& camera_position,
          std::vector<cv::Point2d>& uvs) {
        // Ensure the output is empty
        uvs.clear();
        // Nothing to project => exit
        if (markers.empty()) return;
        // Translate the marker positions into the camera's (unrotated) frame
        std::vector<cv::Point3d> object_points;
        for (const stag_ros::STagMarker& m : markers) {
          object_points.push_back(cv::Point3d(
            m.pose.position.x - camera_position.x,
            m.pose.position.y - camera_position.y,
            m.pose.position.z - camera_position.z));
        }
        // Rectification rotation (CameraInfo.R), passed to projectPoints as a rotation vector
        cv::Mat r(3, 3, CV_64F);
        for (int i = 0; i < 9; ++i) r.at<double>(i / 3, i % 3) = camera_info.R[i];
        cv::Mat rvec;
        cv::Rodrigues(r, rvec);
        // No additional translation
        cv::Mat tvec = cv::Mat::zeros(3, 1, CV_64F);
        // Intrinsic matrix (CameraInfo.K)
        cv::Mat k(3, 3, CV_64F);
        for (int i = 0; i < 9; ++i) k.at<double>(i / 3, i % 3) = camera_info.K[i];
        // Distortion coefficients (CameraInfo.D); assume the model follows the OpenCV format
        cv::Mat d(static_cast<int>(camera_info.D.size()), 1, CV_64F);
        for (std::size_t i = 0; i < camera_info.D.size(); ++i)
          d.at<double>(static_cast<int>(i), 0) = camera_info.D[i];
        // The actual 3D => 2D projection
        cv::projectPoints(object_points, rvec, tvec, k, d, uvs);
      }
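
      For context, the returned uvs are what gets drawn as red circles on each image; a simplified, illustrative call site (the drawing helper below is not the exact code from our node, and the baseline passed as camera_position is the value copied from TF as described above):

      #include <opencv2/imgproc.hpp>

      // Project the detected markers into one camera's image and mark each
      // reprojected marker origin with a red circle.
      void drawProjectedMarkers(const std::vector<stag_ros::STagMarker>& markers,
                                const sensor_msgs::CameraInfo& camera_info,
                                const cv::Point3d& camera_position,  // baseline from TF
                                cv::Mat& image) {
        std::vector<cv::Point2d> uvs;
        projectMarkersToImageCoordinates(markers, camera_info, camera_position, uvs);
        for (const cv::Point2d& uv : uvs)
          cv::circle(image, uv, 10, cv::Scalar(0, 0, 255), 2);  // red in BGR
      }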

      I attempted to attach the full source file; however, the forum responded that the file type is not supported.

      The above code is classic OpenCV, and it corresponds to what the existing stag_ros node does, so I guess it's close to correct. But perhaps we're still missing something.

      Could you please share your thoughts on whether we're interpreting and using the calibration correctly? If anything is unclear, please let us know.

    • Hello,

      We are working with an OAK-D Pro Fixed Focus camera and noticed that parts of the image are unexpectedly blurry. Could you please verify whether we are using the camera correctly, and whether the encountered blurriness is normal for this camera model?

      Below you will find three images captured using the DepthAI ROS1 driver (package depthai_ros_driver, launch file camera.launch) and rqt. The camera was positioned on a steady tripod.

      In this image from the right eye, the whole frame is rendered with good detail.

      However, in the left eye, the right side is blurry:

      Compare, for example, the rendering of the AprilTag board in the left and right eyes:

      Camera information:
      Luxonis OAK-D Pro PoE Fixed Focus
      P/N: A00570
      D/C: 0623

      Possibly related forum threads:
      https://discuss.luxonis.com/d/714-blurry-image-from-right-lens
      https://discuss.luxonis.com/d/2344-details-in-rgb-cam-of-oak-d-pro-blurry/2

      We look forward to your response. In case more information is desired, please do not hesitate to ask.